* [edk2-libc Patch 0/1] Add Python 3.6.8
@ 2021-09-02 17:12 Michael D Kinney
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
` (2 more replies)
0 siblings, 3 replies; 8+ messages in thread
From: Michael D Kinney @ 2021-09-02 17:12 UTC (permalink / raw)
To: devel; +Cc: Rebecca Cran, Jayaprakash N
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3588
This patch series contains the modifications required to support
Python 3.6.8 in the UEFI Shell. It currently supports building
Python 3.6.8 for UEFI for the IA32 and X64 architectures using
VS2017 or VS2019 with the latest edk2/master.
There is an additional patch that must be applied first. It contains
the source code from the Python project, is too large to send as an
email, and does not need to be reviewed, since it is unmodified
content from the Python project
(https://github.com/python/cpython/tree/v3.6.8):
https://github.com/jpshivakavi/edk2-libc/tree/py36_base_code_from_python_project
https://github.com/jpshivakavi/edk2-libc/commit/d9f7b2e5748c382ad988a98bd3e5e4bb2d50c5c0
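For example, one way (illustrative only, not the required workflow) to
bring in that base code before applying this patch is to fetch the
branch above and cherry-pick its commit:
  git remote add jpshivakavi https://github.com/jpshivakavi/edk2-libc.git
  git fetch jpshivakavi py36_base_code_from_python_project
  git cherry-pick d9f7b2e5748c382ad988a98bd3e5e4bb2d50c5c0
The remote name is arbitrary; any equivalent way of applying that
commit on top of edk2-libc/master works as well.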
Cc: Rebecca Cran <rebecca@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Jayaprakash N <n.jayaprakash@intel.com>
Jayaprakash Nevara (1):
AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
AppPkg/AppPkg.dsc | 3 +
.../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
.../PyMod-3.6.8/Include/fileutils.h | 159 +
.../Python-3.6.8/PyMod-3.6.8/Include/osdefs.h | 51 +
.../PyMod-3.6.8/Include/pyconfig.h | 1322 ++
.../PyMod-3.6.8/Include/pydtrace.h | 74 +
.../Python-3.6.8/PyMod-3.6.8/Include/pyport.h | 788 +
.../PyMod-3.6.8/Lib/ctypes/__init__.py | 549 +
.../PyMod-3.6.8/Lib/genericpath.py | 157 +
.../Python-3.6.8/PyMod-3.6.8/Lib/glob.py | 110 +
.../PyMod-3.6.8/Lib/http/client.py | 1481 ++
.../Lib/importlib/_bootstrap_external.py | 1443 ++
.../Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py | 99 +
.../PyMod-3.6.8/Lib/logging/__init__.py | 2021 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py | 568 +
.../Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py | 792 +
.../Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py | 2686 +++
.../Python-3.6.8/PyMod-3.6.8/Lib/shutil.py | 1160 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/site.py | 529 +
.../PyMod-3.6.8/Lib/subprocess.py | 1620 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py | 2060 ++
.../PyMod-3.6.8/Modules/_blake2/impl/blake2.h | 161 +
.../PyMod-3.6.8/Modules/_ctypes/_ctypes.c | 5623 ++++++
.../PyMod-3.6.8/Modules/_ctypes/callproc.c | 1871 ++
.../Modules/_ctypes/ctypes_dlfcn.h | 29 +
.../Modules/_ctypes/libffi_msvc/ffi.c | 572 +
.../Modules/_ctypes/libffi_msvc/ffi.h | 331 +
.../Modules/_ctypes/libffi_msvc/ffi_common.h | 85 +
.../Modules/_ctypes/malloc_closure.c | 128 +
.../Python-3.6.8/PyMod-3.6.8/Modules/config.c | 159 +
.../PyMod-3.6.8/Modules/edk2module.c | 4348 +++++
.../PyMod-3.6.8/Modules/errnomodule.c | 890 +
.../PyMod-3.6.8/Modules/faulthandler.c | 1414 ++
.../PyMod-3.6.8/Modules/getpath.c | 1283 ++
.../Python-3.6.8/PyMod-3.6.8/Modules/main.c | 878 +
.../PyMod-3.6.8/Modules/selectmodule.c | 2638 +++
.../PyMod-3.6.8/Modules/socketmodule.c | 7810 ++++++++
.../PyMod-3.6.8/Modules/socketmodule.h | 282 +
.../PyMod-3.6.8/Modules/sre_lib.h | 1372 ++
.../PyMod-3.6.8/Modules/timemodule.c | 1526 ++
.../PyMod-3.6.8/Modules/zlib/gzguts.h | 218 +
.../PyMod-3.6.8/Objects/dictobject.c | 4472 +++++
.../PyMod-3.6.8/Objects/memoryobject.c | 3114 +++
.../Python-3.6.8/PyMod-3.6.8/Objects/object.c | 2082 ++
.../Objects/stringlib/transmogrify.h | 701 +
.../PyMod-3.6.8/Objects/unicodeobject.c | 15773 ++++++++++++++++
.../PyMod-3.6.8/Python/bltinmodule.c | 2794 +++
.../PyMod-3.6.8/Python/fileutils.c | 1767 ++
.../PyMod-3.6.8/Python/getcopyright.c | 38 +
.../PyMod-3.6.8/Python/importlib_external.h | 2431 +++
.../Python-3.6.8/PyMod-3.6.8/Python/marshal.c | 1861 ++
.../Python-3.6.8/PyMod-3.6.8/Python/pyhash.c | 437 +
.../PyMod-3.6.8/Python/pylifecycle.c | 1726 ++
.../Python-3.6.8/PyMod-3.6.8/Python/pystate.c | 969 +
.../Python-3.6.8/PyMod-3.6.8/Python/pytime.c | 749 +
.../Python-3.6.8/PyMod-3.6.8/Python/random.c | 636 +
.../Python/Python-3.6.8/Python368.inf | 275 +
.../Python-3.6.8/create_python368_pkg.bat | 48 +
.../Python/Python-3.6.8/srcprep.py | 30 +
59 files changed, 89413 insertions(+)
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Python368.inf
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/srcprep.py
--
2.32.0.windows.1
* [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
2021-09-02 17:12 [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
@ 2021-09-02 17:12 ` Michael D Kinney
2021-09-02 18:41 ` [edk2-devel] " Rebecca Cran
2021-09-02 20:48 ` Rebecca Cran
2021-09-02 18:22 ` [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
2021-09-03 1:35 ` Michael D Kinney
2 siblings, 2 replies; 8+ messages in thread
From: Michael D Kinney @ 2021-09-02 17:12 UTC (permalink / raw)
To: devel; +Cc: Jayaprakash Nevara, Rebecca Cran
From: Jayaprakash Nevara <n.jayaprakash@intel.com>
REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3588
This commit contains several changes made to the base Python 3.6.8
source code so that it compiles and runs in the UEFI Shell.
It currently supports building Python 3.6.8 for UEFI for the IA32 and
X64 architectures using VS2017 or VS2019 with the latest edk2/master.
Cc: Rebecca Cran <rebecca@nuviainc.com>
Cc: Michael D Kinney <michael.d.kinney@intel.com>
Signed-off-by: Jayaprakash N <n.jayaprakash@intel.com>
---
AppPkg/AppPkg.dsc | 3 +
.../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
.../PyMod-3.6.8/Include/fileutils.h | 159 +
.../Python-3.6.8/PyMod-3.6.8/Include/osdefs.h | 51 +
.../PyMod-3.6.8/Include/pyconfig.h | 1322 ++
.../PyMod-3.6.8/Include/pydtrace.h | 74 +
.../Python-3.6.8/PyMod-3.6.8/Include/pyport.h | 788 +
.../PyMod-3.6.8/Lib/ctypes/__init__.py | 549 +
.../PyMod-3.6.8/Lib/genericpath.py | 157 +
.../Python-3.6.8/PyMod-3.6.8/Lib/glob.py | 110 +
.../PyMod-3.6.8/Lib/http/client.py | 1481 ++
.../Lib/importlib/_bootstrap_external.py | 1443 ++
.../Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py | 99 +
.../PyMod-3.6.8/Lib/logging/__init__.py | 2021 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py | 568 +
.../Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py | 792 +
.../Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py | 2686 +++
.../Python-3.6.8/PyMod-3.6.8/Lib/shutil.py | 1160 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/site.py | 529 +
.../PyMod-3.6.8/Lib/subprocess.py | 1620 ++
.../Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py | 2060 ++
.../PyMod-3.6.8/Modules/_blake2/impl/blake2.h | 161 +
.../PyMod-3.6.8/Modules/_ctypes/_ctypes.c | 5623 ++++++
.../PyMod-3.6.8/Modules/_ctypes/callproc.c | 1871 ++
.../Modules/_ctypes/ctypes_dlfcn.h | 29 +
.../Modules/_ctypes/libffi_msvc/ffi.c | 572 +
.../Modules/_ctypes/libffi_msvc/ffi.h | 331 +
.../Modules/_ctypes/libffi_msvc/ffi_common.h | 85 +
.../Modules/_ctypes/malloc_closure.c | 128 +
.../Python-3.6.8/PyMod-3.6.8/Modules/config.c | 159 +
.../PyMod-3.6.8/Modules/edk2module.c | 4348 +++++
.../PyMod-3.6.8/Modules/errnomodule.c | 890 +
.../PyMod-3.6.8/Modules/faulthandler.c | 1414 ++
.../PyMod-3.6.8/Modules/getpath.c | 1283 ++
.../Python-3.6.8/PyMod-3.6.8/Modules/main.c | 878 +
.../PyMod-3.6.8/Modules/selectmodule.c | 2638 +++
.../PyMod-3.6.8/Modules/socketmodule.c | 7810 ++++++++
.../PyMod-3.6.8/Modules/socketmodule.h | 282 +
.../PyMod-3.6.8/Modules/sre_lib.h | 1372 ++
.../PyMod-3.6.8/Modules/timemodule.c | 1526 ++
.../PyMod-3.6.8/Modules/zlib/gzguts.h | 218 +
.../PyMod-3.6.8/Objects/dictobject.c | 4472 +++++
.../PyMod-3.6.8/Objects/memoryobject.c | 3114 +++
.../Python-3.6.8/PyMod-3.6.8/Objects/object.c | 2082 ++
.../Objects/stringlib/transmogrify.h | 701 +
.../PyMod-3.6.8/Objects/unicodeobject.c | 15773 ++++++++++++++++
.../PyMod-3.6.8/Python/bltinmodule.c | 2794 +++
.../PyMod-3.6.8/Python/fileutils.c | 1767 ++
.../PyMod-3.6.8/Python/getcopyright.c | 38 +
.../PyMod-3.6.8/Python/importlib_external.h | 2431 +++
.../Python-3.6.8/PyMod-3.6.8/Python/marshal.c | 1861 ++
.../Python-3.6.8/PyMod-3.6.8/Python/pyhash.c | 437 +
.../PyMod-3.6.8/Python/pylifecycle.c | 1726 ++
.../Python-3.6.8/PyMod-3.6.8/Python/pystate.c | 969 +
.../Python-3.6.8/PyMod-3.6.8/Python/pytime.c | 749 +
.../Python-3.6.8/PyMod-3.6.8/Python/random.c | 636 +
.../Python/Python-3.6.8/Python368.inf | 275 +
.../Python-3.6.8/create_python368_pkg.bat | 48 +
.../Python/Python-3.6.8/srcprep.py | 30 +
59 files changed, 89413 insertions(+)
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Python368.inf
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
create mode 100644 AppPkg/Applications/Python/Python-3.6.8/srcprep.py
diff --git a/AppPkg/AppPkg.dsc b/AppPkg/AppPkg.dsc
index c2305d71..5938789d 100644
--- a/AppPkg/AppPkg.dsc
+++ b/AppPkg/AppPkg.dsc
@@ -126,6 +126,9 @@
#### Un-comment the following line to build Python 2.7.10.
# AppPkg/Applications/Python/Python-2.7.10/Python2710.inf
+#### Un-comment the following line to build Python 3.6.8.
+# AppPkg/Applications/Python/Python-3.6.8/Python368.inf
+
#### Un-comment the following line to build Lua.
# AppPkg/Applications/Lua/Lua.inf
diff --git a/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt b/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
new file mode 100644
index 00000000..69bb6bd1
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
@@ -0,0 +1,220 @@
+ EDK II Python
+ ReadMe
+ Version 3.6.8
+ Release 1.00
+ 01 September 2021
+
+
+1. OVERVIEW
+===========
+This document provides general information on building and setting up the
+Python environment for UEFI, invoking the interpreter, and things that make
+working with Python easier.
+
+It is assumed that you already have UDK2018 or later, or a current snapshot of
+the EDK II sources from www.tianocore.org (https://github.com/tianocore/edk2),
+and that you can successfully build packages within that distribution.
+
+2. Release Notes
+================
+ 1) All C extension modules must be statically linked (built in)
+ 2) The site and os modules must exist as discrete files in ...\lib\python36.8
+ 3) User-specific configurations are not supported.
+ 4) Environment variables are not supported.
+
+3. Getting and Building Python
+==============================
+ 3.1 Getting Python
+ ==================
+ This file describes the UEFI port of version 3.6.8 of the CPython distribution.
+ For development ease, a subset of the Python 3.6.8 distribution has been
+ included as part of the AppPkg/Applications/Python/Python-3.6.8 source tree.
+ If this is sufficient, you may skip to section 3.2, Building Python.
+
+ If a full distribution is desired, it can be merged into the Python-3.6.8
+ source tree. Directory AppPkg/Applications/Python/Python-3.6.8 corresponds
+ to the root directory of the CPython 3.6.8 distribution. The full
+ CPython 3.6.8 source code may be downloaded from
+ http://www.python.org/ftp/python/3.6.8/.
+
+ A. Within your EDK II development tree, extract the Python distribution into
+ AppPkg/Applications/Python/Python-3.6.8. This should merge the additional
+ files into the source tree. It will also create the following directories:
+ Demo Doc Grammar Mac Misc
+ PC PCbuild RISCOS Tools
+
+ The greatest change will be within the Python-3.6.8/Lib directory where
+ many more packages and modules will be added. These additional components
+ may not have been ported to EDK II yet.
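+
+ As an illustration only (the exact commands depend on your host
+ environment), fetching and merging the full distribution on a Linux
+ development host could look like this:
+ cd AppPkg/Applications/Python/Python-3.6.8
+ wget http://www.python.org/ftp/python/3.6.8/Python-3.6.8.tgz
+ tar -xzf Python-3.6.8.tgz --strip-components=1
+ Only the download URL above comes from the Python project; the rest is
+ a suggested sequence, not a requirement of this port.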
+
+ 3.2 Building Python
+ ===================
+ A. From the AppPkg/Applications/Python/Python-3.6.8 directory, execute the
+ srcprep.py script to copy the files from within the
+ PyMod-3.6.8 sub-tree into their corresponding directories within the
+ distribution. This step only needs to be performed prior to the first
+ build of Python, or if one of the files within the PyMod tree has been
+ modified.
+
+ B. Edit PyMod-3.6.8\Modules\config.c to enable the built-in modules you need.
+ By default, it is configured for the minimally required set of modules.
+ Mandatory Built-in Modules:
+ edk2 errno imp marshal
+
+ The following additional built-in modules are required in order to use
+ the help() functionality provided by PyDoc:
+ _codecs _collections _functools _random
+ _sre _struct _weakref binascii
+ gc itertools math _operator
+ time
+
+ C. Edit AppPkg/AppPkg.dsc to enable (uncomment) the Python368.inf line
+ within the [Components] section.
+
+ D. Build AppPkg using the standard "build" command:
+ For example, to build Python for an X64 CPU architecture:
+ build -a X64 -p AppPkg\AppPkg.dsc
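+
+ As an end-to-end illustration (the tool chain tag below is only an
+ example, and a host Python interpreter is assumed to be on the PATH
+ as "python"), the first build could be done as follows:
+ cd AppPkg\Applications\Python\Python-3.6.8
+ python srcprep.py
+ cd ..\..\..\..
+ build -a X64 -t VS2019 -p AppPkg\AppPkg.dsc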
+
+4. Python-related paths and files
+=================================
+Python depends upon the existence of several directories and files on the
+target system.
+
+ \EFI Root of the UEFI system area.
+ |- \Tools Location of the Python.efi executable.
+ |- \Boot UEFI specified Boot directory.
+ |- \StdLib Root of the Standard Libraries sub-tree.
+ |- \etc Configuration files used by libraries.
+ |- \lib Root of the libraries tree.
+ |- \python36.8 Directory containing the Python library
+ | modules.
+ |- \lib-dynload Dynamically loadable Python extensions.
+ | Not supported currently.
+ |- \site-packages Site-specific packages and modules.
+
+
+5. Installing Python
+====================
+These directories, on the target system, are populated from the development
+system as follows:
+
+ * \Efi\Tools receives a copy of Build/AppPkg/RELEASE_VS2017/X64/Python368.efi.
+ ^^^^^^^^^^^^^^^^
+ Modify the host path to match your build type and compiler.
+
+ * The \Efi\StdLib\etc directory is populated from the StdLib/Efi/StdLib/etc
+ source directory.
+
+ * Directory \Efi\StdLib\lib\python36.8 is populated with packages and modules
+ from the AppPkg/Applications/Python/Python-3.6.8/Lib directory.
+ The recommended minimum set of modules (.py, .pyc, and/or .pyo):
+ os stat ntpath warnings traceback
+ site types linecache genericpath
+
+ * Python C Extension Modules built as dynamically loadable extensions go into
+ the \Efi\StdLib\lib\python36.8\lib-dynload directory. This functionality is not
+ yet implemented.
+
+ A script, create_python368_pkg.bat, is provided to facilitate populating
+ the target EFI package. Execute this script from within the
+ AppPkg/Applications/Python/Python-3.6.8 directory, providing the tool chain,
+ the target build, and the path to the destination directory.
+ The appropriate contents of the AppPkg/Applications/Python/Python-3.6.8/Lib and
+ Python368.efi Application from Build/AppPkg/RELEASE_VS2017/X64/ will be
+ ^^^^^^^^^^^^^^
+ copied into the specified destination directory.
+
+ Replace "RELEASE_VS2017", in the source path, with values appropriate for your tool chain.
+
+
+6. Example: Enabling socket support
+===================================
+ 1. enable {"_socket", init_socket}, in PyMod-3.6.8\Modules\config.c
+ 2. enable LibraryClasses BsdSocketLib and EfiSocketLib in Python368.inf.
+ 3. Build Python368
+ build -a X64 -p AppPkg\AppPkg.dsc
+ 6. copy Build\AppPkg\RELEASE_VS2017\X64\Python368.efi to \Efi\Tools on your
+ target system. Replace "RELEASE_VS2017", in the source path, with
+ values appropriate for your tool chain.
+
+7. Running Python
+=================
+ Python must currently be run from an EFI FAT-32 partition, or volume, under
+ the UEFI Shell. At the Shell prompt enter the desired volume name, followed
+ by a colon ':', then press Enter. Python can then be executed by typing its
+ name, followed by any desired options and arguments.
+
+ EXAMPLE:
+ Shell> fs0:
+ FS0:\> python368
+ Python 3.6.8 (default, Jun 24 2015, 17:38:32) [C] on uefi
+ Type "help", "copyright", "credits" or "license" for more information.
+ >>> exit()
+ FS0:\>
+
+
+8. Supported C Modules
+======================
+ Module Name C File(s)
+ =============== =============================================
+ _ast Python/Python-ast.c
+ _codecs Modules/_codecsmodule.c
+ _collections Modules/_collectionsmodule.c
+ _csv Modules/_csv.c
+ _functools Modules/_functoolsmodule.c
+ _io Modules/_io/_iomodule.c
+ _json Modules/_json.c
+ _md5 Modules/md5module.c
+ _multibytecodec Modules/cjkcodecs/multibytecodec.c
+ _random Modules/_randommodule.c
+ _sha1 Modules/sha1module.c
+ _sha256 Modules/sha256module.c
+ _sha512 Modules/sha512module.c
+ _sre Modules/_sre.c
+ _struct Modules/_struct.c
+ _symtable Modules/symtablemodule.c
+ _weakref Modules/_weakref.c
+ array Modules/arraymodule.c
+ binascii Modules/binascii.c
+ cmath Modules/cmathmodule.c
+ datetime Modules/_datetimemodule.c
+ edk2 Modules/PyMod-3.6.8/edk2module.c
+ errno Modules/errnomodule.c
+ gc Modules/gcmodule.c
+ imp Python/import.c
+ itertools Modules/itertoolsmodule.c
+ marshal Python/marshal.c
+ _operator Modules/_operator.c
+ parser Modules/parsermodule.c
+ select Modules/selectmodule.c
+ signal Modules/signalmodule.c
+ time Modules/timemodule.c
+ zlib Modules/zlibmodule.c
+
+
+9. Tested Python Library Modules
+================================
+This is a partial list of the packages and modules of the Python Standard
+Library that have been tested or used in some manner.
+
+ encodings genericpath.py site.py
+ importlib getopt.py socket.py
+ json hashlib.py sre.py
+ pydoc_data heapq.py sre_compile.py
+ xml inspect.py sre_constants.py
+ abc.py io.py sre_parse.py
+ argparse.py keyword.py stat.py
+ ast.py linecache.py string.py
+ atexit.py locale.py struct.py
+ binhex.py modulefinder.py textwrap.py
+ bisect.py ntpath.py token.py
+ calendar.py numbers.py tokenize.py
+ cmd.py optparse.py traceback.py
+ codecs.py os.py types.py
+ collections.py platform.py warnings.py
+ copy.py posixpath.py weakref.py
+ csv.py pydoc.py zipfile.py
+ fileinput.py random.py
+ formatter.py re.py
+ functools.py runpy.py
+# # #
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
new file mode 100644
index 00000000..5540505d
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
@@ -0,0 +1,159 @@
+#ifndef Py_FILEUTILS_H
+#define Py_FILEUTILS_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x03050000
+PyAPI_FUNC(wchar_t *) Py_DecodeLocale(
+ const char *arg,
+ size_t *size);
+
+PyAPI_FUNC(char*) Py_EncodeLocale(
+ const wchar_t *text,
+ size_t *error_pos);
+#endif
+
+#ifndef Py_LIMITED_API
+
+PyAPI_FUNC(wchar_t *) _Py_DecodeLocaleEx(
+ const char *arg,
+ size_t *size,
+ int current_locale);
+
+PyAPI_FUNC(char*) _Py_EncodeLocaleEx(
+ const wchar_t *text,
+ size_t *error_pos,
+ int current_locale);
+
+PyAPI_FUNC(PyObject *) _Py_device_encoding(int);
+
+#if defined(MS_WINDOWS) || defined(__APPLE__) || defined(UEFI_C_SOURCE)
+ /* On Windows, the count parameter of read() is an int (bpo-9015, bpo-9611).
+ On macOS 10.13, read() and write() with more than INT_MAX bytes
+ fail with EINVAL (bpo-24658). */
+# define _PY_READ_MAX INT_MAX
+# define _PY_WRITE_MAX INT_MAX
+#else
+ /* write() should truncate the input to PY_SSIZE_T_MAX bytes,
+ but it's safer to do it ourself to have a portable behaviour */
+# define _PY_READ_MAX PY_SSIZE_T_MAX
+# define _PY_WRITE_MAX PY_SSIZE_T_MAX
+#endif
+
+#ifdef MS_WINDOWS
+struct _Py_stat_struct {
+ unsigned long st_dev;
+ uint64_t st_ino;
+ unsigned short st_mode;
+ int st_nlink;
+ int st_uid;
+ int st_gid;
+ unsigned long st_rdev;
+ __int64 st_size;
+ time_t st_atime;
+ int st_atime_nsec;
+ time_t st_mtime;
+ int st_mtime_nsec;
+ time_t st_ctime;
+ int st_ctime_nsec;
+ unsigned long st_file_attributes;
+};
+#else
+# define _Py_stat_struct stat
+#endif
+
+PyAPI_FUNC(int) _Py_fstat(
+ int fd,
+ struct _Py_stat_struct *status);
+
+PyAPI_FUNC(int) _Py_fstat_noraise(
+ int fd,
+ struct _Py_stat_struct *status);
+
+PyAPI_FUNC(int) _Py_stat(
+ PyObject *path,
+ struct stat *status);
+
+PyAPI_FUNC(int) _Py_open(
+ const char *pathname,
+ int flags);
+
+PyAPI_FUNC(int) _Py_open_noraise(
+ const char *pathname,
+ int flags);
+
+PyAPI_FUNC(FILE *) _Py_wfopen(
+ const wchar_t *path,
+ const wchar_t *mode);
+
+PyAPI_FUNC(FILE*) _Py_fopen(
+ const char *pathname,
+ const char *mode);
+
+PyAPI_FUNC(FILE*) _Py_fopen_obj(
+ PyObject *path,
+ const char *mode);
+
+PyAPI_FUNC(Py_ssize_t) _Py_read(
+ int fd,
+ void *buf,
+ size_t count);
+
+PyAPI_FUNC(Py_ssize_t) _Py_write(
+ int fd,
+ const void *buf,
+ size_t count);
+
+PyAPI_FUNC(Py_ssize_t) _Py_write_noraise(
+ int fd,
+ const void *buf,
+ size_t count);
+
+#ifdef HAVE_READLINK
+PyAPI_FUNC(int) _Py_wreadlink(
+ const wchar_t *path,
+ wchar_t *buf,
+ size_t bufsiz);
+#endif
+
+#ifdef HAVE_REALPATH
+PyAPI_FUNC(wchar_t*) _Py_wrealpath(
+ const wchar_t *path,
+ wchar_t *resolved_path,
+ size_t resolved_path_size);
+#endif
+
+PyAPI_FUNC(wchar_t*) _Py_wgetcwd(
+ wchar_t *buf,
+ size_t size);
+
+PyAPI_FUNC(int) _Py_get_inheritable(int fd);
+
+PyAPI_FUNC(int) _Py_set_inheritable(int fd, int inheritable,
+ int *atomic_flag_works);
+
+PyAPI_FUNC(int) _Py_set_inheritable_async_safe(int fd, int inheritable,
+ int *atomic_flag_works);
+
+PyAPI_FUNC(int) _Py_dup(int fd);
+
+#ifndef MS_WINDOWS
+PyAPI_FUNC(int) _Py_get_blocking(int fd);
+
+PyAPI_FUNC(int) _Py_set_blocking(int fd, int blocking);
+#endif /* !MS_WINDOWS */
+
+PyAPI_FUNC(int) _Py_GetLocaleconvNumeric(
+ PyObject **decimal_point,
+ PyObject **thousands_sep,
+ const char **grouping);
+
+#endif /* Py_LIMITED_API */
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* !Py_FILEUTILS_H */
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
new file mode 100644
index 00000000..98ce842c
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
@@ -0,0 +1,51 @@
+#ifndef Py_OSDEFS_H
+#define Py_OSDEFS_H
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+
+/* Operating system dependencies */
+
+#ifdef MS_WINDOWS
+#define SEP L'\\'
+#define ALTSEP L'/'
+#define MAXPATHLEN 256
+#define DELIM L';'
+#endif
+
+/* Filename separator */
+#ifndef SEP
+#define SEP L'/'
+#endif
+
+/* Max pathname length */
+#ifdef __hpux
+#include <sys/param.h>
+#include <limits.h>
+#ifndef PATH_MAX
+#define PATH_MAX MAXPATHLEN
+#endif
+#endif
+
+#ifndef MAXPATHLEN
+#if defined(PATH_MAX) && PATH_MAX > 1024
+#define MAXPATHLEN PATH_MAX
+#else
+#define MAXPATHLEN 1024
+#endif
+#endif
+
+/* Search path entry delimiter */
+#ifndef DELIM
+#ifndef UEFI_C_SOURCE
+#define DELIM L':'
+#else
+#define DELIM L';'
+#endif
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* !Py_OSDEFS_H */
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
new file mode 100644
index 00000000..d4685da3
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
@@ -0,0 +1,1322 @@
+/** @file
+ Manually generated Python Configuration file for EDK II.
+
+ Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+#ifndef Py_PYCONFIG_H
+#define Py_PYCONFIG_H
+#ifdef UEFI_C_SOURCE
+#include <Uefi.h>
+#define PLATFORM "uefi"
+#define _rotl64(a, offset) (a << offset) | (a >> (64 - offset))
+#endif
+#define Py_BUILD_CORE
+
+//#define Py_LIMITED_API=0x03060000
+
+/* Define if building universal (internal helper macro) */
+#undef AC_APPLE_UNIVERSAL_BUILD
+
+/* Define for AIX if your compiler is a genuine IBM xlC/xlC_r and you want
+ support for AIX C++ shared extension modules. */
+#undef AIX_GENUINE_CPLUSPLUS
+
+/* Define this if you have AtheOS threads. */
+#undef ATHEOS_THREADS
+
+/* Define this if you have BeOS threads. */
+#undef BEOS_THREADS
+
+/* Define if you have the Mach cthreads package */
+#undef C_THREADS
+
+/* Define if C doubles are 64-bit IEEE 754 binary format, stored in ARM
+ mixed-endian order (byte order 45670123) */
+#undef DOUBLE_IS_ARM_MIXED_ENDIAN_IEEE754
+
+/* Define if C doubles are 64-bit IEEE 754 binary format, stored with the most
+ significant byte first */
+#undef DOUBLE_IS_BIG_ENDIAN_IEEE754
+
+/* Define if C doubles are 64-bit IEEE 754 binary format, stored with the
+ least significant byte first */
+#define DOUBLE_IS_LITTLE_ENDIAN_IEEE754
+
+/* Define if --enable-ipv6 is specified */
+#undef ENABLE_IPV6
+
+/* Define if flock needs to be linked with bsd library. */
+#undef FLOCK_NEEDS_LIBBSD
+
+/* Define if getpgrp() must be called as getpgrp(0). */
+#undef GETPGRP_HAVE_ARG
+
+/* Define if gettimeofday() does not have second (timezone) argument This is
+ the case on Motorola V4 (R40V4.2) */
+#undef GETTIMEOFDAY_NO_TZ
+
+/* Define to 1 if you have the 'acosh' function. */
+#undef HAVE_ACOSH
+
+/* struct addrinfo (netdb.h) */
+#undef HAVE_ADDRINFO
+
+/* Define to 1 if you have the 'alarm' function. */
+#undef HAVE_ALARM
+
+/* Define to 1 if you have the <alloca.h> header file. */
+#undef HAVE_ALLOCA_H
+
+/* Define this if your time.h defines altzone. */
+#undef HAVE_ALTZONE
+
+/* Define to 1 if you have the 'asinh' function. */
+#undef HAVE_ASINH
+
+/* Define to 1 if you have the <asm/types.h> header file. */
+#undef HAVE_ASM_TYPES_H
+
+/* Define to 1 if you have the 'atanh' function. */
+#undef HAVE_ATANH
+
+/* Define if GCC supports __attribute__((format(PyArg_ParseTuple, 2, 3))) */
+#undef HAVE_ATTRIBUTE_FORMAT_PARSETUPLE
+
+/* Define to 1 if you have the 'bind_textdomain_codeset' function. */
+#undef HAVE_BIND_TEXTDOMAIN_CODESET
+
+/* Define to 1 if you have the <bluetooth/bluetooth.h> header file. */
+#undef HAVE_BLUETOOTH_BLUETOOTH_H
+
+/* Define to 1 if you have the <bluetooth.h> header file. */
+#undef HAVE_BLUETOOTH_H
+
+/* Define if nice() returns success/failure instead of the new priority. */
+#undef HAVE_BROKEN_NICE
+
+/* Define if the system reports an invalid PIPE_BUF value. */
+#undef HAVE_BROKEN_PIPE_BUF
+
+/* Define if poll() sets errno on invalid file descriptors. */
+#undef HAVE_BROKEN_POLL
+
+/* Define if the Posix semaphores do not work on your system */
+#define HAVE_BROKEN_POSIX_SEMAPHORES 1
+
+/* Define if pthread_sigmask() does not work on your system. */
+#define HAVE_BROKEN_PTHREAD_SIGMASK 1
+
+/* define to 1 if your sem_getvalue is broken. */
+#define HAVE_BROKEN_SEM_GETVALUE 1
+
+/* Define if 'unsetenv' does not return an int. */
+#undef HAVE_BROKEN_UNSETENV
+
+/* Define this if you have the type _Bool. */
+#define HAVE_C99_BOOL 1
+
+/* Define to 1 if you have the 'chflags' function. */
+#undef HAVE_CHFLAGS
+
+/* Define to 1 if you have the 'chown' function. */
+#undef HAVE_CHOWN
+
+/* Define if you have the 'chroot' function. */
+#undef HAVE_CHROOT
+
+/* Define to 1 if you have the 'clock' function. */
+#define HAVE_CLOCK 1
+
+/* Define to 1 if you have the 'confstr' function. */
+#undef HAVE_CONFSTR
+
+/* Define to 1 if you have the <conio.h> header file. */
+#undef HAVE_CONIO_H
+
+/* Define to 1 if you have the 'copysign' function. */
+#define HAVE_COPYSIGN 1
+
+/* Define to 1 if you have the 'ctermid' function. */
+#undef HAVE_CTERMID
+
+/* Define if you have the 'ctermid_r' function. */
+#undef HAVE_CTERMID_R
+
+/* Define to 1 if you have the <curses.h> header file. */
+#undef HAVE_CURSES_H
+
+/* Define if you have the 'is_term_resized' function. */
+#undef HAVE_CURSES_IS_TERM_RESIZED
+
+/* Define if you have the 'resizeterm' function. */
+#undef HAVE_CURSES_RESIZETERM
+
+/* Define if you have the 'resize_term' function. */
+#undef HAVE_CURSES_RESIZE_TERM
+
+/* Define to 1 if you have the declaration of 'isfinite', and to 0 if you
+ don't. */
+#define HAVE_DECL_ISFINITE 0
+
+/* Define to 1 if you have the declaration of 'isinf', and to 0 if you don't.
+ */
+#define HAVE_DECL_ISINF 1
+
+/* Define to 1 if you have the declaration of 'isnan', and to 0 if you don't.
+ */
+#define HAVE_DECL_ISNAN 1
+
+/* Define to 1 if you have the declaration of 'tzname', and to 0 if you don't.
+ */
+#define HAVE_DECL_TZNAME 0
+
+/* Define to 1 if you have the device macros. */
+#undef HAVE_DEVICE_MACROS
+
+/* Define to 1 if you have the /dev/ptc device file. */
+#undef HAVE_DEV_PTC
+
+/* Define to 1 if you have the /dev/ptmx device file. */
+#undef HAVE_DEV_PTMX
+
+/* Define to 1 if you have the <direct.h> header file. */
+#undef HAVE_DIRECT_H
+
+/* Define to 1 if you have the <dirent.h> header file, and it defines 'DIR'. */
+#define HAVE_DIRENT_H 1
+
+/* Define to 1 if you have the <dlfcn.h> header file. */
+#undef HAVE_DLFCN_H
+
+/* Define to 1 if you have the 'dlopen' function. */
+#undef HAVE_DLOPEN
+
+/* Define to 1 if you have the 'dup2' function. */
+#define HAVE_DUP2 1
+
+/* Defined when any dynamic module loading is enabled. */
+#undef HAVE_DYNAMIC_LOADING
+
+/* Define if you have the 'epoll' functions. */
+#undef HAVE_EPOLL
+
+/* Define to 1 if you have the 'erf' function. */
+#undef HAVE_ERF
+
+/* Define to 1 if you have the 'erfc' function. */
+#undef HAVE_ERFC
+
+/* Define to 1 if you have the <errno.h> header file. */
+#define HAVE_ERRNO_H 1
+
+/* Define to 1 if you have the 'execv' function. */
+#undef HAVE_EXECV
+
+/* Define to 1 if you have the 'expm1' function. */
+#undef HAVE_EXPM1
+
+/* Define if you have the 'fchdir' function. */
+#undef HAVE_FCHDIR
+
+/* Define to 1 if you have the 'fchmod' function. */
+#undef HAVE_FCHMOD
+
+/* Define to 1 if you have the 'fchown' function. */
+#undef HAVE_FCHOWN
+
+/* Define to 1 if you have the <fcntl.h> header file. */
+#define HAVE_FCNTL_H 1
+
+/* Define if you have the 'fdatasync' function. */
+#undef HAVE_FDATASYNC
+
+/* Define to 1 if you have the 'finite' function. */
+#define HAVE_FINITE 1
+
+/* Define to 1 if you have the 'flock' function. */
+#undef HAVE_FLOCK
+
+/* Define to 1 if you have the 'fork' function. */
+#undef HAVE_FORK
+
+/* Define to 1 if you have the 'forkpty' function. */
+#undef HAVE_FORKPTY
+
+/* Define to 1 if you have the 'fpathconf' function. */
+#undef HAVE_FPATHCONF
+
+/* Define to 1 if you have the 'fseek64' function. */
+#undef HAVE_FSEEK64
+
+/* Define to 1 if you have the 'fseeko' function. */
+#define HAVE_FSEEKO 1
+
+/* Define to 1 if you have the 'fstatvfs' function. */
+#undef HAVE_FSTATVFS
+
+/* Define if you have the 'fsync' function. */
+#undef HAVE_FSYNC
+
+/* Define to 1 if you have the 'ftell64' function. */
+#undef HAVE_FTELL64
+
+/* Define to 1 if you have the 'ftello' function. */
+#define HAVE_FTELLO 1
+
+/* Define to 1 if you have the 'ftime' function. */
+#undef HAVE_FTIME
+
+/* Define to 1 if you have the 'ftruncate' function. */
+#undef HAVE_FTRUNCATE
+
+/* Define to 1 if you have the 'gai_strerror' function. */
+#undef HAVE_GAI_STRERROR
+
+/* Define to 1 if you have the 'gamma' function. */
+#undef HAVE_GAMMA
+
+/* Define if we can use gcc inline assembler to get and set x87 control word */
+#if defined(__GNUC__)
+ #define HAVE_GCC_ASM_FOR_X87 1
+#else
+ #undef HAVE_GCC_ASM_FOR_X87
+#endif
+
+/* Define if you have the getaddrinfo function. */
+//#undef HAVE_GETADDRINFO
+#define HAVE_GETADDRINFO 1
+
+/* Define to 1 if you have the 'getcwd' function. */
+#define HAVE_GETCWD 1
+
+/* Define this if you have flockfile(), getc_unlocked(), and funlockfile() */
+#undef HAVE_GETC_UNLOCKED
+
+/* Define to 1 if you have the 'getentropy' function. */
+#undef HAVE_GETENTROPY
+
+/* Define to 1 if you have the 'getgroups' function. */
+#undef HAVE_GETGROUPS
+
+/* Define to 1 if you have the 'gethostbyname' function. */
+//#undef HAVE_GETHOSTBYNAME
+#define HAVE_GETHOSTBYNAME 1
+
+/* Define this if you have some version of gethostbyname_r() */
+#undef HAVE_GETHOSTBYNAME_R
+
+/* Define this if you have the 3-arg version of gethostbyname_r(). */
+#undef HAVE_GETHOSTBYNAME_R_3_ARG
+
+/* Define this if you have the 5-arg version of gethostbyname_r(). */
+#undef HAVE_GETHOSTBYNAME_R_5_ARG
+
+/* Define this if you have the 6-arg version of gethostbyname_r(). */
+#undef HAVE_GETHOSTBYNAME_R_6_ARG
+
+/* Define to 1 if you have the 'getitimer' function. */
+#undef HAVE_GETITIMER
+
+/* Define to 1 if you have the 'getloadavg' function. */
+#undef HAVE_GETLOADAVG
+
+/* Define to 1 if you have the 'getlogin' function. */
+#undef HAVE_GETLOGIN
+
+/* Define to 1 if you have the 'getnameinfo' function. */
+//#undef HAVE_GETNAMEINFO
+#define HAVE_GETNAMEINFO 1
+
+/* Define if you have the 'getpagesize' function. */
+#undef HAVE_GETPAGESIZE
+
+/* Define to 1 if you have the 'getpeername' function. */
+#define HAVE_GETPEERNAME 1
+
+/* Define to 1 if you have the 'getpgid' function. */
+#undef HAVE_GETPGID
+
+/* Define to 1 if you have the 'getpgrp' function. */
+#undef HAVE_GETPGRP
+
+/* Define to 1 if you have the 'getpid' function. */
+#undef HAVE_GETPID
+
+/* Define to 1 if you have the 'getpriority' function. */
+#undef HAVE_GETPRIORITY
+
+/* Define to 1 if you have the 'getpwent' function. */
+#undef HAVE_GETPWENT
+
+/* Define to 1 if you have the 'getresgid' function. */
+#undef HAVE_GETRESGID
+
+/* Define to 1 if you have the 'getresuid' function. */
+#undef HAVE_GETRESUID
+
+/* Define to 1 if you have the 'getsid' function. */
+#undef HAVE_GETSID
+
+/* Define to 1 if you have the 'getspent' function. */
+#undef HAVE_GETSPENT
+
+/* Define to 1 if you have the 'getspnam' function. */
+#undef HAVE_GETSPNAM
+
+/* Define to 1 if you have the 'gettimeofday' function. */
+#undef HAVE_GETTIMEOFDAY
+
+/* Define to 1 if you have the 'getwd' function. */
+#undef HAVE_GETWD
+
+/* Define to 1 if you have the <grp.h> header file. */
+#undef HAVE_GRP_H
+
+/* Define if you have the 'hstrerror' function. */
+#undef HAVE_HSTRERROR
+
+/* Define to 1 if you have the 'hypot' function. */
+#undef HAVE_HYPOT
+
+/* Define to 1 if you have the <ieeefp.h> header file. */
+#undef HAVE_IEEEFP_H
+
+/* Define if you have the 'inet_aton' function. */
+#define HAVE_INET_ATON 1
+
+/* Define if you have the 'inet_pton' function. */
+#define HAVE_INET_PTON 1
+
+/* Define to 1 if you have the 'initgroups' function. */
+#undef HAVE_INITGROUPS
+
+/* Define if your compiler provides int32_t. */
+#undef HAVE_INT32_T
+
+/* Define if your compiler provides int64_t. */
+#undef HAVE_INT64_T
+
+/* Define to 1 if you have the <inttypes.h> header file. */
+#define HAVE_INTTYPES_H 1
+
+/* Define to 1 if you have the <io.h> header file. */
+#undef HAVE_IO_H
+
+/* Define to 1 if you have the 'kill' function. */
+#undef HAVE_KILL
+
+/* Define to 1 if you have the 'killpg' function. */
+#undef HAVE_KILLPG
+
+/* Define if you have the 'kqueue' functions. */
+#undef HAVE_KQUEUE
+
+/* Define to 1 if you have the <langinfo.h> header file. */
+#undef HAVE_LANGINFO_H /* non-functional in EFI. */
+
+/* Defined to enable large file support when an off_t is bigger than a long
+ and long long is available and at least as big as an off_t. You may need to
+ add some flags for configuration and compilation to enable this mode. (For
+ Solaris and Linux, the necessary defines are already defined.) */
+#undef HAVE_LARGEFILE_SUPPORT
+
+/* Define to 1 if you have the 'lchflags' function. */
+#undef HAVE_LCHFLAGS
+
+/* Define to 1 if you have the 'lchmod' function. */
+#undef HAVE_LCHMOD
+
+/* Define to 1 if you have the 'lchown' function. */
+#undef HAVE_LCHOWN
+
+/* Define to 1 if you have the 'lgamma' function. */
+#undef HAVE_LGAMMA
+
+/* Define to 1 if you have the 'dl' library (-ldl). */
+#undef HAVE_LIBDL
+
+/* Define to 1 if you have the 'dld' library (-ldld). */
+#undef HAVE_LIBDLD
+
+/* Define to 1 if you have the 'ieee' library (-lieee). */
+#undef HAVE_LIBIEEE
+
+/* Define to 1 if you have the <libintl.h> header file. */
+#undef HAVE_LIBINTL_H
+
+/* Define if you have the readline library (-lreadline). */
+#undef HAVE_LIBREADLINE
+
+/* Define to 1 if you have the 'resolv' library (-lresolv). */
+#undef HAVE_LIBRESOLV
+
+/* Define to 1 if you have the <libutil.h> header file. */
+#undef HAVE_LIBUTIL_H
+
+/* Define if you have the 'link' function. */
+#undef HAVE_LINK
+
+/* Define to 1 if you have the <linux/netlink.h> header file. */
+#undef HAVE_LINUX_NETLINK_H
+
+/* Define to 1 if you have the <linux/tipc.h> header file. */
+#undef HAVE_LINUX_TIPC_H
+
+/* Define to 1 if you have the 'log1p' function. */
+#undef HAVE_LOG1P
+
+/* Define this if you have the type long double. */
+#undef HAVE_LONG_DOUBLE
+
+/* Define this if you have the type long long. */
+#define HAVE_LONG_LONG 1
+
+/* Define to 1 if you have the 'lstat' function. */
+#define HAVE_LSTAT 1
+
+/* Define this if you have the makedev macro. */
+#undef HAVE_MAKEDEV
+
+/* Define to 1 if you have the 'memmove' function. */
+#define HAVE_MEMMOVE 1
+
+/* Define to 1 if you have the <memory.h> header file. */
+#undef HAVE_MEMORY_H
+
+/* Define to 1 if you have the 'mkfifo' function. */
+#undef HAVE_MKFIFO
+
+/* Define to 1 if you have the 'mknod' function. */
+#undef HAVE_MKNOD
+
+/* Define to 1 if you have the 'mktime' function. */
+#define HAVE_MKTIME 1
+
+/* Define to 1 if you have the 'mmap' function. */
+#undef HAVE_MMAP
+
+/* Define to 1 if you have the 'mremap' function. */
+#undef HAVE_MREMAP
+
+/* Define to 1 if you have the <ncurses.h> header file. */
+#undef HAVE_NCURSES_H
+
+/* Define to 1 if you have the <ndir.h> header file, and it defines 'DIR'. */
+#undef HAVE_NDIR_H
+
+/* Define to 1 if you have the <netpacket/packet.h> header file. */
+#undef HAVE_NETPACKET_PACKET_H
+
+/* Define to 1 if you have the 'nice' function. */
+#undef HAVE_NICE
+
+/* Define to 1 if you have the 'openpty' function. */
+#undef HAVE_OPENPTY
+
+/* Define if compiling using MacOS X 10.5 SDK or later. */
+#undef HAVE_OSX105_SDK
+
+/* Define to 1 if you have the 'pathconf' function. */
+#undef HAVE_PATHCONF
+
+/* Define to 1 if you have the 'pause' function. */
+#undef HAVE_PAUSE
+
+/* Define to 1 if you have the 'plock' function. */
+#undef HAVE_PLOCK
+
+/* Define to 1 if you have the 'poll' function. */
+#define HAVE_POLL 1
+
+/* Define to 1 if you have the <poll.h> header file. */
+#undef HAVE_POLL_H
+
+/* Define to 1 if you have the <process.h> header file. */
+#undef HAVE_PROCESS_H
+
+/* Define if your compiler supports function prototype */
+#define HAVE_PROTOTYPES 1
+
+/* Define if you have GNU PTH threads. */
+#undef HAVE_PTH
+
+/* Define to 1 if you have the 'pthread_atfork' function. */
+#undef HAVE_PTHREAD_ATFORK
+
+/* Defined for Solaris 2.6 bug in pthread header. */
+#undef HAVE_PTHREAD_DESTRUCTOR
+
+/* Define to 1 if you have the <pthread.h> header file. */
+#undef HAVE_PTHREAD_H
+
+/* Define to 1 if you have the 'pthread_init' function. */
+#undef HAVE_PTHREAD_INIT
+
+/* Define to 1 if you have the 'pthread_sigmask' function. */
+#undef HAVE_PTHREAD_SIGMASK
+
+/* Define to 1 if you have the <pty.h> header file. */
+#undef HAVE_PTY_H
+
+/* Define to 1 if you have the 'putenv' function. */
+#undef HAVE_PUTENV
+
+/* Define if the libcrypto has RAND_egd */
+#undef HAVE_RAND_EGD
+
+/* Define to 1 if you have the 'readlink' function. */
+#undef HAVE_READLINK
+
+/* Define to 1 if you have the 'realpath' function. */
+#define HAVE_REALPATH 1
+
+/* Define if you have readline 2.1 */
+#undef HAVE_RL_CALLBACK
+
+/* Define if you can turn off readline's signal handling. */
+#undef HAVE_RL_CATCH_SIGNAL
+
+/* Define if you have readline 2.2 */
+#undef HAVE_RL_COMPLETION_APPEND_CHARACTER
+
+/* Define if you have readline 4.0 */
+#undef HAVE_RL_COMPLETION_DISPLAY_MATCHES_HOOK
+
+/* Define if you have readline 4.2 */
+#undef HAVE_RL_COMPLETION_MATCHES
+
+/* Define if you have rl_completion_suppress_append */
+#undef HAVE_RL_COMPLETION_SUPPRESS_APPEND
+
+/* Define if you have readline 4.0 */
+#undef HAVE_RL_PRE_INPUT_HOOK
+
+/* Define to 1 if you have the 'round' function. */
+#undef HAVE_ROUND
+
+/* Define to 1 if you have the 'select' function. */
+#define HAVE_SELECT 1
+
+/* Define to 1 if you have the 'sem_getvalue' function. */
+#undef HAVE_SEM_GETVALUE
+
+/* Define to 1 if you have the 'sem_open' function. */
+#undef HAVE_SEM_OPEN
+
+/* Define to 1 if you have the 'sem_timedwait' function. */
+#undef HAVE_SEM_TIMEDWAIT
+
+/* Define to 1 if you have the 'sem_unlink' function. */
+#undef HAVE_SEM_UNLINK
+
+/* Define to 1 if you have the 'setegid' function. */
+#undef HAVE_SETEGID
+
+/* Define to 1 if you have the 'seteuid' function. */
+#undef HAVE_SETEUID
+
+/* Define to 1 if you have the 'setgid' function. */
+#undef HAVE_SETGID
+
+/* Define if you have the 'setgroups' function. */
+#undef HAVE_SETGROUPS
+
+/* Define to 1 if you have the 'setitimer' function. */
+#undef HAVE_SETITIMER
+
+/* Define to 1 if you have the 'setlocale' function. */
+#define HAVE_SETLOCALE 1
+
+/* Define to 1 if you have the 'setpgid' function. */
+#undef HAVE_SETPGID
+
+/* Define to 1 if you have the 'setpgrp' function. */
+#undef HAVE_SETPGRP
+
+/* Define to 1 if you have the 'setregid' function. */
+#undef HAVE_SETREGID
+
+/* Define to 1 if you have the 'setresgid' function. */
+#undef HAVE_SETRESGID
+
+/* Define to 1 if you have the 'setresuid' function. */
+#undef HAVE_SETRESUID
+
+/* Define to 1 if you have the 'setreuid' function. */
+#undef HAVE_SETREUID
+
+/* Define to 1 if you have the 'setsid' function. */
+#undef HAVE_SETSID
+
+/* Define to 1 if you have the 'setuid' function. */
+#undef HAVE_SETUID
+
+/* Define to 1 if you have the 'setvbuf' function. */
+#define HAVE_SETVBUF 1
+
+/* Define to 1 if you have the <shadow.h> header file. */
+#undef HAVE_SHADOW_H
+
+/* Define to 1 if you have the 'sigaction' function. */
+#undef HAVE_SIGACTION
+
+/* Define to 1 if you have the 'siginterrupt' function. */
+#undef HAVE_SIGINTERRUPT
+
+/* Define to 1 if you have the <signal.h> header file. */
+#define HAVE_SIGNAL_H 1
+
+/* Define to 1 if you have the 'sigrelse' function. */
+#undef HAVE_SIGRELSE
+
+/* Define to 1 if you have the 'snprintf' function. */
+#define HAVE_SNPRINTF 1
+
+/* Define if sockaddr has sa_len member */
+#undef HAVE_SOCKADDR_SA_LEN
+
+/* struct sockaddr_storage (sys/socket.h) */
+#undef HAVE_SOCKADDR_STORAGE
+
+/* Define if you have the 'socketpair' function. */
+#undef HAVE_SOCKETPAIR
+
+/* Define to 1 if you have the <spawn.h> header file. */
+#undef HAVE_SPAWN_H
+
+/* Define if your compiler provides ssize_t */
+#define HAVE_SSIZE_T 1
+
+/* Define to 1 if you have the 'statvfs' function. */
+#undef HAVE_STATVFS
+
+/* Define if you have struct stat.st_mtim.tv_nsec */
+#undef HAVE_STAT_TV_NSEC
+
+/* Define if you have struct stat.st_mtimensec */
+#undef HAVE_STAT_TV_NSEC2
+
+/* Define if your compiler supports variable length function prototypes (e.g.
+ void fprintf(FILE *, char *, ...);) *and* <stdarg.h> */
+#define HAVE_STDARG_PROTOTYPES 1
+
+/* Define to 1 if you have the <stdint.h> header file. */
+#define HAVE_STDINT_H 1
+
+/* Define to 1 if you have the <stdlib.h> header file. */
+#define HAVE_STDLIB_H 1
+
+/* Define to 1 if you have the 'strdup' function. */
+#define HAVE_STRDUP 1
+
+/* Define to 1 if you have the 'strftime' function. */
+#define HAVE_STRFTIME 1
+
+/* Define to 1 if you have the <strings.h> header file. */
+#undef HAVE_STRINGS_H
+
+/* Define to 1 if you have the <string.h> header file. */
+#define HAVE_STRING_H 1
+
+/* Define to 1 if you have the <stropts.h> header file. */
+#undef HAVE_STROPTS_H
+
+/* Define to 1 if 'st_birthtime' is a member of 'struct stat'. */
+#define HAVE_STRUCT_STAT_ST_BIRTHTIME 1
+
+/* Define to 1 if 'st_blksize' is a member of 'struct stat'. */
+#define HAVE_STRUCT_STAT_ST_BLKSIZE 1
+
+/* Define to 1 if 'st_blocks' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_BLOCKS
+
+/* Define to 1 if 'st_flags' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_FLAGS
+
+/* Define to 1 if 'st_gen' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_GEN
+
+/* Define to 1 if 'st_rdev' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_RDEV
+
+/* Define to 1 if 'st_dev' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_DEV
+
+/* Define to 1 if 'st_ino' is a member of 'struct stat'. */
+#undef HAVE_STRUCT_STAT_ST_INO
+
+/* Define to 1 if 'tm_zone' is a member of 'struct tm'. */
+#undef HAVE_STRUCT_TM_TM_ZONE
+
+/* Define to 1 if your 'struct stat' has 'st_blocks'. Deprecated, use
+ 'HAVE_STRUCT_STAT_ST_BLOCKS' instead. */
+#undef HAVE_ST_BLOCKS
+
+/* Define if you have the 'symlink' function. */
+#undef HAVE_SYMLINK
+
+/* Define to 1 if you have the 'sysconf' function. */
+#undef HAVE_SYSCONF
+
+/* Define to 1 if you have the <sysexits.h> header file. */
+#undef HAVE_SYSEXITS_H
+
+/* Define to 1 if you have the <sys/audioio.h> header file. */
+#undef HAVE_SYS_AUDIOIO_H
+
+/* Define to 1 if you have the <sys/bsdtty.h> header file. */
+#undef HAVE_SYS_BSDTTY_H
+
+/* Define to 1 if you have the <sys/dir.h> header file, and it defines 'DIR'.
+ */
+#undef HAVE_SYS_DIR_H
+
+/* Define to 1 if you have the <sys/epoll.h> header file. */
+#undef HAVE_SYS_EPOLL_H
+
+/* Define to 1 if you have the <sys/event.h> header file. */
+#undef HAVE_SYS_EVENT_H
+
+/* Define to 1 if you have the <sys/file.h> header file. */
+#undef HAVE_SYS_FILE_H
+
+/* Define to 1 if you have the <sys/loadavg.h> header file. */
+#undef HAVE_SYS_LOADAVG_H
+
+/* Define to 1 if you have the <sys/lock.h> header file. */
+#undef HAVE_SYS_LOCK_H
+
+/* Define to 1 if you have the <sys/mkdev.h> header file. */
+#undef HAVE_SYS_MKDEV_H
+
+/* Define to 1 if you have the <sys/modem.h> header file. */
+#undef HAVE_SYS_MODEM_H
+
+/* Define to 1 if you have the <sys/ndir.h> header file, and it defines 'DIR'.
+ */
+#undef HAVE_SYS_NDIR_H
+
+/* Define to 1 if you have the <sys/param.h> header file. */
+#define HAVE_SYS_PARAM_H 1
+
+/* Define to 1 if you have the <sys/poll.h> header file. */
+#define HAVE_SYS_POLL_H 1
+
+/* Define to 1 if you have the <sys/resource.h> header file. */
+#define HAVE_SYS_RESOURCE_H 1
+
+/* Define to 1 if you have the <sys/select.h> header file. */
+#define HAVE_SYS_SELECT_H 1
+
+/* Define to 1 if you have the <sys/socket.h> header file. */
+#define HAVE_SYS_SOCKET_H 1
+
+/* Define to 1 if you have the <sys/statvfs.h> header file. */
+#undef HAVE_SYS_STATVFS_H
+
+/* Define to 1 if you have the <sys/stat.h> header file. */
+#define HAVE_SYS_STAT_H 1
+
+/* Define to 1 if you have the <sys/termio.h> header file. */
+#undef HAVE_SYS_TERMIO_H
+
+/* Define to 1 if you have the <sys/times.h> header file. */
+#undef HAVE_SYS_TIMES_H
+
+/* Define to 1 if you have the <sys/time.h> header file. */
+#define HAVE_SYS_TIME_H 1
+
+/* Define to 1 if you have the <sys/types.h> header file. */
+#define HAVE_SYS_TYPES_H 1
+
+/* Define to 1 if you have the <sys/un.h> header file. */
+#undef HAVE_SYS_UN_H
+
+/* Define to 1 if you have the <sys/utsname.h> header file. */
+#undef HAVE_SYS_UTSNAME_H
+
+/* Define to 1 if you have the <sys/wait.h> header file. */
+#undef HAVE_SYS_WAIT_H
+
+/* Define to 1 if you have the system() command. */
+#define HAVE_SYSTEM 1
+
+/* Define to 1 if you have the 'tcgetpgrp' function. */
+#undef HAVE_TCGETPGRP
+
+/* Define to 1 if you have the 'tcsetpgrp' function. */
+#undef HAVE_TCSETPGRP
+
+/* Define to 1 if you have the 'tempnam' function. */
+#define HAVE_TEMPNAM 1
+
+/* Define to 1 if you have the <termios.h> header file. */
+#undef HAVE_TERMIOS_H
+
+/* Define to 1 if you have the <term.h> header file. */
+#undef HAVE_TERM_H
+
+/* Define to 1 if you have the 'tgamma' function. */
+#undef HAVE_TGAMMA
+
+/* Define to 1 if you have the <thread.h> header file. */
+#undef HAVE_THREAD_H
+
+/* Define to 1 if you have the 'timegm' function. */
+#undef HAVE_TIMEGM
+
+/* Define to 1 if you have the 'times' function. */
+#undef HAVE_TIMES
+
+/* Define to 1 if you have the 'tmpfile' function. */
+#define HAVE_TMPFILE 1
+
+/* Define to 1 if you have the 'tmpnam' function. */
+#define HAVE_TMPNAM 1
+
+/* Define to 1 if you have the 'tmpnam_r' function. */
+#undef HAVE_TMPNAM_R
+
+/* Define to 1 if your 'struct tm' has 'tm_zone'. Deprecated, use
+ 'HAVE_STRUCT_TM_TM_ZONE' instead. */
+#undef HAVE_TM_ZONE
+
+/* Define to 1 if you have the 'truncate' function. */
+#undef HAVE_TRUNCATE
+
+/* Define to 1 if you don't have 'tm_zone' but do have the external array
+ 'tzname'. */
+#undef HAVE_TZNAME
+
+/* Define this if you have tcl and TCL_UTF_MAX==6 */
+#undef HAVE_UCS4_TCL
+
+/* Define if your compiler provides uint32_t. */
+#undef HAVE_UINT32_T
+
+/* Define if your compiler provides uint64_t. */
+#undef HAVE_UINT64_T
+
+/* Define to 1 if the system has the type 'uintptr_t'. */
+#define HAVE_UINTPTR_T 1
+
+/* Define to 1 if you have the 'uname' function. */
+#undef HAVE_UNAME
+
+/* Define to 1 if you have the <unistd.h> header file. */
+#define HAVE_UNISTD_H 1
+
+/* Define to 1 if you have the 'unsetenv' function. */
+#undef HAVE_UNSETENV
+
+/* Define if you have a useable wchar_t type defined in wchar.h; useable means
+ wchar_t must be an unsigned type with at least 16 bits. (see
+ Include/unicodeobject.h). */
+#define HAVE_USABLE_WCHAR_T 1
+
+/* Define to 1 if you have the <util.h> header file. */
+#undef HAVE_UTIL_H
+
+/* Define to 1 if you have the 'utimes' function. */
+#undef HAVE_UTIMES
+
+/* Define to 1 if you have the <utime.h> header file. */
+#define HAVE_UTIME_H 1
+
+/* Define to 1 if you have the 'wait3' function. */
+#undef HAVE_WAIT3
+
+/* Define to 1 if you have the 'wait4' function. */
+#undef HAVE_WAIT4
+
+/* Define to 1 if you have the 'waitpid' function. */
+#undef HAVE_WAITPID
+
+/* Define if the compiler provides a wchar.h header file. */
+#define HAVE_WCHAR_H 1
+
+/* Define to 1 if you have the 'wcscoll' function. */
+#define HAVE_WCSCOLL 1
+
+/* Define if tzset() actually switches the local timezone in a meaningful way.
+ */
+#undef HAVE_WORKING_TZSET
+
+/* Define if the zlib library has inflateCopy */
+#undef HAVE_ZLIB_COPY
+
+/* Define to 1 if you have the '_getpty' function. */
+#undef HAVE__GETPTY
+
+/* Define if you are using Mach cthreads directly under /include */
+#undef HURD_C_THREADS
+
+/* Define if you are using Mach cthreads under mach / */
+#undef MACH_C_THREADS
+
+/* Define to 1 if 'major', 'minor', and 'makedev' are declared in <mkdev.h>.
+ */
+#undef MAJOR_IN_MKDEV
+
+/* Define to 1 if 'major', 'minor', and 'makedev' are declared in
+ <sysmacros.h>. */
+#undef MAJOR_IN_SYSMACROS
+
+/* Define if mvwdelch in curses.h is an expression. */
+#undef MVWDELCH_IS_EXPRESSION
+
+/* Define to the address where bug reports for this package should be sent. */
+#define PACKAGE_BUGREPORT "edk2-devel@lists.01.org"
+
+/* Define to the full name of this package. */
+#define PACKAGE_NAME "EDK II Python 3.6.8 Package"
+
+/* Define to the full name and version of this package. */
+#define PACKAGE_STRING "EDK II Python 3.6.8 Package V0.1"
+
+/* Define to the one symbol short name of this package. */
+#define PACKAGE_TARNAME "EADK_Python"
+
+/* Define to the home page for this package. */
+#define PACKAGE_URL "http://www.tianocore.org/"
+
+/* Define to the version of this package. */
+#define PACKAGE_VERSION "V0.1"
+
+/* Define if POSIX semaphores aren't enabled on your system */
+#define POSIX_SEMAPHORES_NOT_ENABLED 1
+
+/* Defined if PTHREAD_SCOPE_SYSTEM supported. */
+#undef PTHREAD_SYSTEM_SCHED_SUPPORTED
+
+/* Define as the preferred size in bits of long digits */
+#undef PYLONG_BITS_IN_DIGIT
+
+/* Define to printf format modifier for long long type */
+#define PY_FORMAT_LONG_LONG "ll"
+
+/* Define to printf format modifier for Py_ssize_t */
+#define PY_FORMAT_SIZE_T "z"
+
+/* Define as the integral type used for Unicode representation. */
+#define PY_UNICODE_TYPE wchar_t
+
+/* Define if you want to build an interpreter with many run-time checks. */
+#undef Py_DEBUG
+
+/* Defined if Python is built as a shared library. */
+#undef Py_ENABLE_SHARED
+
+/* Define if you want to have a Unicode type. */
+#define Py_USING_UNICODE
+
+/* assume C89 semantics that RETSIGTYPE is always void */
+#undef RETSIGTYPE
+
+/* Define if setpgrp() must be called as setpgrp(0, 0). */
+#undef SETPGRP_HAVE_ARG
+
+/* Define this to be extension of shared libraries (including the dot!). */
+#undef SHLIB_EXT
+
+/* Define if i>>j for signed int i does not extend the sign bit when i < 0 */
+#undef SIGNED_RIGHT_SHIFT_ZERO_FILLS
+
+/* The size of 'double', as computed by sizeof. */
+#define SIZEOF_DOUBLE 8
+
+/* The size of 'float', as computed by sizeof. */
+#define SIZEOF_FLOAT 4
+
+/* The size of 'fpos_t', as computed by sizeof. */
+#define SIZEOF_FPOS_T 8
+
+/* The size of 'int', as computed by sizeof. */
+#define SIZEOF_INT 4
+
+/* The size of 'long', as computed by sizeof. */
+#if defined(_MSC_VER) /* Handle Microsoft VC++ compiler specifics. */
+#define SIZEOF_LONG 4
+#else
+#define SIZEOF_LONG 8
+#endif
+
+/* The size of 'long double', as computed by sizeof. */
+#undef SIZEOF_LONG_DOUBLE
+
+/* The size of 'long long', as computed by sizeof. */
+#define SIZEOF_LONG_LONG 8
+
+/* The size of 'off_t', as computed by sizeof. */
+#ifdef UEFI_MSVC_64
+#define SIZEOF_OFF_T 8
+#else
+#define SIZEOF_OFF_T 4
+#endif
+
+
+/* The size of 'pid_t', as computed by sizeof. */
+#define SIZEOF_PID_T 4
+
+/* The size of 'pthread_t', as computed by sizeof. */
+#undef SIZEOF_PTHREAD_T
+
+/* The size of 'short', as computed by sizeof. */
+#define SIZEOF_SHORT 2
+
+/* The size of 'size_t', as computed by sizeof. */
+#ifdef UEFI_MSVC_64
+#define SIZEOF_SIZE_T 8
+#else
+#define SIZEOF_SIZE_T 4
+#endif
+
+/* The size of 'time_t', as computed by sizeof. */
+#define SIZEOF_TIME_T 4
+
+/* The size of 'uintptr_t', as computed by sizeof. */
+#ifdef UEFI_MSVC_64
+#define SIZEOF_UINTPTR_T 8
+#else
+#define SIZEOF_UINTPTR_T 4
+#endif
+
+/* The size of 'void *', as computed by sizeof. */
+#ifdef UEFI_MSVC_64
+#define SIZEOF_VOID_P 8
+#else
+#define SIZEOF_VOID_P 4
+#endif
+
+/* The size of 'wchar_t', as computed by sizeof. */
+#define SIZEOF_WCHAR_T 2
+
+/* The size of '_Bool', as computed by sizeof. */
+#define SIZEOF__BOOL 1
+
+/* Define to 1 if you have the ANSI C header files. */
+#define STDC_HEADERS 1
+
+/* Define if you can safely include both <sys/select.h> and <sys/time.h>
+ (which you can't on SCO ODT 3.0). */
+#undef SYS_SELECT_WITH_SYS_TIME
+
+/* Define if tanh(-0.) is -0., or if platform doesn't have signed zeros */
+#undef TANH_PRESERVES_ZERO_SIGN
+
+/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
+#undef TIME_WITH_SYS_TIME
+
+/* Define to 1 if your <sys/time.h> declares 'struct tm'. */
+#undef TM_IN_SYS_TIME
+
+/* Enable extensions on AIX 3, Interix. */
+#ifndef _ALL_SOURCE
+# undef _ALL_SOURCE
+#endif
+/* Enable GNU extensions on systems that have them. */
+#ifndef _GNU_SOURCE
+# undef _GNU_SOURCE
+#endif
+/* Enable threading extensions on Solaris. */
+#ifndef _POSIX_PTHREAD_SEMANTICS
+# undef _POSIX_PTHREAD_SEMANTICS
+#endif
+/* Enable extensions on HP NonStop. */
+#ifndef _TANDEM_SOURCE
+# undef _TANDEM_SOURCE
+#endif
+/* Enable general extensions on Solaris. */
+#ifndef __EXTENSIONS__
+# undef __EXTENSIONS__
+#endif
+
+
+/* Define if you want to use MacPython modules on MacOSX in unix-Python. */
+#undef USE_TOOLBOX_OBJECT_GLUE
+
+/* Define if a va_list is an array of some kind */
+#undef VA_LIST_IS_ARRAY
+
+/* Define if you want SIGFPE handled (see Include/pyfpe.h). */
+#undef WANT_SIGFPE_HANDLER
+
+/* Define if you want wctype.h functions to be used instead of the one
+ supplied by Python itself. (see Include/unicodectype.h). */
+#define WANT_WCTYPE_FUNCTIONS 1
+
+/* Define if WINDOW in curses.h offers a field _flags. */
+#undef WINDOW_HAS_FLAGS
+
+/* Define if you want documentation strings in extension modules */
+#undef WITH_DOC_STRINGS
+
+/* Define if you want to use the new-style (Openstep, Rhapsody, MacOS) dynamic
+ linker (dyld) instead of the old-style (NextStep) dynamic linker (rld).
+ Dyld is necessary to support frameworks. */
+#undef WITH_DYLD
+
+/* Define to 1 if libintl is needed for locale functions. */
+#undef WITH_LIBINTL
+
+/* Define if you want to produce an OpenStep/Rhapsody framework (shared
+ library plus accessory files). */
+#undef WITH_NEXT_FRAMEWORK
+
+/* Define if you want to compile in Python-specific mallocs */
+#undef WITH_PYMALLOC
+
+/* Define if you want to compile in rudimentary thread support */
+#undef WITH_THREAD
+
+/* Define to profile with the Pentium timestamp counter */
+#undef WITH_TSC
+
+/* Define if you want pymalloc to be disabled when running under valgrind */
+#undef WITH_VALGRIND
+
+/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
+ significant byte first (like Motorola and SPARC, unlike Intel). */
+#if defined AC_APPLE_UNIVERSAL_BUILD
+# if defined __BIG_ENDIAN__
+# define WORDS_BIGENDIAN 1
+# endif
+#else
+# ifndef WORDS_BIGENDIAN
+# undef WORDS_BIGENDIAN
+# endif
+#endif
+
+/* Define if arithmetic is subject to x87-style double rounding issue */
+#undef X87_DOUBLE_ROUNDING
+
+/* Define on OpenBSD to activate all library features */
+#undef _BSD_SOURCE
+
+/* Define on Irix to enable u_int */
+#undef _BSD_TYPES
+
+/* Define on Darwin to activate all library features */
+#undef _DARWIN_C_SOURCE
+
+/* This must be set to 64 on some systems to enable large file support. */
+#undef _FILE_OFFSET_BITS
+
+/* Define on Linux to activate all library features */
+#undef _GNU_SOURCE
+
+/* This must be defined on some systems to enable large file support. */
+#undef _LARGEFILE_SOURCE
+
+/* This must be defined on AIX systems to enable large file support. */
+#undef _LARGE_FILES
+
+/* Define to 1 if on MINIX. */
+#undef _MINIX
+
+/* Define on NetBSD to activate all library features */
+#define _NETBSD_SOURCE 1
+
+/* Define _OSF_SOURCE to get the makedev macro. */
+#undef _OSF_SOURCE
+
+/* Define to 2 if the system does not provide POSIX.1 features except with
+ this defined. */
+#undef _POSIX_1_SOURCE
+
+/* Define to activate features from IEEE Stds 1003.1-2001 */
+#undef _POSIX_C_SOURCE
+
+/* Define to 1 if you need to in order for 'stat' and other things to work. */
+#undef _POSIX_SOURCE
+
+/* Define if you have POSIX threads, and your system does not define that. */
+#undef _POSIX_THREADS
+
+/* Define to force use of thread-safe errno, h_errno, and other functions */
+#undef _REENTRANT
+
+/* Define for Solaris 2.5.1 so the uint32_t typedef from <sys/synch.h>,
+ <pthread.h>, or <semaphore.h> is not used. If the typedef were allowed, the
+ #define below would cause a syntax error. */
+#undef _UINT32_T
+
+/* Define for Solaris 2.5.1 so the uint64_t typedef from <sys/synch.h>,
+ <pthread.h>, or <semaphore.h> is not used. If the typedef were allowed, the
+ #define below would cause a syntax error. */
+#undef _UINT64_T
+
+/* Define to the level of X/Open that your system supports */
+#undef _XOPEN_SOURCE
+
+/* Define to activate Unix95-and-earlier features */
+#undef _XOPEN_SOURCE_EXTENDED
+
+/* Define on FreeBSD to activate all library features */
+#undef __BSD_VISIBLE
+
+/* Define to 1 if type 'char' is unsigned and you are not using gcc. */
+#ifndef __CHAR_UNSIGNED__
+# undef __CHAR_UNSIGNED__
+#endif
+
+/* Defined on Solaris to see additional function prototypes. */
+#undef __EXTENSIONS__
+
+/* Define to 'long' if <time.h> doesn't define. */
+//#undef clock_t
+
+/* Define to empty if 'const' does not conform to ANSI C. */
+//#undef const
+
+/* Define to 'int' if <sys/types.h> doesn't define. */
+//#undef gid_t
+
+/* Define to the type of a signed integer type of width exactly 32 bits if
+ such a type exists and the standard includes do not define it. */
+//#undef int32_t
+
+/* Define to the type of a signed integer type of width exactly 64 bits if
+ such a type exists and the standard includes do not define it. */
+//#undef int64_t
+
+/* Define to 'int' if <sys/types.h> does not define. */
+//#undef mode_t
+
+/* Define to 'long int' if <sys/types.h> does not define. */
+//#undef off_t
+
+/* Define to 'int' if <sys/types.h> does not define. */
+//#undef pid_t
+
+/* Define to empty if the keyword does not work. */
+//#undef signed
+
+/* Define to 'unsigned int' if <sys/types.h> does not define. */
+//#undef size_t
+
+/* Define to 'int' if <sys/socket.h> does not define. */
+//#undef socklen_t
+
+/* Define to 'int' if <sys/types.h> doesn't define. */
+//#undef uid_t
+
+/* Define to the type of an unsigned integer type of width exactly 32 bits if
+ such a type exists and the standard includes do not define it. */
+//#undef uint32_t
+
+/* Define to the type of an unsigned integer type of width exactly 64 bits if
+ such a type exists and the standard includes do not define it. */
+//#undef uint64_t
+
+/* Define to empty if the keyword does not work. */
+//#undef volatile
+
+#endif /*Py_PYCONFIG_H*/
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
new file mode 100644
index 00000000..a463004d
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
@@ -0,0 +1,74 @@
+/* Static DTrace probes interface */
+
+#ifndef Py_DTRACE_H
+#define Py_DTRACE_H
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#ifdef WITH_DTRACE
+
+#include "pydtrace_probes.h"
+
+/* pydtrace_probes.h, on systems with DTrace, is auto-generated to include
+ `PyDTrace_{PROBE}` and `PyDTrace_{PROBE}_ENABLED()` macros for every probe
+ defined in pydtrace_provider.d.
+
+ Calling these functions must be guarded by a `PyDTrace_{PROBE}_ENABLED()`
+ check to minimize performance impact when probing is off. For example:
+
+ if (PyDTrace_FUNCTION_ENTRY_ENABLED())
+ PyDTrace_FUNCTION_ENTRY(f);
+*/
+
+#else
+
+/* Without DTrace, compile to nothing. */
+#ifndef UEFI_C_SOURCE
+static inline void PyDTrace_LINE(const char *arg0, const char *arg1, int arg2) {}
+static inline void PyDTrace_FUNCTION_ENTRY(const char *arg0, const char *arg1, int arg2) {}
+static inline void PyDTrace_FUNCTION_RETURN(const char *arg0, const char *arg1, int arg2) {}
+static inline void PyDTrace_GC_START(int arg0) {}
+static inline void PyDTrace_GC_DONE(int arg0) {}
+static inline void PyDTrace_INSTANCE_NEW_START(int arg0) {}
+static inline void PyDTrace_INSTANCE_NEW_DONE(int arg0) {}
+static inline void PyDTrace_INSTANCE_DELETE_START(int arg0) {}
+static inline void PyDTrace_INSTANCE_DELETE_DONE(int arg0) {}
+
+static inline int PyDTrace_LINE_ENABLED(void) { return 0; }
+static inline int PyDTrace_FUNCTION_ENTRY_ENABLED(void) { return 0; }
+static inline int PyDTrace_FUNCTION_RETURN_ENABLED(void) { return 0; }
+static inline int PyDTrace_GC_START_ENABLED(void) { return 0; }
+static inline int PyDTrace_GC_DONE_ENABLED(void) { return 0; }
+static inline int PyDTrace_INSTANCE_NEW_START_ENABLED(void) { return 0; }
+static inline int PyDTrace_INSTANCE_NEW_DONE_ENABLED(void) { return 0; }
+static inline int PyDTrace_INSTANCE_DELETE_START_ENABLED(void) { return 0; }
+static inline int PyDTrace_INSTANCE_DELETE_DONE_ENABLED(void) { return 0; }
+#else
+static void PyDTrace_LINE(const char *arg0, const char *arg1, int arg2) {}
+static void PyDTrace_FUNCTION_ENTRY(const char *arg0, const char *arg1, int arg2) {}
+static void PyDTrace_FUNCTION_RETURN(const char *arg0, const char *arg1, int arg2) {}
+static void PyDTrace_GC_START(int arg0) {}
+static void PyDTrace_GC_DONE(int arg0) {}
+static void PyDTrace_INSTANCE_NEW_START(int arg0) {}
+static void PyDTrace_INSTANCE_NEW_DONE(int arg0) {}
+static void PyDTrace_INSTANCE_DELETE_START(int arg0) {}
+static void PyDTrace_INSTANCE_DELETE_DONE(int arg0) {}
+
+static int PyDTrace_LINE_ENABLED(void) { return 0; }
+static int PyDTrace_FUNCTION_ENTRY_ENABLED(void) { return 0; }
+static int PyDTrace_FUNCTION_RETURN_ENABLED(void) { return 0; }
+static int PyDTrace_GC_START_ENABLED(void) { return 0; }
+static int PyDTrace_GC_DONE_ENABLED(void) { return 0; }
+static int PyDTrace_INSTANCE_NEW_START_ENABLED(void) { return 0; }
+static int PyDTrace_INSTANCE_NEW_DONE_ENABLED(void) { return 0; }
+static int PyDTrace_INSTANCE_DELETE_START_ENABLED(void) { return 0; }
+static int PyDTrace_INSTANCE_DELETE_DONE_ENABLED(void) { return 0; }
+#endif
+
+#endif /* !WITH_DTRACE */
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* !Py_DTRACE_H */
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
new file mode 100644
index 00000000..ca49c295
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
@@ -0,0 +1,788 @@
+#ifndef Py_PYPORT_H
+#define Py_PYPORT_H
+
+#include "pyconfig.h" /* include for defines */
+
+/* Some versions of HP-UX & Solaris need inttypes.h for int32_t,
+ INT32_MAX, etc. */
+#ifdef UEFI_C_SOURCE
+#ifdef HAVE_INTTYPES_H
+#include <inttypes.h>
+#endif
+
+#ifdef HAVE_STDINT_H
+#include <stdint.h>
+#endif
+#else
+#include <inttypes.h>
+#endif
+
+/**************************************************************************
+Symbols and macros to supply platform-independent interfaces to basic
+C language & library operations whose spellings vary across platforms.
+
+Please try to make documentation here as clear as possible: by definition,
+the stuff here is trying to illuminate C's darkest corners.
+
+Config #defines referenced here:
+
+SIGNED_RIGHT_SHIFT_ZERO_FILLS
+Meaning: To be defined iff i>>j does not extend the sign bit when i is a
+ signed integral type and i < 0.
+Used in: Py_ARITHMETIC_RIGHT_SHIFT
+
+Py_DEBUG
+Meaning: Extra checks compiled in for debug mode.
+Used in: Py_SAFE_DOWNCAST
+
+**************************************************************************/
+
+/* typedefs for some C9X-defined synonyms for integral types.
+ *
+ * The names in Python are exactly the same as the C9X names, except with a
+ * Py_ prefix. Until C9X is universally implemented, this is the only way
+ * to ensure that Python gets reliable names that don't conflict with names
+ * in non-Python code that are playing their own tricks to define the C9X
+ * names.
+ *
+ * NOTE: don't go nuts here! Python has no use for *most* of the C9X
+ * integral synonyms. Only define the ones we actually need.
+ */
+
+/* long long is required. Ensure HAVE_LONG_LONG is defined for compatibility. */
+#ifndef HAVE_LONG_LONG
+#define HAVE_LONG_LONG 1
+#endif
+#ifndef PY_LONG_LONG
+#define PY_LONG_LONG long long
+/* If LLONG_MAX is defined in limits.h, use that. */
+#define PY_LLONG_MIN LLONG_MIN
+#define PY_LLONG_MAX LLONG_MAX
+#define PY_ULLONG_MAX ULLONG_MAX
+#endif
+
+#define PY_UINT32_T uint32_t
+#define PY_UINT64_T uint64_t
+
+/* Signed variants of the above */
+#define PY_INT32_T int32_t
+#define PY_INT64_T int64_t
+
+/* If PYLONG_BITS_IN_DIGIT is not defined then we'll use 30-bit digits if all
+ the necessary integer types are available, and we're on a 64-bit platform
+ (as determined by SIZEOF_VOID_P); otherwise we use 15-bit digits. */
+
+#ifndef PYLONG_BITS_IN_DIGIT
+#if SIZEOF_VOID_P >= 8
+#define PYLONG_BITS_IN_DIGIT 30
+#else
+#define PYLONG_BITS_IN_DIGIT 15
+#endif
+#endif
+
+/* uintptr_t is the C9X name for an unsigned integral type such that a
+ * legitimate void* can be cast to uintptr_t and then back to void* again
+ * without loss of information. Similarly for intptr_t, wrt a signed
+ * integral type.
+ */
+typedef uintptr_t Py_uintptr_t;
+typedef intptr_t Py_intptr_t;
+
+/* Py_ssize_t is a signed integral type such that sizeof(Py_ssize_t) ==
+ * sizeof(size_t). C99 doesn't define such a thing directly (size_t is an
+ * unsigned integral type). See PEP 353 for details.
+ */
+#ifdef HAVE_SSIZE_T
+typedef ssize_t Py_ssize_t;
+#elif SIZEOF_VOID_P == SIZEOF_SIZE_T
+typedef Py_intptr_t Py_ssize_t;
+#else
+# error "Python needs a typedef for Py_ssize_t in pyport.h."
+#endif
+
+/* Py_hash_t is the same size as a pointer. */
+#define SIZEOF_PY_HASH_T SIZEOF_SIZE_T
+typedef Py_ssize_t Py_hash_t;
+/* Py_uhash_t is the unsigned equivalent needed to calculate numeric hash. */
+#define SIZEOF_PY_UHASH_T SIZEOF_SIZE_T
+typedef size_t Py_uhash_t;
+
+/* Only used for compatibility with code that may not be PY_SSIZE_T_CLEAN. */
+#ifdef PY_SSIZE_T_CLEAN
+typedef Py_ssize_t Py_ssize_clean_t;
+#else
+typedef int Py_ssize_clean_t;
+#endif
+
+/* Largest possible value of size_t. */
+#define PY_SIZE_MAX SIZE_MAX
+
+/* Largest positive value of type Py_ssize_t. */
+#define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1))
+/* Smallest negative value of type Py_ssize_t. */
+#define PY_SSIZE_T_MIN (-PY_SSIZE_T_MAX-1)
+
+/* PY_FORMAT_SIZE_T is a platform-specific modifier for use in a printf
+ * format to convert an argument with the width of a size_t or Py_ssize_t.
+ * C99 introduced "z" for this purpose, but not all platforms support that;
+ * e.g., MS compilers use "I" instead.
+ *
+ * These "high level" Python format functions interpret "z" correctly on
+ * all platforms (Python interprets the format string itself, and does whatever
+ * the platform C requires to convert a size_t/Py_ssize_t argument):
+ *
+ * PyBytes_FromFormat
+ * PyErr_Format
+ * PyBytes_FromFormatV
+ * PyUnicode_FromFormatV
+ *
+ * Lower-level uses require that you interpolate the correct format modifier
+ * yourself (e.g., calling printf, fprintf, sprintf, PyOS_snprintf); for
+ * example,
+ *
+ * Py_ssize_t index;
+ * fprintf(stderr, "index %" PY_FORMAT_SIZE_T "d sucks\n", index);
+ *
+ * That will expand to %ld, or %Id, or to something else correct for a
+ * Py_ssize_t on the platform.
+ */
+#ifndef PY_FORMAT_SIZE_T
+# if SIZEOF_SIZE_T == SIZEOF_INT && !defined(__APPLE__)
+# define PY_FORMAT_SIZE_T ""
+# elif SIZEOF_SIZE_T == SIZEOF_LONG
+# define PY_FORMAT_SIZE_T "l"
+# elif defined(MS_WINDOWS)
+# define PY_FORMAT_SIZE_T "I"
+# else
+# error "This platform's pyconfig.h needs to define PY_FORMAT_SIZE_T"
+# endif
+#endif
+
+/* Py_LOCAL can be used instead of static to get the fastest possible calling
+ * convention for functions that are local to a given module.
+ *
+ * Py_LOCAL_INLINE does the same thing, and also explicitly requests inlining,
+ * for platforms that support that.
+ *
+ * If PY_LOCAL_AGGRESSIVE is defined before python.h is included, more
+ * "aggressive" inlining/optimization is enabled for the entire module. This
+ * may lead to code bloat, and may slow things down for those reasons. It may
+ * also lead to errors, if the code relies on pointer aliasing. Use with
+ * care.
+ *
+ * NOTE: You can only use this for functions that are entirely local to a
+ * module; functions that are exported via method tables, callbacks, etc,
+ * should keep using static.
+ */
+
+#if defined(_MSC_VER)
+#if defined(PY_LOCAL_AGGRESSIVE)
+/* enable more aggressive optimization for visual studio */
+#ifdef UEFI_C_SOURCE
+#pragma optimize("gt", on)
+#else
+#pragma optimize("agtw", on)
+#endif
+#endif
+/* ignore warnings if the compiler decides not to inline a function */
+#pragma warning(disable: 4710)
+/* fastest possible local call under MSVC */
+#define Py_LOCAL(type) static type __fastcall
+#define Py_LOCAL_INLINE(type) static __inline type __fastcall
+#elif defined(USE_INLINE)
+#define Py_LOCAL(type) static type
+#define Py_LOCAL_INLINE(type) static inline type
+#else
+#define Py_LOCAL(type) static type
+#define Py_LOCAL_INLINE(type) static type
+#endif
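+/* Illustrative usage (not part of the upstream header): a module-local
+   helper written as
+       Py_LOCAL_INLINE(int) add_one(int x) { return x + 1; }
+   expands to a plain 'static' definition (with __inline/__fastcall added
+   under MSVC), so it must not be exported via method tables or callbacks. */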
+
+/* Py_MEMCPY is kept for backwards compatibility,
+ * see https://bugs.python.org/issue28126 */
+#define Py_MEMCPY memcpy
+
+#include <stdlib.h>
+
+#ifdef HAVE_IEEEFP_H
+#include <ieeefp.h> /* needed for 'finite' declaration on some platforms */
+#endif
+
+#include <math.h> /* Moved here from the math section, before extern "C" */
+
+/********************************************
+ * WRAPPER FOR <time.h> and/or <sys/time.h> *
+ ********************************************/
+
+#ifdef TIME_WITH_SYS_TIME
+#include <sys/time.h>
+#include <time.h>
+#else /* !TIME_WITH_SYS_TIME */
+#ifdef HAVE_SYS_TIME_H
+#include <sys/time.h>
+#else /* !HAVE_SYS_TIME_H */
+#include <time.h>
+#endif /* !HAVE_SYS_TIME_H */
+#endif /* !TIME_WITH_SYS_TIME */
+
+
+/******************************
+ * WRAPPER FOR <sys/select.h> *
+ ******************************/
+
+/* NB caller must include <sys/types.h> */
+
+#ifdef HAVE_SYS_SELECT_H
+#include <sys/select.h>
+#endif /* !HAVE_SYS_SELECT_H */
+
+/*******************************
+ * stat() and fstat() fiddling *
+ *******************************/
+
+#ifdef HAVE_SYS_STAT_H
+#include <sys/stat.h>
+#elif defined(HAVE_STAT_H)
+#include <stat.h>
+#endif
+
+#ifndef S_IFMT
+/* VisualAge C/C++ Failed to Define MountType Field in sys/stat.h */
+#define S_IFMT 0170000
+#endif
+
+#ifndef S_IFLNK
+/* Windows doesn't define S_IFLNK but posixmodule.c maps
+ * IO_REPARSE_TAG_SYMLINK to S_IFLNK */
+# define S_IFLNK 0120000
+#endif
+
+#ifndef S_ISREG
+#define S_ISREG(x) (((x) & S_IFMT) == S_IFREG)
+#endif
+
+#ifndef S_ISDIR
+#define S_ISDIR(x) (((x) & S_IFMT) == S_IFDIR)
+#endif
+
+#ifndef S_ISCHR
+#define S_ISCHR(x) (((x) & S_IFMT) == S_IFCHR)
+#endif
+
+#ifdef __cplusplus
+/* Move this down here since some C++ #include's don't like to be included
+ inside an extern "C" */
+extern "C" {
+#endif
+
+
+/* Py_ARITHMETIC_RIGHT_SHIFT
+ * C doesn't define whether a right-shift of a signed integer sign-extends
+ * or zero-fills. Here a macro to force sign extension:
+ * Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J)
+ * Return I >> J, forcing sign extension. Arithmetically, return the
+ * floor of I/2**J.
+ * Requirements:
+ * I should have signed integer type. In the terminology of C99, this can
+ * be either one of the five standard signed integer types (signed char,
+ * short, int, long, long long) or an extended signed integer type.
+ * J is an integer >= 0 and strictly less than the number of bits in the
+ * type of I (because C doesn't define what happens for J outside that
+ * range either).
+ * TYPE used to specify the type of I, but is now ignored. It's been left
+ * in for backwards compatibility with versions <= 2.6 or 3.0.
+ * Caution:
+ * I may be evaluated more than once.
+ */
+#ifdef SIGNED_RIGHT_SHIFT_ZERO_FILLS
+#define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) \
+ ((I) < 0 ? -1-((-1-(I)) >> (J)) : (I) >> (J))
+#else
+#define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) ((I) >> (J))
+#endif
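+/* Illustrative example (not part of the upstream header): even on a platform
+   where signed right shifts zero-fill, Py_ARITHMETIC_RIGHT_SHIFT(int, -5, 1)
+   evaluates to -3 (the floor of -5/2), because the macro rewrites the shift
+   as -1-((-1-(-5)) >> 1) = -1-(4 >> 1) = -3. */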
+
+/* Py_FORCE_EXPANSION(X)
+ * "Simply" returns its argument. However, macro expansions within the
+ * argument are evaluated. This unfortunate trickery is needed to get
+ * token-pasting to work as desired in some cases.
+ */
+#define Py_FORCE_EXPANSION(X) X
+
+/* Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW)
+ * Cast VALUE to type NARROW from type WIDE. In Py_DEBUG mode, this
+ * assert-fails if any information is lost.
+ * Caution:
+ * VALUE may be evaluated more than once.
+ */
+#ifdef Py_DEBUG
+#define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) \
+ (assert((WIDE)(NARROW)(VALUE) == (VALUE)), (NARROW)(VALUE))
+#else
+#define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) (NARROW)(VALUE)
+#endif
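+/* Illustrative example (not part of the upstream header):
+       int n = Py_SAFE_DOWNCAST(len, Py_ssize_t, int);
+   narrows a Py_ssize_t to int; in Py_DEBUG builds the assert fires if
+   'len' does not survive the round trip through int. */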
+
+/* Py_SET_ERRNO_ON_MATH_ERROR(x)
+ * If a libm function did not set errno, but it looks like the result
+ * overflowed or not-a-number, set errno to ERANGE or EDOM. Set errno
+ * to 0 before calling a libm function, and invoke this macro after,
+ * passing the function result.
+ * Caution:
+ * This isn't reliable. See Py_OVERFLOWED comments.
+ * X is evaluated more than once.
+ */
+#if defined(__FreeBSD__) || defined(__OpenBSD__) || (defined(__hpux) && defined(__ia64))
+#define _Py_SET_EDOM_FOR_NAN(X) if (isnan(X)) errno = EDOM;
+#else
+#define _Py_SET_EDOM_FOR_NAN(X) ;
+#endif
+#define Py_SET_ERRNO_ON_MATH_ERROR(X) \
+ do { \
+ if (errno == 0) { \
+ if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
+ errno = ERANGE; \
+ else _Py_SET_EDOM_FOR_NAN(X) \
+ } \
+ } while(0)
+
+/* Py_SET_ERANGE_ON_OVERFLOW(x)
+ * An alias of Py_SET_ERRNO_ON_MATH_ERROR for backward-compatibility.
+ */
+#define Py_SET_ERANGE_IF_OVERFLOW(X) Py_SET_ERRNO_ON_MATH_ERROR(X)
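+/* Illustrative usage (not part of the upstream header), following the
+   protocol described above:
+       errno = 0;
+       r = exp(x);
+       Py_SET_ERRNO_ON_MATH_ERROR(r);
+   leaves errno set to ERANGE if the result is +/-Py_HUGE_VAL, or to EDOM for
+   NaN results on the platforms covered by _Py_SET_EDOM_FOR_NAN. */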
+
+/* Py_ADJUST_ERANGE1(x)
+ * Py_ADJUST_ERANGE2(x, y)
+ * Set errno to 0 before calling a libm function, and invoke one of these
+ * macros after, passing the function result(s) (Py_ADJUST_ERANGE2 is useful
+ * for functions returning complex results). This makes two kinds of
+ * adjustments to errno: (A) If it looks like the platform libm set
+ * errno=ERANGE due to underflow, clear errno. (B) If it looks like the
+ * platform libm overflowed but didn't set errno, force errno to ERANGE. In
+ * effect, we're trying to force a useful implementation of C89 errno
+ * behavior.
+ * Caution:
+ * This isn't reliable. See Py_OVERFLOWED comments.
+ * X and Y may be evaluated more than once.
+ */
+#define Py_ADJUST_ERANGE1(X) \
+ do { \
+ if (errno == 0) { \
+ if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
+ errno = ERANGE; \
+ } \
+ else if (errno == ERANGE && (X) == 0.0) \
+ errno = 0; \
+ } while(0)
+
+#define Py_ADJUST_ERANGE2(X, Y) \
+ do { \
+ if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL || \
+ (Y) == Py_HUGE_VAL || (Y) == -Py_HUGE_VAL) { \
+ if (errno == 0) \
+ errno = ERANGE; \
+ } \
+ else if (errno == ERANGE) \
+ errno = 0; \
+ } while(0)
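+
+/* Illustrative usage (not part of the upstream header):
+       errno = 0;
+       r = sinh(x);
+       Py_ADJUST_ERANGE1(r);
+   forces errno to ERANGE on a silent overflow (rule B) and clears a spurious
+   ERANGE left by an underflow to 0.0 (rule A). */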
+
+/* The functions _Py_dg_strtod and _Py_dg_dtoa in Python/dtoa.c (which are
+ * required to support the short float repr introduced in Python 3.1) require
+ * that the floating-point unit that's being used for arithmetic operations
+ * on C doubles is set to use 53-bit precision. It also requires that the
+ * FPU rounding mode is round-half-to-even, but that's less often an issue.
+ *
+ * If your FPU isn't already set to 53-bit precision/round-half-to-even, and
+ * you want to make use of _Py_dg_strtod and _Py_dg_dtoa, then you should
+ *
+ * #define HAVE_PY_SET_53BIT_PRECISION 1
+ *
+ * and also give appropriate definitions for the following three macros:
+ *
+ * _PY_SET_53BIT_PRECISION_START : store original FPU settings, and
+ * set FPU to 53-bit precision/round-half-to-even
+ * _PY_SET_53BIT_PRECISION_END : restore original FPU settings
+ * _PY_SET_53BIT_PRECISION_HEADER : any variable declarations needed to
+ * use the two macros above.
+ *
+ * The macros are designed to be used within a single C function: see
+ * Python/pystrtod.c for an example of their use.
+ */
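+/* A minimal usage sketch (not part of the upstream header; see
+   Python/pystrtod.c for the real callers):
+       double d;
+       _Py_SET_53BIT_PRECISION_HEADER;
+       _Py_SET_53BIT_PRECISION_START;
+       d = _Py_dg_strtod(s, &end);
+       _Py_SET_53BIT_PRECISION_END;
+*/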
+
+/* get and set x87 control word for gcc/x86 */
+#ifdef HAVE_GCC_ASM_FOR_X87
+#define HAVE_PY_SET_53BIT_PRECISION 1
+/* _Py_get/set_387controlword functions are defined in Python/pymath.c */
+#define _Py_SET_53BIT_PRECISION_HEADER \
+ unsigned short old_387controlword, new_387controlword
+#define _Py_SET_53BIT_PRECISION_START \
+ do { \
+ old_387controlword = _Py_get_387controlword(); \
+ new_387controlword = (old_387controlword & ~0x0f00) | 0x0200; \
+ if (new_387controlword != old_387controlword) \
+ _Py_set_387controlword(new_387controlword); \
+ } while (0)
+#define _Py_SET_53BIT_PRECISION_END \
+ if (new_387controlword != old_387controlword) \
+ _Py_set_387controlword(old_387controlword)
+#endif
+
+/* get and set x87 control word for VisualStudio/x86 */
+#if defined(_MSC_VER) && !defined(_WIN64) && !defined(UEFI_C_SOURCE)/* x87 not supported in 64-bit */
+#define HAVE_PY_SET_53BIT_PRECISION 1
+#define _Py_SET_53BIT_PRECISION_HEADER \
+ unsigned int old_387controlword, new_387controlword, out_387controlword
+/* We use the __control87_2 function to set only the x87 control word.
+ The SSE control word is unaffected. */
+#define _Py_SET_53BIT_PRECISION_START \
+ do { \
+ __control87_2(0, 0, &old_387controlword, NULL); \
+ new_387controlword = \
+ (old_387controlword & ~(_MCW_PC | _MCW_RC)) | (_PC_53 | _RC_NEAR); \
+ if (new_387controlword != old_387controlword) \
+ __control87_2(new_387controlword, _MCW_PC | _MCW_RC, \
+ &out_387controlword, NULL); \
+ } while (0)
+#define _Py_SET_53BIT_PRECISION_END \
+ do { \
+ if (new_387controlword != old_387controlword) \
+ __control87_2(old_387controlword, _MCW_PC | _MCW_RC, \
+ &out_387controlword, NULL); \
+ } while (0)
+#endif
+
+#ifdef HAVE_GCC_ASM_FOR_MC68881
+#define HAVE_PY_SET_53BIT_PRECISION 1
+#define _Py_SET_53BIT_PRECISION_HEADER \
+ unsigned int old_fpcr, new_fpcr
+#define _Py_SET_53BIT_PRECISION_START \
+ do { \
+ __asm__ ("fmove.l %%fpcr,%0" : "=g" (old_fpcr)); \
+ /* Set double precision / round to nearest. */ \
+ new_fpcr = (old_fpcr & ~0xf0) | 0x80; \
+ if (new_fpcr != old_fpcr) \
+ __asm__ volatile ("fmove.l %0,%%fpcr" : : "g" (new_fpcr)); \
+ } while (0)
+#define _Py_SET_53BIT_PRECISION_END \
+ do { \
+ if (new_fpcr != old_fpcr) \
+ __asm__ volatile ("fmove.l %0,%%fpcr" : : "g" (old_fpcr)); \
+ } while (0)
+#endif
+
+/* default definitions are empty */
+#ifndef HAVE_PY_SET_53BIT_PRECISION
+#define _Py_SET_53BIT_PRECISION_HEADER
+#define _Py_SET_53BIT_PRECISION_START
+#define _Py_SET_53BIT_PRECISION_END
+#endif
+
+/* If we can't guarantee 53-bit precision, don't use the code
+ in Python/dtoa.c, but fall back to standard code. This
+ means that repr of a float will be long (17 sig digits).
+
+ Realistically, there are two things that could go wrong:
+
+ (1) doubles aren't IEEE 754 doubles, or
+ (2) we're on x86 with the rounding precision set to 64-bits
+ (extended precision), and we don't know how to change
+ the rounding precision.
+ */
+
+#if !defined(DOUBLE_IS_LITTLE_ENDIAN_IEEE754) && \
+ !defined(DOUBLE_IS_BIG_ENDIAN_IEEE754) && \
+ !defined(DOUBLE_IS_ARM_MIXED_ENDIAN_IEEE754)
+#define PY_NO_SHORT_FLOAT_REPR
+#endif
+
+/* double rounding is symptomatic of use of extended precision on x86. If
+ we're seeing double rounding, and we don't have any mechanism available for
+ changing the FPU rounding precision, then don't use Python/dtoa.c. */
+#if defined(X87_DOUBLE_ROUNDING) && !defined(HAVE_PY_SET_53BIT_PRECISION)
+#define PY_NO_SHORT_FLOAT_REPR
+#endif
+
+
+/* Py_DEPRECATED(version)
+ * Declare a variable, type, or function deprecated.
+ * Usage:
+ * extern int old_var Py_DEPRECATED(2.3);
+ * typedef int T1 Py_DEPRECATED(2.4);
+ * extern int x() Py_DEPRECATED(2.5);
+ */
+#if defined(__GNUC__) && ((__GNUC__ >= 4) || \
+ (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
+#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
+#else
+#define Py_DEPRECATED(VERSION_UNUSED)
+#endif
+
+/**************************************************************************
+Prototypes that are missing from the standard include files on some systems
+(and possibly only some versions of such systems.)
+
+Please be conservative with adding new ones, document them and enclose them
+in platform-specific #ifdefs.
+**************************************************************************/
+
+#ifdef SOLARIS
+/* Unchecked */
+extern int gethostname(char *, int);
+#endif
+
+#ifdef HAVE__GETPTY
+#include <sys/types.h> /* we need to import mode_t */
+extern char * _getpty(int *, int, mode_t, int);
+#endif
+
+/* On QNX 6, struct termio must be declared by including sys/termio.h
+ if TCGETA, TCSETA, TCSETAW, or TCSETAF are used. sys/termio.h must
+ be included before termios.h or it will generate an error. */
+#if defined(HAVE_SYS_TERMIO_H) && !defined(__hpux)
+#include <sys/termio.h>
+#endif
+
+#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY)
+#if !defined(HAVE_PTY_H) && !defined(HAVE_LIBUTIL_H)
+/* BSDI does not supply a prototype for the 'openpty' and 'forkpty'
+ functions, even though they are included in libutil. */
+#include <termios.h>
+extern int openpty(int *, int *, char *, struct termios *, struct winsize *);
+extern pid_t forkpty(int *, char *, struct termios *, struct winsize *);
+#endif /* !defined(HAVE_PTY_H) && !defined(HAVE_LIBUTIL_H) */
+#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) */
+
+
+/* On 4.4BSD-descendants, ctype functions serve the whole range of the
+ * wchar_t character set rather than single-byte code points only.
+ * This characteristic can break some string object operations,
+ * including str.upper() and str.split(), on UTF-8 locales. This
+ * workaround was provided by Tim Robbins of the FreeBSD project.
+ */
+
+#ifdef __FreeBSD__
+#include <osreldate.h>
+#if (__FreeBSD_version >= 500040 && __FreeBSD_version < 602113) || \
+ (__FreeBSD_version >= 700000 && __FreeBSD_version < 700054) || \
+ (__FreeBSD_version >= 800000 && __FreeBSD_version < 800001)
+# define _PY_PORT_CTYPE_UTF8_ISSUE
+#endif
+#endif
+
+
+#if defined(__APPLE__)
+# define _PY_PORT_CTYPE_UTF8_ISSUE
+#endif
+
+#ifdef _PY_PORT_CTYPE_UTF8_ISSUE
+#ifndef __cplusplus
+ /* The workaround below is unsafe in C++ because
+ * the <locale> defines these symbols as real functions,
+ * with a slightly different signature.
+ * See issue #10910
+ */
+#include <ctype.h>
+#include <wctype.h>
+#undef isalnum
+#define isalnum(c) iswalnum(btowc(c))
+#undef isalpha
+#define isalpha(c) iswalpha(btowc(c))
+#undef islower
+#define islower(c) iswlower(btowc(c))
+#undef isspace
+#define isspace(c) iswspace(btowc(c))
+#undef isupper
+#define isupper(c) iswupper(btowc(c))
+#undef tolower
+#define tolower(c) towlower(btowc(c))
+#undef toupper
+#define toupper(c) towupper(btowc(c))
+#endif
+#endif
+
+
+/* Declarations for symbol visibility.
+
+ PyAPI_FUNC(type): Declares a public Python API function and return type
+ PyAPI_DATA(type): Declares public Python data and its type
+ PyMODINIT_FUNC: A Python module init function. If these functions are
+ inside the Python core, they are private to the core.
+ If in an extension module, it may be declared with
+ external linkage depending on the platform.
+
+ As a number of platforms support/require "__declspec(dllimport/dllexport)",
+ we support a HAVE_DECLSPEC_DLL macro to save duplication.
+*/
+
+/*
+ All windows ports, except cygwin, are handled in PC/pyconfig.h.
+
+ Cygwin is the only other autoconf platform requiring special
+ linkage handling and it uses __declspec().
+*/
+#if defined(__CYGWIN__)
+# define HAVE_DECLSPEC_DLL
+#endif
+
+/* only get special linkage if built as shared or platform is Cygwin */
+#if defined(Py_ENABLE_SHARED) || defined(__CYGWIN__)
+# if defined(HAVE_DECLSPEC_DLL)
+# ifdef Py_BUILD_CORE
+# define PyAPI_FUNC(RTYPE) __declspec(dllexport) RTYPE
+# define PyAPI_DATA(RTYPE) extern __declspec(dllexport) RTYPE
+ /* module init functions inside the core need no external linkage */
+ /* except for Cygwin to handle embedding */
+# if defined(__CYGWIN__)
+# define PyMODINIT_FUNC __declspec(dllexport) PyObject*
+# else /* __CYGWIN__ */
+# define PyMODINIT_FUNC PyObject*
+# endif /* __CYGWIN__ */
+# else /* Py_BUILD_CORE */
+ /* Building an extension module, or an embedded situation */
+ /* public Python functions and data are imported */
+ /* Under Cygwin, auto-import functions to prevent compilation */
+ /* failures similar to those described at the bottom of 4.1: */
+ /* http://docs.python.org/extending/windows.html#a-cookbook-approach */
+# if !defined(__CYGWIN__)
+# define PyAPI_FUNC(RTYPE) __declspec(dllimport) RTYPE
+# endif /* !__CYGWIN__ */
+# define PyAPI_DATA(RTYPE) extern __declspec(dllimport) RTYPE
+ /* module init functions outside the core must be exported */
+# if defined(__cplusplus)
+# define PyMODINIT_FUNC extern "C" __declspec(dllexport) PyObject*
+# else /* __cplusplus */
+# define PyMODINIT_FUNC __declspec(dllexport) PyObject*
+# endif /* __cplusplus */
+# endif /* Py_BUILD_CORE */
+# endif /* HAVE_DECLSPEC */
+#endif /* Py_ENABLE_SHARED */
+
+/* If no external linkage macros defined by now, create defaults */
+#ifndef PyAPI_FUNC
+# define PyAPI_FUNC(RTYPE) RTYPE
+#endif
+#ifndef PyAPI_DATA
+# define PyAPI_DATA(RTYPE) extern RTYPE
+#endif
+#ifndef PyMODINIT_FUNC
+# if defined(__cplusplus)
+# define PyMODINIT_FUNC extern "C" PyObject*
+# else /* __cplusplus */
+# define PyMODINIT_FUNC PyObject*
+# endif /* __cplusplus */
+#endif
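+/* Illustrative declarations (not part of the upstream header):
+       PyAPI_FUNC(PyObject *) PyLong_FromLong(long);
+       PyAPI_DATA(PyObject) _Py_NoneStruct;
+       PyMODINIT_FUNC PyInit_example(void);
+   pick up dllexport/dllimport or plain extern linkage as configured above. */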
+
+/* limits.h constants that may be missing */
+
+#ifndef INT_MAX
+#define INT_MAX 2147483647
+#endif
+
+#ifndef LONG_MAX
+#if SIZEOF_LONG == 4
+#define LONG_MAX 0X7FFFFFFFL
+#elif SIZEOF_LONG == 8
+#define LONG_MAX 0X7FFFFFFFFFFFFFFFL
+#else
+#error "could not set LONG_MAX in pyport.h"
+#endif
+#endif
+
+#ifndef LONG_MIN
+#define LONG_MIN (-LONG_MAX-1)
+#endif
+
+#ifndef LONG_BIT
+#define LONG_BIT (8 * SIZEOF_LONG)
+#endif
+
+#if LONG_BIT != 8 * SIZEOF_LONG
+/* 04-Oct-2000 LONG_BIT is apparently (mis)defined as 64 on some recent
+ * 32-bit platforms using gcc. We try to catch that here at compile-time
+ * rather than waiting for integer multiplication to trigger bogus
+ * overflows.
+ */
+#error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+/*
+ * Hide GCC attributes from compilers that don't support them.
+ */
+#if (!defined(__GNUC__) || __GNUC__ < 2 || \
+ (__GNUC__ == 2 && __GNUC_MINOR__ < 7) )
+#define Py_GCC_ATTRIBUTE(x)
+#else
+#define Py_GCC_ATTRIBUTE(x) __attribute__(x)
+#endif
+
+/*
+ * Specify alignment on compilers that support it.
+ */
+#if defined(__GNUC__) && __GNUC__ >= 3
+#define Py_ALIGNED(x) __attribute__((aligned(x)))
+#else
+#define Py_ALIGNED(x)
+#endif
+
+/* Eliminate end-of-loop code not reached warnings from SunPro C
+ * when using do{...}while(0) macros
+ */
+#ifdef __SUNPRO_C
+#pragma error_messages (off,E_END_OF_LOOP_CODE_NOT_REACHED)
+#endif
+
+#ifndef Py_LL
+#define Py_LL(x) x##LL
+#endif
+
+#ifndef Py_ULL
+#define Py_ULL(x) Py_LL(x##U)
+#endif
+
+#define Py_VA_COPY va_copy
+
+/*
+ * Convenient macros to deal with endianness of the platform. WORDS_BIGENDIAN is
+ * detected by configure and defined in pyconfig.h. The code in pyconfig.h
+ * also takes care of Apple's universal builds.
+ */
+
+#ifdef WORDS_BIGENDIAN
+#define PY_BIG_ENDIAN 1
+#define PY_LITTLE_ENDIAN 0
+#else
+#define PY_BIG_ENDIAN 0
+#define PY_LITTLE_ENDIAN 1
+#endif
+
+#ifdef Py_BUILD_CORE
+/*
+ * Macros to protect CRT calls against instant termination when passed an
+ * invalid parameter (issue23524).
+ */
+#if defined _MSC_VER && _MSC_VER >= 1900 && !defined(UEFI_C_SOURCE)
+
+extern _invalid_parameter_handler _Py_silent_invalid_parameter_handler;
+#define _Py_BEGIN_SUPPRESS_IPH { _invalid_parameter_handler _Py_old_handler = \
+ _set_thread_local_invalid_parameter_handler(_Py_silent_invalid_parameter_handler);
+#define _Py_END_SUPPRESS_IPH _set_thread_local_invalid_parameter_handler(_Py_old_handler); }
+
+#else
+
+#define _Py_BEGIN_SUPPRESS_IPH
+#define _Py_END_SUPPRESS_IPH
+
+#endif /* _MSC_VER >= 1900 */
+#endif /* Py_BUILD_CORE */
+
+#ifdef UEFI_C_SOURCE
+#define _Py_BEGIN_SUPPRESS_IPH
+#define _Py_END_SUPPRESS_IPH
+#endif
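+/* Illustrative usage (not part of the upstream header): CRT calls that may
+   receive an invalid descriptor are wrapped as
+       _Py_BEGIN_SUPPRESS_IPH
+       res = close(fd);
+       _Py_END_SUPPRESS_IPH
+   so a bad parameter is reported as an error instead of terminating the
+   process; on UEFI and older MSVC the wrappers expand to nothing. */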
+
+#ifdef __ANDROID__
+#include <android/api-level.h>
+#endif
+
+#endif /* Py_PYPORT_H */
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
new file mode 100644
index 00000000..07fb8bda
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
@@ -0,0 +1,549 @@
+# Create and manipulate C data types in Python
+#
+# Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
+# This program and the accompanying materials are licensed and made available under
+# the terms and conditions of the BSD License that accompanies this distribution.
+# The full text of the license may be found at
+# http://opensource.org/licenses/bsd-license.
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+import os as _os, sys as _sys
+
+__version__ = "1.1.0"
+
+from _ctypes import Union, Structure, Array
+from _ctypes import _Pointer
+from _ctypes import CFuncPtr as _CFuncPtr
+from _ctypes import __version__ as _ctypes_version
+from _ctypes import RTLD_LOCAL, RTLD_GLOBAL
+from _ctypes import ArgumentError
+
+from struct import calcsize as _calcsize
+
+if __version__ != _ctypes_version:
+ raise Exception("Version number mismatch", __version__, _ctypes_version)
+
+if _os.name == "nt":
+ from _ctypes import FormatError
+
+DEFAULT_MODE = RTLD_LOCAL
+if _os.name == "posix" and _sys.platform == "darwin":
+ # On OS X 10.3, we use RTLD_GLOBAL as default mode
+ # because RTLD_LOCAL does not work at least on some
+ # libraries. OS X 10.3 is Darwin 7, so we check for
+ # that.
+
+ if int(_os.uname().release.split('.')[0]) < 8:
+ DEFAULT_MODE = RTLD_GLOBAL
+
+from _ctypes import FUNCFLAG_CDECL as _FUNCFLAG_CDECL, \
+ FUNCFLAG_PYTHONAPI as _FUNCFLAG_PYTHONAPI, \
+ FUNCFLAG_USE_ERRNO as _FUNCFLAG_USE_ERRNO, \
+ FUNCFLAG_USE_LASTERROR as _FUNCFLAG_USE_LASTERROR
+
+# WINOLEAPI -> HRESULT
+# WINOLEAPI_(type)
+#
+# STDMETHODCALLTYPE
+#
+# STDMETHOD(name)
+# STDMETHOD_(type, name)
+#
+# STDAPICALLTYPE
+
+def create_string_buffer(init, size=None):
+ """create_string_buffer(aBytes) -> character array
+ create_string_buffer(anInteger) -> character array
+ create_string_buffer(aBytes, anInteger) -> character array
+ """
+ if isinstance(init, bytes):
+ if size is None:
+ size = len(init)+1
+ buftype = c_char * size
+ buf = buftype()
+ buf.value = init
+ return buf
+ elif isinstance(init, int):
+ buftype = c_char * init
+ buf = buftype()
+ return buf
+ raise TypeError(init)
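+
+# Illustrative usage (not part of the upstream module):
+#   create_string_buffer(b"hello")   # 6-byte c_char array, NUL-terminated
+#   create_string_buffer(16)         # zero-initialized 16-byte c_char array
+#   create_string_buffer(b"hi", 8)   # 8-byte array holding b"hi" with trailing NULs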
+
+def c_buffer(init, size=None):
+## "deprecated, use create_string_buffer instead"
+## import warnings
+## warnings.warn("c_buffer is deprecated, use create_string_buffer instead",
+## DeprecationWarning, stacklevel=2)
+ return create_string_buffer(init, size)
+
+_c_functype_cache = {}
+def CFUNCTYPE(restype, *argtypes, **kw):
+ """CFUNCTYPE(restype, *argtypes,
+ use_errno=False, use_last_error=False) -> function prototype.
+
+ restype: the result type
+ argtypes: a sequence specifying the argument types
+
+ The function prototype can be called in different ways to create a
+ callable object:
+
+ prototype(integer address) -> foreign function
+ prototype(callable) -> create and return a C callable function from callable
+ prototype(integer index, method name[, paramflags]) -> foreign function calling a COM method
+ prototype((ordinal number, dll object)[, paramflags]) -> foreign function exported by ordinal
+ prototype((function name, dll object)[, paramflags]) -> foreign function exported by name
+ """
+ flags = _FUNCFLAG_CDECL
+ if kw.pop("use_errno", False):
+ flags |= _FUNCFLAG_USE_ERRNO
+ if kw.pop("use_last_error", False):
+ flags |= _FUNCFLAG_USE_LASTERROR
+ if kw:
+ raise ValueError("unexpected keyword argument(s) %s" % kw.keys())
+ try:
+ return _c_functype_cache[(restype, argtypes, flags)]
+ except KeyError:
+ class CFunctionType(_CFuncPtr):
+ _argtypes_ = argtypes
+ _restype_ = restype
+ _flags_ = flags
+ _c_functype_cache[(restype, argtypes, flags)] = CFunctionType
+ return CFunctionType
+
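+# Illustrative usage (not part of the upstream module): building a C-callable
+# qsort-style comparison callback.
+#   CMPFUNC = CFUNCTYPE(c_int, POINTER(c_int), POINTER(c_int))
+#   cmp_cb = CMPFUNC(lambda a, b: a[0] - b[0])
+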
+if _os.name == "nt":
+ from _ctypes import LoadLibrary as _dlopen
+ from _ctypes import FUNCFLAG_STDCALL as _FUNCFLAG_STDCALL
+
+ _win_functype_cache = {}
+ def WINFUNCTYPE(restype, *argtypes, **kw):
+ # docstring set later (very similar to CFUNCTYPE.__doc__)
+ flags = _FUNCFLAG_STDCALL
+ if kw.pop("use_errno", False):
+ flags |= _FUNCFLAG_USE_ERRNO
+ if kw.pop("use_last_error", False):
+ flags |= _FUNCFLAG_USE_LASTERROR
+ if kw:
+ raise ValueError("unexpected keyword argument(s) %s" % kw.keys())
+ try:
+ return _win_functype_cache[(restype, argtypes, flags)]
+ except KeyError:
+ class WinFunctionType(_CFuncPtr):
+ _argtypes_ = argtypes
+ _restype_ = restype
+ _flags_ = flags
+ _win_functype_cache[(restype, argtypes, flags)] = WinFunctionType
+ return WinFunctionType
+ if WINFUNCTYPE.__doc__:
+ WINFUNCTYPE.__doc__ = CFUNCTYPE.__doc__.replace("CFUNCTYPE", "WINFUNCTYPE")
+
+elif _os.name == "posix":
+ from _ctypes import dlopen as _dlopen
+
+from _ctypes import sizeof, byref, addressof, alignment, resize
+from _ctypes import get_errno, set_errno
+from _ctypes import _SimpleCData
+
+def _check_size(typ, typecode=None):
+    # Check sizeof(ctypes_type) against struct.calcsize. This
+ # should protect somewhat against a misconfigured libffi.
+ from struct import calcsize
+ if typecode is None:
+ # Most _type_ codes are the same as used in struct
+ typecode = typ._type_
+ actual, required = sizeof(typ), calcsize(typecode)
+ if actual != required:
+ raise SystemError("sizeof(%s) wrong: %d instead of %d" % \
+ (typ, actual, required))
+
+class py_object(_SimpleCData):
+ _type_ = "O"
+ def __repr__(self):
+ try:
+ return super().__repr__()
+ except ValueError:
+ return "%s(<NULL>)" % type(self).__name__
+_check_size(py_object, "P")
+
+class c_short(_SimpleCData):
+ _type_ = "h"
+_check_size(c_short)
+
+class c_ushort(_SimpleCData):
+ _type_ = "H"
+_check_size(c_ushort)
+
+class c_long(_SimpleCData):
+ _type_ = "l"
+_check_size(c_long)
+
+class c_ulong(_SimpleCData):
+ _type_ = "L"
+_check_size(c_ulong)
+
+if _calcsize("i") == _calcsize("l"):
+ # if int and long have the same size, make c_int an alias for c_long
+ c_int = c_long
+ c_uint = c_ulong
+else:
+ class c_int(_SimpleCData):
+ _type_ = "i"
+ _check_size(c_int)
+
+ class c_uint(_SimpleCData):
+ _type_ = "I"
+ _check_size(c_uint)
+
+class c_float(_SimpleCData):
+ _type_ = "f"
+_check_size(c_float)
+
+class c_double(_SimpleCData):
+ _type_ = "d"
+_check_size(c_double)
+
+class c_longdouble(_SimpleCData):
+ _type_ = "g"
+if sizeof(c_longdouble) == sizeof(c_double):
+ c_longdouble = c_double
+
+if _calcsize("l") == _calcsize("q"):
+ # if long and long long have the same size, make c_longlong an alias for c_long
+ c_longlong = c_long
+ c_ulonglong = c_ulong
+else:
+ class c_longlong(_SimpleCData):
+ _type_ = "q"
+ _check_size(c_longlong)
+
+ class c_ulonglong(_SimpleCData):
+ _type_ = "Q"
+ ## def from_param(cls, val):
+ ## return ('d', float(val), val)
+ ## from_param = classmethod(from_param)
+ _check_size(c_ulonglong)
+
+class c_ubyte(_SimpleCData):
+ _type_ = "B"
+c_ubyte.__ctype_le__ = c_ubyte.__ctype_be__ = c_ubyte
+# backward compatibility:
+##c_uchar = c_ubyte
+_check_size(c_ubyte)
+
+class c_byte(_SimpleCData):
+ _type_ = "b"
+c_byte.__ctype_le__ = c_byte.__ctype_be__ = c_byte
+_check_size(c_byte)
+
+class c_char(_SimpleCData):
+ _type_ = "c"
+c_char.__ctype_le__ = c_char.__ctype_be__ = c_char
+_check_size(c_char)
+
+class c_char_p(_SimpleCData):
+ _type_ = "z"
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, c_void_p.from_buffer(self).value)
+_check_size(c_char_p, "P")
+
+class c_void_p(_SimpleCData):
+ _type_ = "P"
+c_voidp = c_void_p # backwards compatibility (to a bug)
+_check_size(c_void_p)
+
+class c_bool(_SimpleCData):
+ _type_ = "?"
+
+from _ctypes import POINTER, pointer, _pointer_type_cache
+
+class c_wchar_p(_SimpleCData):
+ _type_ = "Z"
+ def __repr__(self):
+ return "%s(%s)" % (self.__class__.__name__, c_void_p.from_buffer(self).value)
+
+class c_wchar(_SimpleCData):
+ _type_ = "u"
+
+def _reset_cache():
+ _pointer_type_cache.clear()
+ _c_functype_cache.clear()
+ if _os.name == "nt":
+ _win_functype_cache.clear()
+ # _SimpleCData.c_wchar_p_from_param
+ POINTER(c_wchar).from_param = c_wchar_p.from_param
+ # _SimpleCData.c_char_p_from_param
+ POINTER(c_char).from_param = c_char_p.from_param
+ _pointer_type_cache[None] = c_void_p
+ # XXX for whatever reasons, creating the first instance of a callback
+ # function is needed for the unittests on Win64 to succeed. This MAY
+ # be a compiler bug, since the problem occurs only when _ctypes is
+ # compiled with the MS SDK compiler. Or an uninitialized variable?
+ CFUNCTYPE(c_int)(lambda: None)
+
+def create_unicode_buffer(init, size=None):
+ """create_unicode_buffer(aString) -> character array
+ create_unicode_buffer(anInteger) -> character array
+ create_unicode_buffer(aString, anInteger) -> character array
+ """
+ if isinstance(init, str):
+ if size is None:
+ size = len(init)+1
+ buftype = c_wchar * size
+ buf = buftype()
+ buf.value = init
+ return buf
+ elif isinstance(init, int):
+ buftype = c_wchar * init
+ buf = buftype()
+ return buf
+ raise TypeError(init)
+
+
+# XXX Deprecated
+def SetPointerType(pointer, cls):
+ if _pointer_type_cache.get(cls, None) is not None:
+ raise RuntimeError("This type already exists in the cache")
+ if id(pointer) not in _pointer_type_cache:
+ raise RuntimeError("What's this???")
+ pointer.set_type(cls)
+ _pointer_type_cache[cls] = pointer
+ del _pointer_type_cache[id(pointer)]
+
+# XXX Deprecated
+def ARRAY(typ, len):
+ return typ * len
+
+################################################################
+
+
+if _os.name != "edk2":
+ class CDLL(object):
+ """An instance of this class represents a loaded dll/shared
+ library, exporting functions using the standard C calling
+ convention (named 'cdecl' on Windows).
+
+ The exported functions can be accessed as attributes, or by
+ indexing with the function name. Examples:
+
+ <obj>.qsort -> callable object
+ <obj>['qsort'] -> callable object
+
+ Calling the functions releases the Python GIL during the call and
+ reacquires it afterwards.
+ """
+ _func_flags_ = _FUNCFLAG_CDECL
+ _func_restype_ = c_int
+ # default values for repr
+ _name = '<uninitialized>'
+ _handle = 0
+ _FuncPtr = None
+
+ def __init__(self, name, mode=DEFAULT_MODE, handle=None,
+ use_errno=False,
+ use_last_error=False):
+ self._name = name
+ flags = self._func_flags_
+ if use_errno:
+ flags |= _FUNCFLAG_USE_ERRNO
+ if use_last_error:
+ flags |= _FUNCFLAG_USE_LASTERROR
+
+ class _FuncPtr(_CFuncPtr):
+ _flags_ = flags
+ _restype_ = self._func_restype_
+ self._FuncPtr = _FuncPtr
+
+ if handle is None:
+ self._handle = _dlopen(self._name, mode)
+ else:
+ self._handle = handle
+
+ def __repr__(self):
+ return "<%s '%s', handle %x at %#x>" % \
+ (self.__class__.__name__, self._name,
+ (self._handle & (_sys.maxsize*2 + 1)),
+ id(self) & (_sys.maxsize*2 + 1))
+
+ def __getattr__(self, name):
+ if name.startswith('__') and name.endswith('__'):
+ raise AttributeError(name)
+ func = self.__getitem__(name)
+ setattr(self, name, func)
+ return func
+
+ def __getitem__(self, name_or_ordinal):
+ func = self._FuncPtr((name_or_ordinal, self))
+ if not isinstance(name_or_ordinal, int):
+ func.__name__ = name_or_ordinal
+ return func
+
+ class PyDLL(CDLL):
+ """This class represents the Python library itself. It allows to
+ access Python API functions. The GIL is not released, and
+ Python exceptions are handled correctly.
+ """
+ _func_flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
+
+if _os.name == "nt":
+
+ class WinDLL(CDLL):
+ """This class represents a dll exporting functions using the
+ Windows stdcall calling convention.
+ """
+ _func_flags_ = _FUNCFLAG_STDCALL
+
+ # XXX Hm, what about HRESULT as normal parameter?
+ # Mustn't it derive from c_long then?
+ from _ctypes import _check_HRESULT, _SimpleCData
+ class HRESULT(_SimpleCData):
+ _type_ = "l"
+ # _check_retval_ is called with the function's result when it
+ # is used as restype. It checks for the FAILED bit, and
+ # raises an OSError if it is set.
+ #
+ # The _check_retval_ method is implemented in C, so that the
+ # method definition itself is not included in the traceback
+ # when it raises an error - that is what we want (and Python
+ # doesn't have a way to raise an exception in the caller's
+ # frame).
+ _check_retval_ = _check_HRESULT
+
+ class OleDLL(CDLL):
+ """This class represents a dll exporting functions using the
+ Windows stdcall calling convention, and returning HRESULT.
+ HRESULT error values are automatically raised as OSError
+ exceptions.
+ """
+ _func_flags_ = _FUNCFLAG_STDCALL
+ _func_restype_ = HRESULT
+
+if _os.name != "edk2":
+ class LibraryLoader(object):
+ def __init__(self, dlltype):
+ self._dlltype = dlltype
+
+ def __getattr__(self, name):
+ if name[0] == '_':
+ raise AttributeError(name)
+ dll = self._dlltype(name)
+ setattr(self, name, dll)
+ return dll
+
+ def __getitem__(self, name):
+ return getattr(self, name)
+
+ def LoadLibrary(self, name):
+ return self._dlltype(name)
+
+ cdll = LibraryLoader(CDLL)
+ pydll = LibraryLoader(PyDLL)
+
+ if _os.name == "nt":
+ pythonapi = PyDLL("python dll", None, _sys.dllhandle)
+ elif _sys.platform == "cygwin":
+ pythonapi = PyDLL("libpython%d.%d.dll" % _sys.version_info[:2])
+ else:
+ pythonapi = PyDLL(None)
+
+
+if _os.name == "nt":
+ windll = LibraryLoader(WinDLL)
+ oledll = LibraryLoader(OleDLL)
+
+ if _os.name == "nt":
+ GetLastError = windll.kernel32.GetLastError
+ else:
+ GetLastError = windll.coredll.GetLastError
+ from _ctypes import get_last_error, set_last_error
+
+ def WinError(code=None, descr=None):
+ if code is None:
+ code = GetLastError()
+ if descr is None:
+ descr = FormatError(code).strip()
+ return OSError(None, descr, None, code)
+
+if sizeof(c_uint) == sizeof(c_void_p):
+ c_size_t = c_uint
+ c_ssize_t = c_int
+elif sizeof(c_ulong) == sizeof(c_void_p):
+ c_size_t = c_ulong
+ c_ssize_t = c_long
+elif sizeof(c_ulonglong) == sizeof(c_void_p):
+ c_size_t = c_ulonglong
+ c_ssize_t = c_longlong
+
+# functions
+
+from _ctypes import _memmove_addr, _memset_addr, _string_at_addr, _cast_addr
+
+## void *memmove(void *, const void *, size_t);
+memmove = CFUNCTYPE(c_void_p, c_void_p, c_void_p, c_size_t)(_memmove_addr)
+
+## void *memset(void *, int, size_t)
+memset = CFUNCTYPE(c_void_p, c_void_p, c_int, c_size_t)(_memset_addr)
+
+def PYFUNCTYPE(restype, *argtypes):
+ class CFunctionType(_CFuncPtr):
+ _argtypes_ = argtypes
+ _restype_ = restype
+ _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
+ return CFunctionType
+
+_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr)
+def cast(obj, typ):
+ return _cast(obj, obj, typ)
+
+_string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr)
+def string_at(ptr, size=-1):
+ """string_at(addr[, size]) -> string
+
+ Return the string at addr."""
+ return _string_at(ptr, size)
+
+try:
+ from _ctypes import _wstring_at_addr
+except ImportError:
+ pass
+else:
+ _wstring_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_wstring_at_addr)
+ def wstring_at(ptr, size=-1):
+ """wstring_at(addr[, size]) -> string
+
+ Return the string at addr."""
+ return _wstring_at(ptr, size)
+
+
+if _os.name == "nt": # COM stuff
+ def DllGetClassObject(rclsid, riid, ppv):
+ try:
+ ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
+ except ImportError:
+ return -2147221231 # CLASS_E_CLASSNOTAVAILABLE
+ else:
+ return ccom.DllGetClassObject(rclsid, riid, ppv)
+
+ def DllCanUnloadNow():
+ try:
+ ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
+ except ImportError:
+ return 0 # S_OK
+ return ccom.DllCanUnloadNow()
+
+from ctypes._endian import BigEndianStructure, LittleEndianStructure
+
+# Fill in specifically-sized types
+c_int8 = c_byte
+c_uint8 = c_ubyte
+for kind in [c_short, c_int, c_long, c_longlong]:
+ if sizeof(kind) == 2: c_int16 = kind
+ elif sizeof(kind) == 4: c_int32 = kind
+ elif sizeof(kind) == 8: c_int64 = kind
+for kind in [c_ushort, c_uint, c_ulong, c_ulonglong]:
+ if sizeof(kind) == 2: c_uint16 = kind
+ elif sizeof(kind) == 4: c_uint32 = kind
+ elif sizeof(kind) == 8: c_uint64 = kind
+del(kind)
+
+_reset_cache()
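
For reviewers: a minimal sketch of how the sizeof()-based aliasing at the end of
this file resolves the fixed-width types and c_size_t once the module imports,
plus the CFUNCTYPE-wrapped memmove. Illustrative only, not part of the patch:

    import ctypes

    # c_size_t is chosen at import time to match the pointer width.
    assert ctypes.sizeof(ctypes.c_size_t) == ctypes.sizeof(ctypes.c_void_p)
    assert ctypes.sizeof(ctypes.c_int32) == 4

    # memmove is exposed as a foreign function; copy 5 bytes between buffers.
    src = ctypes.create_string_buffer(b"hello")
    dst = ctypes.create_string_buffer(8)
    ctypes.memmove(dst, src, 5)
    print(dst.raw[:5])   # b'hello'
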
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
new file mode 100644
index 00000000..46b8c921
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
@@ -0,0 +1,157 @@
+"""
+Path operations common to more than one OS
+Do not use directly. The OS specific modules import the appropriate
+functions from this module themselves.
+"""
+import os
+import stat
+
+# If Python is built without Unicode support, the unicode type
+# will not exist. Fake one.
+class _unicode(object):
+ pass
+
+
+__all__ = ['commonprefix', 'exists', 'getatime', 'getctime', 'getmtime',
+ 'getsize', 'isdir', 'isfile', 'samefile', 'sameopenfile',
+ 'samestat', '_unicode']
+
+
+# Does a path exist?
+# This is false for dangling symbolic links on systems that support them.
+def exists(path):
+ """Test whether a path exists. Returns False for broken symbolic links"""
+ try:
+ os.stat(path)
+ except OSError:
+ return False
+ return True
+
+
+# This follows symbolic links, so both islink() and isdir() can be true
+# for the same path on systems that support symlinks
+def isfile(path):
+ """Test whether a path is a regular file"""
+ try:
+ st = os.stat(path)
+ except OSError:
+ return False
+ return stat.S_ISREG(st.st_mode)
+
+
+# Is a path a directory?
+# This follows symbolic links, so both islink() and isdir()
+# can be true for the same path on systems that support symlinks
+def isdir(s):
+ """Return true if the pathname refers to an existing directory."""
+ try:
+ st = os.stat(s)
+ except OSError:
+ return False
+ return stat.S_ISDIR(st.st_mode)
+
+
+def getsize(filename):
+ """Return the size of a file, reported by os.stat()."""
+ return os.stat(filename).st_size
+
+
+def getmtime(filename):
+ """Return the last modification time of a file, reported by os.stat()."""
+ return os.stat(filename).st_mtime
+
+
+def getatime(filename):
+ """Return the last access time of a file, reported by os.stat()."""
+ return os.stat(filename).st_atime
+
+
+def getctime(filename):
+ """Return the metadata change time of a file, reported by os.stat()."""
+ return os.stat(filename).st_ctime
+
+
+# Return the longest prefix of all list elements.
+def commonprefix(m):
+ "Given a list of pathnames, returns the longest common leading component"
+ if not m: return ''
+ # Some people pass in a list of pathname parts to operate in an OS-agnostic
+ # fashion; don't try to translate in that case as that's an abuse of the
+ # API and they are already doing what they need to be OS-agnostic and so
+ # they most likely won't be using an os.PathLike object in the sublists.
+ if not isinstance(m[0], (list, tuple)):
+ m = tuple(map(os.fspath, m))
+ s1 = min(m)
+ s2 = max(m)
+ for i, c in enumerate(s1):
+ if c != s2[i]:
+ return s1[:i]
+ return s1
+
+# Are two stat buffers (obtained from stat, fstat or lstat)
+# describing the same file?
+def samestat(s1, s2):
+ """Test whether two stat buffers reference the same file"""
+ return (s1.st_ino == s2.st_ino and
+ s1.st_dev == s2.st_dev)
+
+
+# Are two filenames really pointing to the same file?
+def samefile(f1, f2):
+ """Test whether two pathnames reference the same actual file"""
+ s1 = os.stat(f1)
+ s2 = os.stat(f2)
+ return samestat(s1, s2)
+
+
+# Are two open files really referencing the same file?
+# (Not necessarily the same file descriptor!)
+def sameopenfile(fp1, fp2):
+ """Test whether two open file objects reference the same file"""
+ s1 = os.fstat(fp1)
+ s2 = os.fstat(fp2)
+ return samestat(s1, s2)
+
+
+# Split a path in root and extension.
+# The extension is everything starting at the last dot in the last
+# pathname component; the root is everything before that.
+# It is always true that root + ext == p.
+
+# Generic implementation of splitext, to be parametrized with
+# the separators
+def _splitext(p, sep, altsep, extsep):
+ """Split the extension from a pathname.
+
+ Extension is everything from the last dot to the end, ignoring
+ leading dots. Returns "(root, ext)"; ext may be empty."""
+ # NOTE: This code must work for text and bytes strings.
+
+ sepIndex = p.rfind(sep)
+ if altsep:
+ altsepIndex = p.rfind(altsep)
+ sepIndex = max(sepIndex, altsepIndex)
+
+ dotIndex = p.rfind(extsep)
+ if dotIndex > sepIndex:
+ # skip all leading dots
+ filenameIndex = sepIndex + 1
+ while filenameIndex < dotIndex:
+ if p[filenameIndex:filenameIndex+1] != extsep:
+ return p[:dotIndex], p[dotIndex:]
+ filenameIndex += 1
+
+ return p, p[:0]
+
+def _check_arg_types(funcname, *args):
+ hasstr = hasbytes = False
+ for s in args:
+ if isinstance(s, str):
+ hasstr = True
+ elif isinstance(s, bytes):
+ hasbytes = True
+ else:
+ raise TypeError('%s() argument must be str or bytes, not %r' %
+ (funcname, s.__class__.__name__)) from None
+ if hasstr and hasbytes:
+ raise TypeError("Can't mix strings and bytes in path components") from None
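
As a quick reference, these generic helpers back the public os.path functions.
A small illustrative example of the splitext and commonprefix behavior described
in the comments above (not part of the patch; posixpath simply parametrizes
_splitext with '/' and '.'):

    import posixpath

    print(posixpath.splitext("archive.tar.gz"))   # ('archive.tar', '.gz')
    print(posixpath.splitext(".bashrc"))          # ('.bashrc', '') - leading dots are skipped
    print(posixpath.commonprefix(["/usr/lib", "/usr/local"]))   # '/usr/l' (character-wise)
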
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
new file mode 100644
index 00000000..d6eca248
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
@@ -0,0 +1,110 @@
+"""Filename globbing utility."""
+
+import os
+import re
+import fnmatch
+
+__all__ = ["glob", "iglob"]
+
+def glob(pathname):
+ """Return a list of paths matching a pathname pattern.
+
+ The pattern may contain simple shell-style wildcards a la
+ fnmatch. However, unlike fnmatch, filenames starting with a
+ dot are special cases that are not matched by '*' and '?'
+ patterns.
+
+ """
+ return list(iglob(pathname))
+
+def iglob(pathname):
+ """Return an iterator which yields the paths matching a pathname pattern.
+
+ The pattern may contain simple shell-style wildcards a la
+ fnmatch. However, unlike fnmatch, filenames starting with a
+ dot are special cases that are not matched by '*' and '?'
+ patterns.
+
+ """
+ dirname, basename = os.path.split(pathname)
+ if not has_magic(pathname):
+ if basename:
+ if os.path.lexists(pathname):
+ yield pathname
+ else:
+ # Patterns ending with a slash should match only directories
+ if os.path.isdir(dirname):
+ yield pathname
+ return
+ if not dirname:
+ yield from glob1(None, basename)
+ return
+ # `os.path.split()` returns the argument itself as a dirname if it is a
+ # drive or UNC path. Prevent an infinite recursion if a drive or UNC path
+ # contains magic characters (i.e. r'\\?\C:').
+ if dirname != pathname and has_magic(dirname):
+ dirs = iglob(dirname)
+ else:
+ dirs = [dirname]
+ if has_magic(basename):
+ glob_in_dir = glob1
+ else:
+ glob_in_dir = glob0
+ for dirname in dirs:
+ for name in glob_in_dir(dirname, basename):
+ yield os.path.join(dirname, name)
+
+# These 2 helper functions non-recursively glob inside a literal directory.
+# They return a list of basenames. `glob1` accepts a pattern while `glob0`
+# takes a literal basename (so it only has to check for its existence).
+
+def glob1(dirname, pattern):
+ if not dirname:
+ if isinstance(pattern, bytes):
+ dirname = bytes(os.curdir, 'ASCII')
+ else:
+ dirname = os.curdir
+ try:
+ names = os.listdir(dirname)
+ except OSError:
+ return []
+ if not _ishidden(pattern):
+ names = [x for x in names if not _ishidden(x)]
+ return fnmatch.filter(names, pattern)
+
+def glob0(dirname, basename):
+ if not basename:
+ # `os.path.split()` returns an empty basename for paths ending with a
+ # directory separator. 'q*x/' should match only directories.
+ if os.path.isdir(dirname):
+ return [basename]
+ else:
+ if os.path.lexists(os.path.join(dirname, basename)):
+ return [basename]
+ return []
+
+
+magic_check = re.compile('([*?[])')
+magic_check_bytes = re.compile(b'([*?[])')
+
+def has_magic(s):
+ if isinstance(s, bytes):
+ match = magic_check_bytes.search(s)
+ else:
+ match = magic_check.search(s)
+ return match is not None
+
+def _ishidden(path):
+ return path[0] in ('.', b'.'[0])
+
+def escape(pathname):
+ """Escape all special characters.
+ """
+ # Escaping is done by wrapping any of "*?[" between square brackets.
+ # Metacharacters do not work in the drive part and shouldn't be escaped.
+ drive, pathname = os.path.splitdrive(pathname)
+ if isinstance(pathname, bytes):
+ pathname = magic_check_bytes.sub(br'[\1]', pathname)
+ else:
+ pathname = magic_check.sub(r'[\1]', pathname)
+ return drive + pathname
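
For context, a brief illustrative example of the glob/iglob/escape entry points
defined above (not part of the patch; the '*.efi' pattern is only a placeholder):

    import glob

    for path in glob.iglob("*.efi"):     # lazy: yields matches one at a time
        print(path)

    literal = glob.escape("what?.txt")   # -> 'what[?].txt'
    print(glob.glob(literal))            # matches only the literal file name
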
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
new file mode 100644
index 00000000..fe7cb47c
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
@@ -0,0 +1,1481 @@
+r"""HTTP/1.1 client library
+
+<intro stuff goes here>
+<other stuff, too>
+
+HTTPConnection goes through a number of "states", which define when a client
+may legally make another request or fetch the response for a particular
+request. This diagram details these state transitions:
+
+ (null)
+ |
+ | HTTPConnection()
+ v
+ Idle
+ |
+ | putrequest()
+ v
+ Request-started
+ |
+ | ( putheader() )* endheaders()
+ v
+ Request-sent
+ |\_____________________________
+ | | getresponse() raises
+ | response = getresponse() | ConnectionError
+ v v
+ Unread-response Idle
+ [Response-headers-read]
+ |\____________________
+ | |
+ | response.read() | putrequest()
+ v v
+ Idle Req-started-unread-response
+ ______/|
+ / |
+ response.read() | | ( putheader() )* endheaders()
+ v v
+ Request-started Req-sent-unread-response
+ |
+ | response.read()
+ v
+ Request-sent
+
+This diagram presents the following rules:
+ -- a second request may not be started until {response-headers-read}
+ -- a response [object] cannot be retrieved until {request-sent}
+ -- there is no differentiation between an unread response body and a
+ partially read response body
+
+Note: this enforcement is applied by the HTTPConnection class. The
+ HTTPResponse class does not enforce this state machine, which
+ implies sophisticated clients may accelerate the request/response
+ pipeline. Caution should be taken, though: accelerating the states
+ beyond the above pattern may imply knowledge of the server's
+ connection-close behavior for certain requests. For example, it
+ is impossible to tell whether the server will close the connection
+ UNTIL the response headers have been read; this means that further
+ requests cannot be placed into the pipeline until it is known that
+ the server will NOT be closing the connection.
+
+Logical State __state __response
+------------- ------- ----------
+Idle _CS_IDLE None
+Request-started _CS_REQ_STARTED None
+Request-sent _CS_REQ_SENT None
+Unread-response _CS_IDLE <response_class>
+Req-started-unread-response _CS_REQ_STARTED <response_class>
+Req-sent-unread-response _CS_REQ_SENT <response_class>
+"""
+
+import email.parser
+import email.message
+import http
+import io
+import os
+import re
+import socket
+import collections
+from urllib.parse import urlsplit
+
+# HTTPMessage, parse_headers(), and the HTTP status code constants are
+# intentionally omitted for simplicity
+__all__ = ["HTTPResponse", "HTTPConnection",
+ "HTTPException", "NotConnected", "UnknownProtocol",
+ "UnknownTransferEncoding", "UnimplementedFileMode",
+ "IncompleteRead", "InvalidURL", "ImproperConnectionState",
+ "CannotSendRequest", "CannotSendHeader", "ResponseNotReady",
+ "BadStatusLine", "LineTooLong", "RemoteDisconnected", "error",
+ "responses"]
+
+HTTP_PORT = 80
+HTTPS_PORT = 443
+
+_UNKNOWN = 'UNKNOWN'
+
+# connection states
+_CS_IDLE = 'Idle'
+_CS_REQ_STARTED = 'Request-started'
+_CS_REQ_SENT = 'Request-sent'
+
+
+# hack to maintain backwards compatibility
+globals().update(http.HTTPStatus.__members__)
+
+# another hack to maintain backwards compatibility
+# Mapping status codes to official W3C names
+responses = {v: v.phrase for v in http.HTTPStatus.__members__.values()}
+
+# maximal amount of data to read at one time in _safe_read
+MAXAMOUNT = 1048576
+
+# maximal line length when calling readline().
+_MAXLINE = 65536
+_MAXHEADERS = 100
+
+# Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2)
+#
+# VCHAR = %x21-7E
+# obs-text = %x80-FF
+# header-field = field-name ":" OWS field-value OWS
+# field-name = token
+# field-value = *( field-content / obs-fold )
+# field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
+# field-vchar = VCHAR / obs-text
+#
+# obs-fold = CRLF 1*( SP / HTAB )
+# ; obsolete line folding
+# ; see Section 3.2.4
+
+# token = 1*tchar
+#
+# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*"
+# / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~"
+# / DIGIT / ALPHA
+# ; any VCHAR, except delimiters
+#
+# VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1
+
+# the patterns for both name and value are more lenient than RFC
+# definitions to allow for backwards compatibility
+_is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch
+_is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search
+
+# We always set the Content-Length header for these methods because some
+# servers will otherwise respond with a 411
+_METHODS_EXPECTING_BODY = {'PATCH', 'POST', 'PUT'}
+
+
+def _encode(data, name='data'):
+ """Call data.encode("latin-1") but show a better error message."""
+ try:
+ return data.encode("latin-1")
+ except UnicodeEncodeError as err:
+ raise UnicodeEncodeError(
+ err.encoding,
+ err.object,
+ err.start,
+ err.end,
+ "%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') "
+ "if you want to send it encoded in UTF-8." %
+ (name.title(), data[err.start:err.end], name)) from None
+
+
+class HTTPMessage(email.message.Message):
+ # XXX The only usage of this method is in
+ # http.server.CGIHTTPRequestHandler. Maybe move the code there so
+ # that it doesn't need to be part of the public API. The API has
+ # never been defined so this could cause backwards compatibility
+ # issues.
+
+ def getallmatchingheaders(self, name):
+ """Find all header lines matching a given header name.
+
+ Look through the list of headers and find all lines matching a given
+ header name (and their continuation lines). A list of the lines is
+ returned, without interpretation. If the header does not occur, an
+ empty list is returned. If the header occurs multiple times, all
+ occurrences are returned. Case is not important in the header name.
+
+ """
+ name = name.lower() + ':'
+ n = len(name)
+ lst = []
+ hit = 0
+ for line in self.keys():
+ if line[:n].lower() == name:
+ hit = 1
+ elif not line[:1].isspace():
+ hit = 0
+ if hit:
+ lst.append(line)
+ return lst
+
+def parse_headers(fp, _class=HTTPMessage):
+ """Parses only RFC2822 headers from a file pointer.
+
+ email Parser wants to see strings rather than bytes.
+ But a TextIOWrapper around self.rfile would buffer too many bytes
+ from the stream, bytes which we later need to read as bytes.
+ So we read the correct bytes here, as bytes, for email Parser
+ to parse.
+
+ """
+ headers = []
+ while True:
+ line = fp.readline(_MAXLINE + 1)
+ if len(line) > _MAXLINE:
+ raise LineTooLong("header line")
+ headers.append(line)
+ if len(headers) > _MAXHEADERS:
+ raise HTTPException("got more than %d headers" % _MAXHEADERS)
+ if line in (b'\r\n', b'\n', b''):
+ break
+ hstring = b''.join(headers).decode('iso-8859-1')
+ return email.parser.Parser(_class=_class).parsestr(hstring)
+
+
+class HTTPResponse(io.BufferedIOBase):
+
+ # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.
+
+ # The bytes from the socket object are iso-8859-1 strings.
+ # See RFC 2616 sec 2.2 which notes an exception for MIME-encoded
+ # text following RFC 2047. The basic status line parsing only
+ # accepts iso-8859-1.
+
+ def __init__(self, sock, debuglevel=0, method=None, url=None):
+ # If the response includes a content-length header, we need to
+ # make sure that the client doesn't read more than the
+ # specified number of bytes. If it does, it will block until
+ # the server times out and closes the connection. This will
+ # happen if a self.fp.read() is done (without a size) whether
+ # self.fp is buffered or not. So, no self.fp.read() by
+ # clients unless they know what they are doing.
+ self.fp = sock.makefile("rb")
+ self.debuglevel = debuglevel
+ self._method = method
+
+ # The HTTPResponse object is returned via urllib. The clients
+ # of http and urllib expect different attributes for the
+ # headers. headers is used here and supports urllib. msg is
+ # provided as a backwards compatibility layer for http
+ # clients.
+
+ self.headers = self.msg = None
+
+ # from the Status-Line of the response
+ self.version = _UNKNOWN # HTTP-Version
+ self.status = _UNKNOWN # Status-Code
+ self.reason = _UNKNOWN # Reason-Phrase
+
+ self.chunked = _UNKNOWN # is "chunked" being used?
+ self.chunk_left = _UNKNOWN # bytes left to read in current chunk
+ self.length = _UNKNOWN # number of bytes left in response
+ self.will_close = _UNKNOWN # conn will close at end of response
+
+ def _read_status(self):
+ line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
+ if len(line) > _MAXLINE:
+ raise LineTooLong("status line")
+ if self.debuglevel > 0:
+ print("reply:", repr(line))
+ if not line:
+ # Presumably, the server closed the connection before
+ # sending a valid response.
+ raise RemoteDisconnected("Remote end closed connection without"
+ " response")
+ try:
+ version, status, reason = line.split(None, 2)
+ except ValueError:
+ try:
+ version, status = line.split(None, 1)
+ reason = ""
+ except ValueError:
+ # empty version will cause next test to fail.
+ version = ""
+ if not version.startswith("HTTP/"):
+ self._close_conn()
+ raise BadStatusLine(line)
+
+ # The status code is a three-digit number
+ try:
+ status = int(status)
+ if status < 100 or status > 999:
+ raise BadStatusLine(line)
+ except ValueError:
+ raise BadStatusLine(line)
+ return version, status, reason
+
+ def begin(self):
+ if self.headers is not None:
+ # we've already started reading the response
+ return
+
+ # read until we get a non-100 response
+ while True:
+ version, status, reason = self._read_status()
+ if status != CONTINUE:
+ break
+ # skip the header from the 100 response
+ while True:
+ skip = self.fp.readline(_MAXLINE + 1)
+ if len(skip) > _MAXLINE:
+ raise LineTooLong("header line")
+ skip = skip.strip()
+ if not skip:
+ break
+ if self.debuglevel > 0:
+ print("header:", skip)
+
+ self.code = self.status = status
+ self.reason = reason.strip()
+ if version in ("HTTP/1.0", "HTTP/0.9"):
+ # Some servers might still return "0.9", treat it as 1.0 anyway
+ self.version = 10
+ elif version.startswith("HTTP/1."):
+ self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1
+ else:
+ raise UnknownProtocol(version)
+
+ self.headers = self.msg = parse_headers(self.fp)
+
+ if self.debuglevel > 0:
+ for hdr in self.headers:
+ print("header:", hdr + ":", self.headers.get(hdr))
+
+ # are we using the chunked-style of transfer encoding?
+ tr_enc = self.headers.get("transfer-encoding")
+ if tr_enc and tr_enc.lower() == "chunked":
+ self.chunked = True
+ self.chunk_left = None
+ else:
+ self.chunked = False
+
+ # will the connection close at the end of the response?
+ self.will_close = self._check_close()
+
+ # do we have a Content-Length?
+ # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"
+ self.length = None
+ length = self.headers.get("content-length")
+
+ # are we using the chunked-style of transfer encoding?
+ tr_enc = self.headers.get("transfer-encoding")
+ if length and not self.chunked:
+ try:
+ self.length = int(length)
+ except ValueError:
+ self.length = None
+ else:
+ if self.length < 0: # ignore nonsensical negative lengths
+ self.length = None
+ else:
+ self.length = None
+
+ # does the body have a fixed length? (of zero)
+ if (status == NO_CONTENT or status == NOT_MODIFIED or
+ 100 <= status < 200 or # 1xx codes
+ self._method == "HEAD"):
+ self.length = 0
+
+ # if the connection remains open, and we aren't using chunked, and
+ # a content-length was not provided, then assume that the connection
+ # WILL close.
+ if (not self.will_close and
+ not self.chunked and
+ self.length is None):
+ self.will_close = True
+
+ def _check_close(self):
+ conn = self.headers.get("connection")
+ if self.version == 11:
+ # An HTTP/1.1 proxy is assumed to stay open unless
+ # explicitly closed.
+ conn = self.headers.get("connection")
+ if conn and "close" in conn.lower():
+ return True
+ return False
+
+ # Some HTTP/1.0 implementations have support for persistent
+ # connections, using rules different than HTTP/1.1.
+
+ # For older HTTP, Keep-Alive indicates persistent connection.
+ if self.headers.get("keep-alive"):
+ return False
+
+ # At least Akamai returns a "Connection: Keep-Alive" header,
+ # which was supposed to be sent by the client.
+ if conn and "keep-alive" in conn.lower():
+ return False
+
+ # Proxy-Connection is a netscape hack.
+ pconn = self.headers.get("proxy-connection")
+ if pconn and "keep-alive" in pconn.lower():
+ return False
+
+ # otherwise, assume it will close
+ return True
+
+ def _close_conn(self):
+ fp = self.fp
+ self.fp = None
+ fp.close()
+
+ def close(self):
+ try:
+ super().close() # set "closed" flag
+ finally:
+ if self.fp:
+ self._close_conn()
+
+ # These implementations are for the benefit of io.BufferedReader.
+
+ # XXX This class should probably be revised to act more like
+ # the "raw stream" that BufferedReader expects.
+
+ def flush(self):
+ super().flush()
+ if self.fp:
+ self.fp.flush()
+
+ def readable(self):
+ """Always returns True"""
+ return True
+
+ # End of "raw stream" methods
+
+ def isclosed(self):
+ """True if the connection is closed."""
+ # NOTE: it is possible that we will not ever call self.close(). This
+ # case occurs when will_close is TRUE, length is None, and we
+ # read up to the last byte, but NOT past it.
+ #
+ # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be
+ # called, meaning self.isclosed() is meaningful.
+ return self.fp is None
+
+ def read(self, amt=None):
+ if self.fp is None:
+ return b""
+
+ if self._method == "HEAD":
+ self._close_conn()
+ return b""
+
+ if amt is not None:
+ # Amount is given, implement using readinto
+ b = bytearray(amt)
+ n = self.readinto(b)
+ return memoryview(b)[:n].tobytes()
+ else:
+ # Amount is not given (unbounded read) so we must check self.length
+ # and self.chunked
+
+ if self.chunked:
+ return self._readall_chunked()
+
+ if self.length is None:
+ s = self.fp.read()
+ else:
+ try:
+ s = self._safe_read(self.length)
+ except IncompleteRead:
+ self._close_conn()
+ raise
+ self.length = 0
+ self._close_conn() # we read everything
+ return s
+
+ def readinto(self, b):
+ """Read up to len(b) bytes into bytearray b and return the number
+ of bytes read.
+ """
+
+ if self.fp is None:
+ return 0
+
+ if self._method == "HEAD":
+ self._close_conn()
+ return 0
+
+ if self.chunked:
+ return self._readinto_chunked(b)
+
+ if self.length is not None:
+ if len(b) > self.length:
+ # clip the read to the "end of response"
+ b = memoryview(b)[0:self.length]
+
+ # we do not use _safe_read() here because this may be a .will_close
+ # connection, and the user is reading more bytes than will be provided
+ # (for example, reading in 1k chunks)
+ n = self.fp.readinto(b)
+ if not n and b:
+ # Ideally, we would raise IncompleteRead if the content-length
+ # wasn't satisfied, but it might break compatibility.
+ self._close_conn()
+ elif self.length is not None:
+ self.length -= n
+ if not self.length:
+ self._close_conn()
+ return n
+
+ def _read_next_chunk_size(self):
+ # Read the next chunk size from the file
+ line = self.fp.readline(_MAXLINE + 1)
+ if len(line) > _MAXLINE:
+ raise LineTooLong("chunk size")
+ i = line.find(b";")
+ if i >= 0:
+ line = line[:i] # strip chunk-extensions
+ try:
+ return int(line, 16)
+ except ValueError:
+ # close the connection as protocol synchronisation is
+ # probably lost
+ self._close_conn()
+ raise
+
+ def _read_and_discard_trailer(self):
+ # read and discard trailer up to the CRLF terminator
+ ### note: we shouldn't have any trailers!
+ while True:
+ line = self.fp.readline(_MAXLINE + 1)
+ if len(line) > _MAXLINE:
+ raise LineTooLong("trailer line")
+ if not line:
+ # a vanishingly small number of sites EOF without
+ # sending the trailer
+ break
+ if line in (b'\r\n', b'\n', b''):
+ break
+
+ def _get_chunk_left(self):
+ # return self.chunk_left, reading a new chunk if necessary.
+ # chunk_left == 0: at the end of the current chunk, need to close it
+ # chunk_left == None: No current chunk, should read next.
+ # This function returns non-zero or None if the last chunk has
+ # been read.
+ chunk_left = self.chunk_left
+ if not chunk_left: # Can be 0 or None
+ if chunk_left is not None:
+ # We are at the end of chunk, discard chunk end
+ self._safe_read(2) # toss the CRLF at the end of the chunk
+ try:
+ chunk_left = self._read_next_chunk_size()
+ except ValueError:
+ raise IncompleteRead(b'')
+ if chunk_left == 0:
+ # last chunk: 1*("0") [ chunk-extension ] CRLF
+ self._read_and_discard_trailer()
+ # we read everything; close the "file"
+ self._close_conn()
+ chunk_left = None
+ self.chunk_left = chunk_left
+ return chunk_left
+
+ def _readall_chunked(self):
+ assert self.chunked != _UNKNOWN
+ value = []
+ try:
+ while True:
+ chunk_left = self._get_chunk_left()
+ if chunk_left is None:
+ break
+ value.append(self._safe_read(chunk_left))
+ self.chunk_left = 0
+ return b''.join(value)
+ except IncompleteRead:
+ raise IncompleteRead(b''.join(value))
+
+ def _readinto_chunked(self, b):
+ assert self.chunked != _UNKNOWN
+ total_bytes = 0
+ mvb = memoryview(b)
+ try:
+ while True:
+ chunk_left = self._get_chunk_left()
+ if chunk_left is None:
+ return total_bytes
+
+ if len(mvb) <= chunk_left:
+ n = self._safe_readinto(mvb)
+ self.chunk_left = chunk_left - n
+ return total_bytes + n
+
+ temp_mvb = mvb[:chunk_left]
+ n = self._safe_readinto(temp_mvb)
+ mvb = mvb[n:]
+ total_bytes += n
+ self.chunk_left = 0
+
+ except IncompleteRead:
+ raise IncompleteRead(bytes(b[0:total_bytes]))
+
+ def _safe_read(self, amt):
+ """Read the number of bytes requested, compensating for partial reads.
+
+ Normally, we have a blocking socket, but a read() can be interrupted
+ by a signal (resulting in a partial read).
+
+ Note that we cannot distinguish between EOF and an interrupt when zero
+ bytes have been read. IncompleteRead() will be raised in this
+ situation.
+
+ This function should be used when <amt> bytes "should" be present for
+ reading. If the bytes are truly not available (due to EOF), then the
+ IncompleteRead exception can be used to detect the problem.
+ """
+ s = []
+ while amt > 0:
+ chunk = self.fp.read(min(amt, MAXAMOUNT))
+ if not chunk:
+ raise IncompleteRead(b''.join(s), amt)
+ s.append(chunk)
+ amt -= len(chunk)
+ return b"".join(s)
+
+ def _safe_readinto(self, b):
+ """Same as _safe_read, but for reading into a buffer."""
+ total_bytes = 0
+ mvb = memoryview(b)
+ while total_bytes < len(b):
+ if MAXAMOUNT < len(mvb):
+ temp_mvb = mvb[0:MAXAMOUNT]
+ n = self.fp.readinto(temp_mvb)
+ else:
+ n = self.fp.readinto(mvb)
+ if not n:
+ raise IncompleteRead(bytes(mvb[0:total_bytes]), len(b))
+ mvb = mvb[n:]
+ total_bytes += n
+ return total_bytes
+
+ def read1(self, n=-1):
+ """Read with at most one underlying system call. If at least one
+ byte is buffered, return that instead.
+ """
+ if self.fp is None or self._method == "HEAD":
+ return b""
+ if self.chunked:
+ return self._read1_chunked(n)
+ if self.length is not None and (n < 0 or n > self.length):
+ n = self.length
+ try:
+ result = self.fp.read1(n)
+ except ValueError:
+ if n >= 0:
+ raise
+ # some implementations, like BufferedReader, don't support -1
+ # Read an arbitrarily selected largeish chunk.
+ result = self.fp.read1(16*1024)
+ if not result and n:
+ self._close_conn()
+ elif self.length is not None:
+ self.length -= len(result)
+ return result
+
+ def peek(self, n=-1):
+ # Having this enables IOBase.readline() to read more than one
+ # byte at a time
+ if self.fp is None or self._method == "HEAD":
+ return b""
+ if self.chunked:
+ return self._peek_chunked(n)
+ return self.fp.peek(n)
+
+ def readline(self, limit=-1):
+ if self.fp is None or self._method == "HEAD":
+ return b""
+ if self.chunked:
+ # Fallback to IOBase readline which uses peek() and read()
+ return super().readline(limit)
+ if self.length is not None and (limit < 0 or limit > self.length):
+ limit = self.length
+ result = self.fp.readline(limit)
+ if not result and limit:
+ self._close_conn()
+ elif self.length is not None:
+ self.length -= len(result)
+ return result
+
+ def _read1_chunked(self, n):
+ # Strictly speaking, _get_chunk_left() may cause more than one read,
+ # but that is ok, since that is to satisfy the chunked protocol.
+ chunk_left = self._get_chunk_left()
+ if chunk_left is None or n == 0:
+ return b''
+ if not (0 <= n <= chunk_left):
+ n = chunk_left # if n is negative or larger than chunk_left
+ read = self.fp.read1(n)
+ self.chunk_left -= len(read)
+ if not read:
+ raise IncompleteRead(b"")
+ return read
+
+ def _peek_chunked(self, n):
+ # Strictly speaking, _get_chunk_left() may cause more than one read,
+ # but that is ok, since that is to satisfy the chunked protocol.
+ try:
+ chunk_left = self._get_chunk_left()
+ except IncompleteRead:
+ return b'' # peek doesn't worry about protocol
+ if chunk_left is None:
+ return b'' # eof
+ # peek is allowed to return more than requested. Just request the
+ # entire chunk, and truncate what we get.
+ return self.fp.peek(chunk_left)[:chunk_left]
+
+ def fileno(self):
+ return self.fp.fileno()
+
+ def getheader(self, name, default=None):
+ '''Returns the value of the header matching *name*.
+
+ If there are multiple matching headers, the values are
+ combined into a single string separated by commas and spaces.
+
+ If no matching header is found, returns *default* or None if
+ the *default* is not specified.
+
+ If the headers are unknown, raises http.client.ResponseNotReady.
+
+ '''
+ if self.headers is None:
+ raise ResponseNotReady()
+ headers = self.headers.get_all(name) or default
+ if isinstance(headers, str) or not hasattr(headers, '__iter__'):
+ return headers
+ else:
+ return ', '.join(headers)
+
+ def getheaders(self):
+ """Return list of (header, value) tuples."""
+ if self.headers is None:
+ raise ResponseNotReady()
+ return list(self.headers.items())
+
+ # We override IOBase.__iter__ so that it doesn't check for closed-ness
+
+ def __iter__(self):
+ return self
+
+ # For compatibility with old-style urllib responses.
+
+ def info(self):
+ '''Returns an instance of the class mimetools.Message containing
+ meta-information associated with the URL.
+
+ When the method is HTTP, these headers are those returned by
+ the server at the head of the retrieved HTML page (including
+ Content-Length and Content-Type).
+
+ When the method is FTP, a Content-Length header will be
+ present if (as is now usual) the server passed back a file
+ length in response to the FTP retrieval request. A
+ Content-Type header will be present if the MIME type can be
+ guessed.
+
+ When the method is local-file, returned headers will include
+ a Date representing the file's last-modified time, a
+ Content-Length giving file size, and a Content-Type
+ containing a guess at the file's type. See also the
+ description of the mimetools module.
+
+ '''
+ return self.headers
+
+ def geturl(self):
+ '''Return the real URL of the page.
+
+ In some cases, the HTTP server redirects a client to another
+ URL. The urlopen() function handles this transparently, but in
+ some cases the caller needs to know which URL the client was
+ redirected to. The geturl() method can be used to get at this
+ redirected URL.
+
+ '''
+ return self.url
+
+ def getcode(self):
+ '''Return the HTTP status code that was sent with the response,
+ or None if the URL is not an HTTP URL.
+
+ '''
+ return self.status
+
+class HTTPConnection:
+
+ _http_vsn = 11
+ _http_vsn_str = 'HTTP/1.1'
+
+ response_class = HTTPResponse
+ default_port = HTTP_PORT
+ auto_open = 1
+ debuglevel = 0
+
+ @staticmethod
+ def _is_textIO(stream):
+ """Test whether a file-like object is a text or a binary stream.
+ """
+ return isinstance(stream, io.TextIOBase)
+
+ @staticmethod
+ def _get_content_length(body, method):
+ """Get the content-length based on the body.
+
+ If the body is None, we set Content-Length: 0 for methods that expect
+ a body (RFC 7230, Section 3.3.2). We also set the Content-Length for
+ any method if the body is a str or bytes-like object and not a file.
+ """
+ if body is None:
+ # do an explicit check for not None here to distinguish
+ # between unset and set but empty
+ if method.upper() in _METHODS_EXPECTING_BODY:
+ return 0
+ else:
+ return None
+
+ if hasattr(body, 'read'):
+ # file-like object.
+ return None
+
+ try:
+ # does it implement the buffer protocol (bytes, bytearray, array)?
+ mv = memoryview(body)
+ return mv.nbytes
+ except TypeError:
+ pass
+
+ if isinstance(body, str):
+ return len(body)
+
+ return None
+
+ def __init__(self, host, port=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
+ source_address=None):
+ self.timeout = timeout
+ self.source_address = source_address
+ self.sock = None
+ self._buffer = []
+ self.__response = None
+ self.__state = _CS_IDLE
+ self._method = None
+ self._tunnel_host = None
+ self._tunnel_port = None
+ self._tunnel_headers = {}
+
+ (self.host, self.port) = self._get_hostport(host, port)
+
+ # This is stored as an instance variable to allow unit
+ # tests to replace it with a suitable mockup
+ self._create_connection = socket.create_connection
+
+ def set_tunnel(self, host, port=None, headers=None):
+ """Set up host and port for HTTP CONNECT tunnelling.
+
+ In a connection that uses HTTP CONNECT tunneling, the host passed to the
+ constructor is used as a proxy server that relays all communication to
+ the endpoint passed to `set_tunnel`. This is done by sending an HTTP
+ CONNECT request to the proxy server when the connection is established.
+
+ This method must be called before the HTTP connection has been
+ established.
+
+ The headers argument should be a mapping of extra HTTP headers to send
+ with the CONNECT request.
+ """
+
+ if self.sock:
+ raise RuntimeError("Can't set up tunnel for established connection")
+
+ self._tunnel_host, self._tunnel_port = self._get_hostport(host, port)
+ if headers:
+ self._tunnel_headers = headers
+ else:
+ self._tunnel_headers.clear()
+
+ def _get_hostport(self, host, port):
+ if port is None:
+ i = host.rfind(':')
+ j = host.rfind(']') # ipv6 addresses have [...]
+ if i > j:
+ try:
+ port = int(host[i+1:])
+ except ValueError:
+ if host[i+1:] == "": # http://foo.com:/ == http://foo.com/
+ port = self.default_port
+ else:
+ raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
+ host = host[:i]
+ else:
+ port = self.default_port
+ if host and host[0] == '[' and host[-1] == ']':
+ host = host[1:-1]
+
+ return (host, port)
+
+ def set_debuglevel(self, level):
+ self.debuglevel = level
+
+ def _tunnel(self):
+ connect_str = "CONNECT %s:%d HTTP/1.0\r\n" % (self._tunnel_host,
+ self._tunnel_port)
+ connect_bytes = connect_str.encode("ascii")
+ self.send(connect_bytes)
+ for header, value in self._tunnel_headers.items():
+ header_str = "%s: %s\r\n" % (header, value)
+ header_bytes = header_str.encode("latin-1")
+ self.send(header_bytes)
+ self.send(b'\r\n')
+
+ response = self.response_class(self.sock, method=self._method)
+ (version, code, message) = response._read_status()
+
+ if code != http.HTTPStatus.OK:
+ self.close()
+ raise OSError("Tunnel connection failed: %d %s" % (code,
+ message.strip()))
+ while True:
+ line = response.fp.readline(_MAXLINE + 1)
+ if len(line) > _MAXLINE:
+ raise LineTooLong("header line")
+ if not line:
+ # for sites which EOF without sending a trailer
+ break
+ if line in (b'\r\n', b'\n', b''):
+ break
+
+ if self.debuglevel > 0:
+ print('header:', line.decode())
+
+ def connect(self):
+ """Connect to the host and port specified in __init__."""
+ self.sock = self._create_connection(
+ (self.host,self.port), self.timeout, self.source_address)
+ #self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+
+ if self._tunnel_host:
+ self._tunnel()
+
+ def close(self):
+ """Close the connection to the HTTP server."""
+ self.__state = _CS_IDLE
+ try:
+ sock = self.sock
+ if sock:
+ self.sock = None
+ sock.close() # close it manually... there may be other refs
+ finally:
+ response = self.__response
+ if response:
+ self.__response = None
+ response.close()
+
+ def send(self, data):
+ """Send `data' to the server.
+ ``data`` can be a string object, a bytes object, an array object, a
+ file-like object that supports a .read() method, or an iterable object.
+ """
+
+ if self.sock is None:
+ if self.auto_open:
+ self.connect()
+ else:
+ raise NotConnected()
+
+ if self.debuglevel > 0:
+ print("send:", repr(data))
+ blocksize = 8192
+ if hasattr(data, "read") :
+ if self.debuglevel > 0:
+ print("sendIng a read()able")
+ encode = self._is_textIO(data)
+ if encode and self.debuglevel > 0:
+ print("encoding file using iso-8859-1")
+ while 1:
+ datablock = data.read(blocksize)
+ if not datablock:
+ break
+ if encode:
+ datablock = datablock.encode("iso-8859-1")
+ self.sock.sendall(datablock)
+ return
+ try:
+ self.sock.sendall(data)
+ except TypeError:
+ if isinstance(data, collections.Iterable):
+ for d in data:
+ self.sock.sendall(d)
+ else:
+ raise TypeError("data should be a bytes-like object "
+ "or an iterable, got %r" % type(data))
+
+ def _output(self, s):
+ """Add a line of output to the current request buffer.
+
+ Assumes that the line does *not* end with \\r\\n.
+ """
+ self._buffer.append(s)
+
+ def _read_readable(self, readable):
+ blocksize = 8192
+ if self.debuglevel > 0:
+ print("sendIng a read()able")
+ encode = self._is_textIO(readable)
+ if encode and self.debuglevel > 0:
+ print("encoding file using iso-8859-1")
+ while True:
+ datablock = readable.read(blocksize)
+ if not datablock:
+ break
+ if encode:
+ datablock = datablock.encode("iso-8859-1")
+ yield datablock
+
+ def _send_output(self, message_body=None, encode_chunked=False):
+ """Send the currently buffered request and clear the buffer.
+
+ Appends an extra \\r\\n to the buffer.
+ A message_body may be specified, to be appended to the request.
+ """
+ self._buffer.extend((b"", b""))
+ msg = b"\r\n".join(self._buffer)
+ del self._buffer[:]
+ self.send(msg)
+
+ if message_body is not None:
+
+ # create a consistent interface to message_body
+ if hasattr(message_body, 'read'):
+ # Let file-like take precedence over byte-like. This
+ # is needed to allow the current position of mmap'ed
+ # files to be taken into account.
+ chunks = self._read_readable(message_body)
+ else:
+ try:
+ # this is solely to check to see if message_body
+ # implements the buffer API. it /would/ be easier
+ # to capture if PyObject_CheckBuffer was exposed
+ # to Python.
+ memoryview(message_body)
+ except TypeError:
+ try:
+ chunks = iter(message_body)
+ except TypeError:
+ raise TypeError("message_body should be a bytes-like "
+ "object or an iterable, got %r"
+ % type(message_body))
+ else:
+ # the object implements the buffer interface and
+ # can be passed directly into socket methods
+ chunks = (message_body,)
+
+ for chunk in chunks:
+ if not chunk:
+ if self.debuglevel > 0:
+ print('Zero length chunk ignored')
+ continue
+
+ if encode_chunked and self._http_vsn == 11:
+ # chunked encoding
+ chunk = f'{len(chunk):X}\r\n'.encode('ascii') + chunk \
+ + b'\r\n'
+ self.send(chunk)
+
+ if encode_chunked and self._http_vsn == 11:
+ # end chunked transfer
+ self.send(b'0\r\n\r\n')
+
+ def putrequest(self, method, url, skip_host=False,
+ skip_accept_encoding=False):
+ """Send a request to the server.
+
+ `method' specifies an HTTP request method, e.g. 'GET'.
+ `url' specifies the object being requested, e.g. '/index.html'.
+ `skip_host' if True does not add automatically a 'Host:' header
+ `skip_accept_encoding' if True does not add automatically an
+ 'Accept-Encoding:' header
+ """
+
+ # if a prior response has been completed, then forget about it.
+ if self.__response and self.__response.isclosed():
+ self.__response = None
+
+
+ # in certain cases, we cannot issue another request on this connection.
+ # this occurs when:
+ # 1) we are in the process of sending a request. (_CS_REQ_STARTED)
+ # 2) a response to a previous request has signalled that it is going
+ # to close the connection upon completion.
+ # 3) the headers for the previous response have not been read, thus
+ # we cannot determine whether point (2) is true. (_CS_REQ_SENT)
+ #
+ # if there is no prior response, then we can request at will.
+ #
+ # if point (2) is true, then we will have passed the socket to the
+ # response (effectively meaning, "there is no prior response"), and
+ # will open a new one when a new request is made.
+ #
+ # Note: if a prior response exists, then we *can* start a new request.
+ # We are not allowed to begin fetching the response to this new
+ # request, however, until that prior response is complete.
+ #
+ if self.__state == _CS_IDLE:
+ self.__state = _CS_REQ_STARTED
+ else:
+ raise CannotSendRequest(self.__state)
+
+ # Save the method we use, we need it later in the response phase
+ self._method = method
+ if not url:
+ url = '/'
+ request = '%s %s %s' % (method, url, self._http_vsn_str)
+
+ # Non-ASCII characters should have been eliminated earlier
+ self._output(request.encode('ascii'))
+
+ if self._http_vsn == 11:
+ # Issue some standard headers for better HTTP/1.1 compliance
+
+ if not skip_host:
+ # this header is issued *only* for HTTP/1.1
+ # connections. more specifically, this means it is
+ # only issued when the client uses the new
+ # HTTPConnection() class. backwards-compat clients
+ # will be using HTTP/1.0 and those clients may be
+ # issuing this header themselves. we should NOT issue
+ # it twice; some web servers (such as Apache) barf
+ # when they see two Host: headers
+
+ # If we need a non-standard port, include it in the
+ # header. If the request is going through a proxy,
+ # use the host of the actual URL, not the host of
+ # the proxy.
+
+ netloc = ''
+ if isinstance(url,str):
+ url = bytes(url,encoding='utf-8')
+ b = url.decode('utf-8')
+ if b.startswith('http'):
+ nil, netloc, nil, nil, nil = urlsplit(url)
+
+ if netloc:
+ try:
+ netloc_enc = netloc.encode("ascii")
+ except UnicodeEncodeError:
+ netloc_enc = netloc.encode("idna")
+ self.putheader('Host', netloc_enc)
+ else:
+ if self._tunnel_host:
+ host = self._tunnel_host
+ port = self._tunnel_port
+ else:
+ host = self.host
+ port = self.port
+
+ try:
+ host_enc = host.encode("ascii")
+ except UnicodeEncodeError:
+ host_enc = host.encode("idna")
+
+ # As per RFC 2732, an IPv6 address should be wrapped in []
+ # when used as a Host header
+
+ if host.find(':') >= 0:
+ host_enc = b'[' + host_enc + b']'
+
+ if port == self.default_port:
+ self.putheader('Host', host_enc)
+ else:
+ host_enc = host_enc.decode("ascii")
+ self.putheader('Host', "%s:%s" % (host_enc, port))
+
+ # note: we are assuming that clients will not attempt to set these
+ # headers since *this* library must deal with the
+ # consequences. this also means that when the supporting
+ # libraries are updated to recognize other forms, then this
+ # code should be changed (removed or updated).
+
+ # we only want a Content-Encoding of "identity" since we don't
+ # support encodings such as x-gzip or x-deflate.
+ if not skip_accept_encoding:
+ self.putheader('Accept-Encoding', 'identity')
+
+ # we can accept "chunked" Transfer-Encodings, but no others
+ # NOTE: no TE header implies *only* "chunked"
+ #self.putheader('TE', 'chunked')
+
+ # if TE is supplied in the header, then it must appear in a
+ # Connection header.
+ #self.putheader('Connection', 'TE')
+
+ else:
+ # For HTTP/1.0, the server will assume "not chunked"
+ pass
+
+ def putheader(self, header, *values):
+ """Send a request header line to the server.
+
+ For example: h.putheader('Accept', 'text/html')
+ """
+ if self.__state != _CS_REQ_STARTED:
+ raise CannotSendHeader()
+
+ if hasattr(header, 'encode'):
+ header = header.encode('ascii')
+
+ if not _is_legal_header_name(header):
+ raise ValueError('Invalid header name %r' % (header,))
+
+ values = list(values)
+ for i, one_value in enumerate(values):
+ if hasattr(one_value, 'encode'):
+ values[i] = one_value.encode('latin-1')
+ elif isinstance(one_value, int):
+ values[i] = str(one_value).encode('ascii')
+
+ if _is_illegal_header_value(values[i]):
+ raise ValueError('Invalid header value %r' % (values[i],))
+
+ value = b'\r\n\t'.join(values)
+ header = header + b': ' + value
+ self._output(header)
+
+ def endheaders(self, message_body=None, *, encode_chunked=False):
+ """Indicate that the last header line has been sent to the server.
+
+ This method sends the request to the server. The optional message_body
+ argument can be used to pass a message body associated with the
+ request.
+ """
+ if self.__state == _CS_REQ_STARTED:
+ self.__state = _CS_REQ_SENT
+ else:
+ raise CannotSendHeader()
+ self._send_output(message_body, encode_chunked=encode_chunked)
+
+ def request(self, method, url, body=None, headers={}, *,
+ encode_chunked=False):
+ """Send a complete request to the server."""
+ self._send_request(method, url, body, headers, encode_chunked)
+
+ def _send_request(self, method, url, body, headers, encode_chunked):
+ # Honor explicitly requested Host: and Accept-Encoding: headers.
+ header_names = frozenset(k.lower() for k in headers)
+ skips = {}
+ if 'host' in header_names:
+ skips['skip_host'] = 1
+ if 'accept-encoding' in header_names:
+ skips['skip_accept_encoding'] = 1
+
+ self.putrequest(method, url, **skips)
+
+ # chunked encoding will happen if HTTP/1.1 is used and either
+ # the caller passes encode_chunked=True or the following
+ # conditions hold:
+ # 1. content-length has not been explicitly set
+ # 2. the body is a file or iterable, but not a str or bytes-like
+ # 3. Transfer-Encoding has NOT been explicitly set by the caller
+
+ if 'content-length' not in header_names:
+ # only chunk body if not explicitly set for backwards
+ # compatibility, assuming the client code is already handling the
+ # chunking
+ if 'transfer-encoding' not in header_names:
+ # if content-length cannot be automatically determined, fall
+ # back to chunked encoding
+ encode_chunked = False
+ content_length = self._get_content_length(body, method)
+ if content_length is None:
+ if body is not None:
+ if self.debuglevel > 0:
+ print('Unable to determine size of %r' % body)
+ encode_chunked = True
+ self.putheader('Transfer-Encoding', 'chunked')
+ else:
+ self.putheader('Content-Length', str(content_length))
+ else:
+ encode_chunked = False
+
+ for hdr, value in headers.items():
+ self.putheader(hdr, value)
+ if isinstance(body, str):
+ # RFC 2616 Section 3.7.1 says that text default has a
+ # default charset of iso-8859-1.
+ body = _encode(body, 'body')
+ self.endheaders(body, encode_chunked=encode_chunked)
+
+ def getresponse(self):
+ """Get the response from the server.
+
+ If the HTTPConnection is in the correct state, returns an
+ instance of HTTPResponse or of whatever object is returned by
+ the response_class variable.
+
+ If a request has not been sent or if a previous response has
+ not been handled, ResponseNotReady is raised. If the HTTP
+ response indicates that the connection should be closed, then
+ it will be closed before the response is returned. When the
+ connection is closed, the underlying socket is closed.
+ """
+
+ # if a prior response has been completed, then forget about it.
+ if self.__response and self.__response.isclosed():
+ self.__response = None
+
+ # if a prior response exists, then it must be completed (otherwise, we
+ # cannot read this response's header to determine the connection-close
+ # behavior)
+ #
+ # note: if a prior response existed, but was connection-close, then the
+ # socket and response were made independent of this HTTPConnection
+ # object since a new request requires that we open a whole new
+ # connection
+ #
+ # this means the prior response had one of two states:
+ # 1) will_close: this connection was reset and the prior socket and
+ # response operate independently
+ # 2) persistent: the response was retained and we await its
+ # isclosed() status to become true.
+ #
+ if self.__state != _CS_REQ_SENT or self.__response:
+ raise ResponseNotReady(self.__state)
+
+ if self.debuglevel > 0:
+ response = self.response_class(self.sock, self.debuglevel,
+ method=self._method)
+ else:
+ response = self.response_class(self.sock, method=self._method)
+
+ try:
+ try:
+ response.begin()
+ except ConnectionError:
+ self.close()
+ raise
+ assert response.will_close != _UNKNOWN
+ self.__state = _CS_IDLE
+
+ if response.will_close:
+ # this effectively passes the connection to the response
+ self.close()
+ else:
+ # remember this, so we can tell when it is complete
+ self.__response = response
+
+ return response
+ except:
+ response.close()
+ raise
+
+try:
+ import ssl
+except ImportError:
+ pass
+else:
+ class HTTPSConnection(HTTPConnection):
+ "This class allows communication via SSL."
+
+ default_port = HTTPS_PORT
+
+ # XXX Should key_file and cert_file be deprecated in favour of context?
+
+ def __init__(self, host, port=None, key_file=None, cert_file=None,
+ timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
+ source_address=None, *, context=None,
+ check_hostname=None):
+ super(HTTPSConnection, self).__init__(host, port, timeout,
+ source_address)
+ if (key_file is not None or cert_file is not None or
+ check_hostname is not None):
+ import warnings
+ warnings.warn("key_file, cert_file and check_hostname are "
+ "deprecated, use a custom context instead.",
+ DeprecationWarning, 2)
+ self.key_file = key_file
+ self.cert_file = cert_file
+ if context is None:
+ context = ssl._create_default_https_context()
+ will_verify = context.verify_mode != ssl.CERT_NONE
+ if check_hostname is None:
+ check_hostname = context.check_hostname
+ if check_hostname and not will_verify:
+ raise ValueError("check_hostname needs a SSL context with "
+ "either CERT_OPTIONAL or CERT_REQUIRED")
+ if key_file or cert_file:
+ context.load_cert_chain(cert_file, key_file)
+ self._context = context
+ self._check_hostname = check_hostname
+
+ def connect(self):
+ "Connect to a host on a given (SSL) port."
+
+ super().connect()
+
+ if self._tunnel_host:
+ server_hostname = self._tunnel_host
+ else:
+ server_hostname = self.host
+
+ self.sock = self._context.wrap_socket(self.sock,
+ server_hostname=server_hostname)
+ if not self._context.check_hostname and self._check_hostname:
+ try:
+ ssl.match_hostname(self.sock.getpeercert(), server_hostname)
+ except Exception:
+ self.sock.shutdown(socket.SHUT_RDWR)
+ self.sock.close()
+ raise
+
+ __all__.append("HTTPSConnection")
+
+class HTTPException(Exception):
+ # Subclasses that define an __init__ must call Exception.__init__
+ # or define self.args. Otherwise, str() will fail.
+ pass
+
+class NotConnected(HTTPException):
+ pass
+
+class InvalidURL(HTTPException):
+ pass
+
+class UnknownProtocol(HTTPException):
+ def __init__(self, version):
+ self.args = version,
+ self.version = version
+
+class UnknownTransferEncoding(HTTPException):
+ pass
+
+class UnimplementedFileMode(HTTPException):
+ pass
+
+class IncompleteRead(HTTPException):
+ def __init__(self, partial, expected=None):
+ self.args = partial,
+ self.partial = partial
+ self.expected = expected
+ def __repr__(self):
+ if self.expected is not None:
+ e = ', %i more expected' % self.expected
+ else:
+ e = ''
+ return '%s(%i bytes read%s)' % (self.__class__.__name__,
+ len(self.partial), e)
+ def __str__(self):
+ return repr(self)
+
+class ImproperConnectionState(HTTPException):
+ pass
+
+class CannotSendRequest(ImproperConnectionState):
+ pass
+
+class CannotSendHeader(ImproperConnectionState):
+ pass
+
+class ResponseNotReady(ImproperConnectionState):
+ pass
+
+class BadStatusLine(HTTPException):
+ def __init__(self, line):
+ if not line:
+ line = repr(line)
+ self.args = line,
+ self.line = line
+
+class LineTooLong(HTTPException):
+ def __init__(self, line_type):
+ HTTPException.__init__(self, "got more than %d bytes when reading %s"
+ % (_MAXLINE, line_type))
+
+class RemoteDisconnected(ConnectionResetError, BadStatusLine):
+ def __init__(self, *pos, **kw):
+ BadStatusLine.__init__(self, "")
+ ConnectionResetError.__init__(self, *pos, **kw)
+
+# for backwards compatibility
+error = HTTPException
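
To tie the state machine in the module docstring to the API, a minimal
illustrative request/response round trip (not part of the patch; example.com is
just a placeholder host):

    import http.client

    conn = http.client.HTTPConnection("example.com", 80, timeout=10)
    conn.request("GET", "/")        # Idle -> Request-started -> Request-sent
    resp = conn.getresponse()       # only legal once the request has been sent
    print(resp.status, resp.reason)
    body = resp.read()              # consume the body before reusing the connection
    conn.close()
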
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
new file mode 100644
index 00000000..dcf41018
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
@@ -0,0 +1,1443 @@
+"""Core implementation of path-based import.
+
+This module is NOT meant to be directly imported! It has been designed such
+that it can be bootstrapped into Python as the implementation of import. As
+such it requires the injection of specific modules and attributes in order to
+work. One should use importlib as the public-facing version of this module.
+
+"""
+#
+# IMPORTANT: Whenever making changes to this module, be sure to run
+# a top-level make in order to get the frozen version of the module
+# updated. Not doing so will cause the Makefile to fail for
+# all others who don't have a ./python around to freeze the module
+# in the early stages of compilation.
+#
+
+# See importlib._setup() for what is injected into the global namespace.
+
+# When editing this code be aware that code executed at import time CANNOT
+# reference any injected objects! This includes not only global code but also
+# anything specified at the class level.
+
+# Bootstrap-related code ######################################################
+_CASE_INSENSITIVE_PLATFORMS_STR_KEY = 'win',
+_CASE_INSENSITIVE_PLATFORMS_BYTES_KEY = 'cygwin', 'darwin'
+_CASE_INSENSITIVE_PLATFORMS = (_CASE_INSENSITIVE_PLATFORMS_BYTES_KEY
+ + _CASE_INSENSITIVE_PLATFORMS_STR_KEY)
+
+
+def _make_relax_case():
+ if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS):
+ if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS_STR_KEY):
+ key = 'PYTHONCASEOK'
+ else:
+ key = b'PYTHONCASEOK'
+
+ def _relax_case():
+ """True if filenames must be checked case-insensitively."""
+ return key in _os.environ
+ else:
+ def _relax_case():
+ """True if filenames must be checked case-insensitively."""
+ return False
+ return _relax_case
+
+
+def _w_long(x):
+ """Convert a 32-bit integer to little-endian."""
+ return (int(x) & 0xFFFFFFFF).to_bytes(4, 'little')
+
+
+def _r_long(int_bytes):
+ """Convert 4 bytes in little-endian to an integer."""
+ return int.from_bytes(int_bytes, 'little')
+
+
+def _path_join(*path_parts):
+ """Replacement for os.path.join()."""
+ return path_sep.join([part.rstrip(path_separators)
+ for part in path_parts if part])
+
+
+def _path_split(path):
+ """Replacement for os.path.split()."""
+ if len(path_separators) == 1:
+ front, _, tail = path.rpartition(path_sep)
+ return front, tail
+ for x in reversed(path):
+ if x in path_separators:
+ front, tail = path.rsplit(x, maxsplit=1)
+ return front, tail
+ return '', path
+
+
+def _path_stat(path):
+ """Stat the path.
+
+ Made a separate function to make it easier to override in experiments
+ (e.g. cache stat results).
+
+ """
+ return _os.stat(path)
+
+
+def _path_is_mode_type(path, mode):
+ """Test whether the path is the specified mode type."""
+ try:
+ stat_info = _path_stat(path)
+ except OSError:
+ return False
+ return (stat_info.st_mode & 0o170000) == mode
+
+
+def _path_isfile(path):
+ """Replacement for os.path.isfile."""
+ return _path_is_mode_type(path, 0o100000)
+
+
+def _path_isdir(path):
+ """Replacement for os.path.isdir."""
+ if not path:
+ path = _os.getcwd()
+ return _path_is_mode_type(path, 0o040000)
+
+
+def _write_atomic(path, data, mode=0o666):
+ """Best-effort function to write data to a path atomically.
+ Be prepared to handle a FileExistsError if concurrent writing of the
+ temporary file is attempted."""
+ # id() is used to generate a pseudo-random filename.
+ path_tmp = '{}.{}'.format(path, id(path))
+ fd = _os.open(path_tmp,
+ _os.O_EXCL | _os.O_CREAT | _os.O_WRONLY, mode & 0o666)
+ try:
+        # We first write data to a temporary file, and then use os.rename() to
+        # move it into place (best effort, per the docstring above).
+ with _io.FileIO(fd, 'wb') as file:
+ file.write(data)
+ _os.rename(path_tmp, path)
+ except OSError:
+ try:
+ _os.unlink(path_tmp)
+ except OSError:
+ pass
+ raise
+
+
+_code_type = type(_write_atomic.__code__)
+
+
+# Finder/loader utility code ###############################################
+
+# Magic word to reject .pyc files generated by other Python versions.
+# It should change for each incompatible change to the bytecode.
+#
+# The value of CR and LF is incorporated so if you ever read or write
+# a .pyc file in text mode the magic number will be wrong; also, the
+# Apple MPW compiler swaps their values, botching string constants.
+#
+# There were a variety of old schemes for setting the magic number.
+# The current working scheme is to increment the previous value by
+# 10.
+#
+# Starting with the adoption of PEP 3147 in Python 3.2, every bump in magic
+# number also includes a new "magic tag", i.e. a human readable string used
+# to represent the magic number in __pycache__ directories. When you change
+# the magic number, you must also set a new unique magic tag. Generally this
+# can be named after the Python major version of the magic number bump, but
+# it can really be anything, as long as it's different than anything else
+# that's come before. The tags are included in the following table, starting
+# with Python 3.2a0.
+#
+# Known values:
+# Python 1.5: 20121
+# Python 1.5.1: 20121
+# Python 1.5.2: 20121
+# Python 1.6: 50428
+# Python 2.0: 50823
+# Python 2.0.1: 50823
+# Python 2.1: 60202
+# Python 2.1.1: 60202
+# Python 2.1.2: 60202
+# Python 2.2: 60717
+# Python 2.3a0: 62011
+# Python 2.3a0: 62021
+# Python 2.3a0: 62011 (!)
+# Python 2.4a0: 62041
+# Python 2.4a3: 62051
+# Python 2.4b1: 62061
+# Python 2.5a0: 62071
+# Python 2.5a0: 62081 (ast-branch)
+# Python 2.5a0: 62091 (with)
+# Python 2.5a0: 62092 (changed WITH_CLEANUP opcode)
+# Python 2.5b3: 62101 (fix wrong code: for x, in ...)
+# Python 2.5b3: 62111 (fix wrong code: x += yield)
+# Python 2.5c1: 62121 (fix wrong lnotab with for loops and
+# storing constants that should have been removed)
+# Python 2.5c2: 62131 (fix wrong code: for x, in ... in listcomp/genexp)
+# Python 2.6a0: 62151 (peephole optimizations and STORE_MAP opcode)
+# Python 2.6a1: 62161 (WITH_CLEANUP optimization)
+# Python 2.7a0: 62171 (optimize list comprehensions/change LIST_APPEND)
+# Python 2.7a0: 62181 (optimize conditional branches:
+# introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE)
+# Python 2.7a0 62191 (introduce SETUP_WITH)
+# Python 2.7a0 62201 (introduce BUILD_SET)
+# Python 2.7a0 62211 (introduce MAP_ADD and SET_ADD)
+# Python 3000: 3000
+# 3010 (removed UNARY_CONVERT)
+# 3020 (added BUILD_SET)
+# 3030 (added keyword-only parameters)
+# 3040 (added signature annotations)
+# 3050 (print becomes a function)
+# 3060 (PEP 3115 metaclass syntax)
+# 3061 (string literals become unicode)
+# 3071 (PEP 3109 raise changes)
+# 3081 (PEP 3137 make __file__ and __name__ unicode)
+# 3091 (kill str8 interning)
+# 3101 (merge from 2.6a0, see 62151)
+# 3103 (__file__ points to source file)
+# Python 3.0a4: 3111 (WITH_CLEANUP optimization).
+# Python 3.0a5: 3131 (lexical exception stacking, including POP_EXCEPT)
+# Python 3.1a0: 3141 (optimize list, set and dict comprehensions:
+# change LIST_APPEND and SET_ADD, add MAP_ADD)
+# Python 3.1a0: 3151 (optimize conditional branches:
+# introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE)
+# Python 3.2a0: 3160 (add SETUP_WITH)
+# tag: cpython-32
+# Python 3.2a1: 3170 (add DUP_TOP_TWO, remove DUP_TOPX and ROT_FOUR)
+# tag: cpython-32
+# Python 3.2a2 3180 (add DELETE_DEREF)
+# Python 3.3a0 3190 __class__ super closure changed
+# Python 3.3a0 3200 (__qualname__ added)
+# 3210 (added size modulo 2**32 to the pyc header)
+# Python 3.3a1 3220 (changed PEP 380 implementation)
+# Python 3.3a4 3230 (revert changes to implicit __class__ closure)
+# Python 3.4a1 3250 (evaluate positional default arguments before
+# keyword-only defaults)
+# Python 3.4a1 3260 (add LOAD_CLASSDEREF; allow locals of class to override
+# free vars)
+# Python 3.4a1 3270 (various tweaks to the __class__ closure)
+# Python 3.4a1 3280 (remove implicit class argument)
+# Python 3.4a4 3290 (changes to __qualname__ computation)
+# Python 3.4a4 3300 (more changes to __qualname__ computation)
+# Python 3.4rc2 3310 (alter __qualname__ computation)
+# Python 3.5a0 3320 (matrix multiplication operator)
+# Python 3.5b1 3330 (PEP 448: Additional Unpacking Generalizations)
+# Python 3.5b2 3340 (fix dictionary display evaluation order #11205)
+# Python 3.5b2 3350 (add GET_YIELD_FROM_ITER opcode #24400)
+# Python 3.5.2 3351 (fix BUILD_MAP_UNPACK_WITH_CALL opcode #27286)
+# Python 3.6a0 3360 (add FORMAT_VALUE opcode #25483
+# Python 3.6a0 3361 (lineno delta of code.co_lnotab becomes signed)
+# Python 3.6a1 3370 (16 bit wordcode)
+# Python 3.6a1 3371 (add BUILD_CONST_KEY_MAP opcode #27140)
+# Python 3.6a1 3372 (MAKE_FUNCTION simplification, remove MAKE_CLOSURE
+# #27095)
+# Python 3.6b1 3373 (add BUILD_STRING opcode #27078)
+# Python 3.6b1 3375 (add SETUP_ANNOTATIONS and STORE_ANNOTATION opcodes
+# #27985)
+# Python 3.6b1 3376 (simplify CALL_FUNCTIONs & BUILD_MAP_UNPACK_WITH_CALL)
+# Python 3.6b1 3377 (set __class__ cell from type.__new__ #23722)
+# Python 3.6b2 3378 (add BUILD_TUPLE_UNPACK_WITH_CALL #28257)
+# Python 3.6rc1 3379 (more thorough __class__ validation #23722)
+#
+# MAGIC must change whenever the bytecode emitted by the compiler may no
+# longer be understood by older implementations of the eval loop (usually
+# due to the addition of new opcodes).
+#
+# Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array
+# in PC/launcher.c must also be updated.
+
+MAGIC_NUMBER = (3379).to_bytes(2, 'little') + b'\r\n'
+_RAW_MAGIC_NUMBER = int.from_bytes(MAGIC_NUMBER, 'little') # For import.c
+
+_PYCACHE = '__pycache__'
+_OPT = 'opt-'
+
+SOURCE_SUFFIXES = ['.py'] # _setup() adds .pyw as needed.
+
+BYTECODE_SUFFIXES = ['.pyc']
+# Deprecated.
+DEBUG_BYTECODE_SUFFIXES = OPTIMIZED_BYTECODE_SUFFIXES = BYTECODE_SUFFIXES
+
+def cache_from_source(path, debug_override=None, *, optimization=None):
+ """Given the path to a .py file, return the path to its .pyc file.
+
+ The .py file does not need to exist; this simply returns the path to the
+ .pyc file calculated as if the .py file were imported.
+
+ The 'optimization' parameter controls the presumed optimization level of
+ the bytecode file. If 'optimization' is not None, the string representation
+ of the argument is taken and verified to be alphanumeric (else ValueError
+ is raised).
+
+ The debug_override parameter is deprecated. If debug_override is not None,
+ a True value is the same as setting 'optimization' to the empty string
+ while a False value is equivalent to setting 'optimization' to '1'.
+
+ If sys.implementation.cache_tag is None then NotImplementedError is raised.
+
+ """
+ if debug_override is not None:
+ _warnings.warn('the debug_override parameter is deprecated; use '
+ "'optimization' instead", DeprecationWarning)
+ if optimization is not None:
+ message = 'debug_override or optimization must be set to None'
+ raise TypeError(message)
+ optimization = '' if debug_override else 1
+ #path = _os.fspath(path) #JP hack
+ head, tail = _path_split(path)
+ base, sep, rest = tail.rpartition('.')
+ tag = sys.implementation.cache_tag
+ if tag is None:
+ raise NotImplementedError('sys.implementation.cache_tag is None')
+ almost_filename = ''.join([(base if base else rest), sep, tag])
+ if optimization is None:
+ if sys.flags.optimize == 0:
+ optimization = ''
+ else:
+ optimization = sys.flags.optimize
+ optimization = str(optimization)
+ if optimization != '':
+ if not optimization.isalnum():
+ raise ValueError('{!r} is not alphanumeric'.format(optimization))
+ almost_filename = '{}.{}{}'.format(almost_filename, _OPT, optimization)
+ return _path_join(head, _PYCACHE, almost_filename + BYTECODE_SUFFIXES[0])
+
+
+def source_from_cache(path):
+ """Given the path to a .pyc. file, return the path to its .py file.
+
+ The .pyc file does not need to exist; this simply returns the path to
+ the .py file calculated to correspond to the .pyc file. If path does
+ not conform to PEP 3147/488 format, ValueError will be raised. If
+ sys.implementation.cache_tag is None then NotImplementedError is raised.
+
+ """
+ if sys.implementation.cache_tag is None:
+ raise NotImplementedError('sys.implementation.cache_tag is None')
+ #path = _os.fspath(path) #JP hack
+ head, pycache_filename = _path_split(path)
+ head, pycache = _path_split(head)
+ if pycache != _PYCACHE:
+ raise ValueError('{} not bottom-level directory in '
+ '{!r}'.format(_PYCACHE, path))
+ dot_count = pycache_filename.count('.')
+ if dot_count not in {2, 3}:
+ raise ValueError('expected only 2 or 3 dots in '
+ '{!r}'.format(pycache_filename))
+ elif dot_count == 3:
+ optimization = pycache_filename.rsplit('.', 2)[-2]
+ if not optimization.startswith(_OPT):
+ raise ValueError("optimization portion of filename does not start "
+ "with {!r}".format(_OPT))
+ opt_level = optimization[len(_OPT):]
+ if not opt_level.isalnum():
+ raise ValueError("optimization level {!r} is not an alphanumeric "
+ "value".format(optimization))
+ base_filename = pycache_filename.partition('.')[0]
+ return _path_join(head, base_filename + SOURCE_SUFFIXES[0])
+
+
+def _get_sourcefile(bytecode_path):
+ """Convert a bytecode file path to a source path (if possible).
+
+ This function exists purely for backwards-compatibility for
+ PyImport_ExecCodeModuleWithFilenames() in the C API.
+
+ """
+ if len(bytecode_path) == 0:
+ return None
+ rest, _, extension = bytecode_path.rpartition('.')
+ if not rest or extension.lower()[-3:-1] != 'py':
+ return bytecode_path
+ try:
+ source_path = source_from_cache(bytecode_path)
+ except (NotImplementedError, ValueError):
+ source_path = bytecode_path[:-1]
+ return source_path if _path_isfile(source_path) else bytecode_path
+
+
+def _get_cached(filename):
+ if filename.endswith(tuple(SOURCE_SUFFIXES)):
+ try:
+ return cache_from_source(filename)
+ except NotImplementedError:
+ pass
+ elif filename.endswith(tuple(BYTECODE_SUFFIXES)):
+ return filename
+ else:
+ return None
+
+
+def _calc_mode(path):
+ """Calculate the mode permissions for a bytecode file."""
+ try:
+ mode = _path_stat(path).st_mode
+ except OSError:
+ mode = 0o666
+ # We always ensure write access so we can update cached files
+ # later even when the source files are read-only on Windows (#6074)
+ mode |= 0o200
+ return mode
+
+
+def _check_name(method):
+ """Decorator to verify that the module being requested matches the one the
+ loader can handle.
+
+    The first argument (self) must define 'name', which the second argument is
+ compared against. If the comparison fails then ImportError is raised.
+
+ """
+ def _check_name_wrapper(self, name=None, *args, **kwargs):
+ if name is None:
+ name = self.name
+ elif self.name != name:
+ raise ImportError('loader for %s cannot handle %s' %
+ (self.name, name), name=name)
+ return method(self, name, *args, **kwargs)
+ try:
+ _wrap = _bootstrap._wrap
+ except NameError:
+ # XXX yuck
+ def _wrap(new, old):
+ for replace in ['__module__', '__name__', '__qualname__', '__doc__']:
+ if hasattr(old, replace):
+ setattr(new, replace, getattr(old, replace))
+ new.__dict__.update(old.__dict__)
+ _wrap(_check_name_wrapper, method)
+ return _check_name_wrapper
+
+
+def _find_module_shim(self, fullname):
+ """Try to find a loader for the specified module by delegating to
+ self.find_loader().
+
+ This method is deprecated in favor of finder.find_spec().
+
+ """
+ # Call find_loader(). If it returns a string (indicating this
+ # is a namespace package portion), generate a warning and
+ # return None.
+ loader, portions = self.find_loader(fullname)
+ if loader is None and len(portions):
+ msg = 'Not importing directory {}: missing __init__'
+ _warnings.warn(msg.format(portions[0]), ImportWarning)
+ return loader
+
+
+def _validate_bytecode_header(data, source_stats=None, name=None, path=None):
+ """Validate the header of the passed-in bytecode against source_stats (if
+    given) and return the bytecode that can be compiled by compile().
+
+ All other arguments are used to enhance error reporting.
+
+ ImportError is raised when the magic number is incorrect or the bytecode is
+ found to be stale. EOFError is raised when the data is found to be
+ truncated.
+
+ """
+ exc_details = {}
+ if name is not None:
+ exc_details['name'] = name
+ else:
+ # To prevent having to make all messages have a conditional name.
+ name = '<bytecode>'
+ if path is not None:
+ exc_details['path'] = path
+ magic = data[:4]
+ raw_timestamp = data[4:8]
+ raw_size = data[8:12]
+ if magic != MAGIC_NUMBER:
+ message = 'bad magic number in {!r}: {!r}'.format(name, magic)
+ _bootstrap._verbose_message('{}', message)
+ raise ImportError(message, **exc_details)
+ elif len(raw_timestamp) != 4:
+ message = 'reached EOF while reading timestamp in {!r}'.format(name)
+ _bootstrap._verbose_message('{}', message)
+ raise EOFError(message)
+ elif len(raw_size) != 4:
+ message = 'reached EOF while reading size of source in {!r}'.format(name)
+ _bootstrap._verbose_message('{}', message)
+ raise EOFError(message)
+ if source_stats is not None:
+ try:
+ source_mtime = int(source_stats['mtime'])
+ except KeyError:
+ pass
+ else:
+ if _r_long(raw_timestamp) != source_mtime:
+ message = 'bytecode is stale for {!r}'.format(name)
+ _bootstrap._verbose_message('{}', message)
+ raise ImportError(message, **exc_details)
+ try:
+ source_size = source_stats['size'] & 0xFFFFFFFF
+ except KeyError:
+ pass
+ else:
+ if _r_long(raw_size) != source_size:
+ raise ImportError('bytecode is stale for {!r}'.format(name),
+ **exc_details)
+ return data[12:]
+
+
+def _compile_bytecode(data, name=None, bytecode_path=None, source_path=None):
+ """Compile bytecode as returned by _validate_bytecode_header()."""
+ code = marshal.loads(data)
+ if isinstance(code, _code_type):
+ _bootstrap._verbose_message('code object from {!r}', bytecode_path)
+ if source_path is not None:
+ _imp._fix_co_filename(code, source_path)
+ return code
+ else:
+ raise ImportError('Non-code object in {!r}'.format(bytecode_path),
+ name=name, path=bytecode_path)
+
+def _code_to_bytecode(code, mtime=0, source_size=0):
+ """Compile a code object into bytecode for writing out to a byte-compiled
+ file."""
+ data = bytearray(MAGIC_NUMBER)
+ data.extend(_w_long(mtime))
+ data.extend(_w_long(source_size))
+ data.extend(marshal.dumps(code))
+ return data
+
+
+def decode_source(source_bytes):
+ """Decode bytes representing source code and return the string.
+
+ Universal newline support is used in the decoding.
+ """
+ import tokenize # To avoid bootstrap issues.
+ source_bytes_readline = _io.BytesIO(source_bytes).readline
+ encoding = tokenize.detect_encoding(source_bytes_readline)
+ newline_decoder = _io.IncrementalNewlineDecoder(None, True)
+ return newline_decoder.decode(source_bytes.decode(encoding[0]))
+
+
+# Module specifications #######################################################
+
+_POPULATE = object()
+
+
+def spec_from_file_location(name, location=None, *, loader=None,
+ submodule_search_locations=_POPULATE):
+ """Return a module spec based on a file location.
+
+ To indicate that the module is a package, set
+ submodule_search_locations to a list of directory paths. An
+    empty list is sufficient, though it's not otherwise useful to the
+ import system.
+
+ The loader must take a spec as its only __init__() arg.
+
+ """
+ if location is None:
+ # The caller may simply want a partially populated location-
+ # oriented spec. So we set the location to a bogus value and
+ # fill in as much as we can.
+ location = '<unknown>'
+ if hasattr(loader, 'get_filename'):
+ # ExecutionLoader
+ try:
+ location = loader.get_filename(name)
+ except ImportError:
+ pass
+ else:
+ #location = _os.fspath(location) #JP hack
+ location = location
+ # If the location is on the filesystem, but doesn't actually exist,
+ # we could return None here, indicating that the location is not
+ # valid. However, we don't have a good way of testing since an
+ # indirect location (e.g. a zip file or URL) will look like a
+ # non-existent file relative to the filesystem.
+
+ spec = _bootstrap.ModuleSpec(name, loader, origin=location)
+ spec._set_fileattr = True
+
+ # Pick a loader if one wasn't provided.
+ if loader is None:
+ for loader_class, suffixes in _get_supported_file_loaders():
+ if location.endswith(tuple(suffixes)):
+ loader = loader_class(name, location)
+ spec.loader = loader
+ break
+ else:
+ return None
+
+ # Set submodule_search_paths appropriately.
+ if submodule_search_locations is _POPULATE:
+ # Check the loader.
+ if hasattr(loader, 'is_package'):
+ try:
+ is_package = loader.is_package(name)
+ except ImportError:
+ pass
+ else:
+ if is_package:
+ spec.submodule_search_locations = []
+ else:
+ spec.submodule_search_locations = submodule_search_locations
+ if spec.submodule_search_locations == []:
+ if location:
+ dirname = _path_split(location)[0]
+ spec.submodule_search_locations.append(dirname)
+
+ return spec
+
+
+# Loaders #####################################################################
+
+class WindowsRegistryFinder:
+
+ """Meta path finder for modules declared in the Windows registry."""
+
+ REGISTRY_KEY = (
+ 'Software\\Python\\PythonCore\\{sys_version}'
+ '\\Modules\\{fullname}')
+ REGISTRY_KEY_DEBUG = (
+ 'Software\\Python\\PythonCore\\{sys_version}'
+ '\\Modules\\{fullname}\\Debug')
+ DEBUG_BUILD = False # Changed in _setup()
+
+ @classmethod
+ def _open_registry(cls, key):
+ try:
+ return _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, key)
+ except OSError:
+ return _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, key)
+
+ @classmethod
+ def _search_registry(cls, fullname):
+ if cls.DEBUG_BUILD:
+ registry_key = cls.REGISTRY_KEY_DEBUG
+ else:
+ registry_key = cls.REGISTRY_KEY
+ key = registry_key.format(fullname=fullname,
+ sys_version='%d.%d' % sys.version_info[:2])
+ try:
+ with cls._open_registry(key) as hkey:
+ filepath = _winreg.QueryValue(hkey, '')
+ except OSError:
+ return None
+ return filepath
+
+ @classmethod
+ def find_spec(cls, fullname, path=None, target=None):
+ filepath = cls._search_registry(fullname)
+ if filepath is None:
+ return None
+ try:
+ _path_stat(filepath)
+ except OSError:
+ return None
+ for loader, suffixes in _get_supported_file_loaders():
+ if filepath.endswith(tuple(suffixes)):
+ spec = _bootstrap.spec_from_loader(fullname,
+ loader(fullname, filepath),
+ origin=filepath)
+ return spec
+
+ @classmethod
+ def find_module(cls, fullname, path=None):
+ """Find module named in the registry.
+
+ This method is deprecated. Use exec_module() instead.
+
+ """
+ spec = cls.find_spec(fullname, path)
+ if spec is not None:
+ return spec.loader
+ else:
+ return None
+
+
+class _LoaderBasics:
+
+ """Base class of common code needed by both SourceLoader and
+ SourcelessFileLoader."""
+
+ def is_package(self, fullname):
+ """Concrete implementation of InspectLoader.is_package by checking if
+ the path returned by get_filename has a filename of '__init__.py'."""
+ filename = _path_split(self.get_filename(fullname))[1]
+ filename_base = filename.rsplit('.', 1)[0]
+ tail_name = fullname.rpartition('.')[2]
+ return filename_base == '__init__' and tail_name != '__init__'
+
+ def create_module(self, spec):
+ """Use default semantics for module creation."""
+
+ def exec_module(self, module):
+ """Execute the module."""
+ code = self.get_code(module.__name__)
+ if code is None:
+ raise ImportError('cannot load module {!r} when get_code() '
+ 'returns None'.format(module.__name__))
+ _bootstrap._call_with_frames_removed(exec, code, module.__dict__)
+
+ def load_module(self, fullname):
+ """This module is deprecated."""
+ return _bootstrap._load_module_shim(self, fullname)
+
+
+class SourceLoader(_LoaderBasics):
+
+ def path_mtime(self, path):
+ """Optional method that returns the modification time (an int) for the
+ specified path, where path is a str.
+
+ Raises IOError when the path cannot be handled.
+ """
+ raise IOError
+
+ def path_stats(self, path):
+ """Optional method returning a metadata dict for the specified path
+ to by the path (str).
+ Possible keys:
+ - 'mtime' (mandatory) is the numeric timestamp of last source
+ code modification;
+ - 'size' (optional) is the size in bytes of the source code.
+
+ Implementing this method allows the loader to read bytecode files.
+ Raises IOError when the path cannot be handled.
+ """
+ return {'mtime': self.path_mtime(path)}
+
+ def _cache_bytecode(self, source_path, cache_path, data):
+ """Optional method which writes data (bytes) to a file path (a str).
+
+ Implementing this method allows for the writing of bytecode files.
+
+ The source path is needed in order to correctly transfer permissions
+ """
+ # For backwards compatibility, we delegate to set_data()
+ return self.set_data(cache_path, data)
+
+ def set_data(self, path, data):
+ """Optional method which writes data (bytes) to a file path (a str).
+
+ Implementing this method allows for the writing of bytecode files.
+ """
+
+
+ def get_source(self, fullname):
+ """Concrete implementation of InspectLoader.get_source."""
+ path = self.get_filename(fullname)
+ try:
+ source_bytes = self.get_data(path)
+ except OSError as exc:
+ raise ImportError('source not available through get_data()',
+ name=fullname) from exc
+ return decode_source(source_bytes)
+
+ def source_to_code(self, data, path, *, _optimize=-1):
+ """Return the code object compiled from source.
+
+ The 'data' argument can be any object type that compile() supports.
+ """
+ return _bootstrap._call_with_frames_removed(compile, data, path, 'exec',
+ dont_inherit=True, optimize=_optimize)
+
+ def get_code(self, fullname):
+ """Concrete implementation of InspectLoader.get_code.
+
+ Reading of bytecode requires path_stats to be implemented. To write
+ bytecode, set_data must also be implemented.
+
+ """
+ source_path = self.get_filename(fullname)
+ source_mtime = None
+ try:
+ bytecode_path = cache_from_source(source_path)
+ except NotImplementedError:
+ bytecode_path = None
+ else:
+ try:
+ st = self.path_stats(source_path)
+ except IOError:
+ pass
+ else:
+ source_mtime = int(st['mtime'])
+ try:
+ data = self.get_data(bytecode_path)
+ except OSError:
+ pass
+ else:
+ try:
+ bytes_data = _validate_bytecode_header(data,
+ source_stats=st, name=fullname,
+ path=bytecode_path)
+ except (ImportError, EOFError):
+ pass
+ else:
+ _bootstrap._verbose_message('{} matches {}', bytecode_path,
+ source_path)
+ return _compile_bytecode(bytes_data, name=fullname,
+ bytecode_path=bytecode_path,
+ source_path=source_path)
+ source_bytes = self.get_data(source_path)
+ code_object = self.source_to_code(source_bytes, source_path)
+ _bootstrap._verbose_message('code object from {}', source_path)
+ if (not sys.dont_write_bytecode and bytecode_path is not None and
+ source_mtime is not None):
+ data = _code_to_bytecode(code_object, source_mtime,
+ len(source_bytes))
+ try:
+ self._cache_bytecode(source_path, bytecode_path, data)
+ _bootstrap._verbose_message('wrote {!r}', bytecode_path)
+ except NotImplementedError:
+ pass
+ return code_object
+
+
+class FileLoader:
+
+ """Base file loader class which implements the loader protocol methods that
+ require file system usage."""
+
+ def __init__(self, fullname, path):
+ """Cache the module name and the path to the file found by the
+ finder."""
+ self.name = fullname
+ self.path = path
+
+ def __eq__(self, other):
+ return (self.__class__ == other.__class__ and
+ self.__dict__ == other.__dict__)
+
+ def __hash__(self):
+ return hash(self.name) ^ hash(self.path)
+
+ @_check_name
+ def load_module(self, fullname):
+ """Load a module from a file.
+
+ This method is deprecated. Use exec_module() instead.
+
+ """
+ # The only reason for this method is for the name check.
+ # Issue #14857: Avoid the zero-argument form of super so the implementation
+ # of that form can be updated without breaking the frozen module
+ return super(FileLoader, self).load_module(fullname)
+
+ @_check_name
+ def get_filename(self, fullname):
+ """Return the path to the source file as found by the finder."""
+ return self.path
+
+ def get_data(self, path):
+ """Return the data from path as raw bytes."""
+ with _io.FileIO(path, 'r') as file:
+ return file.read()
+
+
+class SourceFileLoader(FileLoader, SourceLoader):
+
+ """Concrete implementation of SourceLoader using the file system."""
+
+ def path_stats(self, path):
+ """Return the metadata for the path."""
+ st = _path_stat(path)
+ return {'mtime': st.st_mtime, 'size': st.st_size}
+
+ def _cache_bytecode(self, source_path, bytecode_path, data):
+ # Adapt between the two APIs
+ mode = _calc_mode(source_path)
+ return self.set_data(bytecode_path, data, _mode=mode)
+
+ def set_data(self, path, data, *, _mode=0o666):
+ """Write bytes data to a file."""
+ parent, filename = _path_split(path)
+ path_parts = []
+ # Figure out what directories are missing.
+ while parent and not _path_isdir(parent):
+ parent, part = _path_split(parent)
+ path_parts.append(part)
+ # Create needed directories.
+ for part in reversed(path_parts):
+ parent = _path_join(parent, part)
+ try:
+ _os.mkdir(parent)
+ except FileExistsError:
+ # Probably another Python process already created the dir.
+ continue
+ except OSError as exc:
+ # Could be a permission error, read-only filesystem: just forget
+ # about writing the data.
+ _bootstrap._verbose_message('could not create {!r}: {!r}',
+ parent, exc)
+ return
+ try:
+ _write_atomic(path, data, _mode)
+ _bootstrap._verbose_message('created {!r}', path)
+ except OSError as exc:
+ # Same as above: just don't write the bytecode.
+ _bootstrap._verbose_message('could not create {!r}: {!r}', path,
+ exc)
+
+
+class SourcelessFileLoader(FileLoader, _LoaderBasics):
+
+ """Loader which handles sourceless file imports."""
+
+ def get_code(self, fullname):
+ path = self.get_filename(fullname)
+ data = self.get_data(path)
+ bytes_data = _validate_bytecode_header(data, name=fullname, path=path)
+ return _compile_bytecode(bytes_data, name=fullname, bytecode_path=path)
+
+ def get_source(self, fullname):
+ """Return None as there is no source code."""
+ return None
+
+
+# Filled in by _setup().
+EXTENSION_SUFFIXES = []
+
+
+class ExtensionFileLoader(FileLoader, _LoaderBasics):
+
+ """Loader for extension modules.
+
+ The constructor is designed to work with FileFinder.
+
+ """
+
+ def __init__(self, name, path):
+ self.name = name
+ self.path = path
+
+ def __eq__(self, other):
+ return (self.__class__ == other.__class__ and
+ self.__dict__ == other.__dict__)
+
+ def __hash__(self):
+ return hash(self.name) ^ hash(self.path)
+
+ def create_module(self, spec):
+ """Create an unitialized extension module"""
+ module = _bootstrap._call_with_frames_removed(
+ _imp.create_dynamic, spec)
+ _bootstrap._verbose_message('extension module {!r} loaded from {!r}',
+ spec.name, self.path)
+ return module
+
+ def exec_module(self, module):
+ """Initialize an extension module"""
+ _bootstrap._call_with_frames_removed(_imp.exec_dynamic, module)
+ _bootstrap._verbose_message('extension module {!r} executed from {!r}',
+ self.name, self.path)
+
+ def is_package(self, fullname):
+ """Return True if the extension module is a package."""
+ file_name = _path_split(self.path)[1]
+ return any(file_name == '__init__' + suffix
+ for suffix in EXTENSION_SUFFIXES)
+
+ def get_code(self, fullname):
+ """Return None as an extension module cannot create a code object."""
+ return None
+
+ def get_source(self, fullname):
+ """Return None as extension modules have no source code."""
+ return None
+
+ @_check_name
+ def get_filename(self, fullname):
+ """Return the path to the source file as found by the finder."""
+ return self.path
+
+
+class _NamespacePath:
+ """Represents a namespace package's path. It uses the module name
+ to find its parent module, and from there it looks up the parent's
+ __path__. When this changes, the module's own path is recomputed,
+ using path_finder. For top-level modules, the parent module's path
+ is sys.path."""
+
+ def __init__(self, name, path, path_finder):
+ self._name = name
+ self._path = path
+ self._last_parent_path = tuple(self._get_parent_path())
+ self._path_finder = path_finder
+
+ def _find_parent_path_names(self):
+ """Returns a tuple of (parent-module-name, parent-path-attr-name)"""
+ parent, dot, me = self._name.rpartition('.')
+ if dot == '':
+ # This is a top-level module. sys.path contains the parent path.
+ return 'sys', 'path'
+ # Not a top-level module. parent-module.__path__ contains the
+ # parent path.
+ return parent, '__path__'
+
+ def _get_parent_path(self):
+ parent_module_name, path_attr_name = self._find_parent_path_names()
+ return getattr(sys.modules[parent_module_name], path_attr_name)
+
+ def _recalculate(self):
+ # If the parent's path has changed, recalculate _path
+ parent_path = tuple(self._get_parent_path()) # Make a copy
+ if parent_path != self._last_parent_path:
+ spec = self._path_finder(self._name, parent_path)
+ # Note that no changes are made if a loader is returned, but we
+ # do remember the new parent path
+ if spec is not None and spec.loader is None:
+ if spec.submodule_search_locations:
+ self._path = spec.submodule_search_locations
+ self._last_parent_path = parent_path # Save the copy
+ return self._path
+
+ def __iter__(self):
+ return iter(self._recalculate())
+
+ def __setitem__(self, index, path):
+ self._path[index] = path
+
+ def __len__(self):
+ return len(self._recalculate())
+
+ def __repr__(self):
+ return '_NamespacePath({!r})'.format(self._path)
+
+ def __contains__(self, item):
+ return item in self._recalculate()
+
+ def append(self, item):
+ self._path.append(item)
+
+
+# We use this exclusively in module_from_spec() for backward-compatibility.
+class _NamespaceLoader:
+ def __init__(self, name, path, path_finder):
+ self._path = _NamespacePath(name, path, path_finder)
+
+ @classmethod
+ def module_repr(cls, module):
+ """Return repr for the module.
+
+ The method is deprecated. The import machinery does the job itself.
+
+ """
+ return '<module {!r} (namespace)>'.format(module.__name__)
+
+ def is_package(self, fullname):
+ return True
+
+ def get_source(self, fullname):
+ return ''
+
+ def get_code(self, fullname):
+ return compile('', '<string>', 'exec', dont_inherit=True)
+
+ def create_module(self, spec):
+ """Use default semantics for module creation."""
+
+ def exec_module(self, module):
+ pass
+
+ def load_module(self, fullname):
+ """Load a namespace module.
+
+ This method is deprecated. Use exec_module() instead.
+
+ """
+ # The import system never calls this method.
+ _bootstrap._verbose_message('namespace module loaded with path {!r}',
+ self._path)
+ return _bootstrap._load_module_shim(self, fullname)
+
+
+# Finders #####################################################################
+
+class PathFinder:
+
+ """Meta path finder for sys.path and package __path__ attributes."""
+
+ @classmethod
+ def invalidate_caches(cls):
+ """Call the invalidate_caches() method on all path entry finders
+        stored in sys.path_importer_cache (where implemented)."""
+ for finder in sys.path_importer_cache.values():
+ if hasattr(finder, 'invalidate_caches'):
+ finder.invalidate_caches()
+
+ @classmethod
+ def _path_hooks(cls, path):
+ """Search sys.path_hooks for a finder for 'path'."""
+ if sys.path_hooks is not None and not sys.path_hooks:
+ _warnings.warn('sys.path_hooks is empty', ImportWarning)
+ for hook in sys.path_hooks:
+ try:
+ return hook(path)
+ except ImportError:
+ continue
+ else:
+ return None
+
+ @classmethod
+ def _path_importer_cache(cls, path):
+ """Get the finder for the path entry from sys.path_importer_cache.
+
+ If the path entry is not in the cache, find the appropriate finder
+ and cache it. If no finder is available, store None.
+
+ """
+ if path == '':
+ try:
+ path = _os.getcwd()
+ except FileNotFoundError:
+ # Don't cache the failure as the cwd can easily change to
+ # a valid directory later on.
+ return None
+ try:
+ finder = sys.path_importer_cache[path]
+ except KeyError:
+ finder = cls._path_hooks(path)
+ sys.path_importer_cache[path] = finder
+ return finder
+
+ @classmethod
+ def _legacy_get_spec(cls, fullname, finder):
+ # This would be a good place for a DeprecationWarning if
+ # we ended up going that route.
+ if hasattr(finder, 'find_loader'):
+ loader, portions = finder.find_loader(fullname)
+ else:
+ loader = finder.find_module(fullname)
+ portions = []
+ if loader is not None:
+ return _bootstrap.spec_from_loader(fullname, loader)
+ spec = _bootstrap.ModuleSpec(fullname, None)
+ spec.submodule_search_locations = portions
+ return spec
+
+ @classmethod
+ def _get_spec(cls, fullname, path, target=None):
+ """Find the loader or namespace_path for this module/package name."""
+ # If this ends up being a namespace package, namespace_path is
+ # the list of paths that will become its __path__
+ namespace_path = []
+ for entry in path:
+ if not isinstance(entry, (str, bytes)):
+ continue
+ finder = cls._path_importer_cache(entry)
+ if finder is not None:
+ if hasattr(finder, 'find_spec'):
+ spec = finder.find_spec(fullname, target)
+ else:
+ spec = cls._legacy_get_spec(fullname, finder)
+ if spec is None:
+ continue
+ if spec.loader is not None:
+ return spec
+ portions = spec.submodule_search_locations
+ if portions is None:
+ raise ImportError('spec missing loader')
+ # This is possibly part of a namespace package.
+ # Remember these path entries (if any) for when we
+ # create a namespace package, and continue iterating
+ # on path.
+ namespace_path.extend(portions)
+ else:
+ spec = _bootstrap.ModuleSpec(fullname, None)
+ spec.submodule_search_locations = namespace_path
+ return spec
+
+ @classmethod
+ def find_spec(cls, fullname, path=None, target=None):
+ """Try to find a spec for 'fullname' on sys.path or 'path'.
+
+ The search is based on sys.path_hooks and sys.path_importer_cache.
+ """
+ if path is None:
+ path = sys.path
+ spec = cls._get_spec(fullname, path, target)
+ if spec is None:
+ return None
+ elif spec.loader is None:
+ namespace_path = spec.submodule_search_locations
+ if namespace_path:
+ # We found at least one namespace path. Return a
+ # spec which can create the namespace package.
+ spec.origin = 'namespace'
+ spec.submodule_search_locations = _NamespacePath(fullname, namespace_path, cls._get_spec)
+ return spec
+ else:
+ return None
+ else:
+ return spec
+
+ @classmethod
+ def find_module(cls, fullname, path=None):
+ """find the module on sys.path or 'path' based on sys.path_hooks and
+ sys.path_importer_cache.
+
+ This method is deprecated. Use find_spec() instead.
+
+ """
+ spec = cls.find_spec(fullname, path)
+ if spec is None:
+ return None
+ return spec.loader
+
+
+class FileFinder:
+
+ """File-based finder.
+
+ Interactions with the file system are cached for performance, being
+ refreshed when the directory the finder is handling has been modified.
+
+ """
+
+ def __init__(self, path, *loader_details):
+ """Initialize with the path to search on and a variable number of
+ 2-tuples containing the loader and the file suffixes the loader
+ recognizes."""
+ loaders = []
+ for loader, suffixes in loader_details:
+ loaders.extend((suffix, loader) for suffix in suffixes)
+ self._loaders = loaders
+ # Base (directory) path
+ self.path = path or '.'
+ self._path_mtime = -1
+ self._path_cache = set()
+ self._relaxed_path_cache = set()
+
+ def invalidate_caches(self):
+ """Invalidate the directory mtime."""
+ self._path_mtime = -1
+
+ find_module = _find_module_shim
+
+ def find_loader(self, fullname):
+ """Try to find a loader for the specified module, or the namespace
+ package portions. Returns (loader, list-of-portions).
+
+ This method is deprecated. Use find_spec() instead.
+
+ """
+ spec = self.find_spec(fullname)
+ if spec is None:
+ return None, []
+ return spec.loader, spec.submodule_search_locations or []
+
+ def _get_spec(self, loader_class, fullname, path, smsl, target):
+ loader = loader_class(fullname, path)
+ return spec_from_file_location(fullname, path, loader=loader,
+ submodule_search_locations=smsl)
+
+ def find_spec(self, fullname, target=None):
+ """Try to find a spec for the specified module.
+
+ Returns the matching spec, or None if not found.
+ """
+ is_namespace = False
+ tail_module = fullname.rpartition('.')[2]
+ try:
+ mtime = _path_stat(self.path or _os.getcwd()).st_mtime
+ except OSError:
+ mtime = -1
+ if mtime != self._path_mtime:
+ self._fill_cache()
+ self._path_mtime = mtime
+ # tail_module keeps the original casing, for __file__ and friends
+ if _relax_case():
+ cache = self._relaxed_path_cache
+ cache_module = tail_module.lower()
+ else:
+ cache = self._path_cache
+ cache_module = tail_module
+ # Check if the module is the name of a directory (and thus a package).
+ if cache_module in cache:
+ base_path = _path_join(self.path, tail_module)
+ for suffix, loader_class in self._loaders:
+ init_filename = '__init__' + suffix
+ full_path = _path_join(base_path, init_filename)
+ if _path_isfile(full_path):
+ return self._get_spec(loader_class, fullname, full_path, [base_path], target)
+ else:
+ # If a namespace package, return the path if we don't
+ # find a module in the next section.
+ is_namespace = _path_isdir(base_path)
+        # Check whether a file with a proper suffix exists.
+ for suffix, loader_class in self._loaders:
+ full_path = _path_join(self.path, tail_module + suffix)
+ _bootstrap._verbose_message('trying {}', full_path, verbosity=2)
+ if cache_module + suffix in cache:
+ if _path_isfile(full_path):
+ return self._get_spec(loader_class, fullname, full_path,
+ None, target)
+ if is_namespace:
+ _bootstrap._verbose_message('possible namespace for {}', base_path)
+ spec = _bootstrap.ModuleSpec(fullname, None)
+ spec.submodule_search_locations = [base_path]
+ return spec
+ return None
+
+ def _fill_cache(self):
+ """Fill the cache of potential modules and packages for this directory."""
+ path = self.path
+ try:
+ contents = _os.listdir(path or _os.getcwd())
+ except (FileNotFoundError, PermissionError, NotADirectoryError):
+ # Directory has either been removed, turned into a file, or made
+ # unreadable.
+ contents = []
+ # We store two cached versions, to handle runtime changes of the
+ # PYTHONCASEOK environment variable.
+ if not sys.platform.startswith('win'):
+ self._path_cache = set(contents)
+ else:
+ # Windows users can import modules with case-insensitive file
+ # suffixes (for legacy reasons). Make the suffix lowercase here
+ # so it's done once instead of for every import. This is safe as
+ # the specified suffixes to check against are always specified in a
+ # case-sensitive manner.
+ lower_suffix_contents = set()
+ for item in contents:
+ name, dot, suffix = item.partition('.')
+ if dot:
+ new_name = '{}.{}'.format(name, suffix.lower())
+ else:
+ new_name = name
+ lower_suffix_contents.add(new_name)
+ self._path_cache = lower_suffix_contents
+ if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS):
+ self._relaxed_path_cache = {fn.lower() for fn in contents}
+
+ @classmethod
+ def path_hook(cls, *loader_details):
+ """A class method which returns a closure to use on sys.path_hook
+ which will return an instance using the specified loaders and the path
+ called on the closure.
+
+ If the path called on the closure is not a directory, ImportError is
+ raised.
+
+ """
+ def path_hook_for_FileFinder(path):
+ """Path hook for importlib.machinery.FileFinder."""
+ if not _path_isdir(path):
+ raise ImportError('only directories are supported', path=path)
+ return cls(path, *loader_details)
+
+ return path_hook_for_FileFinder
+
+ def __repr__(self):
+ return 'FileFinder({!r})'.format(self.path)
+
+
+# Import setup ###############################################################
+
+def _fix_up_module(ns, name, pathname, cpathname=None):
+ # This function is used by PyImport_ExecCodeModuleObject().
+ loader = ns.get('__loader__')
+ spec = ns.get('__spec__')
+ if not loader:
+ if spec:
+ loader = spec.loader
+ elif pathname == cpathname:
+ loader = SourcelessFileLoader(name, pathname)
+ else:
+ loader = SourceFileLoader(name, pathname)
+ if not spec:
+ spec = spec_from_file_location(name, pathname, loader=loader)
+ try:
+ ns['__spec__'] = spec
+ ns['__loader__'] = loader
+ ns['__file__'] = pathname
+ ns['__cached__'] = cpathname
+ except Exception:
+ # Not important enough to report.
+ pass
+
+
+def _get_supported_file_loaders():
+ """Returns a list of file-based module loaders.
+
+ Each item is a tuple (loader, suffixes).
+ """
+ extensions = ExtensionFileLoader, _imp.extension_suffixes()
+ source = SourceFileLoader, SOURCE_SUFFIXES
+ bytecode = SourcelessFileLoader, BYTECODE_SUFFIXES
+ return [extensions, source, bytecode]
+
+
+def _setup(_bootstrap_module):
+ """Setup the path-based importers for importlib by importing needed
+ built-in modules and injecting them into the global namespace.
+
+ Other components are extracted from the core bootstrap module.
+
+ """
+ global sys, _imp, _bootstrap
+ _bootstrap = _bootstrap_module
+ sys = _bootstrap.sys
+ _imp = _bootstrap._imp
+
+ # Directly load built-in modules needed during bootstrap.
+ self_module = sys.modules[__name__]
+ for builtin_name in ('_io', '_warnings', 'builtins', 'marshal'):
+ if builtin_name not in sys.modules:
+ builtin_module = _bootstrap._builtin_from_name(builtin_name)
+ else:
+ builtin_module = sys.modules[builtin_name]
+ setattr(self_module, builtin_name, builtin_module)
+
+ # Directly load the os module (needed during bootstrap).
+ os_details = ('posix', ['/']), ('nt', ['\\', '/']), ('edk2', ['\\', '/'])
+ for builtin_os, path_separators in os_details:
+ # Assumption made in _path_join()
+ assert all(len(sep) == 1 for sep in path_separators)
+ path_sep = path_separators[0]
+ if builtin_os in sys.modules:
+ os_module = sys.modules[builtin_os]
+ break
+ else:
+ try:
+ os_module = _bootstrap._builtin_from_name(builtin_os)
+ break
+ except ImportError:
+ continue
+ else:
+ raise ImportError('importlib requires posix or nt or edk2')
+ setattr(self_module, '_os', os_module)
+ setattr(self_module, 'path_sep', path_sep)
+ setattr(self_module, 'path_separators', ''.join(path_separators))
+
+ # Directly load the _thread module (needed during bootstrap).
+ try:
+ thread_module = _bootstrap._builtin_from_name('_thread')
+ except ImportError:
+ # Python was built without threads
+ thread_module = None
+ setattr(self_module, '_thread', thread_module)
+
+ # Directly load the _weakref module (needed during bootstrap).
+ weakref_module = _bootstrap._builtin_from_name('_weakref')
+ setattr(self_module, '_weakref', weakref_module)
+
+ # Directly load the winreg module (needed during bootstrap).
+ if builtin_os == 'nt':
+ winreg_module = _bootstrap._builtin_from_name('winreg')
+ setattr(self_module, '_winreg', winreg_module)
+
+ # Constants
+ setattr(self_module, '_relax_case', _make_relax_case())
+ EXTENSION_SUFFIXES.extend(_imp.extension_suffixes())
+ if builtin_os == 'nt':
+ SOURCE_SUFFIXES.append('.pyw')
+ if '_d.pyd' in EXTENSION_SUFFIXES:
+ WindowsRegistryFinder.DEBUG_BUILD = True
+
+
+def _install(_bootstrap_module):
+ """Install the path-based import components."""
+ _setup(_bootstrap_module)
+ supported_loaders = _get_supported_file_loaders()
+ sys.path_hooks.extend([FileFinder.path_hook(*supported_loaders)])
+ sys.meta_path.append(PathFinder)
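
As background on the bytecode handling above, the following sketch (not part of
the patch) reproduces the 12-byte .pyc header that _code_to_bytecode() assembles
and _validate_bytecode_header() checks, using only the public importlib.util and
marshal APIs. The make_pyc_bytes() helper and the example path are hypothetical,
and Python releases after 3.6 use a longer header.

import importlib.util
import marshal

def make_pyc_bytes(source, filename, mtime=0, source_size=0):
    # Header layout for Python 3.6: 4-byte magic, 4-byte little-endian source
    # mtime, 4-byte source size (mod 2**32), then the marshalled code object.
    code = compile(source, filename, "exec")
    data = bytearray(importlib.util.MAGIC_NUMBER)
    data.extend((int(mtime) & 0xFFFFFFFF).to_bytes(4, "little"))
    data.extend((source_size & 0xFFFFFFFF).to_bytes(4, "little"))
    data.extend(marshal.dumps(code))
    return bytes(data)

# PEP 3147/488 cache path naming, as computed by cache_from_source();
# on CPython 3.6 this prints something like "pkg/__pycache__/mod.cpython-36.pyc".
print(importlib.util.cache_from_source("pkg/mod.py"))
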
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
new file mode 100644
index 00000000..1c5ffcf9
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
@@ -0,0 +1,99 @@
+"""The io module provides the Python interfaces to stream handling. The
+builtin open function is defined in this module.
+
+At the top of the I/O hierarchy is the abstract base class IOBase. It
+defines the basic interface to a stream. Note, however, that there is no
+separation between reading and writing to streams; implementations are
+allowed to raise an OSError if they do not support a given operation.
+
+Extending IOBase is RawIOBase which deals simply with the reading and
+writing of raw bytes to a stream. FileIO subclasses RawIOBase to provide
+an interface to OS files.
+
+BufferedIOBase deals with buffering on a raw byte stream (RawIOBase). Its
+subclasses, BufferedWriter, BufferedReader, and BufferedRWPair buffer
+streams that are readable, writable, and both respectively.
+BufferedRandom provides a buffered interface to random access
+streams. BytesIO is a simple stream of in-memory bytes.
+
+Another IOBase subclass, TextIOBase, deals with the encoding and decoding
+of streams into text. TextIOWrapper, which extends it, is a buffered text
+interface to a buffered raw stream (`BufferedIOBase`). Finally, StringIO
+is an in-memory stream for text.
+
+Argument names are not part of the specification, and only the arguments
+of open() are intended to be used as keyword arguments.
+
+data:
+
+DEFAULT_BUFFER_SIZE
+
+ An int containing the default buffer size used by the module's buffered
+ I/O classes. open() uses the file's blksize (as obtained by os.stat) if
+ possible.
+"""
+# New I/O library conforming to PEP 3116.
+
+__author__ = ("Guido van Rossum <guido@python.org>, "
+ "Mike Verdone <mike.verdone@gmail.com>, "
+ "Mark Russell <mark.russell@zen.co.uk>, "
+ "Antoine Pitrou <solipsis@pitrou.net>, "
+ "Amaury Forgeot d'Arc <amauryfa@gmail.com>, "
+ "Benjamin Peterson <benjamin@python.org>")
+
+__all__ = ["BlockingIOError", "open", "IOBase", "RawIOBase", "FileIO",
+ "BytesIO", "StringIO", "BufferedIOBase",
+ "BufferedReader", "BufferedWriter", "BufferedRWPair",
+ "BufferedRandom", "TextIOBase", "TextIOWrapper",
+ "UnsupportedOperation", "SEEK_SET", "SEEK_CUR", "SEEK_END", "OpenWrapper"]
+
+
+import _io
+import abc
+
+from _io import (DEFAULT_BUFFER_SIZE, BlockingIOError, UnsupportedOperation,
+ open, FileIO, BytesIO, StringIO, BufferedReader,
+ BufferedWriter, BufferedRWPair, BufferedRandom,
+ IncrementalNewlineDecoder, TextIOWrapper)
+
+OpenWrapper = _io.open # for compatibility with _pyio
+
+# Pretend this exception was created here.
+UnsupportedOperation.__module__ = "io"
+
+# for seek()
+SEEK_SET = 0
+SEEK_CUR = 1
+SEEK_END = 2
+
+# Declaring ABCs in C is tricky so we do it here.
+# Method descriptions and default implementations are inherited from the C
+# version however.
+class IOBase(_io._IOBase, metaclass=abc.ABCMeta):
+ __doc__ = _io._IOBase.__doc__
+
+class RawIOBase(_io._RawIOBase, IOBase):
+ __doc__ = _io._RawIOBase.__doc__
+
+class BufferedIOBase(_io._BufferedIOBase, IOBase):
+ __doc__ = _io._BufferedIOBase.__doc__
+
+class TextIOBase(_io._TextIOBase, IOBase):
+ __doc__ = _io._TextIOBase.__doc__
+
+RawIOBase.register(FileIO)
+
+for klass in (BytesIO, BufferedReader, BufferedWriter, BufferedRandom,
+ BufferedRWPair):
+ BufferedIOBase.register(klass)
+
+for klass in (StringIO, TextIOWrapper):
+ TextIOBase.register(klass)
+del klass
+
+try:
+ from _io import _WindowsConsoleIO
+except ImportError:
+ pass
+else:
+ RawIOBase.register(_WindowsConsoleIO)
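
A small sketch (not from the patch) of what the register() calls above provide:
the C-implemented concrete stream types satisfy isinstance()/issubclass() checks
against the Python-level ABCs declared in this module.

import io

buf = io.BytesIO(b"data")
assert isinstance(buf, io.BufferedIOBase)    # via BufferedIOBase.register(BytesIO)
assert isinstance(buf, io.IOBase)            # BufferedIOBase derives from IOBase

text = io.StringIO("data")
assert isinstance(text, io.TextIOBase)       # via TextIOBase.register(StringIO)

assert issubclass(io.FileIO, io.RawIOBase)   # via RawIOBase.register(FileIO)
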
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
new file mode 100644
index 00000000..c605b10c
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
@@ -0,0 +1,2021 @@
+# Copyright 2001-2016 by Vinay Sajip. All Rights Reserved.
+#
+# Permission to use, copy, modify, and distribute this software and its
+# documentation for any purpose and without fee is hereby granted,
+# provided that the above copyright notice appear in all copies and that
+# both that copyright notice and this permission notice appear in
+# supporting documentation, and that the name of Vinay Sajip
+# not be used in advertising or publicity pertaining to distribution
+# of the software without specific, written prior permission.
+# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING
+# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
+# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
+# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
+# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+
+"""
+Logging package for Python. Based on PEP 282 and comments thereto in
+comp.lang.python.
+
+Copyright (C) 2001-2016 Vinay Sajip. All Rights Reserved.
+
+To use, simply 'import logging' and log away!
+"""
+
+import sys, os, time, io, traceback, warnings, weakref, collections
+
+from string import Template
+
+__all__ = ['BASIC_FORMAT', 'BufferingFormatter', 'CRITICAL', 'DEBUG', 'ERROR',
+ 'FATAL', 'FileHandler', 'Filter', 'Formatter', 'Handler', 'INFO',
+ 'LogRecord', 'Logger', 'LoggerAdapter', 'NOTSET', 'NullHandler',
+ 'StreamHandler', 'WARN', 'WARNING', 'addLevelName', 'basicConfig',
+ 'captureWarnings', 'critical', 'debug', 'disable', 'error',
+ 'exception', 'fatal', 'getLevelName', 'getLogger', 'getLoggerClass',
+ 'info', 'log', 'makeLogRecord', 'setLoggerClass', 'shutdown',
+ 'warn', 'warning', 'getLogRecordFactory', 'setLogRecordFactory',
+ 'lastResort', 'raiseExceptions']
+
+try:
+ import threading
+except ImportError: #pragma: no cover
+ threading = None
+
+__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
+__status__ = "production"
+# The following module attributes are no longer updated.
+__version__ = "0.5.1.2"
+__date__ = "07 February 2010"
+
+#---------------------------------------------------------------------------
+# Miscellaneous module data
+#---------------------------------------------------------------------------
+
+#
+#_startTime is used as the base when calculating the relative time of events
+#
+_startTime = time.time()
+
+#
+#raiseExceptions is used to see if exceptions during handling should be
+#propagated
+#
+raiseExceptions = True
+
+#
+# If you don't want threading information in the log, set this to zero
+#
+logThreads = True
+
+#
+# If you don't want multiprocessing information in the log, set this to zero
+#
+logMultiprocessing = True
+
+#
+# If you don't want process information in the log, set this to zero
+#
+logProcesses = True
+
+#---------------------------------------------------------------------------
+# Level related stuff
+#---------------------------------------------------------------------------
+#
+# Default levels and level names, these can be replaced with any positive set
+# of values having corresponding names. There is a pseudo-level, NOTSET, which
+# is only really there as a lower limit for user-defined levels. Handlers and
+# loggers are initialized with NOTSET so that they will log all messages, even
+# at user-defined levels.
+#
+
+CRITICAL = 50
+FATAL = CRITICAL
+ERROR = 40
+WARNING = 30
+WARN = WARNING
+INFO = 20
+DEBUG = 10
+NOTSET = 0
+
+_levelToName = {
+ CRITICAL: 'CRITICAL',
+ ERROR: 'ERROR',
+ WARNING: 'WARNING',
+ INFO: 'INFO',
+ DEBUG: 'DEBUG',
+ NOTSET: 'NOTSET',
+}
+_nameToLevel = {
+ 'CRITICAL': CRITICAL,
+ 'FATAL': FATAL,
+ 'ERROR': ERROR,
+ 'WARN': WARNING,
+ 'WARNING': WARNING,
+ 'INFO': INFO,
+ 'DEBUG': DEBUG,
+ 'NOTSET': NOTSET,
+}
+
+def getLevelName(level):
+ """
+ Return the textual representation of logging level 'level'.
+
+ If the level is one of the predefined levels (CRITICAL, ERROR, WARNING,
+ INFO, DEBUG) then you get the corresponding string. If you have
+ associated levels with names using addLevelName then the name you have
+ associated with 'level' is returned.
+
+ If a numeric value corresponding to one of the defined levels is passed
+ in, the corresponding string representation is returned.
+
+ Otherwise, the string "Level %s" % level is returned.
+ """
+ # See Issues #22386, #27937 and #29220 for why it's this way
+ result = _levelToName.get(level)
+ if result is not None:
+ return result
+ result = _nameToLevel.get(level)
+ if result is not None:
+ return result
+ return "Level %s" % level
+
+def addLevelName(level, levelName):
+ """
+ Associate 'levelName' with 'level'.
+
+ This is used when converting levels to text during message formatting.
+ """
+ _acquireLock()
+ try: #unlikely to cause an exception, but you never know...
+ _levelToName[level] = levelName
+ _nameToLevel[levelName] = level
+ finally:
+ _releaseLock()
+
+if hasattr(sys, '_getframe'):
+ currentframe = lambda: sys._getframe(3)
+else: #pragma: no cover
+ def currentframe():
+ """Return the frame object for the caller's stack frame."""
+ try:
+ raise Exception
+ except Exception:
+ return sys.exc_info()[2].tb_frame.f_back
+
+#
+# _srcfile is used when walking the stack to check when we've got the first
+# caller stack frame, by skipping frames whose filename is that of this
+# module's source. It therefore should contain the filename of this module's
+# source file.
+#
+# Ordinarily we would use __file__ for this, but frozen modules don't always
+# have __file__ set, for some reason (see Issue #21736). Thus, we get the
+# filename from a handy code object from a function defined in this module.
+# (There's no particular reason for picking addLevelName.)
+#
+
+_srcfile = os.path.normcase(addLevelName.__code__.co_filename)
+
+# _srcfile is only used in conjunction with sys._getframe().
+# To provide compatibility with older versions of Python, set _srcfile
+# to None if _getframe() is not available; this value will prevent
+# findCaller() from being called. You can also do this if you want to avoid
+# the overhead of fetching caller information, even when _getframe() is
+# available.
+#if not hasattr(sys, '_getframe'):
+# _srcfile = None
+
+
+def _checkLevel(level):
+ if isinstance(level, int):
+ rv = level
+ elif str(level) == level:
+ if level not in _nameToLevel:
+ raise ValueError("Unknown level: %r" % level)
+ rv = _nameToLevel[level]
+ else:
+ raise TypeError("Level not an integer or a valid string: %r" % level)
+ return rv
+
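+# Behaviour sketch for the private helper above (illustrative):
+#
+#   _checkLevel(20)          # -> 20  (ints pass through unchanged)
+#   _checkLevel("INFO")      # -> 20  (known names mapped via _nameToLevel)
+#   _checkLevel("VERBOSE")   # raises ValueError (unknown name)
+#   _checkLevel(3.5)         # raises TypeError  (neither int nor str)
+#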
+#---------------------------------------------------------------------------
+# Thread-related stuff
+#---------------------------------------------------------------------------
+
+#
+#_lock is used to serialize access to shared data structures in this module.
+#This needs to be an RLock because fileConfig() creates and configures
+#Handlers, and so might arbitrary user threads. Since Handler code updates the
+#shared dictionary _handlers, it needs to acquire the lock. But if configuring,
+#the lock would already have been acquired - so we need an RLock.
+#The same argument applies to Loggers and Manager.loggerDict.
+#
+if threading:
+ _lock = threading.RLock()
+else: #pragma: no cover
+ _lock = None
+
+
+def _acquireLock():
+ """
+ Acquire the module-level lock for serializing access to shared data.
+
+ This should be released with _releaseLock().
+ """
+ if _lock:
+ _lock.acquire()
+
+def _releaseLock():
+ """
+ Release the module-level lock acquired by calling _acquireLock().
+ """
+ if _lock:
+ _lock.release()
+
+#---------------------------------------------------------------------------
+# The logging record
+#---------------------------------------------------------------------------
+
+class LogRecord(object):
+ """
+ A LogRecord instance represents an event being logged.
+
+ LogRecord instances are created every time something is logged. They
+ contain all the information pertinent to the event being logged. The
+ main information passed in is in msg and args, which are combined
+ using str(msg) % args to create the message field of the record. The
+ record also includes information such as when the record was created,
+ the source line where the logging call was made, and any exception
+ information to be logged.
+ """
+ def __init__(self, name, level, pathname, lineno,
+ msg, args, exc_info, func=None, sinfo=None, **kwargs):
+ """
+ Initialize a logging record with interesting information.
+ """
+ ct = time.time()
+ self.name = name
+ self.msg = msg
+ #
+ # The following statement allows passing of a dictionary as a sole
+ # argument, so that you can do something like
+ # logging.debug("a %(a)d b %(b)s", {'a':1, 'b':2})
+ # Suggested by Stefan Behnel.
+ # Note that without the test for args[0], we get a problem because
+ # during formatting, we test to see if the arg is present using
+ # 'if self.args:'. If the event being logged is e.g. 'Value is %d'
+ # and if the passed arg fails 'if self.args:' then no formatting
+ # is done. For example, logger.warning('Value is %d', 0) would log
+ # 'Value is %d' instead of 'Value is 0'.
+ # For the use case of passing a dictionary, this should not be a
+ # problem.
+ # Issue #21172: a request was made to relax the isinstance check
+ # to hasattr(args[0], '__getitem__'). However, the docs on string
+ # formatting still seem to suggest a mapping object is required.
+ # Thus, while not removing the isinstance check, it does now look
+ # for collections.Mapping rather than, as before, dict.
+ if (args and len(args) == 1 and isinstance(args[0], collections.Mapping)
+ and args[0]):
+ args = args[0]
+ self.args = args
+ self.levelname = getLevelName(level)
+ self.levelno = level
+ self.pathname = pathname
+ try:
+ self.filename = os.path.basename(pathname)
+ self.module = os.path.splitext(self.filename)[0]
+ except (TypeError, ValueError, AttributeError):
+ self.filename = pathname
+ self.module = "Unknown module"
+ self.exc_info = exc_info
+ self.exc_text = None # used to cache the traceback text
+ self.stack_info = sinfo
+ self.lineno = lineno
+ self.funcName = func
+ self.created = ct
+ self.msecs = (ct - int(ct)) * 1000
+ self.relativeCreated = (self.created - _startTime) * 1000
+ if logThreads and threading:
+ self.thread = threading.get_ident()
+ self.threadName = threading.current_thread().name
+ else: # pragma: no cover
+ self.thread = None
+ self.threadName = None
+ if not logMultiprocessing: # pragma: no cover
+ self.processName = None
+ else:
+ self.processName = 'MainProcess'
+ mp = sys.modules.get('multiprocessing')
+ if mp is not None:
+ # Errors may occur if multiprocessing has not finished loading
+ # yet - e.g. if a custom import hook causes third-party code
+ # to run when multiprocessing calls import. See issue 8200
+ # for an example
+ try:
+ self.processName = mp.current_process().name
+ except Exception: #pragma: no cover
+ pass
+ if logProcesses and hasattr(os, 'getpid'):
+ self.process = os.getpid()
+ else:
+ self.process = None
+
+ def __str__(self):
+ return '<LogRecord: %s, %s, %s, %s, "%s">'%(self.name, self.levelno,
+ self.pathname, self.lineno, self.msg)
+
+ __repr__ = __str__
+
+ def getMessage(self):
+ """
+ Return the message for this LogRecord.
+
+ Return the message for this LogRecord after merging any user-supplied
+ arguments with the message.
+ """
+ msg = str(self.msg)
+ if self.args:
+ msg = msg % self.args
+ return msg
+
+#
+# Determine which class to use when instantiating log records.
+#
+_logRecordFactory = LogRecord
+
+def setLogRecordFactory(factory):
+ """
+ Set the factory to be used when instantiating a log record.
+
+ :param factory: A callable which will be called to instantiate
+ a log record.
+ """
+ global _logRecordFactory
+ _logRecordFactory = factory
+
+def getLogRecordFactory():
+ """
+ Return the factory to be used when instantiating a log record.
+ """
+
+ return _logRecordFactory
+
+def makeLogRecord(dict):
+ """
+ Make a LogRecord whose attributes are defined by the specified dictionary.
+ This function is useful for converting a logging event received over
+ a socket connection (which is sent as a dictionary) into a LogRecord
+ instance.
+ """
+ rv = _logRecordFactory(None, None, "", 0, "", (), None, None)
+ rv.__dict__.update(dict)
+ return rv
+
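+# Usage sketch (illustrative): rebuilding a record from a plain dictionary,
+# for example one received over a socket, then rendering its message.
+#
+#   d = {'name': 'demo', 'levelno': INFO, 'levelname': 'INFO',
+#        'msg': 'hello %s', 'args': ('world',)}
+#   rec = makeLogRecord(d)
+#   rec.getMessage()          # -> 'hello world'
+#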
+#---------------------------------------------------------------------------
+# Formatter classes and functions
+#---------------------------------------------------------------------------
+
+class PercentStyle(object):
+
+ default_format = '%(message)s'
+ asctime_format = '%(asctime)s'
+ asctime_search = '%(asctime)'
+
+ def __init__(self, fmt):
+ self._fmt = fmt or self.default_format
+
+ def usesTime(self):
+ return self._fmt.find(self.asctime_search) >= 0
+
+ def format(self, record):
+ return self._fmt % record.__dict__
+
+class StrFormatStyle(PercentStyle):
+ default_format = '{message}'
+ asctime_format = '{asctime}'
+ asctime_search = '{asctime'
+
+ def format(self, record):
+ return self._fmt.format(**record.__dict__)
+
+
+class StringTemplateStyle(PercentStyle):
+ default_format = '${message}'
+ asctime_format = '${asctime}'
+ asctime_search = '${asctime}'
+
+ def __init__(self, fmt):
+ self._fmt = fmt or self.default_format
+ self._tpl = Template(self._fmt)
+
+ def usesTime(self):
+ fmt = self._fmt
+ return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_format) >= 0
+
+ def format(self, record):
+ return self._tpl.substitute(**record.__dict__)
+
+BASIC_FORMAT = "%(levelname)s:%(name)s:%(message)s"
+
+_STYLES = {
+ '%': (PercentStyle, BASIC_FORMAT),
+ '{': (StrFormatStyle, '{levelname}:{name}:{message}'),
+ '$': (StringTemplateStyle, '${levelname}:${name}:${message}'),
+}
+
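+# The three supported style characters select the classes above; the
+# following constructions are equivalent in output (illustrative, using the
+# Formatter class defined below):
+#
+#   Formatter("%(levelname)s:%(name)s:%(message)s", style='%')
+#   Formatter("{levelname}:{name}:{message}",       style='{')
+#   Formatter("${levelname}:${name}:${message}",    style='$')
+#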
+class Formatter(object):
+ """
+ Formatter instances are used to convert a LogRecord to text.
+
+ Formatters need to know how a LogRecord is constructed. They are
+ responsible for converting a LogRecord to (usually) a string which can
+ be interpreted by either a human or an external system. The base Formatter
+ allows a formatting string to be specified. If none is supplied, the
+ style-dependent default value, "%(message)s", "{message}", or
+ "${message}", is used.
+
+ The Formatter can be initialized with a format string which makes use of
+ knowledge of the LogRecord attributes - e.g. the default value mentioned
+ above makes use of the fact that the user's message and arguments are pre-
+ formatted into a LogRecord's message attribute. Currently, the useful
+ attributes in a LogRecord are described by:
+
+ %(name)s Name of the logger (logging channel)
+ %(levelno)s Numeric logging level for the message (DEBUG, INFO,
+ WARNING, ERROR, CRITICAL)
+ %(levelname)s Text logging level for the message ("DEBUG", "INFO",
+ "WARNING", "ERROR", "CRITICAL")
+ %(pathname)s Full pathname of the source file where the logging
+ call was issued (if available)
+ %(filename)s Filename portion of pathname
+ %(module)s Module (name portion of filename)
+ %(lineno)d Source line number where the logging call was issued
+ (if available)
+ %(funcName)s Function name
+ %(created)f Time when the LogRecord was created (time.time()
+ return value)
+ %(asctime)s Textual time when the LogRecord was created
+ %(msecs)d Millisecond portion of the creation time
+ %(relativeCreated)d Time in milliseconds when the LogRecord was created,
+ relative to the time the logging module was loaded
+ (typically at application startup time)
+ %(thread)d Thread ID (if available)
+ %(threadName)s Thread name (if available)
+ %(process)d Process ID (if available)
+ %(message)s The result of record.getMessage(), computed just as
+ the record is emitted
+ """
+
+ converter = time.localtime
+
+ def __init__(self, fmt=None, datefmt=None, style='%'):
+ """
+ Initialize the formatter with specified format strings.
+
+ Initialize the formatter either with the specified format string, or a
+ default as described above. Allow for specialized date formatting with
+ the optional datefmt argument. If datefmt is omitted, you get an
+ ISO8601-like (or RFC 3339-like) format.
+
+ Use a style parameter of '%', '{' or '$' to specify that you want to
+ use one of %-formatting, :meth:`str.format` (``{}``) formatting or
+ :class:`string.Template` formatting in your format string.
+
+ .. versionchanged:: 3.2
+ Added the ``style`` parameter.
+ """
+ if style not in _STYLES:
+ raise ValueError('Style must be one of: %s' % ','.join(
+ _STYLES.keys()))
+ self._style = _STYLES[style][0](fmt)
+ self._fmt = self._style._fmt
+ self.datefmt = datefmt
+
+ default_time_format = '%Y-%m-%d %H:%M:%S'
+ default_msec_format = '%s,%03d'
+
+ def formatTime(self, record, datefmt=None):
+ """
+ Return the creation time of the specified LogRecord as formatted text.
+
+ This method should be called from format() by a formatter which
+ wants to make use of a formatted time. This method can be overridden
+ in formatters to provide for any specific requirement, but the
+ basic behaviour is as follows: if datefmt (a string) is specified,
+ it is used with time.strftime() to format the creation time of the
+ record. Otherwise, an ISO8601-like (or RFC 3339-like) format is used.
+ The resulting string is returned. This function uses a user-configurable
+ function to convert the creation time to a tuple. By default,
+ time.localtime() is used; to change this for a particular formatter
+ instance, set the 'converter' attribute to a function with the same
+ signature as time.localtime() or time.gmtime(). To change it for all
+ formatters, for example if you want all logging times to be shown in GMT,
+ set the 'converter' attribute in the Formatter class.
+ """
+ ct = self.converter(record.created)
+ if datefmt:
+ s = time.strftime(datefmt, ct)
+ else:
+ t = time.strftime(self.default_time_format, ct)
+ s = self.default_msec_format % (t, record.msecs)
+ return s
+
+ def formatException(self, ei):
+ """
+ Format and return the specified exception information as a string.
+
+ This default implementation just uses
+ traceback.print_exception()
+ """
+ sio = io.StringIO()
+ tb = ei[2]
+ # See issues #9427, #1553375. Commented out for now.
+ #if getattr(self, 'fullstack', False):
+ # traceback.print_stack(tb.tb_frame.f_back, file=sio)
+ traceback.print_exception(ei[0], ei[1], tb, None, sio)
+ s = sio.getvalue()
+ sio.close()
+ if s[-1:] == "\n":
+ s = s[:-1]
+ return s
+
+ def usesTime(self):
+ """
+ Check if the format uses the creation time of the record.
+ """
+ return self._style.usesTime()
+
+ def formatMessage(self, record):
+ return self._style.format(record)
+
+ def formatStack(self, stack_info):
+ """
+ This method is provided as an extension point for specialized
+ formatting of stack information.
+
+ The input data is a string as returned from a call to
+ :func:`traceback.print_stack`, but with the last trailing newline
+ removed.
+
+ The base implementation just returns the value passed in.
+ """
+ return stack_info
+
+ def format(self, record):
+ """
+ Format the specified record as text.
+
+ The record's attribute dictionary is used as the operand to a
+ string formatting operation which yields the returned string.
+ Before formatting the dictionary, a couple of preparatory steps
+ are carried out. The message attribute of the record is computed
+ using LogRecord.getMessage(). If the formatting string uses the
+ time (as determined by a call to usesTime()), formatTime() is
+ called to format the event time. If there is exception information,
+ it is formatted using formatException() and appended to the message.
+ """
+ record.message = record.getMessage()
+ if self.usesTime():
+ record.asctime = self.formatTime(record, self.datefmt)
+ s = self.formatMessage(record)
+ if record.exc_info:
+ # Cache the traceback text to avoid converting it multiple times
+ # (it's constant anyway)
+ if not record.exc_text:
+ record.exc_text = self.formatException(record.exc_info)
+ if record.exc_text:
+ if s[-1:] != "\n":
+ s = s + "\n"
+ s = s + record.exc_text
+ if record.stack_info:
+ if s[-1:] != "\n":
+ s = s + "\n"
+ s = s + self.formatStack(record.stack_info)
+ return s
+
+#
+# The default formatter to use when no other is specified
+#
+_defaultFormatter = Formatter()
+
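+# Usage sketch (illustrative): a formatter with an explicit date format, and
+# the documented way to switch every formatter to UTC timestamps by replacing
+# the class-level 'converter' attribute.
+#
+#   fmt = Formatter('%(asctime)s %(levelname)s %(name)s: %(message)s',
+#                   datefmt='%Y-%m-%d %H:%M:%S')
+#   Formatter.converter = time.gmtime    # affects all Formatter instances
+#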
+class BufferingFormatter(object):
+ """
+ A formatter suitable for formatting a number of records.
+ """
+ def __init__(self, linefmt=None):
+ """
+ Optionally specify a formatter which will be used to format each
+ individual record.
+ """
+ if linefmt:
+ self.linefmt = linefmt
+ else:
+ self.linefmt = _defaultFormatter
+
+ def formatHeader(self, records):
+ """
+ Return the header string for the specified records.
+ """
+ return ""
+
+ def formatFooter(self, records):
+ """
+ Return the footer string for the specified records.
+ """
+ return ""
+
+ def format(self, records):
+ """
+ Format the specified records and return the result as a string.
+ """
+ rv = ""
+ if len(records) > 0:
+ rv = rv + self.formatHeader(records)
+ for record in records:
+ rv = rv + self.linefmt.format(record)
+ rv = rv + self.formatFooter(records)
+ return rv
+
+#---------------------------------------------------------------------------
+# Filter classes and functions
+#---------------------------------------------------------------------------
+
+class Filter(object):
+ """
+ Filter instances are used to perform arbitrary filtering of LogRecords.
+
+ Loggers and Handlers can optionally use Filter instances to filter
+ records as desired. The base filter class only allows events which are
+ below a certain point in the logger hierarchy. For example, a filter
+ initialized with "A.B" will allow events logged by loggers "A.B",
+ "A.B.C", "A.B.C.D", "A.B.D" etc. but not "A.BB", "B.A.B" etc. If
+ initialized with the empty string, all events are passed.
+ """
+ def __init__(self, name=''):
+ """
+ Initialize a filter.
+
+ Initialize with the name of the logger which, together with its
+ children, will have its events allowed through the filter. If no
+ name is specified, allow every event.
+ """
+ self.name = name
+ self.nlen = len(name)
+
+ def filter(self, record):
+ """
+ Determine if the specified record is to be logged.
+
+ Is the specified record to be logged? Returns 0 for no, nonzero for
+ yes. If deemed appropriate, the record may be modified in-place.
+ """
+ if self.nlen == 0:
+ return True
+ elif self.name == record.name:
+ return True
+ elif record.name.find(self.name, 0, self.nlen) != 0:
+ return False
+ return (record.name[self.nlen] == ".")
+
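+# Behaviour sketch for the base Filter (illustrative):
+#
+#   f = Filter('A.B')
+#   f.filter(makeLogRecord({'name': 'A.B'}))     # -> True  (exact match)
+#   f.filter(makeLogRecord({'name': 'A.B.C'}))   # -> True  (descendant)
+#   f.filter(makeLogRecord({'name': 'A.BB'}))    # -> False (not a child)
+#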
+class Filterer(object):
+ """
+ A base class for loggers and handlers which allows them to share
+ common code.
+ """
+ def __init__(self):
+ """
+ Initialize the list of filters to be an empty list.
+ """
+ self.filters = []
+
+ def addFilter(self, filter):
+ """
+ Add the specified filter to this handler.
+ """
+ if not (filter in self.filters):
+ self.filters.append(filter)
+
+ def removeFilter(self, filter):
+ """
+ Remove the specified filter from this handler.
+ """
+ if filter in self.filters:
+ self.filters.remove(filter)
+
+ def filter(self, record):
+ """
+ Determine if a record is loggable by consulting all the filters.
+
+ The default is to allow the record to be logged; any filter can veto
+ this and the record is then dropped. Returns a zero value if a record
+ is to be dropped, else non-zero.
+
+ .. versionchanged:: 3.2
+
+ Allow filters to be just callables.
+ """
+ rv = True
+ for f in self.filters:
+ if hasattr(f, 'filter'):
+ result = f.filter(record)
+ else:
+ result = f(record) # assume callable - will raise if not
+ if not result:
+ rv = False
+ break
+ return rv
+
+#---------------------------------------------------------------------------
+# Handler classes and functions
+#---------------------------------------------------------------------------
+
+_handlers = weakref.WeakValueDictionary() #map of handler names to handlers
+_handlerList = [] # added to allow handlers to be removed in reverse of order initialized
+
+def _removeHandlerRef(wr):
+ """
+ Remove a handler reference from the internal cleanup list.
+ """
+ # This function can be called during module teardown, when globals are
+ # set to None. It can also be called from another thread. So we need to
+ # pre-emptively grab the necessary globals and check if they're None,
+ # to prevent race conditions and failures during interpreter shutdown.
+ acquire, release, handlers = _acquireLock, _releaseLock, _handlerList
+ if acquire and release and handlers:
+ acquire()
+ try:
+ if wr in handlers:
+ handlers.remove(wr)
+ finally:
+ release()
+
+def _addHandlerRef(handler):
+ """
+ Add a handler to the internal cleanup list using a weak reference.
+ """
+ _acquireLock()
+ try:
+ _handlerList.append(weakref.ref(handler, _removeHandlerRef))
+ finally:
+ _releaseLock()
+
+class Handler(Filterer):
+ """
+ Handler instances dispatch logging events to specific destinations.
+
+ The base handler class. Acts as a placeholder which defines the Handler
+ interface. Handlers can optionally use Formatter instances to format
+ records as desired. By default, no formatter is specified; in this case,
+ the 'raw' message as determined by record.message is logged.
+ """
+ def __init__(self, level=NOTSET):
+ """
+ Initializes the instance - basically setting the formatter to None
+ and the filter list to empty.
+ """
+ Filterer.__init__(self)
+ self._name = None
+ self.level = _checkLevel(level)
+ self.formatter = None
+ # Add the handler to the global _handlerList (for cleanup on shutdown)
+ _addHandlerRef(self)
+ self.createLock()
+
+ def get_name(self):
+ return self._name
+
+ def set_name(self, name):
+ _acquireLock()
+ try:
+ if self._name in _handlers:
+ del _handlers[self._name]
+ self._name = name
+ if name:
+ _handlers[name] = self
+ finally:
+ _releaseLock()
+
+ name = property(get_name, set_name)
+
+ def createLock(self):
+ """
+ Acquire a thread lock for serializing access to the underlying I/O.
+ """
+ if threading:
+ self.lock = threading.RLock()
+ else: #pragma: no cover
+ self.lock = None
+
+ def acquire(self):
+ """
+ Acquire the I/O thread lock.
+ """
+ if self.lock:
+ self.lock.acquire()
+
+ def release(self):
+ """
+ Release the I/O thread lock.
+ """
+ if self.lock:
+ self.lock.release()
+
+ def setLevel(self, level):
+ """
+ Set the logging level of this handler. level must be an int or a str.
+ """
+ self.level = _checkLevel(level)
+
+ def format(self, record):
+ """
+ Format the specified record.
+
+ If a formatter is set, use it. Otherwise, use the default formatter
+ for the module.
+ """
+ if self.formatter:
+ fmt = self.formatter
+ else:
+ fmt = _defaultFormatter
+ return fmt.format(record)
+
+ def emit(self, record):
+ """
+ Do whatever it takes to actually log the specified logging record.
+
+ This version is intended to be implemented by subclasses and so
+ raises a NotImplementedError.
+ """
+ raise NotImplementedError('emit must be implemented '
+ 'by Handler subclasses')
+
+ def handle(self, record):
+ """
+ Conditionally emit the specified logging record.
+
+ Emission depends on filters which may have been added to the handler.
+ Wrap the actual emission of the record with acquisition/release of
+ the I/O thread lock. Returns whether the filter passed the record for
+ emission.
+ """
+ rv = self.filter(record)
+ if rv:
+ self.acquire()
+ try:
+ self.emit(record)
+ finally:
+ self.release()
+ return rv
+
+ def setFormatter(self, fmt):
+ """
+ Set the formatter for this handler.
+ """
+ self.formatter = fmt
+
+ def flush(self):
+ """
+ Ensure all logging output has been flushed.
+
+ This version does nothing and is intended to be implemented by
+ subclasses.
+ """
+ pass
+
+ def close(self):
+ """
+ Tidy up any resources used by the handler.
+
+ This version removes the handler from an internal map of handlers,
+ _handlers, which is used for handler lookup by name. Subclasses
+ should ensure that this gets called from overridden close()
+ methods.
+ """
+ #get the module data lock, as we're updating a shared structure.
+ _acquireLock()
+ try: #unlikely to raise an exception, but you never know...
+ if self._name and self._name in _handlers:
+ del _handlers[self._name]
+ finally:
+ _releaseLock()
+
+ def handleError(self, record):
+ """
+ Handle errors which occur during an emit() call.
+
+ This method should be called from handlers when an exception is
+ encountered during an emit() call. If raiseExceptions is false,
+ exceptions get silently ignored. This is what is mostly wanted
+ for a logging system - most users will not care about errors in
+ the logging system, they are more interested in application errors.
+ You could, however, replace this with a custom handler if you wish.
+ The record which was being processed is passed in to this method.
+ """
+ if raiseExceptions and sys.stderr: # see issue 13807
+ t, v, tb = sys.exc_info()
+ try:
+ sys.stderr.write('--- Logging error ---\n')
+ traceback.print_exception(t, v, tb, None, sys.stderr)
+ sys.stderr.write('Call stack:\n')
+ # Walk the stack frame up until we're out of logging,
+ # so as to print the calling context.
+ frame = tb.tb_frame
+ while (frame and os.path.dirname(frame.f_code.co_filename) ==
+ __path__[0]):
+ frame = frame.f_back
+ if frame:
+ traceback.print_stack(frame, file=sys.stderr)
+ else:
+ # couldn't find the right stack frame, for some reason
+ sys.stderr.write('Logged from file %s, line %s\n' % (
+ record.filename, record.lineno))
+ # Issue 18671: output logging message and arguments
+ try:
+ sys.stderr.write('Message: %r\n'
+ 'Arguments: %s\n' % (record.msg,
+ record.args))
+ except Exception:
+ sys.stderr.write('Unable to print the message and arguments'
+ ' - possible formatting error.\nUse the'
+ ' traceback above to help find the error.\n'
+ )
+ except OSError: #pragma: no cover
+ pass # see issue 5971
+ finally:
+ del t, v, tb
+
+ def __repr__(self):
+ level = getLevelName(self.level)
+ return '<%s (%s)>' % (self.__class__.__name__, level)
+
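+# Minimal sketch of a custom Handler subclass (illustrative): only emit()
+# needs to be supplied; formatting, filtering and locking come from the base
+# class.
+#
+#   class ListHandler(Handler):
+#       def __init__(self, level=NOTSET):
+#           Handler.__init__(self, level)
+#           self.records = []
+#       def emit(self, record):
+#           self.records.append(self.format(record))
+#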
+class StreamHandler(Handler):
+ """
+ A handler class which writes logging records, appropriately formatted,
+ to a stream. Note that this class does not close the stream, as
+ sys.stdout or sys.stderr may be used.
+ """
+
+ terminator = '\n'
+
+ def __init__(self, stream=None):
+ """
+ Initialize the handler.
+
+ If stream is not specified, sys.stderr is used.
+ """
+ Handler.__init__(self)
+ if stream is None:
+ stream = sys.stderr
+ self.stream = stream
+
+ def flush(self):
+ """
+ Flushes the stream.
+ """
+ self.acquire()
+ try:
+ if self.stream and hasattr(self.stream, "flush"):
+ self.stream.flush()
+ finally:
+ self.release()
+
+ def emit(self, record):
+ """
+ Emit a record.
+
+ If a formatter is specified, it is used to format the record.
+ The record is then written to the stream with a trailing newline. If
+ exception information is present, it is formatted using
+ traceback.print_exception and appended to the stream. If the stream
+ has an 'encoding' attribute, it is used to determine how to do the
+ output to the stream.
+ """
+ try:
+ msg = self.format(record)
+ stream = self.stream
+ stream.write(msg)
+ stream.write(self.terminator)
+ self.flush()
+ except Exception:
+ self.handleError(record)
+
+ def __repr__(self):
+ level = getLevelName(self.level)
+ name = getattr(self.stream, 'name', '')
+ if name:
+ name += ' '
+ return '<%s %s(%s)>' % (self.__class__.__name__, name, level)
+
+
+class FileHandler(StreamHandler):
+ """
+ A handler class which writes formatted logging records to disk files.
+ """
+ def __init__(self, filename, mode='a', encoding=None, delay=False):
+ """
+ Open the specified file and use it as the stream for logging.
+ """
+ # Issue #27493: add support for Path objects to be passed in
+ # filename = os.fspath(filename)
+ #keep the absolute path, otherwise derived classes which use this
+ #may come a cropper when the current directory changes
+ self.baseFilename = os.path.abspath(filename)
+ self.mode = mode
+ self.encoding = encoding
+ self.delay = delay
+ if delay:
+ #We don't open the stream, but we still need to call the
+ #Handler constructor to set level, formatter, lock etc.
+ Handler.__init__(self)
+ self.stream = None
+ else:
+ StreamHandler.__init__(self, self._open())
+
+ def close(self):
+ """
+ Closes the stream.
+ """
+ self.acquire()
+ try:
+ try:
+ if self.stream:
+ try:
+ self.flush()
+ finally:
+ stream = self.stream
+ self.stream = None
+ if hasattr(stream, "close"):
+ stream.close()
+ finally:
+ # Issue #19523: call unconditionally to
+ # prevent a handler leak when delay is set
+ StreamHandler.close(self)
+ finally:
+ self.release()
+
+ def _open(self):
+ """
+ Open the current base file with the (original) mode and encoding.
+ Return the resulting stream.
+ """
+ return open(self.baseFilename, self.mode, encoding=self.encoding)
+
+ def emit(self, record):
+ """
+ Emit a record.
+
+ If the stream was not opened because 'delay' was specified in the
+ constructor, open it before calling the superclass's emit.
+ """
+ if self.stream is None:
+ self.stream = self._open()
+ StreamHandler.emit(self, record)
+
+ def __repr__(self):
+ level = getLevelName(self.level)
+ return '<%s %s (%s)>' % (self.__class__.__name__, self.baseFilename, level)
+
+
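+# Usage sketch (illustrative; the path below is hypothetical): with
+# delay=True the file is not opened until the first record is emitted, which
+# helps when the target filesystem is not writable at configuration time.
+#
+#   h = FileHandler('fs0:\\app.log', mode='a', delay=True)
+#   h.stream is None             # -> True, nothing opened yet
+#   getLogger('demo').addHandler(h)
+#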
+class _StderrHandler(StreamHandler):
+ """
+ This class is like a StreamHandler using sys.stderr, but always uses
+ whatever sys.stderr is currently set to rather than the value of
+ sys.stderr at handler construction time.
+ """
+ def __init__(self, level=NOTSET):
+ """
+ Initialize the handler.
+ """
+ Handler.__init__(self, level)
+
+ @property
+ def stream(self):
+ return sys.stderr
+
+
+_defaultLastResort = _StderrHandler(WARNING)
+lastResort = _defaultLastResort
+
+#---------------------------------------------------------------------------
+# Manager classes and functions
+#---------------------------------------------------------------------------
+
+class PlaceHolder(object):
+ """
+ PlaceHolder instances are used in the Manager logger hierarchy to take
+ the place of nodes for which no loggers have been defined. This class is
+ intended for internal use only and not as part of the public API.
+ """
+ def __init__(self, alogger):
+ """
+ Initialize with the specified logger being a child of this placeholder.
+ """
+ self.loggerMap = { alogger : None }
+
+ def append(self, alogger):
+ """
+ Add the specified logger as a child of this placeholder.
+ """
+ if alogger not in self.loggerMap:
+ self.loggerMap[alogger] = None
+
+#
+# Determine which class to use when instantiating loggers.
+#
+
+def setLoggerClass(klass):
+ """
+ Set the class to be used when instantiating a logger. The class should
+ define __init__() such that only a name argument is required, and the
+ __init__() should call Logger.__init__()
+ """
+ if klass != Logger:
+ if not issubclass(klass, Logger):
+ raise TypeError("logger not derived from logging.Logger: "
+ + klass.__name__)
+ global _loggerClass
+ _loggerClass = klass
+
+def getLoggerClass():
+ """
+ Return the class to be used when instantiating a logger.
+ """
+ return _loggerClass
+
+class Manager(object):
+ """
+ There is [under normal circumstances] just one Manager instance, which
+ holds the hierarchy of loggers.
+ """
+ def __init__(self, rootnode):
+ """
+ Initialize the manager with the root node of the logger hierarchy.
+ """
+ self.root = rootnode
+ self.disable = 0
+ self.emittedNoHandlerWarning = False
+ self.loggerDict = {}
+ self.loggerClass = None
+ self.logRecordFactory = None
+
+ def getLogger(self, name):
+ """
+ Get a logger with the specified name (channel name), creating it
+ if it doesn't yet exist. This name is a dot-separated hierarchical
+ name, such as "a", "a.b", "a.b.c" or similar.
+
+ If a PlaceHolder existed for the specified name [i.e. the logger
+ didn't exist but a child of it did], replace it with the created
+ logger and fix up the parent/child references which pointed to the
+ placeholder to now point to the logger.
+ """
+ rv = None
+ if not isinstance(name, str):
+ raise TypeError('A logger name must be a string')
+ _acquireLock()
+ try:
+ if name in self.loggerDict:
+ rv = self.loggerDict[name]
+ if isinstance(rv, PlaceHolder):
+ ph = rv
+ rv = (self.loggerClass or _loggerClass)(name)
+ rv.manager = self
+ self.loggerDict[name] = rv
+ self._fixupChildren(ph, rv)
+ self._fixupParents(rv)
+ else:
+ rv = (self.loggerClass or _loggerClass)(name)
+ rv.manager = self
+ self.loggerDict[name] = rv
+ self._fixupParents(rv)
+ finally:
+ _releaseLock()
+ return rv
+
+ def setLoggerClass(self, klass):
+ """
+ Set the class to be used when instantiating a logger with this Manager.
+ """
+ if klass != Logger:
+ if not issubclass(klass, Logger):
+ raise TypeError("logger not derived from logging.Logger: "
+ + klass.__name__)
+ self.loggerClass = klass
+
+ def setLogRecordFactory(self, factory):
+ """
+ Set the factory to be used when instantiating a log record with this
+ Manager.
+ """
+ self.logRecordFactory = factory
+
+ def _fixupParents(self, alogger):
+ """
+ Ensure that there are either loggers or placeholders all the way
+ from the specified logger to the root of the logger hierarchy.
+ """
+ name = alogger.name
+ i = name.rfind(".")
+ rv = None
+ while (i > 0) and not rv:
+ substr = name[:i]
+ if substr not in self.loggerDict:
+ self.loggerDict[substr] = PlaceHolder(alogger)
+ else:
+ obj = self.loggerDict[substr]
+ if isinstance(obj, Logger):
+ rv = obj
+ else:
+ assert isinstance(obj, PlaceHolder)
+ obj.append(alogger)
+ i = name.rfind(".", 0, i - 1)
+ if not rv:
+ rv = self.root
+ alogger.parent = rv
+
+ def _fixupChildren(self, ph, alogger):
+ """
+ Ensure that children of the placeholder ph are connected to the
+ specified logger.
+ """
+ name = alogger.name
+ namelen = len(name)
+ for c in ph.loggerMap.keys():
+ #The if means ... if not c.parent.name.startswith(name)
+ if c.parent.name[:namelen] != name:
+ alogger.parent = c.parent
+ c.parent = alogger
+
+#---------------------------------------------------------------------------
+# Logger classes and functions
+#---------------------------------------------------------------------------
+
+class Logger(Filterer):
+ """
+ Instances of the Logger class represent a single logging channel. A
+ "logging channel" indicates an area of an application. Exactly how an
+ "area" is defined is up to the application developer. Since an
+ application can have any number of areas, logging channels are identified
+ by a unique string. Application areas can be nested (e.g. an area
+ of "input processing" might include sub-areas "read CSV files", "read
+ XLS files" and "read Gnumeric files"). To cater for this natural nesting,
+ channel names are organized into a namespace hierarchy where levels are
+ separated by periods, much like the Java or Python package namespace. So
+ in the instance given above, channel names might be "input" for the upper
+ level, and "input.csv", "input.xls" and "input.gnu" for the sub-levels.
+ There is no arbitrary limit to the depth of nesting.
+ """
+ def __init__(self, name, level=NOTSET):
+ """
+ Initialize the logger with a name and an optional level.
+ """
+ Filterer.__init__(self)
+ self.name = name
+ self.level = _checkLevel(level)
+ self.parent = None
+ self.propagate = True
+ self.handlers = []
+ self.disabled = False
+
+ def setLevel(self, level):
+ """
+ Set the logging level of this logger. level must be an int or a str.
+ """
+ self.level = _checkLevel(level)
+
+ def debug(self, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with severity 'DEBUG'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.debug("Houston, we have a %s", "thorny problem", exc_info=1)
+ """
+ if self.isEnabledFor(DEBUG):
+ self._log(DEBUG, msg, args, **kwargs)
+
+ def info(self, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with severity 'INFO'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.info("Houston, we have a %s", "interesting problem", exc_info=1)
+ """
+ if self.isEnabledFor(INFO):
+ self._log(INFO, msg, args, **kwargs)
+
+ def warning(self, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with severity 'WARNING'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.warning("Houston, we have a %s", "bit of a problem", exc_info=1)
+ """
+ if self.isEnabledFor(WARNING):
+ self._log(WARNING, msg, args, **kwargs)
+
+ def warn(self, msg, *args, **kwargs):
+ warnings.warn("The 'warn' method is deprecated, "
+ "use 'warning' instead", DeprecationWarning, 2)
+ self.warning(msg, *args, **kwargs)
+
+ def error(self, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with severity 'ERROR'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.error("Houston, we have a %s", "major problem", exc_info=1)
+ """
+ if self.isEnabledFor(ERROR):
+ self._log(ERROR, msg, args, **kwargs)
+
+ def exception(self, msg, *args, exc_info=True, **kwargs):
+ """
+ Convenience method for logging an ERROR with exception information.
+ """
+ self.error(msg, *args, exc_info=exc_info, **kwargs)
+
+ def critical(self, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with severity 'CRITICAL'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.critical("Houston, we have a %s", "major disaster", exc_info=1)
+ """
+ if self.isEnabledFor(CRITICAL):
+ self._log(CRITICAL, msg, args, **kwargs)
+
+ fatal = critical
+
+ def log(self, level, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with the integer severity 'level'.
+
+ To pass exception information, use the keyword argument exc_info with
+ a true value, e.g.
+
+ logger.log(level, "We have a %s", "mysterious problem", exc_info=1)
+ """
+ if not isinstance(level, int):
+ if raiseExceptions:
+ raise TypeError("level must be an integer")
+ else:
+ return
+ if self.isEnabledFor(level):
+ self._log(level, msg, args, **kwargs)
+
+ def findCaller(self, stack_info=False):
+ """
+ Find the stack frame of the caller so that we can note the source
+ file name, line number and function name.
+ """
+ f = currentframe()
+ #On some versions of IronPython, currentframe() returns None if
+ #IronPython isn't run with -X:Frames.
+ if f is not None:
+ f = f.f_back
+ rv = "(unknown file)", 0, "(unknown function)", None
+ while hasattr(f, "f_code"):
+ co = f.f_code
+ filename = os.path.normcase(co.co_filename)
+ if filename == _srcfile:
+ f = f.f_back
+ continue
+ sinfo = None
+ if stack_info:
+ sio = io.StringIO()
+ sio.write('Stack (most recent call last):\n')
+ traceback.print_stack(f, file=sio)
+ sinfo = sio.getvalue()
+ if sinfo[-1] == '\n':
+ sinfo = sinfo[:-1]
+ sio.close()
+ rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)
+ break
+ return rv
+
+ def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
+ func=None, extra=None, sinfo=None):
+ """
+ A factory method which can be overridden in subclasses to create
+ specialized LogRecords.
+ """
+ rv = _logRecordFactory(name, level, fn, lno, msg, args, exc_info, func,
+ sinfo)
+ if extra is not None:
+ for key in extra:
+ if (key in ["message", "asctime"]) or (key in rv.__dict__):
+ raise KeyError("Attempt to overwrite %r in LogRecord" % key)
+ rv.__dict__[key] = extra[key]
+ return rv
+
+ def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
+ """
+ Low-level logging routine which creates a LogRecord and then calls
+ all the handlers of this logger to handle the record.
+ """
+ sinfo = None
+ if _srcfile:
+ #IronPython doesn't track Python frames, so findCaller raises an
+ #exception on some versions of IronPython. We trap it here so that
+ #IronPython can use logging.
+ try:
+ fn, lno, func, sinfo = self.findCaller(stack_info)
+ except ValueError: # pragma: no cover
+ fn, lno, func = "(unknown file)", 0, "(unknown function)"
+ else: # pragma: no cover
+ fn, lno, func = "(unknown file)", 0, "(unknown function)"
+ if exc_info:
+ if isinstance(exc_info, BaseException):
+ exc_info = (type(exc_info), exc_info, exc_info.__traceback__)
+ elif not isinstance(exc_info, tuple):
+ exc_info = sys.exc_info()
+ record = self.makeRecord(self.name, level, fn, lno, msg, args,
+ exc_info, func, extra, sinfo)
+ self.handle(record)
+
+ def handle(self, record):
+ """
+ Call the handlers for the specified record.
+
+ This method is used for unpickled records received from a socket, as
+ well as those created locally. Logger-level filtering is applied.
+ """
+ if (not self.disabled) and self.filter(record):
+ self.callHandlers(record)
+
+ def addHandler(self, hdlr):
+ """
+ Add the specified handler to this logger.
+ """
+ _acquireLock()
+ try:
+ if not (hdlr in self.handlers):
+ self.handlers.append(hdlr)
+ finally:
+ _releaseLock()
+
+ def removeHandler(self, hdlr):
+ """
+ Remove the specified handler from this logger.
+ """
+ _acquireLock()
+ try:
+ if hdlr in self.handlers:
+ self.handlers.remove(hdlr)
+ finally:
+ _releaseLock()
+
+ def hasHandlers(self):
+ """
+ See if this logger has any handlers configured.
+
+ Loop through all handlers for this logger and its parents in the
+ logger hierarchy. Return True if a handler was found, else False.
+ Stop searching up the hierarchy whenever a logger with the "propagate"
+ attribute set to zero is found - that will be the last logger which
+ is checked for the existence of handlers.
+ """
+ c = self
+ rv = False
+ while c:
+ if c.handlers:
+ rv = True
+ break
+ if not c.propagate:
+ break
+ else:
+ c = c.parent
+ return rv
+
+ def callHandlers(self, record):
+ """
+ Pass a record to all relevant handlers.
+
+ Loop through all handlers for this logger and its parents in the
+ logger hierarchy. If no handler was found, output a one-off error
+ message to sys.stderr. Stop searching up the hierarchy whenever a
+ logger with the "propagate" attribute set to zero is found - that
+ will be the last logger whose handlers are called.
+ """
+ c = self
+ found = 0
+ while c:
+ for hdlr in c.handlers:
+ found = found + 1
+ if record.levelno >= hdlr.level:
+ hdlr.handle(record)
+ if not c.propagate:
+ c = None #break out
+ else:
+ c = c.parent
+ if (found == 0):
+ if lastResort:
+ if record.levelno >= lastResort.level:
+ lastResort.handle(record)
+ elif raiseExceptions and not self.manager.emittedNoHandlerWarning:
+ sys.stderr.write("No handlers could be found for logger"
+ " \"%s\"\n" % self.name)
+ self.manager.emittedNoHandlerWarning = True
+
+ def getEffectiveLevel(self):
+ """
+ Get the effective level for this logger.
+
+ Loop through this logger and its parents in the logger hierarchy,
+ looking for a non-zero logging level. Return the first one found.
+ """
+ logger = self
+ while logger:
+ if logger.level:
+ return logger.level
+ logger = logger.parent
+ return NOTSET
+
+ def isEnabledFor(self, level):
+ """
+ Is this logger enabled for level 'level'?
+ """
+ if self.manager.disable >= level:
+ return False
+ return level >= self.getEffectiveLevel()
+
+ def getChild(self, suffix):
+ """
+ Get a logger which is a descendant to this one.
+
+ This is a convenience method, such that
+
+ logging.getLogger('abc').getChild('def.ghi')
+
+ is the same as
+
+ logging.getLogger('abc.def.ghi')
+
+ It's useful, for example, when the parent logger is named using
+ __name__ rather than a literal string.
+ """
+ if self.root is not self:
+ suffix = '.'.join((self.name, suffix))
+ return self.manager.getLogger(suffix)
+
+ def __repr__(self):
+ level = getLevelName(self.getEffectiveLevel())
+ return '<%s %s (%s)>' % (self.__class__.__name__, self.name, level)
+
+
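+# Usage sketch (illustrative): loggers are normally obtained via getLogger()
+# rather than constructed directly, and getChild() builds dotted child names
+# relative to the parent.
+#
+#   log = getLogger('app')
+#   log.getChild('db') is getLogger('app.db')   # -> True
+#   log.setLevel(INFO)
+#   log.isEnabledFor(DEBUG)                     # -> False
+#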
+class RootLogger(Logger):
+ """
+ A root logger is not that different to any other logger, except that
+ it must have a logging level and there is only one instance of it in
+ the hierarchy.
+ """
+ def __init__(self, level):
+ """
+ Initialize the logger with the name "root".
+ """
+ Logger.__init__(self, "root", level)
+
+_loggerClass = Logger
+
+class LoggerAdapter(object):
+ """
+ An adapter for loggers which makes it easier to specify contextual
+ information in logging output.
+ """
+
+ def __init__(self, logger, extra):
+ """
+ Initialize the adapter with a logger and a dict-like object which
+ provides contextual information. This constructor signature allows
+ easy stacking of LoggerAdapters, if so desired.
+
+ You can effectively pass keyword arguments as shown in the
+ following example:
+
+ adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))
+ """
+ self.logger = logger
+ self.extra = extra
+
+ def process(self, msg, kwargs):
+ """
+ Process the logging message and keyword arguments passed in to
+ a logging call to insert contextual information. You can either
+ manipulate the message itself, the keyword args or both. Return
+ the message and kwargs modified (or not) to suit your needs.
+
+ Normally, you'll only need to override this one method in a
+ LoggerAdapter subclass for your specific needs.
+ """
+ kwargs["extra"] = self.extra
+ return msg, kwargs
+
+ #
+ # Boilerplate convenience methods
+ #
+ def debug(self, msg, *args, **kwargs):
+ """
+ Delegate a debug call to the underlying logger.
+ """
+ self.log(DEBUG, msg, *args, **kwargs)
+
+ def info(self, msg, *args, **kwargs):
+ """
+ Delegate an info call to the underlying logger.
+ """
+ self.log(INFO, msg, *args, **kwargs)
+
+ def warning(self, msg, *args, **kwargs):
+ """
+ Delegate a warning call to the underlying logger.
+ """
+ self.log(WARNING, msg, *args, **kwargs)
+
+ def warn(self, msg, *args, **kwargs):
+ warnings.warn("The 'warn' method is deprecated, "
+ "use 'warning' instead", DeprecationWarning, 2)
+ self.warning(msg, *args, **kwargs)
+
+ def error(self, msg, *args, **kwargs):
+ """
+ Delegate an error call to the underlying logger.
+ """
+ self.log(ERROR, msg, *args, **kwargs)
+
+ def exception(self, msg, *args, exc_info=True, **kwargs):
+ """
+ Delegate an exception call to the underlying logger.
+ """
+ self.log(ERROR, msg, *args, exc_info=exc_info, **kwargs)
+
+ def critical(self, msg, *args, **kwargs):
+ """
+ Delegate a critical call to the underlying logger.
+ """
+ self.log(CRITICAL, msg, *args, **kwargs)
+
+ def log(self, level, msg, *args, **kwargs):
+ """
+ Delegate a log call to the underlying logger, after adding
+ contextual information from this adapter instance.
+ """
+ if self.isEnabledFor(level):
+ msg, kwargs = self.process(msg, kwargs)
+ self.logger.log(level, msg, *args, **kwargs)
+
+ def isEnabledFor(self, level):
+ """
+ Is this logger enabled for level 'level'?
+ """
+ if self.logger.manager.disable >= level:
+ return False
+ return level >= self.getEffectiveLevel()
+
+ def setLevel(self, level):
+ """
+ Set the specified level on the underlying logger.
+ """
+ self.logger.setLevel(level)
+
+ def getEffectiveLevel(self):
+ """
+ Get the effective level for the underlying logger.
+ """
+ return self.logger.getEffectiveLevel()
+
+ def hasHandlers(self):
+ """
+ See if the underlying logger has any handlers.
+ """
+ return self.logger.hasHandlers()
+
+ def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
+ """
+ Low-level log implementation, proxied to allow nested logger adapters.
+ """
+ return self.logger._log(
+ level,
+ msg,
+ args,
+ exc_info=exc_info,
+ extra=extra,
+ stack_info=stack_info,
+ )
+
+ @property
+ def manager(self):
+ return self.logger.manager
+
+ @manager.setter
+ def manager(self, value):
+ self.logger.manager = value
+
+ @property
+ def name(self):
+ return self.logger.name
+
+ def __repr__(self):
+ logger = self.logger
+ level = getLevelName(logger.getEffectiveLevel())
+ return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
+
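+# Usage sketch (illustrative; the key and value are hypothetical): the
+# adapter injects its 'extra' mapping into every call, so the keys become
+# attributes on the emitted LogRecord and can be referenced from a format
+# string such as '%(session)s'.
+#
+#   adapter = LoggerAdapter(getLogger('demo'), {'session': 'abc123'})
+#   adapter.info('user logged in')    # record.session == 'abc123'
+#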
+root = RootLogger(WARNING)
+Logger.root = root
+Logger.manager = Manager(Logger.root)
+
+#---------------------------------------------------------------------------
+# Configuration classes and functions
+#---------------------------------------------------------------------------
+
+def basicConfig(**kwargs):
+ """
+ Do basic configuration for the logging system.
+
+ This function does nothing if the root logger already has handlers
+ configured. It is a convenience method intended for use by simple scripts
+ to do one-shot configuration of the logging package.
+
+ The default behaviour is to create a StreamHandler which writes to
+ sys.stderr, set a formatter using the BASIC_FORMAT format string, and
+ add the handler to the root logger.
+
+ A number of optional keyword arguments may be specified, which can alter
+ the default behaviour.
+
+ filename Specifies that a FileHandler be created, using the specified
+ filename, rather than a StreamHandler.
+ filemode Specifies the mode to open the file, if filename is specified
+ (if filemode is unspecified, it defaults to 'a').
+ format Use the specified format string for the handler.
+ datefmt Use the specified date/time format.
+ style If a format string is specified, use this to specify the
+ type of format string (possible values '%', '{', '$', for
+ %-formatting, :meth:`str.format` and :class:`string.Template`
+ - defaults to '%').
+ level Set the root logger level to the specified level.
+ stream Use the specified stream to initialize the StreamHandler. Note
+ that this argument is incompatible with 'filename' - if both
+ are present, 'stream' is ignored.
+ handlers If specified, this should be an iterable of already created
+ handlers, which will be added to the root handler. Any handler
+ in the list which does not have a formatter assigned will be
+ assigned the formatter created in this function.
+
+ Note that you could specify a stream created using open(filename, mode)
+ rather than passing the filename and mode in. However, it should be
+ remembered that StreamHandler does not close its stream (since it may be
+ using sys.stdout or sys.stderr), whereas FileHandler closes its stream
+ when the handler is closed.
+
+ .. versionchanged:: 3.2
+ Added the ``style`` parameter.
+
+ .. versionchanged:: 3.3
+ Added the ``handlers`` parameter. A ``ValueError`` is now thrown for
+ incompatible arguments (e.g. ``handlers`` specified together with
+ ``filename``/``filemode``, or ``filename``/``filemode`` specified
+ together with ``stream``, or ``handlers`` specified together with
+ ``stream``).
+ """
+ # Add thread safety in case someone mistakenly calls
+ # basicConfig() from multiple threads
+ _acquireLock()
+ try:
+ if len(root.handlers) == 0:
+ handlers = kwargs.pop("handlers", None)
+ if handlers is None:
+ if "stream" in kwargs and "filename" in kwargs:
+ raise ValueError("'stream' and 'filename' should not be "
+ "specified together")
+ else:
+ if "stream" in kwargs or "filename" in kwargs:
+ raise ValueError("'stream' or 'filename' should not be "
+ "specified together with 'handlers'")
+ if handlers is None:
+ filename = kwargs.pop("filename", None)
+ mode = kwargs.pop("filemode", 'a')
+ if filename:
+ h = FileHandler(filename, mode)
+ else:
+ stream = kwargs.pop("stream", None)
+ h = StreamHandler(stream)
+ handlers = [h]
+ dfs = kwargs.pop("datefmt", None)
+ style = kwargs.pop("style", '%')
+ if style not in _STYLES:
+ raise ValueError('Style must be one of: %s' % ','.join(
+ _STYLES.keys()))
+ fs = kwargs.pop("format", _STYLES[style][1])
+ fmt = Formatter(fs, dfs, style)
+ for h in handlers:
+ if h.formatter is None:
+ h.setFormatter(fmt)
+ root.addHandler(h)
+ level = kwargs.pop("level", None)
+ if level is not None:
+ root.setLevel(level)
+ if kwargs:
+ keys = ', '.join(kwargs.keys())
+ raise ValueError('Unrecognised argument(s): %s' % keys)
+ finally:
+ _releaseLock()
+
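+# Usage sketch (illustrative): one-shot configuration of the root logger with
+# a level and a %-style format string.
+#
+#   basicConfig(level=DEBUG,
+#               format='%(asctime)s %(levelname)s %(name)s: %(message)s')
+#   getLogger('demo').debug('configured')
+#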
+#---------------------------------------------------------------------------
+# Utility functions at module level.
+# Basically delegate everything to the root logger.
+#---------------------------------------------------------------------------
+
+def getLogger(name=None):
+ """
+ Return a logger with the specified name, creating it if necessary.
+
+ If no name is specified, return the root logger.
+ """
+ if name:
+ return Logger.manager.getLogger(name)
+ else:
+ return root
+
+def critical(msg, *args, **kwargs):
+ """
+ Log a message with severity 'CRITICAL' on the root logger. If the logger
+ has no handlers, call basicConfig() to add a console handler with a
+ pre-defined format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.critical(msg, *args, **kwargs)
+
+fatal = critical
+
+def error(msg, *args, **kwargs):
+ """
+ Log a message with severity 'ERROR' on the root logger. If the logger has
+ no handlers, call basicConfig() to add a console handler with a pre-defined
+ format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.error(msg, *args, **kwargs)
+
+def exception(msg, *args, exc_info=True, **kwargs):
+ """
+ Log a message with severity 'ERROR' on the root logger, with exception
+ information. If the logger has no handlers, basicConfig() is called to add
+ a console handler with a pre-defined format.
+ """
+ error(msg, *args, exc_info=exc_info, **kwargs)
+
+def warning(msg, *args, **kwargs):
+ """
+ Log a message with severity 'WARNING' on the root logger. If the logger has
+ no handlers, call basicConfig() to add a console handler with a pre-defined
+ format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.warning(msg, *args, **kwargs)
+
+def warn(msg, *args, **kwargs):
+ warnings.warn("The 'warn' function is deprecated, "
+ "use 'warning' instead", DeprecationWarning, 2)
+ warning(msg, *args, **kwargs)
+
+def info(msg, *args, **kwargs):
+ """
+ Log a message with severity 'INFO' on the root logger. If the logger has
+ no handlers, call basicConfig() to add a console handler with a pre-defined
+ format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.info(msg, *args, **kwargs)
+
+def debug(msg, *args, **kwargs):
+ """
+ Log a message with severity 'DEBUG' on the root logger. If the logger has
+ no handlers, call basicConfig() to add a console handler with a pre-defined
+ format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.debug(msg, *args, **kwargs)
+
+def log(level, msg, *args, **kwargs):
+ """
+ Log 'msg % args' with the integer severity 'level' on the root logger. If
+ the logger has no handlers, call basicConfig() to add a console handler
+ with a pre-defined format.
+ """
+ if len(root.handlers) == 0:
+ basicConfig()
+ root.log(level, msg, *args, **kwargs)
+
+def disable(level):
+ """
+ Disable all logging calls of severity 'level' and below.
+ """
+ root.manager.disable = level
+
+def shutdown(handlerList=_handlerList):
+ """
+ Perform any cleanup actions in the logging system (e.g. flushing
+ buffers).
+
+ Should be called at application exit.
+ """
+ for wr in reversed(handlerList[:]):
+ #errors might occur, for example, if files are locked
+ #we just ignore them if raiseExceptions is not set
+ try:
+ h = wr()
+ if h:
+ try:
+ h.acquire()
+ h.flush()
+ h.close()
+ except (OSError, ValueError):
+ # Ignore errors which might be caused
+ # because handlers have been closed but
+ # references to them are still around at
+ # application exit.
+ pass
+ finally:
+ h.release()
+ except: # ignore everything, as we're shutting down
+ if raiseExceptions:
+ raise
+ #else, swallow
+
+#Let's try and shutdown automatically on application exit...
+import atexit
+atexit.register(shutdown)
+
+# Null handler
+
+class NullHandler(Handler):
+ """
+ This handler does nothing. It's intended to be used to avoid the
+ "No handlers could be found for logger XXX" one-off warning. This is
+ important for library code, which may contain code to log events. If a user
+ of the library does not configure logging, the one-off warning might be
+ produced; to avoid this, the library developer simply needs to instantiate
+ a NullHandler and add it to the top-level logger of the library module or
+ package.
+ """
+ def handle(self, record):
+ """Stub."""
+
+ def emit(self, record):
+ """Stub."""
+
+ def createLock(self):
+ self.lock = None
+
+# Warnings integration
+
+_warnings_showwarning = None
+
+def _showwarning(message, category, filename, lineno, file=None, line=None):
+ """
+ Implementation of showwarning which redirects to logging, which will first
+ check to see if the file parameter is None. If a file is specified, it will
+ delegate to the original warnings implementation of showwarning. Otherwise,
+ it will call warnings.formatwarning and will log the resulting string to a
+ warnings logger named "py.warnings" with level logging.WARNING.
+ """
+ if file is not None:
+ if _warnings_showwarning is not None:
+ _warnings_showwarning(message, category, filename, lineno, file, line)
+ else:
+ s = warnings.formatwarning(message, category, filename, lineno, line)
+ logger = getLogger("py.warnings")
+ if not logger.handlers:
+ logger.addHandler(NullHandler())
+ logger.warning("%s", s)
+
+def captureWarnings(capture):
+ """
+ If capture is true, redirect all warnings to the logging package.
+ If capture is False, ensure that warnings are not redirected to logging
+ but to their original destinations.
+ """
+ global _warnings_showwarning
+ if capture:
+ if _warnings_showwarning is None:
+ _warnings_showwarning = warnings.showwarning
+ warnings.showwarning = _showwarning
+ else:
+ if _warnings_showwarning is not None:
+ warnings.showwarning = _warnings_showwarning
+ _warnings_showwarning = None
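+
+# Usage sketch (illustrative): routing warnings.warn() output through the
+# 'py.warnings' logger and then restoring the original behaviour.
+#
+#   captureWarnings(True)
+#   warnings.warn('about to be logged at WARNING level')
+#   captureWarnings(False)
+#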
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
new file mode 100644
index 00000000..d1ffb774
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
@@ -0,0 +1,568 @@
+
+# Module 'ntpath' -- common operations on WinNT/Win95 and UEFI pathnames.
+#
+# Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
+# Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
+# This program and the accompanying materials are licensed and made available under
+# the terms and conditions of the BSD License that accompanies this distribution.
+# The full text of the license may be found at
+# http://opensource.org/licenses/bsd-license.
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+
+"""Common pathname manipulations, WindowsNT/95 and UEFI version.
+
+Instead of importing this module directly, import os and refer to this
+module as os.path.
+"""
+
+import os
+import sys
+import stat
+import genericpath
+import warnings
+
+from genericpath import *
+from genericpath import _unicode
+
+__all__ = ["normcase","isabs","join","splitdrive","split","splitext",
+ "basename","dirname","commonprefix","getsize","getmtime",
+ "getatime","getctime", "islink","exists","lexists","isdir","isfile",
+ "ismount","walk","expanduser","expandvars","normpath","abspath",
+ "splitunc","curdir","pardir","sep","pathsep","defpath","altsep",
+ "extsep","devnull","realpath","supports_unicode_filenames","relpath"]
+
+# strings representing various path-related bits and pieces
+curdir = '.'
+pardir = '..'
+extsep = '.'
+sep = '\\'
+pathsep = ';'
+altsep = '/'
+defpath = '.;C:\\bin'
+if 'ce' in sys.builtin_module_names:
+ defpath = '\\Windows'
+elif 'os2' in sys.builtin_module_names:
+ # OS/2 w/ VACPP
+ altsep = '/'
+devnull = 'nul'
+
+# Normalize the case of a pathname and map slashes to backslashes.
+# Other normalizations (such as optimizing '../' away) are not done
+# (this is done by normpath).
+
+def normcase(s):
+ """Normalize case of pathname.
+
+ Makes all characters lowercase and all slashes into backslashes."""
+ return s.replace("/", "\\").lower()
+
+
+# Return whether a path is absolute.
+# Trivial in Posix, harder on the Mac or MS-DOS.
+# For DOS it is absolute if it starts with a slash or backslash (current
+# volume), or if a pathname after the volume letter and colon / UNC resource
+# starts with a slash or backslash.
+
+def isabs(s):
+ """Test whether a path is absolute"""
+ s = splitdrive(s)[1]
+ return s != '' and s[:1] in '/\\'
+
+
+# Join two (or more) paths.
+def join(path, *paths):
+ """Join two or more pathname components, inserting "\\" as needed."""
+ result_drive, result_path = splitdrive(path)
+ for p in paths:
+ p_drive, p_path = splitdrive(p)
+ if p_path and p_path[0] in '\\/':
+ # Second path is absolute
+ if p_drive or not result_drive:
+ result_drive = p_drive
+ result_path = p_path
+ continue
+ elif p_drive and p_drive != result_drive:
+ if p_drive.lower() != result_drive.lower():
+ # Different drives => ignore the first path entirely
+ result_drive = p_drive
+ result_path = p_path
+ continue
+ # Same drive in different case
+ result_drive = p_drive
+ # Second path is relative to the first
+ if result_path and result_path[-1] not in '\\/':
+ result_path = result_path + '\\'
+ result_path = result_path + p_path
+ ## add separator between UNC and non-absolute path
+ if (result_path and result_path[0] not in '\\/' and
+ result_drive and result_drive[-1:] != ':'):
+ return result_drive + sep + result_path
+ return result_drive + result_path
+
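+# Illustrative examples (comments only; 'fs0:' is a hypothetical UEFI volume mapping):
+#   join('fs0:\\', 'efi', 'boot')  -> 'fs0:\\efi\\boot'
+#   join('fs0:\\efi', '\\boot')    -> 'fs0:\\boot'  (an absolute second component resets
+#                                                    the path but keeps the drive)
+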
+
+# Split a path in a drive specification (a drive letter followed by a
+# colon) and the path specification.
+# It is always true that drivespec + pathspec == p
+# NOTE: for UEFI (and even Windows) you can have multiple characters to the left
+# of the ':' for the device or drive spec. This is reflected in the modifications
+# to splitdrive() and splitunc().
+def splitdrive(p):
+ """Split a pathname into drive/UNC sharepoint and relative path specifiers.
+ Returns a 2-tuple (drive_or_unc, path); either part may be empty.
+
+ If you assign
+ result = splitdrive(p)
+ It is always true that:
+ result[0] + result[1] == p
+
+ If the path contained a drive letter, drive_or_unc will contain everything
+ up to and including the colon. e.g. splitdrive("c:/dir") returns ("c:", "/dir")
+
+ If the path contained a UNC path, the drive_or_unc will contain the host name
+ and share up to but not including the fourth directory separator character.
+ e.g. splitdrive("//host/computer/dir") returns ("//host/computer", "/dir")
+
+ Paths cannot contain both a drive letter and a UNC path.
+
+ """
+ if len(p) > 1:
+ normp = p.replace(altsep, sep)
+ if (normp[0:2] == sep*2) and (normp[2:3] != sep):
+ # is a UNC path:
+ # vvvvvvvvvvvvvvvvvvvv drive letter or UNC path
+ # \\machine\mountpoint\directory\etc\...
+ # directory ^^^^^^^^^^^^^^^
+ index = normp.find(sep, 2)
+ if index == -1:
+ return '', p
+ index2 = normp.find(sep, index + 1)
+ # a UNC path can't have two slashes in a row
+ # (after the initial two)
+ if index2 == index + 1:
+ return '', p
+ if index2 == -1:
+ index2 = len(p)
+ return p[:index2], p[index2:]
+ index = p.find(':')
+ if index != -1:
+ index = index + 1
+ return p[:index], p[index:]
+ return '', p
+
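+# Illustrative examples (comments only; 'fs0:' is a hypothetical UEFI volume mapping):
+#   splitdrive('fs0:/efi/boot')    -> ('fs0:', '/efi/boot')
+#   splitdrive('c:/dir')           -> ('c:', '/dir')
+#   splitdrive('//host/share/dir') -> ('//host/share', '/dir')
+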
+# Parse UNC paths
+def splitunc(p):
+ """Split a pathname into UNC mount point and relative path specifiers.
+
+ Return a 2-tuple (unc, rest); either part may be empty.
+ If unc is not empty, it has the form '//host/mount' (or similar
+ using backslashes). unc+rest is always the input path.
+ Paths containing drive letters never have an UNC part.
+ """
+ if ':' in p:
+ return '', p # Drive letter or device name present
+ firstTwo = p[0:2]
+ if firstTwo == '//' or firstTwo == '\\\\':
+ # is a UNC path:
+ # vvvvvvvvvvvvvvvvvvvv equivalent to drive letter
+ # \\machine\mountpoint\directories...
+ # directory ^^^^^^^^^^^^^^^
+ normp = p.replace('\\', '/')
+ index = normp.find('/', 2)
+ if index <= 2:
+ return '', p
+ index2 = normp.find('/', index + 1)
+ # a UNC path can't have two slashes in a row
+ # (after the initial two)
+ if index2 == index + 1:
+ return '', p
+ if index2 == -1:
+ index2 = len(p)
+ return p[:index2], p[index2:]
+ return '', p
+
+
+# Split a path in head (everything up to the last '/') and tail (the
+# rest). After the trailing '/' is stripped, the invariant
+# join(head, tail) == p holds.
+# The resulting head won't end in '/' unless it is the root.
+
+def split(p):
+ """Split a pathname.
+
+ Return tuple (head, tail) where tail is everything after the final slash.
+ Either part may be empty."""
+
+ d, p = splitdrive(p)
+ # set i to index beyond p's last slash
+ i = len(p)
+ while i and p[i-1] not in '/\\':
+ i = i - 1
+ head, tail = p[:i], p[i:] # now tail has no slashes
+ # remove trailing slashes from head, unless it's all slashes
+ head2 = head
+ while head2 and head2[-1] in '/\\':
+ head2 = head2[:-1]
+ head = head2 or head
+ return d + head, tail
+
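+# Illustrative example (comments only; 'fs0:' is a hypothetical UEFI volume mapping):
+#   split('fs0:\\efi\\boot\\bootx64.efi') -> ('fs0:\\efi\\boot', 'bootx64.efi')
+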
+
+# Split a path in root and extension.
+# The extension is everything starting at the last dot in the last
+# pathname component; the root is everything before that.
+# It is always true that root + ext == p.
+
+def splitext(p):
+ return genericpath._splitext(p, sep, altsep, extsep)
+splitext.__doc__ = genericpath._splitext.__doc__
+
+
+# Return the tail (basename) part of a path.
+
+def basename(p):
+ """Returns the final component of a pathname"""
+ return split(p)[1]
+
+
+# Return the head (dirname) part of a path.
+
+def dirname(p):
+ """Returns the directory component of a pathname"""
+ return split(p)[0]
+
+# Is a path a symbolic link?
+# This will always return false on systems where posix.lstat doesn't exist.
+
+def islink(path):
+ """Test for symbolic link.
+ On WindowsNT/95 and OS/2 always returns false
+ """
+ return False
+
+# alias lexists to exists
+lexists = exists
+
+# Is a path a mount point? Either a root (with or without drive letter)
+# or an UNC path with at most a / or \ after the mount point.
+
+def ismount(path):
+ """Test whether a path is a mount point (defined as root of drive)"""
+ unc, rest = splitunc(path)
+ if unc:
+ return rest in ("", "/", "\\")
+ p = splitdrive(path)[1]
+ return len(p) == 1 and p[0] in '/\\'
+
+
+# Directory tree walk.
+# For each directory under top (including top itself, but excluding
+# '.' and '..'), func(arg, dirname, filenames) is called, where
+# dirname is the name of the directory and filenames is the list
+# of files (and subdirectories etc.) in the directory.
+# The func may modify the filenames list, to implement a filter,
+# or to impose a different order of visiting.
+
+def walk(top, func, arg):
+ """Directory tree walk with callback function.
+
+ For each directory in the directory tree rooted at top (including top
+ itself, but excluding '.' and '..'), call func(arg, dirname, fnames).
+ dirname is the name of the directory, and fnames a list of the names of
+ the files and subdirectories in dirname (excluding '.' and '..'). func
+ may modify the fnames list in-place (e.g. via del or slice assignment),
+ and walk will only recurse into the subdirectories whose names remain in
+ fnames; this can be used to implement a filter, or to impose a specific
+ order of visiting. No semantics are defined for, or required of, arg,
+ beyond that arg is always passed to func. It can be used, e.g., to pass
+ a filename pattern, or a mutable object designed to accumulate
+ statistics. Passing None for arg is common."""
+ # warnings.warnpy3k() does not exist in Python 3; use a plain DeprecationWarning.
+ warnings.warn("os.path.walk is removed in Python 3 in favor of os.walk.",
+ DeprecationWarning, stacklevel=2)
+ try:
+ names = os.listdir(top)
+ except os.error:
+ return
+ func(arg, top, names)
+ for name in names:
+ name = join(top, name)
+ if isdir(name):
+ walk(name, func, arg)
+
+
+# Expand paths beginning with '~' or '~user'.
+# '~' means $HOME; '~user' means that user's home directory.
+# If the path doesn't begin with '~', or if the user or $HOME is unknown,
+# the path is returned unchanged (leaving error reporting to whatever
+# function is called with the expanded path as argument).
+# See also module 'glob' for expansion of *, ? and [...] in pathnames.
+# (A function should also be defined to do full *sh-style environment
+# variable expansion.)
+
+def expanduser(path):
+ """Expand ~ and ~user constructs.
+
+ If user or $HOME is unknown, do nothing."""
+ if path[:1] != '~':
+ return path
+ i, n = 1, len(path)
+ while i < n and path[i] not in '/\\':
+ i = i + 1
+
+ if 'HOME' in os.environ:
+ userhome = os.environ['HOME']
+ elif 'USERPROFILE' in os.environ:
+ userhome = os.environ['USERPROFILE']
+ elif not 'HOMEPATH' in os.environ:
+ return path
+ else:
+ try:
+ drive = os.environ['HOMEDRIVE']
+ except KeyError:
+ drive = ''
+ userhome = join(drive, os.environ['HOMEPATH'])
+
+ if i != 1: #~user
+ userhome = join(dirname(userhome), path[1:i])
+
+ return userhome + path[i:]
+
+
+# Expand paths containing shell variable substitutions.
+# The following rules apply:
+# - no expansion within single quotes
+# - '$$' is translated into '$'
+# - '%%' is translated into '%' if '%%' are not seen in %var1%%var2%
+# - ${varname} is accepted.
+# - $varname is accepted.
+# - %varname% is accepted.
+# - varnames can be made out of letters, digits and the characters '_-'
+# (though is not verified in the ${varname} and %varname% cases)
+# XXX With COMMAND.COM you can use any characters in a variable name,
+# XXX except '^|<>='.
+
+def expandvars(path):
+ """Expand shell variables of the forms $var, ${var} and %var%.
+
+ Unknown variables are left unchanged."""
+ if '$' not in path and '%' not in path:
+ return path
+ import string
+ varchars = string.ascii_letters + string.digits + '_-'
+ if isinstance(path, _unicode):
+ encoding = sys.getfilesystemencoding()
+ def getenv(var):
+ return os.environ[var.encode(encoding)].decode(encoding)
+ else:
+ def getenv(var):
+ return os.environ[var]
+ res = ''
+ index = 0
+ pathlen = len(path)
+ while index < pathlen:
+ c = path[index]
+ if c == '\'': # no expansion within single quotes
+ path = path[index + 1:]
+ pathlen = len(path)
+ try:
+ index = path.index('\'')
+ res = res + '\'' + path[:index + 1]
+ except ValueError:
+ res = res + c + path
+ index = pathlen - 1
+ elif c == '%': # variable or '%'
+ if path[index + 1:index + 2] == '%':
+ res = res + c
+ index = index + 1
+ else:
+ path = path[index+1:]
+ pathlen = len(path)
+ try:
+ index = path.index('%')
+ except ValueError:
+ res = res + '%' + path
+ index = pathlen - 1
+ else:
+ var = path[:index]
+ try:
+ res = res + getenv(var)
+ except KeyError:
+ res = res + '%' + var + '%'
+ elif c == '$': # variable or '$$'
+ if path[index + 1:index + 2] == '$':
+ res = res + c
+ index = index + 1
+ elif path[index + 1:index + 2] == '{':
+ path = path[index+2:]
+ pathlen = len(path)
+ try:
+ index = path.index('}')
+ var = path[:index]
+ try:
+ res = res + getenv(var)
+ except KeyError:
+ res = res + '${' + var + '}'
+ except ValueError:
+ res = res + '${' + path
+ index = pathlen - 1
+ else:
+ var = ''
+ index = index + 1
+ c = path[index:index + 1]
+ while c != '' and c in varchars:
+ var = var + c
+ index = index + 1
+ c = path[index:index + 1]
+ try:
+ res = res + getenv(var)
+ except KeyError:
+ res = res + '$' + var
+ if c != '':
+ index = index - 1
+ else:
+ res = res + c
+ index = index + 1
+ return res
+
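+# Illustrative examples (comments only; results depend on the environment):
+#   expandvars('%path%')       -> the value of 'path', if that exact variable is set
+#   expandvars('${HOME}/bin')  -> e.g. '/home/user/bin' when HOME is defined
+#   expandvars('$undefined')   -> '$undefined' (unknown variables are left unchanged)
+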
+
+# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A\B.
+# Previously, this function also truncated pathnames to 8+3 format,
+# but as this module is called "ntpath", that's obviously wrong!
+
+def normpath(path):
+ """Normalize path, eliminating double slashes, etc."""
+ # Preserve unicode (if path is unicode)
+ backslash, dot = (u'\\', u'.') if isinstance(path, _unicode) else ('\\', '.')
+ if path.startswith(('\\\\.\\', '\\\\?\\')):
+ # in the case of paths with these prefixes:
+ # \\.\ -> device names
+ # \\?\ -> literal paths
+ # do not do any normalization, but return the path unchanged
+ return path
+ path = path.replace("/", "\\")
+ prefix, path = splitdrive(path)
+ # We need to be careful here. If the prefix is empty, and the path starts
+ # with a backslash, it could either be an absolute path on the current
+ # drive (\dir1\dir2\file) or a UNC filename (\\server\mount\dir1\file). It
+ # is therefore imperative NOT to collapse multiple backslashes blindly in
+ # that case.
+ # The code below preserves multiple backslashes when there is no drive
+ # letter. This means that the invalid filename \\\a\b is preserved
+ # unchanged, where a\\\b is normalised to a\b. It's not clear that there
+ # is any better behaviour for such edge cases.
+ if prefix == '':
+ # No drive letter - preserve initial backslashes
+ while path[:1] == "\\":
+ prefix = prefix + backslash
+ path = path[1:]
+ else:
+ # We have a drive letter - collapse initial backslashes
+ if path.startswith("\\"):
+ prefix = prefix + backslash
+ path = path.lstrip("\\")
+ comps = path.split("\\")
+ i = 0
+ while i < len(comps):
+ if comps[i] in ('.', ''):
+ del comps[i]
+ elif comps[i] == '..':
+ if i > 0 and comps[i-1] != '..':
+ del comps[i-1:i+1]
+ i -= 1
+ elif i == 0 and prefix.endswith("\\"):
+ del comps[i]
+ else:
+ i += 1
+ else:
+ i += 1
+ # If the path is now empty, substitute '.'
+ if not prefix and not comps:
+ comps.append(dot)
+ return prefix + backslash.join(comps)
+
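+# Illustrative examples (comments only; 'fs0:' is a hypothetical UEFI volume mapping):
+#   normpath('fs0:/efi/../boot/./bootx64.efi') -> 'fs0:\\boot\\bootx64.efi'
+#   normpath('A//B/./C/..')                    -> 'A\\B'
+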
+
+# Return an absolute path.
+try:
+ from nt import _getfullpathname
+
+except ImportError: # not running on Windows - mock up something sensible
+ def abspath(path):
+ """Return the absolute version of a path."""
+ if not isabs(path):
+ if isinstance(path, _unicode):
+ cwd = os.getcwdu()
+ else:
+ cwd = os.getcwd()
+ path = join(cwd, path)
+ return normpath(path)
+
+else: # use native Windows method on Windows
+ def abspath(path):
+ """Return the absolute version of a path."""
+
+ if path: # Empty path must return current working directory.
+ try:
+ path = _getfullpathname(path)
+ except WindowsError:
+ pass # Bad path - return unchanged.
+ elif isinstance(path, _unicode):
+ path = os.getcwdu()
+ else:
+ path = os.getcwd()
+ return normpath(path)
+
+# realpath is a no-op on systems without islink support
+realpath = abspath
+# Win9x family and earlier have no Unicode filename support.
+supports_unicode_filenames = (hasattr(sys, "getwindowsversion") and
+ sys.getwindowsversion()[3] >= 2)
+
+def _abspath_split(path):
+ abs = abspath(normpath(path))
+ prefix, rest = splitunc(abs)
+ is_unc = bool(prefix)
+ if not is_unc:
+ prefix, rest = splitdrive(abs)
+ return is_unc, prefix, [x for x in rest.split(sep) if x]
+
+def relpath(path, start=curdir):
+ """Return a relative version of a path"""
+
+ if not path:
+ raise ValueError("no path specified")
+
+ start_is_unc, start_prefix, start_list = _abspath_split(start)
+ path_is_unc, path_prefix, path_list = _abspath_split(path)
+
+ if path_is_unc ^ start_is_unc:
+ raise ValueError("Cannot mix UNC and non-UNC paths (%s and %s)"
+ % (path, start))
+ if path_prefix.lower() != start_prefix.lower():
+ if path_is_unc:
+ raise ValueError("path is on UNC root %s, start on UNC root %s"
+ % (path_prefix, start_prefix))
+ else:
+ raise ValueError("path is on drive %s, start on drive %s"
+ % (path_prefix, start_prefix))
+ # Work out how much of the filepath is shared by start and path.
+ i = 0
+ for e1, e2 in zip(start_list, path_list):
+ if e1.lower() != e2.lower():
+ break
+ i += 1
+
+ rel_list = [pardir] * (len(start_list)-i) + path_list[i:]
+ if not rel_list:
+ return curdir
+ return join(*rel_list)
+
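+# Illustrative example (comments only; both arguments are already absolute here):
+#   relpath('fs0:\\efi\\boot\\bootx64.efi', 'fs0:\\efi') -> 'boot\\bootx64.efi'
+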
+try:
+ # The genericpath.isdir implementation uses os.stat and checks the mode
+ # attribute to tell whether or not the path is a directory.
+ # This is overkill on Windows - just pass the path to GetFileAttributes
+ # and check the attribute from there.
+ from nt import _isdir as isdir
+except ImportError:
+ # Use genericpath.isdir as imported above.
+ pass
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
new file mode 100644
index 00000000..b163199f
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
@@ -0,0 +1,792 @@
+
+# Module 'os' -- OS routines for NT, Posix, or UEFI depending on what system we're on.
+#
+# Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
+# Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
+# This program and the accompanying materials are licensed and made available under
+# the terms and conditions of the BSD License that accompanies this distribution.
+# The full text of the license may be found at
+# http://opensource.org/licenses/bsd-license.
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+r"""OS routines for NT, Posix, or UEFI depending on what system we're on.
+
+This exports:
+ - all functions from edk2, posix, nt, os2, or ce, e.g. unlink, stat, etc.
+ - os.path is one of the modules uefipath, posixpath, or ntpath
+ - os.name is 'edk2', 'posix', 'nt', 'os2', 'ce' or 'riscos'
+ - os.curdir is a string representing the current directory ('.' or ':')
+ - os.pardir is a string representing the parent directory ('..' or '::')
+ - os.sep is the (or a most common) pathname separator ('/' or ':' or '\\')
+ - os.extsep is the extension separator ('.' or '/')
+ - os.altsep is the alternate pathname separator (None or '/')
+ - os.pathsep is the component separator used in $PATH etc
+ - os.linesep is the line separator in text files ('\r' or '\n' or '\r\n')
+ - os.defpath is the default search path for executables
+ - os.devnull is the file path of the null device ('/dev/null', etc.)
+
+Programs that import and use 'os' stand a better chance of being
+portable between different platforms. Of course, they must then
+only use functions that are defined by all platforms (e.g., unlink
+and opendir), and leave all pathname manipulation to os.path
+(e.g., split and join).
+"""
+
+#'
+
+import sys, errno
+
+_names = sys.builtin_module_names
+
+# Note: more names are added to __all__ later.
+__all__ = ["altsep", "curdir", "pardir", "sep", "extsep", "pathsep", "linesep",
+ "defpath", "name", "path", "devnull",
+ "SEEK_SET", "SEEK_CUR", "SEEK_END"]
+
+def _get_exports_list(module):
+ try:
+ return list(module.__all__)
+ except AttributeError:
+ return [n for n in dir(module) if n[0] != '_']
+
+if 'posix' in _names:
+ name = 'posix'
+ linesep = '\n'
+ from posix import *
+ try:
+ from posix import _exit
+ except ImportError:
+ pass
+ import posixpath as path
+
+ import posix
+ __all__.extend(_get_exports_list(posix))
+ del posix
+
+elif 'nt' in _names:
+ name = 'nt'
+ linesep = '\r\n'
+ from nt import *
+ try:
+ from nt import _exit
+ except ImportError:
+ pass
+ import ntpath as path
+
+ import nt
+ __all__.extend(_get_exports_list(nt))
+ del nt
+
+elif 'os2' in _names:
+ name = 'os2'
+ linesep = '\r\n'
+ from os2 import *
+ try:
+ from os2 import _exit
+ except ImportError:
+ pass
+ if sys.version.find('EMX GCC') == -1:
+ import ntpath as path
+ else:
+ import os2emxpath as path
+ from _emx_link import link
+
+ import os2
+ __all__.extend(_get_exports_list(os2))
+ del os2
+
+elif 'ce' in _names:
+ name = 'ce'
+ linesep = '\r\n'
+ from ce import *
+ try:
+ from ce import _exit
+ except ImportError:
+ pass
+ # We can use the standard Windows path.
+ import ntpath as path
+
+ import ce
+ __all__.extend(_get_exports_list(ce))
+ del ce
+
+elif 'riscos' in _names:
+ name = 'riscos'
+ linesep = '\n'
+ from riscos import *
+ try:
+ from riscos import _exit
+ except ImportError:
+ pass
+ import riscospath as path
+
+ import riscos
+ __all__.extend(_get_exports_list(riscos))
+ del riscos
+
+elif 'edk2' in _names:
+ name = 'edk2'
+ linesep = '\n'
+ from edk2 import *
+ try:
+ from edk2 import _exit
+ except ImportError:
+ pass
+ import ntpath as path
+ path.defpath = '.;/efi/tools/'
+
+ import edk2
+ __all__.extend(_get_exports_list(edk2))
+ del edk2
+
+else:
+ raise ImportError('no os specific module found')
+
+sys.modules['os.path'] = path
+from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
+ devnull)
+
+del _names
+
+# Python uses fixed values for the SEEK_ constants; they are mapped
+# to native constants if necessary in posixmodule.c
+SEEK_SET = 0
+SEEK_CUR = 1
+SEEK_END = 2
+
+#'
+
+# Super directory utilities.
+# (Inspired by Eric Raymond; the doc strings are mostly his)
+
+def makedirs(name, mode=0o777):
+ """makedirs(path [, mode=0o777])
+
+ Super-mkdir; create a leaf directory and all intermediate ones.
+ Works like mkdir, except that any intermediate path segment (not
+ just the rightmost) will be created if it does not exist. This is
+ recursive.
+
+ """
+ head, tail = path.split(name)
+ if not tail:
+ head, tail = path.split(head)
+ if head and tail and not path.exists(head):
+ try:
+ makedirs(head, mode)
+ except OSError as e:
+ # be happy if someone already created the path
+ if e.errno != errno.EEXIST:
+ raise
+ if tail == curdir: # xxx/newdir/. exists if xxx/newdir exists
+ return
+ mkdir(name, mode)
+
+def removedirs(name):
+ """removedirs(path)
+
+ Super-rmdir; remove a leaf directory and all empty intermediate
+ ones. Works like rmdir except that, if the leaf directory is
+ successfully removed, directories corresponding to rightmost path
+ segments will be pruned away until either the whole path is
+ consumed or an error occurs. Errors during this latter phase are
+ ignored -- they generally mean that a directory was not empty.
+
+ """
+ rmdir(name)
+ head, tail = path.split(name)
+ if not tail:
+ head, tail = path.split(head)
+ while head and tail:
+ try:
+ rmdir(head)
+ except error:
+ break
+ head, tail = path.split(head)
+
+def renames(old, new):
+ """renames(old, new)
+
+ Super-rename; create directories as necessary and delete any left
+ empty. Works like rename, except creation of any intermediate
+ directories needed to make the new pathname good is attempted
+ first. After the rename, directories corresponding to rightmost
+ path segments of the old name will be pruned until either the
+ whole path is consumed or a nonempty directory is found.
+
+ Note: this function can fail with the new directory structure made
+ if you lack permissions needed to unlink the leaf directory or
+ file.
+
+ """
+ head, tail = path.split(new)
+ if head and tail and not path.exists(head):
+ makedirs(head)
+ rename(old, new)
+ head, tail = path.split(old)
+ if head and tail:
+ try:
+ removedirs(head)
+ except error:
+ pass
+
+__all__.extend(["makedirs", "removedirs", "renames"])
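+
+# Minimal usage sketch (illustrative only; 'fs0:' is a hypothetical mapped volume):
+#   os.makedirs('fs0:\\logs\\2021\\09')    # creates every missing intermediate directory
+#   os.removedirs('fs0:\\logs\\2021\\09')  # removes the leaf, then prunes empty parents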
+
+def walk(top, topdown=True, onerror=None, followlinks=False):
+ """Directory tree generator.
+
+ For each directory in the directory tree rooted at top (including top
+ itself, but excluding '.' and '..'), yields a 3-tuple
+
+ dirpath, dirnames, filenames
+
+ dirpath is a string, the path to the directory. dirnames is a list of
+ the names of the subdirectories in dirpath (excluding '.' and '..').
+ filenames is a list of the names of the non-directory files in dirpath.
+ Note that the names in the lists are just names, with no path components.
+ To get a full path (which begins with top) to a file or directory in
+ dirpath, do os.path.join(dirpath, name).
+
+ If optional arg 'topdown' is true or not specified, the triple for a
+ directory is generated before the triples for any of its subdirectories
+ (directories are generated top down). If topdown is false, the triple
+ for a directory is generated after the triples for all of its
+ subdirectories (directories are generated bottom up).
+
+ When topdown is true, the caller can modify the dirnames list in-place
+ (e.g., via del or slice assignment), and walk will only recurse into the
+ subdirectories whose names remain in dirnames; this can be used to prune the
+ search, or to impose a specific order of visiting. Modifying dirnames when
+ topdown is false is ineffective, since the directories in dirnames have
+ already been generated by the time dirnames itself is generated. No matter
+ the value of topdown, the list of subdirectories is retrieved before the
+ tuples for the directory and its subdirectories are generated.
+
+ By default errors from the os.listdir() call are ignored. If
+ optional arg 'onerror' is specified, it should be a function; it
+ will be called with one argument, an os.error instance. It can
+ report the error to continue with the walk, or raise the exception
+ to abort the walk. Note that the filename is available as the
+ filename attribute of the exception object.
+
+ By default, os.walk does not follow symbolic links to subdirectories on
+ systems that support them. In order to get this functionality, set the
+ optional argument 'followlinks' to true.
+
+ Caution: if you pass a relative pathname for top, don't change the
+ current working directory between resumptions of walk. walk never
+ changes the current directory, and assumes that the client doesn't
+ either.
+
+ Example:
+
+ import os
+ from os.path import join, getsize
+ for root, dirs, files in os.walk('python/Lib/email'):
+ print(root, "consumes", end=" ")
+ print(sum([getsize(join(root, name)) for name in files]), end=" ")
+ print("bytes in", len(files), "non-directory files")
+ if 'CVS' in dirs:
+ dirs.remove('CVS') # don't visit CVS directories
+
+ """
+
+ islink, join, isdir = path.islink, path.join, path.isdir
+
+ # We may not have read permission for top, in which case we can't
+ # get a list of the files the directory contains. os.path.walk
+ # always suppressed the exception then, rather than blow up for a
+ # minor reason when (say) a thousand readable directories are still
+ # left to visit. That logic is copied here.
+ try:
+ # Note that listdir and error are globals in this module due
+ # to earlier import-*.
+ names = listdir(top)
+ except error as err:
+ if onerror is not None:
+ onerror(err)
+ return
+
+ dirs, nondirs = [], []
+ for name in names:
+ if isdir(join(top, name)):
+ dirs.append(name)
+ else:
+ nondirs.append(name)
+
+ if topdown:
+ yield top, dirs, nondirs
+ for name in dirs:
+ new_path = join(top, name)
+ if followlinks or not islink(new_path):
+ for x in walk(new_path, topdown, onerror, followlinks):
+ yield x
+ if not topdown:
+ yield top, dirs, nondirs
+
+__all__.append("walk")
+
+# Make sure os.environ exists, at least
+try:
+ environ
+except NameError:
+ environ = {}
+
+def execl(file, *args):
+ """execl(file, *args)
+
+ Execute the executable file with argument list args, replacing the
+ current process. """
+ execv(file, args)
+
+def execle(file, *args):
+ """execle(file, *args, env)
+
+ Execute the executable file with argument list args and
+ environment env, replacing the current process. """
+ env = args[-1]
+ execve(file, args[:-1], env)
+
+def execlp(file, *args):
+ """execlp(file, *args)
+
+ Execute the executable file (which is searched for along $PATH)
+ with argument list args, replacing the current process. """
+ execvp(file, args)
+
+def execlpe(file, *args):
+ """execlpe(file, *args, env)
+
+ Execute the executable file (which is searched for along $PATH)
+ with argument list args and environment env, replacing the current
+ process. """
+ env = args[-1]
+ execvpe(file, args[:-1], env)
+
+def execvp(file, args):
+ """execvp(file, args)
+
+ Execute the executable file (which is searched for along $PATH)
+ with argument list args, replacing the current process.
+ args may be a list or tuple of strings. """
+ _execvpe(file, args)
+
+def execvpe(file, args, env):
+ """execvpe(file, args, env)
+
+ Execute the executable file (which is searched for along $PATH)
+ with argument list args and environment env , replacing the
+ current process.
+ args may be a list or tuple of strings. """
+ _execvpe(file, args, env)
+
+__all__.extend(["execl","execle","execlp","execlpe","execvp","execvpe"])
+
+def _execvpe(file, args, env=None):
+ if env is not None:
+ func = execve
+ argrest = (args, env)
+ else:
+ func = execv
+ argrest = (args,)
+ env = environ
+
+ head, tail = path.split(file)
+ if head:
+ func(file, *argrest)
+ return
+ if 'PATH' in env:
+ envpath = env['PATH']
+ else:
+ envpath = defpath
+ PATH = envpath.split(pathsep)
+ saved_exc = None
+ last_exc = None
+ for dir in PATH:
+ fullname = path.join(dir, file)
+ try:
+ func(fullname, *argrest)
+ except error as e:
+ # In Python 3 the except target is unbound after the block, so keep a reference.
+ last_exc = e
+ if (e.errno != errno.ENOENT and e.errno != errno.ENOTDIR
+ and saved_exc is None):
+ saved_exc = e
+ if saved_exc is not None:
+ raise saved_exc
+ raise last_exc
+
+# Change environ to automatically call putenv() if it exists
+try:
+ # This will fail if there's no putenv
+ putenv
+except NameError:
+ pass
+else:
+ import UserDict
+
+ # Fake unsetenv() for Windows
+ # not sure about os2 here but
+ # I'm guessing they are the same.
+
+ if name in ('os2', 'nt'):
+ def unsetenv(key):
+ putenv(key, "")
+
+ if name == "riscos":
+ # On RISC OS, all env access goes through getenv and putenv
+ from riscosenviron import _Environ
+ elif name in ('os2', 'nt'): # Where Env Var Names Must Be UPPERCASE
+ # But we store them as upper case
+ class _Environ(UserDict.IterableUserDict):
+ def __init__(self, environ):
+ UserDict.UserDict.__init__(self)
+ data = self.data
+ for k, v in environ.items():
+ data[k.upper()] = v
+ def __setitem__(self, key, item):
+ putenv(key, item)
+ self.data[key.upper()] = item
+ def __getitem__(self, key):
+ return self.data[key.upper()]
+ try:
+ unsetenv
+ except NameError:
+ def __delitem__(self, key):
+ del self.data[key.upper()]
+ else:
+ def __delitem__(self, key):
+ unsetenv(key)
+ del self.data[key.upper()]
+ def clear(self):
+ for key in self.data.keys():
+ unsetenv(key)
+ del self.data[key]
+ def pop(self, key, *args):
+ unsetenv(key)
+ return self.data.pop(key.upper(), *args)
+ def has_key(self, key):
+ return key.upper() in self.data
+ def __contains__(self, key):
+ return key.upper() in self.data
+ def get(self, key, failobj=None):
+ return self.data.get(key.upper(), failobj)
+ def update(self, dict=None, **kwargs):
+ if dict:
+ try:
+ keys = dict.keys()
+ except AttributeError:
+ # List of (key, value)
+ for k, v in dict:
+ self[k] = v
+ else:
+ # got keys
+ # cannot use items(), since mappings
+ # may not have them.
+ for k in keys:
+ self[k] = dict[k]
+ if kwargs:
+ self.update(kwargs)
+ def copy(self):
+ return dict(self)
+
+ else: # Where Env Var Names Can Be Mixed Case
+ class _Environ(UserDict.IterableUserDict):
+ def __init__(self, environ):
+ UserDict.UserDict.__init__(self)
+ self.data = environ
+ def __setitem__(self, key, item):
+ putenv(key, item)
+ self.data[key] = item
+ def update(self, dict=None, **kwargs):
+ if dict:
+ try:
+ keys = dict.keys()
+ except AttributeError:
+ # List of (key, value)
+ for k, v in dict:
+ self[k] = v
+ else:
+ # got keys
+ # cannot use items(), since mappings
+ # may not have them.
+ for k in keys:
+ self[k] = dict[k]
+ if kwargs:
+ self.update(kwargs)
+ try:
+ unsetenv
+ except NameError:
+ pass
+ else:
+ def __delitem__(self, key):
+ unsetenv(key)
+ del self.data[key]
+ def clear(self):
+ for key in self.data.keys():
+ unsetenv(key)
+ del self.data[key]
+ def pop(self, key, *args):
+ unsetenv(key)
+ return self.data.pop(key, *args)
+ def copy(self):
+ return dict(self)
+
+
+ environ = _Environ(environ)
+
+def getenv(key, default=None):
+ """Get an environment variable, return None if it doesn't exist.
+ The optional second argument can specify an alternate default."""
+ return environ.get(key, default)
+__all__.append("getenv")
+
+def _exists(name):
+ return name in globals()
+
+# Supply spawn*() (probably only for Unix)
+if _exists("fork") and not _exists("spawnv") and _exists("execv"):
+
+ P_WAIT = 0
+ P_NOWAIT = P_NOWAITO = 1
+
+ # XXX Should we support P_DETACH? I suppose it could fork()**2
+ # and close the std I/O streams. Also, P_OVERLAY is the same
+ # as execv*()?
+
+ def _spawnvef(mode, file, args, env, func):
+ # Internal helper; func is the exec*() function to use
+ pid = fork()
+ if not pid:
+ # Child
+ try:
+ if env is None:
+ func(file, args)
+ else:
+ func(file, args, env)
+ except:
+ _exit(127)
+ else:
+ # Parent
+ if mode == P_NOWAIT:
+ return pid # Caller is responsible for waiting!
+ while 1:
+ wpid, sts = waitpid(pid, 0)
+ if WIFSTOPPED(sts):
+ continue
+ elif WIFSIGNALED(sts):
+ return -WTERMSIG(sts)
+ elif WIFEXITED(sts):
+ return WEXITSTATUS(sts)
+ else:
+ raise error("Not stopped, signaled or exited???")
+
+ def spawnv(mode, file, args):
+ """spawnv(mode, file, args) -> integer
+
+Execute file with arguments from args in a subprocess.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return _spawnvef(mode, file, args, None, execv)
+
+ def spawnve(mode, file, args, env):
+ """spawnve(mode, file, args, env) -> integer
+
+Execute file with arguments from args in a subprocess with the
+specified environment.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return _spawnvef(mode, file, args, env, execve)
+
+ # Note: spawnvp[e] isn't currently supported on Windows
+
+ def spawnvp(mode, file, args):
+ """spawnvp(mode, file, args) -> integer
+
+Execute file (which is looked for along $PATH) with arguments from
+args in a subprocess.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return _spawnvef(mode, file, args, None, execvp)
+
+ def spawnvpe(mode, file, args, env):
+ """spawnvpe(mode, file, args, env) -> integer
+
+Execute file (which is looked for along $PATH) with arguments from
+args in a subprocess with the supplied environment.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return _spawnvef(mode, file, args, env, execvpe)
+
+if _exists("spawnv"):
+ # These aren't supplied by the basic Windows code
+ # but can be easily implemented in Python
+
+ def spawnl(mode, file, *args):
+ """spawnl(mode, file, *args) -> integer
+
+Execute file with arguments from args in a subprocess.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return spawnv(mode, file, args)
+
+ def spawnle(mode, file, *args):
+ """spawnle(mode, file, *args, env) -> integer
+
+Execute file with arguments from args in a subprocess with the
+supplied environment.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ env = args[-1]
+ return spawnve(mode, file, args[:-1], env)
+
+
+ __all__.extend(["spawnv", "spawnve", "spawnl", "spawnle",])
+
+
+if _exists("spawnvp"):
+ # At the moment, Windows doesn't implement spawnvp[e],
+ # so it won't have spawnlp[e] either.
+ def spawnlp(mode, file, *args):
+ """spawnlp(mode, file, *args) -> integer
+
+Execute file (which is looked for along $PATH) with arguments from
+args in a subprocess with the supplied environment.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ return spawnvp(mode, file, args)
+
+ def spawnlpe(mode, file, *args):
+ """spawnlpe(mode, file, *args, env) -> integer
+
+Execute file (which is looked for along $PATH) with arguments from
+args in a subprocess with the supplied environment.
+If mode == P_NOWAIT return the pid of the process.
+If mode == P_WAIT return the process's exit code if it exits normally;
+otherwise return -SIG, where SIG is the signal that killed it. """
+ env = args[-1]
+ return spawnvpe(mode, file, args[:-1], env)
+
+
+ __all__.extend(["spawnvp", "spawnvpe", "spawnlp", "spawnlpe",])
+
+
+# Supply popen2 etc. (for Unix)
+if _exists("fork"):
+ if not _exists("popen2"):
+ def popen2(cmd, mode="t", bufsize=-1):
+ """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
+ may be a sequence, in which case arguments will be passed directly to
+ the program without shell intervention (as with os.spawnv()). If 'cmd'
+ is a string it will be passed to the shell (as with os.system()). If
+ 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
+ file objects (child_stdin, child_stdout) are returned."""
+ import warnings
+ msg = "os.popen2 is deprecated. Use the subprocess module."
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ import subprocess
+ PIPE = subprocess.PIPE
+ p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
+ bufsize=bufsize, stdin=PIPE, stdout=PIPE,
+ close_fds=True)
+ return p.stdin, p.stdout
+ __all__.append("popen2")
+
+ if not _exists("popen3"):
+ def popen3(cmd, mode="t", bufsize=-1):
+ """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
+ may be a sequence, in which case arguments will be passed directly to
+ the program without shell intervention (as with os.spawnv()). If 'cmd'
+ is a string it will be passed to the shell (as with os.system()). If
+ 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
+ file objects (child_stdin, child_stdout, child_stderr) are returned."""
+ import warnings
+ msg = "os.popen3 is deprecated. Use the subprocess module."
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ import subprocess
+ PIPE = subprocess.PIPE
+ p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
+ bufsize=bufsize, stdin=PIPE, stdout=PIPE,
+ stderr=PIPE, close_fds=True)
+ return p.stdin, p.stdout, p.stderr
+ __all__.append("popen3")
+
+ if not _exists("popen4"):
+ def popen4(cmd, mode="t", bufsize=-1):
+ """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
+ may be a sequence, in which case arguments will be passed directly to
+ the program without shell intervention (as with os.spawnv()). If 'cmd'
+ is a string it will be passed to the shell (as with os.system()). If
+ 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
+ file objects (child_stdin, child_stdout_stderr) are returned."""
+ import warnings
+ msg = "os.popen4 is deprecated. Use the subprocess module."
+ warnings.warn(msg, DeprecationWarning, stacklevel=2)
+
+ import subprocess
+ PIPE = subprocess.PIPE
+ p = subprocess.Popen(cmd, shell=isinstance(cmd, basestring),
+ bufsize=bufsize, stdin=PIPE, stdout=PIPE,
+ stderr=subprocess.STDOUT, close_fds=True)
+ return p.stdin, p.stdout
+ __all__.append("popen4")
+
+#import copy_reg as _copy_reg
+
+def _make_stat_result(tup, dict):
+ return stat_result(tup, dict)
+
+def _pickle_stat_result(sr):
+ (type, args) = sr.__reduce__()
+ return (_make_stat_result, args)
+
+try:
+ _copy_reg.pickle(stat_result, _pickle_stat_result, _make_stat_result)
+except NameError: # stat_result may not exist
+ pass
+
+def _make_statvfs_result(tup, dict):
+ return statvfs_result(tup, dict)
+
+def _pickle_statvfs_result(sr):
+ (type, args) = sr.__reduce__()
+ return (_make_statvfs_result, args)
+
+try:
+ _copy_reg.pickle(statvfs_result, _pickle_statvfs_result,
+ _make_statvfs_result)
+except NameError: # statvfs_result may not exist
+ pass
+
+if not _exists("urandom"):
+ def urandom(n):
+ """urandom(n) -> str
+
+ Return a string of n random bytes suitable for cryptographic use.
+
+ """
+ if name != 'edk2':
+ try:
+ _urandomfd = open("/dev/urandom", O_RDONLY)
+ except (OSError, IOError):
+ raise NotImplementedError("/dev/urandom (or equivalent) not found")
+ try:
+ bs = b""
+ while n > len(bs):
+ bs += read(_urandomfd, n - len(bs))
+ finally:
+ close(_urandomfd)
+ else:
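+ # NOTE: no entropy source is available here; this fixed byte string is only a
+ # placeholder and is not suitable for cryptographic use.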
+ bs = b'/\xd3\x00\xa1\x11\x9b+\xef\x1dM-G\xd0\xa7\xd6v\x8f?o\xcaS\xd3aa\x03\xf8?b0b\xf2\xc3\xdek~\x19\xe0<\xbf\xe5! \xe23>\x04\x15\xa7u\x82\x0f\xf5~\xe0\xc3\xbe\x02\x17\x9a;\x90\xdaF\xa4\xb7\x9f\x05\x95}T^\x86b\x02b\xbe\xa8 ct\xbd\xd1>\tf\xe3\xf73\xeb\xae"\xdf\xea\xea\xa0I\xeb\xe2\xc6\xa5'
+ return bs[:n]
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
new file mode 100644
index 00000000..ec521ce7
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
@@ -0,0 +1,2686 @@
+#!/usr/bin/env python3
+"""Generate Python documentation in HTML or text for interactive use.
+
+At the Python interactive prompt, calling help(thing) on a Python object
+documents the object, and calling help() starts up an interactive
+help session.
+
+Or, at the shell command line outside of Python:
+
+Run "pydoc <name>" to show documentation on something. <name> may be
+the name of a function, module, package, or a dotted reference to a
+class or function within a module or module in a package. If the
+argument contains a path segment delimiter (e.g. slash on Unix,
+backslash on Windows) it is treated as the path to a Python source file.
+
+Run "pydoc -k <keyword>" to search for a keyword in the synopsis lines
+of all available modules.
+
+Run "pydoc -p <port>" to start an HTTP server on the given port on the
+local machine. Port number 0 can be used to get an arbitrary unused port.
+
+Run "pydoc -b" to start an HTTP server on an arbitrary unused port and
+open a Web browser to interactively browse documentation. The -p option
+can be used with the -b option to explicitly specify the server port.
+
+Run "pydoc -w <name>" to write out the HTML documentation for a module
+to a file named "<name>.html".
+
+Module docs for core modules are assumed to be in
+
+ https://docs.python.org/X.Y/library/
+
+This can be overridden by setting the PYTHONDOCS environment variable
+to a different URL or to a local directory containing the Library
+Reference Manual pages.
+"""
+__all__ = ['help']
+__author__ = "Ka-Ping Yee <ping@lfw.org>"
+__date__ = "26 February 2001"
+
+__credits__ = """Guido van Rossum, for an excellent programming language.
+Tommy Burnette, the original creator of manpy.
+Paul Prescod, for all his work on onlinehelp.
+Richard Chamberlain, for the first implementation of textdoc.
+"""
+
+# Known bugs that can't be fixed here:
+# - synopsis() cannot be prevented from clobbering existing
+# loaded modules.
+# - If the __file__ attribute on a module is a relative path and
+# the current directory is changed with os.chdir(), an incorrect
+# path will be displayed.
+
+import builtins
+import importlib._bootstrap
+import importlib._bootstrap_external
+import importlib.machinery
+import importlib.util
+import inspect
+import io
+import os
+import pkgutil
+import platform
+import re
+import sys
+import time
+import tokenize
+import urllib.parse
+import warnings
+from collections import deque
+from reprlib import Repr
+from traceback import format_exception_only
+
+
+# --------------------------------------------------------- common routines
+
+def pathdirs():
+ """Convert sys.path into a list of absolute, existing, unique paths."""
+ dirs = []
+ normdirs = []
+ for dir in sys.path:
+ dir = os.path.abspath(dir or '.')
+ normdir = os.path.normcase(dir)
+ if normdir not in normdirs and os.path.isdir(dir):
+ dirs.append(dir)
+ normdirs.append(normdir)
+ return dirs
+
+def getdoc(object):
+ """Get the doc string or comments for an object."""
+ result = inspect.getdoc(object) or inspect.getcomments(object)
+ return result and re.sub('^ *\n', '', result.rstrip()) or ''
+
+def splitdoc(doc):
+ """Split a doc string into a synopsis line (if any) and the rest."""
+ lines = doc.strip().split('\n')
+ if len(lines) == 1:
+ return lines[0], ''
+ elif len(lines) >= 2 and not lines[1].rstrip():
+ return lines[0], '\n'.join(lines[2:])
+ return '', '\n'.join(lines)
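+
+# Illustrative example (comment only):
+#   splitdoc('Synopsis line.\n\nRest of the doc string.')
+#       -> ('Synopsis line.', 'Rest of the doc string.')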
+
+def classname(object, modname):
+ """Get a class name and qualify it with a module name if necessary."""
+ name = object.__name__
+ if object.__module__ != modname:
+ name = object.__module__ + '.' + name
+ return name
+
+def isdata(object):
+ """Check if an object is of a type that probably means it's data."""
+ return not (inspect.ismodule(object) or inspect.isclass(object) or
+ inspect.isroutine(object) or inspect.isframe(object) or
+ inspect.istraceback(object) or inspect.iscode(object))
+
+def replace(text, *pairs):
+ """Do a series of global replacements on a string."""
+ while pairs:
+ text = pairs[1].join(text.split(pairs[0]))
+ pairs = pairs[2:]
+ return text
+
+def cram(text, maxlen):
+ """Omit part of a string if needed to make it fit in a maximum length."""
+ if len(text) > maxlen:
+ pre = max(0, (maxlen-3)//2)
+ post = max(0, maxlen-3-pre)
+ return text[:pre] + '...' + text[len(text)-post:]
+ return text
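+
+# Illustrative example (comment only):
+#   cram('abcdefghij', 7) -> 'ab...ij'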
+
+_re_stripid = re.compile(r' at 0x[0-9a-f]{6,16}(>+)$', re.IGNORECASE)
+def stripid(text):
+ """Remove the hexadecimal id from a Python object representation."""
+ # The behaviour of %p is implementation-dependent in terms of case.
+ return _re_stripid.sub(r'\1', text)
+
+def _is_some_method(obj):
+ return (inspect.isfunction(obj) or
+ inspect.ismethod(obj) or
+ inspect.isbuiltin(obj) or
+ inspect.ismethoddescriptor(obj))
+
+def _is_bound_method(fn):
+ """
+ Returns True if fn is a bound method, regardless of whether
+ fn was implemented in Python or in C.
+ """
+ if inspect.ismethod(fn):
+ return True
+ if inspect.isbuiltin(fn):
+ self = getattr(fn, '__self__', None)
+ return not (inspect.ismodule(self) or (self is None))
+ return False
+
+
+def allmethods(cl):
+ methods = {}
+ for key, value in inspect.getmembers(cl, _is_some_method):
+ methods[key] = 1
+ for base in cl.__bases__:
+ methods.update(allmethods(base)) # all your base are belong to us
+ for key in methods.keys():
+ methods[key] = getattr(cl, key)
+ return methods
+
+def _split_list(s, predicate):
+ """Split sequence s via predicate, and return pair ([true], [false]).
+
+ The return value is a 2-tuple of lists,
+ ([x for x in s if predicate(x)],
+ [x for x in s if not predicate(x)])
+ """
+
+ yes = []
+ no = []
+ for x in s:
+ if predicate(x):
+ yes.append(x)
+ else:
+ no.append(x)
+ return yes, no
+
+def visiblename(name, all=None, obj=None):
+ """Decide whether to show documentation on a variable."""
+ # Certain special names are redundant or internal.
+ # XXX Remove __initializing__?
+ if name in {'__author__', '__builtins__', '__cached__', '__credits__',
+ '__date__', '__doc__', '__file__', '__spec__',
+ '__loader__', '__module__', '__name__', '__package__',
+ '__path__', '__qualname__', '__slots__', '__version__'}:
+ return 0
+ # Private names are hidden, but special names are displayed.
+ if name.startswith('__') and name.endswith('__'): return 1
+ # Namedtuples have public fields and methods with a single leading underscore
+ if name.startswith('_') and hasattr(obj, '_fields'):
+ return True
+ if all is not None:
+ # only document that which the programmer exported in __all__
+ return name in all
+ else:
+ return not name.startswith('_')
+
+def classify_class_attrs(object):
+ """Wrap inspect.classify_class_attrs, with fixup for data descriptors."""
+ results = []
+ for (name, kind, cls, value) in inspect.classify_class_attrs(object):
+ if inspect.isdatadescriptor(value):
+ kind = 'data descriptor'
+ results.append((name, kind, cls, value))
+ return results
+
+def sort_attributes(attrs, object):
+ 'Sort the attrs list in-place by _fields and then alphabetically by name'
+ # This allows data descriptors to be ordered according
+ # to a _fields attribute if present.
+ fields = getattr(object, '_fields', [])
+ try:
+ field_order = {name : i-len(fields) for (i, name) in enumerate(fields)}
+ except TypeError:
+ field_order = {}
+ keyfunc = lambda attr: (field_order.get(attr[0], 0), attr[0])
+ attrs.sort(key=keyfunc)
+
+# ----------------------------------------------------- module manipulation
+
+def ispackage(path):
+ """Guess whether a path refers to a package directory."""
+ if os.path.isdir(path):
+ for ext in ('.py', '.pyc'):
+ if os.path.isfile(os.path.join(path, '__init__' + ext)):
+ return True
+ return False
+
+def source_synopsis(file):
+ line = file.readline()
+ while line[:1] == '#' or not line.strip():
+ line = file.readline()
+ if not line: break
+ line = line.strip()
+ if line[:4] == 'r"""': line = line[1:]
+ if line[:3] == '"""':
+ line = line[3:]
+ if line[-1:] == '\\': line = line[:-1]
+ while not line.strip():
+ line = file.readline()
+ if not line: break
+ result = line.split('"""')[0].strip()
+ else: result = None
+ return result
+
+def synopsis(filename, cache={}):
+ """Get the one-line summary out of a module file."""
+ mtime = os.stat(filename).st_mtime
+ lastupdate, result = cache.get(filename, (None, None))
+ if lastupdate is None or lastupdate < mtime:
+ # Look for binary suffixes first, falling back to source.
+ if filename.endswith(tuple(importlib.machinery.BYTECODE_SUFFIXES)):
+ loader_cls = importlib.machinery.SourcelessFileLoader
+ elif filename.endswith(tuple(importlib.machinery.EXTENSION_SUFFIXES)):
+ loader_cls = importlib.machinery.ExtensionFileLoader
+ else:
+ loader_cls = None
+ # Now handle the choice.
+ if loader_cls is None:
+ # Must be a source file.
+ try:
+ file = tokenize.open(filename)
+ except OSError:
+ # module can't be opened, so skip it
+ return None
+ # text modules can be directly examined
+ with file:
+ result = source_synopsis(file)
+ else:
+ # Must be a binary module, which has to be imported.
+ loader = loader_cls('__temp__', filename)
+ # XXX We probably don't need to pass in the loader here.
+ spec = importlib.util.spec_from_file_location('__temp__', filename,
+ loader=loader)
+ try:
+ module = importlib._bootstrap._load(spec)
+ except:
+ return None
+ del sys.modules['__temp__']
+ result = module.__doc__.splitlines()[0] if module.__doc__ else None
+ # Cache the result.
+ cache[filename] = (mtime, result)
+ return result
+
+class ErrorDuringImport(Exception):
+ """Errors that occurred while trying to import something to document it."""
+ def __init__(self, filename, exc_info):
+ self.filename = filename
+ self.exc, self.value, self.tb = exc_info
+
+ def __str__(self):
+ exc = self.exc.__name__
+ return 'problem in %s - %s: %s' % (self.filename, exc, self.value)
+
+def importfile(path):
+ """Import a Python source file or compiled file given its path."""
+ magic = importlib.util.MAGIC_NUMBER
+ with open(path, 'rb') as file:
+ is_bytecode = magic == file.read(len(magic))
+ filename = os.path.basename(path)
+ name, ext = os.path.splitext(filename)
+ if is_bytecode:
+ loader = importlib._bootstrap_external.SourcelessFileLoader(name, path)
+ else:
+ loader = importlib._bootstrap_external.SourceFileLoader(name, path)
+ # XXX We probably don't need to pass in the loader here.
+ spec = importlib.util.spec_from_file_location(name, path, loader=loader)
+ try:
+ return importlib._bootstrap._load(spec)
+ except:
+ raise ErrorDuringImport(path, sys.exc_info())
+
+def safeimport(path, forceload=0, cache={}):
+ """Import a module; handle errors; return None if the module isn't found.
+
+ If the module *is* found but an exception occurs, it's wrapped in an
+ ErrorDuringImport exception and reraised. Unlike __import__, if a
+ package path is specified, the module at the end of the path is returned,
+ not the package at the beginning. If the optional 'forceload' argument
+ is 1, we reload the module from disk (unless it's a dynamic extension)."""
+ try:
+ # If forceload is 1 and the module has been previously loaded from
+ # disk, we always have to reload the module. Checking the file's
+ # mtime isn't good enough (e.g. the module could contain a class
+ # that inherits from another module that has changed).
+ if forceload and path in sys.modules:
+ if path not in sys.builtin_module_names:
+ # Remove the module from sys.modules and re-import to try
+ # and avoid problems with partially loaded modules.
+ # Also remove any submodules because they won't appear
+ # in the newly loaded module's namespace if they're already
+ # in sys.modules.
+ subs = [m for m in sys.modules if m.startswith(path + '.')]
+ for key in [path] + subs:
+ # Prevent garbage collection.
+ cache[key] = sys.modules[key]
+ del sys.modules[key]
+ module = __import__(path)
+ except:
+ # Did the error occur before or after the module was found?
+ (exc, value, tb) = info = sys.exc_info()
+ if path in sys.modules:
+ # An error occurred while executing the imported module.
+ raise ErrorDuringImport(sys.modules[path].__file__, info)
+ elif exc is SyntaxError:
+ # A SyntaxError occurred before we could execute the module.
+ raise ErrorDuringImport(value.filename, info)
+ elif issubclass(exc, ImportError) and value.name == path:
+ # No such module in the path.
+ return None
+ else:
+ # Some other error occurred during the importing process.
+ raise ErrorDuringImport(path, sys.exc_info())
+ for part in path.split('.')[1:]:
+ try: module = getattr(module, part)
+ except AttributeError: return None
+ return module
+
+# ---------------------------------------------------- formatter base class
+
+class Doc:
+
+ PYTHONDOCS = os.environ.get("PYTHONDOCS",
+ "https://docs.python.org/%d.%d/library"
+ % sys.version_info[:2])
+
+ def document(self, object, name=None, *args):
+ """Generate documentation for an object."""
+ args = (object, name) + args
+ # 'try' clause is to attempt to handle the possibility that inspect
+ # identifies something in a way that pydoc itself has issues handling;
+ # think 'super' and how it is a descriptor (which raises the exception
+ # by lacking a __name__ attribute) and an instance.
+ if inspect.isgetsetdescriptor(object): return self.docdata(*args)
+ if inspect.ismemberdescriptor(object): return self.docdata(*args)
+ try:
+ if inspect.ismodule(object): return self.docmodule(*args)
+ if inspect.isclass(object): return self.docclass(*args)
+ if inspect.isroutine(object): return self.docroutine(*args)
+ except AttributeError:
+ pass
+ if isinstance(object, property): return self.docproperty(*args)
+ return self.docother(*args)
+
+ def fail(self, object, name=None, *args):
+ """Raise an exception for unimplemented types."""
+ message = "don't know how to document object%s of type %s" % (
+ name and ' ' + repr(name), type(object).__name__)
+ raise TypeError(message)
+
+ docmodule = docclass = docroutine = docother = docproperty = docdata = fail
+
+ def getdocloc(self, object,
+ basedir=os.path.join(sys.base_exec_prefix, "lib",
+ "python%d.%d" % sys.version_info[:2])):
+ """Return the location of module docs or None"""
+
+ try:
+ file = inspect.getabsfile(object)
+ except TypeError:
+ file = '(built-in)'
+
+ docloc = os.environ.get("PYTHONDOCS", self.PYTHONDOCS)
+
+ basedir = os.path.normcase(basedir)
+ if (isinstance(object, type(os)) and
+ (object.__name__ in ('errno', 'exceptions', 'gc', 'imp',
+ 'marshal', 'posix', 'signal', 'sys',
+ '_thread', 'zipimport') or
+ (file.startswith(basedir) and
+ not file.startswith(os.path.join(basedir, 'site-packages')))) and
+ object.__name__ not in ('xml.etree', 'test.pydoc_mod')):
+ if docloc.startswith(("http://", "https://")):
+ docloc = "%s/%s" % (docloc.rstrip("/"), object.__name__.lower())
+ else:
+ docloc = os.path.join(docloc, object.__name__.lower() + ".html")
+ else:
+ docloc = None
+ return docloc
+
+# -------------------------------------------- HTML documentation generator
+
+class HTMLRepr(Repr):
+ """Class for safely making an HTML representation of a Python object."""
+ def __init__(self):
+ Repr.__init__(self)
+ self.maxlist = self.maxtuple = 20
+ self.maxdict = 10
+ self.maxstring = self.maxother = 100
+
+ def escape(self, text):
+ return replace(text, '&', '&amp;', '<', '&lt;', '>', '&gt;')
+
+ def repr(self, object):
+ return Repr.repr(self, object)
+
+ def repr1(self, x, level):
+ if hasattr(type(x), '__name__'):
+ methodname = 'repr_' + '_'.join(type(x).__name__.split())
+ if hasattr(self, methodname):
+ return getattr(self, methodname)(x, level)
+ return self.escape(cram(stripid(repr(x)), self.maxother))
+
+ def repr_string(self, x, level):
+ test = cram(x, self.maxstring)
+ testrepr = repr(test)
+ if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
+ # Backslashes are only literal in the string and are never
+ # needed to make any special characters, so show a raw string.
+ return 'r' + testrepr[0] + self.escape(test) + testrepr[0]
+ return re.sub(r'((\\[\\abfnrtv\'"]|\\[0-9]..|\\x..|\\u....)+)',
+ r'<font color="#c040c0">\1</font>',
+ self.escape(testrepr))
+
+ repr_str = repr_string
+
+ def repr_instance(self, x, level):
+ try:
+ return self.escape(cram(stripid(repr(x)), self.maxstring))
+ except:
+ return self.escape('<%s instance>' % x.__class__.__name__)
+
+ repr_unicode = repr_string
+
+class HTMLDoc(Doc):
+ """Formatter class for HTML documentation."""
+
+ # ------------------------------------------- HTML formatting utilities
+
+ _repr_instance = HTMLRepr()
+ repr = _repr_instance.repr
+ escape = _repr_instance.escape
+
+ def page(self, title, contents):
+ """Format an HTML page."""
+ return '''\
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
+<html><head><title>Python: %s</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+</head><body bgcolor="#f0f0f8">
+%s
+</body></html>''' % (title, contents)
+
+ def heading(self, title, fgcol, bgcol, extras=''):
+ """Format a page heading."""
+ return '''
+<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="heading">
+<tr bgcolor="%s">
+<td valign=bottom>&nbsp;<br>
+<font color="%s" face="helvetica, arial">&nbsp;<br>%s</font></td
+><td align=right valign=bottom
+><font color="%s" face="helvetica, arial">%s</font></td></tr></table>
+ ''' % (bgcol, fgcol, title, fgcol, extras or '&nbsp;')
+
+ def section(self, title, fgcol, bgcol, contents, width=6,
+ prelude='', marginalia=None, gap='&nbsp;'):
+ """Format a section with a heading."""
+ if marginalia is None:
+ marginalia = '<tt>' + '&nbsp;' * width + '</tt>'
+ result = '''<p>
+<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="section">
+<tr bgcolor="%s">
+<td colspan=3 valign=bottom>&nbsp;<br>
+<font color="%s" face="helvetica, arial">%s</font></td></tr>
+ ''' % (bgcol, fgcol, title)
+ if prelude:
+ result = result + '''
+<tr bgcolor="%s"><td rowspan=2>%s</td>
+<td colspan=2>%s</td></tr>
+<tr><td>%s</td>''' % (bgcol, marginalia, prelude, gap)
+ else:
+ result = result + '''
+<tr><td bgcolor="%s">%s</td><td>%s</td>''' % (bgcol, marginalia, gap)
+
+ return result + '\n<td width="100%%">%s</td></tr></table>' % contents
+
+ def bigsection(self, title, *args):
+ """Format a section with a big heading."""
+ title = '<big><strong>%s</strong></big>' % title
+ return self.section(title, *args)
+
+ def preformat(self, text):
+ """Format literal preformatted text."""
+ text = self.escape(text.expandtabs())
+ return replace(text, '\n\n', '\n \n', '\n\n', '\n \n',
+ ' ', '&nbsp;', '\n', '<br>\n')
+
+ def multicolumn(self, list, format, cols=4):
+ """Format a list of items into a multi-column list."""
+ result = ''
+ rows = (len(list)+cols-1)//cols
+ for col in range(cols):
+ result = result + '<td width="%d%%" valign=top>' % (100//cols)
+ for i in range(rows*col, rows*col+rows):
+ if i < len(list):
+ result = result + format(list[i]) + '<br>\n'
+ result = result + '</td>'
+ return '<table width="100%%" summary="list"><tr>%s</tr></table>' % result
+
+ def grey(self, text): return '<font color="#909090">%s</font>' % text
+
+ def namelink(self, name, *dicts):
+ """Make a link for an identifier, given name-to-URL mappings."""
+ for dict in dicts:
+ if name in dict:
+ return '<a href="%s">%s</a>' % (dict[name], name)
+ return name
+
+ def classlink(self, object, modname):
+ """Make a link for a class."""
+ name, module = object.__name__, sys.modules.get(object.__module__)
+ if hasattr(module, name) and getattr(module, name) is object:
+ return '<a href="%s.html#%s">%s</a>' % (
+ module.__name__, name, classname(object, modname))
+ return classname(object, modname)
+
+ def modulelink(self, object):
+ """Make a link for a module."""
+ return '<a href="%s.html">%s</a>' % (object.__name__, object.__name__)
+
+ def modpkglink(self, modpkginfo):
+ """Make a link for a module or package to display in an index."""
+ name, path, ispackage, shadowed = modpkginfo
+ if shadowed:
+ return self.grey(name)
+ if path:
+ url = '%s.%s.html' % (path, name)
+ else:
+ url = '%s.html' % name
+ if ispackage:
+ text = '<strong>%s</strong> (package)' % name
+ else:
+ text = name
+ return '<a href="%s">%s</a>' % (url, text)
+
+ def filelink(self, url, path):
+ """Make a link to source file."""
+ return '<a href="file:%s">%s</a>' % (url, path)
+
+ def markup(self, text, escape=None, funcs={}, classes={}, methods={}):
+ """Mark up some plain text, given a context of symbols to look for.
+ Each context dictionary maps object names to anchor names."""
+ escape = escape or self.escape
+ results = []
+ here = 0
+ pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
+ r'RFC[- ]?(\d+)|'
+ r'PEP[- ]?(\d+)|'
+ r'(self\.)?(\w+))')
+ while True:
+ match = pattern.search(text, here)
+ if not match: break
+ start, end = match.span()
+ results.append(escape(text[here:start]))
+
+ all, scheme, rfc, pep, selfdot, name = match.groups()
+ if scheme:
+ url = escape(all).replace('"', '"')
+ results.append('<a href="%s">%s</a>' % (url, url))
+ elif rfc:
+ url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
+ results.append('<a href="%s">%s</a>' % (url, escape(all)))
+ elif pep:
+ url = 'http://www.python.org/dev/peps/pep-%04d/' % int(pep)
+ results.append('<a href="%s">%s</a>' % (url, escape(all)))
+ elif selfdot:
+ # Create a link for methods like 'self.method(...)'
+ # and use <strong> for attributes like 'self.attr'
+ if text[end:end+1] == '(':
+ results.append('self.' + self.namelink(name, methods))
+ else:
+ results.append('self.<strong>%s</strong>' % name)
+ elif text[end:end+1] == '(':
+ results.append(self.namelink(name, methods, funcs, classes))
+ else:
+ results.append(self.namelink(name, classes))
+ here = end
+ results.append(escape(text[here:]))
+ return ''.join(results)
+
+ # ---------------------------------------------- type-specific routines
+
+ def formattree(self, tree, modname, parent=None):
+ """Produce HTML for a class tree as given by inspect.getclasstree()."""
+ result = ''
+ for entry in tree:
+ if type(entry) is type(()):
+ c, bases = entry
+ result = result + '<dt><font face="helvetica, arial">'
+ result = result + self.classlink(c, modname)
+ if bases and bases != (parent,):
+ parents = []
+ for base in bases:
+ parents.append(self.classlink(base, modname))
+ result = result + '(' + ', '.join(parents) + ')'
+ result = result + '\n</font></dt>'
+ elif type(entry) is type([]):
+ result = result + '<dd>\n%s</dd>\n' % self.formattree(
+ entry, modname, c)
+ return '<dl>\n%s</dl>\n' % result
+
+ def docmodule(self, object, name=None, mod=None, *ignored):
+ """Produce HTML documentation for a module object."""
+ name = object.__name__ # ignore the passed-in name
+ try:
+ all = object.__all__
+ except AttributeError:
+ all = None
+ parts = name.split('.')
+ links = []
+ for i in range(len(parts)-1):
+ links.append(
+ '<a href="%s.html"><font color="#ffffff">%s</font></a>' %
+ ('.'.join(parts[:i+1]), parts[i]))
+ linkedname = '.'.join(links + parts[-1:])
+ head = '<big><big><strong>%s</strong></big></big>' % linkedname
+ try:
+ path = inspect.getabsfile(object)
+ url = urllib.parse.quote(path)
+ filelink = self.filelink(url, path)
+ except TypeError:
+ filelink = '(built-in)'
+ info = []
+ if hasattr(object, '__version__'):
+ version = str(object.__version__)
+ if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
+ version = version[11:-1].strip()
+ info.append('version %s' % self.escape(version))
+ if hasattr(object, '__date__'):
+ info.append(self.escape(str(object.__date__)))
+ if info:
+ head = head + ' (%s)' % ', '.join(info)
+ docloc = self.getdocloc(object)
+ if docloc is not None:
+ docloc = '<br><a href="%(docloc)s">Module Reference</a>' % locals()
+ else:
+ docloc = ''
+ result = self.heading(
+ head, '#ffffff', '#7799ee',
+ '<a href=".">index</a><br>' + filelink + docloc)
+
+ modules = inspect.getmembers(object, inspect.ismodule)
+
+ classes, cdict = [], {}
+ for key, value in inspect.getmembers(object, inspect.isclass):
+ # if __all__ exists, believe it. Otherwise use old heuristic.
+ if (all is not None or
+ (inspect.getmodule(value) or object) is object):
+ if visiblename(key, all, object):
+ classes.append((key, value))
+ cdict[key] = cdict[value] = '#' + key
+ for key, value in classes:
+ for base in value.__bases__:
+ key, modname = base.__name__, base.__module__
+ module = sys.modules.get(modname)
+ if modname != name and module and hasattr(module, key):
+ if getattr(module, key) is base:
+ if not key in cdict:
+ cdict[key] = cdict[base] = modname + '.html#' + key
+ funcs, fdict = [], {}
+ for key, value in inspect.getmembers(object, inspect.isroutine):
+ # if __all__ exists, believe it. Otherwise use old heuristic.
+ if (all is not None or
+ inspect.isbuiltin(value) or inspect.getmodule(value) is object):
+ if visiblename(key, all, object):
+ funcs.append((key, value))
+ fdict[key] = '#-' + key
+ if inspect.isfunction(value): fdict[value] = fdict[key]
+ data = []
+ for key, value in inspect.getmembers(object, isdata):
+ if visiblename(key, all, object):
+ data.append((key, value))
+
+ doc = self.markup(getdoc(object), self.preformat, fdict, cdict)
+ doc = doc and '<tt>%s</tt>' % doc
+ result = result + '<p>%s</p>\n' % doc
+
+ if hasattr(object, '__path__'):
+ modpkgs = []
+ for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
+ modpkgs.append((modname, name, ispkg, 0))
+ modpkgs.sort()
+ contents = self.multicolumn(modpkgs, self.modpkglink)
+ result = result + self.bigsection(
+ 'Package Contents', '#ffffff', '#aa55cc', contents)
+ elif modules:
+ contents = self.multicolumn(
+ modules, lambda t: self.modulelink(t[1]))
+ result = result + self.bigsection(
+ 'Modules', '#ffffff', '#aa55cc', contents)
+
+ if classes:
+ classlist = [value for (key, value) in classes]
+ contents = [
+ self.formattree(inspect.getclasstree(classlist, 1), name)]
+ for key, value in classes:
+ contents.append(self.document(value, key, name, fdict, cdict))
+ result = result + self.bigsection(
+ 'Classes', '#ffffff', '#ee77aa', ' '.join(contents))
+ if funcs:
+ contents = []
+ for key, value in funcs:
+ contents.append(self.document(value, key, name, fdict, cdict))
+ result = result + self.bigsection(
+ 'Functions', '#ffffff', '#eeaa77', ' '.join(contents))
+ if data:
+ contents = []
+ for key, value in data:
+ contents.append(self.document(value, key))
+ result = result + self.bigsection(
+ 'Data', '#ffffff', '#55aa55', '<br>\n'.join(contents))
+ if hasattr(object, '__author__'):
+ contents = self.markup(str(object.__author__), self.preformat)
+ result = result + self.bigsection(
+ 'Author', '#ffffff', '#7799ee', contents)
+ if hasattr(object, '__credits__'):
+ contents = self.markup(str(object.__credits__), self.preformat)
+ result = result + self.bigsection(
+ 'Credits', '#ffffff', '#7799ee', contents)
+
+ return result
+
+ def docclass(self, object, name=None, mod=None, funcs={}, classes={},
+ *ignored):
+ """Produce HTML documentation for a class object."""
+ realname = object.__name__
+ name = name or realname
+ bases = object.__bases__
+
+ contents = []
+ push = contents.append
+
+ # Cute little class to pump out a horizontal rule between sections.
+ class HorizontalRule:
+ def __init__(self):
+ self.needone = 0
+ def maybe(self):
+ if self.needone:
+ push('<hr>\n')
+ self.needone = 1
+ hr = HorizontalRule()
+
+ # List the mro, if non-trivial.
+ mro = deque(inspect.getmro(object))
+ if len(mro) > 2:
+ hr.maybe()
+ push('<dl><dt>Method resolution order:</dt>\n')
+ for base in mro:
+ push('<dd>%s</dd>\n' % self.classlink(base,
+ object.__module__))
+ push('</dl>\n')
+
+ def spill(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ try:
+ value = getattr(object, name)
+ except Exception:
+ # Some descriptors may meet a failure in their __get__.
+ # (bug #1785)
+ push(self._docdescriptor(name, value, mod))
+ else:
+ push(self.document(value, name, mod,
+ funcs, classes, mdict, object))
+ push('\n')
+ return attrs
+
+ def spilldescriptors(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ push(self._docdescriptor(name, value, mod))
+ return attrs
+
+ def spilldata(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ base = self.docother(getattr(object, name), name, mod)
+ if callable(value) or inspect.isdatadescriptor(value):
+ doc = getattr(value, "__doc__", None)
+ else:
+ doc = None
+ if doc is None:
+ push('<dl><dt>%s</dl>\n' % base)
+ else:
+ doc = self.markup(getdoc(value), self.preformat,
+ funcs, classes, mdict)
+ doc = '<dd><tt>%s</tt>' % doc
+ push('<dl><dt>%s%s</dl>\n' % (base, doc))
+ push('\n')
+ return attrs
+
+ attrs = [(name, kind, cls, value)
+ for name, kind, cls, value in classify_class_attrs(object)
+ if visiblename(name, obj=object)]
+
+ mdict = {}
+ for key, kind, homecls, value in attrs:
+ mdict[key] = anchor = '#' + name + '-' + key
+ try:
+ value = getattr(object, name)
+ except Exception:
+ # Some descriptors may meet a failure in their __get__.
+ # (bug #1785)
+ pass
+ try:
+ # The value may not be hashable (e.g., a data attr with
+ # a dict or list value).
+ mdict[value] = anchor
+ except TypeError:
+ pass
+
+ while attrs:
+ if mro:
+ thisclass = mro.popleft()
+ else:
+ thisclass = attrs[0][2]
+ attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)
+
+ if thisclass is builtins.object:
+ attrs = inherited
+ continue
+ elif thisclass is object:
+ tag = 'defined here'
+ else:
+ tag = 'inherited from %s' % self.classlink(thisclass,
+ object.__module__)
+ tag += ':<br>\n'
+
+ sort_attributes(attrs, object)
+
+ # Pump out the attrs, segregated by kind.
+ attrs = spill('Methods %s' % tag, attrs,
+ lambda t: t[1] == 'method')
+ attrs = spill('Class methods %s' % tag, attrs,
+ lambda t: t[1] == 'class method')
+ attrs = spill('Static methods %s' % tag, attrs,
+ lambda t: t[1] == 'static method')
+ attrs = spilldescriptors('Data descriptors %s' % tag, attrs,
+ lambda t: t[1] == 'data descriptor')
+ attrs = spilldata('Data and other attributes %s' % tag, attrs,
+ lambda t: t[1] == 'data')
+ assert attrs == []
+ attrs = inherited
+
+ contents = ''.join(contents)
+
+ if name == realname:
+ title = '<a name="%s">class <strong>%s</strong></a>' % (
+ name, realname)
+ else:
+ title = '<strong>%s</strong> = <a name="%s">class %s</a>' % (
+ name, name, realname)
+ if bases:
+ parents = []
+ for base in bases:
+ parents.append(self.classlink(base, object.__module__))
+ title = title + '(%s)' % ', '.join(parents)
+ doc = self.markup(getdoc(object), self.preformat, funcs, classes, mdict)
+ doc = doc and '<tt>%s<br>&nbsp;</tt>' % doc
+
+ return self.section(title, '#000000', '#ffc8d8', contents, 3, doc)
+
+ def formatvalue(self, object):
+ """Format an argument default value as text."""
+ return self.grey('=' + self.repr(object))
+
+ def docroutine(self, object, name=None, mod=None,
+ funcs={}, classes={}, methods={}, cl=None):
+ """Produce HTML documentation for a function or method object."""
+ realname = object.__name__
+ name = name or realname
+ anchor = (cl and cl.__name__ or '') + '-' + name
+ note = ''
+ skipdocs = 0
+ if _is_bound_method(object):
+ imclass = object.__self__.__class__
+ if cl:
+ if imclass is not cl:
+ note = ' from ' + self.classlink(imclass, mod)
+ else:
+ if object.__self__ is not None:
+ note = ' method of %s instance' % self.classlink(
+ object.__self__.__class__, mod)
+ else:
+ note = ' unbound %s method' % self.classlink(imclass,mod)
+
+ if name == realname:
+ title = '<a name="%s"><strong>%s</strong></a>' % (anchor, realname)
+ else:
+ if cl and inspect.getattr_static(cl, realname, []) is object:
+ reallink = '<a href="#%s">%s</a>' % (
+ cl.__name__ + '-' + realname, realname)
+ skipdocs = 1
+ else:
+ reallink = realname
+ title = '<a name="%s"><strong>%s</strong></a> = %s' % (
+ anchor, name, reallink)
+ argspec = None
+ if inspect.isroutine(object):
+ try:
+ signature = inspect.signature(object)
+ except (ValueError, TypeError):
+ signature = None
+ if signature:
+ argspec = str(signature)
+ if realname == '<lambda>':
+ title = '<strong>%s</strong> <em>lambda</em> ' % name
+ # XXX lambda's won't usually have func_annotations['return']
+ # since the syntax doesn't support but it is possible.
+ # So removing parentheses isn't truly safe.
+ argspec = argspec[1:-1] # remove parentheses
+ if not argspec:
+ argspec = '(...)'
+
+ decl = title + self.escape(argspec) + (note and self.grey(
+ '<font face="helvetica, arial">%s</font>' % note))
+
+ if skipdocs:
+ return '<dl><dt>%s</dt></dl>\n' % decl
+ else:
+ doc = self.markup(
+ getdoc(object), self.preformat, funcs, classes, methods)
+ doc = doc and '<dd><tt>%s</tt></dd>' % doc
+ return '<dl><dt>%s</dt>%s</dl>\n' % (decl, doc)
+
+ def _docdescriptor(self, name, value, mod):
+ results = []
+ push = results.append
+
+ if name:
+ push('<dl><dt><strong>%s</strong></dt>\n' % name)
+ if value.__doc__ is not None:
+ doc = self.markup(getdoc(value), self.preformat)
+ push('<dd><tt>%s</tt></dd>\n' % doc)
+ push('</dl>\n')
+
+ return ''.join(results)
+
+ def docproperty(self, object, name=None, mod=None, cl=None):
+ """Produce html documentation for a property."""
+ return self._docdescriptor(name, object, mod)
+
+ def docother(self, object, name=None, mod=None, *ignored):
+ """Produce HTML documentation for a data object."""
+ lhs = name and '<strong>%s</strong> = ' % name or ''
+ return lhs + self.repr(object)
+
+ def docdata(self, object, name=None, mod=None, cl=None):
+ """Produce html documentation for a data descriptor."""
+ return self._docdescriptor(name, object, mod)
+
+ def index(self, dir, shadowed=None):
+ """Generate an HTML index for a directory of modules."""
+ modpkgs = []
+ if shadowed is None: shadowed = {}
+ for importer, name, ispkg in pkgutil.iter_modules([dir]):
+ if any((0xD800 <= ord(ch) <= 0xDFFF) for ch in name):
+ # ignore a module if its name contains a surrogate character
+ continue
+ modpkgs.append((name, '', ispkg, name in shadowed))
+ shadowed[name] = 1
+
+ modpkgs.sort()
+ contents = self.multicolumn(modpkgs, self.modpkglink)
+ return self.bigsection(dir, '#ffffff', '#ee77aa', contents)
+
+# -------------------------------------------- text documentation generator
+
+class TextRepr(Repr):
+ """Class for safely making a text representation of a Python object."""
+ def __init__(self):
+ Repr.__init__(self)
+ self.maxlist = self.maxtuple = 20
+ self.maxdict = 10
+ self.maxstring = self.maxother = 100
+
+ def repr1(self, x, level):
+ if hasattr(type(x), '__name__'):
+ methodname = 'repr_' + '_'.join(type(x).__name__.split())
+ if hasattr(self, methodname):
+ return getattr(self, methodname)(x, level)
+ return cram(stripid(repr(x)), self.maxother)
+
+ def repr_string(self, x, level):
+ test = cram(x, self.maxstring)
+ testrepr = repr(test)
+ if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
+ # Backslashes are only literal in the string and are never
+ # needed to make any special characters, so show a raw string.
+ return 'r' + testrepr[0] + test + testrepr[0]
+ return testrepr
+
+ repr_str = repr_string
+
+ def repr_instance(self, x, level):
+ try:
+ return cram(stripid(repr(x)), self.maxstring)
+ except:
+ return '<%s instance>' % x.__class__.__name__
+
+class TextDoc(Doc):
+ """Formatter class for text documentation."""
+
+ # ------------------------------------------- text formatting utilities
+
+ _repr_instance = TextRepr()
+ repr = _repr_instance.repr
+
+ def bold(self, text):
+ """Format a string in bold by overstriking."""
+ return ''.join(ch + '\b' + ch for ch in text)
+
+ def indent(self, text, prefix=' '):
+ """Indent text by prepending a given prefix to each line."""
+ if not text: return ''
+ lines = [prefix + line for line in text.split('\n')]
+ if lines: lines[-1] = lines[-1].rstrip()
+ return '\n'.join(lines)
+
+ def section(self, title, contents):
+ """Format a section with a given heading."""
+ clean_contents = self.indent(contents).rstrip()
+ return self.bold(title) + '\n' + clean_contents + '\n\n'
+
+ # ---------------------------------------------- type-specific routines
+
+ def formattree(self, tree, modname, parent=None, prefix=''):
+ """Render in text a class tree as returned by inspect.getclasstree()."""
+ result = ''
+ for entry in tree:
+ if type(entry) is type(()):
+ c, bases = entry
+ result = result + prefix + classname(c, modname)
+ if bases and bases != (parent,):
+ parents = (classname(c, modname) for c in bases)
+ result = result + '(%s)' % ', '.join(parents)
+ result = result + '\n'
+ elif type(entry) is type([]):
+ result = result + self.formattree(
+ entry, modname, c, prefix + ' ')
+ return result
+
+ def docmodule(self, object, name=None, mod=None):
+ """Produce text documentation for a given module object."""
+ name = object.__name__ # ignore the passed-in name
+ synop, desc = splitdoc(getdoc(object))
+ result = self.section('NAME', name + (synop and ' - ' + synop))
+ all = getattr(object, '__all__', None)
+ docloc = self.getdocloc(object)
+ if docloc is not None:
+ result = result + self.section('MODULE REFERENCE', docloc + """
+
+The following documentation is automatically generated from the Python
+source files. It may be incomplete, incorrect or include features that
+are considered implementation detail and may vary between Python
+implementations. When in doubt, consult the module reference at the
+location listed above.
+""")
+
+ if desc:
+ result = result + self.section('DESCRIPTION', desc)
+
+ classes = []
+ for key, value in inspect.getmembers(object, inspect.isclass):
+ # if __all__ exists, believe it. Otherwise use old heuristic.
+ if (all is not None
+ or (inspect.getmodule(value) or object) is object):
+ if visiblename(key, all, object):
+ classes.append((key, value))
+ funcs = []
+ for key, value in inspect.getmembers(object, inspect.isroutine):
+ # if __all__ exists, believe it. Otherwise use old heuristic.
+ if (all is not None or
+ inspect.isbuiltin(value) or inspect.getmodule(value) is object):
+ if visiblename(key, all, object):
+ funcs.append((key, value))
+ data = []
+ for key, value in inspect.getmembers(object, isdata):
+ if visiblename(key, all, object):
+ data.append((key, value))
+
+ modpkgs = []
+ modpkgs_names = set()
+ if hasattr(object, '__path__'):
+ for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
+ modpkgs_names.add(modname)
+ if ispkg:
+ modpkgs.append(modname + ' (package)')
+ else:
+ modpkgs.append(modname)
+
+ modpkgs.sort()
+ result = result + self.section(
+ 'PACKAGE CONTENTS', '\n'.join(modpkgs))
+
+ # Detect submodules as sometimes created by C extensions
+ submodules = []
+ for key, value in inspect.getmembers(object, inspect.ismodule):
+ if value.__name__.startswith(name + '.') and key not in modpkgs_names:
+ submodules.append(key)
+ if submodules:
+ submodules.sort()
+ result = result + self.section(
+ 'SUBMODULES', '\n'.join(submodules))
+
+ if classes:
+ classlist = [value for key, value in classes]
+ contents = [self.formattree(
+ inspect.getclasstree(classlist, 1), name)]
+ for key, value in classes:
+ contents.append(self.document(value, key, name))
+ result = result + self.section('CLASSES', '\n'.join(contents))
+
+ if funcs:
+ contents = []
+ for key, value in funcs:
+ contents.append(self.document(value, key, name))
+ result = result + self.section('FUNCTIONS', '\n'.join(contents))
+
+ if data:
+ contents = []
+ for key, value in data:
+ contents.append(self.docother(value, key, name, maxlen=70))
+ result = result + self.section('DATA', '\n'.join(contents))
+
+ if hasattr(object, '__version__'):
+ version = str(object.__version__)
+ if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
+ version = version[11:-1].strip()
+ result = result + self.section('VERSION', version)
+ if hasattr(object, '__date__'):
+ result = result + self.section('DATE', str(object.__date__))
+ if hasattr(object, '__author__'):
+ result = result + self.section('AUTHOR', str(object.__author__))
+ if hasattr(object, '__credits__'):
+ result = result + self.section('CREDITS', str(object.__credits__))
+ try:
+ file = inspect.getabsfile(object)
+ except TypeError:
+ file = '(built-in)'
+ result = result + self.section('FILE', file)
+ return result
+
+ def docclass(self, object, name=None, mod=None, *ignored):
+ """Produce text documentation for a given class object."""
+ realname = object.__name__
+ name = name or realname
+ bases = object.__bases__
+
+ def makename(c, m=object.__module__):
+ return classname(c, m)
+
+ if name == realname:
+ title = 'class ' + self.bold(realname)
+ else:
+ title = self.bold(name) + ' = class ' + realname
+ if bases:
+ parents = map(makename, bases)
+ title = title + '(%s)' % ', '.join(parents)
+
+ doc = getdoc(object)
+ contents = doc and [doc + '\n'] or []
+ push = contents.append
+
+ # List the mro, if non-trivial.
+ mro = deque(inspect.getmro(object))
+ if len(mro) > 2:
+ push("Method resolution order:")
+ for base in mro:
+ push(' ' + makename(base))
+ push('')
+
+ # Cute little class to pump out a horizontal rule between sections.
+ class HorizontalRule:
+ def __init__(self):
+ self.needone = 0
+ def maybe(self):
+ if self.needone:
+ push('-' * 70)
+ self.needone = 1
+ hr = HorizontalRule()
+
+ def spill(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ try:
+ value = getattr(object, name)
+ except Exception:
+ # Some descriptors may meet a failure in their __get__.
+ # (bug #1785)
+ push(self._docdescriptor(name, value, mod))
+ else:
+ push(self.document(value,
+ name, mod, object))
+ return attrs
+
+ def spilldescriptors(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ push(self._docdescriptor(name, value, mod))
+ return attrs
+
+ def spilldata(msg, attrs, predicate):
+ ok, attrs = _split_list(attrs, predicate)
+ if ok:
+ hr.maybe()
+ push(msg)
+ for name, kind, homecls, value in ok:
+ if callable(value) or inspect.isdatadescriptor(value):
+ doc = getdoc(value)
+ else:
+ doc = None
+ try:
+ obj = getattr(object, name)
+ except AttributeError:
+ obj = homecls.__dict__[name]
+ push(self.docother(obj, name, mod, maxlen=70, doc=doc) +
+ '\n')
+ return attrs
+
+ attrs = [(name, kind, cls, value)
+ for name, kind, cls, value in classify_class_attrs(object)
+ if visiblename(name, obj=object)]
+
+ while attrs:
+ if mro:
+ thisclass = mro.popleft()
+ else:
+ thisclass = attrs[0][2]
+ attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)
+
+ if thisclass is builtins.object:
+ attrs = inherited
+ continue
+ elif thisclass is object:
+ tag = "defined here"
+ else:
+ tag = "inherited from %s" % classname(thisclass,
+ object.__module__)
+
+ sort_attributes(attrs, object)
+
+ # Pump out the attrs, segregated by kind.
+ attrs = spill("Methods %s:\n" % tag, attrs,
+ lambda t: t[1] == 'method')
+ attrs = spill("Class methods %s:\n" % tag, attrs,
+ lambda t: t[1] == 'class method')
+ attrs = spill("Static methods %s:\n" % tag, attrs,
+ lambda t: t[1] == 'static method')
+ attrs = spilldescriptors("Data descriptors %s:\n" % tag, attrs,
+ lambda t: t[1] == 'data descriptor')
+ attrs = spilldata("Data and other attributes %s:\n" % tag, attrs,
+ lambda t: t[1] == 'data')
+
+ assert attrs == []
+ attrs = inherited
+
+ contents = '\n'.join(contents)
+ if not contents:
+ return title + '\n'
+ return title + '\n' + self.indent(contents.rstrip(), ' | ') + '\n'
+
+ def formatvalue(self, object):
+ """Format an argument default value as text."""
+ return '=' + self.repr(object)
+
+ def docroutine(self, object, name=None, mod=None, cl=None):
+ """Produce text documentation for a function or method object."""
+ realname = object.__name__
+ name = name or realname
+ note = ''
+ skipdocs = 0
+ if _is_bound_method(object):
+ imclass = object.__self__.__class__
+ if cl:
+ if imclass is not cl:
+ note = ' from ' + classname(imclass, mod)
+ else:
+ if object.__self__ is not None:
+ note = ' method of %s instance' % classname(
+ object.__self__.__class__, mod)
+ else:
+ note = ' unbound %s method' % classname(imclass,mod)
+
+ if name == realname:
+ title = self.bold(realname)
+ else:
+ if cl and inspect.getattr_static(cl, realname, []) is object:
+ skipdocs = 1
+ title = self.bold(name) + ' = ' + realname
+ argspec = None
+
+ if inspect.isroutine(object):
+ try:
+ signature = inspect.signature(object)
+ except (ValueError, TypeError):
+ signature = None
+ if signature:
+ argspec = str(signature)
+ if realname == '<lambda>':
+ title = self.bold(name) + ' lambda '
+ # XXX lambda's won't usually have func_annotations['return']
+ # since the syntax doesn't support but it is possible.
+ # So removing parentheses isn't truly safe.
+ argspec = argspec[1:-1] # remove parentheses
+ if not argspec:
+ argspec = '(...)'
+ decl = title + argspec + note
+
+ if skipdocs:
+ return decl + '\n'
+ else:
+ doc = getdoc(object) or ''
+ return decl + '\n' + (doc and self.indent(doc).rstrip() + '\n')
+
+ def _docdescriptor(self, name, value, mod):
+ results = []
+ push = results.append
+
+ if name:
+ push(self.bold(name))
+ push('\n')
+ doc = getdoc(value) or ''
+ if doc:
+ push(self.indent(doc))
+ push('\n')
+ return ''.join(results)
+
+ def docproperty(self, object, name=None, mod=None, cl=None):
+ """Produce text documentation for a property."""
+ return self._docdescriptor(name, object, mod)
+
+ def docdata(self, object, name=None, mod=None, cl=None):
+ """Produce text documentation for a data descriptor."""
+ return self._docdescriptor(name, object, mod)
+
+ def docother(self, object, name=None, mod=None, parent=None, maxlen=None, doc=None):
+ """Produce text documentation for a data object."""
+ repr = self.repr(object)
+ if maxlen:
+ line = (name and name + ' = ' or '') + repr
+ chop = maxlen - len(line)
+ if chop < 0: repr = repr[:chop] + '...'
+ line = (name and self.bold(name) + ' = ' or '') + repr
+ if doc is not None:
+ line += '\n' + self.indent(str(doc))
+ return line
+
+class _PlainTextDoc(TextDoc):
+ """Subclass of TextDoc which overrides string styling"""
+ def bold(self, text):
+ return text
+
+# --------------------------------------------------------- user interfaces
+
+def pager(text):
+ """The first time this is called, determine what kind of pager to use."""
+ global pager
+ pager = getpager()
+ pager(text)
+
+def getpager():
+ """Decide what method to use for paging through text."""
+ if not hasattr(sys.stdin, "isatty"):
+ return plainpager
+ if not hasattr(sys.stdout, "isatty"):
+ return plainpager
+ if not sys.stdin.isatty() or not sys.stdout.isatty():
+ return plainpager
+ use_pager = os.environ.get('MANPAGER') or os.environ.get('PAGER')
+ if use_pager:
+ if sys.platform == 'win32': # pipes completely broken in Windows
+ return lambda text: tempfilepager(plain(text), use_pager)
+ elif sys.platform == 'uefi':
+ return lambda text: tempfilepager(plain(text), use_pager)
+ elif os.environ.get('TERM') in ('dumb', 'emacs'):
+ return lambda text: pipepager(plain(text), use_pager)
+ else:
+ return lambda text: pipepager(text, use_pager)
+ if os.environ.get('TERM') in ('dumb', 'emacs'):
+ return plainpager
+ if sys.platform == 'uefi':
+ return plainpager
+ if sys.platform == 'win32':
+ return lambda text: tempfilepager(plain(text), 'more <')
+ if hasattr(os, 'system') and os.system('(less) 2>/dev/null') == 0:
+ return lambda text: pipepager(text, 'less')
+
+ import tempfile
+ (fd, filename) = tempfile.mkstemp()
+ os.close(fd)
+ try:
+ if hasattr(os, 'system') and os.system('more "%s"' % filename) == 0:
+ return lambda text: pipepager(text, 'more')
+ else:
+ return ttypager
+ finally:
+ os.unlink(filename)
+
+def plain(text):
+ """Remove boldface formatting from text."""
+ return re.sub('.\b', '', text)
+
+def pipepager(text, cmd):
+ """Page through text by feeding it to another program."""
+ import sys
+ if sys.platform != 'uefi':
+ import subprocess
+ proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE)
+ try:
+ with io.TextIOWrapper(proc.stdin, errors='backslashreplace') as pipe:
+ try:
+ pipe.write(text)
+ except KeyboardInterrupt:
+ # We've hereby abandoned whatever text hasn't been written,
+ # but the pager is still in control of the terminal.
+ pass
+ except OSError:
+ pass # Ignore broken pipes caused by quitting the pager program.
+ while True:
+ try:
+ proc.wait()
+ break
+ except KeyboardInterrupt:
+ # Ignore ctl-c like the pager itself does. Otherwise the pager is
+ # left running and the terminal is in raw mode and unusable.
+ pass
+ else:
+ pipe = os.popen(cmd, 'w')
+ try:
+ pipe.write(_encode(text))
+ pipe.close()
+ except IOError:
+ pass # Ignore broken pipes caused by quitting the pager program.
+
+def tempfilepager(text, cmd):
+ """Page through text by invoking a program on a temporary file."""
+ import tempfile
+ filename = tempfile.mktemp()
+ with open(filename, 'w', errors='backslashreplace') as file:
+ file.write(text)
+ try:
+ os.system(cmd + ' "' + filename + '"')
+ finally:
+ os.unlink(filename)
+
+def _escape_stdout(text):
+ # Escape non-encodable characters to avoid encoding errors later
+ encoding = getattr(sys.stdout, 'encoding', None) or 'utf-8'
+ return text.encode(encoding, 'backslashreplace').decode(encoding)
+
+def ttypager(text):
+ """Page through text on a text terminal."""
+ lines = plain(_escape_stdout(text)).split('\n')
+ try:
+ import tty
+ fd = sys.stdin.fileno()
+ old = tty.tcgetattr(fd)
+ tty.setcbreak(fd)
+ getchar = lambda: sys.stdin.read(1)
+ except (ImportError, AttributeError, io.UnsupportedOperation):
+ tty = None
+ getchar = lambda: sys.stdin.readline()[:-1][:1]
+
+ try:
+ try:
+ h = int(os.environ.get('LINES', 0))
+ except ValueError:
+ h = 0
+ if h <= 1:
+ h = 25
+ r = inc = h - 1
+ sys.stdout.write('\n'.join(lines[:inc]) + '\n')
+ while lines[r:]:
+ sys.stdout.write('-- more --')
+ sys.stdout.flush()
+ c = getchar()
+
+ if c in ('q', 'Q'):
+ sys.stdout.write('\r \r')
+ break
+ elif c in ('\r', '\n'):
+ sys.stdout.write('\r \r' + lines[r] + '\n')
+ r = r + 1
+ continue
+ if c in ('b', 'B', '\x1b'):
+ r = r - inc - inc
+ if r < 0: r = 0
+ sys.stdout.write('\n' + '\n'.join(lines[r:r+inc]) + '\n')
+ r = r + inc
+
+ finally:
+ if tty:
+ tty.tcsetattr(fd, tty.TCSAFLUSH, old)
+
+def plainpager(text):
+ """Simply print unformatted text. This is the ultimate fallback."""
+ sys.stdout.write(plain(_escape_stdout(text)))
+
+def describe(thing):
+ """Produce a short description of the given thing."""
+ if inspect.ismodule(thing):
+ if thing.__name__ in sys.builtin_module_names:
+ return 'built-in module ' + thing.__name__
+ if hasattr(thing, '__path__'):
+ return 'package ' + thing.__name__
+ else:
+ return 'module ' + thing.__name__
+ if inspect.isbuiltin(thing):
+ return 'built-in function ' + thing.__name__
+ if inspect.isgetsetdescriptor(thing):
+ return 'getset descriptor %s.%s.%s' % (
+ thing.__objclass__.__module__, thing.__objclass__.__name__,
+ thing.__name__)
+ if inspect.ismemberdescriptor(thing):
+ return 'member descriptor %s.%s.%s' % (
+ thing.__objclass__.__module__, thing.__objclass__.__name__,
+ thing.__name__)
+ if inspect.isclass(thing):
+ return 'class ' + thing.__name__
+ if inspect.isfunction(thing):
+ return 'function ' + thing.__name__
+ if inspect.ismethod(thing):
+ return 'method ' + thing.__name__
+ return type(thing).__name__
+
+def locate(path, forceload=0):
+ """Locate an object by name or dotted path, importing as necessary."""
+ parts = [part for part in path.split('.') if part]
+ module, n = None, 0
+ while n < len(parts):
+ nextmodule = safeimport('.'.join(parts[:n+1]), forceload)
+ if nextmodule: module, n = nextmodule, n + 1
+ else: break
+ if module:
+ object = module
+ else:
+ object = builtins
+ for part in parts[n:]:
+ try:
+ object = getattr(object, part)
+ except AttributeError:
+ return None
+ return object
+
+# --------------------------------------- interactive interpreter interface
+
+text = TextDoc()
+plaintext = _PlainTextDoc()
+html = HTMLDoc()
+
+def resolve(thing, forceload=0):
+ """Given an object or a path to an object, get the object and its name."""
+ if isinstance(thing, str):
+ object = locate(thing, forceload)
+ if object is None:
+ raise ImportError('''\
+No Python documentation found for %r.
+Use help() to get the interactive help utility.
+Use help(str) for help on the str class.''' % thing)
+ return object, thing
+ else:
+ name = getattr(thing, '__name__', None)
+ return thing, name if isinstance(name, str) else None
+
+def render_doc(thing, title='Python Library Documentation: %s', forceload=0,
+ renderer=None):
+ """Render text documentation, given an object or a path to an object."""
+ if renderer is None:
+ renderer = text
+ object, name = resolve(thing, forceload)
+ desc = describe(object)
+ module = inspect.getmodule(object)
+ if name and '.' in name:
+ desc += ' in ' + name[:name.rfind('.')]
+ elif module and module is not object:
+ desc += ' in module ' + module.__name__
+
+ if not (inspect.ismodule(object) or
+ inspect.isclass(object) or
+ inspect.isroutine(object) or
+ inspect.isgetsetdescriptor(object) or
+ inspect.ismemberdescriptor(object) or
+ isinstance(object, property)):
+ # If the passed object is a piece of data or an instance,
+ # document its available methods instead of its value.
+ object = type(object)
+ desc += ' object'
+ return title % desc + '\n\n' + renderer.document(object, name)
+
+def doc(thing, title='Python Library Documentation: %s', forceload=0,
+ output=None):
+ """Display text documentation, given an object or a path to an object."""
+ try:
+ if output is None:
+ pager(render_doc(thing, title, forceload))
+ else:
+ output.write(render_doc(thing, title, forceload, plaintext))
+ except (ImportError, ErrorDuringImport) as value:
+ print(value)
+
+def writedoc(thing, forceload=0):
+ """Write HTML documentation to a file in the current directory."""
+ try:
+ object, name = resolve(thing, forceload)
+ page = html.page(describe(object), html.document(object, name))
+ with open(name + '.html', 'w', encoding='utf-8') as file:
+ file.write(page)
+ print('wrote', name + '.html')
+ except (ImportError, ErrorDuringImport) as value:
+ print(value)
+
+def writedocs(dir, pkgpath='', done=None):
+ """Write out HTML documentation for all modules in a directory tree."""
+ if done is None: done = {}
+ for importer, modname, ispkg in pkgutil.walk_packages([dir], pkgpath):
+ writedoc(modname)
+ return
+
+class Helper:
+
+ # These dictionaries map a topic name to either an alias, or a tuple
+ # (label, seealso-items). The "label" is the label of the corresponding
+ # section in the .rst file under Doc/ and an index into the dictionary
+ # in pydoc_data/topics.py.
+ #
+ # CAUTION: if you change one of these dictionaries, be sure to adapt the
+ # list of needed labels in Doc/tools/pyspecific.py and
+ # regenerate the pydoc_data/topics.py file by running
+ # make pydoc-topics
+ # in Doc/ and copying the output file into the Lib/ directory.
+
+ keywords = {
+ 'False': '',
+ 'None': '',
+ 'True': '',
+ 'and': 'BOOLEAN',
+ 'as': 'with',
+ 'assert': ('assert', ''),
+ 'break': ('break', 'while for'),
+ 'class': ('class', 'CLASSES SPECIALMETHODS'),
+ 'continue': ('continue', 'while for'),
+ 'def': ('function', ''),
+ 'del': ('del', 'BASICMETHODS'),
+ 'elif': 'if',
+ 'else': ('else', 'while for'),
+ 'except': 'try',
+ 'finally': 'try',
+ 'for': ('for', 'break continue while'),
+ 'from': 'import',
+ 'global': ('global', 'nonlocal NAMESPACES'),
+ 'if': ('if', 'TRUTHVALUE'),
+ 'import': ('import', 'MODULES'),
+ 'in': ('in', 'SEQUENCEMETHODS'),
+ 'is': 'COMPARISON',
+ 'lambda': ('lambda', 'FUNCTIONS'),
+ 'nonlocal': ('nonlocal', 'global NAMESPACES'),
+ 'not': 'BOOLEAN',
+ 'or': 'BOOLEAN',
+ 'pass': ('pass', ''),
+ 'raise': ('raise', 'EXCEPTIONS'),
+ 'return': ('return', 'FUNCTIONS'),
+ 'try': ('try', 'EXCEPTIONS'),
+ 'while': ('while', 'break continue if TRUTHVALUE'),
+ 'with': ('with', 'CONTEXTMANAGERS EXCEPTIONS yield'),
+ 'yield': ('yield', ''),
+ }
+ # Either add symbols to this dictionary or to the symbols dictionary
+ # directly: Whichever is easier. They are merged later.
+ _strprefixes = [p + q for p in ('b', 'f', 'r', 'u') for q in ("'", '"')]
+ _symbols_inverse = {
+ 'STRINGS' : ("'", "'''", '"', '"""', *_strprefixes),
+ 'OPERATORS' : ('+', '-', '*', '**', '/', '//', '%', '<<', '>>', '&',
+ '|', '^', '~', '<', '>', '<=', '>=', '==', '!=', '<>'),
+ 'COMPARISON' : ('<', '>', '<=', '>=', '==', '!=', '<>'),
+ 'UNARY' : ('-', '~'),
+ 'AUGMENTEDASSIGNMENT' : ('+=', '-=', '*=', '/=', '%=', '&=', '|=',
+ '^=', '<<=', '>>=', '**=', '//='),
+ 'BITWISE' : ('<<', '>>', '&', '|', '^', '~'),
+ 'COMPLEX' : ('j', 'J')
+ }
+ symbols = {
+ '%': 'OPERATORS FORMATTING',
+ '**': 'POWER',
+ ',': 'TUPLES LISTS FUNCTIONS',
+ '.': 'ATTRIBUTES FLOAT MODULES OBJECTS',
+ '...': 'ELLIPSIS',
+ ':': 'SLICINGS DICTIONARYLITERALS',
+ '@': 'def class',
+ '\\': 'STRINGS',
+ '_': 'PRIVATENAMES',
+ '__': 'PRIVATENAMES SPECIALMETHODS',
+ '`': 'BACKQUOTES',
+ '(': 'TUPLES FUNCTIONS CALLS',
+ ')': 'TUPLES FUNCTIONS CALLS',
+ '[': 'LISTS SUBSCRIPTS SLICINGS',
+ ']': 'LISTS SUBSCRIPTS SLICINGS'
+ }
+ for topic, symbols_ in _symbols_inverse.items():
+ for symbol in symbols_:
+ topics = symbols.get(symbol, topic)
+ if topic not in topics:
+ topics = topics + ' ' + topic
+ symbols[symbol] = topics
+
+ topics = {
+ 'TYPES': ('types', 'STRINGS UNICODE NUMBERS SEQUENCES MAPPINGS '
+ 'FUNCTIONS CLASSES MODULES FILES inspect'),
+ 'STRINGS': ('strings', 'str UNICODE SEQUENCES STRINGMETHODS '
+ 'FORMATTING TYPES'),
+ 'STRINGMETHODS': ('string-methods', 'STRINGS FORMATTING'),
+ 'FORMATTING': ('formatstrings', 'OPERATORS'),
+ 'UNICODE': ('strings', 'encodings unicode SEQUENCES STRINGMETHODS '
+ 'FORMATTING TYPES'),
+ 'NUMBERS': ('numbers', 'INTEGER FLOAT COMPLEX TYPES'),
+ 'INTEGER': ('integers', 'int range'),
+ 'FLOAT': ('floating', 'float math'),
+ 'COMPLEX': ('imaginary', 'complex cmath'),
+ 'SEQUENCES': ('typesseq', 'STRINGMETHODS FORMATTING range LISTS'),
+ 'MAPPINGS': 'DICTIONARIES',
+ 'FUNCTIONS': ('typesfunctions', 'def TYPES'),
+ 'METHODS': ('typesmethods', 'class def CLASSES TYPES'),
+ 'CODEOBJECTS': ('bltin-code-objects', 'compile FUNCTIONS TYPES'),
+ 'TYPEOBJECTS': ('bltin-type-objects', 'types TYPES'),
+ 'FRAMEOBJECTS': 'TYPES',
+ 'TRACEBACKS': 'TYPES',
+ 'NONE': ('bltin-null-object', ''),
+ 'ELLIPSIS': ('bltin-ellipsis-object', 'SLICINGS'),
+ 'SPECIALATTRIBUTES': ('specialattrs', ''),
+ 'CLASSES': ('types', 'class SPECIALMETHODS PRIVATENAMES'),
+ 'MODULES': ('typesmodules', 'import'),
+ 'PACKAGES': 'import',
+ 'EXPRESSIONS': ('operator-summary', 'lambda or and not in is BOOLEAN '
+ 'COMPARISON BITWISE SHIFTING BINARY FORMATTING POWER '
+ 'UNARY ATTRIBUTES SUBSCRIPTS SLICINGS CALLS TUPLES '
+ 'LISTS DICTIONARIES'),
+ 'OPERATORS': 'EXPRESSIONS',
+ 'PRECEDENCE': 'EXPRESSIONS',
+ 'OBJECTS': ('objects', 'TYPES'),
+ 'SPECIALMETHODS': ('specialnames', 'BASICMETHODS ATTRIBUTEMETHODS '
+ 'CALLABLEMETHODS SEQUENCEMETHODS MAPPINGMETHODS '
+ 'NUMBERMETHODS CLASSES'),
+ 'BASICMETHODS': ('customization', 'hash repr str SPECIALMETHODS'),
+ 'ATTRIBUTEMETHODS': ('attribute-access', 'ATTRIBUTES SPECIALMETHODS'),
+ 'CALLABLEMETHODS': ('callable-types', 'CALLS SPECIALMETHODS'),
+ 'SEQUENCEMETHODS': ('sequence-types', 'SEQUENCES SEQUENCEMETHODS '
+ 'SPECIALMETHODS'),
+ 'MAPPINGMETHODS': ('sequence-types', 'MAPPINGS SPECIALMETHODS'),
+ 'NUMBERMETHODS': ('numeric-types', 'NUMBERS AUGMENTEDASSIGNMENT '
+ 'SPECIALMETHODS'),
+ 'EXECUTION': ('execmodel', 'NAMESPACES DYNAMICFEATURES EXCEPTIONS'),
+ 'NAMESPACES': ('naming', 'global nonlocal ASSIGNMENT DELETION DYNAMICFEATURES'),
+ 'DYNAMICFEATURES': ('dynamic-features', ''),
+ 'SCOPING': 'NAMESPACES',
+ 'FRAMES': 'NAMESPACES',
+ 'EXCEPTIONS': ('exceptions', 'try except finally raise'),
+ 'CONVERSIONS': ('conversions', ''),
+ 'IDENTIFIERS': ('identifiers', 'keywords SPECIALIDENTIFIERS'),
+ 'SPECIALIDENTIFIERS': ('id-classes', ''),
+ 'PRIVATENAMES': ('atom-identifiers', ''),
+ 'LITERALS': ('atom-literals', 'STRINGS NUMBERS TUPLELITERALS '
+ 'LISTLITERALS DICTIONARYLITERALS'),
+ 'TUPLES': 'SEQUENCES',
+ 'TUPLELITERALS': ('exprlists', 'TUPLES LITERALS'),
+ 'LISTS': ('typesseq-mutable', 'LISTLITERALS'),
+ 'LISTLITERALS': ('lists', 'LISTS LITERALS'),
+ 'DICTIONARIES': ('typesmapping', 'DICTIONARYLITERALS'),
+ 'DICTIONARYLITERALS': ('dict', 'DICTIONARIES LITERALS'),
+ 'ATTRIBUTES': ('attribute-references', 'getattr hasattr setattr ATTRIBUTEMETHODS'),
+ 'SUBSCRIPTS': ('subscriptions', 'SEQUENCEMETHODS'),
+ 'SLICINGS': ('slicings', 'SEQUENCEMETHODS'),
+ 'CALLS': ('calls', 'EXPRESSIONS'),
+ 'POWER': ('power', 'EXPRESSIONS'),
+ 'UNARY': ('unary', 'EXPRESSIONS'),
+ 'BINARY': ('binary', 'EXPRESSIONS'),
+ 'SHIFTING': ('shifting', 'EXPRESSIONS'),
+ 'BITWISE': ('bitwise', 'EXPRESSIONS'),
+ 'COMPARISON': ('comparisons', 'EXPRESSIONS BASICMETHODS'),
+ 'BOOLEAN': ('booleans', 'EXPRESSIONS TRUTHVALUE'),
+ 'ASSERTION': 'assert',
+ 'ASSIGNMENT': ('assignment', 'AUGMENTEDASSIGNMENT'),
+ 'AUGMENTEDASSIGNMENT': ('augassign', 'NUMBERMETHODS'),
+ 'DELETION': 'del',
+ 'RETURNING': 'return',
+ 'IMPORTING': 'import',
+ 'CONDITIONAL': 'if',
+ 'LOOPING': ('compound', 'for while break continue'),
+ 'TRUTHVALUE': ('truth', 'if while and or not BASICMETHODS'),
+ 'DEBUGGING': ('debugger', 'pdb'),
+ 'CONTEXTMANAGERS': ('context-managers', 'with'),
+ }
+
+ def __init__(self, input=None, output=None):
+ self._input = input
+ self._output = output
+
+ input = property(lambda self: self._input or sys.stdin)
+ output = property(lambda self: self._output or sys.stdout)
+
+ def __repr__(self):
+ if inspect.stack()[1][3] == '?':
+ self()
+ return ''
+ return '<%s.%s instance>' % (self.__class__.__module__,
+ self.__class__.__qualname__)
+
+ _GoInteractive = object()
+ def __call__(self, request=_GoInteractive):
+ if request is not self._GoInteractive:
+ self.help(request)
+ else:
+ self.intro()
+ self.interact()
+ self.output.write('''
+You are now leaving help and returning to the Python interpreter.
+If you want to ask for help on a particular object directly from the
+interpreter, you can type "help(object)". Executing "help('string')"
+has the same effect as typing a particular string at the help> prompt.
+''')
+
+ def interact(self):
+ self.output.write('\n')
+ while True:
+ try:
+ request = self.getline('help> ')
+ if not request: break
+ except (KeyboardInterrupt, EOFError):
+ break
+ request = request.strip()
+
+ # Make sure significant trailing quoting marks of literals don't
+ # get deleted while cleaning input
+ if (len(request) > 2 and request[0] == request[-1] in ("'", '"')
+ and request[0] not in request[1:-1]):
+ request = request[1:-1]
+ if request.lower() in ('q', 'quit'): break
+ if request == 'help':
+ self.intro()
+ else:
+ self.help(request)
+
+ def getline(self, prompt):
+ """Read one line, using input() when appropriate."""
+ if self.input is sys.stdin:
+ return input(prompt)
+ else:
+ self.output.write(prompt)
+ self.output.flush()
+ return self.input.readline()
+
+ def help(self, request):
+ if type(request) is type(''):
+ request = request.strip()
+ if request == 'keywords': self.listkeywords()
+ elif request == 'symbols': self.listsymbols()
+ elif request == 'topics': self.listtopics()
+ elif request == 'modules': self.listmodules()
+ elif request[:8] == 'modules ':
+ self.listmodules(request.split()[1])
+ elif request in self.symbols: self.showsymbol(request)
+ elif request in ['True', 'False', 'None']:
+ # special case these keywords since they are objects too
+ doc(eval(request), 'Help on %s:')
+ elif request in self.keywords: self.showtopic(request)
+ elif request in self.topics: self.showtopic(request)
+ elif request: doc(request, 'Help on %s:', output=self._output)
+ else: doc(str, 'Help on %s:', output=self._output)
+ elif isinstance(request, Helper): self()
+ else: doc(request, 'Help on %s:', output=self._output)
+ self.output.write('\n')
+
+ def intro(self):
+ self.output.write('''
+Welcome to Python {0}'s help utility!
+
+If this is your first time using Python, you should definitely check out
+the tutorial on the Internet at https://docs.python.org/{0}/tutorial/.
+
+Enter the name of any module, keyword, or topic to get help on writing
+Python programs and using Python modules. To quit this help utility and
+return to the interpreter, just type "quit".
+
+To get a list of available modules, keywords, symbols, or topics, type
+"modules", "keywords", "symbols", or "topics". Each module also comes
+with a one-line summary of what it does; to list the modules whose name
+or summary contain a given string such as "spam", type "modules spam".
+'''.format('%d.%d' % sys.version_info[:2]))
+
+ def list(self, items, columns=4, width=80):
+ items = list(sorted(items))
+ colw = width // columns
+ rows = (len(items) + columns - 1) // columns
+ for row in range(rows):
+ for col in range(columns):
+ i = col * rows + row
+ if i < len(items):
+ self.output.write(items[i])
+ if col < columns - 1:
+ self.output.write(' ' + ' ' * (colw - 1 - len(items[i])))
+ self.output.write('\n')
+
+ def listkeywords(self):
+ self.output.write('''
+Here is a list of the Python keywords. Enter any keyword to get more help.
+
+''')
+ self.list(self.keywords.keys())
+
+ def listsymbols(self):
+ self.output.write('''
+Here is a list of the punctuation symbols which Python assigns special meaning
+to. Enter any symbol to get more help.
+
+''')
+ self.list(self.symbols.keys())
+
+ def listtopics(self):
+ self.output.write('''
+Here is a list of available topics. Enter any topic name to get more help.
+
+''')
+ self.list(self.topics.keys())
+
+ def showtopic(self, topic, more_xrefs=''):
+ try:
+ import pydoc_data.topics
+ except ImportError:
+ self.output.write('''
+Sorry, topic and keyword documentation is not available because the
+module "pydoc_data.topics" could not be found.
+''')
+ return
+ target = self.topics.get(topic, self.keywords.get(topic))
+ if not target:
+ self.output.write('no documentation found for %s\n' % repr(topic))
+ return
+ if type(target) is type(''):
+ return self.showtopic(target, more_xrefs)
+
+ label, xrefs = target
+ try:
+ doc = pydoc_data.topics.topics[label]
+ except KeyError:
+ self.output.write('no documentation found for %s\n' % repr(topic))
+ return
+ doc = doc.strip() + '\n'
+ if more_xrefs:
+ xrefs = (xrefs or '') + ' ' + more_xrefs
+ if xrefs:
+ import textwrap
+ text = 'Related help topics: ' + ', '.join(xrefs.split()) + '\n'
+ wrapped_text = textwrap.wrap(text, 72)
+ doc += '\n%s\n' % '\n'.join(wrapped_text)
+ pager(doc)
+
+ def _gettopic(self, topic, more_xrefs=''):
+ """Return unbuffered tuple of (topic, xrefs).
+
+ If an error occurs here, the exception is caught and displayed by
+ the url handler.
+
+ This function duplicates the showtopic method but returns its
+ result directly so it can be formatted for display in an html page.
+ """
+ try:
+ import pydoc_data.topics
+ except ImportError:
+ return('''
+Sorry, topic and keyword documentation is not available because the
+module "pydoc_data.topics" could not be found.
+''' , '')
+ target = self.topics.get(topic, self.keywords.get(topic))
+ if not target:
+ raise ValueError('could not find topic')
+ if isinstance(target, str):
+ return self._gettopic(target, more_xrefs)
+ label, xrefs = target
+ doc = pydoc_data.topics.topics[label]
+ if more_xrefs:
+ xrefs = (xrefs or '') + ' ' + more_xrefs
+ return doc, xrefs
+
+ def showsymbol(self, symbol):
+ target = self.symbols[symbol]
+ topic, _, xrefs = target.partition(' ')
+ self.showtopic(topic, xrefs)
+
+ def listmodules(self, key=''):
+ if key:
+ self.output.write('''
+Here is a list of modules whose name or summary contains '{}'.
+If there are any, enter a module name to get more help.
+
+'''.format(key))
+ apropos(key)
+ else:
+ self.output.write('''
+Please wait a moment while I gather a list of all available modules...
+
+''')
+ modules = {}
+ def callback(path, modname, desc, modules=modules):
+ if modname and modname[-9:] == '.__init__':
+ modname = modname[:-9] + ' (package)'
+ if modname.find('.') < 0:
+ modules[modname] = 1
+ def onerror(modname):
+ callback(None, modname, None)
+ ModuleScanner().run(callback, onerror=onerror)
+ self.list(modules.keys())
+ self.output.write('''
+Enter any module name to get more help. Or, type "modules spam" to search
+for modules whose name or summary contain the string "spam".
+''')
+
+help = Helper()
+
+class ModuleScanner:
+ """An interruptible scanner that searches module synopses."""
+
+ def run(self, callback, key=None, completer=None, onerror=None):
+ if key: key = key.lower()
+ self.quit = False
+ seen = {}
+
+ for modname in sys.builtin_module_names:
+ if modname != '__main__':
+ seen[modname] = 1
+ if key is None:
+ callback(None, modname, '')
+ else:
+ name = __import__(modname).__doc__ or ''
+ desc = name.split('\n')[0]
+ name = modname + ' - ' + desc
+ if name.lower().find(key) >= 0:
+ callback(None, modname, desc)
+
+ for importer, modname, ispkg in pkgutil.walk_packages(onerror=onerror):
+ if self.quit:
+ break
+
+ if key is None:
+ callback(None, modname, '')
+ else:
+ try:
+ spec = pkgutil._get_spec(importer, modname)
+ except SyntaxError:
+ # raised by tests for bad coding cookies or BOM
+ continue
+ loader = spec.loader
+ if hasattr(loader, 'get_source'):
+ try:
+ source = loader.get_source(modname)
+ except Exception:
+ if onerror:
+ onerror(modname)
+ continue
+ desc = source_synopsis(io.StringIO(source)) or ''
+ if hasattr(loader, 'get_filename'):
+ path = loader.get_filename(modname)
+ else:
+ path = None
+ else:
+ try:
+ module = importlib._bootstrap._load(spec)
+ except ImportError:
+ if onerror:
+ onerror(modname)
+ continue
+ desc = module.__doc__.splitlines()[0] if module.__doc__ else ''
+ path = getattr(module,'__file__',None)
+ name = modname + ' - ' + desc
+ if name.lower().find(key) >= 0:
+ callback(path, modname, desc)
+
+ if completer:
+ completer()
+
+def apropos(key):
+ """Print all the one-line module summaries that contain a substring."""
+ def callback(path, modname, desc):
+ if modname[-9:] == '.__init__':
+ modname = modname[:-9] + ' (package)'
+ print(modname, desc and '- ' + desc)
+ def onerror(modname):
+ pass
+ with warnings.catch_warnings():
+ warnings.filterwarnings('ignore') # ignore problems during import
+ ModuleScanner().run(callback, key, onerror=onerror)
+
+# --------------------------------------- enhanced Web browser interface
+
+def _start_server(urlhandler, port):
+ """Start an HTTP server thread on a specific port.
+
+ Start an HTML/text server thread, so HTML or text documents can be
+ browsed dynamically and interactively with a Web browser. Example use:
+
+ >>> import time
+ >>> import pydoc
+
+ Define a URL handler. To determine what the client is asking
+ for, check the URL and content_type.
+
+ Then get or generate some text or HTML code and return it.
+
+ >>> def my_url_handler(url, content_type):
+ ... text = 'the URL sent was: (%s, %s)' % (url, content_type)
+ ... return text
+
+ Start server thread on port 0.
+ If you use port 0, the server will pick a random port number.
+ You can then use serverthread.port to get the port number.
+
+ >>> port = 0
+ >>> serverthread = pydoc._start_server(my_url_handler, port)
+
+ Check that the server is really started. If it is, open browser
+ and get first page. Use serverthread.url as the starting page.
+
+ >>> if serverthread.serving:
+ ... import webbrowser
+
+ The next two lines are commented out so a browser doesn't open if
+ doctest is run on this module.
+
+ #... webbrowser.open(serverthread.url)
+ #True
+
+ Let the server do its thing. We just need to monitor its status.
+ Use time.sleep so the loop doesn't hog the CPU.
+
+ >>> starttime = time.time()
+ >>> timeout = 1 #seconds
+
+ This is a short timeout for testing purposes.
+
+ >>> while serverthread.serving:
+ ... time.sleep(.01)
+ ... if serverthread.serving and time.time() - starttime > timeout:
+ ... serverthread.stop()
+ ... break
+
+ Print any errors that may have occurred.
+
+ >>> print(serverthread.error)
+ None
+ """
+ import http.server
+ import email.message
+ import select
+ import threading
+
+ class DocHandler(http.server.BaseHTTPRequestHandler):
+
+ def do_GET(self):
+ """Process a request from an HTML browser.
+
+ The URL received is in self.path.
+ Get an HTML page from self.urlhandler and send it.
+ """
+ if self.path.endswith('.css'):
+ content_type = 'text/css'
+ else:
+ content_type = 'text/html'
+ self.send_response(200)
+ self.send_header('Content-Type', '%s; charset=UTF-8' % content_type)
+ self.end_headers()
+ self.wfile.write(self.urlhandler(
+ self.path, content_type).encode('utf-8'))
+
+ def log_message(self, *args):
+ # Don't log messages.
+ pass
+
+ class DocServer(http.server.HTTPServer):
+
+ def __init__(self, port, callback):
+ self.host = 'localhost'
+ self.address = (self.host, port)
+ self.callback = callback
+ self.base.__init__(self, self.address, self.handler)
+ self.quit = False
+
+ def serve_until_quit(self):
+ while not self.quit:
+ rd, wr, ex = select.select([self.socket.fileno()], [], [], 1)
+ if rd:
+ self.handle_request()
+ self.server_close()
+
+ def server_activate(self):
+ self.base.server_activate(self)
+ if self.callback:
+ self.callback(self)
+
+ class ServerThread(threading.Thread):
+
+ def __init__(self, urlhandler, port):
+ self.urlhandler = urlhandler
+ self.port = int(port)
+ threading.Thread.__init__(self)
+ self.serving = False
+ self.error = None
+
+ def run(self):
+ """Start the server."""
+ try:
+ DocServer.base = http.server.HTTPServer
+ DocServer.handler = DocHandler
+ DocHandler.MessageClass = email.message.Message
+ DocHandler.urlhandler = staticmethod(self.urlhandler)
+ docsvr = DocServer(self.port, self.ready)
+ self.docserver = docsvr
+ docsvr.serve_until_quit()
+ except Exception as e:
+ self.error = e
+
+ def ready(self, server):
+ self.serving = True
+ self.host = server.host
+ self.port = server.server_port
+ self.url = 'http://%s:%d/' % (self.host, self.port)
+
+ def stop(self):
+ """Stop the server and this thread nicely"""
+ self.docserver.quit = True
+ self.join()
+ # explicitly break a reference cycle: DocServer.callback
+ # has indirectly a reference to ServerThread.
+ self.docserver = None
+ self.serving = False
+ self.url = None
+
+ thread = ServerThread(urlhandler, port)
+ thread.start()
+ # Wait until thread.serving is True to make sure we are
+ # really up before returning.
+ while not thread.error and not thread.serving:
+ time.sleep(.01)
+ return thread
+
+
+def _url_handler(url, content_type="text/html"):
+ """The pydoc url handler for use with the pydoc server.
+
+ If the content_type is 'text/css', the _pydoc.css style
+    sheet is read and returned if it exists.
+
+ If the content_type is 'text/html', then the result of
+ get_html_page(url) is returned.
+ """
+ class _HTMLDoc(HTMLDoc):
+
+ def page(self, title, contents):
+ """Format an HTML page."""
+ css_path = "pydoc_data/_pydoc.css"
+ css_link = (
+ '<link rel="stylesheet" type="text/css" href="%s">' %
+ css_path)
+ return '''\
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
+<html><head><title>Pydoc: %s</title>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
+%s</head><body bgcolor="#f0f0f8">%s<div style="clear:both;padding-top:.5em;">%s</div>
+</body></html>''' % (title, css_link, html_navbar(), contents)
+
+ def filelink(self, url, path):
+ return '<a href="getfile?key=%s">%s</a>' % (url, path)
+
+
+ html = _HTMLDoc()
+
+ def html_navbar():
+ version = html.escape("%s [%s, %s]" % (platform.python_version(),
+ platform.python_build()[0],
+ platform.python_compiler()))
+ return """
+ <div style='float:left'>
+ Python %s<br>%s
+ </div>
+ <div style='float:right'>
+ <div style='text-align:center'>
+ <a href="index.html">Module Index</a>
+ : <a href="topics.html">Topics</a>
+ : <a href="keywords.html">Keywords</a>
+ </div>
+ <div>
+ <form action="get" style='display:inline;'>
+ <input type=text name=key size=15>
+ <input type=submit value="Get">
+ </form>
+ <form action="search" style='display:inline;'>
+ <input type=text name=key size=15>
+ <input type=submit value="Search">
+ </form>
+ </div>
+ </div>
+ """ % (version, html.escape(platform.platform(terse=True)))
+
+ def html_index():
+ """Module Index page."""
+
+ def bltinlink(name):
+ return '<a href="%s.html">%s</a>' % (name, name)
+
+ heading = html.heading(
+ '<big><big><strong>Index of Modules</strong></big></big>',
+ '#ffffff', '#7799ee')
+ names = [name for name in sys.builtin_module_names
+ if name != '__main__']
+ contents = html.multicolumn(names, bltinlink)
+ contents = [heading, '<p>' + html.bigsection(
+ 'Built-in Modules', '#ffffff', '#ee77aa', contents)]
+
+ seen = {}
+ for dir in sys.path:
+ contents.append(html.index(dir, seen))
+
+ contents.append(
+ '<p align=right><font color="#909090" face="helvetica,'
+ 'arial"><strong>pydoc</strong> by Ka-Ping Yee'
+            '&lt;ping@lfw.org&gt;</font>')
+ return 'Index of Modules', ''.join(contents)
+
+ def html_search(key):
+ """Search results page."""
+ # scan for modules
+ search_result = []
+
+ def callback(path, modname, desc):
+ if modname[-9:] == '.__init__':
+ modname = modname[:-9] + ' (package)'
+ search_result.append((modname, desc and '- ' + desc))
+
+ with warnings.catch_warnings():
+ warnings.filterwarnings('ignore') # ignore problems during import
+ def onerror(modname):
+ pass
+ ModuleScanner().run(callback, key, onerror=onerror)
+
+ # format page
+ def bltinlink(name):
+ return '<a href="%s.html">%s</a>' % (name, name)
+
+ results = []
+ heading = html.heading(
+ '<big><big><strong>Search Results</strong></big></big>',
+ '#ffffff', '#7799ee')
+ for name, desc in search_result:
+ results.append(bltinlink(name) + desc)
+ contents = heading + html.bigsection(
+ 'key = %s' % key, '#ffffff', '#ee77aa', '<br>'.join(results))
+ return 'Search Results', contents
+
+ def html_getfile(path):
+ """Get and display a source file listing safely."""
+ path = urllib.parse.unquote(path)
+ with tokenize.open(path) as fp:
+ lines = html.escape(fp.read())
+ body = '<pre>%s</pre>' % lines
+ heading = html.heading(
+ '<big><big><strong>File Listing</strong></big></big>',
+ '#ffffff', '#7799ee')
+ contents = heading + html.bigsection(
+ 'File: %s' % path, '#ffffff', '#ee77aa', body)
+ return 'getfile %s' % path, contents
+
+ def html_topics():
+ """Index of topic texts available."""
+
+ def bltinlink(name):
+ return '<a href="topic?key=%s">%s</a>' % (name, name)
+
+ heading = html.heading(
+ '<big><big><strong>INDEX</strong></big></big>',
+ '#ffffff', '#7799ee')
+ names = sorted(Helper.topics.keys())
+
+ contents = html.multicolumn(names, bltinlink)
+ contents = heading + html.bigsection(
+ 'Topics', '#ffffff', '#ee77aa', contents)
+ return 'Topics', contents
+
+ def html_keywords():
+ """Index of keywords."""
+ heading = html.heading(
+ '<big><big><strong>INDEX</strong></big></big>',
+ '#ffffff', '#7799ee')
+ names = sorted(Helper.keywords.keys())
+
+ def bltinlink(name):
+ return '<a href="topic?key=%s">%s</a>' % (name, name)
+
+ contents = html.multicolumn(names, bltinlink)
+ contents = heading + html.bigsection(
+ 'Keywords', '#ffffff', '#ee77aa', contents)
+ return 'Keywords', contents
+
+ def html_topicpage(topic):
+ """Topic or keyword help page."""
+ buf = io.StringIO()
+ htmlhelp = Helper(buf, buf)
+ contents, xrefs = htmlhelp._gettopic(topic)
+ if topic in htmlhelp.keywords:
+ title = 'KEYWORD'
+ else:
+ title = 'TOPIC'
+ heading = html.heading(
+ '<big><big><strong>%s</strong></big></big>' % title,
+ '#ffffff', '#7799ee')
+ contents = '<pre>%s</pre>' % html.markup(contents)
+ contents = html.bigsection(topic , '#ffffff','#ee77aa', contents)
+ if xrefs:
+ xrefs = sorted(xrefs.split())
+
+ def bltinlink(name):
+ return '<a href="topic?key=%s">%s</a>' % (name, name)
+
+ xrefs = html.multicolumn(xrefs, bltinlink)
+ xrefs = html.section('Related help topics: ',
+ '#ffffff', '#ee77aa', xrefs)
+ return ('%s %s' % (title, topic),
+ ''.join((heading, contents, xrefs)))
+
+ def html_getobj(url):
+ obj = locate(url, forceload=1)
+ if obj is None and url != 'None':
+ raise ValueError('could not find object')
+ title = describe(obj)
+ content = html.document(obj, url)
+ return title, content
+
+ def html_error(url, exc):
+ heading = html.heading(
+ '<big><big><strong>Error</strong></big></big>',
+ '#ffffff', '#7799ee')
+ contents = '<br>'.join(html.escape(line) for line in
+ format_exception_only(type(exc), exc))
+ contents = heading + html.bigsection(url, '#ffffff', '#bb0000',
+ contents)
+ return "Error - %s" % url, contents
+
+ def get_html_page(url):
+ """Generate an HTML page for url."""
+ complete_url = url
+ if url.endswith('.html'):
+ url = url[:-5]
+ try:
+ if url in ("", "index"):
+ title, content = html_index()
+ elif url == "topics":
+ title, content = html_topics()
+ elif url == "keywords":
+ title, content = html_keywords()
+ elif '=' in url:
+ op, _, url = url.partition('=')
+ if op == "search?key":
+ title, content = html_search(url)
+ elif op == "getfile?key":
+ title, content = html_getfile(url)
+ elif op == "topic?key":
+ # try topics first, then objects.
+ try:
+ title, content = html_topicpage(url)
+ except ValueError:
+ title, content = html_getobj(url)
+ elif op == "get?key":
+ # try objects first, then topics.
+ if url in ("", "index"):
+ title, content = html_index()
+ else:
+ try:
+ title, content = html_getobj(url)
+ except ValueError:
+ title, content = html_topicpage(url)
+ else:
+ raise ValueError('bad pydoc url')
+ else:
+ title, content = html_getobj(url)
+ except Exception as exc:
+ # Catch any errors and display them in an error page.
+ title, content = html_error(complete_url, exc)
+ return html.page(title, content)
+
+ if url.startswith('/'):
+ url = url[1:]
+ if content_type == 'text/css':
+ path_here = os.path.dirname(os.path.realpath(__file__))
+ css_path = os.path.join(path_here, url)
+ with open(css_path) as fp:
+ return ''.join(fp.readlines())
+ elif content_type == 'text/html':
+ return get_html_page(url)
+ # Errors outside the url handler are caught by the server.
+ raise TypeError('unknown content type %r for url %s' % (content_type, url))
+
+
+def browse(port=0, *, open_browser=True):
+ """Start the enhanced pydoc Web server and open a Web browser.
+
+ Use port '0' to start the server on an arbitrary port.
+ Set open_browser to False to suppress opening a browser.
+ """
+ import webbrowser
+ serverthread = _start_server(_url_handler, port)
+ if serverthread.error:
+ print(serverthread.error)
+ return
+ if serverthread.serving:
+ server_help_msg = 'Server commands: [b]rowser, [q]uit'
+ if open_browser:
+ webbrowser.open(serverthread.url)
+ try:
+ print('Server ready at', serverthread.url)
+ print(server_help_msg)
+ while serverthread.serving:
+ cmd = input('server> ')
+ cmd = cmd.lower()
+ if cmd == 'q':
+ break
+ elif cmd == 'b':
+ webbrowser.open(serverthread.url)
+ else:
+ print(server_help_msg)
+ except (KeyboardInterrupt, EOFError):
+ print()
+ finally:
+ if serverthread.serving:
+ serverthread.stop()
+ print('Server stopped')
+
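+# Illustrative usage (not part of the upstream module): serve the documentation
+# on an OS-assigned port without trying to open a browser (there is no web
+# browser in the UEFI shell); this assumes a working socket implementation.
+#
+#     >>> import pydoc
+#     >>> pydoc.browse(port=0, open_browser=False)   # doctest: +SKIP
+#     Server ready at http://localhost:<port>/
+#     Server commands: [b]rowser, [q]uit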
+
+# -------------------------------------------------- command-line interface
+
+def ispath(x):
+ return isinstance(x, str) and x.find(os.sep) >= 0
+
+def cli():
+ """Command-line interface (looks at sys.argv to decide what to do)."""
+ import getopt
+ class BadUsage(Exception): pass
+
+ # Scripts don't get the current directory in their path by default
+ # unless they are run with the '-m' switch
+ if '' not in sys.path:
+ scriptdir = os.path.dirname(sys.argv[0])
+ if scriptdir in sys.path:
+ sys.path.remove(scriptdir)
+ sys.path.insert(0, '.')
+
+ try:
+ opts, args = getopt.getopt(sys.argv[1:], 'bk:p:w')
+ writing = False
+ start_server = False
+ open_browser = False
+ port = None
+ for opt, val in opts:
+ if opt == '-b':
+ start_server = True
+ open_browser = True
+ if opt == '-k':
+ apropos(val)
+ return
+ if opt == '-p':
+ start_server = True
+ port = val
+ if opt == '-w':
+ writing = True
+
+ if start_server:
+ if port is None:
+ port = 0
+ browse(port, open_browser=open_browser)
+ return
+
+ if not args: raise BadUsage
+ for arg in args:
+ if ispath(arg) and not os.path.exists(arg):
+ print('file %r does not exist' % arg)
+ break
+ try:
+ if ispath(arg) and os.path.isfile(arg):
+ arg = importfile(arg)
+ if writing:
+ if ispath(arg) and os.path.isdir(arg):
+ writedocs(arg)
+ else:
+ writedoc(arg)
+ else:
+ help.help(arg)
+ except ErrorDuringImport as value:
+ print(value)
+
+ except (getopt.error, BadUsage):
+ cmd = os.path.splitext(os.path.basename(sys.argv[0]))[0]
+ print("""pydoc - the Python documentation tool
+
+{cmd} <name> ...
+ Show text documentation on something. <name> may be the name of a
+ Python keyword, topic, function, module, or package, or a dotted
+ reference to a class or function within a module or module in a
+ package. If <name> contains a '{sep}', it is used as the path to a
+ Python source file to document. If name is 'keywords', 'topics',
+ or 'modules', a listing of these things is displayed.
+
+{cmd} -k <keyword>
+ Search for a keyword in the synopsis lines of all available modules.
+
+{cmd} -p <port>
+ Start an HTTP server on the given port on the local machine. Port
+ number 0 can be used to get an arbitrary unused port.
+
+{cmd} -b
+ Start an HTTP server on an arbitrary unused port and open a Web browser
+ to interactively browse documentation. The -p option can be used with
+ the -b option to explicitly specify the server port.
+
+{cmd} -w <name> ...
+ Write out the HTML documentation for a module to a file in the current
+ directory. If <name> contains a '{sep}', it is treated as a filename; if
+ it names a directory, documentation is written for all the contents.
+""".format(cmd=cmd, sep=os.sep))
+
+if __name__ == '__main__':
+ cli()
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
new file mode 100644
index 00000000..d34a9d0f
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
@@ -0,0 +1,1160 @@
+"""Utility functions for copying and archiving files and directory trees.
+
+XXX The functions here don't copy the resource fork or other metadata on Mac.
+
+"""
+
+import os
+import sys
+import stat
+import fnmatch
+import collections
+import errno
+
+try:
+ import zlib
+ del zlib
+ _ZLIB_SUPPORTED = True
+except ImportError:
+ _ZLIB_SUPPORTED = False
+
+try:
+ import bz2
+ del bz2
+ _BZ2_SUPPORTED = True
+except ImportError:
+ _BZ2_SUPPORTED = False
+
+try:
+ import lzma
+ del lzma
+ _LZMA_SUPPORTED = True
+except ImportError:
+ _LZMA_SUPPORTED = False
+
+try:
+ from pwd import getpwnam
+except ImportError:
+ getpwnam = None
+
+try:
+ from grp import getgrnam
+except ImportError:
+ getgrnam = None
+
+__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2",
+ "copytree", "move", "rmtree", "Error", "SpecialFileError",
+ "ExecError", "make_archive", "get_archive_formats",
+ "register_archive_format", "unregister_archive_format",
+ "get_unpack_formats", "register_unpack_format",
+ "unregister_unpack_format", "unpack_archive",
+ "ignore_patterns", "chown", "which", "get_terminal_size",
+ "SameFileError"]
+ # disk_usage is added later, if available on the platform
+
+class Error(OSError):
+ pass
+
+class SameFileError(Error):
+ """Raised when source and destination are the same file."""
+
+class SpecialFileError(OSError):
+ """Raised when trying to do a kind of operation (e.g. copying) which is
+ not supported on a special file (e.g. a named pipe)"""
+
+class ExecError(OSError):
+ """Raised when a command could not be executed"""
+
+class ReadError(OSError):
+ """Raised when an archive cannot be read"""
+
+class RegistryError(Exception):
+ """Raised when a registry operation with the archiving
+ and unpacking registries fails"""
+
+
+def copyfileobj(fsrc, fdst, length=16*1024):
+ """copy data from file-like object fsrc to file-like object fdst"""
+ while 1:
+ buf = fsrc.read(length)
+ if not buf:
+ break
+ fdst.write(buf)
+
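+# Illustrative usage (not part of the upstream module): stream one already-open
+# binary file into another in 64 KiB chunks.
+#
+#     with open('source.bin', 'rb') as fsrc, open('dest.bin', 'wb') as fdst:
+#         copyfileobj(fsrc, fdst, length=64 * 1024)
+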
+def _samefile(src, dst):
+ # Macintosh, Unix.
+ if hasattr(os.path, 'samefile'):
+ try:
+ return os.path.samefile(src, dst)
+ except OSError:
+ return False
+
+ # All other platforms: check for same pathname.
+ return (os.path.normcase(os.path.abspath(src)) ==
+ os.path.normcase(os.path.abspath(dst)))
+
+def copyfile(src, dst, *, follow_symlinks=True):
+ """Copy data from src to dst.
+
+ If follow_symlinks is not set and src is a symbolic link, a new
+ symlink will be created instead of copying the file it points to.
+
+ """
+ if _samefile(src, dst):
+ raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
+
+ for fn in [src, dst]:
+ try:
+ st = os.stat(fn)
+ except OSError:
+ # File most likely does not exist
+ pass
+ else:
+ # XXX What about other special files? (sockets, devices...)
+ if stat.S_ISFIFO(st.st_mode):
+ raise SpecialFileError("`%s` is a named pipe" % fn)
+
+ if not follow_symlinks and os.path.islink(src):
+ os.symlink(os.readlink(src), dst)
+ else:
+ with open(src, 'rb') as fsrc:
+ with open(dst, 'wb') as fdst:
+ copyfileobj(fsrc, fdst)
+ return dst
+
+def copymode(src, dst, *, follow_symlinks=True):
+ """Copy mode bits from src to dst.
+
+ If follow_symlinks is not set, symlinks aren't followed if and only
+ if both `src` and `dst` are symlinks. If `lchmod` isn't available
+ (e.g. Linux) this method does nothing.
+
+ """
+ if not follow_symlinks and os.path.islink(src) and os.path.islink(dst):
+ if hasattr(os, 'lchmod'):
+ stat_func, chmod_func = os.lstat, os.lchmod
+ else:
+ return
+ elif hasattr(os, 'chmod'):
+ stat_func, chmod_func = os.stat, os.chmod
+ else:
+ return
+
+ st = stat_func(src)
+ chmod_func(dst, stat.S_IMODE(st.st_mode))
+
+if hasattr(os, 'listxattr'):
+ def _copyxattr(src, dst, *, follow_symlinks=True):
+ """Copy extended filesystem attributes from `src` to `dst`.
+
+ Overwrite existing attributes.
+
+ If `follow_symlinks` is false, symlinks won't be followed.
+
+ """
+
+ try:
+ names = os.listxattr(src, follow_symlinks=follow_symlinks)
+ except OSError as e:
+ if e.errno not in (errno.ENOTSUP, errno.ENODATA):
+ raise
+ return
+ for name in names:
+ try:
+ value = os.getxattr(src, name, follow_symlinks=follow_symlinks)
+ os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
+ except OSError as e:
+ if e.errno not in (errno.EPERM, errno.ENOTSUP, errno.ENODATA):
+ raise
+else:
+ def _copyxattr(*args, **kwargs):
+ pass
+
+def copystat(src, dst, *, follow_symlinks=True):
+ """Copy file metadata
+
+ Copy the permission bits, last access time, last modification time, and
+ flags from `src` to `dst`. On Linux, copystat() also copies the "extended
+ attributes" where possible. The file contents, owner, and group are
+ unaffected. `src` and `dst` are path names given as strings.
+
+ If the optional flag `follow_symlinks` is not set, symlinks aren't
+ followed if and only if both `src` and `dst` are symlinks.
+ """
+ def _nop(*args, ns=None, follow_symlinks=None):
+ pass
+
+ # follow symlinks (aka don't not follow symlinks)
+ follow = follow_symlinks or not (os.path.islink(src) and os.path.islink(dst))
+ if follow:
+ # use the real function if it exists
+ def lookup(name):
+ return getattr(os, name, _nop)
+ else:
+ # use the real function only if it exists
+ # *and* it supports follow_symlinks
+ def lookup(name):
+ fn = getattr(os, name, _nop)
+ if fn in os.supports_follow_symlinks:
+ return fn
+ return _nop
+
+ st = lookup("stat")(src, follow_symlinks=follow)
+ mode = stat.S_IMODE(st.st_mode)
+ lookup("utime")(dst, ns=(st.st_atime_ns, st.st_mtime_ns),
+ follow_symlinks=follow)
+ try:
+ lookup("chmod")(dst, mode, follow_symlinks=follow)
+ except NotImplementedError:
+ # if we got a NotImplementedError, it's because
+ # * follow_symlinks=False,
+ # * lchown() is unavailable, and
+ # * either
+ # * fchownat() is unavailable or
+ # * fchownat() doesn't implement AT_SYMLINK_NOFOLLOW.
+ # (it returned ENOSUP.)
+ # therefore we're out of options--we simply cannot chown the
+ # symlink. give up, suppress the error.
+ # (which is what shutil always did in this circumstance.)
+ pass
+ if hasattr(st, 'st_flags'):
+ try:
+ lookup("chflags")(dst, st.st_flags, follow_symlinks=follow)
+ except OSError as why:
+ for err in 'EOPNOTSUPP', 'ENOTSUP':
+ if hasattr(errno, err) and why.errno == getattr(errno, err):
+ break
+ else:
+ raise
+ _copyxattr(src, dst, follow_symlinks=follow)
+
+def copy(src, dst, *, follow_symlinks=True):
+ """Copy data and mode bits ("cp src dst"). Return the file's destination.
+
+ The destination may be a directory.
+
+ If follow_symlinks is false, symlinks won't be followed. This
+ resembles GNU's "cp -P src dst".
+
+ If source and destination are the same file, a SameFileError will be
+ raised.
+
+ """
+ if os.path.isdir(dst):
+ dst = os.path.join(dst, os.path.basename(src))
+ copyfile(src, dst, follow_symlinks=follow_symlinks)
+ copymode(src, dst, follow_symlinks=follow_symlinks)
+ return dst
+
+def copy2(src, dst, *, follow_symlinks=True):
+ """Copy data and metadata. Return the file's destination.
+
+ Metadata is copied with copystat(). Please see the copystat function
+ for more information.
+
+ The destination may be a directory.
+
+ If follow_symlinks is false, symlinks won't be followed. This
+ resembles GNU's "cp -P src dst".
+
+ """
+ if os.path.isdir(dst):
+ dst = os.path.join(dst, os.path.basename(src))
+ copyfile(src, dst, follow_symlinks=follow_symlinks)
+ copystat(src, dst, follow_symlinks=follow_symlinks)
+ return dst
+
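+# Illustrative usage (not part of the upstream module): copy() preserves only
+# the permission bits, while copy2() additionally copies the metadata handled
+# by copystat() (timestamps, flags, xattrs where supported). The paths are
+# hypothetical.
+#
+#     import shutil
+#     shutil.copy('settings.cfg', 'backup/')    # data + mode bits
+#     shutil.copy2('settings.cfg', 'backup/')   # data + copystat() metadata
+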
+def ignore_patterns(*patterns):
+ """Function that can be used as copytree() ignore parameter.
+
+ Patterns is a sequence of glob-style patterns
+ that are used to exclude files"""
+ def _ignore_patterns(path, names):
+ ignored_names = []
+ for pattern in patterns:
+ ignored_names.extend(fnmatch.filter(names, pattern))
+ return set(ignored_names)
+ return _ignore_patterns
+
+def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2,
+ ignore_dangling_symlinks=False):
+ """Recursively copy a directory tree.
+
+ The destination directory must not already exist.
+ If exception(s) occur, an Error is raised with a list of reasons.
+
+ If the optional symlinks flag is true, symbolic links in the
+ source tree result in symbolic links in the destination tree; if
+ it is false, the contents of the files pointed to by symbolic
+ links are copied. If the file pointed by the symlink doesn't
+ exist, an exception will be added in the list of errors raised in
+ an Error exception at the end of the copy process.
+
+ You can set the optional ignore_dangling_symlinks flag to true if you
+ want to silence this exception. Notice that this has no effect on
+ platforms that don't support os.symlink.
+
+ The optional ignore argument is a callable. If given, it
+ is called with the `src` parameter, which is the directory
+ being visited by copytree(), and `names` which is the list of
+ `src` contents, as returned by os.listdir():
+
+ callable(src, names) -> ignored_names
+
+ Since copytree() is called recursively, the callable will be
+ called once for each directory that is copied. It returns a
+ list of names relative to the `src` directory that should
+ not be copied.
+
+ The optional copy_function argument is a callable that will be used
+ to copy each file. It will be called with the source path and the
+ destination path as arguments. By default, copy2() is used, but any
+ function that supports the same signature (like copy()) can be used.
+
+ """
+ names = os.listdir(src)
+ if ignore is not None:
+ ignored_names = ignore(src, names)
+ else:
+ ignored_names = set()
+
+ os.makedirs(dst)
+ errors = []
+ for name in names:
+ if name in ignored_names:
+ continue
+ srcname = os.path.join(src, name)
+ dstname = os.path.join(dst, name)
+ try:
+ if os.path.islink(srcname):
+ linkto = os.readlink(srcname)
+ if symlinks:
+ # We can't just leave it to `copy_function` because legacy
+ # code with a custom `copy_function` may rely on copytree
+ # doing the right thing.
+ os.symlink(linkto, dstname)
+ copystat(srcname, dstname, follow_symlinks=not symlinks)
+ else:
+ # ignore dangling symlink if the flag is on
+ if not os.path.exists(linkto) and ignore_dangling_symlinks:
+ continue
+                # otherwise let the copy occur. copy2 will raise an error
+ if os.path.isdir(srcname):
+ copytree(srcname, dstname, symlinks, ignore,
+ copy_function)
+ else:
+ copy_function(srcname, dstname)
+ elif os.path.isdir(srcname):
+ copytree(srcname, dstname, symlinks, ignore, copy_function)
+ else:
+ # Will raise a SpecialFileError for unsupported file types
+ copy_function(srcname, dstname)
+ # catch the Error from the recursive copytree so that we can
+ # continue with other files
+ except Error as err:
+ errors.extend(err.args[0])
+ except OSError as why:
+ errors.append((srcname, dstname, str(why)))
+ try:
+ copystat(src, dst)
+ except OSError as why:
+ # Copying file access times may fail on Windows
+ if getattr(why, 'winerror', None) is None:
+ errors.append((src, dst, str(why)))
+ if errors:
+ raise Error(errors)
+ return dst
+
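+# Illustrative usage (not part of the upstream module): copy a tree into a
+# directory that does not exist yet, skipping byte-compiled and temporary
+# files via ignore_patterns(); the paths are hypothetical.
+#
+#     import shutil
+#     shutil.copytree('project', 'project_backup',
+#                     ignore=shutil.ignore_patterns('*.pyc', 'tmp*'))
+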
+# version vulnerable to race conditions
+def _rmtree_unsafe(path, onerror):
+ try:
+ if os.path.islink(path):
+ # symlinks to directories are forbidden, see bug #1669
+ raise OSError("Cannot call rmtree on a symbolic link")
+ except OSError:
+ onerror(os.path.islink, path, sys.exc_info())
+ # can't continue even if onerror hook returns
+ return
+ names = []
+ try:
+ names = os.listdir(path)
+ except OSError:
+ onerror(os.listdir, path, sys.exc_info())
+ for name in names:
+ fullname = os.path.join(path, name)
+ try:
+ mode = os.lstat(fullname).st_mode
+ except OSError:
+ mode = 0
+ if stat.S_ISDIR(mode):
+ _rmtree_unsafe(fullname, onerror)
+ else:
+ try:
+ os.unlink(fullname)
+ except OSError:
+ onerror(os.unlink, fullname, sys.exc_info())
+ try:
+ os.rmdir(path)
+ except OSError:
+ onerror(os.rmdir, path, sys.exc_info())
+
+# Version using fd-based APIs to protect against races
+def _rmtree_safe_fd(topfd, path, onerror):
+ names = []
+ try:
+ names = os.listdir(topfd)
+ except OSError as err:
+ err.filename = path
+ onerror(os.listdir, path, sys.exc_info())
+ for name in names:
+ fullname = os.path.join(path, name)
+ try:
+ orig_st = os.stat(name, dir_fd=topfd, follow_symlinks=False)
+ mode = orig_st.st_mode
+ except OSError:
+ mode = 0
+ if stat.S_ISDIR(mode):
+ try:
+ dirfd = os.open(name, os.O_RDONLY, dir_fd=topfd)
+ except OSError:
+ onerror(os.open, fullname, sys.exc_info())
+ else:
+ try:
+ if os.path.samestat(orig_st, os.fstat(dirfd)):
+ _rmtree_safe_fd(dirfd, fullname, onerror)
+ try:
+ os.rmdir(name, dir_fd=topfd)
+ except OSError:
+ onerror(os.rmdir, fullname, sys.exc_info())
+ else:
+ try:
+ # This can only happen if someone replaces
+ # a directory with a symlink after the call to
+ # stat.S_ISDIR above.
+ raise OSError("Cannot call rmtree on a symbolic "
+ "link")
+ except OSError:
+ onerror(os.path.islink, fullname, sys.exc_info())
+ finally:
+ os.close(dirfd)
+ else:
+ try:
+ os.unlink(name, dir_fd=topfd)
+ except OSError:
+ onerror(os.unlink, fullname, sys.exc_info())
+_use_fd_functions = 1
+
+# _use_fd_functions = ({os.open, os.stat, os.unlink, os.rmdir} <=
+# os.supports_dir_fd and
+# os.listdir in os.supports_fd and
+# os.stat in os.supports_follow_symlinks)
+
+def rmtree(path, ignore_errors=False, onerror=None):
+ """Recursively delete a directory tree.
+
+ If ignore_errors is set, errors are ignored; otherwise, if onerror
+ is set, it is called to handle the error with arguments (func,
+ path, exc_info) where func is platform and implementation dependent;
+ path is the argument to that function that caused it to fail; and
+ exc_info is a tuple returned by sys.exc_info(). If ignore_errors
+ is false and onerror is None, an exception is raised.
+
+ """
+ if ignore_errors:
+ def onerror(*args):
+ pass
+ elif onerror is None:
+ def onerror(*args):
+ raise
+ if _use_fd_functions:
+ # While the unsafe rmtree works fine on bytes, the fd based does not.
+ if isinstance(path, bytes):
+ path = os.fsdecode(path)
+ # Note: To guard against symlink races, we use the standard
+ # lstat()/open()/fstat() trick.
+ try:
+ orig_st = os.lstat(path)
+ except Exception:
+ onerror(os.lstat, path, sys.exc_info())
+ return
+ try:
+ fd = os.open(path, os.O_RDONLY)
+ except Exception:
+ onerror(os.lstat, path, sys.exc_info())
+ return
+ try:
+ if os.path.samestat(orig_st, os.fstat(fd)):
+ _rmtree_safe_fd(fd, path, onerror)
+ try:
+ os.rmdir(path)
+ except OSError:
+ onerror(os.rmdir, path, sys.exc_info())
+ else:
+ try:
+ # symlinks to directories are forbidden, see bug #1669
+ raise OSError("Cannot call rmtree on a symbolic link")
+ except OSError:
+ onerror(os.path.islink, path, sys.exc_info())
+ finally:
+ os.close(fd)
+ else:
+ return _rmtree_unsafe(path, onerror)
+
+# Allow introspection of whether or not the hardening against symlink
+# attacks is supported on the current platform
+rmtree.avoids_symlink_attacks = _use_fd_functions
+
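+# Illustrative usage (not part of the upstream module): remove a whole tree,
+# reporting failures through an onerror handler instead of aborting on the
+# first error.
+#
+#     import shutil, sys
+#
+#     def report(func, path, exc_info):
+#         print('rmtree: %s failed for %s' % (func.__name__, path),
+#               file=sys.stderr)
+#
+#     shutil.rmtree('build', onerror=report)
+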
+def _basename(path):
+ # A basename() variant which first strips the trailing slash, if present.
+ # Thus we always get the last component of the path, even for directories.
+ sep = os.path.sep + (os.path.altsep or '')
+ return os.path.basename(path.rstrip(sep))
+
+def move(src, dst, copy_function=copy2):
+ """Recursively move a file or directory to another location. This is
+ similar to the Unix "mv" command. Return the file or directory's
+ destination.
+
+ If the destination is a directory or a symlink to a directory, the source
+ is moved inside the directory. The destination path must not already
+ exist.
+
+ If the destination already exists but is not a directory, it may be
+ overwritten depending on os.rename() semantics.
+
+ If the destination is on our current filesystem, then rename() is used.
+ Otherwise, src is copied to the destination and then removed. Symlinks are
+ recreated under the new name if os.rename() fails because of cross
+ filesystem renames.
+
+ The optional `copy_function` argument is a callable that will be used
+ to copy the source or it will be delegated to `copytree`.
+ By default, copy2() is used, but any function that supports the same
+ signature (like copy()) can be used.
+
+ A lot more could be done here... A look at a mv.c shows a lot of
+ the issues this implementation glosses over.
+
+ """
+ real_dst = dst
+ if os.path.isdir(dst):
+ if _samefile(src, dst):
+ # We might be on a case insensitive filesystem,
+ # perform the rename anyway.
+ os.rename(src, dst)
+ return
+
+ real_dst = os.path.join(dst, _basename(src))
+ if os.path.exists(real_dst):
+ raise Error("Destination path '%s' already exists" % real_dst)
+ try:
+ os.rename(src, real_dst)
+ except OSError:
+ if os.path.islink(src):
+ linkto = os.readlink(src)
+ os.symlink(linkto, real_dst)
+ os.unlink(src)
+ elif os.path.isdir(src):
+ if _destinsrc(src, dst):
+ raise Error("Cannot move a directory '%s' into itself"
+ " '%s'." % (src, dst))
+ copytree(src, real_dst, copy_function=copy_function,
+ symlinks=True)
+ rmtree(src)
+ else:
+ copy_function(src, real_dst)
+ os.unlink(src)
+ return real_dst
+
+def _destinsrc(src, dst):
+ src = os.path.abspath(src)
+ dst = os.path.abspath(dst)
+ if not src.endswith(os.path.sep):
+ src += os.path.sep
+ if not dst.endswith(os.path.sep):
+ dst += os.path.sep
+ return dst.startswith(src)
+
+def _get_gid(name):
+ """Returns a gid, given a group name."""
+ if getgrnam is None or name is None:
+ return None
+ try:
+ result = getgrnam(name)
+ except KeyError:
+ result = None
+ if result is not None:
+ return result[2]
+ return None
+
+def _get_uid(name):
+ """Returns an uid, given a user name."""
+ if getpwnam is None or name is None:
+ return None
+ try:
+ result = getpwnam(name)
+ except KeyError:
+ result = None
+ if result is not None:
+ return result[2]
+ return None
+
+def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0,
+ owner=None, group=None, logger=None):
+ """Create a (possibly compressed) tar file from all the files under
+ 'base_dir'.
+
+ 'compress' must be "gzip" (the default), "bzip2", "xz", or None.
+
+ 'owner' and 'group' can be used to define an owner and a group for the
+ archive that is being built. If not provided, the current owner and group
+ will be used.
+
+ The output tar file will be named 'base_name' + ".tar", possibly plus
+ the appropriate compression extension (".gz", ".bz2", or ".xz").
+
+ Returns the output filename.
+ """
+ if compress is None:
+ tar_compression = ''
+ elif _ZLIB_SUPPORTED and compress == 'gzip':
+ tar_compression = 'gz'
+ elif _BZ2_SUPPORTED and compress == 'bzip2':
+ tar_compression = 'bz2'
+ elif _LZMA_SUPPORTED and compress == 'xz':
+ tar_compression = 'xz'
+ else:
+ raise ValueError("bad value for 'compress', or compression format not "
+ "supported : {0}".format(compress))
+
+ import tarfile # late import for breaking circular dependency
+
+ compress_ext = '.' + tar_compression if compress else ''
+ archive_name = base_name + '.tar' + compress_ext
+ archive_dir = os.path.dirname(archive_name)
+
+ if archive_dir and not os.path.exists(archive_dir):
+ if logger is not None:
+ logger.info("creating %s", archive_dir)
+ if not dry_run:
+ os.makedirs(archive_dir)
+
+ # creating the tarball
+ if logger is not None:
+ logger.info('Creating tar archive')
+
+ uid = _get_uid(owner)
+ gid = _get_gid(group)
+
+ def _set_uid_gid(tarinfo):
+ if gid is not None:
+ tarinfo.gid = gid
+ tarinfo.gname = group
+ if uid is not None:
+ tarinfo.uid = uid
+ tarinfo.uname = owner
+ return tarinfo
+
+ if not dry_run:
+ tar = tarfile.open(archive_name, 'w|%s' % tar_compression)
+ try:
+ tar.add(base_dir, filter=_set_uid_gid)
+ finally:
+ tar.close()
+
+ return archive_name
+
+def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
+ """Create a zip file from all the files under 'base_dir'.
+
+ The output zip file will be named 'base_name' + ".zip". Returns the
+ name of the output zip file.
+ """
+ import zipfile # late import for breaking circular dependency
+
+ zip_filename = base_name + ".zip"
+ archive_dir = os.path.dirname(base_name)
+
+ if archive_dir and not os.path.exists(archive_dir):
+ if logger is not None:
+ logger.info("creating %s", archive_dir)
+ if not dry_run:
+ os.makedirs(archive_dir)
+
+ if logger is not None:
+ logger.info("creating '%s' and adding '%s' to it",
+ zip_filename, base_dir)
+
+ if not dry_run:
+ with zipfile.ZipFile(zip_filename, "w",
+ compression=zipfile.ZIP_DEFLATED) as zf:
+ path = os.path.normpath(base_dir)
+ if path != os.curdir:
+ zf.write(path, path)
+ if logger is not None:
+ logger.info("adding '%s'", path)
+ for dirpath, dirnames, filenames in os.walk(base_dir):
+ for name in sorted(dirnames):
+ path = os.path.normpath(os.path.join(dirpath, name))
+ zf.write(path, path)
+ if logger is not None:
+ logger.info("adding '%s'", path)
+ for name in filenames:
+ path = os.path.normpath(os.path.join(dirpath, name))
+ if os.path.isfile(path):
+ zf.write(path, path)
+ if logger is not None:
+ logger.info("adding '%s'", path)
+
+ return zip_filename
+
+_ARCHIVE_FORMATS = {
+ 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"),
+}
+
+if _ZLIB_SUPPORTED:
+ _ARCHIVE_FORMATS['gztar'] = (_make_tarball, [('compress', 'gzip')],
+ "gzip'ed tar-file")
+ _ARCHIVE_FORMATS['zip'] = (_make_zipfile, [], "ZIP file")
+
+if _BZ2_SUPPORTED:
+ _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')],
+ "bzip2'ed tar-file")
+
+if _LZMA_SUPPORTED:
+ _ARCHIVE_FORMATS['xztar'] = (_make_tarball, [('compress', 'xz')],
+ "xz'ed tar-file")
+
+def get_archive_formats():
+ """Returns a list of supported formats for archiving and unarchiving.
+
+ Each element of the returned sequence is a tuple (name, description)
+ """
+ formats = [(name, registry[2]) for name, registry in
+ _ARCHIVE_FORMATS.items()]
+ formats.sort()
+ return formats
+
+def register_archive_format(name, function, extra_args=None, description=''):
+ """Registers an archive format.
+
+ name is the name of the format. function is the callable that will be
+ used to create archives. If provided, extra_args is a sequence of
+ (name, value) tuples that will be passed as arguments to the callable.
+ description can be provided to describe the format, and will be returned
+ by the get_archive_formats() function.
+ """
+ if extra_args is None:
+ extra_args = []
+ if not callable(function):
+ raise TypeError('The %s object is not callable' % function)
+ if not isinstance(extra_args, (tuple, list)):
+ raise TypeError('extra_args needs to be a sequence')
+ for element in extra_args:
+ if not isinstance(element, (tuple, list)) or len(element) !=2:
+ raise TypeError('extra_args elements are : (arg_name, value)')
+
+ _ARCHIVE_FORMATS[name] = (function, extra_args, description)
+
+def unregister_archive_format(name):
+ del _ARCHIVE_FORMATS[name]
+
+def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0,
+ dry_run=0, owner=None, group=None, logger=None):
+ """Create an archive file (eg. zip or tar).
+
+ 'base_name' is the name of the file to create, minus any format-specific
+ extension; 'format' is the archive format: one of "zip", "tar", "gztar",
+ "bztar", or "xztar". Or any other registered format.
+
+ 'root_dir' is a directory that will be the root directory of the
+ archive; ie. we typically chdir into 'root_dir' before creating the
+ archive. 'base_dir' is the directory where we start archiving from;
+ ie. 'base_dir' will be the common prefix of all files and
+ directories in the archive. 'root_dir' and 'base_dir' both default
+ to the current directory. Returns the name of the archive file.
+
+ 'owner' and 'group' are used when creating a tar archive. By default,
+ uses the current owner and group.
+ """
+ save_cwd = os.getcwd()
+ if root_dir is not None:
+ if logger is not None:
+ logger.debug("changing into '%s'", root_dir)
+ base_name = os.path.abspath(base_name)
+ if not dry_run:
+ os.chdir(root_dir)
+
+ if base_dir is None:
+ base_dir = os.curdir
+
+ kwargs = {'dry_run': dry_run, 'logger': logger}
+
+ try:
+ format_info = _ARCHIVE_FORMATS[format]
+ except KeyError:
+ raise ValueError("unknown archive format '%s'" % format)
+
+ func = format_info[0]
+ for arg, val in format_info[1]:
+ kwargs[arg] = val
+
+ if format != 'zip':
+ kwargs['owner'] = owner
+ kwargs['group'] = group
+
+ try:
+ filename = func(base_name, base_dir, **kwargs)
+ finally:
+ if root_dir is not None:
+ if logger is not None:
+ logger.debug("changing back to '%s'", save_cwd)
+ os.chdir(save_cwd)
+
+ return filename
+
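+# Illustrative usage (not part of the upstream module): archive the 'project'
+# subdirectory of the current directory; 'zip' and 'gztar' are only registered
+# when zlib is importable (see _ZLIB_SUPPORTED above).
+#
+#     import shutil
+#     name = shutil.make_archive('project-backup', 'zip',
+#                                root_dir='.', base_dir='project')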
+
+def get_unpack_formats():
+ """Returns a list of supported formats for unpacking.
+
+ Each element of the returned sequence is a tuple
+ (name, extensions, description)
+ """
+ formats = [(name, info[0], info[3]) for name, info in
+ _UNPACK_FORMATS.items()]
+ formats.sort()
+ return formats
+
+def _check_unpack_options(extensions, function, extra_args):
+ """Checks what gets registered as an unpacker."""
+ # first make sure no other unpacker is registered for this extension
+ existing_extensions = {}
+ for name, info in _UNPACK_FORMATS.items():
+ for ext in info[0]:
+ existing_extensions[ext] = name
+
+ for extension in extensions:
+ if extension in existing_extensions:
+ msg = '%s is already registered for "%s"'
+ raise RegistryError(msg % (extension,
+ existing_extensions[extension]))
+
+ if not callable(function):
+ raise TypeError('The registered function must be a callable')
+
+
+def register_unpack_format(name, extensions, function, extra_args=None,
+ description=''):
+ """Registers an unpack format.
+
+ `name` is the name of the format. `extensions` is a list of extensions
+ corresponding to the format.
+
+ `function` is the callable that will be
+ used to unpack archives. The callable will receive archives to unpack.
+ If it's unable to handle an archive, it needs to raise a ReadError
+ exception.
+
+ If provided, `extra_args` is a sequence of
+ (name, value) tuples that will be passed as arguments to the callable.
+ description can be provided to describe the format, and will be returned
+ by the get_unpack_formats() function.
+ """
+ if extra_args is None:
+ extra_args = []
+ _check_unpack_options(extensions, function, extra_args)
+ _UNPACK_FORMATS[name] = extensions, function, extra_args, description
+
+def unregister_unpack_format(name):
+ """Removes the pack format from the registry."""
+ del _UNPACK_FORMATS[name]
+
+def _ensure_directory(path):
+ """Ensure that the parent directory of `path` exists"""
+ dirname = os.path.dirname(path)
+ if not os.path.isdir(dirname):
+ os.makedirs(dirname)
+
+def _unpack_zipfile(filename, extract_dir):
+ """Unpack zip `filename` to `extract_dir`
+ """
+ import zipfile # late import for breaking circular dependency
+
+ if not zipfile.is_zipfile(filename):
+ raise ReadError("%s is not a zip file" % filename)
+
+ zip = zipfile.ZipFile(filename)
+ try:
+ for info in zip.infolist():
+ name = info.filename
+
+ # don't extract absolute paths or ones with .. in them
+ if name.startswith('/') or '..' in name:
+ continue
+
+ target = os.path.join(extract_dir, *name.split('/'))
+ if not target:
+ continue
+
+ _ensure_directory(target)
+ if not name.endswith('/'):
+ # file
+ data = zip.read(info.filename)
+ f = open(target, 'wb')
+ try:
+ f.write(data)
+ finally:
+ f.close()
+ del data
+ finally:
+ zip.close()
+
+def _unpack_tarfile(filename, extract_dir):
+ """Unpack tar/tar.gz/tar.bz2/tar.xz `filename` to `extract_dir`
+ """
+ import tarfile # late import for breaking circular dependency
+ try:
+ tarobj = tarfile.open(filename)
+ except tarfile.TarError:
+ raise ReadError(
+ "%s is not a compressed or uncompressed tar file" % filename)
+ try:
+ tarobj.extractall(extract_dir)
+ finally:
+ tarobj.close()
+
+_UNPACK_FORMATS = {
+ 'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"),
+ 'zip': (['.zip'], _unpack_zipfile, [], "ZIP file"),
+}
+
+if _ZLIB_SUPPORTED:
+ _UNPACK_FORMATS['gztar'] = (['.tar.gz', '.tgz'], _unpack_tarfile, [],
+ "gzip'ed tar-file")
+
+if _BZ2_SUPPORTED:
+ _UNPACK_FORMATS['bztar'] = (['.tar.bz2', '.tbz2'], _unpack_tarfile, [],
+ "bzip2'ed tar-file")
+
+if _LZMA_SUPPORTED:
+ _UNPACK_FORMATS['xztar'] = (['.tar.xz', '.txz'], _unpack_tarfile, [],
+ "xz'ed tar-file")
+
+def _find_unpack_format(filename):
+ for name, info in _UNPACK_FORMATS.items():
+ for extension in info[0]:
+ if filename.endswith(extension):
+ return name
+ return None
+
+def unpack_archive(filename, extract_dir=None, format=None):
+ """Unpack an archive.
+
+ `filename` is the name of the archive.
+
+ `extract_dir` is the name of the target directory, where the archive
+ is unpacked. If not provided, the current working directory is used.
+
+ `format` is the archive format: one of "zip", "tar", "gztar", "bztar",
+ or "xztar". Or any other registered format. If not provided,
+ unpack_archive will use the filename extension and see if an unpacker
+ was registered for that extension.
+
+ In case none is found, a ValueError is raised.
+ """
+ if extract_dir is None:
+ extract_dir = os.getcwd()
+
+ if format is not None:
+ try:
+ format_info = _UNPACK_FORMATS[format]
+ except KeyError:
+ raise ValueError("Unknown unpack format '{0}'".format(format))
+
+ func = format_info[1]
+ func(filename, extract_dir, **dict(format_info[2]))
+ else:
+ # we need to look at the registered unpackers supported extensions
+ format = _find_unpack_format(filename)
+ if format is None:
+ raise ReadError("Unknown archive format '{0}'".format(filename))
+
+ func = _UNPACK_FORMATS[format][1]
+ kwargs = dict(_UNPACK_FORMATS[format][2])
+ func(filename, extract_dir, **kwargs)
+
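+# Illustrative usage (not part of the upstream module): unpack an archive into
+# a target directory, either naming the format explicitly or letting the file
+# extension select a registered unpacker; the available formats depend on
+# which compression modules are importable.
+#
+#     import shutil
+#     shutil.unpack_archive('project-backup.zip', extract_dir='restored')
+#     shutil.unpack_archive('sources.tar.gz', 'restored', format='gztar')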
+
+if hasattr(os, 'statvfs'):
+
+ __all__.append('disk_usage')
+ _ntuple_diskusage = collections.namedtuple('usage', 'total used free')
+ _ntuple_diskusage.total.__doc__ = 'Total space in bytes'
+ _ntuple_diskusage.used.__doc__ = 'Used space in bytes'
+ _ntuple_diskusage.free.__doc__ = 'Free space in bytes'
+
+ def disk_usage(path):
+ """Return disk usage statistics about the given path.
+
+ Returned value is a named tuple with attributes 'total', 'used' and
+ 'free', which are the amount of total, used and free space, in bytes.
+ """
+ st = os.statvfs(path)
+ free = st.f_bavail * st.f_frsize
+ total = st.f_blocks * st.f_frsize
+ used = (st.f_blocks - st.f_bfree) * st.f_frsize
+ return _ntuple_diskusage(total, used, free)
+
+elif os.name == 'nt':
+
+ import nt
+ __all__.append('disk_usage')
+ _ntuple_diskusage = collections.namedtuple('usage', 'total used free')
+
+ def disk_usage(path):
+ """Return disk usage statistics about the given path.
+
+        Returned value is a named tuple with attributes 'total', 'used' and
+ 'free', which are the amount of total, used and free space, in bytes.
+ """
+ total, free = nt._getdiskusage(path)
+ used = total - free
+ return _ntuple_diskusage(total, used, free)
+
+
+def chown(path, user=None, group=None):
+ """Change owner user and group of the given path.
+
+ user and group can be the uid/gid or the user/group names, and in that case,
+ they are converted to their respective uid/gid.
+ """
+
+ if user is None and group is None:
+ raise ValueError("user and/or group must be set")
+
+ _user = user
+ _group = group
+
+ # -1 means don't change it
+ if user is None:
+ _user = -1
+ # user can either be an int (the uid) or a string (the system username)
+ elif isinstance(user, str):
+ _user = _get_uid(user)
+ if _user is None:
+ raise LookupError("no such user: {!r}".format(user))
+
+ if group is None:
+ _group = -1
+ elif not isinstance(group, int):
+ _group = _get_gid(group)
+ if _group is None:
+ raise LookupError("no such group: {!r}".format(group))
+
+ os.chown(path, _user, _group)
+
+def get_terminal_size(fallback=(80, 24)):
+ """Get the size of the terminal window.
+
+ For each of the two dimensions, the environment variable, COLUMNS
+ and LINES respectively, is checked. If the variable is defined and
+ the value is a positive integer, it is used.
+
+ When COLUMNS or LINES is not defined, which is the common case,
+ the terminal connected to sys.__stdout__ is queried
+ by invoking os.get_terminal_size.
+
+ If the terminal size cannot be successfully queried, either because
+ the system doesn't support querying, or because we are not
+ connected to a terminal, the value given in fallback parameter
+ is used. Fallback defaults to (80, 24) which is the default
+ size used by many terminal emulators.
+
+ The value returned is a named tuple of type os.terminal_size.
+ """
+ # columns, lines are the working values
+ try:
+ columns = int(os.environ['COLUMNS'])
+ except (KeyError, ValueError):
+ columns = 0
+
+ try:
+ lines = int(os.environ['LINES'])
+ except (KeyError, ValueError):
+ lines = 0
+
+ # only query if necessary
+ if columns <= 0 or lines <= 0:
+ try:
+ size = os.get_terminal_size(sys.__stdout__.fileno())
+ except (AttributeError, ValueError, OSError):
+ # stdout is None, closed, detached, or not a terminal, or
+ # os.get_terminal_size() is unsupported
+ size = os.terminal_size(fallback)
+ if columns <= 0:
+ columns = size.columns
+ if lines <= 0:
+ lines = size.lines
+
+ return os.terminal_size((columns, lines))
+
+def which(cmd, mode=os.F_OK | os.X_OK, path=None):
+ """Given a command, mode, and a PATH string, return the path which
+ conforms to the given mode on the PATH, or None if there is no such
+ file.
+
+ `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
+ of os.environ.get("PATH"), or can be overridden with a custom search
+ path.
+
+ """
+ # Check that a given file can be accessed with the correct mode.
+ # Additionally check that `file` is not a directory, as on Windows
+ # directories pass the os.access check.
+ def _access_check(fn, mode):
+ return (os.path.exists(fn) and os.access(fn, mode)
+ and not os.path.isdir(fn))
+
+ # If we're given a path with a directory part, look it up directly rather
+ # than referring to PATH directories. This includes checking relative to the
+ # current directory, e.g. ./script
+ if os.path.dirname(cmd):
+ if _access_check(cmd, mode):
+ return cmd
+ return None
+
+ if path is None:
+ path = os.environ.get("PATH", os.defpath)
+ if not path:
+ return None
+ path = path.split(os.pathsep)
+
+ if sys.platform == "win32":
+ # The current directory takes precedence on Windows.
+ if not os.curdir in path:
+ path.insert(0, os.curdir)
+
+ # PATHEXT is necessary to check on Windows.
+ pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
+ # See if the given file matches any of the expected path extensions.
+ # This will allow us to short circuit when given "python.exe".
+ # If it does match, only test that one, otherwise we have to try
+ # others.
+ if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
+ files = [cmd]
+ else:
+ files = [cmd + ext for ext in pathext]
+ else:
+ # On other platforms you don't have things like PATHEXT to tell you
+ # what file suffixes are executable, so just pass on cmd as-is.
+ files = [cmd]
+
+ seen = set()
+ for dir in path:
+ normdir = os.path.normcase(dir)
+ if not normdir in seen:
+ seen.add(normdir)
+ for thefile in files:
+ name = os.path.join(dir, thefile)
+ if _access_check(name, mode):
+ return name
+ return None
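+
+# Illustrative usage (not part of the upstream module): look up an executable
+# on the search path; the command names and paths are hypothetical.
+#
+#     import shutil
+#     exe = shutil.which('python')            # searches os.environ["PATH"]
+#     exe = shutil.which('ls', path='/bin')   # or an explicit search path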
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
new file mode 100644
index 00000000..16eb0eff
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
@@ -0,0 +1,529 @@
+"""Append module search paths for third-party packages to sys.path.
+
+****************************************************************
+* This module is automatically imported during initialization. *
+****************************************************************
+
+This is a UEFI-specific version of site.py.
+
+In earlier versions of Python (up to 1.5a3), scripts or modules that
+needed to use site-specific modules would place ``import site''
+somewhere near the top of their code. Because of the automatic
+import, this is no longer necessary (but code that does it still
+works).
+
+This will append site-specific paths to the module search path.  It
+starts with sys.prefix and sys.exec_prefix (if different) and appends
+lib/python<version>/site-packages as well as lib/site-python.  The
+resulting directories, if they exist, are appended to sys.path, and
+also inspected for path configuration files.
+
+A path configuration file is a file whose name has the form
+<package>.pth; its contents are additional directories (one per line)
+to be added to sys.path. Non-existing directories (or
+non-directories) are never added to sys.path; no directory is added to
+sys.path more than once. Blank lines and lines beginning with
+'#' are skipped. Lines starting with 'import' are executed.
+
+For example, suppose sys.prefix and sys.exec_prefix are set to
+/Efi/StdLib and there is a directory /Efi/StdLib/lib/python36.8/site-packages
+with three subdirectories, foo, bar and spam, and two path
+configuration files, foo.pth and bar.pth. Assume foo.pth contains the
+following:
+
+ # foo package configuration
+ foo
+ bar
+ bletch
+
+and bar.pth contains:
+
+ # bar package configuration
+ bar
+
+Then the following directories are added to sys.path, in this order:
+
+ /Efi/StdLib/lib/python36.8/site-packages/bar
+ /Efi/StdLib/lib/python36.8/site-packages/foo
+
+Note that bletch is omitted because it doesn't exist; bar precedes foo
+because bar.pth comes alphabetically before foo.pth; and spam is
+omitted because it is not mentioned in either path configuration file.
+
+After these path manipulations, an attempt is made to import a module
+named sitecustomize, which can perform arbitrary additional
+site-specific customizations. If this import fails with an
+ImportError exception, it is silently ignored.
+
+Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
+
+This program and the accompanying materials are licensed and made available under
+the terms and conditions of the BSD License that accompanies this distribution.
+The full text of the license may be found at
+http://opensource.org/licenses/bsd-license.
+
+THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+"""
+
+import sys
+import os
+import builtins
+import traceback
+from io import open
+# Prefixes for site-packages; add additional prefixes like /usr/local here
+PREFIXES = [sys.prefix, sys.exec_prefix]
+# Enable per user site-packages directory
+# set it to False to disable the feature or True to force the feature
+ENABLE_USER_SITE = False
+
+# for distutils.commands.install
+# These values are initialized by the getuserbase() and getusersitepackages()
+# functions, through the main() function when Python starts.
+USER_SITE = None
+USER_BASE = None
+
+
+def makepath(*paths):
+ dir = os.path.join(*paths)
+ try:
+ dir = os.path.abspath(dir)
+ except OSError:
+ pass
+ return dir, os.path.normcase(dir)
+
+
+def abs__file__():
+ """Set all module' __file__ attribute to an absolute path"""
+ for m in list(sys.modules.values()):
+ if hasattr(m, '__loader__'):
+ continue # don't mess with a PEP 302-supplied __file__
+ try:
+ m.__file__ = os.path.abspath(m.__file__)
+ except (AttributeError, OSError):
+ pass
+
+
+def removeduppaths():
+ """ Remove duplicate entries from sys.path along with making them
+ absolute"""
+ # This ensures that the initial path provided by the interpreter contains
+ # only absolute pathnames, even if we're running from the build directory.
+ L = []
+ known_paths = set()
+ for dir in sys.path:
+ # Filter out duplicate paths (on case-insensitive file systems also
+ # if they only differ in case); turn relative paths into absolute
+ # paths.
+ dir, dircase = makepath(dir)
+ if not dircase in known_paths:
+ L.append(dir)
+ known_paths.add(dircase)
+ sys.path[:] = L
+ return known_paths
+
+
+def _init_pathinfo():
+ """Return a set containing all existing directory entries from sys.path"""
+ d = set()
+ for dir in sys.path:
+ try:
+ if os.path.isdir(dir):
+ dir, dircase = makepath(dir)
+ d.add(dircase)
+ except TypeError:
+ continue
+ return d
+
+
+def addpackage(sitedir, name, known_paths):
+ """Process a .pth file within the site-packages directory:
+ For each line in the file, either combine it with sitedir to a path
+ and add that to known_paths, or execute it if it starts with 'import '.
+ """
+ if known_paths is None:
+ _init_pathinfo()
+ reset = 1
+ else:
+ reset = 0
+ fullname = os.path.join(sitedir, name)
+ try:
+ f = open(fullname, "r")
+ except IOError:
+ return
+ with f:
+ for n, line in enumerate(f):
+ if line.startswith("#"):
+ continue
+ try:
+ if line.startswith(("import ", "import\t")):
+ exec(line)
+ continue
+ line = line.rstrip()
+ dir, dircase = makepath(sitedir, line)
+ if not dircase in known_paths and os.path.exists(dir):
+ sys.path.append(dir)
+ known_paths.add(dircase)
+ except Exception as err:
+ print("Error processing line {:d} of {}:\n".format(
+ n+1, fullname), file=sys.stderr)
+ for record in traceback.format_exception(*sys.exc_info()):
+ for line in record.splitlines():
+ print(' '+line, file=sys.stderr)
+ print("\nRemainder of file ignored", file=sys.stderr)
+ break
+ if reset:
+ known_paths = None
+ return known_paths
+
+
+def addsitedir(sitedir, known_paths=None):
+ """Add 'sitedir' argument to sys.path if missing and handle .pth files in
+ 'sitedir'"""
+ if known_paths is None:
+ known_paths = _init_pathinfo()
+ reset = 1
+ else:
+ reset = 0
+ sitedir, sitedircase = makepath(sitedir)
+ if not sitedircase in known_paths:
+ sys.path.append(sitedir) # Add path component
+ try:
+ names = os.listdir(sitedir)
+ except os.error:
+ return
+ dotpth = os.extsep + "pth"
+ names = [name for name in names if name.endswith(dotpth)]
+ for name in sorted(names):
+ addpackage(sitedir, name, known_paths)
+ if reset:
+ known_paths = None
+ return known_paths
+
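+# Illustrative example (not part of the upstream module): a path configuration
+# file such as 'extras.pth' dropped into a site directory could contain
+#
+#     # extras.pth -- one directory per line; 'import' lines are executed
+#     extras
+#     import sys; sys.dont_write_bytecode = True
+#
+# When addsitedir() scans that directory it hands each *.pth file to
+# addpackage(), which appends <sitedir>/extras to sys.path if it exists and
+# executes the 'import' line.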
+
+def check_enableusersite():
+ """Check if user site directory is safe for inclusion
+
+ The function tests for the command line flag (including environment var),
+ process uid/gid equal to effective uid/gid.
+
+ None: Disabled for security reasons
+ False: Disabled by user (command line option)
+ True: Safe and enabled
+ """
+ if sys.flags.no_user_site:
+ return False
+
+ if hasattr(os, "getuid") and hasattr(os, "geteuid"):
+ # check process uid == effective uid
+ if os.geteuid() != os.getuid():
+ return None
+ if hasattr(os, "getgid") and hasattr(os, "getegid"):
+ # check process gid == effective gid
+ if os.getegid() != os.getgid():
+ return None
+
+ return True
+
+def getuserbase():
+ """Returns the `user base` directory path.
+
+ The `user base` directory can be used to store data. If the global
+ variable ``USER_BASE`` is not initialized yet, this function will also set
+ it.
+ """
+ global USER_BASE
+ if USER_BASE is not None:
+ return USER_BASE
+ from sysconfig import get_config_var
+ USER_BASE = get_config_var('userbase')
+ return USER_BASE
+
+def getusersitepackages():
+ """Returns the user-specific site-packages directory path.
+
+ If the global variable ``USER_SITE`` is not initialized yet, this
+ function will also set it.
+ """
+ global USER_SITE
+ user_base = getuserbase() # this will also set USER_BASE
+
+ if USER_SITE is not None:
+ return USER_SITE
+
+ from sysconfig import get_path
+ import os
+
+ USER_SITE = get_path('purelib', '%s_user' % os.name)
+ return USER_SITE
+
+def addusersitepackages(known_paths):
+ """Add a per user site-package to sys.path
+
+ Each user has its own python directory with site-packages in the
+ home directory.
+ """
+ # get the per user site-package path
+ # this call will also make sure USER_BASE and USER_SITE are set
+ user_site = getusersitepackages()
+
+ if ENABLE_USER_SITE and os.path.isdir(user_site):
+ addsitedir(user_site, known_paths)
+ return known_paths
+
+def getsitepackages():
+ """Returns a list containing all global site-packages directories
+ (and possibly site-python).
+
+ For each directory present in the global ``PREFIXES``, this function
+ will find its `site-packages` subdirectory depending on the system
+ environment, and will return a list of full paths.
+ """
+ sitepackages = []
+ seen = set()
+
+ for prefix in PREFIXES:
+ if not prefix or prefix in seen:
+ continue
+ seen.add(prefix)
+
+ ix = sys.version.find(' ')
+ if ix != -1:
+ micro = sys.version[4:ix]
+ else:
+ micro = '0'
+
+ sitepackages.append(os.path.join(prefix, "lib",
+ "python" + sys.version[0] + sys.version[2] + '.' + micro,
+ "site-packages"))
+ sitepackages.append(os.path.join(prefix, "lib", "site-python"))
+ return sitepackages
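+
+# Illustrative note, not part of the module itself: with sys.version beginning
+# "3.6.8 ", the expression above builds "python" + "3" + "6" + "." + "8", so
+# each prefix contributes two candidate directories, e.g.
+#
+#     <prefix>/lib/python36.8/site-packages
+#     <prefix>/lib/site-python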
+
+def addsitepackages(known_paths):
+ """Add site-packages (and possibly site-python) to sys.path"""
+ for sitedir in getsitepackages():
+ if os.path.isdir(sitedir):
+ addsitedir(sitedir, known_paths)
+
+ return known_paths
+
+def setBEGINLIBPATH():
+ """The UEFI port has optional extension modules that do double duty
+ as DLLs (even though they have .efi file extensions) for other extensions.
+ The library search path needs to be amended so these will be found
+ during module import. Use BEGINLIBPATH so that these are at the start
+ of the library search path.
+
+ """
+ dllpath = os.path.join(sys.prefix, "Lib", "lib-dynload")
+ libpath = os.environ['BEGINLIBPATH'].split(os.path.pathsep)
+ if libpath[-1]:
+ libpath.append(dllpath)
+ else:
+ libpath[-1] = dllpath
+ os.environ['BEGINLIBPATH'] = os.path.pathsep.join(libpath)
+
+
+def setquit():
+ """Define new builtins 'quit' and 'exit'.
+
+ These are objects which make the interpreter exit when called.
+ The repr of each object contains a hint at how it works.
+
+ """
+ eof = 'Ctrl-D (i.e. EOF)'
+
+ class Quitter(object):
+ def __init__(self, name):
+ self.name = name
+ def __repr__(self):
+ return 'Use %s() or %s to exit' % (self.name, eof)
+ def __call__(self, code=None):
+ # Shells like IDLE catch the SystemExit, but listen when their
+ # stdin wrapper is closed.
+ try:
+ sys.stdin.close()
+ except:
+ pass
+ raise SystemExit(code)
+ builtins.quit = Quitter('quit')
+ builtins.exit = Quitter('exit')
+
+
+class _Printer(object):
+ """interactive prompt objects for printing the license text, a list of
+ contributors and the copyright notice."""
+
+ MAXLINES = 23
+
+ def __init__(self, name, data, files=(), dirs=()):
+ self.__name = name
+ self.__data = data
+ self.__files = files
+ self.__dirs = dirs
+ self.__lines = None
+
+ def __setup(self):
+ if self.__lines:
+ return
+ data = None
+ for dir in self.__dirs:
+ for filename in self.__files:
+ filename = os.path.join(dir, filename)
+ try:
+ fp = open(filename, "r")
+ data = fp.read()
+ fp.close()
+ break
+ except IOError:
+ pass
+ if data:
+ break
+ if not data:
+ data = self.__data
+ self.__lines = data.split('\n')
+ self.__linecnt = len(self.__lines)
+
+ def __repr__(self):
+ self.__setup()
+ if len(self.__lines) <= self.MAXLINES:
+ return "\n".join(self.__lines)
+ else:
+ return "Type %s() to see the full %s text" % ((self.__name,)*2)
+
+ def __call__(self):
+ self.__setup()
+ prompt = 'Hit Return for more, or q (and Return) to quit: '
+ lineno = 0
+ while 1:
+ try:
+ for i in range(lineno, lineno + self.MAXLINES):
+ print((self.__lines[i]))
+ except IndexError:
+ break
+ else:
+ lineno += self.MAXLINES
+ key = None
+ while key is None:
+ key = input(prompt)
+ if key not in ('', 'q'):
+ key = None
+ if key == 'q':
+ break
+
+def setcopyright():
+ """Set 'copyright' and 'credits' in __builtin__"""
+ builtins.copyright = _Printer("copyright", sys.copyright)
+ builtins.credits = _Printer("credits", """\
+ Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands
+ for supporting Python development. See www.python.org for more information.""")
+ here = os.path.dirname(os.__file__)
+ builtins.license = _Printer(
+ "license", "See https://www.python.org/psf/license/",
+ ["LICENSE.txt", "LICENSE"],
+ [os.path.join(here, os.pardir), here, os.curdir])
+
+
+class _Helper(object):
+ """Define the builtin 'help'.
+ This is a wrapper around pydoc.help (with a twist).
+
+ """
+
+ def __repr__(self):
+ return "Type help() for interactive help, " \
+ "or help(object) for help about object."
+ def __call__(self, *args, **kwds):
+ import pydoc
+ return pydoc.help(*args, **kwds)
+
+def sethelper():
+ builtins.help = _Helper()
+
+def setencoding():
+ """Set the string encoding used by the Unicode implementation. The
+ default is 'ascii', but if you're willing to experiment, you can
+ change this."""
+ encoding = "ascii" # Default value set by _PyUnicode_Init()
+ if 0:
+ # Enable to support locale aware default string encodings.
+ import locale
+ loc = locale.getdefaultlocale()
+ if loc[1]:
+ encoding = loc[1]
+ if 0:
+ # Enable to switch off string to Unicode coercion and implicit
+ # Unicode to string conversion.
+ encoding = "undefined"
+ if encoding != "ascii":
+ # On Non-Unicode builds this will raise an AttributeError...
+ sys.setdefaultencoding(encoding) # Needs Python Unicode build !
+
+
+def execsitecustomize():
+ """Run custom site specific code, if available."""
+ try:
+ import sitecustomize
+ except ImportError:
+ pass
+ except Exception:
+ if sys.flags.verbose:
+ sys.excepthook(*sys.exc_info())
+ else:
+ print("'import sitecustomize' failed; use -v for traceback", file=sys.stderr)
+
+
+def execusercustomize():
+ """Run custom user specific code, if available."""
+ try:
+ import usercustomize
+ except ImportError:
+ pass
+ except Exception:
+ if sys.flags.verbose:
+ sys.excepthook(*sys.exc_info())
+ else:
+ print("'import usercustomize' failed; use -v for traceback", file=sys.stderr)
+
+
+def main():
+ global ENABLE_USER_SITE
+
+ abs__file__()
+ known_paths = removeduppaths()
+ if ENABLE_USER_SITE is None:
+ ENABLE_USER_SITE = check_enableusersite()
+ known_paths = addusersitepackages(known_paths)
+ known_paths = addsitepackages(known_paths)
+ setquit()
+ setcopyright()
+ sethelper()
+ setencoding()
+ execsitecustomize()
+ # Remove sys.setdefaultencoding() so that users cannot change the
+ # encoding after initialization. The test for presence is needed when
+ # this module is run as a script, because this code is executed twice.
+ if hasattr(sys, "setdefaultencoding"):
+ del sys.setdefaultencoding
+
+main()
+
+def _script():
+ help = """\
+ %s
+
+ Path elements are normally separated by '%s'.
+ """
+
+ print("sys.path = [")
+ for dir in sys.path:
+ print(" %r," % (dir,))
+ print("]")
+
+ import textwrap
+ print(textwrap.dedent(help % (sys.argv[0], os.pathsep)))
+ sys.exit(0)
+
+if __name__ == '__main__':
+ _script()
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
new file mode 100644
index 00000000..24ea86c0
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
@@ -0,0 +1,1620 @@
+# subprocess - Subprocesses with accessible I/O streams
+#
+# For more information about this module, see PEP 324.
+#
+# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
+#
+# Licensed to PSF under a Contributor Agreement.
+# See http://www.python.org/2.4/license for licensing details.
+
+r"""Subprocesses with accessible I/O streams
+
+This module allows you to spawn processes, connect to their
+input/output/error pipes, and obtain their return codes.
+
+For a complete description of this module see the Python documentation.
+
+Main API
+========
+run(...): Runs a command, waits for it to complete, then returns a
+ CompletedProcess instance.
+Popen(...): A class for flexibly executing a command in a new process
+
+Constants
+---------
+DEVNULL: Special value that indicates that os.devnull should be used
+PIPE: Special value that indicates a pipe should be created
+STDOUT: Special value that indicates that stderr should go to stdout
+
+
+Older API
+=========
+call(...): Runs a command, waits for it to complete, then returns
+ the return code.
+check_call(...): Same as call() but raises CalledProcessError()
+ if return code is not 0
+check_output(...): Same as check_call() but returns the contents of
+ stdout instead of a return code
+getoutput(...): Runs a command in the shell, waits for it to complete,
+ then returns the output
+getstatusoutput(...): Runs a command in the shell, waits for it to complete,
+ then returns a (exitcode, output) tuple
+"""
+
+import sys
+_mswindows = (sys.platform == "win32")
+_uefi = (sys.platform == "uefi")
+import io
+import os
+import time
+import signal
+import builtins
+import warnings
+import errno
+from time import monotonic as _time
+import edk2 #JP added
+# Exception classes used by this module.
+class SubprocessError(Exception): pass
+
+
+class CalledProcessError(SubprocessError):
+ """Raised when run() is called with check=True and the process
+ returns a non-zero exit status.
+
+ Attributes:
+ cmd, returncode, stdout, stderr, output
+ """
+ def __init__(self, returncode, cmd, output=None, stderr=None):
+ self.returncode = returncode
+ self.cmd = cmd
+ self.output = output
+ self.stderr = stderr
+
+ def __str__(self):
+ if self.returncode and self.returncode < 0:
+ try:
+ return "Command '%s' died with %r." % (
+ self.cmd, signal.Signals(-self.returncode))
+ except ValueError:
+ return "Command '%s' died with unknown signal %d." % (
+ self.cmd, -self.returncode)
+ else:
+ return "Command '%s' returned non-zero exit status %d." % (
+ self.cmd, self.returncode)
+
+ @property
+ def stdout(self):
+ """Alias for output attribute, to match stderr"""
+ return self.output
+
+ @stdout.setter
+ def stdout(self, value):
+ # There's no obvious reason to set this, but allow it anyway so
+ # .stdout is a transparent alias for .output
+ self.output = value
+
+
+class TimeoutExpired(SubprocessError):
+ """This exception is raised when the timeout expires while waiting for a
+ child process.
+
+ Attributes:
+ cmd, output, stdout, stderr, timeout
+ """
+ def __init__(self, cmd, timeout, output=None, stderr=None):
+ self.cmd = cmd
+ self.timeout = timeout
+ self.output = output
+ self.stderr = stderr
+
+ def __str__(self):
+ return ("Command '%s' timed out after %s seconds" %
+ (self.cmd, self.timeout))
+
+ @property
+ def stdout(self):
+ return self.output
+
+ @stdout.setter
+ def stdout(self, value):
+ # There's no obvious reason to set this, but allow it anyway so
+ # .stdout is a transparent alias for .output
+ self.output = value
+
+
+if _mswindows:
+ import threading
+ import msvcrt
+ import _winapi
+ class STARTUPINFO:
+ dwFlags = 0
+ hStdInput = None
+ hStdOutput = None
+ hStdError = None
+ wShowWindow = 0
+else:
+ if not _uefi: #JP hack, subprocess will not work on EFI shell
+ import _posixsubprocess
+
+ import select
+ import selectors
+ try:
+ import threading
+ except ImportError:
+ import dummy_threading as threading
+
+ # When select or poll has indicated that the file is writable,
+ # we can write up to _PIPE_BUF bytes without risk of blocking.
+ # POSIX defines PIPE_BUF as >= 512.
+ _PIPE_BUF = getattr(select, 'PIPE_BUF', 512)
+
+ # poll/select have the advantage of not requiring any extra file
+ # descriptor, contrarily to epoll/kqueue (also, they require a single
+ # syscall).
+ if hasattr(selectors, 'PollSelector'):
+ _PopenSelector = selectors.PollSelector
+ else:
+ _PopenSelector = selectors.SelectSelector
+
+
+__all__ = ["Popen", "PIPE", "STDOUT", "call", "check_call", "getstatusoutput",
+ "getoutput", "check_output", "run", "CalledProcessError", "DEVNULL",
+ "SubprocessError", "TimeoutExpired", "CompletedProcess"]
+ # NOTE: We intentionally exclude list2cmdline as it is
+ # considered an internal implementation detail. issue10838.
+
+if _mswindows:
+ from _winapi import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP,
+ STD_INPUT_HANDLE, STD_OUTPUT_HANDLE,
+ STD_ERROR_HANDLE, SW_HIDE,
+ STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW)
+
+ __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP",
+ "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE",
+ "STD_ERROR_HANDLE", "SW_HIDE",
+ "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW",
+ "STARTUPINFO"])
+
+ class Handle(int):
+ closed = False
+
+ def Close(self, CloseHandle=_winapi.CloseHandle):
+ if not self.closed:
+ self.closed = True
+ CloseHandle(self)
+
+ def Detach(self):
+ if not self.closed:
+ self.closed = True
+ return int(self)
+ raise ValueError("already closed")
+
+ def __repr__(self):
+ return "%s(%d)" % (self.__class__.__name__, int(self))
+
+ __del__ = Close
+ __str__ = __repr__
+
+
+# This list holds Popen instances for which the underlying process had not
+# exited at the time its __del__ method got called: those processes are wait()ed
+# for synchronously from _cleanup() when a new Popen object is created, to avoid
+# zombie processes.
+_active = []
+
+def _cleanup():
+ for inst in _active[:]:
+ res = inst._internal_poll(_deadstate=sys.maxsize)
+ if res is not None:
+ try:
+ _active.remove(inst)
+ except ValueError:
+ # This can happen if two threads create a new Popen instance.
+ # It's harmless that it was already removed, so ignore.
+ pass
+
+PIPE = -1
+STDOUT = -2
+DEVNULL = -3
+
+
+# XXX This function is only used by multiprocessing and the test suite,
+# but it's here so that it can be imported when Python is compiled without
+# threads.
+
+def _optim_args_from_interpreter_flags():
+ """Return a list of command-line arguments reproducing the current
+ optimization settings in sys.flags."""
+ args = []
+ value = sys.flags.optimize
+ if value > 0:
+ args.append('-' + 'O' * value)
+ return args
+
+
+def _args_from_interpreter_flags():
+ """Return a list of command-line arguments reproducing the current
+ settings in sys.flags, sys.warnoptions and sys._xoptions."""
+ flag_opt_map = {
+ 'debug': 'd',
+ # 'inspect': 'i',
+ # 'interactive': 'i',
+ 'dont_write_bytecode': 'B',
+ 'no_site': 'S',
+ 'verbose': 'v',
+ 'bytes_warning': 'b',
+ 'quiet': 'q',
+ # -O is handled in _optim_args_from_interpreter_flags()
+ }
+ args = _optim_args_from_interpreter_flags()
+ for flag, opt in flag_opt_map.items():
+ v = getattr(sys.flags, flag)
+ if v > 0:
+ args.append('-' + opt * v)
+
+ if sys.flags.isolated:
+ args.append('-I')
+ else:
+ if sys.flags.ignore_environment:
+ args.append('-E')
+ if sys.flags.no_user_site:
+ args.append('-s')
+
+ for opt in sys.warnoptions:
+ args.append('-W' + opt)
+
+ # -X options
+ xoptions = getattr(sys, '_xoptions', {})
+ for opt in ('faulthandler', 'tracemalloc',
+ 'showalloccount', 'showrefcount', 'utf8'):
+ if opt in xoptions:
+ value = xoptions[opt]
+ if value is True:
+ arg = opt
+ else:
+ arg = '%s=%s' % (opt, value)
+ args.extend(('-X', arg))
+
+ return args
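+
+# Illustrative note, not part of the upstream module: for an interpreter
+# started as "python -B -s -W ignore", the helper above rebuilds roughly
+# ['-B', '-s', '-Wignore'], which lets a child interpreter be spawned with the
+# same flag settings.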
+
+
+def call(*popenargs, timeout=None, **kwargs):
+ """Run command with arguments. Wait for command to complete or
+ timeout, then return the returncode attribute.
+
+ The arguments are the same as for the Popen constructor. Example:
+
+ retcode = call(["ls", "-l"])
+ """
+ with Popen(*popenargs, **kwargs) as p:
+ try:
+ return p.wait(timeout=timeout)
+ except:
+ p.kill()
+ p.wait()
+ raise
+
+
+def check_call(*popenargs, **kwargs):
+ """Run command with arguments. Wait for command to complete. If
+ the exit code was zero then return, otherwise raise
+ CalledProcessError. The CalledProcessError object will have the
+ return code in the returncode attribute.
+
+ The arguments are the same as for the call function. Example:
+
+ check_call(["ls", "-l"])
+ """
+ retcode = call(*popenargs, **kwargs)
+ if retcode:
+ cmd = kwargs.get("args")
+ if cmd is None:
+ cmd = popenargs[0]
+ raise CalledProcessError(retcode, cmd)
+ return 0
+
+
+def check_output(*popenargs, timeout=None, **kwargs):
+ r"""Run command with arguments and return its output.
+
+ If the exit code was non-zero it raises a CalledProcessError. The
+ CalledProcessError object will have the return code in the returncode
+ attribute and output in the output attribute.
+
+ The arguments are the same as for the Popen constructor. Example:
+
+ >>> check_output(["ls", "-l", "/dev/null"])
+ b'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
+
+ The stdout argument is not allowed as it is used internally.
+ To capture standard error in the result, use stderr=STDOUT.
+
+ >>> check_output(["/bin/sh", "-c",
+ ... "ls -l non_existent_file ; exit 0"],
+ ... stderr=STDOUT)
+ b'ls: non_existent_file: No such file or directory\n'
+
+ There is an additional optional argument, "input", allowing you to
+ pass a string to the subprocess's stdin. If you use this argument
+ you may not also use the Popen constructor's "stdin" argument, as
+ it too will be used internally. Example:
+
+ >>> check_output(["sed", "-e", "s/foo/bar/"],
+ ... input=b"when in the course of fooman events\n")
+ b'when in the course of barman events\n'
+
+ If universal_newlines=True is passed, the "input" argument must be a
+ string and the return value will be a string rather than bytes.
+ """
+ if 'stdout' in kwargs:
+ raise ValueError('stdout argument not allowed, it will be overridden.')
+
+ if 'input' in kwargs and kwargs['input'] is None:
+ # Explicitly passing input=None was previously equivalent to passing an
+ # empty string. That is maintained here for backwards compatibility.
+ kwargs['input'] = '' if kwargs.get('universal_newlines', False) else b''
+
+ return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
+ **kwargs).stdout
+
+
+class CompletedProcess(object):
+ """A process that has finished running.
+
+ This is returned by run().
+
+ Attributes:
+ args: The list or str args passed to run().
+ returncode: The exit code of the process, negative for signals.
+ stdout: The standard output (None if not captured).
+ stderr: The standard error (None if not captured).
+ """
+ def __init__(self, args, returncode, stdout=None, stderr=None):
+ self.args = args
+ self.returncode = returncode
+ self.stdout = stdout
+ self.stderr = stderr
+
+ def __repr__(self):
+ args = ['args={!r}'.format(self.args),
+ 'returncode={!r}'.format(self.returncode)]
+ if self.stdout is not None:
+ args.append('stdout={!r}'.format(self.stdout))
+ if self.stderr is not None:
+ args.append('stderr={!r}'.format(self.stderr))
+ return "{}({})".format(type(self).__name__, ', '.join(args))
+
+ def check_returncode(self):
+ """Raise CalledProcessError if the exit code is non-zero."""
+ if self.returncode:
+ raise CalledProcessError(self.returncode, self.args, self.stdout,
+ self.stderr)
+
+
+def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
+ """Run command with arguments and return a CompletedProcess instance.
+
+ The returned instance will have attributes args, returncode, stdout and
+ stderr. By default, stdout and stderr are not captured, and those attributes
+ will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
+
+ If check is True and the exit code was non-zero, it raises a
+ CalledProcessError. The CalledProcessError object will have the return code
+ in the returncode attribute, and output & stderr attributes if those streams
+ were captured.
+
+ If timeout is given, and the process takes too long, a TimeoutExpired
+ exception will be raised.
+
+ There is an optional argument "input", allowing you to
+ pass a string to the subprocess's stdin. If you use this argument
+ you may not also use the Popen constructor's "stdin" argument, as
+ it will be used internally.
+
+ The other arguments are the same as for the Popen constructor.
+
+ If universal_newlines=True is passed, the "input" argument must be a
+ string and stdout/stderr in the returned object will be strings rather than
+ bytes.
+ """
+ if input is not None:
+ if 'stdin' in kwargs:
+ raise ValueError('stdin and input arguments may not both be used.')
+ kwargs['stdin'] = PIPE
+
+ with Popen(*popenargs, **kwargs) as process:
+ try:
+ stdout, stderr = process.communicate(input, timeout=timeout)
+ except TimeoutExpired:
+ process.kill()
+ stdout, stderr = process.communicate()
+ raise TimeoutExpired(process.args, timeout, output=stdout,
+ stderr=stderr)
+ except:
+ process.kill()
+ process.wait()
+ raise
+ retcode = process.poll()
+ if check and retcode:
+ raise CalledProcessError(retcode, process.args,
+ output=stdout, stderr=stderr)
+ return CompletedProcess(process.args, retcode, stdout, stderr)
+
+
+def list2cmdline(seq):
+ """
+ Translate a sequence of arguments into a command line
+ string, using the same rules as the MS C runtime:
+
+ 1) Arguments are delimited by white space, which is either a
+ space or a tab.
+
+ 2) A string surrounded by double quotation marks is
+ interpreted as a single argument, regardless of white space
+ contained within. A quoted string can be embedded in an
+ argument.
+
+ 3) A double quotation mark preceded by a backslash is
+ interpreted as a literal double quotation mark.
+
+ 4) Backslashes are interpreted literally, unless they
+ immediately precede a double quotation mark.
+
+ 5) If backslashes immediately precede a double quotation mark,
+ every pair of backslashes is interpreted as a literal
+ backslash. If the number of backslashes is odd, the last
+ backslash escapes the next double quotation mark as
+ described in rule 3.
+ """
+
+ # See
+ # http://msdn.microsoft.com/en-us/library/17w5ykft.aspx
+ # or search http://msdn.microsoft.com for
+ # "Parsing C++ Command-Line Arguments"
+ result = []
+ needquote = False
+ for arg in seq:
+ bs_buf = []
+
+ # Add a space to separate this argument from the others
+ if result:
+ result.append(' ')
+
+ needquote = (" " in arg) or ("\t" in arg) or not arg
+ if needquote:
+ result.append('"')
+
+ for c in arg:
+ if c == '\\':
+ # Don't know if we need to double yet.
+ bs_buf.append(c)
+ elif c == '"':
+ # Double backslashes.
+ result.append('\\' * len(bs_buf)*2)
+ bs_buf = []
+ result.append('\\"')
+ else:
+ # Normal char
+ if bs_buf:
+ result.extend(bs_buf)
+ bs_buf = []
+ result.append(c)
+
+ # Add remaining backslashes, if any.
+ if bs_buf:
+ result.extend(bs_buf)
+
+ if needquote:
+ result.extend(bs_buf)
+ result.append('"')
+
+ return ''.join(result)
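+
+# Illustrative note, not part of the upstream module: applying the quoting
+# rules documented above,
+#
+#     list2cmdline(["python", "-c", "print('hi there')"])
+#
+# returns the single string
+#
+#     python -c "print('hi there')"
+#
+# because only the last argument contains whitespace and is therefore quoted.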
+
+
+# Various tools for executing commands and looking at their output and status.
+#
+
+def getstatusoutput(cmd):
+ """Return (exitcode, output) of executing cmd in a shell.
+
+ Execute the string 'cmd' in a shell with 'check_output' and
+ return a 2-tuple (status, output). The locale encoding is used
+ to decode the output and process newlines.
+
+ A trailing newline is stripped from the output.
+ The exit status for the command can be interpreted
+ according to the rules for the function 'wait'. Example:
+
+ >>> import subprocess
+ >>> subprocess.getstatusoutput('ls /bin/ls')
+ (0, '/bin/ls')
+ >>> subprocess.getstatusoutput('cat /bin/junk')
+ (1, 'cat: /bin/junk: No such file or directory')
+ >>> subprocess.getstatusoutput('/bin/junk')
+ (127, 'sh: /bin/junk: not found')
+ >>> subprocess.getstatusoutput('/bin/kill $$')
+ (-15, '')
+ """
+ try:
+ data = check_output(cmd, shell=True, universal_newlines=True, stderr=STDOUT)
+ exitcode = 0
+ except CalledProcessError as ex:
+ data = ex.output
+ exitcode = ex.returncode
+ if data[-1:] == '\n':
+ data = data[:-1]
+ return exitcode, data
+
+def getoutput(cmd):
+ """Return output (stdout or stderr) of executing cmd in a shell.
+
+ Like getstatusoutput(), except the exit status is ignored and the return
+ value is a string containing the command's output. Example:
+
+ >>> import subprocess
+ >>> subprocess.getoutput('ls /bin/ls')
+ '/bin/ls'
+ """
+ return getstatusoutput(cmd)[1]
+
+
+_PLATFORM_DEFAULT_CLOSE_FDS = object()
+
+
+class Popen(object):
+ """ Execute a child program in a new process.
+
+ For a complete description of the arguments see the Python documentation.
+
+ Arguments:
+ args: A string, or a sequence of program arguments.
+
+ bufsize: supplied as the buffering argument to the open() function when
+ creating the stdin/stdout/stderr pipe file objects
+
+ executable: A replacement program to execute.
+
+ stdin, stdout and stderr: These specify the executed programs' standard
+ input, standard output and standard error file handles, respectively.
+
+ preexec_fn: (POSIX only) An object to be called in the child process
+ just before the child is executed.
+
+ close_fds: Controls closing or inheriting of file descriptors.
+
+ shell: If true, the command will be executed through the shell.
+
+ cwd: Sets the current directory before the child is executed.
+
+ env: Defines the environment variables for the new process.
+
+ universal_newlines: If true, use universal line endings for file
+ objects stdin, stdout and stderr.
+
+ startupinfo and creationflags (Windows only)
+
+ restore_signals (POSIX only)
+
+ start_new_session (POSIX only)
+
+ pass_fds (POSIX only)
+
+ encoding and errors: Text mode encoding and error handling to use for
+ file objects stdin, stdout and stderr.
+
+ Attributes:
+ stdin, stdout, stderr, pid, returncode
+ """
+ _child_created = False # Set here since __del__ checks it
+
+ def __init__(self, args, bufsize=-1, executable=None,
+ stdin=None, stdout=None, stderr=None,
+ preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
+ shell=False, cwd=None, env=None, universal_newlines=False,
+ startupinfo=None, creationflags=0,
+ restore_signals=True, start_new_session=False,
+ pass_fds=(), *, encoding=None, errors=None):
+ """Create new Popen instance."""
+ _cleanup()
+ # Held while anything is calling waitpid before returncode has been
+ # updated to prevent clobbering returncode if wait() or poll() are
+ # called from multiple threads at once. After acquiring the lock,
+ # code must re-check self.returncode to see if another thread just
+ # finished a waitpid() call.
+ self._waitpid_lock = threading.Lock()
+
+ self._input = None
+ self._communication_started = False
+ if bufsize is None:
+ bufsize = -1 # Restore default
+ if not isinstance(bufsize, int):
+ raise TypeError("bufsize must be an integer")
+
+ if _mswindows:
+ if preexec_fn is not None:
+ raise ValueError("preexec_fn is not supported on Windows "
+ "platforms")
+ any_stdio_set = (stdin is not None or stdout is not None or
+ stderr is not None)
+ if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS:
+ if any_stdio_set:
+ close_fds = False
+ else:
+ close_fds = True
+ elif close_fds and any_stdio_set:
+ raise ValueError(
+ "close_fds is not supported on Windows platforms"
+ " if you redirect stdin/stdout/stderr")
+ else:
+ # POSIX
+ if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS:
+ close_fds = True
+ if pass_fds and not close_fds:
+ warnings.warn("pass_fds overriding close_fds.", RuntimeWarning)
+ close_fds = True
+ if startupinfo is not None:
+ raise ValueError("startupinfo is only supported on Windows "
+ "platforms")
+ if creationflags != 0:
+ raise ValueError("creationflags is only supported on Windows "
+ "platforms")
+
+ self.args = args
+ self.stdin = None
+ self.stdout = None
+ self.stderr = None
+ self.pid = None
+ self.returncode = None
+ self.universal_newlines = universal_newlines
+ self.encoding = encoding
+ self.errors = errors
+
+ # Input and output objects. The general principle is like
+ # this:
+ #
+ # Parent Child
+ # ------ -----
+ # p2cwrite ---stdin---> p2cread
+ # c2pread <--stdout--- c2pwrite
+ # errread <--stderr--- errwrite
+ #
+ # On POSIX, the child objects are file descriptors. On
+ # Windows, these are Windows file handles. The parent objects
+ # are file descriptors on both platforms. The parent objects
+ # are -1 when not using PIPEs. The child objects are -1
+ # when not redirecting.
+
+ (p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite) = self._get_handles(stdin, stdout, stderr)
+
+ # We wrap OS handles *before* launching the child, otherwise a
+ # quickly terminating child could make our fds unwrappable
+ # (see #8458).
+
+ if _mswindows:
+ if p2cwrite != -1:
+ p2cwrite = msvcrt.open_osfhandle(p2cwrite.Detach(), 0)
+ if c2pread != -1:
+ c2pread = msvcrt.open_osfhandle(c2pread.Detach(), 0)
+ if errread != -1:
+ errread = msvcrt.open_osfhandle(errread.Detach(), 0)
+
+ text_mode = encoding or errors or universal_newlines
+
+ self._closed_child_pipe_fds = False
+
+ try:
+ if p2cwrite != -1:
+ self.stdin = io.open(p2cwrite, 'wb', bufsize)
+ if text_mode:
+ self.stdin = io.TextIOWrapper(self.stdin, write_through=True,
+ line_buffering=(bufsize == 1),
+ encoding=encoding, errors=errors)
+ if c2pread != -1:
+ self.stdout = io.open(c2pread, 'rb', bufsize)
+ if text_mode:
+ self.stdout = io.TextIOWrapper(self.stdout,
+ encoding=encoding, errors=errors)
+ if errread != -1:
+ self.stderr = io.open(errread, 'rb', bufsize)
+ if text_mode:
+ self.stderr = io.TextIOWrapper(self.stderr,
+ encoding=encoding, errors=errors)
+
+ self._execute_child(args, executable, preexec_fn, close_fds,
+ pass_fds, cwd, env,
+ startupinfo, creationflags, shell,
+ p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite,
+ restore_signals, start_new_session)
+ except:
+ # Cleanup if the child failed starting.
+ for f in filter(None, (self.stdin, self.stdout, self.stderr)):
+ try:
+ f.close()
+ except OSError:
+ pass # Ignore EBADF or other errors.
+
+ if not self._closed_child_pipe_fds:
+ to_close = []
+ if stdin == PIPE:
+ to_close.append(p2cread)
+ if stdout == PIPE:
+ to_close.append(c2pwrite)
+ if stderr == PIPE:
+ to_close.append(errwrite)
+ if hasattr(self, '_devnull'):
+ to_close.append(self._devnull)
+ for fd in to_close:
+ try:
+ if _mswindows and isinstance(fd, Handle):
+ fd.Close()
+ else:
+ os.close(fd)
+ except OSError:
+ pass
+
+ raise
+
+ def _translate_newlines(self, data, encoding, errors):
+ data = data.decode(encoding, errors)
+ return data.replace("\r\n", "\n").replace("\r", "\n")
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type, value, traceback):
+ if self.stdout:
+ self.stdout.close()
+ if self.stderr:
+ self.stderr.close()
+ try: # Flushing a BufferedWriter may raise an error
+ if self.stdin:
+ self.stdin.close()
+ finally:
+ # Wait for the process to terminate, to avoid zombies.
+ self.wait()
+
+ def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
+ if not self._child_created:
+ # We didn't get to successfully create a child process.
+ return
+ if self.returncode is None:
+ # Not reading the subprocess exit status creates a zombie process which
+ # is only destroyed at the parent python process exit
+ _warn("subprocess %s is still running" % self.pid,
+ ResourceWarning, source=self)
+ # In case the child hasn't been waited on, check if it's done.
+ self._internal_poll(_deadstate=_maxsize)
+ if self.returncode is None and _active is not None:
+ # Child is still running, keep us alive until we can wait on it.
+ _active.append(self)
+
+ def _get_devnull(self):
+ if not hasattr(self, '_devnull'):
+ self._devnull = os.open(os.devnull, os.O_RDWR)
+ return self._devnull
+
+ def _stdin_write(self, input):
+ if input:
+ try:
+ self.stdin.write(input)
+ except BrokenPipeError:
+ pass # communicate() must ignore broken pipe errors.
+ except OSError as exc:
+ if exc.errno == errno.EINVAL:
+ # bpo-19612, bpo-30418: On Windows, stdin.write() fails
+ # with EINVAL if the child process exited or if the child
+ # process is still running but closed the pipe.
+ pass
+ else:
+ raise
+
+ try:
+ self.stdin.close()
+ except BrokenPipeError:
+ pass # communicate() must ignore broken pipe errors.
+ except OSError as exc:
+ if exc.errno == errno.EINVAL:
+ pass
+ else:
+ raise
+
+ def communicate(self, input=None, timeout=None):
+ """Interact with process: Send data to stdin. Read data from
+ stdout and stderr, until end-of-file is reached. Wait for
+ process to terminate.
+
+ The optional "input" argument should be data to be sent to the
+ child process (if self.universal_newlines is True, this should
+ be a string; if it is False, "input" should be bytes), or
+ None, if no data should be sent to the child.
+
+ communicate() returns a tuple (stdout, stderr). These will be
+ bytes or, if self.universal_newlines was True, a string.
+ """
+
+ if self._communication_started and input:
+ raise ValueError("Cannot send input after starting communication")
+
+ # Optimization: If we are not worried about timeouts, we haven't
+ # started communicating, and we have one or zero pipes, using select()
+ # or threads is unnecessary.
+ if (timeout is None and not self._communication_started and
+ [self.stdin, self.stdout, self.stderr].count(None) >= 2):
+ stdout = None
+ stderr = None
+ if self.stdin:
+ self._stdin_write(input)
+ elif self.stdout:
+ stdout = self.stdout.read()
+ self.stdout.close()
+ elif self.stderr:
+ stderr = self.stderr.read()
+ self.stderr.close()
+ self.wait()
+ else:
+ if timeout is not None:
+ endtime = _time() + timeout
+ else:
+ endtime = None
+
+ try:
+ stdout, stderr = self._communicate(input, endtime, timeout)
+ finally:
+ self._communication_started = True
+
+ sts = self.wait(timeout=self._remaining_time(endtime))
+
+ return (stdout, stderr)
+
+
+ def poll(self):
+ """Check if child process has terminated. Set and return returncode
+ attribute."""
+ return self._internal_poll()
+
+
+ def _remaining_time(self, endtime):
+ """Convenience for _communicate when computing timeouts."""
+ if endtime is None:
+ return None
+ else:
+ return endtime - _time()
+
+
+ def _check_timeout(self, endtime, orig_timeout):
+ """Convenience for checking if a timeout has expired."""
+ if endtime is None:
+ return
+ if _time() > endtime:
+ raise TimeoutExpired(self.args, orig_timeout)
+
+
+ if _mswindows:
+ #
+ # Windows methods
+ #
+ def _get_handles(self, stdin, stdout, stderr):
+ """Construct and return tuple with IO objects:
+ p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
+ """
+ if stdin is None and stdout is None and stderr is None:
+ return (-1, -1, -1, -1, -1, -1)
+
+ p2cread, p2cwrite = -1, -1
+ c2pread, c2pwrite = -1, -1
+ errread, errwrite = -1, -1
+
+ if stdin is None:
+ p2cread = _winapi.GetStdHandle(_winapi.STD_INPUT_HANDLE)
+ if p2cread is None:
+ p2cread, _ = _winapi.CreatePipe(None, 0)
+ p2cread = Handle(p2cread)
+ _winapi.CloseHandle(_)
+ elif stdin == PIPE:
+ p2cread, p2cwrite = _winapi.CreatePipe(None, 0)
+ p2cread, p2cwrite = Handle(p2cread), Handle(p2cwrite)
+ elif stdin == DEVNULL:
+ p2cread = msvcrt.get_osfhandle(self._get_devnull())
+ elif isinstance(stdin, int):
+ p2cread = msvcrt.get_osfhandle(stdin)
+ else:
+ # Assuming file-like object
+ p2cread = msvcrt.get_osfhandle(stdin.fileno())
+ p2cread = self._make_inheritable(p2cread)
+
+ if stdout is None:
+ c2pwrite = _winapi.GetStdHandle(_winapi.STD_OUTPUT_HANDLE)
+ if c2pwrite is None:
+ _, c2pwrite = _winapi.CreatePipe(None, 0)
+ c2pwrite = Handle(c2pwrite)
+ _winapi.CloseHandle(_)
+ elif stdout == PIPE:
+ c2pread, c2pwrite = _winapi.CreatePipe(None, 0)
+ c2pread, c2pwrite = Handle(c2pread), Handle(c2pwrite)
+ elif stdout == DEVNULL:
+ c2pwrite = msvcrt.get_osfhandle(self._get_devnull())
+ elif isinstance(stdout, int):
+ c2pwrite = msvcrt.get_osfhandle(stdout)
+ else:
+ # Assuming file-like object
+ c2pwrite = msvcrt.get_osfhandle(stdout.fileno())
+ c2pwrite = self._make_inheritable(c2pwrite)
+
+ if stderr is None:
+ errwrite = _winapi.GetStdHandle(_winapi.STD_ERROR_HANDLE)
+ if errwrite is None:
+ _, errwrite = _winapi.CreatePipe(None, 0)
+ errwrite = Handle(errwrite)
+ _winapi.CloseHandle(_)
+ elif stderr == PIPE:
+ errread, errwrite = _winapi.CreatePipe(None, 0)
+ errread, errwrite = Handle(errread), Handle(errwrite)
+ elif stderr == STDOUT:
+ errwrite = c2pwrite
+ elif stderr == DEVNULL:
+ errwrite = msvcrt.get_osfhandle(self._get_devnull())
+ elif isinstance(stderr, int):
+ errwrite = msvcrt.get_osfhandle(stderr)
+ else:
+ # Assuming file-like object
+ errwrite = msvcrt.get_osfhandle(stderr.fileno())
+ errwrite = self._make_inheritable(errwrite)
+
+ return (p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite)
+
+
+ def _make_inheritable(self, handle):
+ """Return a duplicate of handle, which is inheritable"""
+ h = _winapi.DuplicateHandle(
+ _winapi.GetCurrentProcess(), handle,
+ _winapi.GetCurrentProcess(), 0, 1,
+ _winapi.DUPLICATE_SAME_ACCESS)
+ return Handle(h)
+
+
+ def _execute_child(self, args, executable, preexec_fn, close_fds,
+ pass_fds, cwd, env,
+ startupinfo, creationflags, shell,
+ p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite,
+ unused_restore_signals, unused_start_new_session):
+ """Execute program (MS Windows version)"""
+
+ assert not pass_fds, "pass_fds not supported on Windows."
+
+ if not isinstance(args, str):
+ args = list2cmdline(args)
+
+ # Process startup details
+ if startupinfo is None:
+ startupinfo = STARTUPINFO()
+ if -1 not in (p2cread, c2pwrite, errwrite):
+ startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES
+ startupinfo.hStdInput = p2cread
+ startupinfo.hStdOutput = c2pwrite
+ startupinfo.hStdError = errwrite
+
+ if shell:
+ startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW
+ startupinfo.wShowWindow = _winapi.SW_HIDE
+ comspec = os.environ.get("COMSPEC", "cmd.exe")
+ args = '{} /c "{}"'.format (comspec, args)
+
+ # Start the process
+ try:
+ hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
+ # no special security
+ None, None,
+ int(not close_fds),
+ creationflags,
+ env,
+ os.fspath(cwd) if cwd is not None else None,
+ startupinfo)
+ finally:
+ # Child is launched. Close the parent's copy of those pipe
+ # handles that only the child should have open. You need
+ # to make sure that no handles to the write end of the
+ # output pipe are maintained in this process or else the
+ # pipe will not close when the child process exits and the
+ # ReadFile will hang.
+ if p2cread != -1:
+ p2cread.Close()
+ if c2pwrite != -1:
+ c2pwrite.Close()
+ if errwrite != -1:
+ errwrite.Close()
+ if hasattr(self, '_devnull'):
+ os.close(self._devnull)
+ # Prevent a double close of these handles/fds from __init__
+ # on error.
+ self._closed_child_pipe_fds = True
+
+ # Retain the process handle, but close the thread handle
+ self._child_created = True
+ self._handle = Handle(hp)
+ self.pid = pid
+ _winapi.CloseHandle(ht)
+
+ def _internal_poll(self, _deadstate=None,
+ _WaitForSingleObject=_winapi.WaitForSingleObject,
+ _WAIT_OBJECT_0=_winapi.WAIT_OBJECT_0,
+ _GetExitCodeProcess=_winapi.GetExitCodeProcess):
+ """Check if child process has terminated. Returns returncode
+ attribute.
+
+ This method is called by __del__, so it can only refer to objects
+ in its local scope.
+
+ """
+ if self.returncode is None:
+ if _WaitForSingleObject(self._handle, 0) == _WAIT_OBJECT_0:
+ self.returncode = _GetExitCodeProcess(self._handle)
+ return self.returncode
+
+
+ def wait(self, timeout=None, endtime=None):
+ """Wait for child process to terminate. Returns returncode
+ attribute."""
+ if endtime is not None:
+ warnings.warn(
+ "'endtime' argument is deprecated; use 'timeout'.",
+ DeprecationWarning,
+ stacklevel=2)
+ timeout = self._remaining_time(endtime)
+ if timeout is None:
+ timeout_millis = _winapi.INFINITE
+ else:
+ timeout_millis = int(timeout * 1000)
+ if self.returncode is None:
+ result = _winapi.WaitForSingleObject(self._handle,
+ timeout_millis)
+ if result == _winapi.WAIT_TIMEOUT:
+ raise TimeoutExpired(self.args, timeout)
+ self.returncode = _winapi.GetExitCodeProcess(self._handle)
+ return self.returncode
+
+
+ def _readerthread(self, fh, buffer):
+ buffer.append(fh.read())
+ fh.close()
+
+
+ def _communicate(self, input, endtime, orig_timeout):
+ # Start reader threads feeding into a list hanging off of this
+ # object, unless they've already been started.
+ if self.stdout and not hasattr(self, "_stdout_buff"):
+ self._stdout_buff = []
+ self.stdout_thread = \
+ threading.Thread(target=self._readerthread,
+ args=(self.stdout, self._stdout_buff))
+ self.stdout_thread.daemon = True
+ self.stdout_thread.start()
+ if self.stderr and not hasattr(self, "_stderr_buff"):
+ self._stderr_buff = []
+ self.stderr_thread = \
+ threading.Thread(target=self._readerthread,
+ args=(self.stderr, self._stderr_buff))
+ self.stderr_thread.daemon = True
+ self.stderr_thread.start()
+
+ if self.stdin:
+ self._stdin_write(input)
+
+ # Wait for the reader threads, or time out. If we time out, the
+ # threads remain reading and the fds left open in case the user
+ # calls communicate again.
+ if self.stdout is not None:
+ self.stdout_thread.join(self._remaining_time(endtime))
+ if self.stdout_thread.is_alive():
+ raise TimeoutExpired(self.args, orig_timeout)
+ if self.stderr is not None:
+ self.stderr_thread.join(self._remaining_time(endtime))
+ if self.stderr_thread.is_alive():
+ raise TimeoutExpired(self.args, orig_timeout)
+
+ # Collect the output from and close both pipes, now that we know
+ # both have been read successfully.
+ stdout = None
+ stderr = None
+ if self.stdout:
+ stdout = self._stdout_buff
+ self.stdout.close()
+ if self.stderr:
+ stderr = self._stderr_buff
+ self.stderr.close()
+
+ # All data exchanged. Translate lists into strings.
+ if stdout is not None:
+ stdout = stdout[0]
+ if stderr is not None:
+ stderr = stderr[0]
+
+ return (stdout, stderr)
+
+ def send_signal(self, sig):
+ """Send a signal to the process."""
+ # Don't signal a process that we know has already died.
+ if self.returncode is not None:
+ return
+ if sig == signal.SIGTERM:
+ self.terminate()
+ elif sig == signal.CTRL_C_EVENT:
+ os.kill(self.pid, signal.CTRL_C_EVENT)
+ elif sig == signal.CTRL_BREAK_EVENT:
+ os.kill(self.pid, signal.CTRL_BREAK_EVENT)
+ else:
+ raise ValueError("Unsupported signal: {}".format(sig))
+
+ def terminate(self):
+ """Terminates the process."""
+ # Don't terminate a process that we know has already died.
+ if self.returncode is not None:
+ return
+ try:
+ _winapi.TerminateProcess(self._handle, 1)
+ except PermissionError:
+ # ERROR_ACCESS_DENIED (winerror 5) is received when the
+ # process already died.
+ rc = _winapi.GetExitCodeProcess(self._handle)
+ if rc == _winapi.STILL_ACTIVE:
+ raise
+ self.returncode = rc
+
+ kill = terminate
+
+ else:
+ #
+ # POSIX methods
+ #
+ def _get_handles(self, stdin, stdout, stderr):
+ """Construct and return tuple with IO objects:
+ p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
+ """
+ p2cread, p2cwrite = -1, -1
+ c2pread, c2pwrite = -1, -1
+ errread, errwrite = -1, -1
+
+ if stdin is None:
+ pass
+ elif stdin == PIPE:
+ p2cread, p2cwrite = os.pipe()
+ elif stdin == DEVNULL:
+ p2cread = self._get_devnull()
+ elif isinstance(stdin, int):
+ p2cread = stdin
+ else:
+ # Assuming file-like object
+ p2cread = stdin.fileno()
+
+ if stdout is None:
+ pass
+ elif stdout == PIPE:
+ c2pread, c2pwrite = os.pipe()
+ elif stdout == DEVNULL:
+ c2pwrite = self._get_devnull()
+ elif isinstance(stdout, int):
+ c2pwrite = stdout
+ else:
+ # Assuming file-like object
+ c2pwrite = stdout.fileno()
+
+ if stderr is None:
+ pass
+ elif stderr == PIPE:
+ errread, errwrite = os.pipe()
+ elif stderr == STDOUT:
+ if c2pwrite != -1:
+ errwrite = c2pwrite
+ else: # child's stdout is not set, use parent's stdout
+ errwrite = sys.__stdout__.fileno()
+ elif stderr == DEVNULL:
+ errwrite = self._get_devnull()
+ elif isinstance(stderr, int):
+ errwrite = stderr
+ else:
+ # Assuming file-like object
+ errwrite = stderr.fileno()
+
+ return (p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite)
+
+
+ def _execute_child(self, args, executable, preexec_fn, close_fds,
+ pass_fds, cwd, env,
+ startupinfo, creationflags, shell,
+ p2cread, p2cwrite,
+ c2pread, c2pwrite,
+ errread, errwrite,
+ restore_signals, start_new_session):
+ """Execute program (POSIX version)"""
+
+ if isinstance(args, (str, bytes)):
+ args = [args]
+ else:
+ args = list(args)
+
+ if shell:
+ args = ["/bin/sh", "-c"] + args
+ if executable:
+ args[0] = executable
+
+ if executable is None:
+ executable = args[0]
+ orig_executable = executable
+
+ # For transferring possible exec failure from child to parent.
+ # Data format: "exception name:hex errno:description"
+ # Pickle is not used; it is complex and involves memory allocation.
+ errpipe_read, errpipe_write = os.pipe()
+ # errpipe_write must not be in the standard io 0, 1, or 2 fd range.
+ low_fds_to_close = []
+ while errpipe_write < 3:
+ low_fds_to_close.append(errpipe_write)
+ errpipe_write = os.dup(errpipe_write)
+ for low_fd in low_fds_to_close:
+ os.close(low_fd)
+ try:
+ try:
+ # We must avoid complex work that could involve
+ # malloc or free in the child process to avoid
+ # potential deadlocks, thus we do all this here.
+ # and pass it to fork_exec()
+
+ if env is not None:
+ env_list = []
+ for k, v in env.items():
+ k = os.fsencode(k)
+ if b'=' in k:
+ raise ValueError("illegal environment variable name")
+ env_list.append(k + b'=' + os.fsencode(v))
+ else:
+ env_list = None # Use execv instead of execve.
+ executable = os.fsencode(executable)
+ if os.path.dirname(executable):
+ executable_list = (executable,)
+ else:
+ # This matches the behavior of os._execvpe().
+ executable_list = tuple(
+ os.path.join(os.fsencode(dir), executable)
+ for dir in os.get_exec_path(env))
+ fds_to_keep = set(pass_fds)
+ fds_to_keep.add(errpipe_write)
+ self.pid = _posixsubprocess.fork_exec(
+ args, executable_list,
+ close_fds, tuple(sorted(map(int, fds_to_keep))),
+ cwd, env_list,
+ p2cread, p2cwrite, c2pread, c2pwrite,
+ errread, errwrite,
+ errpipe_read, errpipe_write,
+ restore_signals, start_new_session, preexec_fn)
+ self._child_created = True
+ finally:
+ # be sure the FD is closed no matter what
+ os.close(errpipe_write)
+
+ # self._devnull is not always defined.
+ devnull_fd = getattr(self, '_devnull', None)
+ if p2cread != -1 and p2cwrite != -1 and p2cread != devnull_fd:
+ os.close(p2cread)
+ if c2pwrite != -1 and c2pread != -1 and c2pwrite != devnull_fd:
+ os.close(c2pwrite)
+ if errwrite != -1 and errread != -1 and errwrite != devnull_fd:
+ os.close(errwrite)
+ if devnull_fd is not None:
+ os.close(devnull_fd)
+ # Prevent a double close of these fds from __init__ on error.
+ self._closed_child_pipe_fds = True
+
+ # Wait for exec to fail or succeed; possibly raising an
+ # exception (limited in size)
+ errpipe_data = bytearray()
+ while True:
+ part = os.read(errpipe_read, 50000)
+ errpipe_data += part
+ if not part or len(errpipe_data) > 50000:
+ break
+ finally:
+ # be sure the FD is closed no matter what
+ os.close(errpipe_read)
+
+ if errpipe_data:
+ try:
+ pid, sts = os.waitpid(self.pid, 0)
+ if pid == self.pid:
+ self._handle_exitstatus(sts)
+ else:
+ self.returncode = sys.maxsize
+ except ChildProcessError:
+ pass
+
+ try:
+ exception_name, hex_errno, err_msg = (
+ errpipe_data.split(b':', 2))
+ # The encoding here should match the encoding
+ # written in by the subprocess implementations
+ # like _posixsubprocess
+ err_msg = err_msg.decode()
+ except ValueError:
+ exception_name = b'SubprocessError'
+ hex_errno = b'0'
+ err_msg = 'Bad exception data from child: {!r}'.format(
+ bytes(errpipe_data))
+ child_exception_type = getattr(
+ builtins, exception_name.decode('ascii'),
+ SubprocessError)
+ if issubclass(child_exception_type, OSError) and hex_errno:
+ errno_num = int(hex_errno, 16)
+ child_exec_never_called = (err_msg == "noexec")
+ if child_exec_never_called:
+ err_msg = ""
+ # The error must be from chdir(cwd).
+ err_filename = cwd
+ else:
+ err_filename = orig_executable
+ if errno_num != 0:
+ err_msg = os.strerror(errno_num)
+ if errno_num == errno.ENOENT:
+ err_msg += ': ' + repr(err_filename)
+ raise child_exception_type(errno_num, err_msg, err_filename)
+ raise child_exception_type(err_msg)
+
+#JP Hack
+ def _handle_exitstatus(self, sts, _WIFSIGNALED=None,
+ _WTERMSIG=None, _WIFEXITED=None,
+ _WEXITSTATUS=None, _WIFSTOPPED=None,
+ _WSTOPSIG=None):
+ pass
+ '''
+ def _handle_exitstatus(self, sts, _WIFSIGNALED=edk2.WIFSIGNALED,
+ _WTERMSIG=edk2.WTERMSIG, _WIFEXITED=edk2.WIFEXITED,
+ _WEXITSTATUS=edk2.WEXITSTATUS, _WIFSTOPPED=edk2.WIFSTOPPED,
+ _WSTOPSIG=edk2.WSTOPSIG):
+ """All callers to this function MUST hold self._waitpid_lock."""
+ # This method is called (indirectly) by __del__, so it cannot
+ # refer to anything outside of its local scope.
+ if _WIFSIGNALED(sts):
+ self.returncode = -_WTERMSIG(sts)
+ elif _WIFEXITED(sts):
+ self.returncode = _WEXITSTATUS(sts)
+ elif _WIFSTOPPED(sts):
+ self.returncode = -_WSTOPSIG(sts)
+ else:
+ # Should never happen
+ raise SubprocessError("Unknown child exit status!")
+
+ def _internal_poll(self, _deadstate=None, _waitpid=os.waitpid,
+ _WNOHANG=os.WNOHANG, _ECHILD=errno.ECHILD):
+ """Check if child process has terminated. Returns returncode
+ attribute.
+
+ This method is called by __del__, so it cannot reference anything
+ outside of the local scope (nor can any methods it calls).
+
+ """
+ if self.returncode is None:
+ if not self._waitpid_lock.acquire(False):
+ # Something else is busy calling waitpid. Don't allow two
+ # at once. We know nothing yet.
+ return None
+ try:
+ if self.returncode is not None:
+ return self.returncode # Another thread waited.
+ pid, sts = _waitpid(self.pid, _WNOHANG)
+ if pid == self.pid:
+ self._handle_exitstatus(sts)
+ except OSError as e:
+ if _deadstate is not None:
+ self.returncode = _deadstate
+ elif e.errno == _ECHILD:
+ # This happens if SIGCLD is set to be ignored or
+ # waiting for child processes has otherwise been
+ # disabled for our process. This child is dead, we
+ # can't get the status.
+ # http://bugs.python.org/issue15756
+ self.returncode = 0
+ finally:
+ self._waitpid_lock.release()
+ return self.returncode
+ '''
+ def _internal_poll(self, _deadstate=None, _waitpid=None,
+ _WNOHANG=None, _ECHILD=None):
+ pass #JP Hack
+
+ def _try_wait(self, wait_flags):
+ """All callers to this function MUST hold self._waitpid_lock."""
+ try:
+ (pid, sts) = os.waitpid(self.pid, wait_flags)
+ except ChildProcessError:
+ # This happens if SIGCLD is set to be ignored or waiting
+ # for child processes has otherwise been disabled for our
+ # process. This child is dead, we can't get the status.
+ pid = self.pid
+ sts = 0
+ return (pid, sts)
+
+
+ def wait(self, timeout=None, endtime=None):
+ """Wait for child process to terminate. Returns returncode
+ attribute."""
+ if self.returncode is not None:
+ return self.returncode
+
+ if endtime is not None:
+ warnings.warn(
+ "'endtime' argument is deprecated; use 'timeout'.",
+ DeprecationWarning,
+ stacklevel=2)
+ if endtime is not None or timeout is not None:
+ if endtime is None:
+ endtime = _time() + timeout
+ elif timeout is None:
+ timeout = self._remaining_time(endtime)
+
+ if endtime is not None:
+ # Enter a busy loop if we have a timeout. This busy loop was
+ # cribbed from Lib/threading.py in Thread.wait() at r71065.
+ delay = 0.0005 # 500 us -> initial delay of 1 ms
+ while True:
+ if self._waitpid_lock.acquire(False):
+ try:
+ if self.returncode is not None:
+ break # Another thread waited.
+ (pid, sts) = self._try_wait(os.WNOHANG)
+ assert pid == self.pid or pid == 0
+ if pid == self.pid:
+ self._handle_exitstatus(sts)
+ break
+ finally:
+ self._waitpid_lock.release()
+ remaining = self._remaining_time(endtime)
+ if remaining <= 0:
+ raise TimeoutExpired(self.args, timeout)
+ delay = min(delay * 2, remaining, .05)
+ time.sleep(delay)
+ else:
+ while self.returncode is None:
+ with self._waitpid_lock:
+ if self.returncode is not None:
+ break # Another thread waited.
+ (pid, sts) = self._try_wait(0)
+ # Check the pid and loop as waitpid has been known to
+ # return 0 even without WNOHANG in odd situations.
+ # http://bugs.python.org/issue14396.
+ if pid == self.pid:
+ self._handle_exitstatus(sts)
+ return self.returncode
+
+
+ def _communicate(self, input, endtime, orig_timeout):
+ if self.stdin and not self._communication_started:
+ # Flush stdio buffer. This might block, if the user has
+ # been writing to .stdin in an uncontrolled fashion.
+ try:
+ self.stdin.flush()
+ except BrokenPipeError:
+ pass # communicate() must ignore BrokenPipeError.
+ if not input:
+ try:
+ self.stdin.close()
+ except BrokenPipeError:
+ pass # communicate() must ignore BrokenPipeError.
+
+ stdout = None
+ stderr = None
+
+ # Only create this mapping if we haven't already.
+ if not self._communication_started:
+ self._fileobj2output = {}
+ if self.stdout:
+ self._fileobj2output[self.stdout] = []
+ if self.stderr:
+ self._fileobj2output[self.stderr] = []
+
+ if self.stdout:
+ stdout = self._fileobj2output[self.stdout]
+ if self.stderr:
+ stderr = self._fileobj2output[self.stderr]
+
+ self._save_input(input)
+
+ if self._input:
+ input_view = memoryview(self._input)
+
+ with _PopenSelector() as selector:
+ if self.stdin and input:
+ selector.register(self.stdin, selectors.EVENT_WRITE)
+ if self.stdout:
+ selector.register(self.stdout, selectors.EVENT_READ)
+ if self.stderr:
+ selector.register(self.stderr, selectors.EVENT_READ)
+
+ while selector.get_map():
+ timeout = self._remaining_time(endtime)
+ if timeout is not None and timeout < 0:
+ raise TimeoutExpired(self.args, orig_timeout)
+
+ ready = selector.select(timeout)
+ self._check_timeout(endtime, orig_timeout)
+
+ # XXX Rewrite these to use non-blocking I/O on the file
+ # objects; they are no longer using C stdio!
+
+ for key, events in ready:
+ if key.fileobj is self.stdin:
+ chunk = input_view[self._input_offset :
+ self._input_offset + _PIPE_BUF]
+ try:
+ self._input_offset += os.write(key.fd, chunk)
+ except BrokenPipeError:
+ selector.unregister(key.fileobj)
+ key.fileobj.close()
+ else:
+ if self._input_offset >= len(self._input):
+ selector.unregister(key.fileobj)
+ key.fileobj.close()
+ elif key.fileobj in (self.stdout, self.stderr):
+ data = os.read(key.fd, 32768)
+ if not data:
+ selector.unregister(key.fileobj)
+ key.fileobj.close()
+ self._fileobj2output[key.fileobj].append(data)
+
+ self.wait(timeout=self._remaining_time(endtime))
+
+ # All data exchanged. Translate lists into strings.
+ if stdout is not None:
+ stdout = b''.join(stdout)
+ if stderr is not None:
+ stderr = b''.join(stderr)
+
+ # Translate newlines, if requested.
+ # This also turns bytes into strings.
+ if self.encoding or self.errors or self.universal_newlines:
+ if stdout is not None:
+ stdout = self._translate_newlines(stdout,
+ self.stdout.encoding,
+ self.stdout.errors)
+ if stderr is not None:
+ stderr = self._translate_newlines(stderr,
+ self.stderr.encoding,
+ self.stderr.errors)
+
+ return (stdout, stderr)
+
+
+ def _save_input(self, input):
+ # This method is called from the _communicate_with_*() methods
+ # so that if we time out while communicating, we can continue
+ # sending input if we retry.
+ if self.stdin and self._input is None:
+ self._input_offset = 0
+ self._input = input
+ if input is not None and (
+ self.encoding or self.errors or self.universal_newlines):
+ self._input = self._input.encode(self.stdin.encoding,
+ self.stdin.errors)
+
+
+ def send_signal(self, sig):
+ """Send a signal to the process."""
+ # Skip signalling a process that we know has already died.
+ if self.returncode is None:
+ os.kill(self.pid, sig)
+
+ def terminate(self):
+ """Terminate the process with SIGTERM
+ """
+ self.send_signal(signal.SIGTERM)
+
+ def kill(self):
+ """Kill the process with SIGKILL
+ """
+ self.send_signal(signal.SIGKILL)
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
new file mode 100644
index 00000000..77ed6666
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
@@ -0,0 +1,2060 @@
+"""
+Read and write ZIP files.
+
+XXX references to utf-8 need further investigation.
+"""
+import io
+import os
+import re
+import importlib.util
+import sys
+import time
+import stat
+import shutil
+import struct
+import binascii
+
+try:
+ import threading
+except ImportError:
+ import dummy_threading as threading
+
+try:
+ import zlib # We may need its compression method
+ crc32 = zlib.crc32
+except ImportError:
+ zlib = None
+ crc32 = binascii.crc32
+
+try:
+ import bz2 # We may need its compression method
+except ImportError:
+ bz2 = None
+
+try:
+ import lzma # We may need its compression method
+except ImportError:
+ lzma = None
+
+__all__ = ["BadZipFile", "BadZipfile", "error",
+ "ZIP_STORED", "ZIP_DEFLATED", "ZIP_BZIP2", "ZIP_LZMA",
+ "is_zipfile", "ZipInfo", "ZipFile", "PyZipFile", "LargeZipFile"]
+
+class BadZipFile(Exception):
+ pass
+
+
+class LargeZipFile(Exception):
+ """
+ Raised when a zipfile being written requires ZIP64 extensions
+ but those extensions are disabled.
+ """
+
+error = BadZipfile = BadZipFile # Pre-3.2 compatibility names
+
+
+ZIP64_LIMIT = (1 << 31) - 1
+ZIP_FILECOUNT_LIMIT = (1 << 16) - 1
+ZIP_MAX_COMMENT = (1 << 16) - 1
+
+# constants for Zip file compression methods
+ZIP_STORED = 0
+ZIP_DEFLATED = 8
+ZIP_BZIP2 = 12
+ZIP_LZMA = 14
+# Other ZIP compression methods not supported
+
+DEFAULT_VERSION = 20
+ZIP64_VERSION = 45
+BZIP2_VERSION = 46
+LZMA_VERSION = 63
+# we recognize (but not necessarily support) all features up to that version
+MAX_EXTRACT_VERSION = 63
+
+# Below are some formats and associated data for reading/writing headers using
+# the struct module. The names and structures of headers/records are those used
+# in the PKWARE description of the ZIP file format:
+# http://www.pkware.com/documents/casestudies/APPNOTE.TXT
+# (URL valid as of January 2008)
+
+# The "end of central directory" structure, magic number, size, and indices
+# (section V.I in the format document)
+structEndArchive = b"<4s4H2LH"
+stringEndArchive = b"PK\005\006"
+sizeEndCentDir = struct.calcsize(structEndArchive)
+
+_ECD_SIGNATURE = 0
+_ECD_DISK_NUMBER = 1
+_ECD_DISK_START = 2
+_ECD_ENTRIES_THIS_DISK = 3
+_ECD_ENTRIES_TOTAL = 4
+_ECD_SIZE = 5
+_ECD_OFFSET = 6
+_ECD_COMMENT_SIZE = 7
+# These last two indices are not part of the structure as defined in the
+# spec, but they are used internally by this module as a convenience
+_ECD_COMMENT = 8
+_ECD_LOCATION = 9
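+# Illustrative sketch (commented out) of how the indices above map onto the
+# unpacked end-of-central-directory record; 'fp' is a hypothetical binary
+# file object positioned at the EOCD signature.
+#
+#   raw = fp.read(sizeEndCentDir)
+#   ecd = list(struct.unpack(structEndArchive, raw))
+#   num_entries = ecd[_ECD_ENTRIES_TOTAL]
+#   cd_offset = ecd[_ECD_OFFSET]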
+
+# The "central directory" structure, magic number, size, and indices
+# of entries in the structure (section V.F in the format document)
+structCentralDir = "<4s4B4HL2L5H2L"
+stringCentralDir = b"PK\001\002"
+sizeCentralDir = struct.calcsize(structCentralDir)
+
+# indexes of entries in the central directory structure
+_CD_SIGNATURE = 0
+_CD_CREATE_VERSION = 1
+_CD_CREATE_SYSTEM = 2
+_CD_EXTRACT_VERSION = 3
+_CD_EXTRACT_SYSTEM = 4
+_CD_FLAG_BITS = 5
+_CD_COMPRESS_TYPE = 6
+_CD_TIME = 7
+_CD_DATE = 8
+_CD_CRC = 9
+_CD_COMPRESSED_SIZE = 10
+_CD_UNCOMPRESSED_SIZE = 11
+_CD_FILENAME_LENGTH = 12
+_CD_EXTRA_FIELD_LENGTH = 13
+_CD_COMMENT_LENGTH = 14
+_CD_DISK_NUMBER_START = 15
+_CD_INTERNAL_FILE_ATTRIBUTES = 16
+_CD_EXTERNAL_FILE_ATTRIBUTES = 17
+_CD_LOCAL_HEADER_OFFSET = 18
+
+# The "local file header" structure, magic number, size, and indices
+# (section V.A in the format document)
+structFileHeader = "<4s2B4HL2L2H"
+stringFileHeader = b"PK\003\004"
+sizeFileHeader = struct.calcsize(structFileHeader)
+
+_FH_SIGNATURE = 0
+_FH_EXTRACT_VERSION = 1
+_FH_EXTRACT_SYSTEM = 2
+_FH_GENERAL_PURPOSE_FLAG_BITS = 3
+_FH_COMPRESSION_METHOD = 4
+_FH_LAST_MOD_TIME = 5
+_FH_LAST_MOD_DATE = 6
+_FH_CRC = 7
+_FH_COMPRESSED_SIZE = 8
+_FH_UNCOMPRESSED_SIZE = 9
+_FH_FILENAME_LENGTH = 10
+_FH_EXTRA_FIELD_LENGTH = 11
+
+# The "Zip64 end of central directory locator" structure, magic number, and size
+structEndArchive64Locator = "<4sLQL"
+stringEndArchive64Locator = b"PK\x06\x07"
+sizeEndCentDir64Locator = struct.calcsize(structEndArchive64Locator)
+
+# The "Zip64 end of central directory" record, magic number, size, and indices
+# (section V.G in the format document)
+structEndArchive64 = "<4sQ2H2L4Q"
+stringEndArchive64 = b"PK\x06\x06"
+sizeEndCentDir64 = struct.calcsize(structEndArchive64)
+
+_CD64_SIGNATURE = 0
+_CD64_DIRECTORY_RECSIZE = 1
+_CD64_CREATE_VERSION = 2
+_CD64_EXTRACT_VERSION = 3
+_CD64_DISK_NUMBER = 4
+_CD64_DISK_NUMBER_START = 5
+_CD64_NUMBER_ENTRIES_THIS_DISK = 6
+_CD64_NUMBER_ENTRIES_TOTAL = 7
+_CD64_DIRECTORY_SIZE = 8
+_CD64_OFFSET_START_CENTDIR = 9
+
+_DD_SIGNATURE = 0x08074b50
+
+_EXTRA_FIELD_STRUCT = struct.Struct('<HH')
+
+def _strip_extra(extra, xids):
+ # Remove Extra Fields with specified IDs.
+ unpack = _EXTRA_FIELD_STRUCT.unpack
+ modified = False
+ buffer = []
+ start = i = 0
+ while i + 4 <= len(extra):
+ xid, xlen = unpack(extra[i : i + 4])
+ j = i + 4 + xlen
+ if xid in xids:
+ if i != start:
+ buffer.append(extra[start : i])
+ start = j
+ modified = True
+ i = j
+ if not modified:
+ return extra
+ return b''.join(buffer)
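+# Illustrative sketch (commented out) of _strip_extra(); 'zinfo' is a
+# hypothetical ZipInfo. Header ID 1 is the ZIP64 extra field, which
+# _write_end_record() strips before appending a freshly packed one.
+#
+#   cleaned = _strip_extra(zinfo.extra, (1,))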
+
+def _check_zipfile(fp):
+ try:
+ if _EndRecData(fp):
+ return True # file has correct magic number
+ except OSError:
+ pass
+ return False
+
+def is_zipfile(filename):
+ """Quickly see if a file is a ZIP file by checking the magic number.
+
+ The filename argument may be a file or file-like object too.
+ """
+ result = False
+ try:
+ if hasattr(filename, "read"):
+ result = _check_zipfile(fp=filename)
+ else:
+ with open(filename, "rb") as fp:
+ result = _check_zipfile(fp)
+ except OSError:
+ pass
+ return result
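+# Illustrative sketch (commented out); 'archive.zip' is hypothetical. Both a
+# path and an already-open binary file object are accepted.
+#
+#   is_zipfile('archive.zip')
+#   with open('archive.zip', 'rb') as fp:
+#       is_zipfile(fp)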
+
+def _EndRecData64(fpin, offset, endrec):
+ """
+ Read the ZIP64 end-of-archive records and use that to update endrec
+ """
+ try:
+ fpin.seek(offset - sizeEndCentDir64Locator, 2)
+ except OSError:
+ # If the seek fails, the file is not large enough to contain a ZIP64
+ # end-of-archive record, so just return the end record we were given.
+ return endrec
+
+ data = fpin.read(sizeEndCentDir64Locator)
+ if len(data) != sizeEndCentDir64Locator:
+ return endrec
+ sig, diskno, reloff, disks = struct.unpack(structEndArchive64Locator, data)
+ if sig != stringEndArchive64Locator:
+ return endrec
+
+ if diskno != 0 or disks != 1:
+ raise BadZipFile("zipfiles that span multiple disks are not supported")
+
+ # Assume no 'zip64 extensible data'
+ fpin.seek(offset - sizeEndCentDir64Locator - sizeEndCentDir64, 2)
+ data = fpin.read(sizeEndCentDir64)
+ if len(data) != sizeEndCentDir64:
+ return endrec
+ sig, sz, create_version, read_version, disk_num, disk_dir, \
+ dircount, dircount2, dirsize, diroffset = \
+ struct.unpack(structEndArchive64, data)
+ if sig != stringEndArchive64:
+ return endrec
+
+ # Update the original endrec using data from the ZIP64 record
+ endrec[_ECD_SIGNATURE] = sig
+ endrec[_ECD_DISK_NUMBER] = disk_num
+ endrec[_ECD_DISK_START] = disk_dir
+ endrec[_ECD_ENTRIES_THIS_DISK] = dircount
+ endrec[_ECD_ENTRIES_TOTAL] = dircount2
+ endrec[_ECD_SIZE] = dirsize
+ endrec[_ECD_OFFSET] = diroffset
+ return endrec
+
+
+def _EndRecData(fpin):
+ """Return data from the "End of Central Directory" record, or None.
+
+ The data is a list of the nine items in the ZIP "End of central dir"
+ record followed by a tenth item, the file seek offset of this record."""
+
+ # Determine file size
+ fpin.seek(0, 2)
+ filesize = fpin.tell()
+
+ # Check to see if this is ZIP file with no archive comment (the
+ # "end of central directory" structure should be the last item in the
+ # file if this is the case).
+ try:
+ fpin.seek(-sizeEndCentDir, 2)
+ except OSError:
+ return None
+ data = fpin.read()
+ if (len(data) == sizeEndCentDir and
+ data[0:4] == stringEndArchive and
+ data[-2:] == b"\000\000"):
+ # the signature is correct and there's no comment, unpack structure
+ endrec = struct.unpack(structEndArchive, data)
+        endrec = list(endrec)
+
+ # Append a blank comment and record start offset
+ endrec.append(b"")
+ endrec.append(filesize - sizeEndCentDir)
+
+ # Try to read the "Zip64 end of central directory" structure
+ return _EndRecData64(fpin, -sizeEndCentDir, endrec)
+
+ # Either this is not a ZIP file, or it is a ZIP file with an archive
+ # comment. Search the end of the file for the "end of central directory"
+ # record signature. The comment is the last item in the ZIP file and may be
+ # up to 64K long. It is assumed that the "end of central directory" magic
+ # number does not appear in the comment.
+ maxCommentStart = max(filesize - (1 << 16) - sizeEndCentDir, 0)
+ fpin.seek(maxCommentStart, 0)
+ data = fpin.read()
+ start = data.rfind(stringEndArchive)
+ if start >= 0:
+ # found the magic number; attempt to unpack and interpret
+ recData = data[start:start+sizeEndCentDir]
+ if len(recData) != sizeEndCentDir:
+ # Zip file is corrupted.
+ return None
+ endrec = list(struct.unpack(structEndArchive, recData))
+ commentSize = endrec[_ECD_COMMENT_SIZE] #as claimed by the zip file
+ comment = data[start+sizeEndCentDir:start+sizeEndCentDir+commentSize]
+ endrec.append(comment)
+ endrec.append(maxCommentStart + start)
+
+ # Try to read the "Zip64 end of central directory" structure
+ return _EndRecData64(fpin, maxCommentStart + start - filesize,
+ endrec)
+
+ # Unable to find a valid end of central directory structure
+ return None
+
+
+class ZipInfo (object):
+ """Class with attributes describing each file in the ZIP archive."""
+
+ __slots__ = (
+ 'orig_filename',
+ 'filename',
+ 'date_time',
+ 'compress_type',
+ 'comment',
+ 'extra',
+ 'create_system',
+ 'create_version',
+ 'extract_version',
+ 'reserved',
+ 'flag_bits',
+ 'volume',
+ 'internal_attr',
+ 'external_attr',
+ 'header_offset',
+ 'CRC',
+ 'compress_size',
+ 'file_size',
+ '_raw_time',
+ )
+
+ def __init__(self, filename="NoName", date_time=(1980,1,1,0,0,0)):
+ self.orig_filename = filename # Original file name in archive
+
+ # Terminate the file name at the first null byte. Null bytes in file
+ # names are used as tricks by viruses in archives.
+ null_byte = filename.find(chr(0))
+ if null_byte >= 0:
+ filename = filename[0:null_byte]
+ # This is used to ensure paths in generated ZIP files always use
+ # forward slashes as the directory separator, as required by the
+ # ZIP format specification.
+ if os.sep != "/" and os.sep in filename:
+ filename = filename.replace(os.sep, "/")
+
+ self.filename = filename # Normalized file name
+ self.date_time = date_time # year, month, day, hour, min, sec
+
+ if date_time[0] < 1980:
+ raise ValueError('ZIP does not support timestamps before 1980')
+
+ # Standard values:
+ self.compress_type = ZIP_STORED # Type of compression for the file
+ self.comment = b"" # Comment for each file
+ self.extra = b"" # ZIP extra data
+ if sys.platform == 'win32':
+ self.create_system = 0 # System which created ZIP archive
+ else:
+ # Assume everything else is unix-y
+ self.create_system = 3 # System which created ZIP archive
+ self.create_version = DEFAULT_VERSION # Version which created ZIP archive
+ self.extract_version = DEFAULT_VERSION # Version needed to extract archive
+ self.reserved = 0 # Must be zero
+ self.flag_bits = 0 # ZIP flag bits
+ self.volume = 0 # Volume number of file header
+ self.internal_attr = 0 # Internal attributes
+ self.external_attr = 0 # External file attributes
+ # Other attributes are set by class ZipFile:
+ # header_offset Byte offset to the file header
+ # CRC CRC-32 of the uncompressed file
+ # compress_size Size of the compressed file
+ # file_size Size of the uncompressed file
+
+ def __repr__(self):
+ result = ['<%s filename=%r' % (self.__class__.__name__, self.filename)]
+ if self.compress_type != ZIP_STORED:
+ result.append(' compress_type=%s' %
+ compressor_names.get(self.compress_type,
+ self.compress_type))
+ hi = self.external_attr >> 16
+ lo = self.external_attr & 0xFFFF
+ if hi:
+ result.append(' filemode=%r' % stat.filemode(hi))
+ if lo:
+ result.append(' external_attr=%#x' % lo)
+ isdir = self.is_dir()
+ if not isdir or self.file_size:
+ result.append(' file_size=%r' % self.file_size)
+ if ((not isdir or self.compress_size) and
+ (self.compress_type != ZIP_STORED or
+ self.file_size != self.compress_size)):
+ result.append(' compress_size=%r' % self.compress_size)
+ result.append('>')
+ return ''.join(result)
+
+ def FileHeader(self, zip64=None):
+ """Return the per-file header as a bytes object."""
+ dt = self.date_time
+ dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
+ dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
+ if self.flag_bits & 0x08:
+ # Set these to zero because we write them after the file data
+ CRC = compress_size = file_size = 0
+ else:
+ CRC = self.CRC
+ compress_size = self.compress_size
+ file_size = self.file_size
+
+ extra = self.extra
+
+ min_version = 0
+ if zip64 is None:
+ zip64 = file_size > ZIP64_LIMIT or compress_size > ZIP64_LIMIT
+ if zip64:
+ fmt = '<HHQQ'
+ extra = extra + struct.pack(fmt,
+ 1, struct.calcsize(fmt)-4, file_size, compress_size)
+ if file_size > ZIP64_LIMIT or compress_size > ZIP64_LIMIT:
+ if not zip64:
+ raise LargeZipFile("Filesize would require ZIP64 extensions")
+ # File is larger than what fits into a 4 byte integer,
+ # fall back to the ZIP64 extension
+ file_size = 0xffffffff
+ compress_size = 0xffffffff
+ min_version = ZIP64_VERSION
+
+ if self.compress_type == ZIP_BZIP2:
+ min_version = max(BZIP2_VERSION, min_version)
+ elif self.compress_type == ZIP_LZMA:
+ min_version = max(LZMA_VERSION, min_version)
+
+ self.extract_version = max(min_version, self.extract_version)
+ self.create_version = max(min_version, self.create_version)
+ filename, flag_bits = self._encodeFilenameFlags()
+ header = struct.pack(structFileHeader, stringFileHeader,
+ self.extract_version, self.reserved, flag_bits,
+ self.compress_type, dostime, dosdate, CRC,
+ compress_size, file_size,
+ len(filename), len(extra))
+ return header + filename + extra
+
+ def _encodeFilenameFlags(self):
+ try:
+ return self.filename.encode('ascii'), self.flag_bits
+ except UnicodeEncodeError:
+ return self.filename.encode('utf-8'), self.flag_bits | 0x800
+
+ def _decodeExtra(self):
+ # Try to decode the extra field.
+ extra = self.extra
+ unpack = struct.unpack
+ while len(extra) >= 4:
+ tp, ln = unpack('<HH', extra[:4])
+ if tp == 1:
+ if ln >= 24:
+ counts = unpack('<QQQ', extra[4:28])
+ elif ln == 16:
+ counts = unpack('<QQ', extra[4:20])
+ elif ln == 8:
+ counts = unpack('<Q', extra[4:12])
+ elif ln == 0:
+ counts = ()
+ else:
+ raise BadZipFile("Corrupt extra field %04x (size=%d)" % (tp, ln))
+
+ idx = 0
+
+ # ZIP64 extension (large files and/or large archives)
+ if self.file_size in (0xffffffffffffffff, 0xffffffff):
+ self.file_size = counts[idx]
+ idx += 1
+
+ if self.compress_size == 0xFFFFFFFF:
+ self.compress_size = counts[idx]
+ idx += 1
+
+ if self.header_offset == 0xffffffff:
+ old = self.header_offset
+ self.header_offset = counts[idx]
+                    idx += 1
+
+ extra = extra[ln+4:]
+
+ @classmethod
+ def from_file(cls, filename, arcname=None):
+ """Construct an appropriate ZipInfo for a file on the filesystem.
+
+ filename should be the path to a file or directory on the filesystem.
+
+ arcname is the name which it will have within the archive (by default,
+ this will be the same as filename, but without a drive letter and with
+ leading path separators removed).
+ """
+ if isinstance(filename, os.PathLike):
+ filename = os.fspath(filename)
+ st = os.stat(filename)
+ isdir = stat.S_ISDIR(st.st_mode)
+ mtime = time.localtime(st.st_mtime)
+ date_time = mtime[0:6]
+ # Create ZipInfo instance to store file information
+ if arcname is None:
+ arcname = filename
+ arcname = os.path.normpath(os.path.splitdrive(arcname)[1])
+ while arcname[0] in (os.sep, os.altsep):
+ arcname = arcname[1:]
+ if isdir:
+ arcname += '/'
+ zinfo = cls(arcname, date_time)
+ zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes
+ if isdir:
+ zinfo.file_size = 0
+ zinfo.external_attr |= 0x10 # MS-DOS directory flag
+ else:
+ zinfo.file_size = st.st_size
+
+ return zinfo
+
+ def is_dir(self):
+ """Return True if this archive member is a directory."""
+ return self.filename[-1] == '/'
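+    # Illustrative sketch (commented out) of building a ZipInfo from the
+    # filesystem; 'setup.py' and the arcname are hypothetical.
+    #
+    #   zi = ZipInfo.from_file('setup.py', arcname='pkg/setup.py')
+    #   zi.compress_type = ZIP_DEFLATED      # only if zlib is available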
+
+
+class _ZipDecrypter:
+ """Class to handle decryption of files stored within a ZIP archive.
+
+ ZIP supports a password-based form of encryption. Even though known
+ plaintext attacks have been found against it, it is still useful
+ to be able to get data out of such a file.
+
+ Usage:
+ zd = _ZipDecrypter(mypwd)
+ plain_char = zd(cypher_char)
+ plain_text = map(zd, cypher_text)
+ """
+
+ def _GenerateCRCTable():
+ """Generate a CRC-32 table.
+
+ ZIP encryption uses the CRC32 one-byte primitive for scrambling some
+ internal keys. We noticed that a direct implementation is faster than
+ relying on binascii.crc32().
+ """
+ poly = 0xedb88320
+ table = [0] * 256
+ for i in range(256):
+ crc = i
+ for j in range(8):
+ if crc & 1:
+ crc = ((crc >> 1) & 0x7FFFFFFF) ^ poly
+ else:
+ crc = ((crc >> 1) & 0x7FFFFFFF)
+ table[i] = crc
+ return table
+ crctable = None
+
+ def _crc32(self, ch, crc):
+ """Compute the CRC32 primitive on one byte."""
+ return ((crc >> 8) & 0xffffff) ^ self.crctable[(crc ^ ch) & 0xff]
+
+ def __init__(self, pwd):
+ if _ZipDecrypter.crctable is None:
+ _ZipDecrypter.crctable = _ZipDecrypter._GenerateCRCTable()
+ self.key0 = 305419896
+ self.key1 = 591751049
+ self.key2 = 878082192
+ for p in pwd:
+ self._UpdateKeys(p)
+
+ def _UpdateKeys(self, c):
+ self.key0 = self._crc32(c, self.key0)
+ self.key1 = (self.key1 + (self.key0 & 255)) & 4294967295
+ self.key1 = (self.key1 * 134775813 + 1) & 4294967295
+ self.key2 = self._crc32((self.key1 >> 24) & 255, self.key2)
+
+ def __call__(self, c):
+ """Decrypt a single character."""
+ assert isinstance(c, int)
+ k = self.key2 | 2
+ c = c ^ (((k * (k^1)) >> 8) & 255)
+ self._UpdateKeys(c)
+ return c
+
+
+class LZMACompressor:
+
+ def __init__(self):
+ self._comp = None
+
+ def _init(self):
+ props = lzma._encode_filter_properties({'id': lzma.FILTER_LZMA1})
+ self._comp = lzma.LZMACompressor(lzma.FORMAT_RAW, filters=[
+ lzma._decode_filter_properties(lzma.FILTER_LZMA1, props)
+ ])
+ return struct.pack('<BBH', 9, 4, len(props)) + props
+
+ def compress(self, data):
+ if self._comp is None:
+ return self._init() + self._comp.compress(data)
+ return self._comp.compress(data)
+
+ def flush(self):
+ if self._comp is None:
+ return self._init() + self._comp.flush()
+ return self._comp.flush()
+
+
+class LZMADecompressor:
+
+ def __init__(self):
+ self._decomp = None
+ self._unconsumed = b''
+ self.eof = False
+
+ def decompress(self, data):
+ if self._decomp is None:
+ self._unconsumed += data
+ if len(self._unconsumed) <= 4:
+ return b''
+ psize, = struct.unpack('<H', self._unconsumed[2:4])
+ if len(self._unconsumed) <= 4 + psize:
+ return b''
+
+ self._decomp = lzma.LZMADecompressor(lzma.FORMAT_RAW, filters=[
+ lzma._decode_filter_properties(lzma.FILTER_LZMA1,
+ self._unconsumed[4:4 + psize])
+ ])
+ data = self._unconsumed[4 + psize:]
+ del self._unconsumed
+
+ result = self._decomp.decompress(data)
+ self.eof = self._decomp.eof
+ return result
+
+
+compressor_names = {
+ 0: 'store',
+ 1: 'shrink',
+ 2: 'reduce',
+ 3: 'reduce',
+ 4: 'reduce',
+ 5: 'reduce',
+ 6: 'implode',
+ 7: 'tokenize',
+ 8: 'deflate',
+ 9: 'deflate64',
+ 10: 'implode',
+ 12: 'bzip2',
+ 14: 'lzma',
+ 18: 'terse',
+ 19: 'lz77',
+ 97: 'wavpack',
+ 98: 'ppmd',
+}
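+# Illustrative sketch (commented out) of how the helpers below pair a method
+# constant with a (de)compressor object.
+#
+#   _check_compression(ZIP_DEFLATED)          # raises if zlib is missing
+#   comp = _get_compressor(ZIP_DEFLATED)      # zlib.compressobj(..., -15)
+#   decomp = _get_decompressor(ZIP_DEFLATED)  # zlib.decompressobj(-15)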
+
+def _check_compression(compression):
+ if compression == ZIP_STORED:
+ pass
+ elif compression == ZIP_DEFLATED:
+ if not zlib:
+ raise RuntimeError(
+ "Compression requires the (missing) zlib module")
+ elif compression == ZIP_BZIP2:
+ if not bz2:
+ raise RuntimeError(
+ "Compression requires the (missing) bz2 module")
+ elif compression == ZIP_LZMA:
+ if not lzma:
+ raise RuntimeError(
+ "Compression requires the (missing) lzma module")
+ else:
+ raise NotImplementedError("That compression method is not supported")
+
+
+def _get_compressor(compress_type):
+ if compress_type == ZIP_DEFLATED:
+ return zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
+ zlib.DEFLATED, -15)
+ elif compress_type == ZIP_BZIP2:
+ return bz2.BZ2Compressor()
+ elif compress_type == ZIP_LZMA:
+ return LZMACompressor()
+ else:
+ return None
+
+
+def _get_decompressor(compress_type):
+ if compress_type == ZIP_STORED:
+ return None
+ elif compress_type == ZIP_DEFLATED:
+ return zlib.decompressobj(-15)
+ elif compress_type == ZIP_BZIP2:
+ return bz2.BZ2Decompressor()
+ elif compress_type == ZIP_LZMA:
+ return LZMADecompressor()
+ else:
+ descr = compressor_names.get(compress_type)
+ if descr:
+ raise NotImplementedError("compression type %d (%s)" % (compress_type, descr))
+ else:
+ raise NotImplementedError("compression type %d" % (compress_type,))
+
+
+class _SharedFile:
+ def __init__(self, file, pos, close, lock, writing):
+ self._file = file
+ self._pos = pos
+ self._close = close
+ self._lock = lock
+ self._writing = writing
+
+ def read(self, n=-1):
+ with self._lock:
+ if self._writing():
+ raise ValueError("Can't read from the ZIP file while there "
+ "is an open writing handle on it. "
+ "Close the writing handle before trying to read.")
+ self._file.seek(self._pos)
+ data = self._file.read(n)
+ self._pos = self._file.tell()
+ return data
+
+ def close(self):
+ if self._file is not None:
+ fileobj = self._file
+ self._file = None
+ self._close(fileobj)
+
+# Provide the tell method for unseekable stream
+class _Tellable:
+ def __init__(self, fp):
+ self.fp = fp
+ self.offset = 0
+
+ def write(self, data):
+ n = self.fp.write(data)
+ self.offset += n
+ return n
+
+ def tell(self):
+ return self.offset
+
+ def flush(self):
+ self.fp.flush()
+
+ def close(self):
+ self.fp.close()
+
+
+class ZipExtFile(io.BufferedIOBase):
+ """File-like object for reading an archive member.
+ Is returned by ZipFile.open().
+ """
+
+ # Max size supported by decompressor.
+ MAX_N = 1 << 31 - 1
+
+ # Read from compressed files in 4k blocks.
+ MIN_READ_SIZE = 4096
+
+ def __init__(self, fileobj, mode, zipinfo, decrypter=None,
+ close_fileobj=False):
+ self._fileobj = fileobj
+ self._decrypter = decrypter
+ self._close_fileobj = close_fileobj
+
+ self._compress_type = zipinfo.compress_type
+ self._compress_left = zipinfo.compress_size
+ self._left = zipinfo.file_size
+
+ self._decompressor = _get_decompressor(self._compress_type)
+
+ self._eof = False
+ self._readbuffer = b''
+ self._offset = 0
+
+ self.newlines = None
+
+ # Adjust read size for encrypted files since the first 12 bytes
+ # are for the encryption/password information.
+ if self._decrypter is not None:
+ self._compress_left -= 12
+
+ self.mode = mode
+ self.name = zipinfo.filename
+
+ if hasattr(zipinfo, 'CRC'):
+ self._expected_crc = zipinfo.CRC
+ self._running_crc = crc32(b'')
+ else:
+ self._expected_crc = None
+
+ def __repr__(self):
+ result = ['<%s.%s' % (self.__class__.__module__,
+ self.__class__.__qualname__)]
+ if not self.closed:
+ result.append(' name=%r mode=%r' % (self.name, self.mode))
+ if self._compress_type != ZIP_STORED:
+ result.append(' compress_type=%s' %
+ compressor_names.get(self._compress_type,
+ self._compress_type))
+ else:
+ result.append(' [closed]')
+ result.append('>')
+ return ''.join(result)
+
+ def readline(self, limit=-1):
+ """Read and return a line from the stream.
+
+ If limit is specified, at most limit bytes will be read.
+ """
+
+ if limit < 0:
+ # Shortcut common case - newline found in buffer.
+ i = self._readbuffer.find(b'\n', self._offset) + 1
+ if i > 0:
+ line = self._readbuffer[self._offset: i]
+ self._offset = i
+ return line
+
+ return io.BufferedIOBase.readline(self, limit)
+
+ def peek(self, n=1):
+ """Returns buffered bytes without advancing the position."""
+ if n > len(self._readbuffer) - self._offset:
+ chunk = self.read(n)
+ if len(chunk) > self._offset:
+ self._readbuffer = chunk + self._readbuffer[self._offset:]
+ self._offset = 0
+ else:
+ self._offset -= len(chunk)
+
+ # Return up to 512 bytes to reduce allocation overhead for tight loops.
+ return self._readbuffer[self._offset: self._offset + 512]
+
+ def readable(self):
+ return True
+
+ def read(self, n=-1):
+ """Read and return up to n bytes.
+        If the argument is omitted, None, or negative, data is read and
+        returned until EOF is reached.
+ """
+ if n is None or n < 0:
+ buf = self._readbuffer[self._offset:]
+ self._readbuffer = b''
+ self._offset = 0
+ while not self._eof:
+ buf += self._read1(self.MAX_N)
+ return buf
+
+ end = n + self._offset
+ if end < len(self._readbuffer):
+ buf = self._readbuffer[self._offset:end]
+ self._offset = end
+ return buf
+
+ n = end - len(self._readbuffer)
+ buf = self._readbuffer[self._offset:]
+ self._readbuffer = b''
+ self._offset = 0
+ while n > 0 and not self._eof:
+ data = self._read1(n)
+ if n < len(data):
+ self._readbuffer = data
+ self._offset = n
+ buf += data[:n]
+ break
+ buf += data
+ n -= len(data)
+ return buf
+
+ def _update_crc(self, newdata):
+ # Update the CRC using the given data.
+ if self._expected_crc is None:
+ # No need to compute the CRC if we don't have a reference value
+ return
+ self._running_crc = crc32(newdata, self._running_crc)
+ # Check the CRC if we're at the end of the file
+ if self._eof and self._running_crc != self._expected_crc:
+ raise BadZipFile("Bad CRC-32 for file %r" % self.name)
+
+ def read1(self, n):
+ """Read up to n bytes with at most one read() system call."""
+
+ if n is None or n < 0:
+ buf = self._readbuffer[self._offset:]
+ self._readbuffer = b''
+ self._offset = 0
+ while not self._eof:
+ data = self._read1(self.MAX_N)
+ if data:
+ buf += data
+ break
+ return buf
+
+ end = n + self._offset
+ if end < len(self._readbuffer):
+ buf = self._readbuffer[self._offset:end]
+ self._offset = end
+ return buf
+
+ n = end - len(self._readbuffer)
+ buf = self._readbuffer[self._offset:]
+ self._readbuffer = b''
+ self._offset = 0
+ if n > 0:
+ while not self._eof:
+ data = self._read1(n)
+ if n < len(data):
+ self._readbuffer = data
+ self._offset = n
+ buf += data[:n]
+ break
+ if data:
+ buf += data
+ break
+ return buf
+
+ def _read1(self, n):
+ # Read up to n compressed bytes with at most one read() system call,
+ # decrypt and decompress them.
+ if self._eof or n <= 0:
+ return b''
+
+ # Read from file.
+ if self._compress_type == ZIP_DEFLATED:
+ ## Handle unconsumed data.
+ data = self._decompressor.unconsumed_tail
+ if n > len(data):
+ data += self._read2(n - len(data))
+ else:
+ data = self._read2(n)
+
+ if self._compress_type == ZIP_STORED:
+ self._eof = self._compress_left <= 0
+ elif self._compress_type == ZIP_DEFLATED:
+ n = max(n, self.MIN_READ_SIZE)
+ data = self._decompressor.decompress(data, n)
+ self._eof = (self._decompressor.eof or
+ self._compress_left <= 0 and
+ not self._decompressor.unconsumed_tail)
+ if self._eof:
+ data += self._decompressor.flush()
+ else:
+ data = self._decompressor.decompress(data)
+ self._eof = self._decompressor.eof or self._compress_left <= 0
+
+ data = data[:self._left]
+ self._left -= len(data)
+ if self._left <= 0:
+ self._eof = True
+ self._update_crc(data)
+ return data
+
+ def _read2(self, n):
+ if self._compress_left <= 0:
+ return b''
+
+ n = max(n, self.MIN_READ_SIZE)
+ n = min(n, self._compress_left)
+
+ data = self._fileobj.read(n)
+ self._compress_left -= len(data)
+ if not data:
+ raise EOFError
+
+ if self._decrypter is not None:
+ data = bytes(map(self._decrypter, data))
+ return data
+
+ def close(self):
+ try:
+ if self._close_fileobj:
+ self._fileobj.close()
+ finally:
+ super().close()
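+    # Illustrative sketch (commented out) of streaming a member through the
+    # ZipExtFile returned by ZipFile.open(); the archive, member name and
+    # process() consumer are hypothetical.
+    #
+    #   with ZipFile('archive.zip') as zf:
+    #       with zf.open('member.txt') as member:
+    #           for chunk in iter(lambda: member.read(16384), b''):
+    #               process(chunk)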
+
+
+class _ZipWriteFile(io.BufferedIOBase):
+ def __init__(self, zf, zinfo, zip64):
+ self._zinfo = zinfo
+ self._zip64 = zip64
+ self._zipfile = zf
+ self._compressor = _get_compressor(zinfo.compress_type)
+ self._file_size = 0
+ self._compress_size = 0
+ self._crc = 0
+
+ @property
+ def _fileobj(self):
+ return self._zipfile.fp
+
+ def writable(self):
+ return True
+
+ def write(self, data):
+ if self.closed:
+ raise ValueError('I/O operation on closed file.')
+ nbytes = len(data)
+ self._file_size += nbytes
+ self._crc = crc32(data, self._crc)
+ if self._compressor:
+ data = self._compressor.compress(data)
+ self._compress_size += len(data)
+ self._fileobj.write(data)
+ return nbytes
+
+ def close(self):
+ if self.closed:
+ return
+ super().close()
+ # Flush any data from the compressor, and update header info
+ if self._compressor:
+ buf = self._compressor.flush()
+ self._compress_size += len(buf)
+ self._fileobj.write(buf)
+ self._zinfo.compress_size = self._compress_size
+ else:
+ self._zinfo.compress_size = self._file_size
+ self._zinfo.CRC = self._crc
+ self._zinfo.file_size = self._file_size
+
+ # Write updated header info
+ if self._zinfo.flag_bits & 0x08:
+ # Write CRC and file sizes after the file data
+ fmt = '<LLQQ' if self._zip64 else '<LLLL'
+ self._fileobj.write(struct.pack(fmt, _DD_SIGNATURE, self._zinfo.CRC,
+ self._zinfo.compress_size, self._zinfo.file_size))
+ self._zipfile.start_dir = self._fileobj.tell()
+ else:
+ if not self._zip64:
+ if self._file_size > ZIP64_LIMIT:
+ raise RuntimeError('File size unexpectedly exceeded ZIP64 '
+ 'limit')
+ if self._compress_size > ZIP64_LIMIT:
+ raise RuntimeError('Compressed size unexpectedly exceeded '
+ 'ZIP64 limit')
+ # Seek backwards and write file header (which will now include
+ # correct CRC and file sizes)
+
+ # Preserve current position in file
+ self._zipfile.start_dir = self._fileobj.tell()
+ self._fileobj.seek(self._zinfo.header_offset)
+ self._fileobj.write(self._zinfo.FileHeader(self._zip64))
+ self._fileobj.seek(self._zipfile.start_dir)
+
+ self._zipfile._writing = False
+
+ # Successfully written: Add file to our caches
+ self._zipfile.filelist.append(self._zinfo)
+ self._zipfile.NameToInfo[self._zinfo.filename] = self._zinfo
+
+class ZipFile:
+ """ Class with methods to open, read, write, close, list zip files.
+
+ z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=True)
+
+ file: Either the path to the file, or a file-like object.
+ If it is a path, the file will be opened and closed by ZipFile.
+ mode: The mode can be either read 'r', write 'w', exclusive create 'x',
+ or append 'a'.
+ compression: ZIP_STORED (no compression), ZIP_DEFLATED (requires zlib),
+ ZIP_BZIP2 (requires bz2) or ZIP_LZMA (requires lzma).
+ allowZip64: if True ZipFile will create files with ZIP64 extensions when
+ needed, otherwise it will raise an exception when this would
+ be necessary.
+
+ """
+
+ fp = None # Set here since __del__ checks it
+ _windows_illegal_name_trans_table = None
+
+ def __init__(self, file, mode="r", compression=ZIP_STORED, allowZip64=True):
+ """Open the ZIP file with mode read 'r', write 'w', exclusive create 'x',
+ or append 'a'."""
+ if mode not in ('r', 'w', 'x', 'a'):
+ raise ValueError("ZipFile requires mode 'r', 'w', 'x', or 'a'")
+
+ _check_compression(compression)
+
+ self._allowZip64 = allowZip64
+ self._didModify = False
+ self.debug = 0 # Level of printing: 0 through 3
+ self.NameToInfo = {} # Find file info given name
+ self.filelist = [] # List of ZipInfo instances for archive
+ self.compression = compression # Method of compression
+ self.mode = mode
+ self.pwd = None
+ self._comment = b''
+
+ # Check if we were passed a file-like object
+# if isinstance(file, os.PathLike):
+# file = os.fspath(file)
+ if isinstance(file, str):
+ # No, it's a filename
+ self._filePassed = 0
+ self.filename = file
+ modeDict = {'r' : 'rb', 'w': 'w+b', 'x': 'x+b', 'a' : 'r+b',
+ 'r+b': 'w+b', 'w+b': 'wb', 'x+b': 'xb'}
+ filemode = modeDict[mode]
+ while True:
+ try:
+ self.fp = io.open(file, filemode)
+ except OSError:
+ if filemode in modeDict:
+ filemode = modeDict[filemode]
+ continue
+ raise
+ break
+ else:
+ self._filePassed = 1
+ self.fp = file
+ self.filename = getattr(file, 'name', None)
+ self._fileRefCnt = 1
+ self._lock = threading.RLock()
+ self._seekable = True
+ self._writing = False
+
+ try:
+ if mode == 'r':
+ self._RealGetContents()
+ elif mode in ('w', 'x'):
+ # set the modified flag so central directory gets written
+ # even if no files are added to the archive
+ self._didModify = True
+ try:
+ self.start_dir = self.fp.tell()
+ except (AttributeError, OSError):
+ self.fp = _Tellable(self.fp)
+ self.start_dir = 0
+ self._seekable = False
+ else:
+ # Some file-like objects can provide tell() but not seek()
+ try:
+ self.fp.seek(self.start_dir)
+ except (AttributeError, OSError):
+ self._seekable = False
+ elif mode == 'a':
+ try:
+ # See if file is a zip file
+ self._RealGetContents()
+ # seek to start of directory and overwrite
+ self.fp.seek(self.start_dir)
+ except BadZipFile:
+ # file is not a zip file, just append
+ self.fp.seek(0, 2)
+
+ # set the modified flag so central directory gets written
+ # even if no files are added to the archive
+ self._didModify = True
+ self.start_dir = self.fp.tell()
+ else:
+ raise ValueError("Mode must be 'r', 'w', 'x', or 'a'")
+ except:
+ fp = self.fp
+ self.fp = None
+ self._fpclose(fp)
+ raise
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, type, value, traceback):
+ self.close()
+
+ def __repr__(self):
+ result = ['<%s.%s' % (self.__class__.__module__,
+ self.__class__.__qualname__)]
+ if self.fp is not None:
+ if self._filePassed:
+ result.append(' file=%r' % self.fp)
+ elif self.filename is not None:
+ result.append(' filename=%r' % self.filename)
+ result.append(' mode=%r' % self.mode)
+ else:
+ result.append(' [closed]')
+ result.append('>')
+ return ''.join(result)
+
+ def _RealGetContents(self):
+ """Read in the table of contents for the ZIP file."""
+ fp = self.fp
+ try:
+ endrec = _EndRecData(fp)
+ except OSError:
+ raise BadZipFile("File is not a zip file")
+ if not endrec:
+ raise BadZipFile("File is not a zip file")
+ if self.debug > 1:
+ print(endrec)
+ size_cd = endrec[_ECD_SIZE] # bytes in central directory
+ offset_cd = endrec[_ECD_OFFSET] # offset of central directory
+ self._comment = endrec[_ECD_COMMENT] # archive comment
+
+ # "concat" is zero, unless zip was concatenated to another file
+ concat = endrec[_ECD_LOCATION] - size_cd - offset_cd
+ if endrec[_ECD_SIGNATURE] == stringEndArchive64:
+ # If Zip64 extension structures are present, account for them
+ concat -= (sizeEndCentDir64 + sizeEndCentDir64Locator)
+
+ if self.debug > 2:
+ inferred = concat + offset_cd
+ print("given, inferred, offset", offset_cd, inferred, concat)
+ # self.start_dir: Position of start of central directory
+ self.start_dir = offset_cd + concat
+ fp.seek(self.start_dir, 0)
+ data = fp.read(size_cd)
+ fp = io.BytesIO(data)
+ total = 0
+ while total < size_cd:
+ centdir = fp.read(sizeCentralDir)
+ if len(centdir) != sizeCentralDir:
+ raise BadZipFile("Truncated central directory")
+ centdir = struct.unpack(structCentralDir, centdir)
+ if centdir[_CD_SIGNATURE] != stringCentralDir:
+ raise BadZipFile("Bad magic number for central directory")
+ if self.debug > 2:
+ print(centdir)
+ filename = fp.read(centdir[_CD_FILENAME_LENGTH])
+ flags = centdir[5]
+ if flags & 0x800:
+ # UTF-8 file names extension
+ filename = filename.decode('utf-8')
+ else:
+ # Historical ZIP filename encoding
+ filename = filename.decode('cp437')
+ # Create ZipInfo instance to store file information
+ x = ZipInfo(filename)
+ x.extra = fp.read(centdir[_CD_EXTRA_FIELD_LENGTH])
+ x.comment = fp.read(centdir[_CD_COMMENT_LENGTH])
+ x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET]
+ (x.create_version, x.create_system, x.extract_version, x.reserved,
+ x.flag_bits, x.compress_type, t, d,
+ x.CRC, x.compress_size, x.file_size) = centdir[1:12]
+ if x.extract_version > MAX_EXTRACT_VERSION:
+ raise NotImplementedError("zip file version %.1f" %
+ (x.extract_version / 10))
+ x.volume, x.internal_attr, x.external_attr = centdir[15:18]
+ # Convert date/time code to (year, month, day, hour, min, sec)
+ x._raw_time = t
+ x.date_time = ( (d>>9)+1980, (d>>5)&0xF, d&0x1F,
+ t>>11, (t>>5)&0x3F, (t&0x1F) * 2 )
+
+ x._decodeExtra()
+ x.header_offset = x.header_offset + concat
+ self.filelist.append(x)
+ self.NameToInfo[x.filename] = x
+
+ # update total bytes read from central directory
+ total = (total + sizeCentralDir + centdir[_CD_FILENAME_LENGTH]
+ + centdir[_CD_EXTRA_FIELD_LENGTH]
+ + centdir[_CD_COMMENT_LENGTH])
+
+ if self.debug > 2:
+ print("total", total)
+
+
+ def namelist(self):
+ """Return a list of file names in the archive."""
+ return [data.filename for data in self.filelist]
+
+ def infolist(self):
+ """Return a list of class ZipInfo instances for files in the
+ archive."""
+ return self.filelist
+
+ def printdir(self, file=None):
+ """Print a table of contents for the zip file."""
+ print("%-46s %19s %12s" % ("File Name", "Modified ", "Size"),
+ file=file)
+ for zinfo in self.filelist:
+ date = "%d-%02d-%02d %02d:%02d:%02d" % zinfo.date_time[:6]
+ print("%-46s %s %12d" % (zinfo.filename, date, zinfo.file_size),
+ file=file)
+
+ def testzip(self):
+ """Read all the files and check the CRC."""
+ chunk_size = 2 ** 20
+ for zinfo in self.filelist:
+ try:
+ # Read by chunks, to avoid an OverflowError or a
+ # MemoryError with very large embedded files.
+ with self.open(zinfo.filename, "r") as f:
+ while f.read(chunk_size): # Check CRC-32
+ pass
+ except BadZipFile:
+ return zinfo.filename
+
+ def getinfo(self, name):
+ """Return the instance of ZipInfo given 'name'."""
+ info = self.NameToInfo.get(name)
+ if info is None:
+ raise KeyError(
+ 'There is no item named %r in the archive' % name)
+
+ return info
+
+ def setpassword(self, pwd):
+ """Set default password for encrypted files."""
+ if pwd and not isinstance(pwd, bytes):
+ raise TypeError("pwd: expected bytes, got %s" % type(pwd).__name__)
+ if pwd:
+ self.pwd = pwd
+ else:
+ self.pwd = None
+
+ @property
+ def comment(self):
+ """The comment text associated with the ZIP file."""
+ return self._comment
+
+ @comment.setter
+ def comment(self, comment):
+ if not isinstance(comment, bytes):
+ raise TypeError("comment: expected bytes, got %s" % type(comment).__name__)
+ # check for valid comment length
+ if len(comment) > ZIP_MAX_COMMENT:
+ import warnings
+ warnings.warn('Archive comment is too long; truncating to %d bytes'
+ % ZIP_MAX_COMMENT, stacklevel=2)
+ comment = comment[:ZIP_MAX_COMMENT]
+ self._comment = comment
+ self._didModify = True
+
+ def read(self, name, pwd=None):
+ """Return file bytes for name."""
+ with self.open(name, "r", pwd) as fp:
+ return fp.read()
+
+ def open(self, name, mode="r", pwd=None, *, force_zip64=False):
+ """Return file-like object for 'name'.
+
+ name is a string for the file name within the ZIP file, or a ZipInfo
+ object.
+
+ mode should be 'r' to read a file already in the ZIP file, or 'w' to
+ write to a file newly added to the archive.
+
+ pwd is the password to decrypt files (only used for reading).
+
+ When writing, if the file size is not known in advance but may exceed
+ 2 GiB, pass force_zip64 to use the ZIP64 format, which can handle large
+ files. If the size is known in advance, it is best to pass a ZipInfo
+ instance for name, with zinfo.file_size set.
+ """
+ if mode not in {"r", "w"}:
+ raise ValueError('open() requires mode "r" or "w"')
+ if pwd and not isinstance(pwd, bytes):
+ raise TypeError("pwd: expected bytes, got %s" % type(pwd).__name__)
+ if pwd and (mode == "w"):
+ raise ValueError("pwd is only supported for reading files")
+ if not self.fp:
+ raise ValueError(
+ "Attempt to use ZIP archive that was already closed")
+
+ # Make sure we have an info object
+ if isinstance(name, ZipInfo):
+ # 'name' is already an info object
+ zinfo = name
+ elif mode == 'w':
+ zinfo = ZipInfo(name)
+ zinfo.compress_type = self.compression
+ else:
+ # Get info object for name
+ zinfo = self.getinfo(name)
+
+ if mode == 'w':
+ return self._open_to_write(zinfo, force_zip64=force_zip64)
+
+ if self._writing:
+ raise ValueError("Can't read from the ZIP file while there "
+ "is an open writing handle on it. "
+ "Close the writing handle before trying to read.")
+
+ # Open for reading:
+ self._fileRefCnt += 1
+ zef_file = _SharedFile(self.fp, zinfo.header_offset,
+ self._fpclose, self._lock, lambda: self._writing)
+ try:
+ # Skip the file header:
+ fheader = zef_file.read(sizeFileHeader)
+ if len(fheader) != sizeFileHeader:
+ raise BadZipFile("Truncated file header")
+ fheader = struct.unpack(structFileHeader, fheader)
+ if fheader[_FH_SIGNATURE] != stringFileHeader:
+ raise BadZipFile("Bad magic number for file header")
+
+ fname = zef_file.read(fheader[_FH_FILENAME_LENGTH])
+ if fheader[_FH_EXTRA_FIELD_LENGTH]:
+ zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH])
+
+ if zinfo.flag_bits & 0x20:
+ # Zip 2.7: compressed patched data
+ raise NotImplementedError("compressed patched data (flag bit 5)")
+
+ if zinfo.flag_bits & 0x40:
+ # strong encryption
+ raise NotImplementedError("strong encryption (flag bit 6)")
+
+ if zinfo.flag_bits & 0x800:
+ # UTF-8 filename
+ fname_str = fname.decode("utf-8")
+ else:
+ fname_str = fname.decode("cp437")
+
+ if fname_str != zinfo.orig_filename:
+ raise BadZipFile(
+ 'File name in directory %r and header %r differ.'
+ % (zinfo.orig_filename, fname))
+
+ # check for encrypted flag & handle password
+ is_encrypted = zinfo.flag_bits & 0x1
+ zd = None
+ if is_encrypted:
+ if not pwd:
+ pwd = self.pwd
+ if not pwd:
+ raise RuntimeError("File %r is encrypted, password "
+ "required for extraction" % name)
+
+ zd = _ZipDecrypter(pwd)
+                # The first 12 bytes in the cypher stream are an encryption header
+ # used to strengthen the algorithm. The first 11 bytes are
+ # completely random, while the 12th contains the MSB of the CRC,
+ # or the MSB of the file time depending on the header type
+ # and is used to check the correctness of the password.
+ header = zef_file.read(12)
+ h = list(map(zd, header[0:12]))
+ if zinfo.flag_bits & 0x8:
+ # compare against the file type from extended local headers
+ check_byte = (zinfo._raw_time >> 8) & 0xff
+ else:
+ # compare against the CRC otherwise
+ check_byte = (zinfo.CRC >> 24) & 0xff
+ if h[11] != check_byte:
+ raise RuntimeError("Bad password for file %r" % name)
+
+ return ZipExtFile(zef_file, mode, zinfo, zd, True)
+ except:
+ zef_file.close()
+ raise
+
+ def _open_to_write(self, zinfo, force_zip64=False):
+ if force_zip64 and not self._allowZip64:
+ raise ValueError(
+ "force_zip64 is True, but allowZip64 was False when opening "
+ "the ZIP file."
+ )
+ if self._writing:
+ raise ValueError("Can't write to the ZIP file while there is "
+ "another write handle open on it. "
+ "Close the first handle before opening another.")
+
+ # Sizes and CRC are overwritten with correct data after processing the file
+ if not hasattr(zinfo, 'file_size'):
+ zinfo.file_size = 0
+ zinfo.compress_size = 0
+ zinfo.CRC = 0
+
+ zinfo.flag_bits = 0x00
+ if zinfo.compress_type == ZIP_LZMA:
+ # Compressed data includes an end-of-stream (EOS) marker
+ zinfo.flag_bits |= 0x02
+ if not self._seekable:
+ zinfo.flag_bits |= 0x08
+
+ if not zinfo.external_attr:
+ zinfo.external_attr = 0o600 << 16 # permissions: ?rw-------
+
+ # Compressed size can be larger than uncompressed size
+ zip64 = self._allowZip64 and \
+ (force_zip64 or zinfo.file_size * 1.05 > ZIP64_LIMIT)
+
+ if self._seekable:
+ self.fp.seek(self.start_dir)
+ zinfo.header_offset = self.fp.tell()
+
+ self._writecheck(zinfo)
+ self._didModify = True
+
+ self.fp.write(zinfo.FileHeader(zip64))
+
+ self._writing = True
+ return _ZipWriteFile(self, zinfo, zip64)
+
+ def extract(self, member, path=None, pwd=None):
+ """Extract a member from the archive to the current working directory,
+ using its full name. Its file information is extracted as accurately
+ as possible. `member' may be a filename or a ZipInfo object. You can
+ specify a different directory using `path'.
+ """
+ if path is None:
+ path = os.getcwd()
+ else:
+ path = os.fspath(path)
+
+ return self._extract_member(member, path, pwd)
+
+ def extractall(self, path=None, members=None, pwd=None):
+ """Extract all members from the archive to the current working
+ directory. `path' specifies a different directory to extract to.
+ `members' is optional and must be a subset of the list returned
+ by namelist().
+ """
+ if members is None:
+ members = self.namelist()
+
+ if path is None:
+ path = os.getcwd()
+# else:
+# path = os.fspath(path)
+
+ for zipinfo in members:
+ self._extract_member(zipinfo, path, pwd)
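+    # Illustrative sketch (commented out) of extract()/extractall(); the
+    # archive, member names and target directory are hypothetical.
+    #
+    #   with ZipFile('archive.zip') as zf:
+    #       zf.extract('member.txt', path='outdir')
+    #       zf.extractall(path='outdir', members=['a.txt', 'b.txt'])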
+
+ @classmethod
+ def _sanitize_windows_name(cls, arcname, pathsep):
+ """Replace bad characters and remove trailing dots from parts."""
+ table = cls._windows_illegal_name_trans_table
+ if not table:
+ illegal = ':<>|"?*'
+ table = str.maketrans(illegal, '_' * len(illegal))
+ cls._windows_illegal_name_trans_table = table
+ arcname = arcname.translate(table)
+ # remove trailing dots
+ arcname = (x.rstrip('.') for x in arcname.split(pathsep))
+ # rejoin, removing empty parts.
+ arcname = pathsep.join(x for x in arcname if x)
+ return arcname
+
+ def _extract_member(self, member, targetpath, pwd):
+ """Extract the ZipInfo object 'member' to a physical
+ file on the path targetpath.
+ """
+ if not isinstance(member, ZipInfo):
+ member = self.getinfo(member)
+
+ # build the destination pathname, replacing
+ # forward slashes to platform specific separators.
+ arcname = member.filename.replace('/', os.path.sep)
+
+ if os.path.altsep:
+ arcname = arcname.replace(os.path.altsep, os.path.sep)
+ # interpret absolute pathname as relative, remove drive letter or
+ # UNC path, redundant separators, "." and ".." components.
+ arcname = os.path.splitdrive(arcname)[1]
+ invalid_path_parts = ('', os.path.curdir, os.path.pardir)
+ arcname = os.path.sep.join(x for x in arcname.split(os.path.sep)
+ if x not in invalid_path_parts)
+ if os.path.sep == '\\':
+ # filter illegal characters on Windows
+ arcname = self._sanitize_windows_name(arcname, os.path.sep)
+
+ targetpath = os.path.join(targetpath, arcname)
+ targetpath = os.path.normpath(targetpath)
+
+ # Create all upper directories if necessary.
+ upperdirs = os.path.dirname(targetpath)
+ if upperdirs and not os.path.exists(upperdirs):
+ os.makedirs(upperdirs)
+
+ if member.is_dir():
+ if not os.path.isdir(targetpath):
+ os.mkdir(targetpath)
+ return targetpath
+
+ with self.open(member, pwd=pwd) as source, \
+ open(targetpath, "wb") as target:
+ shutil.copyfileobj(source, target)
+
+ return targetpath
+
+ def _writecheck(self, zinfo):
+ """Check for errors before writing a file to the archive."""
+ if zinfo.filename in self.NameToInfo:
+ import warnings
+ warnings.warn('Duplicate name: %r' % zinfo.filename, stacklevel=3)
+ if self.mode not in ('w', 'x', 'a'):
+ raise ValueError("write() requires mode 'w', 'x', or 'a'")
+ if not self.fp:
+ raise ValueError(
+ "Attempt to write ZIP archive that was already closed")
+ _check_compression(zinfo.compress_type)
+ if not self._allowZip64:
+ requires_zip64 = None
+ if len(self.filelist) >= ZIP_FILECOUNT_LIMIT:
+ requires_zip64 = "Files count"
+ elif zinfo.file_size > ZIP64_LIMIT:
+ requires_zip64 = "Filesize"
+ elif zinfo.header_offset > ZIP64_LIMIT:
+ requires_zip64 = "Zipfile size"
+ if requires_zip64:
+ raise LargeZipFile(requires_zip64 +
+ " would require ZIP64 extensions")
+
+ def write(self, filename, arcname=None, compress_type=None):
+ """Put the bytes from filename into the archive under the name
+ arcname."""
+ if not self.fp:
+ raise ValueError(
+ "Attempt to write to ZIP archive that was already closed")
+ if self._writing:
+ raise ValueError(
+ "Can't write to ZIP archive while an open writing handle exists"
+ )
+
+ zinfo = ZipInfo.from_file(filename, arcname)
+
+ if zinfo.is_dir():
+ zinfo.compress_size = 0
+ zinfo.CRC = 0
+ else:
+ if compress_type is not None:
+ zinfo.compress_type = compress_type
+ else:
+ zinfo.compress_type = self.compression
+
+ if zinfo.is_dir():
+ with self._lock:
+ if self._seekable:
+ self.fp.seek(self.start_dir)
+ zinfo.header_offset = self.fp.tell() # Start of header bytes
+ if zinfo.compress_type == ZIP_LZMA:
+ # Compressed data includes an end-of-stream (EOS) marker
+ zinfo.flag_bits |= 0x02
+
+ self._writecheck(zinfo)
+ self._didModify = True
+
+ self.filelist.append(zinfo)
+ self.NameToInfo[zinfo.filename] = zinfo
+ self.fp.write(zinfo.FileHeader(False))
+ self.start_dir = self.fp.tell()
+ else:
+ with open(filename, "rb") as src, self.open(zinfo, 'w') as dest:
+ shutil.copyfileobj(src, dest, 1024*8)
+
+ def writestr(self, zinfo_or_arcname, data, compress_type=None):
+ """Write a file into the archive. The contents is 'data', which
+ may be either a 'str' or a 'bytes' instance; if it is a 'str',
+ it is encoded as UTF-8 first.
+ 'zinfo_or_arcname' is either a ZipInfo instance or
+ the name of the file in the archive."""
+ if isinstance(data, str):
+ data = data.encode("utf-8")
+ if not isinstance(zinfo_or_arcname, ZipInfo):
+ zinfo = ZipInfo(filename=zinfo_or_arcname,
+ date_time=time.localtime(time.time())[:6])
+ zinfo.compress_type = self.compression
+ if zinfo.filename[-1] == '/':
+ zinfo.external_attr = 0o40775 << 16 # drwxrwxr-x
+ zinfo.external_attr |= 0x10 # MS-DOS directory flag
+ else:
+ zinfo.external_attr = 0o600 << 16 # ?rw-------
+ else:
+ zinfo = zinfo_or_arcname
+
+ if not self.fp:
+ raise ValueError(
+ "Attempt to write to ZIP archive that was already closed")
+ if self._writing:
+ raise ValueError(
+ "Can't write to ZIP archive while an open writing handle exists."
+ )
+
+ if compress_type is not None:
+ zinfo.compress_type = compress_type
+
+ zinfo.file_size = len(data) # Uncompressed size
+ with self._lock:
+ with self.open(zinfo, mode='w') as dest:
+ dest.write(data)
+
+ def __del__(self):
+ """Call the "close()" method in case the user forgot."""
+ self.close()
+
+ def close(self):
+ """Close the file, and for mode 'w', 'x' and 'a' write the ending
+ records."""
+ if self.fp is None:
+ return
+
+ if self._writing:
+ raise ValueError("Can't close the ZIP file while there is "
+ "an open writing handle on it. "
+ "Close the writing handle before closing the zip.")
+
+ try:
+ if self.mode in ('w', 'x', 'a') and self._didModify: # write ending records
+ with self._lock:
+ if self._seekable:
+ self.fp.seek(self.start_dir)
+ self._write_end_record()
+ finally:
+ fp = self.fp
+ self.fp = None
+ self._fpclose(fp)
+
+ def _write_end_record(self):
+ for zinfo in self.filelist: # write central directory
+ dt = zinfo.date_time
+ dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
+ dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
+ extra = []
+ if zinfo.file_size > ZIP64_LIMIT \
+ or zinfo.compress_size > ZIP64_LIMIT:
+ extra.append(zinfo.file_size)
+ extra.append(zinfo.compress_size)
+ file_size = 0xffffffff
+ compress_size = 0xffffffff
+ else:
+ file_size = zinfo.file_size
+ compress_size = zinfo.compress_size
+
+ if zinfo.header_offset > ZIP64_LIMIT:
+ extra.append(zinfo.header_offset)
+ header_offset = 0xffffffff
+ else:
+ header_offset = zinfo.header_offset
+
+ extra_data = zinfo.extra
+ min_version = 0
+ if extra:
+ # Append a ZIP64 field to the extra's
+ extra_data = _strip_extra(extra_data, (1,))
+ extra_data = struct.pack(
+ '<HH' + 'Q'*len(extra),
+ 1, 8*len(extra), *extra) + extra_data
+
+ min_version = ZIP64_VERSION
+
+ if zinfo.compress_type == ZIP_BZIP2:
+ min_version = max(BZIP2_VERSION, min_version)
+ elif zinfo.compress_type == ZIP_LZMA:
+ min_version = max(LZMA_VERSION, min_version)
+
+ extract_version = max(min_version, zinfo.extract_version)
+ create_version = max(min_version, zinfo.create_version)
+ try:
+ filename, flag_bits = zinfo._encodeFilenameFlags()
+ centdir = struct.pack(structCentralDir,
+ stringCentralDir, create_version,
+ zinfo.create_system, extract_version, zinfo.reserved,
+ flag_bits, zinfo.compress_type, dostime, dosdate,
+ zinfo.CRC, compress_size, file_size,
+ len(filename), len(extra_data), len(zinfo.comment),
+ 0, zinfo.internal_attr, zinfo.external_attr,
+ header_offset)
+ except DeprecationWarning:
+ print((structCentralDir, stringCentralDir, create_version,
+ zinfo.create_system, extract_version, zinfo.reserved,
+ zinfo.flag_bits, zinfo.compress_type, dostime, dosdate,
+ zinfo.CRC, compress_size, file_size,
+ len(zinfo.filename), len(extra_data), len(zinfo.comment),
+ 0, zinfo.internal_attr, zinfo.external_attr,
+ header_offset), file=sys.stderr)
+ raise
+ self.fp.write(centdir)
+ self.fp.write(filename)
+ self.fp.write(extra_data)
+ self.fp.write(zinfo.comment)
+
+ pos2 = self.fp.tell()
+ # Write end-of-zip-archive record
+ centDirCount = len(self.filelist)
+ centDirSize = pos2 - self.start_dir
+ centDirOffset = self.start_dir
+ requires_zip64 = None
+ if centDirCount > ZIP_FILECOUNT_LIMIT:
+ requires_zip64 = "Files count"
+ elif centDirOffset > ZIP64_LIMIT:
+ requires_zip64 = "Central directory offset"
+ elif centDirSize > ZIP64_LIMIT:
+ requires_zip64 = "Central directory size"
+ if requires_zip64:
+ # Need to write the ZIP64 end-of-archive records
+ if not self._allowZip64:
+ raise LargeZipFile(requires_zip64 +
+ " would require ZIP64 extensions")
+ zip64endrec = struct.pack(
+ structEndArchive64, stringEndArchive64,
+ 44, 45, 45, 0, 0, centDirCount, centDirCount,
+ centDirSize, centDirOffset)
+ self.fp.write(zip64endrec)
+
+ zip64locrec = struct.pack(
+ structEndArchive64Locator,
+ stringEndArchive64Locator, 0, pos2, 1)
+ self.fp.write(zip64locrec)
+ centDirCount = min(centDirCount, 0xFFFF)
+ centDirSize = min(centDirSize, 0xFFFFFFFF)
+ centDirOffset = min(centDirOffset, 0xFFFFFFFF)
+
+ endrec = struct.pack(structEndArchive, stringEndArchive,
+ 0, 0, centDirCount, centDirCount,
+ centDirSize, centDirOffset, len(self._comment))
+ self.fp.write(endrec)
+ self.fp.write(self._comment)
+ self.fp.flush()
+
+ def _fpclose(self, fp):
+ assert self._fileRefCnt > 0
+ self._fileRefCnt -= 1
+ if not self._fileRefCnt and not self._filePassed:
+ fp.close()
+
+
+class PyZipFile(ZipFile):
+ """Class to create ZIP archives with Python library files and packages."""
+
+ def __init__(self, file, mode="r", compression=ZIP_STORED,
+ allowZip64=True, optimize=-1):
+ ZipFile.__init__(self, file, mode=mode, compression=compression,
+ allowZip64=allowZip64)
+ self._optimize = optimize
+
+ def writepy(self, pathname, basename="", filterfunc=None):
+ """Add all files from "pathname" to the ZIP archive.
+
+ If pathname is a package directory, search the directory and
+ all package subdirectories recursively for all *.py and enter
+ the modules into the archive. If pathname is a plain
+ directory, listdir *.py and enter all modules. Else, pathname
+ must be a Python *.py file and the module will be put into the
+ archive. Added modules are always module.pyc.
+ This method will compile the module.py into module.pyc if
+ necessary.
+        If filterfunc(pathname) is given, it is called for each path before it
+        is added; when it returns a false value, the file or directory is skipped.
+ """
+ pathname = os.fspath(pathname)
+ if filterfunc and not filterfunc(pathname):
+ if self.debug:
+ label = 'path' if os.path.isdir(pathname) else 'file'
+ print('%s %r skipped by filterfunc' % (label, pathname))
+ return
+ dir, name = os.path.split(pathname)
+ if os.path.isdir(pathname):
+ initname = os.path.join(pathname, "__init__.py")
+ if os.path.isfile(initname):
+ # This is a package directory, add it
+ if basename:
+ basename = "%s/%s" % (basename, name)
+ else:
+ basename = name
+ if self.debug:
+ print("Adding package in", pathname, "as", basename)
+ fname, arcname = self._get_codename(initname[0:-3], basename)
+ if self.debug:
+ print("Adding", arcname)
+ self.write(fname, arcname)
+ dirlist = os.listdir(pathname)
+ dirlist.remove("__init__.py")
+ # Add all *.py files and package subdirectories
+ for filename in dirlist:
+ path = os.path.join(pathname, filename)
+ root, ext = os.path.splitext(filename)
+ if os.path.isdir(path):
+ if os.path.isfile(os.path.join(path, "__init__.py")):
+ # This is a package directory, add it
+ self.writepy(path, basename,
+ filterfunc=filterfunc) # Recursive call
+ elif ext == ".py":
+ if filterfunc and not filterfunc(path):
+ if self.debug:
+ print('file %r skipped by filterfunc' % path)
+ continue
+ fname, arcname = self._get_codename(path[0:-3],
+ basename)
+ if self.debug:
+ print("Adding", arcname)
+ self.write(fname, arcname)
+ else:
+ # This is NOT a package directory, add its files at top level
+ if self.debug:
+ print("Adding files from directory", pathname)
+ for filename in os.listdir(pathname):
+ path = os.path.join(pathname, filename)
+ root, ext = os.path.splitext(filename)
+ if ext == ".py":
+ if filterfunc and not filterfunc(path):
+ if self.debug:
+ print('file %r skipped by filterfunc' % path)
+ continue
+ fname, arcname = self._get_codename(path[0:-3],
+ basename)
+ if self.debug:
+ print("Adding", arcname)
+ self.write(fname, arcname)
+ else:
+ if pathname[-3:] != ".py":
+ raise RuntimeError(
+ 'Files added with writepy() must end with ".py"')
+ fname, arcname = self._get_codename(pathname[0:-3], basename)
+ if self.debug:
+ print("Adding file", arcname)
+ self.write(fname, arcname)
+
+ def _get_codename(self, pathname, basename):
+ """Return (filename, archivename) for the path.
+
+ Given a module name path, return the correct file path and
+ archive name, compiling if necessary. For example, given
+ /python/lib/string, return (/python/lib/string.pyc, string).
+ """
+ def _compile(file, optimize=-1):
+ import py_compile
+ if self.debug:
+ print("Compiling", file)
+ try:
+ py_compile.compile(file, doraise=True, optimize=optimize)
+ except py_compile.PyCompileError as err:
+ print(err.msg)
+ return False
+ return True
+
+ file_py = pathname + ".py"
+ file_pyc = pathname + ".pyc"
+ pycache_opt0 = importlib.util.cache_from_source(file_py, optimization='')
+ pycache_opt1 = importlib.util.cache_from_source(file_py, optimization=1)
+ pycache_opt2 = importlib.util.cache_from_source(file_py, optimization=2)
+ if self._optimize == -1:
+ # legacy mode: use whatever file is present
+ if (os.path.isfile(file_pyc) and
+ os.stat(file_pyc).st_mtime >= os.stat(file_py).st_mtime):
+ # Use .pyc file.
+ arcname = fname = file_pyc
+ elif (os.path.isfile(pycache_opt0) and
+ os.stat(pycache_opt0).st_mtime >= os.stat(file_py).st_mtime):
+ # Use the __pycache__/*.pyc file, but write it to the legacy pyc
+ # file name in the archive.
+ fname = pycache_opt0
+ arcname = file_pyc
+ elif (os.path.isfile(pycache_opt1) and
+ os.stat(pycache_opt1).st_mtime >= os.stat(file_py).st_mtime):
+ # Use the __pycache__/*.pyc file, but write it to the legacy pyc
+ # file name in the archive.
+ fname = pycache_opt1
+ arcname = file_pyc
+ elif (os.path.isfile(pycache_opt2) and
+ os.stat(pycache_opt2).st_mtime >= os.stat(file_py).st_mtime):
+ # Use the __pycache__/*.pyc file, but write it to the legacy pyc
+ # file name in the archive.
+ fname = pycache_opt2
+ arcname = file_pyc
+ else:
+ # Compile py into PEP 3147 pyc file.
+ if _compile(file_py):
+ if sys.flags.optimize == 0:
+ fname = pycache_opt0
+ elif sys.flags.optimize == 1:
+ fname = pycache_opt1
+ else:
+ fname = pycache_opt2
+ arcname = file_pyc
+ else:
+ fname = arcname = file_py
+ else:
+ # new mode: use given optimization level
+ if self._optimize == 0:
+ fname = pycache_opt0
+ arcname = file_pyc
+ else:
+ arcname = file_pyc
+ if self._optimize == 1:
+ fname = pycache_opt1
+ elif self._optimize == 2:
+ fname = pycache_opt2
+ else:
+ msg = "invalid value for 'optimize': {!r}".format(self._optimize)
+ raise ValueError(msg)
+ if not (os.path.isfile(fname) and
+ os.stat(fname).st_mtime >= os.stat(file_py).st_mtime):
+ if not _compile(file_py, optimize=self._optimize):
+ fname = arcname = file_py
+ archivename = os.path.split(arcname)[1]
+ if basename:
+ archivename = "%s/%s" % (basename, archivename)
+ return (fname, archivename)
+
+
+def main(args = None):
+ import textwrap
+ USAGE=textwrap.dedent("""\
+ Usage:
+ zipfile.py -l zipfile.zip # Show listing of a zipfile
+ zipfile.py -t zipfile.zip # Test if a zipfile is valid
+ zipfile.py -e zipfile.zip target # Extract zipfile into target dir
+ zipfile.py -c zipfile.zip src ... # Create zipfile from sources
+ """)
+ if args is None:
+ args = sys.argv[1:]
+
+ if not args or args[0] not in ('-l', '-c', '-e', '-t'):
+ print(USAGE)
+ sys.exit(1)
+
+ if args[0] == '-l':
+ if len(args) != 2:
+ print(USAGE)
+ sys.exit(1)
+ with ZipFile(args[1], 'r') as zf:
+ zf.printdir()
+
+ elif args[0] == '-t':
+ if len(args) != 2:
+ print(USAGE)
+ sys.exit(1)
+ with ZipFile(args[1], 'r') as zf:
+ badfile = zf.testzip()
+ if badfile:
+ print("The following enclosed file is corrupted: {!r}".format(badfile))
+ print("Done testing")
+
+ elif args[0] == '-e':
+ if len(args) != 3:
+ print(USAGE)
+ sys.exit(1)
+
+ with ZipFile(args[1], 'r') as zf:
+ zf.extractall(args[2])
+
+ elif args[0] == '-c':
+ if len(args) < 3:
+ print(USAGE)
+ sys.exit(1)
+
+ def addToZip(zf, path, zippath):
+ if os.path.isfile(path):
+ zf.write(path, zippath, ZIP_DEFLATED)
+ elif os.path.isdir(path):
+ if zippath:
+ zf.write(path, zippath)
+ for nm in os.listdir(path):
+ addToZip(zf,
+ os.path.join(path, nm), os.path.join(zippath, nm))
+ # else: ignore
+
+ with ZipFile(args[1], 'w') as zf:
+ for path in args[2:]:
+ zippath = os.path.basename(path)
+ if not zippath:
+ zippath = os.path.basename(os.path.dirname(path))
+ if zippath in ('', os.curdir, os.pardir):
+ zippath = ''
+ addToZip(zf, path, zippath)
+
+if __name__ == "__main__":
+ main()
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
new file mode 100644
index 00000000..cd6b26db
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
@@ -0,0 +1,161 @@
+/*
+ BLAKE2 reference source code package - reference C implementations
+
+ Copyright 2012, Samuel Neves <sneves@dei.uc.pt>. You may use this under the
+ terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
+ your option. The terms of these licenses can be found at:
+
+ - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
+ - OpenSSL license : https://www.openssl.org/source/license.html
+ - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
+
+ More information about the BLAKE2 hash function can be found at
+ https://blake2.net.
+*/
+#pragma once
+#ifndef __BLAKE2_H__
+#define __BLAKE2_H__
+
+#include <stddef.h>
+#include <stdint.h>
+
+#ifdef BLAKE2_NO_INLINE
+#define BLAKE2_LOCAL_INLINE(type) static type
+#endif
+
+#ifndef BLAKE2_LOCAL_INLINE
+#define BLAKE2_LOCAL_INLINE(type) static inline type
+#endif
+
+#if defined(__cplusplus)
+extern "C" {
+#endif
+
+ enum blake2s_constant
+ {
+ BLAKE2S_BLOCKBYTES = 64,
+ BLAKE2S_OUTBYTES = 32,
+ BLAKE2S_KEYBYTES = 32,
+ BLAKE2S_SALTBYTES = 8,
+ BLAKE2S_PERSONALBYTES = 8
+ };
+
+ enum blake2b_constant
+ {
+ BLAKE2B_BLOCKBYTES = 128,
+ BLAKE2B_OUTBYTES = 64,
+ BLAKE2B_KEYBYTES = 64,
+ BLAKE2B_SALTBYTES = 16,
+ BLAKE2B_PERSONALBYTES = 16
+ };
+
+ typedef struct __blake2s_state
+ {
+ uint32_t h[8];
+ uint32_t t[2];
+ uint32_t f[2];
+ uint8_t buf[2 * BLAKE2S_BLOCKBYTES];
+ size_t buflen;
+ uint8_t last_node;
+ } blake2s_state;
+
+ typedef struct __blake2b_state
+ {
+ uint64_t h[8];
+ uint64_t t[2];
+ uint64_t f[2];
+ uint8_t buf[2 * BLAKE2B_BLOCKBYTES];
+ size_t buflen;
+ uint8_t last_node;
+ } blake2b_state;
+
+ typedef struct __blake2sp_state
+ {
+ blake2s_state S[8][1];
+ blake2s_state R[1];
+ uint8_t buf[8 * BLAKE2S_BLOCKBYTES];
+ size_t buflen;
+ } blake2sp_state;
+
+ typedef struct __blake2bp_state
+ {
+ blake2b_state S[4][1];
+ blake2b_state R[1];
+ uint8_t buf[4 * BLAKE2B_BLOCKBYTES];
+ size_t buflen;
+ } blake2bp_state;
+
+
+#pragma pack(push, 1)
+ typedef struct __blake2s_param
+ {
+ uint8_t digest_length; /* 1 */
+ uint8_t key_length; /* 2 */
+ uint8_t fanout; /* 3 */
+ uint8_t depth; /* 4 */
+ uint32_t leaf_length; /* 8 */
+ uint8_t node_offset[6];// 14
+ uint8_t node_depth; /* 15 */
+ uint8_t inner_length; /* 16 */
+ /* uint8_t reserved[0]; */
+ uint8_t salt[BLAKE2S_SALTBYTES]; /* 24 */
+ uint8_t personal[BLAKE2S_PERSONALBYTES]; /* 32 */
+ } blake2s_param;
+
+ typedef struct __blake2b_param
+ {
+ uint8_t digest_length; /* 1 */
+ uint8_t key_length; /* 2 */
+ uint8_t fanout; /* 3 */
+ uint8_t depth; /* 4 */
+ uint32_t leaf_length; /* 8 */
+ uint64_t node_offset; /* 16 */
+ uint8_t node_depth; /* 17 */
+ uint8_t inner_length; /* 18 */
+ uint8_t reserved[14]; /* 32 */
+ uint8_t salt[BLAKE2B_SALTBYTES]; /* 48 */
+ uint8_t personal[BLAKE2B_PERSONALBYTES]; /* 64 */
+ } blake2b_param;
+#pragma pack(pop)
+
+ /* Streaming API */
+ int blake2s_init( blake2s_state *S, const uint8_t outlen );
+ int blake2s_init_key( blake2s_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
+ int blake2s_init_param( blake2s_state *S, const blake2s_param *P );
+ int blake2s_update( blake2s_state *S, const uint8_t *in, uint64_t inlen );
+ int blake2s_final( blake2s_state *S, uint8_t *out, uint8_t outlen );
+
+ int blake2b_init( blake2b_state *S, const uint8_t outlen );
+ int blake2b_init_key( blake2b_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
+ int blake2b_init_param( blake2b_state *S, const blake2b_param *P );
+ int blake2b_update( blake2b_state *S, const uint8_t *in, uint64_t inlen );
+ int blake2b_final( blake2b_state *S, uint8_t *out, uint8_t outlen );
+
+ int blake2sp_init( blake2sp_state *S, const uint8_t outlen );
+ int blake2sp_init_key( blake2sp_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
+ int blake2sp_update( blake2sp_state *S, const uint8_t *in, uint64_t inlen );
+ int blake2sp_final( blake2sp_state *S, uint8_t *out, uint8_t outlen );
+
+ int blake2bp_init( blake2bp_state *S, const uint8_t outlen );
+ int blake2bp_init_key( blake2bp_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
+ int blake2bp_update( blake2bp_state *S, const uint8_t *in, uint64_t inlen );
+ int blake2bp_final( blake2bp_state *S, uint8_t *out, uint8_t outlen );
+
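+ /* Illustrative use of the streaming API above (a sketch, not part of the
+    reference sources): hash a short message with BLAKE2b.
+
+        uint8_t digest[BLAKE2B_OUTBYTES];
+        blake2b_state S;
+        blake2b_init( &S, BLAKE2B_OUTBYTES );
+        blake2b_update( &S, (const uint8_t *)"abc", 3 );
+        blake2b_final( &S, digest, BLAKE2B_OUTBYTES );
+ */
+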
+ /* Simple API */
+ int blake2s( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
+ int blake2b( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
+
+ int blake2sp( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
+ int blake2bp( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
+
+ static int blake2( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen )
+ {
+ return blake2b( out, in, key, outlen, inlen, keylen );
+ }
+
+#if defined(__cplusplus)
+}
+#endif
+
+#endif
+
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
new file mode 100644
index 00000000..ea6b3811
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
@@ -0,0 +1,5623 @@
+/*
+ ToDo:
+
+ Get rid of the checker (and also the converters) field in PyCFuncPtrObject and
+ StgDictObject, and replace them by slot functions in StgDictObject.
+
+ think about a buffer-like object (memory? bytes?)
+
+ Should POINTER(c_char) and POINTER(c_wchar) have a .value property?
+ What about c_char and c_wchar arrays then?
+
+ Add from_mmap, from_file, from_string metaclass methods.
+
+ Maybe we can get away with from_file (calls read) and with a from_buffer
+ method?
+
+ And what about the to_mmap, to_file, to_str(?) methods? They would clobber
+ the namespace, probably. So, functions instead? And we already have memmove...
+*/
+
+/*
+
+Name methods, members, getsets
+==============================================================================
+
+PyCStructType_Type __new__(), from_address(), __mul__(), from_param()
+UnionType_Type __new__(), from_address(), __mul__(), from_param()
+PyCPointerType_Type __new__(), from_address(), __mul__(), from_param(), set_type()
+PyCArrayType_Type __new__(), from_address(), __mul__(), from_param()
+PyCSimpleType_Type __new__(), from_address(), __mul__(), from_param()
+
+PyCData_Type
+ Struct_Type __new__(), __init__()
+ PyCPointer_Type __new__(), __init__(), _as_parameter_, contents
+ PyCArray_Type __new__(), __init__(), _as_parameter_, __get/setitem__(), __len__()
+ Simple_Type __new__(), __init__(), _as_parameter_
+
+PyCField_Type
+PyCStgDict_Type
+
+==============================================================================
+
+class methods
+-------------
+
+It has some similarity to the byref() construct compared to pointer()
+from_address(addr)
+ - construct an instance from a given memory block (sharing this memory block)
+
+from_param(obj)
+ - typecheck and convert a Python object into a C function call parameter
+ The result may be an instance of the type, or an integer or tuple
+ (typecode, value[, obj])
+
+instance methods/properties
+---------------------------
+
+_as_parameter_
+ - convert self into a C function call parameter
+ This is either an integer, or a 3-tuple (typecode, value, obj)
+
+functions
+---------
+
+sizeof(cdata)
+ - return the number of bytes the buffer contains
+
+sizeof(ctype)
+ - return the number of bytes the buffer of an instance would contain
+
+byref(cdata)
+
+addressof(cdata)
+
+pointer(cdata)
+
+POINTER(ctype)
+
+bytes(cdata)
+ - return the buffer contents as a sequence of bytes (which is currently a string)
+
+*/
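+
+/* A minimal Python-level sketch of the API summarised above (illustrative
+   only, not taken from the sources):
+
+       from ctypes import c_int, sizeof, addressof, pointer
+       i = c_int(42)
+       sizeof(c_int)                    # size of an instance's buffer
+       sizeof(i)                        # same, queried from the instance
+       p = pointer(i)                   # a POINTER(c_int) instance
+       alias = c_int.from_address(addressof(i))   # shares i's memory block
+*/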
+
+/*
+ * PyCStgDict_Type
+ * PyCStructType_Type
+ * UnionType_Type
+ * PyCPointerType_Type
+ * PyCArrayType_Type
+ * PyCSimpleType_Type
+ *
+ * PyCData_Type
+ * Struct_Type
+ * Union_Type
+ * PyCArray_Type
+ * Simple_Type
+ * PyCPointer_Type
+ * PyCField_Type
+ *
+ */
+
+#define PY_SSIZE_T_CLEAN
+
+#include "Python.h"
+#include "structmember.h"
+
+#include <ffi.h>
+#ifdef MS_WIN32
+#include <windows.h>
+#include <malloc.h>
+#ifndef IS_INTRESOURCE
+#define IS_INTRESOURCE(x) (((size_t)(x) >> 16) == 0)
+#endif
+#else
+#include "ctypes_dlfcn.h"
+#endif
+#include "ctypes.h"
+
+PyObject *PyExc_ArgError;
+
+/* This dict maps ctypes types to POINTER types */
+PyObject *_ctypes_ptrtype_cache;
+
+static PyTypeObject Simple_Type;
+
+/* a callable object used for unpickling */
+static PyObject *_unpickle;
+
+
+
+/****************************************************************/
+
+typedef struct {
+ PyObject_HEAD
+ PyObject *key;
+ PyObject *dict;
+} DictRemoverObject;
+
+static void
+_DictRemover_dealloc(PyObject *myself)
+{
+ DictRemoverObject *self = (DictRemoverObject *)myself;
+ Py_XDECREF(self->key);
+ Py_XDECREF(self->dict);
+ Py_TYPE(self)->tp_free(myself);
+}
+
+static PyObject *
+_DictRemover_call(PyObject *myself, PyObject *args, PyObject *kw)
+{
+ DictRemoverObject *self = (DictRemoverObject *)myself;
+ if (self->key && self->dict) {
+ if (-1 == PyDict_DelItem(self->dict, self->key))
+ /* XXX Error context */
+ PyErr_WriteUnraisable(Py_None);
+ Py_CLEAR(self->key);
+ Py_CLEAR(self->dict);
+ }
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static PyTypeObject DictRemover_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.DictRemover", /* tp_name */
+ sizeof(DictRemoverObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ _DictRemover_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ _DictRemover_call, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+/* XXX should participate in GC? */
+ Py_TPFLAGS_DEFAULT, /* tp_flags */
+ "deletes a key from a dictionary", /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+};
+
+int
+PyDict_SetItemProxy(PyObject *dict, PyObject *key, PyObject *item)
+{
+ PyObject *obj;
+ DictRemoverObject *remover;
+ PyObject *proxy;
+ int result;
+
+ obj = PyObject_CallObject((PyObject *)&DictRemover_Type, NULL);
+ if (obj == NULL)
+ return -1;
+
+ remover = (DictRemoverObject *)obj;
+ assert(remover->key == NULL);
+ assert(remover->dict == NULL);
+ Py_INCREF(key);
+ remover->key = key;
+ Py_INCREF(dict);
+ remover->dict = dict;
+
+ proxy = PyWeakref_NewProxy(item, obj);
+ Py_DECREF(obj);
+ if (proxy == NULL)
+ return -1;
+
+ result = PyDict_SetItem(dict, key, proxy);
+ Py_DECREF(proxy);
+ return result;
+}
+
+PyObject *
+PyDict_GetItemProxy(PyObject *dict, PyObject *key)
+{
+ PyObject *result;
+ PyObject *item = PyDict_GetItem(dict, key);
+
+ if (item == NULL)
+ return NULL;
+ if (!PyWeakref_CheckProxy(item))
+ return item;
+ result = PyWeakref_GET_OBJECT(item);
+ if (result == Py_None)
+ return NULL;
+ return result;
+}
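+
+/* Taken together, these helpers let a dict (e.g. _ctypes_ptrtype_cache) act
+   as a cache that does not keep its values alive: the value is stored as a
+   weakref proxy, and the DictRemover callback deletes the key once the
+   referent is gone.  Sketch of the intended call pattern ('cache', 'key'
+   and 'value' are placeholders):
+
+       if (-1 == PyDict_SetItemProxy(cache, key, value))
+           goto error;
+       item = PyDict_GetItemProxy(cache, key);   /- borrowed ref, or NULL -/
+*/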
+
+/******************************************************************/
+
+/*
+ Allocate a memory block for a pep3118 format string, filled with
+ a suitable PEP 3118 type code corresponding to the given ctypes
+ type. Returns NULL on failure, with the error indicator set.
+
+ This produces type codes in the standard size mode (cf. struct module),
+ since the endianness may need to be swapped to a non-native one
+ later on.
+ */
+static char *
+_ctypes_alloc_format_string_for_type(char code, int big_endian)
+{
+ char *result;
+ char pep_code = '\0';
+
+ switch (code) {
+#if SIZEOF_INT == 2
+ case 'i': pep_code = 'h'; break;
+ case 'I': pep_code = 'H'; break;
+#elif SIZEOF_INT == 4
+ case 'i': pep_code = 'i'; break;
+ case 'I': pep_code = 'I'; break;
+#elif SIZEOF_INT == 8
+ case 'i': pep_code = 'q'; break;
+ case 'I': pep_code = 'Q'; break;
+#else
+# error SIZEOF_INT has an unexpected value
+#endif /* SIZEOF_INT */
+#if SIZEOF_LONG == 4
+ case 'l': pep_code = 'l'; break;
+ case 'L': pep_code = 'L'; break;
+#elif SIZEOF_LONG == 8
+ case 'l': pep_code = 'q'; break;
+ case 'L': pep_code = 'Q'; break;
+#else
+# error SIZEOF_LONG has an unexpected value
+#endif /* SIZEOF_LONG */
+#if SIZEOF__BOOL == 1
+ case '?': pep_code = '?'; break;
+#elif SIZEOF__BOOL == 2
+ case '?': pep_code = 'H'; break;
+#elif SIZEOF__BOOL == 4
+ case '?': pep_code = 'L'; break;
+#elif SIZEOF__BOOL == 8
+ case '?': pep_code = 'Q'; break;
+#else
+# error SIZEOF__BOOL has an unexpected value
+#endif /* SIZEOF__BOOL */
+ default:
+ /* The standard-size code is the same as the ctypes one */
+ pep_code = code;
+ break;
+ }
+
+ result = PyMem_Malloc(3);
+ if (result == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ result[0] = big_endian ? '>' : '<';
+ result[1] = pep_code;
+ result[2] = '\0';
+ return result;
+}
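+
+/* Example (illustrative): on a platform where SIZEOF_LONG == 8, the ctypes
+   code 'l' maps to the standard-size struct code 'q', so this function
+   returns "<q" for little-endian data and ">q" for big-endian data. */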
+
+/*
+ Allocate a memory block for a pep3118 format string, copy prefix (if
+ non-null) and suffix into it. Returns NULL on failure, with the error
+ indicator set. If called with a suffix of NULL the error indicator must
+ already be set.
+ */
+char *
+_ctypes_alloc_format_string(const char *prefix, const char *suffix)
+{
+ size_t len;
+ char *result;
+
+ if (suffix == NULL) {
+ assert(PyErr_Occurred());
+ return NULL;
+ }
+ len = strlen(suffix);
+ if (prefix)
+ len += strlen(prefix);
+ result = PyMem_Malloc(len + 1);
+ if (result == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ if (prefix)
+ strcpy(result, prefix);
+ else
+ result[0] = '\0';
+ strcat(result, suffix);
+ return result;
+}
+
+/*
+ Allocate a memory block for a pep3118 format string, adding
+ the given prefix (if non-null), an additional shape prefix, and a suffix.
+ Returns NULL on failure, with the error indicator set. If called with
+ a suffix of NULL the error indicator must already be set.
+ */
+char *
+_ctypes_alloc_format_string_with_shape(int ndim, const Py_ssize_t *shape,
+ const char *prefix, const char *suffix)
+{
+ char *new_prefix;
+ char *result;
+ char buf[32];
+ Py_ssize_t prefix_len;
+ int k;
+
+ prefix_len = 32 * ndim + 3;
+ if (prefix)
+ prefix_len += strlen(prefix);
+ new_prefix = PyMem_Malloc(prefix_len);
+ if (new_prefix == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ new_prefix[0] = '\0';
+ if (prefix)
+ strcpy(new_prefix, prefix);
+ if (ndim > 0) {
+ /* Add the prefix "(shape[0],shape[1],...,shape[ndim-1])" */
+ strcat(new_prefix, "(");
+ for (k = 0; k < ndim; ++k) {
+ if (k < ndim-1) {
+ sprintf(buf, "%"PY_FORMAT_SIZE_T"d,", shape[k]);
+ } else {
+ sprintf(buf, "%"PY_FORMAT_SIZE_T"d)", shape[k]);
+ }
+ strcat(new_prefix, buf);
+ }
+ }
+ result = _ctypes_alloc_format_string(new_prefix, suffix);
+ PyMem_Free(new_prefix);
+ return result;
+}
+
+/*
+ PyCStructType_Type - a meta type/class. Creating a new class using this one as
+ __metaclass__ will call the constructor StructUnionType_new. It replaces the
+ tp_dict member with a new instance of StgDict, and initializes the C
+ accessible fields somehow.
+*/
+
+static PyCArgObject *
+StructUnionType_paramfunc(CDataObject *self)
+{
+ PyCArgObject *parg;
+ StgDictObject *stgdict;
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = 'V';
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for structure/union instances */
+ parg->pffi_type = &stgdict->ffi_type_pointer;
+ /* For structure parameters (by value), parg->value doesn't contain the structure
+ data itself, instead parg->value.p *points* to the structure's data
+ See also _ctypes.c, function _call_function_pointer().
+ */
+ parg->value.p = self->b_ptr;
+ parg->size = self->b_size;
+ Py_INCREF(self);
+ parg->obj = (PyObject *)self;
+ return parg;
+}
+
+static PyObject *
+StructUnionType_new(PyTypeObject *type, PyObject *args, PyObject *kwds, int isStruct)
+{
+ PyTypeObject *result;
+ PyObject *fields;
+ StgDictObject *dict;
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
+ if (!result)
+ return NULL;
+
+ /* keep this for bw compatibility */
+ if (PyDict_GetItemString(result->tp_dict, "_abstract_"))
+ return (PyObject *)result;
+
+ dict = (StgDictObject *)PyObject_CallObject((PyObject *)&PyCStgDict_Type, NULL);
+ if (!dict) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ /* replace the class dict by our updated stgdict, which holds info
+ about storage requirements of the instances */
+ if (-1 == PyDict_Update((PyObject *)dict, result->tp_dict)) {
+ Py_DECREF(result);
+ Py_DECREF((PyObject *)dict);
+ return NULL;
+ }
+ Py_SETREF(result->tp_dict, (PyObject *)dict);
+ dict->format = _ctypes_alloc_format_string(NULL, "B");
+ if (dict->format == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ dict->paramfunc = StructUnionType_paramfunc;
+
+ fields = PyDict_GetItemString((PyObject *)dict, "_fields_");
+ if (!fields) {
+ StgDictObject *basedict = PyType_stgdict((PyObject *)result->tp_base);
+
+ if (basedict == NULL)
+ return (PyObject *)result;
+ /* copy base dict */
+ if (-1 == PyCStgDict_clone(dict, basedict)) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ dict->flags &= ~DICTFLAG_FINAL; /* clear the 'final' flag in the subclass dict */
+ basedict->flags |= DICTFLAG_FINAL; /* set the 'final' flag in the baseclass dict */
+ return (PyObject *)result;
+ }
+
+ if (-1 == PyObject_SetAttrString((PyObject *)result, "_fields_", fields)) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ return (PyObject *)result;
+}
+
+static PyObject *
+PyCStructType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ return StructUnionType_new(type, args, kwds, 1);
+}
+
+static PyObject *
+UnionType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ return StructUnionType_new(type, args, kwds, 0);
+}
+
+static const char from_address_doc[] =
+"C.from_address(integer) -> C instance\naccess a C instance at the specified address";
+
+static PyObject *
+CDataType_from_address(PyObject *type, PyObject *value)
+{
+ void *buf;
+ if (!PyLong_Check(value)) {
+ PyErr_SetString(PyExc_TypeError,
+ "integer expected");
+ return NULL;
+ }
+ buf = (void *)PyLong_AsVoidPtr(value);
+ if (PyErr_Occurred())
+ return NULL;
+ return PyCData_AtAddress(type, buf);
+}
+
+static const char from_buffer_doc[] =
+"C.from_buffer(object, offset=0) -> C instance\ncreate a C instance from a writeable buffer";
+
+static int
+KeepRef(CDataObject *target, Py_ssize_t index, PyObject *keep);
+
+static PyObject *
+CDataType_from_buffer(PyObject *type, PyObject *args)
+{
+ PyObject *obj;
+ PyObject *mv;
+ PyObject *result;
+ Py_buffer *buffer;
+ Py_ssize_t offset = 0;
+
+ StgDictObject *dict = PyType_stgdict(type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError, "abstract class");
+ return NULL;
+ }
+
+ if (!PyArg_ParseTuple(args, "O|n:from_buffer", &obj, &offset))
+ return NULL;
+
+ mv = PyMemoryView_FromObject(obj);
+ if (mv == NULL)
+ return NULL;
+
+ buffer = PyMemoryView_GET_BUFFER(mv);
+
+ if (buffer->readonly) {
+ PyErr_SetString(PyExc_TypeError,
+ "underlying buffer is not writable");
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ if (!PyBuffer_IsContiguous(buffer, 'C')) {
+ PyErr_SetString(PyExc_TypeError,
+ "underlying buffer is not C contiguous");
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ if (offset < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "offset cannot be negative");
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ if (dict->size > buffer->len - offset) {
+ PyErr_Format(PyExc_ValueError,
+ "Buffer size too small "
+ "(%zd instead of at least %zd bytes)",
+ buffer->len, dict->size + offset);
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ result = PyCData_AtAddress(type, (char *)buffer->buf + offset);
+ if (result == NULL) {
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ if (-1 == KeepRef((CDataObject *)result, -1, mv)) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ return result;
+}
+
+static const char from_buffer_copy_doc[] =
+"C.from_buffer_copy(object, offset=0) -> C instance\ncreate a C instance from a readable buffer";
+
+static PyObject *
+GenericPyCData_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
+
+static PyObject *
+CDataType_from_buffer_copy(PyObject *type, PyObject *args)
+{
+ Py_buffer buffer;
+ Py_ssize_t offset = 0;
+ PyObject *result;
+ StgDictObject *dict = PyType_stgdict(type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError, "abstract class");
+ return NULL;
+ }
+
+ if (!PyArg_ParseTuple(args, "y*|n:from_buffer_copy", &buffer, &offset))
+ return NULL;
+
+ if (offset < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "offset cannot be negative");
+ PyBuffer_Release(&buffer);
+ return NULL;
+ }
+
+ if (dict->size > buffer.len - offset) {
+ PyErr_Format(PyExc_ValueError,
+ "Buffer size too small (%zd instead of at least %zd bytes)",
+ buffer.len, dict->size + offset);
+ PyBuffer_Release(&buffer);
+ return NULL;
+ }
+
+ result = GenericPyCData_new((PyTypeObject *)type, NULL, NULL);
+ if (result != NULL) {
+ memcpy(((CDataObject *)result)->b_ptr,
+ (char *)buffer.buf + offset, dict->size);
+ }
+ PyBuffer_Release(&buffer);
+ return result;
+}
+#ifndef UEFI_C_SOURCE
+static const char in_dll_doc[] =
+"C.in_dll(dll, name) -> C instance\naccess a C instance in a dll";
+
+static PyObject *
+CDataType_in_dll(PyObject *type, PyObject *args)
+{
+ PyObject *dll;
+ char *name;
+ PyObject *obj;
+ void *handle;
+ void *address;
+
+ if (!PyArg_ParseTuple(args, "Os:in_dll", &dll, &name))
+ return NULL;
+
+ obj = PyObject_GetAttrString(dll, "_handle");
+ if (!obj)
+ return NULL;
+ if (!PyLong_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError,
+ "the _handle attribute of the second argument must be an integer");
+ Py_DECREF(obj);
+ return NULL;
+ }
+ handle = (void *)PyLong_AsVoidPtr(obj);
+ Py_DECREF(obj);
+ if (PyErr_Occurred()) {
+ PyErr_SetString(PyExc_ValueError,
+ "could not convert the _handle attribute to a pointer");
+ return NULL;
+ }
+
+#ifdef MS_WIN32
+ address = (void *)GetProcAddress(handle, name);
+ if (!address) {
+ PyErr_Format(PyExc_ValueError,
+ "symbol '%s' not found",
+ name);
+ return NULL;
+ }
+#else
+ address = (void *)ctypes_dlsym(handle, name);
+ if (!address) {
+#ifdef __CYGWIN__
+/* dlerror() isn't very helpful on cygwin */
+ PyErr_Format(PyExc_ValueError,
+ "symbol '%s' not found",
+ name);
+#else
+ PyErr_SetString(PyExc_ValueError, ctypes_dlerror());
+#endif
+ return NULL;
+ }
+#endif
+ return PyCData_AtAddress(type, address);
+}
+#endif
+static const char from_param_doc[] =
+"Convert a Python object into a function call parameter.";
+
+static PyObject *
+CDataType_from_param(PyObject *type, PyObject *value)
+{
+ PyObject *as_parameter;
+ int res = PyObject_IsInstance(value, type);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ Py_INCREF(value);
+ return value;
+ }
+ if (PyCArg_CheckExact(value)) {
+ PyCArgObject *p = (PyCArgObject *)value;
+ PyObject *ob = p->obj;
+ const char *ob_name;
+ StgDictObject *dict;
+ dict = PyType_stgdict(type);
+
+ /* If we got a PyCArgObject, we must check if the object packed in it
+ is an instance of the type's dict->proto */
+ if(dict && ob) {
+ res = PyObject_IsInstance(ob, dict->proto);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+ ob_name = (ob) ? Py_TYPE(ob)->tp_name : "???";
+ PyErr_Format(PyExc_TypeError,
+ "expected %s instance instead of pointer to %s",
+ ((PyTypeObject *)type)->tp_name, ob_name);
+ return NULL;
+ }
+
+ as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
+ if (as_parameter) {
+ value = CDataType_from_param(type, as_parameter);
+ Py_DECREF(as_parameter);
+ return value;
+ }
+ PyErr_Format(PyExc_TypeError,
+ "expected %s instance instead of %s",
+ ((PyTypeObject *)type)->tp_name,
+ Py_TYPE(value)->tp_name);
+ return NULL;
+}
+
+static PyMethodDef CDataType_methods[] = {
+ { "from_param", CDataType_from_param, METH_O, from_param_doc },
+ { "from_address", CDataType_from_address, METH_O, from_address_doc },
+ { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
+ { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
+#ifndef UEFI_C_SOURCE
+ { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc },
+#endif
+ { NULL, NULL },
+};
+
+static PyObject *
+CDataType_repeat(PyObject *self, Py_ssize_t length)
+{
+ if (length < 0)
+ return PyErr_Format(PyExc_ValueError,
+ "Array length must be >= 0, not %zd",
+ length);
+ return PyCArrayType_from_ctype(self, length);
+}
+
+static PySequenceMethods CDataType_as_sequence = {
+ 0, /* inquiry sq_length; */
+ 0, /* binaryfunc sq_concat; */
+ CDataType_repeat, /* intargfunc sq_repeat; */
+ 0, /* intargfunc sq_item; */
+ 0, /* intintargfunc sq_slice; */
+ 0, /* intobjargproc sq_ass_item; */
+ 0, /* intintobjargproc sq_ass_slice; */
+ 0, /* objobjproc sq_contains; */
+
+ 0, /* binaryfunc sq_inplace_concat; */
+ 0, /* intargfunc sq_inplace_repeat; */
+};
+
+static int
+CDataType_clear(PyTypeObject *self)
+{
+ StgDictObject *dict = PyType_stgdict((PyObject *)self);
+ if (dict)
+ Py_CLEAR(dict->proto);
+ return PyType_Type.tp_clear((PyObject *)self);
+}
+
+static int
+CDataType_traverse(PyTypeObject *self, visitproc visit, void *arg)
+{
+ StgDictObject *dict = PyType_stgdict((PyObject *)self);
+ if (dict)
+ Py_VISIT(dict->proto);
+ return PyType_Type.tp_traverse((PyObject *)self, visit, arg);
+}
+
+static int
+PyCStructType_setattro(PyObject *self, PyObject *key, PyObject *value)
+{
+ /* XXX Should we disallow deleting _fields_? */
+ if (-1 == PyType_Type.tp_setattro(self, key, value))
+ return -1;
+
+ if (value && PyUnicode_Check(key) &&
+ _PyUnicode_EqualToASCIIString(key, "_fields_"))
+ return PyCStructUnionType_update_stgdict(self, value, 1);
+ return 0;
+}
+
+
+static int
+UnionType_setattro(PyObject *self, PyObject *key, PyObject *value)
+{
+ /* XXX Should we disallow deleting _fields_? */
+ if (-1 == PyObject_GenericSetAttr(self, key, value))
+ return -1;
+
+ if (PyUnicode_Check(key) &&
+ _PyUnicode_EqualToASCIIString(key, "_fields_"))
+ return PyCStructUnionType_update_stgdict(self, value, 0);
+ return 0;
+}
+
+
+PyTypeObject PyCStructType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCStructType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ PyCStructType_setattro, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ "metatype for the CData Objects", /* tp_doc */
+ (traverseproc)CDataType_traverse, /* tp_traverse */
+ (inquiry)CDataType_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ CDataType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCStructType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+static PyTypeObject UnionType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.UnionType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ UnionType_setattro, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ "metatype for the CData Objects", /* tp_doc */
+ (traverseproc)CDataType_traverse, /* tp_traverse */
+ (inquiry)CDataType_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ CDataType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ UnionType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/******************************************************************/
+
+/*
+
+The PyCPointerType_Type metaclass must ensure that the subclass of Pointer can be
+created. It must check for a _type_ attribute in the class. Since there are no
+runtime created properties, a CField is probably *not* needed?
+
+class IntPointer(Pointer):
+ _type_ = "i"
+
+The PyCPointer_Type provides the functionality: a contents method/property, a
+size property/method, and the sequence protocol.
+
+*/
+
+static int
+PyCPointerType_SetProto(StgDictObject *stgdict, PyObject *proto)
+{
+ if (!proto || !PyType_Check(proto)) {
+ PyErr_SetString(PyExc_TypeError,
+ "_type_ must be a type");
+ return -1;
+ }
+ if (!PyType_stgdict(proto)) {
+ PyErr_SetString(PyExc_TypeError,
+ "_type_ must have storage info");
+ return -1;
+ }
+ Py_INCREF(proto);
+ Py_XSETREF(stgdict->proto, proto);
+ return 0;
+}
+
+static PyCArgObject *
+PyCPointerType_paramfunc(CDataObject *self)
+{
+ PyCArgObject *parg;
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = 'P';
+ parg->pffi_type = &ffi_type_pointer;
+ Py_INCREF(self);
+ parg->obj = (PyObject *)self;
+ parg->value.p = *(void **)self->b_ptr;
+ return parg;
+}
+
+static PyObject *
+PyCPointerType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyTypeObject *result;
+ StgDictObject *stgdict;
+ PyObject *proto;
+ PyObject *typedict;
+
+ typedict = PyTuple_GetItem(args, 2);
+ if (!typedict)
+ return NULL;
+/*
+ stgdict items size, align, length contain info about the pointer itself,
+ stgdict->proto has info about the pointed to type!
+*/
+ stgdict = (StgDictObject *)PyObject_CallObject(
+ (PyObject *)&PyCStgDict_Type, NULL);
+ if (!stgdict)
+ return NULL;
+ stgdict->size = sizeof(void *);
+ stgdict->align = _ctypes_get_fielddesc("P")->pffi_type->alignment;
+ stgdict->length = 1;
+ stgdict->ffi_type_pointer = ffi_type_pointer;
+ stgdict->paramfunc = PyCPointerType_paramfunc;
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+
+ proto = PyDict_GetItemString(typedict, "_type_"); /* Borrowed ref */
+ if (proto && -1 == PyCPointerType_SetProto(stgdict, proto)) {
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+
+ if (proto) {
+ StgDictObject *itemdict = PyType_stgdict(proto);
+ const char *current_format;
+ /* PyCPointerType_SetProto has verified proto has a stgdict. */
+ assert(itemdict);
+ /* If itemdict->format is NULL, then this is a pointer to an
+ incomplete type. We create a generic format string
+ 'pointer to bytes' in this case. XXX Better would be to
+ fix the format string later...
+ */
+ current_format = itemdict->format ? itemdict->format : "B";
+ if (itemdict->shape != NULL) {
+ /* pointer to an array: the shape needs to be prefixed */
+ stgdict->format = _ctypes_alloc_format_string_with_shape(
+ itemdict->ndim, itemdict->shape, "&", current_format);
+ } else {
+ stgdict->format = _ctypes_alloc_format_string("&", current_format);
+ }
+ if (stgdict->format == NULL) {
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ }
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
+ if (result == NULL) {
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+
+ /* replace the class dict by our updated spam dict */
+ if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
+ Py_DECREF(result);
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ Py_SETREF(result->tp_dict, (PyObject *)stgdict);
+
+ return (PyObject *)result;
+}
+
+
+static PyObject *
+PyCPointerType_set_type(PyTypeObject *self, PyObject *type)
+{
+ StgDictObject *dict;
+
+ dict = PyType_stgdict((PyObject *)self);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+
+ if (-1 == PyCPointerType_SetProto(dict, type))
+ return NULL;
+
+ if (-1 == PyDict_SetItemString((PyObject *)dict, "_type_", type))
+ return NULL;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static PyObject *_byref(PyObject *);
+
+static PyObject *
+PyCPointerType_from_param(PyObject *type, PyObject *value)
+{
+ StgDictObject *typedict;
+
+ if (value == Py_None) {
+ /* ConvParam will convert to a NULL pointer later */
+ Py_INCREF(value);
+ return value;
+ }
+
+ typedict = PyType_stgdict(type);
+ if (!typedict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+
+ /* If we expect POINTER(<type>), but receive a <type> instance, accept
+ it by calling byref(<type>).
+ */
+ switch (PyObject_IsInstance(value, typedict->proto)) {
+ case 1:
+ Py_INCREF(value); /* _byref steals a refcount */
+ return _byref(value);
+ case -1:
+ return NULL;
+ default:
+ break;
+ }
+
+ if (PointerObject_Check(value) || ArrayObject_Check(value)) {
+ /* Array instances are also pointers when
+ the item types are the same.
+ */
+ StgDictObject *v = PyObject_stgdict(value);
+ assert(v); /* Cannot be NULL for pointer or array objects */
+ if (PyObject_IsSubclass(v->proto, typedict->proto)) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+ return CDataType_from_param(type, value);
+}
+
+static PyMethodDef PyCPointerType_methods[] = {
+ { "from_address", CDataType_from_address, METH_O, from_address_doc },
+ { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
+ { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
+#ifndef UEFI_C_SOURCE
+ { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc},
+#endif
+ { "from_param", (PyCFunction)PyCPointerType_from_param, METH_O, from_param_doc},
+ { "set_type", (PyCFunction)PyCPointerType_set_type, METH_O },
+ { NULL, NULL },
+};
+
+PyTypeObject PyCPointerType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCPointerType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ "metatype for the Pointer Objects", /* tp_doc */
+ (traverseproc)CDataType_traverse, /* tp_traverse */
+ (inquiry)CDataType_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ PyCPointerType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCPointerType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/******************************************************************/
+/*
+ PyCArrayType_Type
+*/
+/*
+ PyCArrayType_new ensures that the new Array subclass created has a _length_
+ attribute, and a _type_ attribute.
+*/
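+
+/* Python-level sketch (illustrative): such a subclass is normally created
+   with the sequence-repeat syntax,
+
+       from ctypes import c_int, Array
+       IntArray5 = c_int * 5
+
+   which is equivalent to spelling out
+
+       class IntArray5(Array):
+           _type_ = c_int
+           _length_ = 5
+*/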
+
+static int
+CharArray_set_raw(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
+{
+ char *ptr;
+ Py_ssize_t size;
+ Py_buffer view;
+
+ if (PyObject_GetBuffer(value, &view, PyBUF_SIMPLE) < 0)
+ return -1;
+ size = view.len;
+ ptr = view.buf;
+ if (size > self->b_size) {
+ PyErr_SetString(PyExc_ValueError,
+ "byte string too long");
+ goto fail;
+ }
+
+ memcpy(self->b_ptr, ptr, size);
+
+ PyBuffer_Release(&view);
+ return 0;
+ fail:
+ PyBuffer_Release(&view);
+ return -1;
+}
+
+static PyObject *
+CharArray_get_raw(CDataObject *self, void *Py_UNUSED(ignored))
+{
+ return PyBytes_FromStringAndSize(self->b_ptr, self->b_size);
+}
+
+static PyObject *
+CharArray_get_value(CDataObject *self, void *Py_UNUSED(ignored))
+{
+ Py_ssize_t i;
+ char *ptr = self->b_ptr;
+ for (i = 0; i < self->b_size; ++i)
+ if (*ptr++ == '\0')
+ break;
+ return PyBytes_FromStringAndSize(self->b_ptr, i);
+}
+
+static int
+CharArray_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
+{
+ char *ptr;
+ Py_ssize_t size;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "can't delete attribute");
+ return -1;
+ }
+
+ if (!PyBytes_Check(value)) {
+ PyErr_Format(PyExc_TypeError,
+ "bytes expected instead of %s instance",
+ Py_TYPE(value)->tp_name);
+ return -1;
+ } else
+ Py_INCREF(value);
+ size = PyBytes_GET_SIZE(value);
+ if (size > self->b_size) {
+ PyErr_SetString(PyExc_ValueError,
+ "byte string too long");
+ Py_DECREF(value);
+ return -1;
+ }
+
+ ptr = PyBytes_AS_STRING(value);
+ memcpy(self->b_ptr, ptr, size);
+ if (size < self->b_size)
+ self->b_ptr[size] = '\0';
+ Py_DECREF(value);
+
+ return 0;
+}
+
+static PyGetSetDef CharArray_getsets[] = {
+ { "raw", (getter)CharArray_get_raw, (setter)CharArray_set_raw,
+ "value", NULL },
+ { "value", (getter)CharArray_get_value, (setter)CharArray_set_value,
+ "string value"},
+ { NULL, NULL }
+};
+
+#ifdef CTYPES_UNICODE
+static PyObject *
+WCharArray_get_value(CDataObject *self, void *Py_UNUSED(ignored))
+{
+ Py_ssize_t i;
+ wchar_t *ptr = (wchar_t *)self->b_ptr;
+ for (i = 0; i < self->b_size/(Py_ssize_t)sizeof(wchar_t); ++i)
+ if (*ptr++ == (wchar_t)0)
+ break;
+ return PyUnicode_FromWideChar((wchar_t *)self->b_ptr, i);
+}
+
+static int
+WCharArray_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
+{
+ Py_ssize_t result = 0;
+ Py_UNICODE *wstr;
+ Py_ssize_t len;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "can't delete attribute");
+ return -1;
+ }
+ if (!PyUnicode_Check(value)) {
+ PyErr_Format(PyExc_TypeError,
+ "unicode string expected instead of %s instance",
+ Py_TYPE(value)->tp_name);
+ return -1;
+ } else
+ Py_INCREF(value);
+
+ wstr = PyUnicode_AsUnicodeAndSize(value, &len);
+ if (wstr == NULL)
+ return -1;
+ if ((size_t)len > self->b_size/sizeof(wchar_t)) {
+ PyErr_SetString(PyExc_ValueError,
+ "string too long");
+ result = -1;
+ goto done;
+ }
+ result = PyUnicode_AsWideChar(value,
+ (wchar_t *)self->b_ptr,
+ self->b_size/sizeof(wchar_t));
+ if (result >= 0 && (size_t)result < self->b_size/sizeof(wchar_t))
+ ((wchar_t *)self->b_ptr)[result] = (wchar_t)0;
+ done:
+ Py_DECREF(value);
+
+ return result >= 0 ? 0 : -1;
+}
+
+static PyGetSetDef WCharArray_getsets[] = {
+ { "value", (getter)WCharArray_get_value, (setter)WCharArray_set_value,
+ "string value"},
+ { NULL, NULL }
+};
+#endif
+
+/*
+ The next three functions copied from Python's typeobject.c.
+
+ They are used to attach methods, members, or getsets to a type *after* it
+ has been created: Arrays of characters have additional getsets to treat them
+ as strings.
+ */
+/*
+static int
+add_methods(PyTypeObject *type, PyMethodDef *meth)
+{
+ PyObject *dict = type->tp_dict;
+ for (; meth->ml_name != NULL; meth++) {
+ PyObject *descr;
+ descr = PyDescr_NewMethod(type, meth);
+ if (descr == NULL)
+ return -1;
+ if (PyDict_SetItemString(dict, meth->ml_name, descr) < 0) {
+ Py_DECREF(descr);
+ return -1;
+ }
+ Py_DECREF(descr);
+ }
+ return 0;
+}
+
+static int
+add_members(PyTypeObject *type, PyMemberDef *memb)
+{
+ PyObject *dict = type->tp_dict;
+ for (; memb->name != NULL; memb++) {
+ PyObject *descr;
+ descr = PyDescr_NewMember(type, memb);
+ if (descr == NULL)
+ return -1;
+ if (PyDict_SetItemString(dict, memb->name, descr) < 0) {
+ Py_DECREF(descr);
+ return -1;
+ }
+ Py_DECREF(descr);
+ }
+ return 0;
+}
+*/
+
+static int
+add_getset(PyTypeObject *type, PyGetSetDef *gsp)
+{
+ PyObject *dict = type->tp_dict;
+ for (; gsp->name != NULL; gsp++) {
+ PyObject *descr;
+ descr = PyDescr_NewGetSet(type, gsp);
+ if (descr == NULL)
+ return -1;
+ if (PyDict_SetItemString(dict, gsp->name, descr) < 0) {
+ Py_DECREF(descr);
+ return -1;
+ }
+ Py_DECREF(descr);
+ }
+ return 0;
+}
+
+static PyCArgObject *
+PyCArrayType_paramfunc(CDataObject *self)
+{
+ PyCArgObject *p = PyCArgObject_new();
+ if (p == NULL)
+ return NULL;
+ p->tag = 'P';
+ p->pffi_type = &ffi_type_pointer;
+ p->value.p = (char *)self->b_ptr;
+ Py_INCREF(self);
+ p->obj = (PyObject *)self;
+ return p;
+}
+
+static PyObject *
+PyCArrayType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyTypeObject *result;
+ StgDictObject *stgdict;
+ StgDictObject *itemdict;
+ PyObject *length_attr, *type_attr;
+ Py_ssize_t length;
+ Py_ssize_t itemsize, itemalign;
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
+ if (result == NULL)
+ return NULL;
+
+ /* Initialize these variables to NULL so that we can simplify error
+ handling by using Py_XDECREF. */
+ stgdict = NULL;
+ type_attr = NULL;
+
+ length_attr = PyObject_GetAttrString((PyObject *)result, "_length_");
+ if (!length_attr || !PyLong_Check(length_attr)) {
+ PyErr_SetString(PyExc_AttributeError,
+ "class must define a '_length_' attribute, "
+ "which must be a positive integer");
+ Py_XDECREF(length_attr);
+ goto error;
+ }
+ length = PyLong_AsSsize_t(length_attr);
+ Py_DECREF(length_attr);
+ if (length == -1 && PyErr_Occurred()) {
+ if (PyErr_ExceptionMatches(PyExc_OverflowError)) {
+ PyErr_SetString(PyExc_OverflowError,
+ "The '_length_' attribute is too large");
+ }
+ goto error;
+ }
+
+ type_attr = PyObject_GetAttrString((PyObject *)result, "_type_");
+ if (!type_attr) {
+ PyErr_SetString(PyExc_AttributeError,
+ "class must define a '_type_' attribute");
+ goto error;
+ }
+
+ stgdict = (StgDictObject *)PyObject_CallObject(
+ (PyObject *)&PyCStgDict_Type, NULL);
+ if (!stgdict)
+ goto error;
+
+ itemdict = PyType_stgdict(type_attr);
+ if (!itemdict) {
+ PyErr_SetString(PyExc_TypeError,
+ "_type_ must have storage info");
+ goto error;
+ }
+
+ assert(itemdict->format);
+ stgdict->format = _ctypes_alloc_format_string(NULL, itemdict->format);
+ if (stgdict->format == NULL)
+ goto error;
+ stgdict->ndim = itemdict->ndim + 1;
+ stgdict->shape = PyMem_Malloc(sizeof(Py_ssize_t) * stgdict->ndim);
+ if (stgdict->shape == NULL) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ stgdict->shape[0] = length;
+ if (stgdict->ndim > 1) {
+ memmove(&stgdict->shape[1], itemdict->shape,
+ sizeof(Py_ssize_t) * (stgdict->ndim - 1));
+ }
+
+ itemsize = itemdict->size;
+ if (length * itemsize < 0) {
+ PyErr_SetString(PyExc_OverflowError,
+ "array too large");
+ goto error;
+ }
+
+ itemalign = itemdict->align;
+
+ if (itemdict->flags & (TYPEFLAG_ISPOINTER | TYPEFLAG_HASPOINTER))
+ stgdict->flags |= TYPEFLAG_HASPOINTER;
+
+ stgdict->size = itemsize * length;
+ stgdict->align = itemalign;
+ stgdict->length = length;
+ stgdict->proto = type_attr;
+
+ stgdict->paramfunc = &PyCArrayType_paramfunc;
+
+ /* Arrays are passed as pointers to function calls. */
+ stgdict->ffi_type_pointer = ffi_type_pointer;
+
+ /* replace the class dict by our updated spam dict */
+ if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict))
+ goto error;
+ Py_SETREF(result->tp_dict, (PyObject *)stgdict); /* steal the reference */
+ stgdict = NULL;
+
+ /* Special case for character arrays.
+ A permanent annoyance: char arrays are also strings!
+ */
+ if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
+ if (-1 == add_getset(result, CharArray_getsets))
+ goto error;
+#ifdef CTYPES_UNICODE
+ } else if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
+ if (-1 == add_getset(result, WCharArray_getsets))
+ goto error;
+#endif
+ }
+
+ return (PyObject *)result;
+error:
+ Py_XDECREF((PyObject*)stgdict);
+ Py_XDECREF(type_attr);
+ Py_DECREF(result);
+ return NULL;
+}
+
+PyTypeObject PyCArrayType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCArrayType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "metatype for the Array Objects", /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ CDataType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCArrayType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/******************************************************************/
+/*
+ PyCSimpleType_Type
+*/
+/*
+
+PyCSimpleType_new ensures that the new Simple_Type subclass created has a valid
+_type_ attribute.
+
+*/
+
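+/* Python-level sketch (illustrative): ctypes defines its simple types along
+   the lines of
+
+       class c_int(_SimpleCData):
+           _type_ = "i"
+
+   and PyCSimpleType_new below rejects any _type_ that is not a single
+   character from SIMPLE_TYPE_CHARS. */
+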
+static const char SIMPLE_TYPE_CHARS[] = "cbBhHiIlLdfuzZqQPXOv?g";
+
+static PyObject *
+c_wchar_p_from_param(PyObject *type, PyObject *value)
+{
+ PyObject *as_parameter;
+ int res;
+ if (value == Py_None) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+ if (PyUnicode_Check(value)) {
+ PyCArgObject *parg;
+ struct fielddesc *fd = _ctypes_get_fielddesc("Z");
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'Z';
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj == NULL) {
+ Py_DECREF(parg);
+ return NULL;
+ }
+ return (PyObject *)parg;
+ }
+ res = PyObject_IsInstance(value, type);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ Py_INCREF(value);
+ return value;
+ }
+ if (ArrayObject_Check(value) || PointerObject_Check(value)) {
+ /* c_wchar array instance or pointer(c_wchar(...)) */
+ StgDictObject *dt = PyObject_stgdict(value);
+ StgDictObject *dict;
+ assert(dt); /* Cannot be NULL for pointer or array objects */
+ dict = dt && dt->proto ? PyType_stgdict(dt->proto) : NULL;
+ if (dict && (dict->setfunc == _ctypes_get_fielddesc("u")->setfunc)) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+ if (PyCArg_CheckExact(value)) {
+ /* byref(c_char(...)) */
+ PyCArgObject *a = (PyCArgObject *)value;
+ StgDictObject *dict = PyObject_stgdict(a->obj);
+ if (dict && (dict->setfunc == _ctypes_get_fielddesc("u")->setfunc)) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+
+ as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
+ if (as_parameter) {
+ value = c_wchar_p_from_param(type, as_parameter);
+ Py_DECREF(as_parameter);
+ return value;
+ }
+ /* XXX better message */
+ PyErr_SetString(PyExc_TypeError,
+ "wrong type");
+ return NULL;
+}
+
+static PyObject *
+c_char_p_from_param(PyObject *type, PyObject *value)
+{
+ PyObject *as_parameter;
+ int res;
+ if (value == Py_None) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+ if (PyBytes_Check(value)) {
+ PyCArgObject *parg;
+ struct fielddesc *fd = _ctypes_get_fielddesc("z");
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'z';
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj == NULL) {
+ Py_DECREF(parg);
+ return NULL;
+ }
+ return (PyObject *)parg;
+ }
+ res = PyObject_IsInstance(value, type);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ Py_INCREF(value);
+ return value;
+ }
+ if (ArrayObject_Check(value) || PointerObject_Check(value)) {
+ /* c_char array instance or pointer(c_char(...)) */
+ StgDictObject *dt = PyObject_stgdict(value);
+ StgDictObject *dict;
+ assert(dt); /* Cannot be NULL for pointer or array objects */
+ dict = dt && dt->proto ? PyType_stgdict(dt->proto) : NULL;
+ if (dict && (dict->setfunc == _ctypes_get_fielddesc("c")->setfunc)) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+ if (PyCArg_CheckExact(value)) {
+ /* byref(c_char(...)) */
+ PyCArgObject *a = (PyCArgObject *)value;
+ StgDictObject *dict = PyObject_stgdict(a->obj);
+ if (dict && (dict->setfunc == _ctypes_get_fielddesc("c")->setfunc)) {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+
+ as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
+ if (as_parameter) {
+ value = c_char_p_from_param(type, as_parameter);
+ Py_DECREF(as_parameter);
+ return value;
+ }
+ /* XXX better message */
+ PyErr_SetString(PyExc_TypeError,
+ "wrong type");
+ return NULL;
+}
+
+static PyObject *
+c_void_p_from_param(PyObject *type, PyObject *value)
+{
+ StgDictObject *stgd;
+ PyObject *as_parameter;
+ int res;
+
+/* None */
+ if (value == Py_None) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+ /* Should probably allow buffer interface as well */
+/* int, long */
+ if (PyLong_Check(value)) {
+ PyCArgObject *parg;
+ struct fielddesc *fd = _ctypes_get_fielddesc("P");
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'P';
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj == NULL) {
+ Py_DECREF(parg);
+ return NULL;
+ }
+ return (PyObject *)parg;
+ }
+ /* XXX struni: remove later */
+/* bytes */
+ if (PyBytes_Check(value)) {
+ PyCArgObject *parg;
+ struct fielddesc *fd = _ctypes_get_fielddesc("z");
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'z';
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj == NULL) {
+ Py_DECREF(parg);
+ return NULL;
+ }
+ return (PyObject *)parg;
+ }
+/* unicode */
+ if (PyUnicode_Check(value)) {
+ PyCArgObject *parg;
+ struct fielddesc *fd = _ctypes_get_fielddesc("Z");
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'Z';
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj == NULL) {
+ Py_DECREF(parg);
+ return NULL;
+ }
+ return (PyObject *)parg;
+ }
+/* c_void_p instance (or subclass) */
+ res = PyObject_IsInstance(value, type);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ /* c_void_p instances */
+ Py_INCREF(value);
+ return value;
+ }
+/* ctypes array or pointer instance */
+ if (ArrayObject_Check(value) || PointerObject_Check(value)) {
+ /* Any array or pointer is accepted */
+ Py_INCREF(value);
+ return value;
+ }
+/* byref(...) */
+ if (PyCArg_CheckExact(value)) {
+ /* byref(c_xxx()) */
+ PyCArgObject *a = (PyCArgObject *)value;
+ if (a->tag == 'P') {
+ Py_INCREF(value);
+ return value;
+ }
+ }
+/* function pointer */
+ if (PyCFuncPtrObject_Check(value)) {
+ PyCArgObject *parg;
+ PyCFuncPtrObject *func;
+ func = (PyCFuncPtrObject *)value;
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'P';
+ Py_INCREF(value);
+ parg->value.p = *(void **)func->b_ptr;
+ parg->obj = value;
+ return (PyObject *)parg;
+ }
+/* c_char_p, c_wchar_p */
+ stgd = PyObject_stgdict(value);
+ if (stgd && CDataObject_Check(value) && stgd->proto && PyUnicode_Check(stgd->proto)) {
+ PyCArgObject *parg;
+
+ switch (PyUnicode_AsUTF8(stgd->proto)[0]) {
+ case 'z': /* c_char_p */
+ case 'Z': /* c_wchar_p */
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+ parg->pffi_type = &ffi_type_pointer;
+ parg->tag = 'Z';
+ Py_INCREF(value);
+ parg->obj = value;
+ /* Remember: b_ptr points to where the pointer is stored! */
+ parg->value.p = *(void **)(((CDataObject *)value)->b_ptr);
+ return (PyObject *)parg;
+ }
+ }
+
+ as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
+ if (as_parameter) {
+ value = c_void_p_from_param(type, as_parameter);
+ Py_DECREF(as_parameter);
+ return value;
+ }
+ /* XXX better message */
+ PyErr_SetString(PyExc_TypeError,
+ "wrong type");
+ return NULL;
+}
+
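+/* Python-level sketch of the conversions implemented by
+   c_void_p_from_param() above (illustrative only):
+
+       from ctypes import c_void_p, c_int, byref
+       c_void_p.from_param(None)              # converted to a NULL pointer later
+       c_void_p.from_param(0x1000)            # any integer address
+       c_void_p.from_param(b"data")           # bytes, kept alive via the parg
+       c_void_p.from_param(byref(c_int(1)))   # byref() result with tag 'P'
+*/
+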
+static PyMethodDef c_void_p_method = { "from_param", c_void_p_from_param, METH_O };
+static PyMethodDef c_char_p_method = { "from_param", c_char_p_from_param, METH_O };
+static PyMethodDef c_wchar_p_method = { "from_param", c_wchar_p_from_param, METH_O };
+
+static PyObject *CreateSwappedType(PyTypeObject *type, PyObject *args, PyObject *kwds,
+ PyObject *proto, struct fielddesc *fmt)
+{
+ PyTypeObject *result;
+ StgDictObject *stgdict;
+ PyObject *name = PyTuple_GET_ITEM(args, 0);
+ PyObject *newname;
+ PyObject *swapped_args;
+ static PyObject *suffix;
+ Py_ssize_t i;
+
+ swapped_args = PyTuple_New(PyTuple_GET_SIZE(args));
+ if (!swapped_args)
+ return NULL;
+
+ if (suffix == NULL)
+#ifdef WORDS_BIGENDIAN
+ suffix = PyUnicode_InternFromString("_le");
+#else
+ suffix = PyUnicode_InternFromString("_be");
+#endif
+ if (suffix == NULL) {
+ Py_DECREF(swapped_args);
+ return NULL;
+ }
+
+ newname = PyUnicode_Concat(name, suffix);
+ if (newname == NULL) {
+ Py_DECREF(swapped_args);
+ return NULL;
+ }
+
+ PyTuple_SET_ITEM(swapped_args, 0, newname);
+ for (i=1; i<PyTuple_GET_SIZE(args); ++i) {
+ PyObject *v = PyTuple_GET_ITEM(args, i);
+ Py_INCREF(v);
+ PyTuple_SET_ITEM(swapped_args, i, v);
+ }
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, swapped_args, kwds);
+ Py_DECREF(swapped_args);
+ if (result == NULL)
+ return NULL;
+
+ stgdict = (StgDictObject *)PyObject_CallObject(
+ (PyObject *)&PyCStgDict_Type, NULL);
+ if (!stgdict) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ stgdict->ffi_type_pointer = *fmt->pffi_type;
+ stgdict->align = fmt->pffi_type->alignment;
+ stgdict->length = 0;
+ stgdict->size = fmt->pffi_type->size;
+ stgdict->setfunc = fmt->setfunc_swapped;
+ stgdict->getfunc = fmt->getfunc_swapped;
+
+ Py_INCREF(proto);
+ stgdict->proto = proto;
+
+ /* replace the class dict by our updated storage dict */
+ if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
+ Py_DECREF(result);
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ Py_SETREF(result->tp_dict, (PyObject *)stgdict);
+
+ return (PyObject *)result;
+}
+
+static PyCArgObject *
+PyCSimpleType_paramfunc(CDataObject *self)
+{
+ StgDictObject *dict;
+ char *fmt;
+ PyCArgObject *parg;
+ struct fielddesc *fd;
+
+ dict = PyObject_stgdict((PyObject *)self);
+ assert(dict); /* Cannot be NULL for CDataObject instances */
+ fmt = PyUnicode_AsUTF8(dict->proto);
+ assert(fmt);
+
+ fd = _ctypes_get_fielddesc(fmt);
+ assert(fd);
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = fmt[0];
+ parg->pffi_type = fd->pffi_type;
+ Py_INCREF(self);
+ parg->obj = (PyObject *)self;
+ memcpy(&parg->value, self->b_ptr, self->b_size);
+ return parg;
+}
+
+static PyObject *
+PyCSimpleType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyTypeObject *result;
+ StgDictObject *stgdict;
+ PyObject *proto;
+ const char *proto_str;
+ Py_ssize_t proto_len;
+ PyMethodDef *ml;
+ struct fielddesc *fmt;
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
+ if (result == NULL)
+ return NULL;
+
+ proto = PyObject_GetAttrString((PyObject *)result, "_type_"); /* new ref */
+ if (!proto) {
+ PyErr_SetString(PyExc_AttributeError,
+ "class must define a '_type_' attribute");
+ error:
+ Py_XDECREF(proto);
+ Py_XDECREF(result);
+ return NULL;
+ }
+ if (PyUnicode_Check(proto)) {
+ proto_str = PyUnicode_AsUTF8AndSize(proto, &proto_len);
+ if (!proto_str)
+ goto error;
+ } else {
+ PyErr_SetString(PyExc_TypeError,
+ "class must define a '_type_' string attribute");
+ goto error;
+ }
+ if (proto_len != 1) {
+ PyErr_SetString(PyExc_ValueError,
+ "class must define a '_type_' attribute "
+ "which must be a string of length 1");
+ goto error;
+ }
+ if (!strchr(SIMPLE_TYPE_CHARS, *proto_str)) {
+ PyErr_Format(PyExc_AttributeError,
+ "class must define a '_type_' attribute which must be\n"
+ "a single character string containing one of '%s'.",
+ SIMPLE_TYPE_CHARS);
+ goto error;
+ }
+ fmt = _ctypes_get_fielddesc(proto_str);
+ if (fmt == NULL) {
+ PyErr_Format(PyExc_ValueError,
+ "_type_ '%s' not supported", proto_str);
+ goto error;
+ }
+
+ stgdict = (StgDictObject *)PyObject_CallObject(
+ (PyObject *)&PyCStgDict_Type, NULL);
+ if (!stgdict)
+ goto error;
+
+ stgdict->ffi_type_pointer = *fmt->pffi_type;
+ stgdict->align = fmt->pffi_type->alignment;
+ stgdict->length = 0;
+ stgdict->size = fmt->pffi_type->size;
+ stgdict->setfunc = fmt->setfunc;
+ stgdict->getfunc = fmt->getfunc;
+#ifdef WORDS_BIGENDIAN
+ stgdict->format = _ctypes_alloc_format_string_for_type(proto_str[0], 1);
+#else
+ stgdict->format = _ctypes_alloc_format_string_for_type(proto_str[0], 0);
+#endif
+ if (stgdict->format == NULL) {
+ Py_DECREF(result);
+ Py_DECREF(proto);
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+
+ stgdict->paramfunc = PyCSimpleType_paramfunc;
+/*
+ if (result->tp_base != &Simple_Type) {
+ stgdict->setfunc = NULL;
+ stgdict->getfunc = NULL;
+ }
+*/
+
+ /* This consumes the refcount on proto which we have */
+ stgdict->proto = proto;
+
+ /* replace the class dict by our updated storage dict */
+ if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
+ Py_DECREF(result);
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ Py_SETREF(result->tp_dict, (PyObject *)stgdict);
+
+ /* Install from_param class methods in ctypes base classes.
+ Overrides the PyCSimpleType_from_param generic method.
+ */
+ if (result->tp_base == &Simple_Type) {
+ switch (*proto_str) {
+ case 'z': /* c_char_p */
+ ml = &c_char_p_method;
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+ break;
+ case 'Z': /* c_wchar_p */
+ ml = &c_wchar_p_method;
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+ break;
+ case 'P': /* c_void_p */
+ ml = &c_void_p_method;
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+ break;
+ case 's':
+ case 'X':
+ case 'O':
+ ml = NULL;
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+ break;
+ default:
+ ml = NULL;
+ break;
+ }
+
+ if (ml) {
+ PyObject *meth;
+ int x;
+ meth = PyDescr_NewClassMethod(result, ml);
+ if (!meth) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ x = PyDict_SetItemString(result->tp_dict,
+ ml->ml_name,
+ meth);
+ Py_DECREF(meth);
+ if (x == -1) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ }
+ }
+
+ if (type == &PyCSimpleType_Type && fmt->setfunc_swapped && fmt->getfunc_swapped) {
+ PyObject *swapped = CreateSwappedType(type, args, kwds,
+ proto, fmt);
+ StgDictObject *sw_dict;
+ if (swapped == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ sw_dict = PyType_stgdict(swapped);
+#ifdef WORDS_BIGENDIAN
+ PyObject_SetAttrString((PyObject *)result, "__ctype_le__", swapped);
+ PyObject_SetAttrString((PyObject *)result, "__ctype_be__", (PyObject *)result);
+ PyObject_SetAttrString(swapped, "__ctype_be__", (PyObject *)result);
+ PyObject_SetAttrString(swapped, "__ctype_le__", swapped);
+ /* We are creating the type for the OTHER endian */
+ sw_dict->format = _ctypes_alloc_format_string("<", stgdict->format+1);
+#else
+ PyObject_SetAttrString((PyObject *)result, "__ctype_be__", swapped);
+ PyObject_SetAttrString((PyObject *)result, "__ctype_le__", (PyObject *)result);
+ PyObject_SetAttrString(swapped, "__ctype_le__", (PyObject *)result);
+ PyObject_SetAttrString(swapped, "__ctype_be__", swapped);
+ /* We are creating the type for the OTHER endian */
+ sw_dict->format = _ctypes_alloc_format_string(">", stgdict->format+1);
+#endif
+ Py_DECREF(swapped);
+ if (PyErr_Occurred()) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ };
+
+ return (PyObject *)result;
+}
+
+/*
+ * This is a *class method*.
+ * Convert a parameter into something that ConvParam can handle.
+ */
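+/* A hedged Python-side illustration (values are examples only):
+ * c_int.from_param(42) returns a PyCArgObject carrying the converted value
+ * via the setfunc path below, while c_int.from_param(c_int(7)) returns the
+ * passed instance itself through the isinstance fast path.
+ */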
+static PyObject *
+PyCSimpleType_from_param(PyObject *type, PyObject *value)
+{
+ StgDictObject *dict;
+ char *fmt;
+ PyCArgObject *parg;
+ struct fielddesc *fd;
+ PyObject *as_parameter;
+ int res;
+
+ /* If the value is already an instance of the requested type,
+ we can use it as is */
+ res = PyObject_IsInstance(value, type);
+ if (res == -1)
+ return NULL;
+ if (res) {
+ Py_INCREF(value);
+ return value;
+ }
+
+ dict = PyType_stgdict(type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+
+ /* I think we can rely on this being a one-character string */
+ fmt = PyUnicode_AsUTF8(dict->proto);
+ assert(fmt);
+
+ fd = _ctypes_get_fielddesc(fmt);
+ assert(fd);
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = fmt[0];
+ parg->pffi_type = fd->pffi_type;
+ parg->obj = fd->setfunc(&parg->value, value, 0);
+ if (parg->obj)
+ return (PyObject *)parg;
+ PyErr_Clear();
+ Py_DECREF(parg);
+
+ as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
+ if (as_parameter) {
+ if (Py_EnterRecursiveCall("while processing _as_parameter_")) {
+ Py_DECREF(as_parameter);
+ return NULL;
+ }
+ value = PyCSimpleType_from_param(type, as_parameter);
+ Py_LeaveRecursiveCall();
+ Py_DECREF(as_parameter);
+ return value;
+ }
+ PyErr_SetString(PyExc_TypeError,
+ "wrong type");
+ return NULL;
+}
+
+static PyMethodDef PyCSimpleType_methods[] = {
+ { "from_param", PyCSimpleType_from_param, METH_O, from_param_doc },
+ { "from_address", CDataType_from_address, METH_O, from_address_doc },
+ { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
+ { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
+#ifndef UEFI_C_SOURCE
+ { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc},
+#endif
+ { NULL, NULL },
+};
+
+PyTypeObject PyCSimpleType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCSimpleType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "metatype for the PyCSimpleType Objects", /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ PyCSimpleType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCSimpleType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+/******************************************************************/
+/*
+ PyCFuncPtrType_Type
+ */
+
+static PyObject *
+converters_from_argtypes(PyObject *ob)
+{
+ PyObject *converters;
+ Py_ssize_t i;
+ Py_ssize_t nArgs;
+
+ ob = PySequence_Tuple(ob); /* new reference */
+ if (!ob) {
+ PyErr_SetString(PyExc_TypeError,
+ "_argtypes_ must be a sequence of types");
+ return NULL;
+ }
+
+ nArgs = PyTuple_GET_SIZE(ob);
+ converters = PyTuple_New(nArgs);
+ if (!converters) {
+ Py_DECREF(ob);
+ return NULL;
+ }
+
+ /* I have to check if this is correct. Using c_char, which has a size
+ of 1, will be assumed to be pushed as only one byte!
+ Aren't these promoted to integers by the C compiler and pushed as 4 bytes?
+ */
+
+ for (i = 0; i < nArgs; ++i) {
+ PyObject *tp = PyTuple_GET_ITEM(ob, i);
+ PyObject *cnv = PyObject_GetAttrString(tp, "from_param");
+ if (!cnv)
+ goto argtypes_error_1;
+ PyTuple_SET_ITEM(converters, i, cnv);
+ }
+ Py_DECREF(ob);
+ return converters;
+
+ argtypes_error_1:
+ Py_XDECREF(converters);
+ Py_DECREF(ob);
+ PyErr_Format(PyExc_TypeError,
+ "item %zd in _argtypes_ has no from_param method",
+ i+1);
+ return NULL;
+}
+
+static int
+make_funcptrtype_dict(StgDictObject *stgdict)
+{
+ PyObject *ob;
+ PyObject *converters = NULL;
+
+ stgdict->align = _ctypes_get_fielddesc("P")->pffi_type->alignment;
+ stgdict->length = 1;
+ stgdict->size = sizeof(void *);
+ stgdict->setfunc = NULL;
+ stgdict->getfunc = NULL;
+ stgdict->ffi_type_pointer = ffi_type_pointer;
+
+ ob = PyDict_GetItemString((PyObject *)stgdict, "_flags_");
+ if (!ob || !PyLong_Check(ob)) {
+ PyErr_SetString(PyExc_TypeError,
+ "class must define _flags_ which must be an integer");
+ return -1;
+ }
+ stgdict->flags = PyLong_AS_LONG(ob) | TYPEFLAG_ISPOINTER;
+
+ /* _argtypes_ is optional... */
+ ob = PyDict_GetItemString((PyObject *)stgdict, "_argtypes_");
+ if (ob) {
+ converters = converters_from_argtypes(ob);
+ if (!converters)
+ goto error;
+ Py_INCREF(ob);
+ stgdict->argtypes = ob;
+ stgdict->converters = converters;
+ }
+
+ ob = PyDict_GetItemString((PyObject *)stgdict, "_restype_");
+ if (ob) {
+ if (ob != Py_None && !PyType_stgdict(ob) && !PyCallable_Check(ob)) {
+ PyErr_SetString(PyExc_TypeError,
+ "_restype_ must be a type, a callable, or None");
+ return -1;
+ }
+ Py_INCREF(ob);
+ stgdict->restype = ob;
+ stgdict->checker = PyObject_GetAttrString(ob, "_check_retval_");
+ if (stgdict->checker == NULL)
+ PyErr_Clear();
+ }
+/* XXX later, maybe.
+ ob = PyDict_GetItemString((PyObject *)stgdict, "_errcheck_");
+ if (ob) {
+ if (!PyCallable_Check(ob)) {
+ PyErr_SetString(PyExc_TypeError,
+ "_errcheck_ must be callable");
+ return -1;
+ }
+ Py_INCREF(ob);
+ stgdict->errcheck = ob;
+ }
+*/
+ return 0;
+
+ error:
+ Py_XDECREF(converters);
+ return -1;
+
+}
+
+static PyCArgObject *
+PyCFuncPtrType_paramfunc(CDataObject *self)
+{
+ PyCArgObject *parg;
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = 'P';
+ parg->pffi_type = &ffi_type_pointer;
+ Py_INCREF(self);
+ parg->obj = (PyObject *)self;
+ parg->value.p = *(void **)self->b_ptr;
+ return parg;
+}
+
+static PyObject *
+PyCFuncPtrType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyTypeObject *result;
+ StgDictObject *stgdict;
+
+ stgdict = (StgDictObject *)PyObject_CallObject(
+ (PyObject *)&PyCStgDict_Type, NULL);
+ if (!stgdict)
+ return NULL;
+
+ stgdict->paramfunc = PyCFuncPtrType_paramfunc;
+ /* We do NOT expose the function signature in the format string. It
+ is impossible, generally, because the only requirement for the
+ argtypes items is that they have a .from_param method - we do not
+ know the types of the arguments (although, in practice, most
+ argtypes would be a ctypes type).
+ */
+ stgdict->format = _ctypes_alloc_format_string(NULL, "X{}");
+ if (stgdict->format == NULL) {
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ stgdict->flags |= TYPEFLAG_ISPOINTER;
+
+ /* create the new instance (which is a class,
+ since we are a metatype!) */
+ result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
+ if (result == NULL) {
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+
+ /* replace the class dict by our updated storage dict */
+ if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
+ Py_DECREF(result);
+ Py_DECREF((PyObject *)stgdict);
+ return NULL;
+ }
+ Py_SETREF(result->tp_dict, (PyObject *)stgdict);
+
+ if (-1 == make_funcptrtype_dict(stgdict)) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ return (PyObject *)result;
+}
+
+PyTypeObject PyCFuncPtrType_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCFuncPtrType", /* tp_name */
+ 0, /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &CDataType_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ "metatype for C function pointers", /* tp_doc */
+ (traverseproc)CDataType_traverse, /* tp_traverse */
+ (inquiry)CDataType_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ CDataType_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCFuncPtrType_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/*****************************************************************
+ * Code to keep needed objects alive
+ */
+
+static CDataObject *
+PyCData_GetContainer(CDataObject *self)
+{
+ while (self->b_base)
+ self = self->b_base;
+ if (self->b_objects == NULL) {
+ if (self->b_length) {
+ self->b_objects = PyDict_New();
+ if (self->b_objects == NULL)
+ return NULL;
+ } else {
+ Py_INCREF(Py_None);
+ self->b_objects = Py_None;
+ }
+ }
+ return self;
+}
+
+static PyObject *
+GetKeepedObjects(CDataObject *target)
+{
+ CDataObject *container;
+ container = PyCData_GetContainer(target);
+ if (container == NULL)
+ return NULL;
+ return container->b_objects;
+}
+
+static PyObject *
+unique_key(CDataObject *target, Py_ssize_t index)
+{
+ char string[256];
+ char *cp = string;
+ size_t bytes_left;
+
+ Py_BUILD_ASSERT(sizeof(string) - 1 > sizeof(Py_ssize_t) * 2);
+ cp += sprintf(cp, "%x", Py_SAFE_DOWNCAST(index, Py_ssize_t, int));
+ while (target->b_base) {
+ bytes_left = sizeof(string) - (cp - string) - 1;
+ /* Hex format needs 2 characters per byte */
+ if (bytes_left < sizeof(Py_ssize_t) * 2) {
+ PyErr_SetString(PyExc_ValueError,
+ "ctypes object structure too deep");
+ return NULL;
+ }
+ cp += sprintf(cp, ":%x", Py_SAFE_DOWNCAST(target->b_index, Py_ssize_t, int));
+ target = target->b_base;
+ }
+ return PyUnicode_FromStringAndSize(string, cp-string);
+}
+
+/*
+ * Keep a reference to 'keep' in the 'target', at index 'index'.
+ *
+ * If 'keep' is None, do nothing.
+ *
+ * Otherwise create a dictionary (if it does not yet exist) in the root
+ * object's 'b_objects' item, which will store the 'keep' object under a
+ * unique key.
+ *
+ * The unique_key helper travels the target's b_base pointer down to the root,
+ * building a string containing hex-formatted indexes found during traversal,
+ * separated by colons.
+ *
+ * The resulting key string is used as the key into the root object's b_objects dict.
+ *
+ * Note: This function steals a refcount of the third argument, even if it
+ * fails!
+ */
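+/* A minimal sketch of the key format (indexes are illustrative): for a
+ * 'keep' object stored at index 2 of a target whose b_index within its
+ * container is 5, unique_key() above builds the hex-formatted string
+ * "2:5", and KeepRef() below stores 'keep' under that key in the root's
+ * b_objects dict.
+ */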
+static int
+KeepRef(CDataObject *target, Py_ssize_t index, PyObject *keep)
+{
+ int result;
+ CDataObject *ob;
+ PyObject *key;
+
+/* Optimization: no need to store None */
+ if (keep == Py_None) {
+ Py_DECREF(Py_None);
+ return 0;
+ }
+ ob = PyCData_GetContainer(target);
+ if (ob == NULL) {
+ Py_DECREF(keep);
+ return -1;
+ }
+ if (ob->b_objects == NULL || !PyDict_CheckExact(ob->b_objects)) {
+ Py_XSETREF(ob->b_objects, keep); /* refcount consumed */
+ return 0;
+ }
+ key = unique_key(target, index);
+ if (key == NULL) {
+ Py_DECREF(keep);
+ return -1;
+ }
+ result = PyDict_SetItem(ob->b_objects, key, keep);
+ Py_DECREF(key);
+ Py_DECREF(keep);
+ return result;
+}
+
+/******************************************************************/
+/*
+ PyCData_Type
+ */
+static int
+PyCData_traverse(CDataObject *self, visitproc visit, void *arg)
+{
+ Py_VISIT(self->b_objects);
+ Py_VISIT((PyObject *)self->b_base);
+ return 0;
+}
+
+static int
+PyCData_clear(CDataObject *self)
+{
+ Py_CLEAR(self->b_objects);
+ if ((self->b_needsfree)
+ && _CDataObject_HasExternalBuffer(self))
+ PyMem_Free(self->b_ptr);
+ self->b_ptr = NULL;
+ Py_CLEAR(self->b_base);
+ return 0;
+}
+
+static void
+PyCData_dealloc(PyObject *self)
+{
+ PyCData_clear((CDataObject *)self);
+ Py_TYPE(self)->tp_free(self);
+}
+
+static PyMemberDef PyCData_members[] = {
+ { "_b_base_", T_OBJECT,
+ offsetof(CDataObject, b_base), READONLY,
+ "the base object" },
+ { "_b_needsfree_", T_INT,
+ offsetof(CDataObject, b_needsfree), READONLY,
+ "whether the object owns the memory or not" },
+ { "_objects", T_OBJECT,
+ offsetof(CDataObject, b_objects), READONLY,
+ "internal objects tree (NEVER CHANGE THIS OBJECT!)"},
+ { NULL },
+};
+
+static int PyCData_NewGetBuffer(PyObject *myself, Py_buffer *view, int flags)
+{
+ CDataObject *self = (CDataObject *)myself;
+ StgDictObject *dict = PyObject_stgdict(myself);
+ Py_ssize_t i;
+
+ if (view == NULL) return 0;
+
+ view->buf = self->b_ptr;
+ view->obj = myself;
+ Py_INCREF(myself);
+ view->len = self->b_size;
+ view->readonly = 0;
+ /* use default format character if not set */
+ view->format = dict->format ? dict->format : "B";
+ view->ndim = dict->ndim;
+ view->shape = dict->shape;
+ view->itemsize = self->b_size;
+ if (view->itemsize) {
+ for (i = 0; i < view->ndim; ++i) {
+ view->itemsize /= dict->shape[i];
+ }
+ }
+ view->strides = NULL;
+ view->suboffsets = NULL;
+ view->internal = NULL;
+ return 0;
+}
+
+static PyBufferProcs PyCData_as_buffer = {
+ PyCData_NewGetBuffer,
+ NULL,
+};
+
+/*
+ * CData objects are mutable, so they cannot be hashable!
+ */
+static Py_hash_t
+PyCData_nohash(PyObject *self)
+{
+ PyErr_SetString(PyExc_TypeError, "unhashable type");
+ return -1;
+}
+
+static PyObject *
+PyCData_reduce(PyObject *myself, PyObject *args)
+{
+ CDataObject *self = (CDataObject *)myself;
+
+ if (PyObject_stgdict(myself)->flags & (TYPEFLAG_ISPOINTER|TYPEFLAG_HASPOINTER)) {
+ PyErr_SetString(PyExc_ValueError,
+ "ctypes objects containing pointers cannot be pickled");
+ return NULL;
+ }
+ return Py_BuildValue("O(O(NN))",
+ _unpickle,
+ Py_TYPE(myself),
+ PyObject_GetAttrString(myself, "__dict__"),
+ PyBytes_FromStringAndSize(self->b_ptr, self->b_size));
+}
+
+static PyObject *
+PyCData_setstate(PyObject *myself, PyObject *args)
+{
+ void *data;
+ Py_ssize_t len;
+ int res;
+ PyObject *dict, *mydict;
+ CDataObject *self = (CDataObject *)myself;
+ if (!PyArg_ParseTuple(args, "Os#", &dict, &data, &len))
+ return NULL;
+ if (len > self->b_size)
+ len = self->b_size;
+ memmove(self->b_ptr, data, len);
+ mydict = PyObject_GetAttrString(myself, "__dict__");
+ if (mydict == NULL) {
+ return NULL;
+ }
+ if (!PyDict_Check(mydict)) {
+ PyErr_Format(PyExc_TypeError,
+ "%.200s.__dict__ must be a dictionary, not %.200s",
+ Py_TYPE(myself)->tp_name, Py_TYPE(mydict)->tp_name);
+ Py_DECREF(mydict);
+ return NULL;
+ }
+ res = PyDict_Update(mydict, dict);
+ Py_DECREF(mydict);
+ if (res == -1)
+ return NULL;
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+/*
+ * default __ctypes_from_outparam__ method returns self.
+ */
+static PyObject *
+PyCData_from_outparam(PyObject *self, PyObject *args)
+{
+ Py_INCREF(self);
+ return self;
+}
+
+static PyMethodDef PyCData_methods[] = {
+ { "__ctypes_from_outparam__", PyCData_from_outparam, METH_NOARGS, },
+ { "__reduce__", PyCData_reduce, METH_NOARGS, },
+ { "__setstate__", PyCData_setstate, METH_VARARGS, },
+ { NULL, NULL },
+};
+
+PyTypeObject PyCData_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes._CData",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ PyCData_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ PyCData_nohash, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "XXX to be provided", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ PyCData_methods, /* tp_methods */
+ PyCData_members, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+};
+
+static int PyCData_MallocBuffer(CDataObject *obj, StgDictObject *dict)
+{
+ if ((size_t)dict->size <= sizeof(obj->b_value)) {
+ /* No need to call malloc, can use the default buffer */
+ obj->b_ptr = (char *)&obj->b_value;
+ /* The b_needsfree flag does not mean that we actually did
+ call PyMem_Malloc to allocate the memory block; instead it
+ means we are the *owner* of the memory and are responsible
+ for freeing resources associated with the memory. This is
+ also the reason that b_needsfree is exposed to Python.
+ */
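+ /* For illustration (exact sizes are implementation details): a c_int()
+ whose value fits in the inline b_value union takes this branch, while
+ a Structure larger than sizeof(obj->b_value) falls through to the
+ PyMem_Malloc branch below; both cases set b_needsfree = 1. */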
+ obj->b_needsfree = 1;
+ } else {
+ /* In python 2.4, and ctypes 0.9.6, the malloc call took about
+ 33% of the creation time for c_int().
+ */
+ obj->b_ptr = (char *)PyMem_Malloc(dict->size);
+ if (obj->b_ptr == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ obj->b_needsfree = 1;
+ memset(obj->b_ptr, 0, dict->size);
+ }
+ obj->b_size = dict->size;
+ return 0;
+}
+
+PyObject *
+PyCData_FromBaseObj(PyObject *type, PyObject *base, Py_ssize_t index, char *adr)
+{
+ CDataObject *cmem;
+ StgDictObject *dict;
+
+ assert(PyType_Check(type));
+ dict = PyType_stgdict(type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+ dict->flags |= DICTFLAG_FINAL;
+ cmem = (CDataObject *)((PyTypeObject *)type)->tp_alloc((PyTypeObject *)type, 0);
+ if (cmem == NULL)
+ return NULL;
+ assert(CDataObject_Check(cmem));
+
+ cmem->b_length = dict->length;
+ cmem->b_size = dict->size;
+ if (base) { /* use base's buffer */
+ assert(CDataObject_Check(base));
+ cmem->b_ptr = adr;
+ cmem->b_needsfree = 0;
+ Py_INCREF(base);
+ cmem->b_base = (CDataObject *)base;
+ cmem->b_index = index;
+ } else { /* copy contents of adr */
+ if (-1 == PyCData_MallocBuffer(cmem, dict)) {
+ Py_DECREF(cmem);
+ return NULL;
+ }
+ memcpy(cmem->b_ptr, adr, dict->size);
+ cmem->b_index = index;
+ }
+ return (PyObject *)cmem;
+}
+
+/*
+ Box a memory block into a CData instance.
+*/
+PyObject *
+PyCData_AtAddress(PyObject *type, void *buf)
+{
+ CDataObject *pd;
+ StgDictObject *dict;
+
+ assert(PyType_Check(type));
+ dict = PyType_stgdict(type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+ dict->flags |= DICTFLAG_FINAL;
+
+ pd = (CDataObject *)((PyTypeObject *)type)->tp_alloc((PyTypeObject *)type, 0);
+ if (!pd)
+ return NULL;
+ assert(CDataObject_Check(pd));
+ pd->b_ptr = (char *)buf;
+ pd->b_length = dict->length;
+ pd->b_size = dict->size;
+ return (PyObject *)pd;
+}
+
+/*
+ This function returns TRUE for c_int, c_void_p, and these kinds of
+ classes; FALSE otherwise. It also returns FALSE for subclasses of c_int
+ and such.
+*/
+int _ctypes_simple_instance(PyObject *obj)
+{
+ PyTypeObject *type = (PyTypeObject *)obj;
+
+ if (PyCSimpleTypeObject_Check(type))
+ return type->tp_base != &Simple_Type;
+ return 0;
+}
+
+PyObject *
+PyCData_get(PyObject *type, GETFUNC getfunc, PyObject *src,
+ Py_ssize_t index, Py_ssize_t size, char *adr)
+{
+ StgDictObject *dict;
+ if (getfunc)
+ return getfunc(adr, size);
+ assert(type);
+ dict = PyType_stgdict(type);
+ if (dict && dict->getfunc && !_ctypes_simple_instance(type))
+ return dict->getfunc(adr, size);
+ return PyCData_FromBaseObj(type, src, index, adr);
+}
+
+/*
+ Helper function for PyCData_set below.
+*/
+static PyObject *
+_PyCData_set(CDataObject *dst, PyObject *type, SETFUNC setfunc, PyObject *value,
+ Py_ssize_t size, char *ptr)
+{
+ CDataObject *src;
+ int err;
+
+ if (setfunc)
+ return setfunc(ptr, value, size);
+
+ if (!CDataObject_Check(value)) {
+ StgDictObject *dict = PyType_stgdict(type);
+ if (dict && dict->setfunc)
+ return dict->setfunc(ptr, value, size);
+ /*
+ If value is a tuple, we try to call the type with the tuple
+ and use the result!
+ */
+ assert(PyType_Check(type));
+ if (PyTuple_Check(value)) {
+ PyObject *ob;
+ PyObject *result;
+ ob = PyObject_CallObject(type, value);
+ if (ob == NULL) {
+ _ctypes_extend_error(PyExc_RuntimeError, "(%s) ",
+ ((PyTypeObject *)type)->tp_name);
+ return NULL;
+ }
+ result = _PyCData_set(dst, type, setfunc, ob,
+ size, ptr);
+ Py_DECREF(ob);
+ return result;
+ } else if (value == Py_None && PyCPointerTypeObject_Check(type)) {
+ *(void **)ptr = NULL;
+ Py_INCREF(Py_None);
+ return Py_None;
+ } else {
+ PyErr_Format(PyExc_TypeError,
+ "expected %s instance, got %s",
+ ((PyTypeObject *)type)->tp_name,
+ Py_TYPE(value)->tp_name);
+ return NULL;
+ }
+ }
+ src = (CDataObject *)value;
+
+ err = PyObject_IsInstance(value, type);
+ if (err == -1)
+ return NULL;
+ if (err) {
+ memcpy(ptr,
+ src->b_ptr,
+ size);
+
+ if (PyCPointerTypeObject_Check(type)) {
+ /* XXX */
+ }
+
+ value = GetKeepedObjects(src);
+ if (value == NULL)
+ return NULL;
+
+ Py_INCREF(value);
+ return value;
+ }
+
+ if (PyCPointerTypeObject_Check(type)
+ && ArrayObject_Check(value)) {
+ StgDictObject *p1, *p2;
+ PyObject *keep;
+ p1 = PyObject_stgdict(value);
+ assert(p1); /* Cannot be NULL for array instances */
+ p2 = PyType_stgdict(type);
+ assert(p2); /* Cannot be NULL for pointer types */
+
+ if (p1->proto != p2->proto) {
+ PyErr_Format(PyExc_TypeError,
+ "incompatible types, %s instance instead of %s instance",
+ Py_TYPE(value)->tp_name,
+ ((PyTypeObject *)type)->tp_name);
+ return NULL;
+ }
+ *(void **)ptr = src->b_ptr;
+
+ keep = GetKeepedObjects(src);
+ if (keep == NULL)
+ return NULL;
+
+ /*
+ We are assigning an array object to a field which represents
+ a pointer. This has the same effect as converting an array
+ into a pointer. So, again, we have to keep the whole object
+ pointed to (which is the array in this case) alive, and not
+ only its object list. So we create a tuple, containing
+ b_objects list PLUS the array itself, and return that!
+ */
+ return PyTuple_Pack(2, keep, value);
+ }
+ PyErr_Format(PyExc_TypeError,
+ "incompatible types, %s instance instead of %s instance",
+ Py_TYPE(value)->tp_name,
+ ((PyTypeObject *)type)->tp_name);
+ return NULL;
+}
+
+/*
+ * Set a slice in object 'dst', which has the type 'type',
+ * to the value 'value'.
+ */
+int
+PyCData_set(PyObject *dst, PyObject *type, SETFUNC setfunc, PyObject *value,
+ Py_ssize_t index, Py_ssize_t size, char *ptr)
+{
+ CDataObject *mem = (CDataObject *)dst;
+ PyObject *result;
+
+ if (!CDataObject_Check(dst)) {
+ PyErr_SetString(PyExc_TypeError,
+ "not a ctype instance");
+ return -1;
+ }
+
+ result = _PyCData_set(mem, type, setfunc, value,
+ size, ptr);
+ if (result == NULL)
+ return -1;
+
+ /* KeepRef steals a refcount from its last argument */
+ /* If KeepRef fails, we are stumped. The dst memory block has already
+ been changed */
+ return KeepRef(mem, index, result);
+}
+
+
+/******************************************************************/
+static PyObject *
+GenericPyCData_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ CDataObject *obj;
+ StgDictObject *dict;
+
+ dict = PyType_stgdict((PyObject *)type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return NULL;
+ }
+ dict->flags |= DICTFLAG_FINAL;
+
+ obj = (CDataObject *)type->tp_alloc(type, 0);
+ if (!obj)
+ return NULL;
+
+ obj->b_base = NULL;
+ obj->b_index = 0;
+ obj->b_objects = NULL;
+ obj->b_length = dict->length;
+
+ if (-1 == PyCData_MallocBuffer(obj, dict)) {
+ Py_DECREF(obj);
+ return NULL;
+ }
+ return (PyObject *)obj;
+}
+/*****************************************************************/
+/*
+ PyCFuncPtr_Type
+*/
+
+static int
+PyCFuncPtr_set_errcheck(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
+{
+ if (ob && !PyCallable_Check(ob)) {
+ PyErr_SetString(PyExc_TypeError,
+ "the errcheck attribute must be callable");
+ return -1;
+ }
+ Py_XINCREF(ob);
+ Py_XSETREF(self->errcheck, ob);
+ return 0;
+}
+
+static PyObject *
+PyCFuncPtr_get_errcheck(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
+{
+ if (self->errcheck) {
+ Py_INCREF(self->errcheck);
+ return self->errcheck;
+ }
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static int
+PyCFuncPtr_set_restype(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
+{
+ if (ob == NULL) {
+ Py_CLEAR(self->restype);
+ Py_CLEAR(self->checker);
+ return 0;
+ }
+ if (ob != Py_None && !PyType_stgdict(ob) && !PyCallable_Check(ob)) {
+ PyErr_SetString(PyExc_TypeError,
+ "restype must be a type, a callable, or None");
+ return -1;
+ }
+ Py_INCREF(ob);
+ Py_XSETREF(self->restype, ob);
+ Py_XSETREF(self->checker, PyObject_GetAttrString(ob, "_check_retval_"));
+ if (self->checker == NULL)
+ PyErr_Clear();
+ return 0;
+}
+
+static PyObject *
+PyCFuncPtr_get_restype(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
+{
+ StgDictObject *dict;
+ if (self->restype) {
+ Py_INCREF(self->restype);
+ return self->restype;
+ }
+ dict = PyObject_stgdict((PyObject *)self);
+ assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
+ if (dict->restype) {
+ Py_INCREF(dict->restype);
+ return dict->restype;
+ } else {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+}
+
+static int
+PyCFuncPtr_set_argtypes(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
+{
+ PyObject *converters;
+
+ if (ob == NULL || ob == Py_None) {
+ Py_CLEAR(self->converters);
+ Py_CLEAR(self->argtypes);
+ } else {
+ converters = converters_from_argtypes(ob);
+ if (!converters)
+ return -1;
+ Py_XSETREF(self->converters, converters);
+ Py_INCREF(ob);
+ Py_XSETREF(self->argtypes, ob);
+ }
+ return 0;
+}
+
+static PyObject *
+PyCFuncPtr_get_argtypes(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
+{
+ StgDictObject *dict;
+ if (self->argtypes) {
+ Py_INCREF(self->argtypes);
+ return self->argtypes;
+ }
+ dict = PyObject_stgdict((PyObject *)self);
+ assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
+ if (dict->argtypes) {
+ Py_INCREF(dict->argtypes);
+ return dict->argtypes;
+ } else {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+}
+
+static PyGetSetDef PyCFuncPtr_getsets[] = {
+ { "errcheck", (getter)PyCFuncPtr_get_errcheck, (setter)PyCFuncPtr_set_errcheck,
+ "a function to check for errors", NULL },
+ { "restype", (getter)PyCFuncPtr_get_restype, (setter)PyCFuncPtr_set_restype,
+ "specify the result type", NULL },
+ { "argtypes", (getter)PyCFuncPtr_get_argtypes,
+ (setter)PyCFuncPtr_set_argtypes,
+ "specify the argument types", NULL },
+ { NULL, NULL }
+};
+
+#ifdef MS_WIN32
+static PPROC FindAddress(void *handle, const char *name, PyObject *type)
+{
+#ifdef MS_WIN64
+ /* win64 has no stdcall calling convention, so there is
+ also no stdcall name mangling to try.
+ */
+ return (PPROC)GetProcAddress(handle, name);
+#else
+ PPROC address;
+ char *mangled_name;
+ int i;
+ StgDictObject *dict;
+
+ address = (PPROC)GetProcAddress(handle, name);
+ if (address)
+ return address;
+ if (((size_t)name & ~0xFFFF) == 0) {
+ return NULL;
+ }
+
+ dict = PyType_stgdict((PyObject *)type);
+ /* It should not happen that dict is NULL, but better be safe */
+ if (dict==NULL || dict->flags & FUNCFLAG_CDECL)
+ return address;
+
+ /* for stdcall, try mangled names:
+ funcname -> _funcname@<n>
+ where n is 0, 4, 8, 12, ..., 128
+ */
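+ /* e.g. an export like "MessageBoxA" would be tried as "_MessageBoxA@16",
+ assuming four DWORD-sized stack arguments; each loop iteration below
+ simply guesses the next multiple of 4. */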
+ mangled_name = alloca(strlen(name) + 1 + 1 + 1 + 3); /* \0 _ @ %d */
+ if (!mangled_name)
+ return NULL;
+ for (i = 0; i < 32; ++i) {
+ sprintf(mangled_name, "_%s@%d", name, i*4);
+ address = (PPROC)GetProcAddress(handle, mangled_name);
+ if (address)
+ return address;
+ }
+ return NULL;
+#endif
+}
+#endif
+
+/* Return 1 if usable; 0 otherwise, with an exception set. */
+static int
+_check_outarg_type(PyObject *arg, Py_ssize_t index)
+{
+ StgDictObject *dict;
+
+ if (PyCPointerTypeObject_Check(arg))
+ return 1;
+
+ if (PyCArrayTypeObject_Check(arg))
+ return 1;
+
+ dict = PyType_stgdict(arg);
+ if (dict
+ /* simple pointer types, c_void_p, c_wchar_p, BSTR, ... */
+ && PyUnicode_Check(dict->proto)
+/* We only allow c_void_p, c_char_p and c_wchar_p as a simple output parameter type */
+ && (strchr("PzZ", PyUnicode_AsUTF8(dict->proto)[0]))) {
+ return 1;
+ }
+
+ PyErr_Format(PyExc_TypeError,
+ "'out' parameter %d must be a pointer type, not %s",
+ Py_SAFE_DOWNCAST(index, Py_ssize_t, int),
+ PyType_Check(arg) ?
+ ((PyTypeObject *)arg)->tp_name :
+ Py_TYPE(arg)->tp_name);
+ return 0;
+}
+
+/* Returns 1 on success, 0 on error */
+static int
+_validate_paramflags(PyTypeObject *type, PyObject *paramflags)
+{
+ Py_ssize_t i, len;
+ StgDictObject *dict;
+ PyObject *argtypes;
+
+ dict = PyType_stgdict((PyObject *)type);
+ if (!dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "abstract class");
+ return 0;
+ }
+ argtypes = dict->argtypes;
+
+ if (paramflags == NULL || dict->argtypes == NULL)
+ return 1;
+
+ if (!PyTuple_Check(paramflags)) {
+ PyErr_SetString(PyExc_TypeError,
+ "paramflags must be a tuple or None");
+ return 0;
+ }
+
+ len = PyTuple_GET_SIZE(paramflags);
+ if (len != PyTuple_GET_SIZE(dict->argtypes)) {
+ PyErr_SetString(PyExc_ValueError,
+ "paramflags must have the same length as argtypes");
+ return 0;
+ }
+
+ for (i = 0; i < len; ++i) {
+ PyObject *item = PyTuple_GET_ITEM(paramflags, i);
+ int flag;
+ char *name;
+ PyObject *defval;
+ PyObject *typ;
+ if (!PyArg_ParseTuple(item, "i|ZO", &flag, &name, &defval)) {
+ PyErr_SetString(PyExc_TypeError,
+ "paramflags must be a sequence of (int [,string [,value]]) tuples");
+ return 0;
+ }
+ typ = PyTuple_GET_ITEM(argtypes, i);
+ switch (flag & (PARAMFLAG_FIN | PARAMFLAG_FOUT | PARAMFLAG_FLCID)) {
+ case 0:
+ case PARAMFLAG_FIN:
+ case PARAMFLAG_FIN | PARAMFLAG_FLCID:
+ case PARAMFLAG_FIN | PARAMFLAG_FOUT:
+ break;
+ case PARAMFLAG_FOUT:
+ if (!_check_outarg_type(typ, i+1))
+ return 0;
+ break;
+ default:
+ PyErr_Format(PyExc_TypeError,
+ "paramflag value %d not supported",
+ flag);
+ return 0;
+ }
+ }
+ return 1;
+}
+
+static int
+_get_name(PyObject *obj, const char **pname)
+{
+#ifdef MS_WIN32
+ if (PyLong_Check(obj)) {
+ /* We have to use MAKEINTRESOURCEA for Windows CE.
+ Works on Windows as well, of course.
+ */
+ *pname = MAKEINTRESOURCEA(PyLong_AsUnsignedLongMask(obj) & 0xFFFF);
+ return 1;
+ }
+#endif
+ if (PyBytes_Check(obj)) {
+ *pname = PyBytes_AS_STRING(obj);
+ return *pname ? 1 : 0;
+ }
+ if (PyUnicode_Check(obj)) {
+ *pname = PyUnicode_AsUTF8(obj);
+ return *pname ? 1 : 0;
+ }
+ PyErr_SetString(PyExc_TypeError,
+ "function name must be string, bytes object or integer");
+ return 0;
+}
+
+#ifndef UEFI_C_SOURCE
+static PyObject *
+PyCFuncPtr_FromDll(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ const char *name;
+ int (* address)(void);
+ PyObject *ftuple;
+ PyObject *dll;
+ PyObject *obj;
+ PyCFuncPtrObject *self;
+ void *handle;
+ PyObject *paramflags = NULL;
+
+ if (!PyArg_ParseTuple(args, "O|O", &ftuple, &paramflags))
+ return NULL;
+ if (paramflags == Py_None)
+ paramflags = NULL;
+
+ ftuple = PySequence_Tuple(ftuple);
+ if (!ftuple)
+ /* Here ftuple is a borrowed reference */
+ return NULL;
+
+ if (!PyArg_ParseTuple(ftuple, "O&O", _get_name, &name, &dll)) {
+ Py_DECREF(ftuple);
+ return NULL;
+ }
+
+ obj = PyObject_GetAttrString(dll, "_handle");
+ if (!obj) {
+ Py_DECREF(ftuple);
+ return NULL;
+ }
+ if (!PyLong_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError,
+ "the _handle attribute of the second argument must be an integer");
+ Py_DECREF(ftuple);
+ Py_DECREF(obj);
+ return NULL;
+ }
+ handle = (void *)PyLong_AsVoidPtr(obj);
+ Py_DECREF(obj);
+ if (PyErr_Occurred()) {
+ PyErr_SetString(PyExc_ValueError,
+ "could not convert the _handle attribute to a pointer");
+ Py_DECREF(ftuple);
+ return NULL;
+ }
+
+#ifdef MS_WIN32
+ address = FindAddress(handle, name, (PyObject *)type);
+ if (!address) {
+ if (!IS_INTRESOURCE(name))
+ PyErr_Format(PyExc_AttributeError,
+ "function '%s' not found",
+ name);
+ else
+ PyErr_Format(PyExc_AttributeError,
+ "function ordinal %d not found",
+ (WORD)(size_t)name);
+ Py_DECREF(ftuple);
+ return NULL;
+ }
+#else
+ address = (PPROC)ctypes_dlsym(handle, name);
+ if (!address) {
+#ifdef __CYGWIN__
+/* dlerror() isn't very helpful on cygwin */
+ PyErr_Format(PyExc_AttributeError,
+ "function '%s' not found",
+ name);
+#else
+ PyErr_SetString(PyExc_AttributeError, ctypes_dlerror());
+#endif
+ Py_DECREF(ftuple);
+ return NULL;
+ }
+#endif
+ Py_INCREF(dll); /* for KeepRef */
+ Py_DECREF(ftuple);
+ if (!_validate_paramflags(type, paramflags))
+ return NULL;
+
+ self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
+ if (!self)
+ return NULL;
+
+ Py_XINCREF(paramflags);
+ self->paramflags = paramflags;
+
+ *(void **)self->b_ptr = address;
+
+ if (-1 == KeepRef((CDataObject *)self, 0, dll)) {
+ Py_DECREF((PyObject *)self);
+ return NULL;
+ }
+
+ Py_INCREF(self);
+ self->callable = (PyObject *)self;
+ return (PyObject *)self;
+}
+#endif // UEFI_C_SOURCE
+
+#ifdef MS_WIN32
+static PyObject *
+PyCFuncPtr_FromVtblIndex(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyCFuncPtrObject *self;
+ int index;
+ char *name = NULL;
+ PyObject *paramflags = NULL;
+ GUID *iid = NULL;
+ Py_ssize_t iid_len = 0;
+
+ if (!PyArg_ParseTuple(args, "is|Oz#", &index, &name, &paramflags, &iid, &iid_len))
+ return NULL;
+ if (paramflags == Py_None)
+ paramflags = NULL;
+
+ if (!_validate_paramflags(type, paramflags))
+ return NULL;
+
+ self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
+ self->index = index + 0x1000;
+ Py_XINCREF(paramflags);
+ self->paramflags = paramflags;
+ if (iid_len == sizeof(GUID))
+ self->iid = iid;
+ return (PyObject *)self;
+}
+#endif
+
+/*
+ PyCFuncPtr_new accepts different argument lists in addition to the standard
+ _basespec_ keyword arg:
+
+ one argument form
+ "i" - function address
+ "O" - must be a callable, creates a C callable function
+
+ two or more argument forms (the third argument is a paramflags tuple)
+ "(sO)|..." - (function name, dll object (with an integer handle)), paramflags
+ "(iO)|..." - (function ordinal, dll object (with an integer handle)), paramflags
+ "is|..." - vtable index, method name, creates callable calling COM vtbl
+*/
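+/* A hedged sketch of the one-argument forms as seen from Python (the names
+ and the address are illustrative only):
+
+     proto = CFUNCTYPE(c_int, c_int)
+     f = proto(some_python_callable)   # "O": wrap a Python callable
+     g = proto(0x401000)               # "i": wrap a raw function address
+
+ The (name, dll) and COM-vtable forms are only reachable when the
+ corresponding support (PyCFuncPtr_FromDll / MS_WIN32) is compiled in.
+*/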
+static PyObject *
+PyCFuncPtr_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyCFuncPtrObject *self;
+ PyObject *callable;
+ StgDictObject *dict;
+ CThunkObject *thunk;
+
+ if (PyTuple_GET_SIZE(args) == 0)
+ return GenericPyCData_new(type, args, kwds);
+#ifndef UEFI_C_SOURCE
+ if (1 <= PyTuple_GET_SIZE(args) && PyTuple_Check(PyTuple_GET_ITEM(args, 0)))
+ return PyCFuncPtr_FromDll(type, args, kwds);
+#endif // UEFI_C_SOURCE
+#ifdef MS_WIN32
+ if (2 <= PyTuple_GET_SIZE(args) && PyLong_Check(PyTuple_GET_ITEM(args, 0)))
+ return PyCFuncPtr_FromVtblIndex(type, args, kwds);
+#endif
+
+ if (1 == PyTuple_GET_SIZE(args)
+ && (PyLong_Check(PyTuple_GET_ITEM(args, 0)))) {
+ CDataObject *ob;
+ void *ptr = PyLong_AsVoidPtr(PyTuple_GET_ITEM(args, 0));
+ if (ptr == NULL && PyErr_Occurred())
+ return NULL;
+ ob = (CDataObject *)GenericPyCData_new(type, args, kwds);
+ if (ob == NULL)
+ return NULL;
+ *(void **)ob->b_ptr = ptr;
+ return (PyObject *)ob;
+ }
+
+ if (!PyArg_ParseTuple(args, "O", &callable))
+ return NULL;
+ if (!PyCallable_Check(callable)) {
+ PyErr_SetString(PyExc_TypeError,
+ "argument must be callable or integer function address");
+ return NULL;
+ }
+
+ /* XXX XXX This would allow passing additional options. For COM
+ method *implementations*, we would probably want different
+ behaviour than in 'normal' callback functions: return a HRESULT if
+ an exception occurs in the callback, and print the traceback not
+ only on the console, but also to OutputDebugString() or something
+ like that.
+ */
+/*
+ if (kwds && PyDict_GetItemString(kwds, "options")) {
+ ...
+ }
+*/
+
+ dict = PyType_stgdict((PyObject *)type);
+ /* XXXX Fails if we do: 'PyCFuncPtr(lambda x: x)' */
+ if (!dict || !dict->argtypes) {
+ PyErr_SetString(PyExc_TypeError,
+ "cannot construct instance of this class:"
+ " no argtypes");
+ return NULL;
+ }
+
+ thunk = _ctypes_alloc_callback(callable,
+ dict->argtypes,
+ dict->restype,
+ dict->flags);
+ if (!thunk)
+ return NULL;
+
+ self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
+ if (self == NULL) {
+ Py_DECREF(thunk);
+ return NULL;
+ }
+
+ Py_INCREF(callable);
+ self->callable = callable;
+
+ self->thunk = thunk;
+ *(void **)self->b_ptr = (void *)thunk->pcl_exec;
+
+ Py_INCREF((PyObject *)thunk); /* for KeepRef */
+ if (-1 == KeepRef((CDataObject *)self, 0, (PyObject *)thunk)) {
+ Py_DECREF((PyObject *)self);
+ return NULL;
+ }
+ return (PyObject *)self;
+}
+
+
+/*
+ _byref consumes a refcount to its argument
+*/
+static PyObject *
+_byref(PyObject *obj)
+{
+ PyCArgObject *parg;
+ if (!CDataObject_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError,
+ "expected CData instance");
+ return NULL;
+ }
+
+ parg = PyCArgObject_new();
+ if (parg == NULL) {
+ Py_DECREF(obj);
+ return NULL;
+ }
+
+ parg->tag = 'P';
+ parg->pffi_type = &ffi_type_pointer;
+ parg->obj = obj;
+ parg->value.p = ((CDataObject *)obj)->b_ptr;
+ return (PyObject *)parg;
+}
+
+static PyObject *
+_get_arg(int *pindex, PyObject *name, PyObject *defval, PyObject *inargs, PyObject *kwds)
+{
+ PyObject *v;
+
+ if (*pindex < PyTuple_GET_SIZE(inargs)) {
+ v = PyTuple_GET_ITEM(inargs, *pindex);
+ ++*pindex;
+ Py_INCREF(v);
+ return v;
+ }
+ if (kwds && name && (v = PyDict_GetItem(kwds, name))) {
+ ++*pindex;
+ Py_INCREF(v);
+ return v;
+ }
+ if (defval) {
+ Py_INCREF(defval);
+ return defval;
+ }
+ /* we can't currently emit a better error message */
+ if (name)
+ PyErr_Format(PyExc_TypeError,
+ "required argument '%S' missing", name);
+ else
+ PyErr_Format(PyExc_TypeError,
+ "not enough arguments");
+ return NULL;
+}
+
+/*
+ This function implements higher level functionality plus the ability to call
+ functions with keyword arguments by looking at parameter flags. parameter
+ flags is a tuple of 1, 2 or 3-tuples. The first entry in each is an integer
+ specifying the direction of the data transfer for this parameter - 'in',
+ 'out' or 'inout' (zero means the same as 'in'). The second entry is the
+ parameter name, and the third is the default value if the parameter is
+ missing in the function call.
+
+ This function builds and returns a new tuple 'callargs' which contains the
+ parameters to use in the call. Items on this tuple are copied from the
+ 'inargs' tuple for 'in' and 'in, out' parameters, and constructed from the
+ 'argtypes' tuple for 'out' parameters. It also calculates numretvals which
+ is the number of return values for the function, outmask/inoutmask are
+ bitmasks containing indexes into the callargs tuple specifying which
+ parameters have to be returned. _build_result builds the return value of the
+ function.
+*/
+static PyObject *
+_build_callargs(PyCFuncPtrObject *self, PyObject *argtypes,
+ PyObject *inargs, PyObject *kwds,
+ int *poutmask, int *pinoutmask, unsigned int *pnumretvals)
+{
+ PyObject *paramflags = self->paramflags;
+ PyObject *callargs;
+ StgDictObject *dict;
+ Py_ssize_t i, len;
+ int inargs_index = 0;
+ /* It's a little bit difficult to determine how many arguments the
+ function call requires/accepts. For simplicity, we count the consumed
+ args and compare this to the number of supplied args. */
+ Py_ssize_t actual_args;
+
+ *poutmask = 0;
+ *pinoutmask = 0;
+ *pnumretvals = 0;
+
+ /* Trivial cases, where we either return inargs itself, or a slice of it. */
+ if (argtypes == NULL || paramflags == NULL || PyTuple_GET_SIZE(argtypes) == 0) {
+#ifdef MS_WIN32
+ if (self->index)
+ return PyTuple_GetSlice(inargs, 1, PyTuple_GET_SIZE(inargs));
+#endif
+ Py_INCREF(inargs);
+ return inargs;
+ }
+
+ len = PyTuple_GET_SIZE(argtypes);
+ callargs = PyTuple_New(len); /* the argument tuple we build */
+ if (callargs == NULL)
+ return NULL;
+
+#ifdef MS_WIN32
+ /* For a COM method, skip the first arg */
+ if (self->index) {
+ inargs_index = 1;
+ }
+#endif
+ for (i = 0; i < len; ++i) {
+ PyObject *item = PyTuple_GET_ITEM(paramflags, i);
+ PyObject *ob;
+ int flag;
+ PyObject *name = NULL;
+ PyObject *defval = NULL;
+
+ /* This way seems to be ~2 us faster than the PyArg_ParseTuple
+ calls below. */
+ /* We HAVE already checked that the tuple can be parsed with "i|ZO", so... */
+ Py_ssize_t tsize = PyTuple_GET_SIZE(item);
+ flag = PyLong_AS_LONG(PyTuple_GET_ITEM(item, 0));
+ name = tsize > 1 ? PyTuple_GET_ITEM(item, 1) : NULL;
+ defval = tsize > 2 ? PyTuple_GET_ITEM(item, 2) : NULL;
+
+ switch (flag & (PARAMFLAG_FIN | PARAMFLAG_FOUT | PARAMFLAG_FLCID)) {
+ case PARAMFLAG_FIN | PARAMFLAG_FLCID:
+ /* ['in', 'lcid'] parameter. Always taken from defval,
+ if given, else the integer 0. */
+ if (defval == NULL) {
+ defval = PyLong_FromLong(0);
+ if (defval == NULL)
+ goto error;
+ } else
+ Py_INCREF(defval);
+ PyTuple_SET_ITEM(callargs, i, defval);
+ break;
+ case (PARAMFLAG_FIN | PARAMFLAG_FOUT):
+ *pinoutmask |= (1 << i); /* mark as inout arg */
+ (*pnumretvals)++;
+ /* fall through */
+ case 0:
+ case PARAMFLAG_FIN:
+ /* 'in' parameter. Copy it from inargs. */
+ ob =_get_arg(&inargs_index, name, defval, inargs, kwds);
+ if (ob == NULL)
+ goto error;
+ PyTuple_SET_ITEM(callargs, i, ob);
+ break;
+ case PARAMFLAG_FOUT:
+ /* XXX Refactor this code into a separate function. */
+ /* 'out' parameter.
+ argtypes[i] must be a POINTER to a c type.
+
+ Cannot be supplied in inargs, but a defval will be used
+ if available. XXX Should we support getting it from kwds?
+ */
+ if (defval) {
+ /* XXX Using mutable objects as defval will
+ make the function non-threadsafe, unless we
+ copy the object in each invocation */
+ Py_INCREF(defval);
+ PyTuple_SET_ITEM(callargs, i, defval);
+ *poutmask |= (1 << i); /* mark as out arg */
+ (*pnumretvals)++;
+ break;
+ }
+ ob = PyTuple_GET_ITEM(argtypes, i);
+ dict = PyType_stgdict(ob);
+ if (dict == NULL) {
+ /* Cannot happen: _validate_paramflags()
+ would not accept such an object */
+ PyErr_Format(PyExc_RuntimeError,
+ "NULL stgdict unexpected");
+ goto error;
+ }
+ if (PyUnicode_Check(dict->proto)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "%s 'out' parameter must be passed as default value",
+ ((PyTypeObject *)ob)->tp_name);
+ goto error;
+ }
+ if (PyCArrayTypeObject_Check(ob))
+ ob = PyObject_CallObject(ob, NULL);
+ else
+ /* Create an instance of the pointed-to type */
+ ob = PyObject_CallObject(dict->proto, NULL);
+ /*
+ XXX Is the following correct any longer?
+ We must not pass a byref() to the array then but
+ the array instance itself. Then, we cannot retrieve
+ the result from the PyCArgObject.
+ */
+ if (ob == NULL)
+ goto error;
+ /* The .from_param call that will occur later will pass this
+ as a byref parameter. */
+ PyTuple_SET_ITEM(callargs, i, ob);
+ *poutmask |= (1 << i); /* mark as out arg */
+ (*pnumretvals)++;
+ break;
+ default:
+ PyErr_Format(PyExc_ValueError,
+ "paramflag %d not yet implemented", flag);
+ goto error;
+ break;
+ }
+ }
+
+ /* We have counted the arguments we have consumed in 'inargs_index'. This
+ must be the same as len(inargs) + len(kwds), otherwise we have
+ either too many or not enough arguments. */
+
+ actual_args = PyTuple_GET_SIZE(inargs) + (kwds ? PyDict_Size(kwds) : 0);
+ if (actual_args != inargs_index) {
+ /* When we have default values or named parameters, this error
+ message is misleading. See unittests/test_paramflags.py
+ */
+ PyErr_Format(PyExc_TypeError,
+ "call takes exactly %d arguments (%zd given)",
+ inargs_index, actual_args);
+ goto error;
+ }
+
+ /* outmask is a bitmask containing indexes into callargs. Items at
+ these indexes contain values to return.
+ */
+ return callargs;
+ error:
+ Py_DECREF(callargs);
+ return NULL;
+}
+
+/* See also:
+ http://msdn.microsoft.com/library/en-us/com/html/769127a1-1a14-4ed4-9d38-7cf3e571b661.asp
+*/
+/*
+ Build return value of a function.
+
+ Consumes the refcount on result and callargs.
+*/
+static PyObject *
+_build_result(PyObject *result, PyObject *callargs,
+ int outmask, int inoutmask, unsigned int numretvals)
+{
+ unsigned int i, index;
+ int bit;
+ PyObject *tup = NULL;
+
+ if (callargs == NULL)
+ return result;
+ if (result == NULL || numretvals == 0) {
+ Py_DECREF(callargs);
+ return result;
+ }
+ Py_DECREF(result);
+
+ /* tup will not be allocated if numretvals == 1 */
+ /* allocate tuple to hold the result */
+ if (numretvals > 1) {
+ tup = PyTuple_New(numretvals);
+ if (tup == NULL) {
+ Py_DECREF(callargs);
+ return NULL;
+ }
+ }
+
+ index = 0;
+ for (bit = 1, i = 0; i < 32; ++i, bit <<= 1) {
+ PyObject *v;
+ if (bit & inoutmask) {
+ v = PyTuple_GET_ITEM(callargs, i);
+ Py_INCREF(v);
+ if (numretvals == 1) {
+ Py_DECREF(callargs);
+ return v;
+ }
+ PyTuple_SET_ITEM(tup, index, v);
+ index++;
+ } else if (bit & outmask) {
+ _Py_IDENTIFIER(__ctypes_from_outparam__);
+
+ v = PyTuple_GET_ITEM(callargs, i);
+ v = _PyObject_CallMethodId(v, &PyId___ctypes_from_outparam__, NULL);
+ if (v == NULL || numretvals == 1) {
+ Py_DECREF(callargs);
+ return v;
+ }
+ PyTuple_SET_ITEM(tup, index, v);
+ index++;
+ }
+ if (index == numretvals)
+ break;
+ }
+
+ Py_DECREF(callargs);
+ return tup;
+}
+
+static PyObject *
+PyCFuncPtr_call(PyCFuncPtrObject *self, PyObject *inargs, PyObject *kwds)
+{
+ PyObject *restype;
+ PyObject *converters;
+ PyObject *checker;
+ PyObject *argtypes;
+ StgDictObject *dict = PyObject_stgdict((PyObject *)self);
+ PyObject *result;
+ PyObject *callargs;
+ PyObject *errcheck;
+#ifdef MS_WIN32
+ IUnknown *piunk = NULL;
+#endif
+ void *pProc = NULL;
+
+ int inoutmask;
+ int outmask;
+ unsigned int numretvals;
+
+ assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
+ restype = self->restype ? self->restype : dict->restype;
+ converters = self->converters ? self->converters : dict->converters;
+ checker = self->checker ? self->checker : dict->checker;
+ argtypes = self->argtypes ? self->argtypes : dict->argtypes;
+/* later, we probably want to have an errcheck field in stgdict */
+ errcheck = self->errcheck /* ? self->errcheck : dict->errcheck */;
+
+
+ pProc = *(void **)self->b_ptr;
+#ifdef MS_WIN32
+ if (self->index) {
+ /* It's a COM method */
+ CDataObject *this;
+ this = (CDataObject *)PyTuple_GetItem(inargs, 0); /* borrowed ref! */
+ if (!this) {
+ PyErr_SetString(PyExc_ValueError,
+ "native com method call without 'this' parameter");
+ return NULL;
+ }
+ if (!CDataObject_Check(this)) {
+ PyErr_SetString(PyExc_TypeError,
+ "Expected a COM this pointer as first argument");
+ return NULL;
+ }
+ /* should there be more checks? No; those are done on the Python side */
+ /* First arg is a pointer to an interface instance */
+ if (!this->b_ptr || *(void **)this->b_ptr == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "NULL COM pointer access");
+ return NULL;
+ }
+ piunk = *(IUnknown **)this->b_ptr;
+ if (NULL == piunk->lpVtbl) {
+ PyErr_SetString(PyExc_ValueError,
+ "COM method call without VTable");
+ return NULL;
+ }
+ pProc = ((void **)piunk->lpVtbl)[self->index - 0x1000];
+ }
+#endif
+ callargs = _build_callargs(self, argtypes,
+ inargs, kwds,
+ &outmask, &inoutmask, &numretvals);
+ if (callargs == NULL)
+ return NULL;
+
+ if (converters) {
+ int required = Py_SAFE_DOWNCAST(PyTuple_GET_SIZE(converters),
+ Py_ssize_t, int);
+ int actual = Py_SAFE_DOWNCAST(PyTuple_GET_SIZE(callargs),
+ Py_ssize_t, int);
+
+ if ((dict->flags & FUNCFLAG_CDECL) == FUNCFLAG_CDECL) {
+ /* For cdecl functions, we allow more actual arguments
+ than the length of the argtypes tuple.
+ */
+ if (required > actual) {
+ Py_DECREF(callargs);
+ PyErr_Format(PyExc_TypeError,
+ "this function takes at least %d argument%s (%d given)",
+ required,
+ required == 1 ? "" : "s",
+ actual);
+ return NULL;
+ }
+ } else if (required != actual) {
+ Py_DECREF(callargs);
+ PyErr_Format(PyExc_TypeError,
+ "this function takes %d argument%s (%d given)",
+ required,
+ required == 1 ? "" : "s",
+ actual);
+ return NULL;
+ }
+ }
+
+ result = _ctypes_callproc(pProc,
+ callargs,
+#ifdef MS_WIN32
+ piunk,
+ self->iid,
+#endif
+ dict->flags,
+ converters,
+ restype,
+ checker);
+/* The 'errcheck' protocol */
+ if (result != NULL && errcheck) {
+ PyObject *v = PyObject_CallFunctionObjArgs(errcheck,
+ result,
+ self,
+ callargs,
+ NULL);
+ /* If the errcheck function failed, return NULL.
+ If the errcheck function returned callargs unchanged,
+ continue normal processing.
+ If the errcheck function returned something else,
+ use that as result.
+ */
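+ /* A hedged Python-side sketch of this protocol (the names are
+    illustrative):
+
+        def errcheck(result, func, arguments):
+            if result < 0:
+                raise OSError(result)
+            return arguments  # returning the args keeps normal processing
+
+    Returning anything other than the passed arguments replaces the
+    result; returning them unchanged keeps it. */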
+ if (v == NULL || v != callargs) {
+ Py_DECREF(result);
+ Py_DECREF(callargs);
+ return v;
+ }
+ Py_DECREF(v);
+ }
+
+ return _build_result(result, callargs,
+ outmask, inoutmask, numretvals);
+}
+
+static int
+PyCFuncPtr_traverse(PyCFuncPtrObject *self, visitproc visit, void *arg)
+{
+ Py_VISIT(self->callable);
+ Py_VISIT(self->restype);
+ Py_VISIT(self->checker);
+ Py_VISIT(self->errcheck);
+ Py_VISIT(self->argtypes);
+ Py_VISIT(self->converters);
+ Py_VISIT(self->paramflags);
+ Py_VISIT(self->thunk);
+ return PyCData_traverse((CDataObject *)self, visit, arg);
+}
+
+static int
+PyCFuncPtr_clear(PyCFuncPtrObject *self)
+{
+ Py_CLEAR(self->callable);
+ Py_CLEAR(self->restype);
+ Py_CLEAR(self->checker);
+ Py_CLEAR(self->errcheck);
+ Py_CLEAR(self->argtypes);
+ Py_CLEAR(self->converters);
+ Py_CLEAR(self->paramflags);
+ Py_CLEAR(self->thunk);
+ return PyCData_clear((CDataObject *)self);
+}
+
+static void
+PyCFuncPtr_dealloc(PyCFuncPtrObject *self)
+{
+ PyCFuncPtr_clear(self);
+ Py_TYPE(self)->tp_free((PyObject *)self);
+}
+
+static PyObject *
+PyCFuncPtr_repr(PyCFuncPtrObject *self)
+{
+#ifdef MS_WIN32
+ if (self->index)
+ return PyUnicode_FromFormat("<COM method offset %d: %s at %p>",
+ self->index - 0x1000,
+ Py_TYPE(self)->tp_name,
+ self);
+#endif
+ return PyUnicode_FromFormat("<%s object at %p>",
+ Py_TYPE(self)->tp_name,
+ self);
+}
+
+static int
+PyCFuncPtr_bool(PyCFuncPtrObject *self)
+{
+ return ((*(void **)self->b_ptr != NULL)
+#ifdef MS_WIN32
+ || (self->index != 0)
+#endif
+ );
+}
+
+static PyNumberMethods PyCFuncPtr_as_number = {
+ 0, /* nb_add */
+ 0, /* nb_subtract */
+ 0, /* nb_multiply */
+ 0, /* nb_remainder */
+ 0, /* nb_divmod */
+ 0, /* nb_power */
+ 0, /* nb_negative */
+ 0, /* nb_positive */
+ 0, /* nb_absolute */
+ (inquiry)PyCFuncPtr_bool, /* nb_bool */
+};
+
+PyTypeObject PyCFuncPtr_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.PyCFuncPtr",
+ sizeof(PyCFuncPtrObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)PyCFuncPtr_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)PyCFuncPtr_repr, /* tp_repr */
+ &PyCFuncPtr_as_number, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ (ternaryfunc)PyCFuncPtr_call, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "Function Pointer", /* tp_doc */
+ (traverseproc)PyCFuncPtr_traverse, /* tp_traverse */
+ (inquiry)PyCFuncPtr_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ PyCFuncPtr_getsets, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ PyCFuncPtr_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+/*****************************************************************/
+/*
+ Struct_Type
+*/
+/*
+ This function is called to initialize a Structure or Union with positional
+ arguments. It calls itself recursively for all Structure or Union base
+ classes, then retrieves the _fields_ member to associate the argument
+ position with the correct field name.
+
+ Returns -1 on error, or the index of the next argument on success.
+ */
+static Py_ssize_t
+_init_pos_args(PyObject *self, PyTypeObject *type,
+ PyObject *args, PyObject *kwds,
+ Py_ssize_t index)
+{
+ StgDictObject *dict;
+ PyObject *fields;
+ Py_ssize_t i;
+
+ if (PyType_stgdict((PyObject *)type->tp_base)) {
+ index = _init_pos_args(self, type->tp_base,
+ args, kwds,
+ index);
+ if (index == -1)
+ return -1;
+ }
+
+ dict = PyType_stgdict((PyObject *)type);
+ fields = PyDict_GetItemString((PyObject *)dict, "_fields_");
+ if (fields == NULL)
+ return index;
+
+ for (i = 0;
+ i < dict->length && (i+index) < PyTuple_GET_SIZE(args);
+ ++i) {
+ PyObject *pair = PySequence_GetItem(fields, i);
+ PyObject *name, *val;
+ int res;
+ if (!pair)
+ return -1;
+ name = PySequence_GetItem(pair, 0);
+ if (!name) {
+ Py_DECREF(pair);
+ return -1;
+ }
+ val = PyTuple_GET_ITEM(args, i + index);
+ if (kwds && PyDict_GetItem(kwds, name)) {
+ PyErr_Format(PyExc_TypeError,
+ "duplicate values for field %R",
+ name);
+ Py_DECREF(pair);
+ Py_DECREF(name);
+ return -1;
+ }
+
+ res = PyObject_SetAttr(self, name, val);
+ Py_DECREF(pair);
+ Py_DECREF(name);
+ if (res == -1)
+ return -1;
+ }
+ return index + dict->length;
+}
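+
+/* A minimal Python-level sketch of the positional initialization implemented
+   by _init_pos_args() above; base-class fields are consumed first:
+
+       from ctypes import Structure, c_int
+
+       class Base(Structure):
+           _fields_ = [("a", c_int)]
+
+       class Derived(Base):
+           _fields_ = [("b", c_int)]
+
+       d = Derived(1, 2)      # a = 1 (inherited field), b = 2
+       Derived(1, 2, 3)       # raises TypeError: too many initializers
+*/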
+
+static int
+Struct_init(PyObject *self, PyObject *args, PyObject *kwds)
+{
+/* Optimization possible: Store the attribute names _fields_[x][0]
+ * in C accessible fields somewhere ?
+ */
+ if (!PyTuple_Check(args)) {
+ PyErr_SetString(PyExc_TypeError,
+ "args not a tuple?");
+ return -1;
+ }
+ if (PyTuple_GET_SIZE(args)) {
+ Py_ssize_t res = _init_pos_args(self, Py_TYPE(self),
+ args, kwds, 0);
+ if (res == -1)
+ return -1;
+ if (res < PyTuple_GET_SIZE(args)) {
+ PyErr_SetString(PyExc_TypeError,
+ "too many initializers");
+ return -1;
+ }
+ }
+
+ if (kwds) {
+ PyObject *key, *value;
+ Py_ssize_t pos = 0;
+ while(PyDict_Next(kwds, &pos, &key, &value)) {
+ if (-1 == PyObject_SetAttr(self, key, value))
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static PyTypeObject Struct_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.Structure",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "Structure base class", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ Struct_init, /* tp_init */
+ 0, /* tp_alloc */
+ GenericPyCData_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+static PyTypeObject Union_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.Union",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "Union base class", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ Struct_init, /* tp_init */
+ 0, /* tp_alloc */
+ GenericPyCData_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/******************************************************************/
+/*
+ PyCArray_Type
+*/
+static int
+Array_init(CDataObject *self, PyObject *args, PyObject *kw)
+{
+ Py_ssize_t i;
+ Py_ssize_t n;
+
+ if (!PyTuple_Check(args)) {
+ PyErr_SetString(PyExc_TypeError,
+ "args not a tuple?");
+ return -1;
+ }
+ n = PyTuple_GET_SIZE(args);
+ for (i = 0; i < n; ++i) {
+ PyObject *v;
+ v = PyTuple_GET_ITEM(args, i);
+ if (-1 == PySequence_SetItem((PyObject *)self, i, v))
+ return -1;
+ }
+ return 0;
+}
+
+static PyObject *
+Array_item(PyObject *myself, Py_ssize_t index)
+{
+ CDataObject *self = (CDataObject *)myself;
+ Py_ssize_t offset, size;
+ StgDictObject *stgdict;
+
+
+ if (index < 0 || index >= self->b_length) {
+ PyErr_SetString(PyExc_IndexError,
+ "invalid index");
+ return NULL;
+ }
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for array instances */
+ /* Would it be clearer if we got the item size from
+ stgdict->proto's stgdict?
+ */
+ size = stgdict->size / stgdict->length;
+ offset = index * size;
+
+ return PyCData_get(stgdict->proto, stgdict->getfunc, (PyObject *)self,
+ index, size, self->b_ptr + offset);
+}
+
+static PyObject *
+Array_subscript(PyObject *myself, PyObject *item)
+{
+ CDataObject *self = (CDataObject *)myself;
+
+ if (PyIndex_Check(item)) {
+ Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
+
+ if (i == -1 && PyErr_Occurred())
+ return NULL;
+ if (i < 0)
+ i += self->b_length;
+ return Array_item(myself, i);
+ }
+ else if (PySlice_Check(item)) {
+ StgDictObject *stgdict, *itemdict;
+ PyObject *proto;
+ PyObject *np;
+ Py_ssize_t start, stop, step, slicelen, cur, i;
+
+ if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
+ return NULL;
+ }
+ slicelen = PySlice_AdjustIndices(self->b_length, &start, &stop, step);
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for array object instances */
+ proto = stgdict->proto;
+ itemdict = PyType_stgdict(proto);
+ assert(itemdict); /* proto is the item type of the array, a
+ ctypes type, so this cannot be NULL */
+
+ if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
+ char *ptr = (char *)self->b_ptr;
+ char *dest;
+
+ if (slicelen <= 0)
+ return PyBytes_FromStringAndSize("", 0);
+ if (step == 1) {
+ return PyBytes_FromStringAndSize(ptr + start,
+ slicelen);
+ }
+ dest = (char *)PyMem_Malloc(slicelen);
+
+ if (dest == NULL)
+ return PyErr_NoMemory();
+
+ for (cur = start, i = 0; i < slicelen;
+ cur += step, i++) {
+ dest[i] = ptr[cur];
+ }
+
+ np = PyBytes_FromStringAndSize(dest, slicelen);
+ PyMem_Free(dest);
+ return np;
+ }
+#ifdef CTYPES_UNICODE
+ if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
+ wchar_t *ptr = (wchar_t *)self->b_ptr;
+ wchar_t *dest;
+
+ if (slicelen <= 0)
+ return PyUnicode_New(0, 0);
+ if (step == 1) {
+ return PyUnicode_FromWideChar(ptr + start,
+ slicelen);
+ }
+
+ dest = PyMem_New(wchar_t, slicelen);
+ if (dest == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ for (cur = start, i = 0; i < slicelen;
+ cur += step, i++) {
+ dest[i] = ptr[cur];
+ }
+
+ np = PyUnicode_FromWideChar(dest, slicelen);
+ PyMem_Free(dest);
+ return np;
+ }
+#endif
+
+ np = PyList_New(slicelen);
+ if (np == NULL)
+ return NULL;
+
+ for (cur = start, i = 0; i < slicelen;
+ cur += step, i++) {
+ PyObject *v = Array_item(myself, cur);
+ if (v == NULL) {
+ Py_DECREF(np);
+ return NULL;
+ }
+ PyList_SET_ITEM(np, i, v);
+ }
+ return np;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "indices must be integers");
+ return NULL;
+ }
+
+}
+
+static int
+Array_ass_item(PyObject *myself, Py_ssize_t index, PyObject *value)
+{
+ CDataObject *self = (CDataObject *)myself;
+ Py_ssize_t size, offset;
+ StgDictObject *stgdict;
+ char *ptr;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "Array does not support item deletion");
+ return -1;
+ }
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for array object instances */
+ if (index < 0 || index >= stgdict->length) {
+ PyErr_SetString(PyExc_IndexError,
+ "invalid index");
+ return -1;
+ }
+ size = stgdict->size / stgdict->length;
+ offset = index * size;
+ ptr = self->b_ptr + offset;
+
+ return PyCData_set((PyObject *)self, stgdict->proto, stgdict->setfunc, value,
+ index, size, ptr);
+}
+
+static int
+Array_ass_subscript(PyObject *myself, PyObject *item, PyObject *value)
+{
+ CDataObject *self = (CDataObject *)myself;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "Array does not support item deletion");
+ return -1;
+ }
+
+ if (PyIndex_Check(item)) {
+ Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
+
+ if (i == -1 && PyErr_Occurred())
+ return -1;
+ if (i < 0)
+ i += self->b_length;
+ return Array_ass_item(myself, i, value);
+ }
+ else if (PySlice_Check(item)) {
+ Py_ssize_t start, stop, step, slicelen, otherlen, i, cur;
+
+ if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
+ return -1;
+ }
+ slicelen = PySlice_AdjustIndices(self->b_length, &start, &stop, step);
+ if ((step < 0 && start < stop) ||
+ (step > 0 && start > stop))
+ stop = start;
+
+ otherlen = PySequence_Length(value);
+ if (otherlen != slicelen) {
+ PyErr_SetString(PyExc_ValueError,
+ "Can only assign sequence of same size");
+ return -1;
+ }
+ for (cur = start, i = 0; i < otherlen; cur += step, i++) {
+ PyObject *item = PySequence_GetItem(value, i);
+ int result;
+ if (item == NULL)
+ return -1;
+ result = Array_ass_item(myself, cur, item);
+ Py_DECREF(item);
+ if (result == -1)
+ return -1;
+ }
+ return 0;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "indices must be integer");
+ return -1;
+ }
+}
+
+static Py_ssize_t
+Array_length(PyObject *myself)
+{
+ CDataObject *self = (CDataObject *)myself;
+ return self->b_length;
+}
+
+static PySequenceMethods Array_as_sequence = {
+ Array_length, /* sq_length; */
+ 0, /* sq_concat; */
+ 0, /* sq_repeat; */
+ Array_item, /* sq_item; */
+ 0, /* sq_slice; */
+ Array_ass_item, /* sq_ass_item; */
+ 0, /* sq_ass_slice; */
+ 0, /* sq_contains; */
+
+ 0, /* sq_inplace_concat; */
+ 0, /* sq_inplace_repeat; */
+};
+
+static PyMappingMethods Array_as_mapping = {
+ Array_length,
+ Array_subscript,
+ Array_ass_subscript,
+};
+
+PyTypeObject PyCArray_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.Array",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ &Array_as_sequence, /* tp_as_sequence */
+ &Array_as_mapping, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "XXX to be provided", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ (initproc)Array_init, /* tp_init */
+ 0, /* tp_alloc */
+ GenericPyCData_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+PyObject *
+PyCArrayType_from_ctype(PyObject *itemtype, Py_ssize_t length)
+{
+ static PyObject *cache;
+ PyObject *key;
+ PyObject *result;
+ char name[256];
+ PyObject *len;
+
+ if (cache == NULL) {
+ cache = PyDict_New();
+ if (cache == NULL)
+ return NULL;
+ }
+ len = PyLong_FromSsize_t(length);
+ if (len == NULL)
+ return NULL;
+ key = PyTuple_Pack(2, itemtype, len);
+ Py_DECREF(len);
+ if (!key)
+ return NULL;
+ result = PyDict_GetItemProxy(cache, key);
+ if (result) {
+ Py_INCREF(result);
+ Py_DECREF(key);
+ return result;
+ }
+
+ if (!PyType_Check(itemtype)) {
+ PyErr_SetString(PyExc_TypeError,
+ "Expected a type object");
+ Py_DECREF(key);
+ return NULL;
+ }
+#ifdef MS_WIN64
+ sprintf(name, "%.200s_Array_%Id",
+ ((PyTypeObject *)itemtype)->tp_name, length);
+#else
+ sprintf(name, "%.200s_Array_%ld",
+ ((PyTypeObject *)itemtype)->tp_name, (long)length);
+#endif
+
+ result = PyObject_CallFunction((PyObject *)&PyCArrayType_Type,
+ "s(O){s:n,s:O}",
+ name,
+ &PyCArray_Type,
+ "_length_",
+ length,
+ "_type_",
+ itemtype
+ );
+ if (result == NULL) {
+ Py_DECREF(key);
+ return NULL;
+ }
+ if (-1 == PyDict_SetItemProxy(cache, key, result)) {
+ Py_DECREF(key);
+ Py_DECREF(result);
+ return NULL;
+ }
+ Py_DECREF(key);
+ return result;
+}
+
+
+/******************************************************************/
+/*
+ Simple_Type
+*/
+
+static int
+Simple_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
+{
+ PyObject *result;
+ StgDictObject *dict = PyObject_stgdict((PyObject *)self);
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "can't delete attribute");
+ return -1;
+ }
+ assert(dict); /* Cannot be NULL for CDataObject instances */
+ assert(dict->setfunc);
+ result = dict->setfunc(self->b_ptr, value, dict->size);
+ if (!result)
+ return -1;
+
+ /* consumes the refcount the setfunc returns */
+ return KeepRef(self, 0, result);
+}
+
+static int
+Simple_init(CDataObject *self, PyObject *args, PyObject *kw)
+{
+ PyObject *value = NULL;
+ if (!PyArg_UnpackTuple(args, "__init__", 0, 1, &value))
+ return -1;
+ if (value)
+ return Simple_set_value(self, value, NULL);
+ return 0;
+}
+
+static PyObject *
+Simple_get_value(CDataObject *self, void *Py_UNUSED(ignored))
+{
+ StgDictObject *dict;
+ dict = PyObject_stgdict((PyObject *)self);
+ assert(dict); /* Cannot be NULL for CDataObject instances */
+ assert(dict->getfunc);
+ return dict->getfunc(self->b_ptr, self->b_size);
+}
+
+static PyGetSetDef Simple_getsets[] = {
+ { "value", (getter)Simple_get_value, (setter)Simple_set_value,
+ "current value", NULL },
+ { NULL, NULL }
+};
+
+static PyObject *
+Simple_from_outparm(PyObject *self, PyObject *args)
+{
+ if (_ctypes_simple_instance((PyObject *)Py_TYPE(self))) {
+ Py_INCREF(self);
+ return self;
+ }
+ /* call stgdict->getfunc */
+ return Simple_get_value((CDataObject *)self, NULL);
+}
+
+static PyMethodDef Simple_methods[] = {
+ { "__ctypes_from_outparam__", Simple_from_outparm, METH_NOARGS, },
+ { NULL, NULL },
+};
+
+static int Simple_bool(CDataObject *self)
+{
+ return memcmp(self->b_ptr, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", self->b_size);
+}
+
+static PyNumberMethods Simple_as_number = {
+ 0, /* nb_add */
+ 0, /* nb_subtract */
+ 0, /* nb_multiply */
+ 0, /* nb_remainder */
+ 0, /* nb_divmod */
+ 0, /* nb_power */
+ 0, /* nb_negative */
+ 0, /* nb_positive */
+ 0, /* nb_absolute */
+ (inquiry)Simple_bool, /* nb_bool */
+};
+
+/* "%s(%s)" % (self.__class__.__name__, self.value) */
+static PyObject *
+Simple_repr(CDataObject *self)
+{
+ PyObject *val, *result;
+
+ if (Py_TYPE(self)->tp_base != &Simple_Type) {
+ return PyUnicode_FromFormat("<%s object at %p>",
+ Py_TYPE(self)->tp_name, self);
+ }
+
+ val = Simple_get_value(self, NULL);
+ if (val == NULL)
+ return NULL;
+
+ result = PyUnicode_FromFormat("%s(%R)",
+ Py_TYPE(self)->tp_name, val);
+ Py_DECREF(val);
+ return result;
+}
+
+static PyTypeObject Simple_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes._SimpleCData",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)&Simple_repr, /* tp_repr */
+ &Simple_as_number, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "XXX to be provided", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ Simple_methods, /* tp_methods */
+ 0, /* tp_members */
+ Simple_getsets, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ (initproc)Simple_init, /* tp_init */
+ 0, /* tp_alloc */
+ GenericPyCData_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+/******************************************************************/
+/*
+ PyCPointer_Type
+*/
+static PyObject *
+Pointer_item(PyObject *myself, Py_ssize_t index)
+{
+ CDataObject *self = (CDataObject *)myself;
+ Py_ssize_t size;
+ Py_ssize_t offset;
+ StgDictObject *stgdict, *itemdict;
+ PyObject *proto;
+
+ if (*(void **)self->b_ptr == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "NULL pointer access");
+ return NULL;
+ }
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for pointer object instances */
+
+ proto = stgdict->proto;
+ assert(proto);
+ itemdict = PyType_stgdict(proto);
+ assert(itemdict); /* proto is the item type of the pointer, a ctypes
+ type, so this cannot be NULL */
+
+ size = itemdict->size;
+ offset = index * itemdict->size;
+
+ return PyCData_get(proto, stgdict->getfunc, (PyObject *)self,
+ index, size, (*(char **)self->b_ptr) + offset);
+}
+
+static int
+Pointer_ass_item(PyObject *myself, Py_ssize_t index, PyObject *value)
+{
+ CDataObject *self = (CDataObject *)myself;
+ Py_ssize_t size;
+ Py_ssize_t offset;
+ StgDictObject *stgdict, *itemdict;
+ PyObject *proto;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "Pointer does not support item deletion");
+ return -1;
+ }
+
+ if (*(void **)self->b_ptr == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "NULL pointer access");
+ return -1;
+ }
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for pointer instances */
+
+ proto = stgdict->proto;
+ assert(proto);
+
+ itemdict = PyType_stgdict(proto);
+ assert(itemdict); /* Cannot be NULL because the itemtype of a pointer
+ is always a ctypes type */
+
+ size = itemdict->size;
+ offset = index * itemdict->size;
+
+ return PyCData_set((PyObject *)self, proto, stgdict->setfunc, value,
+ index, size, (*(char **)self->b_ptr) + offset);
+}
+
+static PyObject *
+Pointer_get_contents(CDataObject *self, void *closure)
+{
+ StgDictObject *stgdict;
+
+ if (*(void **)self->b_ptr == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "NULL pointer access");
+ return NULL;
+ }
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for pointer instances */
+ return PyCData_FromBaseObj(stgdict->proto,
+ (PyObject *)self, 0,
+ *(void **)self->b_ptr);
+}
+
+static int
+Pointer_set_contents(CDataObject *self, PyObject *value, void *closure)
+{
+ StgDictObject *stgdict;
+ CDataObject *dst;
+ PyObject *keep;
+
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "Pointer does not support item deletion");
+ return -1;
+ }
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for pointer instances */
+ assert(stgdict->proto);
+ if (!CDataObject_Check(value)) {
+ int res = PyObject_IsInstance(value, stgdict->proto);
+ if (res == -1)
+ return -1;
+ if (!res) {
+ PyErr_Format(PyExc_TypeError,
+ "expected %s instead of %s",
+ ((PyTypeObject *)(stgdict->proto))->tp_name,
+ Py_TYPE(value)->tp_name);
+ return -1;
+ }
+ }
+
+ dst = (CDataObject *)value;
+ *(void **)self->b_ptr = dst->b_ptr;
+
+ /*
+ A Pointer instance must keep the value it points to alive. To do this, a
+ pointer instance has b_length set to 2 instead of 1, and in addition we
+ store 'value' itself as the second item of the b_objects list.
+ */
+ Py_INCREF(value);
+ if (-1 == KeepRef(self, 1, value))
+ return -1;
+
+ keep = GetKeepedObjects(dst);
+ if (keep == NULL)
+ return -1;
+
+ Py_INCREF(keep);
+ return KeepRef(self, 0, keep);
+}
+
+static PyGetSetDef Pointer_getsets[] = {
+ { "contents", (getter)Pointer_get_contents,
+ (setter)Pointer_set_contents,
+ "the object this pointer points to (read-write)", NULL },
+ { NULL, NULL }
+};
+
+static int
+Pointer_init(CDataObject *self, PyObject *args, PyObject *kw)
+{
+ PyObject *value = NULL;
+
+ if (!PyArg_UnpackTuple(args, "POINTER", 0, 1, &value))
+ return -1;
+ if (value == NULL)
+ return 0;
+ return Pointer_set_contents(self, value, NULL);
+}
+
+static PyObject *
+Pointer_new(PyTypeObject *type, PyObject *args, PyObject *kw)
+{
+ StgDictObject *dict = PyType_stgdict((PyObject *)type);
+ if (!dict || !dict->proto) {
+ PyErr_SetString(PyExc_TypeError,
+ "Cannot create instance: has no _type_");
+ return NULL;
+ }
+ return GenericPyCData_new(type, args, kw);
+}
+
+static PyObject *
+Pointer_subscript(PyObject *myself, PyObject *item)
+{
+ CDataObject *self = (CDataObject *)myself;
+ if (PyIndex_Check(item)) {
+ Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
+ if (i == -1 && PyErr_Occurred())
+ return NULL;
+ return Pointer_item(myself, i);
+ }
+ else if (PySlice_Check(item)) {
+ PySliceObject *slice = (PySliceObject *)item;
+ Py_ssize_t start, stop, step;
+ PyObject *np;
+ StgDictObject *stgdict, *itemdict;
+ PyObject *proto;
+ Py_ssize_t i, len, cur;
+
+ /* Since pointers have no length, and we want to apply
+ different semantics to negative indices than normal
+ slicing, we have to dissect the slice object ourselves.*/
+ if (slice->step == Py_None) {
+ step = 1;
+ }
+ else {
+ step = PyNumber_AsSsize_t(slice->step,
+ PyExc_ValueError);
+ if (step == -1 && PyErr_Occurred())
+ return NULL;
+ if (step == 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "slice step cannot be zero");
+ return NULL;
+ }
+ }
+ if (slice->start == Py_None) {
+ if (step < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "slice start is required "
+ "for step < 0");
+ return NULL;
+ }
+ start = 0;
+ }
+ else {
+ start = PyNumber_AsSsize_t(slice->start,
+ PyExc_ValueError);
+ if (start == -1 && PyErr_Occurred())
+ return NULL;
+ }
+ if (slice->stop == Py_None) {
+ PyErr_SetString(PyExc_ValueError,
+ "slice stop is required");
+ return NULL;
+ }
+ stop = PyNumber_AsSsize_t(slice->stop,
+ PyExc_ValueError);
+ if (stop == -1 && PyErr_Occurred())
+ return NULL;
+ if ((step > 0 && start > stop) ||
+ (step < 0 && start < stop))
+ len = 0;
+ else if (step > 0)
+ len = (stop - start - 1) / step + 1;
+ else
+ len = (stop - start + 1) / step + 1;
+
+ stgdict = PyObject_stgdict((PyObject *)self);
+ assert(stgdict); /* Cannot be NULL for pointer instances */
+ proto = stgdict->proto;
+ assert(proto);
+ itemdict = PyType_stgdict(proto);
+ assert(itemdict);
+ if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
+ char *ptr = *(char **)self->b_ptr;
+ char *dest;
+
+ if (len <= 0)
+ return PyBytes_FromStringAndSize("", 0);
+ if (step == 1) {
+ return PyBytes_FromStringAndSize(ptr + start,
+ len);
+ }
+ dest = (char *)PyMem_Malloc(len);
+ if (dest == NULL)
+ return PyErr_NoMemory();
+ for (cur = start, i = 0; i < len; cur += step, i++) {
+ dest[i] = ptr[cur];
+ }
+ np = PyBytes_FromStringAndSize(dest, len);
+ PyMem_Free(dest);
+ return np;
+ }
+#ifdef CTYPES_UNICODE
+ if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
+ wchar_t *ptr = *(wchar_t **)self->b_ptr;
+ wchar_t *dest;
+
+ if (len <= 0)
+ return PyUnicode_New(0, 0);
+ if (step == 1) {
+ return PyUnicode_FromWideChar(ptr + start,
+ len);
+ }
+ dest = PyMem_New(wchar_t, len);
+ if (dest == NULL)
+ return PyErr_NoMemory();
+ for (cur = start, i = 0; i < len; cur += step, i++) {
+ dest[i] = ptr[cur];
+ }
+ np = PyUnicode_FromWideChar(dest, len);
+ PyMem_Free(dest);
+ return np;
+ }
+#endif
+
+ np = PyList_New(len);
+ if (np == NULL)
+ return NULL;
+
+ for (cur = start, i = 0; i < len; cur += step, i++) {
+ PyObject *v = Pointer_item(myself, cur);
+ PyList_SET_ITEM(np, i, v);
+ }
+ return np;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "Pointer indices must be integer");
+ return NULL;
+ }
+}
+
+static PySequenceMethods Pointer_as_sequence = {
+ 0, /* inquiry sq_length; */
+ 0, /* binaryfunc sq_concat; */
+ 0, /* intargfunc sq_repeat; */
+ Pointer_item, /* intargfunc sq_item; */
+ 0, /* intintargfunc sq_slice; */
+ Pointer_ass_item, /* intobjargproc sq_ass_item; */
+ 0, /* intintobjargproc sq_ass_slice; */
+ 0, /* objobjproc sq_contains; */
+ /* Added in release 2.0 */
+ 0, /* binaryfunc sq_inplace_concat; */
+ 0, /* intargfunc sq_inplace_repeat; */
+};
+
+static PyMappingMethods Pointer_as_mapping = {
+ 0,
+ Pointer_subscript,
+};
+
+static int
+Pointer_bool(CDataObject *self)
+{
+ return (*(void **)self->b_ptr != NULL);
+}
+
+static PyNumberMethods Pointer_as_number = {
+ 0, /* nb_add */
+ 0, /* nb_subtract */
+ 0, /* nb_multiply */
+ 0, /* nb_remainder */
+ 0, /* nb_divmod */
+ 0, /* nb_power */
+ 0, /* nb_negative */
+ 0, /* nb_positive */
+ 0, /* nb_absolute */
+ (inquiry)Pointer_bool, /* nb_bool */
+};
+
+PyTypeObject PyCPointer_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes._Pointer",
+ sizeof(CDataObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ &Pointer_as_number, /* tp_as_number */
+ &Pointer_as_sequence, /* tp_as_sequence */
+ &Pointer_as_mapping, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ &PyCData_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ "XXX to be provided", /* tp_doc */
+ (traverseproc)PyCData_traverse, /* tp_traverse */
+ (inquiry)PyCData_clear, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ Pointer_getsets, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ (initproc)Pointer_init, /* tp_init */
+ 0, /* tp_alloc */
+ Pointer_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+
+/******************************************************************/
+/*
+ * Module initialization.
+ */
+
+static const char module_docs[] =
+"Create and manipulate C compatible data types in Python.";
+
+#ifdef MS_WIN32
+
+static const char comerror_doc[] = "Raised when a COM method call failed.";
+
+int
+comerror_init(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *hresult, *text, *details;
+ PyObject *a;
+ int status;
+
+ if (!_PyArg_NoKeywords(Py_TYPE(self)->tp_name, kwds))
+ return -1;
+
+ if (!PyArg_ParseTuple(args, "OOO:COMError", &hresult, &text, &details))
+ return -1;
+
+ a = PySequence_GetSlice(args, 1, PySequence_Size(args));
+ if (!a)
+ return -1;
+ status = PyObject_SetAttrString(self, "args", a);
+ Py_DECREF(a);
+ if (status < 0)
+ return -1;
+
+ if (PyObject_SetAttrString(self, "hresult", hresult) < 0)
+ return -1;
+
+ if (PyObject_SetAttrString(self, "text", text) < 0)
+ return -1;
+
+ if (PyObject_SetAttrString(self, "details", details) < 0)
+ return -1;
+
+ Py_INCREF(args);
+ Py_SETREF(((PyBaseExceptionObject *)self)->args, args);
+
+ return 0;
+}
+
+static PyTypeObject PyComError_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "_ctypes.COMError", /* tp_name */
+ sizeof(PyBaseExceptionObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
+ PyDoc_STR(comerror_doc), /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ (initproc)comerror_init, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+};
+
+
+static int
+create_comerror(void)
+{
+ PyComError_Type.tp_base = (PyTypeObject*)PyExc_Exception;
+ if (PyType_Ready(&PyComError_Type) < 0)
+ return -1;
+ Py_INCREF(&PyComError_Type);
+ ComError = (PyObject*)&PyComError_Type;
+ return 0;
+}
+
+#endif
+
+static PyObject *
+string_at(const char *ptr, int size)
+{
+ if (size == -1)
+ return PyBytes_FromStringAndSize(ptr, strlen(ptr));
+ return PyBytes_FromStringAndSize(ptr, size);
+}
+
+static int
+cast_check_pointertype(PyObject *arg)
+{
+ StgDictObject *dict;
+
+ if (PyCPointerTypeObject_Check(arg))
+ return 1;
+ if (PyCFuncPtrTypeObject_Check(arg))
+ return 1;
+ dict = PyType_stgdict(arg);
+ if (dict != NULL && dict->proto != NULL) {
+ if (PyUnicode_Check(dict->proto)
+ && (strchr("sPzUZXO", PyUnicode_AsUTF8(dict->proto)[0]))) {
+ /* simple pointer types, c_void_p, c_wchar_p, BSTR, ... */
+ return 1;
+ }
+ }
+ PyErr_Format(PyExc_TypeError,
+ "cast() argument 2 must be a pointer type, not %s",
+ PyType_Check(arg)
+ ? ((PyTypeObject *)arg)->tp_name
+ : Py_TYPE(arg)->tp_name);
+ return 0;
+}
+
+static PyObject *
+cast(void *ptr, PyObject *src, PyObject *ctype)
+{
+ CDataObject *result;
+ if (0 == cast_check_pointertype(ctype))
+ return NULL;
+ result = (CDataObject *)PyObject_CallFunctionObjArgs(ctype, NULL);
+ if (result == NULL)
+ return NULL;
+
+ /*
+ The cast object's '_objects' member:
+
+ It must certainly contain the source object's '_objects'.
+ It must contain the source object itself.
+ */
+ if (CDataObject_Check(src)) {
+ CDataObject *obj = (CDataObject *)src;
+ CDataObject *container;
+
+ /* PyCData_GetContainer will initialize src.b_objects, we need
+ this so it can be shared */
+ container = PyCData_GetContainer(obj);
+ if (container == NULL)
+ goto failed;
+
+ /* But we need a dictionary! */
+ if (obj->b_objects == Py_None) {
+ Py_DECREF(Py_None);
+ obj->b_objects = PyDict_New();
+ if (obj->b_objects == NULL)
+ goto failed;
+ }
+ Py_XINCREF(obj->b_objects);
+ result->b_objects = obj->b_objects;
+ if (result->b_objects && PyDict_CheckExact(result->b_objects)) {
+ PyObject *index;
+ int rc;
+ index = PyLong_FromVoidPtr((void *)src);
+ if (index == NULL)
+ goto failed;
+ rc = PyDict_SetItem(result->b_objects, index, src);
+ Py_DECREF(index);
+ if (rc == -1)
+ goto failed;
+ }
+ }
+ /* Should we assert that result is a pointer type? */
+ memcpy(result->b_ptr, &ptr, sizeof(void *));
+ return (PyObject *)result;
+
+ failed:
+ Py_DECREF(result);
+ return NULL;
+}
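+
+/* A minimal Python-level sketch of the sharing described above, via the
+   public ctypes.cast() wrapper that ends up calling this function:
+
+       from ctypes import cast, POINTER, c_int, c_byte
+
+       buf = (c_int * 4)(1, 2, 3, 4)
+       p = cast(buf, POINTER(c_byte))   # p shares buf's _objects, so buf
+       print(p[0])                      # stays alive as long as p does
+*/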
+
+#ifdef CTYPES_UNICODE
+static PyObject *
+wstring_at(const wchar_t *ptr, int size)
+{
+ Py_ssize_t ssize = size;
+ if (ssize == -1)
+ ssize = wcslen(ptr);
+ return PyUnicode_FromWideChar(ptr, ssize);
+}
+#endif
+
+
+static struct PyModuleDef _ctypesmodule = {
+ PyModuleDef_HEAD_INIT,
+ "_ctypes",
+ module_docs,
+ -1,
+ _ctypes_module_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit__ctypes(void)
+{
+ PyObject *m;
+
+/* Note:
+ ob_type is the metatype (the 'type'), defaults to PyType_Type,
+ tp_base is the base type, defaults to 'object' aka PyBaseObject_Type.
+*/
+#ifdef WITH_THREAD
+ PyEval_InitThreads();
+#endif
+ m = PyModule_Create(&_ctypesmodule);
+ if (!m)
+ return NULL;
+
+ _ctypes_ptrtype_cache = PyDict_New();
+ if (_ctypes_ptrtype_cache == NULL)
+ return NULL;
+
+ PyModule_AddObject(m, "_pointer_type_cache", (PyObject *)_ctypes_ptrtype_cache);
+
+ _unpickle = PyObject_GetAttrString(m, "_unpickle");
+ if (_unpickle == NULL)
+ return NULL;
+
+ if (PyType_Ready(&PyCArg_Type) < 0)
+ return NULL;
+
+ if (PyType_Ready(&PyCThunk_Type) < 0)
+ return NULL;
+
+ /* StgDict is derived from PyDict_Type */
+ PyCStgDict_Type.tp_base = &PyDict_Type;
+ if (PyType_Ready(&PyCStgDict_Type) < 0)
+ return NULL;
+
+ /*************************************************
+ *
+ * Metaclasses
+ */
+
+ PyCStructType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&PyCStructType_Type) < 0)
+ return NULL;
+
+ UnionType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&UnionType_Type) < 0)
+ return NULL;
+
+ PyCPointerType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&PyCPointerType_Type) < 0)
+ return NULL;
+
+ PyCArrayType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&PyCArrayType_Type) < 0)
+ return NULL;
+
+ PyCSimpleType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&PyCSimpleType_Type) < 0)
+ return NULL;
+
+ PyCFuncPtrType_Type.tp_base = &PyType_Type;
+ if (PyType_Ready(&PyCFuncPtrType_Type) < 0)
+ return NULL;
+
+ /*************************************************
+ *
+ * Classes using a custom metaclass
+ */
+
+ if (PyType_Ready(&PyCData_Type) < 0)
+ return NULL;
+
+ Py_TYPE(&Struct_Type) = &PyCStructType_Type;
+ Struct_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&Struct_Type) < 0)
+ return NULL;
+ Py_INCREF(&Struct_Type);
+ PyModule_AddObject(m, "Structure", (PyObject *)&Struct_Type);
+
+ Py_TYPE(&Union_Type) = &UnionType_Type;
+ Union_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&Union_Type) < 0)
+ return NULL;
+ Py_INCREF(&Union_Type);
+ PyModule_AddObject(m, "Union", (PyObject *)&Union_Type);
+
+ Py_TYPE(&PyCPointer_Type) = &PyCPointerType_Type;
+ PyCPointer_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&PyCPointer_Type) < 0)
+ return NULL;
+ Py_INCREF(&PyCPointer_Type);
+ PyModule_AddObject(m, "_Pointer", (PyObject *)&PyCPointer_Type);
+
+ Py_TYPE(&PyCArray_Type) = &PyCArrayType_Type;
+ PyCArray_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&PyCArray_Type) < 0)
+ return NULL;
+ Py_INCREF(&PyCArray_Type);
+ PyModule_AddObject(m, "Array", (PyObject *)&PyCArray_Type);
+
+ Py_TYPE(&Simple_Type) = &PyCSimpleType_Type;
+ Simple_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&Simple_Type) < 0)
+ return NULL;
+ Py_INCREF(&Simple_Type);
+ PyModule_AddObject(m, "_SimpleCData", (PyObject *)&Simple_Type);
+
+ Py_TYPE(&PyCFuncPtr_Type) = &PyCFuncPtrType_Type;
+ PyCFuncPtr_Type.tp_base = &PyCData_Type;
+ if (PyType_Ready(&PyCFuncPtr_Type) < 0)
+ return NULL;
+ Py_INCREF(&PyCFuncPtr_Type);
+ PyModule_AddObject(m, "CFuncPtr", (PyObject *)&PyCFuncPtr_Type);
+
+ /*************************************************
+ *
+ * Simple classes
+ */
+
+ /* PyCField_Type is derived from PyBaseObject_Type */
+ if (PyType_Ready(&PyCField_Type) < 0)
+ return NULL;
+
+ /*************************************************
+ *
+ * Other stuff
+ */
+
+ DictRemover_Type.tp_new = PyType_GenericNew;
+ if (PyType_Ready(&DictRemover_Type) < 0)
+ return NULL;
+
+#ifdef MS_WIN32
+ if (create_comerror() < 0)
+ return NULL;
+ PyModule_AddObject(m, "COMError", ComError);
+
+ PyModule_AddObject(m, "FUNCFLAG_HRESULT", PyLong_FromLong(FUNCFLAG_HRESULT));
+ PyModule_AddObject(m, "FUNCFLAG_STDCALL", PyLong_FromLong(FUNCFLAG_STDCALL));
+#endif
+ PyModule_AddObject(m, "FUNCFLAG_CDECL", PyLong_FromLong(FUNCFLAG_CDECL));
+ PyModule_AddObject(m, "FUNCFLAG_USE_ERRNO", PyLong_FromLong(FUNCFLAG_USE_ERRNO));
+ PyModule_AddObject(m, "FUNCFLAG_USE_LASTERROR", PyLong_FromLong(FUNCFLAG_USE_LASTERROR));
+ PyModule_AddObject(m, "FUNCFLAG_PYTHONAPI", PyLong_FromLong(FUNCFLAG_PYTHONAPI));
+ PyModule_AddStringConstant(m, "__version__", "1.1.0");
+
+ PyModule_AddObject(m, "_memmove_addr", PyLong_FromVoidPtr(memmove));
+ PyModule_AddObject(m, "_memset_addr", PyLong_FromVoidPtr(memset));
+ PyModule_AddObject(m, "_string_at_addr", PyLong_FromVoidPtr(string_at));
+ PyModule_AddObject(m, "_cast_addr", PyLong_FromVoidPtr(cast));
+#ifdef CTYPES_UNICODE
+ PyModule_AddObject(m, "_wstring_at_addr", PyLong_FromVoidPtr(wstring_at));
+#endif
+
+/* If RTLD_LOCAL is not defined (Windows!), set it to zero. */
+#if !HAVE_DECL_RTLD_LOCAL
+#define RTLD_LOCAL 0
+#endif
+
+/* If RTLD_GLOBAL is not defined (cygwin), set it to the same value as
+ RTLD_LOCAL.
+*/
+#if !HAVE_DECL_RTLD_GLOBAL
+#define RTLD_GLOBAL RTLD_LOCAL
+#endif
+
+ PyModule_AddObject(m, "RTLD_LOCAL", PyLong_FromLong(RTLD_LOCAL));
+ PyModule_AddObject(m, "RTLD_GLOBAL", PyLong_FromLong(RTLD_GLOBAL));
+
+ PyExc_ArgError = PyErr_NewException("ctypes.ArgumentError", NULL, NULL);
+ if (PyExc_ArgError) {
+ Py_INCREF(PyExc_ArgError);
+ PyModule_AddObject(m, "ArgumentError", PyExc_ArgError);
+ }
+ return m;
+}
+
+/*
+ Local Variables:
+ compile-command: "cd .. && python setup.py -q build -g && python setup.py -q build install --home ~"
+ End:
+*/
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
new file mode 100644
index 00000000..1d041da2
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
@@ -0,0 +1,1871 @@
+/*
+ * History: First version dated from 3/97, derived from my SCMLIB version
+ * for win16.
+ */
+/*
+ * Related Work:
+ * - calldll http://www.nightmare.com/software.html
+ * - libffi http://sourceware.cygnus.com/libffi/
+ * - ffcall http://clisp.cons.org/~haible/packages-ffcall.html
+ * and, of course, Don Beaudry's MESS package, but this is more ctypes
+ * related.
+ */
+
+
+/*
+ How are functions called, and how are parameters converted to C ?
+
+ 1. _ctypes.c::PyCFuncPtr_call receives an argument tuple 'inargs' and a
+ keyword dictionary 'kwds'.
+
+ 2. After several checks, _build_callargs() is called which returns another
+ tuple 'callargs'. This may be the same tuple as 'inargs', a slice of
+ 'inargs', or a completely fresh tuple, depending on several things (is it a
+ COM method?, are 'paramflags' available?).
+
+ 3. _build_callargs also calculates bitarrays containing indexes into
+ the callargs tuple, specifying how to build the return value(s) of
+ the function.
+
+ 4. _ctypes_callproc is then called with the 'callargs' tuple. _ctypes_callproc first
+ allocates two arrays. The first is an array of 'struct argument' items, the
+ second array has 'void *' entries.
+
+ 5. If 'converters' are present (converters is a sequence of argtypes'
+ from_param methods), the converter is called for each item in 'callargs'
+ and the result is passed to ConvParam. If 'converters' are not present,
+ each argument is passed directly to ConvParam.
+
+ 6. For each arg, ConvParam stores the contained C data (or a pointer to it,
+ for structures) into the 'struct argument' array.
+
+ 7. Finally, a loop fills the 'void *' array so that each item points to the
+ data contained in or pointed to by the 'struct argument' array.
+
+ 8. The 'void *' argument array is what _call_function_pointer
+ expects. _call_function_pointer then has very little to do - only some
+ libffi specific stuff, then it calls ffi_call.
+
+ So, there are 4 data structures holding processed arguments:
+ - the inargs tuple (in PyCFuncPtr_call)
+ - the callargs tuple (in PyCFuncPtr_call)
+ - the 'struct arguments' array
+ - the 'void *' array
+
+ */
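+
+/* A minimal Python-level sketch of the path described above, assuming a
+   POSIX libc reachable as "libc.so.6":
+
+       import ctypes
+       libc = ctypes.CDLL("libc.so.6")
+       libc.strlen.argtypes = [ctypes.c_char_p]   # from_param converters (step 5)
+       libc.strlen.restype = ctypes.c_size_t
+       n = libc.strlen(b"hello")                  # inargs -> callargs -> ffi_call
+*/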
+
+#include "Python.h"
+#include "structmember.h"
+
+#ifdef MS_WIN32
+#include <windows.h>
+#include <tchar.h>
+#else
+#include "ctypes_dlfcn.h"
+#endif
+
+#ifdef MS_WIN32
+#include <malloc.h>
+#endif
+
+#include <ffi.h>
+#include "ctypes.h"
+#ifdef HAVE_ALLOCA_H
+/* AIX needs alloca.h for alloca() */
+#include <alloca.h>
+#endif
+
+#ifdef _Py_MEMORY_SANITIZER
+#include <sanitizer/msan_interface.h>
+#endif
+
+#if defined(_DEBUG) || defined(__MINGW32__)
+/* Don't use structured exception handling on Windows if this is defined.
+ MinGW, AFAIK, doesn't support it.
+*/
+#define DONT_USE_SEH
+#endif
+
+#define CTYPES_CAPSULE_NAME_PYMEM "_ctypes pymem"
+
+static void pymem_destructor(PyObject *ptr)
+{
+ void *p = PyCapsule_GetPointer(ptr, CTYPES_CAPSULE_NAME_PYMEM);
+ if (p) {
+ PyMem_Free(p);
+ }
+}
+
+/*
+ ctypes maintains thread-local storage that has space for two error numbers:
+ private copies of the system 'errno' value and, on Windows, the system error code
+ accessed by the GetLastError() and SetLastError() api functions.
+
+ Foreign functions created with CDLL(..., use_errno=True), when called, swap
+ the system 'errno' value with the private copy just before the actual
+ function call, and swap them back immediately afterwards. The 'use_errno'
+ parameter defaults to False; in that case 'ctypes_errno' is not touched.
+
+ On Windows, foreign functions created with CDLL(..., use_last_error=True) or
+ WinDLL(..., use_last_error=True) swap the system LastError value with the
+ ctypes private copy.
+
+ The values are also swapped immediately before and after ctypes callback
+ functions are called, if the callbacks are constructed using the new
+ optional use_errno parameter set to True: CFUNCTYPE(..., use_errno=True) or
+ WINFUNCTYPE(..., use_errno=True).
+
+ New ctypes functions are provided to access the ctypes private copies from
+ Python:
+
+ - ctypes.set_errno(value) and ctypes.set_last_error(value) store 'value' in
+ the private copy and return the previous value.
+
+ - ctypes.get_errno() and ctypes.get_last_error() return the current value of
+ the ctypes private copy.
+*/
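+
+/* A minimal Python-level sketch of the use_errno machinery, assuming a POSIX
+   libc reachable as "libc.so.6":
+
+       import ctypes, os
+       libc = ctypes.CDLL("libc.so.6", use_errno=True)
+
+       if libc.close(-1) == -1:            # EBADF inside the C library
+           err = ctypes.get_errno()        # read the private copy
+           print(os.strerror(err))         # e.g. 'Bad file descriptor'
+*/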
+
+/*
+ This function creates and returns a thread-local Python object that has
+ space to store two integer error numbers; once created the Python object is
+ kept alive in the thread state dictionary as long as the thread itself.
+*/
+PyObject *
+_ctypes_get_errobj(int **pspace)
+{
+ PyObject *dict = PyThreadState_GetDict();
+ PyObject *errobj;
+ static PyObject *error_object_name;
+ if (dict == 0) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "cannot get thread state");
+ return NULL;
+ }
+ if (error_object_name == NULL) {
+ error_object_name = PyUnicode_InternFromString("ctypes.error_object");
+ if (error_object_name == NULL)
+ return NULL;
+ }
+ errobj = PyDict_GetItem(dict, error_object_name);
+ if (errobj) {
+ if (!PyCapsule_IsValid(errobj, CTYPES_CAPSULE_NAME_PYMEM)) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "ctypes.error_object is an invalid capsule");
+ return NULL;
+ }
+ Py_INCREF(errobj);
+ }
+ else {
+ void *space = PyMem_Malloc(sizeof(int) * 2);
+ if (space == NULL)
+ return NULL;
+ memset(space, 0, sizeof(int) * 2);
+ errobj = PyCapsule_New(space, CTYPES_CAPSULE_NAME_PYMEM, pymem_destructor);
+ if (errobj == NULL) {
+ PyMem_Free(space);
+ return NULL;
+ }
+ if (-1 == PyDict_SetItem(dict, error_object_name,
+ errobj)) {
+ Py_DECREF(errobj);
+ return NULL;
+ }
+ }
+ *pspace = (int *)PyCapsule_GetPointer(errobj, CTYPES_CAPSULE_NAME_PYMEM);
+ return errobj;
+}
+
+static PyObject *
+get_error_internal(PyObject *self, PyObject *args, int index)
+{
+ int *space;
+ PyObject *errobj = _ctypes_get_errobj(&space);
+ PyObject *result;
+
+ if (errobj == NULL)
+ return NULL;
+ result = PyLong_FromLong(space[index]);
+ Py_DECREF(errobj);
+ return result;
+}
+
+static PyObject *
+set_error_internal(PyObject *self, PyObject *args, int index)
+{
+ int new_errno, old_errno;
+ PyObject *errobj;
+ int *space;
+
+ if (!PyArg_ParseTuple(args, "i", &new_errno))
+ return NULL;
+ errobj = _ctypes_get_errobj(&space);
+ if (errobj == NULL)
+ return NULL;
+ old_errno = space[index];
+ space[index] = new_errno;
+ Py_DECREF(errobj);
+ return PyLong_FromLong(old_errno);
+}
+
+static PyObject *
+get_errno(PyObject *self, PyObject *args)
+{
+ return get_error_internal(self, args, 0);
+}
+
+static PyObject *
+set_errno(PyObject *self, PyObject *args)
+{
+ return set_error_internal(self, args, 0);
+}
+
+#ifdef MS_WIN32
+
+static PyObject *
+get_last_error(PyObject *self, PyObject *args)
+{
+ return get_error_internal(self, args, 1);
+}
+
+static PyObject *
+set_last_error(PyObject *self, PyObject *args)
+{
+ return set_error_internal(self, args, 1);
+}
+
+PyObject *ComError;
+
+static WCHAR *FormatError(DWORD code)
+{
+ WCHAR *lpMsgBuf;
+ DWORD n;
+ n = FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM,
+ NULL,
+ code,
+ MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), /* Default language */
+ (LPWSTR) &lpMsgBuf,
+ 0,
+ NULL);
+ if (n) {
+ while (iswspace(lpMsgBuf[n-1]))
+ --n;
+ lpMsgBuf[n] = L'\0'; /* rstrip() */
+ }
+ return lpMsgBuf;
+}
+
+#ifndef DONT_USE_SEH
+static void SetException(DWORD code, EXCEPTION_RECORD *pr)
+{
+ /* The 'code' is a normal win32 error code so it could be handled by
+ PyErr_SetFromWindowsErr(). However, for some errors, we have additional
+ information not included in the error code. We handle those here and
+ delegate all others to the generic function. */
+ switch (code) {
+ case EXCEPTION_ACCESS_VIOLATION:
+ /* The thread attempted to read from or write
+ to a virtual address for which it does not
+ have the appropriate access. */
+ if (pr->ExceptionInformation[0] == 0)
+ PyErr_Format(PyExc_OSError,
+ "exception: access violation reading %p",
+ pr->ExceptionInformation[1]);
+ else
+ PyErr_Format(PyExc_OSError,
+ "exception: access violation writing %p",
+ pr->ExceptionInformation[1]);
+ break;
+
+ case EXCEPTION_BREAKPOINT:
+ /* A breakpoint was encountered. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: breakpoint encountered");
+ break;
+
+ case EXCEPTION_DATATYPE_MISALIGNMENT:
+ /* The thread attempted to read or write data that is
+ misaligned on hardware that does not provide
+ alignment. For example, 16-bit values must be
+ aligned on 2-byte boundaries, 32-bit values on
+ 4-byte boundaries, and so on. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: datatype misalignment");
+ break;
+
+ case EXCEPTION_SINGLE_STEP:
+ /* A trace trap or other single-instruction mechanism
+ signaled that one instruction has been executed. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: single step");
+ break;
+
+ case EXCEPTION_ARRAY_BOUNDS_EXCEEDED:
+ /* The thread attempted to access an array element
+ that is out of bounds, and the underlying hardware
+ supports bounds checking. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: array bounds exceeded");
+ break;
+
+ case EXCEPTION_FLT_DENORMAL_OPERAND:
+ /* One of the operands in a floating-point operation
+ is denormal. A denormal value is one that is too
+ small to represent as a standard floating-point
+ value. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: floating-point operand denormal");
+ break;
+
+ case EXCEPTION_FLT_DIVIDE_BY_ZERO:
+ /* The thread attempted to divide a floating-point
+ value by a floating-point divisor of zero. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: float divide by zero");
+ break;
+
+ case EXCEPTION_FLT_INEXACT_RESULT:
+ /* The result of a floating-point operation cannot be
+ represented exactly as a decimal fraction. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: float inexact");
+ break;
+
+ case EXCEPTION_FLT_INVALID_OPERATION:
+ /* This exception represents any floating-point
+ exception not included in this list. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: float invalid operation");
+ break;
+
+ case EXCEPTION_FLT_OVERFLOW:
+ /* The exponent of a floating-point operation is
+ greater than the magnitude allowed by the
+ corresponding type. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: float overflow");
+ break;
+
+ case EXCEPTION_FLT_STACK_CHECK:
+ /* The stack overflowed or underflowed as the result
+ of a floating-point operation. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: stack over/underflow");
+ break;
+
+ case EXCEPTION_STACK_OVERFLOW:
+ /* The thread used up its stack. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: stack overflow");
+ break;
+
+ case EXCEPTION_FLT_UNDERFLOW:
+ /* The exponent of a floating-point operation is less
+ than the magnitude allowed by the corresponding
+ type. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: float underflow");
+ break;
+
+ case EXCEPTION_INT_DIVIDE_BY_ZERO:
+ /* The thread attempted to divide an integer value by
+ an integer divisor of zero. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: integer divide by zero");
+ break;
+
+ case EXCEPTION_INT_OVERFLOW:
+ /* The result of an integer operation caused a carry
+ out of the most significant bit of the result. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: integer overflow");
+ break;
+
+ case EXCEPTION_PRIV_INSTRUCTION:
+ /* The thread attempted to execute an instruction
+ whose operation is not allowed in the current
+ machine mode. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: privileged instruction");
+ break;
+
+ case EXCEPTION_NONCONTINUABLE_EXCEPTION:
+ /* The thread attempted to continue execution after a
+ noncontinuable exception occurred. */
+ PyErr_SetString(PyExc_OSError,
+ "exception: nocontinuable");
+ break;
+
+ default:
+ PyErr_SetFromWindowsErr(code);
+ break;
+ }
+}
+
+static DWORD HandleException(EXCEPTION_POINTERS *ptrs,
+ DWORD *pdw, EXCEPTION_RECORD *record)
+{
+ *pdw = ptrs->ExceptionRecord->ExceptionCode;
+ *record = *ptrs->ExceptionRecord;
+ /* We don't want to catch breakpoint exceptions, they are used to attach
+ * a debugger to the process.
+ */
+ if (*pdw == EXCEPTION_BREAKPOINT)
+ return EXCEPTION_CONTINUE_SEARCH;
+ return EXCEPTION_EXECUTE_HANDLER;
+}
+#endif
+
+static PyObject *
+check_hresult(PyObject *self, PyObject *args)
+{
+ HRESULT hr;
+ if (!PyArg_ParseTuple(args, "i", &hr))
+ return NULL;
+ if (FAILED(hr))
+ return PyErr_SetFromWindowsErr(hr);
+ return PyLong_FromLong(hr);
+}
+
+#endif
+
+/**************************************************************/
+
+PyCArgObject *
+PyCArgObject_new(void)
+{
+ PyCArgObject *p;
+ p = PyObject_New(PyCArgObject, &PyCArg_Type);
+ if (p == NULL)
+ return NULL;
+ p->pffi_type = NULL;
+ p->tag = '\0';
+ p->obj = NULL;
+ memset(&p->value, 0, sizeof(p->value));
+ return p;
+}
+
+static void
+PyCArg_dealloc(PyCArgObject *self)
+{
+ Py_XDECREF(self->obj);
+ PyObject_Del(self);
+}
+
+static int
+is_literal_char(unsigned char c)
+{
+ return c < 128 && _PyUnicode_IsPrintable(c) && c != '\\' && c != '\'';
+}
+
+static PyObject *
+PyCArg_repr(PyCArgObject *self)
+{
+ char buffer[256];
+ switch(self->tag) {
+ case 'b':
+ case 'B':
+ sprintf(buffer, "<cparam '%c' (%d)>",
+ self->tag, self->value.b);
+ break;
+ case 'h':
+ case 'H':
+ sprintf(buffer, "<cparam '%c' (%d)>",
+ self->tag, self->value.h);
+ break;
+ case 'i':
+ case 'I':
+ sprintf(buffer, "<cparam '%c' (%d)>",
+ self->tag, self->value.i);
+ break;
+ case 'l':
+ case 'L':
+ sprintf(buffer, "<cparam '%c' (%ld)>",
+ self->tag, self->value.l);
+ break;
+
+ case 'q':
+ case 'Q':
+ sprintf(buffer,
+#ifdef MS_WIN32
+ "<cparam '%c' (%I64d)>",
+#else
+ "<cparam '%c' (%lld)>",
+#endif
+ self->tag, self->value.q);
+ break;
+ case 'd':
+ sprintf(buffer, "<cparam '%c' (%f)>",
+ self->tag, self->value.d);
+ break;
+ case 'f':
+ sprintf(buffer, "<cparam '%c' (%f)>",
+ self->tag, self->value.f);
+ break;
+
+ case 'c':
+ if (is_literal_char((unsigned char)self->value.c)) {
+ sprintf(buffer, "<cparam '%c' ('%c')>",
+ self->tag, self->value.c);
+ }
+ else {
+ sprintf(buffer, "<cparam '%c' ('\\x%02x')>",
+ self->tag, (unsigned char)self->value.c);
+ }
+ break;
+
+/* Hm, are these 'z' and 'Z' codes useful at all?
+ Shouldn't they be replaced by the functionality of c_string
+ and c_wstring ?
+*/
+ case 'z':
+ case 'Z':
+ case 'P':
+ sprintf(buffer, "<cparam '%c' (%p)>",
+ self->tag, self->value.p);
+ break;
+
+ default:
+ if (is_literal_char((unsigned char)self->tag)) {
+ sprintf(buffer, "<cparam '%c' at %p>",
+ (unsigned char)self->tag, self);
+ }
+ else {
+ sprintf(buffer, "<cparam 0x%02x at %p>",
+ (unsigned char)self->tag, self);
+ }
+ break;
+ }
+ return PyUnicode_FromString(buffer);
+}
+
+static PyMemberDef PyCArgType_members[] = {
+ { "_obj", T_OBJECT,
+ offsetof(PyCArgObject, obj), READONLY,
+ "the wrapped object" },
+ { NULL },
+};
+
+PyTypeObject PyCArg_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "CArgObject",
+ sizeof(PyCArgObject),
+ 0,
+ (destructor)PyCArg_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)PyCArg_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /* tp_flags */
+ 0, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ PyCArgType_members, /* tp_members */
+};
+
+/****************************************************************/
+/*
+ * Convert a PyObject * into a parameter suitable to pass to an
+ * C function call.
+ *
+ * 1. Python integers are converted to C int and passed by value.
+ * Py_None is converted to a C NULL pointer.
+ *
+ * 2. 3-tuples are expected to have a format character in the first
+ * item, which must be 'i', 'f', 'd', 'q', or 'P'.
+ * The second item must be an integer, float, double, long long, or an
+ * integer denoting an address (void *); it will be converted to the
+ * corresponding C data type and passed by value.
+ *
+ * 3. Other Python objects are tested for an '_as_parameter_' attribute.
+ * The value of this attribute must be an integer which will be passed
+ * by value, or a 2-tuple or 3-tuple which will be used according
+ * to point 2 above. The third item, if any, is ignored; it is normally
+ * used to keep alive the object that this parameter refers to.
+ * XXX This convention is dangerous - you can construct arbitrary tuples
+ * in Python and pass them. Would it be safer to use a custom container
+ * datatype instead of a tuple?
+ *
+ * 4. Other Python objects cannot be passed as parameters - an exception is raised.
+ *
+ * 5. ConvParam will store the converted result in a struct containing format
+ * and value.
+ */
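+
+/* A minimal Python-level sketch of rule 3 above, assuming a POSIX libc
+   reachable as "libc.so.6":
+
+       import ctypes, os
+
+       class Wrapped:
+           def __init__(self, fd):
+               self._as_parameter_ = fd    # integer exposed to ConvParam
+
+       libc = ctypes.CDLL("libc.so.6")
+       fd = os.open(os.devnull, os.O_RDONLY)
+       libc.close(Wrapped(fd))             # passed as a plain C int
+*/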
+
+union result {
+ char c;
+ char b;
+ short h;
+ int i;
+ long l;
+ long long q;
+ long double D;
+ double d;
+ float f;
+ void *p;
+};
+
+struct argument {
+ ffi_type *ffi_type;
+ PyObject *keep;
+ union result value;
+};
+
+/*
+ * Convert a single Python object and store the result in the 'struct
+ * argument' pointed to by 'pa'. Returns 0 on success, -1 on error.
+ */
+static int ConvParam(PyObject *obj, Py_ssize_t index, struct argument *pa)
+{
+ StgDictObject *dict;
+ pa->keep = NULL; /* so we cannot forget it later */
+
+ dict = PyObject_stgdict(obj);
+ if (dict) {
+ PyCArgObject *carg;
+ assert(dict->paramfunc);
+ /* If it has an stgdict, it is a CDataObject */
+ carg = dict->paramfunc((CDataObject *)obj);
+ if (carg == NULL)
+ return -1;
+ pa->ffi_type = carg->pffi_type;
+ memcpy(&pa->value, &carg->value, sizeof(pa->value));
+ pa->keep = (PyObject *)carg;
+ return 0;
+ }
+
+ if (PyCArg_CheckExact(obj)) {
+ PyCArgObject *carg = (PyCArgObject *)obj;
+ pa->ffi_type = carg->pffi_type;
+ Py_INCREF(obj);
+ pa->keep = obj;
+ memcpy(&pa->value, &carg->value, sizeof(pa->value));
+ return 0;
+ }
+
+ /* check for None, integer, string or unicode and use directly if successful */
+ if (obj == Py_None) {
+ pa->ffi_type = &ffi_type_pointer;
+ pa->value.p = NULL;
+ return 0;
+ }
+
+ if (PyLong_Check(obj)) {
+ pa->ffi_type = &ffi_type_sint;
+ pa->value.i = (long)PyLong_AsUnsignedLong(obj);
+ if (pa->value.i == -1 && PyErr_Occurred()) {
+ PyErr_Clear();
+ pa->value.i = PyLong_AsLong(obj);
+ if (pa->value.i == -1 && PyErr_Occurred()) {
+ PyErr_SetString(PyExc_OverflowError,
+ "int too long to convert");
+ return -1;
+ }
+ }
+ return 0;
+ }
+
+ if (PyBytes_Check(obj)) {
+ pa->ffi_type = &ffi_type_pointer;
+ pa->value.p = PyBytes_AsString(obj);
+ Py_INCREF(obj);
+ pa->keep = obj;
+ return 0;
+ }
+
+#ifdef CTYPES_UNICODE
+ if (PyUnicode_Check(obj)) {
+ pa->ffi_type = &ffi_type_pointer;
+ pa->value.p = _PyUnicode_AsWideCharString(obj);
+ if (pa->value.p == NULL)
+ return -1;
+ pa->keep = PyCapsule_New(pa->value.p, CTYPES_CAPSULE_NAME_PYMEM, pymem_destructor);
+ if (!pa->keep) {
+ PyMem_Free(pa->value.p);
+ return -1;
+ }
+ return 0;
+ }
+#endif
+
+ {
+ PyObject *arg;
+ arg = PyObject_GetAttrString(obj, "_as_parameter_");
+ /* Which types should we exactly allow here?
+ integers are required for using Python classes
+ as parameters (they have to expose the '_as_parameter_'
+ attribute)
+ */
+ if (arg) {
+ int result;
+ result = ConvParam(arg, index, pa);
+ Py_DECREF(arg);
+ return result;
+ }
+ PyErr_Format(PyExc_TypeError,
+ "Don't know how to convert parameter %d",
+ Py_SAFE_DOWNCAST(index, Py_ssize_t, int));
+ return -1;
+ }
+}
+
+
+ffi_type *_ctypes_get_ffi_type(PyObject *obj)
+{
+ StgDictObject *dict;
+ if (obj == NULL)
+ return &ffi_type_sint;
+ dict = PyType_stgdict(obj);
+ if (dict == NULL)
+ return &ffi_type_sint;
+#if defined(MS_WIN32) && !defined(_WIN32_WCE)
+ /* This little trick works correctly with MSVC.
+ It returns small structures in registers
+ */
+ if (dict->ffi_type_pointer.type == FFI_TYPE_STRUCT) {
+ if (can_return_struct_as_int(dict->ffi_type_pointer.size))
+ return &ffi_type_sint32;
+ else if (can_return_struct_as_sint64 (dict->ffi_type_pointer.size))
+ return &ffi_type_sint64;
+ }
+#endif
+ return &dict->ffi_type_pointer;
+}
+
+
+/*
+ * libffi uses:
+ *
+ * ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi,
+ * unsigned int nargs,
+ * ffi_type *rtype,
+ * ffi_type **atypes);
+ *
+ * and then
+ *
+ * void ffi_call(ffi_cif *cif, void *fn, void *rvalue, void **avalues);
+ */
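+/*
+ * Minimal usage sketch (illustrative; assumes a target function
+ * 'int add(int, int)' and the msvc libffi bundled with this port):
+ *
+ *     ffi_cif cif;
+ *     ffi_type *atypes[2] = { &ffi_type_sint, &ffi_type_sint };
+ *     int a = 2, b = 3;
+ *     ffi_arg result;
+ *     void *avalues[2] = { &a, &b };
+ *
+ *     if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, atypes) == FFI_OK)
+ *         ffi_call(&cif, (void *)add, &result, avalues);   (result now holds 5)
+ *
+ * _call_function_pointer below wraps this sequence and additionally releases
+ * the GIL, swaps errno/GetLastError around the call, and traps SEH exceptions
+ * on Windows.
+ */
+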
+static int _call_function_pointer(int flags,
+ PPROC pProc,
+ void **avalues,
+ ffi_type **atypes,
+ ffi_type *restype,
+ void *resmem,
+ int argcount)
+{
+#ifdef WITH_THREAD
+ PyThreadState *_save = NULL; /* For Py_BLOCK_THREADS and Py_UNBLOCK_THREADS */
+#endif
+ PyObject *error_object = NULL;
+ int *space;
+ ffi_cif cif;
+ int cc;
+#ifdef MS_WIN32
+ int delta;
+#ifndef DONT_USE_SEH
+ DWORD dwExceptionCode = 0;
+ EXCEPTION_RECORD record;
+#endif
+#endif
+ /* XXX check before here */
+ if (restype == NULL) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "No ffi_type for result");
+ return -1;
+ }
+
+ cc = FFI_DEFAULT_ABI;
+#if defined(MS_WIN32) && !defined(MS_WIN64) && !defined(_WIN32_WCE)
+ if ((flags & FUNCFLAG_CDECL) == 0)
+ cc = FFI_STDCALL;
+#endif
+ if (FFI_OK != ffi_prep_cif(&cif,
+ cc,
+ argcount,
+ restype,
+ atypes)) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "ffi_prep_cif failed");
+ return -1;
+ }
+
+ if (flags & (FUNCFLAG_USE_ERRNO | FUNCFLAG_USE_LASTERROR)) {
+ error_object = _ctypes_get_errobj(&space);
+ if (error_object == NULL)
+ return -1;
+ }
+#ifdef WITH_THREAD
+ if ((flags & FUNCFLAG_PYTHONAPI) == 0)
+ Py_UNBLOCK_THREADS
+#endif
+ if (flags & FUNCFLAG_USE_ERRNO) {
+ int temp = space[0];
+ space[0] = errno;
+ errno = temp;
+ }
+#ifdef MS_WIN32
+ if (flags & FUNCFLAG_USE_LASTERROR) {
+ int temp = space[1];
+ space[1] = GetLastError();
+ SetLastError(temp);
+ }
+#ifndef DONT_USE_SEH
+ __try {
+#endif
+ delta =
+#endif
+ ffi_call(&cif, (void *)pProc, resmem, avalues);
+#ifdef MS_WIN32
+#ifndef DONT_USE_SEH
+ }
+ __except (HandleException(GetExceptionInformation(),
+ &dwExceptionCode, &record)) {
+ ;
+ }
+#endif
+ if (flags & FUNCFLAG_USE_LASTERROR) {
+ int temp = space[1];
+ space[1] = GetLastError();
+ SetLastError(temp);
+ }
+#endif
+ if (flags & FUNCFLAG_USE_ERRNO) {
+ int temp = space[0];
+ space[0] = errno;
+ errno = temp;
+ }
+#ifdef WITH_THREAD
+ if ((flags & FUNCFLAG_PYTHONAPI) == 0)
+ Py_BLOCK_THREADS
+#endif
+ Py_XDECREF(error_object);
+#ifdef MS_WIN32
+#ifndef DONT_USE_SEH
+ if (dwExceptionCode) {
+ SetException(dwExceptionCode, &record);
+ return -1;
+ }
+#endif
+#ifdef MS_WIN64
+ if (delta != 0) {
+ PyErr_Format(PyExc_RuntimeError,
+ "ffi_call failed with code %d",
+ delta);
+ return -1;
+ }
+#else
+ if (delta < 0) {
+ if (flags & FUNCFLAG_CDECL)
+ PyErr_Format(PyExc_ValueError,
+ "Procedure called with not enough "
+ "arguments (%d bytes missing) "
+ "or wrong calling convention",
+ -delta);
+ else
+ PyErr_Format(PyExc_ValueError,
+ "Procedure probably called with not enough "
+ "arguments (%d bytes missing)",
+ -delta);
+ return -1;
+ } else if (delta > 0) {
+ PyErr_Format(PyExc_ValueError,
+ "Procedure probably called with too many "
+ "arguments (%d bytes in excess)",
+ delta);
+ return -1;
+ }
+#endif
+#endif
+ if ((flags & FUNCFLAG_PYTHONAPI) && PyErr_Occurred())
+ return -1;
+ return 0;
+}
+
+/*
+ * Convert the C value in result into a Python object, depending on restype.
+ *
+ * - If restype is NULL, return a Python integer.
+ * - If restype is None, return None.
+ * - If restype is a simple ctypes type (c_int, c_void_p), call the type's getfunc,
+ * pass the result to checker and return the result.
+ * - If restype is another ctypes type, return an instance of that.
+ * - Otherwise, call restype and return the result.
+ */
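+/*
+ * Illustrative examples of the rules above (ctypes-level view):
+ *   restype = None        -> the C result is ignored and None is returned
+ *   restype = c_int       -> the simple type's getfunc builds a Python int
+ *                            from the raw result bytes
+ *   restype = POINTER(T)  -> an instance of that pointer type is constructed
+ *                            from the result buffer
+ *   restype = callable    -> the callable is invoked with the result read as
+ *                            a C int and its return value is used
+ */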
+static PyObject *GetResult(PyObject *restype, void *result, PyObject *checker)
+{
+ StgDictObject *dict;
+ PyObject *retval, *v;
+
+ if (restype == NULL)
+ return PyLong_FromLong(*(int *)result);
+
+ if (restype == Py_None) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+
+ dict = PyType_stgdict(restype);
+ if (dict == NULL)
+ return PyObject_CallFunction(restype, "i", *(int *)result);
+
+ if (dict->getfunc && !_ctypes_simple_instance(restype)) {
+ retval = dict->getfunc(result, dict->size);
+ /* If restype is py_object (detected by comparing getfunc with
+ O_get), we have to call Py_DECREF because O_get has already
+ called Py_INCREF.
+ */
+ if (dict->getfunc == _ctypes_get_fielddesc("O")->getfunc) {
+ Py_DECREF(retval);
+ }
+ } else
+ retval = PyCData_FromBaseObj(restype, NULL, 0, result);
+
+ if (!checker || !retval)
+ return retval;
+
+ v = PyObject_CallFunctionObjArgs(checker, retval, NULL);
+ if (v == NULL)
+ _PyTraceback_Add("GetResult", "_ctypes/callproc.c", __LINE__-2);
+ Py_DECREF(retval);
+ return v;
+}
+
+/*
+ * Raise a new exception 'exc_class', adding additional text to the original
+ * exception string.
+ */
+void _ctypes_extend_error(PyObject *exc_class, const char *fmt, ...)
+{
+ va_list vargs;
+ PyObject *tp, *v, *tb, *s, *cls_str, *msg_str;
+
+ va_start(vargs, fmt);
+ s = PyUnicode_FromFormatV(fmt, vargs);
+ va_end(vargs);
+ if (!s)
+ return;
+
+ PyErr_Fetch(&tp, &v, &tb);
+ PyErr_NormalizeException(&tp, &v, &tb);
+ cls_str = PyObject_Str(tp);
+ if (cls_str) {
+ PyUnicode_AppendAndDel(&s, cls_str);
+ PyUnicode_AppendAndDel(&s, PyUnicode_FromString(": "));
+ if (s == NULL)
+ goto error;
+ } else
+ PyErr_Clear();
+ msg_str = PyObject_Str(v);
+ if (msg_str)
+ PyUnicode_AppendAndDel(&s, msg_str);
+ else {
+ PyErr_Clear();
+ PyUnicode_AppendAndDel(&s, PyUnicode_FromString("???"));
+ }
+ if (s == NULL)
+ goto error;
+ PyErr_SetObject(exc_class, s);
+error:
+ Py_XDECREF(tp);
+ Py_XDECREF(v);
+ Py_XDECREF(tb);
+ Py_XDECREF(s);
+}
+
+
+#ifdef MS_WIN32
+
+static PyObject *
+GetComError(HRESULT errcode, GUID *riid, IUnknown *pIunk)
+{
+ HRESULT hr;
+ ISupportErrorInfo *psei = NULL;
+ IErrorInfo *pei = NULL;
+ BSTR descr=NULL, helpfile=NULL, source=NULL;
+ GUID guid;
+ DWORD helpcontext=0;
+ LPOLESTR progid;
+ PyObject *obj;
+ LPOLESTR text;
+
+ /* We absolutely have to release the GIL during COM method calls,
+ otherwise we may get a deadlock!
+ */
+#ifdef WITH_THREAD
+ Py_BEGIN_ALLOW_THREADS
+#endif
+
+ hr = pIunk->lpVtbl->QueryInterface(pIunk, &IID_ISupportErrorInfo, (void **)&psei);
+ if (FAILED(hr))
+ goto failed;
+
+ hr = psei->lpVtbl->InterfaceSupportsErrorInfo(psei, riid);
+ psei->lpVtbl->Release(psei);
+ if (FAILED(hr))
+ goto failed;
+
+ hr = GetErrorInfo(0, &pei);
+ if (hr != S_OK)
+ goto failed;
+
+ pei->lpVtbl->GetDescription(pei, &descr);
+ pei->lpVtbl->GetGUID(pei, &guid);
+ pei->lpVtbl->GetHelpContext(pei, &helpcontext);
+ pei->lpVtbl->GetHelpFile(pei, &helpfile);
+ pei->lpVtbl->GetSource(pei, &source);
+
+ pei->lpVtbl->Release(pei);
+
+ failed:
+#ifdef WITH_THREAD
+ Py_END_ALLOW_THREADS
+#endif
+
+ progid = NULL;
+ ProgIDFromCLSID(&guid, &progid);
+
+ text = FormatError(errcode);
+ obj = Py_BuildValue(
+ "iu(uuuiu)",
+ errcode,
+ text,
+ descr, source, helpfile, helpcontext,
+ progid);
+ if (obj) {
+ PyErr_SetObject(ComError, obj);
+ Py_DECREF(obj);
+ }
+ LocalFree(text);
+
+ if (descr)
+ SysFreeString(descr);
+ if (helpfile)
+ SysFreeString(helpfile);
+ if (source)
+ SysFreeString(source);
+
+ return NULL;
+}
+#endif
+
+#if (defined(__x86_64__) && (defined(__MINGW64__) || defined(__CYGWIN__))) || \
+ defined(__aarch64__)
+#define CTYPES_PASS_BY_REF_HACK
+#define POW2(x) (((x & ~(x - 1)) == x) ? x : 0)
+#define IS_PASS_BY_REF(x) (x > 8 || !POW2(x))
+#endif
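+
+/*
+ * Illustrative effect of the macros above: POW2(4) == 4 but POW2(12) == 0,
+ * so IS_PASS_BY_REF is false for 1/2/4/8-byte values and true for anything
+ * larger than 8 bytes or with a non-power-of-two size (e.g. a 3-byte or
+ * 12-byte struct); _ctypes_callproc below first copies such a value into a
+ * temporary buffer before handing it to libffi.
+ */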
+
+/*
+ * Requirements, must be ensured by the caller:
+ * - argtuple is tuple of arguments
+ * - argtypes is either NULL, or a tuple of the same size as argtuple
+ *
+ * - XXX various requirements for restype, not yet collected
+ */
+PyObject *_ctypes_callproc(PPROC pProc,
+ PyObject *argtuple,
+#ifdef MS_WIN32
+ IUnknown *pIunk,
+ GUID *iid,
+#endif
+ int flags,
+ PyObject *argtypes, /* misleading name: This is a tuple of
+ methods, not types: the .from_param
+ class methods of the types */
+ PyObject *restype,
+ PyObject *checker)
+{
+ Py_ssize_t i, n, argcount, argtype_count;
+ void *resbuf;
+ struct argument *args, *pa;
+ ffi_type **atypes;
+ ffi_type *rtype;
+ void **avalues;
+ PyObject *retval = NULL;
+
+ n = argcount = PyTuple_GET_SIZE(argtuple);
+#ifdef MS_WIN32
+ /* an optional COM object this pointer */
+ if (pIunk)
+ ++argcount;
+#endif
+
+#ifdef UEFI_C_SOURCE
+ args = (struct argument *)malloc(sizeof(struct argument) * argcount);
+#else
+ args = (struct argument *)alloca(sizeof(struct argument) * argcount);
+#endif // UEFI_C_SOURCE
+ if (!args) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ memset(args, 0, sizeof(struct argument) * argcount);
+ argtype_count = argtypes ? PyTuple_GET_SIZE(argtypes) : 0;
+#ifdef MS_WIN32
+ if (pIunk) {
+ args[0].ffi_type = &ffi_type_pointer;
+ args[0].value.p = pIunk;
+ pa = &args[1];
+ } else
+#endif
+ pa = &args[0];
+
+ /* Convert the arguments */
+ for (i = 0; i < n; ++i, ++pa) {
+ PyObject *converter;
+ PyObject *arg;
+ int err;
+
+ arg = PyTuple_GET_ITEM(argtuple, i); /* borrowed ref */
+ /* For cdecl functions, we allow more actual arguments
+ than the length of the argtypes tuple.
+ This is checked in _ctypes::PyCFuncPtr_Call
+ */
+ if (argtypes && argtype_count > i) {
+ PyObject *v;
+ converter = PyTuple_GET_ITEM(argtypes, i);
+ v = PyObject_CallFunctionObjArgs(converter, arg, NULL);
+ if (v == NULL) {
+ _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
+ goto cleanup;
+ }
+
+ err = ConvParam(v, i+1, pa);
+ Py_DECREF(v);
+ if (-1 == err) {
+ _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
+ goto cleanup;
+ }
+ } else {
+ err = ConvParam(arg, i+1, pa);
+ if (-1 == err) {
+ _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
+ goto cleanup; /* leaking ? */
+ }
+ }
+ }
+
+ rtype = _ctypes_get_ffi_type(restype);
+
+
+#ifdef UEFI_C_SOURCE
+ resbuf = malloc(max(rtype->size, sizeof(ffi_arg)));
+#ifdef _Py_MEMORY_SANITIZER
+ /* ffi_call actually initializes resbuf, but from asm, which
+ * MemorySanitizer can't detect. Avoid false positives from MSan. */
+ if (resbuf != NULL) {
+ __msan_unpoison(resbuf, max(rtype->size, sizeof(ffi_arg)));
+ }
+#endif
+
+ avalues = (void **)malloc(sizeof(void *)* argcount);
+ atypes = (ffi_type **)malloc(sizeof(ffi_type *)* argcount);
+#else
+ resbuf = alloca(max(rtype->size, sizeof(ffi_arg)));
+#ifdef _Py_MEMORY_SANITIZER
+ /* ffi_call actually initializes resbuf, but from asm, which
+ * MemorySanitizer can't detect. Avoid false positives from MSan. */
+ if (resbuf != NULL) {
+ __msan_unpoison(resbuf, max(rtype->size, sizeof(ffi_arg)));
+ }
+#endif
+ avalues = (void **)alloca(sizeof(void *) * argcount);
+ atypes = (ffi_type **)alloca(sizeof(ffi_type *) * argcount);
+#endif //UEFI_C_SOURCE
+ if (!resbuf || !avalues || !atypes) {
+ PyErr_NoMemory();
+ goto cleanup;
+ }
+ for (i = 0; i < argcount; ++i) {
+ atypes[i] = args[i].ffi_type;
+#ifdef CTYPES_PASS_BY_REF_HACK
+ size_t size = atypes[i]->size;
+ if (IS_PASS_BY_REF(size)) {
+ void *tmp = alloca(size);
+ if (atypes[i]->type == FFI_TYPE_STRUCT)
+ memcpy(tmp, args[i].value.p, size);
+ else
+ memcpy(tmp, (void*)&args[i].value, size);
+
+ avalues[i] = tmp;
+ }
+ else
+#endif
+ if (atypes[i]->type == FFI_TYPE_STRUCT)
+ avalues[i] = (void *)args[i].value.p;
+ else
+ avalues[i] = (void *)&args[i].value;
+ }
+
+ if (-1 == _call_function_pointer(flags, pProc, avalues, atypes,
+ rtype, resbuf,
+ Py_SAFE_DOWNCAST(argcount,
+ Py_ssize_t,
+ int)))
+ goto cleanup;
+
+#ifdef WORDS_BIGENDIAN
+ /* libffi returns the result in a buffer with sizeof(ffi_arg). This
+ causes problems on big endian machines, since the result buffer
+ address cannot simply be used as result pointer, instead we must
+ adjust the pointer value:
+ */
+ /*
+ XXX I should find out and clarify why this is needed at all,
+ especially why adjusting for ffi_type_float must be avoided on
+ 64-bit platforms.
+ */
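+ /*
+ Worked example (hypothetical sizes): for a function returning a 2-byte
+ short with an 8-byte ffi_arg, libffi writes the value into the full
+ 8-byte slot; on a big-endian machine the two meaningful bytes are the
+ last two, so resbuf is advanced by sizeof(ffi_arg) - 2 == 6 bytes
+ before GetResult reads the result.
+ */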
+ if (rtype->type != FFI_TYPE_FLOAT
+ && rtype->type != FFI_TYPE_STRUCT
+ && rtype->size < sizeof(ffi_arg))
+ resbuf = (char *)resbuf + sizeof(ffi_arg) - rtype->size;
+#endif
+
+#ifdef MS_WIN32
+ if (iid && pIunk) {
+ if (*(int *)resbuf & 0x80000000)
+ retval = GetComError(*(HRESULT *)resbuf, iid, pIunk);
+ else
+ retval = PyLong_FromLong(*(int *)resbuf);
+ } else if (flags & FUNCFLAG_HRESULT) {
+ if (*(int *)resbuf & 0x80000000)
+ retval = PyErr_SetFromWindowsErr(*(int *)resbuf);
+ else
+ retval = PyLong_FromLong(*(int *)resbuf);
+ } else
+#endif
+ retval = GetResult(restype, resbuf, checker);
+ cleanup:
+ for (i = 0; i < argcount; ++i)
+ Py_XDECREF(args[i].keep);
+ return retval;
+}
+
+static int
+_parse_voidp(PyObject *obj, void **address)
+{
+ *address = PyLong_AsVoidPtr(obj);
+ if (*address == NULL)
+ return 0;
+ return 1;
+}
+
+#ifdef MS_WIN32
+
+static const char format_error_doc[] =
+"FormatError([integer]) -> string\n\
+\n\
+Convert a win32 error code into a string. If the error code is not\n\
+given, the return value of a call to GetLastError() is used.\n";
+static PyObject *format_error(PyObject *self, PyObject *args)
+{
+ PyObject *result;
+ wchar_t *lpMsgBuf;
+ DWORD code = 0;
+ if (!PyArg_ParseTuple(args, "|i:FormatError", &code))
+ return NULL;
+ if (code == 0)
+ code = GetLastError();
+ lpMsgBuf = FormatError(code);
+ if (lpMsgBuf) {
+ result = PyUnicode_FromWideChar(lpMsgBuf, wcslen(lpMsgBuf));
+ LocalFree(lpMsgBuf);
+ } else {
+ result = PyUnicode_FromString("<no description>");
+ }
+ return result;
+}
+
+static const char load_library_doc[] =
+"LoadLibrary(name) -> handle\n\
+\n\
+Load an executable (usually a DLL), and return a handle to it.\n\
+The handle may be used to locate exported functions in this\n\
+module.\n";
+static PyObject *load_library(PyObject *self, PyObject *args)
+{
+ const WCHAR *name;
+ PyObject *nameobj;
+ PyObject *ignored;
+ HMODULE hMod;
+
+ if (!PyArg_ParseTuple(args, "U|O:LoadLibrary", &nameobj, &ignored))
+ return NULL;
+
+ name = _PyUnicode_AsUnicode(nameobj);
+ if (!name)
+ return NULL;
+
+ hMod = LoadLibraryW(name);
+ if (!hMod)
+ return PyErr_SetFromWindowsErr(GetLastError());
+#ifdef _WIN64
+ return PyLong_FromVoidPtr(hMod);
+#else
+ return Py_BuildValue("i", hMod);
+#endif
+}
+
+static const char free_library_doc[] =
+"FreeLibrary(handle) -> void\n\
+\n\
+Free the handle of an executable previously loaded by LoadLibrary.\n";
+static PyObject *free_library(PyObject *self, PyObject *args)
+{
+ void *hMod;
+ if (!PyArg_ParseTuple(args, "O&:FreeLibrary", &_parse_voidp, &hMod))
+ return NULL;
+ if (!FreeLibrary((HMODULE)hMod))
+ return PyErr_SetFromWindowsErr(GetLastError());
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static const char copy_com_pointer_doc[] =
+"CopyComPointer(src, dst) -> HRESULT value\n";
+
+static PyObject *
+copy_com_pointer(PyObject *self, PyObject *args)
+{
+ PyObject *p1, *p2, *r = NULL;
+ struct argument a, b;
+ IUnknown *src, **pdst;
+ if (!PyArg_ParseTuple(args, "OO:CopyComPointer", &p1, &p2))
+ return NULL;
+ a.keep = b.keep = NULL;
+
+ if (-1 == ConvParam(p1, 0, &a) || -1 == ConvParam(p2, 1, &b))
+ goto done;
+ src = (IUnknown *)a.value.p;
+ pdst = (IUnknown **)b.value.p;
+
+ if (pdst == NULL)
+ r = PyLong_FromLong(E_POINTER);
+ else {
+ if (src)
+ src->lpVtbl->AddRef(src);
+ *pdst = src;
+ r = PyLong_FromLong(S_OK);
+ }
+ done:
+ Py_XDECREF(a.keep);
+ Py_XDECREF(b.keep);
+ return r;
+}
+#else
+
+#ifndef UEFI_C_SOURCE
+static PyObject *py_dl_open(PyObject *self, PyObject *args)
+{
+ PyObject *name, *name2;
+ char *name_str;
+ void * handle;
+#if HAVE_DECL_RTLD_LOCAL
+ int mode = RTLD_NOW | RTLD_LOCAL;
+#else
+ /* cygwin doesn't define RTLD_LOCAL */
+ int mode = RTLD_NOW;
+#endif
+ if (!PyArg_ParseTuple(args, "O|i:dlopen", &name, &mode))
+ return NULL;
+ mode |= RTLD_NOW;
+ if (name != Py_None) {
+ if (PyUnicode_FSConverter(name, &name2) == 0)
+ return NULL;
+ if (PyBytes_Check(name2))
+ name_str = PyBytes_AS_STRING(name2);
+ else
+ name_str = PyByteArray_AS_STRING(name2);
+ } else {
+ name_str = NULL;
+ name2 = NULL;
+ }
+ handle = ctypes_dlopen(name_str, mode);
+ Py_XDECREF(name2);
+ if (!handle) {
+ char *errmsg = ctypes_dlerror();
+ if (!errmsg)
+ errmsg = "dlopen() error";
+ PyErr_SetString(PyExc_OSError,
+ errmsg);
+ return NULL;
+ }
+ return PyLong_FromVoidPtr(handle);
+}
+
+static PyObject *py_dl_close(PyObject *self, PyObject *args)
+{
+ void *handle;
+
+ if (!PyArg_ParseTuple(args, "O&:dlclose", &_parse_voidp, &handle))
+ return NULL;
+ if (dlclose(handle)) {
+ PyErr_SetString(PyExc_OSError,
+ ctypes_dlerror());
+ return NULL;
+ }
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static PyObject *py_dl_sym(PyObject *self, PyObject *args)
+{
+ char *name;
+ void *handle;
+ void *ptr;
+
+ if (!PyArg_ParseTuple(args, "O&s:dlsym",
+ &_parse_voidp, &handle, &name))
+ return NULL;
+ ptr = ctypes_dlsym((void*)handle, name);
+ if (!ptr) {
+ PyErr_SetString(PyExc_OSError,
+ ctypes_dlerror());
+ return NULL;
+ }
+ return PyLong_FromVoidPtr(ptr);
+}
+#endif // UEFI_C_SOURCE
+#endif
+
+/*
+ * Only for debugging so far: So that we can call CFunction instances
+ *
+ * XXX Needs to accept more arguments: flags, argtypes, restype
+ */
+static PyObject *
+call_function(PyObject *self, PyObject *args)
+{
+ void *func;
+ PyObject *arguments;
+ PyObject *result;
+
+ if (!PyArg_ParseTuple(args,
+ "O&O!",
+ &_parse_voidp, &func,
+ &PyTuple_Type, &arguments))
+ return NULL;
+
+ result = _ctypes_callproc((PPROC)func,
+ arguments,
+#ifdef MS_WIN32
+ NULL,
+ NULL,
+#endif
+ 0, /* flags */
+ NULL, /* self->argtypes */
+ NULL, /* self->restype */
+ NULL); /* checker */
+ return result;
+}
+
+/*
+ * Only for debugging so far: So that we can call CFunction instances
+ *
+ * XXX Needs to accept more arguments: flags, argtypes, restype
+ */
+static PyObject *
+call_cdeclfunction(PyObject *self, PyObject *args)
+{
+ void *func;
+ PyObject *arguments;
+ PyObject *result;
+
+ if (!PyArg_ParseTuple(args,
+ "O&O!",
+ &_parse_voidp, &func,
+ &PyTuple_Type, &arguments))
+ return NULL;
+
+ result = _ctypes_callproc((PPROC)func,
+ arguments,
+#ifdef MS_WIN32
+ NULL,
+ NULL,
+#endif
+ FUNCFLAG_CDECL, /* flags */
+ NULL, /* self->argtypes */
+ NULL, /* self->restype */
+ NULL); /* checker */
+ return result;
+}
+
+/*****************************************************************
+ * functions
+ */
+static const char sizeof_doc[] =
+"sizeof(C type) -> integer\n"
+"sizeof(C instance) -> integer\n"
+"Return the size in bytes of a C instance";
+
+static PyObject *
+sizeof_func(PyObject *self, PyObject *obj)
+{
+ StgDictObject *dict;
+
+ dict = PyType_stgdict(obj);
+ if (dict)
+ return PyLong_FromSsize_t(dict->size);
+
+ if (CDataObject_Check(obj))
+ return PyLong_FromSsize_t(((CDataObject *)obj)->b_size);
+ PyErr_SetString(PyExc_TypeError,
+ "this type has no size");
+ return NULL;
+}
+
+static const char alignment_doc[] =
+"alignment(C type) -> integer\n"
+"alignment(C instance) -> integer\n"
+"Return the alignment requirements of a C instance";
+
+static PyObject *
+align_func(PyObject *self, PyObject *obj)
+{
+ StgDictObject *dict;
+
+ dict = PyType_stgdict(obj);
+ if (dict)
+ return PyLong_FromSsize_t(dict->align);
+
+ dict = PyObject_stgdict(obj);
+ if (dict)
+ return PyLong_FromSsize_t(dict->align);
+
+ PyErr_SetString(PyExc_TypeError,
+ "no alignment info");
+ return NULL;
+}
+
+static const char byref_doc[] =
+"byref(C instance[, offset=0]) -> byref-object\n"
+"Return a pointer lookalike to a C instance, only usable\n"
+"as function argument";
+
+/*
+ * We must return something which can be converted to a parameter,
+ * but still has a reference to self.
+ */
+static PyObject *
+byref(PyObject *self, PyObject *args)
+{
+ PyCArgObject *parg;
+ PyObject *obj;
+ PyObject *pyoffset = NULL;
+ Py_ssize_t offset = 0;
+
+ if (!PyArg_UnpackTuple(args, "byref", 1, 2,
+ &obj, &pyoffset))
+ return NULL;
+ if (pyoffset) {
+ offset = PyNumber_AsSsize_t(pyoffset, NULL);
+ if (offset == -1 && PyErr_Occurred())
+ return NULL;
+ }
+ if (!CDataObject_Check(obj)) {
+ PyErr_Format(PyExc_TypeError,
+ "byref() argument must be a ctypes instance, not '%s'",
+ Py_TYPE(obj)->tp_name);
+ return NULL;
+ }
+
+ parg = PyCArgObject_new();
+ if (parg == NULL)
+ return NULL;
+
+ parg->tag = 'P';
+ parg->pffi_type = &ffi_type_pointer;
+ Py_INCREF(obj);
+ parg->obj = obj;
+ parg->value.p = (char *)((CDataObject *)obj)->b_ptr + offset;
+ return (PyObject *)parg;
+}
+
+static const char addressof_doc[] =
+"addressof(C instance) -> integer\n"
+"Return the address of the C instance internal buffer";
+
+static PyObject *
+addressof(PyObject *self, PyObject *obj)
+{
+ if (CDataObject_Check(obj))
+ return PyLong_FromVoidPtr(((CDataObject *)obj)->b_ptr);
+ PyErr_SetString(PyExc_TypeError,
+ "invalid type");
+ return NULL;
+}
+
+static int
+converter(PyObject *obj, void **address)
+{
+ *address = PyLong_AsVoidPtr(obj);
+ return *address != NULL;
+}
+
+static PyObject *
+My_PyObj_FromPtr(PyObject *self, PyObject *args)
+{
+ PyObject *ob;
+ if (!PyArg_ParseTuple(args, "O&:PyObj_FromPtr", converter, &ob))
+ return NULL;
+ Py_INCREF(ob);
+ return ob;
+}
+
+static PyObject *
+My_Py_INCREF(PyObject *self, PyObject *arg)
+{
+ Py_INCREF(arg); /* that's what this function is for */
+ Py_INCREF(arg); /* that for returning it */
+ return arg;
+}
+
+static PyObject *
+My_Py_DECREF(PyObject *self, PyObject *arg)
+{
+ Py_DECREF(arg); /* that's what this function is for */
+ Py_INCREF(arg); /* that's for returning it */
+ return arg;
+}
+
+static PyObject *
+resize(PyObject *self, PyObject *args)
+{
+ CDataObject *obj;
+ StgDictObject *dict;
+ Py_ssize_t size;
+
+ if (!PyArg_ParseTuple(args,
+ "On:resize",
+ &obj, &size))
+ return NULL;
+
+ dict = PyObject_stgdict((PyObject *)obj);
+ if (dict == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "excepted ctypes instance");
+ return NULL;
+ }
+ if (size < dict->size) {
+ PyErr_Format(PyExc_ValueError,
+ "minimum size is %zd",
+ dict->size);
+ return NULL;
+ }
+ if (obj->b_needsfree == 0) {
+ PyErr_Format(PyExc_ValueError,
+ "Memory cannot be resized because this object doesn't own it");
+ return NULL;
+ }
+ if ((size_t)size <= sizeof(obj->b_value)) {
+ /* internal default buffer is large enough */
+ obj->b_size = size;
+ goto done;
+ }
+ if (!_CDataObject_HasExternalBuffer(obj)) {
+ /* We are currently using the object's default buffer, but it
+ isn't large enough any more. */
+ void *ptr = PyMem_Malloc(size);
+ if (ptr == NULL)
+ return PyErr_NoMemory();
+ memset(ptr, 0, size);
+ memmove(ptr, obj->b_ptr, obj->b_size);
+ obj->b_ptr = ptr;
+ obj->b_size = size;
+ } else {
+ void * ptr = PyMem_Realloc(obj->b_ptr, size);
+ if (ptr == NULL)
+ return PyErr_NoMemory();
+ obj->b_ptr = ptr;
+ obj->b_size = size;
+ }
+ done:
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static PyObject *
+unpickle(PyObject *self, PyObject *args)
+{
+ PyObject *typ;
+ PyObject *state;
+ PyObject *result;
+ PyObject *tmp;
+ _Py_IDENTIFIER(__new__);
+ _Py_IDENTIFIER(__setstate__);
+
+ if (!PyArg_ParseTuple(args, "OO", &typ, &state))
+ return NULL;
+ result = _PyObject_CallMethodId(typ, &PyId___new__, "O", typ);
+ if (result == NULL)
+ return NULL;
+ tmp = _PyObject_CallMethodId(result, &PyId___setstate__, "O", state);
+ if (tmp == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ Py_DECREF(tmp);
+ return result;
+}
+
+static PyObject *
+POINTER(PyObject *self, PyObject *cls)
+{
+ PyObject *result;
+ PyTypeObject *typ;
+ PyObject *key;
+ char *buf;
+
+ result = PyDict_GetItem(_ctypes_ptrtype_cache, cls);
+ if (result) {
+ Py_INCREF(result);
+ return result;
+ }
+ if (PyUnicode_CheckExact(cls)) {
+ const char *name = PyUnicode_AsUTF8(cls);
+ if (name == NULL)
+ return NULL;
+ buf = PyMem_Malloc(strlen(name) + 3 + 1);
+ if (buf == NULL)
+ return PyErr_NoMemory();
+ sprintf(buf, "LP_%s", name);
+ result = PyObject_CallFunction((PyObject *)Py_TYPE(&PyCPointer_Type),
+ "s(O){}",
+ buf,
+ &PyCPointer_Type);
+ PyMem_Free(buf);
+ if (result == NULL)
+ return result;
+ key = PyLong_FromVoidPtr(result);
+ if (key == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ } else if (PyType_Check(cls)) {
+ typ = (PyTypeObject *)cls;
+ buf = PyMem_Malloc(strlen(typ->tp_name) + 3 + 1);
+ if (buf == NULL)
+ return PyErr_NoMemory();
+ sprintf(buf, "LP_%s", typ->tp_name);
+ result = PyObject_CallFunction((PyObject *)Py_TYPE(&PyCPointer_Type),
+ "s(O){sO}",
+ buf,
+ &PyCPointer_Type,
+ "_type_", cls);
+ PyMem_Free(buf);
+ if (result == NULL)
+ return result;
+ Py_INCREF(cls);
+ key = cls;
+ } else {
+ PyErr_SetString(PyExc_TypeError, "must be a ctypes type");
+ return NULL;
+ }
+ if (-1 == PyDict_SetItem(_ctypes_ptrtype_cache, key, result)) {
+ Py_DECREF(result);
+ Py_DECREF(key);
+ return NULL;
+ }
+ Py_DECREF(key);
+ return result;
+}
+
+static PyObject *
+pointer(PyObject *self, PyObject *arg)
+{
+ PyObject *result;
+ PyObject *typ;
+
+ typ = PyDict_GetItem(_ctypes_ptrtype_cache, (PyObject *)Py_TYPE(arg));
+ if (typ)
+ return PyObject_CallFunctionObjArgs(typ, arg, NULL);
+ typ = POINTER(NULL, (PyObject *)Py_TYPE(arg));
+ if (typ == NULL)
+ return NULL;
+ result = PyObject_CallFunctionObjArgs(typ, arg, NULL);
+ Py_DECREF(typ);
+ return result;
+}
+
+static PyObject *
+buffer_info(PyObject *self, PyObject *arg)
+{
+ StgDictObject *dict = PyType_stgdict(arg);
+ PyObject *shape;
+ Py_ssize_t i;
+
+ if (dict == NULL)
+ dict = PyObject_stgdict(arg);
+ if (dict == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "not a ctypes type or object");
+ return NULL;
+ }
+ shape = PyTuple_New(dict->ndim);
+ if (shape == NULL)
+ return NULL;
+ for (i = 0; i < (int)dict->ndim; ++i)
+ PyTuple_SET_ITEM(shape, i, PyLong_FromSsize_t(dict->shape[i]));
+
+ if (PyErr_Occurred()) {
+ Py_DECREF(shape);
+ return NULL;
+ }
+ return Py_BuildValue("siN", dict->format, dict->ndim, shape);
+}
+
+PyMethodDef _ctypes_module_methods[] = {
+ {"get_errno", get_errno, METH_NOARGS},
+ {"set_errno", set_errno, METH_VARARGS},
+ {"POINTER", POINTER, METH_O },
+ {"pointer", pointer, METH_O },
+ {"_unpickle", unpickle, METH_VARARGS },
+ {"buffer_info", buffer_info, METH_O, "Return buffer interface information"},
+ {"resize", resize, METH_VARARGS, "Resize the memory buffer of a ctypes instance"},
+#ifndef UEFI_C_SOURCE
+#ifdef MS_WIN32
+ {"get_last_error", get_last_error, METH_NOARGS},
+ {"set_last_error", set_last_error, METH_VARARGS},
+ {"CopyComPointer", copy_com_pointer, METH_VARARGS, copy_com_pointer_doc},
+ {"FormatError", format_error, METH_VARARGS, format_error_doc},
+ {"LoadLibrary", load_library, METH_VARARGS, load_library_doc},
+ {"FreeLibrary", free_library, METH_VARARGS, free_library_doc},
+ {"_check_HRESULT", check_hresult, METH_VARARGS},
+#else
+ {"dlopen", py_dl_open, METH_VARARGS,
+ "dlopen(name, flag={RTLD_GLOBAL|RTLD_LOCAL}) open a shared library"},
+ {"dlclose", py_dl_close, METH_VARARGS, "dlclose a library"},
+ {"dlsym", py_dl_sym, METH_VARARGS, "find symbol in shared library"},
+#endif
+#endif // UEFI_C_SOURCE
+ {"alignment", align_func, METH_O, alignment_doc},
+ {"sizeof", sizeof_func, METH_O, sizeof_doc},
+ {"byref", byref, METH_VARARGS, byref_doc},
+ {"addressof", addressof, METH_O, addressof_doc},
+ {"call_function", call_function, METH_VARARGS },
+ {"call_cdeclfunction", call_cdeclfunction, METH_VARARGS },
+ {"PyObj_FromPtr", My_PyObj_FromPtr, METH_VARARGS },
+ {"Py_INCREF", My_Py_INCREF, METH_O },
+ {"Py_DECREF", My_Py_DECREF, METH_O },
+ {NULL, NULL} /* Sentinel */
+};
+
+/*
+ Local Variables:
+ compile-command: "cd .. && python setup.py -q build -g && python setup.py -q build install --home ~"
+ End:
+*/
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
new file mode 100644
index 00000000..8b452ec0
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
@@ -0,0 +1,29 @@
+#ifndef _CTYPES_DLFCN_H_
+#define _CTYPES_DLFCN_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+
+#ifndef UEFI_C_SOURCE
+#ifndef MS_WIN32
+
+#include <dlfcn.h>
+
+#ifndef CTYPES_DARWIN_DLFCN
+
+#define ctypes_dlsym dlsym
+#define ctypes_dlerror dlerror
+#define ctypes_dlopen dlopen
+#define ctypes_dlclose dlclose
+#define ctypes_dladdr dladdr
+
+#endif /* !CTYPES_DARWIN_DLFCN */
+
+#endif /* !MS_WIN32 */
+#endif // UEFI_C_SOURCE
+
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+#endif /* _CTYPES_DLFCN_H_ */
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
new file mode 100644
index 00000000..12e01284
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
@@ -0,0 +1,572 @@
+/* -----------------------------------------------------------------------
+
+ Copyright (c) 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+
+ ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc.
+ Copyright (c) 2002 Ranjit Mathew
+ Copyright (c) 2002 Bo Thorsen
+ Copyright (c) 2002 Roger Sayle
+
+ x86 Foreign Function Interface
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ ``Software''), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be included
+ in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ OTHER DEALINGS IN THE SOFTWARE.
+ ----------------------------------------------------------------------- */
+
+#include <ffi.h>
+#include <ffi_common.h>
+
+#include <stdlib.h>
+#include <math.h>
+#include <sys/types.h>
+
+/* ffi_prep_args is called by the assembly routine once stack space
+ has been allocated for the function's arguments */
+
+extern void Py_FatalError(const char *msg);
+
+/*@-exportheader@*/
+void ffi_prep_args(char *stack, extended_cif *ecif)
+/*@=exportheader@*/
+{
+ register unsigned int i;
+ register void **p_argv;
+ register char *argp;
+ register ffi_type **p_arg;
+
+ argp = stack;
+ if (ecif->cif->rtype->type == FFI_TYPE_STRUCT)
+ {
+ *(void **) argp = ecif->rvalue;
+ argp += sizeof(void *);
+ }
+
+ p_argv = ecif->avalue;
+
+ for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types;
+ i != 0;
+ i--, p_arg++)
+ {
+ size_t z;
+
+ /* Align if necessary */
+ if ((sizeof(void *) - 1) & (size_t) argp)
+ argp = (char *) ALIGN(argp, sizeof(void *));
+
+ z = (*p_arg)->size;
+ if (z < sizeof(intptr_t))
+ {
+ z = sizeof(intptr_t);
+ switch ((*p_arg)->type)
+ {
+ case FFI_TYPE_SINT8:
+ *(intptr_t *) argp = (intptr_t)*(SINT8 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_UINT8:
+ *(uintptr_t *) argp = (uintptr_t)*(UINT8 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_SINT16:
+ *(intptr_t *) argp = (intptr_t)*(SINT16 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_UINT16:
+ *(uintptr_t *) argp = (uintptr_t)*(UINT16 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_SINT32:
+ *(intptr_t *) argp = (intptr_t)*(SINT32 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_UINT32:
+ *(uintptr_t *) argp = (uintptr_t)*(UINT32 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_FLOAT:
+ *(uintptr_t *) argp = 0;
+ *(float *) argp = *(float *)(* p_argv);
+ break;
+
+ // 64-bit value cases should never be used for x86 and AMD64 builds
+ case FFI_TYPE_SINT64:
+ *(intptr_t *) argp = (intptr_t)*(SINT64 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_UINT64:
+ *(uintptr_t *) argp = (uintptr_t)*(UINT64 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_STRUCT:
+ *(uintptr_t *) argp = (uintptr_t)*(UINT32 *)(* p_argv);
+ break;
+
+ case FFI_TYPE_DOUBLE:
+ *(uintptr_t *) argp = 0;
+ *(double *) argp = *(double *)(* p_argv);
+ break;
+
+ default:
+ FFI_ASSERT(0);
+ }
+ }
+#ifdef _WIN64
+ else if (z > 8)
+ {
+ /* On Win64, if a single argument takes more than 8 bytes,
+ then it is always passed by reference. */
+ *(void **)argp = *p_argv;
+ z = 8;
+ }
+#endif
+ else
+ {
+ memcpy(argp, *p_argv, z);
+ }
+ p_argv++;
+ argp += z;
+ }
+
+ if (argp >= stack && (unsigned)(argp - stack) > ecif->cif->bytes)
+ {
+ Py_FatalError("FFI BUG: not enough stack space for arguments");
+ }
+ return;
+}
+
+/*
+Per: https://msdn.microsoft.com/en-us/library/7572ztz4.aspx
+To be returned by value in RAX, user-defined types must have a length
+of 1, 2, 4, 8, 16, 32, or 64 bits
+*/
+int can_return_struct_as_int(size_t s)
+{
+ return s == 1 || s == 2 || s == 4;
+}
+
+int can_return_struct_as_sint64(size_t s)
+{
+ return s == 8;
+}
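+
+/* Illustrative consequence: a 4-byte struct such as 'struct { short a, b; }'
+   is returned through the integer-register path (FFI_TYPE_INT), an 8-byte
+   struct through the FFI_TYPE_SINT64 path, and any other size falls back to
+   FFI_TYPE_STRUCT, i.e. the caller passes a hidden pointer to storage for
+   the return value (see ffi_prep_cif_machdep below). */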
+
+/* Perform machine dependent cif processing */
+ffi_status ffi_prep_cif_machdep(ffi_cif *cif)
+{
+ /* Set the return type flag */
+ switch (cif->rtype->type)
+ {
+ case FFI_TYPE_VOID:
+ case FFI_TYPE_SINT64:
+ case FFI_TYPE_FLOAT:
+ case FFI_TYPE_DOUBLE:
+ case FFI_TYPE_LONGDOUBLE:
+ cif->flags = (unsigned) cif->rtype->type;
+ break;
+
+ case FFI_TYPE_STRUCT:
+ /* MSVC returns small structures in registers. Put in cif->flags
+ the value FFI_TYPE_STRUCT only if the structure is big enough;
+ otherwise, put the 4- or 8-bytes integer type. */
+ if (can_return_struct_as_int(cif->rtype->size))
+ cif->flags = FFI_TYPE_INT;
+ else if (can_return_struct_as_sint64(cif->rtype->size))
+ cif->flags = FFI_TYPE_SINT64;
+ else
+ cif->flags = FFI_TYPE_STRUCT;
+ break;
+
+ case FFI_TYPE_UINT64:
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+ case FFI_TYPE_POINTER:
+#endif
+ cif->flags = FFI_TYPE_SINT64;
+ break;
+
+ default:
+ cif->flags = FFI_TYPE_INT;
+ break;
+ }
+
+ return FFI_OK;
+}
+
+#if defined(_WIN32) || defined(UEFI_MSVC_32)
+extern int
+ffi_call_x86(void (*)(char *, extended_cif *),
+ /*@out@*/ extended_cif *,
+ unsigned, unsigned,
+ /*@out@*/ unsigned *,
+ void (*fn)());
+#endif
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+extern int
+ffi_call_AMD64(void (*)(char *, extended_cif *),
+ /*@out@*/ extended_cif *,
+ unsigned, unsigned,
+ /*@out@*/ unsigned *,
+ void (*fn)());
+#endif
+
+int
+ffi_call(/*@dependent@*/ ffi_cif *cif,
+ void (*fn)(),
+ /*@out@*/ void *rvalue,
+ /*@dependent@*/ void **avalue)
+{
+ extended_cif ecif;
+#ifdef UEFI_C_SOURCE
+ int malloc_flag = 0;
+#endif // UEFI_C_SOURCE
+
+ ecif.cif = cif;
+ ecif.avalue = avalue;
+
+ /* If the return value is a struct and we don't have a return */
+ /* value address then we need to make one */
+
+ if ((rvalue == NULL) &&
+ (cif->rtype->type == FFI_TYPE_STRUCT))
+ {
+ /*@-sysunrecog@*/
+#ifdef UEFI_C_SOURCE
+ ecif.rvalue = malloc(cif->rtype->size);
+ malloc_flag = 1;
+#else
+ ecif.rvalue = alloca(cif->rtype->size);
+#endif // UEFI_C_SOURCE
+ /*@=sysunrecog@*/
+ }
+ else
+ ecif.rvalue = rvalue;
+
+
+ switch (cif->abi)
+ {
+#if !defined(_WIN64) && !defined(UEFI_MSVC_64)
+ case FFI_SYSV:
+ case FFI_STDCALL:
+ return ffi_call_x86(ffi_prep_args, &ecif, cif->bytes,
+ cif->flags, ecif.rvalue, fn);
+ break;
+#else
+ case FFI_SYSV:
+ /* If a single argument takes more than 8 bytes,
+ then a copy is passed by reference. */
+ for (unsigned i = 0; i < cif->nargs; i++) {
+ size_t z = cif->arg_types[i]->size;
+ if (z > 8) {
+ void *temp = alloca(z);
+ memcpy(temp, avalue[i], z);
+ avalue[i] = temp;
+ }
+ }
+ /*@-usedef@*/
+ return ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes,
+ cif->flags, ecif.rvalue, fn);
+ /*@=usedef@*/
+ break;
+#endif
+
+ default:
+ FFI_ASSERT(0);
+ break;
+ }
+
+#ifdef UEFI_C_SOURCE
+ if (malloc_flag) {
+ free (ecif.rvalue);
+ }
+#endif // UEFI_C_SOURCE
+
+ return -1; /* theller: Hrm. */
+}
+
+
+/** private members **/
+
+static void ffi_prep_incoming_args_SYSV (char *stack, void **ret,
+ void** args, ffi_cif* cif);
+/* This function is jumped to by the trampoline */
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+void *
+#else
+static void __fastcall
+#endif
+ffi_closure_SYSV (ffi_closure *closure, char *argp)
+{
+ // this is our return value storage
+ long double res;
+
+ // our various things...
+ ffi_cif *cif;
+ void **arg_area;
+ unsigned short rtype;
+ void *resp = (void*)&res;
+ void *args = argp + sizeof(void*);
+
+ cif = closure->cif;
+#ifdef UEFI_C_SOURCE
+ arg_area = (void**) malloc (cif->nargs * sizeof (void*));
+#else
+ arg_area = (void**) alloca (cif->nargs * sizeof (void*));
+#endif // UEFI_C_SOURCE
+
+ /* this call will initialize ARG_AREA, such that each
+ * element in that array points to the corresponding
+ * value on the stack; and if the function returns
+ * a structure, it will re-set RESP to point to the
+ * structure return address. */
+
+ ffi_prep_incoming_args_SYSV(args, (void**)&resp, arg_area, cif);
+
+ (closure->fun) (cif, resp, arg_area, closure->user_data);
+
+ rtype = cif->flags;
+
+#if (defined(_WIN32) && !defined(_WIN64)) || (defined(UEFI_MSVC_32) && !defined(UEFI_MSVC_64))
+#ifdef _MSC_VER
+ /* now, do a generic return based on the value of rtype */
+ if (rtype == FFI_TYPE_INT)
+ {
+ _asm mov eax, resp ;
+ _asm mov eax, [eax] ;
+ }
+ else if (rtype == FFI_TYPE_FLOAT)
+ {
+ _asm mov eax, resp ;
+ _asm fld DWORD PTR [eax] ;
+// asm ("flds (%0)" : : "r" (resp) : "st" );
+ }
+ else if (rtype == FFI_TYPE_DOUBLE)
+ {
+ _asm mov eax, resp ;
+ _asm fld QWORD PTR [eax] ;
+// asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" );
+ }
+ else if (rtype == FFI_TYPE_LONGDOUBLE)
+ {
+// asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" );
+ }
+ else if (rtype == FFI_TYPE_SINT64)
+ {
+ _asm mov edx, resp ;
+ _asm mov eax, [edx] ;
+ _asm mov edx, [edx + 4] ;
+// asm ("movl 0(%0),%%eax;"
+// "movl 4(%0),%%edx"
+// : : "r"(resp)
+// : "eax", "edx");
+ }
+#else
+ /* now, do a generic return based on the value of rtype */
+ if (rtype == FFI_TYPE_INT)
+ {
+ asm ("movl (%0),%%eax" : : "r" (resp) : "eax");
+ }
+ else if (rtype == FFI_TYPE_FLOAT)
+ {
+ asm ("flds (%0)" : : "r" (resp) : "st" );
+ }
+ else if (rtype == FFI_TYPE_DOUBLE)
+ {
+ asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" );
+ }
+ else if (rtype == FFI_TYPE_LONGDOUBLE)
+ {
+ asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" );
+ }
+ else if (rtype == FFI_TYPE_SINT64)
+ {
+ asm ("movl 0(%0),%%eax;"
+ "movl 4(%0),%%edx"
+ : : "r"(resp)
+ : "eax", "edx");
+ }
+#endif
+#endif
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+ /* The result is returned in rax. This does the right thing for
+ result types except for floats; we have to 'mov xmm0, rax' in the
+ caller to correct this.
+ */
+
+ free (arg_area);
+
+ return *(void **)resp;
+#endif
+}
+
+/*@-exportheader@*/
+static void
+ffi_prep_incoming_args_SYSV(char *stack, void **rvalue,
+ void **avalue, ffi_cif *cif)
+/*@=exportheader@*/
+{
+ register unsigned int i;
+ register void **p_argv;
+ register char *argp;
+ register ffi_type **p_arg;
+
+ argp = stack;
+
+ if ( cif->rtype->type == FFI_TYPE_STRUCT ) {
+ *rvalue = *(void **) argp;
+ argp += sizeof(void *);
+ }
+
+ p_argv = avalue;
+
+ for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++)
+ {
+ size_t z;
+
+ /* Align if necessary */
+ if ((sizeof(char *) - 1) & (size_t) argp) {
+ argp = (char *) ALIGN(argp, sizeof(char*));
+ }
+
+ z = (*p_arg)->size;
+
+ /* because we're little endian, this is what it turns into. */
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+ if (z > 8) {
+ /* On Win64, if a single argument takes more than 8 bytes,
+ * then it is always passed by reference.
+ */
+ *p_argv = *((void**) argp);
+ z = 8;
+ }
+ else
+#endif
+ *p_argv = (void*) argp;
+
+ p_argv++;
+ argp += z;
+ }
+
+ return;
+}
+
+/* the cif must already be prep'ed */
+extern void ffi_closure_OUTER();
+
+ffi_status
+ffi_prep_closure_loc (ffi_closure* closure,
+ ffi_cif* cif,
+ void (*fun)(ffi_cif*,void*,void**,void*),
+ void *user_data,
+ void *codeloc)
+{
+ short bytes;
+ char *tramp;
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+ int mask = 0;
+#endif
+ FFI_ASSERT (cif->abi == FFI_SYSV);
+
+ if (cif->abi == FFI_SYSV)
+ bytes = 0;
+#if !defined(_WIN64) && !defined(UEFI_MSVC_64)
+ else if (cif->abi == FFI_STDCALL)
+ bytes = cif->bytes;
+#endif
+ else
+ return FFI_BAD_ABI;
+
+ tramp = &closure->tramp[0];
+
+#define BYTES(text) memcpy(tramp, text, sizeof(text)), tramp += sizeof(text)-1
+#define POINTER(x) *(void**)tramp = (void*)(x), tramp += sizeof(void*)
+#define SHORT(x) *(short*)tramp = x, tramp += sizeof(short)
+#define INT(x) *(int*)tramp = x, tramp += sizeof(int)
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+ if (cif->nargs >= 1 &&
+ (cif->arg_types[0]->type == FFI_TYPE_FLOAT
+ || cif->arg_types[0]->type == FFI_TYPE_DOUBLE))
+ mask |= 1;
+ if (cif->nargs >= 2 &&
+ (cif->arg_types[1]->type == FFI_TYPE_FLOAT
+ || cif->arg_types[1]->type == FFI_TYPE_DOUBLE))
+ mask |= 2;
+ if (cif->nargs >= 3 &&
+ (cif->arg_types[2]->type == FFI_TYPE_FLOAT
+ || cif->arg_types[2]->type == FFI_TYPE_DOUBLE))
+ mask |= 4;
+ if (cif->nargs >= 4 &&
+ (cif->arg_types[3]->type == FFI_TYPE_FLOAT
+ || cif->arg_types[3]->type == FFI_TYPE_DOUBLE))
+ mask |= 8;
+
+ /* 41 BB ---- mov r11d,mask */
+ BYTES("\x41\xBB"); INT(mask);
+
+ /* 48 B8 -------- mov rax, closure */
+ BYTES("\x48\xB8"); POINTER(closure);
+
+ /* 49 BA -------- mov r10, ffi_closure_OUTER */
+ BYTES("\x49\xBA"); POINTER(ffi_closure_OUTER);
+
+ /* 41 FF E2 jmp r10 */
+ BYTES("\x41\xFF\xE2");
+
+#else
+
+ /* mov ecx, closure */
+ BYTES("\xb9"); POINTER(closure);
+
+ /* mov edx, esp */
+ BYTES("\x8b\xd4");
+
+ /* call ffi_closure_SYSV */
+ BYTES("\xe8"); POINTER((char*)&ffi_closure_SYSV - (tramp + 4));
+
+ /* ret bytes */
+ BYTES("\xc2");
+ SHORT(bytes);
+
+#endif
+
+ if (tramp - &closure->tramp[0] > FFI_TRAMPOLINE_SIZE)
+ Py_FatalError("FFI_TRAMPOLINE_SIZE too small in " __FILE__);
+
+ closure->cif = cif;
+ closure->user_data = user_data;
+ closure->fun = fun;
+ return FFI_OK;
+}
+
+/**
+Stub for __chkstk, which MSVC emits as a stack probe for functions with large
+stack frames; the Python 3.6.8 UEFI build provides this empty implementation
+so linking succeeds.
+**/
+VOID
+__chkstk()
+{
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
new file mode 100644
index 00000000..7ab8d0f9
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
@@ -0,0 +1,331 @@
+/* -----------------------------------------------------------------*-C-*-
+ libffi 2.00-beta - Copyright (c) 1996-2003 Red Hat, Inc.
+
+ Permission is hereby granted, free of charge, to any person obtaining
+ a copy of this software and associated documentation files (the
+ ``Software''), to deal in the Software without restriction, including
+ without limitation the rights to use, copy, modify, merge, publish,
+ distribute, sublicense, and/or sell copies of the Software, and to
+ permit persons to whom the Software is furnished to do so, subject to
+ the following conditions:
+
+ The above copyright notice and this permission notice shall be included
+ in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
+ OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ OTHER DEALINGS IN THE SOFTWARE.
+
+ ----------------------------------------------------------------------- */
+
+/* -------------------------------------------------------------------
+ The basic API is described in the README file.
+
+ The raw API is designed to bypass some of the argument packing
+ and unpacking on architectures for which it can be avoided.
+
+ The closure API allows interpreted functions to be packaged up
+ inside a C function pointer, so that they can be called as C functions,
+ with no understanding on the client side that they are interpreted.
+ It can also be used in other cases in which it is necessary to package
+ up a user specified parameter and a function pointer as a single
+ function pointer.
+
+ The closure API must be implemented in order to get its functionality,
+ e.g. for use by gij. Routines are provided to emulate the raw API
+ if the underlying platform doesn't allow faster implementation.
+
+ More details on the raw and closure API can be found in:
+
+ http://gcc.gnu.org/ml/java/1999-q3/msg00138.html
+
+ and
+
+ http://gcc.gnu.org/ml/java/1999-q3/msg00174.html
+ -------------------------------------------------------------------- */
+
+#ifndef LIBFFI_H
+#define LIBFFI_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Specify which architecture libffi is configured for. */
+//XXX #define X86
+
+/* ---- System configuration information --------------------------------- */
+
+#include <ffitarget.h>
+
+#ifndef LIBFFI_ASM
+
+#include <stddef.h>
+#include <limits.h>
+
+/* LONG_LONG_MAX is not always defined (not if STRICT_ANSI, for example).
+ But we can find it either under the correct ANSI name, or under GNU
+ C's internal name. */
+#ifdef LONG_LONG_MAX
+# define FFI_LONG_LONG_MAX LONG_LONG_MAX
+#else
+# ifdef LLONG_MAX
+# define FFI_LONG_LONG_MAX LLONG_MAX
+# else
+# ifdef __GNUC__
+# define FFI_LONG_LONG_MAX __LONG_LONG_MAX__
+# endif
+# ifdef _MSC_VER
+# define FFI_LONG_LONG_MAX _I64_MAX
+# endif
+# endif
+#endif
+
+#if SCHAR_MAX == 127
+# define ffi_type_uchar ffi_type_uint8
+# define ffi_type_schar ffi_type_sint8
+#else
+ #error "char size not supported"
+#endif
+
+#if SHRT_MAX == 32767
+# define ffi_type_ushort ffi_type_uint16
+# define ffi_type_sshort ffi_type_sint16
+#elif SHRT_MAX == 2147483647
+# define ffi_type_ushort ffi_type_uint32
+# define ffi_type_sshort ffi_type_sint32
+#else
+ #error "short size not supported"
+#endif
+
+#if INT_MAX == 32767
+# define ffi_type_uint ffi_type_uint16
+# define ffi_type_sint ffi_type_sint16
+#elif INT_MAX == 2147483647
+# define ffi_type_uint ffi_type_uint32
+# define ffi_type_sint ffi_type_sint32
+#elif INT_MAX == 9223372036854775807
+# define ffi_type_uint ffi_type_uint64
+# define ffi_type_sint ffi_type_sint64
+#else
+ #error "int size not supported"
+#endif
+
+#define ffi_type_ulong ffi_type_uint64
+#define ffi_type_slong ffi_type_sint64
+#if LONG_MAX == 2147483647
+# if FFI_LONG_LONG_MAX != 9223372036854775807
+ #error "no 64-bit data type supported"
+# endif
+#elif LONG_MAX != 9223372036854775807
+ #error "long size not supported"
+#endif
+
+/* The closure code assumes that this works on pointers, i.e. a size_t */
+/* can hold a pointer. */
+
+typedef struct _ffi_type
+{
+ size_t size;
+ unsigned short alignment;
+ unsigned short type;
+ /*@null@*/ struct _ffi_type **elements;
+} ffi_type;
+
+int can_return_struct_as_int(size_t);
+int can_return_struct_as_sint64(size_t);
+
+/* These are defined in types.c */
+extern ffi_type ffi_type_void;
+extern ffi_type ffi_type_uint8;
+extern ffi_type ffi_type_sint8;
+extern ffi_type ffi_type_uint16;
+extern ffi_type ffi_type_sint16;
+extern ffi_type ffi_type_uint32;
+extern ffi_type ffi_type_sint32;
+extern ffi_type ffi_type_uint64;
+extern ffi_type ffi_type_sint64;
+extern ffi_type ffi_type_float;
+extern ffi_type ffi_type_double;
+extern ffi_type ffi_type_longdouble;
+extern ffi_type ffi_type_pointer;
+
+
+typedef enum {
+ FFI_OK = 0,
+ FFI_BAD_TYPEDEF,
+ FFI_BAD_ABI
+} ffi_status;
+
+typedef unsigned FFI_TYPE;
+
+typedef struct {
+ ffi_abi abi;
+ unsigned nargs;
+ /*@dependent@*/ ffi_type **arg_types;
+ /*@dependent@*/ ffi_type *rtype;
+ unsigned bytes;
+ unsigned flags;
+#ifdef FFI_EXTRA_CIF_FIELDS
+ FFI_EXTRA_CIF_FIELDS;
+#endif
+} ffi_cif;
+
+/* ---- Definitions for the raw API -------------------------------------- */
+
+#if defined(_WIN64) || defined(UEFI_MSVC_64)
+#define FFI_SIZEOF_ARG 8
+#else
+#define FFI_SIZEOF_ARG 4
+#endif
+
+typedef union {
+ ffi_sarg sint;
+ ffi_arg uint;
+ float flt;
+ char data[FFI_SIZEOF_ARG];
+ void* ptr;
+} ffi_raw;
+
+void ffi_raw_call (/*@dependent@*/ ffi_cif *cif,
+ void (*fn)(),
+ /*@out@*/ void *rvalue,
+ /*@dependent@*/ ffi_raw *avalue);
+
+void ffi_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw);
+void ffi_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args);
+size_t ffi_raw_size (ffi_cif *cif);
+
+/* This is analogous to the raw API, except it uses Java parameter */
+/* packing, even on 64-bit machines. I.e. on 64-bit machines */
+/* longs and doubles are followed by an empty 64-bit word. */
+
+void ffi_java_raw_call (/*@dependent@*/ ffi_cif *cif,
+ void (*fn)(),
+ /*@out@*/ void *rvalue,
+ /*@dependent@*/ ffi_raw *avalue);
+
+void ffi_java_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw);
+void ffi_java_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args);
+size_t ffi_java_raw_size (ffi_cif *cif);
+
+/* ---- Definitions for closures ----------------------------------------- */
+
+#if FFI_CLOSURES
+
+typedef struct {
+ char tramp[FFI_TRAMPOLINE_SIZE];
+ ffi_cif *cif;
+ void (*fun)(ffi_cif*,void*,void**,void*);
+ void *user_data;
+} ffi_closure;
+
+void ffi_closure_free(void *);
+void *ffi_closure_alloc (size_t size, void **code);
+
+ffi_status
+ffi_prep_closure_loc (ffi_closure*,
+ ffi_cif *,
+ void (*fun)(ffi_cif*,void*,void**,void*),
+ void *user_data,
+ void *codeloc);
+
+typedef struct {
+ char tramp[FFI_TRAMPOLINE_SIZE];
+
+ ffi_cif *cif;
+
+#if !FFI_NATIVE_RAW_API
+
+ /* if this is enabled, then a raw closure has the same layout
+ as a regular closure. We use this to install an intermediate
+ handler to do the translation, void** -> ffi_raw*. */
+
+ void (*translate_args)(ffi_cif*,void*,void**,void*);
+ void *this_closure;
+
+#endif
+
+ void (*fun)(ffi_cif*,void*,ffi_raw*,void*);
+ void *user_data;
+
+} ffi_raw_closure;
+
+ffi_status
+ffi_prep_raw_closure (ffi_raw_closure*,
+ ffi_cif *cif,
+ void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+ void *user_data);
+
+ffi_status
+ffi_prep_java_raw_closure (ffi_raw_closure*,
+ ffi_cif *cif,
+ void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
+ void *user_data);
+
+#endif /* FFI_CLOSURES */
+
+/* ---- Public interface definition -------------------------------------- */
+
+ffi_status ffi_prep_cif(/*@out@*/ /*@partial@*/ ffi_cif *cif,
+ ffi_abi abi,
+ unsigned int nargs,
+ /*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type *rtype,
+ /*@dependent@*/ ffi_type **atypes);
+
+int
+ffi_call(/*@dependent@*/ ffi_cif *cif,
+ void (*fn)(),
+ /*@out@*/ void *rvalue,
+ /*@dependent@*/ void **avalue);
+
+/* Useful for eliminating compiler warnings */
+#define FFI_FN(f) ((void (*)())f)
+
+/* ---- Definitions shared with assembly code ---------------------------- */
+
+#endif
+
+/* If these change, update src/mips/ffitarget.h. */
+#define FFI_TYPE_VOID 0
+#define FFI_TYPE_INT 1
+#define FFI_TYPE_FLOAT 2
+#define FFI_TYPE_DOUBLE 3
+#if 1
+#define FFI_TYPE_LONGDOUBLE 4
+#else
+#define FFI_TYPE_LONGDOUBLE FFI_TYPE_DOUBLE
+#endif
+#define FFI_TYPE_UINT8 5
+#define FFI_TYPE_SINT8 6
+#define FFI_TYPE_UINT16 7
+#define FFI_TYPE_SINT16 8
+#define FFI_TYPE_UINT32 9
+#define FFI_TYPE_SINT32 10
+#define FFI_TYPE_UINT64 11
+#define FFI_TYPE_SINT64 12
+#define FFI_TYPE_STRUCT 13
+#define FFI_TYPE_POINTER 14
+
+/* This should always refer to the last type code (for sanity checks) */
+#define FFI_TYPE_LAST FFI_TYPE_POINTER
+
+#ifdef UEFI_C_SOURCE
+#ifndef intptr_t
+typedef long long intptr_t;
+#endif
+#ifndef uintptr_t
+typedef unsigned long long uintptr_t;
+#endif
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
+
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
new file mode 100644
index 00000000..2f39d2d5
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
@@ -0,0 +1,85 @@
+/* -----------------------------------------------------------------------
+ ffi_common.h - Copyright (c) 1996 Red Hat, Inc.
+
+ Common internal definitions and macros. Only necessary for building
+ libffi.
+ ----------------------------------------------------------------------- */
+
+#ifndef FFI_COMMON_H
+#define FFI_COMMON_H
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <fficonfig.h>
+#ifndef UEFI_C_SOURCE
+#include <malloc.h>
+#endif
+
+/* Check for the existence of memcpy. */
+#if STDC_HEADERS
+# include <string.h>
+#else
+# ifndef HAVE_MEMCPY
+# define memcpy(d, s, n) bcopy ((s), (d), (n))
+# endif
+#endif
+
+#if defined(FFI_DEBUG)
+#include <stdio.h>
+#endif
+
+#ifdef FFI_DEBUG
+/*@exits@*/ void ffi_assert(/*@temp@*/ char *expr, /*@temp@*/ char *file, int line);
+void ffi_stop_here(void);
+void ffi_type_test(/*@temp@*/ /*@out@*/ ffi_type *a, /*@temp@*/ char *file, int line);
+
+#define FFI_ASSERT(x) ((x) ? (void)0 : ffi_assert(#x, __FILE__,__LINE__))
+#define FFI_ASSERT_AT(x, f, l) ((x) ? 0 : ffi_assert(#x, (f), (l)))
+#define FFI_ASSERT_VALID_TYPE(x) ffi_type_test (x, __FILE__, __LINE__)
+#else
+#define FFI_ASSERT(x)
+#define FFI_ASSERT_AT(x, f, l)
+#define FFI_ASSERT_VALID_TYPE(x)
+#endif
+
+#define ALIGN(v, a) (((((size_t) (v))-1) | ((a)-1))+1)
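+
+/* For example, ALIGN(13, 8) == 16 and ALIGN(16, 8) == 16: the macro rounds
+   its first argument up to the next multiple of the (power-of-two)
+   alignment. */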
+
+/* Perform machine dependent cif processing */
+ffi_status ffi_prep_cif_machdep(ffi_cif *cif);
+
+/* Extended cif, used in callback from assembly routine */
+typedef struct
+{
+ /*@dependent@*/ ffi_cif *cif;
+ /*@dependent@*/ void *rvalue;
+ /*@dependent@*/ void **avalue;
+} extended_cif;
+
+/* Terse sized type definitions. */
+#ifndef UEFI_C_SOURCE
+typedef unsigned int UINT8 __attribute__((__mode__(__QI__)));
+typedef signed int SINT8 __attribute__((__mode__(__QI__)));
+typedef unsigned int UINT16 __attribute__((__mode__(__HI__)));
+typedef signed int SINT16 __attribute__((__mode__(__HI__)));
+typedef unsigned int UINT32 __attribute__((__mode__(__SI__)));
+typedef signed int SINT32 __attribute__((__mode__(__SI__)));
+typedef unsigned int UINT64 __attribute__((__mode__(__DI__)));
+typedef signed int SINT64 __attribute__((__mode__(__DI__)));
+#else
+typedef signed int SINT8 __attribute__((__mode__(__QI__)));
+typedef signed int SINT16 __attribute__((__mode__(__HI__)));
+typedef signed int SINT32 __attribute__((__mode__(__SI__)));
+typedef signed int SINT64 __attribute__((__mode__(__DI__)));
+#endif
+typedef float FLOAT32;
+
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
+
+
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
new file mode 100644
index 00000000..624e3a8c
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
@@ -0,0 +1,128 @@
+#include <Python.h>
+#include <ffi.h>
+#ifdef MS_WIN32
+#include <windows.h>
+#else
+#ifndef UEFI_C_SOURCE
+#include <sys/mman.h>
+#endif // UEFI_C_SOURCE
+#include <unistd.h>
+# if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
+# define MAP_ANONYMOUS MAP_ANON
+# endif
+#endif
+#include "ctypes.h"
+
+/* BLOCKSIZE can be adjusted. A larger blocksize takes more memory
+ overhead, but allocates fewer blocks from the system. Some systems may
+ limit how many mmap'd blocks can be open.
+*/
+
+#define BLOCKSIZE _pagesize
+
+/* #define MALLOC_CLOSURE_DEBUG */ /* enable for some debugging output */
+
+/******************************************************************/
+
+typedef union _tagITEM {
+ ffi_closure closure;
+ union _tagITEM *next;
+} ITEM;
+
+static ITEM *free_list;
+static int _pagesize;
+
+static void more_core(void)
+{
+ ITEM *item;
+ int count, i;
+
+#ifndef UEFI_C_SOURCE
+/* determine the pagesize */
+#ifdef MS_WIN32
+ if (!_pagesize) {
+ SYSTEM_INFO systeminfo;
+ GetSystemInfo(&systeminfo);
+ _pagesize = systeminfo.dwPageSize;
+ }
+#else
+ if (!_pagesize) {
+#ifdef _SC_PAGESIZE
+ _pagesize = sysconf(_SC_PAGESIZE);
+#else
+ _pagesize = getpagesize();
+#endif
+ }
+#endif
+
+ /* calculate the number of nodes to allocate */
+ count = BLOCKSIZE / sizeof(ITEM);
+
+ /* allocate a memory block */
+#ifdef MS_WIN32
+ item = (ITEM *)VirtualAlloc(NULL,
+ count * sizeof(ITEM),
+ MEM_COMMIT,
+ PAGE_EXECUTE_READWRITE);
+ if (item == NULL)
+ return;
+#else
+ item = (ITEM *)mmap(NULL,
+ count * sizeof(ITEM),
+ PROT_READ | PROT_WRITE | PROT_EXEC,
+ MAP_PRIVATE | MAP_ANONYMOUS,
+ -1,
+ 0);
+ if (item == (void *)MAP_FAILED)
+ return;
+#endif
+
+#ifdef MALLOC_CLOSURE_DEBUG
+ printf("block at %p allocated (%d bytes), %d ITEMs\n",
+ item, count * sizeof(ITEM), count);
+#endif
+
+#else //EfiPy
+
+#define PAGE_SHIFT 14 /* 16K pages by default. */
+#define PAGE_SIZE (1 << PAGE_SHIFT)
+
+ count = PAGE_SIZE / sizeof(ITEM);
+
+ item = (ITEM *)malloc(count * sizeof(ITEM));
+ if (item == NULL)
+ return;
+
+#endif // EfiPy
+
+ /* put them into the free list */
+ for (i = 0; i < count; ++i) {
+ item->next = free_list;
+ free_list = item;
+ ++item;
+ }
+}
+
+/******************************************************************/
+
+/* put the item back into the free list */
+void ffi_closure_free(void *p)
+{
+ ITEM *item = (ITEM *)p;
+ item->next = free_list;
+ free_list = item;
+}
+
+/* return one item from the free list, allocating more if needed */
+void *ffi_closure_alloc(size_t ignored, void** codeloc)
+{
+ ITEM *item;
+ if (!free_list)
+ more_core();
+ if (!free_list)
+ return NULL;
+ item = free_list;
+ free_list = item->next;
+ *codeloc = (void *)item;
+ return (void *)item;
+}
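
malloc_closure.c hands out ffi_closure-sized nodes from a single free list and refills the list one block at a time, which matters under UEFI where mmap()/VirtualAlloc() are unavailable and plain malloc() is used instead. A minimal sketch of the same union-based free-list technique, independent of libffi and Python (all names below are illustrative):

    #include <stdlib.h>

    typedef union node {
        char payload[64];      /* stand-in for the ffi_closure payload      */
        union node *next;      /* link field, reused while on the free list */
    } node;

    static node *free_list;

    /* Carve one malloc'd block into nodes and push them onto the free list. */
    static void refill(size_t count)
    {
        node *block = malloc(count * sizeof(node));
        if (block == NULL)
            return;
        for (size_t i = 0; i < count; i++) {
            block[i].next = free_list;
            free_list = &block[i];
        }
    }

    static void *node_alloc(void)
    {
        if (free_list == NULL)
            refill(64);
        if (free_list == NULL)
            return NULL;
        node *n = free_list;
        free_list = n->next;
        return n;
    }

    static void node_free(void *p)
    {
        node *n = p;
        n->next = free_list;
        free_list = n;
    }

    int main(void)
    {
        void *a = node_alloc();
        node_free(a);          /* nodes are recycled, never returned to malloc */
        return 0;
    }
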
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
new file mode 100644
index 00000000..4b1eb0fb
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
@@ -0,0 +1,159 @@
+/** @file
+ Python Module configuration.
+
+ Copyright (c) 2011-2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+/* This file contains the table of built-in modules.
+ See init_builtin() in import.c. */
+
+#include "Python.h"
+
+extern PyObject* PyInit_array(void);
+extern PyObject* PyInit__ast(void);
+extern PyObject* PyInit_binascii(void);
+extern PyObject* init_bisect(void);
+extern PyObject* PyInit_cmath(void);
+extern PyObject* PyInit__codecs(void);
+extern PyObject* PyInit__collections(void);
+extern PyObject* PyInit__pickle(void);
+extern PyObject* PyInit__csv(void);
+extern PyObject* init_ctypes(void);
+extern PyObject* PyInit__datetime(void);
+extern PyObject* PyEdk2__Init(void);
+extern PyObject* PyInit_errno(void);
+extern PyObject* PyInit__functools(void);
+extern PyObject* initfuture_builtins(void);
+extern PyObject* PyInit_gc(void);
+extern PyObject* init_heapq(void);
+extern PyObject* init_hotshot(void);
+extern PyObject* PyInit_imp(void);
+extern PyObject* PyInit__io(void);
+extern PyObject* PyInit_itertools(void);
+extern PyObject* PyInit__json(void);
+extern PyObject* init_lsprof(void);
+extern PyObject* PyInit_math(void);
+extern PyObject* PyInit__md5(void);
+extern PyObject* initmmap(void);
+extern PyObject* PyInit__operator(void);
+extern PyObject* PyInit_parser(void);
+extern PyObject* PyInit_pyexpat(void);
+extern PyObject* PyInit__random(void);
+extern PyObject* PyInit_select(void);
+extern PyObject* PyInit__sha1(void);
+extern PyObject* PyInit__sha256(void);
+extern PyObject* PyInit__sha512(void);
+extern PyObject* PyInit__sha3(void);
+extern PyObject* PyInit__blake2(void);
+extern PyObject* PyInit__signal(void);
+extern PyObject* PyInit__socket(void);
+extern PyObject* PyInit__sre(void);
+extern PyObject* PyInit__struct(void);
+extern PyObject* init_subprocess(void);
+extern PyObject* PyInit__symtable(void);
+extern PyObject* initthread(void);
+extern PyObject* PyInit_time(void);
+extern PyObject* PyInit_unicodedata(void);
+extern PyObject* PyInit__weakref(void);
+extern PyObject* init_winreg(void);
+extern PyObject* PyInit_zlib(void);
+extern PyObject* initbz2(void);
+
+extern PyObject* PyMarshal_Init(void);
+extern PyObject* _PyWarnings_Init(void);
+
+extern PyObject* PyInit__multibytecodec(void);
+extern PyObject* PyInit__codecs_cn(void);
+extern PyObject* PyInit__codecs_hk(void);
+extern PyObject* PyInit__codecs_iso2022(void);
+extern PyObject* PyInit__codecs_jp(void);
+extern PyObject* PyInit__codecs_kr(void);
+extern PyObject* PyInit__codecs_tw(void);
+
+extern PyObject* PyInit__string(void);
+extern PyObject* PyInit__stat(void);
+extern PyObject* PyInit__opcode(void);
+extern PyObject* PyInit_faulthandler(void);
+// _ctypes
+extern PyObject* PyInit__ctypes(void);
+extern PyObject* init_sqlite3(void);
+
+// EfiPy
+extern PyObject* init_EfiPy(void);
+
+// ssl
+extern PyObject* PyInit__ssl(void);
+
+struct _inittab _PyImport_Inittab[] = {
+ {"_ast", PyInit__ast},
+ {"_csv", PyInit__csv},
+ {"_io", PyInit__io},
+ {"_json", PyInit__json},
+ {"_md5", PyInit__md5},
+ {"_sha1", PyInit__sha1},
+ {"_sha256", PyInit__sha256},
+ {"_sha512", PyInit__sha512},
+ { "_sha3", PyInit__sha3 },
+ { "_blake2", PyInit__blake2 },
+// {"_socket", PyInit__socket},
+ {"_symtable", PyInit__symtable},
+ {"binascii", PyInit_binascii},
+ {"cmath", PyInit_cmath},
+ {"errno", PyInit_errno},
+ {"faulthandler", PyInit_faulthandler},
+ {"gc", PyInit_gc},
+ {"math", PyInit_math},
+ {"array", PyInit_array},
+ {"_datetime", PyInit__datetime},
+ {"parser", PyInit_parser},
+ {"pyexpat", PyInit_pyexpat},
+ {"select", PyInit_select},
+ {"_signal", PyInit__signal},
+ {"unicodedata", PyInit_unicodedata},
+ { "zlib", PyInit_zlib },
+
+ /* CJK codecs */
+ {"_multibytecodec", PyInit__multibytecodec},
+
+#ifdef WITH_THREAD
+ {"thread", initthread},
+#endif
+
+ /* These modules are required for the full built-in help() facility provided by pydoc. */
+ {"_codecs", PyInit__codecs},
+ {"_collections", PyInit__collections},
+ {"_functools", PyInit__functools},
+ {"_random", PyInit__random},
+ {"_sre", PyInit__sre},
+ {"_struct", PyInit__struct},
+ {"_weakref", PyInit__weakref},
+ {"itertools", PyInit_itertools},
+ {"_operator", PyInit__operator},
+ {"time", PyInit_time},
+
+ /* These four modules should always be built in. */
+ {"edk2", PyEdk2__Init},
+ {"_imp", PyInit_imp},
+ {"marshal", PyMarshal_Init},
+
+ /* These entries are here for sys.builtin_module_names */
+ {"__main__", NULL},
+ {"__builtin__", NULL},
+ {"builtins", NULL},
+ {"sys", NULL},
+ {"exceptions", NULL},
+ {"_warnings", _PyWarnings_Init},
+ {"_string", PyInit__string},
+ {"_stat", PyInit__stat},
+ {"_opcode", PyInit__opcode},
+ { "_ctypes", PyInit__ctypes },
+ /* Sentinel */
+ {0, 0}
+};
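
config.c fixes the set of built-in modules at link time by filling in _PyImport_Inittab. An embedding application that cannot edit this table can still register an extra built-in before Py_Initialize() using the standard PyImport_AppendInittab() API; a minimal sketch under that assumption (the "demo" module name and its init function are hypothetical):

    #include <Python.h>

    static struct PyModuleDef demo_module = {
        PyModuleDef_HEAD_INIT, "demo", NULL, -1, NULL
    };

    static PyObject *PyInit_demo(void)
    {
        return PyModule_Create(&demo_module);
    }

    int main(void)
    {
        /* Must be called before Py_Initialize(). */
        PyImport_AppendInittab("demo", PyInit_demo);
        Py_Initialize();
        PyRun_SimpleString("import demo; print(demo.__name__)");
        Py_Finalize();
        return 0;
    }
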
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
new file mode 100644
index 00000000..0501a2be
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
@@ -0,0 +1,4348 @@
+/** @file
+ OS-specific module implementation for EDK II and UEFI.
+ Derived from posixmodule.c in Python 2.7.2.
+
+ Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
+ Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+#define PY_SSIZE_T_CLEAN
+
+#include "Python.h"
+#include "structseq.h"
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <wchar.h>
+#include <sys/syslimits.h>
+#include <Uefi.h>
+#include <Library/UefiLib.h>
+#include <Library/UefiRuntimeServicesTableLib.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+PyDoc_STRVAR(edk2__doc__,
+ "This module provides access to UEFI firmware functionality that is\n\
+ standardized by the C Standard and the POSIX standard (a thinly\n\
+ disguised Unix interface). Refer to the library manual and\n\
+ corresponding UEFI Specification entries for more information on calls.");
+
+#ifndef Py_USING_UNICODE
+ /* This is used in signatures of functions. */
+ #define Py_UNICODE void
+#endif
+
+#ifdef HAVE_SYS_TYPES_H
+ #include <sys/types.h>
+#endif /* HAVE_SYS_TYPES_H */
+
+#ifdef HAVE_SYS_STAT_H
+ #include <sys/stat.h>
+#endif /* HAVE_SYS_STAT_H */
+
+#ifdef HAVE_SYS_WAIT_H
+ #include <sys/wait.h> /* For WNOHANG */
+#endif
+
+#ifdef HAVE_SIGNAL_H
+ #include <signal.h>
+#endif
+
+#ifdef HAVE_FCNTL_H
+ #include <fcntl.h>
+#endif /* HAVE_FCNTL_H */
+
+#ifdef HAVE_GRP_H
+ #include <grp.h>
+#endif
+
+#ifdef HAVE_SYSEXITS_H
+ #include <sysexits.h>
+#endif /* HAVE_SYSEXITS_H */
+
+#ifdef HAVE_SYS_LOADAVG_H
+ #include <sys/loadavg.h>
+#endif
+
+#ifdef HAVE_UTIME_H
+ #include <utime.h>
+#endif /* HAVE_UTIME_H */
+
+#ifdef HAVE_SYS_UTIME_H
+ #include <sys/utime.h>
+ #define HAVE_UTIME_H /* pretend we do for the rest of this file */
+#endif /* HAVE_SYS_UTIME_H */
+
+#ifdef HAVE_SYS_TIMES_H
+ #include <sys/times.h>
+#endif /* HAVE_SYS_TIMES_H */
+
+#ifdef HAVE_SYS_PARAM_H
+ #include <sys/param.h>
+#endif /* HAVE_SYS_PARAM_H */
+
+#ifdef HAVE_SYS_UTSNAME_H
+ #include <sys/utsname.h>
+#endif /* HAVE_SYS_UTSNAME_H */
+
+#ifdef HAVE_DIRENT_H
+ #include <dirent.h>
+ #define NAMLEN(dirent) wcslen((dirent)->FileName)
+#else
+ #define dirent direct
+ #define NAMLEN(dirent) (dirent)->d_namlen
+ #ifdef HAVE_SYS_NDIR_H
+ #include <sys/ndir.h>
+ #endif
+ #ifdef HAVE_SYS_DIR_H
+ #include <sys/dir.h>
+ #endif
+ #ifdef HAVE_NDIR_H
+ #include <ndir.h>
+ #endif
+#endif
+
+#ifndef MAXPATHLEN
+ #if defined(PATH_MAX) && PATH_MAX > 1024
+ #define MAXPATHLEN PATH_MAX
+ #else
+ #define MAXPATHLEN 1024
+ #endif
+#endif /* MAXPATHLEN */
+
+#define WAIT_TYPE int
+#define WAIT_STATUS_INT(s) (s)
+
+/* Issue #1983: pid_t can be longer than a C long on some systems */
+#if !defined(SIZEOF_PID_T) || SIZEOF_PID_T == SIZEOF_INT
+ #define PARSE_PID "i"
+ #define PyLong_FromPid PyLong_FromLong
+ #define PyLong_AsPid PyLong_AsLong
+#elif SIZEOF_PID_T == SIZEOF_LONG
+ #define PARSE_PID "l"
+ #define PyLong_FromPid PyLong_FromLong
+ #define PyLong_AsPid PyLong_AsLong
+#elif defined(SIZEOF_LONG_LONG) && SIZEOF_PID_T == SIZEOF_LONG_LONG
+ #define PARSE_PID "L"
+ #define PyLong_FromPid PyLong_FromLongLong
+ #define PyLong_AsPid PyLong_AsLongLong
+#else
+ #error "sizeof(pid_t) is neither sizeof(int), sizeof(long) or sizeof(long long)"
+#endif /* SIZEOF_PID_T */
+
+/* Don't use the "_r" form if we don't need it (also, won't have a
+ prototype for it, at least on Solaris -- maybe others as well?). */
+#if defined(HAVE_CTERMID_R) && defined(WITH_THREAD)
+ #define USE_CTERMID_R
+#endif
+
+#if defined(HAVE_TMPNAM_R) && defined(WITH_THREAD)
+ #define USE_TMPNAM_R
+#endif
+
+/* choose the appropriate stat and fstat functions and return structs */
+#undef STAT
+#undef FSTAT
+#undef STRUCT_STAT
+#define STAT stat
+#define FSTAT fstat
+#define STRUCT_STAT struct stat
+
+#define _PyVerify_fd(A) (1) /* dummy */
+
+/* dummy version. _PyVerify_fd() is already defined in fileobject.h */
+#define _PyVerify_fd_dup2(A, B) (1)
+
+#ifndef UEFI_C_SOURCE
+/* Return a dictionary corresponding to the POSIX environment table */
+extern char **environ;
+
+static PyObject *
+convertenviron(void)
+{
+ PyObject *d;
+ char **e;
+ d = PyDict_New();
+ if (d == NULL)
+ return NULL;
+ if (environ == NULL)
+ return d;
+ /* This part ignores errors */
+ for (e = environ; *e != NULL; e++) {
+ PyObject *k;
+ PyObject *v;
+ char *p = strchr(*e, '=');
+ if (p == NULL)
+ continue;
+ k = PyUnicode_FromStringAndSize(*e, (int)(p-*e));
+ if (k == NULL) {
+ PyErr_Clear();
+ continue;
+ }
+ v = PyUnicode_FromString(p+1);
+ if (v == NULL) {
+ PyErr_Clear();
+ Py_DECREF(k);
+ continue;
+ }
+ if (PyDict_GetItem(d, k) == NULL) {
+ if (PyDict_SetItem(d, k, v) != 0)
+ PyErr_Clear();
+ }
+ Py_DECREF(k);
+ Py_DECREF(v);
+ }
+ return d;
+}
+#endif /* UEFI_C_SOURCE */
+
+/* Set a POSIX-specific error from errno, and return NULL */
+
+static PyObject *
+edk2_error(void)
+{
+ return PyErr_SetFromErrno(PyExc_OSError);
+}
+static PyObject *
+edk2_error_with_filename(char* name)
+{
+ return PyErr_SetFromErrnoWithFilename(PyExc_OSError, name);
+}
+
+
+static PyObject *
+edk2_error_with_allocated_filename(char* name)
+{
+ PyObject *rc = PyErr_SetFromErrnoWithFilename(PyExc_OSError, name);
+ PyMem_Free(name);
+ return rc;
+}
+
+/* POSIX generic methods */
+
+#ifndef UEFI_C_SOURCE
+ static PyObject *
+ edk2_fildes(PyObject *fdobj, int (*func)(int))
+ {
+ int fd;
+ int res;
+ fd = PyObject_AsFileDescriptor(fdobj);
+ if (fd < 0)
+ return NULL;
+ if (!_PyVerify_fd(fd))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ res = (*func)(fd);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+#endif /* UEFI_C_SOURCE */
+
+static PyObject *
+edk2_1str(PyObject *args, char *format, int (*func)(const char*))
+{
+ char *path1 = NULL;
+ int res;
+ if (!PyArg_ParseTuple(args, format,
+ Py_FileSystemDefaultEncoding, &path1))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = (*func)(path1);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path1);
+ PyMem_Free(path1);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+static PyObject *
+edk2_2str(PyObject *args,
+ char *format,
+ int (*func)(const char *, const char *))
+{
+ char *path1 = NULL, *path2 = NULL;
+ int res;
+ if (!PyArg_ParseTuple(args, format,
+ Py_FileSystemDefaultEncoding, &path1,
+ Py_FileSystemDefaultEncoding, &path2))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = (*func)(path1, path2);
+ Py_END_ALLOW_THREADS
+ PyMem_Free(path1);
+ PyMem_Free(path2);
+ if (res != 0)
+ /* XXX how to report both path1 and path2??? */
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(stat_result__doc__,
+"stat_result: Result from stat or lstat.\n\n\
+This object may be accessed either as a tuple of\n\
+ (mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime)\n\
+or via the attributes st_mode, st_ino, st_dev, st_nlink, st_uid, and so on.\n\
+\n\
+Posix/windows: If your platform supports st_blksize, st_blocks, st_rdev,\n\
+or st_flags, they are available as attributes only.\n\
+\n\
+See os.stat for more information.");
+
+static PyStructSequence_Field stat_result_fields[] = {
+ {"st_mode", "protection bits"},
+ //{"st_ino", "inode"},
+ //{"st_dev", "device"},
+ //{"st_nlink", "number of hard links"},
+ //{"st_uid", "user ID of owner"},
+ //{"st_gid", "group ID of owner"},
+ {"st_size", "total size, in bytes"},
+ /* The NULL is replaced with PyStructSequence_UnnamedField later. */
+ {NULL, "integer time of last access"},
+ {NULL, "integer time of last modification"},
+ {NULL, "integer time of last change"},
+ {"st_atime", "time of last access"},
+ {"st_mtime", "time of last modification"},
+ {"st_ctime", "time of last change"},
+#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
+ {"st_blksize", "blocksize for filesystem I/O"},
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
+ {"st_blocks", "number of blocks allocated"},
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_RDEV
+ {"st_rdev", "device type (if inode device)"},
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_FLAGS
+ {"st_flags", "user defined flags for file"},
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_GEN
+ {"st_gen", "generation number"},
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
+ {"st_birthtime", "time of creation"},
+#endif
+ {0}
+};
+
+#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
+#define ST_BLKSIZE_IDX 8
+#else
+#define ST_BLKSIZE_IDX 12
+#endif
+
+#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
+#define ST_BLOCKS_IDX (ST_BLKSIZE_IDX+1)
+#else
+#define ST_BLOCKS_IDX ST_BLKSIZE_IDX
+#endif
+
+#ifdef HAVE_STRUCT_STAT_ST_RDEV
+#define ST_RDEV_IDX (ST_BLOCKS_IDX+1)
+#else
+#define ST_RDEV_IDX ST_BLOCKS_IDX
+#endif
+
+#ifdef HAVE_STRUCT_STAT_ST_FLAGS
+#define ST_FLAGS_IDX (ST_RDEV_IDX+1)
+#else
+#define ST_FLAGS_IDX ST_RDEV_IDX
+#endif
+
+#ifdef HAVE_STRUCT_STAT_ST_GEN
+#define ST_GEN_IDX (ST_FLAGS_IDX+1)
+#else
+#define ST_GEN_IDX ST_FLAGS_IDX
+#endif
+
+#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
+#define ST_BIRTHTIME_IDX (ST_GEN_IDX+1)
+#else
+#define ST_BIRTHTIME_IDX ST_GEN_IDX
+#endif
+
+static PyStructSequence_Desc stat_result_desc = {
+ "stat_result", /* name */
+ stat_result__doc__, /* doc */
+ stat_result_fields,
+ 10
+};
+
+#ifndef UEFI_C_SOURCE /* Not in UEFI */
+PyDoc_STRVAR(statvfs_result__doc__,
+"statvfs_result: Result from statvfs or fstatvfs.\n\n\
+This object may be accessed either as a tuple of\n\
+ (bsize, frsize, blocks, bfree, bavail, files, ffree, favail, flag, namemax),\n\
+or via the attributes f_bsize, f_frsize, f_blocks, f_bfree, and so on.\n\
+\n\
+See os.statvfs for more information.");
+
+static PyStructSequence_Field statvfs_result_fields[] = {
+ {"f_bsize", },
+ {"f_frsize", },
+ {"f_blocks", },
+ {"f_bfree", },
+ {"f_bavail", },
+ {"f_files", },
+ {"f_ffree", },
+ {"f_favail", },
+ {"f_flag", },
+ {"f_namemax",},
+ {0}
+};
+
+static PyStructSequence_Desc statvfs_result_desc = {
+ "statvfs_result", /* name */
+ statvfs_result__doc__, /* doc */
+ statvfs_result_fields,
+ 10
+};
+
+static PyTypeObject StatVFSResultType;
+#endif
+
+static int initialized;
+static PyTypeObject StatResultType;
+static newfunc structseq_new;
+
+static PyObject *
+statresult_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyStructSequence *result;
+ int i;
+
+ result = (PyStructSequence*)structseq_new(type, args, kwds);
+ if (!result)
+ return NULL;
+ /* If we have been initialized from a tuple,
+ st_?time might be set to None. Initialize it
+ from the int slots. */
+    for (i = 2; i <= 4; i++) {        /* int time slots; the float copies live at i+3 */
+ if (result->ob_item[i+3] == Py_None) {
+ Py_DECREF(Py_None);
+ Py_INCREF(result->ob_item[i]);
+ result->ob_item[i+3] = result->ob_item[i];
+ }
+ }
+ return (PyObject*)result;
+}
+
+
+
+/* If true, st_?time is float. */
+#if defined(UEFI_C_SOURCE)
+ static int _stat_float_times = 0;
+#else
+ static int _stat_float_times = 1;
+
+PyDoc_STRVAR(stat_float_times__doc__,
+"stat_float_times([newval]) -> oldval\n\n\
+Determine whether os.[lf]stat represents time stamps as float objects.\n\
+If newval is True, future calls to stat() return floats, if it is False,\n\
+future calls return ints. \n\
+If newval is omitted, return the current setting.\n");
+
+static PyObject*
+stat_float_times(PyObject* self, PyObject *args)
+{
+ int newval = -1;
+
+ if (!PyArg_ParseTuple(args, "|i:stat_float_times", &newval))
+ return NULL;
+ if (newval == -1)
+ /* Return old value */
+ return PyBool_FromLong(_stat_float_times);
+ _stat_float_times = newval;
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* UEFI_C_SOURCE */
+
+static void
+fill_time(PyObject *v, int index, time_t sec, unsigned long nsec)
+{
+ PyObject *fval,*ival;
+#if SIZEOF_TIME_T > SIZEOF_LONG
+ ival = PyLong_FromLongLong((PY_LONG_LONG)sec);
+#else
+ ival = PyLong_FromLong((long)sec);
+#endif
+ if (!ival)
+ return;
+ if (_stat_float_times) {
+ fval = PyFloat_FromDouble(sec + 1e-9*nsec);
+ } else {
+ fval = ival;
+ Py_INCREF(fval);
+ }
+ PyStructSequence_SET_ITEM(v, index, ival);
+ PyStructSequence_SET_ITEM(v, index+3, fval);
+}
+
+/* pack a system stat C structure into the Python stat tuple
+ (used by edk2_stat() and edk2_fstat()) */
+static PyObject*
+_pystat_fromstructstat(STRUCT_STAT *st)
+{
+ unsigned long ansec, mnsec, cnsec;
+ PyObject *v = PyStructSequence_New(&StatResultType);
+ if (v == NULL)
+ return NULL;
+
+ PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long)st->st_mode));
+ PyStructSequence_SET_ITEM(v, 1,
+ PyLong_FromLongLong((PY_LONG_LONG)st->st_size));
+
+ ansec = mnsec = cnsec = 0;
+ /* The index used by fill_time is the index of the integer time.
+ fill_time will add 3 to the index to get the floating time index.
+ */
+ fill_time(v, 2, st->st_atime, ansec);
+ fill_time(v, 3, st->st_mtime, mnsec);
+ fill_time(v, 4, st->st_mtime, cnsec);
+
+#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
+ PyStructSequence_SET_ITEM(v, ST_BLKSIZE_IDX,
+ PyLong_FromLong((long)st->st_blksize));
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
+ PyStructSequence_SET_ITEM(v, ST_BLOCKS_IDX,
+ PyLong_FromLong((long)st->st_blocks));
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_RDEV
+ PyStructSequence_SET_ITEM(v, ST_RDEV_IDX,
+ PyLong_FromLong((long)st->st_rdev));
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_GEN
+ PyStructSequence_SET_ITEM(v, ST_GEN_IDX,
+ PyLong_FromLong((long)st->st_gen));
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
+ {
+ PyObject *val;
+ unsigned long bsec,bnsec;
+ bsec = (long)st->st_birthtime;
+#ifdef HAVE_STAT_TV_NSEC2
+ bnsec = st->st_birthtimespec.tv_nsec;
+#else
+ bnsec = 0;
+#endif
+ if (_stat_float_times) {
+ val = PyFloat_FromDouble(bsec + 1e-9*bnsec);
+ } else {
+ val = PyLong_FromLong((long)bsec);
+ }
+ PyStructSequence_SET_ITEM(v, ST_BIRTHTIME_IDX,
+ val);
+ }
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_FLAGS
+ PyStructSequence_SET_ITEM(v, ST_FLAGS_IDX,
+ PyLong_FromLong((long)st->st_flags));
+#endif
+
+ if (PyErr_Occurred()) {
+ Py_DECREF(v);
+ return NULL;
+ }
+
+ return v;
+}
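
_pystat_fromstructstat() above fills a PyStructSequence whose integer time stamps sit at indices 2-4 and whose float mirrors sit three slots later, as noted before the fill_time() calls. A minimal sketch of creating and filling a struct sequence with a fixed index layout of that kind (the pair_result type and its field names are illustrative):

    #include <Python.h>

    static PyStructSequence_Field pair_fields[] = {
        {"sec",  "integer seconds"},
        {"fsec", "floating-point seconds"},
        {NULL}
    };

    static PyStructSequence_Desc pair_desc = {
        "pair_result", "example result", pair_fields, 2
    };

    static PyTypeObject PairResultType;

    static PyObject *make_pair(long sec)
    {
        PyObject *v = PyStructSequence_New(&PairResultType);
        if (v == NULL)
            return NULL;
        PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong(sec));
        PyStructSequence_SET_ITEM(v, 1, PyFloat_FromDouble((double)sec));
        if (PyErr_Occurred()) {
            Py_DECREF(v);
            return NULL;
        }
        return v;
    }

    int main(void)
    {
        Py_Initialize();
        if (PyStructSequence_InitType2(&PairResultType, &pair_desc) < 0)
            return 1;
        PyObject *p = make_pair(42);
        if (p != NULL) {
            PyObject_Print(p, stdout, 0);
            Py_DECREF(p);
        }
        Py_Finalize();
        return 0;
    }
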
+
+static PyObject *
+edk2_do_stat(PyObject *self, PyObject *args,
+ char *format,
+ int (*statfunc)(const char *, STRUCT_STAT *),
+ char *wformat,
+ int (*wstatfunc)(const Py_UNICODE *, STRUCT_STAT *))
+{
+ STRUCT_STAT st;
+ char *path = NULL; /* pass this to stat; do not free() it */
+ char *pathfree = NULL; /* this memory must be free'd */
+ int res;
+ PyObject *result;
+
+ if (!PyArg_ParseTuple(args, format,
+ Py_FileSystemDefaultEncoding, &path))
+ return NULL;
+ pathfree = path;
+
+ Py_BEGIN_ALLOW_THREADS
+ res = (*statfunc)(path, &st);
+ Py_END_ALLOW_THREADS
+
+ if (res != 0) {
+ result = edk2_error_with_filename(pathfree);
+ }
+ else
+ result = _pystat_fromstructstat(&st);
+
+ PyMem_Free(pathfree);
+ return result;
+}
+
+/* POSIX methods */
+
+PyDoc_STRVAR(edk2_access__doc__,
+"access(path, mode) -> True if granted, False otherwise\n\n\
+Use the real uid/gid to test for access to a path. Note that most\n\
+operations will use the effective uid/gid, therefore this routine can\n\
+be used in a suid/sgid environment to test if the invoking user has the\n\
+specified access to the path. The mode argument can be F_OK to test\n\
+existence, or the inclusive-OR of R_OK, W_OK, and X_OK.");
+
+static PyObject *
+edk2_access(PyObject *self, PyObject *args)
+{
+ char *path;
+ int mode;
+
+ int res;
+ if (!PyArg_ParseTuple(args, "eti:access",
+ Py_FileSystemDefaultEncoding, &path, &mode))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = access(path, mode);
+ Py_END_ALLOW_THREADS
+ PyMem_Free(path);
+ return PyBool_FromLong(res == 0);
+}
+
+#ifndef F_OK
+ #define F_OK 0
+#endif
+#ifndef R_OK
+ #define R_OK 4
+#endif
+#ifndef W_OK
+ #define W_OK 2
+#endif
+#ifndef X_OK
+ #define X_OK 1
+#endif
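
edk2_access() maps directly onto the C library access() call, using the F_OK/R_OK/W_OK/X_OK masks defined above when the headers do not provide them. A minimal sketch of the underlying call (the file name is illustrative):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "example.txt";

        if (access(path, F_OK) == 0)
            printf("%s exists\n", path);
        if (access(path, R_OK | W_OK) == 0)
            printf("%s is readable and writable\n", path);
        return 0;
    }
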
+
+PyDoc_STRVAR(edk2_chdir__doc__,
+"chdir(path)\n\n\
+Change the current working directory to the specified path.");
+
+static PyObject *
+edk2_chdir(PyObject *self, PyObject *args)
+{
+ return edk2_1str(args, "et:chdir", chdir);
+}
+
+PyDoc_STRVAR(edk2_chmod__doc__,
+"chmod(path, mode)\n\n\
+Change the access permissions of a file.");
+
+static PyObject *
+edk2_chmod(PyObject *self, PyObject *args)
+{
+ char *path = NULL;
+ int i;
+ int res;
+ if (!PyArg_ParseTuple(args, "eti:chmod", Py_FileSystemDefaultEncoding,
+ &path, &i))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = chmod(path, i);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+#ifdef HAVE_FCHMOD
+PyDoc_STRVAR(edk2_fchmod__doc__,
+"fchmod(fd, mode)\n\n\
+Change the access permissions of the file given by file\n\
+descriptor fd.");
+
+static PyObject *
+edk2_fchmod(PyObject *self, PyObject *args)
+{
+ int fd, mode, res;
+ if (!PyArg_ParseTuple(args, "ii:fchmod", &fd, &mode))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = fchmod(fd, mode);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_RETURN_NONE;
+}
+#endif /* HAVE_FCHMOD */
+
+#ifdef HAVE_LCHMOD
+PyDoc_STRVAR(edk2_lchmod__doc__,
+"lchmod(path, mode)\n\n\
+Change the access permissions of a file. If path is a symlink, this\n\
+affects the link itself rather than the target.");
+
+static PyObject *
+edk2_lchmod(PyObject *self, PyObject *args)
+{
+ char *path = NULL;
+ int i;
+ int res;
+ if (!PyArg_ParseTuple(args, "eti:lchmod", Py_FileSystemDefaultEncoding,
+ &path, &i))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = lchmod(path, i);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_RETURN_NONE;
+}
+#endif /* HAVE_LCHMOD */
+
+
+#ifdef HAVE_CHFLAGS
+PyDoc_STRVAR(edk2_chflags__doc__,
+"chflags(path, flags)\n\n\
+Set file flags.");
+
+static PyObject *
+edk2_chflags(PyObject *self, PyObject *args)
+{
+ char *path;
+ unsigned long flags;
+ int res;
+ if (!PyArg_ParseTuple(args, "etk:chflags",
+ Py_FileSystemDefaultEncoding, &path, &flags))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = chflags(path, flags);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_CHFLAGS */
+
+#ifdef HAVE_LCHFLAGS
+PyDoc_STRVAR(edk2_lchflags__doc__,
+"lchflags(path, flags)\n\n\
+Set file flags.\n\
+This function will not follow symbolic links.");
+
+static PyObject *
+edk2_lchflags(PyObject *self, PyObject *args)
+{
+ char *path;
+ unsigned long flags;
+ int res;
+ if (!PyArg_ParseTuple(args, "etk:lchflags",
+ Py_FileSystemDefaultEncoding, &path, &flags))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = lchflags(path, flags);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_LCHFLAGS */
+
+#ifdef HAVE_CHROOT
+PyDoc_STRVAR(edk2_chroot__doc__,
+"chroot(path)\n\n\
+Change root directory to path.");
+
+static PyObject *
+edk2_chroot(PyObject *self, PyObject *args)
+{
+ return edk2_1str(args, "et:chroot", chroot);
+}
+#endif
+
+#ifdef HAVE_FSYNC
+PyDoc_STRVAR(edk2_fsync__doc__,
+"fsync(fildes)\n\n\
+force write of file with filedescriptor to disk.");
+
+static PyObject *
+edk2_fsync(PyObject *self, PyObject *fdobj)
+{
+ return edk2_fildes(fdobj, fsync);
+}
+#endif /* HAVE_FSYNC */
+
+#ifdef HAVE_FDATASYNC
+
+#ifdef __hpux
+extern int fdatasync(int); /* On HP-UX, in libc but not in unistd.h */
+#endif
+
+PyDoc_STRVAR(edk2_fdatasync__doc__,
+"fdatasync(fildes)\n\n\
+force write of file with filedescriptor to disk.\n\
+ does not force update of metadata.");
+
+static PyObject *
+edk2_fdatasync(PyObject *self, PyObject *fdobj)
+{
+ return edk2_fildes(fdobj, fdatasync);
+}
+#endif /* HAVE_FDATASYNC */
+
+
+#ifdef HAVE_CHOWN
+PyDoc_STRVAR(edk2_chown__doc__,
+"chown(path, uid, gid)\n\n\
+Change the owner and group id of path to the numeric uid and gid.");
+
+static PyObject *
+edk2_chown(PyObject *self, PyObject *args)
+{
+ char *path = NULL;
+ long uid, gid;
+ int res;
+ if (!PyArg_ParseTuple(args, "etll:chown",
+ Py_FileSystemDefaultEncoding, &path,
+ &uid, &gid))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = chown(path, (uid_t) uid, (gid_t) gid);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_CHOWN */
+
+#ifdef HAVE_FCHOWN
+PyDoc_STRVAR(edk2_fchown__doc__,
+"fchown(fd, uid, gid)\n\n\
+Change the owner and group id of the file given by file descriptor\n\
+fd to the numeric uid and gid.");
+
+static PyObject *
+edk2_fchown(PyObject *self, PyObject *args)
+{
+ int fd;
+ long uid, gid;
+ int res;
+ if (!PyArg_ParseTuple(args, "ill:chown", &fd, &uid, &gid))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = fchown(fd, (uid_t) uid, (gid_t) gid);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_RETURN_NONE;
+}
+#endif /* HAVE_FCHOWN */
+
+#ifdef HAVE_LCHOWN
+PyDoc_STRVAR(edk2_lchown__doc__,
+"lchown(path, uid, gid)\n\n\
+Change the owner and group id of path to the numeric uid and gid.\n\
+This function will not follow symbolic links.");
+
+static PyObject *
+edk2_lchown(PyObject *self, PyObject *args)
+{
+ char *path = NULL;
+ long uid, gid;
+ int res;
+ if (!PyArg_ParseTuple(args, "etll:lchown",
+ Py_FileSystemDefaultEncoding, &path,
+ &uid, &gid))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = lchown(path, (uid_t) uid, (gid_t) gid);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_LCHOWN */
+
+
+#ifdef HAVE_GETCWD
+PyDoc_STRVAR(edk2_getcwd__doc__,
+"getcwd() -> path\n\n\
+Return a string representing the current working directory.");
+
+static PyObject *
+edk2_getcwd(PyObject *self, PyObject *noargs)
+{
+ int bufsize_incr = 1024;
+ int bufsize = 0;
+ char *tmpbuf = NULL;
+ char *res = NULL;
+ PyObject *dynamic_return;
+
+ Py_BEGIN_ALLOW_THREADS
+ do {
+ bufsize = bufsize + bufsize_incr;
+ tmpbuf = malloc(bufsize);
+ if (tmpbuf == NULL) {
+ break;
+ }
+ res = getcwd(tmpbuf, bufsize);
+ if (res == NULL) {
+ free(tmpbuf);
+ }
+ } while ((res == NULL) && (errno == ERANGE));
+ Py_END_ALLOW_THREADS
+
+ if (res == NULL)
+ return edk2_error();
+
+ dynamic_return = PyUnicode_FromString(tmpbuf);
+ free(tmpbuf);
+
+ return dynamic_return;
+}
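
edk2_getcwd() grows its buffer by a fixed increment and retries while getcwd() keeps failing with ERANGE. A standalone sketch of the same grow-and-retry pattern, without the Python API (names are illustrative):

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Return a malloc'd copy of the current working directory, or NULL. */
    static char *get_cwd_dynamic(void)
    {
        size_t size = 1024;
        for (;;) {
            char *buf = malloc(size);
            if (buf == NULL)
                return NULL;
            if (getcwd(buf, size) != NULL)
                return buf;                  /* success                          */
            free(buf);
            if (errno != ERANGE)
                return NULL;                 /* real error, not a short buffer   */
            size += 1024;                    /* buffer too small: grow and retry */
        }
    }

    int main(void)
    {
        char *cwd = get_cwd_dynamic();
        if (cwd != NULL) {
            printf("%s\n", cwd);
            free(cwd);
        }
        return 0;
    }
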
+
+#ifdef Py_USING_UNICODE
+PyDoc_STRVAR(edk2_getcwdu__doc__,
+"getcwdu() -> path\n\n\
+Return a unicode string representing the current working directory.");
+
+static PyObject *
+edk2_getcwdu(PyObject *self, PyObject *noargs)
+{
+ char buf[1026];
+ char *res;
+
+ Py_BEGIN_ALLOW_THREADS
+ res = getcwd(buf, sizeof buf);
+ Py_END_ALLOW_THREADS
+ if (res == NULL)
+ return edk2_error();
+ return PyUnicode_Decode(buf, strlen(buf), Py_FileSystemDefaultEncoding,"strict");
+}
+#endif /* Py_USING_UNICODE */
+#endif /* HAVE_GETCWD */
+
+
+PyDoc_STRVAR(edk2_listdir__doc__,
+"listdir(path) -> list_of_strings\n\n\
+Return a list containing the names of the entries in the directory.\n\
+\n\
+ path: path of directory to list\n\
+\n\
+The list is in arbitrary order. It does not include the special\n\
+entries '.' and '..' even if they are present in the directory.");
+
+static PyObject *
+edk2_listdir(PyObject *self, PyObject *args)
+{
+ /* XXX Should redo this putting the (now four) versions of opendir
+ in separate files instead of having them all here... */
+
+ char *name = NULL;
+ char *MBname;
+ PyObject *d, *v;
+ DIR *dirp;
+ struct dirent *ep;
+ int arg_is_unicode = 1;
+
+ errno = 0;
+ if (!PyArg_ParseTuple(args, "U:listdir", &v)) {
+ arg_is_unicode = 0;
+ PyErr_Clear();
+ }
+ if (!PyArg_ParseTuple(args, "et:listdir", Py_FileSystemDefaultEncoding, &name))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ dirp = opendir(name);
+ Py_END_ALLOW_THREADS
+ if (dirp == NULL) {
+ return edk2_error_with_allocated_filename(name);
+ }
+ if ((d = PyList_New(0)) == NULL) {
+ Py_BEGIN_ALLOW_THREADS
+ closedir(dirp);
+ Py_END_ALLOW_THREADS
+ PyMem_Free(name);
+ return NULL;
+ }
+ if((MBname = malloc(NAME_MAX)) == NULL) {
+ Py_BEGIN_ALLOW_THREADS
+ closedir(dirp);
+ Py_END_ALLOW_THREADS
+ Py_DECREF(d);
+ PyMem_Free(name);
+ return NULL;
+ }
+ for (;;) {
+ errno = 0;
+ Py_BEGIN_ALLOW_THREADS
+ ep = readdir(dirp);
+ Py_END_ALLOW_THREADS
+ if (ep == NULL) {
+ if ((errno == 0) || (errno == EISDIR)) {
+ break;
+ } else {
+ Py_BEGIN_ALLOW_THREADS
+ closedir(dirp);
+ Py_END_ALLOW_THREADS
+ Py_DECREF(d);
+ return edk2_error_with_allocated_filename(name);
+ }
+ }
+ if (ep->FileName[0] == L'.' &&
+ (NAMLEN(ep) == 1 ||
+ (ep->FileName[1] == L'.' && NAMLEN(ep) == 2)))
+ continue;
+ if(wcstombs(MBname, ep->FileName, NAME_MAX) == -1) {
+ free(MBname);
+ Py_BEGIN_ALLOW_THREADS
+ closedir(dirp);
+ Py_END_ALLOW_THREADS
+ Py_DECREF(d);
+ PyMem_Free(name);
+ return NULL;
+ }
+ v = PyUnicode_FromStringAndSize(MBname, strlen(MBname));
+ if (v == NULL) {
+ Py_DECREF(d);
+ d = NULL;
+ break;
+ }
+#ifdef Py_USING_UNICODE
+ if (arg_is_unicode) {
+ PyObject *w;
+
+ w = PyUnicode_FromEncodedObject(v,
+ Py_FileSystemDefaultEncoding,
+ "strict");
+ if (w != NULL) {
+ Py_DECREF(v);
+ v = w;
+ }
+ else {
+ /* fall back to the original byte string, as
+ discussed in patch #683592 */
+ PyErr_Clear();
+ }
+ }
+#endif
+ if (PyList_Append(d, v) != 0) {
+ Py_DECREF(v);
+ Py_DECREF(d);
+ d = NULL;
+ break;
+ }
+ Py_DECREF(v);
+ }
+ Py_BEGIN_ALLOW_THREADS
+ closedir(dirp);
+ Py_END_ALLOW_THREADS
+ PyMem_Free(name);
+ if(MBname != NULL) {
+ free(MBname);
+ }
+
+ return d;
+
+} /* end of edk2_listdir */
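
edk2_listdir() converts each UEFI dirent's wide-character FileName to a multibyte string with wcstombs() before handing it to PyUnicode_FromStringAndSize(). A minimal sketch of that conversion step in isolation (the sample name and buffer size are illustrative):

    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    int main(void)
    {
        const wchar_t *wide_name = L"Example.txt";   /* stands in for the dirent FileName */
        char mb_name[256];

        /* wcstombs returns (size_t)-1 if a character cannot be converted. */
        if (wcstombs(mb_name, wide_name, sizeof mb_name) == (size_t)-1) {
            fprintf(stderr, "conversion failed\n");
            return 1;
        }
        printf("%s\n", mb_name);
        return 0;
    }
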
+
+PyDoc_STRVAR(edk2_mkdir__doc__,
+"mkdir(path [, mode=0777])\n\n\
+Create a directory.");
+
+static PyObject *
+edk2_mkdir(PyObject *self, PyObject *args)
+{
+ int res;
+ char *path = NULL;
+ int mode = 0777;
+
+ if (!PyArg_ParseTuple(args, "et|i:mkdir",
+ Py_FileSystemDefaultEncoding, &path, &mode))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = mkdir(path, mode);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error_with_allocated_filename(path);
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+/* sys/resource.h is needed for at least: wait3(), wait4(), broken nice. */
+#if defined(HAVE_SYS_RESOURCE_H)
+#include <sys/resource.h>
+#endif
+
+
+#ifdef HAVE_NICE
+PyDoc_STRVAR(edk2_nice__doc__,
+"nice(inc) -> new_priority\n\n\
+Decrease the priority of process by inc and return the new priority.");
+
+static PyObject *
+edk2_nice(PyObject *self, PyObject *args)
+{
+ int increment, value;
+
+ if (!PyArg_ParseTuple(args, "i:nice", &increment))
+ return NULL;
+
+ /* There are two flavours of 'nice': one that returns the new
+ priority (as required by almost all standards out there) and the
+       Linux/FreeBSD/BSDI one, which returns '0' on success and advises
+ the use of getpriority() to get the new priority.
+
+ If we are of the nice family that returns the new priority, we
+ need to clear errno before the call, and check if errno is filled
+       before calling edk2_error() on a return value of -1, because the
+ -1 may be the actual new priority! */
+
+ errno = 0;
+ value = nice(increment);
+#if defined(HAVE_BROKEN_NICE) && defined(HAVE_GETPRIORITY)
+ if (value == 0)
+ value = getpriority(PRIO_PROCESS, 0);
+#endif
+ if (value == -1 && errno != 0)
+ /* either nice() or getpriority() returned an error */
+ return edk2_error();
+ return PyLong_FromLong((long) value);
+}
+#endif /* HAVE_NICE */
+
+PyDoc_STRVAR(edk2_rename__doc__,
+"rename(old, new)\n\n\
+Rename a file or directory.");
+
+static PyObject *
+edk2_rename(PyObject *self, PyObject *args)
+{
+ return edk2_2str(args, "etet:rename", rename);
+}
+
+
+PyDoc_STRVAR(edk2_rmdir__doc__,
+"rmdir(path)\n\n\
+Remove a directory.");
+
+static PyObject *
+edk2_rmdir(PyObject *self, PyObject *args)
+{
+ return edk2_1str(args, "et:rmdir", rmdir);
+}
+
+
+PyDoc_STRVAR(edk2_stat__doc__,
+"stat(path) -> stat result\n\n\
+Perform a stat system call on the given path.");
+
+static PyObject *
+edk2_stat(PyObject *self, PyObject *args)
+{
+ return edk2_do_stat(self, args, "et:stat", STAT, NULL, NULL);
+}
+
+
+#ifdef HAVE_SYSTEM
+PyDoc_STRVAR(edk2_system__doc__,
+"system(command) -> exit_status\n\n\
+Execute the command (a string) in a subshell.");
+
+static PyObject *
+edk2_system(PyObject *self, PyObject *args)
+{
+ char *command;
+ long sts;
+ if (!PyArg_ParseTuple(args, "s:system", &command))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ sts = system(command);
+ Py_END_ALLOW_THREADS
+ return PyLong_FromLong(sts);
+}
+#endif
+
+
+PyDoc_STRVAR(edk2_umask__doc__,
+"umask(new_mask) -> old_mask\n\n\
+Set the current numeric umask and return the previous umask.");
+
+static PyObject *
+edk2_umask(PyObject *self, PyObject *args)
+{
+ int i;
+ if (!PyArg_ParseTuple(args, "i:umask", &i))
+ return NULL;
+ i = (int)umask(i);
+ if (i < 0)
+ return edk2_error();
+ return PyLong_FromLong((long)i);
+}
+
+
+PyDoc_STRVAR(edk2_unlink__doc__,
+"unlink(path)\n\n\
+Remove a file (same as remove(path)).");
+
+PyDoc_STRVAR(edk2_remove__doc__,
+"remove(path)\n\n\
+Remove a file (same as unlink(path)).");
+
+static PyObject *
+edk2_unlink(PyObject *self, PyObject *args)
+{
+ return edk2_1str(args, "et:remove", unlink);
+}
+
+
+static int
+extract_time(PyObject *t, time_t* sec, long* usec)
+{
+ time_t intval;
+ if (PyFloat_Check(t)) {
+ double tval = PyFloat_AsDouble(t);
+ PyObject *intobj = PyNumber_Long(t);
+ if (!intobj)
+ return -1;
+#if SIZEOF_TIME_T > SIZEOF_LONG
+        intval = PyLong_AsUnsignedLongLongMask(intobj);
+#else
+ intval = PyLong_AsLong(intobj);
+#endif
+ Py_DECREF(intobj);
+ if (intval == -1 && PyErr_Occurred())
+ return -1;
+ *sec = intval;
+ *usec = (long)((tval - intval) * 1e6); /* can't exceed 1000000 */
+ if (*usec < 0)
+ /* If rounding gave us a negative number,
+ truncate. */
+ *usec = 0;
+ return 0;
+ }
+#if SIZEOF_TIME_T > SIZEOF_LONG
+    intval = PyLong_AsUnsignedLongLongMask(t);
+#else
+ intval = PyLong_AsLong(t);
+#endif
+ if (intval == -1 && PyErr_Occurred())
+ return -1;
+ *sec = intval;
+ *usec = 0;
+ return 0;
+}
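
extract_time() accepts either an integer or a float timestamp and splits it into whole seconds plus a non-negative microsecond remainder. A minimal numeric sketch of the same split, without the Python object handling (names and the sample value are illustrative):

    #include <stdio.h>
    #include <time.h>

    static void split_time(double tval, time_t *sec, long *usec)
    {
        *sec  = (time_t)tval;
        *usec = (long)((tval - (double)*sec) * 1e6);
        if (*usec < 0)          /* rounding can push the remainder slightly negative */
            *usec = 0;
    }

    int main(void)
    {
        time_t sec;
        long   usec;
        split_time(1630600000.25, &sec, &usec);
        printf("%lld sec, %ld usec\n", (long long)sec, usec);  /* 1630600000 sec, 250000 usec */
        return 0;
    }
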
+
+PyDoc_STRVAR(edk2_utime__doc__,
+"utime(path, (atime, mtime))\n\
+utime(path, None)\n\n\
+Set the access and modified time of the file to the given values. If the\n\
+second form is used, set the access and modified times to the current time.");
+
+static PyObject *
+edk2_utime(PyObject *self, PyObject *args)
+{
+ char *path = NULL;
+ time_t atime, mtime;
+ long ausec, musec;
+ int res;
+ PyObject* arg;
+
+#if defined(HAVE_UTIMES)
+ struct timeval buf[2];
+#define ATIME buf[0].tv_sec
+#define MTIME buf[1].tv_sec
+#elif defined(HAVE_UTIME_H)
+/* XXX should define struct utimbuf instead, above */
+ struct utimbuf buf;
+#define ATIME buf.actime
+#define MTIME buf.modtime
+#define UTIME_ARG &buf
+#else /* HAVE_UTIMES */
+ time_t buf[2];
+#define ATIME buf[0]
+#define MTIME buf[1]
+#define UTIME_ARG buf
+#endif /* HAVE_UTIMES */
+
+
+ if (!PyArg_ParseTuple(args, "etO:utime",
+ Py_FileSystemDefaultEncoding, &path, &arg))
+ return NULL;
+ if (arg == Py_None) {
+ /* optional time values not given */
+ Py_BEGIN_ALLOW_THREADS
+ res = utime(path, NULL);
+ Py_END_ALLOW_THREADS
+ }
+ else if (!PyTuple_Check(arg) || PyTuple_Size(arg) != 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "utime() arg 2 must be a tuple (atime, mtime)");
+ PyMem_Free(path);
+ return NULL;
+ }
+ else {
+ if (extract_time(PyTuple_GET_ITEM(arg, 0),
+ &atime, &ausec) == -1) {
+ PyMem_Free(path);
+ return NULL;
+ }
+ if (extract_time(PyTuple_GET_ITEM(arg, 1),
+ &mtime, &musec) == -1) {
+ PyMem_Free(path);
+ return NULL;
+ }
+ ATIME = atime;
+ MTIME = mtime;
+#ifdef HAVE_UTIMES
+ buf[0].tv_usec = ausec;
+ buf[1].tv_usec = musec;
+ Py_BEGIN_ALLOW_THREADS
+ res = utimes(path, buf);
+ Py_END_ALLOW_THREADS
+#else
+ Py_BEGIN_ALLOW_THREADS
+ res = utime(path, UTIME_ARG);
+ Py_END_ALLOW_THREADS
+#endif /* HAVE_UTIMES */
+ }
+ if (res < 0) {
+ return edk2_error_with_allocated_filename(path);
+ }
+ PyMem_Free(path);
+ Py_INCREF(Py_None);
+ return Py_None;
+#undef UTIME_ARG
+#undef ATIME
+#undef MTIME
+}
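
When HAVE_UTIMES is not available, edk2_utime() falls back to utime() with a struct utimbuf holding whole-second access and modification times. A minimal sketch of that fallback path on a POSIX-style libc (the file name and times are illustrative):

    #include <stdio.h>
    #include <time.h>
    #include <utime.h>

    int main(void)
    {
        struct utimbuf times;
        time_t now = time(NULL);

        times.actime  = now;         /* access time       */
        times.modtime = now - 60;    /* modification time */

        if (utime("example.txt", &times) != 0) {
            perror("utime");
            return 1;
        }
        return 0;
    }
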
+
+
+/* Process operations */
+
+PyDoc_STRVAR(edk2__exit__doc__,
+"_exit(status)\n\n\
+Exit to the system with specified status, without normal exit processing.");
+
+static PyObject *
+edk2__exit(PyObject *self, PyObject *args)
+{
+ int sts;
+ if (!PyArg_ParseTuple(args, "i:_exit", &sts))
+ return NULL;
+ _Exit(sts);
+ return NULL; /* Make gcc -Wall happy */
+}
+
+#if defined(HAVE_EXECV) || defined(HAVE_SPAWNV)
+static void
+free_string_array(char **array, Py_ssize_t count)
+{
+ Py_ssize_t i;
+ for (i = 0; i < count; i++)
+ PyMem_Free(array[i]);
+ PyMem_DEL(array);
+}
+#endif
+
+
+#ifdef HAVE_EXECV
+PyDoc_STRVAR(edk2_execv__doc__,
+"execv(path, args)\n\n\
+Execute an executable path with arguments, replacing current process.\n\
+\n\
+ path: path of executable file\n\
+ args: tuple or list of strings");
+
+static PyObject *
+edk2_execv(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv;
+ char **argvlist;
+ Py_ssize_t i, argc;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+
+ /* execv has two arguments: (path, argv), where
+ argv is a list or tuple of strings. */
+
+ if (!PyArg_ParseTuple(args, "etO:execv",
+ Py_FileSystemDefaultEncoding,
+ &path, &argv))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError, "execv() arg 2 must be a tuple or list");
+ PyMem_Free(path);
+ return NULL;
+ }
+ if (argc < 1) {
+ PyErr_SetString(PyExc_ValueError, "execv() arg 2 must not be empty");
+ PyMem_Free(path);
+ return NULL;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyMem_Free(path);
+ return PyErr_NoMemory();
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i), "et",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i])) {
+ free_string_array(argvlist, i);
+ PyErr_SetString(PyExc_TypeError,
+ "execv() arg 2 must contain only strings");
+ PyMem_Free(path);
+ return NULL;
+
+ }
+ }
+ argvlist[argc] = NULL;
+
+ execv(path, argvlist);
+
+ /* If we get here it's definitely an error */
+
+ free_string_array(argvlist, argc);
+ PyMem_Free(path);
+ return edk2_error();
+}
+
+
+PyDoc_STRVAR(edk2_execve__doc__,
+"execve(path, args, env)\n\n\
+Execute a path with arguments and environment, replacing current process.\n\
+\n\
+ path: path of executable file\n\
+ args: tuple or list of arguments\n\
+ env: dictionary of strings mapping to strings");
+
+static PyObject *
+edk2_execve(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv, *env;
+ char **argvlist;
+ char **envlist;
+ PyObject *key, *val, *keys=NULL, *vals=NULL;
+ Py_ssize_t i, pos, argc, envc;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+ Py_ssize_t lastarg = 0;
+
+ /* execve has three arguments: (path, argv, env), where
+ argv is a list or tuple of strings and env is a dictionary
+ like posix.environ. */
+
+ if (!PyArg_ParseTuple(args, "etOO:execve",
+ Py_FileSystemDefaultEncoding,
+ &path, &argv, &env))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "execve() arg 2 must be a tuple or list");
+ goto fail_0;
+ }
+ if (!PyMapping_Check(env)) {
+ PyErr_SetString(PyExc_TypeError,
+ "execve() arg 3 must be a mapping object");
+ goto fail_0;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_0;
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i),
+ "et;execve() arg 2 must contain only strings",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i]))
+ {
+ lastarg = i;
+ goto fail_1;
+ }
+ }
+ lastarg = argc;
+ argvlist[argc] = NULL;
+
+ i = PyMapping_Size(env);
+ if (i < 0)
+ goto fail_1;
+ envlist = PyMem_NEW(char *, i + 1);
+ if (envlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_1;
+ }
+ envc = 0;
+ keys = PyMapping_Keys(env);
+ vals = PyMapping_Values(env);
+ if (!keys || !vals)
+ goto fail_2;
+ if (!PyList_Check(keys) || !PyList_Check(vals)) {
+ PyErr_SetString(PyExc_TypeError,
+ "execve(): env.keys() or env.values() is not a list");
+ goto fail_2;
+ }
+
+ for (pos = 0; pos < i; pos++) {
+ char *p, *k, *v;
+ size_t len;
+
+ key = PyList_GetItem(keys, pos);
+ val = PyList_GetItem(vals, pos);
+ if (!key || !val)
+ goto fail_2;
+
+ if (!PyArg_Parse(
+ key,
+ "s;execve() arg 3 contains a non-string key",
+ &k) ||
+ !PyArg_Parse(
+ val,
+ "s;execve() arg 3 contains a non-string value",
+ &v))
+ {
+ goto fail_2;
+ }
+
+#if defined(PYOS_OS2)
+ /* Omit Pseudo-Env Vars that Would Confuse Programs if Passed On */
+ if (stricmp(k, "BEGINLIBPATH") != 0 && stricmp(k, "ENDLIBPATH") != 0) {
+#endif
+ len = PyString_Size(key) + PyString_Size(val) + 2;
+ p = PyMem_NEW(char, len);
+ if (p == NULL) {
+ PyErr_NoMemory();
+ goto fail_2;
+ }
+ PyOS_snprintf(p, len, "%s=%s", k, v);
+ envlist[envc++] = p;
+#if defined(PYOS_OS2)
+ }
+#endif
+ }
+ envlist[envc] = 0;
+
+ execve(path, argvlist, envlist);
+
+ /* If we get here it's definitely an error */
+
+ (void) edk2_error();
+
+ fail_2:
+ while (--envc >= 0)
+ PyMem_DEL(envlist[envc]);
+ PyMem_DEL(envlist);
+ fail_1:
+ free_string_array(argvlist, lastarg);
+ Py_XDECREF(vals);
+ Py_XDECREF(keys);
+ fail_0:
+ PyMem_Free(path);
+ return NULL;
+}
+#endif /* HAVE_EXECV */
+
+
+#ifdef HAVE_SPAWNV
+PyDoc_STRVAR(edk2_spawnv__doc__,
+"spawnv(mode, path, args)\n\n\
+Execute the program 'path' in a new process.\n\
+\n\
+ mode: mode of process creation\n\
+ path: path of executable file\n\
+ args: tuple or list of strings");
+
+static PyObject *
+edk2_spawnv(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv;
+ char **argvlist;
+ int mode, i;
+ Py_ssize_t argc;
+ Py_intptr_t spawnval;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+
+ /* spawnv has three arguments: (mode, path, argv), where
+ argv is a list or tuple of strings. */
+
+ if (!PyArg_ParseTuple(args, "ietO:spawnv", &mode,
+ Py_FileSystemDefaultEncoding,
+ &path, &argv))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnv() arg 2 must be a tuple or list");
+ PyMem_Free(path);
+ return NULL;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyMem_Free(path);
+ return PyErr_NoMemory();
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i), "et",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i])) {
+ free_string_array(argvlist, i);
+ PyErr_SetString(
+ PyExc_TypeError,
+ "spawnv() arg 2 must contain only strings");
+ PyMem_Free(path);
+ return NULL;
+ }
+ }
+ argvlist[argc] = NULL;
+
+#if defined(PYOS_OS2) && defined(PYCC_GCC)
+ Py_BEGIN_ALLOW_THREADS
+ spawnval = spawnv(mode, path, argvlist);
+ Py_END_ALLOW_THREADS
+#else
+ if (mode == _OLD_P_OVERLAY)
+ mode = _P_OVERLAY;
+
+ Py_BEGIN_ALLOW_THREADS
+ spawnval = _spawnv(mode, path, argvlist);
+ Py_END_ALLOW_THREADS
+#endif
+
+ free_string_array(argvlist, argc);
+ PyMem_Free(path);
+
+ if (spawnval == -1)
+ return edk2_error();
+ else
+#if SIZEOF_LONG == SIZEOF_VOID_P
+ return Py_BuildValue("l", (long) spawnval);
+#else
+ return Py_BuildValue("L", (PY_LONG_LONG) spawnval);
+#endif
+}
+
+
+PyDoc_STRVAR(edk2_spawnve__doc__,
+"spawnve(mode, path, args, env)\n\n\
+Execute the program 'path' in a new process.\n\
+\n\
+ mode: mode of process creation\n\
+ path: path of executable file\n\
+ args: tuple or list of arguments\n\
+ env: dictionary of strings mapping to strings");
+
+static PyObject *
+edk2_spawnve(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv, *env;
+ char **argvlist;
+ char **envlist;
+ PyObject *key, *val, *keys=NULL, *vals=NULL, *res=NULL;
+ int mode, pos, envc;
+ Py_ssize_t argc, i;
+ Py_intptr_t spawnval;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+ Py_ssize_t lastarg = 0;
+
+ /* spawnve has four arguments: (mode, path, argv, env), where
+ argv is a list or tuple of strings and env is a dictionary
+ like posix.environ. */
+
+ if (!PyArg_ParseTuple(args, "ietOO:spawnve", &mode,
+ Py_FileSystemDefaultEncoding,
+ &path, &argv, &env))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnve() arg 2 must be a tuple or list");
+ goto fail_0;
+ }
+ if (!PyMapping_Check(env)) {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnve() arg 3 must be a mapping object");
+ goto fail_0;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_0;
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i),
+ "et;spawnve() arg 2 must contain only strings",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i]))
+ {
+ lastarg = i;
+ goto fail_1;
+ }
+ }
+ lastarg = argc;
+ argvlist[argc] = NULL;
+
+ i = PyMapping_Size(env);
+ if (i < 0)
+ goto fail_1;
+ envlist = PyMem_NEW(char *, i + 1);
+ if (envlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_1;
+ }
+ envc = 0;
+ keys = PyMapping_Keys(env);
+ vals = PyMapping_Values(env);
+ if (!keys || !vals)
+ goto fail_2;
+ if (!PyList_Check(keys) || !PyList_Check(vals)) {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnve(): env.keys() or env.values() is not a list");
+ goto fail_2;
+ }
+
+ for (pos = 0; pos < i; pos++) {
+ char *p, *k, *v;
+ size_t len;
+
+ key = PyList_GetItem(keys, pos);
+ val = PyList_GetItem(vals, pos);
+ if (!key || !val)
+ goto fail_2;
+
+ if (!PyArg_Parse(
+ key,
+ "s;spawnve() arg 3 contains a non-string key",
+ &k) ||
+ !PyArg_Parse(
+ val,
+ "s;spawnve() arg 3 contains a non-string value",
+ &v))
+ {
+ goto fail_2;
+ }
+ len = PyString_Size(key) + PyString_Size(val) + 2;
+ p = PyMem_NEW(char, len);
+ if (p == NULL) {
+ PyErr_NoMemory();
+ goto fail_2;
+ }
+ PyOS_snprintf(p, len, "%s=%s", k, v);
+ envlist[envc++] = p;
+ }
+ envlist[envc] = 0;
+
+#if defined(PYOS_OS2) && defined(PYCC_GCC)
+ Py_BEGIN_ALLOW_THREADS
+ spawnval = spawnve(mode, path, argvlist, envlist);
+ Py_END_ALLOW_THREADS
+#else
+ if (mode == _OLD_P_OVERLAY)
+ mode = _P_OVERLAY;
+
+ Py_BEGIN_ALLOW_THREADS
+ spawnval = _spawnve(mode, path, argvlist, envlist);
+ Py_END_ALLOW_THREADS
+#endif
+
+ if (spawnval == -1)
+ (void) edk2_error();
+ else
+#if SIZEOF_LONG == SIZEOF_VOID_P
+ res = Py_BuildValue("l", (long) spawnval);
+#else
+ res = Py_BuildValue("L", (PY_LONG_LONG) spawnval);
+#endif
+
+ fail_2:
+ while (--envc >= 0)
+ PyMem_DEL(envlist[envc]);
+ PyMem_DEL(envlist);
+ fail_1:
+ free_string_array(argvlist, lastarg);
+ Py_XDECREF(vals);
+ Py_XDECREF(keys);
+ fail_0:
+ PyMem_Free(path);
+ return res;
+}
+
+/* OS/2 supports spawnvp & spawnvpe natively */
+#if defined(PYOS_OS2)
+PyDoc_STRVAR(edk2_spawnvp__doc__,
+"spawnvp(mode, file, args)\n\n\
+Execute the program 'file' in a new process, using the environment\n\
+search path to find the file.\n\
+\n\
+ mode: mode of process creation\n\
+ file: executable file name\n\
+ args: tuple or list of strings");
+
+static PyObject *
+edk2_spawnvp(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv;
+ char **argvlist;
+ int mode, i, argc;
+ Py_intptr_t spawnval;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+
+ /* spawnvp has three arguments: (mode, path, argv), where
+ argv is a list or tuple of strings. */
+
+ if (!PyArg_ParseTuple(args, "ietO:spawnvp", &mode,
+ Py_FileSystemDefaultEncoding,
+ &path, &argv))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnvp() arg 2 must be a tuple or list");
+ PyMem_Free(path);
+ return NULL;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyMem_Free(path);
+ return PyErr_NoMemory();
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i), "et",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i])) {
+ free_string_array(argvlist, i);
+ PyErr_SetString(
+ PyExc_TypeError,
+ "spawnvp() arg 2 must contain only strings");
+ PyMem_Free(path);
+ return NULL;
+ }
+ }
+ argvlist[argc] = NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+#if defined(PYCC_GCC)
+ spawnval = spawnvp(mode, path, argvlist);
+#else
+ spawnval = _spawnvp(mode, path, argvlist);
+#endif
+ Py_END_ALLOW_THREADS
+
+ free_string_array(argvlist, argc);
+ PyMem_Free(path);
+
+ if (spawnval == -1)
+ return edk2_error();
+ else
+ return Py_BuildValue("l", (long) spawnval);
+}
+
+
+PyDoc_STRVAR(edk2_spawnvpe__doc__,
+"spawnvpe(mode, file, args, env)\n\n\
+Execute the program 'file' in a new process, using the environment\n\
+search path to find the file.\n\
+\n\
+ mode: mode of process creation\n\
+ file: executable file name\n\
+ args: tuple or list of arguments\n\
+ env: dictionary of strings mapping to strings");
+
+static PyObject *
+edk2_spawnvpe(PyObject *self, PyObject *args)
+{
+ char *path;
+ PyObject *argv, *env;
+ char **argvlist;
+ char **envlist;
+ PyObject *key, *val, *keys=NULL, *vals=NULL, *res=NULL;
+ int mode, i, pos, argc, envc;
+ Py_intptr_t spawnval;
+ PyObject *(*getitem)(PyObject *, Py_ssize_t);
+ int lastarg = 0;
+
+ /* spawnvpe has four arguments: (mode, path, argv, env), where
+ argv is a list or tuple of strings and env is a dictionary
+ like posix.environ. */
+
+ if (!PyArg_ParseTuple(args, "ietOO:spawnvpe", &mode,
+ Py_FileSystemDefaultEncoding,
+ &path, &argv, &env))
+ return NULL;
+ if (PyList_Check(argv)) {
+ argc = PyList_Size(argv);
+ getitem = PyList_GetItem;
+ }
+ else if (PyTuple_Check(argv)) {
+ argc = PyTuple_Size(argv);
+ getitem = PyTuple_GetItem;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnvpe() arg 2 must be a tuple or list");
+ goto fail_0;
+ }
+ if (!PyMapping_Check(env)) {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnvpe() arg 3 must be a mapping object");
+ goto fail_0;
+ }
+
+ argvlist = PyMem_NEW(char *, argc+1);
+ if (argvlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_0;
+ }
+ for (i = 0; i < argc; i++) {
+ if (!PyArg_Parse((*getitem)(argv, i),
+ "et;spawnvpe() arg 2 must contain only strings",
+ Py_FileSystemDefaultEncoding,
+ &argvlist[i]))
+ {
+ lastarg = i;
+ goto fail_1;
+ }
+ }
+ lastarg = argc;
+ argvlist[argc] = NULL;
+
+ i = PyMapping_Size(env);
+ if (i < 0)
+ goto fail_1;
+ envlist = PyMem_NEW(char *, i + 1);
+ if (envlist == NULL) {
+ PyErr_NoMemory();
+ goto fail_1;
+ }
+ envc = 0;
+ keys = PyMapping_Keys(env);
+ vals = PyMapping_Values(env);
+ if (!keys || !vals)
+ goto fail_2;
+ if (!PyList_Check(keys) || !PyList_Check(vals)) {
+ PyErr_SetString(PyExc_TypeError,
+ "spawnvpe(): env.keys() or env.values() is not a list");
+ goto fail_2;
+ }
+
+ for (pos = 0; pos < i; pos++) {
+ char *p, *k, *v;
+ size_t len;
+
+ key = PyList_GetItem(keys, pos);
+ val = PyList_GetItem(vals, pos);
+ if (!key || !val)
+ goto fail_2;
+
+ if (!PyArg_Parse(
+ key,
+ "s;spawnvpe() arg 3 contains a non-string key",
+ &k) ||
+ !PyArg_Parse(
+ val,
+ "s;spawnvpe() arg 3 contains a non-string value",
+ &v))
+ {
+ goto fail_2;
+ }
+ len = PyString_Size(key) + PyString_Size(val) + 2;
+ p = PyMem_NEW(char, len);
+ if (p == NULL) {
+ PyErr_NoMemory();
+ goto fail_2;
+ }
+ PyOS_snprintf(p, len, "%s=%s", k, v);
+ envlist[envc++] = p;
+ }
+ envlist[envc] = 0;
+
+ Py_BEGIN_ALLOW_THREADS
+#if defined(PYCC_GCC)
+ spawnval = spawnvpe(mode, path, argvlist, envlist);
+#else
+ spawnval = _spawnvpe(mode, path, argvlist, envlist);
+#endif
+ Py_END_ALLOW_THREADS
+
+ if (spawnval == -1)
+ (void) edk2_error();
+ else
+ res = Py_BuildValue("l", (long) spawnval);
+
+ fail_2:
+ while (--envc >= 0)
+ PyMem_DEL(envlist[envc]);
+ PyMem_DEL(envlist);
+ fail_1:
+ free_string_array(argvlist, lastarg);
+ Py_XDECREF(vals);
+ Py_XDECREF(keys);
+ fail_0:
+ PyMem_Free(path);
+ return res;
+}
+#endif /* PYOS_OS2 */
+#endif /* HAVE_SPAWNV */
+
+
+#ifdef HAVE_FORK1
+PyDoc_STRVAR(edk2_fork1__doc__,
+"fork1() -> pid\n\n\
+Fork a child process with a single multiplexed (i.e., not bound) thread.\n\
+\n\
+Return 0 to child process and PID of child to parent process.");
+
+static PyObject *
+edk2_fork1(PyObject *self, PyObject *noargs)
+{
+ pid_t pid;
+ int result = 0;
+ _PyImport_AcquireLock();
+ pid = fork1();
+ if (pid == 0) {
+ /* child: this clobbers and resets the import lock. */
+ PyOS_AfterFork();
+ } else {
+ /* parent: release the import lock. */
+ result = _PyImport_ReleaseLock();
+ }
+ if (pid == -1)
+ return edk2_error();
+ if (result < 0) {
+ /* Don't clobber the OSError if the fork failed. */
+ PyErr_SetString(PyExc_RuntimeError,
+ "not holding the import lock");
+ return NULL;
+ }
+ return PyLong_FromPid(pid);
+}
+#endif
+
+
+#ifdef HAVE_FORK
+PyDoc_STRVAR(edk2_fork__doc__,
+"fork() -> pid\n\n\
+Fork a child process.\n\
+Return 0 to child process and PID of child to parent process.");
+
+static PyObject *
+edk2_fork(PyObject *self, PyObject *noargs)
+{
+ pid_t pid;
+ int result = 0;
+ _PyImport_AcquireLock();
+ pid = fork();
+ if (pid == 0) {
+ /* child: this clobbers and resets the import lock. */
+ PyOS_AfterFork();
+ } else {
+ /* parent: release the import lock. */
+ result = _PyImport_ReleaseLock();
+ }
+ if (pid == -1)
+ return edk2_error();
+ if (result < 0) {
+ /* Don't clobber the OSError if the fork failed. */
+ PyErr_SetString(PyExc_RuntimeError,
+ "not holding the import lock");
+ return NULL;
+ }
+ return PyLong_FromPid(pid);
+}
+#endif
+
+/* AIX uses /dev/ptc but is otherwise the same as /dev/ptmx */
+/* IRIX has both /dev/ptc and /dev/ptmx, use ptmx */
+#if defined(HAVE_DEV_PTC) && !defined(HAVE_DEV_PTMX)
+#define DEV_PTY_FILE "/dev/ptc"
+#define HAVE_DEV_PTMX
+#else
+#define DEV_PTY_FILE "/dev/ptmx"
+#endif
+
+#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_DEV_PTMX)
+#ifdef HAVE_PTY_H
+#include <pty.h>
+#else
+#ifdef HAVE_LIBUTIL_H
+#include <libutil.h>
+#else
+#ifdef HAVE_UTIL_H
+#include <util.h>
+#endif /* HAVE_UTIL_H */
+#endif /* HAVE_LIBUTIL_H */
+#endif /* HAVE_PTY_H */
+#ifdef HAVE_STROPTS_H
+#include <stropts.h>
+#endif
+#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_DEV_PTMX) */
+
+#if defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX)
+PyDoc_STRVAR(edk2_openpty__doc__,
+"openpty() -> (master_fd, slave_fd)\n\n\
+Open a pseudo-terminal, returning open fd's for both master and slave end.\n");
+
+static PyObject *
+edk2_openpty(PyObject *self, PyObject *noargs)
+{
+ int master_fd, slave_fd;
+#ifndef HAVE_OPENPTY
+ char * slave_name;
+#endif
+#if defined(HAVE_DEV_PTMX) && !defined(HAVE_OPENPTY) && !defined(HAVE__GETPTY)
+ PyOS_sighandler_t sig_saved;
+#ifdef sun
+ extern char *ptsname(int fildes);
+#endif
+#endif
+
+#ifdef HAVE_OPENPTY
+ if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
+ return edk2_error();
+#elif defined(HAVE__GETPTY)
+ slave_name = _getpty(&master_fd, O_RDWR, 0666, 0);
+ if (slave_name == NULL)
+ return edk2_error();
+
+ slave_fd = open(slave_name, O_RDWR);
+ if (slave_fd < 0)
+ return edk2_error();
+#else
+ master_fd = open(DEV_PTY_FILE, O_RDWR | O_NOCTTY); /* open master */
+ if (master_fd < 0)
+ return edk2_error();
+ sig_saved = PyOS_setsig(SIGCHLD, SIG_DFL);
+ /* change permission of slave */
+ if (grantpt(master_fd) < 0) {
+ PyOS_setsig(SIGCHLD, sig_saved);
+ return edk2_error();
+ }
+ /* unlock slave */
+ if (unlockpt(master_fd) < 0) {
+ PyOS_setsig(SIGCHLD, sig_saved);
+ return edk2_error();
+ }
+ PyOS_setsig(SIGCHLD, sig_saved);
+ slave_name = ptsname(master_fd); /* get name of slave */
+ if (slave_name == NULL)
+ return edk2_error();
+ slave_fd = open(slave_name, O_RDWR | O_NOCTTY); /* open slave */
+ if (slave_fd < 0)
+ return edk2_error();
+#if !defined(__CYGWIN__) && !defined(HAVE_DEV_PTC)
+ ioctl(slave_fd, I_PUSH, "ptem"); /* push ptem */
+ ioctl(slave_fd, I_PUSH, "ldterm"); /* push ldterm */
+#ifndef __hpux
+ ioctl(slave_fd, I_PUSH, "ttcompat"); /* push ttcompat */
+#endif /* __hpux */
+#endif /* !defined(__CYGWIN__) && !defined(HAVE_DEV_PTC) */
+#endif /* HAVE_OPENPTY */
+
+ return Py_BuildValue("(ii)", master_fd, slave_fd);
+
+}
+#endif /* defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX) */
+
+#ifdef HAVE_FORKPTY
+PyDoc_STRVAR(edk2_forkpty__doc__,
+"forkpty() -> (pid, master_fd)\n\n\
+Fork a new process with a new pseudo-terminal as controlling tty.\n\n\
+Like fork(), return 0 as pid to child process, and PID of child to parent.\n\
+To both, return fd of newly opened pseudo-terminal.\n");
+
+static PyObject *
+edk2_forkpty(PyObject *self, PyObject *noargs)
+{
+ int master_fd = -1, result = 0;
+ pid_t pid;
+
+ _PyImport_AcquireLock();
+ pid = forkpty(&master_fd, NULL, NULL, NULL);
+ if (pid == 0) {
+ /* child: this clobbers and resets the import lock. */
+ PyOS_AfterFork();
+ } else {
+ /* parent: release the import lock. */
+ result = _PyImport_ReleaseLock();
+ }
+ if (pid == -1)
+ return edk2_error();
+ if (result < 0) {
+ /* Don't clobber the OSError if the fork failed. */
+ PyErr_SetString(PyExc_RuntimeError,
+ "not holding the import lock");
+ return NULL;
+ }
+ return Py_BuildValue("(Ni)", PyLong_FromPid(pid), master_fd);
+}
+#endif
+
+PyDoc_STRVAR(edk2_getpid__doc__,
+"getpid() -> pid\n\n\
+Return the current process id");
+
+static PyObject *
+edk2_getpid(PyObject *self, PyObject *noargs)
+{
+ return PyLong_FromPid(getpid());
+}
+
+
+#ifdef HAVE_GETLOGIN
+PyDoc_STRVAR(edk2_getlogin__doc__,
+"getlogin() -> string\n\n\
+Return the actual login name.");
+
+static PyObject *
+edk2_getlogin(PyObject *self, PyObject *noargs)
+{
+ PyObject *result = NULL;
+ char *name;
+ int old_errno = errno;
+
+ errno = 0;
+ name = getlogin();
+ if (name == NULL) {
+ if (errno)
+ edk2_error();
+ else
+ PyErr_SetString(PyExc_OSError,
+ "unable to determine login name");
+ }
+ else
+ result = PyUnicode_FromString(name);
+ errno = old_errno;
+
+ return result;
+}
+#endif
+
+#ifdef HAVE_KILL
+PyDoc_STRVAR(edk2_kill__doc__,
+"kill(pid, sig)\n\n\
+Kill a process with a signal.");
+
+static PyObject *
+edk2_kill(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int sig;
+ if (!PyArg_ParseTuple(args, PARSE_PID "i:kill", &pid, &sig))
+ return NULL;
+#if defined(PYOS_OS2) && !defined(PYCC_GCC)
+ if (sig == XCPT_SIGNAL_INTR || sig == XCPT_SIGNAL_BREAK) {
+ APIRET rc;
+ if ((rc = DosSendSignalException(pid, sig)) != NO_ERROR)
+ return os2_error(rc);
+
+ } else if (sig == XCPT_SIGNAL_KILLPROC) {
+ APIRET rc;
+ if ((rc = DosKillProcess(DKP_PROCESS, pid)) != NO_ERROR)
+ return os2_error(rc);
+
+ } else
+ return NULL; /* Unrecognized Signal Requested */
+#else
+ if (kill(pid, sig) == -1)
+ return edk2_error();
+#endif
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif
+
+#ifdef HAVE_PLOCK
+
+#ifdef HAVE_SYS_LOCK_H
+#include <sys/lock.h>
+#endif
+
+PyDoc_STRVAR(edk2_plock__doc__,
+"plock(op)\n\n\
+Lock program segments into memory.");
+
+static PyObject *
+edk2_plock(PyObject *self, PyObject *args)
+{
+ int op;
+ if (!PyArg_ParseTuple(args, "i:plock", &op))
+ return NULL;
+ if (plock(op) == -1)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif
+
+
+#ifdef HAVE_POPEN
+PyDoc_STRVAR(edk2_popen__doc__,
+"popen(command [, mode='r' [, bufsize]]) -> pipe\n\n\
+Open a pipe to/from a command returning a file object.");
+
+static PyObject *
+edk2_popen(PyObject *self, PyObject *args)
+{
+ char *name;
+ char *mode = "r";
+ int bufsize = -1;
+ FILE *fp;
+ PyObject *f = NULL;
+ if (!PyArg_ParseTuple(args, "s|si:popen", &name, &mode, &bufsize))
+ return NULL;
+ /* Strip mode of binary or text modifiers */
+ if (strcmp(mode, "rb") == 0 || strcmp(mode, "rt") == 0)
+ mode = "r";
+ else if (strcmp(mode, "wb") == 0 || strcmp(mode, "wt") == 0)
+ mode = "w";
+ Py_BEGIN_ALLOW_THREADS
+ fp = popen(name, mode);
+ Py_END_ALLOW_THREADS
+ if (fp == NULL)
+ return edk2_error();
+// TODO: Commented out for UEFI: PyFile_FromFile() is not available in
+// Python 3, and the call has no impact on the edk2 module functionality.
+// f = PyFile_FromFile(fp, name, mode, pclose);
+// if (f != NULL)
+// PyFile_SetBufSize(f, bufsize);
+ return f;
+}
+
+#endif /* HAVE_POPEN */
+
+
+#if defined(HAVE_WAIT3) || defined(HAVE_WAIT4)
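+/* Build the (pid, status, resource-usage) result tuple shared by wait3()
+   and wait4(). */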
+static PyObject *
+wait_helper(pid_t pid, int status, struct rusage *ru)
+{
+ PyObject *result;
+ static PyObject *struct_rusage;
+
+ if (pid == -1)
+ return edk2_error();
+
+ if (struct_rusage == NULL) {
+ PyObject *m = PyImport_ImportModuleNoBlock("resource");
+ if (m == NULL)
+ return NULL;
+ struct_rusage = PyObject_GetAttrString(m, "struct_rusage");
+ Py_DECREF(m);
+ if (struct_rusage == NULL)
+ return NULL;
+ }
+
+ /* XXX(nnorwitz): Copied (w/mods) from resource.c, there should be only one. */
+ result = PyStructSequence_New((PyTypeObject*) struct_rusage);
+ if (!result)
+ return NULL;
+
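+/* Convert a struct timeval into seconds expressed as a double. */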
+#ifndef doubletime
+#define doubletime(TV) ((double)(TV).tv_sec + (TV).tv_usec * 0.000001)
+#endif
+
+ PyStructSequence_SET_ITEM(result, 0,
+ PyFloat_FromDouble(doubletime(ru->ru_utime)));
+ PyStructSequence_SET_ITEM(result, 1,
+ PyFloat_FromDouble(doubletime(ru->ru_stime)));
+#define SET_INT(result, index, value)\
+ PyStructSequence_SET_ITEM(result, index, PyLong_FromLong(value))
+ SET_INT(result, 2, ru->ru_maxrss);
+ SET_INT(result, 3, ru->ru_ixrss);
+ SET_INT(result, 4, ru->ru_idrss);
+ SET_INT(result, 5, ru->ru_isrss);
+ SET_INT(result, 6, ru->ru_minflt);
+ SET_INT(result, 7, ru->ru_majflt);
+ SET_INT(result, 8, ru->ru_nswap);
+ SET_INT(result, 9, ru->ru_inblock);
+ SET_INT(result, 10, ru->ru_oublock);
+ SET_INT(result, 11, ru->ru_msgsnd);
+ SET_INT(result, 12, ru->ru_msgrcv);
+ SET_INT(result, 13, ru->ru_nsignals);
+ SET_INT(result, 14, ru->ru_nvcsw);
+ SET_INT(result, 15, ru->ru_nivcsw);
+#undef SET_INT
+
+ if (PyErr_Occurred()) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ return Py_BuildValue("NiN", PyLong_FromPid(pid), status, result);
+}
+#endif /* HAVE_WAIT3 || HAVE_WAIT4 */
+
+#ifdef HAVE_WAIT3
+PyDoc_STRVAR(edk2_wait3__doc__,
+"wait3(options) -> (pid, status, rusage)\n\n\
+Wait for completion of a child process.");
+
+static PyObject *
+edk2_wait3(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int options;
+ struct rusage ru;
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:wait3", &options))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ pid = wait3(&status, options, &ru);
+ Py_END_ALLOW_THREADS
+
+ return wait_helper(pid, WAIT_STATUS_INT(status), &ru);
+}
+#endif /* HAVE_WAIT3 */
+
+#ifdef HAVE_WAIT4
+PyDoc_STRVAR(edk2_wait4__doc__,
+"wait4(pid, options) -> (pid, status, rusage)\n\n\
+Wait for completion of a given child process.");
+
+static PyObject *
+edk2_wait4(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int options;
+ struct rusage ru;
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, PARSE_PID "i:wait4", &pid, &options))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ pid = wait4(pid, &status, options, &ru);
+ Py_END_ALLOW_THREADS
+
+ return wait_helper(pid, WAIT_STATUS_INT(status), &ru);
+}
+#endif /* HAVE_WAIT4 */
+
+#ifdef HAVE_WAITPID
+PyDoc_STRVAR(edk2_waitpid__doc__,
+"waitpid(pid, options) -> (pid, status)\n\n\
+Wait for completion of a given child process.");
+
+static PyObject *
+edk2_waitpid(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int options;
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, PARSE_PID "i:waitpid", &pid, &options))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ pid = waitpid(pid, &status, options);
+ Py_END_ALLOW_THREADS
+ if (pid == -1)
+ return edk2_error();
+
+ return Py_BuildValue("Ni", PyLong_FromPid(pid), WAIT_STATUS_INT(status));
+}
+
+#elif defined(HAVE_CWAIT)
+
+/* MS C has a variant of waitpid() that's usable for most purposes. */
+PyDoc_STRVAR(edk2_waitpid__doc__,
+"waitpid(pid, options) -> (pid, status << 8)\n\n"
+"Wait for completion of a given process. options is ignored on Windows.");
+
+static PyObject *
+edk2_waitpid(PyObject *self, PyObject *args)
+{
+ Py_intptr_t pid;
+ int status, options;
+
+ if (!PyArg_ParseTuple(args, PARSE_PID "i:waitpid", &pid, &options))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ pid = _cwait(&status, pid, options);
+ Py_END_ALLOW_THREADS
+ if (pid == -1)
+ return edk2_error();
+
+ /* shift the status left a byte so this is more like the POSIX waitpid */
+ return Py_BuildValue("Ni", PyLong_FromPid(pid), status << 8);
+}
+#endif /* HAVE_WAITPID || HAVE_CWAIT */
+
+#ifdef HAVE_WAIT
+PyDoc_STRVAR(edk2_wait__doc__,
+"wait() -> (pid, status)\n\n\
+Wait for completion of a child process.");
+
+static PyObject *
+edk2_wait(PyObject *self, PyObject *noargs)
+{
+ pid_t pid;
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ Py_BEGIN_ALLOW_THREADS
+ pid = wait(&status);
+ Py_END_ALLOW_THREADS
+ if (pid == -1)
+ return edk2_error();
+
+ return Py_BuildValue("Ni", PyLong_FromPid(pid), WAIT_STATUS_INT(status));
+}
+#endif
+
+
+PyDoc_STRVAR(edk2_lstat__doc__,
+"lstat(path) -> stat result\n\n\
+Like stat(path), but do not follow symbolic links.");
+
+static PyObject *
+edk2_lstat(PyObject *self, PyObject *args)
+{
+#ifdef HAVE_LSTAT
+ return edk2_do_stat(self, args, "et:lstat", lstat, NULL, NULL);
+#else /* !HAVE_LSTAT */
+ return edk2_do_stat(self, args, "et:lstat", STAT, NULL, NULL);
+#endif /* !HAVE_LSTAT */
+}
+
+
+#ifdef HAVE_READLINK
+PyDoc_STRVAR(edk2_readlink__doc__,
+"readlink(path) -> path\n\n\
+Return a string representing the path to which the symbolic link points.");
+
+static PyObject *
+edk2_readlink(PyObject *self, PyObject *args)
+{
+ PyObject* v;
+ char buf[MAXPATHLEN];
+ char *path;
+ int n;
+#ifdef Py_USING_UNICODE
+ int arg_is_unicode = 0;
+#endif
+
+ if (!PyArg_ParseTuple(args, "et:readlink",
+ Py_FileSystemDefaultEncoding, &path))
+ return NULL;
+#ifdef Py_USING_UNICODE
+ v = PySequence_GetItem(args, 0);
+ if (v == NULL) {
+ PyMem_Free(path);
+ return NULL;
+ }
+
+ if (PyUnicode_Check(v)) {
+ arg_is_unicode = 1;
+ }
+ Py_DECREF(v);
+#endif
+
+ Py_BEGIN_ALLOW_THREADS
+ n = readlink(path, buf, (int) sizeof buf);
+ Py_END_ALLOW_THREADS
+ if (n < 0)
+ return edk2_error_with_allocated_filename(path);
+
+ PyMem_Free(path);
+ v = PyUnicode_FromStringAndSize(buf, n);
+#ifdef Py_USING_UNICODE
+ if (arg_is_unicode) {
+ PyObject *w;
+
+ w = PyUnicode_FromEncodedObject(v,
+ Py_FileSystemDefaultEncoding,
+ "strict");
+ if (w != NULL) {
+ Py_DECREF(v);
+ v = w;
+ }
+ else {
+ /* fall back to the original byte string, as
+ discussed in patch #683592 */
+ PyErr_Clear();
+ }
+ }
+#endif
+ return v;
+}
+#endif /* HAVE_READLINK */
+
+
+#ifdef HAVE_SYMLINK
+PyDoc_STRVAR(edk2_symlink__doc__,
+"symlink(src, dst)\n\n\
+Create a symbolic link pointing to src named dst.");
+
+static PyObject *
+edk2_symlink(PyObject *self, PyObject *args)
+{
+ return edk2_2str(args, "etet:symlink", symlink);
+}
+#endif /* HAVE_SYMLINK */
+
+
+#ifdef HAVE_TIMES
+#define NEED_TICKS_PER_SECOND
+static long ticks_per_second = -1;
+static PyObject *
+edk2_times(PyObject *self, PyObject *noargs)
+{
+ struct tms t;
+ clock_t c;
+ errno = 0;
+ c = times(&t);
+ if (c == (clock_t) -1)
+ return edk2_error();
+ return Py_BuildValue("ddddd",
+ (double)t.tms_utime / ticks_per_second,
+ (double)t.tms_stime / ticks_per_second,
+ (double)t.tms_cutime / ticks_per_second,
+ (double)t.tms_cstime / ticks_per_second,
+ (double)c / ticks_per_second);
+}
+#endif /* HAVE_TIMES */
+
+
+#ifdef HAVE_TIMES
+PyDoc_STRVAR(edk2_times__doc__,
+"times() -> (utime, stime, cutime, cstime, elapsed_time)\n\n\
+Return a tuple of floating point numbers indicating process times.");
+#endif
+
+
+#ifdef HAVE_GETSID
+PyDoc_STRVAR(edk2_getsid__doc__,
+"getsid(pid) -> sid\n\n\
+Call the system call getsid().");
+
+static PyObject *
+edk2_getsid(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int sid;
+ if (!PyArg_ParseTuple(args, PARSE_PID ":getsid", &pid))
+ return NULL;
+ sid = getsid(pid);
+ if (sid < 0)
+ return edk2_error();
+ return PyLong_FromLong((long)sid);
+}
+#endif /* HAVE_GETSID */
+
+
+#ifdef HAVE_SETSID
+PyDoc_STRVAR(edk2_setsid__doc__,
+"setsid()\n\n\
+Call the system call setsid().");
+
+static PyObject *
+edk2_setsid(PyObject *self, PyObject *noargs)
+{
+ if (setsid() < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_SETSID */
+
+#ifdef HAVE_SETPGID
+PyDoc_STRVAR(edk2_setpgid__doc__,
+"setpgid(pid, pgrp)\n\n\
+Call the system call setpgid().");
+
+static PyObject *
+edk2_setpgid(PyObject *self, PyObject *args)
+{
+ pid_t pid;
+ int pgrp;
+ if (!PyArg_ParseTuple(args, PARSE_PID "i:setpgid", &pid, &pgrp))
+ return NULL;
+ if (setpgid(pid, pgrp) < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_SETPGID */
+
+
+#ifdef HAVE_TCGETPGRP
+PyDoc_STRVAR(edk2_tcgetpgrp__doc__,
+"tcgetpgrp(fd) -> pgid\n\n\
+Return the process group associated with the terminal given by a fd.");
+
+static PyObject *
+edk2_tcgetpgrp(PyObject *self, PyObject *args)
+{
+ int fd;
+ pid_t pgid;
+ if (!PyArg_ParseTuple(args, "i:tcgetpgrp", &fd))
+ return NULL;
+ pgid = tcgetpgrp(fd);
+ if (pgid < 0)
+ return edk2_error();
+ return PyLong_FromPid(pgid);
+}
+#endif /* HAVE_TCGETPGRP */
+
+
+#ifdef HAVE_TCSETPGRP
+PyDoc_STRVAR(edk2_tcsetpgrp__doc__,
+"tcsetpgrp(fd, pgid)\n\n\
+Set the process group associated with the terminal given by a fd.");
+
+static PyObject *
+edk2_tcsetpgrp(PyObject *self, PyObject *args)
+{
+ int fd;
+ pid_t pgid;
+ if (!PyArg_ParseTuple(args, "i" PARSE_PID ":tcsetpgrp", &fd, &pgid))
+ return NULL;
+ if (tcsetpgrp(fd, pgid) < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* HAVE_TCSETPGRP */
+
+/* Functions acting on file descriptors */
+
+PyDoc_STRVAR(edk2_open__doc__,
+"open(filename, flag [, mode=0777]) -> fd\n\n\
+Open a file (for low level IO).");
+
+static PyObject *
+edk2_open(PyObject *self, PyObject *args)
+{
+ char *file = NULL;
+ int flag;
+ int mode = 0777;
+ int fd;
+
+ if (!PyArg_ParseTuple(args, "eti|i",
+ Py_FileSystemDefaultEncoding, &file,
+ &flag, &mode))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ fd = open(file, flag, mode);
+ Py_END_ALLOW_THREADS
+ if (fd < 0)
+ return edk2_error_with_allocated_filename(file);
+ PyMem_Free(file);
+ return PyLong_FromLong((long)fd);
+}
+
+
+PyDoc_STRVAR(edk2_close__doc__,
+"close(fd)\n\n\
+Close a file descriptor (for low level IO).");
+
+static PyObject *
+edk2_close(PyObject *self, PyObject *args)
+{
+ int fd, res;
+ if (!PyArg_ParseTuple(args, "i:close", &fd))
+ return NULL;
+ if (!_PyVerify_fd(fd))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ res = close(fd);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+PyDoc_STRVAR(edk2_closerange__doc__,
+"closerange(fd_low, fd_high)\n\n\
+Closes all file descriptors in [fd_low, fd_high), ignoring errors.");
+
+static PyObject *
+edk2_closerange(PyObject *self, PyObject *args)
+{
+ int fd_from, fd_to, i;
+ if (!PyArg_ParseTuple(args, "ii:closerange", &fd_from, &fd_to))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ for (i = fd_from; i < fd_to; i++)
+ if (_PyVerify_fd(i))
+ close(i);
+ Py_END_ALLOW_THREADS
+ Py_RETURN_NONE;
+}
+
+
+PyDoc_STRVAR(edk2_dup__doc__,
+"dup(fd) -> fd2\n\n\
+Return a duplicate of a file descriptor.");
+
+static PyObject *
+edk2_dup(PyObject *self, PyObject *args)
+{
+ int fd;
+ if (!PyArg_ParseTuple(args, "i:dup", &fd))
+ return NULL;
+ if (!_PyVerify_fd(fd))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ fd = dup(fd);
+ Py_END_ALLOW_THREADS
+ if (fd < 0)
+ return edk2_error();
+ return PyLong_FromLong((long)fd);
+}
+
+
+PyDoc_STRVAR(edk2_dup2__doc__,
+"dup2(old_fd, new_fd)\n\n\
+Duplicate file descriptor.");
+
+static PyObject *
+edk2_dup2(PyObject *self, PyObject *args)
+{
+ int fd, fd2, res;
+ if (!PyArg_ParseTuple(args, "ii:dup2", &fd, &fd2))
+ return NULL;
+ if (!_PyVerify_fd_dup2(fd, fd2))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ res = dup2(fd, fd2);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+PyDoc_STRVAR(edk2_lseek__doc__,
+"lseek(fd, pos, how) -> newpos\n\n\
+Set the current position of a file descriptor.");
+
+static PyObject *
+edk2_lseek(PyObject *self, PyObject *args)
+{
+ int fd, how;
+ off_t pos, res;
+ PyObject *posobj;
+ if (!PyArg_ParseTuple(args, "iOi:lseek", &fd, &posobj, &how))
+ return NULL;
+#ifdef SEEK_SET
+ /* Turn 0, 1, 2 into SEEK_{SET,CUR,END} */
+ switch (how) {
+ case 0: how = SEEK_SET; break;
+ case 1: how = SEEK_CUR; break;
+ case 2: how = SEEK_END; break;
+ }
+#endif /* SEEK_SET */
+
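+ /* Accept an int or long position; use the 64-bit conversion when
+    large-file support is available. */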
+#if !defined(HAVE_LARGEFILE_SUPPORT)
+ pos = PyLong_AsLong(posobj);
+#else
+ pos = PyLong_Check(posobj) ?
+ PyLong_AsLongLong(posobj) : PyLong_AsLong(posobj);
+#endif
+ if (PyErr_Occurred())
+ return NULL;
+
+ if (!_PyVerify_fd(fd))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ res = lseek(fd, pos, how);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+
+#if !defined(HAVE_LARGEFILE_SUPPORT)
+ return PyLong_FromLong(res);
+#else
+ return PyLong_FromLongLong(res);
+#endif
+}
+
+
+PyDoc_STRVAR(edk2_read__doc__,
+"read(fd, buffersize) -> string\n\n\
+Read a file descriptor.");
+
+static PyObject *
+edk2_read(PyObject *self, PyObject *args)
+{
+ int fd, size, n;
+ PyObject *buffer;
+ if (!PyArg_ParseTuple(args, "ii:read", &fd, &size))
+ return NULL;
+ if (size < 0) {
+ errno = EINVAL;
+ return edk2_error();
+ }
+ buffer = PyBytes_FromStringAndSize((char *)NULL, size);
+ if (buffer == NULL)
+ return NULL;
+ if (!_PyVerify_fd(fd)) {
+ Py_DECREF(buffer);
+ return edk2_error();
+ }
+ Py_BEGIN_ALLOW_THREADS
+ n = read(fd, PyBytes_AS_STRING(buffer), size);
+ Py_END_ALLOW_THREADS
+ if (n < 0) {
+ Py_DECREF(buffer);
+ return edk2_error();
+ }
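+ /* A short read is not an error; shrink the bytes object to the number
+    of bytes actually read. */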
+ if (n != size)
+ _PyBytes_Resize(&buffer, n);
+ return buffer;
+}
+
+
+PyDoc_STRVAR(edk2_write__doc__,
+"write(fd, string) -> byteswritten\n\n\
+Write a string to a file descriptor.");
+
+static PyObject *
+edk2_write(PyObject *self, PyObject *args)
+{
+ Py_buffer pbuf;
+ int fd;
+ Py_ssize_t size;
+
+ if (!PyArg_ParseTuple(args, "is*:write", &fd, &pbuf))
+ return NULL;
+ if (!_PyVerify_fd(fd)) {
+ PyBuffer_Release(&pbuf);
+ return edk2_error();
+ }
+ Py_BEGIN_ALLOW_THREADS
+ size = write(fd, pbuf.buf, (size_t)pbuf.len);
+ Py_END_ALLOW_THREADS
+ PyBuffer_Release(&pbuf);
+ if (size < 0)
+ return edk2_error();
+ return PyLong_FromSsize_t(size);
+}
+
+
+PyDoc_STRVAR(edk2_fstat__doc__,
+"fstat(fd) -> stat result\n\n\
+Like stat(), but for an open file descriptor.");
+
+static PyObject *
+edk2_fstat(PyObject *self, PyObject *args)
+{
+ int fd;
+ STRUCT_STAT st;
+ int res;
+ if (!PyArg_ParseTuple(args, "i:fstat", &fd))
+ return NULL;
+ if (!_PyVerify_fd(fd))
+ return edk2_error();
+ Py_BEGIN_ALLOW_THREADS
+ res = FSTAT(fd, &st);
+ Py_END_ALLOW_THREADS
+ if (res != 0) {
+ return edk2_error();
+ }
+
+ return _pystat_fromstructstat(&st);
+}
+
+/* check for known incorrect mode strings - problem is, platforms are
+ free to accept any mode characters they like and are supposed to
+ ignore stuff they don't understand... write or append mode with
+ universal newline support is expressly forbidden by PEP 278.
+ Additionally, remove the 'U' from the mode string as platforms
+ won't know what it is. Non-zero return signals an exception */
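+/* For example, "U" and "rU" are both rewritten to "rb", while "wU" is
+   rejected outright. */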
+int
+_PyFile_SanitizeMode(char *mode)
+{
+ char *upos;
+ size_t len = strlen(mode);
+
+ if (!len) {
+ PyErr_SetString(PyExc_ValueError, "empty mode string");
+ return -1;
+ }
+
+ upos = strchr(mode, 'U');
+ if (upos) {
+ memmove(upos, upos+1, len-(upos-mode)); /* incl null char */
+
+ if (mode[0] == 'w' || mode[0] == 'a') {
+ PyErr_Format(PyExc_ValueError, "universal newline "
+ "mode can only be used with modes "
+ "starting with 'r'");
+ return -1;
+ }
+
+ if (mode[0] != 'r') {
+ memmove(mode+1, mode, strlen(mode)+1);
+ mode[0] = 'r';
+ }
+
+ if (!strchr(mode, 'b')) {
+ memmove(mode+2, mode+1, strlen(mode));
+ mode[1] = 'b';
+ }
+ } else if (mode[0] != 'r' && mode[0] != 'w' && mode[0] != 'a') {
+ PyErr_Format(PyExc_ValueError, "mode string must begin with "
+ "one of 'r', 'w', 'a' or 'U', not '%.200s'", mode);
+ return -1;
+ }
+#ifdef Py_VERIFY_WINNT
+ /* additional checks on NT with visual studio 2005 and higher */
+ if (!_PyVerify_Mode_WINNT(mode)) {
+ PyErr_Format(PyExc_ValueError, "Invalid mode ('%.50s')", mode);
+ return -1;
+ }
+#endif
+ return 0;
+}
+
+
+PyDoc_STRVAR(edk2_fdopen__doc__,
+"fdopen(fd [, mode='r' [, bufsize]]) -> file_object\n\n\
+Return an open file object connected to a file descriptor.");
+
+static PyObject *
+edk2_fdopen(PyObject *self, PyObject *args)
+{
+ int fd;
+ char *orgmode = "r";
+ int bufsize = -1;
+ FILE *fp;
+ PyObject *f = NULL;
+ char *mode;
+ if (!PyArg_ParseTuple(args, "i|si", &fd, &orgmode, &bufsize))
+ return NULL;
+
+ /* Sanitize mode. See fileobject.c */
+ mode = PyMem_MALLOC(strlen(orgmode)+3);
+ if (!mode) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ strcpy(mode, orgmode);
+ if (_PyFile_SanitizeMode(mode)) {
+ PyMem_FREE(mode);
+ return NULL;
+ }
+ if (!_PyVerify_fd(fd)) {
+ PyMem_FREE(mode);
+ return edk2_error();
+ }
+ Py_BEGIN_ALLOW_THREADS
+#if defined(HAVE_FCNTL_H)
+ if (mode[0] == 'a') {
+ /* try to make sure the O_APPEND flag is set */
+ int flags;
+ flags = fcntl(fd, F_GETFL);
+ if (flags != -1)
+ fcntl(fd, F_SETFL, flags | O_APPEND);
+ fp = fdopen(fd, mode);
+ if (fp == NULL && flags != -1)
+ /* restore old mode if fdopen failed */
+ fcntl(fd, F_SETFL, flags);
+ } else {
+ fp = fdopen(fd, mode);
+ }
+#else
+ fp = fdopen(fd, mode);
+#endif
+ Py_END_ALLOW_THREADS
+ PyMem_FREE(mode);
+ if (fp == NULL)
+ return edk2_error();
+// TODO: Commented out for UEFI: PyFile_FromFile() is not available in
+// Python 3, and the call has no impact on the edk2 module functionality.
+// f = PyFile_FromFile(fp, "<fdopen>", orgmode, fclose);
+// if (f != NULL)
+// PyFile_SetBufSize(f, bufsize);
+ return f;
+}
+
+PyDoc_STRVAR(edk2_isatty__doc__,
+"isatty(fd) -> bool\n\n\
+Return True if the file descriptor 'fd' is an open file descriptor\n\
+connected to the slave end of a terminal.");
+
+static PyObject *
+edk2_isatty(PyObject *self, PyObject *args)
+{
+ int fd;
+ if (!PyArg_ParseTuple(args, "i:isatty", &fd))
+ return NULL;
+ if (!_PyVerify_fd(fd))
+ return PyBool_FromLong(0);
+ return PyBool_FromLong(isatty(fd));
+}
+
+#ifdef HAVE_PIPE
+PyDoc_STRVAR(edk2_pipe__doc__,
+"pipe() -> (read_end, write_end)\n\n\
+Create a pipe.");
+
+static PyObject *
+edk2_pipe(PyObject *self, PyObject *noargs)
+{
+ int fds[2];
+ int res;
+ Py_BEGIN_ALLOW_THREADS
+ res = pipe(fds);
+ Py_END_ALLOW_THREADS
+ if (res != 0)
+ return edk2_error();
+ return Py_BuildValue("(ii)", fds[0], fds[1]);
+}
+#endif /* HAVE_PIPE */
+
+
+#ifdef HAVE_MKFIFO
+PyDoc_STRVAR(edk2_mkfifo__doc__,
+"mkfifo(filename [, mode=0666])\n\n\
+Create a FIFO (a POSIX named pipe).");
+
+static PyObject *
+edk2_mkfifo(PyObject *self, PyObject *args)
+{
+ char *filename;
+ int mode = 0666;
+ int res;
+ if (!PyArg_ParseTuple(args, "s|i:mkfifo", &filename, &mode))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = mkfifo(filename, mode);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif
+
+
+#if defined(HAVE_MKNOD) && defined(HAVE_MAKEDEV)
+PyDoc_STRVAR(edk2_mknod__doc__,
+"mknod(filename [, mode=0600, device])\n\n\
+Create a filesystem node (file, device special file or named pipe)\n\
+named filename. mode specifies both the permissions to use and the\n\
+type of node to be created, being combined (bitwise OR) with one of\n\
+S_IFREG, S_IFCHR, S_IFBLK, and S_IFIFO. For S_IFCHR and S_IFBLK,\n\
+device defines the newly created device special file (probably using\n\
+os.makedev()), otherwise it is ignored.");
+
+
+static PyObject *
+edk2_mknod(PyObject *self, PyObject *args)
+{
+ char *filename;
+ int mode = 0600;
+ int device = 0;
+ int res;
+ if (!PyArg_ParseTuple(args, "s|ii:mknod", &filename, &mode, &device))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = mknod(filename, mode, device);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif
+
+#ifdef HAVE_DEVICE_MACROS
+PyDoc_STRVAR(edk2_major__doc__,
+"major(device) -> major number\n\
+Extracts a device major number from a raw device number.");
+
+static PyObject *
+edk2_major(PyObject *self, PyObject *args)
+{
+ int device;
+ if (!PyArg_ParseTuple(args, "i:major", &device))
+ return NULL;
+ return PyLong_FromLong((long)major(device));
+}
+
+PyDoc_STRVAR(edk2_minor__doc__,
+"minor(device) -> minor number\n\
+Extracts a device minor number from a raw device number.");
+
+static PyObject *
+edk2_minor(PyObject *self, PyObject *args)
+{
+ int device;
+ if (!PyArg_ParseTuple(args, "i:minor", &device))
+ return NULL;
+ return PyLong_FromLong((long)minor(device));
+}
+
+PyDoc_STRVAR(edk2_makedev__doc__,
+"makedev(major, minor) -> device number\n\
+Composes a raw device number from the major and minor device numbers.");
+
+static PyObject *
+edk2_makedev(PyObject *self, PyObject *args)
+{
+ int major, minor;
+ if (!PyArg_ParseTuple(args, "ii:makedev", &major, &minor))
+ return NULL;
+ return PyLong_FromLong((long)makedev(major, minor));
+}
+#endif /* device macros */
+
+
+#ifdef HAVE_FTRUNCATE
+PyDoc_STRVAR(edk2_ftruncate__doc__,
+"ftruncate(fd, length)\n\n\
+Truncate a file to a specified length.");
+
+static PyObject *
+edk2_ftruncate(PyObject *self, PyObject *args)
+{
+ int fd;
+ off_t length;
+ int res;
+ PyObject *lenobj;
+
+ if (!PyArg_ParseTuple(args, "iO:ftruncate", &fd, &lenobj))
+ return NULL;
+
+#if !defined(HAVE_LARGEFILE_SUPPORT)
+ length = PyLong_AsLong(lenobj);
+#else
+ length = PyLong_Check(lenobj) ?
+ PyLong_AsLongLong(lenobj) : PyLong_AsLong(lenobj);
+#endif
+ if (PyErr_Occurred())
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ res = ftruncate(fd, length);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return edk2_error();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif
+
+#ifdef HAVE_PUTENV
+PyDoc_STRVAR(edk2_putenv__doc__,
+"putenv(key, value)\n\n\
+Change or add an environment variable.");
+
+/* Save putenv() parameters as values here, so we can collect them when they
+ * get re-set with another call for the same key. */
+static PyObject *edk2_putenv_garbage;
+
+static PyObject *
+edk2_putenv(PyObject *self, PyObject *args)
+{
+ char *s1, *s2;
+ char *newenv;
+ PyObject *newstr;
+ size_t len;
+
+ if (!PyArg_ParseTuple(args, "ss:putenv", &s1, &s2))
+ return NULL;
+
+ /* XXX This can leak memory -- not easy to fix :-( */
+ len = strlen(s1) + strlen(s2) + 2;
+ /* len includes space for a trailing \0; the size arg to
+ PyBytes_FromStringAndSize does not count that */
+ newstr = PyBytes_FromStringAndSize(NULL, (int)len - 1);
+ if (newstr == NULL)
+ return PyErr_NoMemory();
+ newenv = PyBytes_AS_STRING(newstr);
+ PyOS_snprintf(newenv, len, "%s=%s", s1, s2);
+ if (putenv(newenv)) {
+ Py_DECREF(newstr);
+ edk2_error();
+ return NULL;
+ }
+ /* Install the first arg and newstr in edk2_putenv_garbage;
+ * this will cause previous value to be collected. This has to
+ * happen after the real putenv() call because the old value
+ * was still accessible until then. */
+ if (PyDict_SetItem(edk2_putenv_garbage,
+ PyTuple_GET_ITEM(args, 0), newstr)) {
+ /* really not much we can do; just leak */
+ PyErr_Clear();
+ }
+ else {
+ Py_DECREF(newstr);
+ }
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* putenv */
+
+#ifdef HAVE_UNSETENV
+PyDoc_STRVAR(edk2_unsetenv__doc__,
+"unsetenv(key)\n\n\
+Delete an environment variable.");
+
+static PyObject *
+edk2_unsetenv(PyObject *self, PyObject *args)
+{
+ char *s1;
+
+ if (!PyArg_ParseTuple(args, "s:unsetenv", &s1))
+ return NULL;
+
+ unsetenv(s1);
+
+ /* Remove the key from edk2_putenv_garbage;
+ * this will cause it to be collected. This has to
+ * happen after the real unsetenv() call because the
+ * old value was still accessible until then.
+ */
+ if (PyDict_DelItem(edk2_putenv_garbage,
+ PyTuple_GET_ITEM(args, 0))) {
+ /* really not much we can do; just leak */
+ PyErr_Clear();
+ }
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+#endif /* unsetenv */
+
+PyDoc_STRVAR(edk2_strerror__doc__,
+"strerror(code) -> string\n\n\
+Translate an error code to a message string.");
+
+static PyObject *
+edk2_strerror(PyObject *self, PyObject *args)
+{
+ int code;
+ char *message;
+ if (!PyArg_ParseTuple(args, "i:strerror", &code))
+ return NULL;
+ message = strerror(code);
+ if (message == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "strerror() argument out of range");
+ return NULL;
+ }
+ return PyUnicode_FromString(message);
+}
+
+
+#ifdef HAVE_SYS_WAIT_H
+
+#ifdef WCOREDUMP
+PyDoc_STRVAR(edk2_WCOREDUMP__doc__,
+"WCOREDUMP(status) -> bool\n\n\
+Return True if the process returning 'status' was dumped to a core file.");
+
+static PyObject *
+edk2_WCOREDUMP(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WCOREDUMP", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return PyBool_FromLong(WCOREDUMP(status));
+}
+#endif /* WCOREDUMP */
+
+#ifdef WIFCONTINUED
+PyDoc_STRVAR(edk2_WIFCONTINUED__doc__,
+"WIFCONTINUED(status) -> bool\n\n\
+Return True if the process returning 'status' was continued from a\n\
+job control stop.");
+
+static PyObject *
+edk2_WIFCONTINUED(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WCONTINUED", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return PyBool_FromLong(WIFCONTINUED(status));
+}
+#endif /* WIFCONTINUED */
+
+#ifdef WIFSTOPPED
+PyDoc_STRVAR(edk2_WIFSTOPPED__doc__,
+"WIFSTOPPED(status) -> bool\n\n\
+Return True if the process returning 'status' was stopped.");
+
+static PyObject *
+edk2_WIFSTOPPED(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WIFSTOPPED", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return PyBool_FromLong(WIFSTOPPED(status));
+}
+#endif /* WIFSTOPPED */
+
+#ifdef WIFSIGNALED
+PyDoc_STRVAR(edk2_WIFSIGNALED__doc__,
+"WIFSIGNALED(status) -> bool\n\n\
+Return True if the process returning 'status' was terminated by a signal.");
+
+static PyObject *
+edk2_WIFSIGNALED(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WIFSIGNALED", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return PyBool_FromLong(WIFSIGNALED(status));
+}
+#endif /* WIFSIGNALED */
+
+#ifdef WIFEXITED
+PyDoc_STRVAR(edk2_WIFEXITED__doc__,
+"WIFEXITED(status) -> bool\n\n\
+Return true if the process returning 'status' exited using the exit()\n\
+system call.");
+
+static PyObject *
+edk2_WIFEXITED(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WIFEXITED", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return PyBool_FromLong(WIFEXITED(status));
+}
+#endif /* WIFEXITED */
+
+#ifdef WEXITSTATUS
+PyDoc_STRVAR(edk2_WEXITSTATUS__doc__,
+"WEXITSTATUS(status) -> integer\n\n\
+Return the process return code from 'status'.");
+
+static PyObject *
+edk2_WEXITSTATUS(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WEXITSTATUS", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return Py_BuildValue("i", WEXITSTATUS(status));
+}
+#endif /* WEXITSTATUS */
+
+#ifdef WTERMSIG
+PyDoc_STRVAR(edk2_WTERMSIG__doc__,
+"WTERMSIG(status) -> integer\n\n\
+Return the signal that terminated the process that provided the 'status'\n\
+value.");
+
+static PyObject *
+edk2_WTERMSIG(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WTERMSIG", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return Py_BuildValue("i", WTERMSIG(status));
+}
+#endif /* WTERMSIG */
+
+#ifdef WSTOPSIG
+PyDoc_STRVAR(edk2_WSTOPSIG__doc__,
+"WSTOPSIG(status) -> integer\n\n\
+Return the signal that stopped the process that provided\n\
+the 'status' value.");
+
+static PyObject *
+edk2_WSTOPSIG(PyObject *self, PyObject *args)
+{
+ WAIT_TYPE status;
+ WAIT_STATUS_INT(status) = 0;
+
+ if (!PyArg_ParseTuple(args, "i:WSTOPSIG", &WAIT_STATUS_INT(status)))
+ return NULL;
+
+ return Py_BuildValue("i", WSTOPSIG(status));
+}
+#endif /* WSTOPSIG */
+
+#endif /* HAVE_SYS_WAIT_H */
+
+
+#if defined(HAVE_FSTATVFS) && defined(HAVE_SYS_STATVFS_H)
+#include <sys/statvfs.h>
+
+static PyObject*
+_pystatvfs_fromstructstatvfs(struct statvfs st) {
+ PyObject *v = PyStructSequence_New(&StatVFSResultType);
+ if (v == NULL)
+ return NULL;
+
+#if !defined(HAVE_LARGEFILE_SUPPORT)
+ PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long) st.f_bsize));
+ PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong((long) st.f_frsize));
+ PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long) st.f_blocks));
+ PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long) st.f_bfree));
+ PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long) st.f_bavail));
+ PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long) st.f_files));
+ PyStructSequence_SET_ITEM(v, 6, PyLong_FromLong((long) st.f_ffree));
+ PyStructSequence_SET_ITEM(v, 7, PyLong_FromLong((long) st.f_favail));
+ PyStructSequence_SET_ITEM(v, 8, PyLong_FromLong((long) st.f_flag));
+ PyStructSequence_SET_ITEM(v, 9, PyLong_FromLong((long) st.f_namemax));
+#else
+ PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long) st.f_bsize));
+ PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong((long) st.f_frsize));
+ PyStructSequence_SET_ITEM(v, 2,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_blocks));
+ PyStructSequence_SET_ITEM(v, 3,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_bfree));
+ PyStructSequence_SET_ITEM(v, 4,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_bavail));
+ PyStructSequence_SET_ITEM(v, 5,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_files));
+ PyStructSequence_SET_ITEM(v, 6,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_ffree));
+ PyStructSequence_SET_ITEM(v, 7,
+ PyLong_FromLongLong((PY_LONG_LONG) st.f_favail));
+ PyStructSequence_SET_ITEM(v, 8, PyLong_FromLong((long) st.f_flag));
+ PyStructSequence_SET_ITEM(v, 9, PyLong_FromLong((long) st.f_namemax));
+#endif
+
+ return v;
+}
+
+PyDoc_STRVAR(edk2_fstatvfs__doc__,
+"fstatvfs(fd) -> statvfs result\n\n\
+Perform an fstatvfs system call on the given fd.");
+
+static PyObject *
+edk2_fstatvfs(PyObject *self, PyObject *args)
+{
+ int fd, res;
+ struct statvfs st;
+
+ if (!PyArg_ParseTuple(args, "i:fstatvfs", &fd))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = fstatvfs(fd, &st);
+ Py_END_ALLOW_THREADS
+ if (res != 0)
+ return edk2_error();
+
+ return _pystatvfs_fromstructstatvfs(st);
+}
+#endif /* HAVE_FSTATVFS && HAVE_SYS_STATVFS_H */
+
+
+#if defined(HAVE_STATVFS) && defined(HAVE_SYS_STATVFS_H)
+#include <sys/statvfs.h>
+
+PyDoc_STRVAR(edk2_statvfs__doc__,
+"statvfs(path) -> statvfs result\n\n\
+Perform a statvfs system call on the given path.");
+
+static PyObject *
+edk2_statvfs(PyObject *self, PyObject *args)
+{
+ char *path;
+ int res;
+ struct statvfs st;
+ if (!PyArg_ParseTuple(args, "s:statvfs", &path))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = statvfs(path, &st);
+ Py_END_ALLOW_THREADS
+ if (res != 0)
+ return edk2_error_with_filename(path);
+
+ return _pystatvfs_fromstructstatvfs(st);
+}
+#endif /* HAVE_STATVFS */
+
+PyObject *
+PyOS_FSPath(PyObject *path)
+{
+ /* For error message reasons, this function is manually inlined in
+ path_converter(). */
+ _Py_IDENTIFIER(__fspath__);
+ PyObject *func = NULL;
+ PyObject *path_repr = NULL;
+
+ if (PyUnicode_Check(path) || PyBytes_Check(path)) {
+ Py_INCREF(path);
+ return path;
+ }
+
+ func = _PyObject_LookupSpecial(path, &PyId___fspath__);
+ if (NULL == func) {
+ return PyErr_Format(PyExc_TypeError,
+ "expected str, bytes or os.PathLike object, "
+ "not %.200s",
+ Py_TYPE(path)->tp_name);
+ }
+
+ path_repr = PyObject_CallFunctionObjArgs(func, NULL);
+ Py_DECREF(func);
+ if (NULL == path_repr) {
+ return NULL;
+ }
+
+ if (!(PyUnicode_Check(path_repr) || PyBytes_Check(path_repr))) {
+ PyErr_Format(PyExc_TypeError,
+ "expected %.200s.__fspath__() to return str or bytes, "
+ "not %.200s", Py_TYPE(path)->tp_name,
+ Py_TYPE(path_repr)->tp_name);
+ Py_DECREF(path_repr);
+ return NULL;
+ }
+
+ return path_repr;
+}
+
+#if !defined(UEFI_C_SOURCE) // not supported in 3.x
+#ifdef HAVE_TEMPNAM
+PyDoc_STRVAR(edk2_tempnam__doc__,
+"tempnam([dir[, prefix]]) -> string\n\n\
+Return a unique name for a temporary file.\n\
+The directory and a prefix may be specified as strings; they may be omitted\n\
+or None if not needed.");
+
+static PyObject *
+edk2_tempnam(PyObject *self, PyObject *args)
+{
+ PyObject *result = NULL;
+ char *dir = NULL;
+ char *pfx = NULL;
+ char *name;
+
+ if (!PyArg_ParseTuple(args, "|zz:tempnam", &dir, &pfx))
+ return NULL;
+
+ if (PyErr_Warn(PyExc_RuntimeWarning,
+ "tempnam is a potential security risk to your program") < 0)
+ return NULL;
+
+ if (PyErr_WarnPy3k("tempnam has been removed in 3.x; "
+ "use the tempfile module", 1) < 0)
+ return NULL;
+
+ name = tempnam(dir, pfx);
+ if (name == NULL)
+ return PyErr_NoMemory();
+ result = PyUnicode_FromString(name);
+ free(name);
+ return result;
+}
+#endif
+
+
+#ifdef HAVE_TMPFILE
+PyDoc_STRVAR(edk2_tmpfile__doc__,
+"tmpfile() -> file object\n\n\
+Create a temporary file with no directory entries.");
+
+static PyObject *
+edk2_tmpfile(PyObject *self, PyObject *noargs)
+{
+ FILE *fp;
+
+ if (PyErr_WarnPy3k("tmpfile has been removed in 3.x; "
+ "use the tempfile module", 1) < 0)
+ return NULL;
+
+ fp = tmpfile();
+ if (fp == NULL)
+ return edk2_error();
+ return PyFile_FromFile(fp, "<tmpfile>", "w+b", fclose);
+}
+#endif
+
+
+#ifdef HAVE_TMPNAM
+PyDoc_STRVAR(edk2_tmpnam__doc__,
+"tmpnam() -> string\n\n\
+Return a unique name for a temporary file.");
+
+static PyObject *
+edk2_tmpnam(PyObject *self, PyObject *noargs)
+{
+ char buffer[L_tmpnam];
+ char *name;
+
+ if (PyErr_Warn(PyExc_RuntimeWarning,
+ "tmpnam is a potential security risk to your program") < 0)
+ return NULL;
+
+ if (PyErr_WarnPy3k("tmpnam has been removed in 3.x; "
+ "use the tempfile module", 1) < 0)
+ return NULL;
+
+#ifdef USE_TMPNAM_R
+ name = tmpnam_r(buffer);
+#else
+ name = tmpnam(buffer);
+#endif
+ if (name == NULL) {
+ PyObject *err = Py_BuildValue("is", 0,
+#ifdef USE_TMPNAM_R
+ "unexpected NULL from tmpnam_r"
+#else
+ "unexpected NULL from tmpnam"
+#endif
+ );
+ PyErr_SetObject(PyExc_OSError, err);
+ Py_XDECREF(err);
+ return NULL;
+ }
+ return PyUnicode_FromString(buffer);
+}
+#endif
+#endif
+
+PyDoc_STRVAR(edk2_abort__doc__,
+"abort() -> does not return!\n\n\
+Abort the interpreter immediately. This 'dumps core' or otherwise fails\n\
+in the hardest way possible on the hosting operating system.");
+
+static PyObject *
+edk2_abort(PyObject *self, PyObject *noargs)
+{
+ abort();
+ /*NOTREACHED*/
+ Py_FatalError("abort() called from Python code didn't abort!");
+ return NULL;
+}
+
+static PyMethodDef edk2_methods[] = {
+ {"access", edk2_access, METH_VARARGS, edk2_access__doc__},
+#ifdef HAVE_TTYNAME
+ {"ttyname", edk2_ttyname, METH_VARARGS, edk2_ttyname__doc__},
+#endif
+ {"chdir", edk2_chdir, METH_VARARGS, edk2_chdir__doc__},
+#ifdef HAVE_CHFLAGS
+ {"chflags", edk2_chflags, METH_VARARGS, edk2_chflags__doc__},
+#endif /* HAVE_CHFLAGS */
+ {"chmod", edk2_chmod, METH_VARARGS, edk2_chmod__doc__},
+#ifdef HAVE_FCHMOD
+ {"fchmod", edk2_fchmod, METH_VARARGS, edk2_fchmod__doc__},
+#endif /* HAVE_FCHMOD */
+#ifdef HAVE_CHOWN
+ {"chown", edk2_chown, METH_VARARGS, edk2_chown__doc__},
+#endif /* HAVE_CHOWN */
+#ifdef HAVE_LCHMOD
+ {"lchmod", edk2_lchmod, METH_VARARGS, edk2_lchmod__doc__},
+#endif /* HAVE_LCHMOD */
+#ifdef HAVE_FCHOWN
+ {"fchown", edk2_fchown, METH_VARARGS, edk2_fchown__doc__},
+#endif /* HAVE_FCHOWN */
+#ifdef HAVE_LCHFLAGS
+ {"lchflags", edk2_lchflags, METH_VARARGS, edk2_lchflags__doc__},
+#endif /* HAVE_LCHFLAGS */
+#ifdef HAVE_LCHOWN
+ {"lchown", edk2_lchown, METH_VARARGS, edk2_lchown__doc__},
+#endif /* HAVE_LCHOWN */
+#ifdef HAVE_CHROOT
+ {"chroot", edk2_chroot, METH_VARARGS, edk2_chroot__doc__},
+#endif
+#ifdef HAVE_CTERMID
+ {"ctermid", edk2_ctermid, METH_NOARGS, edk2_ctermid__doc__},
+#endif
+#ifdef HAVE_GETCWD
+ {"getcwd", edk2_getcwd, METH_NOARGS, edk2_getcwd__doc__},
+#ifdef Py_USING_UNICODE
+ {"getcwdu", edk2_getcwdu, METH_NOARGS, edk2_getcwdu__doc__},
+#endif
+#endif
+#ifdef HAVE_LINK
+ {"link", edk2_link, METH_VARARGS, edk2_link__doc__},
+#endif /* HAVE_LINK */
+ {"listdir", edk2_listdir, METH_VARARGS, edk2_listdir__doc__},
+ {"lstat", edk2_lstat, METH_VARARGS, edk2_lstat__doc__},
+ {"mkdir", edk2_mkdir, METH_VARARGS, edk2_mkdir__doc__},
+#ifdef HAVE_NICE
+ {"nice", edk2_nice, METH_VARARGS, edk2_nice__doc__},
+#endif /* HAVE_NICE */
+#ifdef HAVE_READLINK
+ {"readlink", edk2_readlink, METH_VARARGS, edk2_readlink__doc__},
+#endif /* HAVE_READLINK */
+ {"rename", edk2_rename, METH_VARARGS, edk2_rename__doc__},
+ {"rmdir", edk2_rmdir, METH_VARARGS, edk2_rmdir__doc__},
+ {"stat", edk2_stat, METH_VARARGS, edk2_stat__doc__},
+ //{"stat_float_times", stat_float_times, METH_VARARGS, stat_float_times__doc__},
+#ifdef HAVE_SYMLINK
+ {"symlink", edk2_symlink, METH_VARARGS, edk2_symlink__doc__},
+#endif /* HAVE_SYMLINK */
+#ifdef HAVE_SYSTEM
+ {"system", edk2_system, METH_VARARGS, edk2_system__doc__},
+#endif
+ {"umask", edk2_umask, METH_VARARGS, edk2_umask__doc__},
+#ifdef HAVE_UNAME
+ {"uname", edk2_uname, METH_NOARGS, edk2_uname__doc__},
+#endif /* HAVE_UNAME */
+ {"unlink", edk2_unlink, METH_VARARGS, edk2_unlink__doc__},
+ {"remove", edk2_unlink, METH_VARARGS, edk2_remove__doc__},
+ {"utime", edk2_utime, METH_VARARGS, edk2_utime__doc__},
+#ifdef HAVE_TIMES
+ {"times", edk2_times, METH_NOARGS, edk2_times__doc__},
+#endif /* HAVE_TIMES */
+ {"_exit", edk2__exit, METH_VARARGS, edk2__exit__doc__},
+#ifdef HAVE_EXECV
+ {"execv", edk2_execv, METH_VARARGS, edk2_execv__doc__},
+ {"execve", edk2_execve, METH_VARARGS, edk2_execve__doc__},
+#endif /* HAVE_EXECV */
+#ifdef HAVE_SPAWNV
+ {"spawnv", edk2_spawnv, METH_VARARGS, edk2_spawnv__doc__},
+ {"spawnve", edk2_spawnve, METH_VARARGS, edk2_spawnve__doc__},
+#if defined(PYOS_OS2)
+ {"spawnvp", edk2_spawnvp, METH_VARARGS, edk2_spawnvp__doc__},
+ {"spawnvpe", edk2_spawnvpe, METH_VARARGS, edk2_spawnvpe__doc__},
+#endif /* PYOS_OS2 */
+#endif /* HAVE_SPAWNV */
+#ifdef HAVE_FORK1
+ {"fork1", edk2_fork1, METH_NOARGS, edk2_fork1__doc__},
+#endif /* HAVE_FORK1 */
+#ifdef HAVE_FORK
+ {"fork", edk2_fork, METH_NOARGS, edk2_fork__doc__},
+#endif /* HAVE_FORK */
+#if defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX)
+ {"openpty", edk2_openpty, METH_NOARGS, edk2_openpty__doc__},
+#endif /* HAVE_OPENPTY || HAVE__GETPTY || HAVE_DEV_PTMX */
+#ifdef HAVE_FORKPTY
+ {"forkpty", edk2_forkpty, METH_NOARGS, edk2_forkpty__doc__},
+#endif /* HAVE_FORKPTY */
+ {"getpid", edk2_getpid, METH_NOARGS, edk2_getpid__doc__},
+#ifdef HAVE_GETPGRP
+ {"getpgrp", edk2_getpgrp, METH_NOARGS, edk2_getpgrp__doc__},
+#endif /* HAVE_GETPGRP */
+#ifdef HAVE_GETPPID
+ {"getppid", edk2_getppid, METH_NOARGS, edk2_getppid__doc__},
+#endif /* HAVE_GETPPID */
+#ifdef HAVE_GETLOGIN
+ {"getlogin", edk2_getlogin, METH_NOARGS, edk2_getlogin__doc__},
+#endif
+#ifdef HAVE_KILL
+ {"kill", edk2_kill, METH_VARARGS, edk2_kill__doc__},
+#endif /* HAVE_KILL */
+#ifdef HAVE_KILLPG
+ {"killpg", edk2_killpg, METH_VARARGS, edk2_killpg__doc__},
+#endif /* HAVE_KILLPG */
+#ifdef HAVE_PLOCK
+ {"plock", edk2_plock, METH_VARARGS, edk2_plock__doc__},
+#endif /* HAVE_PLOCK */
+#ifdef HAVE_POPEN
+ {"popen", edk2_popen, METH_VARARGS, edk2_popen__doc__},
+#endif /* HAVE_POPEN */
+#ifdef HAVE_SETGROUPS
+ {"setgroups", edk2_setgroups, METH_O, edk2_setgroups__doc__},
+#endif /* HAVE_SETGROUPS */
+#ifdef HAVE_INITGROUPS
+ {"initgroups", edk2_initgroups, METH_VARARGS, edk2_initgroups__doc__},
+#endif /* HAVE_INITGROUPS */
+#ifdef HAVE_GETPGID
+ {"getpgid", edk2_getpgid, METH_VARARGS, edk2_getpgid__doc__},
+#endif /* HAVE_GETPGID */
+#ifdef HAVE_SETPGRP
+ {"setpgrp", edk2_setpgrp, METH_NOARGS, edk2_setpgrp__doc__},
+#endif /* HAVE_SETPGRP */
+#ifdef HAVE_WAIT
+ {"wait", edk2_wait, METH_NOARGS, edk2_wait__doc__},
+#endif /* HAVE_WAIT */
+#ifdef HAVE_WAIT3
+ {"wait3", edk2_wait3, METH_VARARGS, edk2_wait3__doc__},
+#endif /* HAVE_WAIT3 */
+#ifdef HAVE_WAIT4
+ {"wait4", edk2_wait4, METH_VARARGS, edk2_wait4__doc__},
+#endif /* HAVE_WAIT4 */
+#if defined(HAVE_WAITPID) || defined(HAVE_CWAIT)
+ {"waitpid", edk2_waitpid, METH_VARARGS, edk2_waitpid__doc__},
+#endif /* HAVE_WAITPID */
+#ifdef HAVE_GETSID
+ {"getsid", edk2_getsid, METH_VARARGS, edk2_getsid__doc__},
+#endif /* HAVE_GETSID */
+#ifdef HAVE_SETSID
+ {"setsid", edk2_setsid, METH_NOARGS, edk2_setsid__doc__},
+#endif /* HAVE_SETSID */
+#ifdef HAVE_SETPGID
+ {"setpgid", edk2_setpgid, METH_VARARGS, edk2_setpgid__doc__},
+#endif /* HAVE_SETPGID */
+#ifdef HAVE_TCGETPGRP
+ {"tcgetpgrp", edk2_tcgetpgrp, METH_VARARGS, edk2_tcgetpgrp__doc__},
+#endif /* HAVE_TCGETPGRP */
+#ifdef HAVE_TCSETPGRP
+ {"tcsetpgrp", edk2_tcsetpgrp, METH_VARARGS, edk2_tcsetpgrp__doc__},
+#endif /* HAVE_TCSETPGRP */
+ {"open", edk2_open, METH_VARARGS, edk2_open__doc__},
+ {"close", edk2_close, METH_VARARGS, edk2_close__doc__},
+ {"closerange", edk2_closerange, METH_VARARGS, edk2_closerange__doc__},
+ {"dup", edk2_dup, METH_VARARGS, edk2_dup__doc__},
+ {"dup2", edk2_dup2, METH_VARARGS, edk2_dup2__doc__},
+ {"lseek", edk2_lseek, METH_VARARGS, edk2_lseek__doc__},
+ {"read", edk2_read, METH_VARARGS, edk2_read__doc__},
+ {"write", edk2_write, METH_VARARGS, edk2_write__doc__},
+ {"fstat", edk2_fstat, METH_VARARGS, edk2_fstat__doc__},
+ {"fdopen", edk2_fdopen, METH_VARARGS, edk2_fdopen__doc__},
+ {"isatty", edk2_isatty, METH_VARARGS, edk2_isatty__doc__},
+#ifdef HAVE_PIPE
+ {"pipe", edk2_pipe, METH_NOARGS, edk2_pipe__doc__},
+#endif
+#ifdef HAVE_MKFIFO
+ {"mkfifo", edk2_mkfifo, METH_VARARGS, edk2_mkfifo__doc__},
+#endif
+#if defined(HAVE_MKNOD) && defined(HAVE_MAKEDEV)
+ {"mknod", edk2_mknod, METH_VARARGS, edk2_mknod__doc__},
+#endif
+#ifdef HAVE_DEVICE_MACROS
+ {"major", edk2_major, METH_VARARGS, edk2_major__doc__},
+ {"minor", edk2_minor, METH_VARARGS, edk2_minor__doc__},
+ {"makedev", edk2_makedev, METH_VARARGS, edk2_makedev__doc__},
+#endif
+#ifdef HAVE_FTRUNCATE
+ {"ftruncate", edk2_ftruncate, METH_VARARGS, edk2_ftruncate__doc__},
+#endif
+#ifdef HAVE_PUTENV
+ {"putenv", edk2_putenv, METH_VARARGS, edk2_putenv__doc__},
+#endif
+#ifdef HAVE_UNSETENV
+ {"unsetenv", edk2_unsetenv, METH_VARARGS, edk2_unsetenv__doc__},
+#endif
+ {"strerror", edk2_strerror, METH_VARARGS, edk2_strerror__doc__},
+#ifdef HAVE_FCHDIR
+ {"fchdir", edk2_fchdir, METH_O, edk2_fchdir__doc__},
+#endif
+#ifdef HAVE_FSYNC
+ {"fsync", edk2_fsync, METH_O, edk2_fsync__doc__},
+#endif
+#ifdef HAVE_FDATASYNC
+ {"fdatasync", edk2_fdatasync, METH_O, edk2_fdatasync__doc__},
+#endif
+#ifdef HAVE_SYS_WAIT_H
+#ifdef WCOREDUMP
+ {"WCOREDUMP", edk2_WCOREDUMP, METH_VARARGS, edk2_WCOREDUMP__doc__},
+#endif /* WCOREDUMP */
+#ifdef WIFCONTINUED
+ {"WIFCONTINUED",edk2_WIFCONTINUED, METH_VARARGS, edk2_WIFCONTINUED__doc__},
+#endif /* WIFCONTINUED */
+#ifdef WIFSTOPPED
+ {"WIFSTOPPED", edk2_WIFSTOPPED, METH_VARARGS, edk2_WIFSTOPPED__doc__},
+#endif /* WIFSTOPPED */
+#ifdef WIFSIGNALED
+ {"WIFSIGNALED", edk2_WIFSIGNALED, METH_VARARGS, edk2_WIFSIGNALED__doc__},
+#endif /* WIFSIGNALED */
+#ifdef WIFEXITED
+ {"WIFEXITED", edk2_WIFEXITED, METH_VARARGS, edk2_WIFEXITED__doc__},
+#endif /* WIFEXITED */
+#ifdef WEXITSTATUS
+ {"WEXITSTATUS", edk2_WEXITSTATUS, METH_VARARGS, edk2_WEXITSTATUS__doc__},
+#endif /* WEXITSTATUS */
+#ifdef WTERMSIG
+ {"WTERMSIG", edk2_WTERMSIG, METH_VARARGS, edk2_WTERMSIG__doc__},
+#endif /* WTERMSIG */
+#ifdef WSTOPSIG
+ {"WSTOPSIG", edk2_WSTOPSIG, METH_VARARGS, edk2_WSTOPSIG__doc__},
+#endif /* WSTOPSIG */
+#endif /* HAVE_SYS_WAIT_H */
+#if defined(HAVE_FSTATVFS) && defined(HAVE_SYS_STATVFS_H)
+ {"fstatvfs", edk2_fstatvfs, METH_VARARGS, edk2_fstatvfs__doc__},
+#endif
+#if defined(HAVE_STATVFS) && defined(HAVE_SYS_STATVFS_H)
+ {"statvfs", edk2_statvfs, METH_VARARGS, edk2_statvfs__doc__},
+#endif
+#if !defined(UEFI_C_SOURCE)
+#ifdef HAVE_TMPFILE
+ {"tmpfile", edk2_tmpfile, METH_NOARGS, edk2_tmpfile__doc__},
+#endif
+#ifdef HAVE_TEMPNAM
+ {"tempnam", edk2_tempnam, METH_VARARGS, edk2_tempnam__doc__},
+#endif
+#ifdef HAVE_TMPNAM
+ {"tmpnam", edk2_tmpnam, METH_NOARGS, edk2_tmpnam__doc__},
+#endif
+#endif
+#ifdef HAVE_CONFSTR
+ {"confstr", edk2_confstr, METH_VARARGS, edk2_confstr__doc__},
+#endif
+#ifdef HAVE_SYSCONF
+ {"sysconf", edk2_sysconf, METH_VARARGS, edk2_sysconf__doc__},
+#endif
+#ifdef HAVE_FPATHCONF
+ {"fpathconf", edk2_fpathconf, METH_VARARGS, edk2_fpathconf__doc__},
+#endif
+#ifdef HAVE_PATHCONF
+ {"pathconf", edk2_pathconf, METH_VARARGS, edk2_pathconf__doc__},
+#endif
+ {"abort", edk2_abort, METH_NOARGS, edk2_abort__doc__},
+ {NULL, NULL} /* Sentinel */
+};
+
+
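+/* ins() adds a single integer constant to the module; all_ins() below
+   registers every access-mode, open-flag, wait-option, sysexits and spawn
+   constant that the platform headers define. */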
+static int
+ins(PyObject *module, char *symbol, long value)
+{
+ return PyModule_AddIntConstant(module, symbol, value);
+}
+
+static int
+all_ins(PyObject *d)
+{
+#ifdef F_OK
+ if (ins(d, "F_OK", (long)F_OK)) return -1;
+#endif
+#ifdef R_OK
+ if (ins(d, "R_OK", (long)R_OK)) return -1;
+#endif
+#ifdef W_OK
+ if (ins(d, "W_OK", (long)W_OK)) return -1;
+#endif
+#ifdef X_OK
+ if (ins(d, "X_OK", (long)X_OK)) return -1;
+#endif
+#ifdef NGROUPS_MAX
+ if (ins(d, "NGROUPS_MAX", (long)NGROUPS_MAX)) return -1;
+#endif
+#ifdef TMP_MAX
+ if (ins(d, "TMP_MAX", (long)TMP_MAX)) return -1;
+#endif
+#ifdef WCONTINUED
+ if (ins(d, "WCONTINUED", (long)WCONTINUED)) return -1;
+#endif
+#ifdef WNOHANG
+ if (ins(d, "WNOHANG", (long)WNOHANG)) return -1;
+#endif
+#ifdef WUNTRACED
+ if (ins(d, "WUNTRACED", (long)WUNTRACED)) return -1;
+#endif
+#ifdef O_RDONLY
+ if (ins(d, "O_RDONLY", (long)O_RDONLY)) return -1;
+#endif
+#ifdef O_WRONLY
+ if (ins(d, "O_WRONLY", (long)O_WRONLY)) return -1;
+#endif
+#ifdef O_RDWR
+ if (ins(d, "O_RDWR", (long)O_RDWR)) return -1;
+#endif
+#ifdef O_NDELAY
+ if (ins(d, "O_NDELAY", (long)O_NDELAY)) return -1;
+#endif
+#ifdef O_NONBLOCK
+ if (ins(d, "O_NONBLOCK", (long)O_NONBLOCK)) return -1;
+#endif
+#ifdef O_APPEND
+ if (ins(d, "O_APPEND", (long)O_APPEND)) return -1;
+#endif
+#ifdef O_DSYNC
+ if (ins(d, "O_DSYNC", (long)O_DSYNC)) return -1;
+#endif
+#ifdef O_RSYNC
+ if (ins(d, "O_RSYNC", (long)O_RSYNC)) return -1;
+#endif
+#ifdef O_SYNC
+ if (ins(d, "O_SYNC", (long)O_SYNC)) return -1;
+#endif
+#ifdef O_NOCTTY
+ if (ins(d, "O_NOCTTY", (long)O_NOCTTY)) return -1;
+#endif
+#ifdef O_CREAT
+ if (ins(d, "O_CREAT", (long)O_CREAT)) return -1;
+#endif
+#ifdef O_EXCL
+ if (ins(d, "O_EXCL", (long)O_EXCL)) return -1;
+#endif
+#ifdef O_TRUNC
+ if (ins(d, "O_TRUNC", (long)O_TRUNC)) return -1;
+#endif
+#ifdef O_BINARY
+ if (ins(d, "O_BINARY", (long)O_BINARY)) return -1;
+#endif
+#ifdef O_TEXT
+ if (ins(d, "O_TEXT", (long)O_TEXT)) return -1;
+#endif
+#ifdef O_LARGEFILE
+ if (ins(d, "O_LARGEFILE", (long)O_LARGEFILE)) return -1;
+#endif
+#ifdef O_SHLOCK
+ if (ins(d, "O_SHLOCK", (long)O_SHLOCK)) return -1;
+#endif
+#ifdef O_EXLOCK
+ if (ins(d, "O_EXLOCK", (long)O_EXLOCK)) return -1;
+#endif
+
+/* MS Windows */
+#ifdef O_NOINHERIT
+ /* Don't inherit in child processes. */
+ if (ins(d, "O_NOINHERIT", (long)O_NOINHERIT)) return -1;
+#endif
+#ifdef _O_SHORT_LIVED
+ /* Optimize for short life (keep in memory). */
+ /* MS forgot to define this one with a non-underscore form too. */
+ if (ins(d, "O_SHORT_LIVED", (long)_O_SHORT_LIVED)) return -1;
+#endif
+#ifdef O_TEMPORARY
+ /* Automatically delete when last handle is closed. */
+ if (ins(d, "O_TEMPORARY", (long)O_TEMPORARY)) return -1;
+#endif
+#ifdef O_RANDOM
+ /* Optimize for random access. */
+ if (ins(d, "O_RANDOM", (long)O_RANDOM)) return -1;
+#endif
+#ifdef O_SEQUENTIAL
+ /* Optimize for sequential access. */
+ if (ins(d, "O_SEQUENTIAL", (long)O_SEQUENTIAL)) return -1;
+#endif
+
+/* GNU extensions. */
+#ifdef O_ASYNC
+ /* Send a SIGIO signal whenever input or output
+ becomes available on file descriptor */
+ if (ins(d, "O_ASYNC", (long)O_ASYNC)) return -1;
+#endif
+#ifdef O_DIRECT
+ /* Direct disk access. */
+ if (ins(d, "O_DIRECT", (long)O_DIRECT)) return -1;
+#endif
+#ifdef O_DIRECTORY
+ /* Must be a directory. */
+ if (ins(d, "O_DIRECTORY", (long)O_DIRECTORY)) return -1;
+#endif
+#ifdef O_NOFOLLOW
+ /* Do not follow links. */
+ if (ins(d, "O_NOFOLLOW", (long)O_NOFOLLOW)) return -1;
+#endif
+#ifdef O_NOATIME
+ /* Do not update the access time. */
+ if (ins(d, "O_NOATIME", (long)O_NOATIME)) return -1;
+#endif
+
+ /* These come from sysexits.h */
+#ifdef EX_OK
+ if (ins(d, "EX_OK", (long)EX_OK)) return -1;
+#endif /* EX_OK */
+#ifdef EX_USAGE
+ if (ins(d, "EX_USAGE", (long)EX_USAGE)) return -1;
+#endif /* EX_USAGE */
+#ifdef EX_DATAERR
+ if (ins(d, "EX_DATAERR", (long)EX_DATAERR)) return -1;
+#endif /* EX_DATAERR */
+#ifdef EX_NOINPUT
+ if (ins(d, "EX_NOINPUT", (long)EX_NOINPUT)) return -1;
+#endif /* EX_NOINPUT */
+#ifdef EX_NOUSER
+ if (ins(d, "EX_NOUSER", (long)EX_NOUSER)) return -1;
+#endif /* EX_NOUSER */
+#ifdef EX_NOHOST
+ if (ins(d, "EX_NOHOST", (long)EX_NOHOST)) return -1;
+#endif /* EX_NOHOST */
+#ifdef EX_UNAVAILABLE
+ if (ins(d, "EX_UNAVAILABLE", (long)EX_UNAVAILABLE)) return -1;
+#endif /* EX_UNAVAILABLE */
+#ifdef EX_SOFTWARE
+ if (ins(d, "EX_SOFTWARE", (long)EX_SOFTWARE)) return -1;
+#endif /* EX_SOFTWARE */
+#ifdef EX_OSERR
+ if (ins(d, "EX_OSERR", (long)EX_OSERR)) return -1;
+#endif /* EX_OSERR */
+#ifdef EX_OSFILE
+ if (ins(d, "EX_OSFILE", (long)EX_OSFILE)) return -1;
+#endif /* EX_OSFILE */
+#ifdef EX_CANTCREAT
+ if (ins(d, "EX_CANTCREAT", (long)EX_CANTCREAT)) return -1;
+#endif /* EX_CANTCREAT */
+#ifdef EX_IOERR
+ if (ins(d, "EX_IOERR", (long)EX_IOERR)) return -1;
+#endif /* EX_IOERR */
+#ifdef EX_TEMPFAIL
+ if (ins(d, "EX_TEMPFAIL", (long)EX_TEMPFAIL)) return -1;
+#endif /* EX_TEMPFAIL */
+#ifdef EX_PROTOCOL
+ if (ins(d, "EX_PROTOCOL", (long)EX_PROTOCOL)) return -1;
+#endif /* EX_PROTOCOL */
+#ifdef EX_NOPERM
+ if (ins(d, "EX_NOPERM", (long)EX_NOPERM)) return -1;
+#endif /* EX_NOPERM */
+#ifdef EX_CONFIG
+ if (ins(d, "EX_CONFIG", (long)EX_CONFIG)) return -1;
+#endif /* EX_CONFIG */
+#ifdef EX_NOTFOUND
+ if (ins(d, "EX_NOTFOUND", (long)EX_NOTFOUND)) return -1;
+#endif /* EX_NOTFOUND */
+
+#ifdef HAVE_SPAWNV
+ if (ins(d, "P_WAIT", (long)_P_WAIT)) return -1;
+ if (ins(d, "P_NOWAIT", (long)_P_NOWAIT)) return -1;
+ if (ins(d, "P_OVERLAY", (long)_OLD_P_OVERLAY)) return -1;
+ if (ins(d, "P_NOWAITO", (long)_P_NOWAITO)) return -1;
+ if (ins(d, "P_DETACH", (long)_P_DETACH)) return -1;
+#endif
+ return 0;
+}
+
+static struct PyModuleDef edk2module = {
+ PyModuleDef_HEAD_INIT,
+ "edk2",
+ edk2__doc__,
+ -1,
+ edk2_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+#define INITFUNC initedk2
+#define MODNAME "edk2"
+
+PyMODINIT_FUNC
+PyEdk2__Init(void)
+{
+ PyObject *m;
+
+#ifndef UEFI_C_SOURCE
+ PyObject *v;
+#endif
+
+ m = PyModule_Create(&edk2module);
+ if (m == NULL)
+ return m;
+
+#ifndef UEFI_C_SOURCE
+ /* Initialize environ dictionary */
+ v = convertenviron();
+ Py_XINCREF(v);
+ if (v == NULL || PyModule_AddObject(m, "environ", v) != 0)
+ return NULL;
+ Py_DECREF(v);
+#endif /* UEFI_C_SOURCE */
+
+ if (all_ins(m))
+ return NULL;
+
+ Py_INCREF(PyExc_OSError);
+ PyModule_AddObject(m, "error", PyExc_OSError);
+
+#ifdef HAVE_PUTENV
+ if (edk2_putenv_garbage == NULL)
+ edk2_putenv_garbage = PyDict_New();
+#endif
+
+ if (!initialized) {
+ stat_result_desc.name = MODNAME ".stat_result";
+ stat_result_desc.fields[2].name = PyStructSequence_UnnamedField;
+ stat_result_desc.fields[3].name = PyStructSequence_UnnamedField;
+ stat_result_desc.fields[4].name = PyStructSequence_UnnamedField;
+ PyStructSequence_InitType(&StatResultType, &stat_result_desc);
+ structseq_new = StatResultType.tp_new;
+ StatResultType.tp_new = statresult_new;
+
+ //statvfs_result_desc.name = MODNAME ".statvfs_result";
+ //PyStructSequence_InitType(&StatVFSResultType, &statvfs_result_desc);
+#ifdef NEED_TICKS_PER_SECOND
+# if defined(HAVE_SYSCONF) && defined(_SC_CLK_TCK)
+ ticks_per_second = sysconf(_SC_CLK_TCK);
+# elif defined(HZ)
+ ticks_per_second = HZ;
+# else
+ ticks_per_second = 60; /* magic fallback value; may be bogus */
+# endif
+#endif
+ }
+ Py_INCREF((PyObject*) &StatResultType);
+ PyModule_AddObject(m, "stat_result", (PyObject*) &StatResultType);
+ //Py_INCREF((PyObject*) &StatVFSResultType);
+ //PyModule_AddObject(m, "statvfs_result",
+ // (PyObject*) &StatVFSResultType);
+ initialized = 1;
+ return m;
+
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+
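For context, here is a minimal sketch of how the pieces registered above are expected to look from the interpreter side -- the flag constants inserted by all_ins(), the "error" alias, and the stat_result type. The import name "edk2" and the presence of any given O_* flag depend on the build configuration, so treat this as an assumption-laden illustration rather than a guarantee:

    import edk2

    # Flag constants are only present when the corresponding O_* macro was
    # defined at build time (see the #ifdef blocks in all_ins() above).
    print(getattr(edk2, "O_CREAT", None), getattr(edk2, "O_TRUNC", None))

    # "error" is registered as an alias of the built-in OSError.
    print(edk2.error is OSError)

    # stat_result is the struct-sequence type initialized in PyEdk2__Init().
    print(edk2.stat_result)
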
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
new file mode 100644
index 00000000..b8e96c48
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
@@ -0,0 +1,890 @@
+/* Errno module
+
+ Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+*/
+
+#include "Python.h"
+
+/* Windows socket errors (WSA*) */
+#ifdef MS_WINDOWS
+#include <windows.h>
+#endif
+
+/*
+ * Pull in the system error definitions
+ */
+
+static PyMethodDef errno_methods[] = {
+ {NULL, NULL}
+};
+
+/* Helper function doing the dictionary inserting */
+
+static void
+_inscode(PyObject *d, PyObject *de, const char *name, int code)
+{
+ PyObject *u = PyUnicode_FromString(name);
+ PyObject *v = PyLong_FromLong((long) code);
+
+ /* Don't bother checking for errors; they'll be caught at the end
+ * of the module initialization function by the caller of
+ * PyInit_errno().
+ */
+ if (u && v) {
+ /* insert in modules dict */
+ PyDict_SetItem(d, u, v);
+ /* insert in errorcode dict */
+ PyDict_SetItem(de, v, u);
+ }
+ Py_XDECREF(u);
+ Py_XDECREF(v);
+}
+
+PyDoc_STRVAR(errno__doc__,
+"This module makes available standard errno system symbols.\n\
+\n\
+The value of each symbol is the corresponding integer value,\n\
+e.g., on most systems, errno.ENOENT equals the integer 2.\n\
+\n\
+The dictionary errno.errorcode maps numeric codes to symbol names,\n\
+e.g., errno.errorcode[2] could be the string 'ENOENT'.\n\
+\n\
+Symbols that are not relevant to the underlying system are not defined.\n\
+\n\
+To map error codes to error messages, use the function os.strerror(),\n\
+e.g. os.strerror(2) could return 'No such file or directory'.");
+
+static struct PyModuleDef errnomodule = {
+ PyModuleDef_HEAD_INIT,
+ "errno",
+ errno__doc__,
+ -1,
+ errno_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit_errno(void)
+{
+ PyObject *m, *d, *de;
+ m = PyModule_Create(&errnomodule);
+ if (m == NULL)
+ return NULL;
+ d = PyModule_GetDict(m);
+ de = PyDict_New();
+ if (!d || !de || PyDict_SetItemString(d, "errorcode", de) < 0)
+ return NULL;
+
+/* Macro so I don't have to edit each and every line below... */
+#define inscode(d, ds, de, name, code, comment) _inscode(d, de, name, code)
+
+ /*
+ * The names and comments are borrowed from linux/include/errno.h,
+ * which should be pretty all-inclusive. However, the Solaris specific
+ * names and comments are borrowed from sys/errno.h in Solaris.
+ * MacOSX specific names and comments are borrowed from sys/errno.h in
+ * MacOSX.
+ */
+
+#ifdef ENODEV
+ inscode(d, ds, de, "ENODEV", ENODEV, "No such device");
+#endif
+#ifdef ENOCSI
+ inscode(d, ds, de, "ENOCSI", ENOCSI, "No CSI structure available");
+#endif
+#ifdef EHOSTUNREACH
+ inscode(d, ds, de, "EHOSTUNREACH", EHOSTUNREACH, "No route to host");
+#else
+#ifdef WSAEHOSTUNREACH
+ inscode(d, ds, de, "EHOSTUNREACH", WSAEHOSTUNREACH, "No route to host");
+#endif
+#endif
+#ifdef ENOMSG
+ inscode(d, ds, de, "ENOMSG", ENOMSG, "No message of desired type");
+#endif
+#ifdef EUCLEAN
+ inscode(d, ds, de, "EUCLEAN", EUCLEAN, "Structure needs cleaning");
+#endif
+#ifdef EL2NSYNC
+ inscode(d, ds, de, "EL2NSYNC", EL2NSYNC, "Level 2 not synchronized");
+#endif
+#ifdef EL2HLT
+ inscode(d, ds, de, "EL2HLT", EL2HLT, "Level 2 halted");
+#endif
+#ifdef ENODATA
+ inscode(d, ds, de, "ENODATA", ENODATA, "No data available");
+#endif
+#ifdef ENOTBLK
+ inscode(d, ds, de, "ENOTBLK", ENOTBLK, "Block device required");
+#endif
+#ifdef ENOSYS
+ inscode(d, ds, de, "ENOSYS", ENOSYS, "Function not implemented");
+#endif
+#ifdef EPIPE
+ inscode(d, ds, de, "EPIPE", EPIPE, "Broken pipe");
+#endif
+#ifdef EINVAL
+ inscode(d, ds, de, "EINVAL", EINVAL, "Invalid argument");
+#else
+#ifdef WSAEINVAL
+ inscode(d, ds, de, "EINVAL", WSAEINVAL, "Invalid argument");
+#endif
+#endif
+#ifdef EOVERFLOW
+ inscode(d, ds, de, "EOVERFLOW", EOVERFLOW, "Value too large for defined data type");
+#endif
+#ifdef EADV
+ inscode(d, ds, de, "EADV", EADV, "Advertise error");
+#endif
+#ifdef EINTR
+ inscode(d, ds, de, "EINTR", EINTR, "Interrupted system call");
+#else
+#ifdef WSAEINTR
+ inscode(d, ds, de, "EINTR", WSAEINTR, "Interrupted system call");
+#endif
+#endif
+#ifdef EUSERS
+ inscode(d, ds, de, "EUSERS", EUSERS, "Too many users");
+#else
+#ifdef WSAEUSERS
+ inscode(d, ds, de, "EUSERS", WSAEUSERS, "Too many users");
+#endif
+#endif
+#ifdef ENOTEMPTY
+ inscode(d, ds, de, "ENOTEMPTY", ENOTEMPTY, "Directory not empty");
+#else
+#ifdef WSAENOTEMPTY
+ inscode(d, ds, de, "ENOTEMPTY", WSAENOTEMPTY, "Directory not empty");
+#endif
+#endif
+#ifdef ENOBUFS
+ inscode(d, ds, de, "ENOBUFS", ENOBUFS, "No buffer space available");
+#else
+#ifdef WSAENOBUFS
+ inscode(d, ds, de, "ENOBUFS", WSAENOBUFS, "No buffer space available");
+#endif
+#endif
+#ifdef EPROTO
+ inscode(d, ds, de, "EPROTO", EPROTO, "Protocol error");
+#endif
+#ifdef EREMOTE
+ inscode(d, ds, de, "EREMOTE", EREMOTE, "Object is remote");
+#else
+#ifdef WSAEREMOTE
+ inscode(d, ds, de, "EREMOTE", WSAEREMOTE, "Object is remote");
+#endif
+#endif
+#ifdef ENAVAIL
+ inscode(d, ds, de, "ENAVAIL", ENAVAIL, "No XENIX semaphores available");
+#endif
+#ifdef ECHILD
+ inscode(d, ds, de, "ECHILD", ECHILD, "No child processes");
+#endif
+#ifdef ELOOP
+ inscode(d, ds, de, "ELOOP", ELOOP, "Too many symbolic links encountered");
+#else
+#ifdef WSAELOOP
+ inscode(d, ds, de, "ELOOP", WSAELOOP, "Too many symbolic links encountered");
+#endif
+#endif
+#ifdef EXDEV
+ inscode(d, ds, de, "EXDEV", EXDEV, "Cross-device link");
+#endif
+#ifdef E2BIG
+ inscode(d, ds, de, "E2BIG", E2BIG, "Arg list too long");
+#endif
+#ifdef ESRCH
+ inscode(d, ds, de, "ESRCH", ESRCH, "No such process");
+#endif
+#ifdef EMSGSIZE
+ inscode(d, ds, de, "EMSGSIZE", EMSGSIZE, "Message too long");
+#else
+#ifdef WSAEMSGSIZE
+ inscode(d, ds, de, "EMSGSIZE", WSAEMSGSIZE, "Message too long");
+#endif
+#endif
+#ifdef EAFNOSUPPORT
+ inscode(d, ds, de, "EAFNOSUPPORT", EAFNOSUPPORT, "Address family not supported by protocol");
+#else
+#ifdef WSAEAFNOSUPPORT
+ inscode(d, ds, de, "EAFNOSUPPORT", WSAEAFNOSUPPORT, "Address family not supported by protocol");
+#endif
+#endif
+#ifdef EBADR
+ inscode(d, ds, de, "EBADR", EBADR, "Invalid request descriptor");
+#endif
+#ifdef EHOSTDOWN
+ inscode(d, ds, de, "EHOSTDOWN", EHOSTDOWN, "Host is down");
+#else
+#ifdef WSAEHOSTDOWN
+ inscode(d, ds, de, "EHOSTDOWN", WSAEHOSTDOWN, "Host is down");
+#endif
+#endif
+#ifdef EPFNOSUPPORT
+ inscode(d, ds, de, "EPFNOSUPPORT", EPFNOSUPPORT, "Protocol family not supported");
+#else
+#ifdef WSAEPFNOSUPPORT
+ inscode(d, ds, de, "EPFNOSUPPORT", WSAEPFNOSUPPORT, "Protocol family not supported");
+#endif
+#endif
+#ifdef ENOPROTOOPT
+ inscode(d, ds, de, "ENOPROTOOPT", ENOPROTOOPT, "Protocol not available");
+#else
+#ifdef WSAENOPROTOOPT
+ inscode(d, ds, de, "ENOPROTOOPT", WSAENOPROTOOPT, "Protocol not available");
+#endif
+#endif
+#ifdef EBUSY
+ inscode(d, ds, de, "EBUSY", EBUSY, "Device or resource busy");
+#endif
+#ifdef EWOULDBLOCK
+ inscode(d, ds, de, "EWOULDBLOCK", EWOULDBLOCK, "Operation would block");
+#else
+#ifdef WSAEWOULDBLOCK
+ inscode(d, ds, de, "EWOULDBLOCK", WSAEWOULDBLOCK, "Operation would block");
+#endif
+#endif
+#ifdef EBADFD
+ inscode(d, ds, de, "EBADFD", EBADFD, "File descriptor in bad state");
+#endif
+#ifdef EDOTDOT
+ inscode(d, ds, de, "EDOTDOT", EDOTDOT, "RFS specific error");
+#endif
+#ifdef EISCONN
+ inscode(d, ds, de, "EISCONN", EISCONN, "Transport endpoint is already connected");
+#else
+#ifdef WSAEISCONN
+ inscode(d, ds, de, "EISCONN", WSAEISCONN, "Transport endpoint is already connected");
+#endif
+#endif
+#ifdef ENOANO
+ inscode(d, ds, de, "ENOANO", ENOANO, "No anode");
+#endif
+#ifdef ESHUTDOWN
+ inscode(d, ds, de, "ESHUTDOWN", ESHUTDOWN, "Cannot send after transport endpoint shutdown");
+#else
+#ifdef WSAESHUTDOWN
+ inscode(d, ds, de, "ESHUTDOWN", WSAESHUTDOWN, "Cannot send after transport endpoint shutdown");
+#endif
+#endif
+#ifdef ECHRNG
+ inscode(d, ds, de, "ECHRNG", ECHRNG, "Channel number out of range");
+#endif
+#ifdef ELIBBAD
+ inscode(d, ds, de, "ELIBBAD", ELIBBAD, "Accessing a corrupted shared library");
+#endif
+#ifdef ENONET
+ inscode(d, ds, de, "ENONET", ENONET, "Machine is not on the network");
+#endif
+#ifdef EBADE
+ inscode(d, ds, de, "EBADE", EBADE, "Invalid exchange");
+#endif
+#ifdef EBADF
+ inscode(d, ds, de, "EBADF", EBADF, "Bad file number");
+#else
+#ifdef WSAEBADF
+ inscode(d, ds, de, "EBADF", WSAEBADF, "Bad file number");
+#endif
+#endif
+#ifdef EMULTIHOP
+ inscode(d, ds, de, "EMULTIHOP", EMULTIHOP, "Multihop attempted");
+#endif
+#ifdef EIO
+ inscode(d, ds, de, "EIO", EIO, "I/O error");
+#endif
+#ifdef EUNATCH
+ inscode(d, ds, de, "EUNATCH", EUNATCH, "Protocol driver not attached");
+#endif
+#ifdef EPROTOTYPE
+ inscode(d, ds, de, "EPROTOTYPE", EPROTOTYPE, "Protocol wrong type for socket");
+#else
+#ifdef WSAEPROTOTYPE
+ inscode(d, ds, de, "EPROTOTYPE", WSAEPROTOTYPE, "Protocol wrong type for socket");
+#endif
+#endif
+#ifdef ENOSPC
+ inscode(d, ds, de, "ENOSPC", ENOSPC, "No space left on device");
+#endif
+#ifdef ENOEXEC
+ inscode(d, ds, de, "ENOEXEC", ENOEXEC, "Exec format error");
+#endif
+#ifdef EALREADY
+ inscode(d, ds, de, "EALREADY", EALREADY, "Operation already in progress");
+#else
+#ifdef WSAEALREADY
+ inscode(d, ds, de, "EALREADY", WSAEALREADY, "Operation already in progress");
+#endif
+#endif
+#ifdef ENETDOWN
+ inscode(d, ds, de, "ENETDOWN", ENETDOWN, "Network is down");
+#else
+#ifdef WSAENETDOWN
+ inscode(d, ds, de, "ENETDOWN", WSAENETDOWN, "Network is down");
+#endif
+#endif
+#ifdef ENOTNAM
+ inscode(d, ds, de, "ENOTNAM", ENOTNAM, "Not a XENIX named type file");
+#endif
+#ifdef EACCES
+ inscode(d, ds, de, "EACCES", EACCES, "Permission denied");
+#else
+#ifdef WSAEACCES
+ inscode(d, ds, de, "EACCES", WSAEACCES, "Permission denied");
+#endif
+#endif
+#ifdef ELNRNG
+ inscode(d, ds, de, "ELNRNG", ELNRNG, "Link number out of range");
+#endif
+#ifdef EILSEQ
+ inscode(d, ds, de, "EILSEQ", EILSEQ, "Illegal byte sequence");
+#endif
+#ifdef ENOTDIR
+ inscode(d, ds, de, "ENOTDIR", ENOTDIR, "Not a directory");
+#endif
+#ifdef ENOTUNIQ
+ inscode(d, ds, de, "ENOTUNIQ", ENOTUNIQ, "Name not unique on network");
+#endif
+#ifdef EPERM
+ inscode(d, ds, de, "EPERM", EPERM, "Operation not permitted");
+#endif
+#ifdef EDOM
+ inscode(d, ds, de, "EDOM", EDOM, "Math argument out of domain of func");
+#endif
+#ifdef EXFULL
+ inscode(d, ds, de, "EXFULL", EXFULL, "Exchange full");
+#endif
+#ifdef ECONNREFUSED
+ inscode(d, ds, de, "ECONNREFUSED", ECONNREFUSED, "Connection refused");
+#else
+#ifdef WSAECONNREFUSED
+ inscode(d, ds, de, "ECONNREFUSED", WSAECONNREFUSED, "Connection refused");
+#endif
+#endif
+#ifdef EISDIR
+ inscode(d, ds, de, "EISDIR", EISDIR, "Is a directory");
+#endif
+#ifdef EPROTONOSUPPORT
+ inscode(d, ds, de, "EPROTONOSUPPORT", EPROTONOSUPPORT, "Protocol not supported");
+#else
+#ifdef WSAEPROTONOSUPPORT
+ inscode(d, ds, de, "EPROTONOSUPPORT", WSAEPROTONOSUPPORT, "Protocol not supported");
+#endif
+#endif
+#ifdef EROFS
+ inscode(d, ds, de, "EROFS", EROFS, "Read-only file system");
+#endif
+#ifdef EADDRNOTAVAIL
+ inscode(d, ds, de, "EADDRNOTAVAIL", EADDRNOTAVAIL, "Cannot assign requested address");
+#else
+#ifdef WSAEADDRNOTAVAIL
+ inscode(d, ds, de, "EADDRNOTAVAIL", WSAEADDRNOTAVAIL, "Cannot assign requested address");
+#endif
+#endif
+#ifdef EIDRM
+ inscode(d, ds, de, "EIDRM", EIDRM, "Identifier removed");
+#endif
+#ifdef ECOMM
+ inscode(d, ds, de, "ECOMM", ECOMM, "Communication error on send");
+#endif
+#ifdef ESRMNT
+ inscode(d, ds, de, "ESRMNT", ESRMNT, "Srmount error");
+#endif
+#ifdef EREMOTEIO
+ inscode(d, ds, de, "EREMOTEIO", EREMOTEIO, "Remote I/O error");
+#endif
+#ifdef EL3RST
+ inscode(d, ds, de, "EL3RST", EL3RST, "Level 3 reset");
+#endif
+#ifdef EBADMSG
+ inscode(d, ds, de, "EBADMSG", EBADMSG, "Not a data message");
+#endif
+#ifdef ENFILE
+ inscode(d, ds, de, "ENFILE", ENFILE, "File table overflow");
+#endif
+#ifdef ELIBMAX
+ inscode(d, ds, de, "ELIBMAX", ELIBMAX, "Attempting to link in too many shared libraries");
+#endif
+#ifdef ESPIPE
+ inscode(d, ds, de, "ESPIPE", ESPIPE, "Illegal seek");
+#endif
+#ifdef ENOLINK
+ inscode(d, ds, de, "ENOLINK", ENOLINK, "Link has been severed");
+#endif
+#ifdef ENETRESET
+ inscode(d, ds, de, "ENETRESET", ENETRESET, "Network dropped connection because of reset");
+#else
+#ifdef WSAENETRESET
+ inscode(d, ds, de, "ENETRESET", WSAENETRESET, "Network dropped connection because of reset");
+#endif
+#endif
+#ifdef ETIMEDOUT
+ inscode(d, ds, de, "ETIMEDOUT", ETIMEDOUT, "Connection timed out");
+#else
+#ifdef WSAETIMEDOUT
+ inscode(d, ds, de, "ETIMEDOUT", WSAETIMEDOUT, "Connection timed out");
+#endif
+#endif
+#ifdef ENOENT
+ inscode(d, ds, de, "ENOENT", ENOENT, "No such file or directory");
+#endif
+#ifdef EEXIST
+ inscode(d, ds, de, "EEXIST", EEXIST, "File exists");
+#endif
+#ifdef EDQUOT
+ inscode(d, ds, de, "EDQUOT", EDQUOT, "Quota exceeded");
+#else
+#ifdef WSAEDQUOT
+ inscode(d, ds, de, "EDQUOT", WSAEDQUOT, "Quota exceeded");
+#endif
+#endif
+#ifdef ENOSTR
+ inscode(d, ds, de, "ENOSTR", ENOSTR, "Device not a stream");
+#endif
+#ifdef EBADSLT
+ inscode(d, ds, de, "EBADSLT", EBADSLT, "Invalid slot");
+#endif
+#ifdef EBADRQC
+ inscode(d, ds, de, "EBADRQC", EBADRQC, "Invalid request code");
+#endif
+#ifdef ELIBACC
+ inscode(d, ds, de, "ELIBACC", ELIBACC, "Can not access a needed shared library");
+#endif
+#ifdef EFAULT
+ inscode(d, ds, de, "EFAULT", EFAULT, "Bad address");
+#else
+#ifdef WSAEFAULT
+ inscode(d, ds, de, "EFAULT", WSAEFAULT, "Bad address");
+#endif
+#endif
+#ifdef EFBIG
+ inscode(d, ds, de, "EFBIG", EFBIG, "File too large");
+#endif
+#ifdef EDEADLK
+ inscode(d, ds, de, "EDEADLK", EDEADLK, "Resource deadlock would occur");
+#endif
+#ifdef ENOTCONN
+ inscode(d, ds, de, "ENOTCONN", ENOTCONN, "Transport endpoint is not connected");
+#else
+#ifdef WSAENOTCONN
+ inscode(d, ds, de, "ENOTCONN", WSAENOTCONN, "Transport endpoint is not connected");
+#endif
+#endif
+#ifdef EDESTADDRREQ
+ inscode(d, ds, de, "EDESTADDRREQ", EDESTADDRREQ, "Destination address required");
+#else
+#ifdef WSAEDESTADDRREQ
+ inscode(d, ds, de, "EDESTADDRREQ", WSAEDESTADDRREQ, "Destination address required");
+#endif
+#endif
+#ifdef ELIBSCN
+ inscode(d, ds, de, "ELIBSCN", ELIBSCN, ".lib section in a.out corrupted");
+#endif
+#ifdef ENOLCK
+ inscode(d, ds, de, "ENOLCK", ENOLCK, "No record locks available");
+#endif
+#ifdef EISNAM
+ inscode(d, ds, de, "EISNAM", EISNAM, "Is a named type file");
+#endif
+#ifdef ECONNABORTED
+ inscode(d, ds, de, "ECONNABORTED", ECONNABORTED, "Software caused connection abort");
+#else
+#ifdef WSAECONNABORTED
+ inscode(d, ds, de, "ECONNABORTED", WSAECONNABORTED, "Software caused connection abort");
+#endif
+#endif
+#ifdef ENETUNREACH
+ inscode(d, ds, de, "ENETUNREACH", ENETUNREACH, "Network is unreachable");
+#else
+#ifdef WSAENETUNREACH
+ inscode(d, ds, de, "ENETUNREACH", WSAENETUNREACH, "Network is unreachable");
+#endif
+#endif
+#ifdef ESTALE
+ inscode(d, ds, de, "ESTALE", ESTALE, "Stale NFS file handle");
+#else
+#ifdef WSAESTALE
+ inscode(d, ds, de, "ESTALE", WSAESTALE, "Stale NFS file handle");
+#endif
+#endif
+#ifdef ENOSR
+ inscode(d, ds, de, "ENOSR", ENOSR, "Out of streams resources");
+#endif
+#ifdef ENOMEM
+ inscode(d, ds, de, "ENOMEM", ENOMEM, "Out of memory");
+#endif
+#ifdef ENOTSOCK
+ inscode(d, ds, de, "ENOTSOCK", ENOTSOCK, "Socket operation on non-socket");
+#else
+#ifdef WSAENOTSOCK
+ inscode(d, ds, de, "ENOTSOCK", WSAENOTSOCK, "Socket operation on non-socket");
+#endif
+#endif
+#ifdef ESTRPIPE
+ inscode(d, ds, de, "ESTRPIPE", ESTRPIPE, "Streams pipe error");
+#endif
+#ifdef EMLINK
+ inscode(d, ds, de, "EMLINK", EMLINK, "Too many links");
+#endif
+#ifdef ERANGE
+ inscode(d, ds, de, "ERANGE", ERANGE, "Math result not representable");
+#endif
+#ifdef ELIBEXEC
+ inscode(d, ds, de, "ELIBEXEC", ELIBEXEC, "Cannot exec a shared library directly");
+#endif
+#ifdef EL3HLT
+ inscode(d, ds, de, "EL3HLT", EL3HLT, "Level 3 halted");
+#endif
+#ifdef ECONNRESET
+ inscode(d, ds, de, "ECONNRESET", ECONNRESET, "Connection reset by peer");
+#else
+#ifdef WSAECONNRESET
+ inscode(d, ds, de, "ECONNRESET", WSAECONNRESET, "Connection reset by peer");
+#endif
+#endif
+#ifdef EADDRINUSE
+ inscode(d, ds, de, "EADDRINUSE", EADDRINUSE, "Address already in use");
+#else
+#ifdef WSAEADDRINUSE
+ inscode(d, ds, de, "EADDRINUSE", WSAEADDRINUSE, "Address already in use");
+#endif
+#endif
+#ifdef EOPNOTSUPP
+ inscode(d, ds, de, "EOPNOTSUPP", EOPNOTSUPP, "Operation not supported on transport endpoint");
+#else
+#ifdef WSAEOPNOTSUPP
+ inscode(d, ds, de, "EOPNOTSUPP", WSAEOPNOTSUPP, "Operation not supported on transport endpoint");
+#endif
+#endif
+#ifdef EREMCHG
+ inscode(d, ds, de, "EREMCHG", EREMCHG, "Remote address changed");
+#endif
+#ifdef EAGAIN
+ inscode(d, ds, de, "EAGAIN", EAGAIN, "Try again");
+#endif
+#ifdef ENAMETOOLONG
+ inscode(d, ds, de, "ENAMETOOLONG", ENAMETOOLONG, "File name too long");
+#else
+#ifdef WSAENAMETOOLONG
+ inscode(d, ds, de, "ENAMETOOLONG", WSAENAMETOOLONG, "File name too long");
+#endif
+#endif
+#ifdef ENOTTY
+ inscode(d, ds, de, "ENOTTY", ENOTTY, "Not a typewriter");
+#endif
+#ifdef ERESTART
+ inscode(d, ds, de, "ERESTART", ERESTART, "Interrupted system call should be restarted");
+#endif
+#ifdef ESOCKTNOSUPPORT
+ inscode(d, ds, de, "ESOCKTNOSUPPORT", ESOCKTNOSUPPORT, "Socket type not supported");
+#else
+#ifdef WSAESOCKTNOSUPPORT
+ inscode(d, ds, de, "ESOCKTNOSUPPORT", WSAESOCKTNOSUPPORT, "Socket type not supported");
+#endif
+#endif
+#ifdef ETIME
+ inscode(d, ds, de, "ETIME", ETIME, "Timer expired");
+#endif
+#ifdef EBFONT
+ inscode(d, ds, de, "EBFONT", EBFONT, "Bad font file format");
+#endif
+#ifdef EDEADLOCK
+ inscode(d, ds, de, "EDEADLOCK", EDEADLOCK, "Error EDEADLOCK");
+#endif
+#ifdef ETOOMANYREFS
+ inscode(d, ds, de, "ETOOMANYREFS", ETOOMANYREFS, "Too many references: cannot splice");
+#else
+#ifdef WSAETOOMANYREFS
+ inscode(d, ds, de, "ETOOMANYREFS", WSAETOOMANYREFS, "Too many references: cannot splice");
+#endif
+#endif
+#ifdef EMFILE
+ inscode(d, ds, de, "EMFILE", EMFILE, "Too many open files");
+#else
+#ifdef WSAEMFILE
+ inscode(d, ds, de, "EMFILE", WSAEMFILE, "Too many open files");
+#endif
+#endif
+#ifdef ETXTBSY
+ inscode(d, ds, de, "ETXTBSY", ETXTBSY, "Text file busy");
+#endif
+#ifdef EINPROGRESS
+ inscode(d, ds, de, "EINPROGRESS", EINPROGRESS, "Operation now in progress");
+#else
+#ifdef WSAEINPROGRESS
+ inscode(d, ds, de, "EINPROGRESS", WSAEINPROGRESS, "Operation now in progress");
+#endif
+#endif
+#ifdef ENXIO
+ inscode(d, ds, de, "ENXIO", ENXIO, "No such device or address");
+#endif
+#ifdef ENOPKG
+ inscode(d, ds, de, "ENOPKG", ENOPKG, "Package not installed");
+#endif
+#ifdef WSASY
+ inscode(d, ds, de, "WSASY", WSASY, "Error WSASY");
+#endif
+#ifdef WSAEHOSTDOWN
+ inscode(d, ds, de, "WSAEHOSTDOWN", WSAEHOSTDOWN, "Host is down");
+#endif
+#ifdef WSAENETDOWN
+ inscode(d, ds, de, "WSAENETDOWN", WSAENETDOWN, "Network is down");
+#endif
+#ifdef WSAENOTSOCK
+ inscode(d, ds, de, "WSAENOTSOCK", WSAENOTSOCK, "Socket operation on non-socket");
+#endif
+#ifdef WSAEHOSTUNREACH
+ inscode(d, ds, de, "WSAEHOSTUNREACH", WSAEHOSTUNREACH, "No route to host");
+#endif
+#ifdef WSAELOOP
+ inscode(d, ds, de, "WSAELOOP", WSAELOOP, "Too many symbolic links encountered");
+#endif
+#ifdef WSAEMFILE
+ inscode(d, ds, de, "WSAEMFILE", WSAEMFILE, "Too many open files");
+#endif
+#ifdef WSAESTALE
+ inscode(d, ds, de, "WSAESTALE", WSAESTALE, "Stale NFS file handle");
+#endif
+#ifdef WSAVERNOTSUPPORTED
+ inscode(d, ds, de, "WSAVERNOTSUPPORTED", WSAVERNOTSUPPORTED, "Error WSAVERNOTSUPPORTED");
+#endif
+#ifdef WSAENETUNREACH
+ inscode(d, ds, de, "WSAENETUNREACH", WSAENETUNREACH, "Network is unreachable");
+#endif
+#ifdef WSAEPROCLIM
+ inscode(d, ds, de, "WSAEPROCLIM", WSAEPROCLIM, "Error WSAEPROCLIM");
+#endif
+#ifdef WSAEFAULT
+ inscode(d, ds, de, "WSAEFAULT", WSAEFAULT, "Bad address");
+#endif
+#ifdef WSANOTINITIALISED
+ inscode(d, ds, de, "WSANOTINITIALISED", WSANOTINITIALISED, "Error WSANOTINITIALISED");
+#endif
+#ifdef WSAEUSERS
+ inscode(d, ds, de, "WSAEUSERS", WSAEUSERS, "Too many users");
+#endif
+#ifdef WSAMAKEASYNCREPL
+ inscode(d, ds, de, "WSAMAKEASYNCREPL", WSAMAKEASYNCREPL, "Error WSAMAKEASYNCREPL");
+#endif
+#ifdef WSAENOPROTOOPT
+ inscode(d, ds, de, "WSAENOPROTOOPT", WSAENOPROTOOPT, "Protocol not available");
+#endif
+#ifdef WSAECONNABORTED
+ inscode(d, ds, de, "WSAECONNABORTED", WSAECONNABORTED, "Software caused connection abort");
+#endif
+#ifdef WSAENAMETOOLONG
+ inscode(d, ds, de, "WSAENAMETOOLONG", WSAENAMETOOLONG, "File name too long");
+#endif
+#ifdef WSAENOTEMPTY
+ inscode(d, ds, de, "WSAENOTEMPTY", WSAENOTEMPTY, "Directory not empty");
+#endif
+#ifdef WSAESHUTDOWN
+ inscode(d, ds, de, "WSAESHUTDOWN", WSAESHUTDOWN, "Cannot send after transport endpoint shutdown");
+#endif
+#ifdef WSAEAFNOSUPPORT
+ inscode(d, ds, de, "WSAEAFNOSUPPORT", WSAEAFNOSUPPORT, "Address family not supported by protocol");
+#endif
+#ifdef WSAETOOMANYREFS
+ inscode(d, ds, de, "WSAETOOMANYREFS", WSAETOOMANYREFS, "Too many references: cannot splice");
+#endif
+#ifdef WSAEACCES
+ inscode(d, ds, de, "WSAEACCES", WSAEACCES, "Permission denied");
+#endif
+#ifdef WSATR
+ inscode(d, ds, de, "WSATR", WSATR, "Error WSATR");
+#endif
+#ifdef WSABASEERR
+ inscode(d, ds, de, "WSABASEERR", WSABASEERR, "Error WSABASEERR");
+#endif
+#ifdef WSADESCRIPTIO
+ inscode(d, ds, de, "WSADESCRIPTIO", WSADESCRIPTIO, "Error WSADESCRIPTIO");
+#endif
+#ifdef WSAEMSGSIZE
+ inscode(d, ds, de, "WSAEMSGSIZE", WSAEMSGSIZE, "Message too long");
+#endif
+#ifdef WSAEBADF
+ inscode(d, ds, de, "WSAEBADF", WSAEBADF, "Bad file number");
+#endif
+#ifdef WSAECONNRESET
+ inscode(d, ds, de, "WSAECONNRESET", WSAECONNRESET, "Connection reset by peer");
+#endif
+#ifdef WSAGETSELECTERRO
+ inscode(d, ds, de, "WSAGETSELECTERRO", WSAGETSELECTERRO, "Error WSAGETSELECTERRO");
+#endif
+#ifdef WSAETIMEDOUT
+ inscode(d, ds, de, "WSAETIMEDOUT", WSAETIMEDOUT, "Connection timed out");
+#endif
+#ifdef WSAENOBUFS
+ inscode(d, ds, de, "WSAENOBUFS", WSAENOBUFS, "No buffer space available");
+#endif
+#ifdef WSAEDISCON
+ inscode(d, ds, de, "WSAEDISCON", WSAEDISCON, "Error WSAEDISCON");
+#endif
+#ifdef WSAEINTR
+ inscode(d, ds, de, "WSAEINTR", WSAEINTR, "Interrupted system call");
+#endif
+#ifdef WSAEPROTOTYPE
+ inscode(d, ds, de, "WSAEPROTOTYPE", WSAEPROTOTYPE, "Protocol wrong type for socket");
+#endif
+#ifdef WSAHOS
+ inscode(d, ds, de, "WSAHOS", WSAHOS, "Error WSAHOS");
+#endif
+#ifdef WSAEADDRINUSE
+ inscode(d, ds, de, "WSAEADDRINUSE", WSAEADDRINUSE, "Address already in use");
+#endif
+#ifdef WSAEADDRNOTAVAIL
+ inscode(d, ds, de, "WSAEADDRNOTAVAIL", WSAEADDRNOTAVAIL, "Cannot assign requested address");
+#endif
+#ifdef WSAEALREADY
+ inscode(d, ds, de, "WSAEALREADY", WSAEALREADY, "Operation already in progress");
+#endif
+#ifdef WSAEPROTONOSUPPORT
+ inscode(d, ds, de, "WSAEPROTONOSUPPORT", WSAEPROTONOSUPPORT, "Protocol not supported");
+#endif
+#ifdef WSASYSNOTREADY
+ inscode(d, ds, de, "WSASYSNOTREADY", WSASYSNOTREADY, "Error WSASYSNOTREADY");
+#endif
+#ifdef WSAEWOULDBLOCK
+ inscode(d, ds, de, "WSAEWOULDBLOCK", WSAEWOULDBLOCK, "Operation would block");
+#endif
+#ifdef WSAEPFNOSUPPORT
+ inscode(d, ds, de, "WSAEPFNOSUPPORT", WSAEPFNOSUPPORT, "Protocol family not supported");
+#endif
+#ifdef WSAEOPNOTSUPP
+ inscode(d, ds, de, "WSAEOPNOTSUPP", WSAEOPNOTSUPP, "Operation not supported on transport endpoint");
+#endif
+#ifdef WSAEISCONN
+ inscode(d, ds, de, "WSAEISCONN", WSAEISCONN, "Transport endpoint is already connected");
+#endif
+#ifdef WSAEDQUOT
+ inscode(d, ds, de, "WSAEDQUOT", WSAEDQUOT, "Quota exceeded");
+#endif
+#ifdef WSAENOTCONN
+ inscode(d, ds, de, "WSAENOTCONN", WSAENOTCONN, "Transport endpoint is not connected");
+#endif
+#ifdef WSAEREMOTE
+ inscode(d, ds, de, "WSAEREMOTE", WSAEREMOTE, "Object is remote");
+#endif
+#ifdef WSAEINVAL
+ inscode(d, ds, de, "WSAEINVAL", WSAEINVAL, "Invalid argument");
+#endif
+#ifdef WSAEINPROGRESS
+ inscode(d, ds, de, "WSAEINPROGRESS", WSAEINPROGRESS, "Operation now in progress");
+#endif
+#ifdef WSAGETSELECTEVEN
+ inscode(d, ds, de, "WSAGETSELECTEVEN", WSAGETSELECTEVEN, "Error WSAGETSELECTEVEN");
+#endif
+#ifdef WSAESOCKTNOSUPPORT
+ inscode(d, ds, de, "WSAESOCKTNOSUPPORT", WSAESOCKTNOSUPPORT, "Socket type not supported");
+#endif
+#ifdef WSAGETASYNCERRO
+ inscode(d, ds, de, "WSAGETASYNCERRO", WSAGETASYNCERRO, "Error WSAGETASYNCERRO");
+#endif
+#ifdef WSAMAKESELECTREPL
+ inscode(d, ds, de, "WSAMAKESELECTREPL", WSAMAKESELECTREPL, "Error WSAMAKESELECTREPL");
+#endif
+#ifdef WSAGETASYNCBUFLE
+ inscode(d, ds, de, "WSAGETASYNCBUFLE", WSAGETASYNCBUFLE, "Error WSAGETASYNCBUFLE");
+#endif
+#ifdef WSAEDESTADDRREQ
+ inscode(d, ds, de, "WSAEDESTADDRREQ", WSAEDESTADDRREQ, "Destination address required");
+#endif
+#ifdef WSAECONNREFUSED
+ inscode(d, ds, de, "WSAECONNREFUSED", WSAECONNREFUSED, "Connection refused");
+#endif
+#ifdef WSAENETRESET
+ inscode(d, ds, de, "WSAENETRESET", WSAENETRESET, "Network dropped connection because of reset");
+#endif
+#ifdef WSAN
+ inscode(d, ds, de, "WSAN", WSAN, "Error WSAN");
+#endif
+#ifdef ENOMEDIUM
+ inscode(d, ds, de, "ENOMEDIUM", ENOMEDIUM, "No medium found");
+#endif
+#ifdef EMEDIUMTYPE
+ inscode(d, ds, de, "EMEDIUMTYPE", EMEDIUMTYPE, "Wrong medium type");
+#endif
+#ifdef ECANCELED
+ inscode(d, ds, de, "ECANCELED", ECANCELED, "Operation Canceled");
+#endif
+#ifdef ENOKEY
+ inscode(d, ds, de, "ENOKEY", ENOKEY, "Required key not available");
+#endif
+#ifdef EKEYEXPIRED
+ inscode(d, ds, de, "EKEYEXPIRED", EKEYEXPIRED, "Key has expired");
+#endif
+#ifdef EKEYREVOKED
+ inscode(d, ds, de, "EKEYREVOKED", EKEYREVOKED, "Key has been revoked");
+#endif
+#ifdef EKEYREJECTED
+ inscode(d, ds, de, "EKEYREJECTED", EKEYREJECTED, "Key was rejected by service");
+#endif
+#ifdef EOWNERDEAD
+ inscode(d, ds, de, "EOWNERDEAD", EOWNERDEAD, "Owner died");
+#endif
+#ifdef ENOTRECOVERABLE
+ inscode(d, ds, de, "ENOTRECOVERABLE", ENOTRECOVERABLE, "State not recoverable");
+#endif
+#ifdef ERFKILL
+ inscode(d, ds, de, "ERFKILL", ERFKILL, "Operation not possible due to RF-kill");
+#endif
+
+/* These symbols are added for EDK II support. */
+#ifdef EMINERRORVAL
+ inscode(d, ds, de, "EMINERRORVAL", EMINERRORVAL, "Lowest valid error value");
+#endif
+#ifdef ENOTSUP
+ inscode(d, ds, de, "ENOTSUP", ENOTSUP, "Operation not supported");
+#endif
+#ifdef EBADRPC
+ inscode(d, ds, de, "EBADRPC", EBADRPC, "RPC struct is bad");
+#endif
+#ifdef ERPCMISMATCH
+ inscode(d, ds, de, "ERPCMISMATCH", ERPCMISMATCH, "RPC version wrong");
+#endif
+#ifdef EPROGUNAVAIL
+ inscode(d, ds, de, "EPROGUNAVAIL", EPROGUNAVAIL, "RPC prog. not avail");
+#endif
+#ifdef EPROGMISMATCH
+ inscode(d, ds, de, "EPROGMISMATCH", EPROGMISMATCH, "Program version wrong");
+#endif
+#ifdef EPROCUNAVAIL
+ inscode(d, ds, de, "EPROCUNAVAIL", EPROCUNAVAIL, "Bad procedure for program");
+#endif
+#ifdef EFTYPE
+ inscode(d, ds, de, "EFTYPE", EFTYPE, "Inappropriate file type or format");
+#endif
+#ifdef EAUTH
+ inscode(d, ds, de, "EAUTH", EAUTH, "Authentication error");
+#endif
+#ifdef ENEEDAUTH
+ inscode(d, ds, de, "ENEEDAUTH", ENEEDAUTH, "Need authenticator");
+#endif
+#ifdef ECANCELED
+ inscode(d, ds, de, "ECANCELED", ECANCELED, "Operation canceled");
+#endif
+#ifdef ENOATTR
+ inscode(d, ds, de, "ENOATTR", ENOATTR, "Attribute not found");
+#endif
+#ifdef EDOOFUS
+ inscode(d, ds, de, "EDOOFUS", EDOOFUS, "Programming Error");
+#endif
+#ifdef EBUFSIZE
+ inscode(d, ds, de, "EBUFSIZE", EBUFSIZE, "Buffer too small to hold result");
+#endif
+#ifdef EMAXERRORVAL
+ inscode(d, ds, de, "EMAXERRORVAL", EMAXERRORVAL, "One more than the highest defined error value");
+#endif
+
+ Py_DECREF(de);
+ return m;
+}
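For reference, a minimal usage sketch of the errno module built above, matching its docstring; the concrete numeric values are platform-dependent and the ENOENT case is only illustrative:

    import errno, os

    print(errno.ENOENT)                    # e.g. 2 on most systems
    print(errno.errorcode[errno.ENOENT])   # 'ENOENT'
    print(os.strerror(errno.ENOENT))       # e.g. 'No such file or directory'
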
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
new file mode 100644
index 00000000..5b81995f
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
@@ -0,0 +1,1414 @@
+#include "Python.h"
+#include "pythread.h"
+#include <signal.h>
+#include <object.h>
+#include <frameobject.h>
+#include <signal.h>
+#if defined(HAVE_PTHREAD_SIGMASK) && !defined(HAVE_BROKEN_PTHREAD_SIGMASK)
+# include <pthread.h>
+#endif
+#ifdef MS_WINDOWS
+# include <windows.h>
+#endif
+#ifdef HAVE_SYS_RESOURCE_H
+# include <sys/resource.h>
+#endif
+
+/* Allocate at most 100 MB of the stack to trigger the stack overflow */
+#define STACK_OVERFLOW_MAX_SIZE (100*1024*1024)
+
+#ifdef WITH_THREAD
+# define FAULTHANDLER_LATER
+#endif
+
+#ifndef MS_WINDOWS
+ /* register() is useless on Windows, because only SIGSEGV, SIGABRT and
+ SIGILL can be handled by the process, and these signals can only be used
+ with enable(), not using register() */
+# define FAULTHANDLER_USER
+#endif
+
+#define PUTS(fd, str) _Py_write_noraise(fd, str, strlen(str))
+
+_Py_IDENTIFIER(enable);
+_Py_IDENTIFIER(fileno);
+_Py_IDENTIFIER(flush);
+_Py_IDENTIFIER(stderr);
+
+#ifdef HAVE_SIGACTION
+typedef struct sigaction _Py_sighandler_t;
+#else
+typedef PyOS_sighandler_t _Py_sighandler_t;
+#endif
+
+typedef struct {
+ int signum;
+ int enabled;
+ const char* name;
+ _Py_sighandler_t previous;
+ int all_threads;
+} fault_handler_t;
+
+static struct {
+ int enabled;
+ PyObject *file;
+ int fd;
+ int all_threads;
+ PyInterpreterState *interp;
+#ifdef MS_WINDOWS
+ void *exc_handler;
+#endif
+} fatal_error = {0, NULL, -1, 0};
+
+#ifdef FAULTHANDLER_LATER
+static struct {
+ PyObject *file;
+ int fd;
+ PY_TIMEOUT_T timeout_us; /* timeout in microseconds */
+ int repeat;
+ PyInterpreterState *interp;
+ int exit;
+ char *header;
+ size_t header_len;
+ /* The main thread always holds this lock. It is only released when
+ faulthandler_thread() is interrupted before this thread exits, or at
+ Python exit. */
+ PyThread_type_lock cancel_event;
+ /* released by child thread when joined */
+ PyThread_type_lock running;
+} thread;
+#endif
+
+#ifdef FAULTHANDLER_USER
+typedef struct {
+ int enabled;
+ PyObject *file;
+ int fd;
+ int all_threads;
+ int chain;
+ _Py_sighandler_t previous;
+ PyInterpreterState *interp;
+} user_signal_t;
+
+static user_signal_t *user_signals;
+
+/* the following macros come from Python: Modules/signalmodule.c */
+#ifndef NSIG
+# if defined(_NSIG)
+# define NSIG _NSIG /* For BSD/SysV */
+# elif defined(_SIGMAX)
+# define NSIG (_SIGMAX + 1) /* For QNX */
+# elif defined(SIGMAX)
+# define NSIG (SIGMAX + 1) /* For djgpp */
+# else
+# define NSIG 64 /* Use a reasonable default value */
+# endif
+#endif
+
+static void faulthandler_user(int signum);
+#endif /* FAULTHANDLER_USER */
+
+
+static fault_handler_t faulthandler_handlers[] = {
+#ifdef SIGBUS
+ {SIGBUS, 0, "Bus error", },
+#endif
+#ifdef SIGILL
+ {SIGILL, 0, "Illegal instruction", },
+#endif
+ {SIGFPE, 0, "Floating point exception", },
+ {SIGABRT, 0, "Aborted", },
+ /* define SIGSEGV at the end to make it the default choice if searching the
+ handler fails in faulthandler_fatal_error() */
+ {SIGSEGV, 0, "Segmentation fault", }
+};
+static const size_t faulthandler_nsignals = \
+ Py_ARRAY_LENGTH(faulthandler_handlers);
+
+#ifdef HAVE_SIGALTSTACK
+static stack_t stack;
+static stack_t old_stack;
+#endif
+
+
+/* Get the file descriptor of a file by calling its fileno() method and then
+ call its flush() method.
+
+ If file is NULL or Py_None, use sys.stderr as the new file.
+ If file is an integer, it will be treated as a file descriptor.
+
+ On success, return the file descriptor and write the new file into *file_ptr.
+ On error, return -1. */
+
+static int
+faulthandler_get_fileno(PyObject **file_ptr)
+{
+ PyObject *result;
+ long fd_long;
+ int fd;
+ PyObject *file = *file_ptr;
+
+ if (file == NULL || file == Py_None) {
+ file = _PySys_GetObjectId(&PyId_stderr);
+ if (file == NULL) {
+ PyErr_SetString(PyExc_RuntimeError, "unable to get sys.stderr");
+ return -1;
+ }
+ if (file == Py_None) {
+ PyErr_SetString(PyExc_RuntimeError, "sys.stderr is None");
+ return -1;
+ }
+ }
+ else if (PyLong_Check(file)) {
+ fd = _PyLong_AsInt(file);
+ if (fd == -1 && PyErr_Occurred())
+ return -1;
+ if (fd < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "file is not a valid file descriptor");
+ return -1;
+ }
+ *file_ptr = NULL;
+ return fd;
+ }
+
+ result = _PyObject_CallMethodId(file, &PyId_fileno, NULL);
+ if (result == NULL)
+ return -1;
+
+ fd = -1;
+ if (PyLong_Check(result)) {
+ fd_long = PyLong_AsLong(result);
+ if (0 <= fd_long && fd_long < INT_MAX)
+ fd = (int)fd_long;
+ }
+ Py_DECREF(result);
+
+ if (fd == -1) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "file.fileno() is not a valid file descriptor");
+ return -1;
+ }
+
+ result = _PyObject_CallMethodId(file, &PyId_flush, NULL);
+ if (result != NULL)
+ Py_DECREF(result);
+ else {
+ /* ignore flush() error */
+ PyErr_Clear();
+ }
+ *file_ptr = file;
+ return fd;
+}
+
+/* Get the state of the current thread: only call this function if the current
+ thread holds the GIL. Raise an exception on error. */
+static PyThreadState*
+get_thread_state(void)
+{
+ PyThreadState *tstate = _PyThreadState_UncheckedGet();
+ if (tstate == NULL) {
+ /* just in case but very unlikely... */
+ PyErr_SetString(PyExc_RuntimeError,
+ "unable to get the current thread state");
+ return NULL;
+ }
+ return tstate;
+}
+
+static void
+faulthandler_dump_traceback(int fd, int all_threads,
+ PyInterpreterState *interp)
+{
+ static volatile int reentrant = 0;
+ PyThreadState *tstate;
+
+ if (reentrant)
+ return;
+
+ reentrant = 1;
+
+#ifdef WITH_THREAD
+ /* SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL are synchronous signals and
+ are thus delivered to the thread that caused the fault. Get the Python
+ thread state of the current thread.
+
+ PyThreadState_Get() doesn't give the state of the thread that caused the
+ fault if the thread released the GIL, and so this function cannot be
+ used. Read the thread local storage (TLS) instead: call
+ PyGILState_GetThisThreadState(). */
+ tstate = PyGILState_GetThisThreadState();
+#else
+ tstate = _PyThreadState_UncheckedGet();
+#endif
+
+ if (all_threads) {
+ (void)_Py_DumpTracebackThreads(fd, NULL, tstate);
+ }
+ else {
+ if (tstate != NULL)
+ _Py_DumpTraceback(fd, tstate);
+ }
+
+ reentrant = 0;
+}
+
+static PyObject*
+faulthandler_dump_traceback_py(PyObject *self,
+ PyObject *args, PyObject *kwargs)
+{
+ static char *kwlist[] = {"file", "all_threads", NULL};
+ PyObject *file = NULL;
+ int all_threads = 1;
+ PyThreadState *tstate;
+ const char *errmsg;
+ int fd;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs,
+ "|Oi:dump_traceback", kwlist,
+ &file, &all_threads))
+ return NULL;
+
+ fd = faulthandler_get_fileno(&file);
+ if (fd < 0)
+ return NULL;
+
+ tstate = get_thread_state();
+ if (tstate == NULL)
+ return NULL;
+
+ if (all_threads) {
+ errmsg = _Py_DumpTracebackThreads(fd, NULL, tstate);
+ if (errmsg != NULL) {
+ PyErr_SetString(PyExc_RuntimeError, errmsg);
+ return NULL;
+ }
+ }
+ else {
+ _Py_DumpTraceback(fd, tstate);
+ }
+
+ if (PyErr_CheckSignals())
+ return NULL;
+
+ Py_RETURN_NONE;
+}
+
+static void
+faulthandler_disable_fatal_handler(fault_handler_t *handler)
+{
+ if (!handler->enabled)
+ return;
+ handler->enabled = 0;
+#ifdef HAVE_SIGACTION
+ (void)sigaction(handler->signum, &handler->previous, NULL);
+#else
+ (void)signal(handler->signum, handler->previous);
+#endif
+}
+
+
+/* Handler for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals.
+
+ Display the current Python traceback, restore the previous handler and call
+ the previous handler.
+
+ On Windows, don't explicitly call the previous handler, because the Windows
+ signal handler would not be called (for an unknown reason). The execution of
+ the program continues at faulthandler_fatal_error() exit, but the same
+ instruction will raise the same fault (signal), and so the previous handler
+ will be called.
+
+ This function is signal-safe and should only call signal-safe functions. */
+
+static void
+faulthandler_fatal_error(int signum)
+{
+ const int fd = fatal_error.fd;
+ size_t i;
+ fault_handler_t *handler = NULL;
+ int save_errno = errno;
+
+ if (!fatal_error.enabled)
+ return;
+
+ for (i=0; i < faulthandler_nsignals; i++) {
+ handler = &faulthandler_handlers[i];
+ if (handler->signum == signum)
+ break;
+ }
+ if (handler == NULL) {
+ /* faulthandler_nsignals == 0 (unlikely) */
+ return;
+ }
+
+ /* restore the previous handler */
+ faulthandler_disable_fatal_handler(handler);
+
+ PUTS(fd, "Fatal Python error: ");
+ PUTS(fd, handler->name);
+ PUTS(fd, "\n\n");
+
+ faulthandler_dump_traceback(fd, fatal_error.all_threads,
+ fatal_error.interp);
+
+ errno = save_errno;
+#ifdef MS_WINDOWS
+ if (signum == SIGSEGV) {
+ /* don't explicitly call the previous handler for SIGSEGV in this signal
+ handler, because the Windows signal handler would not be called */
+ return;
+ }
+#endif
+ /* call the previous signal handler: it is called immediately if we use
+ sigaction() thanks to SA_NODEFER flag, otherwise it is deferred */
+ raise(signum);
+}
+
+#ifdef MS_WINDOWS
+static int
+faulthandler_ignore_exception(DWORD code)
+{
+ /* bpo-30557: ignore exceptions which are not errors */
+ if (!(code & 0x80000000)) {
+ return 1;
+ }
+ /* bpo-31701: ignore MSC and COM exceptions
+ E0000000 + code */
+ if (code == 0xE06D7363 /* MSC exception ("Emsc") */
+ || code == 0xE0434352 /* COM Callable Runtime exception ("ECCR") */) {
+ return 1;
+ }
+ /* Interesting exception: log it with the Python traceback */
+ return 0;
+}
+
+static LONG WINAPI
+faulthandler_exc_handler(struct _EXCEPTION_POINTERS *exc_info)
+{
+ const int fd = fatal_error.fd;
+ DWORD code = exc_info->ExceptionRecord->ExceptionCode;
+ DWORD flags = exc_info->ExceptionRecord->ExceptionFlags;
+
+ if (faulthandler_ignore_exception(code)) {
+ /* ignore the exception: call the next exception handler */
+ return EXCEPTION_CONTINUE_SEARCH;
+ }
+
+ PUTS(fd, "Windows fatal exception: ");
+ switch (code)
+ {
+ /* only format most common errors */
+ case EXCEPTION_ACCESS_VIOLATION: PUTS(fd, "access violation"); break;
+ case EXCEPTION_FLT_DIVIDE_BY_ZERO: PUTS(fd, "float divide by zero"); break;
+ case EXCEPTION_FLT_OVERFLOW: PUTS(fd, "float overflow"); break;
+ case EXCEPTION_INT_DIVIDE_BY_ZERO: PUTS(fd, "int divide by zero"); break;
+ case EXCEPTION_INT_OVERFLOW: PUTS(fd, "integer overflow"); break;
+ case EXCEPTION_IN_PAGE_ERROR: PUTS(fd, "page error"); break;
+ case EXCEPTION_STACK_OVERFLOW: PUTS(fd, "stack overflow"); break;
+ default:
+ PUTS(fd, "code 0x");
+ _Py_DumpHexadecimal(fd, code, 8);
+ }
+ PUTS(fd, "\n\n");
+
+ if (code == EXCEPTION_ACCESS_VIOLATION) {
+ /* disable signal handler for SIGSEGV */
+ size_t i;
+ for (i=0; i < faulthandler_nsignals; i++) {
+ fault_handler_t *handler = &faulthandler_handlers[i];
+ if (handler->signum == SIGSEGV) {
+ faulthandler_disable_fatal_handler(handler);
+ break;
+ }
+ }
+ }
+
+ faulthandler_dump_traceback(fd, fatal_error.all_threads,
+ fatal_error.interp);
+
+ /* call the next exception handler */
+ return EXCEPTION_CONTINUE_SEARCH;
+}
+#endif
+
+/* Install the handler for fatal signals, faulthandler_fatal_error(). */
+
+static int
+faulthandler_enable(void)
+{
+ size_t i;
+
+ if (fatal_error.enabled) {
+ return 0;
+ }
+ fatal_error.enabled = 1;
+
+ for (i=0; i < faulthandler_nsignals; i++) {
+ fault_handler_t *handler;
+#ifdef HAVE_SIGACTION
+ struct sigaction action;
+#endif
+ int err;
+
+ handler = &faulthandler_handlers[i];
+ assert(!handler->enabled);
+#ifdef HAVE_SIGACTION
+ action.sa_handler = faulthandler_fatal_error;
+ sigemptyset(&action.sa_mask);
+ /* Do not prevent the signal from being received from within
+ its own signal handler */
+ action.sa_flags = SA_NODEFER;
+#ifdef HAVE_SIGALTSTACK
+ if (stack.ss_sp != NULL) {
+ /* Call the signal handler on an alternate signal stack
+ provided by sigaltstack() */
+ action.sa_flags |= SA_ONSTACK;
+ }
+#endif
+ err = sigaction(handler->signum, &action, &handler->previous);
+#else
+ handler->previous = signal(handler->signum,
+ faulthandler_fatal_error);
+ err = (handler->previous == SIG_ERR);
+#endif
+ if (err) {
+ PyErr_SetFromErrno(PyExc_RuntimeError);
+ return -1;
+ }
+
+ handler->enabled = 1;
+ }
+
+#ifdef MS_WINDOWS
+ assert(fatal_error.exc_handler == NULL);
+ fatal_error.exc_handler = AddVectoredExceptionHandler(1, faulthandler_exc_handler);
+#endif
+ return 0;
+}
+
+static PyObject*
+faulthandler_py_enable(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+ static char *kwlist[] = {"file", "all_threads", NULL};
+ PyObject *file = NULL;
+ int all_threads = 1;
+ int fd;
+ PyThreadState *tstate;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs,
+ "|Oi:enable", kwlist, &file, &all_threads))
+ return NULL;
+
+ fd = faulthandler_get_fileno(&file);
+ if (fd < 0)
+ return NULL;
+
+ tstate = get_thread_state();
+ if (tstate == NULL)
+ return NULL;
+
+ Py_XINCREF(file);
+ Py_XSETREF(fatal_error.file, file);
+ fatal_error.fd = fd;
+ fatal_error.all_threads = all_threads;
+ fatal_error.interp = tstate->interp;
+
+ if (faulthandler_enable() < 0) {
+ return NULL;
+ }
+
+ Py_RETURN_NONE;
+}
+
+static void
+faulthandler_disable(void)
+{
+ unsigned int i;
+ fault_handler_t *handler;
+
+ if (fatal_error.enabled) {
+ fatal_error.enabled = 0;
+ for (i=0; i < faulthandler_nsignals; i++) {
+ handler = &faulthandler_handlers[i];
+ faulthandler_disable_fatal_handler(handler);
+ }
+ }
+#ifdef MS_WINDOWS
+ if (fatal_error.exc_handler != NULL) {
+ RemoveVectoredExceptionHandler(fatal_error.exc_handler);
+ fatal_error.exc_handler = NULL;
+ }
+#endif
+ Py_CLEAR(fatal_error.file);
+}
+
+static PyObject*
+faulthandler_disable_py(PyObject *self)
+{
+ if (!fatal_error.enabled) {
+ Py_INCREF(Py_False);
+ return Py_False;
+ }
+ faulthandler_disable();
+ Py_INCREF(Py_True);
+ return Py_True;
+}
+
+static PyObject*
+faulthandler_is_enabled(PyObject *self)
+{
+ return PyBool_FromLong(fatal_error.enabled);
+}
+
+#ifdef FAULTHANDLER_LATER
+
+static void
+faulthandler_thread(void *unused)
+{
+ PyLockStatus st;
+ const char* errmsg;
+ int ok;
+#if defined(HAVE_PTHREAD_SIGMASK) && !defined(HAVE_BROKEN_PTHREAD_SIGMASK)
+ sigset_t set;
+
+ /* we don't want to receive any signal */
+ sigfillset(&set);
+ pthread_sigmask(SIG_SETMASK, &set, NULL);
+#endif
+
+ do {
+ st = PyThread_acquire_lock_timed(thread.cancel_event,
+ thread.timeout_us, 0);
+ if (st == PY_LOCK_ACQUIRED) {
+ PyThread_release_lock(thread.cancel_event);
+ break;
+ }
+ /* Timeout => dump traceback */
+ assert(st == PY_LOCK_FAILURE);
+
+ _Py_write_noraise(thread.fd, thread.header, (int)thread.header_len);
+
+ errmsg = _Py_DumpTracebackThreads(thread.fd, thread.interp, NULL);
+ ok = (errmsg == NULL);
+
+ if (thread.exit)
+ _exit(1);
+ } while (ok && thread.repeat);
+
+ /* The only way out */
+ PyThread_release_lock(thread.running);
+}
+
+static void
+cancel_dump_traceback_later(void)
+{
+ /* Notify cancellation */
+ PyThread_release_lock(thread.cancel_event);
+
+ /* Wait for thread to join */
+ PyThread_acquire_lock(thread.running, 1);
+ PyThread_release_lock(thread.running);
+
+ /* The main thread should always hold the cancel_event lock */
+ PyThread_acquire_lock(thread.cancel_event, 1);
+
+ Py_CLEAR(thread.file);
+ if (thread.header) {
+ PyMem_Free(thread.header);
+ thread.header = NULL;
+ }
+}
+
+static char*
+format_timeout(double timeout)
+{
+ unsigned long us, sec, min, hour;
+ double intpart, fracpart;
+ char buffer[100];
+
+ fracpart = modf(timeout, &intpart);
+ sec = (unsigned long)intpart;
+ us = (unsigned long)(fracpart * 1e6);
+ min = sec / 60;
+ sec %= 60;
+ hour = min / 60;
+ min %= 60;
+
+ if (us != 0)
+ PyOS_snprintf(buffer, sizeof(buffer),
+ "Timeout (%lu:%02lu:%02lu.%06lu)!\n",
+ hour, min, sec, us);
+ else
+ PyOS_snprintf(buffer, sizeof(buffer),
+ "Timeout (%lu:%02lu:%02lu)!\n",
+ hour, min, sec);
+
+ return _PyMem_Strdup(buffer);
+}
+
+static PyObject*
+faulthandler_dump_traceback_later(PyObject *self,
+ PyObject *args, PyObject *kwargs)
+{
+ static char *kwlist[] = {"timeout", "repeat", "file", "exit", NULL};
+ double timeout;
+ PY_TIMEOUT_T timeout_us;
+ int repeat = 0;
+ PyObject *file = NULL;
+ int fd;
+ int exit = 0;
+ PyThreadState *tstate;
+ char *header;
+ size_t header_len;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs,
+ "d|iOi:dump_traceback_later", kwlist,
+ &timeout, &repeat, &file, &exit))
+ return NULL;
+ if ((timeout * 1e6) >= (double) PY_TIMEOUT_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "timeout value is too large");
+ return NULL;
+ }
+ timeout_us = (PY_TIMEOUT_T)(timeout * 1e6);
+ if (timeout_us <= 0) {
+ PyErr_SetString(PyExc_ValueError, "timeout must be greater than 0");
+ return NULL;
+ }
+
+ tstate = get_thread_state();
+ if (tstate == NULL)
+ return NULL;
+
+ fd = faulthandler_get_fileno(&file);
+ if (fd < 0)
+ return NULL;
+
+ /* format the timeout */
+ header = format_timeout(timeout);
+ if (header == NULL)
+ return PyErr_NoMemory();
+ header_len = strlen(header);
+
+ /* Cancel previous thread, if running */
+ cancel_dump_traceback_later();
+
+ Py_XINCREF(file);
+ Py_XSETREF(thread.file, file);
+ thread.fd = fd;
+ thread.timeout_us = timeout_us;
+ thread.repeat = repeat;
+ thread.interp = tstate->interp;
+ thread.exit = exit;
+ thread.header = header;
+ thread.header_len = header_len;
+
+ /* Arm these locks to serve as events when released */
+ PyThread_acquire_lock(thread.running, 1);
+
+ if (PyThread_start_new_thread(faulthandler_thread, NULL) == -1) {
+ PyThread_release_lock(thread.running);
+ Py_CLEAR(thread.file);
+ PyMem_Free(header);
+ thread.header = NULL;
+ PyErr_SetString(PyExc_RuntimeError,
+ "unable to start watchdog thread");
+ return NULL;
+ }
+
+ Py_RETURN_NONE;
+}
+
+static PyObject*
+faulthandler_cancel_dump_traceback_later_py(PyObject *self)
+{
+ cancel_dump_traceback_later();
+ Py_RETURN_NONE;
+}
+#endif /* FAULTHANDLER_LATER */
+
+#ifdef FAULTHANDLER_USER
+static int
+faulthandler_register(int signum, int chain, _Py_sighandler_t *p_previous)
+{
+#ifdef HAVE_SIGACTION
+ struct sigaction action;
+ action.sa_handler = faulthandler_user;
+ sigemptyset(&action.sa_mask);
+ /* if the signal is received while the kernel is executing a system
+ call, try to restart the system call instead of interrupting it and
+ return EINTR. */
+ action.sa_flags = SA_RESTART;
+ if (chain) {
+ /* do not prevent the signal from being received from within its
+ own signal handler */
+ action.sa_flags = SA_NODEFER;
+ }
+#ifdef HAVE_SIGALTSTACK
+ if (stack.ss_sp != NULL) {
+ /* Call the signal handler on an alternate signal stack
+ provided by sigaltstack() */
+ action.sa_flags |= SA_ONSTACK;
+ }
+#endif
+ return sigaction(signum, &action, p_previous);
+#else
+ _Py_sighandler_t previous;
+ previous = signal(signum, faulthandler_user);
+ if (p_previous != NULL)
+ *p_previous = previous;
+ return (previous == SIG_ERR);
+#endif
+}
+
+/* Handler of user signals (e.g. SIGUSR1).
+
+ Dump the traceback of the current thread, or of all threads if
+ thread.all_threads is true.
+
+ This function is signal safe and should only call signal safe functions. */
+
+static void
+faulthandler_user(int signum)
+{
+ user_signal_t *user;
+ int save_errno = errno;
+
+ user = &user_signals[signum];
+ if (!user->enabled)
+ return;
+
+ faulthandler_dump_traceback(user->fd, user->all_threads, user->interp);
+
+#ifdef HAVE_SIGACTION
+ if (user->chain) {
+ (void)sigaction(signum, &user->previous, NULL);
+ errno = save_errno;
+
+ /* call the previous signal handler */
+ raise(signum);
+
+ save_errno = errno;
+ (void)faulthandler_register(signum, user->chain, NULL);
+ errno = save_errno;
+ }
+#else
+ if (user->chain) {
+ errno = save_errno;
+ /* call the previous signal handler */
+ user->previous(signum);
+ }
+#endif
+}
+
+static int
+check_signum(int signum)
+{
+ unsigned int i;
+
+ for (i=0; i < faulthandler_nsignals; i++) {
+ if (faulthandler_handlers[i].signum == signum) {
+ PyErr_Format(PyExc_RuntimeError,
+ "signal %i cannot be registered, "
+ "use enable() instead",
+ signum);
+ return 0;
+ }
+ }
+ if (signum < 1 || NSIG <= signum) {
+ PyErr_SetString(PyExc_ValueError, "signal number out of range");
+ return 0;
+ }
+ return 1;
+}
+
+static PyObject*
+faulthandler_register_py(PyObject *self,
+ PyObject *args, PyObject *kwargs)
+{
+ static char *kwlist[] = {"signum", "file", "all_threads", "chain", NULL};
+ int signum;
+ PyObject *file = NULL;
+ int all_threads = 1;
+ int chain = 0;
+ int fd;
+ user_signal_t *user;
+ _Py_sighandler_t previous;
+ PyThreadState *tstate;
+ int err;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs,
+ "i|Oii:register", kwlist,
+ &signum, &file, &all_threads, &chain))
+ return NULL;
+
+ if (!check_signum(signum))
+ return NULL;
+
+ tstate = get_thread_state();
+ if (tstate == NULL)
+ return NULL;
+
+ fd = faulthandler_get_fileno(&file);
+ if (fd < 0)
+ return NULL;
+
+ if (user_signals == NULL) {
+ user_signals = PyMem_Malloc(NSIG * sizeof(user_signal_t));
+ if (user_signals == NULL)
+ return PyErr_NoMemory();
+ memset(user_signals, 0, NSIG * sizeof(user_signal_t));
+ }
+ user = &user_signals[signum];
+
+ if (!user->enabled) {
+ err = faulthandler_register(signum, chain, &previous);
+ if (err) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ user->previous = previous;
+ }
+
+ Py_XINCREF(file);
+ Py_XSETREF(user->file, file);
+ user->fd = fd;
+ user->all_threads = all_threads;
+ user->chain = chain;
+ user->interp = tstate->interp;
+ user->enabled = 1;
+
+ Py_RETURN_NONE;
+}
+
+static int
+faulthandler_unregister(user_signal_t *user, int signum)
+{
+ if (!user->enabled)
+ return 0;
+ user->enabled = 0;
+#ifdef HAVE_SIGACTION
+ (void)sigaction(signum, &user->previous, NULL);
+#else
+ (void)signal(signum, user->previous);
+#endif
+ Py_CLEAR(user->file);
+ user->fd = -1;
+ return 1;
+}
+
+static PyObject*
+faulthandler_unregister_py(PyObject *self, PyObject *args)
+{
+ int signum;
+ user_signal_t *user;
+ int change;
+
+ if (!PyArg_ParseTuple(args, "i:unregister", &signum))
+ return NULL;
+
+ if (!check_signum(signum))
+ return NULL;
+
+ if (user_signals == NULL)
+ Py_RETURN_FALSE;
+
+ user = &user_signals[signum];
+ change = faulthandler_unregister(user, signum);
+ return PyBool_FromLong(change);
+}
+#endif /* FAULTHANDLER_USER */
+
+
+static void
+faulthandler_suppress_crash_report(void)
+{
+#ifdef MS_WINDOWS
+ UINT mode;
+
+ /* Configure Windows to not display the Windows Error Reporting dialog */
+ mode = SetErrorMode(SEM_NOGPFAULTERRORBOX);
+ SetErrorMode(mode | SEM_NOGPFAULTERRORBOX);
+#endif
+
+#ifdef HAVE_SYS_RESOURCE_H
+ struct rlimit rl;
+#ifndef UEFI_C_SOURCE
+ /* Disable creation of core dump */
+ if (getrlimit(RLIMIT_CORE, &rl) == 0) {
+ rl.rlim_cur = 0;
+ setrlimit(RLIMIT_CORE, &rl);
+ }
+#endif
+#endif
+
+#if defined(_MSC_VER) && !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
+ /* Visual Studio: configure abort() to not display an error message nor
+ open a popup asking to report the fault. */
+ _set_abort_behavior(0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT);
+#endif
+}
+
+static PyObject *
+faulthandler_read_null(PyObject *self, PyObject *args)
+{
+ volatile int *x;
+ volatile int y;
+
+ faulthandler_suppress_crash_report();
+ x = NULL;
+ y = *x;
+ return PyLong_FromLong(y);
+
+}
+
+static void
+faulthandler_raise_sigsegv(void)
+{
+ faulthandler_suppress_crash_report();
+#if defined(MS_WINDOWS)
+ /* For SIGSEGV, faulthandler_fatal_error() restores the previous signal
+ handler and then gives back the execution flow to the program (without
+ explicitly calling the previous error handler). In a normal case, the
+ SIGSEGV was raised by the kernel because of a fault, and so if the
+ program retries to execute the same instruction, the fault will be
+ raised again.
+
+ Here the fault is simulated by a fake SIGSEGV signal raised by the
+ application. We have to raise SIGSEGV at least twice: once for
+ faulthandler_fatal_error(), and one more time for the previous signal
+ handler. */
+ while(1)
+ raise(SIGSEGV);
+#else
+ raise(SIGSEGV);
+#endif
+}
+
+static PyObject *
+faulthandler_sigsegv(PyObject *self, PyObject *args)
+{
+ int release_gil = 0;
+ if (!PyArg_ParseTuple(args, "|i:_sigsegv", &release_gil))
+ return NULL;
+
+ if (release_gil) {
+ Py_BEGIN_ALLOW_THREADS
+ faulthandler_raise_sigsegv();
+ Py_END_ALLOW_THREADS
+ } else {
+ faulthandler_raise_sigsegv();
+ }
+ Py_RETURN_NONE;
+}
+
+#ifdef WITH_THREAD
+static void
+faulthandler_fatal_error_thread(void *plock)
+{
+ PyThread_type_lock *lock = (PyThread_type_lock *)plock;
+
+ Py_FatalError("in new thread");
+
+ /* notify the caller that we are done */
+ PyThread_release_lock(lock);
+}
+
+static PyObject *
+faulthandler_fatal_error_c_thread(PyObject *self, PyObject *args)
+{
+ long thread;
+ PyThread_type_lock lock;
+
+ faulthandler_suppress_crash_report();
+
+ lock = PyThread_allocate_lock();
+ if (lock == NULL)
+ return PyErr_NoMemory();
+
+ PyThread_acquire_lock(lock, WAIT_LOCK);
+
+ thread = PyThread_start_new_thread(faulthandler_fatal_error_thread, lock);
+ if (thread == -1) {
+ PyThread_free_lock(lock);
+ PyErr_SetString(PyExc_RuntimeError, "unable to start the thread");
+ return NULL;
+ }
+
+ /* wait until the thread completes: it will never occur, since Py_FatalError()
+ exits the process immediately. */
+ PyThread_acquire_lock(lock, WAIT_LOCK);
+ PyThread_release_lock(lock);
+ PyThread_free_lock(lock);
+
+ Py_RETURN_NONE;
+}
+#endif
+
+static PyObject *
+faulthandler_sigfpe(PyObject *self, PyObject *args)
+{
+ /* Do an integer division by zero: raise a SIGFPE on Intel CPU, but not on
+ PowerPC. Use volatile to disable compile-time optimizations. */
+ volatile int x = 1, y = 0, z;
+ faulthandler_suppress_crash_report();
+ z = x / y;
+ /* If the division by zero didn't raise a SIGFPE (e.g. on PowerPC),
+ raise it manually. */
+ raise(SIGFPE);
+ /* This line is never reached, but we pretend to make something with z
+ to silence a compiler warning. */
+ return PyLong_FromLong(z);
+}
+
+static PyObject *
+faulthandler_sigabrt(PyObject *self, PyObject *args)
+{
+ faulthandler_suppress_crash_report();
+ abort();
+ Py_RETURN_NONE;
+}
+
+static PyObject *
+faulthandler_fatal_error_py(PyObject *self, PyObject *args)
+{
+ char *message;
+ int release_gil = 0;
+ if (!PyArg_ParseTuple(args, "y|i:fatal_error", &message, &release_gil))
+ return NULL;
+ faulthandler_suppress_crash_report();
+ if (release_gil) {
+ Py_BEGIN_ALLOW_THREADS
+ Py_FatalError(message);
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ Py_FatalError(message);
+ }
+ Py_RETURN_NONE;
+}
+
+#if defined(HAVE_SIGALTSTACK) && defined(HAVE_SIGACTION)
+#define FAULTHANDLER_STACK_OVERFLOW
+
+#ifdef __INTEL_COMPILER
+ /* Issue #23654: Turn off ICC's tail call optimization for the
+ * stack_overflow generator. ICC turns the recursive tail call into
+ * a loop. */
+# pragma intel optimization_level 0
+#endif
+static
+uintptr_t
+stack_overflow(uintptr_t min_sp, uintptr_t max_sp, size_t *depth)
+{
+ /* allocate 4096 bytes on the stack at each call */
+ unsigned char buffer[4096];
+ uintptr_t sp = (uintptr_t)&buffer;
+ *depth += 1;
+ if (sp < min_sp || max_sp < sp)
+ return sp;
+ buffer[0] = 1;
+ buffer[4095] = 0;
+ return stack_overflow(min_sp, max_sp, depth);
+}
+
+static PyObject *
+faulthandler_stack_overflow(PyObject *self)
+{
+ size_t depth, size;
+ uintptr_t sp = (uintptr_t)&depth;
+ uintptr_t stop;
+
+ faulthandler_suppress_crash_report();
+ depth = 0;
+ stop = stack_overflow(sp - STACK_OVERFLOW_MAX_SIZE,
+ sp + STACK_OVERFLOW_MAX_SIZE,
+ &depth);
+ if (sp < stop)
+ size = stop - sp;
+ else
+ size = sp - stop;
+ PyErr_Format(PyExc_RuntimeError,
+ "unable to raise a stack overflow (allocated %zu bytes "
+ "on the stack, %zu recursive calls)",
+ size, depth);
+ return NULL;
+}
+#endif /* defined(HAVE_SIGALTSTACK) && defined(HAVE_SIGACTION) */
+
+
+static int
+faulthandler_traverse(PyObject *module, visitproc visit, void *arg)
+{
+#ifdef FAULTHANDLER_USER
+ unsigned int signum;
+#endif
+
+#ifdef FAULTHANDLER_LATER
+ Py_VISIT(thread.file);
+#endif
+#ifdef FAULTHANDLER_USER
+ if (user_signals != NULL) {
+ for (signum=0; signum < NSIG; signum++)
+ Py_VISIT(user_signals[signum].file);
+ }
+#endif
+ Py_VISIT(fatal_error.file);
+ return 0;
+}
+
+#ifdef MS_WINDOWS
+static PyObject *
+faulthandler_raise_exception(PyObject *self, PyObject *args)
+{
+ unsigned int code, flags = 0;
+ if (!PyArg_ParseTuple(args, "I|I:_raise_exception", &code, &flags))
+ return NULL;
+ faulthandler_suppress_crash_report();
+ RaiseException(code, flags, 0, NULL);
+ Py_RETURN_NONE;
+}
+#endif
+
+PyDoc_STRVAR(module_doc,
+"faulthandler module.");
+
+static PyMethodDef module_methods[] = {
+ {"enable",
+ (PyCFunction)faulthandler_py_enable, METH_VARARGS|METH_KEYWORDS,
+ PyDoc_STR("enable(file=sys.stderr, all_threads=True): "
+ "enable the fault handler")},
+ {"disable", (PyCFunction)faulthandler_disable_py, METH_NOARGS,
+ PyDoc_STR("disable(): disable the fault handler")},
+ {"is_enabled", (PyCFunction)faulthandler_is_enabled, METH_NOARGS,
+ PyDoc_STR("is_enabled()->bool: check if the handler is enabled")},
+ {"dump_traceback",
+ (PyCFunction)faulthandler_dump_traceback_py, METH_VARARGS|METH_KEYWORDS,
+ PyDoc_STR("dump_traceback(file=sys.stderr, all_threads=True): "
+ "dump the traceback of the current thread, or of all threads "
+ "if all_threads is True, into file")},
+#ifdef FAULTHANDLER_LATER
+ {"dump_traceback_later",
+ (PyCFunction)faulthandler_dump_traceback_later, METH_VARARGS|METH_KEYWORDS,
+ PyDoc_STR("dump_traceback_later(timeout, repeat=False, file=sys.stderrn, exit=False):\n"
+ "dump the traceback of all threads in timeout seconds,\n"
+ "or each timeout seconds if repeat is True. If exit is True, "
+ "call _exit(1) which is not safe.")},
+ {"cancel_dump_traceback_later",
+ (PyCFunction)faulthandler_cancel_dump_traceback_later_py, METH_NOARGS,
+ PyDoc_STR("cancel_dump_traceback_later():\ncancel the previous call "
+ "to dump_traceback_later().")},
+#endif
+
+#ifdef FAULTHANDLER_USER
+ {"register",
+ (PyCFunction)faulthandler_register_py, METH_VARARGS|METH_KEYWORDS,
+ PyDoc_STR("register(signum, file=sys.stderr, all_threads=True, chain=False): "
+ "register a handler for the signal 'signum': dump the "
+ "traceback of the current thread, or of all threads if "
+ "all_threads is True, into file")},
+ {"unregister",
+ faulthandler_unregister_py, METH_VARARGS|METH_KEYWORDS,
+ PyDoc_STR("unregister(signum): unregister the handler of the signal "
+ "'signum' registered by register()")},
+#endif
+
+ {"_read_null", faulthandler_read_null, METH_NOARGS,
+ PyDoc_STR("_read_null(): read from NULL, raise "
+ "a SIGSEGV or SIGBUS signal depending on the platform")},
+ {"_sigsegv", faulthandler_sigsegv, METH_VARARGS,
+ PyDoc_STR("_sigsegv(release_gil=False): raise a SIGSEGV signal")},
+#ifdef WITH_THREAD
+ {"_fatal_error_c_thread", faulthandler_fatal_error_c_thread, METH_NOARGS,
+ PyDoc_STR("fatal_error_c_thread(): "
+ "call Py_FatalError() in a new C thread.")},
+#endif
+ {"_sigabrt", faulthandler_sigabrt, METH_NOARGS,
+ PyDoc_STR("_sigabrt(): raise a SIGABRT signal")},
+ {"_sigfpe", (PyCFunction)faulthandler_sigfpe, METH_NOARGS,
+ PyDoc_STR("_sigfpe(): raise a SIGFPE signal")},
+ {"_fatal_error", faulthandler_fatal_error_py, METH_VARARGS,
+ PyDoc_STR("_fatal_error(message): call Py_FatalError(message)")},
+#ifdef FAULTHANDLER_STACK_OVERFLOW
+ {"_stack_overflow", (PyCFunction)faulthandler_stack_overflow, METH_NOARGS,
+ PyDoc_STR("_stack_overflow(): recursive call to raise a stack overflow")},
+#endif
+#ifdef MS_WINDOWS
+ {"_raise_exception", faulthandler_raise_exception, METH_VARARGS,
+ PyDoc_STR("raise_exception(code, flags=0): Call RaiseException(code, flags).")},
+#endif
+ {NULL, NULL} /* sentinel */
+};
+
+static struct PyModuleDef module_def = {
+ PyModuleDef_HEAD_INIT,
+ "faulthandler",
+ module_doc,
+ 0, /* non-negative size to be able to unload the module */
+ module_methods,
+ NULL,
+ faulthandler_traverse,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit_faulthandler(void)
+{
+ PyObject *m = PyModule_Create(&module_def);
+ if (m == NULL)
+ return NULL;
+
+ /* Add constants for unit tests */
+#ifdef MS_WINDOWS
+ /* RaiseException() codes (prefixed by an underscore) */
+ if (PyModule_AddIntConstant(m, "_EXCEPTION_ACCESS_VIOLATION",
+ EXCEPTION_ACCESS_VIOLATION))
+ return NULL;
+ if (PyModule_AddIntConstant(m, "_EXCEPTION_INT_DIVIDE_BY_ZERO",
+ EXCEPTION_INT_DIVIDE_BY_ZERO))
+ return NULL;
+ if (PyModule_AddIntConstant(m, "_EXCEPTION_STACK_OVERFLOW",
+ EXCEPTION_STACK_OVERFLOW))
+ return NULL;
+
+ /* RaiseException() flags (prefixed by an underscore) */
+ if (PyModule_AddIntConstant(m, "_EXCEPTION_NONCONTINUABLE",
+ EXCEPTION_NONCONTINUABLE))
+ return NULL;
+ if (PyModule_AddIntConstant(m, "_EXCEPTION_NONCONTINUABLE_EXCEPTION",
+ EXCEPTION_NONCONTINUABLE_EXCEPTION))
+ return NULL;
+#endif
+
+ return m;
+}
+
+/* Call faulthandler.enable() if the PYTHONFAULTHANDLER environment variable
+ is defined, or if sys._xoptions has a 'faulthandler' key. */
+
+static int
+faulthandler_env_options(void)
+{
+ PyObject *xoptions, *key, *module, *res;
+ char *p;
+
+ if (!((p = Py_GETENV("PYTHONFAULTHANDLER")) && *p != '\0')) {
+ /* PYTHONFAULTHANDLER environment variable is missing
+ or an empty string */
+ int has_key;
+
+ xoptions = PySys_GetXOptions();
+ if (xoptions == NULL)
+ return -1;
+
+ key = PyUnicode_FromString("faulthandler");
+ if (key == NULL)
+ return -1;
+
+ has_key = PyDict_Contains(xoptions, key);
+ Py_DECREF(key);
+ if (has_key <= 0)
+ return has_key;
+ }
+
+ module = PyImport_ImportModule("faulthandler");
+ if (module == NULL) {
+ return -1;
+ }
+ res = _PyObject_CallMethodId(module, &PyId_enable, NULL);
+ Py_DECREF(module);
+ if (res == NULL)
+ return -1;
+ Py_DECREF(res);
+ return 0;
+}
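+
+/* For example (illustrative only), either of the following causes
+   faulthandler_env_options() above to enable the handler at interpreter
+   startup; the exact way to set an environment variable depends on how
+   python.efi is launched:
+
+       PYTHONFAULTHANDLER=1  python.efi script.py
+       python.efi -X faulthandler script.py
+*/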
+
+int _PyFaulthandler_Init(void)
+{
+#ifdef HAVE_SIGALTSTACK
+ int err;
+
+ /* Try to allocate an alternate stack for faulthandler() signal handler to
+ * be able to allocate memory on the stack, even on a stack overflow. If it
+ * fails, ignore the error. */
+ stack.ss_flags = 0;
+ stack.ss_size = SIGSTKSZ;
+ stack.ss_sp = PyMem_Malloc(stack.ss_size);
+ if (stack.ss_sp != NULL) {
+ err = sigaltstack(&stack, &old_stack);
+ if (err) {
+ PyMem_Free(stack.ss_sp);
+ stack.ss_sp = NULL;
+ }
+ }
+#endif
+#ifdef FAULTHANDLER_LATER
+ thread.file = NULL;
+ thread.cancel_event = PyThread_allocate_lock();
+ thread.running = PyThread_allocate_lock();
+ if (!thread.cancel_event || !thread.running) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "could not allocate locks for faulthandler");
+ return -1;
+ }
+ PyThread_acquire_lock(thread.cancel_event, 1);
+#endif
+
+ return faulthandler_env_options();
+}
+
+void _PyFaulthandler_Fini(void)
+{
+#ifdef FAULTHANDLER_USER
+ unsigned int signum;
+#endif
+
+#ifdef FAULTHANDLER_LATER
+ /* later */
+ if (thread.cancel_event) {
+ cancel_dump_traceback_later();
+ PyThread_release_lock(thread.cancel_event);
+ PyThread_free_lock(thread.cancel_event);
+ thread.cancel_event = NULL;
+ }
+ if (thread.running) {
+ PyThread_free_lock(thread.running);
+ thread.running = NULL;
+ }
+#endif
+
+#ifdef FAULTHANDLER_USER
+ /* user */
+ if (user_signals != NULL) {
+ for (signum=0; signum < NSIG; signum++)
+ faulthandler_unregister(&user_signals[signum], signum);
+ PyMem_Free(user_signals);
+ user_signals = NULL;
+ }
+#endif
+
+ /* fatal */
+ faulthandler_disable();
+#ifdef HAVE_SIGALTSTACK
+ if (stack.ss_sp != NULL) {
+ /* Fetch the current alt stack */
+ stack_t current_stack = {};
+ if (sigaltstack(NULL, &current_stack) == 0) {
+ if (current_stack.ss_sp == stack.ss_sp) {
+ /* The current alt stack is the one that we installed.
+ It is safe to restore the old stack that we found when
+ we installed ours */
+ sigaltstack(&old_stack, NULL);
+ } else {
+ /* Someone switched to a different alt stack and didn't
+ restore ours when they were done (if they're done).
+ There's not much we can do in this unlikely case */
+ }
+ }
+ PyMem_Free(stack.ss_sp);
+ stack.ss_sp = NULL;
+ }
+#endif
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
new file mode 100644
index 00000000..ad10784d
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
@@ -0,0 +1,1283 @@
+/* Return the initial module search path. */
+
+#include "Python.h"
+#include <osdefs.h>
+#include <ctype.h>
+
+//#include <sys/types.h>
+//#include <string.h>
+
+#ifdef __APPLE__
+#include <mach-o/dyld.h>
+#endif
+
+/* Search in some common locations for the associated Python libraries.
+ *
+ * Two directories must be found, the platform independent directory
+ * (prefix), containing the common .py and .pyc files, and the platform
+ * dependent directory (exec_prefix), containing the shared library
+ * modules. Note that prefix and exec_prefix can be the same directory,
+ * but for some installations, they are different.
+ *
+ * Py_GetPath() carries out separate searches for prefix and exec_prefix.
+ * Each search tries a number of different locations until a ``landmark''
+ * file or directory is found. If no prefix or exec_prefix is found, a
+ * warning message is issued and the preprocessor defined PREFIX and
+ * EXEC_PREFIX are used (even though they will not work); python carries on
+ * as best as is possible, but most imports will fail.
+ *
+ * Before any searches are done, the location of the executable is
+ * determined. If argv[0] has one or more slashes in it, it is used
+ * unchanged. Otherwise, it must have been invoked from the shell's path,
+ * so we search $PATH for the named executable and use that. If the
+ * executable was not found on $PATH (or there was no $PATH environment
+ * variable), the original argv[0] string is used.
+ *
+ * Next, the executable location is examined to see if it is a symbolic
+ * link. If so, the link is chased (correctly interpreting a relative
+ * pathname if one is found) and the directory of the link target is used.
+ *
+ * Finally, argv0_path is set to the directory containing the executable
+ * (i.e. the last component is stripped).
+ *
+ * With argv0_path in hand, we perform a number of steps. The same steps
+ * are performed for prefix and for exec_prefix, but with a different
+ * landmark.
+ *
+ * Step 1. Are we running python out of the build directory? This is
+ * checked by looking for a different kind of landmark relative to
+ * argv0_path. For prefix, the landmark's path is derived from the VPATH
+ * preprocessor variable (taking into account that its value is almost, but
+ * not quite, what we need). For exec_prefix, the landmark is
+ * pybuilddir.txt. If the landmark is found, we're done.
+ *
+ * For the remaining steps, the prefix landmark will always be
+ * lib/python$VERSION/os.py and the exec_prefix will always be
+ * lib/python$VERSION/lib-dynload, where $VERSION is Python's version
+ * number as supplied by the Makefile. Note that this means that no more
+ * build directory checking is performed; if the first step did not find
+ * the landmarks, the assumption is that python is running from an
+ * installed setup.
+ *
+ * Step 2. See if the $PYTHONHOME environment variable points to the
+ * installed location of the Python libraries. If $PYTHONHOME is set, then
+ * it points to prefix and exec_prefix. $PYTHONHOME can be a single
+ * directory, which is used for both, or the prefix and exec_prefix
+ * directories separated by a colon.
+ *
+ * Step 3. Try to find prefix and exec_prefix relative to argv0_path,
+ * backtracking up the path until it is exhausted. This is the most common
+ * step to succeed. Note that if prefix and exec_prefix are different,
+ * exec_prefix is more likely to be found; however if exec_prefix is a
+ * subdirectory of prefix, both will be found.
+ *
+ * Step 4. Search the directories pointed to by the preprocessor variables
+ * PREFIX and EXEC_PREFIX. These are supplied by the Makefile but can be
+ * passed in as options to the configure script.
+ *
+ * That's it!
+ *
+ * Well, almost. Once we have determined prefix and exec_prefix, the
+ * preprocessor variable PYTHONPATH is used to construct a path. Each
+ * relative path on PYTHONPATH is prefixed with prefix. Then the directory
+ * containing the shared library modules is appended. The environment
+ * variable $PYTHONPATH is inserted in front of it all. Finally, the
+ * prefix and exec_prefix globals are tweaked so they reflect the values
+ * expected by other code, by stripping the "lib/python$VERSION/..." stuff
+ * off. If either points to the build directory, the globals are reset to
+ * the corresponding preprocessor variables (so sys.prefix will reflect the
+ * installation location, even though sys.path points into the build
+ * directory). This seems to make more sense given that currently the only
+ * known use of sys.prefix and sys.exec_prefix is for the ILU installation
+ * process to find the installed Python tree.
+ *
+ * An embedding application can use Py_SetPath() to override all of
+ * these automatic path computations.
+ *
+ * NOTE: Windows MSVC builds use PC/getpathp.c instead!
+ */
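+
+/* On UEFI the defaults below collapse the search described above into a
+   much simpler scheme.  For example (illustrative values only), assuming
+   python.efi is launched from volume fs0: and the compiled-in PREFIX of
+   /Efi/StdLib is used, calculate_path() below ends up with a module search
+   path roughly equivalent to:
+
+       fs0:/Efi/StdLib/lib/python36.zip;
+       fs0:/Efi/StdLib/lib/python36.8;
+       fs0:/Efi/StdLib/lib/python36.8/lib-dynload
+
+   where the volume name is taken from the path used to launch the
+   interpreter. */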
+
+#ifdef __cplusplus
+ extern "C" {
+#endif
+
+/* Filename separator */
+#ifndef SEP
+#define SEP L'/'
+#define ALTSEP L'\\'
+#endif
+
+#ifndef ALTSEP
+#define ALTSEP L'\\'
+#endif
+
+
+#define SIFY_I( x ) L#x
+#define SIFY( y ) SIFY_I( y )
+
+#ifndef PREFIX
+ #define PREFIX L"/Efi/StdLib"
+#endif
+
+#ifndef EXEC_PREFIX
+ #define EXEC_PREFIX PREFIX
+#endif
+
+#ifndef LIBPYTHON
+ #define LIBPYTHON L"lib/python" VERSION L"." SIFY(PY_MICRO_VERSION)
+#endif
+
+#ifndef PYTHONPATH
+ #define PYTHONPATH LIBPYTHON
+#endif
+
+#ifndef LANDMARK
+ #define LANDMARK L"os.py"
+#endif
+
+#ifndef VERSION
+ #define VERSION SIFY(PY_MAJOR_VERSION) SIFY(PY_MINOR_VERSION)
+#endif
+
+#ifndef VPATH
+ #define VPATH L"."
+#endif
+
+/* Search path entry delimiter */
+//# define DELIM ';'
+# define DELIM_STR ";"
+
+#ifdef DELIM
+ #define sDELIM L";"
+#endif
+
+
+#if !defined(PREFIX) || !defined(EXEC_PREFIX) || !defined(VERSION) || !defined(VPATH)
+#error "PREFIX, EXEC_PREFIX, VERSION, and VPATH must be constant defined"
+#endif
+
+#ifndef LANDMARK
+#define LANDMARK L"os.py"
+#endif
+
+static wchar_t prefix[MAXPATHLEN+1] = {0};
+static wchar_t exec_prefix[MAXPATHLEN+1] = {0};
+static wchar_t progpath[MAXPATHLEN+1] = {0};
+static wchar_t *module_search_path = NULL;
+static wchar_t lib_python[] = LIBPYTHON;
+static wchar_t volume_name[32] = { 0 };
+
+
+/* Get file status. Encode the path to the locale encoding. */
+
+static int
+_Py_wstat(const wchar_t* path, struct stat *buf)
+{
+ int err;
+ char *fname;
+ fname = Py_EncodeLocale(path, NULL);
+ if (fname == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+ err = stat(fname, buf);
+ PyMem_Free(fname);
+ return err;
+}
+
+/* Return the last path component (basename). Encode the path to the locale encoding. */
+
+static wchar_t *
+_Py_basename(const wchar_t* path)
+{
+ int err;
+ size_t len = 0;
+ char *fname, *buf;
+ wchar_t *bname;
+ fname = Py_EncodeLocale(path, NULL);
+ if (fname == NULL) {
+ errno = EINVAL;
+ return NULL;
+ }
+ buf = basename(fname);
+ PyMem_Free(fname);
+ len = strlen(buf);
+ bname = Py_DecodeLocale(buf, &len);
+ return bname;
+}
+
+/** Determine if "ch" is a separator character.
+
+ @param[in] ch The character to test.
+
+ @retval TRUE ch is a separator character.
+ @retval FALSE ch is NOT a separator character.
+**/
+static int
+is_sep(wchar_t ch)
+{
+ return ch == SEP || ch == ALTSEP;
+}
+
+/** Determine if a path is absolute, or not.
+ An absolute path consists of a volume name, "VOL:", followed by a rooted path,
+ "/path/elements". If both of these components are present, the path is absolute.
+
+ Let P be a pointer to the path to test.
+ Let A be a pointer to the first ':' in P.
+ Let B be a pointer to the first '/' or '\\' in P.
+
+ If A and B are not NULL
+ If (A-P+1) == (B-P) then the path is absolute.
+ Otherwise, the path is NOT absolute.
+
+ @param[in] path The path to test.
+
+ @retval -1 Path is absolute but lacking volume name.
+ @retval 0 Path is NOT absolute.
+ @retval 1 Path is absolute.
+*/
+static int
+is_absolute(wchar_t *path)
+{
+ wchar_t *A;
+ wchar_t *B;
+
+ A = wcschr(path, L':');
+ B = wcspbrk(path, L"/\\");
+
+ if(B != NULL) {
+ if(A == NULL) {
+ if(B == path) {
+ return -1;
+ }
+ }
+ else {
+ if(((A - path) + 1) == (B - path)) {
+ return 1;
+ }
+ }
+ }
+ return 0;
+}
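+
+/* Examples (illustrative):
+       is_absolute(L"fs0:/Efi/StdLib")  ->  1   volume name plus rooted path
+       is_absolute(L"/Efi/StdLib")      -> -1   rooted, but no volume name
+       is_absolute(L"Tools/foo.py")     ->  0   relative path
+*/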
+
+static void
+reduce(wchar_t *dir)
+{
+ size_t i = wcslen(dir);
+ while (i > 0 && !is_sep(dir[i]))
+ --i;
+ dir[i] = '\0';
+}
+
+static int
+isfile(wchar_t *filename) /* Is file, not directory */
+{
+ struct stat buf;
+ if (_Py_wstat(filename, &buf) != 0)
+ return 0;
+ if (!S_ISREG(buf.st_mode))
+ return 0;
+ return 1;
+}
+
+
+static int
+ismodule(wchar_t *filename) /* Is module -- check for .pyc too */
+{
+ if (isfile(filename))
+ return 1;
+
+ /* Check for the compiled version of prefix. */
+ if (wcslen(filename) < MAXPATHLEN) {
+ wcscat(filename, L"c");
+ if (isfile(filename))
+ return 1;
+ }
+ return 0;
+}
+
+static int
+isdir(wchar_t *filename) /* Is directory */
+{
+ struct stat buf;
+ if (_Py_wstat(filename, &buf) != 0)
+ return 0;
+ if (!S_ISDIR(buf.st_mode))
+ return 0;
+ return 1;
+}
+
+/* Add a path component, by appending stuff to buffer.
+ buffer must have at least MAXPATHLEN + 1 bytes allocated, and contain a
+ NUL-terminated string with no more than MAXPATHLEN characters (not counting
+ the trailing NUL). It's a fatal error if it contains a string longer than
+ that (callers must be careful!). If these requirements are met, it's
+ guaranteed that buffer will still be a NUL-terminated string with no more
+ than MAXPATHLEN characters at exit. If stuff is too long, only as much of
+ stuff as fits will be appended.
+*/
+static void
+joinpath(wchar_t *buffer, wchar_t *stuff)
+{
+ size_t n, k;
+ k = 0;
+ if (is_absolute(stuff) == 1){
+ n = 0;
+ }
+ else {
+ n = wcslen(buffer);
+ if (n == 0) {
+ wcsncpy(buffer, volume_name, MAXPATHLEN);
+ n = wcslen(buffer);
+ }
+ if (n > 0 && n < MAXPATHLEN){
+ if(!is_sep(buffer[n-1])) {
+ buffer[n++] = SEP;
+ }
+ if(is_sep(stuff[0])) ++stuff;
+ }
+ }
+ if (n > MAXPATHLEN)
+ Py_FatalError("buffer overflow in getpath.c's joinpath()");
+ k = wcslen(stuff);
+ if (n + k > MAXPATHLEN)
+ k = MAXPATHLEN - n;
+ wcsncpy(buffer+n, stuff, k);
+ buffer[n+k] = '\0';
+}
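+
+/* Examples (illustrative), assuming volume_name is L"fs0:":
+       buffer = L"fs0:/Efi/StdLib", stuff = L"lib/python36.8"
+           -> buffer becomes L"fs0:/Efi/StdLib/lib/python36.8"
+       buffer = L"" (empty),        stuff = L"Modules/Setup"
+           -> volume_name is prepended, giving L"fs0:/Modules/Setup"
+*/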
+
+static int
+isxfile(wchar_t *filename)
+{
+ struct stat buf;
+ wchar_t *bn;
+ wchar_t *newbn;
+ int bnlen;
+ char *filename_str;
+
+ bn = _Py_basename(filename); // Separate off the file name component
+ reduce(filename); // and isolate the path component
+ bnlen = wcslen(bn);
+ newbn = wcsrchr(bn, L'.'); // Does basename contain a period?
+ if(newbn == NULL) { // Does NOT contain a period.
+ newbn = &bn[bnlen];
+ wcsncpy(newbn, L".efi", MAXPATHLEN - bnlen); // append ".efi" to basename
+ bnlen += 4;
+ }
+ else if(wcscmp(newbn, L".efi") != 0) {
+ return 0; // File can not be executable.
+ }
+ joinpath(filename, bn); // Stitch path and file name back together
+
+ return isdir(filename);
+}
+
+/* copy_absolute requires that path be allocated at least
+ MAXPATHLEN + 1 bytes and that p be no more than MAXPATHLEN bytes. */
+static void
+copy_absolute(wchar_t *path, wchar_t *p, size_t pathlen)
+{
+ if (is_absolute(p) == 1)
+ wcscpy(path, p);
+ else {
+ if (!_Py_wgetcwd(path, pathlen)) {
+ /* unable to get the current directory */
+ if(volume_name[0] != 0) {
+ wcscpy(path, volume_name);
+ joinpath(path, p);
+ }
+ else
+ wcscpy(path, p);
+ return;
+ }
+ if (p[0] == '.' && p[1] == SEP)
+ p += 2;
+ joinpath(path, p);
+ }
+}
+
+/* absolutize() requires that path be allocated at least MAXPATHLEN+1 bytes. */
+static void
+absolutize(wchar_t *path)
+{
+ wchar_t buffer[MAXPATHLEN+1];
+
+ if (is_absolute(path) == 1)
+ return;
+ copy_absolute(buffer, path, MAXPATHLEN+1);
+ wcscpy(path, buffer);
+}
+
+/** Extract the volume name from a path.
+
+ @param[out] Dest Pointer to location in which to store the extracted volume name.
+ @param[in] path Pointer to the path to extract the volume name from.
+**/
+static void
+set_volume(wchar_t *Dest, wchar_t *path)
+{
+ size_t VolLen;
+
+ if(is_absolute(path)) {
+ VolLen = wcscspn(path, L"/\\:");
+ if((VolLen != 0) && (path[VolLen] == L':')) {
+ (void) wcsncpy(Dest, path, VolLen + 1);
+ }
+ }
+}
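+
+/* For example (illustrative), set_volume(volume_name, L"fs0:/Tools/python368.efi")
+   copies L"fs0:" into volume_name; a relative path leaves it untouched. */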
+
+
+/* search for a prefix value in an environment file. If found, copy it
+ to the provided buffer, which is expected to be no more than MAXPATHLEN
+ bytes long.
+*/
+
+static int
+find_env_config_value(FILE * env_file, const wchar_t * key, wchar_t * value)
+{
+ int result = 0; /* meaning not found */
+ char buffer[MAXPATHLEN*2+1]; /* allow extra for key, '=', etc. */
+
+ fseek(env_file, 0, SEEK_SET);
+ while (!feof(env_file)) {
+ char * p = fgets(buffer, MAXPATHLEN*2, env_file);
+ wchar_t tmpbuffer[MAXPATHLEN*2+1];
+ PyObject * decoded;
+ int n;
+
+ if (p == NULL)
+ break;
+ n = strlen(p);
+ if (p[n - 1] != '\n') {
+ /* line has overflowed - bail */
+ break;
+ }
+ if (p[0] == '#') /* Comment - skip */
+ continue;
+ decoded = PyUnicode_DecodeUTF8(buffer, n, "surrogateescape");
+ if (decoded != NULL) {
+ Py_ssize_t k;
+ wchar_t * state;
+ k = PyUnicode_AsWideChar(decoded,
+ tmpbuffer, MAXPATHLEN * 2);
+ Py_DECREF(decoded);
+ if (k >= 0) {
+ wchar_t * tok = wcstok(tmpbuffer, L" \t\r\n", &state);
+ if ((tok != NULL) && !wcscmp(tok, key)) {
+ tok = wcstok(NULL, L" \t", &state);
+ if ((tok != NULL) && !wcscmp(tok, L"=")) {
+ tok = wcstok(NULL, L"\r\n", &state);
+ if (tok != NULL) {
+ wcsncpy(value, tok, MAXPATHLEN);
+ result = 1;
+ break;
+ }
+ }
+ }
+ }
+ }
+ }
+ return result;
+}
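+
+/* For example (illustrative), given a pyvenv.cfg style line
+
+       home = fs0:/Efi/StdLib
+
+   a call with key == L"home" copies L"fs0:/Efi/StdLib" into value and
+   returns 1. */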
+
+/* search_for_prefix requires that argv0_path be no more than MAXPATHLEN
+ bytes long.
+*/
+static int
+search_for_prefix(wchar_t *argv0_path, wchar_t *home, wchar_t *_prefix,
+ wchar_t *lib_python)
+{
+ size_t n;
+ wchar_t *vpath;
+
+ /* If PYTHONHOME is set, we believe it unconditionally */
+ if (home) {
+ wchar_t *delim;
+ wcsncpy(prefix, home, MAXPATHLEN);
+ prefix[MAXPATHLEN] = L'\0';
+ delim = wcschr(prefix, DELIM);
+ if (delim)
+ *delim = L'\0';
+ joinpath(prefix, lib_python);
+ joinpath(prefix, LANDMARK);
+ return 1;
+ }
+
+ /* Check to see if argv[0] is in the build directory */
+ wcsncpy(prefix, argv0_path, MAXPATHLEN);
+ prefix[MAXPATHLEN] = L'\0';
+ joinpath(prefix, L"Modules/Setup");
+ if (isfile(prefix)) {
+ /* Check VPATH to see if argv0_path is in the build directory. */
+ vpath = Py_DecodeLocale(VPATH, NULL);
+ if (vpath != NULL) {
+ wcsncpy(prefix, argv0_path, MAXPATHLEN);
+ prefix[MAXPATHLEN] = L'\0';
+ joinpath(prefix, vpath);
+ PyMem_RawFree(vpath);
+ joinpath(prefix, L"Lib");
+ joinpath(prefix, LANDMARK);
+ if (ismodule(prefix))
+ return -1;
+ }
+ }
+
+ /* Search from argv0_path, until root is found */
+ copy_absolute(prefix, argv0_path, MAXPATHLEN+1);
+ do {
+ n = wcslen(prefix);
+ joinpath(prefix, lib_python);
+ joinpath(prefix, LANDMARK);
+ if (ismodule(prefix))
+ return 1;
+ prefix[n] = L'\0';
+ reduce(prefix);
+ } while (prefix[0]);
+
+ /* Look at configure's PREFIX */
+ wcsncpy(prefix, _prefix, MAXPATHLEN);
+ prefix[MAXPATHLEN] = L'\0';
+ joinpath(prefix, lib_python);
+ joinpath(prefix, LANDMARK);
+ if (ismodule(prefix))
+ return 1;
+
+ /* Fail */
+ return 0;
+}
+
+
+/* search_for_exec_prefix requires that argv0_path be no more than
+ MAXPATHLEN bytes long.
+*/
+static int
+search_for_exec_prefix(wchar_t *argv0_path, wchar_t *home,
+ wchar_t *_exec_prefix, wchar_t *lib_python)
+{
+ size_t n;
+
+ /* If PYTHONHOME is set, we believe it unconditionally */
+ if (home) {
+ wchar_t *delim;
+ delim = wcschr(home, DELIM);
+ if (delim)
+ wcsncpy(exec_prefix, delim+1, MAXPATHLEN);
+ else
+ wcsncpy(exec_prefix, home, MAXPATHLEN);
+ exec_prefix[MAXPATHLEN] = L'\0';
+ joinpath(exec_prefix, lib_python);
+ joinpath(exec_prefix, L"lib-dynload");
+ return 1;
+ }
+
+ /* Check to see if argv[0] is in the build directory. "pybuilddir.txt"
+ is written by setup.py and contains the relative path to the location
+ of shared library modules. */
+ wcsncpy(exec_prefix, argv0_path, MAXPATHLEN);
+ exec_prefix[MAXPATHLEN] = L'\0';
+ joinpath(exec_prefix, L"pybuilddir.txt");
+ if (isfile(exec_prefix)) {
+ FILE *f = _Py_wfopen(exec_prefix, L"rb");
+ if (f == NULL)
+ errno = 0;
+ else {
+ char buf[MAXPATHLEN+1];
+ PyObject *decoded;
+ wchar_t rel_builddir_path[MAXPATHLEN+1];
+ n = fread(buf, 1, MAXPATHLEN, f);
+ buf[n] = '\0';
+ fclose(f);
+ decoded = PyUnicode_DecodeUTF8(buf, n, "surrogateescape");
+ if (decoded != NULL) {
+ Py_ssize_t k;
+ k = PyUnicode_AsWideChar(decoded,
+ rel_builddir_path, MAXPATHLEN);
+ Py_DECREF(decoded);
+ if (k >= 0) {
+ rel_builddir_path[k] = L'\0';
+ wcsncpy(exec_prefix, argv0_path, MAXPATHLEN);
+ exec_prefix[MAXPATHLEN] = L'\0';
+ joinpath(exec_prefix, rel_builddir_path);
+ return -1;
+ }
+ }
+ }
+ }
+
+ /* Search from argv0_path, until root is found */
+ copy_absolute(exec_prefix, argv0_path, MAXPATHLEN+1);
+ do {
+ n = wcslen(exec_prefix);
+ joinpath(exec_prefix, lib_python);
+ joinpath(exec_prefix, L"lib-dynload");
+ if (isdir(exec_prefix))
+ return 1;
+ exec_prefix[n] = L'\0';
+ reduce(exec_prefix);
+ } while (exec_prefix[0]);
+
+ /* Look at configure's EXEC_PREFIX */
+ wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
+ exec_prefix[MAXPATHLEN] = L'\0';
+ joinpath(exec_prefix, lib_python);
+ joinpath(exec_prefix, L"lib-dynload");
+ if (isdir(exec_prefix))
+ return 1;
+
+ /* Fail */
+ return 0;
+}
+
+static void
+calculate_path(void)
+{
+ extern wchar_t *Py_GetProgramName(void);
+ wchar_t *pythonpath = PYTHONPATH;
+ static const wchar_t delimiter[2] = {DELIM, '\0'};
+ static const wchar_t separator[2] = {SEP, '\0'};
+ //char *rtpypath = Py_GETENV("PYTHONPATH"); /* XXX use wide version on Windows */
+ wchar_t *rtpypath = NULL;
+ char *_path = getenv("path");
+ wchar_t *path_buffer = NULL;
+ wchar_t *path = NULL;
+ wchar_t *prog = Py_GetProgramName();
+ wchar_t argv0_path[MAXPATHLEN+1];
+ wchar_t zip_path[MAXPATHLEN+1];
+ wchar_t *buf;
+ size_t bufsz;
+ size_t prefixsz;
+ wchar_t *defpath;
+
+ if (_path) {
+ path_buffer = Py_DecodeLocale(_path, NULL);
+ path = path_buffer;
+ }
+/* ###########################################################################
+ Determine path to the Python.efi binary.
+ Produces progpath, argv0_path, and volume_name.
+########################################################################### */
+
+ /* If there is no slash in the argv0 path, then we have to
+ * assume python is on the user's $PATH, since there's no
+ * other way to find a directory to start the search from. If
+ * $PATH isn't exported, you lose.
+ */
+ if (wcspbrk(prog, L"/\\"))
+ {
+ wcsncpy(progpath, prog, MAXPATHLEN);
+ }
+ else if (path) {
+ while (1) {
+ wchar_t *delim = wcschr(path, DELIM);
+
+ if (delim) {
+ size_t len = delim - path;
+ if (len > MAXPATHLEN)
+ len = MAXPATHLEN;
+ wcsncpy(progpath, path, len);
+ *(progpath + len) = L'\0';
+ }
+ else
+ wcsncpy(progpath, path, MAXPATHLEN);
+
+ joinpath(progpath, prog);
+ if (isxfile(progpath))
+ break;
+
+ if (!delim) {
+ progpath[0] = L'\0';
+ break;
+ }
+ path = delim + 1;
+ }
+ }
+ else
+ progpath[0] = L'\0';
+
+ if ( (!is_absolute(progpath)) && (progpath[0] != '\0') )
+ absolutize(progpath);
+
+ wcsncpy(argv0_path, progpath, MAXPATHLEN);
+ argv0_path[MAXPATHLEN] = L'\0';
+ set_volume(volume_name, argv0_path);
+
+ reduce(argv0_path);
+ /* At this point, argv0_path is guaranteed to be less than
+ MAXPATHLEN bytes long.
+ */
+/* ###########################################################################
+ Build the FULL prefix string, including volume name.
+ This is the full path to the platform independent libraries.
+########################################################################### */
+
+ wcsncpy(prefix, volume_name, MAXPATHLEN);
+ joinpath(prefix, PREFIX);
+ joinpath(prefix, lib_python);
+
+/* ###########################################################################
+ Build the FULL path to the zipped-up Python library.
+########################################################################### */
+
+ wcsncpy(zip_path, prefix, MAXPATHLEN);
+ zip_path[MAXPATHLEN] = L'\0';
+ reduce(zip_path);
+ joinpath(zip_path, L"python00.zip");
+ bufsz = wcslen(zip_path); /* Replace "00" with version */
+ zip_path[bufsz - 6] = VERSION[0];
+ zip_path[bufsz - 5] = VERSION[1];
+/* ###########################################################################
+ Build the FULL path to dynamically loadable libraries.
+########################################################################### */
+
+ wcsncpy(exec_prefix, volume_name, MAXPATHLEN); // "fs0:"
+ joinpath(exec_prefix, EXEC_PREFIX); // "fs0:/Efi/StdLib"
+ joinpath(exec_prefix, lib_python); // "fs0:/Efi/StdLib/lib/python36.8"
+ joinpath(exec_prefix, L"lib-dynload"); // "fs0:/Efi/StdLib/lib/python36.8/lib-dynload"
+/* ###########################################################################
+ Build the module search path.
+########################################################################### */
+
+ /* Reduce prefix and exec_prefix to their essence,
+ * e.g. /usr/local/lib/python1.5 is reduced to /usr/local.
+ * If we're loading relative to the build directory,
+ * return the compiled-in defaults instead.
+ */
+ reduce(prefix);
+ reduce(prefix);
+ /* The prefix is the root directory, but reduce() chopped
+ * off the "/". */
+ if (!prefix[0]) {
+ wcscpy(prefix, volume_name);
+ }
+ bufsz = wcslen(prefix);
+ if(prefix[bufsz-1] == L':') { // if prefix consists solely of a volume_name
+ prefix[bufsz] = SEP; // then append SEP indicating the root directory
+ prefix[bufsz+1] = 0; // and ensure the new string is terminated
+ }
+
+ /* Calculate size of return buffer.
+ */
+ defpath = pythonpath;
+ bufsz = 0;
+
+ if (rtpypath)
+ bufsz += wcslen(rtpypath) + 1;
+
+ prefixsz = wcslen(prefix) + 1;
+ while (1) {
+ wchar_t *delim = wcschr(defpath, DELIM);
+
+ if (is_absolute(defpath) == 0)
+ /* Paths are relative to prefix */
+ bufsz += prefixsz;
+
+ if (delim)
+ bufsz += delim - defpath + 1;
+ else {
+ bufsz += wcslen(defpath) + 1;
+ break;
+ }
+ defpath = delim + 1;
+ }
+
+ bufsz += wcslen(zip_path) + 1;
+ bufsz += wcslen(exec_prefix) + 1;
+
+ /* This is the only malloc call in this file */
+ buf = (wchar_t *)PyMem_Malloc(bufsz * 2);
+
+ if (buf == NULL) {
+ /* We can't exit, so print a warning and limp along */
+ fprintf(stderr, "Not enough memory for dynamic PYTHONPATH.\n");
+ fprintf(stderr, "Using default static PYTHONPATH.\n");
+ module_search_path = PYTHONPATH;
+ }
+ else {
+ /* Run-time value of $PYTHONPATH goes first */
+ if (rtpypath) {
+ wcscpy(buf, rtpypath);
+ wcscat(buf, delimiter);
+ }
+ else
+ buf[0] = L'\0';
+
+ /* Next is the default zip path */
+ wcscat(buf, zip_path);
+ wcscat(buf, delimiter);
+ /* Next goes merge of compile-time $PYTHONPATH with
+ * dynamically located prefix.
+ */
+ defpath = pythonpath;
+ while (1) {
+ wchar_t *delim = wcschr(defpath, DELIM);
+
+ if (is_absolute(defpath) != 1) {
+ wcscat(buf, prefix);
+ wcscat(buf, separator);
+ }
+
+ if (delim) {
+ size_t len = delim - defpath + 1;
+ size_t end = wcslen(buf) + len;
+ wcsncat(buf, defpath, len);
+ *(buf + end) = L'\0';
+ }
+ else {
+ wcscat(buf, defpath);
+ break;
+ }
+ defpath = delim + 1;
+ }
+ wcscat(buf, delimiter);
+ /* Finally, on goes the directory for dynamic-load modules */
+ wcscat(buf, exec_prefix);
+ /* And publish the results */
+ module_search_path = buf;
+ }
+ /* At this point, exec_prefix is set to VOL:/Efi/StdLib/lib/python36.8/lib-dynload.
+ We want to get back to the root value, so we have to remove the final three
+ segments to get VOL:/Efi/StdLib. Because we don't know what VOL is, and
+ EXEC_PREFIX is also indeterminate, we just remove the three final segments.
+ */
+ reduce(exec_prefix);
+ reduce(exec_prefix);
+ reduce(exec_prefix);
+ if (!exec_prefix[0]) {
+ wcscpy(exec_prefix, volume_name);
+ }
+ bufsz = wcslen(exec_prefix);
+ if(exec_prefix[bufsz-1] == L':') {
+ exec_prefix[bufsz] = SEP;
+ exec_prefix[bufsz+1] = 0;
+ }
+
+#if 1
+ if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: module_search_path = \"%s\"\n", __func__, __LINE__, module_search_path);
+ if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: prefix = \"%s\"\n", __func__, __LINE__, prefix);
+ if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: exec_prefix = \"%s\"\n", __func__, __LINE__, exec_prefix);
+ if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: progpath = \"%s\"\n", __func__, __LINE__, progpath);
+#endif
+
+#if 0
+
+ extern wchar_t *Py_GetProgramName(void);
+
+ static const wchar_t delimiter[2] = {DELIM, '\0'};
+ static const wchar_t separator[2] = {SEP, '\0'};
+ char *_rtpypath = Py_GETENV("PYTHONPATH"); /* XXX use wide version on Windows */
+ wchar_t *rtpypath = NULL;
+ //wchar_t *home = Py_GetPythonHome();
+ char *_path = getenv("PATH");
+ wchar_t *path_buffer = NULL;
+ wchar_t *path = NULL;
+ wchar_t *prog = Py_GetProgramName();
+ wchar_t argv0_path[MAXPATHLEN+1];
+ wchar_t zip_path[MAXPATHLEN+1];
+ wchar_t *buf;
+ size_t bufsz;
+ size_t prefixsz;
+ wchar_t *defpath;
+#ifdef WITH_NEXT_FRAMEWORK
+ NSModule pythonModule;
+ const char* modPath;
+#endif
+#ifdef __APPLE__
+#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_4
+ uint32_t nsexeclength = MAXPATHLEN;
+#else
+ unsigned long nsexeclength = MAXPATHLEN;
+#endif
+ char execpath[MAXPATHLEN+1];
+#endif
+ wchar_t *_pythonpath, *_prefix, *_exec_prefix;
+ wchar_t *lib_python;
+
+ _pythonpath = Py_DecodeLocale(PYTHONPATH, NULL);
+ _prefix = Py_DecodeLocale(PREFIX, NULL);
+ _exec_prefix = Py_DecodeLocale(EXEC_PREFIX, NULL);
+ lib_python = Py_DecodeLocale("lib/python" VERSION, NULL);
+
+ if (!_pythonpath || !_prefix || !_exec_prefix || !lib_python) {
+ Py_FatalError(
+ "Unable to decode path variables in getpath.c: "
+ "memory error");
+ }
+
+ if (_path) {
+ path_buffer = Py_DecodeLocale(_path, NULL);
+ path = path_buffer;
+ }
+
+ /* If there is no slash in the argv0 path, then we have to
+ * assume python is on the user's $PATH, since there's no
+ * other way to find a directory to start the search from. If
+ * $PATH isn't exported, you lose.
+ */
+ if (wcschr(prog, SEP))
+ wcsncpy(progpath, prog, MAXPATHLEN);
+#ifdef __APPLE__
+ /* On Mac OS X, if a script uses an interpreter of the form
+ * "#!/opt/python2.3/bin/python", the kernel only passes "python"
+ * as argv[0], which falls through to the $PATH search below.
+ * If /opt/python2.3/bin isn't in your path, or is near the end,
+ * this algorithm may incorrectly find /usr/bin/python. To work
+ * around this, we can use _NSGetExecutablePath to get a better
+ * hint of what the intended interpreter was, although this
+ * will fail if a relative path was used. but in that case,
+ * absolutize() should help us out below
+ */
+ else if(0 == _NSGetExecutablePath(execpath, &nsexeclength) && execpath[0] == SEP) {
+ size_t r = mbstowcs(progpath, execpath, MAXPATHLEN+1);
+ if (r == (size_t)-1 || r > MAXPATHLEN) {
+ /* Could not convert execpath, or it's too long. */
+ progpath[0] = L'\0';
+ }
+ }
+#endif /* __APPLE__ */
+ else if (path) {
+ while (1) {
+ wchar_t *delim = wcschr(path, DELIM);
+
+ if (delim) {
+ size_t len = delim - path;
+ if (len > MAXPATHLEN)
+ len = MAXPATHLEN;
+ wcsncpy(progpath, path, len);
+ *(progpath + len) = L'\0';
+ }
+ else
+ wcsncpy(progpath, path, MAXPATHLEN);
+
+ joinpath(progpath, prog);
+ if (isxfile(progpath))
+ break;
+
+ if (!delim) {
+ progpath[0] = L'\0';
+ break;
+ }
+ path = delim + 1;
+ }
+ }
+ else
+ progpath[0] = L'\0';
+ PyMem_RawFree(path_buffer);
+ if (progpath[0] != SEP && progpath[0] != L'\0')
+ absolutize(progpath);
+ wcsncpy(argv0_path, progpath, MAXPATHLEN);
+ argv0_path[MAXPATHLEN] = L'\0';
+
+#ifdef WITH_NEXT_FRAMEWORK
+ /* On Mac OS X we have a special case if we're running from a framework.
+ ** This is because the python home should be set relative to the library,
+ ** which is in the framework, not relative to the executable, which may
+ ** be outside of the framework. Except when we're in the build directory...
+ */
+ pythonModule = NSModuleForSymbol(NSLookupAndBindSymbol("_Py_Initialize"));
+ /* Use dylib functions to find out where the framework was loaded from */
+ modPath = NSLibraryNameForModule(pythonModule);
+ if (modPath != NULL) {
+ /* We're in a framework. */
+ /* See if we might be in the build directory. The framework in the
+ ** build directory is incomplete, it only has the .dylib and a few
+ ** needed symlinks, it doesn't have the Lib directories and such.
+ ** If we're running with the framework from the build directory we must
+ ** be running the interpreter in the build directory, so we use the
+ ** build-directory-specific logic to find Lib and such.
+ */
+ wchar_t* wbuf = Py_DecodeLocale(modPath, NULL);
+ if (wbuf == NULL) {
+ Py_FatalError("Cannot decode framework location");
+ }
+
+ wcsncpy(argv0_path, wbuf, MAXPATHLEN);
+ reduce(argv0_path);
+ joinpath(argv0_path, lib_python);
+ joinpath(argv0_path, LANDMARK);
+ if (!ismodule(argv0_path)) {
+ /* We are in the build directory so use the name of the
+ executable - we know that the absolute path is passed */
+ wcsncpy(argv0_path, progpath, MAXPATHLEN);
+ }
+ else {
+ /* Use the location of the library as the progpath */
+ wcsncpy(argv0_path, wbuf, MAXPATHLEN);
+ }
+ PyMem_RawFree(wbuf);
+ }
+#endif
+
+#if HAVE_READLINK
+ {
+ wchar_t tmpbuffer[MAXPATHLEN+1];
+ int linklen = _Py_wreadlink(progpath, tmpbuffer, MAXPATHLEN);
+ while (linklen != -1) {
+ if (tmpbuffer[0] == SEP)
+ /* tmpbuffer should never be longer than MAXPATHLEN,
+ but extra check does not hurt */
+ wcsncpy(argv0_path, tmpbuffer, MAXPATHLEN);
+ else {
+ /* Interpret relative to progpath */
+ reduce(argv0_path);
+ joinpath(argv0_path, tmpbuffer);
+ }
+ linklen = _Py_wreadlink(argv0_path, tmpbuffer, MAXPATHLEN);
+ }
+ }
+#endif /* HAVE_READLINK */
+
+ reduce(argv0_path);
+ /* At this point, argv0_path is guaranteed to be less than
+ MAXPATHLEN bytes long.
+ */
+
+ /* Search for an environment configuration file, first in the
+ executable's directory and then in the parent directory.
+ If found, open it for use when searching for prefixes.
+ */
+
+ {
+ wchar_t tmpbuffer[MAXPATHLEN+1];
+ wchar_t *env_cfg = L"pyvenv.cfg";
+ FILE * env_file = NULL;
+
+ wcscpy(tmpbuffer, argv0_path);
+
+ joinpath(tmpbuffer, env_cfg);
+ env_file = _Py_wfopen(tmpbuffer, L"r");
+ if (env_file == NULL) {
+ errno = 0;
+ reduce(tmpbuffer);
+ reduce(tmpbuffer);
+ joinpath(tmpbuffer, env_cfg);
+ env_file = _Py_wfopen(tmpbuffer, L"r");
+ if (env_file == NULL) {
+ errno = 0;
+ }
+ }
+ if (env_file != NULL) {
+ /* Look for a 'home' variable and set argv0_path to it, if found */
+ if (find_env_config_value(env_file, L"home", tmpbuffer)) {
+ wcscpy(argv0_path, tmpbuffer);
+ }
+ fclose(env_file);
+ env_file = NULL;
+ }
+ }
+ printf("argv0_path = %s, home = %s, _prefix = %s, lib_python=%s",argv0_path, home, _prefix, lib_python);
+
+ pfound = search_for_prefix(argv0_path, home, _prefix, lib_python);
+ if (!pfound) {
+ if (!Py_FrozenFlag)
+ fprintf(stderr,
+ "Could not find platform independent libraries <prefix>\n");
+ wcsncpy(prefix, _prefix, MAXPATHLEN);
+ joinpath(prefix, lib_python);
+ }
+ else
+ reduce(prefix);
+
+ wcsncpy(zip_path, prefix, MAXPATHLEN);
+ zip_path[MAXPATHLEN] = L'\0';
+ if (pfound > 0) { /* Use the reduced prefix returned by Py_GetPrefix() */
+ reduce(zip_path);
+ reduce(zip_path);
+ }
+ else
+ wcsncpy(zip_path, _prefix, MAXPATHLEN);
+ joinpath(zip_path, L"lib/python36.zip");
+ bufsz = wcslen(zip_path); /* Replace "00" with version */
+ zip_path[bufsz - 6] = VERSION[0];
+ zip_path[bufsz - 5] = VERSION[2];
+
+ efound = search_for_exec_prefix(argv0_path, home,
+ _exec_prefix, lib_python);
+ if (!efound) {
+ if (!Py_FrozenFlag)
+ fprintf(stderr,
+ "Could not find platform dependent libraries <exec_prefix>\n");
+ wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
+ joinpath(exec_prefix, L"lib/lib-dynload");
+ }
+ /* If we found EXEC_PREFIX do *not* reduce it! (Yet.) */
+
+ if ((!pfound || !efound) && !Py_FrozenFlag)
+ fprintf(stderr,
+ "Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\n");
+
+ /* Calculate size of return buffer.
+ */
+ bufsz = 0;
+
+ if (_rtpypath && _rtpypath[0] != '\0') {
+ size_t rtpypath_len;
+ rtpypath = Py_DecodeLocale(_rtpypath, &rtpypath_len);
+ if (rtpypath != NULL)
+ bufsz += rtpypath_len + 1;
+ }
+
+ defpath = _pythonpath;
+ prefixsz = wcslen(prefix) + 1;
+ while (1) {
+ wchar_t *delim = wcschr(defpath, DELIM);
+
+ if (defpath[0] != SEP)
+ /* Paths are relative to prefix */
+ bufsz += prefixsz;
+
+ if (delim)
+ bufsz += delim - defpath + 1;
+ else {
+ bufsz += wcslen(defpath) + 1;
+ break;
+ }
+ defpath = delim + 1;
+ }
+
+ bufsz += wcslen(zip_path) + 1;
+ bufsz += wcslen(exec_prefix) + 1;
+
+ buf = PyMem_RawMalloc(bufsz * sizeof(wchar_t));
+ if (buf == NULL) {
+ Py_FatalError(
+ "Not enough memory for dynamic PYTHONPATH");
+ }
+
+ /* Run-time value of $PYTHONPATH goes first */
+ if (rtpypath) {
+ wcscpy(buf, rtpypath);
+ wcscat(buf, delimiter);
+ }
+ else
+ buf[0] = '\0';
+
+ /* Next is the default zip path */
+ wcscat(buf, zip_path);
+ wcscat(buf, delimiter);
+
+ /* Next goes merge of compile-time $PYTHONPATH with
+ * dynamically located prefix.
+ */
+ defpath = _pythonpath;
+ while (1) {
+ wchar_t *delim = wcschr(defpath, DELIM);
+
+ if (defpath[0] != SEP) {
+ wcscat(buf, prefix);
+ if (prefixsz >= 2 && prefix[prefixsz - 2] != SEP &&
+ defpath[0] != (delim ? DELIM : L'\0')) { /* not empty */
+ wcscat(buf, separator);
+ }
+ }
+
+ if (delim) {
+ size_t len = delim - defpath + 1;
+ size_t end = wcslen(buf) + len;
+ wcsncat(buf, defpath, len);
+ *(buf + end) = '\0';
+ }
+ else {
+ wcscat(buf, defpath);
+ break;
+ }
+ defpath = delim + 1;
+ }
+ wcscat(buf, delimiter);
+
+ /* Finally, on goes the directory for dynamic-load modules */
+ wcscat(buf, exec_prefix);
+
+ /* And publish the results */
+ module_search_path = buf;
+
+ /* Reduce prefix and exec_prefix to their essence,
+ * e.g. /usr/local/lib/python1.5 is reduced to /usr/local.
+ * If we're loading relative to the build directory,
+ * return the compiled-in defaults instead.
+ */
+ if (pfound > 0) {
+ reduce(prefix);
+ reduce(prefix);
+ /* The prefix is the root directory, but reduce() chopped
+ * off the "/". */
+ if (!prefix[0])
+ wcscpy(prefix, separator);
+ }
+ else
+ wcsncpy(prefix, _prefix, MAXPATHLEN);
+
+ if (efound > 0) {
+ reduce(exec_prefix);
+ reduce(exec_prefix);
+ reduce(exec_prefix);
+ if (!exec_prefix[0])
+ wcscpy(exec_prefix, separator);
+ }
+ else
+ wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
+
+ PyMem_RawFree(_pythonpath);
+ PyMem_RawFree(_prefix);
+ PyMem_RawFree(_exec_prefix);
+ PyMem_RawFree(lib_python);
+ PyMem_RawFree(rtpypath);
+#endif
+}
+
+
+/* External interface */
+void
+Py_SetPath(const wchar_t *path)
+{
+ if (module_search_path != NULL) {
+ PyMem_RawFree(module_search_path);
+ module_search_path = NULL;
+ }
+ if (path != NULL) {
+ extern wchar_t *Py_GetProgramName(void);
+ wchar_t *prog = Py_GetProgramName();
+ wcsncpy(progpath, prog, MAXPATHLEN);
+ exec_prefix[0] = prefix[0] = L'\0';
+ module_search_path = PyMem_RawMalloc((wcslen(path) + 1) * sizeof(wchar_t));
+ if (module_search_path != NULL)
+ wcscpy(module_search_path, path);
+ }
+}
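+
+/* For example (illustrative), an embedding application can bypass
+   calculate_path() entirely by calling, before Py_Initialize():
+
+       Py_SetPath(L"fs0:/Efi/StdLib/lib/python36.8;"
+                  L"fs0:/Efi/StdLib/lib/python36.8/lib-dynload");
+*/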
+
+wchar_t *
+Py_GetPath(void)
+{
+ if (!module_search_path)
+ calculate_path();
+ return module_search_path;
+}
+
+wchar_t *
+Py_GetPrefix(void)
+{
+ if (!module_search_path)
+ calculate_path();
+ return prefix;
+}
+
+wchar_t *
+Py_GetExecPrefix(void)
+{
+ if (!module_search_path)
+ calculate_path();
+ return exec_prefix;
+}
+
+wchar_t *
+Py_GetProgramFullPath(void)
+{
+ if (!module_search_path)
+ calculate_path();
+ return progpath;
+}
+
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
new file mode 100644
index 00000000..c46c81ca
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
@@ -0,0 +1,878 @@
+/* Python interpreter main program */
+
+#include "Python.h"
+#include "osdefs.h"
+
+#include <locale.h>
+
+#if defined(MS_WINDOWS) || defined(__CYGWIN__)
+#include <windows.h>
+#ifdef HAVE_IO_H
+#include <io.h>
+#endif
+#ifdef HAVE_FCNTL_H
+#include <fcntl.h>
+#endif
+#endif
+
+#if !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
+#ifdef _MSC_VER
+#include <crtdbg.h>
+#endif
+#endif
+
+#if defined(MS_WINDOWS)
+#define PYTHONHOMEHELP "<prefix>\\python{major}{minor}"
+#else
+#define PYTHONHOMEHELP "<prefix>/lib/pythonX.X"
+#endif
+
+#include "pygetopt.h"
+
+#define COPYRIGHT \
+ "Type \"help\", \"copyright\", \"credits\" or \"license\" " \
+ "for more information."
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* For Py_GetArgcArgv(); set by main() */
+static wchar_t **orig_argv;
+static int orig_argc;
+
+/* command line options */
+#define BASE_OPTS L"$bBc:dEhiIJm:OqRsStuvVW:xX:?"
+
+#define PROGRAM_OPTS BASE_OPTS
+
+/* Short usage message (with %s for argv0) */
+static const char usage_line[] =
+"usage: %ls [option] ... [-c cmd | -m mod | file | -] [arg] ...\n";
+
+/* Long usage message, split into parts < 512 bytes */
+static const char usage_1[] = "\
+Options and arguments (and corresponding environment variables):\n\
+-b : issue warnings about str(bytes_instance), str(bytearray_instance)\n\
+ and comparing bytes/bytearray with str. (-bb: issue errors)\n\
+-B : don't write .pyc files on import; also PYTHONDONTWRITEBYTECODE=x\n\
+-c cmd : program passed in as string (terminates option list)\n\
+-d : debug output from parser; also PYTHONDEBUG=x\n\
+-E : ignore PYTHON* environment variables (such as PYTHONPATH)\n\
+-h : print this help message and exit (also --help)\n\
+";
+static const char usage_2[] = "\
+-i : inspect interactively after running script; forces a prompt even\n\
+ if stdin does not appear to be a terminal; also PYTHONINSPECT=x\n\
+-I : isolate Python from the user's environment (implies -E and -s)\n\
+-m mod : run library module as a script (terminates option list)\n\
+-O : remove assert and __debug__-dependent statements; add .opt-1 before\n\
+ .pyc extension; also PYTHONOPTIMIZE=x\n\
+-OO : do -O changes and also discard docstrings; add .opt-2 before\n\
+ .pyc extension\n\
+-q : don't print version and copyright messages on interactive startup\n\
+-s : don't add user site directory to sys.path; also PYTHONNOUSERSITE\n\
+-S : don't imply 'import site' on initialization\n\
+";
+static const char usage_3[] = "\
+-u : force the binary I/O layers of stdout and stderr to be unbuffered;\n\
+ stdin is always buffered; text I/O layer will be line-buffered;\n\
+ also PYTHONUNBUFFERED=x\n\
+-v : verbose (trace import statements); also PYTHONVERBOSE=x\n\
+ can be supplied multiple times to increase verbosity\n\
+-V : print the Python version number and exit (also --version)\n\
+ when given twice, print more information about the build\n\
+-W arg : warning control; arg is action:message:category:module:lineno\n\
+ also PYTHONWARNINGS=arg\n\
+-x : skip first line of source, allowing use of non-Unix forms of #!cmd\n\
+-X opt : set implementation-specific option\n\
+";
+static const char usage_4[] = "\
+file : program read from script file\n\
+- : program read from stdin (default; interactive mode if a tty)\n\
+arg ...: arguments passed to program in sys.argv[1:]\n\n\
+Other environment variables:\n\
+PYTHONSTARTUP: file executed on interactive startup (no default)\n\
+PYTHONPATH : '%lc'-separated list of directories prefixed to the\n\
+ default module search path. The result is sys.path.\n\
+";
+static const char usage_5[] =
+"PYTHONHOME : alternate <prefix> directory (or <prefix>%lc<exec_prefix>).\n"
+" The default module search path uses %s.\n"
+"PYTHONCASEOK : ignore case in 'import' statements (UEFI Default).\n"
+"PYTHONIOENCODING: Encoding[:errors] used for stdin/stdout/stderr.\n"
+"PYTHONFAULTHANDLER: dump the Python traceback on fatal errors.\n";
+static const char usage_6[] =
+"PYTHONMALLOC: set the Python memory allocators and/or install debug hooks\n"
+" on Python memory allocators. Use PYTHONMALLOC=debug to install debug\n"
+" hooks.\n";
+
+static int
+usage(int exitcode, const wchar_t* program)
+{
+ FILE *f = exitcode ? stderr : stdout;
+
+ fprintf(f, usage_line, program);
+ if (exitcode)
+ fprintf(f, "Try `python -h' for more information.\n");
+ else {
+ fputs(usage_1, f);
+ fputs(usage_2, f);
+ fputs(usage_3, f);
+ fprintf(f, usage_4, (wint_t)DELIM);
+ fprintf(f, usage_5, (wint_t)DELIM, PYTHONHOMEHELP);
+ //fputs(usage_6, f);
+ }
+ return exitcode;
+}
+
+static void RunStartupFile(PyCompilerFlags *cf)
+{
+ char *startup = Py_GETENV("PYTHONSTARTUP");
+ if (startup != NULL && startup[0] != '\0') {
+ FILE *fp = _Py_fopen(startup, "r");
+ if (fp != NULL) {
+ (void) PyRun_SimpleFileExFlags(fp, startup, 0, cf);
+ PyErr_Clear();
+ fclose(fp);
+ } else {
+ int save_errno;
+
+ save_errno = errno;
+ PySys_WriteStderr("Could not open PYTHONSTARTUP\n");
+ errno = save_errno;
+ PyErr_SetFromErrnoWithFilename(PyExc_IOError,
+ startup);
+ PyErr_Print();
+ PyErr_Clear();
+ }
+ }
+}
+
+static void RunInteractiveHook(void)
+{
+ PyObject *sys, *hook, *result;
+ sys = PyImport_ImportModule("sys");
+ if (sys == NULL)
+ goto error;
+ hook = PyObject_GetAttrString(sys, "__interactivehook__");
+ Py_DECREF(sys);
+ if (hook == NULL)
+ PyErr_Clear();
+ else {
+ result = PyObject_CallObject(hook, NULL);
+ Py_DECREF(hook);
+ if (result == NULL)
+ goto error;
+ else
+ Py_DECREF(result);
+ }
+ return;
+
+error:
+ PySys_WriteStderr("Failed calling sys.__interactivehook__\n");
+ PyErr_Print();
+ PyErr_Clear();
+}
+
+
+static int RunModule(wchar_t *modname, int set_argv0)
+{
+ PyObject *module, *runpy, *runmodule, *runargs, *result;
+ runpy = PyImport_ImportModule("runpy");
+ if (runpy == NULL) {
+ fprintf(stderr, "Could not import runpy module\n");
+ PyErr_Print();
+ return -1;
+ }
+ runmodule = PyObject_GetAttrString(runpy, "_run_module_as_main");
+ if (runmodule == NULL) {
+ fprintf(stderr, "Could not access runpy._run_module_as_main\n");
+ PyErr_Print();
+ Py_DECREF(runpy);
+ return -1;
+ }
+ module = PyUnicode_FromWideChar(modname, wcslen(modname));
+ if (module == NULL) {
+ fprintf(stderr, "Could not convert module name to unicode\n");
+ PyErr_Print();
+ Py_DECREF(runpy);
+ Py_DECREF(runmodule);
+ return -1;
+ }
+ runargs = Py_BuildValue("(Oi)", module, set_argv0);
+ if (runargs == NULL) {
+ fprintf(stderr,
+ "Could not create arguments for runpy._run_module_as_main\n");
+ PyErr_Print();
+ Py_DECREF(runpy);
+ Py_DECREF(runmodule);
+ Py_DECREF(module);
+ return -1;
+ }
+ result = PyObject_Call(runmodule, runargs, NULL);
+ if (result == NULL) {
+ PyErr_Print();
+ }
+ Py_DECREF(runpy);
+ Py_DECREF(runmodule);
+ Py_DECREF(module);
+ Py_DECREF(runargs);
+ if (result == NULL) {
+ return -1;
+ }
+ Py_DECREF(result);
+ return 0;
+}
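+
+/* For example (illustrative), "python.efi -m pydoc str" reaches this point
+   with modname == L"pydoc"; runpy._run_module_as_main then locates the
+   module and runs it as __main__. */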
+
+static PyObject *
+AsImportPathEntry(wchar_t *filename)
+{
+ PyObject *sys_path0 = NULL, *importer;
+
+ sys_path0 = PyUnicode_FromWideChar(filename, wcslen(filename));
+ if (sys_path0 == NULL)
+ goto error;
+
+ importer = PyImport_GetImporter(sys_path0);
+ if (importer == NULL)
+ goto error;
+
+ if (importer == Py_None) {
+ Py_DECREF(sys_path0);
+ Py_DECREF(importer);
+ return NULL;
+ }
+ Py_DECREF(importer);
+ return sys_path0;
+
+error:
+ Py_XDECREF(sys_path0);
+ PySys_WriteStderr("Failed checking if argv[0] is an import path entry\n");
+ PyErr_Print();
+ PyErr_Clear();
+ return NULL;
+}
+
+
+static int
+RunMainFromImporter(PyObject *sys_path0)
+{
+ PyObject *sys_path;
+ int sts;
+
+ /* Assume sys_path0 has already been checked by AsImportPathEntry,
+ * so put it in sys.path[0] and import __main__ */
+ sys_path = PySys_GetObject("path");
+ if (sys_path == NULL) {
+ PyErr_SetString(PyExc_RuntimeError, "unable to get sys.path");
+ goto error;
+ }
+ sts = PyList_Insert(sys_path, 0, sys_path0);
+ if (sts) {
+ sys_path0 = NULL;
+ goto error;
+ }
+
+ sts = RunModule(L"__main__", 0);
+ return sts != 0;
+
+error:
+ Py_XDECREF(sys_path0);
+ PyErr_Print();
+ return 1;
+}
+
+static int
+run_command(wchar_t *command, PyCompilerFlags *cf)
+{
+ PyObject *unicode, *bytes;
+ int ret;
+
+ unicode = PyUnicode_FromWideChar(command, -1);
+ if (unicode == NULL)
+ goto error;
+ bytes = PyUnicode_AsUTF8String(unicode);
+ Py_DECREF(unicode);
+ if (bytes == NULL)
+ goto error;
+ ret = PyRun_SimpleStringFlags(PyBytes_AsString(bytes), cf);
+ Py_DECREF(bytes);
+ return ret != 0;
+
+error:
+ PySys_WriteStderr("Unable to decode the command from the command line:\n");
+ PyErr_Print();
+ return 1;
+}
+
+static int
+run_file(FILE *fp, const wchar_t *filename, PyCompilerFlags *p_cf)
+{
+ PyObject *unicode, *bytes = NULL;
+ char *filename_str;
+ int run;
+ /* call pending calls like signal handlers (SIGINT) */
+ if (Py_MakePendingCalls() == -1) {
+ PyErr_Print();
+ return 1;
+ }
+
+ if (filename) {
+ unicode = PyUnicode_FromWideChar(filename, wcslen(filename));
+ if (unicode != NULL) {
+ bytes = PyUnicode_EncodeFSDefault(unicode);
+ Py_DECREF(unicode);
+ }
+ if (bytes != NULL)
+ filename_str = PyBytes_AsString(bytes);
+ else {
+ PyErr_Clear();
+ filename_str = "<encoding error>";
+ }
+ }
+ else
+ filename_str = "<stdin>";
+
+ run = PyRun_AnyFileExFlags(fp, filename_str, filename != NULL, p_cf);
+ Py_XDECREF(bytes);
+ return run != 0;
+}
+
+
+/* Main program */
+
+int
+Py_Main(int argc, wchar_t **argv)
+{
+ int c;
+ int sts;
+ wchar_t *command = NULL;
+ wchar_t *filename = NULL;
+ wchar_t *module = NULL;
+ FILE *fp = stdin;
+ char *p;
+#ifdef MS_WINDOWS
+ wchar_t *wp;
+#endif
+ int skipfirstline = 0;
+ int stdin_is_interactive = 0;
+ int help = 0;
+ int version = 0;
+ int saw_unbuffered_flag = 0;
+ int saw_pound_flag = 0;
+ char *opt;
+ PyCompilerFlags cf;
+ PyObject *main_importer_path = NULL;
+ PyObject *warning_option = NULL;
+ PyObject *warning_options = NULL;
+
+ cf.cf_flags = 0;
+
+ orig_argc = argc; /* For Py_GetArgcArgv() */
+ orig_argv = argv;
+
+ /* Hash randomization needed early for all string operations
+ (including -W and -X options). */
+ _PyOS_opterr = 0; /* prevent printing the error in 1st pass */
+ while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
+ if (c == 'm' || c == 'c') {
+ /* -c / -m is the last option: following arguments are
+ not interpreter options. */
+ break;
+ }
+ if (c == 'E') {
+ Py_IgnoreEnvironmentFlag++;
+ break;
+ }
+ }
+
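+ /* Reopen stderr as an alias of the "stdout:" console device;
+ saw_pound_flag serves as a one-shot guard for this redirection. */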
+ if (saw_pound_flag == 0) {
+ if (freopen("stdout:", "w", stderr) == NULL) {
+ puts("ERROR: Unable to reopen stderr as an alias to stdout!");
+ }
+ saw_pound_flag = 0xFF;
+ }
+
+#if 0
+ opt = Py_GETENV("PYTHONMALLOC");
+ if (_PyMem_SetupAllocators(opt) < 0) {
+ fprintf(stderr,
+ "Error in PYTHONMALLOC: unknown allocator \"%s\"!\n", opt);
+ exit(1);
+ }
+#endif
+ _PyRandom_Init();
+
+ PySys_ResetWarnOptions();
+ _PyOS_ResetGetOpt();
+
+
+ while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
+ if (c == 'c') {
+ size_t len;
+ /* -c is the last option; following arguments
+ that look like options are left for the
+ command to interpret. */
+
+ len = wcslen(_PyOS_optarg) + 1 + 1;
+ command = (wchar_t *)PyMem_RawMalloc(sizeof(wchar_t) * len);
+ if (command == NULL)
+ Py_FatalError(
+ "not enough memory to copy -c argument");
+ wcscpy(command, _PyOS_optarg);
+ command[len - 2] = '\n';
+ command[len - 1] = 0;
+ break;
+ }
+
+ if (c == 'm') {
+ /* -m is the last option; following arguments
+ that look like options are left for the
+ module to interpret. */
+ module = _PyOS_optarg;
+ break;
+ }
+
+ switch (c) {
+ case 'b':
+ Py_BytesWarningFlag++;
+ break;
+
+ case 'd':
+ Py_DebugFlag++;
+ break;
+
+ case 'i':
+ Py_InspectFlag++;
+ Py_InteractiveFlag++;
+ break;
+
+ case 'I':
+ Py_IsolatedFlag++;
+ Py_NoUserSiteDirectory++;
+ Py_IgnoreEnvironmentFlag++;
+ break;
+
+ /* case 'J': reserved for Jython */
+
+ case 'O':
+ Py_OptimizeFlag++;
+ break;
+
+ case 'B':
+ Py_DontWriteBytecodeFlag++;
+ break;
+
+ case 's':
+ Py_NoUserSiteDirectory++;
+ break;
+
+ case 'S':
+ Py_NoSiteFlag++;
+ break;
+
+ case 'E':
+ /* Already handled above */
+ break;
+
+ case 't':
+ /* ignored for backwards compatibility */
+ break;
+
+ case 'u':
+ Py_UnbufferedStdioFlag = 1;
+ saw_unbuffered_flag = 1;
+ break;
+
+ case 'v':
+ Py_VerboseFlag++;
+ break;
+
+ case 'x':
+ skipfirstline = 1;
+ break;
+
+ case 'h':
+ case '?':
+ help++;
+ break;
+
+ case 'V':
+ version++;
+ break;
+
+ case 'W':
+ if (warning_options == NULL)
+ warning_options = PyList_New(0);
+ if (warning_options == NULL)
+ Py_FatalError("failure in handling of -W argument");
+ warning_option = PyUnicode_FromWideChar(_PyOS_optarg, -1);
+ if (warning_option == NULL)
+ Py_FatalError("failure in handling of -W argument");
+ if (PyList_Append(warning_options, warning_option) == -1)
+ Py_FatalError("failure in handling of -W argument");
+ Py_DECREF(warning_option);
+ break;
+
+ case 'X':
+ PySys_AddXOption(_PyOS_optarg);
+ break;
+
+ case 'q':
+ Py_QuietFlag++;
+ break;
+
+ case '$':
+ /* Ignored */
+ break;
+
+ case 'R':
+ /* Ignored */
+ break;
+
+ /* This space reserved for other options */
+
+ default:
+ return usage(2, argv[0]);
+ /*NOTREACHED*/
+
+ }
+ }
+
+ if (help)
+ return usage(0, argv[0]);
+
+ if (version) {
+ printf("Python %s\n", version >= 2 ? Py_GetVersion() : PY_VERSION);
+ return 0;
+ }
+
+ if (!Py_InspectFlag &&
+ (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
+ Py_InspectFlag = 1;
+ if (!saw_unbuffered_flag &&
+ (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0')
+ Py_UnbufferedStdioFlag = 1;
+
+ if (!Py_NoUserSiteDirectory &&
+ (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0')
+ Py_NoUserSiteDirectory = 1;
+
+#ifdef MS_WINDOWS
+ if (!Py_IgnoreEnvironmentFlag && (wp = _wgetenv(L"PYTHONWARNINGS")) &&
+ *wp != L'\0') {
+ wchar_t *buf, *warning, *context = NULL;
+
+ buf = (wchar_t *)PyMem_RawMalloc((wcslen(wp) + 1) * sizeof(wchar_t));
+ if (buf == NULL)
+ Py_FatalError(
+ "not enough memory to copy PYTHONWARNINGS");
+ wcscpy(buf, wp);
+ for (warning = wcstok_s(buf, L",", &context);
+ warning != NULL;
+ warning = wcstok_s(NULL, L",", &context)) {
+ PySys_AddWarnOption(warning);
+ }
+ PyMem_RawFree(buf);
+ }
+#else
+ if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') {
+ char *buf, *oldloc;
+ PyObject *unicode;
+
+ /* settle for strtok here as there's no one standard
+ C89 wcstok */
+ buf = (char *)PyMem_RawMalloc(strlen(p) + 1);
+ if (buf == NULL)
+ Py_FatalError(
+ "not enough memory to copy PYTHONWARNINGS");
+ strcpy(buf, p);
+ oldloc = _PyMem_RawStrdup(setlocale(LC_ALL, NULL));
+ setlocale(LC_ALL, "");
+ for (p = strtok(buf, ","); p != NULL; p = strtok(NULL, ",")) {
+#ifdef __APPLE__
+ /* Use utf-8 on Mac OS X */
+ unicode = PyUnicode_FromString(p);
+#else
+ unicode = PyUnicode_DecodeLocale(p, "surrogateescape");
+#endif
+ if (unicode == NULL) {
+ /* ignore errors */
+ PyErr_Clear();
+ continue;
+ }
+ PySys_AddWarnOptionUnicode(unicode);
+ Py_DECREF(unicode);
+ }
+ setlocale(LC_ALL, oldloc);
+ PyMem_RawFree(oldloc);
+ PyMem_RawFree(buf);
+ }
+#endif
+ if (warning_options != NULL) {
+ Py_ssize_t i;
+ for (i = 0; i < PyList_GET_SIZE(warning_options); i++) {
+ PySys_AddWarnOptionUnicode(PyList_GET_ITEM(warning_options, i));
+ }
+ }
+
+ if (command == NULL && module == NULL && _PyOS_optind < argc &&
+ wcscmp(argv[_PyOS_optind], L"-") != 0)
+ {
+ filename = argv[_PyOS_optind];
+ }
+
+ stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0);
+#if defined(MS_WINDOWS) || defined(__CYGWIN__)
+ /* don't translate newlines (\r\n <=> \n) */
+ _setmode(fileno(stdin), O_BINARY);
+ _setmode(fileno(stdout), O_BINARY);
+ _setmode(fileno(stderr), O_BINARY);
+#endif
+
+ if (Py_UnbufferedStdioFlag) {
+#ifdef HAVE_SETVBUF
+ setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ);
+ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
+ setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ);
+#else /* !HAVE_SETVBUF */
+ setbuf(stdin, (char *)NULL);
+ setbuf(stdout, (char *)NULL);
+ setbuf(stderr, (char *)NULL);
+#endif /* !HAVE_SETVBUF */
+ }
+ else if (Py_InteractiveFlag) {
+#ifdef MS_WINDOWS
+ /* Doesn't have to be line-buffered -- use unbuffered */
+ /* Any set[v]buf(stdin, ...) screws up Tkinter :-( */
+ setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
+#else /* !MS_WINDOWS */
+#ifdef HAVE_SETVBUF
+ setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ);
+ setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ);
+#endif /* HAVE_SETVBUF */
+#endif /* !MS_WINDOWS */
+ /* Leave stderr alone - it should be unbuffered anyway. */
+ }
+
+#ifdef __APPLE__
+ /* On MacOS X, when the Python interpreter is embedded in an
+ application bundle, it gets executed by a bootstrapping script
+ that does os.execve() with an argv[0] that's different from the
+ actual Python executable. This is needed to keep the Finder happy,
+ or rather, to work around Apple's overly strict requirements of
+ the process name. However, we still need a usable sys.executable,
+ so the actual executable path is passed in an environment variable.
+ See Lib/plat-mac/bundlebuilder.py for details about the bootstrap
+ script. */
+ if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') {
+ wchar_t* buffer;
+ size_t len = strlen(p) + 1;
+
+ buffer = PyMem_RawMalloc(len * sizeof(wchar_t));
+ if (buffer == NULL) {
+ Py_FatalError(
+ "not enough memory to copy PYTHONEXECUTABLE");
+ }
+
+ mbstowcs(buffer, p, len);
+ Py_SetProgramName(buffer);
+ /* buffer is now handed off - do not free */
+ } else {
+#ifdef WITH_NEXT_FRAMEWORK
+ char* pyvenv_launcher = getenv("__PYVENV_LAUNCHER__");
+
+ if (pyvenv_launcher && *pyvenv_launcher) {
+ /* Used by Mac/Tools/pythonw.c to forward
+ * the argv0 of the stub executable
+ */
+ wchar_t* wbuf = Py_DecodeLocale(pyvenv_launcher, NULL);
+
+ if (wbuf == NULL) {
+ Py_FatalError("Cannot decode __PYVENV_LAUNCHER__");
+ }
+ Py_SetProgramName(wbuf);
+
+ /* Don't free wbuf, the argument to Py_SetProgramName
+ * must remain valid until Py_FinalizeEx is called.
+ */
+ } else {
+ Py_SetProgramName(argv[0]);
+ }
+#else
+ Py_SetProgramName(argv[0]);
+#endif
+ }
+#else
+ Py_SetProgramName(argv[0]);
+#endif
+ Py_Initialize();
+ Py_XDECREF(warning_options);
+
+ if (!Py_QuietFlag && (Py_VerboseFlag ||
+ (command == NULL && filename == NULL &&
+ module == NULL && stdin_is_interactive))) {
+ fprintf(stderr, "Python %s on %s\n",
+ Py_GetVersion(), Py_GetPlatform());
+ if (!Py_NoSiteFlag)
+ fprintf(stderr, "%s\n", COPYRIGHT);
+ }
+
+ if (command != NULL) {
+ /* Backup _PyOS_optind and force sys.argv[0] = '-c' */
+ _PyOS_optind--;
+ argv[_PyOS_optind] = L"-c";
+ }
+
+ if (module != NULL) {
+ /* Backup _PyOS_optind and force sys.argv[0] = '-m'*/
+ _PyOS_optind--;
+ argv[_PyOS_optind] = L"-m";
+ }
+
+ if (filename != NULL) {
+ main_importer_path = AsImportPathEntry(filename);
+ }
+
+ if (main_importer_path != NULL) {
+ /* Let RunMainFromImporter adjust sys.path[0] later */
+ PySys_SetArgvEx(argc-_PyOS_optind, argv+_PyOS_optind, 0);
+ } else {
+ /* Use config settings to decide whether or not to update sys.path[0] */
+ PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind);
+ }
+
+ if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) &&
+ isatty(fileno(stdin)) &&
+ !Py_IsolatedFlag) {
+ PyObject *v;
+ v = PyImport_ImportModule("readline");
+ if (v == NULL)
+ PyErr_Clear();
+ else
+ Py_DECREF(v);
+ }
+
+ if (command) {
+ sts = run_command(command, &cf);
+ PyMem_RawFree(command);
+ } else if (module) {
+ sts = (RunModule(module, 1) != 0);
+ }
+ else {
+
+ if (filename == NULL && stdin_is_interactive) {
+ Py_InspectFlag = 0; /* do exit on SystemExit */
+ RunStartupFile(&cf);
+ RunInteractiveHook();
+ }
+ /* XXX */
+
+ sts = -1; /* keep track of whether we've already run __main__ */
+
+ if (main_importer_path != NULL) {
+ sts = RunMainFromImporter(main_importer_path);
+ }
+
+ if (sts==-1 && filename != NULL) {
+ fp = _Py_wfopen(filename, L"r");
+ if (fp == NULL) {
+ char *cfilename_buffer;
+ const char *cfilename;
+ int err = errno;
+ cfilename_buffer = Py_EncodeLocale(filename, NULL);
+ if (cfilename_buffer != NULL)
+ cfilename = cfilename_buffer;
+ else
+ cfilename = "<unprintable file name>";
+ fprintf(stderr, "%ls: can't open file '%s': [Errno %d] %s\n",
+ argv[0], cfilename, err, strerror(err));
+ if (cfilename_buffer)
+ PyMem_Free(cfilename_buffer);
+ return 2;
+ }
+ else if (skipfirstline) {
+ int ch;
+ /* Push back first newline so line numbers
+ remain the same */
+ while ((ch = getc(fp)) != EOF) {
+ if (ch == '\n') {
+ (void)ungetc(ch, fp);
+ break;
+ }
+ }
+ }
+ {
+ struct _Py_stat_struct sb;
+ if (_Py_fstat_noraise(fileno(fp), &sb) == 0 &&
+ S_ISDIR(sb.st_mode)) {
+ fprintf(stderr,
+ "%ls: '%ls' is a directory, cannot continue\n",
+ argv[0], filename);
+ fclose(fp);
+ return 1;
+ }
+ }
+ }
+
+ if (sts == -1)
+ sts = run_file(fp, filename, &cf);
+ }
+
+ /* Check this environment variable at the end, to give programs the
+ * opportunity to set it from Python.
+ */
+ if (!Py_InspectFlag &&
+ (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
+ {
+ Py_InspectFlag = 1;
+ }
+
+ if (Py_InspectFlag && stdin_is_interactive &&
+ (filename != NULL || command != NULL || module != NULL)) {
+ Py_InspectFlag = 0;
+ RunInteractiveHook();
+ /* XXX */
+ sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0;
+ }
+
+ if (Py_FinalizeEx() < 0) {
+ /* Value unlikely to be confused with a non-error exit status or
+ other special meaning */
+ sts = 120;
+ }
+
+#ifdef __INSURE__
+ /* Insure++ is a memory analysis tool that aids in discovering
+ * memory leaks and other memory problems. On Python exit, the
+ * interned string dictionaries are flagged as being in use at exit
+ * (which it is). Under normal circumstances, this is fine because
+ * the memory will be automatically reclaimed by the system. Under
+ * memory debugging, it's a huge source of useless noise, so we
+ * trade off slower shutdown for less distraction in the memory
+ * reports. -baw
+ */
+ _Py_ReleaseInternedUnicodeStrings();
+#endif /* __INSURE__ */
+
+ return sts;
+}
+
+/* this is gonna seem *real weird*, but if you put some other code between
+ Py_Main() and Py_GetArgcArgv() you will need to adjust the test in the
+ while statement in Misc/gdbinit:ppystack */
+
+/* Make the *original* argc/argv available to other modules.
+ This is rare, but it is needed by the secureware extension. */
+
+void
+Py_GetArgcArgv(int *argc, wchar_t ***argv)
+{
+ *argc = orig_argc;
+ *argv = orig_argv;
+}
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
new file mode 100644
index 00000000..7072f5ee
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
@@ -0,0 +1,2638 @@
+/* select - Module containing unix select(2) call.
+ Under Unix, the file descriptors are small integers.
+ Under Win32, select only exists for sockets, and sockets may
+ have any value except INVALID_SOCKET.
+*/
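+
+/* A minimal sketch of the Python-level API this file implements (illustrative
+ * only; the listening socket below is just a placeholder workload):
+ *
+ *   import select, socket
+ *   srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ *   srv.bind(("127.0.0.1", 0))
+ *   srv.listen(1)
+ *   # Wait up to 5 seconds for the socket to become readable (a pending
+ *   # connection); empty lists are returned on timeout.
+ *   readable, writable, errored = select.select([srv], [], [], 5.0)
+ */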
+
+#if defined(HAVE_POLL_H) && !defined(_GNU_SOURCE)
+#define _GNU_SOURCE
+#endif
+
+#include "Python.h"
+#include <structmember.h>
+
+#ifdef HAVE_SYS_DEVPOLL_H
+#include <sys/resource.h>
+#include <sys/devpoll.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#endif
+
+#ifdef __APPLE__
+ /* Perform runtime testing for a broken poll on OSX to make it easier
+ * to use the same binary on multiple releases of the OS.
+ */
+#undef HAVE_BROKEN_POLL
+#endif
+
+/* Windows #defines FD_SETSIZE to 64 if FD_SETSIZE isn't already defined.
+ 64 is too small (too many people have bumped into that limit).
+ Here we boost it.
+ Users who want even more than the boosted limit should #define
+ FD_SETSIZE higher before this; e.g., via compiler /D switch.
+*/
+#if defined(MS_WINDOWS) && !defined(FD_SETSIZE)
+#define FD_SETSIZE 512
+#endif
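+
+/* For example, building with /DFD_SETSIZE=1024 (MSVC) or -DFD_SETSIZE=1024
+ * (GCC/Clang) raises the limit before the 512 default above is applied. */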
+
+#if defined(HAVE_POLL_H)
+#include <poll.h>
+#elif defined(HAVE_SYS_POLL_H)
+#include <sys/poll.h>
+#endif
+
+#ifdef __sgi
+/* This is missing from unistd.h */
+extern void bzero(void *, int);
+#endif
+
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#ifdef MS_WINDOWS
+# define WIN32_LEAN_AND_MEAN
+# include <winsock.h>
+#else
+# define SOCKET int
+#endif
+
+/* list of Python objects and their file descriptor */
+typedef struct {
+ PyObject *obj; /* owned reference */
+ SOCKET fd;
+ int sentinel; /* -1 == sentinel */
+} pylist;
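+
+/* Sketch of the array layout produced by seq2set() below for two registered
+ * objects; only the sentinel field of the terminating entry is significant:
+ *
+ *   fd2obj[0] = { obj0, fd0, sentinel = 0 }
+ *   fd2obj[1] = { obj1, fd1, sentinel = 0 }
+ *   fd2obj[2] = { ...,       sentinel = -1 }   (terminator)
+ */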
+
+static void
+reap_obj(pylist fd2obj[FD_SETSIZE + 1])
+{
+ unsigned int i;
+ for (i = 0; i < (unsigned int)FD_SETSIZE + 1 && fd2obj[i].sentinel >= 0; i++) {
+ Py_CLEAR(fd2obj[i].obj);
+ }
+ fd2obj[0].sentinel = -1;
+}
+
+
+/* returns -1 and sets the Python exception if an error occurred, otherwise
+ returns a number >= 0
+*/
+static int
+seq2set(PyObject *seq, fd_set *set, pylist fd2obj[FD_SETSIZE + 1])
+{
+ int max = -1;
+ unsigned int index = 0;
+ Py_ssize_t i;
+ PyObject* fast_seq = NULL;
+ PyObject* o = NULL;
+
+ fd2obj[0].obj = (PyObject*)0; /* set list to zero size */
+ FD_ZERO(set);
+
+ fast_seq = PySequence_Fast(seq, "arguments 1-3 must be sequences");
+ if (!fast_seq)
+ return -1;
+
+ for (i = 0; i < PySequence_Fast_GET_SIZE(fast_seq); i++) {
+ SOCKET v;
+
+ /* any intervening fileno() calls could decr this refcnt */
+ if (!(o = PySequence_Fast_GET_ITEM(fast_seq, i)))
+ goto finally;
+
+ Py_INCREF(o);
+ v = PyObject_AsFileDescriptor( o );
+ if (v == -1) goto finally;
+
+#if defined(_MSC_VER) && !defined(UEFI_C_SOURCE)
+ max = 0; /* not used for Win32 */
+#else /* !_MSC_VER */
+ if (!_PyIsSelectable_fd(v)) {
+ PyErr_SetString(PyExc_ValueError,
+ "filedescriptor out of range in select()");
+ goto finally;
+ }
+ if (v > max)
+ max = v;
+#endif /* _MSC_VER */
+ FD_SET(v, set);
+
+ /* add object and its file descriptor to the list */
+ if (index >= (unsigned int)FD_SETSIZE) {
+ PyErr_SetString(PyExc_ValueError,
+ "too many file descriptors in select()");
+ goto finally;
+ }
+ fd2obj[index].obj = o;
+ fd2obj[index].fd = v;
+ fd2obj[index].sentinel = 0;
+ fd2obj[++index].sentinel = -1;
+ }
+ Py_DECREF(fast_seq);
+ return max+1;
+
+ finally:
+ Py_XDECREF(o);
+ Py_DECREF(fast_seq);
+ return -1;
+}
+
+/* returns NULL and sets the Python exception if an error occurred */
+static PyObject *
+set2list(fd_set *set, pylist fd2obj[FD_SETSIZE + 1])
+{
+ int i, j, count=0;
+ PyObject *list, *o;
+ SOCKET fd;
+
+ for (j = 0; fd2obj[j].sentinel >= 0; j++) {
+ if (FD_ISSET(fd2obj[j].fd, set))
+ count++;
+ }
+ list = PyList_New(count);
+ if (!list)
+ return NULL;
+
+ i = 0;
+ for (j = 0; fd2obj[j].sentinel >= 0; j++) {
+ fd = fd2obj[j].fd;
+ if (FD_ISSET(fd, set)) {
+ o = fd2obj[j].obj;
+ fd2obj[j].obj = NULL;
+ /* transfer ownership */
+ if (PyList_SetItem(list, i, o) < 0)
+ goto finally;
+
+ i++;
+ }
+ }
+ return list;
+ finally:
+ Py_DECREF(list);
+ return NULL;
+}
+
+#undef SELECT_USES_HEAP
+#if FD_SETSIZE > 1024
+#define SELECT_USES_HEAP
+#endif /* FD_SETSIZE > 1024 */
+
+static PyObject *
+select_select(PyObject *self, PyObject *args)
+{
+#ifdef SELECT_USES_HEAP
+ pylist *rfd2obj, *wfd2obj, *efd2obj;
+#else /* !SELECT_USES_HEAP */
+ /* XXX: All this should probably be implemented as follows:
+ * - find the highest descriptor we're interested in
+ * - add one
+ * - that's the size
+ * See: Stevens, APitUE, $12.5.1
+ */
+ pylist rfd2obj[FD_SETSIZE + 1];
+ pylist wfd2obj[FD_SETSIZE + 1];
+ pylist efd2obj[FD_SETSIZE + 1];
+#endif /* SELECT_USES_HEAP */
+ PyObject *ifdlist, *ofdlist, *efdlist;
+ PyObject *ret = NULL;
+ PyObject *timeout_obj = Py_None;
+ fd_set ifdset, ofdset, efdset;
+ struct timeval tv, *tvp;
+ int imax, omax, emax, max;
+ int n;
+ _PyTime_t timeout, deadline = 0;
+
+ /* convert arguments */
+ if (!PyArg_UnpackTuple(args, "select", 3, 4,
+ &ifdlist, &ofdlist, &efdlist, &timeout_obj))
+ return NULL;
+
+ if (timeout_obj == Py_None)
+ tvp = (struct timeval *)NULL;
+ else {
+ if (_PyTime_FromSecondsObject(&timeout, timeout_obj,
+ _PyTime_ROUND_TIMEOUT) < 0) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) {
+ PyErr_SetString(PyExc_TypeError,
+ "timeout must be a float or None");
+ }
+ return NULL;
+ }
+
+ if (_PyTime_AsTimeval(timeout, &tv, _PyTime_ROUND_TIMEOUT) == -1)
+ return NULL;
+ if (tv.tv_sec < 0) {
+ PyErr_SetString(PyExc_ValueError, "timeout must be non-negative");
+ return NULL;
+ }
+ tvp = &tv;
+ }
+
+#ifdef SELECT_USES_HEAP
+ /* Allocate memory for the lists */
+ rfd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
+ wfd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
+ efd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
+ if (rfd2obj == NULL || wfd2obj == NULL || efd2obj == NULL) {
+ if (rfd2obj) PyMem_DEL(rfd2obj);
+ if (wfd2obj) PyMem_DEL(wfd2obj);
+ if (efd2obj) PyMem_DEL(efd2obj);
+ return PyErr_NoMemory();
+ }
+#endif /* SELECT_USES_HEAP */
+
+ /* Convert sequences to fd_sets, and get maximum fd number
+ * propagates the Python exception set in seq2set()
+ */
+ rfd2obj[0].sentinel = -1;
+ wfd2obj[0].sentinel = -1;
+ efd2obj[0].sentinel = -1;
+ if ((imax=seq2set(ifdlist, &ifdset, rfd2obj)) < 0)
+ goto finally;
+ if ((omax=seq2set(ofdlist, &ofdset, wfd2obj)) < 0)
+ goto finally;
+ if ((emax=seq2set(efdlist, &efdset, efd2obj)) < 0)
+ goto finally;
+
+ max = imax;
+ if (omax > max) max = omax;
+ if (emax > max) max = emax;
+
+ if (tvp)
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+ n = select(max, &ifdset, &ofdset, &efdset, tvp);
+ Py_END_ALLOW_THREADS
+
+ if (errno != EINTR)
+ break;
+
+ /* select() was interrupted by a signal */
+ if (PyErr_CheckSignals())
+ goto finally;
+
+ if (tvp) {
+ timeout = deadline - _PyTime_GetMonotonicClock();
+ if (timeout < 0) {
+ /* bpo-35310: lists were unmodified -- clear them explicitly */
+ FD_ZERO(&ifdset);
+ FD_ZERO(&ofdset);
+ FD_ZERO(&efdset);
+ n = 0;
+ break;
+ }
+ _PyTime_AsTimeval_noraise(timeout, &tv, _PyTime_ROUND_CEILING);
+ /* retry select() with the recomputed timeout */
+ }
+ } while (1);
+
+#ifdef MS_WINDOWS
+ if (n == SOCKET_ERROR) {
+ PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
+ }
+#else
+ if (n < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ }
+#endif
+ else {
+ /* any of these three calls can raise an exception. it's more
+ convenient to test for this after all three calls... but
+ is that acceptable?
+ */
+ ifdlist = set2list(&ifdset, rfd2obj);
+ ofdlist = set2list(&ofdset, wfd2obj);
+ efdlist = set2list(&efdset, efd2obj);
+ if (PyErr_Occurred())
+ ret = NULL;
+ else
+ ret = PyTuple_Pack(3, ifdlist, ofdlist, efdlist);
+
+ Py_XDECREF(ifdlist);
+ Py_XDECREF(ofdlist);
+ Py_XDECREF(efdlist);
+ }
+
+ finally:
+ reap_obj(rfd2obj);
+ reap_obj(wfd2obj);
+ reap_obj(efd2obj);
+#ifdef SELECT_USES_HEAP
+ PyMem_DEL(rfd2obj);
+ PyMem_DEL(wfd2obj);
+ PyMem_DEL(efd2obj);
+#endif /* SELECT_USES_HEAP */
+ return ret;
+}
+
+#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
+/*
+ * poll() support
+ */
+
+typedef struct {
+ PyObject_HEAD
+ PyObject *dict;
+ int ufd_uptodate;
+ int ufd_len;
+ struct pollfd *ufds;
+ int poll_running;
+} pollObject;
+
+static PyTypeObject poll_Type;
+
+/* Update the malloc'ed array of pollfds to match the dictionary
+ contained within a pollObject. Return 1 on success, 0 on an error.
+*/
+
+static int
+update_ufd_array(pollObject *self)
+{
+ Py_ssize_t i, pos;
+ PyObject *key, *value;
+ struct pollfd *old_ufds = self->ufds;
+
+ self->ufd_len = PyDict_Size(self->dict);
+ PyMem_RESIZE(self->ufds, struct pollfd, self->ufd_len);
+ if (self->ufds == NULL) {
+ self->ufds = old_ufds;
+ PyErr_NoMemory();
+ return 0;
+ }
+
+ i = pos = 0;
+ while (PyDict_Next(self->dict, &pos, &key, &value)) {
+ assert(i < self->ufd_len);
+ /* Never overflow */
+ self->ufds[i].fd = (int)PyLong_AsLong(key);
+ self->ufds[i].events = (short)(unsigned short)PyLong_AsLong(value);
+ i++;
+ }
+ assert(i == self->ufd_len);
+ self->ufd_uptodate = 1;
+ return 1;
+}
+
+static int
+ushort_converter(PyObject *obj, void *ptr)
+{
+ unsigned long uval;
+
+ uval = PyLong_AsUnsignedLong(obj);
+ if (uval == (unsigned long)-1 && PyErr_Occurred())
+ return 0;
+ if (uval > USHRT_MAX) {
+ PyErr_SetString(PyExc_OverflowError,
+ "Python int too large for C unsigned short");
+ return 0;
+ }
+
+ *(unsigned short *)ptr = Py_SAFE_DOWNCAST(uval, unsigned long, unsigned short);
+ return 1;
+}
+
+PyDoc_STRVAR(poll_register_doc,
+"register(fd [, eventmask] ) -> None\n\n\
+Register a file descriptor with the polling object.\n\
+fd -- either an integer, or an object with a fileno() method returning an\n\
+ int.\n\
+events -- an optional bitmask describing the type of events to check for");
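+
+/* Illustrative Python-level use of register()/modify()/unregister() (a sketch
+ * using a pipe, on a platform where this poll interface is compiled in):
+ *
+ *   import os, select
+ *   r, w = os.pipe()
+ *   p = select.poll()
+ *   p.register(r, select.POLLIN)
+ *   p.modify(r, select.POLLIN | select.POLLPRI)
+ *   p.unregister(r)
+ */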
+
+static PyObject *
+poll_register(pollObject *self, PyObject *args)
+{
+ PyObject *o, *key, *value;
+ int fd;
+ unsigned short events = POLLIN | POLLPRI | POLLOUT;
+ int err;
+
+ if (!PyArg_ParseTuple(args, "O|O&:register", &o, ushort_converter, &events))
+ return NULL;
+
+ fd = PyObject_AsFileDescriptor(o);
+ if (fd == -1) return NULL;
+
+ /* Add entry to the internal dictionary: the key is the
+ file descriptor, and the value is the event mask. */
+ key = PyLong_FromLong(fd);
+ if (key == NULL)
+ return NULL;
+ value = PyLong_FromLong(events);
+ if (value == NULL) {
+ Py_DECREF(key);
+ return NULL;
+ }
+ err = PyDict_SetItem(self->dict, key, value);
+ Py_DECREF(key);
+ Py_DECREF(value);
+ if (err < 0)
+ return NULL;
+
+ self->ufd_uptodate = 0;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(poll_modify_doc,
+"modify(fd, eventmask) -> None\n\n\
+Modify an already registered file descriptor.\n\
+fd -- either an integer, or an object with a fileno() method returning an\n\
+ int.\n\
+events -- an optional bitmask describing the type of events to check for");
+
+static PyObject *
+poll_modify(pollObject *self, PyObject *args)
+{
+ PyObject *o, *key, *value;
+ int fd;
+ unsigned short events;
+ int err;
+
+ if (!PyArg_ParseTuple(args, "OO&:modify", &o, ushort_converter, &events))
+ return NULL;
+
+ fd = PyObject_AsFileDescriptor(o);
+ if (fd == -1) return NULL;
+
+ /* Modify registered fd */
+ key = PyLong_FromLong(fd);
+ if (key == NULL)
+ return NULL;
+ if (PyDict_GetItem(self->dict, key) == NULL) {
+ errno = ENOENT;
+ PyErr_SetFromErrno(PyExc_OSError);
+ Py_DECREF(key);
+ return NULL;
+ }
+ value = PyLong_FromLong(events);
+ if (value == NULL) {
+ Py_DECREF(key);
+ return NULL;
+ }
+ err = PyDict_SetItem(self->dict, key, value);
+ Py_DECREF(key);
+ Py_DECREF(value);
+ if (err < 0)
+ return NULL;
+
+ self->ufd_uptodate = 0;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+PyDoc_STRVAR(poll_unregister_doc,
+"unregister(fd) -> None\n\n\
+Remove a file descriptor being tracked by the polling object.");
+
+static PyObject *
+poll_unregister(pollObject *self, PyObject *o)
+{
+ PyObject *key;
+ int fd;
+
+ fd = PyObject_AsFileDescriptor( o );
+ if (fd == -1)
+ return NULL;
+
+ /* Check whether the fd is already in the array */
+ key = PyLong_FromLong(fd);
+ if (key == NULL)
+ return NULL;
+
+ if (PyDict_DelItem(self->dict, key) == -1) {
+ Py_DECREF(key);
+ /* This will simply raise the KeyError set by PyDict_DelItem
+ if the file descriptor isn't registered. */
+ return NULL;
+ }
+
+ Py_DECREF(key);
+ self->ufd_uptodate = 0;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(poll_poll_doc,
+"poll( [timeout] ) -> list of (fd, event) 2-tuples\n\n\
+Polls the set of registered file descriptors, returning a list containing \n\
+any descriptors that have events or errors to report.");
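+
+/* Example polling loop at the Python level (again a sketch using a pipe; the
+ * timeout argument is in milliseconds and each result is an (fd, events)
+ * tuple):
+ *
+ *   import os, select
+ *   r, w = os.pipe()
+ *   p = select.poll()
+ *   p.register(r, select.POLLIN)
+ *   os.write(w, b"x")
+ *   for fd, events in p.poll(1000):        # wait at most 1000 ms
+ *       if events & select.POLLIN:
+ *           print(os.read(fd, 1))
+ */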
+
+static PyObject *
+poll_poll(pollObject *self, PyObject *args)
+{
+ PyObject *result_list = NULL, *timeout_obj = NULL;
+ int poll_result, i, j;
+ PyObject *value = NULL, *num = NULL;
+ _PyTime_t timeout = -1, ms = -1, deadline = 0;
+ int async_err = 0;
+
+ if (!PyArg_ParseTuple(args, "|O:poll", &timeout_obj)) {
+ return NULL;
+ }
+
+ if (timeout_obj != NULL && timeout_obj != Py_None) {
+ if (_PyTime_FromMillisecondsObject(&timeout, timeout_obj,
+ _PyTime_ROUND_TIMEOUT) < 0) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) {
+ PyErr_SetString(PyExc_TypeError,
+ "timeout must be an integer or None");
+ }
+ return NULL;
+ }
+
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_TIMEOUT);
+ if (ms < INT_MIN || ms > INT_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "timeout is too large");
+ return NULL;
+ }
+
+ if (timeout >= 0) {
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+ }
+ }
+
+ /* On some OSes, typically BSD-based ones, the timeout parameter of the
+ poll() syscall, when negative, must be exactly INFTIM, where defined,
+ or -1. See issue 31334. */
+ if (ms < 0) {
+#ifdef INFTIM
+ ms = INFTIM;
+#else
+ ms = -1;
+#endif
+ }
+
+ /* Avoid concurrent poll() invocation, issue 8865 */
+ if (self->poll_running) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "concurrent poll() invocation");
+ return NULL;
+ }
+
+ /* Ensure the ufd array is up to date */
+ if (!self->ufd_uptodate)
+ if (update_ufd_array(self) == 0)
+ return NULL;
+
+ self->poll_running = 1;
+
+ /* call poll() */
+ async_err = 0;
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+ poll_result = poll(self->ufds, self->ufd_len, (int)ms);
+ Py_END_ALLOW_THREADS
+
+ if (errno != EINTR)
+ break;
+
+ /* poll() was interrupted by a signal */
+ if (PyErr_CheckSignals()) {
+ async_err = 1;
+ break;
+ }
+
+ if (timeout >= 0) {
+ timeout = deadline - _PyTime_GetMonotonicClock();
+ if (timeout < 0) {
+ poll_result = 0;
+ break;
+ }
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
+ /* retry poll() with the recomputed timeout */
+ }
+ } while (1);
+
+ self->poll_running = 0;
+
+ if (poll_result < 0) {
+ if (!async_err)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ /* build the result list */
+
+ result_list = PyList_New(poll_result);
+ if (!result_list)
+ return NULL;
+
+ for (i = 0, j = 0; j < poll_result; j++) {
+ /* skip to the next fired descriptor */
+ while (!self->ufds[i].revents) {
+ i++;
+ }
+ /* if any of the allocations below fails, jump to the error
+ label; the cleanup code at the end releases result_list */
+ value = PyTuple_New(2);
+ if (value == NULL)
+ goto error;
+ num = PyLong_FromLong(self->ufds[i].fd);
+ if (num == NULL) {
+ Py_DECREF(value);
+ goto error;
+ }
+ PyTuple_SET_ITEM(value, 0, num);
+
+ /* The &0xffff is a workaround for AIX. 'revents'
+ is a 16-bit short, and IBM assigned POLLNVAL
+ to be 0x8000, so the conversion to int results
+ in a negative number. See SF bug #923315. */
+ num = PyLong_FromLong(self->ufds[i].revents & 0xffff);
+ if (num == NULL) {
+ Py_DECREF(value);
+ goto error;
+ }
+ PyTuple_SET_ITEM(value, 1, num);
+ PyList_SET_ITEM(result_list, j, value);
+ i++;
+ }
+ return result_list;
+
+ error:
+ Py_DECREF(result_list);
+ return NULL;
+}
+
+static PyMethodDef poll_methods[] = {
+ {"register", (PyCFunction)poll_register,
+ METH_VARARGS, poll_register_doc},
+ {"modify", (PyCFunction)poll_modify,
+ METH_VARARGS, poll_modify_doc},
+ {"unregister", (PyCFunction)poll_unregister,
+ METH_O, poll_unregister_doc},
+ {"poll", (PyCFunction)poll_poll,
+ METH_VARARGS, poll_poll_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+static pollObject *
+newPollObject(void)
+{
+ pollObject *self;
+ self = PyObject_New(pollObject, &poll_Type);
+ if (self == NULL)
+ return NULL;
+ /* ufd_uptodate is a Boolean, denoting whether the
+ array pointed to by ufds matches the contents of the dictionary. */
+ self->ufd_uptodate = 0;
+ self->ufds = NULL;
+ self->poll_running = 0;
+ self->dict = PyDict_New();
+ if (self->dict == NULL) {
+ Py_DECREF(self);
+ return NULL;
+ }
+ return self;
+}
+
+static void
+poll_dealloc(pollObject *self)
+{
+ if (self->ufds != NULL)
+ PyMem_DEL(self->ufds);
+ Py_XDECREF(self->dict);
+ PyObject_Del(self);
+}
+
+static PyTypeObject poll_Type = {
+ /* The ob_type field must be initialized in the module init function
+ * to be portable to Windows without using C++. */
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "select.poll", /*tp_name*/
+ sizeof(pollObject), /*tp_basicsize*/
+ 0, /*tp_itemsize*/
+ /* methods */
+ (destructor)poll_dealloc, /*tp_dealloc*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+ 0, /*tp_reserved*/
+ 0, /*tp_repr*/
+ 0, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash*/
+ 0, /*tp_call*/
+ 0, /*tp_str*/
+ 0, /*tp_getattro*/
+ 0, /*tp_setattro*/
+ 0, /*tp_as_buffer*/
+ Py_TPFLAGS_DEFAULT, /*tp_flags*/
+ 0, /*tp_doc*/
+ 0, /*tp_traverse*/
+ 0, /*tp_clear*/
+ 0, /*tp_richcompare*/
+ 0, /*tp_weaklistoffset*/
+ 0, /*tp_iter*/
+ 0, /*tp_iternext*/
+ poll_methods, /*tp_methods*/
+};
+
+#ifdef HAVE_SYS_DEVPOLL_H
+typedef struct {
+ PyObject_HEAD
+ int fd_devpoll;
+ int max_n_fds;
+ int n_fds;
+ struct pollfd *fds;
+} devpollObject;
+
+static PyTypeObject devpoll_Type;
+
+static PyObject *
+devpoll_err_closed(void)
+{
+ PyErr_SetString(PyExc_ValueError, "I/O operation on closed devpoll object");
+ return NULL;
+}
+
+static int devpoll_flush(devpollObject *self)
+{
+ int size, n;
+
+ if (!self->n_fds) return 0;
+
+ size = sizeof(struct pollfd)*self->n_fds;
+ self->n_fds = 0;
+
+ n = _Py_write(self->fd_devpoll, self->fds, size);
+ if (n == -1)
+ return -1;
+
+ if (n < size) {
+ /*
+ ** Data written to /dev/poll is a binary data structure. It is not
+ ** clear what to do if a partial write occurs. For now, raise
+ ** an exception and see whether this problem actually shows up in
+ ** the wild.
+ ** See http://bugs.python.org/issue6397.
+ */
+ PyErr_Format(PyExc_IOError, "failed to write all pollfds. "
+ "Please, report at http://bugs.python.org/. "
+ "Data to report: Size tried: %d, actual size written: %d.",
+ size, n);
+ return -1;
+ }
+ return 0;
+}
+
+static PyObject *
+internal_devpoll_register(devpollObject *self, PyObject *args, int remove)
+{
+ PyObject *o;
+ int fd;
+ unsigned short events = POLLIN | POLLPRI | POLLOUT;
+
+ if (self->fd_devpoll < 0)
+ return devpoll_err_closed();
+
+ if (!PyArg_ParseTuple(args, "O|O&:register", &o, ushort_converter, &events))
+ return NULL;
+
+ fd = PyObject_AsFileDescriptor(o);
+ if (fd == -1) return NULL;
+
+ if (remove) {
+ self->fds[self->n_fds].fd = fd;
+ self->fds[self->n_fds].events = POLLREMOVE;
+
+ if (++self->n_fds == self->max_n_fds) {
+ if (devpoll_flush(self))
+ return NULL;
+ }
+ }
+
+ self->fds[self->n_fds].fd = fd;
+ self->fds[self->n_fds].events = (signed short)events;
+
+ if (++self->n_fds == self->max_n_fds) {
+ if (devpoll_flush(self))
+ return NULL;
+ }
+
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(devpoll_register_doc,
+"register(fd [, eventmask] ) -> None\n\n\
+Register a file descriptor with the polling object.\n\
+fd -- either an integer, or an object with a fileno() method returning an\n\
+ int.\n\
+events -- an optional bitmask describing the type of events to check for");
+
+static PyObject *
+devpoll_register(devpollObject *self, PyObject *args)
+{
+ return internal_devpoll_register(self, args, 0);
+}
+
+PyDoc_STRVAR(devpoll_modify_doc,
+"modify(fd[, eventmask]) -> None\n\n\
+Modify a possibly already registered file descriptor.\n\
+fd -- either an integer, or an object with a fileno() method returning an\n\
+ int.\n\
+events -- an optional bitmask describing the type of events to check for");
+
+static PyObject *
+devpoll_modify(devpollObject *self, PyObject *args)
+{
+ return internal_devpoll_register(self, args, 1);
+}
+
+
+PyDoc_STRVAR(devpoll_unregister_doc,
+"unregister(fd) -> None\n\n\
+Remove a file descriptor being tracked by the polling object.");
+
+static PyObject *
+devpoll_unregister(devpollObject *self, PyObject *o)
+{
+ int fd;
+
+ if (self->fd_devpoll < 0)
+ return devpoll_err_closed();
+
+ fd = PyObject_AsFileDescriptor( o );
+ if (fd == -1)
+ return NULL;
+
+ self->fds[self->n_fds].fd = fd;
+ self->fds[self->n_fds].events = POLLREMOVE;
+
+ if (++self->n_fds == self->max_n_fds) {
+ if (devpoll_flush(self))
+ return NULL;
+ }
+
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(devpoll_poll_doc,
+"poll( [timeout] ) -> list of (fd, event) 2-tuples\n\n\
+Polls the set of registered file descriptors, returning a list containing \n\
+any descriptors that have events or errors to report.");
+
+static PyObject *
+devpoll_poll(devpollObject *self, PyObject *args)
+{
+ struct dvpoll dvp;
+ PyObject *result_list = NULL, *timeout_obj = NULL;
+ int poll_result, i;
+ PyObject *value, *num1, *num2;
+ _PyTime_t timeout, ms, deadline = 0;
+
+ if (self->fd_devpoll < 0)
+ return devpoll_err_closed();
+
+ if (!PyArg_ParseTuple(args, "|O:poll", &timeout_obj)) {
+ return NULL;
+ }
+
+ /* Check values for timeout */
+ if (timeout_obj == NULL || timeout_obj == Py_None) {
+ timeout = -1;
+ ms = -1;
+ }
+ else {
+ if (_PyTime_FromMillisecondsObject(&timeout, timeout_obj,
+ _PyTime_ROUND_TIMEOUT) < 0) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) {
+ PyErr_SetString(PyExc_TypeError,
+ "timeout must be an integer or None");
+ }
+ return NULL;
+ }
+
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_TIMEOUT);
+ if (ms < -1 || ms > INT_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "timeout is too large");
+ return NULL;
+ }
+ }
+
+ if (devpoll_flush(self))
+ return NULL;
+
+ dvp.dp_fds = self->fds;
+ dvp.dp_nfds = self->max_n_fds;
+ dvp.dp_timeout = (int)ms;
+
+ if (timeout >= 0)
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+
+ do {
+ /* call devpoll() */
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+ poll_result = ioctl(self->fd_devpoll, DP_POLL, &dvp);
+ Py_END_ALLOW_THREADS
+
+ if (errno != EINTR)
+ break;
+
+ /* devpoll() was interrupted by a signal */
+ if (PyErr_CheckSignals())
+ return NULL;
+
+ if (timeout >= 0) {
+ timeout = deadline - _PyTime_GetMonotonicClock();
+ if (timeout < 0) {
+ poll_result = 0;
+ break;
+ }
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
+ dvp.dp_timeout = (int)ms;
+ /* retry devpoll() with the recomputed timeout */
+ }
+ } while (1);
+
+ if (poll_result < 0) {
+ PyErr_SetFromErrno(PyExc_IOError);
+ return NULL;
+ }
+
+ /* build the result list */
+ result_list = PyList_New(poll_result);
+ if (!result_list)
+ return NULL;
+
+ for (i = 0; i < poll_result; i++) {
+ num1 = PyLong_FromLong(self->fds[i].fd);
+ num2 = PyLong_FromLong(self->fds[i].revents);
+ if ((num1 == NULL) || (num2 == NULL)) {
+ Py_XDECREF(num1);
+ Py_XDECREF(num2);
+ goto error;
+ }
+ value = PyTuple_Pack(2, num1, num2);
+ Py_DECREF(num1);
+ Py_DECREF(num2);
+ if (value == NULL)
+ goto error;
+ PyList_SET_ITEM(result_list, i, value);
+ }
+
+ return result_list;
+
+ error:
+ Py_DECREF(result_list);
+ return NULL;
+}
+
+static int
+devpoll_internal_close(devpollObject *self)
+{
+ int save_errno = 0;
+ if (self->fd_devpoll >= 0) {
+ int fd = self->fd_devpoll;
+ self->fd_devpoll = -1;
+ Py_BEGIN_ALLOW_THREADS
+ if (close(fd) < 0)
+ save_errno = errno;
+ Py_END_ALLOW_THREADS
+ }
+ return save_errno;
+}
+
+static PyObject*
+devpoll_close(devpollObject *self)
+{
+ errno = devpoll_internal_close(self);
+ if (errno < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(devpoll_close_doc,
+"close() -> None\n\
+\n\
+Close the devpoll file descriptor. Further operations on the devpoll\n\
+object will raise an exception.");
+
+static PyObject*
+devpoll_get_closed(devpollObject *self, void *Py_UNUSED(ignored))
+{
+ if (self->fd_devpoll < 0)
+ Py_RETURN_TRUE;
+ else
+ Py_RETURN_FALSE;
+}
+
+static PyObject*
+devpoll_fileno(devpollObject *self)
+{
+ if (self->fd_devpoll < 0)
+ return devpoll_err_closed();
+ return PyLong_FromLong(self->fd_devpoll);
+}
+
+PyDoc_STRVAR(devpoll_fileno_doc,
+"fileno() -> int\n\
+\n\
+Return the file descriptor.");
+
+static PyMethodDef devpoll_methods[] = {
+ {"register", (PyCFunction)devpoll_register,
+ METH_VARARGS, devpoll_register_doc},
+ {"modify", (PyCFunction)devpoll_modify,
+ METH_VARARGS, devpoll_modify_doc},
+ {"unregister", (PyCFunction)devpoll_unregister,
+ METH_O, devpoll_unregister_doc},
+ {"poll", (PyCFunction)devpoll_poll,
+ METH_VARARGS, devpoll_poll_doc},
+ {"close", (PyCFunction)devpoll_close, METH_NOARGS,
+ devpoll_close_doc},
+ {"fileno", (PyCFunction)devpoll_fileno, METH_NOARGS,
+ devpoll_fileno_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+static PyGetSetDef devpoll_getsetlist[] = {
+ {"closed", (getter)devpoll_get_closed, NULL,
+ "True if the devpoll object is closed"},
+ {0},
+};
+
+static devpollObject *
+newDevPollObject(void)
+{
+ devpollObject *self;
+ int fd_devpoll, limit_result;
+ struct pollfd *fds;
+ struct rlimit limit;
+
+ /*
+ ** If we try to process more than getrlimit()
+ ** fds, the kernel will give an error, so
+ ** we set the limit here. It is a dynamic
+ ** value, because the rlimit can be changed at any time.
+ */
+ limit_result = getrlimit(RLIMIT_NOFILE, &limit);
+ if (limit_result == -1) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ fd_devpoll = _Py_open("/dev/poll", O_RDWR);
+ if (fd_devpoll == -1)
+ return NULL;
+
+ fds = PyMem_NEW(struct pollfd, limit.rlim_cur);
+ if (fds == NULL) {
+ close(fd_devpoll);
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ self = PyObject_New(devpollObject, &devpoll_Type);
+ if (self == NULL) {
+ close(fd_devpoll);
+ PyMem_DEL(fds);
+ return NULL;
+ }
+ self->fd_devpoll = fd_devpoll;
+ self->max_n_fds = limit.rlim_cur;
+ self->n_fds = 0;
+ self->fds = fds;
+
+ return self;
+}
+
+static void
+devpoll_dealloc(devpollObject *self)
+{
+ (void)devpoll_internal_close(self);
+ PyMem_DEL(self->fds);
+ PyObject_Del(self);
+}
+
+static PyTypeObject devpoll_Type = {
+ /* The ob_type field must be initialized in the module init function
+ * to be portable to Windows without using C++. */
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "select.devpoll", /*tp_name*/
+ sizeof(devpollObject), /*tp_basicsize*/
+ 0, /*tp_itemsize*/
+ /* methods */
+ (destructor)devpoll_dealloc, /*tp_dealloc*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+ 0, /*tp_reserved*/
+ 0, /*tp_repr*/
+ 0, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash*/
+ 0, /*tp_call*/
+ 0, /*tp_str*/
+ 0, /*tp_getattro*/
+ 0, /*tp_setattro*/
+ 0, /*tp_as_buffer*/
+ Py_TPFLAGS_DEFAULT, /*tp_flags*/
+ 0, /*tp_doc*/
+ 0, /*tp_traverse*/
+ 0, /*tp_clear*/
+ 0, /*tp_richcompare*/
+ 0, /*tp_weaklistoffset*/
+ 0, /*tp_iter*/
+ 0, /*tp_iternext*/
+ devpoll_methods, /*tp_methods*/
+ 0, /* tp_members */
+ devpoll_getsetlist, /* tp_getset */
+};
+#endif /* HAVE_SYS_DEVPOLL_H */
+
+
+
+PyDoc_STRVAR(poll_doc,
+"Returns a polling object, which supports registering and\n\
+unregistering file descriptors, and then polling them for I/O events.");
+
+static PyObject *
+select_poll(PyObject *self, PyObject *unused)
+{
+ return (PyObject *)newPollObject();
+}
+
+#ifdef HAVE_SYS_DEVPOLL_H
+PyDoc_STRVAR(devpoll_doc,
+"Returns a polling object, which supports registering and\n\
+unregistering file descriptors, and then polling them for I/O events.");
+
+static PyObject *
+select_devpoll(PyObject *self, PyObject *unused)
+{
+ return (PyObject *)newDevPollObject();
+}
+#endif
+
+
+#ifdef __APPLE__
+/*
+ * On some systems poll() sets errno on invalid file descriptors. We test
+ * for this at runtime because this bug may be fixed or introduced between
+ * OS releases.
+ */
+static int select_have_broken_poll(void)
+{
+ int poll_test;
+ int filedes[2];
+
+ struct pollfd poll_struct = { 0, POLLIN|POLLPRI|POLLOUT, 0 };
+
+ /* Create a file descriptor to make invalid */
+ if (pipe(filedes) < 0) {
+ return 1;
+ }
+ poll_struct.fd = filedes[0];
+ close(filedes[0]);
+ close(filedes[1]);
+ poll_test = poll(&poll_struct, 1, 0);
+ if (poll_test < 0) {
+ return 1;
+ } else if (poll_test == 0 && poll_struct.revents != POLLNVAL) {
+ return 1;
+ }
+ return 0;
+}
+#endif /* __APPLE__ */
+
+#endif /* HAVE_POLL */
+
+#ifdef HAVE_EPOLL
+/* **************************************************************************
+ * epoll interface for Linux 2.6
+ *
+ * Written by Christian Heimes
+ * Inspired by Twisted's _epoll.pyx and select.poll()
+ */
+
+#ifdef HAVE_SYS_EPOLL_H
+#include <sys/epoll.h>
+#endif
+
+typedef struct {
+ PyObject_HEAD
+ SOCKET epfd; /* epoll control file descriptor */
+} pyEpoll_Object;
+
+static PyTypeObject pyEpoll_Type;
+#define pyepoll_CHECK(op) (PyObject_TypeCheck((op), &pyEpoll_Type))
+
+static PyObject *
+pyepoll_err_closed(void)
+{
+ PyErr_SetString(PyExc_ValueError, "I/O operation on closed epoll object");
+ return NULL;
+}
+
+static int
+pyepoll_internal_close(pyEpoll_Object *self)
+{
+ int save_errno = 0;
+ if (self->epfd >= 0) {
+ int epfd = self->epfd;
+ self->epfd = -1;
+ Py_BEGIN_ALLOW_THREADS
+ if (close(epfd) < 0)
+ save_errno = errno;
+ Py_END_ALLOW_THREADS
+ }
+ return save_errno;
+}
+
+static PyObject *
+newPyEpoll_Object(PyTypeObject *type, int sizehint, int flags, SOCKET fd)
+{
+ pyEpoll_Object *self;
+
+ assert(type != NULL && type->tp_alloc != NULL);
+ self = (pyEpoll_Object *) type->tp_alloc(type, 0);
+ if (self == NULL)
+ return NULL;
+
+ if (fd == -1) {
+ Py_BEGIN_ALLOW_THREADS
+#ifdef HAVE_EPOLL_CREATE1
+ flags |= EPOLL_CLOEXEC;
+ if (flags)
+ self->epfd = epoll_create1(flags);
+ else
+#endif
+ self->epfd = epoll_create(sizehint);
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ self->epfd = fd;
+ }
+ if (self->epfd < 0) {
+ Py_DECREF(self);
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+#ifndef HAVE_EPOLL_CREATE1
+ if (fd == -1 && _Py_set_inheritable(self->epfd, 0, NULL) < 0) {
+ Py_DECREF(self);
+ return NULL;
+ }
+#endif
+
+ return (PyObject *)self;
+}
+
+
+static PyObject *
+pyepoll_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ int flags = 0, sizehint = -1;
+ static char *kwlist[] = {"sizehint", "flags", NULL};
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|ii:epoll", kwlist,
+ &sizehint, &flags))
+ return NULL;
+ if (sizehint == -1) {
+ sizehint = FD_SETSIZE - 1;
+ }
+ else if (sizehint <= 0) {
+ PyErr_SetString(PyExc_ValueError, "sizehint must be positive or -1");
+ return NULL;
+ }
+
+ return newPyEpoll_Object(type, sizehint, flags, -1);
+}
+
+
+static void
+pyepoll_dealloc(pyEpoll_Object *self)
+{
+ (void)pyepoll_internal_close(self);
+ Py_TYPE(self)->tp_free(self);
+}
+
+static PyObject*
+pyepoll_close(pyEpoll_Object *self)
+{
+ errno = pyepoll_internal_close(self);
+ if (errno < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(pyepoll_close_doc,
+"close() -> None\n\
+\n\
+Close the epoll control file descriptor. Further operations on the epoll\n\
+object will raise an exception.");
+
+static PyObject*
+pyepoll_get_closed(pyEpoll_Object *self, void *Py_UNUSED(ignored))
+{
+ if (self->epfd < 0)
+ Py_RETURN_TRUE;
+ else
+ Py_RETURN_FALSE;
+}
+
+static PyObject*
+pyepoll_fileno(pyEpoll_Object *self)
+{
+ if (self->epfd < 0)
+ return pyepoll_err_closed();
+ return PyLong_FromLong(self->epfd);
+}
+
+PyDoc_STRVAR(pyepoll_fileno_doc,
+"fileno() -> int\n\
+\n\
+Return the epoll control file descriptor.");
+
+static PyObject*
+pyepoll_fromfd(PyObject *cls, PyObject *args)
+{
+ SOCKET fd;
+
+ if (!PyArg_ParseTuple(args, "i:fromfd", &fd))
+ return NULL;
+
+ return newPyEpoll_Object((PyTypeObject*)cls, FD_SETSIZE - 1, 0, fd);
+}
+
+PyDoc_STRVAR(pyepoll_fromfd_doc,
+"fromfd(fd) -> epoll\n\
+\n\
+Create an epoll object from a given control fd.");
+
+static PyObject *
+pyepoll_internal_ctl(int epfd, int op, PyObject *pfd, unsigned int events)
+{
+ struct epoll_event ev;
+ int result;
+ int fd;
+
+ if (epfd < 0)
+ return pyepoll_err_closed();
+
+ fd = PyObject_AsFileDescriptor(pfd);
+ if (fd == -1) {
+ return NULL;
+ }
+
+ switch (op) {
+ case EPOLL_CTL_ADD:
+ case EPOLL_CTL_MOD:
+ ev.events = events;
+ ev.data.fd = fd;
+ Py_BEGIN_ALLOW_THREADS
+ result = epoll_ctl(epfd, op, fd, &ev);
+ Py_END_ALLOW_THREADS
+ break;
+ case EPOLL_CTL_DEL:
+ /* In kernel versions before 2.6.9, the EPOLL_CTL_DEL
+ * operation required a non-NULL pointer in event, even
+ * though this argument is ignored. */
+ Py_BEGIN_ALLOW_THREADS
+ result = epoll_ctl(epfd, op, fd, &ev);
+ if (errno == EBADF) {
+ /* fd already closed */
+ result = 0;
+ errno = 0;
+ }
+ Py_END_ALLOW_THREADS
+ break;
+ default:
+ result = -1;
+ errno = EINVAL;
+ }
+
+ if (result < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+static PyObject *
+pyepoll_register(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *pfd;
+ unsigned int events = EPOLLIN | EPOLLOUT | EPOLLPRI;
+ static char *kwlist[] = {"fd", "eventmask", NULL};
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|I:register", kwlist,
+ &pfd, &events)) {
+ return NULL;
+ }
+
+ return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_ADD, pfd, events);
+}
+
+PyDoc_STRVAR(pyepoll_register_doc,
+"register(fd[, eventmask]) -> None\n\
+\n\
+Registers a new fd or raises an OSError if the fd is already registered.\n\
+fd is the target file descriptor of the operation.\n\
+events is a bit set composed of the various EPOLL constants; the default\n\
+is EPOLLIN | EPOLLOUT | EPOLLPRI.\n\
+\n\
+The epoll interface supports all file descriptors that support poll.");
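+
+/* Illustrative epoll registration from Python (a sketch; Linux-only, matching
+ * the HAVE_EPOLL guard above):
+ *
+ *   import select, socket
+ *   ep = select.epoll()
+ *   srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ *   srv.bind(("127.0.0.1", 0))
+ *   srv.listen(1)
+ *   ep.register(srv.fileno(), select.EPOLLIN)
+ */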
+
+static PyObject *
+pyepoll_modify(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *pfd;
+ unsigned int events;
+ static char *kwlist[] = {"fd", "eventmask", NULL};
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "OI:modify", kwlist,
+ &pfd, &events)) {
+ return NULL;
+ }
+
+ return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_MOD, pfd, events);
+}
+
+PyDoc_STRVAR(pyepoll_modify_doc,
+"modify(fd, eventmask) -> None\n\
+\n\
+fd is the target file descriptor of the operation\n\
+events is a bit set composed of the various EPOLL constants");
+
+static PyObject *
+pyepoll_unregister(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *pfd;
+ static char *kwlist[] = {"fd", NULL};
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:unregister", kwlist,
+ &pfd)) {
+ return NULL;
+ }
+
+ return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_DEL, pfd, 0);
+}
+
+PyDoc_STRVAR(pyepoll_unregister_doc,
+"unregister(fd) -> None\n\
+\n\
+fd is the target file descriptor of the operation.");
+
+static PyObject *
+pyepoll_poll(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"timeout", "maxevents", NULL};
+ PyObject *timeout_obj = NULL;
+ int maxevents = -1;
+ int nfds, i;
+ PyObject *elist = NULL, *etuple = NULL;
+ struct epoll_event *evs = NULL;
+ _PyTime_t timeout, ms, deadline;
+
+ if (self->epfd < 0)
+ return pyepoll_err_closed();
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|Oi:poll", kwlist,
+ &timeout_obj, &maxevents)) {
+ return NULL;
+ }
+
+ if (timeout_obj == NULL || timeout_obj == Py_None) {
+ timeout = -1;
+ ms = -1;
+ deadline = 0; /* initialize to prevent gcc warning */
+ }
+ else {
+ /* epoll_wait() has a resolution of 1 millisecond, round towards
+ infinity to wait at least timeout seconds. */
+ if (_PyTime_FromSecondsObject(&timeout, timeout_obj,
+ _PyTime_ROUND_TIMEOUT) < 0) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) {
+ PyErr_SetString(PyExc_TypeError,
+ "timeout must be an integer or None");
+ }
+ return NULL;
+ }
+
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
+ if (ms < INT_MIN || ms > INT_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "timeout is too large");
+ return NULL;
+ }
+
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+ }
+
+ if (maxevents == -1) {
+ maxevents = FD_SETSIZE-1;
+ }
+ else if (maxevents < 1) {
+ PyErr_Format(PyExc_ValueError,
+ "maxevents must be greater than 0, got %d",
+ maxevents);
+ return NULL;
+ }
+
+ evs = PyMem_New(struct epoll_event, maxevents);
+ if (evs == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+ nfds = epoll_wait(self->epfd, evs, maxevents, (int)ms);
+ Py_END_ALLOW_THREADS
+
+ if (errno != EINTR)
+ break;
+
+ /* poll() was interrupted by a signal */
+ if (PyErr_CheckSignals())
+ goto error;
+
+ if (timeout >= 0) {
+ timeout = deadline - _PyTime_GetMonotonicClock();
+ if (timeout < 0) {
+ nfds = 0;
+ break;
+ }
+ ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
+ /* retry epoll_wait() with the recomputed timeout */
+ }
+ } while(1);
+
+ if (nfds < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ goto error;
+ }
+
+ elist = PyList_New(nfds);
+ if (elist == NULL) {
+ goto error;
+ }
+
+ for (i = 0; i < nfds; i++) {
+ etuple = Py_BuildValue("iI", evs[i].data.fd, evs[i].events);
+ if (etuple == NULL) {
+ Py_CLEAR(elist);
+ goto error;
+ }
+ PyList_SET_ITEM(elist, i, etuple);
+ }
+
+ error:
+ PyMem_Free(evs);
+ return elist;
+}
+
+PyDoc_STRVAR(pyepoll_poll_doc,
+"poll([timeout=-1[, maxevents=-1]]) -> [(fd, events), (...)]\n\
+\n\
+Wait for events on the epoll file descriptor for a maximum time of timeout\n\
+in seconds (as float). -1 makes poll wait indefinitely.\n\
+Up to maxevents are returned to the caller.");
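+
+/* Example event loop (a sketch; assumes an epoll object "ep" with descriptors
+ * already registered as in the register() example above, and a user-supplied
+ * handle_readable() callback; the timeout is in seconds):
+ *
+ *   while True:
+ *       for fd, mask in ep.poll(timeout=1.0, maxevents=16):
+ *           if mask & select.EPOLLIN:
+ *               handle_readable(fd)
+ */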
+
+static PyObject *
+pyepoll_enter(pyEpoll_Object *self, PyObject *args)
+{
+ if (self->epfd < 0)
+ return pyepoll_err_closed();
+
+ Py_INCREF(self);
+ return (PyObject *)self;
+}
+
+static PyObject *
+pyepoll_exit(PyObject *self, PyObject *args)
+{
+ _Py_IDENTIFIER(close);
+
+ return _PyObject_CallMethodId(self, &PyId_close, NULL);
+}
+
+static PyMethodDef pyepoll_methods[] = {
+ {"fromfd", (PyCFunction)pyepoll_fromfd,
+ METH_VARARGS | METH_CLASS, pyepoll_fromfd_doc},
+ {"close", (PyCFunction)pyepoll_close, METH_NOARGS,
+ pyepoll_close_doc},
+ {"fileno", (PyCFunction)pyepoll_fileno, METH_NOARGS,
+ pyepoll_fileno_doc},
+ {"modify", (PyCFunction)pyepoll_modify,
+ METH_VARARGS | METH_KEYWORDS, pyepoll_modify_doc},
+ {"register", (PyCFunction)pyepoll_register,
+ METH_VARARGS | METH_KEYWORDS, pyepoll_register_doc},
+ {"unregister", (PyCFunction)pyepoll_unregister,
+ METH_VARARGS | METH_KEYWORDS, pyepoll_unregister_doc},
+ {"poll", (PyCFunction)pyepoll_poll,
+ METH_VARARGS | METH_KEYWORDS, pyepoll_poll_doc},
+ {"__enter__", (PyCFunction)pyepoll_enter, METH_NOARGS,
+ NULL},
+ {"__exit__", (PyCFunction)pyepoll_exit, METH_VARARGS,
+ NULL},
+ {NULL, NULL},
+};
+
+static PyGetSetDef pyepoll_getsetlist[] = {
+ {"closed", (getter)pyepoll_get_closed, NULL,
+ "True if the epoll handler is closed"},
+ {0},
+};
+
+PyDoc_STRVAR(pyepoll_doc,
+"select.epoll(sizehint=-1, flags=0)\n\
+\n\
+Returns an epoll object\n\
+\n\
+sizehint must be a positive integer or -1 for the default size. The\n\
+sizehint is used to optimize internal data structures. It doesn't limit\n\
+the maximum number of monitored events.");
+
+static PyTypeObject pyEpoll_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "select.epoll", /* tp_name */
+ sizeof(pyEpoll_Object), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)pyepoll_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /* tp_flags */
+ pyepoll_doc, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ pyepoll_methods, /* tp_methods */
+ 0, /* tp_members */
+ pyepoll_getsetlist, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ pyepoll_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+#endif /* HAVE_EPOLL */
+
+#ifdef HAVE_KQUEUE
+/* **************************************************************************
+ * kqueue interface for BSD
+ *
+ * Copyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+
+#ifdef HAVE_SYS_EVENT_H
+#include <sys/event.h>
+#endif
+
+PyDoc_STRVAR(kqueue_event_doc,
+"kevent(ident, filter=KQ_FILTER_READ, flags=KQ_EV_ADD, fflags=0, data=0, udata=0)\n\
+\n\
+This object is the equivalent of the struct kevent for the C API.\n\
+\n\
+See the kqueue manpage for more detailed information about the meaning\n\
+of the arguments.\n\
+\n\
+One minor note: while you might hope that udata could store a\n\
+reference to a python object, it cannot, because it is impossible to\n\
+keep a proper reference count of the object once it's passed into the\n\
+kernel. Therefore, I have restricted it to only storing an integer. I\n\
+recommend ignoring it and simply using the 'ident' field to key off\n\
+of. You could also set up a dictionary on the python side to store a\n\
+udata->object mapping.");
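+
+/* Illustrative kqueue/kevent use from Python (a sketch; BSD/macOS only,
+ * matching the HAVE_KQUEUE guard above):
+ *
+ *   import select, socket
+ *   kq = select.kqueue()
+ *   srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ *   srv.bind(("127.0.0.1", 0))
+ *   srv.listen(1)
+ *   ev = select.kevent(srv.fileno(), select.KQ_FILTER_READ, select.KQ_EV_ADD)
+ *   kq.control([ev], 0, 0)              # register; collect no events yet
+ *   fired = kq.control([], 16, 5.0)     # wait up to 5 s for up to 16 events
+ */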
+
+typedef struct {
+ PyObject_HEAD
+ struct kevent e;
+} kqueue_event_Object;
+
+static PyTypeObject kqueue_event_Type;
+
+#define kqueue_event_Check(op) (PyObject_TypeCheck((op), &kqueue_event_Type))
+
+typedef struct {
+ PyObject_HEAD
+ SOCKET kqfd; /* kqueue control fd */
+} kqueue_queue_Object;
+
+static PyTypeObject kqueue_queue_Type;
+
+#define kqueue_queue_Check(op) (PyObject_TypeCheck((op), &kqueue_queue_Type))
+
+#if (SIZEOF_UINTPTR_T != SIZEOF_VOID_P)
+# error uintptr_t does not match void *!
+#elif (SIZEOF_UINTPTR_T == SIZEOF_LONG_LONG)
+# define T_UINTPTRT T_ULONGLONG
+# define T_INTPTRT T_LONGLONG
+# define UINTPTRT_FMT_UNIT "K"
+# define INTPTRT_FMT_UNIT "L"
+#elif (SIZEOF_UINTPTR_T == SIZEOF_LONG)
+# define T_UINTPTRT T_ULONG
+# define T_INTPTRT T_LONG
+# define UINTPTRT_FMT_UNIT "k"
+# define INTPTRT_FMT_UNIT "l"
+#elif (SIZEOF_UINTPTR_T == SIZEOF_INT)
+# define T_UINTPTRT T_UINT
+# define T_INTPTRT T_INT
+# define UINTPTRT_FMT_UNIT "I"
+# define INTPTRT_FMT_UNIT "i"
+#else
+# error uintptr_t does not match int, long, or long long!
+#endif
+
+#if SIZEOF_LONG_LONG == 8
+# define T_INT64 T_LONGLONG
+# define INT64_FMT_UNIT "L"
+#elif SIZEOF_LONG == 8
+# define T_INT64 T_LONG
+# define INT64_FMT_UNIT "l"
+#elif SIZEOF_INT == 8
+# define T_INT64 T_INT
+# define INT64_FMT_UNIT "i"
+#else
+# define INT64_FMT_UNIT "_"
+#endif
+
+#if SIZEOF_LONG_LONG == 4
+# define T_UINT32 T_ULONGLONG
+# define UINT32_FMT_UNIT "K"
+#elif SIZEOF_LONG == 4
+# define T_UINT32 T_ULONG
+# define UINT32_FMT_UNIT "k"
+#elif SIZEOF_INT == 4
+# define T_UINT32 T_UINT
+# define UINT32_FMT_UNIT "I"
+#else
+# define UINT32_FMT_UNIT "_"
+#endif
+
+/*
+ * kevent is not standard and its members vary across BSDs.
+ */
+#ifdef __NetBSD__
+# define FILTER_TYPE T_UINT32
+# define FILTER_FMT_UNIT UINT32_FMT_UNIT
+# define FLAGS_TYPE T_UINT32
+# define FLAGS_FMT_UNIT UINT32_FMT_UNIT
+# define FFLAGS_TYPE T_UINT32
+# define FFLAGS_FMT_UNIT UINT32_FMT_UNIT
+#else
+# define FILTER_TYPE T_SHORT
+# define FILTER_FMT_UNIT "h"
+# define FLAGS_TYPE T_USHORT
+# define FLAGS_FMT_UNIT "H"
+# define FFLAGS_TYPE T_UINT
+# define FFLAGS_FMT_UNIT "I"
+#endif
+
+#if defined(__NetBSD__) || defined(__OpenBSD__)
+# define DATA_TYPE T_INT64
+# define DATA_FMT_UNIT INT64_FMT_UNIT
+#else
+# define DATA_TYPE T_INTPTRT
+# define DATA_FMT_UNIT INTPTRT_FMT_UNIT
+#endif
+
+/* Unfortunately, we can't store python objects in udata, because
+ * kevents in the kernel can be removed without warning, which would
+ * forever lose the refcount on the object stored with it.
+ */
+
+#define KQ_OFF(x) offsetof(kqueue_event_Object, x)
+static struct PyMemberDef kqueue_event_members[] = {
+ {"ident", T_UINTPTRT, KQ_OFF(e.ident)},
+ {"filter", FILTER_TYPE, KQ_OFF(e.filter)},
+ {"flags", FLAGS_TYPE, KQ_OFF(e.flags)},
+ {"fflags", T_UINT, KQ_OFF(e.fflags)},
+ {"data", DATA_TYPE, KQ_OFF(e.data)},
+ {"udata", T_UINTPTRT, KQ_OFF(e.udata)},
+ {NULL} /* Sentinel */
+};
+#undef KQ_OFF
+
+static PyObject *
+kqueue_event_repr(kqueue_event_Object *s)
+{
+ char buf[1024];
+ PyOS_snprintf(
+ buf, sizeof(buf),
+ "<select.kevent ident=%zu filter=%d flags=0x%x fflags=0x%x "
+ "data=0x%llx udata=%p>",
+ (size_t)(s->e.ident), (int)s->e.filter, (unsigned int)s->e.flags,
+ (unsigned int)s->e.fflags, (long long)(s->e.data), (void *)s->e.udata);
+ return PyUnicode_FromString(buf);
+}
+
+static int
+kqueue_event_init(kqueue_event_Object *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *pfd;
+ static char *kwlist[] = {"ident", "filter", "flags", "fflags",
+ "data", "udata", NULL};
+ static const char fmt[] = "O|"
+ FILTER_FMT_UNIT FLAGS_FMT_UNIT FFLAGS_FMT_UNIT DATA_FMT_UNIT
+ UINTPTRT_FMT_UNIT ":kevent";
+
+ EV_SET(&(self->e), 0, EVFILT_READ, EV_ADD, 0, 0, 0); /* defaults */
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, fmt, kwlist,
+ &pfd, &(self->e.filter), &(self->e.flags),
+ &(self->e.fflags), &(self->e.data), &(self->e.udata))) {
+ return -1;
+ }
+
+ if (PyLong_Check(pfd)) {
+ self->e.ident = PyLong_AsSize_t(pfd);
+ }
+ else {
+ self->e.ident = PyObject_AsFileDescriptor(pfd);
+ }
+ if (PyErr_Occurred()) {
+ return -1;
+ }
+ return 0;
+}
+
+static PyObject *
+kqueue_event_richcompare(kqueue_event_Object *s, kqueue_event_Object *o,
+ int op)
+{
+ int result;
+
+ if (!kqueue_event_Check(o)) {
+ Py_RETURN_NOTIMPLEMENTED;
+ }
+
+#define CMP(a, b) ((a) != (b)) ? ((a) < (b) ? -1 : 1)
+ result = CMP(s->e.ident, o->e.ident)
+ : CMP(s->e.filter, o->e.filter)
+ : CMP(s->e.flags, o->e.flags)
+ : CMP(s->e.fflags, o->e.fflags)
+ : CMP(s->e.data, o->e.data)
+ : CMP((intptr_t)s->e.udata, (intptr_t)o->e.udata)
+ : 0;
+#undef CMP
+
+ switch (op) {
+ case Py_EQ:
+ result = (result == 0);
+ break;
+ case Py_NE:
+ result = (result != 0);
+ break;
+ case Py_LE:
+ result = (result <= 0);
+ break;
+ case Py_GE:
+ result = (result >= 0);
+ break;
+ case Py_LT:
+ result = (result < 0);
+ break;
+ case Py_GT:
+ result = (result > 0);
+ break;
+ }
+ return PyBool_FromLong((long)result);
+}
+
+static PyTypeObject kqueue_event_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "select.kevent", /* tp_name */
+ sizeof(kqueue_event_Object), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ 0, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)kqueue_event_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /* tp_flags */
+ kqueue_event_doc, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ (richcmpfunc)kqueue_event_richcompare, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ 0, /* tp_methods */
+ kqueue_event_members, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ (initproc)kqueue_event_init, /* tp_init */
+ 0, /* tp_alloc */
+ 0, /* tp_new */
+ 0, /* tp_free */
+};
+
+static PyObject *
+kqueue_queue_err_closed(void)
+{
+ PyErr_SetString(PyExc_ValueError, "I/O operation on closed kqueue object");
+ return NULL;
+}
+
+static int
+kqueue_queue_internal_close(kqueue_queue_Object *self)
+{
+ int save_errno = 0;
+ if (self->kqfd >= 0) {
+ int kqfd = self->kqfd;
+ self->kqfd = -1;
+ Py_BEGIN_ALLOW_THREADS
+ if (close(kqfd) < 0)
+ save_errno = errno;
+ Py_END_ALLOW_THREADS
+ }
+ return save_errno;
+}
+
+static PyObject *
+newKqueue_Object(PyTypeObject *type, SOCKET fd)
+{
+ kqueue_queue_Object *self;
+ assert(type != NULL && type->tp_alloc != NULL);
+ self = (kqueue_queue_Object *) type->tp_alloc(type, 0);
+ if (self == NULL) {
+ return NULL;
+ }
+
+ if (fd == -1) {
+ Py_BEGIN_ALLOW_THREADS
+ self->kqfd = kqueue();
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ self->kqfd = fd;
+ }
+ if (self->kqfd < 0) {
+ Py_DECREF(self);
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ if (fd == -1) {
+ if (_Py_set_inheritable(self->kqfd, 0, NULL) < 0) {
+ Py_DECREF(self);
+ return NULL;
+ }
+ }
+ return (PyObject *)self;
+}
+
+static PyObject *
+kqueue_queue_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ if ((args != NULL && PyObject_Size(args)) ||
+ (kwds != NULL && PyObject_Size(kwds))) {
+ PyErr_SetString(PyExc_ValueError,
+ "select.kqueue doesn't accept arguments");
+ return NULL;
+ }
+
+ return newKqueue_Object(type, -1);
+}
+
+static void
+kqueue_queue_dealloc(kqueue_queue_Object *self)
+{
+ kqueue_queue_internal_close(self);
+ Py_TYPE(self)->tp_free(self);
+}
+
+static PyObject*
+kqueue_queue_close(kqueue_queue_Object *self)
+{
+ errno = kqueue_queue_internal_close(self);
+ if (errno < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(kqueue_queue_close_doc,
+"close() -> None\n\
+\n\
+Close the kqueue control file descriptor. Further operations on the kqueue\n\
+object will raise an exception.");
+
+static PyObject*
+kqueue_queue_get_closed(kqueue_queue_Object *self, void *Py_UNUSED(ignored))
+{
+ if (self->kqfd < 0)
+ Py_RETURN_TRUE;
+ else
+ Py_RETURN_FALSE;
+}
+
+static PyObject*
+kqueue_queue_fileno(kqueue_queue_Object *self)
+{
+ if (self->kqfd < 0)
+ return kqueue_queue_err_closed();
+ return PyLong_FromLong(self->kqfd);
+}
+
+PyDoc_STRVAR(kqueue_queue_fileno_doc,
+"fileno() -> int\n\
+\n\
+Return the kqueue control file descriptor.");
+
+static PyObject*
+kqueue_queue_fromfd(PyObject *cls, PyObject *args)
+{
+ SOCKET fd;
+
+ if (!PyArg_ParseTuple(args, "i:fromfd", &fd))
+ return NULL;
+
+ return newKqueue_Object((PyTypeObject*)cls, fd);
+}
+
+PyDoc_STRVAR(kqueue_queue_fromfd_doc,
+"fromfd(fd) -> kqueue\n\
+\n\
+Create a kqueue object from a given control fd.");
+
+static PyObject *
+kqueue_queue_control(kqueue_queue_Object *self, PyObject *args)
+{
+ int nevents = 0;
+ int gotevents = 0;
+ int nchanges = 0;
+ int i = 0;
+ PyObject *otimeout = NULL;
+ PyObject *ch = NULL;
+ PyObject *seq = NULL, *ei = NULL;
+ PyObject *result = NULL;
+ struct kevent *evl = NULL;
+ struct kevent *chl = NULL;
+ struct timespec timeoutspec;
+ struct timespec *ptimeoutspec;
+ _PyTime_t timeout, deadline = 0;
+
+ if (self->kqfd < 0)
+ return kqueue_queue_err_closed();
+
+ if (!PyArg_ParseTuple(args, "Oi|O:control", &ch, &nevents, &otimeout))
+ return NULL;
+
+ if (nevents < 0) {
+ PyErr_Format(PyExc_ValueError,
+ "Length of eventlist must be 0 or positive, got %d",
+ nevents);
+ return NULL;
+ }
+
+ if (otimeout == Py_None || otimeout == NULL) {
+ ptimeoutspec = NULL;
+ }
+ else {
+ if (_PyTime_FromSecondsObject(&timeout,
+ otimeout, _PyTime_ROUND_TIMEOUT) < 0) {
+ PyErr_Format(PyExc_TypeError,
+ "timeout argument must be a number "
+ "or None, got %.200s",
+ Py_TYPE(otimeout)->tp_name);
+ return NULL;
+ }
+
+ if (_PyTime_AsTimespec(timeout, &timeoutspec) == -1)
+ return NULL;
+
+ if (timeoutspec.tv_sec < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "timeout must be positive or None");
+ return NULL;
+ }
+ ptimeoutspec = &timeoutspec;
+ }
+
+ if (ch != NULL && ch != Py_None) {
+ seq = PySequence_Fast(ch, "changelist is not iterable");
+ if (seq == NULL) {
+ return NULL;
+ }
+ if (PySequence_Fast_GET_SIZE(seq) > INT_MAX) {
+ PyErr_SetString(PyExc_OverflowError,
+ "changelist is too long");
+ goto error;
+ }
+ nchanges = (int)PySequence_Fast_GET_SIZE(seq);
+
+ chl = PyMem_New(struct kevent, nchanges);
+ if (chl == NULL) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ for (i = 0; i < nchanges; ++i) {
+ ei = PySequence_Fast_GET_ITEM(seq, i);
+ if (!kqueue_event_Check(ei)) {
+ PyErr_SetString(PyExc_TypeError,
+ "changelist must be an iterable of "
+ "select.kevent objects");
+ goto error;
+ }
+ chl[i] = ((kqueue_event_Object *)ei)->e;
+ }
+ Py_CLEAR(seq);
+ }
+
+ /* event list */
+ if (nevents) {
+ evl = PyMem_New(struct kevent, nevents);
+ if (evl == NULL) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ }
+
+ if (ptimeoutspec)
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+ gotevents = kevent(self->kqfd, chl, nchanges,
+ evl, nevents, ptimeoutspec);
+ Py_END_ALLOW_THREADS
+
+ if (errno != EINTR)
+ break;
+
+ /* kevent() was interrupted by a signal */
+ if (PyErr_CheckSignals())
+ goto error;
+
+ if (ptimeoutspec) {
+ timeout = deadline - _PyTime_GetMonotonicClock();
+ if (timeout < 0) {
+ gotevents = 0;
+ break;
+ }
+ if (_PyTime_AsTimespec(timeout, &timeoutspec) == -1)
+ goto error;
+ /* retry kevent() with the recomputed timeout */
+ }
+ } while (1);
+
+ if (gotevents == -1) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ goto error;
+ }
+
+ result = PyList_New(gotevents);
+ if (result == NULL) {
+ goto error;
+ }
+
+ for (i = 0; i < gotevents; i++) {
+ kqueue_event_Object *ch;
+
+ ch = PyObject_New(kqueue_event_Object, &kqueue_event_Type);
+ if (ch == NULL) {
+ goto error;
+ }
+ ch->e = evl[i];
+ PyList_SET_ITEM(result, i, (PyObject *)ch);
+ }
+ PyMem_Free(chl);
+ PyMem_Free(evl);
+ return result;
+
+ error:
+ PyMem_Free(chl);
+ PyMem_Free(evl);
+ Py_XDECREF(result);
+ Py_XDECREF(seq);
+ return NULL;
+}
+
+PyDoc_STRVAR(kqueue_queue_control_doc,
+"control(changelist, max_events[, timeout=None]) -> eventlist\n\
+\n\
+Calls the kernel kevent function.\n\
+- changelist must be an iterable of kevent objects describing the changes\n\
+ to be made to the kernel's watch list or None.\n\
+- max_events lets you specify the maximum number of events that the\n\
+ kernel will return.\n\
+- timeout is the maximum time to wait in seconds, or else None,\n\
+ to wait forever. timeout accepts floats for smaller timeouts, too.");
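+
+/* Illustrative Python-level sketch of the timeout argument described above
+ * (assumes a kqueue-capable build that exposes this module as 'select'):
+ *
+ * >>> import select
+ * >>> kq = select.kqueue()
+ * >>> kq.control(None, 8, 0)      # poll: return at once with pending events
+ * >>> kq.control(None, 8, 0.25)   # float timeout: wait at most 0.25 seconds
+ * >>> kq.control(None, 8)         # timeout omitted/None: wait indefinitely
+ */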
+
+
+static PyMethodDef kqueue_queue_methods[] = {
+ {"fromfd", (PyCFunction)kqueue_queue_fromfd,
+ METH_VARARGS | METH_CLASS, kqueue_queue_fromfd_doc},
+ {"close", (PyCFunction)kqueue_queue_close, METH_NOARGS,
+ kqueue_queue_close_doc},
+ {"fileno", (PyCFunction)kqueue_queue_fileno, METH_NOARGS,
+ kqueue_queue_fileno_doc},
+ {"control", (PyCFunction)kqueue_queue_control,
+ METH_VARARGS , kqueue_queue_control_doc},
+ {NULL, NULL},
+};
+
+static PyGetSetDef kqueue_queue_getsetlist[] = {
+ {"closed", (getter)kqueue_queue_get_closed, NULL,
+ "True if the kqueue handler is closed"},
+ {0},
+};
+
+PyDoc_STRVAR(kqueue_queue_doc,
+"Kqueue syscall wrapper.\n\
+\n\
+For example, to start watching a socket for input:\n\
+>>> kq = kqueue()\n\
+>>> sock = socket()\n\
+>>> sock.connect((host, port))\n\
+>>> kq.control([kevent(sock, KQ_FILTER_WRITE, KQ_EV_ADD)], 0)\n\
+\n\
+To wait one second for it to become writeable:\n\
+>>> kq.control(None, 1, 1000)\n\
+\n\
+To stop listening:\n\
+>>> kq.control([kevent(sock, KQ_FILTER_WRITE, KQ_EV_DELETE)], 0)");
+
+static PyTypeObject kqueue_queue_Type = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "select.kqueue", /* tp_name */
+ sizeof(kqueue_queue_Object), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)kqueue_queue_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ 0, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /* tp_flags */
+ kqueue_queue_doc, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ kqueue_queue_methods, /* tp_methods */
+ 0, /* tp_members */
+ kqueue_queue_getsetlist, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ kqueue_queue_new, /* tp_new */
+ 0, /* tp_free */
+};
+
+#endif /* HAVE_KQUEUE */
+
+
+
+
+
+/* ************************************************************************ */
+
+PyDoc_STRVAR(select_doc,
+"select(rlist, wlist, xlist[, timeout]) -> (rlist, wlist, xlist)\n\
+\n\
+Wait until one or more file descriptors are ready for some kind of I/O.\n\
+The first three arguments are sequences of file descriptors to be waited for:\n\
+rlist -- wait until ready for reading\n\
+wlist -- wait until ready for writing\n\
+xlist -- wait for an ``exceptional condition''\n\
+If only one kind of condition is required, pass [] for the other lists.\n\
+A file descriptor is either a socket or file object, or a small integer\n\
+gotten from a fileno() method call on one of those.\n\
+\n\
+The optional 4th argument specifies a timeout in seconds; it may be\n\
+a floating point number to specify fractions of seconds. If it is absent\n\
+or None, the call will never time out.\n\
+\n\
+The return value is a tuple of three lists corresponding to the first three\n\
+arguments; each contains the subset of the corresponding file descriptors\n\
+that are ready.\n\
+\n\
+*** IMPORTANT NOTICE ***\n\
+On Windows, only sockets are supported; on Unix, all file\n\
+descriptors can be used.");
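+
+/* Illustrative Python-level sketch of the call described above; socketpair()
+ * is used only to obtain two connected fds and may be unavailable on some
+ * builds:
+ *
+ * >>> import select, socket
+ * >>> a, b = socket.socketpair()
+ * >>> a.sendall(b'ping')
+ * >>> rlist, wlist, xlist = select.select([b], [], [], 1.0)
+ * >>> if rlist:
+ * ...     data = b.recv(4)
+ */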
+
+static PyMethodDef select_methods[] = {
+ {"select", select_select, METH_VARARGS, select_doc},
+#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
+ {"poll", select_poll, METH_NOARGS, poll_doc},
+#endif /* HAVE_POLL */
+#ifdef HAVE_SYS_DEVPOLL_H
+ {"devpoll", select_devpoll, METH_NOARGS, devpoll_doc},
+#endif
+ {0, 0}, /* sentinel */
+};
+
+PyDoc_STRVAR(module_doc,
+"This module supports asynchronous I/O on multiple file descriptors.\n\
+\n\
+*** IMPORTANT NOTICE ***\n\
+On Windows, only sockets are supported; on Unix, all file descriptors.");
+
+
+static struct PyModuleDef selectmodule = {
+ PyModuleDef_HEAD_INIT,
+ "select",
+ module_doc,
+ -1,
+ select_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+
+
+
+PyMODINIT_FUNC
+PyInit_select(void)
+{
+ PyObject *m;
+ m = PyModule_Create(&selectmodule);
+ if (m == NULL)
+ return NULL;
+
+ Py_INCREF(PyExc_OSError);
+ PyModule_AddObject(m, "error", PyExc_OSError);
+
+#ifdef PIPE_BUF
+#ifdef HAVE_BROKEN_PIPE_BUF
+#undef PIPE_BUF
+#define PIPE_BUF 512
+#endif
+ PyModule_AddIntMacro(m, PIPE_BUF);
+#endif
+
+#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
+#ifdef __APPLE__
+ if (select_have_broken_poll()) {
+ if (PyObject_DelAttrString(m, "poll") == -1) {
+ PyErr_Clear();
+ }
+ } else {
+#else
+ {
+#endif
+ if (PyType_Ready(&poll_Type) < 0)
+ return NULL;
+ PyModule_AddIntMacro(m, POLLIN);
+ PyModule_AddIntMacro(m, POLLPRI);
+ PyModule_AddIntMacro(m, POLLOUT);
+ PyModule_AddIntMacro(m, POLLERR);
+ PyModule_AddIntMacro(m, POLLHUP);
+ PyModule_AddIntMacro(m, POLLNVAL);
+
+#ifdef POLLRDNORM
+ PyModule_AddIntMacro(m, POLLRDNORM);
+#endif
+#ifdef POLLRDBAND
+ PyModule_AddIntMacro(m, POLLRDBAND);
+#endif
+#ifdef POLLWRNORM
+ PyModule_AddIntMacro(m, POLLWRNORM);
+#endif
+#ifdef POLLWRBAND
+ PyModule_AddIntMacro(m, POLLWRBAND);
+#endif
+#ifdef POLLMSG
+ PyModule_AddIntMacro(m, POLLMSG);
+#endif
+#ifdef POLLRDHUP
+ /* Kernel 2.6.17+ */
+ PyModule_AddIntMacro(m, POLLRDHUP);
+#endif
+ }
+#endif /* HAVE_POLL */
+
+#ifdef HAVE_SYS_DEVPOLL_H
+ if (PyType_Ready(&devpoll_Type) < 0)
+ return NULL;
+#endif
+
+#ifdef HAVE_EPOLL
+ Py_TYPE(&pyEpoll_Type) = &PyType_Type;
+ if (PyType_Ready(&pyEpoll_Type) < 0)
+ return NULL;
+
+ Py_INCREF(&pyEpoll_Type);
+ PyModule_AddObject(m, "epoll", (PyObject *) &pyEpoll_Type);
+
+ PyModule_AddIntMacro(m, EPOLLIN);
+ PyModule_AddIntMacro(m, EPOLLOUT);
+ PyModule_AddIntMacro(m, EPOLLPRI);
+ PyModule_AddIntMacro(m, EPOLLERR);
+ PyModule_AddIntMacro(m, EPOLLHUP);
+#ifdef EPOLLRDHUP
+ /* Kernel 2.6.17 */
+ PyModule_AddIntMacro(m, EPOLLRDHUP);
+#endif
+ PyModule_AddIntMacro(m, EPOLLET);
+#ifdef EPOLLONESHOT
+ /* Kernel 2.6.2+ */
+ PyModule_AddIntMacro(m, EPOLLONESHOT);
+#endif
+#ifdef EPOLLEXCLUSIVE
+ PyModule_AddIntMacro(m, EPOLLEXCLUSIVE);
+#endif
+
+#ifdef EPOLLRDNORM
+ PyModule_AddIntMacro(m, EPOLLRDNORM);
+#endif
+#ifdef EPOLLRDBAND
+ PyModule_AddIntMacro(m, EPOLLRDBAND);
+#endif
+#ifdef EPOLLWRNORM
+ PyModule_AddIntMacro(m, EPOLLWRNORM);
+#endif
+#ifdef EPOLLWRBAND
+ PyModule_AddIntMacro(m, EPOLLWRBAND);
+#endif
+#ifdef EPOLLMSG
+ PyModule_AddIntMacro(m, EPOLLMSG);
+#endif
+
+#ifdef EPOLL_CLOEXEC
+ PyModule_AddIntMacro(m, EPOLL_CLOEXEC);
+#endif
+#endif /* HAVE_EPOLL */
+
+#ifdef HAVE_KQUEUE
+ kqueue_event_Type.tp_new = PyType_GenericNew;
+ Py_TYPE(&kqueue_event_Type) = &PyType_Type;
+ if(PyType_Ready(&kqueue_event_Type) < 0)
+ return NULL;
+
+ Py_INCREF(&kqueue_event_Type);
+ PyModule_AddObject(m, "kevent", (PyObject *)&kqueue_event_Type);
+
+ Py_TYPE(&kqueue_queue_Type) = &PyType_Type;
+ if(PyType_Ready(&kqueue_queue_Type) < 0)
+ return NULL;
+ Py_INCREF(&kqueue_queue_Type);
+ PyModule_AddObject(m, "kqueue", (PyObject *)&kqueue_queue_Type);
+
+ /* event filters */
+ PyModule_AddIntConstant(m, "KQ_FILTER_READ", EVFILT_READ);
+ PyModule_AddIntConstant(m, "KQ_FILTER_WRITE", EVFILT_WRITE);
+#ifdef EVFILT_AIO
+ PyModule_AddIntConstant(m, "KQ_FILTER_AIO", EVFILT_AIO);
+#endif
+#ifdef EVFILT_VNODE
+ PyModule_AddIntConstant(m, "KQ_FILTER_VNODE", EVFILT_VNODE);
+#endif
+#ifdef EVFILT_PROC
+ PyModule_AddIntConstant(m, "KQ_FILTER_PROC", EVFILT_PROC);
+#endif
+#ifdef EVFILT_NETDEV
+ PyModule_AddIntConstant(m, "KQ_FILTER_NETDEV", EVFILT_NETDEV);
+#endif
+#ifdef EVFILT_SIGNAL
+ PyModule_AddIntConstant(m, "KQ_FILTER_SIGNAL", EVFILT_SIGNAL);
+#endif
+ PyModule_AddIntConstant(m, "KQ_FILTER_TIMER", EVFILT_TIMER);
+
+ /* event flags */
+ PyModule_AddIntConstant(m, "KQ_EV_ADD", EV_ADD);
+ PyModule_AddIntConstant(m, "KQ_EV_DELETE", EV_DELETE);
+ PyModule_AddIntConstant(m, "KQ_EV_ENABLE", EV_ENABLE);
+ PyModule_AddIntConstant(m, "KQ_EV_DISABLE", EV_DISABLE);
+ PyModule_AddIntConstant(m, "KQ_EV_ONESHOT", EV_ONESHOT);
+ PyModule_AddIntConstant(m, "KQ_EV_CLEAR", EV_CLEAR);
+
+#ifdef EV_SYSFLAGS
+ PyModule_AddIntConstant(m, "KQ_EV_SYSFLAGS", EV_SYSFLAGS);
+#endif
+#ifdef EV_FLAG1
+ PyModule_AddIntConstant(m, "KQ_EV_FLAG1", EV_FLAG1);
+#endif
+
+ PyModule_AddIntConstant(m, "KQ_EV_EOF", EV_EOF);
+ PyModule_AddIntConstant(m, "KQ_EV_ERROR", EV_ERROR);
+
+ /* READ WRITE filter flag */
+#ifdef NOTE_LOWAT
+ PyModule_AddIntConstant(m, "KQ_NOTE_LOWAT", NOTE_LOWAT);
+#endif
+
+ /* VNODE filter flags */
+#ifdef EVFILT_VNODE
+ PyModule_AddIntConstant(m, "KQ_NOTE_DELETE", NOTE_DELETE);
+ PyModule_AddIntConstant(m, "KQ_NOTE_WRITE", NOTE_WRITE);
+ PyModule_AddIntConstant(m, "KQ_NOTE_EXTEND", NOTE_EXTEND);
+ PyModule_AddIntConstant(m, "KQ_NOTE_ATTRIB", NOTE_ATTRIB);
+ PyModule_AddIntConstant(m, "KQ_NOTE_LINK", NOTE_LINK);
+ PyModule_AddIntConstant(m, "KQ_NOTE_RENAME", NOTE_RENAME);
+ PyModule_AddIntConstant(m, "KQ_NOTE_REVOKE", NOTE_REVOKE);
+#endif
+
+ /* PROC filter flags */
+#ifdef EVFILT_PROC
+ PyModule_AddIntConstant(m, "KQ_NOTE_EXIT", NOTE_EXIT);
+ PyModule_AddIntConstant(m, "KQ_NOTE_FORK", NOTE_FORK);
+ PyModule_AddIntConstant(m, "KQ_NOTE_EXEC", NOTE_EXEC);
+ PyModule_AddIntConstant(m, "KQ_NOTE_PCTRLMASK", NOTE_PCTRLMASK);
+ PyModule_AddIntConstant(m, "KQ_NOTE_PDATAMASK", NOTE_PDATAMASK);
+
+ PyModule_AddIntConstant(m, "KQ_NOTE_TRACK", NOTE_TRACK);
+ PyModule_AddIntConstant(m, "KQ_NOTE_CHILD", NOTE_CHILD);
+ PyModule_AddIntConstant(m, "KQ_NOTE_TRACKERR", NOTE_TRACKERR);
+#endif
+
+ /* NETDEV filter flags */
+#ifdef EVFILT_NETDEV
+ PyModule_AddIntConstant(m, "KQ_NOTE_LINKUP", NOTE_LINKUP);
+ PyModule_AddIntConstant(m, "KQ_NOTE_LINKDOWN", NOTE_LINKDOWN);
+ PyModule_AddIntConstant(m, "KQ_NOTE_LINKINV", NOTE_LINKINV);
+#endif
+
+#endif /* HAVE_KQUEUE */
+ return m;
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
new file mode 100644
index 00000000..d5bb9f59
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
@@ -0,0 +1,7810 @@
+/* Socket module */
+
+/*
+
+This module provides an interface to Berkeley socket IPC.
+
+Limitations:
+
+- Only AF_INET, AF_INET6 and AF_UNIX address families are supported in a
+ portable manner, though AF_PACKET, AF_NETLINK and AF_TIPC are supported
+ under Linux.
+- No read/write operations (use sendall/recv or makefile instead).
+- Additional restrictions apply on some non-Unix platforms (compensated
+ for by socket.py).
+
+Module interface:
+
+- socket.error: exception raised for socket specific errors, alias for OSError
+- socket.gaierror: exception raised for getaddrinfo/getnameinfo errors,
+ a subclass of socket.error
+- socket.herror: exception raised for gethostby* errors,
+ a subclass of socket.error
+- socket.gethostbyname(hostname) --> host IP address (string: 'dd.dd.dd.dd')
+- socket.gethostbyaddr(IP address) --> (hostname, [alias, ...], [IP addr, ...])
+- socket.gethostname() --> host name (string: 'spam' or 'spam.domain.com')
+- socket.getprotobyname(protocolname) --> protocol number
+- socket.getservbyname(servicename[, protocolname]) --> port number
+- socket.getservbyport(portnumber[, protocolname]) --> service name
+- socket.socket([family[, type [, proto, fileno]]]) --> new socket object
+ (fileno specifies a pre-existing socket file descriptor)
+- socket.socketpair([family[, type [, proto]]]) --> (socket, socket)
+- socket.ntohs(16 bit value) --> new int object
+- socket.ntohl(32 bit value) --> new int object
+- socket.htons(16 bit value) --> new int object
+- socket.htonl(32 bit value) --> new int object
+- socket.getaddrinfo(host, port [, family, type, proto, flags])
+ --> List of (family, type, proto, canonname, sockaddr)
+- socket.getnameinfo(sockaddr, flags) --> (host, port)
+- socket.AF_INET, socket.SOCK_STREAM, etc.: constants from <socket.h>
+- socket.has_ipv6: boolean value indicating if IPv6 is supported
+- socket.inet_aton(IP address) -> 32-bit packed IP representation
+- socket.inet_ntoa(packed IP) -> IP address string
+- socket.getdefaulttimeout() -> None | float
+- socket.setdefaulttimeout(None | float)
+- socket.if_nameindex() -> list of tuples (if_index, if_name)
+- socket.if_nametoindex(name) -> corresponding interface index
+- socket.if_indextoname(index) -> corresponding interface name
+- an Internet socket address is a pair (hostname, port)
+ where hostname can be anything recognized by gethostbyname()
+ (including the dd.dd.dd.dd notation) and port is in host byte order
+- where a hostname is returned, the dd.dd.dd.dd notation is used
+- a UNIX domain socket address is a string specifying the pathname
+- an AF_PACKET socket address is a tuple containing a string
+ specifying the ethernet interface and an integer specifying
+ the Ethernet protocol number to be received. For example:
+ ("eth0",0x1234). Optional 3rd,4th,5th elements in the tuple
+ specify packet-type and ha-type/addr.
+- an AF_TIPC socket address is expressed as
+ (addr_type, v1, v2, v3 [, scope]); where addr_type can be one of:
+ TIPC_ADDR_NAMESEQ, TIPC_ADDR_NAME, and TIPC_ADDR_ID;
+ and scope can be one of:
+ TIPC_ZONE_SCOPE, TIPC_CLUSTER_SCOPE, and TIPC_NODE_SCOPE.
+ The meaning of v1, v2 and v3 depends on the value of addr_type:
+ if addr_type is TIPC_ADDR_NAME:
+ v1 is the server type
+ v2 is the port identifier
+ v3 is ignored
+ if addr_type is TIPC_ADDR_NAMESEQ:
+ v1 is the server type
+ v2 is the lower port number
+ v3 is the upper port number
+ if addr_type is TIPC_ADDR_ID:
+ v1 is the node
+ v2 is the ref
+ v3 is ignored
+
+
+Local naming conventions:
+
+- names starting with sock_ are socket object methods
+- names starting with socket_ are module-level functions
+- names starting with PySocket are exported through socketmodule.h
+
+*/
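+
+/* Illustrative Python-level sketch of the module interface listed above
+ * (a simple TCP client; assumes name resolution and outbound TCP are
+ * available in the runtime environment):
+ *
+ * >>> import socket
+ * >>> infos = socket.getaddrinfo('example.org', 80, socket.AF_UNSPEC,
+ * ...                            socket.SOCK_STREAM)
+ * >>> family, type_, proto, canonname, sockaddr = infos[0]
+ * >>> s = socket.socket(family, type_, proto)
+ * >>> s.connect(sockaddr)
+ * >>> s.sendall(b'GET / HTTP/1.0\r\nHost: example.org\r\n\r\n')
+ * >>> reply = s.recv(4096)
+ * >>> s.close()
+ */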
+
+#ifdef __APPLE__
+#include <AvailabilityMacros.h>
+/* for getaddrinfo thread safety test on old versions of OS X */
+#ifndef MAC_OS_X_VERSION_10_5
+#define MAC_OS_X_VERSION_10_5 1050
+#endif
+ /*
+ * inet_aton is not available on OSX 10.3, yet we want to use a binary
+ * that was built on 10.4 or later to work on that release; weak linking
+ * comes to the rescue.
+ */
+# pragma weak inet_aton
+#endif
+
+#include "Python.h"
+#include "structmember.h"
+
+/* Socket object documentation */
+PyDoc_STRVAR(sock_doc,
+"socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None) -> socket object\n\
+\n\
+Open a socket of the given type. The family argument specifies the\n\
+address family; it defaults to AF_INET. The type argument specifies\n\
+whether this is a stream (SOCK_STREAM, this is the default)\n\
+or datagram (SOCK_DGRAM) socket. The protocol argument defaults to 0,\n\
+specifying the default protocol. Keyword arguments are accepted.\n\
+The socket is created as non-inheritable.\n\
+\n\
+A socket object represents one endpoint of a network connection.\n\
+\n\
+Methods of socket objects (keyword arguments not allowed):\n\
+\n\
+_accept() -- accept connection, returning new socket fd and client address\n\
+bind(addr) -- bind the socket to a local address\n\
+close() -- close the socket\n\
+connect(addr) -- connect the socket to a remote address\n\
+connect_ex(addr) -- connect, return an error code instead of an exception\n\
+dup() -- return a new socket fd duplicated from fileno()\n\
+fileno() -- return underlying file descriptor\n\
+getpeername() -- return remote address [*]\n\
+getsockname() -- return local address\n\
+getsockopt(level, optname[, buflen]) -- get socket options\n\
+gettimeout() -- return timeout or None\n\
+listen([n]) -- start listening for incoming connections\n\
+recv(buflen[, flags]) -- receive data\n\
+recv_into(buffer[, nbytes[, flags]]) -- receive data (into a buffer)\n\
+recvfrom(buflen[, flags]) -- receive data and sender\'s address\n\
+recvfrom_into(buffer[, nbytes, [, flags])\n\
+ -- receive data and sender\'s address (into a buffer)\n\
+sendall(data[, flags]) -- send all data\n\
+send(data[, flags]) -- send data, may not send all of it\n\
+sendto(data[, flags], addr) -- send data to a given address\n\
+setblocking(0 | 1) -- set or clear the blocking I/O flag\n\
+setsockopt(level, optname, value[, optlen]) -- set socket options\n\
+settimeout(None | float) -- set or clear the timeout\n\
+shutdown(how) -- shut down traffic in one or both directions\n\
+if_nameindex() -- return all network interface indices and names\n\
+if_nametoindex(name) -- return the corresponding interface index\n\
+if_indextoname(index) -- return the corresponding interface name\n\
+\n\
+ [*] not available on all platforms!");
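+
+/* Illustrative Python-level sketch of a few of the methods listed above,
+ * using the high-level socket.socket wrapper (a loopback listener; assumes
+ * TCP over loopback works in the target environment):
+ *
+ * >>> import socket
+ * >>> srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ * >>> srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ * >>> srv.bind(('127.0.0.1', 0))    # port 0: let the stack pick a free port
+ * >>> srv.listen(1)
+ * >>> srv.settimeout(5.0)
+ * >>> srv.getsockname()
+ * >>> conn, peer = srv.accept()     # blocks until a client connects
+ */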
+
+/* XXX This is a terrible mess of platform-dependent preprocessor hacks.
+ I hope some day someone can clean this up please... */
+
+/* Hacks for gethostbyname_r(). On some non-Linux platforms, the configure
+ script doesn't get this right, so we hardcode some platform checks below.
+ On the other hand, not all Linux versions agree, so there the settings
+ computed by the configure script are needed! */
+
+#ifndef __linux__
+# undef HAVE_GETHOSTBYNAME_R_3_ARG
+# undef HAVE_GETHOSTBYNAME_R_5_ARG
+# undef HAVE_GETHOSTBYNAME_R_6_ARG
+#endif
+
+#if defined(__OpenBSD__)
+# include <sys/uio.h>
+#endif
+
+#if !defined(WITH_THREAD)
+# undef HAVE_GETHOSTBYNAME_R
+#endif
+
+#if defined(__ANDROID__) && __ANDROID_API__ < 23
+# undef HAVE_GETHOSTBYNAME_R
+#endif
+
+#ifdef HAVE_GETHOSTBYNAME_R
+# if defined(_AIX) && !defined(_LINUX_SOURCE_COMPAT)
+# define HAVE_GETHOSTBYNAME_R_3_ARG
+# elif defined(__sun) || defined(__sgi)
+# define HAVE_GETHOSTBYNAME_R_5_ARG
+# elif defined(__linux__)
+/* Rely on the configure script */
+# elif defined(_LINUX_SOURCE_COMPAT) /* Linux compatibility on AIX */
+# define HAVE_GETHOSTBYNAME_R_6_ARG
+# else
+# undef HAVE_GETHOSTBYNAME_R
+# endif
+#endif
+
+#if !defined(HAVE_GETHOSTBYNAME_R) && defined(WITH_THREAD) && \
+ !defined(MS_WINDOWS)
+# define USE_GETHOSTBYNAME_LOCK
+#endif
+
+/* To use __FreeBSD_version, __OpenBSD__, and __NetBSD_Version__ */
+#ifdef HAVE_SYS_PARAM_H
+#include <sys/param.h>
+#endif
+/* On systems on which getaddrinfo() is believed to not be thread-safe,
+ (this includes the getaddrinfo emulation) protect access with a lock.
+
+ getaddrinfo is thread-safe on Mac OS X 10.5 and later. Originally it was
+ a mix of code including an unsafe implementation from an old BSD's
+ libresolv. In 10.5 Apple reimplemented it as a safe IPC call to the
+ mDNSResponder process. 10.5 is the first to be UNIX '03 certified, which
+ includes the requirement that getaddrinfo be thread-safe. See issue #25924.
+
+ It's thread-safe in OpenBSD starting with 5.4, released Nov 2013:
+ http://www.openbsd.org/plus54.html
+
+ It's thread-safe in NetBSD starting with 4.0, released Dec 2007:
+
+http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/net/getaddrinfo.c.diff?r1=1.82&r2=1.83
+ */
+#if defined(WITH_THREAD) && ( \
+ (defined(__APPLE__) && \
+ MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_5) || \
+ (defined(__FreeBSD__) && __FreeBSD_version+0 < 503000) || \
+ (defined(__OpenBSD__) && OpenBSD+0 < 201311) || \
+ (defined(__NetBSD__) && __NetBSD_Version__+0 < 400000000) || \
+ !defined(HAVE_GETADDRINFO))
+#define USE_GETADDRINFO_LOCK
+#endif
+
+#ifdef USE_GETADDRINFO_LOCK
+#define ACQUIRE_GETADDRINFO_LOCK PyThread_acquire_lock(netdb_lock, 1);
+#define RELEASE_GETADDRINFO_LOCK PyThread_release_lock(netdb_lock);
+#else
+#define ACQUIRE_GETADDRINFO_LOCK
+#define RELEASE_GETADDRINFO_LOCK
+#endif
+
+#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
+# include "pythread.h"
+#endif
+
+#if defined(PYCC_VACPP)
+# include <types.h>
+# include <io.h>
+# include <sys/ioctl.h>
+# include <utils.h>
+# include <ctype.h>
+#endif
+
+#if defined(__APPLE__) || defined(__CYGWIN__) || defined(__NetBSD__)
+# include <sys/ioctl.h>
+#endif
+
+
+#if defined(__sgi) && _COMPILER_VERSION>700 && !_SGIAPI
+/* make sure that the reentrant (gethostbyaddr_r etc)
+ functions are declared correctly if compiling with
+ MIPSPro 7.x in ANSI C mode (default) */
+
+/* XXX Using _SGIAPI is the wrong thing,
+ but I don't know what the right thing is. */
+#undef _SGIAPI /* to avoid warning */
+#define _SGIAPI 1
+
+#undef _XOPEN_SOURCE
+#include <sys/socket.h>
+#include <sys/types.h>
+#include <netinet/in.h>
+#ifdef _SS_ALIGNSIZE
+#define HAVE_GETADDRINFO 1
+#define HAVE_GETNAMEINFO 1
+#endif
+
+#define HAVE_INET_PTON
+#include <netdb.h>
+#endif
+
+/* Irix 6.5 fails to define this variable at all. This is needed
+ for both GCC and SGI's compiler. I'd say that the SGI headers
+ are just busted. Same thing for Solaris. */
+#if (defined(__sgi) || defined(sun)) && !defined(INET_ADDRSTRLEN)
+#define INET_ADDRSTRLEN 16
+#endif
+
+/* Generic includes */
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#ifdef HAVE_SYS_SOCKET_H
+#include <sys/socket.h>
+#endif
+
+#ifdef HAVE_NET_IF_H
+#include <net/if.h>
+#endif
+
+/* Generic socket object definitions and includes */
+#define PySocket_BUILDING_SOCKET
+#include "socketmodule.h"
+
+/* Addressing includes */
+
+#ifndef MS_WINDOWS
+
+/* Non-MS WINDOWS includes */
+# include <netdb.h>
+# include <unistd.h>
+
+/* Headers needed for inet_ntoa() and inet_addr() */
+# include <arpa/inet.h>
+
+# include <fcntl.h>
+
+#else
+
+/* MS_WINDOWS includes */
+# ifdef HAVE_FCNTL_H
+# include <fcntl.h>
+# endif
+
+/* Provides the IsWindows7SP1OrGreater() function */
+#include <VersionHelpers.h>
+
+/* remove some flags on older versions of Windows at run-time.
+ https://msdn.microsoft.com/en-us/library/windows/desktop/ms738596.aspx */
+typedef struct {
+ DWORD build_number; /* available starting with this Win10 BuildNumber */
+ const char flag_name[20];
+} FlagRuntimeInfo;
+
+/* IMPORTANT: make sure the list is ordered by descending build_number */
+static FlagRuntimeInfo win_runtime_flags[] = {
+ /* available starting with Windows 10 1703 */
+ {15063, "TCP_KEEPCNT"},
+ /* available starting with Windows 10 1607 */
+ {14393, "TCP_FASTOPEN"}
+};
+
+static void
+remove_unusable_flags(PyObject *m)
+{
+ PyObject *dict;
+ OSVERSIONINFOEX info;
+ DWORDLONG dwlConditionMask;
+
+ dict = PyModule_GetDict(m);
+ if (dict == NULL) {
+ return;
+ }
+
+ /* set to Windows 10, except BuildNumber. */
+ memset(&info, 0, sizeof(info));
+ info.dwOSVersionInfoSize = sizeof(info);
+ info.dwMajorVersion = 10;
+ info.dwMinorVersion = 0;
+
+ /* set Condition Mask */
+ dwlConditionMask = 0;
+ VER_SET_CONDITION(dwlConditionMask, VER_MAJORVERSION, VER_GREATER_EQUAL);
+ VER_SET_CONDITION(dwlConditionMask, VER_MINORVERSION, VER_GREATER_EQUAL);
+ VER_SET_CONDITION(dwlConditionMask, VER_BUILDNUMBER, VER_GREATER_EQUAL);
+
+ for (int i=0; i<sizeof(win_runtime_flags)/sizeof(FlagRuntimeInfo); i++) {
+ info.dwBuildNumber = win_runtime_flags[i].build_number;
+ /* greater than or equal to the specified version?
+ Compatibility Mode will not cheat VerifyVersionInfo(...) */
+ if (VerifyVersionInfo(
+ &info,
+ VER_MAJORVERSION|VER_MINORVERSION|VER_BUILDNUMBER,
+ dwlConditionMask)) {
+ break;
+ }
+ else {
+ if (PyDict_GetItemString(
+ dict,
+ win_runtime_flags[i].flag_name) != NULL)
+ {
+ if (PyDict_DelItemString(
+ dict,
+ win_runtime_flags[i].flag_name))
+ {
+ PyErr_Clear();
+ }
+ }
+ }
+ }
+}
+
+#endif
+
+#include <stddef.h>
+
+#ifndef O_NONBLOCK
+# define O_NONBLOCK O_NDELAY
+#endif
+
+/* include Python's addrinfo.h unless it causes trouble */
+#if defined(__sgi) && _COMPILER_VERSION>700 && defined(_SS_ALIGNSIZE)
+ /* Do not include addrinfo.h on some newer IRIX versions.
+ * _SS_ALIGNSIZE is defined in sys/socket.h by 6.5.21,
+ * for example, but not by 6.5.10.
+ */
+#elif defined(_MSC_VER) && _MSC_VER>1201
+ /* Do not include addrinfo.h for MSVC7 or greater. 'addrinfo' and
+ * EAI_* constants are defined in (the already included) ws2tcpip.h.
+ */
+#else
+# include "addrinfo.h"
+#endif
+
+#ifndef HAVE_INET_PTON
+#if !defined(NTDDI_VERSION) || (NTDDI_VERSION < NTDDI_LONGHORN)
+int inet_pton(int af, const char *src, void *dst);
+const char *inet_ntop(int af, const void *src, char *dst, socklen_t size);
+#endif
+#endif
+
+#ifdef __APPLE__
+/* On OS X, getaddrinfo returns no error indication of lookup
+ failure, so we must use the emulation instead of the libinfo
+ implementation. Unfortunately, performing an autoconf test
+ for this bug would require DNS access for the machine performing
+ the configuration, which is not acceptable. Therefore, we
+ determine the bug just by checking for __APPLE__. If this bug
+ ever gets fixed, perhaps checking for sys/version.h would be
+ appropriate, which is 10/0 on the system with the bug. */
+#ifndef HAVE_GETNAMEINFO
+/* This bug seems to be fixed in Jaguar. The easiest way I could
+ find to check for Jaguar is that it has getnameinfo(), which
+ older releases don't have */
+#undef HAVE_GETADDRINFO
+#endif
+
+#ifdef HAVE_INET_ATON
+#define USE_INET_ATON_WEAKLINK
+#endif
+
+#endif
+
+/* I know this is a bad practice, but it is the easiest... */
+#if !defined(HAVE_GETADDRINFO)
+/* avoid clashes with the C library definition of the symbol. */
+#define getaddrinfo fake_getaddrinfo
+#define gai_strerror fake_gai_strerror
+#define freeaddrinfo fake_freeaddrinfo
+#include "getaddrinfo.c"
+#endif
+#if !defined(HAVE_GETNAMEINFO)
+#define getnameinfo fake_getnameinfo
+#include "getnameinfo.c"
+#endif
+
+#ifdef MS_WINDOWS
+#define SOCKETCLOSE closesocket
+#endif
+
+#ifdef MS_WIN32
+#undef EAFNOSUPPORT
+#define EAFNOSUPPORT WSAEAFNOSUPPORT
+#define snprintf _snprintf
+#endif
+
+#ifndef SOCKETCLOSE
+#define SOCKETCLOSE close
+#endif
+
+#if (defined(HAVE_BLUETOOTH_H) || defined(HAVE_BLUETOOTH_BLUETOOTH_H)) && !defined(__NetBSD__) && !defined(__DragonFly__)
+#define USE_BLUETOOTH 1
+#if defined(__FreeBSD__)
+#define BTPROTO_L2CAP BLUETOOTH_PROTO_L2CAP
+#define BTPROTO_RFCOMM BLUETOOTH_PROTO_RFCOMM
+#define BTPROTO_HCI BLUETOOTH_PROTO_HCI
+#define SOL_HCI SOL_HCI_RAW
+#define HCI_FILTER SO_HCI_RAW_FILTER
+#define sockaddr_l2 sockaddr_l2cap
+#define sockaddr_rc sockaddr_rfcomm
+#define hci_dev hci_node
+#define _BT_L2_MEMB(sa, memb) ((sa)->l2cap_##memb)
+#define _BT_RC_MEMB(sa, memb) ((sa)->rfcomm_##memb)
+#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
+#elif defined(__NetBSD__) || defined(__DragonFly__)
+#define sockaddr_l2 sockaddr_bt
+#define sockaddr_rc sockaddr_bt
+#define sockaddr_hci sockaddr_bt
+#define sockaddr_sco sockaddr_bt
+#define SOL_HCI BTPROTO_HCI
+#define HCI_DATA_DIR SO_HCI_DIRECTION
+#define _BT_L2_MEMB(sa, memb) ((sa)->bt_##memb)
+#define _BT_RC_MEMB(sa, memb) ((sa)->bt_##memb)
+#define _BT_HCI_MEMB(sa, memb) ((sa)->bt_##memb)
+#define _BT_SCO_MEMB(sa, memb) ((sa)->bt_##memb)
+#else
+#define _BT_L2_MEMB(sa, memb) ((sa)->l2_##memb)
+#define _BT_RC_MEMB(sa, memb) ((sa)->rc_##memb)
+#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
+#define _BT_SCO_MEMB(sa, memb) ((sa)->sco_##memb)
+#endif
+#endif
+
+/* Convert "sock_addr_t *" to "struct sockaddr *". */
+#define SAS2SA(x) (&((x)->sa))
+
+/*
+ * Constants for getnameinfo()
+ */
+#if !defined(NI_MAXHOST)
+#define NI_MAXHOST 1025
+#endif
+#if !defined(NI_MAXSERV)
+#define NI_MAXSERV 32
+#endif
+
+#ifndef INVALID_SOCKET /* MS defines this */
+#define INVALID_SOCKET (-1)
+#endif
+
+#ifndef INADDR_NONE
+#define INADDR_NONE (-1)
+#endif
+
+/* XXX There's a problem here: *static* functions are not supposed to have
+ a Py prefix (or use CapitalizedWords). Later... */
+
+/* Global variable holding the exception type for errors detected
+ by this module (but not argument type or memory errors, etc.). */
+static PyObject *socket_herror;
+static PyObject *socket_gaierror;
+static PyObject *socket_timeout;
+
+/* A forward reference to the socket type object.
+ The sock_type variable contains pointers to various functions,
+ some of which call new_sockobject(), which uses sock_type, so
+ there has to be a circular reference. */
+static PyTypeObject sock_type;
+
+#if defined(HAVE_POLL_H)
+#include <poll.h>
+#elif defined(HAVE_SYS_POLL_H)
+#include <sys/poll.h>
+#endif
+
+/* Largest value to try to store in a socklen_t (used when handling
+ ancillary data). POSIX requires socklen_t to hold at least
+ (2**31)-1 and recommends against storing larger values, but
+ socklen_t was originally int in the BSD interface, so to be on the
+ safe side we use the smaller of (2**31)-1 and INT_MAX. */
+#if INT_MAX > 0x7fffffff
+#define SOCKLEN_T_LIMIT 0x7fffffff
+#else
+#define SOCKLEN_T_LIMIT INT_MAX
+#endif
+
+#ifdef HAVE_POLL
+/* Instead of select(), we'll use poll() since poll() works on any fd. */
+#define IS_SELECTABLE(s) 1
+/* Can we call select() with this socket without a buffer overrun? */
+#else
+/* If there's no timeout left, we don't have to call select, so it's a safe,
+ * little white lie. */
+#define IS_SELECTABLE(s) (_PyIsSelectable_fd((s)->sock_fd) || (s)->sock_timeout <= 0)
+#endif
+
+static PyObject*
+select_error(void)
+{
+ PyErr_SetString(PyExc_OSError, "unable to select on socket");
+ return NULL;
+}
+
+#ifdef MS_WINDOWS
+#ifndef WSAEAGAIN
+#define WSAEAGAIN WSAEWOULDBLOCK
+#endif
+#define CHECK_ERRNO(expected) \
+ (WSAGetLastError() == WSA ## expected)
+#else
+#define CHECK_ERRNO(expected) \
+ (errno == expected)
+#endif
+
+#ifdef MS_WINDOWS
+# define GET_SOCK_ERROR WSAGetLastError()
+# define SET_SOCK_ERROR(err) WSASetLastError(err)
+# define SOCK_TIMEOUT_ERR WSAEWOULDBLOCK
+# define SOCK_INPROGRESS_ERR WSAEWOULDBLOCK
+#else
+# define GET_SOCK_ERROR errno
+# define SET_SOCK_ERROR(err) do { errno = err; } while (0)
+# define SOCK_TIMEOUT_ERR EWOULDBLOCK
+# define SOCK_INPROGRESS_ERR EINPROGRESS
+#endif
+
+
+#ifdef MS_WINDOWS
+/* Does WSASocket() support the WSA_FLAG_NO_HANDLE_INHERIT flag? */
+static int support_wsa_no_inherit = -1;
+#endif
+
+/* Convenience function to raise an error according to errno
+ and return a NULL pointer from a function. */
+
+static PyObject *
+set_error(void)
+{
+#ifdef MS_WINDOWS
+ int err_no = WSAGetLastError();
+ /* PyErr_SetExcFromWindowsErr() invokes FormatMessage() which
+ recognizes the error codes used by both GetLastError() and
+ WSAGetLastError */
+ if (err_no)
+ return PyErr_SetExcFromWindowsErr(PyExc_OSError, err_no);
+#endif
+
+ return PyErr_SetFromErrno(PyExc_OSError);
+}
+
+
+static PyObject *
+set_herror(int h_error)
+{
+ PyObject *v;
+
+#ifdef HAVE_HSTRERROR
+ v = Py_BuildValue("(is)", h_error, (char *)hstrerror(h_error));
+#else
+ v = Py_BuildValue("(is)", h_error, "host not found");
+#endif
+ if (v != NULL) {
+ PyErr_SetObject(socket_herror, v);
+ Py_DECREF(v);
+ }
+
+ return NULL;
+}
+
+
+static PyObject *
+set_gaierror(int error)
+{
+ PyObject *v;
+
+#ifdef EAI_SYSTEM
+ /* EAI_SYSTEM is not available on Windows XP. */
+ if (error == EAI_SYSTEM)
+ return set_error();
+#endif
+
+#ifdef HAVE_GAI_STRERROR
+ v = Py_BuildValue("(is)", error, gai_strerror(error));
+#else
+ v = Py_BuildValue("(is)", error, "getaddrinfo failed");
+#endif
+ if (v != NULL) {
+ PyErr_SetObject(socket_gaierror, v);
+ Py_DECREF(v);
+ }
+
+ return NULL;
+}
+
+/* Function to perform the setting of socket blocking mode
+ internally. block = (1 | 0). */
+static int
+internal_setblocking(PySocketSockObject *s, int block)
+{
+ int result = -1;
+#ifdef MS_WINDOWS
+ u_long arg;
+#endif
+#if !defined(MS_WINDOWS) \
+ && !((defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO)))
+ int delay_flag, new_delay_flag;
+#endif
+#ifdef SOCK_NONBLOCK
+ if (block)
+ s->sock_type &= (~SOCK_NONBLOCK);
+ else
+ s->sock_type |= SOCK_NONBLOCK;
+#endif
+
+ Py_BEGIN_ALLOW_THREADS
+#ifndef MS_WINDOWS
+#if (defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO))
+ block = !block;
+ if (ioctl(s->sock_fd, FIONBIO, (unsigned int *)&block) == -1)
+ goto done;
+#else
+ delay_flag = fcntl(s->sock_fd, F_GETFL, 0);
+ if (delay_flag == -1)
+ goto done;
+ if (block)
+ new_delay_flag = delay_flag & (~O_NONBLOCK);
+ else
+ new_delay_flag = delay_flag | O_NONBLOCK;
+ if (new_delay_flag != delay_flag)
+ if (fcntl(s->sock_fd, F_SETFL, new_delay_flag) == -1)
+ goto done;
+#endif
+#else /* MS_WINDOWS */
+ arg = !block;
+ if (ioctlsocket(s->sock_fd, FIONBIO, &arg) != 0)
+ goto done;
+#endif /* MS_WINDOWS */
+
+ result = 0;
+
+ done:
+ ; /* necessary for --without-threads flag */
+ Py_END_ALLOW_THREADS
+
+ if (result) {
+#ifndef MS_WINDOWS
+ PyErr_SetFromErrno(PyExc_OSError);
+#else
+ PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
+#endif
+ }
+
+ return result;
+}
+
+static int
+internal_select(PySocketSockObject *s, int writing, _PyTime_t interval,
+ int connect)
+{
+ int n;
+#ifdef HAVE_POLL
+ struct pollfd pollfd;
+ _PyTime_t ms;
+#else
+ fd_set fds, efds;
+ struct timeval tv, *tvp;
+#endif
+
+#ifdef WITH_THREAD
+ /* must be called with the GIL held */
+ assert(PyGILState_Check());
+#endif
+
+ /* Error condition is for output only */
+ assert(!(connect && !writing));
+
+ /* Guard against closed socket */
+ if (s->sock_fd == INVALID_SOCKET)
+ return 0;
+
+ /* Prefer poll, if available, since you can poll() any fd
+ * which can't be done with select(). */
+#ifdef HAVE_POLL
+ pollfd.fd = s->sock_fd;
+ pollfd.events = writing ? POLLOUT : POLLIN;
+ if (connect) {
+ /* On Windows, the socket becomes writable on connection success,
+ but a connection failure is notified as an error. On POSIX, the
+ socket becomes writable on connection success or on connection
+ failure. */
+ pollfd.events |= POLLERR;
+ }
+
+ /* s->sock_timeout is in seconds, timeout in ms */
+ ms = _PyTime_AsMilliseconds(interval, _PyTime_ROUND_CEILING);
+ assert(ms <= INT_MAX);
+
+ Py_BEGIN_ALLOW_THREADS;
+ n = poll(&pollfd, 1, (int)ms);
+ Py_END_ALLOW_THREADS;
+#else
+ if (interval >= 0) {
+ _PyTime_AsTimeval_noraise(interval, &tv, _PyTime_ROUND_CEILING);
+ tvp = &tv;
+ }
+ else
+ tvp = NULL;
+
+ FD_ZERO(&fds);
+ FD_SET(s->sock_fd, &fds);
+ FD_ZERO(&efds);
+ if (connect) {
+ /* On Windows, the socket becomes writable on connection success,
+ but a connection failure is notified as an error. On POSIX, the
+ socket becomes writable on connection success or on connection
+ failure. */
+ FD_SET(s->sock_fd, &efds);
+ }
+
+ /* See if the socket is ready */
+ Py_BEGIN_ALLOW_THREADS;
+ if (writing)
+ n = select(Py_SAFE_DOWNCAST(s->sock_fd+1, SOCKET_T, int),
+ NULL, &fds, &efds, tvp);
+ else
+ n = select(Py_SAFE_DOWNCAST(s->sock_fd+1, SOCKET_T, int),
+ &fds, NULL, &efds, tvp);
+ Py_END_ALLOW_THREADS;
+#endif
+
+ if (n < 0)
+ return -1;
+ if (n == 0)
+ return 1;
+ return 0;
+}
+
+/* Call a socket function.
+
+ On error, raise an exception and return -1 if err is set, or fill err and
+ return -1 otherwise. If a signal was received and the signal handler raised
+ an exception, return -1, and set err to -1 if err is set.
+
+ On success, return 0, and set err to 0 if err is set.
+
+ If the socket has a timeout, wait until the socket is ready before calling
+ the function: wait until the socket is writable if writing is nonzero, wait
+ until the socket has received data otherwise.
+
+ If the socket function is interrupted by a signal (failed with EINTR): retry
+ the function, except if the signal handler raised an exception (PEP 475).
+
+ When the function is retried, recompute the timeout using a monotonic clock.
+
+ sock_call_ex() must be called with the GIL held. The socket function is
+ called with the GIL released. */
+static int
+sock_call_ex(PySocketSockObject *s,
+ int writing,
+ int (*sock_func) (PySocketSockObject *s, void *data),
+ void *data,
+ int connect,
+ int *err,
+ _PyTime_t timeout)
+{
+ int has_timeout = (timeout > 0);
+ _PyTime_t deadline = 0;
+ int deadline_initialized = 0;
+ int res;
+
+#ifdef WITH_THREAD
+ /* sock_call() must be called with the GIL held. */
+ assert(PyGILState_Check());
+#endif
+
+ /* outer loop to retry select() when select() is interrupted by a signal
+ or to retry select()+sock_func() on false positive (see above) */
+ while (1) {
+ /* For connect(), poll even for blocking socket. The connection
+ runs asynchronously. */
+ if (has_timeout || connect) {
+ if (has_timeout) {
+ _PyTime_t interval;
+
+ if (deadline_initialized) {
+ /* recompute the timeout */
+ interval = deadline - _PyTime_GetMonotonicClock();
+ }
+ else {
+ deadline_initialized = 1;
+ deadline = _PyTime_GetMonotonicClock() + timeout;
+ interval = timeout;
+ }
+
+ if (interval >= 0)
+ res = internal_select(s, writing, interval, connect);
+ else
+ res = 1;
+ }
+ else {
+ res = internal_select(s, writing, timeout, connect);
+ }
+
+ if (res == -1) {
+ if (err)
+ *err = GET_SOCK_ERROR;
+
+ if (CHECK_ERRNO(EINTR)) {
+ /* select() was interrupted by a signal */
+ if (PyErr_CheckSignals()) {
+ if (err)
+ *err = -1;
+ return -1;
+ }
+
+ /* retry select() */
+ continue;
+ }
+
+ /* select() failed */
+ s->errorhandler();
+ return -1;
+ }
+
+ if (res == 1) {
+ if (err)
+ *err = SOCK_TIMEOUT_ERR;
+ else
+ PyErr_SetString(socket_timeout, "timed out");
+ return -1;
+ }
+
+ /* the socket is ready */
+ }
+
+ /* inner loop to retry sock_func() when sock_func() is interrupted
+ by a signal */
+ while (1) {
+ Py_BEGIN_ALLOW_THREADS
+ res = sock_func(s, data);
+ Py_END_ALLOW_THREADS
+
+ if (res) {
+ /* sock_func() succeeded */
+ if (err)
+ *err = 0;
+ return 0;
+ }
+
+ if (err)
+ *err = GET_SOCK_ERROR;
+
+ if (!CHECK_ERRNO(EINTR))
+ break;
+
+ /* sock_func() was interrupted by a signal */
+ if (PyErr_CheckSignals()) {
+ if (err)
+ *err = -1;
+ return -1;
+ }
+
+ /* retry sock_func() */
+ }
+
+ if (s->sock_timeout > 0
+ && (CHECK_ERRNO(EWOULDBLOCK) || CHECK_ERRNO(EAGAIN))) {
+ /* False positive: sock_func() failed with EWOULDBLOCK or EAGAIN.
+
+ For example, select() could indicate a socket is ready for
+ reading, but the data was then discarded by the OS because of a
+ wrong checksum.
+
+ Loop on select() to recheck for socket readiness. */
+ continue;
+ }
+
+ /* sock_func() failed */
+ if (!err)
+ s->errorhandler();
+ /* else: err was already set before */
+ return -1;
+ }
+}
+
+static int
+sock_call(PySocketSockObject *s,
+ int writing,
+ int (*func) (PySocketSockObject *s, void *data),
+ void *data)
+{
+ return sock_call_ex(s, writing, func, data, 0, NULL, s->sock_timeout);
+}
+
+
+/* Initialize a new socket object. */
+
+/* Default timeout for new sockets */
+static _PyTime_t defaulttimeout = _PYTIME_FROMSECONDS(-1);
+
+static int
+init_sockobject(PySocketSockObject *s,
+ SOCKET_T fd, int family, int type, int proto)
+{
+ s->sock_fd = fd;
+ s->sock_family = family;
+ s->sock_type = type;
+ s->sock_proto = proto;
+
+ s->errorhandler = &set_error;
+#ifdef SOCK_NONBLOCK
+ if (type & SOCK_NONBLOCK)
+ s->sock_timeout = 0;
+ else
+#endif
+ {
+ s->sock_timeout = defaulttimeout;
+ if (defaulttimeout >= 0) {
+ if (internal_setblocking(s, 0) == -1) {
+ return -1;
+ }
+ }
+ }
+ return 0;
+}
+
+
+/* Create a new socket object.
+ This just creates the object and initializes it.
+ If the creation fails, return NULL and set an exception (implicit
+ in NEWOBJ()). */
+
+static PySocketSockObject *
+new_sockobject(SOCKET_T fd, int family, int type, int proto)
+{
+ PySocketSockObject *s;
+ s = (PySocketSockObject *)
+ PyType_GenericNew(&sock_type, NULL, NULL);
+ if (s == NULL)
+ return NULL;
+ if (init_sockobject(s, fd, family, type, proto) == -1) {
+ Py_DECREF(s);
+ return NULL;
+ }
+ return s;
+}
+
+
+/* Lock to allow python interpreter to continue, but only allow one
+ thread to be in gethostbyname or getaddrinfo */
+#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
+static PyThread_type_lock netdb_lock;
+#endif
+
+
+/* Convert a string specifying a host name or one of a few symbolic
+ names to a numeric IP address. This usually calls gethostbyname()
+ to do the work; the names "" and "<broadcast>" are special.
+ Return the length (IPv4 should be 4 bytes), or negative if
+ an error occurred; then an exception is raised. */
+
+static int
+setipaddr(const char *name, struct sockaddr *addr_ret, size_t addr_ret_size, int af)
+{
+ struct addrinfo hints, *res;
+ int error;
+
+ memset((void *) addr_ret, '\0', sizeof(*addr_ret));
+ if (name[0] == '\0') {
+ int siz;
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = af;
+ hints.ai_socktype = SOCK_DGRAM; /*dummy*/
+ hints.ai_flags = AI_PASSIVE;
+ Py_BEGIN_ALLOW_THREADS
+ ACQUIRE_GETADDRINFO_LOCK
+ error = getaddrinfo(NULL, "0", &hints, &res);
+ Py_END_ALLOW_THREADS
+ /* We assume that those thread-unsafe getaddrinfo() versions
+ *are* safe regarding their return value, ie. that a
+ subsequent call to getaddrinfo() does not destroy the
+ outcome of the first call. */
+ RELEASE_GETADDRINFO_LOCK
+ if (error) {
+ set_gaierror(error);
+ return -1;
+ }
+ switch (res->ai_family) {
+ case AF_INET:
+ siz = 4;
+ break;
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ siz = 16;
+ break;
+#endif
+ default:
+ freeaddrinfo(res);
+ PyErr_SetString(PyExc_OSError,
+ "unsupported address family");
+ return -1;
+ }
+ if (res->ai_next) {
+ freeaddrinfo(res);
+ PyErr_SetString(PyExc_OSError,
+ "wildcard resolved to multiple address");
+ return -1;
+ }
+ if (res->ai_addrlen < addr_ret_size)
+ addr_ret_size = res->ai_addrlen;
+ memcpy(addr_ret, res->ai_addr, addr_ret_size);
+ freeaddrinfo(res);
+ return siz;
+ }
+ /* special-case broadcast - inet_addr() below can return INADDR_NONE for
+ * this */
+ if (strcmp(name, "255.255.255.255") == 0 ||
+ strcmp(name, "<broadcast>") == 0) {
+ struct sockaddr_in *sin;
+ if (af != AF_INET && af != AF_UNSPEC) {
+ PyErr_SetString(PyExc_OSError,
+ "address family mismatched");
+ return -1;
+ }
+ sin = (struct sockaddr_in *)addr_ret;
+ memset((void *) sin, '\0', sizeof(*sin));
+ sin->sin_family = AF_INET;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin->sin_len = sizeof(*sin);
+#endif
+ sin->sin_addr.s_addr = INADDR_BROADCAST;
+ return sizeof(sin->sin_addr);
+ }
+
+ /* avoid a name resolution in case of numeric address */
+#ifdef HAVE_INET_PTON
+ /* check for an IPv4 address */
+ if (af == AF_UNSPEC || af == AF_INET) {
+ struct sockaddr_in *sin = (struct sockaddr_in *)addr_ret;
+ memset(sin, 0, sizeof(*sin));
+ if (inet_pton(AF_INET, name, &sin->sin_addr) > 0) {
+ sin->sin_family = AF_INET;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin->sin_len = sizeof(*sin);
+#endif
+ return 4;
+ }
+ }
+#ifdef ENABLE_IPV6
+ /* check for an IPv6 address - if the address contains a scope ID, we
+ * fallback to getaddrinfo(), which can handle translation from interface
+ * name to interface index */
+ if ((af == AF_UNSPEC || af == AF_INET6) && !strchr(name, '%')) {
+ struct sockaddr_in6 *sin = (struct sockaddr_in6 *)addr_ret;
+ memset(sin, 0, sizeof(*sin));
+ if (inet_pton(AF_INET6, name, &sin->sin6_addr) > 0) {
+ sin->sin6_family = AF_INET6;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin->sin6_len = sizeof(*sin);
+#endif
+ return 16;
+ }
+ }
+#endif /* ENABLE_IPV6 */
+#else /* HAVE_INET_PTON */
+ /* check for an IPv4 address */
+ if (af == AF_INET || af == AF_UNSPEC) {
+ struct sockaddr_in *sin = (struct sockaddr_in *)addr_ret;
+ memset(sin, 0, sizeof(*sin));
+ if ((sin->sin_addr.s_addr = inet_addr(name)) != INADDR_NONE) {
+ sin->sin_family = AF_INET;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin->sin_len = sizeof(*sin);
+#endif
+ return 4;
+ }
+ }
+#endif /* HAVE_INET_PTON */
+
+ /* perform a name resolution */
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = af;
+ Py_BEGIN_ALLOW_THREADS
+ ACQUIRE_GETADDRINFO_LOCK
+ error = getaddrinfo(name, NULL, &hints, &res);
+#if defined(__digital__) && defined(__unix__)
+ if (error == EAI_NONAME && af == AF_UNSPEC) {
+ /* On Tru64 V5.1, numeric-to-addr conversion fails
+ if no address family is given. Assume IPv4 for now.*/
+ hints.ai_family = AF_INET;
+ error = getaddrinfo(name, NULL, &hints, &res);
+ }
+#endif
+ Py_END_ALLOW_THREADS
+ RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
+ if (error) {
+ set_gaierror(error);
+ return -1;
+ }
+ if (res->ai_addrlen < addr_ret_size)
+ addr_ret_size = res->ai_addrlen;
+ memcpy((char *) addr_ret, res->ai_addr, addr_ret_size);
+ freeaddrinfo(res);
+ switch (addr_ret->sa_family) {
+ case AF_INET:
+ return 4;
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ return 16;
+#endif
+ default:
+ PyErr_SetString(PyExc_OSError, "unknown address family");
+ return -1;
+ }
+}
+
+
+/* Create a string object representing an IP address.
+ This is always a string of the form 'dd.dd.dd.dd' (with variable
+ size numbers). */
+
+static PyObject *
+makeipaddr(struct sockaddr *addr, int addrlen)
+{
+ char buf[NI_MAXHOST];
+ int error;
+
+ error = getnameinfo(addr, addrlen, buf, sizeof(buf), NULL, 0,
+ NI_NUMERICHOST);
+ if (error) {
+ set_gaierror(error);
+ return NULL;
+ }
+ return PyUnicode_FromString(buf);
+}
+
+
+#ifdef USE_BLUETOOTH
+/* Convert a string representation of a Bluetooth address into a numeric
+ address. Returns the length (6), or raises an exception and returns -1 if
+ an error occurred. */
+
+static int
+setbdaddr(const char *name, bdaddr_t *bdaddr)
+{
+ unsigned int b0, b1, b2, b3, b4, b5;
+ char ch;
+ int n;
+
+ n = sscanf(name, "%X:%X:%X:%X:%X:%X%c",
+ &b5, &b4, &b3, &b2, &b1, &b0, &ch);
+ if (n == 6 && (b0 | b1 | b2 | b3 | b4 | b5) < 256) {
+ bdaddr->b[0] = b0;
+ bdaddr->b[1] = b1;
+ bdaddr->b[2] = b2;
+ bdaddr->b[3] = b3;
+ bdaddr->b[4] = b4;
+ bdaddr->b[5] = b5;
+ return 6;
+ } else {
+ PyErr_SetString(PyExc_OSError, "bad bluetooth address");
+ return -1;
+ }
+}
+
+/* Create a string representation of the Bluetooth address. This is always a
+ string of the form 'XX:XX:XX:XX:XX:XX' where XX is a two digit hexadecimal
+ value (zero padded if necessary). */
+
+static PyObject *
+makebdaddr(bdaddr_t *bdaddr)
+{
+ char buf[(6 * 2) + 5 + 1];
+
+ sprintf(buf, "%02X:%02X:%02X:%02X:%02X:%02X",
+ bdaddr->b[5], bdaddr->b[4], bdaddr->b[3],
+ bdaddr->b[2], bdaddr->b[1], bdaddr->b[0]);
+ return PyUnicode_FromString(buf);
+}
+#endif
+
+
+/* Create an object representing the given socket address,
+ suitable for passing it back to bind(), connect() etc.
+ The family field of the sockaddr structure is inspected
+ to determine what kind of address it really is. */
+
+/*ARGSUSED*/
+static PyObject *
+makesockaddr(SOCKET_T sockfd, struct sockaddr *addr, size_t addrlen, int proto)
+{
+ if (addrlen == 0) {
+ /* No address -- may be recvfrom() from known socket */
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+
+ switch (addr->sa_family) {
+
+ case AF_INET:
+ {
+ struct sockaddr_in *a;
+ PyObject *addrobj = makeipaddr(addr, sizeof(*a));
+ PyObject *ret = NULL;
+ if (addrobj) {
+ a = (struct sockaddr_in *)addr;
+ ret = Py_BuildValue("Oi", addrobj, ntohs(a->sin_port));
+ Py_DECREF(addrobj);
+ }
+ return ret;
+ }
+
+#if defined(AF_UNIX)
+ case AF_UNIX:
+ {
+ struct sockaddr_un *a = (struct sockaddr_un *) addr;
+#ifdef __linux__
+ size_t linuxaddrlen = addrlen - offsetof(struct sockaddr_un, sun_path);
+ if (linuxaddrlen > 0 && a->sun_path[0] == 0) { /* Linux abstract namespace */
+ return PyBytes_FromStringAndSize(a->sun_path, linuxaddrlen);
+ }
+ else
+#endif /* linux */
+ {
+ /* regular NULL-terminated string */
+ return PyUnicode_DecodeFSDefault(a->sun_path);
+ }
+ }
+#endif /* AF_UNIX */
+
+#if defined(AF_NETLINK)
+ case AF_NETLINK:
+ {
+ struct sockaddr_nl *a = (struct sockaddr_nl *) addr;
+ return Py_BuildValue("II", a->nl_pid, a->nl_groups);
+ }
+#endif /* AF_NETLINK */
+
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ {
+ struct sockaddr_in6 *a;
+ PyObject *addrobj = makeipaddr(addr, sizeof(*a));
+ PyObject *ret = NULL;
+ if (addrobj) {
+ a = (struct sockaddr_in6 *)addr;
+ ret = Py_BuildValue("OiII",
+ addrobj,
+ ntohs(a->sin6_port),
+ ntohl(a->sin6_flowinfo),
+ a->sin6_scope_id);
+ Py_DECREF(addrobj);
+ }
+ return ret;
+ }
+#endif /* ENABLE_IPV6 */
+
+#ifdef USE_BLUETOOTH
+ case AF_BLUETOOTH:
+ switch (proto) {
+
+ case BTPROTO_L2CAP:
+ {
+ struct sockaddr_l2 *a = (struct sockaddr_l2 *) addr;
+ PyObject *addrobj = makebdaddr(&_BT_L2_MEMB(a, bdaddr));
+ PyObject *ret = NULL;
+ if (addrobj) {
+ ret = Py_BuildValue("Oi",
+ addrobj,
+ _BT_L2_MEMB(a, psm));
+ Py_DECREF(addrobj);
+ }
+ return ret;
+ }
+
+ case BTPROTO_RFCOMM:
+ {
+ struct sockaddr_rc *a = (struct sockaddr_rc *) addr;
+ PyObject *addrobj = makebdaddr(&_BT_RC_MEMB(a, bdaddr));
+ PyObject *ret = NULL;
+ if (addrobj) {
+ ret = Py_BuildValue("Oi",
+ addrobj,
+ _BT_RC_MEMB(a, channel));
+ Py_DECREF(addrobj);
+ }
+ return ret;
+ }
+
+ case BTPROTO_HCI:
+ {
+ struct sockaddr_hci *a = (struct sockaddr_hci *) addr;
+#if defined(__NetBSD__) || defined(__DragonFly__)
+ return makebdaddr(&_BT_HCI_MEMB(a, bdaddr));
+#else /* __NetBSD__ || __DragonFly__ */
+ PyObject *ret = NULL;
+ ret = Py_BuildValue("i", _BT_HCI_MEMB(a, dev));
+ return ret;
+#endif /* !(__NetBSD__ || __DragonFly__) */
+ }
+
+#if !defined(__FreeBSD__)
+ case BTPROTO_SCO:
+ {
+ struct sockaddr_sco *a = (struct sockaddr_sco *) addr;
+ return makebdaddr(&_BT_SCO_MEMB(a, bdaddr));
+ }
+#endif /* !__FreeBSD__ */
+
+ default:
+ PyErr_SetString(PyExc_ValueError,
+ "Unknown Bluetooth protocol");
+ return NULL;
+ }
+#endif /* USE_BLUETOOTH */
+
+#if defined(HAVE_NETPACKET_PACKET_H) && defined(SIOCGIFNAME)
+ case AF_PACKET:
+ {
+ struct sockaddr_ll *a = (struct sockaddr_ll *)addr;
+ const char *ifname = "";
+ struct ifreq ifr;
+ /* need to look up interface name given index */
+ if (a->sll_ifindex) {
+ ifr.ifr_ifindex = a->sll_ifindex;
+ if (ioctl(sockfd, SIOCGIFNAME, &ifr) == 0)
+ ifname = ifr.ifr_name;
+ }
+ return Py_BuildValue("shbhy#",
+ ifname,
+ ntohs(a->sll_protocol),
+ a->sll_pkttype,
+ a->sll_hatype,
+ a->sll_addr,
+ a->sll_halen);
+ }
+#endif /* HAVE_NETPACKET_PACKET_H && SIOCGIFNAME */
+
+#ifdef HAVE_LINUX_TIPC_H
+ case AF_TIPC:
+ {
+ struct sockaddr_tipc *a = (struct sockaddr_tipc *) addr;
+ if (a->addrtype == TIPC_ADDR_NAMESEQ) {
+ return Py_BuildValue("IIIII",
+ a->addrtype,
+ a->addr.nameseq.type,
+ a->addr.nameseq.lower,
+ a->addr.nameseq.upper,
+ a->scope);
+ } else if (a->addrtype == TIPC_ADDR_NAME) {
+ return Py_BuildValue("IIIII",
+ a->addrtype,
+ a->addr.name.name.type,
+ a->addr.name.name.instance,
+ a->addr.name.name.instance,
+ a->scope);
+ } else if (a->addrtype == TIPC_ADDR_ID) {
+ return Py_BuildValue("IIIII",
+ a->addrtype,
+ a->addr.id.node,
+ a->addr.id.ref,
+ 0,
+ a->scope);
+ } else {
+ PyErr_SetString(PyExc_ValueError,
+ "Invalid address type");
+ return NULL;
+ }
+ }
+#endif /* HAVE_LINUX_TIPC_H */
+
+#if defined(AF_CAN) && defined(SIOCGIFNAME)
+ case AF_CAN:
+ {
+ struct sockaddr_can *a = (struct sockaddr_can *)addr;
+ const char *ifname = "";
+ struct ifreq ifr;
+ /* need to look up interface name given index */
+ if (a->can_ifindex) {
+ ifr.ifr_ifindex = a->can_ifindex;
+ if (ioctl(sockfd, SIOCGIFNAME, &ifr) == 0)
+ ifname = ifr.ifr_name;
+ }
+
+ return Py_BuildValue("O&h", PyUnicode_DecodeFSDefault,
+ ifname,
+ a->can_family);
+ }
+#endif /* AF_CAN && SIOCGIFNAME */
+
+#ifdef PF_SYSTEM
+ case PF_SYSTEM:
+ switch(proto) {
+#ifdef SYSPROTO_CONTROL
+ case SYSPROTO_CONTROL:
+ {
+ struct sockaddr_ctl *a = (struct sockaddr_ctl *)addr;
+ return Py_BuildValue("(II)", a->sc_id, a->sc_unit);
+ }
+#endif /* SYSPROTO_CONTROL */
+ default:
+ PyErr_SetString(PyExc_ValueError,
+ "Invalid address type");
+ return 0;
+ }
+#endif /* PF_SYSTEM */
+
+#ifdef HAVE_SOCKADDR_ALG
+ case AF_ALG:
+ {
+ struct sockaddr_alg *a = (struct sockaddr_alg *)addr;
+ return Py_BuildValue("s#s#HH",
+ a->salg_type,
+ strnlen((const char*)a->salg_type,
+ sizeof(a->salg_type)),
+ a->salg_name,
+ strnlen((const char*)a->salg_name,
+ sizeof(a->salg_name)),
+ a->salg_feat,
+ a->salg_mask);
+ }
+#endif /* HAVE_SOCKADDR_ALG */
+
+ /* More cases here... */
+
+ default:
+ /* If we don't know the address family, don't raise an
+ exception -- return it as an (int, bytes) tuple. */
+ return Py_BuildValue("iy#",
+ addr->sa_family,
+ addr->sa_data,
+ sizeof(addr->sa_data));
+
+ }
+}
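+
+/* For illustration only: the cases above determine the Python-level address
+   objects that methods such as accept(), getsockname() and recvfrom() hand
+   back. A rough sketch of the common shapes (the concrete values are made up):
+
+       AF_INET   -> ('192.0.2.1', 8080)
+       AF_INET6  -> ('2001:db8::1', 8080, 0, 0)   (host, port, flowinfo, scope_id)
+       AF_UNIX   -> '/tmp/sock' (str), or bytes for the Linux abstract namespace
+       unknown   -> (family, raw sa_data bytes)
+*/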
+
+/* Helper for getsockaddrarg: bypass IDNA for ASCII-only host names
+ (in particular, numeric IP addresses). */
+struct maybe_idna {
+ PyObject *obj;
+ char *buf;
+};
+
+static void
+idna_cleanup(struct maybe_idna *data)
+{
+ Py_CLEAR(data->obj);
+}
+
+static int
+idna_converter(PyObject *obj, struct maybe_idna *data)
+{
+ size_t len;
+ PyObject *obj2;
+ if (obj == NULL) {
+ idna_cleanup(data);
+ return 1;
+ }
+ data->obj = NULL;
+ len = -1;
+ if (PyBytes_Check(obj)) {
+ data->buf = PyBytes_AsString(obj);
+ len = PyBytes_Size(obj);
+ }
+ else if (PyByteArray_Check(obj)) {
+ data->buf = PyByteArray_AsString(obj);
+ len = PyByteArray_Size(obj);
+ }
+ else if (PyUnicode_Check(obj)) {
+ if (PyUnicode_READY(obj) == -1) {
+ return 0;
+ }
+ if (PyUnicode_IS_COMPACT_ASCII(obj)) {
+ data->buf = PyUnicode_DATA(obj);
+ len = PyUnicode_GET_LENGTH(obj);
+ }
+ else {
+ obj2 = PyUnicode_AsEncodedString(obj, "idna", NULL);
+ if (!obj2) {
+ PyErr_SetString(PyExc_TypeError, "encoding of hostname failed");
+ return 0;
+ }
+ assert(PyBytes_Check(obj2));
+ data->obj = obj2;
+ data->buf = PyBytes_AS_STRING(obj2);
+ len = PyBytes_GET_SIZE(obj2);
+ }
+ }
+ else {
+ PyErr_Format(PyExc_TypeError, "str, bytes or bytearray expected, not %s",
+ obj->ob_type->tp_name);
+ return 0;
+ }
+ if (strlen(data->buf) != len) {
+ Py_CLEAR(data->obj);
+ PyErr_SetString(PyExc_TypeError, "host name must not contain null character");
+ return 0;
+ }
+ return Py_CLEANUP_SUPPORTED;
+}
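+
+/* A minimal Python-level sketch of the conversion above (host names are
+   examples only): bytes, bytearray and ASCII-only str objects are used as-is,
+   while any other str is first passed through the 'idna' codec, roughly
+
+       host.encode('idna')
+
+   and a host name containing an embedded null character is rejected with
+   TypeError. */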
+
+/* Parse a socket address argument according to the socket object's
+ address family. Return 1 if the address was in the proper format,
+ 0 if not. The address is returned through addr_ret, its length
+ through len_ret. */
+
+static int
+getsockaddrarg(PySocketSockObject *s, PyObject *args,
+ struct sockaddr *addr_ret, int *len_ret)
+{
+ switch (s->sock_family) {
+
+#if defined(AF_UNIX)
+ case AF_UNIX:
+ {
+ struct sockaddr_un* addr;
+ Py_buffer path;
+ int retval = 0;
+
+ /* PEP 383. Not using PyUnicode_FSConverter since we need to
+ allow embedded nulls on Linux. */
+ if (PyUnicode_Check(args)) {
+ if ((args = PyUnicode_EncodeFSDefault(args)) == NULL)
+ return 0;
+ }
+ else
+ Py_INCREF(args);
+ if (!PyArg_Parse(args, "y*", &path)) {
+ Py_DECREF(args);
+ return retval;
+ }
+ assert(path.len >= 0);
+
+ addr = (struct sockaddr_un*)addr_ret;
+#ifdef __linux__
+ if (path.len > 0 && *(const char *)path.buf == 0) {
+ /* Linux abstract namespace extension */
+ if ((size_t)path.len > sizeof addr->sun_path) {
+ PyErr_SetString(PyExc_OSError,
+ "AF_UNIX path too long");
+ goto unix_out;
+ }
+ }
+ else
+#endif /* linux */
+ {
+ /* regular NULL-terminated string */
+ if ((size_t)path.len >= sizeof addr->sun_path) {
+ PyErr_SetString(PyExc_OSError,
+ "AF_UNIX path too long");
+ goto unix_out;
+ }
+ addr->sun_path[path.len] = 0;
+ }
+ addr->sun_family = s->sock_family;
+ memcpy(addr->sun_path, path.buf, path.len);
+ *len_ret = path.len + offsetof(struct sockaddr_un, sun_path);
+ retval = 1;
+ unix_out:
+ PyBuffer_Release(&path);
+ Py_DECREF(args);
+ return retval;
+ }
+#endif /* AF_UNIX */
+
+#if defined(AF_NETLINK)
+ case AF_NETLINK:
+ {
+ struct sockaddr_nl* addr;
+ int pid, groups;
+ addr = (struct sockaddr_nl *)addr_ret;
+ if (!PyTuple_Check(args)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "getsockaddrarg: "
+ "AF_NETLINK address must be tuple, not %.500s",
+ Py_TYPE(args)->tp_name);
+ return 0;
+ }
+ if (!PyArg_ParseTuple(args, "II:getsockaddrarg", &pid, &groups))
+ return 0;
+ addr->nl_family = AF_NETLINK;
+ addr->nl_pid = pid;
+ addr->nl_groups = groups;
+ *len_ret = sizeof(*addr);
+ return 1;
+ }
+#endif /* AF_NETLINK */
+
+#ifdef AF_RDS
+ case AF_RDS:
+ /* RDS sockets use sockaddr_in: fall-through */
+#endif /* AF_RDS */
+
+ case AF_INET:
+ {
+ struct sockaddr_in* addr;
+ struct maybe_idna host = {NULL, NULL};
+ int port, result;
+ if (!PyTuple_Check(args)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "getsockaddrarg: "
+ "AF_INET address must be tuple, not %.500s",
+ Py_TYPE(args)->tp_name);
+ return 0;
+ }
+ if (!PyArg_ParseTuple(args, "O&i:getsockaddrarg",
+ idna_converter, &host, &port))
+ return 0;
+ addr=(struct sockaddr_in*)addr_ret;
+ result = setipaddr(host.buf, (struct sockaddr *)addr,
+ sizeof(*addr), AF_INET);
+ idna_cleanup(&host);
+ if (result < 0)
+ return 0;
+ if (port < 0 || port > 0xffff) {
+ PyErr_SetString(
+ PyExc_OverflowError,
+ "getsockaddrarg: port must be 0-65535.");
+ return 0;
+ }
+ addr->sin_family = AF_INET;
+ addr->sin_port = htons((short)port);
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ {
+ struct sockaddr_in6* addr;
+ struct maybe_idna host = {NULL, NULL};
+ int port, result;
+ unsigned int flowinfo, scope_id;
+ flowinfo = scope_id = 0;
+ if (!PyTuple_Check(args)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "getsockaddrarg: "
+ "AF_INET6 address must be tuple, not %.500s",
+ Py_TYPE(args)->tp_name);
+ return 0;
+ }
+ if (!PyArg_ParseTuple(args, "O&i|II",
+ idna_converter, &host, &port, &flowinfo,
+ &scope_id)) {
+ return 0;
+ }
+ addr = (struct sockaddr_in6*)addr_ret;
+ result = setipaddr(host.buf, (struct sockaddr *)addr,
+ sizeof(*addr), AF_INET6);
+ idna_cleanup(&host);
+ if (result < 0)
+ return 0;
+ if (port < 0 || port > 0xffff) {
+ PyErr_SetString(
+ PyExc_OverflowError,
+ "getsockaddrarg: port must be 0-65535.");
+ return 0;
+ }
+ if (flowinfo > 0xfffff) {
+ PyErr_SetString(
+ PyExc_OverflowError,
+ "getsockaddrarg: flowinfo must be 0-1048575.");
+ return 0;
+ }
+ addr->sin6_family = s->sock_family;
+ addr->sin6_port = htons((short)port);
+ addr->sin6_flowinfo = htonl(flowinfo);
+ addr->sin6_scope_id = scope_id;
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+#endif /* ENABLE_IPV6 */
+
+#ifdef USE_BLUETOOTH
+ case AF_BLUETOOTH:
+ {
+ switch (s->sock_proto) {
+ case BTPROTO_L2CAP:
+ {
+ struct sockaddr_l2 *addr;
+ const char *straddr;
+
+ addr = (struct sockaddr_l2 *)addr_ret;
+ memset(addr, 0, sizeof(struct sockaddr_l2));
+ _BT_L2_MEMB(addr, family) = AF_BLUETOOTH;
+ if (!PyArg_ParseTuple(args, "si", &straddr,
+ &_BT_L2_MEMB(addr, psm))) {
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
+ "wrong format");
+ return 0;
+ }
+ if (setbdaddr(straddr, &_BT_L2_MEMB(addr, bdaddr)) < 0)
+ return 0;
+
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+ case BTPROTO_RFCOMM:
+ {
+ struct sockaddr_rc *addr;
+ const char *straddr;
+
+ addr = (struct sockaddr_rc *)addr_ret;
+ _BT_RC_MEMB(addr, family) = AF_BLUETOOTH;
+ if (!PyArg_ParseTuple(args, "si", &straddr,
+ &_BT_RC_MEMB(addr, channel))) {
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
+ "wrong format");
+ return 0;
+ }
+ if (setbdaddr(straddr, &_BT_RC_MEMB(addr, bdaddr)) < 0)
+ return 0;
+
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+ case BTPROTO_HCI:
+ {
+ struct sockaddr_hci *addr = (struct sockaddr_hci *)addr_ret;
+#if defined(__NetBSD__) || defined(__DragonFly__)
+ const char *straddr;
+ _BT_HCI_MEMB(addr, family) = AF_BLUETOOTH;
+ if (!PyBytes_Check(args)) {
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
+ "wrong format");
+ return 0;
+ }
+ straddr = PyBytes_AS_STRING(args);
+ if (setbdaddr(straddr, &_BT_HCI_MEMB(addr, bdaddr)) < 0)
+ return 0;
+#else /* __NetBSD__ || __DragonFly__ */
+ _BT_HCI_MEMB(addr, family) = AF_BLUETOOTH;
+ if (!PyArg_ParseTuple(args, "i", &_BT_HCI_MEMB(addr, dev))) {
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
+ "wrong format");
+ return 0;
+ }
+#endif /* !(__NetBSD__ || __DragonFly__) */
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+#if !defined(__FreeBSD__)
+ case BTPROTO_SCO:
+ {
+ struct sockaddr_sco *addr;
+ const char *straddr;
+
+ addr = (struct sockaddr_sco *)addr_ret;
+ _BT_SCO_MEMB(addr, family) = AF_BLUETOOTH;
+ if (!PyBytes_Check(args)) {
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
+ "wrong format");
+ return 0;
+ }
+ straddr = PyBytes_AS_STRING(args);
+ if (setbdaddr(straddr, &_BT_SCO_MEMB(addr, bdaddr)) < 0)
+ return 0;
+
+ *len_ret = sizeof *addr;
+ return 1;
+ }
+#endif /* !__FreeBSD__ */
+ default:
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: unknown Bluetooth protocol");
+ return 0;
+ }
+ }
+#endif /* USE_BLUETOOTH */
+
+#if defined(HAVE_NETPACKET_PACKET_H) && defined(SIOCGIFINDEX)
+ case AF_PACKET:
+ {
+ struct sockaddr_ll* addr;
+ struct ifreq ifr;
+ const char *interfaceName;
+ int protoNumber;
+ int hatype = 0;
+ int pkttype = PACKET_HOST;
+ Py_buffer haddr = {NULL, NULL};
+
+ if (!PyTuple_Check(args)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "getsockaddrarg: "
+ "AF_PACKET address must be tuple, not %.500s",
+ Py_TYPE(args)->tp_name);
+ return 0;
+ }
+ if (!PyArg_ParseTuple(args, "si|iiy*", &interfaceName,
+ &protoNumber, &pkttype, &hatype,
+ &haddr))
+ return 0;
+ strncpy(ifr.ifr_name, interfaceName, sizeof(ifr.ifr_name));
+ ifr.ifr_name[(sizeof(ifr.ifr_name))-1] = '\0';
+ if (ioctl(s->sock_fd, SIOCGIFINDEX, &ifr) < 0) {
+ s->errorhandler();
+ PyBuffer_Release(&haddr);
+ return 0;
+ }
+ if (haddr.buf && haddr.len > 8) {
+ PyErr_SetString(PyExc_ValueError,
+ "Hardware address must be 8 bytes or less");
+ PyBuffer_Release(&haddr);
+ return 0;
+ }
+ if (protoNumber < 0 || protoNumber > 0xffff) {
+ PyErr_SetString(
+ PyExc_OverflowError,
+ "getsockaddrarg: proto must be 0-65535.");
+ PyBuffer_Release(&haddr);
+ return 0;
+ }
+ addr = (struct sockaddr_ll*)addr_ret;
+ addr->sll_family = AF_PACKET;
+ addr->sll_protocol = htons((short)protoNumber);
+ addr->sll_ifindex = ifr.ifr_ifindex;
+ addr->sll_pkttype = pkttype;
+ addr->sll_hatype = hatype;
+ if (haddr.buf) {
+ memcpy(&addr->sll_addr, haddr.buf, haddr.len);
+ addr->sll_halen = haddr.len;
+ }
+ else
+ addr->sll_halen = 0;
+ *len_ret = sizeof *addr;
+ PyBuffer_Release(&haddr);
+ return 1;
+ }
+#endif /* HAVE_NETPACKET_PACKET_H && SIOCGIFINDEX */
+
+#ifdef HAVE_LINUX_TIPC_H
+ case AF_TIPC:
+ {
+ unsigned int atype, v1, v2, v3;
+ unsigned int scope = TIPC_CLUSTER_SCOPE;
+ struct sockaddr_tipc *addr;
+
+ if (!PyTuple_Check(args)) {
+ PyErr_Format(
+ PyExc_TypeError,
+ "getsockaddrarg: "
+ "AF_TIPC address must be tuple, not %.500s",
+ Py_TYPE(args)->tp_name);
+ return 0;
+ }
+
+ if (!PyArg_ParseTuple(args,
+ "IIII|I;Invalid TIPC address format",
+ &atype, &v1, &v2, &v3, &scope))
+ return 0;
+
+ addr = (struct sockaddr_tipc *) addr_ret;
+ memset(addr, 0, sizeof(struct sockaddr_tipc));
+
+ addr->family = AF_TIPC;
+ addr->scope = scope;
+ addr->addrtype = atype;
+
+ if (atype == TIPC_ADDR_NAMESEQ) {
+ addr->addr.nameseq.type = v1;
+ addr->addr.nameseq.lower = v2;
+ addr->addr.nameseq.upper = v3;
+ } else if (atype == TIPC_ADDR_NAME) {
+ addr->addr.name.name.type = v1;
+ addr->addr.name.name.instance = v2;
+ } else if (atype == TIPC_ADDR_ID) {
+ addr->addr.id.node = v1;
+ addr->addr.id.ref = v2;
+ } else {
+ /* Shouldn't happen */
+ PyErr_SetString(PyExc_TypeError, "Invalid address type");
+ return 0;
+ }
+
+ *len_ret = sizeof(*addr);
+
+ return 1;
+ }
+#endif /* HAVE_LINUX_TIPC_H */
+
+#if defined(AF_CAN) && defined(CAN_RAW) && defined(CAN_BCM) && defined(SIOCGIFINDEX)
+ case AF_CAN:
+ switch (s->sock_proto) {
+ case CAN_RAW:
+ /* fall-through */
+ case CAN_BCM:
+ {
+ struct sockaddr_can *addr;
+ PyObject *interfaceName;
+ struct ifreq ifr;
+ Py_ssize_t len;
+
+ addr = (struct sockaddr_can *)addr_ret;
+
+ if (!PyArg_ParseTuple(args, "O&", PyUnicode_FSConverter,
+ &interfaceName))
+ return 0;
+
+ len = PyBytes_GET_SIZE(interfaceName);
+
+ if (len == 0) {
+ ifr.ifr_ifindex = 0;
+ } else if ((size_t)len < sizeof(ifr.ifr_name)) {
+ strncpy(ifr.ifr_name, PyBytes_AS_STRING(interfaceName), sizeof(ifr.ifr_name));
+ ifr.ifr_name[(sizeof(ifr.ifr_name))-1] = '\0';
+ if (ioctl(s->sock_fd, SIOCGIFINDEX, &ifr) < 0) {
+ s->errorhandler();
+ Py_DECREF(interfaceName);
+ return 0;
+ }
+ } else {
+ PyErr_SetString(PyExc_OSError,
+ "AF_CAN interface name too long");
+ Py_DECREF(interfaceName);
+ return 0;
+ }
+
+ addr->can_family = AF_CAN;
+ addr->can_ifindex = ifr.ifr_ifindex;
+
+ *len_ret = sizeof(*addr);
+ Py_DECREF(interfaceName);
+ return 1;
+ }
+ default:
+ PyErr_SetString(PyExc_OSError,
+ "getsockaddrarg: unsupported CAN protocol");
+ return 0;
+ }
+#endif /* AF_CAN && CAN_RAW && CAN_BCM && SIOCGIFINDEX */
+
+#ifdef PF_SYSTEM
+ case PF_SYSTEM:
+ switch (s->sock_proto) {
+#ifdef SYSPROTO_CONTROL
+ case SYSPROTO_CONTROL:
+ {
+ struct sockaddr_ctl *addr;
+
+ addr = (struct sockaddr_ctl *)addr_ret;
+ addr->sc_family = AF_SYSTEM;
+ addr->ss_sysaddr = AF_SYS_CONTROL;
+
+ if (PyUnicode_Check(args)) {
+ struct ctl_info info;
+ PyObject *ctl_name;
+
+ if (!PyArg_Parse(args, "O&",
+ PyUnicode_FSConverter, &ctl_name)) {
+ return 0;
+ }
+
+ if (PyBytes_GET_SIZE(ctl_name) > (Py_ssize_t)sizeof(info.ctl_name)) {
+ PyErr_SetString(PyExc_ValueError,
+ "provided string is too long");
+ Py_DECREF(ctl_name);
+ return 0;
+ }
+ strncpy(info.ctl_name, PyBytes_AS_STRING(ctl_name),
+ sizeof(info.ctl_name));
+ Py_DECREF(ctl_name);
+
+ if (ioctl(s->sock_fd, CTLIOCGINFO, &info)) {
+ PyErr_SetString(PyExc_OSError,
+ "cannot find kernel control with provided name");
+ return 0;
+ }
+
+ addr->sc_id = info.ctl_id;
+ addr->sc_unit = 0;
+ } else if (!PyArg_ParseTuple(args, "II",
+ &(addr->sc_id), &(addr->sc_unit))) {
+ PyErr_SetString(PyExc_TypeError, "getsockaddrarg: "
+ "expected str or tuple of two ints");
+
+ return 0;
+ }
+
+ *len_ret = sizeof(*addr);
+ return 1;
+ }
+#endif /* SYSPROTO_CONTROL */
+ default:
+ PyErr_SetString(PyExc_OSError,
+ "getsockaddrarg: unsupported PF_SYSTEM protocol");
+ return 0;
+ }
+#endif /* PF_SYSTEM */
+#ifdef HAVE_SOCKADDR_ALG
+ case AF_ALG:
+ {
+ struct sockaddr_alg *sa;
+ const char *type;
+ const char *name;
+ sa = (struct sockaddr_alg *)addr_ret;
+
+ memset(sa, 0, sizeof(*sa));
+ sa->salg_family = AF_ALG;
+
+ if (!PyArg_ParseTuple(args, "ss|HH:getsockaddrarg",
+ &type, &name, &sa->salg_feat, &sa->salg_mask))
+ {
+ return 0;
+ }
+ /* sockaddr_alg has fixed-size char arrays for type and name;
+ * both must be NULL terminated.
+ */
+ if (strlen(type) >= sizeof(sa->salg_type)) {
+ PyErr_SetString(PyExc_ValueError, "AF_ALG type too long.");
+ return 0;
+ }
+ strncpy((char *)sa->salg_type, type, sizeof(sa->salg_type));
+ if (strlen(name) >= sizeof(sa->salg_name)) {
+ PyErr_SetString(PyExc_ValueError, "AF_ALG name too long.");
+ return 0;
+ }
+ strncpy((char *)sa->salg_name, name, sizeof(sa->salg_name));
+
+ *len_ret = sizeof(*sa);
+ return 1;
+ }
+#endif /* HAVE_SOCKADDR_ALG */
+
+ /* More cases here... */
+
+ default:
+ PyErr_SetString(PyExc_OSError, "getsockaddrarg: bad family");
+ return 0;
+
+ }
+}
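+
+/* For illustration, the Python-level address arguments that the parser above
+   accepts for the most common families (concrete values are examples only):
+
+       AF_INET   : ('example.org', 80)
+       AF_INET6  : ('::1', 80)  or  ('::1', 80, flowinfo, scope_id)
+       AF_UNIX   : '/tmp/sock'  or  b'\0abstract-name' (Linux)
+       AF_NETLINK: (pid, groups)
+       AF_ALG    : ('hash', 'sha256')  or  ('hash', 'sha256', feat, mask)
+*/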
+
+
+/* Get the address length according to the socket object's address family.
+ Return 1 if the family is known, 0 otherwise. The length is returned
+ through len_ret. */
+
+static int
+getsockaddrlen(PySocketSockObject *s, socklen_t *len_ret)
+{
+ switch (s->sock_family) {
+
+#if defined(AF_UNIX)
+ case AF_UNIX:
+ {
+ *len_ret = sizeof (struct sockaddr_un);
+ return 1;
+ }
+#endif /* AF_UNIX */
+
+#if defined(AF_NETLINK)
+ case AF_NETLINK:
+ {
+ *len_ret = sizeof (struct sockaddr_nl);
+ return 1;
+ }
+#endif /* AF_NETLINK */
+
+#ifdef AF_RDS
+ case AF_RDS:
+ /* RDS sockets use sockaddr_in: fall-through */
+#endif /* AF_RDS */
+
+ case AF_INET:
+ {
+ *len_ret = sizeof (struct sockaddr_in);
+ return 1;
+ }
+
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ {
+ *len_ret = sizeof (struct sockaddr_in6);
+ return 1;
+ }
+#endif /* ENABLE_IPV6 */
+
+#ifdef USE_BLUETOOTH
+ case AF_BLUETOOTH:
+ {
+ switch(s->sock_proto)
+ {
+
+ case BTPROTO_L2CAP:
+ *len_ret = sizeof (struct sockaddr_l2);
+ return 1;
+ case BTPROTO_RFCOMM:
+ *len_ret = sizeof (struct sockaddr_rc);
+ return 1;
+ case BTPROTO_HCI:
+ *len_ret = sizeof (struct sockaddr_hci);
+ return 1;
+#if !defined(__FreeBSD__)
+ case BTPROTO_SCO:
+ *len_ret = sizeof (struct sockaddr_sco);
+ return 1;
+#endif /* !__FreeBSD__ */
+ default:
+ PyErr_SetString(PyExc_OSError, "getsockaddrlen: "
+ "unknown BT protocol");
+ return 0;
+
+ }
+ }
+#endif /* USE_BLUETOOTH */
+
+#ifdef HAVE_NETPACKET_PACKET_H
+ case AF_PACKET:
+ {
+ *len_ret = sizeof (struct sockaddr_ll);
+ return 1;
+ }
+#endif /* HAVE_NETPACKET_PACKET_H */
+
+#ifdef HAVE_LINUX_TIPC_H
+ case AF_TIPC:
+ {
+ *len_ret = sizeof (struct sockaddr_tipc);
+ return 1;
+ }
+#endif /* HAVE_LINUX_TIPC_H */
+
+#ifdef AF_CAN
+ case AF_CAN:
+ {
+ *len_ret = sizeof (struct sockaddr_can);
+ return 1;
+ }
+#endif /* AF_CAN */
+
+#ifdef PF_SYSTEM
+ case PF_SYSTEM:
+ switch(s->sock_proto) {
+#ifdef SYSPROTO_CONTROL
+ case SYSPROTO_CONTROL:
+ *len_ret = sizeof (struct sockaddr_ctl);
+ return 1;
+#endif /* SYSPROTO_CONTROL */
+ default:
+ PyErr_SetString(PyExc_OSError, "getsockaddrlen: "
+ "unknown PF_SYSTEM protocol");
+ return 0;
+ }
+#endif /* PF_SYSTEM */
+#ifdef HAVE_SOCKADDR_ALG
+ case AF_ALG:
+ {
+ *len_ret = sizeof (struct sockaddr_alg);
+ return 1;
+ }
+#endif /* HAVE_SOCKADDR_ALG */
+
+ /* More cases here... */
+
+ default:
+ PyErr_SetString(PyExc_OSError, "getsockaddrlen: bad family");
+ return 0;
+
+ }
+}
+
+
+/* Support functions for the sendmsg() and recvmsg[_into]() methods.
+ Currently, these methods are only compiled if the RFC 2292/3542
+ CMSG_LEN() macro is available. Older systems seem to have used
+ sizeof(struct cmsghdr) + (length) where CMSG_LEN() is used now, so
+ it may be possible to define CMSG_LEN() that way if it's not
+ provided. Some architectures might need extra padding after the
+ cmsghdr, however, and CMSG_LEN() would have to take account of
+ this. */
+#ifdef CMSG_LEN
+/* If length is in range, set *result to CMSG_LEN(length) and return
+ true; otherwise, return false. */
+static int
+get_CMSG_LEN(size_t length, size_t *result)
+{
+ size_t tmp;
+
+ if (length > (SOCKLEN_T_LIMIT - CMSG_LEN(0)))
+ return 0;
+ tmp = CMSG_LEN(length);
+ if (tmp > SOCKLEN_T_LIMIT || tmp < length)
+ return 0;
+ *result = tmp;
+ return 1;
+}
+
+#ifdef CMSG_SPACE
+/* If length is in range, set *result to CMSG_SPACE(length) and return
+ true; otherwise, return false. */
+static int
+get_CMSG_SPACE(size_t length, size_t *result)
+{
+ size_t tmp;
+
+ /* Use CMSG_SPACE(1) here in order to take account of the padding
+ necessary before *and* after the data. */
+ if (length > (SOCKLEN_T_LIMIT - CMSG_SPACE(1)))
+ return 0;
+ tmp = CMSG_SPACE(length);
+ if (tmp > SOCKLEN_T_LIMIT || tmp < length)
+ return 0;
+ *result = tmp;
+ return 1;
+}
+#endif
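+
+/* These helpers compute the same values that the module exposes to Python as
+   socket.CMSG_LEN() and socket.CMSG_SPACE() where available. A sketch of how
+   a caller sizes the ancillary-data buffer for a single int-sized item
+   (POSIX-only; platform support varies):
+
+       import socket, struct
+       ancbufsize = socket.CMSG_SPACE(struct.calcsize('i'))
+       msg, ancdata, flags, addr = sock.recvmsg(4096, ancbufsize)
+*/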
+
+/* Return true iff msg->msg_controllen is valid, cmsgh is a valid
+ pointer in msg->msg_control with at least "space" bytes after it,
+ and its cmsg_len member is inside the buffer. */
+static int
+cmsg_min_space(struct msghdr *msg, struct cmsghdr *cmsgh, size_t space)
+{
+ size_t cmsg_offset;
+ static const size_t cmsg_len_end = (offsetof(struct cmsghdr, cmsg_len) +
+ sizeof(cmsgh->cmsg_len));
+
+ /* Note that POSIX allows msg_controllen to be of signed type. */
+ if (cmsgh == NULL || msg->msg_control == NULL)
+ return 0;
+ /* Note that POSIX allows msg_controllen to be of a signed type. This is
+ annoying under OS X as it's unsigned there and so it triggers a
+ tautological comparison warning under Clang when compared against 0.
+ Since the check is valid on other platforms, silence the warning under
+ Clang. */
+ #ifdef __clang__
+ #pragma clang diagnostic push
+ #pragma clang diagnostic ignored "-Wtautological-compare"
+ #endif
+ #if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
+ #pragma GCC diagnostic push
+ #pragma GCC diagnostic ignored "-Wtype-limits"
+ #endif
+ if (msg->msg_controllen < 0)
+ return 0;
+ #if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
+ #pragma GCC diagnostic pop
+ #endif
+ #ifdef __clang__
+ #pragma clang diagnostic pop
+ #endif
+ if (space < cmsg_len_end)
+ space = cmsg_len_end;
+ cmsg_offset = (char *)cmsgh - (char *)msg->msg_control;
+ return (cmsg_offset <= (size_t)-1 - space &&
+ cmsg_offset + space <= msg->msg_controllen);
+}
+
+/* If pointer CMSG_DATA(cmsgh) is in buffer msg->msg_control, set
+ *space to number of bytes following it in the buffer and return
+ true; otherwise, return false. Assumes cmsgh, msg->msg_control and
+ msg->msg_controllen are valid. */
+static int
+get_cmsg_data_space(struct msghdr *msg, struct cmsghdr *cmsgh, size_t *space)
+{
+ size_t data_offset;
+ char *data_ptr;
+
+ if ((data_ptr = (char *)CMSG_DATA(cmsgh)) == NULL)
+ return 0;
+ data_offset = data_ptr - (char *)msg->msg_control;
+ if (data_offset > msg->msg_controllen)
+ return 0;
+ *space = msg->msg_controllen - data_offset;
+ return 1;
+}
+
+/* If cmsgh is invalid or not contained in the buffer pointed to by
+ msg->msg_control, return -1. If cmsgh is valid and its associated
+ data is entirely contained in the buffer, set *data_len to the
+ length of the associated data and return 0. If only part of the
+ associated data is contained in the buffer but cmsgh is otherwise
+ valid, set *data_len to the length contained in the buffer and
+ return 1. */
+static int
+get_cmsg_data_len(struct msghdr *msg, struct cmsghdr *cmsgh, size_t *data_len)
+{
+ size_t space, cmsg_data_len;
+
+ if (!cmsg_min_space(msg, cmsgh, CMSG_LEN(0)) ||
+ cmsgh->cmsg_len < CMSG_LEN(0))
+ return -1;
+ cmsg_data_len = cmsgh->cmsg_len - CMSG_LEN(0);
+ if (!get_cmsg_data_space(msg, cmsgh, &space))
+ return -1;
+ if (space >= cmsg_data_len) {
+ *data_len = cmsg_data_len;
+ return 0;
+ }
+ *data_len = space;
+ return 1;
+}
+#endif /* CMSG_LEN */
+
+
+struct sock_accept {
+ socklen_t *addrlen;
+ sock_addr_t *addrbuf;
+ SOCKET_T result;
+};
+
+#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
+/* accept4() is available on Linux 2.6.28+ and glibc 2.10 */
+static int accept4_works = -1;
+#endif
+
+static int
+sock_accept_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_accept *ctx = data;
+ struct sockaddr *addr = SAS2SA(ctx->addrbuf);
+ socklen_t *paddrlen = ctx->addrlen;
+#ifdef HAVE_SOCKADDR_ALG
+ /* AF_ALG does not support accept() with addr and raises
+ * ECONNABORTED instead. */
+ if (s->sock_family == AF_ALG) {
+ addr = NULL;
+ paddrlen = NULL;
+ *ctx->addrlen = 0;
+ }
+#endif
+
+#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
+ if (accept4_works != 0) {
+ ctx->result = accept4(s->sock_fd, addr, paddrlen,
+ SOCK_CLOEXEC);
+ if (ctx->result == INVALID_SOCKET && accept4_works == -1) {
+ /* On Linux older than 2.6.28, accept4() fails with ENOSYS */
+ accept4_works = (errno != ENOSYS);
+ }
+ }
+ if (accept4_works == 0)
+ ctx->result = accept(s->sock_fd, addr, paddrlen);
+#else
+ ctx->result = accept(s->sock_fd, addr, paddrlen);
+#endif
+
+#ifdef MS_WINDOWS
+ return (ctx->result != INVALID_SOCKET);
+#else
+ return (ctx->result >= 0);
+#endif
+}
+
+/* s._accept() -> (fd, address) */
+
+static PyObject *
+sock_accept(PySocketSockObject *s)
+{
+ sock_addr_t addrbuf;
+ SOCKET_T newfd;
+ socklen_t addrlen;
+ PyObject *sock = NULL;
+ PyObject *addr = NULL;
+ PyObject *res = NULL;
+ struct sock_accept ctx;
+
+ if (!getsockaddrlen(s, &addrlen))
+ return NULL;
+ memset(&addrbuf, 0, addrlen);
+
+ if (!IS_SELECTABLE(s))
+ return select_error();
+
+ ctx.addrlen = &addrlen;
+ ctx.addrbuf = &addrbuf;
+ if (sock_call(s, 0, sock_accept_impl, &ctx) < 0)
+ return NULL;
+ newfd = ctx.result;
+
+#ifdef MS_WINDOWS
+ if (!SetHandleInformation((HANDLE)newfd, HANDLE_FLAG_INHERIT, 0)) {
+ PyErr_SetFromWindowsErr(0);
+ SOCKETCLOSE(newfd);
+ goto finally;
+ }
+#else
+
+#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
+ if (!accept4_works)
+#endif
+ {
+ if (_Py_set_inheritable(newfd, 0, NULL) < 0) {
+ SOCKETCLOSE(newfd);
+ goto finally;
+ }
+ }
+#endif
+
+ sock = PyLong_FromSocket_t(newfd);
+ if (sock == NULL) {
+ SOCKETCLOSE(newfd);
+ goto finally;
+ }
+
+ addr = makesockaddr(s->sock_fd, SAS2SA(&addrbuf),
+ addrlen, s->sock_proto);
+ if (addr == NULL)
+ goto finally;
+
+ res = PyTuple_Pack(2, sock, addr);
+
+finally:
+ Py_XDECREF(sock);
+ Py_XDECREF(addr);
+ return res;
+}
+
+PyDoc_STRVAR(accept_doc,
+"_accept() -> (integer, address info)\n\
+\n\
+Wait for an incoming connection. Return a new socket file descriptor\n\
+representing the connection, and the address of the client.\n\
+For IP sockets, the address info is a pair (hostaddr, port).");
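+
+/* _accept() is the low-level primitive; the accept() wrapper in Lib/socket.py
+   turns the returned descriptor into a new socket object. A usage sketch
+   (addresses are examples):
+
+       import socket
+       srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+       srv.bind(('127.0.0.1', 0))
+       srv.listen()
+       conn, addr = srv.accept()     # accept() calls _accept() internally
+*/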
+
+/* s.setblocking(flag) method. Argument:
+ False -- non-blocking mode; same as settimeout(0)
+ True -- blocking mode; same as settimeout(None)
+*/
+
+static PyObject *
+sock_setblocking(PySocketSockObject *s, PyObject *arg)
+{
+ long block;
+
+ block = PyLong_AsLong(arg);
+ if (block == -1 && PyErr_Occurred())
+ return NULL;
+
+ s->sock_timeout = _PyTime_FromSeconds(block ? -1 : 0);
+ if (internal_setblocking(s, block) == -1) {
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(setblocking_doc,
+"setblocking(flag)\n\
+\n\
+Set the socket to blocking (flag is true) or non-blocking (false).\n\
+setblocking(True) is equivalent to settimeout(None);\n\
+setblocking(False) is equivalent to settimeout(0.0).");
+
+static int
+socket_parse_timeout(_PyTime_t *timeout, PyObject *timeout_obj)
+{
+#ifdef MS_WINDOWS
+ struct timeval tv;
+#endif
+#ifndef HAVE_POLL
+ _PyTime_t ms;
+#endif
+ int overflow = 0;
+
+ if (timeout_obj == Py_None) {
+ *timeout = _PyTime_FromSeconds(-1);
+ return 0;
+ }
+
+ if (_PyTime_FromSecondsObject(timeout,
+ timeout_obj, _PyTime_ROUND_TIMEOUT) < 0)
+ return -1;
+
+ if (*timeout < 0) {
+ PyErr_SetString(PyExc_ValueError, "Timeout value out of range");
+ return -1;
+ }
+
+#ifdef MS_WINDOWS
+ overflow |= (_PyTime_AsTimeval(*timeout, &tv, _PyTime_ROUND_TIMEOUT) < 0);
+#endif
+#ifndef HAVE_POLL
+ ms = _PyTime_AsMilliseconds(*timeout, _PyTime_ROUND_TIMEOUT);
+ overflow |= (ms > INT_MAX);
+#endif
+ if (overflow) {
+ PyErr_SetString(PyExc_OverflowError,
+ "timeout doesn't fit into C timeval");
+ return -1;
+ }
+
+ return 0;
+}
+
+/* s.settimeout(timeout) method. Argument:
+ None -- no timeout, blocking mode; same as setblocking(True)
+ 0.0 -- non-blocking mode; same as setblocking(False)
+ > 0 -- timeout mode; operations time out after timeout seconds
+ < 0 -- illegal; raises an exception
+*/
+static PyObject *
+sock_settimeout(PySocketSockObject *s, PyObject *arg)
+{
+ _PyTime_t timeout;
+
+ if (socket_parse_timeout(&timeout, arg) < 0)
+ return NULL;
+
+ s->sock_timeout = timeout;
+ if (internal_setblocking(s, timeout < 0) == -1) {
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(settimeout_doc,
+"settimeout(timeout)\n\
+\n\
+Set a timeout on socket operations. 'timeout' can be a float,\n\
+given in seconds, or None. Setting a timeout of None disables\n\
+the timeout feature and is equivalent to setblocking(1).\n\
+Setting a timeout of zero is the same as setblocking(0).");
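+
+/* The timeout modes described above map onto each other; a short
+   illustration:
+
+       s.settimeout(None)    # blocking, same as s.setblocking(True)
+       s.settimeout(0.0)     # non-blocking, same as s.setblocking(False)
+       s.settimeout(5.0)     # timeout mode: blocking operations raise
+                             # socket.timeout after 5 seconds
+*/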
+
+/* s.gettimeout() method.
+ Returns the timeout associated with a socket. */
+static PyObject *
+sock_gettimeout(PySocketSockObject *s)
+{
+ if (s->sock_timeout < 0) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+ else {
+ double seconds = _PyTime_AsSecondsDouble(s->sock_timeout);
+ return PyFloat_FromDouble(seconds);
+ }
+}
+
+PyDoc_STRVAR(gettimeout_doc,
+"gettimeout() -> timeout\n\
+\n\
+Returns the timeout in seconds (float) associated with socket \n\
+operations. A timeout of None indicates that timeouts on socket \n\
+operations are disabled.");
+
+/* s.setsockopt() method.
+ With an integer third argument, sets an integer optval with optlen=4.
+ With None as third argument and an integer fourth argument, sets
+ optval=NULL and passes the integer as optlen.
+ With a string third argument, sets an option from a buffer;
+ use optional built-in module 'struct' to encode the string.
+*/
+
+static PyObject *
+sock_setsockopt(PySocketSockObject *s, PyObject *args)
+{
+ int level;
+ int optname;
+ int res;
+ Py_buffer optval;
+ int flag;
+ unsigned int optlen;
+ PyObject *none;
+
+ /* setsockopt(level, opt, flag) */
+ if (PyArg_ParseTuple(args, "iii:setsockopt",
+ &level, &optname, &flag)) {
+ res = setsockopt(s->sock_fd, level, optname,
+ (char*)&flag, sizeof flag);
+ goto done;
+ }
+
+ PyErr_Clear();
+ /* setsockopt(level, opt, None, flag) */
+ if (PyArg_ParseTuple(args, "iiO!I:setsockopt",
+ &level, &optname, Py_TYPE(Py_None), &none, &optlen)) {
+ assert(sizeof(socklen_t) >= sizeof(unsigned int));
+ res = setsockopt(s->sock_fd, level, optname,
+ NULL, (socklen_t)optlen);
+ goto done;
+ }
+
+ PyErr_Clear();
+ /* setsockopt(level, opt, buffer) */
+ if (!PyArg_ParseTuple(args, "iiy*:setsockopt",
+ &level, &optname, &optval))
+ return NULL;
+
+#ifdef MS_WINDOWS
+ if (optval.len > INT_MAX) {
+ PyBuffer_Release(&optval);
+ PyErr_Format(PyExc_OverflowError,
+ "socket option is larger than %i bytes",
+ INT_MAX);
+ return NULL;
+ }
+ res = setsockopt(s->sock_fd, level, optname,
+ optval.buf, (int)optval.len);
+#else
+ res = setsockopt(s->sock_fd, level, optname, optval.buf, optval.len);
+#endif
+ PyBuffer_Release(&optval);
+
+done:
+ if (res < 0) {
+ return s->errorhandler();
+ }
+
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(setsockopt_doc,
+"setsockopt(level, option, value: int)\n\
+setsockopt(level, option, value: buffer)\n\
+setsockopt(level, option, None, optlen: int)\n\
+\n\
+Set a socket option. See the Unix manual for level and option.\n\
+The value argument can either be an integer, a string buffer, or \n\
+None, optlen.");
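+
+/* The three call forms parsed above, as they look from Python (option names
+   are common examples; availability is platform dependent):
+
+       s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)        # integer
+       s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
+                    struct.pack('ii', 1, 0))                          # buffer
+       s.setsockopt(level, optname, None, optlen)                     # NULL + length
+*/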
+
+
+/* s.getsockopt() method.
+ With two arguments, retrieves an integer option.
+ With a third integer argument, retrieves a string buffer of that size;
+ use optional built-in module 'struct' to decode the string. */
+
+static PyObject *
+sock_getsockopt(PySocketSockObject *s, PyObject *args)
+{
+ int level;
+ int optname;
+ int res;
+ PyObject *buf;
+ socklen_t buflen = 0;
+
+ if (!PyArg_ParseTuple(args, "ii|i:getsockopt",
+ &level, &optname, &buflen))
+ return NULL;
+
+ if (buflen == 0) {
+ int flag = 0;
+ socklen_t flagsize = sizeof flag;
+ res = getsockopt(s->sock_fd, level, optname,
+ (void *)&flag, &flagsize);
+ if (res < 0)
+ return s->errorhandler();
+ return PyLong_FromLong(flag);
+ }
+ if (buflen <= 0 || buflen > 1024) {
+ PyErr_SetString(PyExc_OSError,
+ "getsockopt buflen out of range");
+ return NULL;
+ }
+ buf = PyBytes_FromStringAndSize((char *)NULL, buflen);
+ if (buf == NULL)
+ return NULL;
+ res = getsockopt(s->sock_fd, level, optname,
+ (void *)PyBytes_AS_STRING(buf), &buflen);
+ if (res < 0) {
+ Py_DECREF(buf);
+ return s->errorhandler();
+ }
+ _PyBytes_Resize(&buf, buflen);
+ return buf;
+}
+
+PyDoc_STRVAR(getsockopt_doc,
+"getsockopt(level, option[, buffersize]) -> value\n\
+\n\
+Get a socket option. See the Unix manual for level and option.\n\
+If a nonzero buffersize argument is given, the return value is a\n\
+string of that length; otherwise it is an integer.");
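+
+/* The corresponding Python-level read side (illustrative only):
+
+       flag = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)    # -> int
+       raw  = s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8)    # -> bytes,
+                                                                      #    decode with struct
+*/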
+
+
+/* s.bind(sockaddr) method */
+
+static PyObject *
+sock_bind(PySocketSockObject *s, PyObject *addro)
+{
+ sock_addr_t addrbuf;
+ int addrlen;
+ int res;
+
+ if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = bind(s->sock_fd, SAS2SA(&addrbuf), addrlen);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return s->errorhandler();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(bind_doc,
+"bind(address)\n\
+\n\
+Bind the socket to a local address. For IP sockets, the address is a\n\
+pair (host, port); the host must refer to the local host. For raw packet\n\
+sockets the address is a tuple (ifname, proto [,pkttype [,hatype [,addr]]])");
+
+
+/* s.close() method.
+ Set the file descriptor to -1 so operations tried subsequently
+ will surely fail. */
+
+static PyObject *
+sock_close(PySocketSockObject *s)
+{
+ SOCKET_T fd;
+ int res;
+
+ fd = s->sock_fd;
+ if (fd != INVALID_SOCKET) {
+ s->sock_fd = INVALID_SOCKET;
+
+ /* We do not want to retry upon EINTR: see
+ http://lwn.net/Articles/576478/ and
+ http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-09/3000.html
+ for more details. */
+ Py_BEGIN_ALLOW_THREADS
+ res = SOCKETCLOSE(fd);
+ Py_END_ALLOW_THREADS
+ /* bpo-30319: The peer can already have closed the connection.
+ Python ignores ECONNRESET on close(). */
+ if (res < 0 && errno != ECONNRESET) {
+ return s->errorhandler();
+ }
+ }
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(close_doc,
+"close()\n\
+\n\
+Close the socket. It cannot be used after this call.");
+
+static PyObject *
+sock_detach(PySocketSockObject *s)
+{
+ SOCKET_T fd = s->sock_fd;
+ s->sock_fd = INVALID_SOCKET;
+ return PyLong_FromSocket_t(fd);
+}
+
+PyDoc_STRVAR(detach_doc,
+"detach()\n\
+\n\
+Close the socket object without closing the underlying file descriptor.\n\
+The object cannot be used after this call, but the file descriptor\n\
+can be reused for other purposes. The file descriptor is returned.");
+
+static int
+sock_connect_impl(PySocketSockObject *s, void* Py_UNUSED(data))
+{
+ int err;
+ socklen_t size = sizeof err;
+
+ if (getsockopt(s->sock_fd, SOL_SOCKET, SO_ERROR, (void *)&err, &size)) {
+ /* getsockopt() failed */
+ return 0;
+ }
+
+ if (err == EISCONN)
+ return 1;
+ if (err != 0) {
+ /* sock_call_ex() uses GET_SOCK_ERROR() to get the error code */
+ SET_SOCK_ERROR(err);
+ return 0;
+ }
+ return 1;
+}
+
+static int
+internal_connect(PySocketSockObject *s, struct sockaddr *addr, int addrlen,
+ int raise)
+{
+ int res, err, wait_connect;
+
+ Py_BEGIN_ALLOW_THREADS
+ res = connect(s->sock_fd, addr, addrlen);
+ Py_END_ALLOW_THREADS
+
+ if (!res) {
+ /* connect() succeeded, the socket is connected */
+ return 0;
+ }
+
+ /* connect() failed */
+
+ /* save error, PyErr_CheckSignals() can replace it */
+ err = GET_SOCK_ERROR;
+ if (CHECK_ERRNO(EINTR)) {
+ if (PyErr_CheckSignals())
+ return -1;
+
+ /* Issue #23618: when connect() fails with EINTR, the connection is
+ running asynchronously.
+
+ If the socket is blocking or has a timeout, wait until the
+ connection completes, fails or timed out using select(), and then
+ get the connection status using getsockopt(SO_ERROR).
+
+ If the socket is non-blocking, raise InterruptedError. The caller is
+ responsible to wait until the connection completes, fails or timed
+ out (it's the case in asyncio for example). */
+ wait_connect = (s->sock_timeout != 0 && IS_SELECTABLE(s));
+ }
+ else {
+ wait_connect = (s->sock_timeout > 0 && err == SOCK_INPROGRESS_ERR
+ && IS_SELECTABLE(s));
+ }
+
+ if (!wait_connect) {
+ if (raise) {
+ /* restore error, maybe replaced by PyErr_CheckSignals() */
+ SET_SOCK_ERROR(err);
+ s->errorhandler();
+ return -1;
+ }
+ else
+ return err;
+ }
+
+ if (raise) {
+ /* socket.connect() raises an exception on error */
+ if (sock_call_ex(s, 1, sock_connect_impl, NULL,
+ 1, NULL, s->sock_timeout) < 0)
+ return -1;
+ }
+ else {
+ /* socket.connect_ex() returns the error code on error */
+ if (sock_call_ex(s, 1, sock_connect_impl, NULL,
+ 1, &err, s->sock_timeout) < 0)
+ return err;
+ }
+ return 0;
+}
+
+/* s.connect(sockaddr) method */
+
+static PyObject *
+sock_connect(PySocketSockObject *s, PyObject *addro)
+{
+ sock_addr_t addrbuf;
+ int addrlen;
+ int res;
+
+ if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
+ return NULL;
+
+ res = internal_connect(s, SAS2SA(&addrbuf), addrlen, 1);
+ if (res < 0)
+ return NULL;
+
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(connect_doc,
+"connect(address)\n\
+\n\
+Connect the socket to a remote address. For IP sockets, the address\n\
+is a pair (host, port).");
+
+
+/* s.connect_ex(sockaddr) method */
+
+static PyObject *
+sock_connect_ex(PySocketSockObject *s, PyObject *addro)
+{
+ sock_addr_t addrbuf;
+ int addrlen;
+ int res;
+
+ if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
+ return NULL;
+
+ res = internal_connect(s, SAS2SA(&addrbuf), addrlen, 0);
+ if (res < 0)
+ return NULL;
+
+ return PyLong_FromLong((long) res);
+}
+
+PyDoc_STRVAR(connect_ex_doc,
+"connect_ex(address) -> errno\n\
+\n\
+This is like connect(address), but returns an error code (the errno value)\n\
+instead of raising an exception when an error occurs.");
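+
+/* A small sketch of the difference between the two entry points above
+   (the address is an example): connect() raises OSError on failure, while
+   connect_ex() hands back the errno value instead.
+
+       import errno, socket
+       s = socket.socket()
+       err = s.connect_ex(('192.0.2.1', 80))   # 0 on success
+       if err != 0:
+           print(errno.errorcode.get(err))     # e.g. 'ECONNREFUSED'
+*/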
+
+
+/* s.fileno() method */
+
+static PyObject *
+sock_fileno(PySocketSockObject *s)
+{
+ return PyLong_FromSocket_t(s->sock_fd);
+}
+
+PyDoc_STRVAR(fileno_doc,
+"fileno() -> integer\n\
+\n\
+Return the integer file descriptor of the socket.");
+
+
+/* s.getsockname() method */
+
+static PyObject *
+sock_getsockname(PySocketSockObject *s)
+{
+ sock_addr_t addrbuf;
+ int res;
+ socklen_t addrlen;
+
+ if (!getsockaddrlen(s, &addrlen))
+ return NULL;
+ memset(&addrbuf, 0, addrlen);
+ Py_BEGIN_ALLOW_THREADS
+ res = getsockname(s->sock_fd, SAS2SA(&addrbuf), &addrlen);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return s->errorhandler();
+ return makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
+ s->sock_proto);
+}
+
+PyDoc_STRVAR(getsockname_doc,
+"getsockname() -> address info\n\
+\n\
+Return the address of the local endpoint. For IP sockets, the address\n\
+info is a pair (hostaddr, port).");
+
+
+#ifdef HAVE_GETPEERNAME /* Cray APP doesn't have this :-( */
+/* s.getpeername() method */
+
+static PyObject *
+sock_getpeername(PySocketSockObject *s)
+{
+ sock_addr_t addrbuf;
+ int res;
+ socklen_t addrlen;
+
+ if (!getsockaddrlen(s, &addrlen))
+ return NULL;
+ memset(&addrbuf, 0, addrlen);
+ Py_BEGIN_ALLOW_THREADS
+ res = getpeername(s->sock_fd, SAS2SA(&addrbuf), &addrlen);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return s->errorhandler();
+ return makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
+ s->sock_proto);
+}
+
+PyDoc_STRVAR(getpeername_doc,
+"getpeername() -> address info\n\
+\n\
+Return the address of the remote endpoint. For IP sockets, the address\n\
+info is a pair (hostaddr, port).");
+
+#endif /* HAVE_GETPEERNAME */
+
+
+/* s.listen(n) method */
+
+static PyObject *
+sock_listen(PySocketSockObject *s, PyObject *args)
+{
+ /* We try to choose a default backlog high enough to avoid connection drops
+ * for common workloads, while still keeping resource usage limited. */
+ int backlog = Py_MIN(SOMAXCONN, 128);
+ int res;
+
+ if (!PyArg_ParseTuple(args, "|i:listen", &backlog))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ /* To avoid problems on systems that don't allow a negative backlog
+ * (which doesn't make sense anyway) we force a minimum value of 0. */
+ if (backlog < 0)
+ backlog = 0;
+ res = listen(s->sock_fd, backlog);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return s->errorhandler();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(listen_doc,
+"listen([backlog])\n\
+\n\
+Enable a server to accept connections. If backlog is specified, it must be\n\
+at least 0 (if it is lower, it is set to 0); it specifies the number of\n\
+unaccepted connections that the system will allow before refusing new\n\
+connections. If not specified, a default reasonable value is chosen.");
+
+struct sock_recv {
+ char *cbuf;
+ Py_ssize_t len;
+ int flags;
+ Py_ssize_t result;
+};
+
+static int
+sock_recv_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_recv *ctx = data;
+
+#ifdef MS_WINDOWS
+ if (ctx->len > INT_MAX)
+ ctx->len = INT_MAX;
+ ctx->result = recv(s->sock_fd, ctx->cbuf, (int)ctx->len, ctx->flags);
+#else
+ ctx->result = recv(s->sock_fd, ctx->cbuf, ctx->len, ctx->flags);
+#endif
+ return (ctx->result >= 0);
+}
+
+
+/*
+ * This is the guts of the recv() and recv_into() methods, which reads into a
+ * char buffer. If you have any inc/dec ref to do to the objects that contain
+ * the buffer, do it in the caller. This function returns the number of bytes
+ * successfully read. If there was an error, it returns -1. Note that it is
+ * also possible that we return a number of bytes smaller than the request
+ * bytes.
+ */
+
+static Py_ssize_t
+sock_recv_guts(PySocketSockObject *s, char* cbuf, Py_ssize_t len, int flags)
+{
+ struct sock_recv ctx;
+
+ if (!IS_SELECTABLE(s)) {
+ select_error();
+ return -1;
+ }
+ if (len == 0) {
+ /* If 0 bytes were requested, do nothing. */
+ return 0;
+ }
+
+ ctx.cbuf = cbuf;
+ ctx.len = len;
+ ctx.flags = flags;
+ if (sock_call(s, 0, sock_recv_impl, &ctx) < 0)
+ return -1;
+
+ return ctx.result;
+}
+
+
+/* s.recv(nbytes [,flags]) method */
+
+static PyObject *
+sock_recv(PySocketSockObject *s, PyObject *args)
+{
+ Py_ssize_t recvlen, outlen;
+ int flags = 0;
+ PyObject *buf;
+
+ if (!PyArg_ParseTuple(args, "n|i:recv", &recvlen, &flags))
+ return NULL;
+
+ if (recvlen < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "negative buffersize in recv");
+ return NULL;
+ }
+
+ /* Allocate a new string. */
+ buf = PyBytes_FromStringAndSize((char *) 0, recvlen);
+ if (buf == NULL)
+ return NULL;
+
+ /* Call the guts */
+ outlen = sock_recv_guts(s, PyBytes_AS_STRING(buf), recvlen, flags);
+ if (outlen < 0) {
+ /* An error occurred, release the string and return an
+ error. */
+ Py_DECREF(buf);
+ return NULL;
+ }
+ if (outlen != recvlen) {
+ /* We did not read as many bytes as we anticipated, resize the
+ string if possible and be successful. */
+ _PyBytes_Resize(&buf, outlen);
+ }
+
+ return buf;
+}
+
+PyDoc_STRVAR(recv_doc,
+"recv(buffersize[, flags]) -> data\n\
+\n\
+Receive up to buffersize bytes from the socket. For the optional flags\n\
+argument, see the Unix manual. When no data is available, block until\n\
+at least one byte is available or until the remote end is closed. When\n\
+the remote end is closed and all data is read, return the empty string.");
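+
+/* Typical read loop built on the semantics described above (a sketch; 4096 is
+   an arbitrary chunk size):
+
+       chunks = []
+       while True:
+           data = s.recv(4096)
+           if not data:              # peer closed and all data consumed
+               break
+           chunks.append(data)
+       payload = b''.join(chunks)
+*/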
+
+
+/* s.recv_into(buffer, [nbytes [,flags]]) method */
+
+static PyObject*
+sock_recv_into(PySocketSockObject *s, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"buffer", "nbytes", "flags", 0};
+
+ int flags = 0;
+ Py_buffer pbuf;
+ char *buf;
+ Py_ssize_t buflen, readlen, recvlen = 0;
+
+ /* Get the buffer's memory */
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "w*|ni:recv_into", kwlist,
+ &pbuf, &recvlen, &flags))
+ return NULL;
+ buf = pbuf.buf;
+ buflen = pbuf.len;
+
+ if (recvlen < 0) {
+ PyBuffer_Release(&pbuf);
+ PyErr_SetString(PyExc_ValueError,
+ "negative buffersize in recv_into");
+ return NULL;
+ }
+ if (recvlen == 0) {
+ /* If nbytes was not specified, use the buffer's length */
+ recvlen = buflen;
+ }
+
+ /* Check if the buffer is large enough */
+ if (buflen < recvlen) {
+ PyBuffer_Release(&pbuf);
+ PyErr_SetString(PyExc_ValueError,
+ "buffer too small for requested bytes");
+ return NULL;
+ }
+
+ /* Call the guts */
+ readlen = sock_recv_guts(s, buf, recvlen, flags);
+ if (readlen < 0) {
+ /* Return an error. */
+ PyBuffer_Release(&pbuf);
+ return NULL;
+ }
+
+ PyBuffer_Release(&pbuf);
+ /* Return the number of bytes read. Note that we do not do anything
+ special here in the case that readlen < recvlen. */
+ return PyLong_FromSsize_t(readlen);
+}
+
+PyDoc_STRVAR(recv_into_doc,
+"recv_into(buffer, [nbytes[, flags]]) -> nbytes_read\n\
+\n\
+A version of recv() that stores its data into a buffer rather than creating \n\
+a new string. Receive up to buffersize bytes from the socket. If buffersize \n\
+is not specified (or 0), receive up to the size available in the given buffer.\n\
+\n\
+See recv() for documentation about the flags.");
+
+struct sock_recvfrom {
+ char* cbuf;
+ Py_ssize_t len;
+ int flags;
+ socklen_t *addrlen;
+ sock_addr_t *addrbuf;
+ Py_ssize_t result;
+};
+
+static int
+sock_recvfrom_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_recvfrom *ctx = data;
+
+ memset(ctx->addrbuf, 0, *ctx->addrlen);
+
+#ifdef MS_WINDOWS
+ if (ctx->len > INT_MAX)
+ ctx->len = INT_MAX;
+ ctx->result = recvfrom(s->sock_fd, ctx->cbuf, (int)ctx->len, ctx->flags,
+ SAS2SA(ctx->addrbuf), ctx->addrlen);
+#else
+ ctx->result = recvfrom(s->sock_fd, ctx->cbuf, ctx->len, ctx->flags,
+ SAS2SA(ctx->addrbuf), ctx->addrlen);
+#endif
+ return (ctx->result >= 0);
+}
+
+
+/*
+ * This is the guts of the recvfrom() and recvfrom_into() methods, which reads
+ * into a char buffer. If you have any inc/dec ref to do to the objects that
+ * contain the buffer, do it in the caller. This function returns the number
+ * of bytes successfully read. If there was an error, it returns -1. Note
+ * that it is also possible that we return fewer bytes than were requested.
+ *
+ * 'addr' is a return value for the address object. Note that you must decref
+ * it yourself.
+ */
+static Py_ssize_t
+sock_recvfrom_guts(PySocketSockObject *s, char* cbuf, Py_ssize_t len, int flags,
+ PyObject** addr)
+{
+ sock_addr_t addrbuf;
+ socklen_t addrlen;
+ struct sock_recvfrom ctx;
+
+ *addr = NULL;
+
+ if (!getsockaddrlen(s, &addrlen))
+ return -1;
+
+ if (!IS_SELECTABLE(s)) {
+ select_error();
+ return -1;
+ }
+
+ ctx.cbuf = cbuf;
+ ctx.len = len;
+ ctx.flags = flags;
+ ctx.addrbuf = &addrbuf;
+ ctx.addrlen = &addrlen;
+ if (sock_call(s, 0, sock_recvfrom_impl, &ctx) < 0)
+ return -1;
+
+ *addr = makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
+ s->sock_proto);
+ if (*addr == NULL)
+ return -1;
+
+ return ctx.result;
+}
+
+/* s.recvfrom(nbytes [,flags]) method */
+
+static PyObject *
+sock_recvfrom(PySocketSockObject *s, PyObject *args)
+{
+ PyObject *buf = NULL;
+ PyObject *addr = NULL;
+ PyObject *ret = NULL;
+ int flags = 0;
+ Py_ssize_t recvlen, outlen;
+
+ if (!PyArg_ParseTuple(args, "n|i:recvfrom", &recvlen, &flags))
+ return NULL;
+
+ if (recvlen < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "negative buffersize in recvfrom");
+ return NULL;
+ }
+
+ buf = PyBytes_FromStringAndSize((char *) 0, recvlen);
+ if (buf == NULL)
+ return NULL;
+
+ outlen = sock_recvfrom_guts(s, PyBytes_AS_STRING(buf),
+ recvlen, flags, &addr);
+ if (outlen < 0) {
+ goto finally;
+ }
+
+ if (outlen != recvlen) {
+ /* We did not read as many bytes as we anticipated, resize the
+ string if possible and be successful. */
+ if (_PyBytes_Resize(&buf, outlen) < 0)
+ /* Oopsy, not so successful after all. */
+ goto finally;
+ }
+
+ ret = PyTuple_Pack(2, buf, addr);
+
+finally:
+ Py_XDECREF(buf);
+ Py_XDECREF(addr);
+ return ret;
+}
+
+PyDoc_STRVAR(recvfrom_doc,
+"recvfrom(buffersize[, flags]) -> (data, address info)\n\
+\n\
+Like recv(buffersize, flags) but also return the sender's address info.");
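+
+/* Datagram-style usage of recvfrom() (illustrative; addresses are examples):
+
+       u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+       u.bind(('127.0.0.1', 9999))
+       data, addr = u.recvfrom(1500)    # addr is e.g. ('127.0.0.1', 54321)
+       u.sendto(data, addr)             # echo back to the sender
+*/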
+
+
+/* s.recvfrom_into(buffer[, nbytes [,flags]]) method */
+
+static PyObject *
+sock_recvfrom_into(PySocketSockObject *s, PyObject *args, PyObject* kwds)
+{
+ static char *kwlist[] = {"buffer", "nbytes", "flags", 0};
+
+ int flags = 0;
+ Py_buffer pbuf;
+ char *buf;
+ Py_ssize_t readlen, buflen, recvlen = 0;
+
+ PyObject *addr = NULL;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "w*|ni:recvfrom_into",
+ kwlist, &pbuf,
+ &recvlen, &flags))
+ return NULL;
+ buf = pbuf.buf;
+ buflen = pbuf.len;
+
+ if (recvlen < 0) {
+ PyBuffer_Release(&pbuf);
+ PyErr_SetString(PyExc_ValueError,
+ "negative buffersize in recvfrom_into");
+ return NULL;
+ }
+ if (recvlen == 0) {
+ /* If nbytes was not specified, use the buffer's length */
+ recvlen = buflen;
+ } else if (recvlen > buflen) {
+ PyBuffer_Release(&pbuf);
+ PyErr_SetString(PyExc_ValueError,
+ "nbytes is greater than the length of the buffer");
+ return NULL;
+ }
+
+ readlen = sock_recvfrom_guts(s, buf, recvlen, flags, &addr);
+ if (readlen < 0) {
+ PyBuffer_Release(&pbuf);
+ /* Return an error */
+ Py_XDECREF(addr);
+ return NULL;
+ }
+
+ PyBuffer_Release(&pbuf);
+ /* Return the number of bytes read and the address. Note that we do
+ not do anything special here in the case that readlen < recvlen. */
+ return Py_BuildValue("nN", readlen, addr);
+}
+
+PyDoc_STRVAR(recvfrom_into_doc,
+"recvfrom_into(buffer[, nbytes[, flags]]) -> (nbytes, address info)\n\
+\n\
+Like recv_into(buffer[, nbytes[, flags]]) but also return the sender's address info.");
+
+/* The sendmsg() and recvmsg[_into]() methods require a working
+ CMSG_LEN(). See the comment near get_CMSG_LEN(). */
+#ifdef CMSG_LEN
+struct sock_recvmsg {
+ struct msghdr *msg;
+ int flags;
+ ssize_t result;
+};
+
+static int
+sock_recvmsg_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_recvmsg *ctx = data;
+
+ ctx->result = recvmsg(s->sock_fd, ctx->msg, ctx->flags);
+ return (ctx->result >= 0);
+}
+
+/*
+ * Call recvmsg() with the supplied iovec structures, flags, and
+ * ancillary data buffer size (controllen). Returns the tuple return
+ * value for recvmsg() or recvmsg_into(), with the first item provided
+ * by the supplied makeval() function. makeval() will be called with
+ * the length read and makeval_data as arguments, and must return a
+ * new reference (which will be decrefed if there is a subsequent
+ * error). On error, closes any file descriptors received via
+ * SCM_RIGHTS.
+ */
+static PyObject *
+sock_recvmsg_guts(PySocketSockObject *s, struct iovec *iov, int iovlen,
+ int flags, Py_ssize_t controllen,
+ PyObject *(*makeval)(ssize_t, void *), void *makeval_data)
+{
+ sock_addr_t addrbuf;
+ socklen_t addrbuflen;
+ struct msghdr msg = {0};
+ PyObject *cmsg_list = NULL, *retval = NULL;
+ void *controlbuf = NULL;
+ struct cmsghdr *cmsgh;
+ size_t cmsgdatalen = 0;
+ int cmsg_status;
+ struct sock_recvmsg ctx;
+
+ /* XXX: POSIX says that msg_name and msg_namelen "shall be
+ ignored" when the socket is connected (Linux fills them in
+ anyway for AF_UNIX sockets at least). Normally msg_namelen
+ seems to be set to 0 if there's no address, but try to
+ initialize msg_name to something that won't be mistaken for a
+ real address if that doesn't happen. */
+ if (!getsockaddrlen(s, &addrbuflen))
+ return NULL;
+ memset(&addrbuf, 0, addrbuflen);
+ SAS2SA(&addrbuf)->sa_family = AF_UNSPEC;
+
+ if (controllen < 0 || controllen > SOCKLEN_T_LIMIT) {
+ PyErr_SetString(PyExc_ValueError,
+ "invalid ancillary data buffer length");
+ return NULL;
+ }
+ if (controllen > 0 && (controlbuf = PyMem_Malloc(controllen)) == NULL)
+ return PyErr_NoMemory();
+
+ /* Make the system call. */
+ if (!IS_SELECTABLE(s)) {
+ select_error();
+ goto finally;
+ }
+
+ msg.msg_name = SAS2SA(&addrbuf);
+ msg.msg_namelen = addrbuflen;
+ msg.msg_iov = iov;
+ msg.msg_iovlen = iovlen;
+ msg.msg_control = controlbuf;
+ msg.msg_controllen = controllen;
+
+ ctx.msg = &msg;
+ ctx.flags = flags;
+ if (sock_call(s, 0, sock_recvmsg_impl, &ctx) < 0)
+ goto finally;
+
+ /* Make list of (level, type, data) tuples from control messages. */
+ if ((cmsg_list = PyList_New(0)) == NULL)
+ goto err_closefds;
+ /* Check for empty ancillary data as old CMSG_FIRSTHDR()
+ implementations didn't do so. */
+ for (cmsgh = ((msg.msg_controllen > 0) ? CMSG_FIRSTHDR(&msg) : NULL);
+ cmsgh != NULL; cmsgh = CMSG_NXTHDR(&msg, cmsgh)) {
+ PyObject *bytes, *tuple;
+ int tmp;
+
+ cmsg_status = get_cmsg_data_len(&msg, cmsgh, &cmsgdatalen);
+ if (cmsg_status != 0) {
+ if (PyErr_WarnEx(PyExc_RuntimeWarning,
+ "received malformed or improperly-truncated "
+ "ancillary data", 1) == -1)
+ goto err_closefds;
+ }
+ if (cmsg_status < 0)
+ break;
+ if (cmsgdatalen > PY_SSIZE_T_MAX) {
+ PyErr_SetString(PyExc_OSError, "control message too long");
+ goto err_closefds;
+ }
+
+ bytes = PyBytes_FromStringAndSize((char *)CMSG_DATA(cmsgh),
+ cmsgdatalen);
+ tuple = Py_BuildValue("iiN", (int)cmsgh->cmsg_level,
+ (int)cmsgh->cmsg_type, bytes);
+ if (tuple == NULL)
+ goto err_closefds;
+ tmp = PyList_Append(cmsg_list, tuple);
+ Py_DECREF(tuple);
+ if (tmp != 0)
+ goto err_closefds;
+
+ if (cmsg_status != 0)
+ break;
+ }
+
+ retval = Py_BuildValue("NOiN",
+ (*makeval)(ctx.result, makeval_data),
+ cmsg_list,
+ (int)msg.msg_flags,
+ makesockaddr(s->sock_fd, SAS2SA(&addrbuf),
+ ((msg.msg_namelen > addrbuflen) ?
+ addrbuflen : msg.msg_namelen),
+ s->sock_proto));
+ if (retval == NULL)
+ goto err_closefds;
+
+finally:
+ Py_XDECREF(cmsg_list);
+ PyMem_Free(controlbuf);
+ return retval;
+
+err_closefds:
+#ifdef SCM_RIGHTS
+ /* Close all descriptors coming from SCM_RIGHTS, so they don't leak. */
+ for (cmsgh = ((msg.msg_controllen > 0) ? CMSG_FIRSTHDR(&msg) : NULL);
+ cmsgh != NULL; cmsgh = CMSG_NXTHDR(&msg, cmsgh)) {
+ cmsg_status = get_cmsg_data_len(&msg, cmsgh, &cmsgdatalen);
+ if (cmsg_status < 0)
+ break;
+ if (cmsgh->cmsg_level == SOL_SOCKET &&
+ cmsgh->cmsg_type == SCM_RIGHTS) {
+ size_t numfds;
+ int *fdp;
+
+ numfds = cmsgdatalen / sizeof(int);
+ fdp = (int *)CMSG_DATA(cmsgh);
+ while (numfds-- > 0)
+ close(*fdp++);
+ }
+ if (cmsg_status != 0)
+ break;
+ }
+#endif /* SCM_RIGHTS */
+ goto finally;
+}
+
+
+static PyObject *
+makeval_recvmsg(ssize_t received, void *data)
+{
+ PyObject **buf = data;
+
+ if (received < PyBytes_GET_SIZE(*buf))
+ _PyBytes_Resize(buf, received);
+ Py_XINCREF(*buf);
+ return *buf;
+}
+
+/* s.recvmsg(bufsize[, ancbufsize[, flags]]) method */
+
+static PyObject *
+sock_recvmsg(PySocketSockObject *s, PyObject *args)
+{
+ Py_ssize_t bufsize, ancbufsize = 0;
+ int flags = 0;
+ struct iovec iov;
+ PyObject *buf = NULL, *retval = NULL;
+
+ if (!PyArg_ParseTuple(args, "n|ni:recvmsg", &bufsize, &ancbufsize, &flags))
+ return NULL;
+
+ if (bufsize < 0) {
+ PyErr_SetString(PyExc_ValueError, "negative buffer size in recvmsg()");
+ return NULL;
+ }
+ if ((buf = PyBytes_FromStringAndSize(NULL, bufsize)) == NULL)
+ return NULL;
+ iov.iov_base = PyBytes_AS_STRING(buf);
+ iov.iov_len = bufsize;
+
+ /* Note that we're passing a pointer to *our pointer* to the bytes
+ object here (&buf); makeval_recvmsg() may incref the object, or
+ deallocate it and set our pointer to NULL. */
+ retval = sock_recvmsg_guts(s, &iov, 1, flags, ancbufsize,
+ &makeval_recvmsg, &buf);
+ Py_XDECREF(buf);
+ return retval;
+}
+
+PyDoc_STRVAR(recvmsg_doc,
+"recvmsg(bufsize[, ancbufsize[, flags]]) -> (data, ancdata, msg_flags, address)\n\
+\n\
+Receive normal data (up to bufsize bytes) and ancillary data from the\n\
+socket. The ancbufsize argument sets the size in bytes of the\n\
+internal buffer used to receive the ancillary data; it defaults to 0,\n\
+meaning that no ancillary data will be received. Appropriate buffer\n\
+sizes for ancillary data can be calculated using CMSG_SPACE() or\n\
+CMSG_LEN(), and items which do not fit into the buffer might be\n\
+truncated or discarded. The flags argument defaults to 0 and has the\n\
+same meaning as for recv().\n\
+\n\
+The return value is a 4-tuple: (data, ancdata, msg_flags, address).\n\
+The data item is a bytes object holding the non-ancillary data\n\
+received. The ancdata item is a list of zero or more tuples\n\
+(cmsg_level, cmsg_type, cmsg_data) representing the ancillary data\n\
+(control messages) received: cmsg_level and cmsg_type are integers\n\
+specifying the protocol level and protocol-specific type respectively,\n\
+and cmsg_data is a bytes object holding the associated data. The\n\
+msg_flags item is the bitwise OR of various flags indicating\n\
+conditions on the received message; see your system documentation for\n\
+details. If the receiving socket is unconnected, address is the\n\
+address of the sending socket, if available; otherwise, its value is\n\
+unspecified.\n\
+\n\
+If recvmsg() raises an exception after the system call returns, it\n\
+will first attempt to close any file descriptors received via the\n\
+SCM_RIGHTS mechanism.");
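+
+/* Illustrative Python-level sketch of recvmsg(): receiving one file
+   descriptor over an AF_UNIX socket via SCM_RIGHTS.  `conn` and the buffer
+   sizes below are placeholders, not part of this module.
+
+       import array, socket
+       fds = array.array("i")
+       data, ancdata, msg_flags, addr = conn.recvmsg(4096,
+                                                     socket.CMSG_LEN(fds.itemsize))
+       for cmsg_level, cmsg_type, cmsg_data in ancdata:
+           if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
+               fds.frombytes(cmsg_data[:len(cmsg_data) - (len(cmsg_data) % fds.itemsize)])
+*/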
+
+
+static PyObject *
+makeval_recvmsg_into(ssize_t received, void *data)
+{
+ return PyLong_FromSsize_t(received);
+}
+
+/* s.recvmsg_into(buffers[, ancbufsize[, flags]]) method */
+
+static PyObject *
+sock_recvmsg_into(PySocketSockObject *s, PyObject *args)
+{
+ Py_ssize_t ancbufsize = 0;
+ int flags = 0;
+ struct iovec *iovs = NULL;
+ Py_ssize_t i, nitems, nbufs = 0;
+ Py_buffer *bufs = NULL;
+ PyObject *buffers_arg, *fast, *retval = NULL;
+
+ if (!PyArg_ParseTuple(args, "O|ni:recvmsg_into",
+ &buffers_arg, &ancbufsize, &flags))
+ return NULL;
+
+ if ((fast = PySequence_Fast(buffers_arg,
+ "recvmsg_into() argument 1 must be an "
+ "iterable")) == NULL)
+ return NULL;
+ nitems = PySequence_Fast_GET_SIZE(fast);
+ if (nitems > INT_MAX) {
+ PyErr_SetString(PyExc_OSError, "recvmsg_into() argument 1 is too long");
+ goto finally;
+ }
+
+ /* Fill in an iovec for each item, and save the Py_buffer
+ structs to release afterwards. */
+ if (nitems > 0 && ((iovs = PyMem_New(struct iovec, nitems)) == NULL ||
+ (bufs = PyMem_New(Py_buffer, nitems)) == NULL)) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ for (; nbufs < nitems; nbufs++) {
+ if (!PyArg_Parse(PySequence_Fast_GET_ITEM(fast, nbufs),
+ "w*;recvmsg_into() argument 1 must be an iterable "
+ "of single-segment read-write buffers",
+ &bufs[nbufs]))
+ goto finally;
+ iovs[nbufs].iov_base = bufs[nbufs].buf;
+ iovs[nbufs].iov_len = bufs[nbufs].len;
+ }
+
+ retval = sock_recvmsg_guts(s, iovs, nitems, flags, ancbufsize,
+ &makeval_recvmsg_into, NULL);
+finally:
+ for (i = 0; i < nbufs; i++)
+ PyBuffer_Release(&bufs[i]);
+ PyMem_Free(bufs);
+ PyMem_Free(iovs);
+ Py_DECREF(fast);
+ return retval;
+}
+
+PyDoc_STRVAR(recvmsg_into_doc,
+"recvmsg_into(buffers[, ancbufsize[, flags]]) -> (nbytes, ancdata, msg_flags, address)\n\
+\n\
+Receive normal data and ancillary data from the socket, scattering the\n\
+non-ancillary data into a series of buffers. The buffers argument\n\
+must be an iterable of objects that export writable buffers\n\
+(e.g. bytearray objects); these will be filled with successive chunks\n\
+of the non-ancillary data until it has all been written or there are\n\
+no more buffers. The ancbufsize argument sets the size in bytes of\n\
+the internal buffer used to receive the ancillary data; it defaults to\n\
+0, meaning that no ancillary data will be received. Appropriate\n\
+buffer sizes for ancillary data can be calculated using CMSG_SPACE()\n\
+or CMSG_LEN(), and items which do not fit into the buffer might be\n\
+truncated or discarded. The flags argument defaults to 0 and has the\n\
+same meaning as for recv().\n\
+\n\
+The return value is a 4-tuple: (nbytes, ancdata, msg_flags, address).\n\
+The nbytes item is the total number of bytes of non-ancillary data\n\
+written into the buffers. The ancdata item is a list of zero or more\n\
+tuples (cmsg_level, cmsg_type, cmsg_data) representing the ancillary\n\
+data (control messages) received: cmsg_level and cmsg_type are\n\
+integers specifying the protocol level and protocol-specific type\n\
+respectively, and cmsg_data is a bytes object holding the associated\n\
+data. The msg_flags item is the bitwise OR of various flags\n\
+indicating conditions on the received message; see your system\n\
+documentation for details. If the receiving socket is unconnected,\n\
+address is the address of the sending socket, if available; otherwise,\n\
+its value is unspecified.\n\
+\n\
+If recvmsg_into() raises an exception after the system call returns,\n\
+it will first attempt to close any file descriptors received via the\n\
+SCM_RIGHTS mechanism.");
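+
+/* Illustrative Python-level sketch of recvmsg_into(): scattering a message
+   into two pre-allocated bytearrays.  `conn` and the sizes are placeholders.
+
+       head, body = bytearray(16), bytearray(4096)
+       nbytes, ancdata, msg_flags, addr = conn.recvmsg_into([head, body])
+       # nbytes counts the non-ancillary bytes written across both buffers.
+*/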
+#endif /* CMSG_LEN */
+
+
+struct sock_send {
+ char *buf;
+ Py_ssize_t len;
+ int flags;
+ Py_ssize_t result;
+};
+
+static int
+sock_send_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_send *ctx = data;
+
+#ifdef MS_WINDOWS
+ if (ctx->len > INT_MAX)
+ ctx->len = INT_MAX;
+ ctx->result = send(s->sock_fd, ctx->buf, (int)ctx->len, ctx->flags);
+#else
+ ctx->result = send(s->sock_fd, ctx->buf, ctx->len, ctx->flags);
+#endif
+ return (ctx->result >= 0);
+}
+
+/* s.send(data [,flags]) method */
+
+static PyObject *
+sock_send(PySocketSockObject *s, PyObject *args)
+{
+ int flags = 0;
+ Py_buffer pbuf;
+ struct sock_send ctx;
+
+ if (!PyArg_ParseTuple(args, "y*|i:send", &pbuf, &flags))
+ return NULL;
+
+ if (!IS_SELECTABLE(s)) {
+ PyBuffer_Release(&pbuf);
+ return select_error();
+ }
+ ctx.buf = pbuf.buf;
+ ctx.len = pbuf.len;
+ ctx.flags = flags;
+ if (sock_call(s, 1, sock_send_impl, &ctx) < 0) {
+ PyBuffer_Release(&pbuf);
+ return NULL;
+ }
+ PyBuffer_Release(&pbuf);
+
+ return PyLong_FromSsize_t(ctx.result);
+}
+
+PyDoc_STRVAR(send_doc,
+"send(data[, flags]) -> count\n\
+\n\
+Send a data string to the socket. For the optional flags\n\
+argument, see the Unix manual. Return the number of bytes\n\
+sent; this may be less than len(data) if the network is busy.");
+
+
+/* s.sendall(data [,flags]) method */
+
+static PyObject *
+sock_sendall(PySocketSockObject *s, PyObject *args)
+{
+ char *buf;
+ Py_ssize_t len, n;
+ int flags = 0;
+ Py_buffer pbuf;
+ struct sock_send ctx;
+ int has_timeout = (s->sock_timeout > 0);
+ _PyTime_t interval = s->sock_timeout;
+ _PyTime_t deadline = 0;
+ int deadline_initialized = 0;
+ PyObject *res = NULL;
+
+ if (!PyArg_ParseTuple(args, "y*|i:sendall", &pbuf, &flags))
+ return NULL;
+ buf = pbuf.buf;
+ len = pbuf.len;
+
+ if (!IS_SELECTABLE(s)) {
+ PyBuffer_Release(&pbuf);
+ return select_error();
+ }
+
+ do {
+ if (has_timeout) {
+ if (deadline_initialized) {
+ /* recompute the timeout */
+ interval = deadline - _PyTime_GetMonotonicClock();
+ }
+ else {
+ deadline_initialized = 1;
+ deadline = _PyTime_GetMonotonicClock() + s->sock_timeout;
+ }
+
+ if (interval <= 0) {
+ PyErr_SetString(socket_timeout, "timed out");
+ goto done;
+ }
+ }
+
+ ctx.buf = buf;
+ ctx.len = len;
+ ctx.flags = flags;
+ if (sock_call_ex(s, 1, sock_send_impl, &ctx, 0, NULL, interval) < 0)
+ goto done;
+ n = ctx.result;
+ assert(n >= 0);
+
+ buf += n;
+ len -= n;
+
+ /* We must run our signal handlers before looping again.
+ send() can return a successful partial write when it is
+ interrupted, so we can't restrict ourselves to EINTR. */
+ if (PyErr_CheckSignals())
+ goto done;
+ } while (len > 0);
+ PyBuffer_Release(&pbuf);
+
+ Py_INCREF(Py_None);
+ res = Py_None;
+
+done:
+ PyBuffer_Release(&pbuf);
+ return res;
+}
+
+PyDoc_STRVAR(sendall_doc,
+"sendall(data[, flags])\n\
+\n\
+Send a data string to the socket. For the optional flags\n\
+argument, see the Unix manual. This calls send() repeatedly\n\
+until all data is sent. If an error occurs, it's impossible\n\
+to tell how much data has been sent.");
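+
+/* Illustrative Python-level sketch contrasting send() and sendall().  `s` is
+   a placeholder for a connected socket and `payload` for a bytes object.
+
+       nsent = s.send(payload)    # may send only a prefix of payload
+       s.sendall(payload)         # loops internally until everything is sent
+*/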
+
+
+struct sock_sendto {
+ char *buf;
+ Py_ssize_t len;
+ int flags;
+ int addrlen;
+ sock_addr_t *addrbuf;
+ Py_ssize_t result;
+};
+
+static int
+sock_sendto_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_sendto *ctx = data;
+
+#ifdef MS_WINDOWS
+ if (ctx->len > INT_MAX)
+ ctx->len = INT_MAX;
+ ctx->result = sendto(s->sock_fd, ctx->buf, (int)ctx->len, ctx->flags,
+ SAS2SA(ctx->addrbuf), ctx->addrlen);
+#else
+ ctx->result = sendto(s->sock_fd, ctx->buf, ctx->len, ctx->flags,
+ SAS2SA(ctx->addrbuf), ctx->addrlen);
+#endif
+ return (ctx->result >= 0);
+}
+
+/* s.sendto(data, [flags,] sockaddr) method */
+
+static PyObject *
+sock_sendto(PySocketSockObject *s, PyObject *args)
+{
+ Py_buffer pbuf;
+ PyObject *addro;
+ Py_ssize_t arglen;
+ sock_addr_t addrbuf;
+ int addrlen, flags;
+ struct sock_sendto ctx;
+
+ flags = 0;
+ arglen = PyTuple_Size(args);
+ switch (arglen) {
+ case 2:
+ PyArg_ParseTuple(args, "y*O:sendto", &pbuf, &addro);
+ break;
+ case 3:
+ PyArg_ParseTuple(args, "y*iO:sendto",
+ &pbuf, &flags, &addro);
+ break;
+ default:
+ PyErr_Format(PyExc_TypeError,
+ "sendto() takes 2 or 3 arguments (%d given)",
+ arglen);
+ return NULL;
+ }
+ if (PyErr_Occurred())
+ return NULL;
+
+ if (!IS_SELECTABLE(s)) {
+ PyBuffer_Release(&pbuf);
+ return select_error();
+ }
+
+ if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen)) {
+ PyBuffer_Release(&pbuf);
+ return NULL;
+ }
+
+ ctx.buf = pbuf.buf;
+ ctx.len = pbuf.len;
+ ctx.flags = flags;
+ ctx.addrlen = addrlen;
+ ctx.addrbuf = &addrbuf;
+ if (sock_call(s, 1, sock_sendto_impl, &ctx) < 0) {
+ PyBuffer_Release(&pbuf);
+ return NULL;
+ }
+ PyBuffer_Release(&pbuf);
+
+ return PyLong_FromSsize_t(ctx.result);
+}
+
+PyDoc_STRVAR(sendto_doc,
+"sendto(data[, flags], address) -> count\n\
+\n\
+Like send(data, flags) but allows specifying the destination address.\n\
+For IP sockets, the address is a pair (hostaddr, port).");
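+
+/* Illustrative Python-level sketch of sendto() on a UDP socket.  The address
+   below is an RFC 5737 documentation address used purely as an example.
+
+       s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+       s.sendto(b"ping", ("192.0.2.1", 9999))
+*/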
+
+
+/* The sendmsg() and recvmsg[_into]() methods require a working
+ CMSG_LEN(). See the comment near get_CMSG_LEN(). */
+#ifdef CMSG_LEN
+struct sock_sendmsg {
+ struct msghdr *msg;
+ int flags;
+ ssize_t result;
+};
+
+static int
+sock_sendmsg_iovec(PySocketSockObject *s, PyObject *data_arg,
+ struct msghdr *msg,
+ Py_buffer **databufsout, Py_ssize_t *ndatabufsout) {
+ Py_ssize_t ndataparts, ndatabufs = 0;
+ int result = -1;
+ struct iovec *iovs = NULL;
+ PyObject *data_fast = NULL;
+ Py_buffer *databufs = NULL;
+
+ /* Fill in an iovec for each message part, and save the Py_buffer
+ structs to release afterwards. */
+ data_fast = PySequence_Fast(data_arg,
+ "sendmsg() argument 1 must be an "
+ "iterable");
+ if (data_fast == NULL) {
+ goto finally;
+ }
+
+ ndataparts = PySequence_Fast_GET_SIZE(data_fast);
+ if (ndataparts > INT_MAX) {
+ PyErr_SetString(PyExc_OSError, "sendmsg() argument 1 is too long");
+ goto finally;
+ }
+
+ msg->msg_iovlen = ndataparts;
+ if (ndataparts > 0) {
+ iovs = PyMem_New(struct iovec, ndataparts);
+ if (iovs == NULL) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ msg->msg_iov = iovs;
+
+ databufs = PyMem_New(Py_buffer, ndataparts);
+ if (databufs == NULL) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ }
+ for (; ndatabufs < ndataparts; ndatabufs++) {
+ if (!PyArg_Parse(PySequence_Fast_GET_ITEM(data_fast, ndatabufs),
+ "y*;sendmsg() argument 1 must be an iterable of "
+ "bytes-like objects",
+ &databufs[ndatabufs]))
+ goto finally;
+ iovs[ndatabufs].iov_base = databufs[ndatabufs].buf;
+ iovs[ndatabufs].iov_len = databufs[ndatabufs].len;
+ }
+ result = 0;
+ finally:
+ *databufsout = databufs;
+ *ndatabufsout = ndatabufs;
+ Py_XDECREF(data_fast);
+ return result;
+}
+
+static int
+sock_sendmsg_impl(PySocketSockObject *s, void *data)
+{
+ struct sock_sendmsg *ctx = data;
+
+ ctx->result = sendmsg(s->sock_fd, ctx->msg, ctx->flags);
+ return (ctx->result >= 0);
+}
+
+/* s.sendmsg(buffers[, ancdata[, flags[, address]]]) method */
+
+static PyObject *
+sock_sendmsg(PySocketSockObject *s, PyObject *args)
+{
+ Py_ssize_t i, ndatabufs = 0, ncmsgs, ncmsgbufs = 0;
+ Py_buffer *databufs = NULL;
+ sock_addr_t addrbuf;
+ struct msghdr msg;
+ struct cmsginfo {
+ int level;
+ int type;
+ Py_buffer data;
+ } *cmsgs = NULL;
+ void *controlbuf = NULL;
+ size_t controllen, controllen_last;
+ int addrlen, flags = 0;
+ PyObject *data_arg, *cmsg_arg = NULL, *addr_arg = NULL,
+ *cmsg_fast = NULL, *retval = NULL;
+ struct sock_sendmsg ctx;
+
+ if (!PyArg_ParseTuple(args, "O|OiO:sendmsg",
+ &data_arg, &cmsg_arg, &flags, &addr_arg)) {
+ return NULL;
+ }
+
+ memset(&msg, 0, sizeof(msg));
+
+ /* Parse destination address. */
+ if (addr_arg != NULL && addr_arg != Py_None) {
+ if (!getsockaddrarg(s, addr_arg, SAS2SA(&addrbuf), &addrlen))
+ goto finally;
+ msg.msg_name = &addrbuf;
+ msg.msg_namelen = addrlen;
+ }
+
+ /* Fill in an iovec for each message part, and save the Py_buffer
+ structs to release afterwards. */
+ if (sock_sendmsg_iovec(s, data_arg, &msg, &databufs, &ndatabufs) == -1) {
+ goto finally;
+ }
+
+ if (cmsg_arg == NULL)
+ ncmsgs = 0;
+ else {
+ if ((cmsg_fast = PySequence_Fast(cmsg_arg,
+ "sendmsg() argument 2 must be an "
+ "iterable")) == NULL)
+ goto finally;
+ ncmsgs = PySequence_Fast_GET_SIZE(cmsg_fast);
+ }
+
+#ifndef CMSG_SPACE
+ if (ncmsgs > 1) {
+ PyErr_SetString(PyExc_OSError,
+ "sending multiple control messages is not supported "
+ "on this system");
+ goto finally;
+ }
+#endif
+ /* Save level, type and Py_buffer for each control message,
+ and calculate total size. */
+ if (ncmsgs > 0 && (cmsgs = PyMem_New(struct cmsginfo, ncmsgs)) == NULL) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ controllen = controllen_last = 0;
+ while (ncmsgbufs < ncmsgs) {
+ size_t bufsize, space;
+
+ if (!PyArg_Parse(PySequence_Fast_GET_ITEM(cmsg_fast, ncmsgbufs),
+ "(iiy*):[sendmsg() ancillary data items]",
+ &cmsgs[ncmsgbufs].level,
+ &cmsgs[ncmsgbufs].type,
+ &cmsgs[ncmsgbufs].data))
+ goto finally;
+ bufsize = cmsgs[ncmsgbufs++].data.len;
+
+#ifdef CMSG_SPACE
+ if (!get_CMSG_SPACE(bufsize, &space)) {
+#else
+ if (!get_CMSG_LEN(bufsize, &space)) {
+#endif
+ PyErr_SetString(PyExc_OSError, "ancillary data item too large");
+ goto finally;
+ }
+ controllen += space;
+ if (controllen > SOCKLEN_T_LIMIT || controllen < controllen_last) {
+ PyErr_SetString(PyExc_OSError, "too much ancillary data");
+ goto finally;
+ }
+ controllen_last = controllen;
+ }
+
+ /* Construct ancillary data block from control message info. */
+ if (ncmsgbufs > 0) {
+ struct cmsghdr *cmsgh = NULL;
+
+ controlbuf = PyMem_Malloc(controllen);
+ if (controlbuf == NULL) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ msg.msg_control = controlbuf;
+
+ msg.msg_controllen = controllen;
+
+ /* Need to zero out the buffer as a workaround for glibc's
+ CMSG_NXTHDR() implementation. After getting the pointer to
+ the next header, it checks its (uninitialized) cmsg_len
+ member to see if the "message" fits in the buffer, and
+ returns NULL if it doesn't. Zero-filling the buffer
+ ensures that this doesn't happen. */
+ memset(controlbuf, 0, controllen);
+
+ for (i = 0; i < ncmsgbufs; i++) {
+ size_t msg_len, data_len = cmsgs[i].data.len;
+ int enough_space = 0;
+
+ cmsgh = (i == 0) ? CMSG_FIRSTHDR(&msg) : CMSG_NXTHDR(&msg, cmsgh);
+ if (cmsgh == NULL) {
+ PyErr_Format(PyExc_RuntimeError,
+ "unexpected NULL result from %s()",
+ (i == 0) ? "CMSG_FIRSTHDR" : "CMSG_NXTHDR");
+ goto finally;
+ }
+ if (!get_CMSG_LEN(data_len, &msg_len)) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "item size out of range for CMSG_LEN()");
+ goto finally;
+ }
+ if (cmsg_min_space(&msg, cmsgh, msg_len)) {
+ size_t space;
+
+ cmsgh->cmsg_len = msg_len;
+ if (get_cmsg_data_space(&msg, cmsgh, &space))
+ enough_space = (space >= data_len);
+ }
+ if (!enough_space) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "ancillary data does not fit in calculated "
+ "space");
+ goto finally;
+ }
+ cmsgh->cmsg_level = cmsgs[i].level;
+ cmsgh->cmsg_type = cmsgs[i].type;
+ memcpy(CMSG_DATA(cmsgh), cmsgs[i].data.buf, data_len);
+ }
+ }
+
+ /* Make the system call. */
+ if (!IS_SELECTABLE(s)) {
+ select_error();
+ goto finally;
+ }
+
+ ctx.msg = &msg;
+ ctx.flags = flags;
+ if (sock_call(s, 1, sock_sendmsg_impl, &ctx) < 0)
+ goto finally;
+
+ retval = PyLong_FromSsize_t(ctx.result);
+
+finally:
+ PyMem_Free(controlbuf);
+ for (i = 0; i < ncmsgbufs; i++)
+ PyBuffer_Release(&cmsgs[i].data);
+ PyMem_Free(cmsgs);
+ Py_XDECREF(cmsg_fast);
+ PyMem_Free(msg.msg_iov);
+ for (i = 0; i < ndatabufs; i++) {
+ PyBuffer_Release(&databufs[i]);
+ }
+ PyMem_Free(databufs);
+ return retval;
+}
+
+PyDoc_STRVAR(sendmsg_doc,
+"sendmsg(buffers[, ancdata[, flags[, address]]]) -> count\n\
+\n\
+Send normal and ancillary data to the socket, gathering the\n\
+non-ancillary data from a series of buffers and concatenating it into\n\
+a single message. The buffers argument specifies the non-ancillary\n\
+data as an iterable of bytes-like objects (e.g. bytes objects).\n\
+The ancdata argument specifies the ancillary data (control messages)\n\
+as an iterable of zero or more tuples (cmsg_level, cmsg_type,\n\
+cmsg_data), where cmsg_level and cmsg_type are integers specifying the\n\
+protocol level and protocol-specific type respectively, and cmsg_data\n\
+is a bytes-like object holding the associated data. The flags\n\
+argument defaults to 0 and has the same meaning as for send(). If\n\
+address is supplied and not None, it sets a destination address for\n\
+the message. The return value is the number of bytes of non-ancillary\n\
+data sent.");
+#endif /* CMSG_LEN */
+
+#ifdef HAVE_SOCKADDR_ALG
+static PyObject*
+sock_sendmsg_afalg(PySocketSockObject *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *retval = NULL;
+
+ Py_ssize_t i, ndatabufs = 0;
+ Py_buffer *databufs = NULL;
+ PyObject *data_arg = NULL;
+
+ Py_buffer iv = {NULL, NULL};
+
+ PyObject *opobj = NULL;
+ int op = -1;
+
+ PyObject *assoclenobj = NULL;
+ int assoclen = -1;
+
+ unsigned int *uiptr;
+ int flags = 0;
+
+ struct msghdr msg;
+ struct cmsghdr *header = NULL;
+ struct af_alg_iv *alg_iv = NULL;
+ struct sock_sendmsg ctx;
+ Py_ssize_t controllen;
+ void *controlbuf = NULL;
+ static char *keywords[] = {"msg", "op", "iv", "assoclen", "flags", 0};
+
+ if (self->sock_family != AF_ALG) {
+ PyErr_SetString(PyExc_OSError,
+ "algset is only supported for AF_ALG");
+ return NULL;
+ }
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds,
+ "|O$O!y*O!i:sendmsg_afalg", keywords,
+ &data_arg,
+ &PyLong_Type, &opobj, &iv,
+ &PyLong_Type, &assoclenobj, &flags)) {
+ return NULL;
+ }
+
+ memset(&msg, 0, sizeof(msg));
+
+ /* op is a required, keyword-only argument >= 0 */
+ if (opobj != NULL) {
+ op = _PyLong_AsInt(opobj);
+ }
+ if (op < 0) {
+ /* override exception from _PyLong_AsInt() */
+ PyErr_SetString(PyExc_TypeError,
+ "Invalid or missing argument 'op'");
+ goto finally;
+ }
+ /* assoclen is optional but must be >= 0 */
+ if (assoclenobj != NULL) {
+ assoclen = _PyLong_AsInt(assoclenobj);
+ if (assoclen == -1 && PyErr_Occurred()) {
+ goto finally;
+ }
+ if (assoclen < 0) {
+ PyErr_SetString(PyExc_TypeError,
+ "assoclen must be positive");
+ goto finally;
+ }
+ }
+
+ controllen = CMSG_SPACE(4);
+ if (iv.buf != NULL) {
+ controllen += CMSG_SPACE(sizeof(*alg_iv) + iv.len);
+ }
+ if (assoclen >= 0) {
+ controllen += CMSG_SPACE(4);
+ }
+
+ controlbuf = PyMem_Malloc(controllen);
+ if (controlbuf == NULL) {
+ PyErr_NoMemory();
+ goto finally;
+ }
+ memset(controlbuf, 0, controllen);
+
+ msg.msg_controllen = controllen;
+ msg.msg_control = controlbuf;
+
+ /* Fill in an iovec for each message part, and save the Py_buffer
+ structs to release afterwards. */
+ if (data_arg != NULL) {
+ if (sock_sendmsg_iovec(self, data_arg, &msg, &databufs, &ndatabufs) == -1) {
+ goto finally;
+ }
+ }
+
+ /* set operation to encrypt or decrypt */
+ header = CMSG_FIRSTHDR(&msg);
+ if (header == NULL) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "unexpected NULL result from CMSG_FIRSTHDR");
+ goto finally;
+ }
+ header->cmsg_level = SOL_ALG;
+ header->cmsg_type = ALG_SET_OP;
+ header->cmsg_len = CMSG_LEN(4);
+ uiptr = (void*)CMSG_DATA(header);
+ *uiptr = (unsigned int)op;
+
+ /* set initialization vector */
+ if (iv.buf != NULL) {
+ header = CMSG_NXTHDR(&msg, header);
+ if (header == NULL) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "unexpected NULL result from CMSG_NXTHDR(iv)");
+ goto finally;
+ }
+ header->cmsg_level = SOL_ALG;
+ header->cmsg_type = ALG_SET_IV;
+ header->cmsg_len = CMSG_SPACE(sizeof(*alg_iv) + iv.len);
+ alg_iv = (void*)CMSG_DATA(header);
+ alg_iv->ivlen = iv.len;
+ memcpy(alg_iv->iv, iv.buf, iv.len);
+ }
+
+ /* set length of associated data for AEAD */
+ if (assoclen >= 0) {
+ header = CMSG_NXTHDR(&msg, header);
+ if (header == NULL) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "unexpected NULL result from CMSG_NXTHDR(assoc)");
+ goto finally;
+ }
+ header->cmsg_level = SOL_ALG;
+ header->cmsg_type = ALG_SET_AEAD_ASSOCLEN;
+ header->cmsg_len = CMSG_LEN(4);
+ uiptr = (void*)CMSG_DATA(header);
+ *uiptr = (unsigned int)assoclen;
+ }
+
+ ctx.msg = &msg;
+ ctx.flags = flags;
+ if (sock_call(self, 1, sock_sendmsg_impl, &ctx) < 0) {
+ goto finally;
+ }
+
+ retval = PyLong_FromSsize_t(ctx.result);
+
+ finally:
+ PyMem_Free(controlbuf);
+ if (iv.buf != NULL) {
+ PyBuffer_Release(&iv);
+ }
+ PyMem_Free(msg.msg_iov);
+ for (i = 0; i < ndatabufs; i++) {
+ PyBuffer_Release(&databufs[i]);
+ }
+ PyMem_Free(databufs);
+ return retval;
+}
+
+PyDoc_STRVAR(sendmsg_afalg_doc,
+"sendmsg_afalg([msg], *, op[, iv[, assoclen[, flags=MSG_MORE]]])\n\
+\n\
+Set operation mode, IV and length of associated data for an AF_ALG\n\
+operation socket.");
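+
+/* Illustrative Python-level sketch of sendmsg_afalg() (Linux AF_ALG only, so
+   relevant only where HAVE_SOCKADDR_ALG is defined).  `key`, `iv` and
+   `plaintext` are placeholders of suitable lengths for cbc(aes).
+
+       s = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET, 0)
+       s.bind(("skcipher", "cbc(aes)"))
+       s.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, key)
+       op, _ = s.accept()
+       op.sendmsg_afalg([plaintext], op=socket.ALG_OP_ENCRYPT, iv=iv)
+       ciphertext = op.recv(len(plaintext))
+*/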
+#endif
+
+/* s.shutdown(how) method */
+
+static PyObject *
+sock_shutdown(PySocketSockObject *s, PyObject *arg)
+{
+ int how;
+ int res;
+
+ how = _PyLong_AsInt(arg);
+ if (how == -1 && PyErr_Occurred())
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ res = shutdown(s->sock_fd, how);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return s->errorhandler();
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(shutdown_doc,
+"shutdown(flag)\n\
+\n\
+Shut down the reading side of the socket (flag == SHUT_RD), the writing side\n\
+of the socket (flag == SHUT_WR), or both ends (flag == SHUT_RDWR).");
+
+#if defined(MS_WINDOWS) && defined(SIO_RCVALL)
+static PyObject*
+sock_ioctl(PySocketSockObject *s, PyObject *arg)
+{
+ unsigned long cmd = SIO_RCVALL;
+ PyObject *argO;
+ DWORD recv;
+
+ if (!PyArg_ParseTuple(arg, "kO:ioctl", &cmd, &argO))
+ return NULL;
+
+ switch (cmd) {
+ case SIO_RCVALL: {
+ unsigned int option = RCVALL_ON;
+ if (!PyArg_ParseTuple(arg, "kI:ioctl", &cmd, &option))
+ return NULL;
+ if (WSAIoctl(s->sock_fd, cmd, &option, sizeof(option),
+ NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
+ return set_error();
+ }
+ return PyLong_FromUnsignedLong(recv); }
+ case SIO_KEEPALIVE_VALS: {
+ struct tcp_keepalive ka;
+ if (!PyArg_ParseTuple(arg, "k(kkk):ioctl", &cmd,
+ &ka.onoff, &ka.keepalivetime, &ka.keepaliveinterval))
+ return NULL;
+ if (WSAIoctl(s->sock_fd, cmd, &ka, sizeof(ka),
+ NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
+ return set_error();
+ }
+ return PyLong_FromUnsignedLong(recv); }
+#if defined(SIO_LOOPBACK_FAST_PATH)
+ case SIO_LOOPBACK_FAST_PATH: {
+ unsigned int option;
+ if (!PyArg_ParseTuple(arg, "kI:ioctl", &cmd, &option))
+ return NULL;
+ if (WSAIoctl(s->sock_fd, cmd, &option, sizeof(option),
+ NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
+ return set_error();
+ }
+ return PyLong_FromUnsignedLong(recv); }
+#endif
+ default:
+ PyErr_Format(PyExc_ValueError, "invalid ioctl command %d", cmd);
+ return NULL;
+ }
+}
+PyDoc_STRVAR(sock_ioctl_doc,
+"ioctl(cmd, option) -> long\n\
+\n\
+Control the socket with WSAIoctl syscall. Currently supported 'cmd' values are\n\
+SIO_RCVALL: 'option' must be one of the socket.RCVALL_* constants.\n\
+SIO_KEEPALIVE_VALS: 'option' is a tuple of (onoff, timeout, interval).\n\
+SIO_LOOPBACK_FAST_PATH: 'option' is a boolean value, and is disabled by default");
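+
+/* Illustrative Python-level sketch of ioctl() (Windows only).  The keep-alive
+   values below (on/off flag, timeout in ms, interval in ms) are examples.
+
+       s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 10000, 3000))
+       s.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)
+*/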
+#endif
+
+#if defined(MS_WINDOWS)
+static PyObject*
+sock_share(PySocketSockObject *s, PyObject *arg)
+{
+ WSAPROTOCOL_INFO info;
+ DWORD processId;
+ int result;
+
+ if (!PyArg_ParseTuple(arg, "I", &processId))
+ return NULL;
+
+ Py_BEGIN_ALLOW_THREADS
+ result = WSADuplicateSocket(s->sock_fd, processId, &info);
+ Py_END_ALLOW_THREADS
+ if (result == SOCKET_ERROR)
+ return set_error();
+ return PyBytes_FromStringAndSize((const char*)&info, sizeof(info));
+}
+PyDoc_STRVAR(sock_share_doc,
+"share(process_id) -> bytes\n\
+\n\
+Share the socket with another process. The target process id\n\
+must be provided and the resulting bytes object passed to the target\n\
+process. There the shared socket can be instantiated by calling\n\
+socket.fromshare().");
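+
+/* Illustrative Python-level sketch of share()/fromshare() (Windows only).
+   `target_pid` is a placeholder for the receiving process id.
+
+       info = s.share(target_pid)     # in the owning process
+       s2 = socket.fromshare(info)    # in the target process
+*/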
+
+
+#endif
+
+/* List of methods for socket objects */
+
+static PyMethodDef sock_methods[] = {
+ {"_accept", (PyCFunction)sock_accept, METH_NOARGS,
+ accept_doc},
+ {"bind", (PyCFunction)sock_bind, METH_O,
+ bind_doc},
+ {"close", (PyCFunction)sock_close, METH_NOARGS,
+ close_doc},
+ {"connect", (PyCFunction)sock_connect, METH_O,
+ connect_doc},
+ {"connect_ex", (PyCFunction)sock_connect_ex, METH_O,
+ connect_ex_doc},
+ {"detach", (PyCFunction)sock_detach, METH_NOARGS,
+ detach_doc},
+ {"fileno", (PyCFunction)sock_fileno, METH_NOARGS,
+ fileno_doc},
+#ifdef HAVE_GETPEERNAME
+ {"getpeername", (PyCFunction)sock_getpeername,
+ METH_NOARGS, getpeername_doc},
+#endif
+ {"getsockname", (PyCFunction)sock_getsockname,
+ METH_NOARGS, getsockname_doc},
+ {"getsockopt", (PyCFunction)sock_getsockopt, METH_VARARGS,
+ getsockopt_doc},
+#if defined(MS_WINDOWS) && defined(SIO_RCVALL)
+ {"ioctl", (PyCFunction)sock_ioctl, METH_VARARGS,
+ sock_ioctl_doc},
+#endif
+#if defined(MS_WINDOWS)
+ {"share", (PyCFunction)sock_share, METH_VARARGS,
+ sock_share_doc},
+#endif
+ {"listen", (PyCFunction)sock_listen, METH_VARARGS,
+ listen_doc},
+ {"recv", (PyCFunction)sock_recv, METH_VARARGS,
+ recv_doc},
+ {"recv_into", (PyCFunction)sock_recv_into, METH_VARARGS | METH_KEYWORDS,
+ recv_into_doc},
+ {"recvfrom", (PyCFunction)sock_recvfrom, METH_VARARGS,
+ recvfrom_doc},
+ {"recvfrom_into", (PyCFunction)sock_recvfrom_into, METH_VARARGS | METH_KEYWORDS,
+ recvfrom_into_doc},
+ {"send", (PyCFunction)sock_send, METH_VARARGS,
+ send_doc},
+ {"sendall", (PyCFunction)sock_sendall, METH_VARARGS,
+ sendall_doc},
+ {"sendto", (PyCFunction)sock_sendto, METH_VARARGS,
+ sendto_doc},
+ {"setblocking", (PyCFunction)sock_setblocking, METH_O,
+ setblocking_doc},
+ {"settimeout", (PyCFunction)sock_settimeout, METH_O,
+ settimeout_doc},
+ {"gettimeout", (PyCFunction)sock_gettimeout, METH_NOARGS,
+ gettimeout_doc},
+ {"setsockopt", (PyCFunction)sock_setsockopt, METH_VARARGS,
+ setsockopt_doc},
+ {"shutdown", (PyCFunction)sock_shutdown, METH_O,
+ shutdown_doc},
+#ifndef UEFI_C_SOURCE
+#ifdef CMSG_LEN
+ {"recvmsg", (PyCFunction)sock_recvmsg, METH_VARARGS,
+ recvmsg_doc},
+ {"recvmsg_into", (PyCFunction)sock_recvmsg_into, METH_VARARGS,
+ recvmsg_into_doc,},
+ {"sendmsg", (PyCFunction)sock_sendmsg, METH_VARARGS,
+ sendmsg_doc},
+#endif
+#ifdef HAVE_SOCKADDR_ALG
+ {"sendmsg_afalg", (PyCFunction)sock_sendmsg_afalg, METH_VARARGS | METH_KEYWORDS,
+ sendmsg_afalg_doc},
+#endif
+#endif
+ {NULL, NULL} /* sentinel */
+};
+
+/* SockObject members */
+static PyMemberDef sock_memberlist[] = {
+ {"family", T_INT, offsetof(PySocketSockObject, sock_family), READONLY, "the socket family"},
+ {"type", T_INT, offsetof(PySocketSockObject, sock_type), READONLY, "the socket type"},
+ {"proto", T_INT, offsetof(PySocketSockObject, sock_proto), READONLY, "the socket protocol"},
+ {0},
+};
+
+static PyGetSetDef sock_getsetlist[] = {
+ {"timeout", (getter)sock_gettimeout, NULL, PyDoc_STR("the socket timeout")},
+ {NULL} /* sentinel */
+};
+
+/* Deallocate a socket object in response to the last Py_DECREF().
+ First close the file descriptor. */
+
+static void
+sock_finalize(PySocketSockObject *s)
+{
+ SOCKET_T fd;
+ PyObject *error_type, *error_value, *error_traceback;
+
+ /* Save the current exception, if any. */
+ PyErr_Fetch(&error_type, &error_value, &error_traceback);
+
+ if (s->sock_fd != INVALID_SOCKET) {
+ if (PyErr_ResourceWarning((PyObject *)s, 1, "unclosed %R", s)) {
+ /* Spurious errors can appear at shutdown */
+ if (PyErr_ExceptionMatches(PyExc_Warning)) {
+ PyErr_WriteUnraisable((PyObject *)s);
+ }
+ }
+
+ /* Only close the socket *after* logging the ResourceWarning, so that
+ the logger can still call socket methods such as
+ socket.getsockname(). If the socket were closed first, those
+ methods would fail with EBADF. */
+ fd = s->sock_fd;
+ s->sock_fd = INVALID_SOCKET;
+
+ /* We do not want to retry upon EINTR: see sock_close() */
+ Py_BEGIN_ALLOW_THREADS
+ (void) SOCKETCLOSE(fd);
+ Py_END_ALLOW_THREADS
+ }
+
+ /* Restore the saved exception. */
+ PyErr_Restore(error_type, error_value, error_traceback);
+}
+
+static void
+sock_dealloc(PySocketSockObject *s)
+{
+ if (PyObject_CallFinalizerFromDealloc((PyObject *)s) < 0)
+ return;
+
+ Py_TYPE(s)->tp_free((PyObject *)s);
+}
+
+
+static PyObject *
+sock_repr(PySocketSockObject *s)
+{
+ long sock_fd;
+ /* On Windows, this test is needed because SOCKET_T is unsigned */
+ if (s->sock_fd == INVALID_SOCKET) {
+ sock_fd = -1;
+ }
+#if SIZEOF_SOCKET_T > SIZEOF_LONG
+ else if (s->sock_fd > LONG_MAX) {
+ /* This can occur on Win64. Printing the descriptor would need a
+ special (ugly) printf formatter for a pointer-sized decimal
+ integer, so only bother with that if it ever proves necessary. */
+ PyErr_SetString(PyExc_OverflowError,
+ "no printf formatter to display "
+ "the socket descriptor in decimal");
+ return NULL;
+ }
+#endif
+ else
+ sock_fd = (long)s->sock_fd;
+ return PyUnicode_FromFormat(
+ "<socket object, fd=%ld, family=%d, type=%d, proto=%d>",
+ sock_fd, s->sock_family,
+ s->sock_type,
+ s->sock_proto);
+}
+
+
+/* Create a new, uninitialized socket object. */
+
+static PyObject *
+sock_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *new;
+
+ new = type->tp_alloc(type, 0);
+ if (new != NULL) {
+ ((PySocketSockObject *)new)->sock_fd = INVALID_SOCKET;
+ ((PySocketSockObject *)new)->sock_timeout = _PyTime_FromSeconds(-1);
+ ((PySocketSockObject *)new)->errorhandler = &set_error;
+ }
+ return new;
+}
+
+
+/* Initialize a new socket object. */
+
+#ifdef SOCK_CLOEXEC
+/* socket() and socketpair() fail with EINVAL on Linux kernels older
+ * than 2.6.27 if the SOCK_CLOEXEC flag is set in the socket type. */
+static int sock_cloexec_works = -1;
+#endif
+
+/*ARGSUSED*/
+static int
+sock_initobj(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ PySocketSockObject *s = (PySocketSockObject *)self;
+ PyObject *fdobj = NULL;
+ SOCKET_T fd = INVALID_SOCKET;
+ int family = AF_INET, type = SOCK_STREAM, proto = 0;
+ static char *keywords[] = {"family", "type", "proto", "fileno", 0};
+#ifndef MS_WINDOWS
+#ifdef SOCK_CLOEXEC
+ int *atomic_flag_works = &sock_cloexec_works;
+#else
+ int *atomic_flag_works = NULL;
+#endif
+#endif
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds,
+ "|iiiO:socket", keywords,
+ &family, &type, &proto, &fdobj))
+ return -1;
+
+ if (fdobj != NULL && fdobj != Py_None) {
+#ifdef MS_WINDOWS
+ /* recreate a socket that was duplicated */
+ if (PyBytes_Check(fdobj)) {
+ WSAPROTOCOL_INFO info;
+ if (PyBytes_GET_SIZE(fdobj) != sizeof(info)) {
+ PyErr_Format(PyExc_ValueError,
+ "socket descriptor string has wrong size, "
+ "should be %zu bytes.", sizeof(info));
+ return -1;
+ }
+ memcpy(&info, PyBytes_AS_STRING(fdobj), sizeof(info));
+ Py_BEGIN_ALLOW_THREADS
+ fd = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
+ FROM_PROTOCOL_INFO, &info, 0, WSA_FLAG_OVERLAPPED);
+ Py_END_ALLOW_THREADS
+ if (fd == INVALID_SOCKET) {
+ set_error();
+ return -1;
+ }
+ family = info.iAddressFamily;
+ type = info.iSocketType;
+ proto = info.iProtocol;
+ }
+ else
+#endif
+ {
+ fd = PyLong_AsSocket_t(fdobj);
+ if (fd == (SOCKET_T)(-1) && PyErr_Occurred())
+ return -1;
+ if (fd == INVALID_SOCKET) {
+ PyErr_SetString(PyExc_ValueError,
+ "can't use invalid socket value");
+ return -1;
+ }
+ }
+ }
+ else {
+#ifdef MS_WINDOWS
+ /* Windows implementation */
+#ifndef WSA_FLAG_NO_HANDLE_INHERIT
+#define WSA_FLAG_NO_HANDLE_INHERIT 0x80
+#endif
+
+ Py_BEGIN_ALLOW_THREADS
+ if (support_wsa_no_inherit) {
+ fd = WSASocket(family, type, proto,
+ NULL, 0,
+ WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT);
+ if (fd == INVALID_SOCKET) {
+ /* Windows 7 or Windows 2008 R2 without SP1 or the hotfix */
+ support_wsa_no_inherit = 0;
+ fd = socket(family, type, proto);
+ }
+ }
+ else {
+ fd = socket(family, type, proto);
+ }
+ Py_END_ALLOW_THREADS
+
+ if (fd == INVALID_SOCKET) {
+ set_error();
+ return -1;
+ }
+
+ if (!support_wsa_no_inherit) {
+ if (!SetHandleInformation((HANDLE)fd, HANDLE_FLAG_INHERIT, 0)) {
+ closesocket(fd);
+ PyErr_SetFromWindowsErr(0);
+ return -1;
+ }
+ }
+#else
+ /* UNIX */
+ Py_BEGIN_ALLOW_THREADS
+#ifdef SOCK_CLOEXEC
+ if (sock_cloexec_works != 0) {
+ fd = socket(family, type | SOCK_CLOEXEC, proto);
+ if (sock_cloexec_works == -1) {
+ if (fd >= 0) {
+ sock_cloexec_works = 1;
+ }
+ else if (errno == EINVAL) {
+ /* Linux older than 2.6.27 does not support SOCK_CLOEXEC */
+ sock_cloexec_works = 0;
+ fd = socket(family, type, proto);
+ }
+ }
+ }
+ else
+#endif
+ {
+ fd = socket(family, type, proto);
+ }
+ Py_END_ALLOW_THREADS
+
+ if (fd == INVALID_SOCKET) {
+ set_error();
+ return -1;
+ }
+
+ if (_Py_set_inheritable(fd, 0, atomic_flag_works) < 0) {
+ SOCKETCLOSE(fd);
+ return -1;
+ }
+#endif
+ }
+ if (init_sockobject(s, fd, family, type, proto) == -1) {
+ SOCKETCLOSE(fd);
+ return -1;
+ }
+
+ return 0;
+
+}
+
+
+/* Type object for socket objects. */
+
+static PyTypeObject sock_type = {
+ PyVarObject_HEAD_INIT(0, 0) /* Must fill in type value later */
+ "_socket.socket", /* tp_name */
+ sizeof(PySocketSockObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ (destructor)sock_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)sock_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE
+ | Py_TPFLAGS_HAVE_FINALIZE, /* tp_flags */
+ sock_doc, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ sock_methods, /* tp_methods */
+ sock_memberlist, /* tp_members */
+ sock_getsetlist, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ sock_initobj, /* tp_init */
+ PyType_GenericAlloc, /* tp_alloc */
+ sock_new, /* tp_new */
+ PyObject_Del, /* tp_free */
+ 0, /* tp_is_gc */
+ 0, /* tp_bases */
+ 0, /* tp_mro */
+ 0, /* tp_cache */
+ 0, /* tp_subclasses */
+ 0, /* tp_weaklist */
+ 0, /* tp_del */
+ 0, /* tp_version_tag */
+ (destructor)sock_finalize, /* tp_finalize */
+};
+
+
+/* Python interface to gethostname(). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_gethostname(PyObject *self, PyObject *unused)
+{
+#ifdef MS_WINDOWS
+ /* Don't use winsock's gethostname, as this returns the ANSI
+ version of the hostname, whereas we need a Unicode string.
+ Otherwise, gethostname apparently also returns the DNS name. */
+ wchar_t buf[MAX_COMPUTERNAME_LENGTH + 1];
+ DWORD size = Py_ARRAY_LENGTH(buf);
+ wchar_t *name;
+ PyObject *result;
+
+ if (GetComputerNameExW(ComputerNamePhysicalDnsHostname, buf, &size))
+ return PyUnicode_FromWideChar(buf, size);
+
+ if (GetLastError() != ERROR_MORE_DATA)
+ return PyErr_SetFromWindowsErr(0);
+
+ if (size == 0)
+ return PyUnicode_New(0, 0);
+
+ /* MSDN says ERROR_MORE_DATA may occur because DNS allows longer
+ names */
+ name = PyMem_New(wchar_t, size);
+ if (!name) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ if (!GetComputerNameExW(ComputerNamePhysicalDnsHostname,
+ name,
+ &size))
+ {
+ PyMem_Free(name);
+ return PyErr_SetFromWindowsErr(0);
+ }
+
+ result = PyUnicode_FromWideChar(name, size);
+ PyMem_Free(name);
+ return result;
+#else
+ char buf[1024];
+ int res;
+ Py_BEGIN_ALLOW_THREADS
+ res = gethostname(buf, (int) sizeof buf - 1);
+ Py_END_ALLOW_THREADS
+ if (res < 0)
+ return set_error();
+ buf[sizeof buf - 1] = '\0';
+ return PyUnicode_DecodeFSDefault(buf);
+#endif
+}
+
+PyDoc_STRVAR(gethostname_doc,
+"gethostname() -> string\n\
+\n\
+Return the current host name.");
+
+#ifdef HAVE_SETHOSTNAME
+PyDoc_STRVAR(sethostname_doc,
+"sethostname(name)\n\n\
+Sets the hostname to name.");
+
+static PyObject *
+socket_sethostname(PyObject *self, PyObject *args)
+{
+ PyObject *hnobj;
+ Py_buffer buf;
+ int res, flag = 0;
+
+#ifdef _AIX
+/* issue #18259, not declared in any useful header file */
+extern int sethostname(const char *, size_t);
+#endif
+
+ if (!PyArg_ParseTuple(args, "S:sethostname", &hnobj)) {
+ PyErr_Clear();
+ if (!PyArg_ParseTuple(args, "O&:sethostname",
+ PyUnicode_FSConverter, &hnobj))
+ return NULL;
+ flag = 1;
+ }
+ res = PyObject_GetBuffer(hnobj, &buf, PyBUF_SIMPLE);
+ if (!res) {
+ res = sethostname(buf.buf, buf.len);
+ PyBuffer_Release(&buf);
+ }
+ if (flag)
+ Py_DECREF(hnobj);
+ if (res)
+ return set_error();
+ Py_RETURN_NONE;
+}
+#endif
+
+/* Python interface to gethostbyname(name). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_gethostbyname(PyObject *self, PyObject *args)
+{
+ char *name;
+ sock_addr_t addrbuf;
+ PyObject *ret = NULL;
+
+ if (!PyArg_ParseTuple(args, "et:gethostbyname", "idna", &name))
+ return NULL;
+ if (setipaddr(name, SAS2SA(&addrbuf), sizeof(addrbuf), AF_INET) < 0)
+ goto finally;
+ ret = makeipaddr(SAS2SA(&addrbuf), sizeof(struct sockaddr_in));
+finally:
+ PyMem_Free(name);
+ return ret;
+}
+
+PyDoc_STRVAR(gethostbyname_doc,
+"gethostbyname(host) -> address\n\
+\n\
+Return the IP address (a string of the form '255.255.255.255') for a host.");
+
+
+static PyObject*
+sock_decode_hostname(const char *name)
+{
+#ifdef MS_WINDOWS
+ /* Issue #26227: gethostbyaddr() returns a string encoded
+ * to the ANSI code page */
+ return PyUnicode_DecodeFSDefault(name);
+#else
+ /* Decode from UTF-8 */
+ return PyUnicode_FromString(name);
+#endif
+}
+
+/* Convenience function common to gethostbyname_ex and gethostbyaddr */
+
+static PyObject *
+gethost_common(struct hostent *h, struct sockaddr *addr, size_t alen, int af)
+{
+ char **pch;
+ PyObject *rtn_tuple = (PyObject *)NULL;
+ PyObject *name_list = (PyObject *)NULL;
+ PyObject *addr_list = (PyObject *)NULL;
+ PyObject *tmp;
+ PyObject *name;
+
+ if (h == NULL) {
+ /* Let's get a real error message to return */
+ set_herror(h_errno);
+ return NULL;
+ }
+
+ if (h->h_addrtype != af) {
+ /* Let's get a real error message to return */
+ errno = EAFNOSUPPORT;
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ switch (af) {
+
+ case AF_INET:
+ if (alen < sizeof(struct sockaddr_in))
+ return NULL;
+ break;
+
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ if (alen < sizeof(struct sockaddr_in6))
+ return NULL;
+ break;
+#endif
+
+ }
+
+ if ((name_list = PyList_New(0)) == NULL)
+ goto err;
+
+ if ((addr_list = PyList_New(0)) == NULL)
+ goto err;
+
+ /* SF #1511317: h_aliases can be NULL */
+ if (h->h_aliases) {
+ for (pch = h->h_aliases; *pch != NULL; pch++) {
+ int status;
+ tmp = PyUnicode_FromString(*pch);
+ if (tmp == NULL)
+ goto err;
+
+ status = PyList_Append(name_list, tmp);
+ Py_DECREF(tmp);
+
+ if (status)
+ goto err;
+ }
+ }
+
+ for (pch = h->h_addr_list; *pch != NULL; pch++) {
+ int status;
+
+ switch (af) {
+
+ case AF_INET:
+ {
+ struct sockaddr_in sin;
+ memset(&sin, 0, sizeof(sin));
+ sin.sin_family = af;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin.sin_len = sizeof(sin);
+#endif
+ memcpy(&sin.sin_addr, *pch, sizeof(sin.sin_addr));
+ tmp = makeipaddr((struct sockaddr *)&sin, sizeof(sin));
+
+ if (pch == h->h_addr_list && alen >= sizeof(sin))
+ memcpy((char *) addr, &sin, sizeof(sin));
+ break;
+ }
+
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ {
+ struct sockaddr_in6 sin6;
+ memset(&sin6, 0, sizeof(sin6));
+ sin6.sin6_family = af;
+#ifdef HAVE_SOCKADDR_SA_LEN
+ sin6.sin6_len = sizeof(sin6);
+#endif
+ memcpy(&sin6.sin6_addr, *pch, sizeof(sin6.sin6_addr));
+ tmp = makeipaddr((struct sockaddr *)&sin6,
+ sizeof(sin6));
+
+ if (pch == h->h_addr_list && alen >= sizeof(sin6))
+ memcpy((char *) addr, &sin6, sizeof(sin6));
+ break;
+ }
+#endif
+
+ default: /* can't happen */
+ PyErr_SetString(PyExc_OSError,
+ "unsupported address family");
+ return NULL;
+ }
+
+ if (tmp == NULL)
+ goto err;
+
+ status = PyList_Append(addr_list, tmp);
+ Py_DECREF(tmp);
+
+ if (status)
+ goto err;
+ }
+
+ name = sock_decode_hostname(h->h_name);
+ if (name == NULL)
+ goto err;
+ rtn_tuple = Py_BuildValue("NOO", name, name_list, addr_list);
+
+ err:
+ Py_XDECREF(name_list);
+ Py_XDECREF(addr_list);
+ return rtn_tuple;
+}
+
+
+/* Python interface to gethostbyname_ex(name). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_gethostbyname_ex(PyObject *self, PyObject *args)
+{
+ char *name;
+ struct hostent *h;
+ sock_addr_t addr;
+ struct sockaddr *sa;
+ PyObject *ret = NULL;
+#ifdef HAVE_GETHOSTBYNAME_R
+ struct hostent hp_allocated;
+#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
+ struct hostent_data data;
+#else
+ char buf[16384];
+ int buf_len = (sizeof buf) - 1;
+ int errnop;
+#endif
+#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
+ int result;
+#endif
+#endif /* HAVE_GETHOSTBYNAME_R */
+
+ if (!PyArg_ParseTuple(args, "et:gethostbyname_ex", "idna", &name))
+ return NULL;
+ if (setipaddr(name, SAS2SA(&addr), sizeof(addr), AF_INET) < 0)
+ goto finally;
+ Py_BEGIN_ALLOW_THREADS
+#ifdef HAVE_GETHOSTBYNAME_R
+#if defined(HAVE_GETHOSTBYNAME_R_6_ARG)
+ gethostbyname_r(name, &hp_allocated, buf, buf_len,
+ &h, &errnop);
+#elif defined(HAVE_GETHOSTBYNAME_R_5_ARG)
+ h = gethostbyname_r(name, &hp_allocated, buf, buf_len, &errnop);
+#else /* HAVE_GETHOSTBYNAME_R_3_ARG */
+ memset((void *) &data, '\0', sizeof(data));
+ result = gethostbyname_r(name, &hp_allocated, &data);
+ h = (result != 0) ? NULL : &hp_allocated;
+#endif
+#else /* not HAVE_GETHOSTBYNAME_R */
+#ifdef USE_GETHOSTBYNAME_LOCK
+ PyThread_acquire_lock(netdb_lock, 1);
+#endif
+ h = gethostbyname(name);
+#endif /* HAVE_GETHOSTBYNAME_R */
+ Py_END_ALLOW_THREADS
+ /* Some C libraries would require addr.__ss_family instead of
+ addr.ss_family.
+ Therefore, we cast the sockaddr_storage into sockaddr to
+ access sa_family. */
+ sa = SAS2SA(&addr);
+ ret = gethost_common(h, SAS2SA(&addr), sizeof(addr),
+ sa->sa_family);
+#ifdef USE_GETHOSTBYNAME_LOCK
+ PyThread_release_lock(netdb_lock);
+#endif
+finally:
+ PyMem_Free(name);
+ return ret;
+}
+
+PyDoc_STRVAR(ghbn_ex_doc,
+"gethostbyname_ex(host) -> (name, aliaslist, addresslist)\n\
+\n\
+Return the true host name, a list of aliases, and a list of IP addresses,\n\
+for a host. The host argument is a string giving a host name or IP number.");
+
+
+/* Python interface to gethostbyaddr(IP). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_gethostbyaddr(PyObject *self, PyObject *args)
+{
+ sock_addr_t addr;
+ struct sockaddr *sa = SAS2SA(&addr);
+ char *ip_num;
+ struct hostent *h;
+ PyObject *ret = NULL;
+#ifdef HAVE_GETHOSTBYNAME_R
+ struct hostent hp_allocated;
+#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
+ struct hostent_data data;
+#else
+ /* glibcs up to 2.10 assume that the buf argument to
+ gethostbyaddr_r is 8-byte aligned, which at least llvm-gcc
+ does not ensure. The attribute below instructs the compiler
+ to maintain this alignment. */
+ char buf[16384] Py_ALIGNED(8);
+ int buf_len = (sizeof buf) - 1;
+ int errnop;
+#endif
+#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
+ int result;
+#endif
+#endif /* HAVE_GETHOSTBYNAME_R */
+ const char *ap;
+ int al;
+ int af;
+
+ if (!PyArg_ParseTuple(args, "et:gethostbyaddr", "idna", &ip_num))
+ return NULL;
+ af = AF_UNSPEC;
+ if (setipaddr(ip_num, sa, sizeof(addr), af) < 0)
+ goto finally;
+ af = sa->sa_family;
+ ap = NULL;
+ /* al = 0; */
+ switch (af) {
+ case AF_INET:
+ ap = (char *)&((struct sockaddr_in *)sa)->sin_addr;
+ al = sizeof(((struct sockaddr_in *)sa)->sin_addr);
+ break;
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ ap = (char *)&((struct sockaddr_in6 *)sa)->sin6_addr;
+ al = sizeof(((struct sockaddr_in6 *)sa)->sin6_addr);
+ break;
+#endif
+ default:
+ PyErr_SetString(PyExc_OSError, "unsupported address family");
+ goto finally;
+ }
+ Py_BEGIN_ALLOW_THREADS
+#ifdef HAVE_GETHOSTBYNAME_R
+#if defined(HAVE_GETHOSTBYNAME_R_6_ARG)
+ gethostbyaddr_r(ap, al, af,
+ &hp_allocated, buf, buf_len,
+ &h, &errnop);
+#elif defined(HAVE_GETHOSTBYNAME_R_5_ARG)
+ h = gethostbyaddr_r(ap, al, af,
+ &hp_allocated, buf, buf_len, &errnop);
+#else /* HAVE_GETHOSTBYNAME_R_3_ARG */
+ memset((void *) &data, '\0', sizeof(data));
+ result = gethostbyaddr_r(ap, al, af, &hp_allocated, &data);
+ h = (result != 0) ? NULL : &hp_allocated;
+#endif
+#else /* not HAVE_GETHOSTBYNAME_R */
+#ifdef USE_GETHOSTBYNAME_LOCK
+ PyThread_acquire_lock(netdb_lock, 1);
+#endif
+ h = gethostbyaddr(ap, al, af);
+#endif /* HAVE_GETHOSTBYNAME_R */
+ Py_END_ALLOW_THREADS
+ ret = gethost_common(h, SAS2SA(&addr), sizeof(addr), af);
+#ifdef USE_GETHOSTBYNAME_LOCK
+ PyThread_release_lock(netdb_lock);
+#endif
+finally:
+ PyMem_Free(ip_num);
+ return ret;
+}
+
+PyDoc_STRVAR(gethostbyaddr_doc,
+"gethostbyaddr(host) -> (name, aliaslist, addresslist)\n\
+\n\
+Return the true host name, a list of aliases, and a list of IP addresses,\n\
+for a host. The host argument is a string giving a host name or IP number.");
+
+
+/* Python interface to getservbyname(name).
+ This only returns the port number, since the other info is already
+ known or not useful (like the list of aliases). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_getservbyname(PyObject *self, PyObject *args)
+{
+ const char *name, *proto=NULL;
+ struct servent *sp;
+ if (!PyArg_ParseTuple(args, "s|s:getservbyname", &name, &proto))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ sp = getservbyname(name, proto);
+ Py_END_ALLOW_THREADS
+ if (sp == NULL) {
+ PyErr_SetString(PyExc_OSError, "service/proto not found");
+ return NULL;
+ }
+ return PyLong_FromLong((long) ntohs(sp->s_port));
+}
+
+PyDoc_STRVAR(getservbyname_doc,
+"getservbyname(servicename[, protocolname]) -> integer\n\
+\n\
+Return a port number from a service name and protocol name.\n\
+The optional protocol name, if given, should be 'tcp' or 'udp',\n\
+otherwise any protocol will match.");
+
+
+/* Python interface to getservbyport(port).
+ This only returns the service name, since the other info is already
+ known or not useful (like the list of aliases). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_getservbyport(PyObject *self, PyObject *args)
+{
+ int port;
+ const char *proto=NULL;
+ struct servent *sp;
+ if (!PyArg_ParseTuple(args, "i|s:getservbyport", &port, &proto))
+ return NULL;
+ if (port < 0 || port > 0xffff) {
+ PyErr_SetString(
+ PyExc_OverflowError,
+ "getservbyport: port must be 0-65535.");
+ return NULL;
+ }
+ Py_BEGIN_ALLOW_THREADS
+ sp = getservbyport(htons((short)port), proto);
+ Py_END_ALLOW_THREADS
+ if (sp == NULL) {
+ PyErr_SetString(PyExc_OSError, "port/proto not found");
+ return NULL;
+ }
+ return PyUnicode_FromString(sp->s_name);
+}
+
+PyDoc_STRVAR(getservbyport_doc,
+"getservbyport(port[, protocolname]) -> string\n\
+\n\
+Return the service name from a port number and protocol name.\n\
+The optional protocol name, if given, should be 'tcp' or 'udp',\n\
+otherwise any protocol will match.");
+
+/* Python interface to getprotobyname(name).
+ This only returns the protocol number, since the other info is
+ already known or not useful (like the list of aliases). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_getprotobyname(PyObject *self, PyObject *args)
+{
+ const char *name;
+ struct protoent *sp;
+ if (!PyArg_ParseTuple(args, "s:getprotobyname", &name))
+ return NULL;
+ Py_BEGIN_ALLOW_THREADS
+ sp = getprotobyname(name);
+ Py_END_ALLOW_THREADS
+ if (sp == NULL) {
+ PyErr_SetString(PyExc_OSError, "protocol not found");
+ return NULL;
+ }
+ return PyLong_FromLong((long) sp->p_proto);
+}
+
+PyDoc_STRVAR(getprotobyname_doc,
+"getprotobyname(name) -> integer\n\
+\n\
+Return the protocol number for the named protocol. (Rarely used.)");
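+
+/* Illustrative Python-level sketch of the service/protocol lookups above.
+   The results depend on the platform's services and protocols databases.
+
+       socket.getservbyname("http", "tcp")   # -> 80
+       socket.getservbyport(80, "tcp")       # -> 'http'
+       socket.getprotobyname("tcp")          # -> 6
+*/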
+
+
+#ifndef NO_DUP
+/* dup() function for socket fds */
+
+static PyObject *
+socket_dup(PyObject *self, PyObject *fdobj)
+{
+ SOCKET_T fd, newfd;
+ PyObject *newfdobj;
+#ifdef MS_WINDOWS
+ WSAPROTOCOL_INFO info;
+#endif
+
+ fd = PyLong_AsSocket_t(fdobj);
+ if (fd == (SOCKET_T)(-1) && PyErr_Occurred())
+ return NULL;
+
+#ifdef MS_WINDOWS
+ if (WSADuplicateSocket(fd, GetCurrentProcessId(), &info))
+ return set_error();
+
+ newfd = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
+ FROM_PROTOCOL_INFO,
+ &info, 0, WSA_FLAG_OVERLAPPED);
+ if (newfd == INVALID_SOCKET)
+ return set_error();
+
+ if (!SetHandleInformation((HANDLE)newfd, HANDLE_FLAG_INHERIT, 0)) {
+ closesocket(newfd);
+ PyErr_SetFromWindowsErr(0);
+ return NULL;
+ }
+#else
+ /* On UNIX, dup can be used to duplicate the file descriptor of a socket */
+ newfd = _Py_dup(fd);
+ if (newfd == INVALID_SOCKET)
+ return NULL;
+#endif
+
+ newfdobj = PyLong_FromSocket_t(newfd);
+ if (newfdobj == NULL)
+ SOCKETCLOSE(newfd);
+ return newfdobj;
+}
+
+PyDoc_STRVAR(dup_doc,
+"dup(integer) -> integer\n\
+\n\
+Duplicate an integer socket file descriptor. This is like os.dup(), but for\n\
+sockets; on some platforms os.dup() won't work for socket file descriptors.");
+#endif
+
+
+#ifdef HAVE_SOCKETPAIR
+/* Create a pair of sockets using the socketpair() function.
+ Arguments as for socket() except the default family is AF_UNIX if
+ defined on the platform; otherwise, the default is AF_INET. */
+
+/*ARGSUSED*/
+static PyObject *
+socket_socketpair(PyObject *self, PyObject *args)
+{
+ PySocketSockObject *s0 = NULL, *s1 = NULL;
+ SOCKET_T sv[2];
+ int family, type = SOCK_STREAM, proto = 0;
+ PyObject *res = NULL;
+#ifdef SOCK_CLOEXEC
+ int *atomic_flag_works = &sock_cloexec_works;
+#else
+ int *atomic_flag_works = NULL;
+#endif
+ int ret;
+
+#if defined(AF_UNIX)
+ family = AF_UNIX;
+#else
+ family = AF_INET;
+#endif
+ if (!PyArg_ParseTuple(args, "|iii:socketpair",
+ &family, &type, &proto))
+ return NULL;
+
+ /* Create a pair of socket fds */
+ Py_BEGIN_ALLOW_THREADS
+#ifdef SOCK_CLOEXEC
+ if (sock_cloexec_works != 0) {
+ ret = socketpair(family, type | SOCK_CLOEXEC, proto, sv);
+ if (sock_cloexec_works == -1) {
+ if (ret >= 0) {
+ sock_cloexec_works = 1;
+ }
+ else if (errno == EINVAL) {
+ /* Linux older than 2.6.27 does not support SOCK_CLOEXEC */
+ sock_cloexec_works = 0;
+ ret = socketpair(family, type, proto, sv);
+ }
+ }
+ }
+ else
+#endif
+ {
+ ret = socketpair(family, type, proto, sv);
+ }
+ Py_END_ALLOW_THREADS
+
+ if (ret < 0)
+ return set_error();
+
+ if (_Py_set_inheritable(sv[0], 0, atomic_flag_works) < 0)
+ goto finally;
+ if (_Py_set_inheritable(sv[1], 0, atomic_flag_works) < 0)
+ goto finally;
+
+ s0 = new_sockobject(sv[0], family, type, proto);
+ if (s0 == NULL)
+ goto finally;
+ s1 = new_sockobject(sv[1], family, type, proto);
+ if (s1 == NULL)
+ goto finally;
+ res = PyTuple_Pack(2, s0, s1);
+
+finally:
+ if (res == NULL) {
+ if (s0 == NULL)
+ SOCKETCLOSE(sv[0]);
+ if (s1 == NULL)
+ SOCKETCLOSE(sv[1]);
+ }
+ Py_XDECREF(s0);
+ Py_XDECREF(s1);
+ return res;
+}
+
+PyDoc_STRVAR(socketpair_doc,
+"socketpair([family[, type [, proto]]]) -> (socket object, socket object)\n\
+\n\
+Create a pair of socket objects from the sockets returned by the platform\n\
+socketpair() function.\n\
+The arguments are the same as for socket() except the default family is\n\
+AF_UNIX if defined on the platform; otherwise, the default is AF_INET.");
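+
+/* Illustrative Python-level sketch of socketpair(); only available where the
+   platform provides the socketpair() call (HAVE_SOCKETPAIR).
+
+       a, b = socket.socketpair()
+       a.sendall(b"ping")
+       assert b.recv(4) == b"ping"
+*/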
+
+#endif /* HAVE_SOCKETPAIR */
+
+
+static PyObject *
+socket_ntohs(PyObject *self, PyObject *args)
+{
+ int x1, x2;
+
+ if (!PyArg_ParseTuple(args, "i:ntohs", &x1)) {
+ return NULL;
+ }
+ if (x1 < 0) {
+ PyErr_SetString(PyExc_OverflowError,
+ "can't convert negative number to unsigned long");
+ return NULL;
+ }
+ x2 = (unsigned int)ntohs((unsigned short)x1);
+ return PyLong_FromLong(x2);
+}
+
+PyDoc_STRVAR(ntohs_doc,
+"ntohs(integer) -> integer\n\
+\n\
+Convert a 16-bit integer from network to host byte order.");
+
+
+static PyObject *
+socket_ntohl(PyObject *self, PyObject *arg)
+{
+ unsigned long x;
+
+ if (PyLong_Check(arg)) {
+ x = PyLong_AsUnsignedLong(arg);
+ if (x == (unsigned long) -1 && PyErr_Occurred())
+ return NULL;
+#if SIZEOF_LONG > 4
+ {
+ unsigned long y;
+ /* only want the trailing 32 bits */
+ y = x & 0xFFFFFFFFUL;
+ if (y ^ x)
+ return PyErr_Format(PyExc_OverflowError,
+ "int larger than 32 bits");
+ x = y;
+ }
+#endif
+ }
+ else
+ return PyErr_Format(PyExc_TypeError,
+ "expected int, %s found",
+ Py_TYPE(arg)->tp_name);
+ return PyLong_FromUnsignedLong(ntohl(x));
+}
+
+PyDoc_STRVAR(ntohl_doc,
+"ntohl(integer) -> integer\n\
+\n\
+Convert a 32-bit integer from network to host byte order.");
+
+
+static PyObject *
+socket_htons(PyObject *self, PyObject *args)
+{
+ int x1, x2;
+
+ if (!PyArg_ParseTuple(args, "i:htons", &x1)) {
+ return NULL;
+ }
+ if (x1 < 0) {
+ PyErr_SetString(PyExc_OverflowError,
+ "can't convert negative number to unsigned long");
+ return NULL;
+ }
+ x2 = (unsigned int)htons((unsigned short)x1);
+ return PyLong_FromLong(x2);
+}
+
+PyDoc_STRVAR(htons_doc,
+"htons(integer) -> integer\n\
+\n\
+Convert a 16-bit integer from host to network byte order.");
+
+
+static PyObject *
+socket_htonl(PyObject *self, PyObject *arg)
+{
+ unsigned long x;
+
+ if (PyLong_Check(arg)) {
+ x = PyLong_AsUnsignedLong(arg);
+ if (x == (unsigned long) -1 && PyErr_Occurred())
+ return NULL;
+#if SIZEOF_LONG > 4
+ {
+ unsigned long y;
+ /* only want the trailing 32 bits */
+ y = x & 0xFFFFFFFFUL;
+ if (y ^ x)
+ return PyErr_Format(PyExc_OverflowError,
+ "int larger than 32 bits");
+ x = y;
+ }
+#endif
+ }
+ else
+ return PyErr_Format(PyExc_TypeError,
+ "expected int, %s found",
+ Py_TYPE(arg)->tp_name);
+ return PyLong_FromUnsignedLong(htonl((unsigned long)x));
+}
+
+PyDoc_STRVAR(htonl_doc,
+"htonl(integer) -> integer\n\
+\n\
+Convert a 32-bit integer from host to network byte order.");
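+
+/* Illustrative Python-level sketch of the byte-order helpers above.  On a
+   little-endian host htons() swaps the bytes, on a big-endian host it is a
+   no-op; the round trip is always the identity.
+
+       socket.ntohs(socket.htons(0x1234)) == 0x1234
+       socket.ntohl(socket.htonl(0xC0A80001)) == 0xC0A80001
+*/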
+
+/* socket.inet_aton() and socket.inet_ntoa() functions. */
+
+PyDoc_STRVAR(inet_aton_doc,
+"inet_aton(string) -> bytes giving packed 32-bit IP representation\n\
+\n\
+Convert an IP address in string format (123.45.67.89) to the 32-bit packed\n\
+binary format used in low-level network functions.");
+
+static PyObject*
+socket_inet_aton(PyObject *self, PyObject *args)
+{
+#ifdef HAVE_INET_ATON
+ struct in_addr buf;
+#endif
+
+#if !defined(HAVE_INET_ATON) || defined(USE_INET_ATON_WEAKLINK)
+#if (SIZEOF_INT != 4)
+#error "Not sure if in_addr_t exists and int is not 32-bits."
+#endif
+ /* Have to use inet_addr() instead */
+ unsigned int packed_addr;
+#endif
+ const char *ip_addr;
+
+ if (!PyArg_ParseTuple(args, "s:inet_aton", &ip_addr))
+ return NULL;
+
+
+#ifdef HAVE_INET_ATON
+
+#ifdef USE_INET_ATON_WEAKLINK
+ if (inet_aton != NULL) {
+#endif
+ if (inet_aton(ip_addr, &buf))
+ return PyBytes_FromStringAndSize((char *)(&buf),
+ sizeof(buf));
+
+ PyErr_SetString(PyExc_OSError,
+ "illegal IP address string passed to inet_aton");
+ return NULL;
+
+#ifdef USE_INET_ATON_WEAKLINK
+ } else {
+#endif
+
+#endif
+
+#if !defined(HAVE_INET_ATON) || defined(USE_INET_ATON_WEAKLINK)
+
+ /* special-case this address as inet_addr might return INADDR_NONE
+ * for this */
+ if (strcmp(ip_addr, "255.255.255.255") == 0) {
+ packed_addr = INADDR_BROADCAST;
+ } else {
+
+ packed_addr = inet_addr(ip_addr);
+
+ if (packed_addr == INADDR_NONE) { /* invalid address */
+ PyErr_SetString(PyExc_OSError,
+ "illegal IP address string passed to inet_aton");
+ return NULL;
+ }
+ }
+ return PyBytes_FromStringAndSize((char *) &packed_addr,
+ sizeof(packed_addr));
+
+#ifdef USE_INET_ATON_WEAKLINK
+ }
+#endif
+
+#endif
+}
+
+PyDoc_STRVAR(inet_ntoa_doc,
+"inet_ntoa(packed_ip) -> ip_address_string\n\
+\n\
+Convert an IP address from 32-bit packed binary format to string format");
+
+static PyObject*
+socket_inet_ntoa(PyObject *self, PyObject *args)
+{
+ Py_buffer packed_ip;
+ struct in_addr packed_addr;
+
+ if (!PyArg_ParseTuple(args, "y*:inet_ntoa", &packed_ip)) {
+ return NULL;
+ }
+
+ if (packed_ip.len != sizeof(packed_addr)) {
+ PyErr_SetString(PyExc_OSError,
+ "packed IP wrong length for inet_ntoa");
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+
+ memcpy(&packed_addr, packed_ip.buf, packed_ip.len);
+ PyBuffer_Release(&packed_ip);
+
+ return PyUnicode_FromString(inet_ntoa(packed_addr));
+}
+
+#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
+
+PyDoc_STRVAR(inet_pton_doc,
+"inet_pton(af, ip) -> packed IP address string\n\
+\n\
+Convert an IP address from string format to a packed string suitable\n\
+for use with low-level network functions.");
+
+#endif
+
+#ifdef HAVE_INET_PTON
+
+static PyObject *
+socket_inet_pton(PyObject *self, PyObject *args)
+{
+ int af;
+ const char* ip;
+ int retval;
+#ifdef ENABLE_IPV6
+ char packed[Py_MAX(sizeof(struct in_addr), sizeof(struct in6_addr))];
+#else
+ char packed[sizeof(struct in_addr)];
+#endif
+ if (!PyArg_ParseTuple(args, "is:inet_pton", &af, &ip)) {
+ return NULL;
+ }
+
+#if !defined(ENABLE_IPV6) && defined(AF_INET6)
+ if(af == AF_INET6) {
+ PyErr_SetString(PyExc_OSError,
+ "can't use AF_INET6, IPv6 is disabled");
+ return NULL;
+ }
+#endif
+
+ retval = inet_pton(af, ip, packed);
+ if (retval < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ } else if (retval == 0) {
+ PyErr_SetString(PyExc_OSError,
+ "illegal IP address string passed to inet_pton");
+ return NULL;
+ } else if (af == AF_INET) {
+ return PyBytes_FromStringAndSize(packed,
+ sizeof(struct in_addr));
+#ifdef ENABLE_IPV6
+ } else if (af == AF_INET6) {
+ return PyBytes_FromStringAndSize(packed,
+ sizeof(struct in6_addr));
+#endif
+ } else {
+ PyErr_SetString(PyExc_OSError, "unknown address family");
+ return NULL;
+ }
+}
+#elif defined(MS_WINDOWS)
+
+static PyObject *
+socket_inet_pton(PyObject *self, PyObject *args)
+{
+ int af;
+ char* ip;
+ struct sockaddr_in6 addr;
+ INT ret, size;
+
+ if (!PyArg_ParseTuple(args, "is:inet_pton", &af, &ip)) {
+ return NULL;
+ }
+
+ size = sizeof(addr);
+ ret = WSAStringToAddressA(ip, af, NULL, (LPSOCKADDR)&addr, &size);
+
+ if (ret) {
+ PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
+ return NULL;
+ } else if(af == AF_INET) {
+ struct sockaddr_in *addr4 = (struct sockaddr_in*)&addr;
+ return PyBytes_FromStringAndSize((const char *)&(addr4->sin_addr),
+ sizeof(addr4->sin_addr));
+ } else if (af == AF_INET6) {
+ return PyBytes_FromStringAndSize((const char *)&(addr.sin6_addr),
+ sizeof(addr.sin6_addr));
+ } else {
+ PyErr_SetString(PyExc_OSError, "unknown address family");
+ return NULL;
+ }
+}
+
+#endif
+
+#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
+
+PyDoc_STRVAR(inet_ntop_doc,
+"inet_ntop(af, packed_ip) -> string formatted IP address\n\
+\n\
+Convert a packed IP address of the given family to string format.");
+
+#endif
+
+
+#ifdef HAVE_INET_PTON
+static PyObject *
+socket_inet_ntop(PyObject *self, PyObject *args)
+{
+ int af;
+ Py_buffer packed_ip;
+ const char* retval;
+#ifdef ENABLE_IPV6
+ char ip[Py_MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN) + 1];
+#else
+ char ip[INET_ADDRSTRLEN + 1];
+#endif
+
+ /* Guarantee NUL-termination for PyUnicode_FromString() below */
+ memset((void *) &ip[0], '\0', sizeof(ip));
+
+ if (!PyArg_ParseTuple(args, "iy*:inet_ntop", &af, &packed_ip)) {
+ return NULL;
+ }
+
+ if (af == AF_INET) {
+ if (packed_ip.len != sizeof(struct in_addr)) {
+ PyErr_SetString(PyExc_ValueError,
+ "invalid length of packed IP address string");
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+#ifdef ENABLE_IPV6
+ } else if (af == AF_INET6) {
+ if (packed_ip.len != sizeof(struct in6_addr)) {
+ PyErr_SetString(PyExc_ValueError,
+ "invalid length of packed IP address string");
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+#endif
+ } else {
+ PyErr_Format(PyExc_ValueError,
+ "unknown address family %d", af);
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+
+ retval = inet_ntop(af, packed_ip.buf, ip, sizeof(ip));
+ PyBuffer_Release(&packed_ip);
+ if (!retval) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ } else {
+ return PyUnicode_FromString(retval);
+ }
+}
+
+#elif defined(MS_WINDOWS)
+
+static PyObject *
+socket_inet_ntop(PyObject *self, PyObject *args)
+{
+ int af;
+ Py_buffer packed_ip;
+ struct sockaddr_in6 addr;
+ DWORD addrlen, ret, retlen;
+#ifdef ENABLE_IPV6
+ char ip[Py_MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN) + 1];
+#else
+ char ip[INET_ADDRSTRLEN + 1];
+#endif
+
+ /* Guarantee NUL-termination for PyUnicode_FromString() below */
+ memset((void *) &ip[0], '\0', sizeof(ip));
+
+ if (!PyArg_ParseTuple(args, "iy*:inet_ntop", &af, &packed_ip)) {
+ return NULL;
+ }
+
+ if (af == AF_INET) {
+ struct sockaddr_in * addr4 = (struct sockaddr_in *)&addr;
+
+ if (packed_ip.len != sizeof(struct in_addr)) {
+ PyErr_SetString(PyExc_ValueError,
+ "invalid length of packed IP address string");
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+ memset(addr4, 0, sizeof(struct sockaddr_in));
+ addr4->sin_family = AF_INET;
+ memcpy(&(addr4->sin_addr), packed_ip.buf, sizeof(addr4->sin_addr));
+ addrlen = sizeof(struct sockaddr_in);
+ } else if (af == AF_INET6) {
+ if (packed_ip.len != sizeof(struct in6_addr)) {
+ PyErr_SetString(PyExc_ValueError,
+ "invalid length of packed IP address string");
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+
+ memset(&addr, 0, sizeof(addr));
+ addr.sin6_family = AF_INET6;
+ memcpy(&(addr.sin6_addr), packed_ip.buf, sizeof(addr.sin6_addr));
+ addrlen = sizeof(addr);
+ } else {
+ PyErr_Format(PyExc_ValueError,
+ "unknown address family %d", af);
+ PyBuffer_Release(&packed_ip);
+ return NULL;
+ }
+ PyBuffer_Release(&packed_ip);
+
+ retlen = sizeof(ip);
+ ret = WSAAddressToStringA((struct sockaddr*)&addr, addrlen, NULL,
+ ip, &retlen);
+
+ if (ret) {
+ PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
+ return NULL;
+ } else {
+ return PyUnicode_FromString(ip);
+ }
+}
+
+#endif /* HAVE_INET_PTON */
+
+/* Python interface to getaddrinfo(host, port). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_getaddrinfo(PyObject *self, PyObject *args, PyObject* kwargs)
+{
+ static char* kwnames[] = {"host", "port", "family", "type", "proto",
+ "flags", 0};
+ struct addrinfo hints, *res;
+ struct addrinfo *res0 = NULL;
+ PyObject *hobj = NULL;
+ PyObject *pobj = (PyObject *)NULL;
+ char pbuf[30];
+ char *hptr, *pptr;
+ int family, socktype, protocol, flags;
+ int error;
+ PyObject *all = (PyObject *)NULL;
+ PyObject *idna = NULL;
+
+ socktype = protocol = flags = 0;
+ family = AF_UNSPEC;
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iiii:getaddrinfo",
+ kwnames, &hobj, &pobj, &family, &socktype,
+ &protocol, &flags)) {
+ return NULL;
+ }
+ if (hobj == Py_None) {
+ hptr = NULL;
+ } else if (PyUnicode_Check(hobj)) {
+ idna = PyUnicode_AsEncodedString(hobj, "idna", NULL);
+ if (!idna)
+ return NULL;
+ assert(PyBytes_Check(idna));
+ hptr = PyBytes_AS_STRING(idna);
+ } else if (PyBytes_Check(hobj)) {
+ hptr = PyBytes_AsString(hobj);
+ } else {
+ PyErr_SetString(PyExc_TypeError,
+ "getaddrinfo() argument 1 must be string or None");
+ return NULL;
+ }
+ if (PyLong_CheckExact(pobj)) {
+ long value = PyLong_AsLong(pobj);
+ if (value == -1 && PyErr_Occurred())
+ goto err;
+ PyOS_snprintf(pbuf, sizeof(pbuf), "%ld", value);
+ pptr = pbuf;
+ } else if (PyUnicode_Check(pobj)) {
+ pptr = PyUnicode_AsUTF8(pobj);
+ if (pptr == NULL)
+ goto err;
+ } else if (PyBytes_Check(pobj)) {
+ pptr = PyBytes_AS_STRING(pobj);
+ } else if (pobj == Py_None) {
+ pptr = (char *)NULL;
+ } else {
+ PyErr_SetString(PyExc_OSError, "Int or String expected");
+ goto err;
+ }
+#if defined(__APPLE__) && defined(AI_NUMERICSERV)
+ if ((flags & AI_NUMERICSERV) && (pptr == NULL || (pptr[0] == '0' && pptr[1] == 0))) {
+ /* On OSX up to at least OSX 10.8 getaddrinfo crashes
+ * if AI_NUMERICSERV is set and the servname is NULL or "0".
+ * This workaround avoids a segfault in libsystem.
+ */
+ pptr = "00";
+ }
+#endif
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = family;
+ hints.ai_socktype = socktype;
+ hints.ai_protocol = protocol;
+ hints.ai_flags = flags;
+ Py_BEGIN_ALLOW_THREADS
+ ACQUIRE_GETADDRINFO_LOCK
+ error = getaddrinfo(hptr, pptr, &hints, &res0);
+ Py_END_ALLOW_THREADS
+ RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
+ if (error) {
+ set_gaierror(error);
+ goto err;
+ }
+
+ all = PyList_New(0);
+ if (all == NULL)
+ goto err;
+ for (res = res0; res; res = res->ai_next) {
+ PyObject *single;
+ PyObject *addr =
+ makesockaddr(-1, res->ai_addr, res->ai_addrlen, protocol);
+ if (addr == NULL)
+ goto err;
+ single = Py_BuildValue("iiisO", res->ai_family,
+ res->ai_socktype, res->ai_protocol,
+ res->ai_canonname ? res->ai_canonname : "",
+ addr);
+ Py_DECREF(addr);
+ if (single == NULL)
+ goto err;
+
+ if (PyList_Append(all, single)) {
+ Py_DECREF(single);
+ goto err;
+ }
+ Py_DECREF(single);
+ }
+ Py_XDECREF(idna);
+ if (res0)
+ freeaddrinfo(res0);
+ return all;
+ err:
+ Py_XDECREF(all);
+ Py_XDECREF(idna);
+ if (res0)
+ freeaddrinfo(res0);
+ return (PyObject *)NULL;
+}
+
+PyDoc_STRVAR(getaddrinfo_doc,
+"getaddrinfo(host, port [, family, type, proto, flags])\n\
+ -> list of (family, type, proto, canonname, sockaddr)\n\
+\n\
+Resolve host and port into addrinfo struct.");
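+
+/* Usage sketch (illustrative only, not part of the upstream source): from
+ * the Python level this binding is reached through socket.getaddrinfo().
+ * Assuming a host name that resolves to 127.0.0.1, a call such as
+ *
+ *     socket.getaddrinfo("localhost", 80, socket.AF_INET, socket.SOCK_STREAM)
+ *
+ * would typically return something like
+ *
+ *     [(2, 1, 6, '', ('127.0.0.1', 80))]
+ *
+ * i.e. a list of (family, type, proto, canonname, sockaddr) tuples as
+ * described by getaddrinfo_doc above.
+ */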
+
+/* Python interface to getnameinfo(sa, flags). */
+
+/*ARGSUSED*/
+static PyObject *
+socket_getnameinfo(PyObject *self, PyObject *args)
+{
+ PyObject *sa = (PyObject *)NULL;
+ int flags;
+ const char *hostp;
+ int port;
+ unsigned int flowinfo, scope_id;
+ char hbuf[NI_MAXHOST], pbuf[NI_MAXSERV];
+ struct addrinfo hints, *res = NULL;
+ int error;
+ PyObject *ret = (PyObject *)NULL;
+ PyObject *name;
+
+ flags = flowinfo = scope_id = 0;
+ if (!PyArg_ParseTuple(args, "Oi:getnameinfo", &sa, &flags))
+ return NULL;
+ if (!PyTuple_Check(sa)) {
+ PyErr_SetString(PyExc_TypeError,
+ "getnameinfo() argument 1 must be a tuple");
+ return NULL;
+ }
+ if (!PyArg_ParseTuple(sa, "si|II",
+ &hostp, &port, &flowinfo, &scope_id))
+ return NULL;
+ if (flowinfo > 0xfffff) {
+ PyErr_SetString(PyExc_OverflowError,
+ "getsockaddrarg: flowinfo must be 0-1048575.");
+ return NULL;
+ }
+ PyOS_snprintf(pbuf, sizeof(pbuf), "%d", port);
+ memset(&hints, 0, sizeof(hints));
+ hints.ai_family = AF_UNSPEC;
+ hints.ai_socktype = SOCK_DGRAM; /* make numeric port happy */
+ hints.ai_flags = AI_NUMERICHOST; /* don't do any name resolution */
+ Py_BEGIN_ALLOW_THREADS
+ ACQUIRE_GETADDRINFO_LOCK
+ error = getaddrinfo(hostp, pbuf, &hints, &res);
+ Py_END_ALLOW_THREADS
+ RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
+ if (error) {
+ set_gaierror(error);
+ goto fail;
+ }
+ if (res->ai_next) {
+ PyErr_SetString(PyExc_OSError,
+ "sockaddr resolved to multiple addresses");
+ goto fail;
+ }
+ switch (res->ai_family) {
+ case AF_INET:
+ {
+ if (PyTuple_GET_SIZE(sa) != 2) {
+ PyErr_SetString(PyExc_OSError,
+ "IPv4 sockaddr must be 2 tuple");
+ goto fail;
+ }
+ break;
+ }
+#ifdef ENABLE_IPV6
+ case AF_INET6:
+ {
+ struct sockaddr_in6 *sin6;
+ sin6 = (struct sockaddr_in6 *)res->ai_addr;
+ sin6->sin6_flowinfo = htonl(flowinfo);
+ sin6->sin6_scope_id = scope_id;
+ break;
+ }
+#endif
+ }
+ error = getnameinfo(res->ai_addr, (socklen_t) res->ai_addrlen,
+ hbuf, sizeof(hbuf), pbuf, sizeof(pbuf), flags);
+ if (error) {
+ set_gaierror(error);
+ goto fail;
+ }
+
+ name = sock_decode_hostname(hbuf);
+ if (name == NULL)
+ goto fail;
+ ret = Py_BuildValue("Ns", name, pbuf);
+
+fail:
+ if (res)
+ freeaddrinfo(res);
+ return ret;
+}
+
+PyDoc_STRVAR(getnameinfo_doc,
+"getnameinfo(sockaddr, flags) --> (host, port)\n\
+\n\
+Get host and port for a sockaddr.");
+
+
+/* Python API to getting and setting the default timeout value. */
+
+static PyObject *
+socket_getdefaulttimeout(PyObject *self)
+{
+ if (defaulttimeout < 0) {
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+ else {
+ double seconds = _PyTime_AsSecondsDouble(defaulttimeout);
+ return PyFloat_FromDouble(seconds);
+ }
+}
+
+PyDoc_STRVAR(getdefaulttimeout_doc,
+"getdefaulttimeout() -> timeout\n\
+\n\
+Returns the default timeout in seconds (float) for new socket objects.\n\
+A value of None indicates that new socket objects have no timeout.\n\
+When the socket module is first imported, the default is None.");
+
+static PyObject *
+socket_setdefaulttimeout(PyObject *self, PyObject *arg)
+{
+ _PyTime_t timeout;
+
+ if (socket_parse_timeout(&timeout, arg) < 0)
+ return NULL;
+
+ defaulttimeout = timeout;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(setdefaulttimeout_doc,
+"setdefaulttimeout(timeout)\n\
+\n\
+Set the default timeout in seconds (float) for new socket objects.\n\
+A value of None indicates that new socket objects have no timeout.\n\
+When the socket module is first imported, the default is None.");
+
+#ifdef HAVE_IF_NAMEINDEX
+/* Python API for getting interface indices and names */
+
+static PyObject *
+socket_if_nameindex(PyObject *self, PyObject *arg)
+{
+ PyObject *list;
+ int i;
+ struct if_nameindex *ni;
+
+ ni = if_nameindex();
+ if (ni == NULL) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ list = PyList_New(0);
+ if (list == NULL) {
+ if_freenameindex(ni);
+ return NULL;
+ }
+
+ for (i = 0; ni[i].if_index != 0 && i < INT_MAX; i++) {
+ PyObject *ni_tuple = Py_BuildValue("IO&",
+ ni[i].if_index, PyUnicode_DecodeFSDefault, ni[i].if_name);
+
+ if (ni_tuple == NULL || PyList_Append(list, ni_tuple) == -1) {
+ Py_XDECREF(ni_tuple);
+ Py_DECREF(list);
+ if_freenameindex(ni);
+ return NULL;
+ }
+ Py_DECREF(ni_tuple);
+ }
+
+ if_freenameindex(ni);
+ return list;
+}
+
+PyDoc_STRVAR(if_nameindex_doc,
+"if_nameindex()\n\
+\n\
+Returns a list of network interface information (index, name) tuples.");
+
+static PyObject *
+socket_if_nametoindex(PyObject *self, PyObject *args)
+{
+ PyObject *oname;
+ unsigned long index;
+
+ if (!PyArg_ParseTuple(args, "O&:if_nametoindex",
+ PyUnicode_FSConverter, &oname))
+ return NULL;
+
+ index = if_nametoindex(PyBytes_AS_STRING(oname));
+ Py_DECREF(oname);
+ if (index == 0) {
+ /* if_nametoindex() doesn't set errno */
+ PyErr_SetString(PyExc_OSError, "no interface with this name");
+ return NULL;
+ }
+
+ return PyLong_FromUnsignedLong(index);
+}
+
+PyDoc_STRVAR(if_nametoindex_doc,
+"if_nametoindex(if_name)\n\
+\n\
+Returns the interface index corresponding to the interface name if_name.");
+
+static PyObject *
+socket_if_indextoname(PyObject *self, PyObject *arg)
+{
+ unsigned long index;
+ char name[IF_NAMESIZE + 1];
+
+ index = PyLong_AsUnsignedLong(arg);
+ if (index == (unsigned long) -1)
+ return NULL;
+
+ if (if_indextoname(index, name) == NULL) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ return PyUnicode_DecodeFSDefault(name);
+}
+
+PyDoc_STRVAR(if_indextoname_doc,
+"if_indextoname(if_index)\n\
+\n\
+Returns the interface name corresponding to the interface index if_index.");
+
+#endif /* HAVE_IF_NAMEINDEX */
+
+
+#ifdef CMSG_LEN
+/* Python interface to CMSG_LEN(length). */
+
+static PyObject *
+socket_CMSG_LEN(PyObject *self, PyObject *args)
+{
+ Py_ssize_t length;
+ size_t result;
+
+ if (!PyArg_ParseTuple(args, "n:CMSG_LEN", &length))
+ return NULL;
+ if (length < 0 || !get_CMSG_LEN(length, &result)) {
+ PyErr_Format(PyExc_OverflowError, "CMSG_LEN() argument out of range");
+ return NULL;
+ }
+ return PyLong_FromSize_t(result);
+}
+
+PyDoc_STRVAR(CMSG_LEN_doc,
+"CMSG_LEN(length) -> control message length\n\
+\n\
+Return the total length, without trailing padding, of an ancillary\n\
+data item with associated data of the given length. This value can\n\
+often be used as the buffer size for recvmsg() to receive a single\n\
+item of ancillary data, but RFC 3542 requires portable applications to\n\
+use CMSG_SPACE() and thus include space for padding, even when the\n\
+item will be the last in the buffer. Raises OverflowError if length\n\
+is outside the permissible range of values.");
+
+
+#ifdef CMSG_SPACE
+/* Python interface to CMSG_SPACE(length). */
+
+static PyObject *
+socket_CMSG_SPACE(PyObject *self, PyObject *args)
+{
+ Py_ssize_t length;
+ size_t result;
+
+ if (!PyArg_ParseTuple(args, "n:CMSG_SPACE", &length))
+ return NULL;
+ if (length < 0 || !get_CMSG_SPACE(length, &result)) {
+ PyErr_SetString(PyExc_OverflowError,
+ "CMSG_SPACE() argument out of range");
+ return NULL;
+ }
+ return PyLong_FromSize_t(result);
+}
+
+PyDoc_STRVAR(CMSG_SPACE_doc,
+"CMSG_SPACE(length) -> buffer size\n\
+\n\
+Return the buffer size needed for recvmsg() to receive an ancillary\n\
+data item with associated data of the given length, along with any\n\
+trailing padding. The buffer space needed to receive multiple items\n\
+is the sum of the CMSG_SPACE() values for their associated data\n\
+lengths. Raises OverflowError if length is outside the permissible\n\
+range of values.");
+#endif /* CMSG_SPACE */
+#endif /* CMSG_LEN */
+
+
+/* List of functions exported by this module. */
+
+static PyMethodDef socket_methods[] = {
+ {"gethostbyname", socket_gethostbyname,
+ METH_VARARGS, gethostbyname_doc},
+ {"gethostbyname_ex", socket_gethostbyname_ex,
+ METH_VARARGS, ghbn_ex_doc},
+ {"gethostbyaddr", socket_gethostbyaddr,
+ METH_VARARGS, gethostbyaddr_doc},
+ {"gethostname", socket_gethostname,
+ METH_NOARGS, gethostname_doc},
+#ifdef HAVE_SETHOSTNAME
+ {"sethostname", socket_sethostname,
+ METH_VARARGS, sethostname_doc},
+#endif
+ {"getservbyname", socket_getservbyname,
+ METH_VARARGS, getservbyname_doc},
+ {"getservbyport", socket_getservbyport,
+ METH_VARARGS, getservbyport_doc},
+ {"getprotobyname", socket_getprotobyname,
+ METH_VARARGS, getprotobyname_doc},
+#ifndef NO_DUP
+ {"dup", socket_dup,
+ METH_O, dup_doc},
+#endif
+#ifdef HAVE_SOCKETPAIR
+ {"socketpair", socket_socketpair,
+ METH_VARARGS, socketpair_doc},
+#endif
+ {"ntohs", socket_ntohs,
+ METH_VARARGS, ntohs_doc},
+ {"ntohl", socket_ntohl,
+ METH_O, ntohl_doc},
+ {"htons", socket_htons,
+ METH_VARARGS, htons_doc},
+ {"htonl", socket_htonl,
+ METH_O, htonl_doc},
+ {"inet_aton", socket_inet_aton,
+ METH_VARARGS, inet_aton_doc},
+ {"inet_ntoa", socket_inet_ntoa,
+ METH_VARARGS, inet_ntoa_doc},
+#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
+ {"inet_pton", socket_inet_pton,
+ METH_VARARGS, inet_pton_doc},
+ {"inet_ntop", socket_inet_ntop,
+ METH_VARARGS, inet_ntop_doc},
+#endif
+ {"getaddrinfo", (PyCFunction)socket_getaddrinfo,
+ METH_VARARGS | METH_KEYWORDS, getaddrinfo_doc},
+ {"getnameinfo", socket_getnameinfo,
+ METH_VARARGS, getnameinfo_doc},
+ {"getdefaulttimeout", (PyCFunction)socket_getdefaulttimeout,
+ METH_NOARGS, getdefaulttimeout_doc},
+ {"setdefaulttimeout", socket_setdefaulttimeout,
+ METH_O, setdefaulttimeout_doc},
+#ifdef HAVE_IF_NAMEINDEX
+ {"if_nameindex", socket_if_nameindex,
+ METH_NOARGS, if_nameindex_doc},
+ {"if_nametoindex", socket_if_nametoindex,
+ METH_VARARGS, if_nametoindex_doc},
+ {"if_indextoname", socket_if_indextoname,
+ METH_O, if_indextoname_doc},
+#endif
+#ifdef CMSG_LEN
+ {"CMSG_LEN", socket_CMSG_LEN,
+ METH_VARARGS, CMSG_LEN_doc},
+#ifdef CMSG_SPACE
+ {"CMSG_SPACE", socket_CMSG_SPACE,
+ METH_VARARGS, CMSG_SPACE_doc},
+#endif
+#endif
+ {NULL, NULL} /* Sentinel */
+};
+
+
+#ifdef MS_WINDOWS
+#define OS_INIT_DEFINED
+
+/* Additional initialization and cleanup for Windows */
+
+static void
+os_cleanup(void)
+{
+ WSACleanup();
+}
+
+static int
+os_init(void)
+{
+ WSADATA WSAData;
+ int ret;
+ ret = WSAStartup(0x0101, &WSAData);
+ switch (ret) {
+ case 0: /* No error */
+ Py_AtExit(os_cleanup);
+ return 1; /* Success */
+ case WSASYSNOTREADY:
+ PyErr_SetString(PyExc_ImportError,
+ "WSAStartup failed: network not ready");
+ break;
+ case WSAVERNOTSUPPORTED:
+ case WSAEINVAL:
+ PyErr_SetString(
+ PyExc_ImportError,
+ "WSAStartup failed: requested version not supported");
+ break;
+ default:
+ PyErr_Format(PyExc_ImportError, "WSAStartup failed: error code %d", ret);
+ break;
+ }
+ return 0; /* Failure */
+}
+
+#endif /* MS_WINDOWS */
+
+
+
+#ifndef OS_INIT_DEFINED
+static int
+os_init(void)
+{
+ return 1; /* Success */
+}
+#endif
+
+
+/* C API table - always add new things to the end for binary
+ compatibility. */
+static
+PySocketModule_APIObject PySocketModuleAPI =
+{
+ &sock_type,
+ NULL,
+ NULL
+};
+
+
+/* Initialize the _socket module.
+
+ This module is actually called "_socket", and there's a wrapper
+ "socket.py" which implements some additional functionality.
+ The import of "_socket" may fail with an ImportError exception if
+ os-specific initialization fails. On Windows, this does WINSOCK
+ initialization. When WINSOCK is initialized successfully, a call to
+ WSACleanup() is scheduled to be made at exit time.
+*/
+
+PyDoc_STRVAR(socket_doc,
+"Implementation module for socket operations.\n\
+\n\
+See the socket module for documentation.");
+
+static struct PyModuleDef socketmodule = {
+ PyModuleDef_HEAD_INIT,
+ PySocket_MODULE_NAME,
+ socket_doc,
+ -1,
+ socket_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit__socket(void)
+{
+ PyObject *m, *has_ipv6;
+
+ if (!os_init())
+ return NULL;
+
+#ifdef MS_WINDOWS
+ if (support_wsa_no_inherit == -1) {
+ support_wsa_no_inherit = IsWindows7SP1OrGreater();
+ }
+#endif
+
+ Py_TYPE(&sock_type) = &PyType_Type;
+ m = PyModule_Create(&socketmodule);
+ if (m == NULL)
+ return NULL;
+
+ Py_INCREF(PyExc_OSError);
+ PySocketModuleAPI.error = PyExc_OSError;
+ Py_INCREF(PyExc_OSError);
+ PyModule_AddObject(m, "error", PyExc_OSError);
+ socket_herror = PyErr_NewException("socket.herror",
+ PyExc_OSError, NULL);
+ if (socket_herror == NULL)
+ return NULL;
+ Py_INCREF(socket_herror);
+ PyModule_AddObject(m, "herror", socket_herror);
+ socket_gaierror = PyErr_NewException("socket.gaierror", PyExc_OSError,
+ NULL);
+ if (socket_gaierror == NULL)
+ return NULL;
+ Py_INCREF(socket_gaierror);
+ PyModule_AddObject(m, "gaierror", socket_gaierror);
+ socket_timeout = PyErr_NewException("socket.timeout",
+ PyExc_OSError, NULL);
+ if (socket_timeout == NULL)
+ return NULL;
+ PySocketModuleAPI.timeout_error = socket_timeout;
+ Py_INCREF(socket_timeout);
+ PyModule_AddObject(m, "timeout", socket_timeout);
+ Py_INCREF((PyObject *)&sock_type);
+ if (PyModule_AddObject(m, "SocketType",
+ (PyObject *)&sock_type) != 0)
+ return NULL;
+ Py_INCREF((PyObject *)&sock_type);
+ if (PyModule_AddObject(m, "socket",
+ (PyObject *)&sock_type) != 0)
+ return NULL;
+
+#ifdef ENABLE_IPV6
+ has_ipv6 = Py_True;
+#else
+ has_ipv6 = Py_False;
+#endif
+ Py_INCREF(has_ipv6);
+ PyModule_AddObject(m, "has_ipv6", has_ipv6);
+
+ /* Export C API */
+ if (PyModule_AddObject(m, PySocket_CAPI_NAME,
+ PyCapsule_New(&PySocketModuleAPI, PySocket_CAPSULE_NAME, NULL)
+ ) != 0)
+ return NULL;
+
+ /* Address families (we only support AF_INET and AF_UNIX) */
+#ifdef AF_UNSPEC
+ PyModule_AddIntMacro(m, AF_UNSPEC);
+#endif
+ PyModule_AddIntMacro(m, AF_INET);
+#if defined(AF_UNIX)
+ PyModule_AddIntMacro(m, AF_UNIX);
+#endif /* AF_UNIX */
+#ifdef AF_AX25
+ /* Amateur Radio AX.25 */
+ PyModule_AddIntMacro(m, AF_AX25);
+#endif
+#ifdef AF_IPX
+ PyModule_AddIntMacro(m, AF_IPX); /* Novell IPX */
+#endif
+#ifdef AF_APPLETALK
+ /* Appletalk DDP */
+ PyModule_AddIntMacro(m, AF_APPLETALK);
+#endif
+#ifdef AF_NETROM
+ /* Amateur radio NetROM */
+ PyModule_AddIntMacro(m, AF_NETROM);
+#endif
+#ifdef AF_BRIDGE
+ /* Multiprotocol bridge */
+ PyModule_AddIntMacro(m, AF_BRIDGE);
+#endif
+#ifdef AF_ATMPVC
+ /* ATM PVCs */
+ PyModule_AddIntMacro(m, AF_ATMPVC);
+#endif
+#ifdef AF_AAL5
+ /* Reserved for Werner's ATM */
+ PyModule_AddIntMacro(m, AF_AAL5);
+#endif
+#ifdef HAVE_SOCKADDR_ALG
+ PyModule_AddIntMacro(m, AF_ALG); /* Linux crypto */
+#endif
+#ifdef AF_X25
+ /* Reserved for X.25 project */
+ PyModule_AddIntMacro(m, AF_X25);
+#endif
+#ifdef AF_INET6
+ PyModule_AddIntMacro(m, AF_INET6); /* IP version 6 */
+#endif
+#ifdef AF_ROSE
+ /* Amateur Radio X.25 PLP */
+ PyModule_AddIntMacro(m, AF_ROSE);
+#endif
+#ifdef AF_DECnet
+ /* Reserved for DECnet project */
+ PyModule_AddIntMacro(m, AF_DECnet);
+#endif
+#ifdef AF_NETBEUI
+ /* Reserved for 802.2LLC project */
+ PyModule_AddIntMacro(m, AF_NETBEUI);
+#endif
+#ifdef AF_SECURITY
+ /* Security callback pseudo AF */
+ PyModule_AddIntMacro(m, AF_SECURITY);
+#endif
+#ifdef AF_KEY
+ /* PF_KEY key management API */
+ PyModule_AddIntMacro(m, AF_KEY);
+#endif
+#ifdef AF_NETLINK
+ /* */
+ PyModule_AddIntMacro(m, AF_NETLINK);
+ PyModule_AddIntMacro(m, NETLINK_ROUTE);
+#ifdef NETLINK_SKIP
+ PyModule_AddIntMacro(m, NETLINK_SKIP);
+#endif
+#ifdef NETLINK_W1
+ PyModule_AddIntMacro(m, NETLINK_W1);
+#endif
+ PyModule_AddIntMacro(m, NETLINK_USERSOCK);
+ PyModule_AddIntMacro(m, NETLINK_FIREWALL);
+#ifdef NETLINK_TCPDIAG
+ PyModule_AddIntMacro(m, NETLINK_TCPDIAG);
+#endif
+#ifdef NETLINK_NFLOG
+ PyModule_AddIntMacro(m, NETLINK_NFLOG);
+#endif
+#ifdef NETLINK_XFRM
+ PyModule_AddIntMacro(m, NETLINK_XFRM);
+#endif
+#ifdef NETLINK_ARPD
+ PyModule_AddIntMacro(m, NETLINK_ARPD);
+#endif
+#ifdef NETLINK_ROUTE6
+ PyModule_AddIntMacro(m, NETLINK_ROUTE6);
+#endif
+ PyModule_AddIntMacro(m, NETLINK_IP6_FW);
+#ifdef NETLINK_DNRTMSG
+ PyModule_AddIntMacro(m, NETLINK_DNRTMSG);
+#endif
+#ifdef NETLINK_TAPBASE
+ PyModule_AddIntMacro(m, NETLINK_TAPBASE);
+#endif
+#ifdef NETLINK_CRYPTO
+ PyModule_AddIntMacro(m, NETLINK_CRYPTO);
+#endif
+#endif /* AF_NETLINK */
+#ifdef AF_ROUTE
+ /* Alias to emulate 4.4BSD */
+ PyModule_AddIntMacro(m, AF_ROUTE);
+#endif
+#ifdef AF_LINK
+ PyModule_AddIntMacro(m, AF_LINK);
+#endif
+#ifdef AF_ASH
+ /* Ash */
+ PyModule_AddIntMacro(m, AF_ASH);
+#endif
+#ifdef AF_ECONET
+ /* Acorn Econet */
+ PyModule_AddIntMacro(m, AF_ECONET);
+#endif
+#ifdef AF_ATMSVC
+ /* ATM SVCs */
+ PyModule_AddIntMacro(m, AF_ATMSVC);
+#endif
+#ifdef AF_SNA
+ /* Linux SNA Project (nutters!) */
+ PyModule_AddIntMacro(m, AF_SNA);
+#endif
+#ifdef AF_IRDA
+ /* IRDA sockets */
+ PyModule_AddIntMacro(m, AF_IRDA);
+#endif
+#ifdef AF_PPPOX
+ /* PPPoX sockets */
+ PyModule_AddIntMacro(m, AF_PPPOX);
+#endif
+#ifdef AF_WANPIPE
+ /* Wanpipe API Sockets */
+ PyModule_AddIntMacro(m, AF_WANPIPE);
+#endif
+#ifdef AF_LLC
+ /* Linux LLC */
+ PyModule_AddIntMacro(m, AF_LLC);
+#endif
+
+#ifdef USE_BLUETOOTH
+ PyModule_AddIntMacro(m, AF_BLUETOOTH);
+ PyModule_AddIntMacro(m, BTPROTO_L2CAP);
+ PyModule_AddIntMacro(m, BTPROTO_HCI);
+ PyModule_AddIntMacro(m, SOL_HCI);
+#if !defined(__NetBSD__) && !defined(__DragonFly__)
+ PyModule_AddIntMacro(m, HCI_FILTER);
+#endif
+#if !defined(__FreeBSD__)
+#if !defined(__NetBSD__) && !defined(__DragonFly__)
+ PyModule_AddIntMacro(m, HCI_TIME_STAMP);
+#endif
+ PyModule_AddIntMacro(m, HCI_DATA_DIR);
+ PyModule_AddIntMacro(m, BTPROTO_SCO);
+#endif
+ PyModule_AddIntMacro(m, BTPROTO_RFCOMM);
+ PyModule_AddStringConstant(m, "BDADDR_ANY", "00:00:00:00:00:00");
+ PyModule_AddStringConstant(m, "BDADDR_LOCAL", "00:00:00:FF:FF:FF");
+#endif
+
+#ifdef AF_CAN
+ /* Controller Area Network */
+ PyModule_AddIntMacro(m, AF_CAN);
+#endif
+#ifdef PF_CAN
+ /* Controller Area Network */
+ PyModule_AddIntMacro(m, PF_CAN);
+#endif
+
+/* Reliable Datagram Sockets */
+#ifdef AF_RDS
+ PyModule_AddIntMacro(m, AF_RDS);
+#endif
+#ifdef PF_RDS
+ PyModule_AddIntMacro(m, PF_RDS);
+#endif
+
+/* Kernel event messages */
+#ifdef PF_SYSTEM
+ PyModule_AddIntMacro(m, PF_SYSTEM);
+#endif
+#ifdef AF_SYSTEM
+ PyModule_AddIntMacro(m, AF_SYSTEM);
+#endif
+
+#ifdef AF_PACKET
+ PyModule_AddIntMacro(m, AF_PACKET);
+#endif
+#ifdef PF_PACKET
+ PyModule_AddIntMacro(m, PF_PACKET);
+#endif
+#ifdef PACKET_HOST
+ PyModule_AddIntMacro(m, PACKET_HOST);
+#endif
+#ifdef PACKET_BROADCAST
+ PyModule_AddIntMacro(m, PACKET_BROADCAST);
+#endif
+#ifdef PACKET_MULTICAST
+ PyModule_AddIntMacro(m, PACKET_MULTICAST);
+#endif
+#ifdef PACKET_OTHERHOST
+ PyModule_AddIntMacro(m, PACKET_OTHERHOST);
+#endif
+#ifdef PACKET_OUTGOING
+ PyModule_AddIntMacro(m, PACKET_OUTGOING);
+#endif
+#ifdef PACKET_LOOPBACK
+ PyModule_AddIntMacro(m, PACKET_LOOPBACK);
+#endif
+#ifdef PACKET_FASTROUTE
+ PyModule_AddIntMacro(m, PACKET_FASTROUTE);
+#endif
+
+#ifdef HAVE_LINUX_TIPC_H
+ PyModule_AddIntMacro(m, AF_TIPC);
+
+ /* for addresses */
+ PyModule_AddIntMacro(m, TIPC_ADDR_NAMESEQ);
+ PyModule_AddIntMacro(m, TIPC_ADDR_NAME);
+ PyModule_AddIntMacro(m, TIPC_ADDR_ID);
+
+ PyModule_AddIntMacro(m, TIPC_ZONE_SCOPE);
+ PyModule_AddIntMacro(m, TIPC_CLUSTER_SCOPE);
+ PyModule_AddIntMacro(m, TIPC_NODE_SCOPE);
+
+ /* for setsockopt() */
+ PyModule_AddIntMacro(m, SOL_TIPC);
+ PyModule_AddIntMacro(m, TIPC_IMPORTANCE);
+ PyModule_AddIntMacro(m, TIPC_SRC_DROPPABLE);
+ PyModule_AddIntMacro(m, TIPC_DEST_DROPPABLE);
+ PyModule_AddIntMacro(m, TIPC_CONN_TIMEOUT);
+
+ PyModule_AddIntMacro(m, TIPC_LOW_IMPORTANCE);
+ PyModule_AddIntMacro(m, TIPC_MEDIUM_IMPORTANCE);
+ PyModule_AddIntMacro(m, TIPC_HIGH_IMPORTANCE);
+ PyModule_AddIntMacro(m, TIPC_CRITICAL_IMPORTANCE);
+
+ /* for subscriptions */
+ PyModule_AddIntMacro(m, TIPC_SUB_PORTS);
+ PyModule_AddIntMacro(m, TIPC_SUB_SERVICE);
+#ifdef TIPC_SUB_CANCEL
+ /* doesn't seem to be available everywhere */
+ PyModule_AddIntMacro(m, TIPC_SUB_CANCEL);
+#endif
+ PyModule_AddIntMacro(m, TIPC_WAIT_FOREVER);
+ PyModule_AddIntMacro(m, TIPC_PUBLISHED);
+ PyModule_AddIntMacro(m, TIPC_WITHDRAWN);
+ PyModule_AddIntMacro(m, TIPC_SUBSCR_TIMEOUT);
+ PyModule_AddIntMacro(m, TIPC_CFG_SRV);
+ PyModule_AddIntMacro(m, TIPC_TOP_SRV);
+#endif
+
+#ifdef HAVE_SOCKADDR_ALG
+ /* Socket options */
+ PyModule_AddIntMacro(m, ALG_SET_KEY);
+ PyModule_AddIntMacro(m, ALG_SET_IV);
+ PyModule_AddIntMacro(m, ALG_SET_OP);
+ PyModule_AddIntMacro(m, ALG_SET_AEAD_ASSOCLEN);
+ PyModule_AddIntMacro(m, ALG_SET_AEAD_AUTHSIZE);
+ PyModule_AddIntMacro(m, ALG_SET_PUBKEY);
+
+ /* Operations */
+ PyModule_AddIntMacro(m, ALG_OP_DECRYPT);
+ PyModule_AddIntMacro(m, ALG_OP_ENCRYPT);
+ PyModule_AddIntMacro(m, ALG_OP_SIGN);
+ PyModule_AddIntMacro(m, ALG_OP_VERIFY);
+#endif
+
+ /* Socket types */
+ PyModule_AddIntMacro(m, SOCK_STREAM);
+ PyModule_AddIntMacro(m, SOCK_DGRAM);
+/* We have incomplete socket support. */
+#ifdef SOCK_RAW
+ /* SOCK_RAW is marked as optional in the POSIX specification */
+ PyModule_AddIntMacro(m, SOCK_RAW);
+#endif
+ PyModule_AddIntMacro(m, SOCK_SEQPACKET);
+#if defined(SOCK_RDM)
+ PyModule_AddIntMacro(m, SOCK_RDM);
+#endif
+#ifdef SOCK_CLOEXEC
+ PyModule_AddIntMacro(m, SOCK_CLOEXEC);
+#endif
+#ifdef SOCK_NONBLOCK
+ PyModule_AddIntMacro(m, SOCK_NONBLOCK);
+#endif
+
+#ifdef SO_DEBUG
+ PyModule_AddIntMacro(m, SO_DEBUG);
+#endif
+#ifdef SO_ACCEPTCONN
+ PyModule_AddIntMacro(m, SO_ACCEPTCONN);
+#endif
+#ifdef SO_REUSEADDR
+ PyModule_AddIntMacro(m, SO_REUSEADDR);
+#endif
+#ifdef SO_EXCLUSIVEADDRUSE
+ PyModule_AddIntMacro(m, SO_EXCLUSIVEADDRUSE);
+#endif
+
+#ifdef SO_KEEPALIVE
+ PyModule_AddIntMacro(m, SO_KEEPALIVE);
+#endif
+#ifdef SO_DONTROUTE
+ PyModule_AddIntMacro(m, SO_DONTROUTE);
+#endif
+#ifdef SO_BROADCAST
+ PyModule_AddIntMacro(m, SO_BROADCAST);
+#endif
+#ifdef SO_USELOOPBACK
+ PyModule_AddIntMacro(m, SO_USELOOPBACK);
+#endif
+#ifdef SO_LINGER
+ PyModule_AddIntMacro(m, SO_LINGER);
+#endif
+#ifdef SO_OOBINLINE
+ PyModule_AddIntMacro(m, SO_OOBINLINE);
+#endif
+#ifndef __GNU__
+#ifdef SO_REUSEPORT
+ PyModule_AddIntMacro(m, SO_REUSEPORT);
+#endif
+#endif
+#ifdef SO_SNDBUF
+ PyModule_AddIntMacro(m, SO_SNDBUF);
+#endif
+#ifdef SO_RCVBUF
+ PyModule_AddIntMacro(m, SO_RCVBUF);
+#endif
+#ifdef SO_SNDLOWAT
+ PyModule_AddIntMacro(m, SO_SNDLOWAT);
+#endif
+#ifdef SO_RCVLOWAT
+ PyModule_AddIntMacro(m, SO_RCVLOWAT);
+#endif
+#ifdef SO_SNDTIMEO
+ PyModule_AddIntMacro(m, SO_SNDTIMEO);
+#endif
+#ifdef SO_RCVTIMEO
+ PyModule_AddIntMacro(m, SO_RCVTIMEO);
+#endif
+#ifdef SO_ERROR
+ PyModule_AddIntMacro(m, SO_ERROR);
+#endif
+#ifdef SO_TYPE
+ PyModule_AddIntMacro(m, SO_TYPE);
+#endif
+#ifdef SO_SETFIB
+ PyModule_AddIntMacro(m, SO_SETFIB);
+#endif
+#ifdef SO_PASSCRED
+ PyModule_AddIntMacro(m, SO_PASSCRED);
+#endif
+#ifdef SO_PEERCRED
+ PyModule_AddIntMacro(m, SO_PEERCRED);
+#endif
+#ifdef LOCAL_PEERCRED
+ PyModule_AddIntMacro(m, LOCAL_PEERCRED);
+#endif
+#ifdef SO_PASSSEC
+ PyModule_AddIntMacro(m, SO_PASSSEC);
+#endif
+#ifdef SO_PEERSEC
+ PyModule_AddIntMacro(m, SO_PEERSEC);
+#endif
+#ifdef SO_BINDTODEVICE
+ PyModule_AddIntMacro(m, SO_BINDTODEVICE);
+#endif
+#ifdef SO_PRIORITY
+ PyModule_AddIntMacro(m, SO_PRIORITY);
+#endif
+#ifdef SO_MARK
+ PyModule_AddIntMacro(m, SO_MARK);
+#endif
+#ifdef SO_DOMAIN
+ PyModule_AddIntMacro(m, SO_DOMAIN);
+#endif
+#ifdef SO_PROTOCOL
+ PyModule_AddIntMacro(m, SO_PROTOCOL);
+#endif
+
+ /* Maximum number of connections for "listen" */
+#ifdef SOMAXCONN
+ PyModule_AddIntMacro(m, SOMAXCONN);
+#else
+ PyModule_AddIntConstant(m, "SOMAXCONN", 5); /* Common value */
+#endif
+
+ /* Ancillary message types */
+#ifdef SCM_RIGHTS
+ PyModule_AddIntMacro(m, SCM_RIGHTS);
+#endif
+#ifdef SCM_CREDENTIALS
+ PyModule_AddIntMacro(m, SCM_CREDENTIALS);
+#endif
+#ifdef SCM_CREDS
+ PyModule_AddIntMacro(m, SCM_CREDS);
+#endif
+
+ /* Flags for send, recv */
+#ifdef MSG_OOB
+ PyModule_AddIntMacro(m, MSG_OOB);
+#endif
+#ifdef MSG_PEEK
+ PyModule_AddIntMacro(m, MSG_PEEK);
+#endif
+#ifdef MSG_DONTROUTE
+ PyModule_AddIntMacro(m, MSG_DONTROUTE);
+#endif
+#ifdef MSG_DONTWAIT
+ PyModule_AddIntMacro(m, MSG_DONTWAIT);
+#endif
+#ifdef MSG_EOR
+ PyModule_AddIntMacro(m, MSG_EOR);
+#endif
+#ifdef MSG_TRUNC
+ PyModule_AddIntMacro(m, MSG_TRUNC);
+#endif
+#ifdef MSG_CTRUNC
+ PyModule_AddIntMacro(m, MSG_CTRUNC);
+#endif
+#ifdef MSG_WAITALL
+ PyModule_AddIntMacro(m, MSG_WAITALL);
+#endif
+#ifdef MSG_BTAG
+ PyModule_AddIntMacro(m, MSG_BTAG);
+#endif
+#ifdef MSG_ETAG
+ PyModule_AddIntMacro(m, MSG_ETAG);
+#endif
+#ifdef MSG_NOSIGNAL
+ PyModule_AddIntMacro(m, MSG_NOSIGNAL);
+#endif
+#ifdef MSG_NOTIFICATION
+ PyModule_AddIntMacro(m, MSG_NOTIFICATION);
+#endif
+#ifdef MSG_CMSG_CLOEXEC
+ PyModule_AddIntMacro(m, MSG_CMSG_CLOEXEC);
+#endif
+#ifdef MSG_ERRQUEUE
+ PyModule_AddIntMacro(m, MSG_ERRQUEUE);
+#endif
+#ifdef MSG_CONFIRM
+ PyModule_AddIntMacro(m, MSG_CONFIRM);
+#endif
+#ifdef MSG_MORE
+ PyModule_AddIntMacro(m, MSG_MORE);
+#endif
+#ifdef MSG_EOF
+ PyModule_AddIntMacro(m, MSG_EOF);
+#endif
+#ifdef MSG_BCAST
+ PyModule_AddIntMacro(m, MSG_BCAST);
+#endif
+#ifdef MSG_MCAST
+ PyModule_AddIntMacro(m, MSG_MCAST);
+#endif
+#ifdef MSG_FASTOPEN
+ PyModule_AddIntMacro(m, MSG_FASTOPEN);
+#endif
+
+ /* Protocol level and numbers, usable for [gs]etsockopt */
+#ifdef SOL_SOCKET
+ PyModule_AddIntMacro(m, SOL_SOCKET);
+#endif
+#ifdef SOL_IP
+ PyModule_AddIntMacro(m, SOL_IP);
+#else
+ PyModule_AddIntConstant(m, "SOL_IP", 0);
+#endif
+#ifdef SOL_IPX
+ PyModule_AddIntMacro(m, SOL_IPX);
+#endif
+#ifdef SOL_AX25
+ PyModule_AddIntMacro(m, SOL_AX25);
+#endif
+#ifdef SOL_ATALK
+ PyModule_AddIntMacro(m, SOL_ATALK);
+#endif
+#ifdef SOL_NETROM
+ PyModule_AddIntMacro(m, SOL_NETROM);
+#endif
+#ifdef SOL_ROSE
+ PyModule_AddIntMacro(m, SOL_ROSE);
+#endif
+#ifdef SOL_TCP
+ PyModule_AddIntMacro(m, SOL_TCP);
+#else
+ PyModule_AddIntConstant(m, "SOL_TCP", 6);
+#endif
+#ifdef SOL_UDP
+ PyModule_AddIntMacro(m, SOL_UDP);
+#else
+ PyModule_AddIntConstant(m, "SOL_UDP", 17);
+#endif
+#ifdef SOL_CAN_BASE
+ PyModule_AddIntMacro(m, SOL_CAN_BASE);
+#endif
+#ifdef SOL_CAN_RAW
+ PyModule_AddIntMacro(m, SOL_CAN_RAW);
+ PyModule_AddIntMacro(m, CAN_RAW);
+#endif
+#ifdef HAVE_LINUX_CAN_H
+ PyModule_AddIntMacro(m, CAN_EFF_FLAG);
+ PyModule_AddIntMacro(m, CAN_RTR_FLAG);
+ PyModule_AddIntMacro(m, CAN_ERR_FLAG);
+
+ PyModule_AddIntMacro(m, CAN_SFF_MASK);
+ PyModule_AddIntMacro(m, CAN_EFF_MASK);
+ PyModule_AddIntMacro(m, CAN_ERR_MASK);
+#endif
+#ifdef HAVE_LINUX_CAN_RAW_H
+ PyModule_AddIntMacro(m, CAN_RAW_FILTER);
+ PyModule_AddIntMacro(m, CAN_RAW_ERR_FILTER);
+ PyModule_AddIntMacro(m, CAN_RAW_LOOPBACK);
+ PyModule_AddIntMacro(m, CAN_RAW_RECV_OWN_MSGS);
+#endif
+#ifdef HAVE_LINUX_CAN_RAW_FD_FRAMES
+ PyModule_AddIntMacro(m, CAN_RAW_FD_FRAMES);
+#endif
+#ifdef HAVE_LINUX_CAN_BCM_H
+ PyModule_AddIntMacro(m, CAN_BCM);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_SETUP", TX_SETUP);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_DELETE", TX_DELETE);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_READ", TX_READ);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_SEND", TX_SEND);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_SETUP", RX_SETUP);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_DELETE", RX_DELETE);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_READ", RX_READ);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_STATUS", TX_STATUS);
+ PyModule_AddIntConstant(m, "CAN_BCM_TX_EXPIRED", TX_EXPIRED);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_STATUS", RX_STATUS);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_TIMEOUT", RX_TIMEOUT);
+ PyModule_AddIntConstant(m, "CAN_BCM_RX_CHANGED", RX_CHANGED);
+#endif
+#ifdef SOL_RDS
+ PyModule_AddIntMacro(m, SOL_RDS);
+#endif
+#ifdef HAVE_SOCKADDR_ALG
+ PyModule_AddIntMacro(m, SOL_ALG);
+#endif
+#ifdef RDS_CANCEL_SENT_TO
+ PyModule_AddIntMacro(m, RDS_CANCEL_SENT_TO);
+#endif
+#ifdef RDS_GET_MR
+ PyModule_AddIntMacro(m, RDS_GET_MR);
+#endif
+#ifdef RDS_FREE_MR
+ PyModule_AddIntMacro(m, RDS_FREE_MR);
+#endif
+#ifdef RDS_RECVERR
+ PyModule_AddIntMacro(m, RDS_RECVERR);
+#endif
+#ifdef RDS_CONG_MONITOR
+ PyModule_AddIntMacro(m, RDS_CONG_MONITOR);
+#endif
+#ifdef RDS_GET_MR_FOR_DEST
+ PyModule_AddIntMacro(m, RDS_GET_MR_FOR_DEST);
+#endif
+#ifdef IPPROTO_IP
+ PyModule_AddIntMacro(m, IPPROTO_IP);
+#else
+ PyModule_AddIntConstant(m, "IPPROTO_IP", 0);
+#endif
+#ifdef IPPROTO_HOPOPTS
+ PyModule_AddIntMacro(m, IPPROTO_HOPOPTS);
+#endif
+#ifdef IPPROTO_ICMP
+ PyModule_AddIntMacro(m, IPPROTO_ICMP);
+#else
+ PyModule_AddIntConstant(m, "IPPROTO_ICMP", 1);
+#endif
+#ifdef IPPROTO_IGMP
+ PyModule_AddIntMacro(m, IPPROTO_IGMP);
+#endif
+#ifdef IPPROTO_GGP
+ PyModule_AddIntMacro(m, IPPROTO_GGP);
+#endif
+#ifdef IPPROTO_IPV4
+ PyModule_AddIntMacro(m, IPPROTO_IPV4);
+#endif
+#ifdef IPPROTO_IPV6
+ PyModule_AddIntMacro(m, IPPROTO_IPV6);
+#endif
+#ifdef IPPROTO_IPIP
+ PyModule_AddIntMacro(m, IPPROTO_IPIP);
+#endif
+#ifdef IPPROTO_TCP
+ PyModule_AddIntMacro(m, IPPROTO_TCP);
+#else
+ PyModule_AddIntConstant(m, "IPPROTO_TCP", 6);
+#endif
+#ifdef IPPROTO_EGP
+ PyModule_AddIntMacro(m, IPPROTO_EGP);
+#endif
+#ifdef IPPROTO_PUP
+ PyModule_AddIntMacro(m, IPPROTO_PUP);
+#endif
+#ifdef IPPROTO_UDP
+ PyModule_AddIntMacro(m, IPPROTO_UDP);
+#else
+ PyModule_AddIntConstant(m, "IPPROTO_UDP", 17);
+#endif
+#ifdef IPPROTO_IDP
+ PyModule_AddIntMacro(m, IPPROTO_IDP);
+#endif
+#ifdef IPPROTO_HELLO
+ PyModule_AddIntMacro(m, IPPROTO_HELLO);
+#endif
+#ifdef IPPROTO_ND
+ PyModule_AddIntMacro(m, IPPROTO_ND);
+#endif
+#ifdef IPPROTO_TP
+ PyModule_AddIntMacro(m, IPPROTO_TP);
+#endif
+#ifdef IPPROTO_IPV6
+ PyModule_AddIntMacro(m, IPPROTO_IPV6);
+#endif
+#ifdef IPPROTO_ROUTING
+ PyModule_AddIntMacro(m, IPPROTO_ROUTING);
+#endif
+#ifdef IPPROTO_FRAGMENT
+ PyModule_AddIntMacro(m, IPPROTO_FRAGMENT);
+#endif
+#ifdef IPPROTO_RSVP
+ PyModule_AddIntMacro(m, IPPROTO_RSVP);
+#endif
+#ifdef IPPROTO_GRE
+ PyModule_AddIntMacro(m, IPPROTO_GRE);
+#endif
+#ifdef IPPROTO_ESP
+ PyModule_AddIntMacro(m, IPPROTO_ESP);
+#endif
+#ifdef IPPROTO_AH
+ PyModule_AddIntMacro(m, IPPROTO_AH);
+#endif
+#ifdef IPPROTO_MOBILE
+ PyModule_AddIntMacro(m, IPPROTO_MOBILE);
+#endif
+#ifdef IPPROTO_ICMPV6
+ PyModule_AddIntMacro(m, IPPROTO_ICMPV6);
+#endif
+#ifdef IPPROTO_NONE
+ PyModule_AddIntMacro(m, IPPROTO_NONE);
+#endif
+#ifdef IPPROTO_DSTOPTS
+ PyModule_AddIntMacro(m, IPPROTO_DSTOPTS);
+#endif
+#ifdef IPPROTO_XTP
+ PyModule_AddIntMacro(m, IPPROTO_XTP);
+#endif
+#ifdef IPPROTO_EON
+ PyModule_AddIntMacro(m, IPPROTO_EON);
+#endif
+#ifdef IPPROTO_PIM
+ PyModule_AddIntMacro(m, IPPROTO_PIM);
+#endif
+#ifdef IPPROTO_IPCOMP
+ PyModule_AddIntMacro(m, IPPROTO_IPCOMP);
+#endif
+#ifdef IPPROTO_VRRP
+ PyModule_AddIntMacro(m, IPPROTO_VRRP);
+#endif
+#ifdef IPPROTO_SCTP
+ PyModule_AddIntMacro(m, IPPROTO_SCTP);
+#endif
+#ifdef IPPROTO_BIP
+ PyModule_AddIntMacro(m, IPPROTO_BIP);
+#endif
+/**/
+#ifdef IPPROTO_RAW
+ PyModule_AddIntMacro(m, IPPROTO_RAW);
+#else
+ PyModule_AddIntConstant(m, "IPPROTO_RAW", 255);
+#endif
+#ifdef IPPROTO_MAX
+ PyModule_AddIntMacro(m, IPPROTO_MAX);
+#endif
+
+#ifdef SYSPROTO_CONTROL
+ PyModule_AddIntMacro(m, SYSPROTO_CONTROL);
+#endif
+
+ /* Some port configuration */
+#ifdef IPPORT_RESERVED
+ PyModule_AddIntMacro(m, IPPORT_RESERVED);
+#else
+ PyModule_AddIntConstant(m, "IPPORT_RESERVED", 1024);
+#endif
+#ifdef IPPORT_USERRESERVED
+ PyModule_AddIntMacro(m, IPPORT_USERRESERVED);
+#else
+ PyModule_AddIntConstant(m, "IPPORT_USERRESERVED", 5000);
+#endif
+
+ /* Some reserved IP v.4 addresses */
+#ifdef INADDR_ANY
+ PyModule_AddIntMacro(m, INADDR_ANY);
+#else
+ PyModule_AddIntConstant(m, "INADDR_ANY", 0x00000000);
+#endif
+#ifdef INADDR_BROADCAST
+ PyModule_AddIntMacro(m, INADDR_BROADCAST);
+#else
+ PyModule_AddIntConstant(m, "INADDR_BROADCAST", 0xffffffff);
+#endif
+#ifdef INADDR_LOOPBACK
+ PyModule_AddIntMacro(m, INADDR_LOOPBACK);
+#else
+ PyModule_AddIntConstant(m, "INADDR_LOOPBACK", 0x7F000001);
+#endif
+#ifdef INADDR_UNSPEC_GROUP
+ PyModule_AddIntMacro(m, INADDR_UNSPEC_GROUP);
+#else
+ PyModule_AddIntConstant(m, "INADDR_UNSPEC_GROUP", 0xe0000000);
+#endif
+#ifdef INADDR_ALLHOSTS_GROUP
+ PyModule_AddIntConstant(m, "INADDR_ALLHOSTS_GROUP",
+ INADDR_ALLHOSTS_GROUP);
+#else
+ PyModule_AddIntConstant(m, "INADDR_ALLHOSTS_GROUP", 0xe0000001);
+#endif
+#ifdef INADDR_MAX_LOCAL_GROUP
+ PyModule_AddIntMacro(m, INADDR_MAX_LOCAL_GROUP);
+#else
+ PyModule_AddIntConstant(m, "INADDR_MAX_LOCAL_GROUP", 0xe00000ff);
+#endif
+#ifdef INADDR_NONE
+ PyModule_AddIntMacro(m, INADDR_NONE);
+#else
+ PyModule_AddIntConstant(m, "INADDR_NONE", 0xffffffff);
+#endif
+
+ /* IPv4 [gs]etsockopt options */
+#ifdef IP_OPTIONS
+ PyModule_AddIntMacro(m, IP_OPTIONS);
+#endif
+#ifdef IP_HDRINCL
+ PyModule_AddIntMacro(m, IP_HDRINCL);
+#endif
+#ifdef IP_TOS
+ PyModule_AddIntMacro(m, IP_TOS);
+#endif
+#ifdef IP_TTL
+ PyModule_AddIntMacro(m, IP_TTL);
+#endif
+#ifdef IP_RECVOPTS
+ PyModule_AddIntMacro(m, IP_RECVOPTS);
+#endif
+#ifdef IP_RECVRETOPTS
+ PyModule_AddIntMacro(m, IP_RECVRETOPTS);
+#endif
+#ifdef IP_RECVDSTADDR
+ PyModule_AddIntMacro(m, IP_RECVDSTADDR);
+#endif
+#ifdef IP_RETOPTS
+ PyModule_AddIntMacro(m, IP_RETOPTS);
+#endif
+#ifdef IP_MULTICAST_IF
+ PyModule_AddIntMacro(m, IP_MULTICAST_IF);
+#endif
+#ifdef IP_MULTICAST_TTL
+ PyModule_AddIntMacro(m, IP_MULTICAST_TTL);
+#endif
+#ifdef IP_MULTICAST_LOOP
+ PyModule_AddIntMacro(m, IP_MULTICAST_LOOP);
+#endif
+#ifdef IP_ADD_MEMBERSHIP
+ PyModule_AddIntMacro(m, IP_ADD_MEMBERSHIP);
+#endif
+#ifdef IP_DROP_MEMBERSHIP
+ PyModule_AddIntMacro(m, IP_DROP_MEMBERSHIP);
+#endif
+#ifdef IP_DEFAULT_MULTICAST_TTL
+ PyModule_AddIntMacro(m, IP_DEFAULT_MULTICAST_TTL);
+#endif
+#ifdef IP_DEFAULT_MULTICAST_LOOP
+ PyModule_AddIntMacro(m, IP_DEFAULT_MULTICAST_LOOP);
+#endif
+#ifdef IP_MAX_MEMBERSHIPS
+ PyModule_AddIntMacro(m, IP_MAX_MEMBERSHIPS);
+#endif
+#ifdef IP_TRANSPARENT
+ PyModule_AddIntMacro(m, IP_TRANSPARENT);
+#endif
+
+ /* IPv6 [gs]etsockopt options, defined in RFC2553 */
+#ifdef IPV6_JOIN_GROUP
+ PyModule_AddIntMacro(m, IPV6_JOIN_GROUP);
+#endif
+#ifdef IPV6_LEAVE_GROUP
+ PyModule_AddIntMacro(m, IPV6_LEAVE_GROUP);
+#endif
+#ifdef IPV6_MULTICAST_HOPS
+ PyModule_AddIntMacro(m, IPV6_MULTICAST_HOPS);
+#endif
+#ifdef IPV6_MULTICAST_IF
+ PyModule_AddIntMacro(m, IPV6_MULTICAST_IF);
+#endif
+#ifdef IPV6_MULTICAST_LOOP
+ PyModule_AddIntMacro(m, IPV6_MULTICAST_LOOP);
+#endif
+#ifdef IPV6_UNICAST_HOPS
+ PyModule_AddIntMacro(m, IPV6_UNICAST_HOPS);
+#endif
+ /* Additional IPV6 socket options, defined in RFC 3493 */
+#ifdef IPV6_V6ONLY
+ PyModule_AddIntMacro(m, IPV6_V6ONLY);
+#endif
+ /* Advanced IPV6 socket options, from RFC 3542 */
+#ifdef IPV6_CHECKSUM
+ PyModule_AddIntMacro(m, IPV6_CHECKSUM);
+#endif
+#ifdef IPV6_DONTFRAG
+ PyModule_AddIntMacro(m, IPV6_DONTFRAG);
+#endif
+#ifdef IPV6_DSTOPTS
+ PyModule_AddIntMacro(m, IPV6_DSTOPTS);
+#endif
+#ifdef IPV6_HOPLIMIT
+ PyModule_AddIntMacro(m, IPV6_HOPLIMIT);
+#endif
+#ifdef IPV6_HOPOPTS
+ PyModule_AddIntMacro(m, IPV6_HOPOPTS);
+#endif
+#ifdef IPV6_NEXTHOP
+ PyModule_AddIntMacro(m, IPV6_NEXTHOP);
+#endif
+#ifdef IPV6_PATHMTU
+ PyModule_AddIntMacro(m, IPV6_PATHMTU);
+#endif
+#ifdef IPV6_PKTINFO
+ PyModule_AddIntMacro(m, IPV6_PKTINFO);
+#endif
+#ifdef IPV6_RECVDSTOPTS
+ PyModule_AddIntMacro(m, IPV6_RECVDSTOPTS);
+#endif
+#ifdef IPV6_RECVHOPLIMIT
+ PyModule_AddIntMacro(m, IPV6_RECVHOPLIMIT);
+#endif
+#ifdef IPV6_RECVHOPOPTS
+ PyModule_AddIntMacro(m, IPV6_RECVHOPOPTS);
+#endif
+#ifdef IPV6_RECVPKTINFO
+ PyModule_AddIntMacro(m, IPV6_RECVPKTINFO);
+#endif
+#ifdef IPV6_RECVRTHDR
+ PyModule_AddIntMacro(m, IPV6_RECVRTHDR);
+#endif
+#ifdef IPV6_RECVTCLASS
+ PyModule_AddIntMacro(m, IPV6_RECVTCLASS);
+#endif
+#ifdef IPV6_RTHDR
+ PyModule_AddIntMacro(m, IPV6_RTHDR);
+#endif
+#ifdef IPV6_RTHDRDSTOPTS
+ PyModule_AddIntMacro(m, IPV6_RTHDRDSTOPTS);
+#endif
+#ifdef IPV6_RTHDR_TYPE_0
+ PyModule_AddIntMacro(m, IPV6_RTHDR_TYPE_0);
+#endif
+#ifdef IPV6_RECVPATHMTU
+ PyModule_AddIntMacro(m, IPV6_RECVPATHMTU);
+#endif
+#ifdef IPV6_TCLASS
+ PyModule_AddIntMacro(m, IPV6_TCLASS);
+#endif
+#ifdef IPV6_USE_MIN_MTU
+ PyModule_AddIntMacro(m, IPV6_USE_MIN_MTU);
+#endif
+
+ /* TCP options */
+#ifdef TCP_NODELAY
+ PyModule_AddIntMacro(m, TCP_NODELAY);
+#endif
+#ifdef TCP_MAXSEG
+ PyModule_AddIntMacro(m, TCP_MAXSEG);
+#endif
+#ifdef TCP_CORK
+ PyModule_AddIntMacro(m, TCP_CORK);
+#endif
+#ifdef TCP_KEEPIDLE
+ PyModule_AddIntMacro(m, TCP_KEEPIDLE);
+#endif
+#ifdef TCP_KEEPINTVL
+ PyModule_AddIntMacro(m, TCP_KEEPINTVL);
+#endif
+#ifdef TCP_KEEPCNT
+ PyModule_AddIntMacro(m, TCP_KEEPCNT);
+#endif
+#ifdef TCP_SYNCNT
+ PyModule_AddIntMacro(m, TCP_SYNCNT);
+#endif
+#ifdef TCP_LINGER2
+ PyModule_AddIntMacro(m, TCP_LINGER2);
+#endif
+#ifdef TCP_DEFER_ACCEPT
+ PyModule_AddIntMacro(m, TCP_DEFER_ACCEPT);
+#endif
+#ifdef TCP_WINDOW_CLAMP
+ PyModule_AddIntMacro(m, TCP_WINDOW_CLAMP);
+#endif
+#ifdef TCP_INFO
+ PyModule_AddIntMacro(m, TCP_INFO);
+#endif
+#ifdef TCP_QUICKACK
+ PyModule_AddIntMacro(m, TCP_QUICKACK);
+#endif
+#ifdef TCP_FASTOPEN
+ PyModule_AddIntMacro(m, TCP_FASTOPEN);
+#endif
+#ifdef TCP_CONGESTION
+ PyModule_AddIntMacro(m, TCP_CONGESTION);
+#endif
+#ifdef TCP_USER_TIMEOUT
+ PyModule_AddIntMacro(m, TCP_USER_TIMEOUT);
+#endif
+
+ /* IPX options */
+#ifdef IPX_TYPE
+ PyModule_AddIntMacro(m, IPX_TYPE);
+#endif
+
+/* Reliable Datagram Sockets */
+#ifdef RDS_CMSG_RDMA_ARGS
+ PyModule_AddIntMacro(m, RDS_CMSG_RDMA_ARGS);
+#endif
+#ifdef RDS_CMSG_RDMA_DEST
+ PyModule_AddIntMacro(m, RDS_CMSG_RDMA_DEST);
+#endif
+#ifdef RDS_CMSG_RDMA_MAP
+ PyModule_AddIntMacro(m, RDS_CMSG_RDMA_MAP);
+#endif
+#ifdef RDS_CMSG_RDMA_STATUS
+ PyModule_AddIntMacro(m, RDS_CMSG_RDMA_STATUS);
+#endif
+#ifdef RDS_CMSG_RDMA_UPDATE
+ PyModule_AddIntMacro(m, RDS_CMSG_RDMA_UPDATE);
+#endif
+#ifdef RDS_RDMA_READWRITE
+ PyModule_AddIntMacro(m, RDS_RDMA_READWRITE);
+#endif
+#ifdef RDS_RDMA_FENCE
+ PyModule_AddIntMacro(m, RDS_RDMA_FENCE);
+#endif
+#ifdef RDS_RDMA_INVALIDATE
+ PyModule_AddIntMacro(m, RDS_RDMA_INVALIDATE);
+#endif
+#ifdef RDS_RDMA_USE_ONCE
+ PyModule_AddIntMacro(m, RDS_RDMA_USE_ONCE);
+#endif
+#ifdef RDS_RDMA_DONTWAIT
+ PyModule_AddIntMacro(m, RDS_RDMA_DONTWAIT);
+#endif
+#ifdef RDS_RDMA_NOTIFY_ME
+ PyModule_AddIntMacro(m, RDS_RDMA_NOTIFY_ME);
+#endif
+#ifdef RDS_RDMA_SILENT
+ PyModule_AddIntMacro(m, RDS_RDMA_SILENT);
+#endif
+
+ /* get{addr,name}info parameters */
+#ifdef EAI_ADDRFAMILY
+ PyModule_AddIntMacro(m, EAI_ADDRFAMILY);
+#endif
+#ifdef EAI_AGAIN
+ PyModule_AddIntMacro(m, EAI_AGAIN);
+#endif
+#ifdef EAI_BADFLAGS
+ PyModule_AddIntMacro(m, EAI_BADFLAGS);
+#endif
+#ifdef EAI_FAIL
+ PyModule_AddIntMacro(m, EAI_FAIL);
+#endif
+#ifdef EAI_FAMILY
+ PyModule_AddIntMacro(m, EAI_FAMILY);
+#endif
+#ifdef EAI_MEMORY
+ PyModule_AddIntMacro(m, EAI_MEMORY);
+#endif
+#ifdef EAI_NODATA
+ PyModule_AddIntMacro(m, EAI_NODATA);
+#endif
+#ifdef EAI_NONAME
+ PyModule_AddIntMacro(m, EAI_NONAME);
+#endif
+#ifdef EAI_OVERFLOW
+ PyModule_AddIntMacro(m, EAI_OVERFLOW);
+#endif
+#ifdef EAI_SERVICE
+ PyModule_AddIntMacro(m, EAI_SERVICE);
+#endif
+#ifdef EAI_SOCKTYPE
+ PyModule_AddIntMacro(m, EAI_SOCKTYPE);
+#endif
+#ifdef EAI_SYSTEM
+ PyModule_AddIntMacro(m, EAI_SYSTEM);
+#endif
+#ifdef EAI_BADHINTS
+ PyModule_AddIntMacro(m, EAI_BADHINTS);
+#endif
+#ifdef EAI_PROTOCOL
+ PyModule_AddIntMacro(m, EAI_PROTOCOL);
+#endif
+#ifdef EAI_MAX
+ PyModule_AddIntMacro(m, EAI_MAX);
+#endif
+#ifdef AI_PASSIVE
+ PyModule_AddIntMacro(m, AI_PASSIVE);
+#endif
+#ifdef AI_CANONNAME
+ PyModule_AddIntMacro(m, AI_CANONNAME);
+#endif
+#ifdef AI_NUMERICHOST
+ PyModule_AddIntMacro(m, AI_NUMERICHOST);
+#endif
+#ifdef AI_NUMERICSERV
+ PyModule_AddIntMacro(m, AI_NUMERICSERV);
+#endif
+#ifdef AI_MASK
+ PyModule_AddIntMacro(m, AI_MASK);
+#endif
+#ifdef AI_ALL
+ PyModule_AddIntMacro(m, AI_ALL);
+#endif
+#ifdef AI_V4MAPPED_CFG
+ PyModule_AddIntMacro(m, AI_V4MAPPED_CFG);
+#endif
+#ifdef AI_ADDRCONFIG
+ PyModule_AddIntMacro(m, AI_ADDRCONFIG);
+#endif
+#ifdef AI_V4MAPPED
+ PyModule_AddIntMacro(m, AI_V4MAPPED);
+#endif
+#ifdef AI_DEFAULT
+ PyModule_AddIntMacro(m, AI_DEFAULT);
+#endif
+#ifdef NI_MAXHOST
+ PyModule_AddIntMacro(m, NI_MAXHOST);
+#endif
+#ifdef NI_MAXSERV
+ PyModule_AddIntMacro(m, NI_MAXSERV);
+#endif
+#ifdef NI_NOFQDN
+ PyModule_AddIntMacro(m, NI_NOFQDN);
+#endif
+#ifdef NI_NUMERICHOST
+ PyModule_AddIntMacro(m, NI_NUMERICHOST);
+#endif
+#ifdef NI_NAMEREQD
+ PyModule_AddIntMacro(m, NI_NAMEREQD);
+#endif
+#ifdef NI_NUMERICSERV
+ PyModule_AddIntMacro(m, NI_NUMERICSERV);
+#endif
+#ifdef NI_DGRAM
+ PyModule_AddIntMacro(m, NI_DGRAM);
+#endif
+
+ /* shutdown() parameters */
+#ifdef SHUT_RD
+ PyModule_AddIntMacro(m, SHUT_RD);
+#elif defined(SD_RECEIVE)
+ PyModule_AddIntConstant(m, "SHUT_RD", SD_RECEIVE);
+#else
+ PyModule_AddIntConstant(m, "SHUT_RD", 0);
+#endif
+#ifdef SHUT_WR
+ PyModule_AddIntMacro(m, SHUT_WR);
+#elif defined(SD_SEND)
+ PyModule_AddIntConstant(m, "SHUT_WR", SD_SEND);
+#else
+ PyModule_AddIntConstant(m, "SHUT_WR", 1);
+#endif
+#ifdef SHUT_RDWR
+ PyModule_AddIntMacro(m, SHUT_RDWR);
+#elif defined(SD_BOTH)
+ PyModule_AddIntConstant(m, "SHUT_RDWR", SD_BOTH);
+#else
+ PyModule_AddIntConstant(m, "SHUT_RDWR", 2);
+#endif
+
+#ifdef SIO_RCVALL
+ {
+ DWORD codes[] = {SIO_RCVALL, SIO_KEEPALIVE_VALS,
+#if defined(SIO_LOOPBACK_FAST_PATH)
+ SIO_LOOPBACK_FAST_PATH
+#endif
+ };
+ const char *names[] = {"SIO_RCVALL", "SIO_KEEPALIVE_VALS",
+#if defined(SIO_LOOPBACK_FAST_PATH)
+ "SIO_LOOPBACK_FAST_PATH"
+#endif
+ };
+ int i;
+ for(i = 0; i<Py_ARRAY_LENGTH(codes); ++i) {
+ PyObject *tmp;
+ tmp = PyLong_FromUnsignedLong(codes[i]);
+ if (tmp == NULL)
+ return NULL;
+ PyModule_AddObject(m, names[i], tmp);
+ }
+ }
+ PyModule_AddIntMacro(m, RCVALL_OFF);
+ PyModule_AddIntMacro(m, RCVALL_ON);
+ PyModule_AddIntMacro(m, RCVALL_SOCKETLEVELONLY);
+#ifdef RCVALL_IPLEVEL
+ PyModule_AddIntMacro(m, RCVALL_IPLEVEL);
+#endif
+#ifdef RCVALL_MAX
+ PyModule_AddIntMacro(m, RCVALL_MAX);
+#endif
+#endif /* _MSTCPIP_ */
+
+ /* Initialize gethostbyname lock */
+#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
+ netdb_lock = PyThread_allocate_lock();
+#endif
+
+#ifdef MS_WINDOWS
+ /* removes some flags on older version Windows during run-time */
+ remove_unusable_flags(m);
+#endif
+
+ return m;
+}
+
+
+#ifndef HAVE_INET_PTON
+#if !defined(NTDDI_VERSION) || (NTDDI_VERSION < NTDDI_LONGHORN)
+
+/* Simplistic emulation code for inet_pton that only works for IPv4 */
+/* These are not exposed because they do not set errno properly */
+
+int
+inet_pton(int af, const char *src, void *dst)
+{
+ if (af == AF_INET) {
+#if (SIZEOF_INT != 4)
+#error "Not sure if in_addr_t exists and int is not 32-bits."
+#endif
+ unsigned int packed_addr;
+ packed_addr = inet_addr(src);
+ if (packed_addr == INADDR_NONE)
+ return 0;
+ memcpy(dst, &packed_addr, 4);
+ return 1;
+ }
+ /* Should set errno to EAFNOSUPPORT */
+ return -1;
+}
+
+const char *
+inet_ntop(int af, const void *src, char *dst, socklen_t size)
+{
+ if (af == AF_INET) {
+ struct in_addr packed_addr;
+ if (size < 16)
+ /* Should set errno to ENOSPC. */
+ return NULL;
+ memcpy(&packed_addr, src, sizeof(packed_addr));
+ return strncpy(dst, inet_ntoa(packed_addr), size);
+ }
+ /* Should set errno to EAFNOSUPPORT */
+ return NULL;
+}
+
+#endif
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
new file mode 100644
index 00000000..ada048f4
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
@@ -0,0 +1,282 @@
+/* Socket module header file */
+#ifdef UEFI_C_SOURCE
+# undef CMSG_LEN // Hack to avoid including code related to CMSG_LEN
+#endif
+/* Includes needed for the sockaddr_* symbols below */
+#ifndef MS_WINDOWS
+#ifdef __VMS
+# include <socket.h>
+# else
+# include <sys/socket.h>
+# endif
+# include <netinet/in.h>
+# if !defined(__CYGWIN__)
+# include <netinet/tcp.h>
+# endif
+
+#else /* MS_WINDOWS */
+# include <winsock2.h>
+/* Windows 'supports' CMSG_LEN, but does not follow the POSIX standard
+ * interface at all, so there is no point including the code that
+ * attempts to use it.
+ */
+# ifdef PySocket_BUILDING_SOCKET
+# undef CMSG_LEN
+# endif
+# include <ws2tcpip.h>
+/* VC6 is shipped with old platform headers, and does not have MSTcpIP.h
+ * Separate SDKs have all the functions we want, but older ones don't have
+ * any version information.
+ * I use SIO_GET_MULTICAST_FILTER to detect a decent SDK.
+ */
+# ifdef SIO_GET_MULTICAST_FILTER
+# include <MSTcpIP.h> /* for SIO_RCVALL */
+# define HAVE_ADDRINFO
+# define HAVE_SOCKADDR_STORAGE
+# define HAVE_GETADDRINFO
+# define HAVE_GETNAMEINFO
+# define ENABLE_IPV6
+# else
+typedef int socklen_t;
+# endif /* SIO_GET_MULTICAST_FILTER */
+#endif /* MS_WINDOWS */
+
+#ifdef HAVE_SYS_UN_H
+# include <sys/un.h>
+#else
+# undef AF_UNIX
+#endif
+
+#ifdef HAVE_LINUX_NETLINK_H
+# ifdef HAVE_ASM_TYPES_H
+# include <asm/types.h>
+# endif
+# include <linux/netlink.h>
+#else
+# undef AF_NETLINK
+#endif
+
+#ifdef HAVE_BLUETOOTH_BLUETOOTH_H
+#include <bluetooth/bluetooth.h>
+#include <bluetooth/rfcomm.h>
+#include <bluetooth/l2cap.h>
+#include <bluetooth/sco.h>
+#include <bluetooth/hci.h>
+#endif
+
+#ifdef HAVE_BLUETOOTH_H
+#include <bluetooth.h>
+#endif
+
+#ifdef HAVE_NET_IF_H
+# include <net/if.h>
+#endif
+
+#ifdef HAVE_NETPACKET_PACKET_H
+# include <sys/ioctl.h>
+# include <netpacket/packet.h>
+#endif
+
+#ifdef HAVE_LINUX_TIPC_H
+# include <linux/tipc.h>
+#endif
+
+#ifdef HAVE_LINUX_CAN_H
+# include <linux/can.h>
+#else
+# undef AF_CAN
+# undef PF_CAN
+#endif
+
+#ifdef HAVE_LINUX_CAN_RAW_H
+#include <linux/can/raw.h>
+#endif
+
+#ifdef HAVE_LINUX_CAN_BCM_H
+#include <linux/can/bcm.h>
+#endif
+
+#ifdef HAVE_SYS_SYS_DOMAIN_H
+#include <sys/sys_domain.h>
+#endif
+#ifdef HAVE_SYS_KERN_CONTROL_H
+#include <sys/kern_control.h>
+#endif
+
+#ifdef HAVE_SOCKADDR_ALG
+#include <linux/if_alg.h>
+#ifndef AF_ALG
+#define AF_ALG 38
+#endif
+#ifndef SOL_ALG
+#define SOL_ALG 279
+#endif
+
+/* Linux 3.19 */
+#ifndef ALG_SET_AEAD_ASSOCLEN
+#define ALG_SET_AEAD_ASSOCLEN 4
+#endif
+#ifndef ALG_SET_AEAD_AUTHSIZE
+#define ALG_SET_AEAD_AUTHSIZE 5
+#endif
+/* Linux 4.8 */
+#ifndef ALG_SET_PUBKEY
+#define ALG_SET_PUBKEY 6
+#endif
+
+#ifndef ALG_OP_SIGN
+#define ALG_OP_SIGN 2
+#endif
+#ifndef ALG_OP_VERIFY
+#define ALG_OP_VERIFY 3
+#endif
+
+#endif /* HAVE_SOCKADDR_ALG */
+
+
+#ifndef Py__SOCKET_H
+#define Py__SOCKET_H
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Python module and C API name */
+#define PySocket_MODULE_NAME "_socket"
+#define PySocket_CAPI_NAME "CAPI"
+#define PySocket_CAPSULE_NAME PySocket_MODULE_NAME "." PySocket_CAPI_NAME
+
+/* Abstract the socket file descriptor type */
+#ifdef MS_WINDOWS
+typedef SOCKET SOCKET_T;
+# ifdef MS_WIN64
+# define SIZEOF_SOCKET_T 8
+# else
+# define SIZEOF_SOCKET_T 4
+# endif
+#else
+typedef int SOCKET_T;
+# define SIZEOF_SOCKET_T SIZEOF_INT
+#endif
+
+#if SIZEOF_SOCKET_T <= SIZEOF_LONG
+#define PyLong_FromSocket_t(fd) PyLong_FromLong((SOCKET_T)(fd))
+#define PyLong_AsSocket_t(fd) (SOCKET_T)PyLong_AsLong(fd)
+#else
+#define PyLong_FromSocket_t(fd) PyLong_FromLongLong((SOCKET_T)(fd))
+#define PyLong_AsSocket_t(fd) (SOCKET_T)PyLong_AsLongLong(fd)
+#endif
+
+/* Socket address */
+typedef union sock_addr {
+ struct sockaddr_in in;
+ struct sockaddr sa;
+#ifdef AF_UNIX
+ struct sockaddr_un un;
+#endif
+#ifdef AF_NETLINK
+ struct sockaddr_nl nl;
+#endif
+#ifdef ENABLE_IPV6
+ struct sockaddr_in6 in6;
+ struct sockaddr_storage storage;
+#endif
+#ifdef HAVE_BLUETOOTH_BLUETOOTH_H
+ struct sockaddr_l2 bt_l2;
+ struct sockaddr_rc bt_rc;
+ struct sockaddr_sco bt_sco;
+ struct sockaddr_hci bt_hci;
+#endif
+#ifdef HAVE_NETPACKET_PACKET_H
+ struct sockaddr_ll ll;
+#endif
+#ifdef HAVE_LINUX_CAN_H
+ struct sockaddr_can can;
+#endif
+#ifdef HAVE_SYS_KERN_CONTROL_H
+ struct sockaddr_ctl ctl;
+#endif
+#ifdef HAVE_SOCKADDR_ALG
+ struct sockaddr_alg alg;
+#endif
+} sock_addr_t;
+
+/* The object holding a socket. It holds some extra information,
+ like the address family, which is used to decode socket address
+ arguments properly. */
+
+typedef struct {
+ PyObject_HEAD
+ SOCKET_T sock_fd; /* Socket file descriptor */
+ int sock_family; /* Address family, e.g., AF_INET */
+ int sock_type; /* Socket type, e.g., SOCK_STREAM */
+ int sock_proto; /* Protocol type, usually 0 */
+ PyObject *(*errorhandler)(void); /* Error handler; checks
+ errno, returns NULL and
+ sets a Python exception */
+ _PyTime_t sock_timeout; /* Operation timeout in seconds;
+ 0.0 means non-blocking */
+} PySocketSockObject;
+
+/* --- C API ----------------------------------------------------*/
+
+/* Short explanation of what this C API export mechanism does
+ and how it works:
+
+ The _ssl module needs access to the type object defined in
+ the _socket module. Since cross-DLL linking introduces a lot of
+ problems on many platforms, the "trick" is to wrap the
+ C API of a module in a struct which then gets exported to
+ other modules via a PyCapsule.
+
+ The code in socketmodule.c defines this struct (which currently
+ only contains the type object reference, but could very
+ well also include other C APIs needed by other modules)
+ and exports it as PyCapsule via the module dictionary
+ under the name "CAPI".
+
+ Other modules can now include the socketmodule.h file
+ which defines the needed C APIs to import and set up
+ a static copy of this struct in the importing module.
+
+ After initialization, the importing module can then
+ access the C APIs from the _socket module by simply
+ referring to the static struct, e.g.
+
+ Load _socket module and its C API; this sets up the global
+ PySocketModule:
+
+   if (PySocketModule_ImportModuleAndAPI())
+       return;
+
+   Now use the C API as if it were defined in the using
+   module:
+
+   if (!PyArg_ParseTuple(args, "O!|zz:ssl",
+                         PySocketModule.Sock_Type,
+                         (PyObject*)&Sock,
+                         &key_file, &cert_file))
+       return NULL;
+
+ Support could easily be extended to export more C APIs/symbols
+   this way. Currently, only the type object is exported;
+   other candidates would be socket constructors and socket
+ access functions.
+
+*/
+
+/* C API for usage by other Python modules */
+typedef struct {
+ PyTypeObject *Sock_Type;
+ PyObject *error;
+ PyObject *timeout_error;
+} PySocketModule_APIObject;
+
+#define PySocketModule_ImportModuleAndAPI() PyCapsule_Import(PySocket_CAPSULE_NAME, 1)
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* !Py__SOCKET_H */
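
The comment block above describes how _ssl borrows the socket type object
through the "_socket.CAPI" capsule. For reference, a minimal consumer of that
mechanism could look like the following sketch; the module name "consumer" and
its single method are illustrative only and are not part of this patch:

    /* consumer.c -- hypothetical extension module using the _socket C API */
    #include "Python.h"
    #include "socketmodule.h"

    static PySocketModule_APIObject PySocketModule;

    static PyObject *
    consumer_is_socket(PyObject *self, PyObject *arg)
    {
        /* use the imported type object as if it were defined locally */
        return PyBool_FromLong(PyObject_TypeCheck(arg, PySocketModule.Sock_Type));
    }

    static PyMethodDef consumer_methods[] = {
        {"is_socket", consumer_is_socket, METH_O, "Check for a _socket.socket."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef consumer_module = {
        PyModuleDef_HEAD_INIT, "consumer", NULL, -1, consumer_methods
    };

    PyMODINIT_FUNC
    PyInit_consumer(void)
    {
        /* PyCapsule_Import() loads _socket and returns the exported struct */
        void *api = PySocketModule_ImportModuleAndAPI();
        if (api == NULL)
            return NULL;                      /* exception already set */
        PySocketModule = *(PySocketModule_APIObject *)api;
        return PyModule_Create(&consumer_module);
    }
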
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
new file mode 100644
index 00000000..a50dad0d
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
@@ -0,0 +1,1372 @@
+/*
+ * Secret Labs' Regular Expression Engine
+ *
+ * regular expression matching engine
+ *
+ * Copyright (c) 1997-2001 by Secret Labs AB. All rights reserved.
+ *
+ * See the _sre.c file for information on usage and redistribution.
+ */
+
+/* String matching engine */
+
+/* This file is included three times, with different character settings */
+
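
As the #undef lines at the end of this file indicate, the including
translation unit is expected to define SRE_CHAR, SIZEOF_SRE_CHAR and the SRE()
name-mangling macro before each of the three inclusions. A rough sketch of
that setup, modelled on CPython's _sre.c (the Py_UCS* types and the sre_ucsN_
prefixes are shown for illustration only):

    #define SRE_CHAR Py_UCS1            /* 1-byte strings */
    #define SIZEOF_SRE_CHAR 1
    #define SRE(F) sre_ucs1_##F         /* emits sre_ucs1_match, sre_ucs1_search, ... */
    #include "sre_lib.h"

    #define SRE_CHAR Py_UCS2            /* 2-byte strings */
    #define SIZEOF_SRE_CHAR 2
    #define SRE(F) sre_ucs2_##F
    #include "sre_lib.h"

    #define SRE_CHAR Py_UCS4            /* 4-byte strings */
    #define SIZEOF_SRE_CHAR 4
    #define SRE(F) sre_ucs4_##F
    #include "sre_lib.h"
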
+LOCAL(int)
+SRE(at)(SRE_STATE* state, SRE_CHAR* ptr, SRE_CODE at)
+{
+ /* check if pointer is at given position */
+
+ Py_ssize_t thisp, thatp;
+
+ switch (at) {
+
+ case SRE_AT_BEGINNING:
+ case SRE_AT_BEGINNING_STRING:
+ return ((void*) ptr == state->beginning);
+
+ case SRE_AT_BEGINNING_LINE:
+ return ((void*) ptr == state->beginning ||
+ SRE_IS_LINEBREAK((int) ptr[-1]));
+
+ case SRE_AT_END:
+ return (((SRE_CHAR *)state->end - ptr == 1 &&
+ SRE_IS_LINEBREAK((int) ptr[0])) ||
+ ((void*) ptr == state->end));
+
+ case SRE_AT_END_LINE:
+ return ((void*) ptr == state->end ||
+ SRE_IS_LINEBREAK((int) ptr[0]));
+
+ case SRE_AT_END_STRING:
+ return ((void*) ptr == state->end);
+
+ case SRE_AT_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_IS_WORD((int) ptr[0]) : 0;
+ return thisp != thatp;
+
+ case SRE_AT_NON_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_IS_WORD((int) ptr[0]) : 0;
+ return thisp == thatp;
+
+ case SRE_AT_LOC_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_LOC_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_LOC_IS_WORD((int) ptr[0]) : 0;
+ return thisp != thatp;
+
+ case SRE_AT_LOC_NON_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_LOC_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_LOC_IS_WORD((int) ptr[0]) : 0;
+ return thisp == thatp;
+
+ case SRE_AT_UNI_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_UNI_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_UNI_IS_WORD((int) ptr[0]) : 0;
+ return thisp != thatp;
+
+ case SRE_AT_UNI_NON_BOUNDARY:
+ if (state->beginning == state->end)
+ return 0;
+ thatp = ((void*) ptr > state->beginning) ?
+ SRE_UNI_IS_WORD((int) ptr[-1]) : 0;
+ thisp = ((void*) ptr < state->end) ?
+ SRE_UNI_IS_WORD((int) ptr[0]) : 0;
+ return thisp == thatp;
+
+ }
+
+ return 0;
+}
+
+LOCAL(int)
+SRE(charset)(SRE_STATE* state, SRE_CODE* set, SRE_CODE ch)
+{
+ /* check if character is a member of the given set */
+
+ int ok = 1;
+
+ for (;;) {
+ switch (*set++) {
+
+ case SRE_OP_FAILURE:
+ return !ok;
+
+ case SRE_OP_LITERAL:
+ /* <LITERAL> <code> */
+ if (ch == set[0])
+ return ok;
+ set++;
+ break;
+
+ case SRE_OP_CATEGORY:
+ /* <CATEGORY> <code> */
+ if (sre_category(set[0], (int) ch))
+ return ok;
+ set++;
+ break;
+
+ case SRE_OP_CHARSET:
+ /* <CHARSET> <bitmap> */
+ if (ch < 256 &&
+ (set[ch/SRE_CODE_BITS] & (1u << (ch & (SRE_CODE_BITS-1)))))
+ return ok;
+ set += 256/SRE_CODE_BITS;
+ break;
+
+ case SRE_OP_RANGE:
+ /* <RANGE> <lower> <upper> */
+ if (set[0] <= ch && ch <= set[1])
+ return ok;
+ set += 2;
+ break;
+
+ case SRE_OP_RANGE_IGNORE:
+ /* <RANGE_IGNORE> <lower> <upper> */
+ {
+ SRE_CODE uch;
+ /* ch is already lower cased */
+ if (set[0] <= ch && ch <= set[1])
+ return ok;
+ uch = state->upper(ch);
+ if (set[0] <= uch && uch <= set[1])
+ return ok;
+ set += 2;
+ break;
+ }
+
+ case SRE_OP_NEGATE:
+ ok = !ok;
+ break;
+
+ case SRE_OP_BIGCHARSET:
+ /* <BIGCHARSET> <blockcount> <256 blockindices> <blocks> */
+ {
+ Py_ssize_t count, block;
+ count = *(set++);
+
+ if (ch < 0x10000u)
+ block = ((unsigned char*)set)[ch >> 8];
+ else
+ block = -1;
+ set += 256/sizeof(SRE_CODE);
+ if (block >=0 &&
+ (set[(block * 256 + (ch & 255))/SRE_CODE_BITS] &
+ (1u << (ch & (SRE_CODE_BITS-1)))))
+ return ok;
+ set += count * (256/SRE_CODE_BITS);
+ break;
+ }
+
+ default:
+ /* internal error -- there's not much we can do about it
+ here, so let's just pretend it didn't match... */
+ return 0;
+ }
+ }
+}
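
The <CHARSET> case above tests membership in a 256-bit bitmap stored in
SRE_CODE words, via set[ch/SRE_CODE_BITS] & (1u << (ch & (SRE_CODE_BITS-1))).
The standalone snippet below shows the same index/mask arithmetic in
isolation, assuming 32-bit code words (SRE_CODE_BITS == 32):

    #include <stdio.h>
    #include <stdint.h>

    #define CODE_BITS 32                           /* assumed SRE_CODE width */

    int main(void)
    {
        uint32_t bitmap[256 / CODE_BITS] = {0};    /* one bit per byte value */
        unsigned ch;

        /* mark 'a'..'z' as members, as a compiled <CHARSET> bitmap would */
        for (ch = 'a'; ch <= 'z'; ch++)
            bitmap[ch / CODE_BITS] |= 1u << (ch & (CODE_BITS - 1));

        /* probe a few byte values for membership */
        for (ch = 0; ch < 256; ch += 97)
            printf("0x%02x in set: %d\n", ch,
                   (int)((bitmap[ch / CODE_BITS] >> (ch & (CODE_BITS - 1))) & 1u));
        return 0;
    }
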
+
+LOCAL(Py_ssize_t) SRE(match)(SRE_STATE* state, SRE_CODE* pattern, int match_all);
+
+LOCAL(Py_ssize_t)
+SRE(count)(SRE_STATE* state, SRE_CODE* pattern, Py_ssize_t maxcount)
+{
+ SRE_CODE chr;
+ SRE_CHAR c;
+ SRE_CHAR* ptr = (SRE_CHAR *)state->ptr;
+ SRE_CHAR* end = (SRE_CHAR *)state->end;
+ Py_ssize_t i;
+
+ /* adjust end */
+ if (maxcount < end - ptr && maxcount != SRE_MAXREPEAT)
+ end = ptr + maxcount;
+
+ switch (pattern[0]) {
+
+ case SRE_OP_IN:
+ /* repeated set */
+ TRACE(("|%p|%p|COUNT IN\n", pattern, ptr));
+ while (ptr < end && SRE(charset)(state, pattern + 2, *ptr))
+ ptr++;
+ break;
+
+ case SRE_OP_ANY:
+ /* repeated dot wildcard. */
+ TRACE(("|%p|%p|COUNT ANY\n", pattern, ptr));
+ while (ptr < end && !SRE_IS_LINEBREAK(*ptr))
+ ptr++;
+ break;
+
+ case SRE_OP_ANY_ALL:
+ /* repeated dot wildcard. skip to the end of the target
+ string, and backtrack from there */
+ TRACE(("|%p|%p|COUNT ANY_ALL\n", pattern, ptr));
+ ptr = end;
+ break;
+
+ case SRE_OP_LITERAL:
+ /* repeated literal */
+ chr = pattern[1];
+ TRACE(("|%p|%p|COUNT LITERAL %d\n", pattern, ptr, chr));
+ c = (SRE_CHAR) chr;
+#if SIZEOF_SRE_CHAR < 4
+ if ((SRE_CODE) c != chr)
+ ; /* literal can't match: doesn't fit in char width */
+ else
+#endif
+ while (ptr < end && *ptr == c)
+ ptr++;
+ break;
+
+ case SRE_OP_LITERAL_IGNORE:
+ /* repeated literal */
+ chr = pattern[1];
+ TRACE(("|%p|%p|COUNT LITERAL_IGNORE %d\n", pattern, ptr, chr));
+ while (ptr < end && (SRE_CODE) state->lower(*ptr) == chr)
+ ptr++;
+ break;
+
+ case SRE_OP_NOT_LITERAL:
+ /* repeated non-literal */
+ chr = pattern[1];
+ TRACE(("|%p|%p|COUNT NOT_LITERAL %d\n", pattern, ptr, chr));
+ c = (SRE_CHAR) chr;
+#if SIZEOF_SRE_CHAR < 4
+ if ((SRE_CODE) c != chr)
+ ptr = end; /* literal can't match: doesn't fit in char width */
+ else
+#endif
+ while (ptr < end && *ptr != c)
+ ptr++;
+ break;
+
+ case SRE_OP_NOT_LITERAL_IGNORE:
+ /* repeated non-literal */
+ chr = pattern[1];
+ TRACE(("|%p|%p|COUNT NOT_LITERAL_IGNORE %d\n", pattern, ptr, chr));
+ while (ptr < end && (SRE_CODE) state->lower(*ptr) != chr)
+ ptr++;
+ break;
+
+ default:
+ /* repeated single character pattern */
+ TRACE(("|%p|%p|COUNT SUBPATTERN\n", pattern, ptr));
+ while ((SRE_CHAR*) state->ptr < end) {
+ i = SRE(match)(state, pattern, 0);
+ if (i < 0)
+ return i;
+ if (!i)
+ break;
+ }
+ TRACE(("|%p|%p|COUNT %" PY_FORMAT_SIZE_T "d\n", pattern, ptr,
+ (SRE_CHAR*) state->ptr - ptr));
+ return (SRE_CHAR*) state->ptr - ptr;
+ }
+
+ TRACE(("|%p|%p|COUNT %" PY_FORMAT_SIZE_T "d\n", pattern, ptr,
+ ptr - (SRE_CHAR*) state->ptr));
+ return ptr - (SRE_CHAR*) state->ptr;
+}
+
+#if 0 /* not used in this release */
+LOCAL(int)
+SRE(info)(SRE_STATE* state, SRE_CODE* pattern)
+{
+ /* check if an SRE_OP_INFO block matches at the current position.
+ returns the number of SRE_CODE objects to skip if successful, 0
+ if no match */
+
+ SRE_CHAR* end = (SRE_CHAR*) state->end;
+ SRE_CHAR* ptr = (SRE_CHAR*) state->ptr;
+ Py_ssize_t i;
+
+ /* check minimal length */
+ if (pattern[3] && end - ptr < pattern[3])
+ return 0;
+
+ /* check known prefix */
+ if (pattern[2] & SRE_INFO_PREFIX && pattern[5] > 1) {
+ /* <length> <skip> <prefix data> <overlap data> */
+ for (i = 0; i < pattern[5]; i++)
+ if ((SRE_CODE) ptr[i] != pattern[7 + i])
+ return 0;
+ return pattern[0] + 2 * pattern[6];
+ }
+ return pattern[0];
+}
+#endif
+
+/* The macros below should be used to protect recursive SRE(match)()
+ * calls that *failed* and do *not* return immediately (IOW, those
+ * that will backtrack). Explaining:
+ *
+ * - Recursive SRE(match)() returned true: that's usually a success
+ * (besides atypical cases like ASSERT_NOT), therefore there's no
+ * reason to restore lastmark;
+ *
+ * - Recursive SRE(match)() returned false but the current SRE(match)()
+ * is returning to the caller: If the current SRE(match)() is the
+ * top function of the recursion, returning false will be a matching
+ * failure, and it doesn't matter where lastmark is pointing to.
+ * If it's *not* the top function, it will be a recursive SRE(match)()
+ * failure by itself, and the calling SRE(match)() will have to deal
+ * with the failure by the same rules explained here (it will restore
+ * lastmark by itself if necessary);
+ *
+ * - Recursive SRE(match)() returned false, and will continue the
+ * outside 'for' loop: must be protected when breaking, since the next
+ * OP could potentially depend on lastmark;
+ *
+ * - Recursive SRE(match)() returned false, and will be called again
+ * inside a local for/while loop: must be protected between each
+ * loop iteration, since the recursive SRE(match)() could do anything,
+ * and could potentially depend on lastmark.
+ *
+ * For more information, check the discussion at SF patch #712900.
+ */
+#define LASTMARK_SAVE() \
+ do { \
+ ctx->lastmark = state->lastmark; \
+ ctx->lastindex = state->lastindex; \
+ } while (0)
+#define LASTMARK_RESTORE() \
+ do { \
+ state->lastmark = ctx->lastmark; \
+ state->lastindex = ctx->lastindex; \
+ } while (0)
+#ifdef UEFI_C_SOURCE
+#undef RETURN_ERROR
+#undef RETURN_SUCCESS
+#endif
+#define RETURN_ERROR(i) do { return i; } while(0)
+#define RETURN_FAILURE do { ret = 0; goto exit; } while(0)
+#define RETURN_SUCCESS do { ret = 1; goto exit; } while(0)
+
+#define RETURN_ON_ERROR(i) \
+ do { if (i < 0) RETURN_ERROR(i); } while (0)
+#define RETURN_ON_SUCCESS(i) \
+ do { RETURN_ON_ERROR(i); if (i > 0) RETURN_SUCCESS; } while (0)
+#define RETURN_ON_FAILURE(i) \
+ do { RETURN_ON_ERROR(i); if (i == 0) RETURN_FAILURE; } while (0)
+
+#define DATA_STACK_ALLOC(state, type, ptr) \
+do { \
+ alloc_pos = state->data_stack_base; \
+ TRACE(("allocating %s in %" PY_FORMAT_SIZE_T "d " \
+ "(%" PY_FORMAT_SIZE_T "d)\n", \
+ Py_STRINGIFY(type), alloc_pos, sizeof(type))); \
+ if (sizeof(type) > state->data_stack_size - alloc_pos) { \
+ int j = data_stack_grow(state, sizeof(type)); \
+ if (j < 0) return j; \
+ if (ctx_pos != -1) \
+ DATA_STACK_LOOKUP_AT(state, SRE(match_context), ctx, ctx_pos); \
+ } \
+ ptr = (type*)(state->data_stack+alloc_pos); \
+ state->data_stack_base += sizeof(type); \
+} while (0)
+
+#define DATA_STACK_LOOKUP_AT(state, type, ptr, pos) \
+do { \
+ TRACE(("looking up %s at %" PY_FORMAT_SIZE_T "d\n", Py_STRINGIFY(type), pos)); \
+ ptr = (type*)(state->data_stack+pos); \
+} while (0)
+
+#define DATA_STACK_PUSH(state, data, size) \
+do { \
+ TRACE(("copy data in %p to %" PY_FORMAT_SIZE_T "d " \
+ "(%" PY_FORMAT_SIZE_T "d)\n", \
+ data, state->data_stack_base, size)); \
+ if (size > state->data_stack_size - state->data_stack_base) { \
+ int j = data_stack_grow(state, size); \
+ if (j < 0) return j; \
+ if (ctx_pos != -1) \
+ DATA_STACK_LOOKUP_AT(state, SRE(match_context), ctx, ctx_pos); \
+ } \
+ memcpy(state->data_stack+state->data_stack_base, data, size); \
+ state->data_stack_base += size; \
+} while (0)
+
+#define DATA_STACK_POP(state, data, size, discard) \
+do { \
+ TRACE(("copy data to %p from %" PY_FORMAT_SIZE_T "d " \
+ "(%" PY_FORMAT_SIZE_T "d)\n", \
+ data, state->data_stack_base-size, size)); \
+ memcpy(data, state->data_stack+state->data_stack_base-size, size); \
+ if (discard) \
+ state->data_stack_base -= size; \
+} while (0)
+
+#define DATA_STACK_POP_DISCARD(state, size) \
+do { \
+ TRACE(("discard data from %" PY_FORMAT_SIZE_T "d " \
+ "(%" PY_FORMAT_SIZE_T "d)\n", \
+ state->data_stack_base-size, size)); \
+ state->data_stack_base -= size; \
+} while(0)
+
+#define DATA_PUSH(x) \
+ DATA_STACK_PUSH(state, (x), sizeof(*(x)))
+#define DATA_POP(x) \
+ DATA_STACK_POP(state, (x), sizeof(*(x)), 1)
+#define DATA_POP_DISCARD(x) \
+ DATA_STACK_POP_DISCARD(state, sizeof(*(x)))
+#define DATA_ALLOC(t,p) \
+ DATA_STACK_ALLOC(state, t, p)
+#define DATA_LOOKUP_AT(t,p,pos) \
+ DATA_STACK_LOOKUP_AT(state,t,p,pos)
+
+#define MARK_PUSH(lastmark) \
+ do if (lastmark > 0) { \
+ i = lastmark; /* ctx->lastmark may change if reallocated */ \
+ DATA_STACK_PUSH(state, state->mark, (i+1)*sizeof(void*)); \
+ } while (0)
+#define MARK_POP(lastmark) \
+ do if (lastmark > 0) { \
+ DATA_STACK_POP(state, state->mark, (lastmark+1)*sizeof(void*), 1); \
+ } while (0)
+#define MARK_POP_KEEP(lastmark) \
+ do if (lastmark > 0) { \
+ DATA_STACK_POP(state, state->mark, (lastmark+1)*sizeof(void*), 0); \
+ } while (0)
+#define MARK_POP_DISCARD(lastmark) \
+ do if (lastmark > 0) { \
+ DATA_STACK_POP_DISCARD(state, (lastmark+1)*sizeof(void*)); \
+ } while (0)
+
+#define JUMP_NONE 0
+#define JUMP_MAX_UNTIL_1 1
+#define JUMP_MAX_UNTIL_2 2
+#define JUMP_MAX_UNTIL_3 3
+#define JUMP_MIN_UNTIL_1 4
+#define JUMP_MIN_UNTIL_2 5
+#define JUMP_MIN_UNTIL_3 6
+#define JUMP_REPEAT 7
+#define JUMP_REPEAT_ONE_1 8
+#define JUMP_REPEAT_ONE_2 9
+#define JUMP_MIN_REPEAT_ONE 10
+#define JUMP_BRANCH 11
+#define JUMP_ASSERT 12
+#define JUMP_ASSERT_NOT 13
+
+#define DO_JUMPX(jumpvalue, jumplabel, nextpattern, matchall) \
+ DATA_ALLOC(SRE(match_context), nextctx); \
+ nextctx->last_ctx_pos = ctx_pos; \
+ nextctx->jump = jumpvalue; \
+ nextctx->pattern = nextpattern; \
+ nextctx->match_all = matchall; \
+ ctx_pos = alloc_pos; \
+ ctx = nextctx; \
+ goto entrance; \
+ jumplabel: \
+ while (0) /* gcc doesn't like labels at end of scopes */ \
+
+#define DO_JUMP(jumpvalue, jumplabel, nextpattern) \
+ DO_JUMPX(jumpvalue, jumplabel, nextpattern, ctx->match_all)
+
+#define DO_JUMP0(jumpvalue, jumplabel, nextpattern) \
+ DO_JUMPX(jumpvalue, jumplabel, nextpattern, 0)
+
+typedef struct {
+ Py_ssize_t last_ctx_pos;
+ Py_ssize_t jump;
+ SRE_CHAR* ptr;
+ SRE_CODE* pattern;
+ Py_ssize_t count;
+ Py_ssize_t lastmark;
+ Py_ssize_t lastindex;
+ union {
+ SRE_CODE chr;
+ SRE_REPEAT* rep;
+ } u;
+ int match_all;
+} SRE(match_context);
+
+/* check if string matches the given pattern. returns <0 for
+ error, 0 for failure, and 1 for success */
+LOCAL(Py_ssize_t)
+SRE(match)(SRE_STATE* state, SRE_CODE* pattern, int match_all)
+{
+ SRE_CHAR* end = (SRE_CHAR *)state->end;
+ Py_ssize_t alloc_pos, ctx_pos = -1;
+ Py_ssize_t i, ret = 0;
+ Py_ssize_t jump;
+ unsigned int sigcount=0;
+
+ SRE(match_context)* ctx;
+ SRE(match_context)* nextctx;
+
+ TRACE(("|%p|%p|ENTER\n", pattern, state->ptr));
+
+ DATA_ALLOC(SRE(match_context), ctx);
+ ctx->last_ctx_pos = -1;
+ ctx->jump = JUMP_NONE;
+ ctx->pattern = pattern;
+ ctx->match_all = match_all;
+ ctx_pos = alloc_pos;
+
+entrance:
+
+ ctx->ptr = (SRE_CHAR *)state->ptr;
+
+ if (ctx->pattern[0] == SRE_OP_INFO) {
+ /* optimization info block */
+ /* <INFO> <1=skip> <2=flags> <3=min> ... */
+ if (ctx->pattern[3] && (uintptr_t)(end - ctx->ptr) < ctx->pattern[3]) {
+ TRACE(("reject (got %" PY_FORMAT_SIZE_T "d chars, "
+ "need %" PY_FORMAT_SIZE_T "d)\n",
+ end - ctx->ptr, (Py_ssize_t) ctx->pattern[3]));
+ RETURN_FAILURE;
+ }
+ ctx->pattern += ctx->pattern[1] + 1;
+ }
+
+ for (;;) {
+ ++sigcount;
+ if ((0 == (sigcount & 0xfff)) && PyErr_CheckSignals())
+ RETURN_ERROR(SRE_ERROR_INTERRUPTED);
+
+ switch (*ctx->pattern++) {
+
+ case SRE_OP_MARK:
+ /* set mark */
+ /* <MARK> <gid> */
+ TRACE(("|%p|%p|MARK %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[0]));
+ i = ctx->pattern[0];
+ if (i & 1)
+ state->lastindex = i/2 + 1;
+ if (i > state->lastmark) {
+ /* state->lastmark is the highest valid index in the
+ state->mark array. If it is increased by more than 1,
+ the intervening marks must be set to NULL to signal
+ that these marks have not been encountered. */
+ Py_ssize_t j = state->lastmark + 1;
+ while (j < i)
+ state->mark[j++] = NULL;
+ state->lastmark = i;
+ }
+ state->mark[i] = ctx->ptr;
+ ctx->pattern++;
+ break;
+
+ case SRE_OP_LITERAL:
+ /* match literal string */
+ /* <LITERAL> <code> */
+ TRACE(("|%p|%p|LITERAL %d\n", ctx->pattern,
+ ctx->ptr, *ctx->pattern));
+ if (ctx->ptr >= end || (SRE_CODE) ctx->ptr[0] != ctx->pattern[0])
+ RETURN_FAILURE;
+ ctx->pattern++;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_NOT_LITERAL:
+ /* match anything that is not literal character */
+ /* <NOT_LITERAL> <code> */
+ TRACE(("|%p|%p|NOT_LITERAL %d\n", ctx->pattern,
+ ctx->ptr, *ctx->pattern));
+ if (ctx->ptr >= end || (SRE_CODE) ctx->ptr[0] == ctx->pattern[0])
+ RETURN_FAILURE;
+ ctx->pattern++;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_SUCCESS:
+ /* end of pattern */
+ TRACE(("|%p|%p|SUCCESS\n", ctx->pattern, ctx->ptr));
+ if (!ctx->match_all || ctx->ptr == state->end) {
+ state->ptr = ctx->ptr;
+ RETURN_SUCCESS;
+ }
+ RETURN_FAILURE;
+
+ case SRE_OP_AT:
+ /* match at given position */
+ /* <AT> <code> */
+ TRACE(("|%p|%p|AT %d\n", ctx->pattern, ctx->ptr, *ctx->pattern));
+ if (!SRE(at)(state, ctx->ptr, *ctx->pattern))
+ RETURN_FAILURE;
+ ctx->pattern++;
+ break;
+
+ case SRE_OP_CATEGORY:
+ /* match at given category */
+ /* <CATEGORY> <code> */
+ TRACE(("|%p|%p|CATEGORY %d\n", ctx->pattern,
+ ctx->ptr, *ctx->pattern));
+ if (ctx->ptr >= end || !sre_category(ctx->pattern[0], ctx->ptr[0]))
+ RETURN_FAILURE;
+ ctx->pattern++;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_ANY:
+ /* match anything (except a newline) */
+ /* <ANY> */
+ TRACE(("|%p|%p|ANY\n", ctx->pattern, ctx->ptr));
+ if (ctx->ptr >= end || SRE_IS_LINEBREAK(ctx->ptr[0]))
+ RETURN_FAILURE;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_ANY_ALL:
+ /* match anything */
+ /* <ANY_ALL> */
+ TRACE(("|%p|%p|ANY_ALL\n", ctx->pattern, ctx->ptr));
+ if (ctx->ptr >= end)
+ RETURN_FAILURE;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_IN:
+ /* match set member (or non_member) */
+ /* <IN> <skip> <set> */
+ TRACE(("|%p|%p|IN\n", ctx->pattern, ctx->ptr));
+ if (ctx->ptr >= end ||
+ !SRE(charset)(state, ctx->pattern + 1, *ctx->ptr))
+ RETURN_FAILURE;
+ ctx->pattern += ctx->pattern[0];
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_LITERAL_IGNORE:
+ TRACE(("|%p|%p|LITERAL_IGNORE %d\n",
+ ctx->pattern, ctx->ptr, ctx->pattern[0]));
+ if (ctx->ptr >= end ||
+ state->lower(*ctx->ptr) != state->lower(*ctx->pattern))
+ RETURN_FAILURE;
+ ctx->pattern++;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_NOT_LITERAL_IGNORE:
+ TRACE(("|%p|%p|NOT_LITERAL_IGNORE %d\n",
+ ctx->pattern, ctx->ptr, *ctx->pattern));
+ if (ctx->ptr >= end ||
+ state->lower(*ctx->ptr) == state->lower(*ctx->pattern))
+ RETURN_FAILURE;
+ ctx->pattern++;
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_IN_IGNORE:
+ TRACE(("|%p|%p|IN_IGNORE\n", ctx->pattern, ctx->ptr));
+ if (ctx->ptr >= end
+ || !SRE(charset)(state, ctx->pattern+1,
+ (SRE_CODE)state->lower(*ctx->ptr)))
+ RETURN_FAILURE;
+ ctx->pattern += ctx->pattern[0];
+ ctx->ptr++;
+ break;
+
+ case SRE_OP_JUMP:
+ case SRE_OP_INFO:
+ /* jump forward */
+ /* <JUMP> <offset> */
+ TRACE(("|%p|%p|JUMP %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[0]));
+ ctx->pattern += ctx->pattern[0];
+ break;
+
+ case SRE_OP_BRANCH:
+ /* alternation */
+ /* <BRANCH> <0=skip> code <JUMP> ... <NULL> */
+ TRACE(("|%p|%p|BRANCH\n", ctx->pattern, ctx->ptr));
+ LASTMARK_SAVE();
+ ctx->u.rep = state->repeat;
+ if (ctx->u.rep)
+ MARK_PUSH(ctx->lastmark);
+ for (; ctx->pattern[0]; ctx->pattern += ctx->pattern[0]) {
+ if (ctx->pattern[1] == SRE_OP_LITERAL &&
+ (ctx->ptr >= end ||
+ (SRE_CODE) *ctx->ptr != ctx->pattern[2]))
+ continue;
+ if (ctx->pattern[1] == SRE_OP_IN &&
+ (ctx->ptr >= end ||
+ !SRE(charset)(state, ctx->pattern + 3,
+ (SRE_CODE) *ctx->ptr)))
+ continue;
+ state->ptr = ctx->ptr;
+ DO_JUMP(JUMP_BRANCH, jump_branch, ctx->pattern+1);
+ if (ret) {
+ if (ctx->u.rep)
+ MARK_POP_DISCARD(ctx->lastmark);
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ if (ctx->u.rep)
+ MARK_POP_KEEP(ctx->lastmark);
+ LASTMARK_RESTORE();
+ }
+ if (ctx->u.rep)
+ MARK_POP_DISCARD(ctx->lastmark);
+ RETURN_FAILURE;
+
+ case SRE_OP_REPEAT_ONE:
+ /* match repeated sequence (maximizing regexp) */
+
+ /* this operator only works if the repeated item is
+ exactly one character wide, and we're not already
+ collecting backtracking points. for other cases,
+ use the MAX_REPEAT operator */
+
+ /* <REPEAT_ONE> <skip> <1=min> <2=max> item <SUCCESS> tail */
+
+ TRACE(("|%p|%p|REPEAT_ONE %d %d\n", ctx->pattern, ctx->ptr,
+ ctx->pattern[1], ctx->pattern[2]));
+
+ if ((Py_ssize_t) ctx->pattern[1] > end - ctx->ptr)
+ RETURN_FAILURE; /* cannot match */
+
+ state->ptr = ctx->ptr;
+
+ ret = SRE(count)(state, ctx->pattern+3, ctx->pattern[2]);
+ RETURN_ON_ERROR(ret);
+ DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
+ ctx->count = ret;
+ ctx->ptr += ctx->count;
+
+ /* when we arrive here, count contains the number of
+ matches, and ctx->ptr points to the tail of the target
+ string. check if the rest of the pattern matches,
+ and backtrack if not. */
+
+ if (ctx->count < (Py_ssize_t) ctx->pattern[1])
+ RETURN_FAILURE;
+
+ if (ctx->pattern[ctx->pattern[0]] == SRE_OP_SUCCESS &&
+ ctx->ptr == state->end) {
+ /* tail is empty. we're finished */
+ state->ptr = ctx->ptr;
+ RETURN_SUCCESS;
+ }
+
+ LASTMARK_SAVE();
+
+ if (ctx->pattern[ctx->pattern[0]] == SRE_OP_LITERAL) {
+ /* tail starts with a literal. skip positions where
+ the rest of the pattern cannot possibly match */
+ ctx->u.chr = ctx->pattern[ctx->pattern[0]+1];
+ for (;;) {
+ while (ctx->count >= (Py_ssize_t) ctx->pattern[1] &&
+ (ctx->ptr >= end || *ctx->ptr != ctx->u.chr)) {
+ ctx->ptr--;
+ ctx->count--;
+ }
+ if (ctx->count < (Py_ssize_t) ctx->pattern[1])
+ break;
+ state->ptr = ctx->ptr;
+ DO_JUMP(JUMP_REPEAT_ONE_1, jump_repeat_one_1,
+ ctx->pattern+ctx->pattern[0]);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+
+ LASTMARK_RESTORE();
+
+ ctx->ptr--;
+ ctx->count--;
+ }
+
+ } else {
+ /* general case */
+ while (ctx->count >= (Py_ssize_t) ctx->pattern[1]) {
+ state->ptr = ctx->ptr;
+ DO_JUMP(JUMP_REPEAT_ONE_2, jump_repeat_one_2,
+ ctx->pattern+ctx->pattern[0]);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ ctx->ptr--;
+ ctx->count--;
+ LASTMARK_RESTORE();
+ }
+ }
+ RETURN_FAILURE;
+
+ case SRE_OP_MIN_REPEAT_ONE:
+ /* match repeated sequence (minimizing regexp) */
+
+ /* this operator only works if the repeated item is
+ exactly one character wide, and we're not already
+ collecting backtracking points. for other cases,
+ use the MIN_REPEAT operator */
+
+ /* <MIN_REPEAT_ONE> <skip> <1=min> <2=max> item <SUCCESS> tail */
+
+ TRACE(("|%p|%p|MIN_REPEAT_ONE %d %d\n", ctx->pattern, ctx->ptr,
+ ctx->pattern[1], ctx->pattern[2]));
+
+ if ((Py_ssize_t) ctx->pattern[1] > end - ctx->ptr)
+ RETURN_FAILURE; /* cannot match */
+
+ state->ptr = ctx->ptr;
+
+ if (ctx->pattern[1] == 0)
+ ctx->count = 0;
+ else {
+ /* count using pattern min as the maximum */
+ ret = SRE(count)(state, ctx->pattern+3, ctx->pattern[1]);
+ RETURN_ON_ERROR(ret);
+ DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
+ if (ret < (Py_ssize_t) ctx->pattern[1])
+ /* didn't match minimum number of times */
+ RETURN_FAILURE;
+ /* advance past minimum matches of repeat */
+ ctx->count = ret;
+ ctx->ptr += ctx->count;
+ }
+
+ if (ctx->pattern[ctx->pattern[0]] == SRE_OP_SUCCESS &&
+ (!match_all || ctx->ptr == state->end)) {
+ /* tail is empty. we're finished */
+ state->ptr = ctx->ptr;
+ RETURN_SUCCESS;
+
+ } else {
+ /* general case */
+ LASTMARK_SAVE();
+ while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT
+ || ctx->count <= (Py_ssize_t)ctx->pattern[2]) {
+ state->ptr = ctx->ptr;
+ DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one,
+ ctx->pattern+ctx->pattern[0]);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ state->ptr = ctx->ptr;
+ ret = SRE(count)(state, ctx->pattern+3, 1);
+ RETURN_ON_ERROR(ret);
+ DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
+ if (ret == 0)
+ break;
+ assert(ret == 1);
+ ctx->ptr++;
+ ctx->count++;
+ LASTMARK_RESTORE();
+ }
+ }
+ RETURN_FAILURE;
+
+ case SRE_OP_REPEAT:
+ /* create repeat context. all the hard work is done
+ by the UNTIL operator (MAX_UNTIL, MIN_UNTIL) */
+ /* <REPEAT> <skip> <1=min> <2=max> item <UNTIL> tail */
+ TRACE(("|%p|%p|REPEAT %d %d\n", ctx->pattern, ctx->ptr,
+ ctx->pattern[1], ctx->pattern[2]));
+
+ /* install new repeat context */
+ ctx->u.rep = (SRE_REPEAT*) PyObject_MALLOC(sizeof(*ctx->u.rep));
+ if (!ctx->u.rep) {
+ PyErr_NoMemory();
+ RETURN_FAILURE;
+ }
+ ctx->u.rep->count = -1;
+ ctx->u.rep->pattern = ctx->pattern;
+ ctx->u.rep->prev = state->repeat;
+ ctx->u.rep->last_ptr = NULL;
+ state->repeat = ctx->u.rep;
+
+ state->ptr = ctx->ptr;
+ DO_JUMP(JUMP_REPEAT, jump_repeat, ctx->pattern+ctx->pattern[0]);
+ state->repeat = ctx->u.rep->prev;
+ PyObject_FREE(ctx->u.rep);
+
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ RETURN_FAILURE;
+
+ case SRE_OP_MAX_UNTIL:
+ /* maximizing repeat */
+ /* <REPEAT> <skip> <1=min> <2=max> item <MAX_UNTIL> tail */
+
+ /* FIXME: we probably need to deal with zero-width
+ matches in here... */
+
+ ctx->u.rep = state->repeat;
+ if (!ctx->u.rep)
+ RETURN_ERROR(SRE_ERROR_STATE);
+
+ state->ptr = ctx->ptr;
+
+ ctx->count = ctx->u.rep->count+1;
+
+ TRACE(("|%p|%p|MAX_UNTIL %" PY_FORMAT_SIZE_T "d\n", ctx->pattern,
+ ctx->ptr, ctx->count));
+
+ if (ctx->count < (Py_ssize_t) ctx->u.rep->pattern[1]) {
+ /* not enough matches */
+ ctx->u.rep->count = ctx->count;
+ DO_JUMP(JUMP_MAX_UNTIL_1, jump_max_until_1,
+ ctx->u.rep->pattern+3);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ ctx->u.rep->count = ctx->count-1;
+ state->ptr = ctx->ptr;
+ RETURN_FAILURE;
+ }
+
+ if ((ctx->count < (Py_ssize_t) ctx->u.rep->pattern[2] ||
+ ctx->u.rep->pattern[2] == SRE_MAXREPEAT) &&
+ state->ptr != ctx->u.rep->last_ptr) {
+ /* we may have enough matches, but if we can
+ match another item, do so */
+ ctx->u.rep->count = ctx->count;
+ LASTMARK_SAVE();
+ MARK_PUSH(ctx->lastmark);
+ /* zero-width match protection */
+ DATA_PUSH(&ctx->u.rep->last_ptr);
+ ctx->u.rep->last_ptr = state->ptr;
+ DO_JUMP(JUMP_MAX_UNTIL_2, jump_max_until_2,
+ ctx->u.rep->pattern+3);
+ DATA_POP(&ctx->u.rep->last_ptr);
+ if (ret) {
+ MARK_POP_DISCARD(ctx->lastmark);
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ MARK_POP(ctx->lastmark);
+ LASTMARK_RESTORE();
+ ctx->u.rep->count = ctx->count-1;
+ state->ptr = ctx->ptr;
+ }
+
+ /* cannot match more repeated items here. make sure the
+ tail matches */
+ state->repeat = ctx->u.rep->prev;
+ DO_JUMP(JUMP_MAX_UNTIL_3, jump_max_until_3, ctx->pattern);
+ RETURN_ON_SUCCESS(ret);
+ state->repeat = ctx->u.rep;
+ state->ptr = ctx->ptr;
+ RETURN_FAILURE;
+
+ case SRE_OP_MIN_UNTIL:
+ /* minimizing repeat */
+ /* <REPEAT> <skip> <1=min> <2=max> item <MIN_UNTIL> tail */
+
+ ctx->u.rep = state->repeat;
+ if (!ctx->u.rep)
+ RETURN_ERROR(SRE_ERROR_STATE);
+
+ state->ptr = ctx->ptr;
+
+ ctx->count = ctx->u.rep->count+1;
+
+ TRACE(("|%p|%p|MIN_UNTIL %" PY_FORMAT_SIZE_T "d %p\n", ctx->pattern,
+ ctx->ptr, ctx->count, ctx->u.rep->pattern));
+
+ if (ctx->count < (Py_ssize_t) ctx->u.rep->pattern[1]) {
+ /* not enough matches */
+ ctx->u.rep->count = ctx->count;
+ DO_JUMP(JUMP_MIN_UNTIL_1, jump_min_until_1,
+ ctx->u.rep->pattern+3);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ ctx->u.rep->count = ctx->count-1;
+ state->ptr = ctx->ptr;
+ RETURN_FAILURE;
+ }
+
+ LASTMARK_SAVE();
+
+ /* see if the tail matches */
+ state->repeat = ctx->u.rep->prev;
+ DO_JUMP(JUMP_MIN_UNTIL_2, jump_min_until_2, ctx->pattern);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+
+ state->repeat = ctx->u.rep;
+ state->ptr = ctx->ptr;
+
+ LASTMARK_RESTORE();
+
+ if ((ctx->count >= (Py_ssize_t) ctx->u.rep->pattern[2]
+ && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) ||
+ state->ptr == ctx->u.rep->last_ptr)
+ RETURN_FAILURE;
+
+ ctx->u.rep->count = ctx->count;
+ /* zero-width match protection */
+ DATA_PUSH(&ctx->u.rep->last_ptr);
+ ctx->u.rep->last_ptr = state->ptr;
+ DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3,
+ ctx->u.rep->pattern+3);
+ DATA_POP(&ctx->u.rep->last_ptr);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_SUCCESS;
+ }
+ ctx->u.rep->count = ctx->count-1;
+ state->ptr = ctx->ptr;
+ RETURN_FAILURE;
+
+ case SRE_OP_GROUPREF:
+ /* match backreference */
+ TRACE(("|%p|%p|GROUPREF %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[0]));
+ i = ctx->pattern[0];
+ {
+ Py_ssize_t groupref = i+i;
+ if (groupref >= state->lastmark) {
+ RETURN_FAILURE;
+ } else {
+ SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
+ SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
+ if (!p || !e || e < p)
+ RETURN_FAILURE;
+ while (p < e) {
+ if (ctx->ptr >= end || *ctx->ptr != *p)
+ RETURN_FAILURE;
+ p++;
+ ctx->ptr++;
+ }
+ }
+ }
+ ctx->pattern++;
+ break;
+
+ case SRE_OP_GROUPREF_IGNORE:
+ /* match backreference */
+ TRACE(("|%p|%p|GROUPREF_IGNORE %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[0]));
+ i = ctx->pattern[0];
+ {
+ Py_ssize_t groupref = i+i;
+ if (groupref >= state->lastmark) {
+ RETURN_FAILURE;
+ } else {
+ SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
+ SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
+ if (!p || !e || e < p)
+ RETURN_FAILURE;
+ while (p < e) {
+ if (ctx->ptr >= end ||
+ state->lower(*ctx->ptr) != state->lower(*p))
+ RETURN_FAILURE;
+ p++;
+ ctx->ptr++;
+ }
+ }
+ }
+ ctx->pattern++;
+ break;
+
+ case SRE_OP_GROUPREF_EXISTS:
+ TRACE(("|%p|%p|GROUPREF_EXISTS %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[0]));
+ /* <GROUPREF_EXISTS> <group> <skip> codeyes <JUMP> codeno ... */
+ i = ctx->pattern[0];
+ {
+ Py_ssize_t groupref = i+i;
+ if (groupref >= state->lastmark) {
+ ctx->pattern += ctx->pattern[1];
+ break;
+ } else {
+ SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
+ SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
+ if (!p || !e || e < p) {
+ ctx->pattern += ctx->pattern[1];
+ break;
+ }
+ }
+ }
+ ctx->pattern += 2;
+ break;
+
+ case SRE_OP_ASSERT:
+ /* assert subpattern */
+ /* <ASSERT> <skip> <back> <pattern> */
+ TRACE(("|%p|%p|ASSERT %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[1]));
+ if (ctx->ptr - (SRE_CHAR *)state->beginning < (Py_ssize_t)ctx->pattern[1])
+ RETURN_FAILURE;
+ state->ptr = ctx->ptr - ctx->pattern[1];
+ DO_JUMP0(JUMP_ASSERT, jump_assert, ctx->pattern+2);
+ RETURN_ON_FAILURE(ret);
+ ctx->pattern += ctx->pattern[0];
+ break;
+
+ case SRE_OP_ASSERT_NOT:
+ /* assert not subpattern */
+ /* <ASSERT_NOT> <skip> <back> <pattern> */
+ TRACE(("|%p|%p|ASSERT_NOT %d\n", ctx->pattern,
+ ctx->ptr, ctx->pattern[1]));
+ if (ctx->ptr - (SRE_CHAR *)state->beginning >= (Py_ssize_t)ctx->pattern[1]) {
+ state->ptr = ctx->ptr - ctx->pattern[1];
+ DO_JUMP0(JUMP_ASSERT_NOT, jump_assert_not, ctx->pattern+2);
+ if (ret) {
+ RETURN_ON_ERROR(ret);
+ RETURN_FAILURE;
+ }
+ }
+ ctx->pattern += ctx->pattern[0];
+ break;
+
+ case SRE_OP_FAILURE:
+ /* immediate failure */
+ TRACE(("|%p|%p|FAILURE\n", ctx->pattern, ctx->ptr));
+ RETURN_FAILURE;
+
+ default:
+ TRACE(("|%p|%p|UNKNOWN %d\n", ctx->pattern, ctx->ptr,
+ ctx->pattern[-1]));
+ RETURN_ERROR(SRE_ERROR_ILLEGAL);
+ }
+ }
+
+exit:
+ ctx_pos = ctx->last_ctx_pos;
+ jump = ctx->jump;
+ DATA_POP_DISCARD(ctx);
+ if (ctx_pos == -1)
+ return ret;
+ DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
+
+ switch (jump) {
+ case JUMP_MAX_UNTIL_2:
+ TRACE(("|%p|%p|JUMP_MAX_UNTIL_2\n", ctx->pattern, ctx->ptr));
+ goto jump_max_until_2;
+ case JUMP_MAX_UNTIL_3:
+ TRACE(("|%p|%p|JUMP_MAX_UNTIL_3\n", ctx->pattern, ctx->ptr));
+ goto jump_max_until_3;
+ case JUMP_MIN_UNTIL_2:
+ TRACE(("|%p|%p|JUMP_MIN_UNTIL_2\n", ctx->pattern, ctx->ptr));
+ goto jump_min_until_2;
+ case JUMP_MIN_UNTIL_3:
+ TRACE(("|%p|%p|JUMP_MIN_UNTIL_3\n", ctx->pattern, ctx->ptr));
+ goto jump_min_until_3;
+ case JUMP_BRANCH:
+ TRACE(("|%p|%p|JUMP_BRANCH\n", ctx->pattern, ctx->ptr));
+ goto jump_branch;
+ case JUMP_MAX_UNTIL_1:
+ TRACE(("|%p|%p|JUMP_MAX_UNTIL_1\n", ctx->pattern, ctx->ptr));
+ goto jump_max_until_1;
+ case JUMP_MIN_UNTIL_1:
+ TRACE(("|%p|%p|JUMP_MIN_UNTIL_1\n", ctx->pattern, ctx->ptr));
+ goto jump_min_until_1;
+ case JUMP_REPEAT:
+ TRACE(("|%p|%p|JUMP_REPEAT\n", ctx->pattern, ctx->ptr));
+ goto jump_repeat;
+ case JUMP_REPEAT_ONE_1:
+ TRACE(("|%p|%p|JUMP_REPEAT_ONE_1\n", ctx->pattern, ctx->ptr));
+ goto jump_repeat_one_1;
+ case JUMP_REPEAT_ONE_2:
+ TRACE(("|%p|%p|JUMP_REPEAT_ONE_2\n", ctx->pattern, ctx->ptr));
+ goto jump_repeat_one_2;
+ case JUMP_MIN_REPEAT_ONE:
+ TRACE(("|%p|%p|JUMP_MIN_REPEAT_ONE\n", ctx->pattern, ctx->ptr));
+ goto jump_min_repeat_one;
+ case JUMP_ASSERT:
+ TRACE(("|%p|%p|JUMP_ASSERT\n", ctx->pattern, ctx->ptr));
+ goto jump_assert;
+ case JUMP_ASSERT_NOT:
+ TRACE(("|%p|%p|JUMP_ASSERT_NOT\n", ctx->pattern, ctx->ptr));
+ goto jump_assert_not;
+ case JUMP_NONE:
+ TRACE(("|%p|%p|RETURN %" PY_FORMAT_SIZE_T "d\n", ctx->pattern,
+ ctx->ptr, ret));
+ break;
+ }
+
+ return ret; /* should never get here */
+}
+
+LOCAL(Py_ssize_t)
+SRE(search)(SRE_STATE* state, SRE_CODE* pattern)
+{
+ SRE_CHAR* ptr = (SRE_CHAR *)state->start;
+ SRE_CHAR* end = (SRE_CHAR *)state->end;
+ Py_ssize_t status = 0;
+ Py_ssize_t prefix_len = 0;
+ Py_ssize_t prefix_skip = 0;
+ SRE_CODE* prefix = NULL;
+ SRE_CODE* charset = NULL;
+ SRE_CODE* overlap = NULL;
+ int flags = 0;
+
+ if (ptr > end)
+ return 0;
+
+ if (pattern[0] == SRE_OP_INFO) {
+ /* optimization info block */
+ /* <INFO> <1=skip> <2=flags> <3=min> <4=max> <5=prefix info> */
+
+ flags = pattern[2];
+
+ if (pattern[3] && end - ptr < (Py_ssize_t)pattern[3]) {
+ TRACE(("reject (got %u chars, need %u)\n",
+ (unsigned int)(end - ptr), pattern[3]));
+ return 0;
+ }
+ if (pattern[3] > 1) {
+ /* adjust end point (but make sure we leave at least one
+ character in there, so literal search will work) */
+ end -= pattern[3] - 1;
+ if (end <= ptr)
+ end = ptr;
+ }
+
+ if (flags & SRE_INFO_PREFIX) {
+ /* pattern starts with a known prefix */
+ /* <length> <skip> <prefix data> <overlap data> */
+ prefix_len = pattern[5];
+ prefix_skip = pattern[6];
+ prefix = pattern + 7;
+ overlap = prefix + prefix_len - 1;
+ } else if (flags & SRE_INFO_CHARSET)
+ /* pattern starts with a character from a known set */
+ /* <charset> */
+ charset = pattern + 5;
+
+ pattern += 1 + pattern[1];
+ }
+
+ TRACE(("prefix = %p %" PY_FORMAT_SIZE_T "d %" PY_FORMAT_SIZE_T "d\n",
+ prefix, prefix_len, prefix_skip));
+ TRACE(("charset = %p\n", charset));
+
+ if (prefix_len == 1) {
+ /* pattern starts with a literal character */
+ SRE_CHAR c = (SRE_CHAR) prefix[0];
+#if SIZEOF_SRE_CHAR < 4
+ if ((SRE_CODE) c != prefix[0])
+ return 0; /* literal can't match: doesn't fit in char width */
+#endif
+ end = (SRE_CHAR *)state->end;
+ while (ptr < end) {
+ while (*ptr != c) {
+ if (++ptr >= end)
+ return 0;
+ }
+ TRACE(("|%p|%p|SEARCH LITERAL\n", pattern, ptr));
+ state->start = ptr;
+ state->ptr = ptr + prefix_skip;
+ if (flags & SRE_INFO_LITERAL)
+ return 1; /* we got all of it */
+ status = SRE(match)(state, pattern + 2*prefix_skip, 0);
+ if (status != 0)
+ return status;
+ ++ptr;
+ }
+ return 0;
+ }
+
+ if (prefix_len > 1) {
+ /* pattern starts with a known prefix. use the overlap
+ table to skip forward as fast as we possibly can */
+ Py_ssize_t i = 0;
+
+ end = (SRE_CHAR *)state->end;
+ if (prefix_len > end - ptr)
+ return 0;
+#if SIZEOF_SRE_CHAR < 4
+ for (i = 0; i < prefix_len; i++)
+ if ((SRE_CODE)(SRE_CHAR) prefix[i] != prefix[i])
+ return 0; /* literal can't match: doesn't fit in char width */
+#endif
+ while (ptr < end) {
+ SRE_CHAR c = (SRE_CHAR) prefix[0];
+ while (*ptr++ != c) {
+ if (ptr >= end)
+ return 0;
+ }
+ if (ptr >= end)
+ return 0;
+
+ i = 1;
+ do {
+ if (*ptr == (SRE_CHAR) prefix[i]) {
+ if (++i != prefix_len) {
+ if (++ptr >= end)
+ return 0;
+ continue;
+ }
+ /* found a potential match */
+ TRACE(("|%p|%p|SEARCH SCAN\n", pattern, ptr));
+ state->start = ptr - (prefix_len - 1);
+ state->ptr = ptr - (prefix_len - prefix_skip - 1);
+ if (flags & SRE_INFO_LITERAL)
+ return 1; /* we got all of it */
+ status = SRE(match)(state, pattern + 2*prefix_skip, 0);
+ if (status != 0)
+ return status;
+ /* close but no cigar -- try again */
+ if (++ptr >= end)
+ return 0;
+ }
+ i = overlap[i];
+ } while (i != 0);
+ }
+ return 0;
+ }
+
+ if (charset) {
+ /* pattern starts with a character from a known set */
+ end = (SRE_CHAR *)state->end;
+ for (;;) {
+ while (ptr < end && !SRE(charset)(state, charset, *ptr))
+ ptr++;
+ if (ptr >= end)
+ return 0;
+ TRACE(("|%p|%p|SEARCH CHARSET\n", pattern, ptr));
+ state->start = ptr;
+ state->ptr = ptr;
+ status = SRE(match)(state, pattern, 0);
+ if (status != 0)
+ break;
+ ptr++;
+ }
+ } else {
+ /* general case */
+ assert(ptr <= end);
+ while (1) {
+ TRACE(("|%p|%p|SEARCH\n", pattern, ptr));
+ state->start = state->ptr = ptr;
+ status = SRE(match)(state, pattern, 0);
+ if (status != 0 || ptr >= end)
+ break;
+ ptr++;
+ }
+ }
+
+ return status;
+}
+
+#undef SRE_CHAR
+#undef SIZEOF_SRE_CHAR
+#undef SRE
+
+/* vim:ts=4:sw=4:et
+*/
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
new file mode 100644
index 00000000..85b22142
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
@@ -0,0 +1,1526 @@
+/* Time module */
+
+#include "Python.h"
+
+#include <ctype.h>
+
+#ifdef HAVE_SYS_TIMES_H
+#include <sys/times.h>
+#endif
+
+#ifdef HAVE_SYS_TYPES_H
+#include <sys/types.h>
+#endif
+
+#if defined(HAVE_SYS_RESOURCE_H)
+#include <sys/resource.h>
+#endif
+
+#ifdef QUICKWIN
+#include <io.h>
+#endif
+
+#if defined(__WATCOMC__) && !defined(__QNX__)
+#include <i86.h>
+#else
+#ifdef MS_WINDOWS
+#define WIN32_LEAN_AND_MEAN
+#include <windows.h>
+#include "pythread.h"
+#endif /* MS_WINDOWS */
+#endif /* !__WATCOMC__ || __QNX__ */
+
+/* Forward declarations */
+static int pysleep(_PyTime_t);
+static PyObject* floattime(_Py_clock_info_t *info);
+
+static PyObject *
+time_time(PyObject *self, PyObject *unused)
+{
+ return floattime(NULL);
+}
+
+PyDoc_STRVAR(time_doc,
+"time() -> floating point number\n\
+\n\
+Return the current time in seconds since the Epoch.\n\
+Fractions of a second may be present if the system clock provides them.");
+
+#if defined(HAVE_CLOCK)
+
+#ifndef CLOCKS_PER_SEC
+#ifdef CLK_TCK
+#define CLOCKS_PER_SEC CLK_TCK
+#else
+#define CLOCKS_PER_SEC 1000000
+#endif
+#endif
+
+static PyObject *
+floatclock(_Py_clock_info_t *info)
+{
+ clock_t value;
+ value = clock();
+ if (value == (clock_t)-1) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "the processor time used is not available "
+ "or its value cannot be represented");
+ return NULL;
+ }
+ if (info) {
+ info->implementation = "clock()";
+ info->resolution = 1.0 / (double)CLOCKS_PER_SEC;
+ info->monotonic = 1;
+ info->adjustable = 0;
+ }
+ return PyFloat_FromDouble((double)value / CLOCKS_PER_SEC);
+}
+#endif /* HAVE_CLOCK */
+
+#ifdef MS_WINDOWS
+#define WIN32_PERF_COUNTER
+/* Win32 has better clock replacement; we have our own version, due to Mark
+ Hammond and Tim Peters */
+static PyObject*
+win_perf_counter(_Py_clock_info_t *info)
+{
+ static LONGLONG cpu_frequency = 0;
+ static LONGLONG ctrStart;
+ LARGE_INTEGER now;
+ double diff;
+
+ if (cpu_frequency == 0) {
+ LARGE_INTEGER freq;
+ QueryPerformanceCounter(&now);
+ ctrStart = now.QuadPart;
+ if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
+ PyErr_SetFromWindowsErr(0);
+ return NULL;
+ }
+ cpu_frequency = freq.QuadPart;
+ }
+ QueryPerformanceCounter(&now);
+ diff = (double)(now.QuadPart - ctrStart);
+ if (info) {
+ info->implementation = "QueryPerformanceCounter()";
+ info->resolution = 1.0 / (double)cpu_frequency;
+ info->monotonic = 1;
+ info->adjustable = 0;
+ }
+ return PyFloat_FromDouble(diff / (double)cpu_frequency);
+}
+#endif /* MS_WINDOWS */
+
+#if defined(WIN32_PERF_COUNTER) || defined(HAVE_CLOCK)
+#define PYCLOCK
+static PyObject*
+pyclock(_Py_clock_info_t *info)
+{
+#ifdef WIN32_PERF_COUNTER
+ return win_perf_counter(info);
+#else
+ return floatclock(info);
+#endif
+}
+
+static PyObject *
+time_clock(PyObject *self, PyObject *unused)
+{
+ return pyclock(NULL);
+}
+
+PyDoc_STRVAR(clock_doc,
+"clock() -> floating point number\n\
+\n\
+Return the CPU time or real time since the start of the process or since\n\
+the first call to clock(). This has as much precision as the system\n\
+records.");
+#endif
+
+#ifdef HAVE_CLOCK_GETTIME
+static PyObject *
+time_clock_gettime(PyObject *self, PyObject *args)
+{
+ int ret;
+ int clk_id;
+ struct timespec tp;
+
+ if (!PyArg_ParseTuple(args, "i:clock_gettime", &clk_id))
+ return NULL;
+
+ ret = clock_gettime((clockid_t)clk_id, &tp);
+ if (ret != 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
+}
+
+PyDoc_STRVAR(clock_gettime_doc,
+"clock_gettime(clk_id) -> floating point number\n\
+\n\
+Return the time of the specified clock clk_id.");
+#endif /* HAVE_CLOCK_GETTIME */
+
+#ifdef HAVE_CLOCK_SETTIME
+static PyObject *
+time_clock_settime(PyObject *self, PyObject *args)
+{
+ int clk_id;
+ PyObject *obj;
+ _PyTime_t t;
+ struct timespec tp;
+ int ret;
+
+ if (!PyArg_ParseTuple(args, "iO:clock_settime", &clk_id, &obj))
+ return NULL;
+
+ if (_PyTime_FromSecondsObject(&t, obj, _PyTime_ROUND_FLOOR) < 0)
+ return NULL;
+
+ if (_PyTime_AsTimespec(t, &tp) == -1)
+ return NULL;
+
+ ret = clock_settime((clockid_t)clk_id, &tp);
+ if (ret != 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(clock_settime_doc,
+"clock_settime(clk_id, time)\n\
+\n\
+Set the time of the specified clock clk_id.");
+#endif /* HAVE_CLOCK_SETTIME */
+
+#ifdef HAVE_CLOCK_GETRES
+static PyObject *
+time_clock_getres(PyObject *self, PyObject *args)
+{
+ int ret;
+ int clk_id;
+ struct timespec tp;
+
+ if (!PyArg_ParseTuple(args, "i:clock_getres", &clk_id))
+ return NULL;
+
+ ret = clock_getres((clockid_t)clk_id, &tp);
+ if (ret != 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
+}
+
+PyDoc_STRVAR(clock_getres_doc,
+"clock_getres(clk_id) -> floating point number\n\
+\n\
+Return the resolution (precision) of the specified clock clk_id.");
+#endif /* HAVE_CLOCK_GETRES */
+
+static PyObject *
+time_sleep(PyObject *self, PyObject *obj)
+{
+ _PyTime_t secs;
+ if (_PyTime_FromSecondsObject(&secs, obj, _PyTime_ROUND_TIMEOUT))
+ return NULL;
+ if (secs < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "sleep length must be non-negative");
+ return NULL;
+ }
+ if (pysleep(secs) != 0)
+ return NULL;
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(sleep_doc,
+"sleep(seconds)\n\
+\n\
+Delay execution for a given number of seconds. The argument may be\n\
+a floating point number for subsecond precision.");
+
+static PyStructSequence_Field struct_time_type_fields[] = {
+ {"tm_year", "year, for example, 1993"},
+ {"tm_mon", "month of year, range [1, 12]"},
+ {"tm_mday", "day of month, range [1, 31]"},
+ {"tm_hour", "hours, range [0, 23]"},
+ {"tm_min", "minutes, range [0, 59]"},
+ {"tm_sec", "seconds, range [0, 61])"},
+ {"tm_wday", "day of week, range [0, 6], Monday is 0"},
+ {"tm_yday", "day of year, range [1, 366]"},
+ {"tm_isdst", "1 if summer time is in effect, 0 if not, and -1 if unknown"},
+ {"tm_zone", "abbreviation of timezone name"},
+ {"tm_gmtoff", "offset from UTC in seconds"},
+ {0}
+};
+
+static PyStructSequence_Desc struct_time_type_desc = {
+ "time.struct_time",
+ "The time value as returned by gmtime(), localtime(), and strptime(), and\n"
+ " accepted by asctime(), mktime() and strftime(). May be considered as a\n"
+ " sequence of 9 integers.\n\n"
+ " Note that several fields' values are not the same as those defined by\n"
+ " the C language standard for struct tm. For example, the value of the\n"
+ " field tm_year is the actual year, not year - 1900. See individual\n"
+ " fields' descriptions for details.",
+ struct_time_type_fields,
+ 9,
+};
+
+static int initialized;
+static PyTypeObject StructTimeType;
+
+
+static PyObject *
+tmtotuple(struct tm *p
+#ifndef HAVE_STRUCT_TM_TM_ZONE
+ , const char *zone, time_t gmtoff
+#endif
+)
+{
+ PyObject *v = PyStructSequence_New(&StructTimeType);
+ if (v == NULL)
+ return NULL;
+
+#define SET(i,val) PyStructSequence_SET_ITEM(v, i, PyLong_FromLong((long) val))
+
+ SET(0, p->tm_year + 1900);
+ SET(1, p->tm_mon + 1); /* Want January == 1 */
+ SET(2, p->tm_mday);
+ SET(3, p->tm_hour);
+ SET(4, p->tm_min);
+ SET(5, p->tm_sec);
+ SET(6, (p->tm_wday + 6) % 7); /* Want Monday == 0 */
+ SET(7, p->tm_yday + 1); /* Want January, 1 == 1 */
+ SET(8, p->tm_isdst);
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ PyStructSequence_SET_ITEM(v, 9,
+ PyUnicode_DecodeLocale(p->tm_zone, "surrogateescape"));
+ SET(10, p->tm_gmtoff);
+#else
+ PyStructSequence_SET_ITEM(v, 9,
+ PyUnicode_DecodeLocale(zone, "surrogateescape"));
+ PyStructSequence_SET_ITEM(v, 10, _PyLong_FromTime_t(gmtoff));
+#endif /* HAVE_STRUCT_TM_TM_ZONE */
+#undef SET
+ if (PyErr_Occurred()) {
+ Py_XDECREF(v);
+ return NULL;
+ }
+
+ return v;
+}
+
+/* Parse arg tuple that can contain an optional float-or-None value;
+ format needs to be "|O:name".
+ Returns non-zero on success (parallels PyArg_ParseTuple).
+*/
+static int
+parse_time_t_args(PyObject *args, const char *format, time_t *pwhen)
+{
+ PyObject *ot = NULL;
+ time_t whent;
+
+ if (!PyArg_ParseTuple(args, format, &ot))
+ return 0;
+ if (ot == NULL || ot == Py_None) {
+ whent = time(NULL);
+ }
+ else {
+ if (_PyTime_ObjectToTime_t(ot, &whent, _PyTime_ROUND_FLOOR) == -1)
+ return 0;
+ }
+ *pwhen = whent;
+ return 1;
+}
+
+static PyObject *
+time_gmtime(PyObject *self, PyObject *args)
+{
+ time_t when;
+ struct tm buf;
+
+ if (!parse_time_t_args(args, "|O:gmtime", &when))
+ return NULL;
+
+ errno = 0;
+ if (_PyTime_gmtime(when, &buf) != 0)
+ return NULL;
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ return tmtotuple(&buf);
+#else
+ return tmtotuple(&buf, "UTC", 0);
+#endif
+}
+
+#ifndef HAVE_TIMEGM
+static time_t
+timegm(struct tm *p)
+{
+    /* XXX: the following implementation will not work for dates before 1970
+       (tm_year < 70), but it is likely that platforms that don't have timegm
+       do not support negative timestamps anyway. */
+ return p->tm_sec + p->tm_min*60 + p->tm_hour*3600 + p->tm_yday*86400 +
+ (p->tm_year-70)*31536000 + ((p->tm_year-69)/4)*86400 -
+ ((p->tm_year-1)/100)*86400 + ((p->tm_year+299)/400)*86400;
+}
+#endif
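
As a quick sanity check of the fallback formula above: 2000-03-01 00:00:00 UTC
(tm_year = 100, tm_yday = 60 in a leap year) should map to 951868800 seconds
since the Epoch, which the day-count arithmetic reproduces. A small standalone
program exercising the same expression (illustrative only, not part of the
patch):

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* same arithmetic as the #ifndef HAVE_TIMEGM fallback above */
    static time_t fallback_timegm(struct tm *p)
    {
        return p->tm_sec + p->tm_min*60 + p->tm_hour*3600 + p->tm_yday*86400 +
            (p->tm_year-70)*31536000 + ((p->tm_year-69)/4)*86400 -
            ((p->tm_year-1)/100)*86400 + ((p->tm_year+299)/400)*86400;
    }

    int main(void)
    {
        struct tm t;
        memset(&t, 0, sizeof(t));
        t.tm_year = 100;                 /* 2000, i.e. years since 1900    */
        t.tm_yday = 60;                  /* March 1st: 31 + 29 days before */
        printf("%ld\n", (long)fallback_timegm(&t));   /* prints 951868800  */
        return 0;
    }
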
+
+PyDoc_STRVAR(gmtime_doc,
+"gmtime([seconds]) -> (tm_year, tm_mon, tm_mday, tm_hour, tm_min,\n\
+ tm_sec, tm_wday, tm_yday, tm_isdst)\n\
+\n\
+Convert seconds since the Epoch to a time tuple expressing UTC (a.k.a.\n\
+GMT). When 'seconds' is not passed in, convert the current time instead.\n\
+\n\
+If the platform supports tm_gmtoff and tm_zone, they are available as\n\
+attributes only.");
+
+static PyObject *
+time_localtime(PyObject *self, PyObject *args)
+{
+ time_t when;
+ struct tm buf;
+
+ if (!parse_time_t_args(args, "|O:localtime", &when))
+ return NULL;
+ if (_PyTime_localtime(when, &buf) != 0)
+ return NULL;
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ return tmtotuple(&buf);
+#else
+ {
+ struct tm local = buf;
+ char zone[100];
+ time_t gmtoff;
+ strftime(zone, sizeof(zone), "%Z", &buf);
+ gmtoff = timegm(&buf) - when;
+ return tmtotuple(&local, zone, gmtoff);
+ }
+#endif
+}
+
+PyDoc_STRVAR(localtime_doc,
+"localtime([seconds]) -> (tm_year,tm_mon,tm_mday,tm_hour,tm_min,\n\
+ tm_sec,tm_wday,tm_yday,tm_isdst)\n\
+\n\
+Convert seconds since the Epoch to a time tuple expressing local time.\n\
+When 'seconds' is not passed in, convert the current time instead.");
+
+/* Convert 9-item tuple to tm structure. Return 1 on success, set
+ * an exception and return 0 on error.
+ */
+static int
+gettmarg(PyObject *args, struct tm *p)
+{
+ int y;
+
+ memset((void *) p, '\0', sizeof(struct tm));
+
+ if (!PyTuple_Check(args)) {
+ PyErr_SetString(PyExc_TypeError,
+ "Tuple or struct_time argument required");
+ return 0;
+ }
+
+ if (!PyArg_ParseTuple(args, "iiiiiiiii",
+ &y, &p->tm_mon, &p->tm_mday,
+ &p->tm_hour, &p->tm_min, &p->tm_sec,
+ &p->tm_wday, &p->tm_yday, &p->tm_isdst))
+ return 0;
+
+ if (y < INT_MIN + 1900) {
+ PyErr_SetString(PyExc_OverflowError, "year out of range");
+ return 0;
+ }
+
+ p->tm_year = y - 1900;
+ p->tm_mon--;
+ p->tm_wday = (p->tm_wday + 1) % 7;
+ p->tm_yday--;
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ if (Py_TYPE(args) == &StructTimeType) {
+ PyObject *item;
+ item = PyTuple_GET_ITEM(args, 9);
+ p->tm_zone = item == Py_None ? NULL : PyUnicode_AsUTF8(item);
+ item = PyTuple_GET_ITEM(args, 10);
+ p->tm_gmtoff = item == Py_None ? 0 : PyLong_AsLong(item);
+ if (PyErr_Occurred())
+ return 0;
+ }
+#endif /* HAVE_STRUCT_TM_TM_ZONE */
+ return 1;
+}
+
+/* Check values of the struct tm fields before it is passed to strftime() and
+ * asctime(). Return 1 if all values are valid, otherwise set an exception
+ * and return 0.
+ */
+static int
+checktm(struct tm* buf)
+{
+    /* Checks added to make sure strftime() and asctime() do not crash Python by
+ indexing blindly into some array for a textual representation
+ by some bad index (fixes bug #897625 and #6608).
+
+ Also support values of zero from Python code for arguments in which
+ that is out of range by forcing that value to the lowest value that
+ is valid (fixed bug #1520914).
+
+ Valid ranges based on what is allowed in struct tm:
+
+ - tm_year: [0, max(int)] (1)
+ - tm_mon: [0, 11] (2)
+ - tm_mday: [1, 31]
+ - tm_hour: [0, 23]
+ - tm_min: [0, 59]
+ - tm_sec: [0, 60]
+ - tm_wday: [0, 6] (1)
+ - tm_yday: [0, 365] (2)
+ - tm_isdst: [-max(int), max(int)]
+
+ (1) gettmarg() handles bounds-checking.
+ (2) Python's acceptable range is one greater than the range in C,
+ thus need to check against automatic decrement by gettmarg().
+ */
+ if (buf->tm_mon == -1)
+ buf->tm_mon = 0;
+ else if (buf->tm_mon < 0 || buf->tm_mon > 11) {
+ PyErr_SetString(PyExc_ValueError, "month out of range");
+ return 0;
+ }
+ if (buf->tm_mday == 0)
+ buf->tm_mday = 1;
+ else if (buf->tm_mday < 0 || buf->tm_mday > 31) {
+ PyErr_SetString(PyExc_ValueError, "day of month out of range");
+ return 0;
+ }
+ if (buf->tm_hour < 0 || buf->tm_hour > 23) {
+ PyErr_SetString(PyExc_ValueError, "hour out of range");
+ return 0;
+ }
+ if (buf->tm_min < 0 || buf->tm_min > 59) {
+ PyErr_SetString(PyExc_ValueError, "minute out of range");
+ return 0;
+ }
+ if (buf->tm_sec < 0 || buf->tm_sec > 61) {
+ PyErr_SetString(PyExc_ValueError, "seconds out of range");
+ return 0;
+ }
+ /* tm_wday does not need checking of its upper-bound since taking
+ ``% 7`` in gettmarg() automatically restricts the range. */
+ if (buf->tm_wday < 0) {
+ PyErr_SetString(PyExc_ValueError, "day of week out of range");
+ return 0;
+ }
+ if (buf->tm_yday == -1)
+ buf->tm_yday = 0;
+ else if (buf->tm_yday < 0 || buf->tm_yday > 365) {
+ PyErr_SetString(PyExc_ValueError, "day of year out of range");
+ return 0;
+ }
+ return 1;
+}
+
+#ifdef MS_WINDOWS
+    /* wcsftime() doesn't correctly format time zones; see issue #10653 */
+# undef HAVE_WCSFTIME
+#endif
+#define STRFTIME_FORMAT_CODES \
+"Commonly used format codes:\n\
+\n\
+%Y Year with century as a decimal number.\n\
+%m Month as a decimal number [01,12].\n\
+%d Day of the month as a decimal number [01,31].\n\
+%H Hour (24-hour clock) as a decimal number [00,23].\n\
+%M Minute as a decimal number [00,59].\n\
+%S Second as a decimal number [00,61].\n\
+%z Time zone offset from UTC.\n\
+%a Locale's abbreviated weekday name.\n\
+%A Locale's full weekday name.\n\
+%b Locale's abbreviated month name.\n\
+%B Locale's full month name.\n\
+%c Locale's appropriate date and time representation.\n\
+%I Hour (12-hour clock) as a decimal number [01,12].\n\
+%p Locale's equivalent of either AM or PM.\n\
+\n\
+Other codes may be available on your platform. See documentation for\n\
+the C library strftime function.\n"
+
+#ifdef HAVE_STRFTIME
+#ifdef HAVE_WCSFTIME
+#define time_char wchar_t
+#define format_time wcsftime
+#define time_strlen wcslen
+#else
+#define time_char char
+#define format_time strftime
+#define time_strlen strlen
+#endif
+
+static PyObject *
+time_strftime(PyObject *self, PyObject *args)
+{
+ PyObject *tup = NULL;
+ struct tm buf;
+ const time_char *fmt;
+#ifdef HAVE_WCSFTIME
+ wchar_t *format;
+#else
+ PyObject *format;
+#endif
+ PyObject *format_arg;
+ size_t fmtlen, buflen;
+ time_char *outbuf = NULL;
+ size_t i;
+ PyObject *ret = NULL;
+
+ memset((void *) &buf, '\0', sizeof(buf));
+
+ /* Will always expect a unicode string to be passed as format.
+ Given that there's no str type anymore in py3k this seems safe.
+ */
+ if (!PyArg_ParseTuple(args, "U|O:strftime", &format_arg, &tup))
+ return NULL;
+
+ if (tup == NULL) {
+ time_t tt = time(NULL);
+ if (_PyTime_localtime(tt, &buf) != 0)
+ return NULL;
+ }
+ else if (!gettmarg(tup, &buf) || !checktm(&buf))
+ return NULL;
+
+#if defined(_MSC_VER) || defined(sun) || defined(_AIX)
+ if (buf.tm_year + 1900 < 1 || 9999 < buf.tm_year + 1900) {
+ PyErr_SetString(PyExc_ValueError,
+ "strftime() requires year in [1; 9999]");
+ return NULL;
+ }
+#endif
+
+ /* Normalize tm_isdst just in case someone foolishly implements %Z
+ based on the assumption that tm_isdst falls within the range of
+ [-1, 1] */
+ if (buf.tm_isdst < -1)
+ buf.tm_isdst = -1;
+ else if (buf.tm_isdst > 1)
+ buf.tm_isdst = 1;
+
+#ifdef HAVE_WCSFTIME
+ format = _PyUnicode_AsWideCharString(format_arg);
+ if (format == NULL)
+ return NULL;
+ fmt = format;
+#else
+ /* Convert the unicode string to an ascii one */
+ format = PyUnicode_EncodeLocale(format_arg, "surrogateescape");
+ if (format == NULL)
+ return NULL;
+ fmt = PyBytes_AS_STRING(format);
+#endif
+
+#if defined(MS_WINDOWS) && !defined(HAVE_WCSFTIME)
+ /* check that the format string contains only valid directives */
+ for (outbuf = strchr(fmt, '%');
+ outbuf != NULL;
+ outbuf = strchr(outbuf+2, '%'))
+ {
+ if (outbuf[1] == '#')
+ ++outbuf; /* not documented by python, */
+ if (outbuf[1] == '\0')
+ break;
+ if ((outbuf[1] == 'y') && buf.tm_year < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "format %y requires year >= 1900 on Windows");
+ Py_DECREF(format);
+ return NULL;
+ }
+ }
+#elif (defined(_AIX) || defined(sun)) && defined(HAVE_WCSFTIME)
+ for (outbuf = wcschr(fmt, '%');
+ outbuf != NULL;
+ outbuf = wcschr(outbuf+2, '%'))
+ {
+ if (outbuf[1] == L'\0')
+ break;
+ /* Issue #19634: On AIX, wcsftime("y", (1899, 1, 1, 0, 0, 0, 0, 0, 0))
+ returns "0/" instead of "99" */
+ if (outbuf[1] == L'y' && buf.tm_year < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "format %y requires year >= 1900 on AIX");
+ PyMem_Free(format);
+ return NULL;
+ }
+ }
+#endif
+
+ fmtlen = time_strlen(fmt);
+
+ /* I hate these functions that presume you know how big the output
+ * will be ahead of time...
+ */
+ for (i = 1024; ; i += i) {
+ outbuf = (time_char *)PyMem_Malloc(i*sizeof(time_char));
+ if (outbuf == NULL) {
+ PyErr_NoMemory();
+ break;
+ }
+#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
+ errno = 0;
+#endif
+ _Py_BEGIN_SUPPRESS_IPH
+ buflen = format_time(outbuf, i, fmt, &buf);
+ _Py_END_SUPPRESS_IPH
+#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
+ /* VisualStudio .NET 2005 does this properly */
+ if (buflen == 0 && errno == EINVAL) {
+ PyErr_SetString(PyExc_ValueError, "Invalid format string");
+ PyMem_Free(outbuf);
+ break;
+ }
+#endif
+ if (buflen > 0 || i >= 256 * fmtlen) {
+ /* If the buffer is 256 times as long as the format,
+ it's probably not failing for lack of room!
+ More likely, the format yields an empty result,
+ e.g. an empty format, or %Z when the timezone
+ is unknown. */
+#ifdef HAVE_WCSFTIME
+ ret = PyUnicode_FromWideChar(outbuf, buflen);
+#else
+ ret = PyUnicode_DecodeLocaleAndSize(outbuf, buflen,
+ "surrogateescape");
+#endif
+ PyMem_Free(outbuf);
+ break;
+ }
+ PyMem_Free(outbuf);
+ }
+#ifdef HAVE_WCSFTIME
+ PyMem_Free(format);
+#else
+ Py_DECREF(format);
+#endif
+ return ret;
+}
+
+#undef time_char
+#undef format_time
+PyDoc_STRVAR(strftime_doc,
+"strftime(format[, tuple]) -> string\n\
+\n\
+Convert a time tuple to a string according to a format specification.\n\
+See the library reference manual for formatting codes. When the time tuple\n\
+is not present, current time as returned by localtime() is used.\n\
+\n" STRFTIME_FORMAT_CODES);
+#endif /* HAVE_STRFTIME */
+
+static PyObject *
+time_strptime(PyObject *self, PyObject *args)
+{
+ PyObject *strptime_module = PyImport_ImportModuleNoBlock("_strptime");
+ PyObject *strptime_result;
+ _Py_IDENTIFIER(_strptime_time);
+
+ if (!strptime_module)
+ return NULL;
+ strptime_result = _PyObject_CallMethodId(strptime_module,
+ &PyId__strptime_time, "O", args);
+ Py_DECREF(strptime_module);
+ return strptime_result;
+}
+
+
+PyDoc_STRVAR(strptime_doc,
+"strptime(string, format) -> struct_time\n\
+\n\
+Parse a string to a time tuple according to a format specification.\n\
+See the library reference manual for formatting codes (same as\n\
+strftime()).\n\
+\n" STRFTIME_FORMAT_CODES);
+
+static PyObject *
+_asctime(struct tm *timeptr)
+{
+ /* Inspired by Open Group reference implementation available at
+ * http://pubs.opengroup.org/onlinepubs/009695399/functions/asctime.html */
+ static const char wday_name[7][4] = {
+ "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
+ };
+ static const char mon_name[12][4] = {
+ "Jan", "Feb", "Mar", "Apr", "May", "Jun",
+ "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
+ };
+ return PyUnicode_FromFormat(
+ "%s %s%3d %.2d:%.2d:%.2d %d",
+ wday_name[timeptr->tm_wday],
+ mon_name[timeptr->tm_mon],
+ timeptr->tm_mday, timeptr->tm_hour,
+ timeptr->tm_min, timeptr->tm_sec,
+ 1900 + timeptr->tm_year);
+}
+
+static PyObject *
+time_asctime(PyObject *self, PyObject *args)
+{
+ PyObject *tup = NULL;
+ struct tm buf;
+
+ if (!PyArg_UnpackTuple(args, "asctime", 0, 1, &tup))
+ return NULL;
+ if (tup == NULL) {
+ time_t tt = time(NULL);
+ if (_PyTime_localtime(tt, &buf) != 0)
+ return NULL;
+
+ } else if (!gettmarg(tup, &buf) || !checktm(&buf))
+ return NULL;
+ return _asctime(&buf);
+}
+
+PyDoc_STRVAR(asctime_doc,
+"asctime([tuple]) -> string\n\
+\n\
+Convert a time tuple to a string, e.g. 'Sat Jun 06 16:26:11 1998'.\n\
+When the time tuple is not present, current time as returned by localtime()\n\
+is used.");
+
+static PyObject *
+time_ctime(PyObject *self, PyObject *args)
+{
+ time_t tt;
+ struct tm buf;
+ if (!parse_time_t_args(args, "|O:ctime", &tt))
+ return NULL;
+ if (_PyTime_localtime(tt, &buf) != 0)
+ return NULL;
+ return _asctime(&buf);
+}
+
+PyDoc_STRVAR(ctime_doc,
+"ctime(seconds) -> string\n\
+\n\
+Convert a time in seconds since the Epoch to a string in local time.\n\
+This is equivalent to asctime(localtime(seconds)). When the time tuple is\n\
+not present, current time as returned by localtime() is used.");
+
+#ifdef HAVE_MKTIME
+static PyObject *
+time_mktime(PyObject *self, PyObject *tup)
+{
+ struct tm buf;
+ time_t tt;
+ if (!gettmarg(tup, &buf))
+ return NULL;
+#ifdef _AIX
+ /* year < 1902 or year > 2037 */
+ if (buf.tm_year < 2 || buf.tm_year > 137) {
+ /* Issue #19748: On AIX, mktime() doesn't report overflow error for
+ * timestamp < -2^31 or timestamp > 2**31-1. */
+ PyErr_SetString(PyExc_OverflowError,
+ "mktime argument out of range");
+ return NULL;
+ }
+#else
+ buf.tm_wday = -1; /* sentinel; original value ignored */
+#endif
+ tt = mktime(&buf);
+ /* Return value of -1 does not necessarily mean an error, but tm_wday
+ * cannot remain set to -1 if mktime succeeded. */
+ if (tt == (time_t)(-1)
+#ifndef _AIX
+ /* Return value of -1 does not necessarily mean an error, but
+ * tm_wday cannot remain set to -1 if mktime succeeded. */
+ && buf.tm_wday == -1
+#else
+ /* on AIX, tm_wday is always set, even on error */
+#endif
+ )
+ {
+ PyErr_SetString(PyExc_OverflowError,
+ "mktime argument out of range");
+ return NULL;
+ }
+ return PyFloat_FromDouble((double)tt);
+}
+
+PyDoc_STRVAR(mktime_doc,
+"mktime(tuple) -> floating point number\n\
+\n\
+Convert a time tuple in local time to seconds since the Epoch.\n\
+Note that mktime(gmtime(0)) will not generally return zero for most\n\
+time zones; instead the returned value will either be equal to that\n\
+of the timezone or altzone attributes on the time module.");
+#endif /* HAVE_MKTIME */
+
+#ifdef HAVE_WORKING_TZSET
+static int init_timezone(PyObject *module);
+
+static PyObject *
+time_tzset(PyObject *self, PyObject *unused)
+{
+ PyObject* m;
+
+ m = PyImport_ImportModuleNoBlock("time");
+ if (m == NULL) {
+ return NULL;
+ }
+
+ tzset();
+
+ /* Reset timezone, altzone, daylight and tzname */
+ if (init_timezone(m) < 0) {
+ return NULL;
+ }
+ Py_DECREF(m);
+ if (PyErr_Occurred())
+ return NULL;
+
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+PyDoc_STRVAR(tzset_doc,
+"tzset()\n\
+\n\
+Initialize, or reinitialize, the local timezone to the value stored in\n\
+os.environ['TZ']. The TZ environment variable should be specified in\n\
+standard Unix timezone format as documented in the tzset man page\n\
+(e.g. 'US/Eastern', 'Europe/Amsterdam'). Unknown timezones will silently\n\
+fall back to UTC. If the TZ environment variable is not set, the local\n\
+timezone is set to the system's best guess of wallclock time.\n\
+Changing the TZ environment variable without calling tzset *may* change\n\
+the local timezone used by methods such as localtime, but this behaviour\n\
+should not be relied on.");
+#endif /* HAVE_WORKING_TZSET */
+
+static PyObject *
+pymonotonic(_Py_clock_info_t *info)
+{
+ _PyTime_t t;
+ double d;
+ if (_PyTime_GetMonotonicClockWithInfo(&t, info) < 0) {
+ assert(info != NULL);
+ return NULL;
+ }
+ d = _PyTime_AsSecondsDouble(t);
+ return PyFloat_FromDouble(d);
+}
+
+static PyObject *
+time_monotonic(PyObject *self, PyObject *unused)
+{
+ return pymonotonic(NULL);
+}
+
+PyDoc_STRVAR(monotonic_doc,
+"monotonic() -> float\n\
+\n\
+Monotonic clock, cannot go backward.");
+
+static PyObject*
+perf_counter(_Py_clock_info_t *info)
+{
+#ifdef WIN32_PERF_COUNTER
+ return win_perf_counter(info);
+#else
+ return pymonotonic(info);
+#endif
+}
+
+static PyObject *
+time_perf_counter(PyObject *self, PyObject *unused)
+{
+ return perf_counter(NULL);
+}
+
+PyDoc_STRVAR(perf_counter_doc,
+"perf_counter() -> float\n\
+\n\
+Performance counter for benchmarking.");
+
+static PyObject*
+py_process_time(_Py_clock_info_t *info)
+{
+#if defined(MS_WINDOWS)
+ HANDLE process;
+ FILETIME creation_time, exit_time, kernel_time, user_time;
+ ULARGE_INTEGER large;
+ double total;
+ BOOL ok;
+
+ process = GetCurrentProcess();
+ ok = GetProcessTimes(process, &creation_time, &exit_time, &kernel_time, &user_time);
+ if (!ok)
+ return PyErr_SetFromWindowsErr(0);
+
+ large.u.LowPart = kernel_time.dwLowDateTime;
+ large.u.HighPart = kernel_time.dwHighDateTime;
+ total = (double)large.QuadPart;
+ large.u.LowPart = user_time.dwLowDateTime;
+ large.u.HighPart = user_time.dwHighDateTime;
+ total += (double)large.QuadPart;
+ if (info) {
+ info->implementation = "GetProcessTimes()";
+ info->resolution = 1e-7;
+ info->monotonic = 1;
+ info->adjustable = 0;
+ }
+ return PyFloat_FromDouble(total * 1e-7);
+#else
+
+#if defined(HAVE_SYS_RESOURCE_H)
+ struct rusage ru;
+#endif
+#ifdef HAVE_TIMES
+ struct tms t;
+ static long ticks_per_second = -1;
+#endif
+
+#if defined(HAVE_CLOCK_GETTIME) \
+ && (defined(CLOCK_PROCESS_CPUTIME_ID) || defined(CLOCK_PROF))
+ struct timespec tp;
+#ifdef CLOCK_PROF
+ const clockid_t clk_id = CLOCK_PROF;
+ const char *function = "clock_gettime(CLOCK_PROF)";
+#else
+ const clockid_t clk_id = CLOCK_PROCESS_CPUTIME_ID;
+ const char *function = "clock_gettime(CLOCK_PROCESS_CPUTIME_ID)";
+#endif
+
+ if (clock_gettime(clk_id, &tp) == 0) {
+ if (info) {
+ struct timespec res;
+ info->implementation = function;
+ info->monotonic = 1;
+ info->adjustable = 0;
+ if (clock_getres(clk_id, &res) == 0)
+ info->resolution = res.tv_sec + res.tv_nsec * 1e-9;
+ else
+ info->resolution = 1e-9;
+ }
+ return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
+ }
+#endif
+
+#ifndef UEFI_C_SOURCE
+#if defined(HAVE_SYS_RESOURCE_H)
+ if (getrusage(RUSAGE_SELF, &ru) == 0) {
+ double total;
+ total = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec * 1e-6;
+ total += ru.ru_stime.tv_sec + ru.ru_stime.tv_usec * 1e-6;
+ if (info) {
+ info->implementation = "getrusage(RUSAGE_SELF)";
+ info->monotonic = 1;
+ info->adjustable = 0;
+ info->resolution = 1e-6;
+ }
+ return PyFloat_FromDouble(total);
+ }
+#endif
+#endif
+
+#ifdef HAVE_TIMES
+ if (times(&t) != (clock_t)-1) {
+ double total;
+
+ if (ticks_per_second == -1) {
+#if defined(HAVE_SYSCONF) && defined(_SC_CLK_TCK)
+ ticks_per_second = sysconf(_SC_CLK_TCK);
+ if (ticks_per_second < 1)
+ ticks_per_second = -1;
+#elif defined(HZ)
+ ticks_per_second = HZ;
+#else
+ ticks_per_second = 60; /* magic fallback value; may be bogus */
+#endif
+ }
+
+ if (ticks_per_second != -1) {
+ total = (double)t.tms_utime / ticks_per_second;
+ total += (double)t.tms_stime / ticks_per_second;
+ if (info) {
+ info->implementation = "times()";
+ info->monotonic = 1;
+ info->adjustable = 0;
+ info->resolution = 1.0 / ticks_per_second;
+ }
+ return PyFloat_FromDouble(total);
+ }
+ }
+#endif
+
+ /* Currently, Python 3 requires clock() to build: see issue #22624 */
+ return floatclock(info);
+#endif
+}
+
+static PyObject *
+time_process_time(PyObject *self, PyObject *unused)
+{
+ return py_process_time(NULL);
+}
+
+PyDoc_STRVAR(process_time_doc,
+"process_time() -> float\n\
+\n\
+Process time for profiling: sum of the kernel and user-space CPU time.");
+
+
+static PyObject *
+time_get_clock_info(PyObject *self, PyObject *args)
+{
+ char *name;
+ _Py_clock_info_t info;
+ PyObject *obj = NULL, *dict, *ns;
+
+ if (!PyArg_ParseTuple(args, "s:get_clock_info", &name))
+ return NULL;
+
+#ifdef Py_DEBUG
+ info.implementation = NULL;
+ info.monotonic = -1;
+ info.adjustable = -1;
+ info.resolution = -1.0;
+#else
+ info.implementation = "";
+ info.monotonic = 0;
+ info.adjustable = 0;
+ info.resolution = 1.0;
+#endif
+
+ if (strcmp(name, "time") == 0)
+ obj = floattime(&info);
+#ifdef PYCLOCK
+ else if (strcmp(name, "clock") == 0)
+ obj = pyclock(&info);
+#endif
+ else if (strcmp(name, "monotonic") == 0)
+ obj = pymonotonic(&info);
+ else if (strcmp(name, "perf_counter") == 0)
+ obj = perf_counter(&info);
+ else if (strcmp(name, "process_time") == 0)
+ obj = py_process_time(&info);
+ else {
+ PyErr_SetString(PyExc_ValueError, "unknown clock");
+ return NULL;
+ }
+ if (obj == NULL)
+ return NULL;
+ Py_DECREF(obj);
+
+ dict = PyDict_New();
+ if (dict == NULL)
+ return NULL;
+
+ assert(info.implementation != NULL);
+ obj = PyUnicode_FromString(info.implementation);
+ if (obj == NULL)
+ goto error;
+ if (PyDict_SetItemString(dict, "implementation", obj) == -1)
+ goto error;
+ Py_CLEAR(obj);
+
+ assert(info.monotonic != -1);
+ obj = PyBool_FromLong(info.monotonic);
+ if (obj == NULL)
+ goto error;
+ if (PyDict_SetItemString(dict, "monotonic", obj) == -1)
+ goto error;
+ Py_CLEAR(obj);
+
+ assert(info.adjustable != -1);
+ obj = PyBool_FromLong(info.adjustable);
+ if (obj == NULL)
+ goto error;
+ if (PyDict_SetItemString(dict, "adjustable", obj) == -1)
+ goto error;
+ Py_CLEAR(obj);
+
+ assert(info.resolution > 0.0);
+ assert(info.resolution <= 1.0);
+ obj = PyFloat_FromDouble(info.resolution);
+ if (obj == NULL)
+ goto error;
+ if (PyDict_SetItemString(dict, "resolution", obj) == -1)
+ goto error;
+ Py_CLEAR(obj);
+
+ ns = _PyNamespace_New(dict);
+ Py_DECREF(dict);
+ return ns;
+
+error:
+ Py_DECREF(dict);
+ Py_XDECREF(obj);
+ return NULL;
+}
+
+PyDoc_STRVAR(get_clock_info_doc,
+"get_clock_info(name: str) -> dict\n\
+\n\
+Get information of the specified clock.");
+
+static void
+get_zone(char *zone, int n, struct tm *p)
+{
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ strncpy(zone, p->tm_zone ? p->tm_zone : " ", n);
+#else
+ tzset();
+ strftime(zone, n, "%Z", p);
+#endif
+}
+
+static time_t
+get_gmtoff(time_t t, struct tm *p)
+{
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ return p->tm_gmtoff;
+#else
+ return timegm(p) - t;
+#endif
+}
+
+static int
+init_timezone(PyObject *m)
+{
+ assert(!PyErr_Occurred());
+
+ /* This code moved from PyInit_time wholesale to allow calling it from
+ time_tzset. In the future, some parts of it can be moved back
+ (for platforms that don't HAVE_WORKING_TZSET, when we know what they
+ are), and the extraneous calls to tzset(3) should be removed.
+ I haven't done this yet, as I wanted to change this code as
+ little as possible when introducing the time.tzset and time.tzsetwall
+ methods. This should simply be a matter of doing the following once,
+ at the top of this function and removing the call to tzset() from
+ time_tzset():
+
+ #ifdef HAVE_TZSET
+ tzset()
+ #endif
+
+ And I'm lazy and hate C so nyer.
+ */
+#if defined(HAVE_TZNAME) && !defined(__GLIBC__) && !defined(__CYGWIN__)
+ PyObject *otz0, *otz1;
+ tzset();
+ PyModule_AddIntConstant(m, "timezone", timezone);
+#ifdef HAVE_ALTZONE
+ PyModule_AddIntConstant(m, "altzone", altzone);
+#else
+ PyModule_AddIntConstant(m, "altzone", timezone-3600);
+#endif
+ PyModule_AddIntConstant(m, "daylight", daylight);
+ otz0 = PyUnicode_DecodeLocale(tzname[0], "surrogateescape");
+ if (otz0 == NULL) {
+ return -1;
+ }
+ otz1 = PyUnicode_DecodeLocale(tzname[1], "surrogateescape");
+ if (otz1 == NULL) {
+ Py_DECREF(otz0);
+ return -1;
+ }
+ PyObject *tzname_obj = Py_BuildValue("(NN)", otz0, otz1);
+ if (tzname_obj == NULL) {
+ return -1;
+ }
+ PyModule_AddObject(m, "tzname", tzname_obj);
+#else /* !HAVE_TZNAME || __GLIBC__ || __CYGWIN__*/
+#ifdef HAVE_STRUCT_TM_TM_ZONE
+ {
+#define YEAR ((time_t)((365 * 24 + 6) * 3600))
+ time_t t;
+ struct tm p;
+ time_t janzone_t, julyzone_t;
+ char janname[10], julyname[10];
+ t = (time((time_t *)0) / YEAR) * YEAR;
+ _PyTime_localtime(t, &p);
+ get_zone(janname, 9, &p);
+ janzone_t = -get_gmtoff(t, &p);
+ janname[9] = '\0';
+ t += YEAR/2;
+ _PyTime_localtime(t, &p);
+ get_zone(julyname, 9, &p);
+ julyzone_t = -get_gmtoff(t, &p);
+ julyname[9] = '\0';
+
+#ifndef UEFI_C_SOURCE
+ /* Sanity check; we don't validate the timezones themselves.
+ In practice, the offset should fall roughly within -12 hours .. +14 hours. */
+#define MAX_TIMEZONE (48 * 3600)
+ if (janzone_t < -MAX_TIMEZONE || janzone_t > MAX_TIMEZONE
+ || julyzone_t < -MAX_TIMEZONE || julyzone_t > MAX_TIMEZONE)
+ {
+ PyErr_SetString(PyExc_RuntimeError, "invalid GMT offset");
+ return -1;
+ }
+#endif
+ int janzone = (int)janzone_t;
+ int julyzone = (int)julyzone_t;
+
+ if( janzone < julyzone ) {
+ /* DST is reversed in the southern hemisphere */
+ PyModule_AddIntConstant(m, "timezone", julyzone);
+ PyModule_AddIntConstant(m, "altzone", janzone);
+ PyModule_AddIntConstant(m, "daylight",
+ janzone != julyzone);
+ PyModule_AddObject(m, "tzname",
+ Py_BuildValue("(zz)",
+ julyname, janname));
+ } else {
+ PyModule_AddIntConstant(m, "timezone", janzone);
+ PyModule_AddIntConstant(m, "altzone", julyzone);
+ PyModule_AddIntConstant(m, "daylight",
+ janzone != julyzone);
+ PyModule_AddObject(m, "tzname",
+ Py_BuildValue("(zz)",
+ janname, julyname));
+ }
+ }
+#else /*HAVE_STRUCT_TM_TM_ZONE */
+#ifdef __CYGWIN__
+ tzset();
+ PyModule_AddIntConstant(m, "timezone", _timezone);
+ PyModule_AddIntConstant(m, "altzone", _timezone-3600);
+ PyModule_AddIntConstant(m, "daylight", _daylight);
+ PyModule_AddObject(m, "tzname",
+ Py_BuildValue("(zz)", _tzname[0], _tzname[1]));
+#endif /* __CYGWIN__ */
+#endif
+#endif /* !HAVE_TZNAME || __GLIBC__ || __CYGWIN__*/
+
+ if (PyErr_Occurred()) {
+ return -1;
+ }
+ return 0;
+}
+
+
+static PyMethodDef time_methods[] = {
+ {"time", time_time, METH_NOARGS, time_doc},
+#ifdef PYCLOCK
+ {"clock", time_clock, METH_NOARGS, clock_doc},
+#endif
+#ifdef HAVE_CLOCK_GETTIME
+ {"clock_gettime", time_clock_gettime, METH_VARARGS, clock_gettime_doc},
+#endif
+#ifdef HAVE_CLOCK_SETTIME
+ {"clock_settime", time_clock_settime, METH_VARARGS, clock_settime_doc},
+#endif
+#ifdef HAVE_CLOCK_GETRES
+ {"clock_getres", time_clock_getres, METH_VARARGS, clock_getres_doc},
+#endif
+ {"sleep", time_sleep, METH_O, sleep_doc},
+ {"gmtime", time_gmtime, METH_VARARGS, gmtime_doc},
+ {"localtime", time_localtime, METH_VARARGS, localtime_doc},
+ {"asctime", time_asctime, METH_VARARGS, asctime_doc},
+ {"ctime", time_ctime, METH_VARARGS, ctime_doc},
+#ifdef HAVE_MKTIME
+ {"mktime", time_mktime, METH_O, mktime_doc},
+#endif
+#ifdef HAVE_STRFTIME
+ {"strftime", time_strftime, METH_VARARGS, strftime_doc},
+#endif
+ {"strptime", time_strptime, METH_VARARGS, strptime_doc},
+#ifdef HAVE_WORKING_TZSET
+ {"tzset", time_tzset, METH_NOARGS, tzset_doc},
+#endif
+ {"monotonic", time_monotonic, METH_NOARGS, monotonic_doc},
+ {"process_time", time_process_time, METH_NOARGS, process_time_doc},
+ {"perf_counter", time_perf_counter, METH_NOARGS, perf_counter_doc},
+ {"get_clock_info", time_get_clock_info, METH_VARARGS, get_clock_info_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+
+PyDoc_STRVAR(module_doc,
+"This module provides various functions to manipulate time values.\n\
+\n\
+There are two standard representations of time. One is the number\n\
+of seconds since the Epoch, in UTC (a.k.a. GMT). It may be an integer\n\
+or a floating point number (to represent fractions of seconds).\n\
+The Epoch is system-defined; on Unix, it is generally January 1st, 1970.\n\
+The actual value can be retrieved by calling gmtime(0).\n\
+\n\
+The other representation is a tuple of 9 integers giving local time.\n\
+The tuple items are:\n\
+ year (including century, e.g. 1998)\n\
+ month (1-12)\n\
+ day (1-31)\n\
+ hours (0-23)\n\
+ minutes (0-59)\n\
+ seconds (0-59)\n\
+ weekday (0-6, Monday is 0)\n\
+ Julian day (day in the year, 1-366)\n\
+ DST (Daylight Savings Time) flag (-1, 0 or 1)\n\
+If the DST flag is 0, the time is given in the regular time zone;\n\
+if it is 1, the time is given in the DST time zone;\n\
+if it is -1, mktime() should guess based on the date and time.\n");
+
+
+
+static struct PyModuleDef timemodule = {
+ PyModuleDef_HEAD_INIT,
+ "time",
+ module_doc,
+ -1,
+ time_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit_time(void)
+{
+ PyObject *m;
+ m = PyModule_Create(&timemodule);
+ if (m == NULL)
+ return NULL;
+
+ /* Set, or reset, module variables like time.timezone */
+ if (init_timezone(m) < 0) {
+ return NULL;
+ }
+
+#if defined(HAVE_CLOCK_GETTIME) || defined(HAVE_CLOCK_SETTIME) || defined(HAVE_CLOCK_GETRES)
+
+#ifdef CLOCK_REALTIME
+ PyModule_AddIntMacro(m, CLOCK_REALTIME);
+#endif
+#ifdef CLOCK_MONOTONIC
+ PyModule_AddIntMacro(m, CLOCK_MONOTONIC);
+#endif
+#ifdef CLOCK_MONOTONIC_RAW
+ PyModule_AddIntMacro(m, CLOCK_MONOTONIC_RAW);
+#endif
+#ifdef CLOCK_HIGHRES
+ PyModule_AddIntMacro(m, CLOCK_HIGHRES);
+#endif
+#ifdef CLOCK_PROCESS_CPUTIME_ID
+ PyModule_AddIntMacro(m, CLOCK_PROCESS_CPUTIME_ID);
+#endif
+#ifdef CLOCK_THREAD_CPUTIME_ID
+ PyModule_AddIntMacro(m, CLOCK_THREAD_CPUTIME_ID);
+#endif
+
+#endif /* defined(HAVE_CLOCK_GETTIME) || defined(HAVE_CLOCK_SETTIME) || defined(HAVE_CLOCK_GETRES) */
+
+ if (!initialized) {
+ if (PyStructSequence_InitType2(&StructTimeType,
+ &struct_time_type_desc) < 0)
+ return NULL;
+ }
+ Py_INCREF(&StructTimeType);
+ PyModule_AddIntConstant(m, "_STRUCT_TM_ITEMS", 11);
+ PyModule_AddObject(m, "struct_time", (PyObject*) &StructTimeType);
+ initialized = 1;
+
+ if (PyErr_Occurred()) {
+ return NULL;
+ }
+ return m;
+}
+
+static PyObject*
+floattime(_Py_clock_info_t *info)
+{
+ _PyTime_t t;
+ double d;
+ if (_PyTime_GetSystemClockWithInfo(&t, info) < 0) {
+ assert(info != NULL);
+ return NULL;
+ }
+ d = _PyTime_AsSecondsDouble(t);
+ return PyFloat_FromDouble(d);
+}
+
+
+/* Implement pysleep() for various platforms.
+ When interrupted (or when another error occurs), return -1 and
+ set an exception; else return 0. */
+
+static int
+pysleep(_PyTime_t secs)
+{
+ _PyTime_t deadline, monotonic;
+#ifndef MS_WINDOWS
+ struct timeval timeout;
+ int err = 0;
+#else
+ _PyTime_t millisecs;
+ unsigned long ul_millis;
+ DWORD rc;
+ HANDLE hInterruptEvent;
+#endif
+
+ deadline = _PyTime_GetMonotonicClock() + secs;
+
+ do {
+#ifndef MS_WINDOWS
+ if (_PyTime_AsTimeval(secs, &timeout, _PyTime_ROUND_CEILING) < 0)
+ return -1;
+
+ Py_BEGIN_ALLOW_THREADS
+ err = select(0, (fd_set *)0, (fd_set *)0, (fd_set *)0, &timeout);
+ Py_END_ALLOW_THREADS
+
+ if (err == 0)
+ break;
+
+ if (errno != EINTR) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+#else
+ millisecs = _PyTime_AsMilliseconds(secs, _PyTime_ROUND_CEILING);
+ if (millisecs > (double)ULONG_MAX) {
+ PyErr_SetString(PyExc_OverflowError,
+ "sleep length is too large");
+ return -1;
+ }
+
+ /* Allow sleep(0) to maintain win32 semantics, and as decreed
+ * by Guido, only the main thread can be interrupted.
+ */
+ ul_millis = (unsigned long)millisecs;
+ if (ul_millis == 0 || !_PyOS_IsMainThread()) {
+ Py_BEGIN_ALLOW_THREADS
+ Sleep(ul_millis);
+ Py_END_ALLOW_THREADS
+ break;
+ }
+
+ hInterruptEvent = _PyOS_SigintEvent();
+ ResetEvent(hInterruptEvent);
+
+ Py_BEGIN_ALLOW_THREADS
+ rc = WaitForSingleObjectEx(hInterruptEvent, ul_millis, FALSE);
+ Py_END_ALLOW_THREADS
+
+ if (rc != WAIT_OBJECT_0)
+ break;
+#endif
+
+ /* sleep was interrupted by SIGINT */
+ if (PyErr_CheckSignals())
+ return -1;
+
+ monotonic = _PyTime_GetMonotonicClock();
+ secs = deadline - monotonic;
+ if (secs < 0)
+ break;
+ /* retry with the recomputed delay */
+ } while (1);
+
+ return 0;
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
new file mode 100644
index 00000000..23fe2617
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
@@ -0,0 +1,218 @@
+/* gzguts.h -- zlib internal header definitions for gz* operations
+ * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013, 2016 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+#ifdef _LARGEFILE64_SOURCE
+# ifndef _LARGEFILE_SOURCE
+# define _LARGEFILE_SOURCE 1
+# endif
+# ifdef _FILE_OFFSET_BITS
+# undef _FILE_OFFSET_BITS
+# endif
+#endif
+
+#ifdef HAVE_HIDDEN
+# define ZLIB_INTERNAL __attribute__((visibility ("hidden")))
+#else
+# define ZLIB_INTERNAL
+#endif
+
+#include <stdio.h>
+#include "zlib.h"
+#ifdef STDC
+# include <string.h>
+# include <stdlib.h>
+# include <limits.h>
+#endif
+
+#ifndef _POSIX_SOURCE
+# define _POSIX_SOURCE
+#endif
+#include <fcntl.h>
+
+#ifdef _WIN32
+# include <stddef.h>
+#endif
+
+#if (defined(__TURBOC__) || defined(_MSC_VER) || defined(_WIN32)) && !defined(UEFI_C_SOURCE)
+# include <io.h>
+#endif
+
+#if defined(_WIN32) || defined(__CYGWIN__)
+# define WIDECHAR
+#endif
+
+#ifdef WINAPI_FAMILY
+# define open _open
+# define read _read
+# define write _write
+# define close _close
+#endif
+
+#ifdef NO_DEFLATE /* for compatibility with old definition */
+# define NO_GZCOMPRESS
+#endif
+
+#if defined(STDC99) || (defined(__TURBOC__) && __TURBOC__ >= 0x550)
+# ifndef HAVE_VSNPRINTF
+# define HAVE_VSNPRINTF
+# endif
+#endif
+
+#if defined(__CYGWIN__)
+# ifndef HAVE_VSNPRINTF
+# define HAVE_VSNPRINTF
+# endif
+#endif
+
+#if defined(MSDOS) && defined(__BORLANDC__) && (BORLANDC > 0x410)
+# ifndef HAVE_VSNPRINTF
+# define HAVE_VSNPRINTF
+# endif
+#endif
+
+#ifndef HAVE_VSNPRINTF
+# ifdef MSDOS
+/* vsnprintf may exist on some MS-DOS compilers (DJGPP?),
+ but for now we just assume it doesn't. */
+# define NO_vsnprintf
+# endif
+# ifdef __TURBOC__
+# define NO_vsnprintf
+# endif
+# ifdef WIN32
+/* In Win32, vsnprintf is available as the "non-ANSI" _vsnprintf. */
+# if !defined(vsnprintf) && !defined(NO_vsnprintf)
+# if !defined(_MSC_VER) || ( defined(_MSC_VER) && _MSC_VER < 1500 )
+# define vsnprintf _vsnprintf
+# endif
+# endif
+# endif
+# ifdef __SASC
+# define NO_vsnprintf
+# endif
+# ifdef VMS
+# define NO_vsnprintf
+# endif
+# ifdef __OS400__
+# define NO_vsnprintf
+# endif
+# ifdef __MVS__
+# define NO_vsnprintf
+# endif
+#endif
+
+/* unlike snprintf (which is required in C99), _snprintf does not guarantee
+ null termination of the result -- however this is only used in gzlib.c where
+ the result is assured to fit in the space provided */
+#if defined(_MSC_VER) && _MSC_VER < 1900
+# define snprintf _snprintf
+#endif
+
+#ifndef local
+# define local static
+#endif
+/* since "static" is used to mean two completely different things in C, we
+ define "local" for the non-static meaning of "static", for readability
+ (compile with -Dlocal if your debugger can't find static symbols) */
+
+/* gz* functions always use library allocation functions */
+#ifndef STDC
+ extern voidp malloc OF((uInt size));
+ extern void free OF((voidpf ptr));
+#endif
+
+/* get errno and strerror definition */
+#if defined UNDER_CE
+# include <windows.h>
+# define zstrerror() gz_strwinerror((DWORD)GetLastError())
+#else
+# ifndef NO_STRERROR
+# include <errno.h>
+# define zstrerror() strerror(errno)
+# else
+# define zstrerror() "stdio error (consult errno)"
+# endif
+#endif
+
+/* provide prototypes for these when building zlib without LFS */
+#if !defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0
+ ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
+ ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int));
+ ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile));
+ ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile));
+#endif
+
+/* default memLevel */
+#if MAX_MEM_LEVEL >= 8
+# define DEF_MEM_LEVEL 8
+#else
+# define DEF_MEM_LEVEL MAX_MEM_LEVEL
+#endif
+
+/* default i/o buffer size -- double this for output when reading (this and
+ twice this must be able to fit in an unsigned type) */
+#define GZBUFSIZE 8192
+
+/* gzip modes, also provide a little integrity check on the passed structure */
+#define GZ_NONE 0
+#define GZ_READ 7247
+#define GZ_WRITE 31153
+#define GZ_APPEND 1 /* mode set to GZ_WRITE after the file is opened */
+
+/* values for gz_state how */
+#define LOOK 0 /* look for a gzip header */
+#define COPY 1 /* copy input directly */
+#define GZIP 2 /* decompress a gzip stream */
+
+/* internal gzip file state data structure */
+typedef struct {
+ /* exposed contents for gzgetc() macro */
+ struct gzFile_s x; /* "x" for exposed */
+ /* x.have: number of bytes available at x.next */
+ /* x.next: next output data to deliver or write */
+ /* x.pos: current position in uncompressed data */
+ /* used for both reading and writing */
+ int mode; /* see gzip modes above */
+ int fd; /* file descriptor */
+ char *path; /* path or fd for error messages */
+ unsigned size; /* buffer size, zero if not allocated yet */
+ unsigned want; /* requested buffer size, default is GZBUFSIZE */
+ unsigned char *in; /* input buffer (double-sized when writing) */
+ unsigned char *out; /* output buffer (double-sized when reading) */
+ int direct; /* 0 if processing gzip, 1 if transparent */
+ /* just for reading */
+ int how; /* 0: get header, 1: copy, 2: decompress */
+ z_off64_t start; /* where the gzip data started, for rewinding */
+ int eof; /* true if end of input file reached */
+ int past; /* true if read requested past end */
+ /* just for writing */
+ int level; /* compression level */
+ int strategy; /* compression strategy */
+ /* seek request */
+ z_off64_t skip; /* amount to skip (already rewound if backwards) */
+ int seek; /* true if seek request pending */
+ /* error information */
+ int err; /* error code */
+ char *msg; /* error message */
+ /* zlib inflate or deflate stream */
+ z_stream strm; /* stream structure in-place (not a pointer) */
+} gz_state;
+typedef gz_state FAR *gz_statep;
+
+/* shared functions */
+void ZLIB_INTERNAL gz_error OF((gz_statep, int, const char *));
+#if defined UNDER_CE
+char ZLIB_INTERNAL *gz_strwinerror OF((DWORD error));
+#endif
+
+/* GT_OFF(x), where x is an unsigned value, is true if x > maximum z_off64_t
+ value -- needed when comparing unsigned to z_off64_t, which is signed
+ (possible z_off64_t types off_t, off64_t, and long are all signed) */
+#ifdef INT_MAX
+# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > INT_MAX)
+#else
+unsigned ZLIB_INTERNAL gz_intmax OF((void));
+# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > gz_intmax())
+#endif
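+
+/*
+ * Illustrative sketch, not part of the upstream zlib sources: the kind of
+ * comparison GT_OFF() exists for. "len" is a hypothetical unsigned byte
+ * count; it must be range-checked before being converted to the signed
+ * z_off64_t used for file offsets.
+ *
+ *     unsigned len;                      // bytes just consumed or produced
+ *     ...
+ *     if (GT_OFF(len))                   // len alone would not fit in z_off64_t
+ *         return -1;
+ *     z_off64_t off = (z_off64_t)len;    // the cast is now known to be lossless
+ */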
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
new file mode 100644
index 00000000..9eabd28c
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
@@ -0,0 +1,4472 @@
+/* Dictionary object implementation using a hash table */
+
+/* The distribution includes a separate file, Objects/dictnotes.txt,
+ describing explorations into dictionary design and optimization.
+ It covers typical dictionary use patterns, the parameters for
+ tuning dictionaries, and several ideas for possible optimizations.
+*/
+
+/* PyDictKeysObject
+
+This implements the dictionary's hashtable.
+
+As of Python 3.6, this is compact and ordered. The basic idea is described here:
+https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html
+
+layout:
+
++---------------+
+| dk_refcnt |
+| dk_size |
+| dk_lookup |
+| dk_usable |
+| dk_nentries |
++---------------+
+| dk_indices |
+| |
++---------------+
+| dk_entries |
+| |
++---------------+
+
+dk_indices is the actual hash table. It holds an index into dk_entries, or
+DKIX_EMPTY (-1) or DKIX_DUMMY (-2).
+The size of dk_indices is dk_size. The type of each index varies with dk_size:
+
+* int8 for dk_size <= 128
+* int16 for 256 <= dk_size <= 2**15
+* int32 for 2**16 <= dk_size <= 2**31
+* int64 for 2**32 <= dk_size
+
+dk_entries is an array of PyDictKeyEntry. Its size is USABLE_FRACTION(dk_size).
+DK_ENTRIES(dk) can be used to get a pointer to the entries.
+
+NOTE: Since negative values are used for DKIX_EMPTY and DKIX_DUMMY, the type
+of each dk_indices entry is a signed integer, and int16 is used for tables
+whose dk_size == 256.
+*/
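+
+/*
+ * Illustrative sketch, not part of the upstream CPython sources: how a single
+ * probe resolves through the two arrays drawn above. dk_get_index() and
+ * DK_ENTRIES() are defined further down in this file; "keys", "hash" and
+ * "slot" are hypothetical locals.
+ *
+ *     size_t slot = (size_t)hash & DK_MASK(keys);   // initial probe slot
+ *     Py_ssize_t ix = dk_get_index(keys, slot);     // index into dk_entries
+ *     if (ix >= 0) {
+ *         PyDictKeyEntry *ep = &DK_ENTRIES(keys)[ix];
+ *         // compare ep->me_hash and ep->me_key against the lookup key ...
+ *     }
+ *     // otherwise ix is DKIX_EMPTY or DKIX_DUMMY and probing continues
+ */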
+
+
+/*
+The DictObject can be in one of two forms.
+
+Either:
+ A combined table:
+ ma_values == NULL, dk_refcnt == 1.
+ Values are stored in the me_value field of the PyDictKeysObject.
+Or:
+ A split table:
+ ma_values != NULL, dk_refcnt >= 1
+ Values are stored in the ma_values array.
+ Only string (unicode) keys are allowed.
+ All dicts sharing same key must have same insertion order.
+
+There are four kinds of slots in the table (slot is index, and
+DK_ENTRIES(keys)[index] if index >= 0):
+
+1. Unused. index == DKIX_EMPTY
+ Does not hold an active (key, value) pair now and never did. Unused can
+ transition to Active upon key insertion. This is each slot's initial state.
+
+2. Active. index >= 0, me_key != NULL and me_value != NULL
+ Holds an active (key, value) pair. Active can transition to Dummy or
+ Pending upon key deletion (for combined and split tables respectively).
+ This is the only case in which me_value != NULL.
+
+3. Dummy. index == DKIX_DUMMY (combined only)
+ Previously held an active (key, value) pair, but that was deleted and an
+ active pair has not yet overwritten the slot. Dummy can transition to
+ Active upon key insertion. Dummy slots cannot be made Unused again
+ else the probe sequence in case of collision would have no way to know
+ they were once active.
+
+4. Pending. index >= 0, key != NULL, and value == NULL (split only)
+ Not yet inserted in split-table.
+*/
+
+/*
+Preserving insertion order
+
+For a combined table this is simple: since dk_entries is mostly append-only,
+insertion order is recovered by just iterating dk_entries.
+
+One exception is .popitem(). It removes the last item in dk_entries and
+decrements dk_nentries to achieve amortized O(1). Because a DKIX_DUMMY entry
+remains in dk_indices, we can't increment dk_usable even though dk_nentries
+is decremented.
+
+In a split table, inserting into a pending entry is allowed only for
+dk_entries[ix] where ix == mp->ma_used. Inserting at any other index, or
+deleting an item, converts the dict to the combined form.
+*/
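+
+/*
+ * A minimal sketch, not part of the upstream sources, of the iteration the
+ * comment above describes for a combined table: walking dk_entries yields
+ * items in insertion order, skipping entries whose value was deleted.
+ * "keys" is a hypothetical PyDictKeysObject pointer.
+ *
+ *     PyDictKeyEntry *ep = DK_ENTRIES(keys);
+ *     for (Py_ssize_t n = 0; n < keys->dk_nentries; n++) {
+ *         if (ep[n].me_value == NULL)
+ *             continue;                  // deleted entry, skip it
+ *         // ep[n].me_key / ep[n].me_value is the next item in insertion order
+ *     }
+ */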
+
+/* PyDict_MINSIZE is the starting size for any new dict.
+ * 8 allows dicts with no more than 5 active entries; experiments suggested
+ * this suffices for the majority of dicts (consisting mostly of usually-small
+ * dicts created to pass keyword arguments).
+ * Making this 8, rather than 4 reduces the number of resizes for most
+ * dictionaries, without any significant extra memory use.
+ */
+#define PyDict_MINSIZE 8
+
+#include "Python.h"
+#include "dict-common.h"
+#include "stringlib/eq.h" /* to get unicode_eq() */
+
+/*[clinic input]
+class dict "PyDictObject *" "&PyDict_Type"
+[clinic start generated code]*/
+/*[clinic end generated code: output=da39a3ee5e6b4b0d input=f157a5a0ce9589d6]*/
+
+
+/*
+To ensure the lookup algorithm terminates, there must be at least one Unused
+slot (NULL key) in the table.
+To avoid slowing down lookups on a near-full table, we resize the table when
+it's USABLE_FRACTION (currently two-thirds) full.
+*/
+
+#define PERTURB_SHIFT 5
+
+/*
+Major subtleties ahead: Most hash schemes depend on having a "good" hash
+function, in the sense of simulating randomness. Python doesn't: its most
+important hash functions (for ints) are very regular in common
+cases:
+
+ >>> [hash(i) for i in range(4)]
+ [0, 1, 2, 3]
+
+This isn't necessarily bad! To the contrary, in a table of size 2**i, taking
+the low-order i bits as the initial table index is extremely fast, and there
+are no collisions at all for dicts indexed by a contiguous range of ints. So
+this gives better-than-random behavior in common cases, and that's very
+desirable.
+
+OTOH, when collisions occur, the tendency to fill contiguous slices of the
+hash table makes a good collision resolution strategy crucial. Taking only
+the last i bits of the hash code is also vulnerable: for example, consider
+the list [i << 16 for i in range(20000)] as a set of keys. Since ints are
+their own hash codes, and this fits in a dict of size 2**15, the last 15 bits
+of every hash code are all 0: they *all* map to the same table index.
+
+But catering to unusual cases should not slow the usual ones, so we just take
+the last i bits anyway. It's up to collision resolution to do the rest. If
+we *usually* find the key we're looking for on the first try (and, it turns
+out, we usually do -- the table load factor is kept under 2/3, so the odds
+are solidly in our favor), then it makes best sense to keep the initial index
+computation dirt cheap.
+
+The first half of collision resolution is to visit table indices via this
+recurrence:
+
+ j = ((5*j) + 1) mod 2**i
+
+For any initial j in range(2**i), repeating that 2**i times generates each
+int in range(2**i) exactly once (see any text on random-number generation for
+proof). By itself, this doesn't help much: like linear probing (setting
+j += 1, or j -= 1, on each loop trip), it scans the table entries in a fixed
+order. This would be bad, except that's not the only thing we do, and it's
+actually *good* in the common cases where hash keys are consecutive. In an
+example that's really too small to make this entirely clear, for a table of
+size 2**3 the order of indices is:
+
+ 0 -> 1 -> 6 -> 7 -> 4 -> 5 -> 2 -> 3 -> 0 [and here it's repeating]
+
+If two things come in at index 5, the first place we look after is index 2,
+not 6, so if another comes in at index 6 the collision at 5 didn't hurt it.
+Linear probing is deadly in this case because there the fixed probe order
+is the *same* as the order consecutive keys are likely to arrive. But it's
+extremely unlikely hash codes will follow a 5*j+1 recurrence by accident,
+and certain that consecutive hash codes do not.
+
+The other half of the strategy is to get the other bits of the hash code
+into play. This is done by initializing an (unsigned) variable "perturb" to the
+full hash code, and changing the recurrence to:
+
+ perturb >>= PERTURB_SHIFT;
+ j = (5*j) + 1 + perturb;
+ use j % 2**i as the next table index;
+
+Now the probe sequence depends (eventually) on every bit in the hash code,
+and the pseudo-scrambling property of recurring on 5*j+1 is more valuable,
+because it quickly magnifies small differences in the bits that didn't affect
+the initial index. Note that because perturb is unsigned, if the recurrence
+is executed often enough perturb eventually becomes and remains 0. At that
+point (very rarely reached) the recurrence is on (just) 5*j+1 again, and
+that's certain to find an empty slot eventually (since it generates every int
+in range(2**i), and we make sure there's always at least one empty slot).
+
+Selecting a good value for PERTURB_SHIFT is a balancing act. You want it
+small so that the high bits of the hash code continue to affect the probe
+sequence across iterations; but you want it large so that in really bad cases
+the high-order hash bits have an effect on early iterations. 5 was "the
+best" in minimizing total collisions across experiments Tim Peters ran (on
+both normal and pathological cases), but 4 and 6 weren't significantly worse.
+
+Historical: Reimer Behrends contributed the idea of using a polynomial-based
+approach, using repeated multiplication by x in GF(2**n) where an irreducible
+polynomial for each table size was chosen such that x was a primitive root.
+Christian Tismer later extended that to use division by x instead, as an
+efficient way to get the high bits of the hash code into play. This scheme
+also gave excellent collision statistics, but was more expensive: two
+if-tests were required inside the loop; computing "the next" index took about
+the same number of operations but without as much potential parallelism
+(e.g., computing 5*j can go on at the same time as computing 1+perturb in the
+above, and then shifting perturb can be done while the table index is being
+masked); and the PyDictObject struct required a member to hold the table's
+polynomial. In Tim's experiments the current scheme ran faster, produced
+equally good collision statistics, needed less code & used less memory.
+
+*/
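+
+/*
+ * A minimal sketch, not part of the upstream CPython sources, of the probe
+ * sequence described above, as the lookdict*() functions below implement it;
+ * "keys", "hash" and "i" are hypothetical locals.
+ *
+ *     size_t mask = DK_MASK(keys);
+ *     size_t i = (size_t)hash & mask;        // initial index from the low bits
+ *     for (size_t perturb = hash;;) {
+ *         // ... probe dk_indices[i]; stop on a hit or on DKIX_EMPTY ...
+ *         perturb >>= PERTURB_SHIFT;         // feed higher hash bits in
+ *         i = (i*5 + perturb + 1) & mask;    // the 5*j + 1 recurrence
+ *     }
+ */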
+
+/* forward declarations */
+static Py_ssize_t lookdict(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr,
+ Py_ssize_t *hashpos);
+static Py_ssize_t lookdict_unicode(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr,
+ Py_ssize_t *hashpos);
+static Py_ssize_t
+lookdict_unicode_nodummy(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr,
+ Py_ssize_t *hashpos);
+static Py_ssize_t lookdict_split(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr,
+ Py_ssize_t *hashpos);
+
+static int dictresize(PyDictObject *mp, Py_ssize_t minused);
+
+static PyObject* dict_iter(PyDictObject *dict);
+
+/* Global counter used to set ma_version_tag field of dictionary.
+ * It is incremented each time that a dictionary is created and each
+ * time that a dictionary is modified. */
+static uint64_t pydict_global_version = 0;
+
+#define DICT_NEXT_VERSION() (++pydict_global_version)
+
+/* Dictionary reuse scheme to save calls to malloc and free */
+#ifndef PyDict_MAXFREELIST
+#define PyDict_MAXFREELIST 80
+#endif
+static PyDictObject *free_list[PyDict_MAXFREELIST];
+static int numfree = 0;
+static PyDictKeysObject *keys_free_list[PyDict_MAXFREELIST];
+static int numfreekeys = 0;
+
+#include "clinic/dictobject.c.h"
+
+int
+PyDict_ClearFreeList(void)
+{
+ PyDictObject *op;
+ int ret = numfree + numfreekeys;
+ while (numfree) {
+ op = free_list[--numfree];
+ assert(PyDict_CheckExact(op));
+ PyObject_GC_Del(op);
+ }
+ while (numfreekeys) {
+ PyObject_FREE(keys_free_list[--numfreekeys]);
+ }
+ return ret;
+}
+
+/* Print summary info about the state of the optimized allocator */
+void
+_PyDict_DebugMallocStats(FILE *out)
+{
+ _PyDebugAllocatorStats(out,
+ "free PyDictObject", numfree, sizeof(PyDictObject));
+}
+
+
+void
+PyDict_Fini(void)
+{
+ PyDict_ClearFreeList();
+}
+
+#define DK_SIZE(dk) ((dk)->dk_size)
+#if SIZEOF_VOID_P > 4
+#define DK_IXSIZE(dk) \
+ (DK_SIZE(dk) <= 0xff ? \
+ 1 : DK_SIZE(dk) <= 0xffff ? \
+ 2 : DK_SIZE(dk) <= 0xffffffff ? \
+ 4 : sizeof(int64_t))
+#else
+#define DK_IXSIZE(dk) \
+ (DK_SIZE(dk) <= 0xff ? \
+ 1 : DK_SIZE(dk) <= 0xffff ? \
+ 2 : sizeof(int32_t))
+#endif
+#define DK_ENTRIES(dk) \
+ ((PyDictKeyEntry*)(&((int8_t*)((dk)->dk_indices))[DK_SIZE(dk) * DK_IXSIZE(dk)]))
+
+#define DK_DEBUG_INCREF _Py_INC_REFTOTAL _Py_REF_DEBUG_COMMA
+#define DK_DEBUG_DECREF _Py_DEC_REFTOTAL _Py_REF_DEBUG_COMMA
+
+#define DK_INCREF(dk) (DK_DEBUG_INCREF ++(dk)->dk_refcnt)
+#define DK_DECREF(dk) if (DK_DEBUG_DECREF (--(dk)->dk_refcnt) == 0) free_keys_object(dk)
+#define DK_MASK(dk) (((dk)->dk_size)-1)
+#define IS_POWER_OF_2(x) (((x) & (x-1)) == 0)
+
+/* lookup indices. returns DKIX_EMPTY, DKIX_DUMMY, or ix >=0 */
+static Py_ssize_t
+dk_get_index(PyDictKeysObject *keys, Py_ssize_t i)
+{
+ Py_ssize_t s = DK_SIZE(keys);
+ Py_ssize_t ix;
+
+ if (s <= 0xff) {
+ int8_t *indices = (int8_t*)(keys->dk_indices);
+ ix = indices[i];
+ }
+ else if (s <= 0xffff) {
+ int16_t *indices = (int16_t*)(keys->dk_indices);
+ ix = indices[i];
+ }
+#if SIZEOF_VOID_P > 4
+ else if (s > 0xffffffff) {
+ int64_t *indices = (int64_t*)(keys->dk_indices);
+ ix = indices[i];
+ }
+#endif
+ else {
+ int32_t *indices = (int32_t*)(keys->dk_indices);
+ ix = indices[i];
+ }
+ assert(ix >= DKIX_DUMMY);
+ return ix;
+}
+
+/* write to indices. */
+static void
+dk_set_index(PyDictKeysObject *keys, Py_ssize_t i, Py_ssize_t ix)
+{
+ Py_ssize_t s = DK_SIZE(keys);
+
+ assert(ix >= DKIX_DUMMY);
+
+ if (s <= 0xff) {
+ int8_t *indices = (int8_t*)(keys->dk_indices);
+ assert(ix <= 0x7f);
+ indices[i] = (char)ix;
+ }
+ else if (s <= 0xffff) {
+ int16_t *indices = (int16_t*)(keys->dk_indices);
+ assert(ix <= 0x7fff);
+ indices[i] = (int16_t)ix;
+ }
+#if SIZEOF_VOID_P > 4
+ else if (s > 0xffffffff) {
+ int64_t *indices = (int64_t*)(keys->dk_indices);
+ indices[i] = ix;
+ }
+#endif
+ else {
+ int32_t *indices = (int32_t*)(keys->dk_indices);
+ assert(ix <= 0x7fffffff);
+ indices[i] = (int32_t)ix;
+ }
+}
+
+
+/* USABLE_FRACTION is the maximum dictionary load.
+ * Increasing this ratio makes dictionaries more dense resulting in more
+ * collisions. Decreasing it improves sparseness at the expense of spreading
+ * indices over more cache lines and at the cost of total memory consumed.
+ *
+ * USABLE_FRACTION must obey the following:
+ * (0 < USABLE_FRACTION(n) < n) for all n >= 2
+ *
+ * USABLE_FRACTION should be quick to calculate.
+ * Fractions around 1/2 to 2/3 seem to work well in practice.
+ */
+#define USABLE_FRACTION(n) (((n) << 1)/3)
+
+/* ESTIMATE_SIZE is reverse function of USABLE_FRACTION.
+ * This can be used to reserve enough size to insert n entries without
+ * resizing.
+ */
+#define ESTIMATE_SIZE(n) (((n)*3+1) >> 1)
+
+/* Alternative fraction that is otherwise close enough to 2n/3 to make
+ * little difference. 8 * 2/3 == 8 * 5/8 == 5. 16 * 2/3 == 16 * 5/8 == 10.
+ * 32 * 2/3 = 21, 32 * 5/8 = 20.
+ * Its advantage is that it is faster to compute on machines with slow division.
+ * #define USABLE_FRACTION(n) (((n) >> 1) + ((n) >> 2) - ((n) >> 3))
+ */
+
+/* GROWTH_RATE. Growth rate upon hitting maximum load.
+ * Currently set to used*2 + capacity/2.
+ * This means that dicts double in size when growing without deletions,
+ * but have more head room when the number of deletions is on a par with the
+ * number of insertions.
+ * Raising this to used*4 doubles memory consumption depending on the size of
+ * the dictionary, but results in half the number of resizes, less effort to
+ * resize.
+ * GROWTH_RATE was set to used*4 up to version 3.2.
+ * GROWTH_RATE was set to used*2 in version 3.3.0
+ */
+#define GROWTH_RATE(d) (((d)->ma_used*2)+((d)->ma_keys->dk_size>>1))
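+
+/*
+ * Worked example, not part of the upstream sources: a fresh table has
+ * dk_size == 8 and fills up after 5 insertions (USABLE_FRACTION(8) == 5), so
+ * GROWTH_RATE gives 5*2 + 8/2 == 14, which dictresize() then rounds up to the
+ * next power of two, 16.
+ */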
+
+#define ENSURE_ALLOWS_DELETIONS(d) \
+ if ((d)->ma_keys->dk_lookup == lookdict_unicode_nodummy) { \
+ (d)->ma_keys->dk_lookup = lookdict_unicode; \
+ }
+
+/* This immutable, empty PyDictKeysObject is used for PyDict_Clear()
+ * (which cannot fail and thus can do no allocation).
+ */
+static PyDictKeysObject empty_keys_struct = {
+ 1, /* dk_refcnt */
+ 1, /* dk_size */
+ lookdict_split, /* dk_lookup */
+ 0, /* dk_usable (immutable) */
+ 0, /* dk_nentries */
+ {DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY,
+ DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY}, /* dk_indices */
+};
+
+static PyObject *empty_values[1] = { NULL };
+
+#define Py_EMPTY_KEYS &empty_keys_struct
+
+/* Uncomment to check the dict content in _PyDict_CheckConsistency() */
+/* #define DEBUG_PYDICT */
+
+
+#ifndef NDEBUG
+static int
+_PyDict_CheckConsistency(PyDictObject *mp)
+{
+ PyDictKeysObject *keys = mp->ma_keys;
+ int splitted = _PyDict_HasSplitTable(mp);
+ Py_ssize_t usable = USABLE_FRACTION(keys->dk_size);
+#ifdef DEBUG_PYDICT
+ PyDictKeyEntry *entries = DK_ENTRIES(keys);
+ Py_ssize_t i;
+#endif
+
+ assert(0 <= mp->ma_used && mp->ma_used <= usable);
+ assert(IS_POWER_OF_2(keys->dk_size));
+ assert(0 <= keys->dk_usable
+ && keys->dk_usable <= usable);
+ assert(0 <= keys->dk_nentries
+ && keys->dk_nentries <= usable);
+ assert(keys->dk_usable + keys->dk_nentries <= usable);
+
+ if (!splitted) {
+ /* combined table */
+ assert(keys->dk_refcnt == 1);
+ }
+
+#ifdef DEBUG_PYDICT
+ for (i=0; i < keys->dk_size; i++) {
+ Py_ssize_t ix = dk_get_index(keys, i);
+ assert(DKIX_DUMMY <= ix && ix <= usable);
+ }
+
+ for (i=0; i < usable; i++) {
+ PyDictKeyEntry *entry = &entries[i];
+ PyObject *key = entry->me_key;
+
+ if (key != NULL) {
+ if (PyUnicode_CheckExact(key)) {
+ Py_hash_t hash = ((PyASCIIObject *)key)->hash;
+ assert(hash != -1);
+ assert(entry->me_hash == hash);
+ }
+ else {
+ /* test_dict fails if PyObject_Hash() is called again */
+ assert(entry->me_hash != -1);
+ }
+ if (!splitted) {
+ assert(entry->me_value != NULL);
+ }
+ }
+
+ if (splitted) {
+ assert(entry->me_value == NULL);
+ }
+ }
+
+ if (splitted) {
+ /* splitted table */
+ for (i=0; i < mp->ma_used; i++) {
+ assert(mp->ma_values[i] != NULL);
+ }
+ }
+#endif
+
+ return 1;
+}
+#endif
+
+
+static PyDictKeysObject *new_keys_object(Py_ssize_t size)
+{
+ PyDictKeysObject *dk;
+ Py_ssize_t es, usable;
+
+ assert(size >= PyDict_MINSIZE);
+ assert(IS_POWER_OF_2(size));
+
+ usable = USABLE_FRACTION(size);
+ if (size <= 0xff) {
+ es = 1;
+ }
+ else if (size <= 0xffff) {
+ es = 2;
+ }
+#if SIZEOF_VOID_P > 4
+ else if (size <= 0xffffffff) {
+ es = 4;
+ }
+#endif
+ else {
+ es = sizeof(Py_ssize_t);
+ }
+
+ if (size == PyDict_MINSIZE && numfreekeys > 0) {
+ dk = keys_free_list[--numfreekeys];
+ }
+ else {
+ dk = PyObject_MALLOC(sizeof(PyDictKeysObject)
+ + es * size
+ + sizeof(PyDictKeyEntry) * usable);
+ if (dk == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ }
+ DK_DEBUG_INCREF dk->dk_refcnt = 1;
+ dk->dk_size = size;
+ dk->dk_usable = usable;
+ dk->dk_lookup = lookdict_unicode_nodummy;
+ dk->dk_nentries = 0;
+ memset(&dk->dk_indices[0], 0xff, es * size);
+ memset(DK_ENTRIES(dk), 0, sizeof(PyDictKeyEntry) * usable);
+ return dk;
+}
+
+static void
+free_keys_object(PyDictKeysObject *keys)
+{
+ PyDictKeyEntry *entries = DK_ENTRIES(keys);
+ Py_ssize_t i, n;
+ for (i = 0, n = keys->dk_nentries; i < n; i++) {
+ Py_XDECREF(entries[i].me_key);
+ Py_XDECREF(entries[i].me_value);
+ }
+ if (keys->dk_size == PyDict_MINSIZE && numfreekeys < PyDict_MAXFREELIST) {
+ keys_free_list[numfreekeys++] = keys;
+ return;
+ }
+ PyObject_FREE(keys);
+}
+
+#define new_values(size) PyMem_NEW(PyObject *, size)
+#define free_values(values) PyMem_FREE(values)
+
+/* Consumes a reference to the keys object */
+static PyObject *
+new_dict(PyDictKeysObject *keys, PyObject **values)
+{
+ PyDictObject *mp;
+ assert(keys != NULL);
+ if (numfree) {
+ mp = free_list[--numfree];
+ assert (mp != NULL);
+ assert (Py_TYPE(mp) == &PyDict_Type);
+ _Py_NewReference((PyObject *)mp);
+ }
+ else {
+ mp = PyObject_GC_New(PyDictObject, &PyDict_Type);
+ if (mp == NULL) {
+ DK_DECREF(keys);
+ free_values(values);
+ return NULL;
+ }
+ }
+ mp->ma_keys = keys;
+ mp->ma_values = values;
+ mp->ma_used = 0;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ assert(_PyDict_CheckConsistency(mp));
+ return (PyObject *)mp;
+}
+
+/* Consumes a reference to the keys object */
+static PyObject *
+new_dict_with_shared_keys(PyDictKeysObject *keys)
+{
+ PyObject **values;
+ Py_ssize_t i, size;
+
+ size = USABLE_FRACTION(DK_SIZE(keys));
+ values = new_values(size);
+ if (values == NULL) {
+ DK_DECREF(keys);
+ return PyErr_NoMemory();
+ }
+ for (i = 0; i < size; i++) {
+ values[i] = NULL;
+ }
+ return new_dict(keys, values);
+}
+
+PyObject *
+PyDict_New(void)
+{
+ PyDictKeysObject *keys = new_keys_object(PyDict_MINSIZE);
+ if (keys == NULL)
+ return NULL;
+ return new_dict(keys, NULL);
+}
+
+/* Search index of hash table from offset of entry table */
+static Py_ssize_t
+lookdict_index(PyDictKeysObject *k, Py_hash_t hash, Py_ssize_t index)
+{
+ size_t i;
+ size_t mask = DK_MASK(k);
+ Py_ssize_t ix;
+
+ i = (size_t)hash & mask;
+ ix = dk_get_index(k, i);
+ if (ix == index) {
+ return i;
+ }
+ if (ix == DKIX_EMPTY) {
+ return DKIX_EMPTY;
+ }
+
+ for (size_t perturb = hash;;) {
+ perturb >>= PERTURB_SHIFT;
+ i = mask & ((i << 2) + i + perturb + 1);
+ ix = dk_get_index(k, i);
+ if (ix == index) {
+ return i;
+ }
+ if (ix == DKIX_EMPTY) {
+ return DKIX_EMPTY;
+ }
+ }
+ assert(0); /* NOT REACHED */
+ return DKIX_ERROR;
+}
+
+/*
+The basic lookup function used by all operations.
+This is based on Algorithm D from Knuth Vol. 3, Sec. 6.4.
+Open addressing is preferred over chaining since the link overhead for
+chaining would be substantial (100% with typical malloc overhead).
+
+The initial probe index is computed as hash mod the table size. Subsequent
+probe indices are computed as explained earlier.
+
+All arithmetic on hash should ignore overflow.
+
+The details in this version are due to Tim Peters, building on many past
+contributions by Reimer Behrends, Jyrki Alakuijala, Vladimir Marangozov and
+Christian Tismer.
+
+lookdict() is general-purpose, and may return DKIX_ERROR if (and only if) a
+comparison raises an exception.
+lookdict_unicode() below is specialized to string keys, comparison of which can
+never raise an exception; that function can never return DKIX_ERROR when key
+is string. Otherwise, it falls back to lookdict().
+lookdict_unicode_nodummy is further specialized for string keys that cannot be
+the <dummy> value.
+For both, when the key isn't found a DKIX_EMPTY is returned. hashpos returns
+where the key index should be inserted.
+*/
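+
+/*
+ * A minimal sketch, not part of the upstream sources, of how callers use the
+ * contract described above; "mp", "key" and "hash" are hypothetical locals.
+ *
+ *     PyObject **value_addr;
+ *     Py_ssize_t ix = mp->ma_keys->dk_lookup(mp, key, hash, &value_addr, NULL);
+ *     if (ix == DKIX_ERROR)
+ *         return NULL;                   // a key comparison raised; propagate
+ *     if (ix == DKIX_EMPTY || *value_addr == NULL)
+ *         ...                            // key not present
+ *     else
+ *         ...                            // *value_addr points at the value
+ */
+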
+static Py_ssize_t
+lookdict(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
+{
+ size_t i, mask;
+ Py_ssize_t ix, freeslot;
+ int cmp;
+ PyDictKeysObject *dk;
+ PyDictKeyEntry *ep0, *ep;
+ PyObject *startkey;
+
+top:
+ dk = mp->ma_keys;
+ mask = DK_MASK(dk);
+ ep0 = DK_ENTRIES(dk);
+ i = (size_t)hash & mask;
+
+ ix = dk_get_index(dk, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ if (ix == DKIX_DUMMY) {
+ freeslot = i;
+ }
+ else {
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL);
+ if (ep->me_key == key) {
+ *value_addr = &ep->me_value;
+ if (hashpos != NULL)
+ *hashpos = i;
+ return ix;
+ }
+ if (ep->me_hash == hash) {
+ startkey = ep->me_key;
+ Py_INCREF(startkey);
+ cmp = PyObject_RichCompareBool(startkey, key, Py_EQ);
+ Py_DECREF(startkey);
+ if (cmp < 0) {
+ *value_addr = NULL;
+ return DKIX_ERROR;
+ }
+ if (dk == mp->ma_keys && ep->me_key == startkey) {
+ if (cmp > 0) {
+ *value_addr = &ep->me_value;
+ if (hashpos != NULL)
+ *hashpos = i;
+ return ix;
+ }
+ }
+ else {
+ /* The dict was mutated, restart */
+ goto top;
+ }
+ }
+ freeslot = -1;
+ }
+
+ for (size_t perturb = hash;;) {
+ perturb >>= PERTURB_SHIFT;
+ i = ((i << 2) + i + perturb + 1) & mask;
+ ix = dk_get_index(dk, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL) {
+ *hashpos = (freeslot == -1) ? (Py_ssize_t)i : freeslot;
+ }
+ *value_addr = NULL;
+ return ix;
+ }
+ if (ix == DKIX_DUMMY) {
+ if (freeslot == -1)
+ freeslot = i;
+ continue;
+ }
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL);
+ if (ep->me_key == key) {
+ if (hashpos != NULL) {
+ *hashpos = i;
+ }
+ *value_addr = &ep->me_value;
+ return ix;
+ }
+ if (ep->me_hash == hash) {
+ startkey = ep->me_key;
+ Py_INCREF(startkey);
+ cmp = PyObject_RichCompareBool(startkey, key, Py_EQ);
+ Py_DECREF(startkey);
+ if (cmp < 0) {
+ *value_addr = NULL;
+ return DKIX_ERROR;
+ }
+ if (dk == mp->ma_keys && ep->me_key == startkey) {
+ if (cmp > 0) {
+ if (hashpos != NULL) {
+ *hashpos = i;
+ }
+ *value_addr = &ep->me_value;
+ return ix;
+ }
+ }
+ else {
+ /* The dict was mutated, restart */
+ goto top;
+ }
+ }
+ }
+ assert(0); /* NOT REACHED */
+ return 0;
+}
+
+/* Specialized version for string-only keys */
+static Py_ssize_t
+lookdict_unicode(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
+{
+ size_t i;
+ size_t mask = DK_MASK(mp->ma_keys);
+ Py_ssize_t ix, freeslot;
+ PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
+
+ assert(mp->ma_values == NULL);
+ /* Make sure this function doesn't have to handle non-unicode keys,
+ including subclasses of str; e.g., one reason to subclass
+ unicodes is to override __eq__, and for speed we don't cater to
+ that here. */
+ if (!PyUnicode_CheckExact(key)) {
+ mp->ma_keys->dk_lookup = lookdict;
+ return lookdict(mp, key, hash, value_addr, hashpos);
+ }
+ i = (size_t)hash & mask;
+ ix = dk_get_index(mp->ma_keys, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ if (ix == DKIX_DUMMY) {
+ freeslot = i;
+ }
+ else {
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL);
+ if (ep->me_key == key
+ || (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = &ep->me_value;
+ return ix;
+ }
+ freeslot = -1;
+ }
+
+ for (size_t perturb = hash;;) {
+ perturb >>= PERTURB_SHIFT;
+ i = mask & ((i << 2) + i + perturb + 1);
+ ix = dk_get_index(mp->ma_keys, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL) {
+ *hashpos = (freeslot == -1) ? (Py_ssize_t)i : freeslot;
+ }
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ if (ix == DKIX_DUMMY) {
+ if (freeslot == -1)
+ freeslot = i;
+ continue;
+ }
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL);
+ if (ep->me_key == key
+ || (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ *value_addr = &ep->me_value;
+ if (hashpos != NULL) {
+ *hashpos = i;
+ }
+ return ix;
+ }
+ }
+ assert(0); /* NOT REACHED */
+ return 0;
+}
+
+/* Faster version of lookdict_unicode when it is known that no <dummy> keys
+ * will be present. */
+static Py_ssize_t
+lookdict_unicode_nodummy(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr,
+ Py_ssize_t *hashpos)
+{
+ size_t i;
+ size_t mask = DK_MASK(mp->ma_keys);
+ Py_ssize_t ix;
+ PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
+
+ assert(mp->ma_values == NULL);
+ /* Make sure this function doesn't have to handle non-unicode keys,
+ including subclasses of str; e.g., one reason to subclass
+ unicodes is to override __eq__, and for speed we don't cater to
+ that here. */
+ if (!PyUnicode_CheckExact(key)) {
+ mp->ma_keys->dk_lookup = lookdict;
+ return lookdict(mp, key, hash, value_addr, hashpos);
+ }
+ i = (size_t)hash & mask;
+ ix = dk_get_index(mp->ma_keys, i);
+ assert (ix != DKIX_DUMMY);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL);
+ assert(PyUnicode_CheckExact(ep->me_key));
+ if (ep->me_key == key ||
+ (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = &ep->me_value;
+ return ix;
+ }
+ for (size_t perturb = hash;;) {
+ perturb >>= PERTURB_SHIFT;
+ i = mask & ((i << 2) + i + perturb + 1);
+ ix = dk_get_index(mp->ma_keys, i);
+ assert (ix != DKIX_DUMMY);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
+ if (ep->me_key == key ||
+ (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = &ep->me_value;
+ return ix;
+ }
+ }
+ assert(0); /* NOT REACHED */
+ return 0;
+}
+
+/* Version of lookdict for split tables.
+ * All split tables and only split tables use this lookup function.
+ * Split tables only contain unicode keys and no dummy keys,
+ * so algorithm is the same as lookdict_unicode_nodummy.
+ */
+static Py_ssize_t
+lookdict_split(PyDictObject *mp, PyObject *key,
+ Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
+{
+ size_t i;
+ size_t mask = DK_MASK(mp->ma_keys);
+ Py_ssize_t ix;
+ PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
+
+ /* mp must be a split table */
+ assert(mp->ma_values != NULL);
+ if (!PyUnicode_CheckExact(key)) {
+ ix = lookdict(mp, key, hash, value_addr, hashpos);
+ if (ix >= 0) {
+ *value_addr = &mp->ma_values[ix];
+ }
+ return ix;
+ }
+
+ i = (size_t)hash & mask;
+ ix = dk_get_index(mp->ma_keys, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ assert(ix >= 0);
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
+ if (ep->me_key == key ||
+ (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = &mp->ma_values[ix];
+ return ix;
+ }
+ for (size_t perturb = hash;;) {
+ perturb >>= PERTURB_SHIFT;
+ i = mask & ((i << 2) + i + perturb + 1);
+ ix = dk_get_index(mp->ma_keys, i);
+ if (ix == DKIX_EMPTY) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = NULL;
+ return DKIX_EMPTY;
+ }
+ assert(ix >= 0);
+ ep = &ep0[ix];
+ assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
+ if (ep->me_key == key ||
+ (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
+ if (hashpos != NULL)
+ *hashpos = i;
+ *value_addr = &mp->ma_values[ix];
+ return ix;
+ }
+ }
+ assert(0); /* NOT REACHED */
+ return 0;
+}
+
+int
+_PyDict_HasOnlyStringKeys(PyObject *dict)
+{
+ Py_ssize_t pos = 0;
+ PyObject *key, *value;
+ assert(PyDict_Check(dict));
+ /* Shortcut */
+ if (((PyDictObject *)dict)->ma_keys->dk_lookup != lookdict)
+ return 1;
+ while (PyDict_Next(dict, &pos, &key, &value))
+ if (!PyUnicode_Check(key))
+ return 0;
+ return 1;
+}
+
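+/* A dict only needs to be tracked by the cyclic GC while it holds a key or
+ * value that may itself be tracked. MAINTAIN_TRACKING() starts tracking on
+ * such an insertion; _PyDict_MaybeUntrack() below drops tracking again when
+ * the contents are provably acyclic. */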
+#define MAINTAIN_TRACKING(mp, key, value) \
+ do { \
+ if (!_PyObject_GC_IS_TRACKED(mp)) { \
+ if (_PyObject_GC_MAY_BE_TRACKED(key) || \
+ _PyObject_GC_MAY_BE_TRACKED(value)) { \
+ _PyObject_GC_TRACK(mp); \
+ } \
+ } \
+ } while(0)
+
+void
+_PyDict_MaybeUntrack(PyObject *op)
+{
+ PyDictObject *mp;
+ PyObject *value;
+ Py_ssize_t i, numentries;
+ PyDictKeyEntry *ep0;
+
+ if (!PyDict_CheckExact(op) || !_PyObject_GC_IS_TRACKED(op))
+ return;
+
+ mp = (PyDictObject *) op;
+ ep0 = DK_ENTRIES(mp->ma_keys);
+ numentries = mp->ma_keys->dk_nentries;
+ if (_PyDict_HasSplitTable(mp)) {
+ for (i = 0; i < numentries; i++) {
+ if ((value = mp->ma_values[i]) == NULL)
+ continue;
+ if (_PyObject_GC_MAY_BE_TRACKED(value)) {
+ assert(!_PyObject_GC_MAY_BE_TRACKED(ep0[i].me_key));
+ return;
+ }
+ }
+ }
+ else {
+ for (i = 0; i < numentries; i++) {
+ if ((value = ep0[i].me_value) == NULL)
+ continue;
+ if (_PyObject_GC_MAY_BE_TRACKED(value) ||
+ _PyObject_GC_MAY_BE_TRACKED(ep0[i].me_key))
+ return;
+ }
+ }
+ _PyObject_GC_UNTRACK(op);
+}
+
+/* Internal function to find slot for an item from its hash
+ when it is known that the key is not present in the dict.
+
+ The dict must use a combined (non-split) table. */
+static void
+find_empty_slot(PyDictObject *mp, PyObject *key, Py_hash_t hash,
+ PyObject ***value_addr, Py_ssize_t *hashpos)
+{
+ size_t i;
+ size_t mask = DK_MASK(mp->ma_keys);
+ Py_ssize_t ix;
+ PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
+
+ assert(!_PyDict_HasSplitTable(mp));
+ assert(hashpos != NULL);
+ assert(key != NULL);
+
+ if (!PyUnicode_CheckExact(key))
+ mp->ma_keys->dk_lookup = lookdict;
+ i = hash & mask;
+ ix = dk_get_index(mp->ma_keys, i);
+ for (size_t perturb = hash; ix != DKIX_EMPTY;) {
+ perturb >>= PERTURB_SHIFT;
+ i = (i << 2) + i + perturb + 1;
+ ix = dk_get_index(mp->ma_keys, i & mask);
+ }
+ ep = &ep0[mp->ma_keys->dk_nentries];
+ *hashpos = i & mask;
+ assert(ep->me_value == NULL);
+ *value_addr = &ep->me_value;
+}
+
+static int
+insertion_resize(PyDictObject *mp)
+{
+ return dictresize(mp, GROWTH_RATE(mp));
+}
+
+/*
+Internal routine to insert a new item into the table.
+Used both by the internal resize routine and by the public insert routine.
+Returns -1 if an error occurred, or 0 on success.
+*/
+static int
+insertdict(PyDictObject *mp, PyObject *key, Py_hash_t hash, PyObject *value)
+{
+ PyObject *old_value;
+ PyObject **value_addr;
+ PyDictKeyEntry *ep, *ep0;
+ Py_ssize_t hashpos, ix;
+
+ Py_INCREF(key);
+ Py_INCREF(value);
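+ /* A split (shared-keys) table can only hold exact str keys; inserting any
+ * other key type forces conversion to a combined table first. */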
+ if (mp->ma_values != NULL && !PyUnicode_CheckExact(key)) {
+ if (insertion_resize(mp) < 0)
+ goto Fail;
+ }
+
+ ix = mp->ma_keys->dk_lookup(mp, key, hash, &value_addr, &hashpos);
+ if (ix == DKIX_ERROR)
+ goto Fail;
+
+ assert(PyUnicode_CheckExact(key) || mp->ma_keys->dk_lookup == lookdict);
+ MAINTAIN_TRACKING(mp, key, value);
+
+ /* When the insertion order differs from that of the shared keys, the keys
+ * can no longer be shared. Convert this instance to a combined table.
+ */
+ if (_PyDict_HasSplitTable(mp) &&
+ ((ix >= 0 && *value_addr == NULL && mp->ma_used != ix) ||
+ (ix == DKIX_EMPTY && mp->ma_used != mp->ma_keys->dk_nentries))) {
+ if (insertion_resize(mp) < 0)
+ goto Fail;
+ find_empty_slot(mp, key, hash, &value_addr, &hashpos);
+ ix = DKIX_EMPTY;
+ }
+
+ if (ix == DKIX_EMPTY) {
+ /* Insert into new slot. */
+ if (mp->ma_keys->dk_usable <= 0) {
+ /* Need to resize. */
+ if (insertion_resize(mp) < 0)
+ goto Fail;
+ find_empty_slot(mp, key, hash, &value_addr, &hashpos);
+ }
+ ep0 = DK_ENTRIES(mp->ma_keys);
+ ep = &ep0[mp->ma_keys->dk_nentries];
+ dk_set_index(mp->ma_keys, hashpos, mp->ma_keys->dk_nentries);
+ ep->me_key = key;
+ ep->me_hash = hash;
+ if (mp->ma_values) {
+ assert (mp->ma_values[mp->ma_keys->dk_nentries] == NULL);
+ mp->ma_values[mp->ma_keys->dk_nentries] = value;
+ }
+ else {
+ ep->me_value = value;
+ }
+ mp->ma_used++;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ mp->ma_keys->dk_usable--;
+ mp->ma_keys->dk_nentries++;
+ assert(mp->ma_keys->dk_usable >= 0);
+ assert(_PyDict_CheckConsistency(mp));
+ return 0;
+ }
+
+ assert(value_addr != NULL);
+
+ old_value = *value_addr;
+ if (old_value != NULL) {
+ *value_addr = value;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ assert(_PyDict_CheckConsistency(mp));
+
+ Py_DECREF(old_value); /* which **CAN** re-enter (see issue #22653) */
+ Py_DECREF(key);
+ return 0;
+ }
+
+ /* pending state */
+ assert(_PyDict_HasSplitTable(mp));
+ assert(ix == mp->ma_used);
+ *value_addr = value;
+ mp->ma_used++;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ assert(_PyDict_CheckConsistency(mp));
+ Py_DECREF(key);
+ return 0;
+
+Fail:
+ Py_DECREF(value);
+ Py_DECREF(key);
+ return -1;
+}
+
+/*
+Internal routine used by dictresize() to insert an item which is
+known to be absent from the dict. This routine also assumes that
+the dict contains no deleted entries. Besides the performance benefit,
+using insertdict() in dictresize() is dangerous (SF bug #1456209).
+Note that no refcounts are changed by this routine; if needed, the caller
+is responsible for incref'ing `key` and `value`.
+ Neither mp->ma_used nor k->dk_usable is modified by this routine; the caller
+ must set them correctly.
+*/
+static void
+insertdict_clean(PyDictObject *mp, PyObject *key, Py_hash_t hash,
+ PyObject *value)
+{
+ size_t i;
+ PyDictKeysObject *k = mp->ma_keys;
+ size_t mask = (size_t)DK_SIZE(k)-1;
+ PyDictKeyEntry *ep0 = DK_ENTRIES(mp->ma_keys);
+ PyDictKeyEntry *ep;
+
+ assert(k->dk_lookup != NULL);
+ assert(value != NULL);
+ assert(key != NULL);
+ assert(PyUnicode_CheckExact(key) || k->dk_lookup == lookdict);
+ i = hash & mask;
+ for (size_t perturb = hash; dk_get_index(k, i) != DKIX_EMPTY;) {
+ perturb >>= PERTURB_SHIFT;
+ i = mask & ((i << 2) + i + perturb + 1);
+ }
+ ep = &ep0[k->dk_nentries];
+ assert(ep->me_value == NULL);
+ dk_set_index(k, i, k->dk_nentries);
+ k->dk_nentries++;
+ ep->me_key = key;
+ ep->me_hash = hash;
+ ep->me_value = value;
+}
+
+/*
+Restructure the table by allocating a new table and reinserting all
+items again. When entries have been deleted, the new table may
+actually be smaller than the old one.
+If a table is split (its keys and hashes are shared, its values are not),
+then the values are temporarily copied into the table, it is resized as
+a combined table, then the me_value slots in the old table are NULLed out.
+ After resizing, a table is always combined,
+but can be resplit by make_keys_shared().
+*/
+static int
+dictresize(PyDictObject *mp, Py_ssize_t minsize)
+{
+ Py_ssize_t i, newsize;
+ PyDictKeysObject *oldkeys;
+ PyObject **oldvalues;
+ PyDictKeyEntry *ep0;
+
+ /* Find the smallest power-of-two table size >= minsize. */
+ for (newsize = PyDict_MINSIZE;
+ newsize < minsize && newsize > 0;
+ newsize <<= 1)
+ ;
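+ /* The newsize > 0 test guards against overflow of the left shift; an
+ * overflowed size is reported as a MemoryError below. */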
+ if (newsize <= 0) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ oldkeys = mp->ma_keys;
+ oldvalues = mp->ma_values;
+ /* Allocate a new table. */
+ mp->ma_keys = new_keys_object(newsize);
+ if (mp->ma_keys == NULL) {
+ mp->ma_keys = oldkeys;
+ return -1;
+ }
+ // New table must be large enough.
+ assert(mp->ma_keys->dk_usable >= mp->ma_used);
+ if (oldkeys->dk_lookup == lookdict)
+ mp->ma_keys->dk_lookup = lookdict;
+ mp->ma_values = NULL;
+ ep0 = DK_ENTRIES(oldkeys);
+ /* Main loop below assumes we can transfer refcount to new keys
+ * and that value is stored in me_value.
+ * Increment ref-counts and copy values here to compensate.
+ * This (resizing a split table) should be relatively rare. */
+ if (oldvalues != NULL) {
+ for (i = 0; i < oldkeys->dk_nentries; i++) {
+ if (oldvalues[i] != NULL) {
+ Py_INCREF(ep0[i].me_key);
+ ep0[i].me_value = oldvalues[i];
+ }
+ }
+ }
+ /* Main loop */
+ for (i = 0; i < oldkeys->dk_nentries; i++) {
+ PyDictKeyEntry *ep = &ep0[i];
+ if (ep->me_value != NULL) {
+ insertdict_clean(mp, ep->me_key, ep->me_hash, ep->me_value);
+ }
+ }
+ mp->ma_keys->dk_usable -= mp->ma_used;
+ if (oldvalues != NULL) {
+ /* NULL out me_value slot in oldkeys, in case it was shared */
+ for (i = 0; i < oldkeys->dk_nentries; i++)
+ ep0[i].me_value = NULL;
+ DK_DECREF(oldkeys);
+ if (oldvalues != empty_values) {
+ free_values(oldvalues);
+ }
+ }
+ else {
+ assert(oldkeys->dk_lookup != lookdict_split);
+ assert(oldkeys->dk_refcnt == 1);
+ DK_DEBUG_DECREF PyObject_FREE(oldkeys);
+ }
+ return 0;
+}
+
+/* Returns NULL if unable to split table.
+ * A NULL return does not necessarily indicate an error */
+static PyDictKeysObject *
+make_keys_shared(PyObject *op)
+{
+ Py_ssize_t i;
+ Py_ssize_t size;
+ PyDictObject *mp = (PyDictObject *)op;
+
+ if (!PyDict_CheckExact(op))
+ return NULL;
+ if (!_PyDict_HasSplitTable(mp)) {
+ PyDictKeyEntry *ep0;
+ PyObject **values;
+ assert(mp->ma_keys->dk_refcnt == 1);
+ if (mp->ma_keys->dk_lookup == lookdict) {
+ return NULL;
+ }
+ else if (mp->ma_keys->dk_lookup == lookdict_unicode) {
+ /* Remove dummy keys */
+ if (dictresize(mp, DK_SIZE(mp->ma_keys)))
+ return NULL;
+ }
+ assert(mp->ma_keys->dk_lookup == lookdict_unicode_nodummy);
+ /* Copy values into a new array */
+ ep0 = DK_ENTRIES(mp->ma_keys);
+ size = USABLE_FRACTION(DK_SIZE(mp->ma_keys));
+ values = new_values(size);
+ if (values == NULL) {
+ PyErr_SetString(PyExc_MemoryError,
+ "Not enough memory to allocate new values array");
+ return NULL;
+ }
+ for (i = 0; i < size; i++) {
+ values[i] = ep0[i].me_value;
+ ep0[i].me_value = NULL;
+ }
+ mp->ma_keys->dk_lookup = lookdict_split;
+ mp->ma_values = values;
+ }
+ DK_INCREF(mp->ma_keys);
+ return mp->ma_keys;
+}
+
+PyObject *
+_PyDict_NewPresized(Py_ssize_t minused)
+{
+ const Py_ssize_t max_presize = 128 * 1024;
+ Py_ssize_t newsize;
+ PyDictKeysObject *new_keys;
+
+ /* There is no strict guarantee that the returned dict can hold minused
+ * items without a resize, so we create a medium-size dict instead of a
+ * very large one (or raising MemoryError).
+ */
+ if (minused > USABLE_FRACTION(max_presize)) {
+ newsize = max_presize;
+ }
+ else {
+ Py_ssize_t minsize = ESTIMATE_SIZE(minused);
+ newsize = PyDict_MINSIZE;
+ while (newsize < minsize) {
+ newsize <<= 1;
+ }
+ }
+ assert(IS_POWER_OF_2(newsize));
+
+ new_keys = new_keys_object(newsize);
+ if (new_keys == NULL)
+ return NULL;
+ return new_dict(new_keys, NULL);
+}
+
+/* Note that, for historical reasons, PyDict_GetItem() suppresses all errors
+ * that may occur (originally dicts supported only string keys, and exceptions
+ * weren't possible). So, while the original intent was that a NULL return
+ * meant the key wasn't present, in reality it can mean that, or that an error
+ * (suppressed) occurred while computing the key's hash, or that some error
+ * (suppressed) occurred when comparing keys in the dict's internal probe
+ * sequence. A nasty example of the latter is when a Python-coded comparison
+ * function hits a stack-depth error, which can cause this to return NULL
+ * even if the key is present.
+ */
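+/* Callers that need to distinguish a missing key from a suppressed lookup
+ * error should prefer PyDict_GetItemWithError() below. */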
+PyObject *
+PyDict_GetItem(PyObject *op, PyObject *key)
+{
+ Py_hash_t hash;
+ Py_ssize_t ix;
+ PyDictObject *mp = (PyDictObject *)op;
+ PyThreadState *tstate;
+ PyObject **value_addr;
+
+ if (!PyDict_Check(op))
+ return NULL;
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1)
+ {
+ hash = PyObject_Hash(key);
+ if (hash == -1) {
+ PyErr_Clear();
+ return NULL;
+ }
+ }
+
+ /* We can arrive here with a NULL tstate during initialization: try
+ running "python -Wi" for an example related to string interning.
+ Let's just hope that no exception occurs then... This must be
+ _PyThreadState_Current and not PyThreadState_GET() because in debug
+ mode, the latter complains if tstate is NULL. */
+ tstate = _PyThreadState_UncheckedGet();
+ if (tstate != NULL && tstate->curexc_type != NULL) {
+ /* preserve the existing exception */
+ PyObject *err_type, *err_value, *err_tb;
+ PyErr_Fetch(&err_type, &err_value, &err_tb);
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ /* ignore errors */
+ PyErr_Restore(err_type, err_value, err_tb);
+ if (ix < 0)
+ return NULL;
+ }
+ else {
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix < 0) {
+ PyErr_Clear();
+ return NULL;
+ }
+ }
+ return *value_addr;
+}
+
+/* Same as PyDict_GetItemWithError() but with hash supplied by caller.
+ This returns NULL *with* an exception set if an exception occurred.
+ It returns NULL *without* an exception set if the key wasn't present.
+*/
+PyObject *
+_PyDict_GetItem_KnownHash(PyObject *op, PyObject *key, Py_hash_t hash)
+{
+ Py_ssize_t ix;
+ PyDictObject *mp = (PyDictObject *)op;
+ PyObject **value_addr;
+
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix < 0) {
+ return NULL;
+ }
+ return *value_addr;
+}
+
+/* Variant of PyDict_GetItem() that doesn't suppress exceptions.
+ This returns NULL *with* an exception set if an exception occurred.
+ It returns NULL *without* an exception set if the key wasn't present.
+*/
+PyObject *
+PyDict_GetItemWithError(PyObject *op, PyObject *key)
+{
+ Py_ssize_t ix;
+ Py_hash_t hash;
+ PyDictObject *mp = (PyDictObject *)op;
+ PyObject **value_addr;
+
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1)
+ {
+ hash = PyObject_Hash(key);
+ if (hash == -1) {
+ return NULL;
+ }
+ }
+
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix < 0)
+ return NULL;
+ return *value_addr;
+}
+
+PyObject *
+_PyDict_GetItemIdWithError(PyObject *dp, struct _Py_Identifier *key)
+{
+ PyObject *kv;
+ kv = _PyUnicode_FromId(key); /* borrowed */
+ if (kv == NULL)
+ return NULL;
+ return PyDict_GetItemWithError(dp, kv);
+}
+
+/* Fast version of global value lookup (LOAD_GLOBAL).
+ * Lookup in globals, then builtins.
+ *
+ * Raise an exception and return NULL if an error occurred (e.g. computing the
+ * key hash failed, or a key comparison failed). Return NULL if the key doesn't
+ * exist; return the value if the key exists.
+ */
+PyObject *
+_PyDict_LoadGlobal(PyDictObject *globals, PyDictObject *builtins, PyObject *key)
+{
+ Py_ssize_t ix;
+ Py_hash_t hash;
+ PyObject **value_addr;
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1)
+ {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+
+ /* namespace 1: globals */
+ ix = globals->ma_keys->dk_lookup(globals, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return NULL;
+ if (ix != DKIX_EMPTY && *value_addr != NULL)
+ return *value_addr;
+
+ /* namespace 2: builtins */
+ ix = builtins->ma_keys->dk_lookup(builtins, key, hash, &value_addr, NULL);
+ if (ix < 0)
+ return NULL;
+ return *value_addr;
+}
+
+/* CAUTION: PyDict_SetItem() must guarantee that it won't resize the
+ * dictionary if it's merely replacing the value for an existing key.
+ * This means that it's safe to loop over a dictionary with PyDict_Next()
+ * and occasionally replace a value -- but you can't insert new keys or
+ * remove them.
+ */
+int
+PyDict_SetItem(PyObject *op, PyObject *key, PyObject *value)
+{
+ PyDictObject *mp;
+ Py_hash_t hash;
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ assert(key);
+ assert(value);
+ mp = (PyDictObject *)op;
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1)
+ {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return -1;
+ }
+
+ /* insertdict() handles any resizing that might be necessary */
+ return insertdict(mp, key, hash, value);
+}
+
+int
+_PyDict_SetItem_KnownHash(PyObject *op, PyObject *key, PyObject *value,
+ Py_hash_t hash)
+{
+ PyDictObject *mp;
+
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ assert(key);
+ assert(value);
+ assert(hash != -1);
+ mp = (PyDictObject *)op;
+
+ /* insertdict() handles any resizing that might be necessary */
+ return insertdict(mp, key, hash, value);
+}
+
+static int
+delitem_common(PyDictObject *mp, Py_ssize_t hashpos, Py_ssize_t ix,
+ PyObject **value_addr)
+{
+ PyObject *old_key, *old_value;
+ PyDictKeyEntry *ep;
+
+ old_value = *value_addr;
+ assert(old_value != NULL);
+ *value_addr = NULL;
+ mp->ma_used--;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ ep = &DK_ENTRIES(mp->ma_keys)[ix];
+ dk_set_index(mp->ma_keys, hashpos, DKIX_DUMMY);
+ ENSURE_ALLOWS_DELETIONS(mp);
+ old_key = ep->me_key;
+ ep->me_key = NULL;
+ Py_DECREF(old_key);
+ Py_DECREF(old_value);
+
+ assert(_PyDict_CheckConsistency(mp));
+ return 0;
+}
+
+int
+PyDict_DelItem(PyObject *op, PyObject *key)
+{
+ Py_hash_t hash;
+ assert(key);
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return -1;
+ }
+
+ return _PyDict_DelItem_KnownHash(op, key, hash);
+}
+
+int
+_PyDict_DelItem_KnownHash(PyObject *op, PyObject *key, Py_hash_t hash)
+{
+ Py_ssize_t hashpos, ix;
+ PyDictObject *mp;
+ PyObject **value_addr;
+
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ assert(key);
+ assert(hash != -1);
+ mp = (PyDictObject *)op;
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ if (ix == DKIX_ERROR)
+ return -1;
+ if (ix == DKIX_EMPTY || *value_addr == NULL) {
+ _PyErr_SetKeyError(key);
+ return -1;
+ }
+ assert(dk_get_index(mp->ma_keys, hashpos) == ix);
+
+ // Split table doesn't allow deletion. Combine it.
+ if (_PyDict_HasSplitTable(mp)) {
+ if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
+ return -1;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ assert(ix >= 0);
+ }
+ return delitem_common(mp, hashpos, ix, value_addr);
+}
+
+/* This function promises that the predicate -> deletion sequence is atomic
+ * (i.e. protected by the GIL), assuming the predicate itself doesn't
+ * release the GIL.
+ */
+int
+_PyDict_DelItemIf(PyObject *op, PyObject *key,
+ int (*predicate)(PyObject *value))
+{
+ Py_ssize_t hashpos, ix;
+ PyDictObject *mp;
+ Py_hash_t hash;
+ PyObject **value_addr;
+ int res;
+
+ if (!PyDict_Check(op)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ assert(key);
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return -1;
+ mp = (PyDictObject *)op;
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ if (ix == DKIX_ERROR)
+ return -1;
+ if (ix == DKIX_EMPTY || *value_addr == NULL) {
+ _PyErr_SetKeyError(key);
+ return -1;
+ }
+ assert(dk_get_index(mp->ma_keys, hashpos) == ix);
+
+ // Split table doesn't allow deletion. Combine it.
+ if (_PyDict_HasSplitTable(mp)) {
+ if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
+ return -1;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ assert(ix >= 0);
+ }
+
+ res = predicate(*value_addr);
+ if (res == -1)
+ return -1;
+ if (res > 0)
+ return delitem_common(mp, hashpos, ix, value_addr);
+ else
+ return 0;
+}
+
+
+void
+PyDict_Clear(PyObject *op)
+{
+ PyDictObject *mp;
+ PyDictKeysObject *oldkeys;
+ PyObject **oldvalues;
+ Py_ssize_t i, n;
+
+ if (!PyDict_Check(op))
+ return;
+ mp = ((PyDictObject *)op);
+ oldkeys = mp->ma_keys;
+ oldvalues = mp->ma_values;
+ if (oldvalues == empty_values)
+ return;
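+ /* Detach the old keys/values before dropping any references: the DECREFs
+ * below can run arbitrary destructor code that may look at this dict, which
+ * must already appear empty and consistent at that point. */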
+ /* Empty the dict... */
+ DK_INCREF(Py_EMPTY_KEYS);
+ mp->ma_keys = Py_EMPTY_KEYS;
+ mp->ma_values = empty_values;
+ mp->ma_used = 0;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ /* ...then clear the keys and values */
+ if (oldvalues != NULL) {
+ n = oldkeys->dk_nentries;
+ for (i = 0; i < n; i++)
+ Py_CLEAR(oldvalues[i]);
+ free_values(oldvalues);
+ DK_DECREF(oldkeys);
+ }
+ else {
+ assert(oldkeys->dk_refcnt == 1);
+ DK_DECREF(oldkeys);
+ }
+ assert(_PyDict_CheckConsistency(mp));
+}
+
+/* Internal version of PyDict_Next that returns a hash value in addition
+ * to the key and value.
+ * Return 1 on success; return 0 when the end of the dictionary has been
+ * reached (or if op is not a dictionary).
+ */
+int
+_PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey,
+ PyObject **pvalue, Py_hash_t *phash)
+{
+ Py_ssize_t i, n;
+ PyDictObject *mp;
+ PyDictKeyEntry *entry_ptr;
+ PyObject *value;
+
+ if (!PyDict_Check(op))
+ return 0;
+ mp = (PyDictObject *)op;
+ i = *ppos;
+ n = mp->ma_keys->dk_nentries;
+ if ((size_t)i >= (size_t)n)
+ return 0;
+ if (mp->ma_values) {
+ PyObject **value_ptr = &mp->ma_values[i];
+ while (i < n && *value_ptr == NULL) {
+ value_ptr++;
+ i++;
+ }
+ if (i >= n)
+ return 0;
+ entry_ptr = &DK_ENTRIES(mp->ma_keys)[i];
+ value = *value_ptr;
+ }
+ else {
+ entry_ptr = &DK_ENTRIES(mp->ma_keys)[i];
+ while (i < n && entry_ptr->me_value == NULL) {
+ entry_ptr++;
+ i++;
+ }
+ if (i >= n)
+ return 0;
+ value = entry_ptr->me_value;
+ }
+ *ppos = i+1;
+ if (pkey)
+ *pkey = entry_ptr->me_key;
+ if (phash)
+ *phash = entry_ptr->me_hash;
+ if (pvalue)
+ *pvalue = value;
+ return 1;
+}
+
+/*
+ * Iterate over a dict. Use like so:
+ *
+ * Py_ssize_t i = 0; (important: i must not otherwise be changed by you)
+ * PyObject *key, *value;
+ * while (PyDict_Next(yourdict, &i, &key, &value)) {
+ * ... use the borrowed references in key and value ...
+ * }
+ *
+ * Return 1 on success; return 0 when the end of the dictionary has been
+ * reached (or if op is not a dictionary).
+ *
+ * CAUTION: In general, it isn't safe to use PyDict_Next in a loop that
+ * mutates the dict. One exception: it is safe if the loop merely changes
+ * the values associated with the keys (but doesn't insert new keys or
+ * delete keys), via PyDict_SetItem().
+ */
+int
+PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)
+{
+ return _PyDict_Next(op, ppos, pkey, pvalue, NULL);
+}
+
+/* Internal version of dict.pop(). */
+PyObject *
+_PyDict_Pop_KnownHash(PyObject *dict, PyObject *key, Py_hash_t hash, PyObject *deflt)
+{
+ Py_ssize_t ix, hashpos;
+ PyObject *old_value, *old_key;
+ PyDictKeyEntry *ep;
+ PyObject **value_addr;
+ PyDictObject *mp;
+
+ assert(PyDict_Check(dict));
+ mp = (PyDictObject *)dict;
+
+ if (mp->ma_used == 0) {
+ if (deflt) {
+ Py_INCREF(deflt);
+ return deflt;
+ }
+ _PyErr_SetKeyError(key);
+ return NULL;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ if (ix == DKIX_ERROR)
+ return NULL;
+ if (ix == DKIX_EMPTY || *value_addr == NULL) {
+ if (deflt) {
+ Py_INCREF(deflt);
+ return deflt;
+ }
+ _PyErr_SetKeyError(key);
+ return NULL;
+ }
+
+ // Split table doesn't allow deletion. Combine it.
+ if (_PyDict_HasSplitTable(mp)) {
+ if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
+ return NULL;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ assert(ix >= 0);
+ }
+
+ old_value = *value_addr;
+ assert(old_value != NULL);
+ *value_addr = NULL;
+ mp->ma_used--;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ dk_set_index(mp->ma_keys, hashpos, DKIX_DUMMY);
+ ep = &DK_ENTRIES(mp->ma_keys)[ix];
+ ENSURE_ALLOWS_DELETIONS(mp);
+ old_key = ep->me_key;
+ ep->me_key = NULL;
+ Py_DECREF(old_key);
+
+ assert(_PyDict_CheckConsistency(mp));
+ return old_value;
+}
+
+PyObject *
+_PyDict_Pop(PyObject *dict, PyObject *key, PyObject *deflt)
+{
+ Py_hash_t hash;
+
+ if (((PyDictObject *)dict)->ma_used == 0) {
+ if (deflt) {
+ Py_INCREF(deflt);
+ return deflt;
+ }
+ _PyErr_SetKeyError(key);
+ return NULL;
+ }
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+ return _PyDict_Pop_KnownHash(dict, key, hash, deflt);
+}
+
+/* Internal version of dict.from_keys(). It is subclass-friendly. */
+PyObject *
+_PyDict_FromKeys(PyObject *cls, PyObject *iterable, PyObject *value)
+{
+ PyObject *it; /* iter(iterable) */
+ PyObject *key;
+ PyObject *d;
+ int status;
+
+ d = PyObject_CallObject(cls, NULL);
+ if (d == NULL)
+ return NULL;
+
+ if (PyDict_CheckExact(d) && ((PyDictObject *)d)->ma_used == 0) {
+ if (PyDict_CheckExact(iterable)) {
+ PyDictObject *mp = (PyDictObject *)d;
+ PyObject *oldvalue;
+ Py_ssize_t pos = 0;
+ PyObject *key;
+ Py_hash_t hash;
+
+ if (dictresize(mp, ESTIMATE_SIZE(((PyDictObject *)iterable)->ma_used))) {
+ Py_DECREF(d);
+ return NULL;
+ }
+
+ while (_PyDict_Next(iterable, &pos, &key, &oldvalue, &hash)) {
+ if (insertdict(mp, key, hash, value)) {
+ Py_DECREF(d);
+ return NULL;
+ }
+ }
+ return d;
+ }
+ if (PyAnySet_CheckExact(iterable)) {
+ PyDictObject *mp = (PyDictObject *)d;
+ Py_ssize_t pos = 0;
+ PyObject *key;
+ Py_hash_t hash;
+
+ if (dictresize(mp, ESTIMATE_SIZE(PySet_GET_SIZE(iterable)))) {
+ Py_DECREF(d);
+ return NULL;
+ }
+
+ while (_PySet_NextEntry(iterable, &pos, &key, &hash)) {
+ if (insertdict(mp, key, hash, value)) {
+ Py_DECREF(d);
+ return NULL;
+ }
+ }
+ return d;
+ }
+ }
+
+ it = PyObject_GetIter(iterable);
+ if (it == NULL){
+ Py_DECREF(d);
+ return NULL;
+ }
+
+ if (PyDict_CheckExact(d)) {
+ while ((key = PyIter_Next(it)) != NULL) {
+ status = PyDict_SetItem(d, key, value);
+ Py_DECREF(key);
+ if (status < 0)
+ goto Fail;
+ }
+ } else {
+ while ((key = PyIter_Next(it)) != NULL) {
+ status = PyObject_SetItem(d, key, value);
+ Py_DECREF(key);
+ if (status < 0)
+ goto Fail;
+ }
+ }
+
+ if (PyErr_Occurred())
+ goto Fail;
+ Py_DECREF(it);
+ return d;
+
+Fail:
+ Py_DECREF(it);
+ Py_DECREF(d);
+ return NULL;
+}
+
+/* Methods */
+
+static void
+dict_dealloc(PyDictObject *mp)
+{
+ PyObject **values = mp->ma_values;
+ PyDictKeysObject *keys = mp->ma_keys;
+ Py_ssize_t i, n;
+
+ /* bpo-31095: UnTrack is needed before calling any callbacks */
+ PyObject_GC_UnTrack(mp);
+ Py_TRASHCAN_SAFE_BEGIN(mp)
+ if (values != NULL) {
+ if (values != empty_values) {
+ for (i = 0, n = mp->ma_keys->dk_nentries; i < n; i++) {
+ Py_XDECREF(values[i]);
+ }
+ free_values(values);
+ }
+ DK_DECREF(keys);
+ }
+ else if (keys != NULL) {
+ assert(keys->dk_refcnt == 1);
+ DK_DECREF(keys);
+ }
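+ /* Exact dict instances are cached on a small free list to speed up later
+ * allocations; subclass instances are released through tp_free as usual. */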
+ if (numfree < PyDict_MAXFREELIST && Py_TYPE(mp) == &PyDict_Type)
+ free_list[numfree++] = mp;
+ else
+ Py_TYPE(mp)->tp_free((PyObject *)mp);
+ Py_TRASHCAN_SAFE_END(mp)
+}
+
+
+static PyObject *
+dict_repr(PyDictObject *mp)
+{
+ Py_ssize_t i;
+ PyObject *key = NULL, *value = NULL;
+ _PyUnicodeWriter writer;
+ int first;
+
+ i = Py_ReprEnter((PyObject *)mp);
+ if (i != 0) {
+ return i > 0 ? PyUnicode_FromString("{...}") : NULL;
+ }
+
+ if (mp->ma_used == 0) {
+ Py_ReprLeave((PyObject *)mp);
+ return PyUnicode_FromString("{}");
+ }
+
+ _PyUnicodeWriter_Init(&writer);
+ writer.overallocate = 1;
+ /* "{" + "1: 2" + ", 3: 4" * (len - 1) + "}" */
+ writer.min_length = 1 + 4 + (2 + 4) * (mp->ma_used - 1) + 1;
+
+ if (_PyUnicodeWriter_WriteChar(&writer, '{') < 0)
+ goto error;
+
+ /* Do repr() on each key+value pair, and insert ": " between them.
+ Note that repr may mutate the dict. */
+ i = 0;
+ first = 1;
+ while (PyDict_Next((PyObject *)mp, &i, &key, &value)) {
+ PyObject *s;
+ int res;
+
+ /* Prevent repr from deleting key or value during key format. */
+ Py_INCREF(key);
+ Py_INCREF(value);
+
+ if (!first) {
+ if (_PyUnicodeWriter_WriteASCIIString(&writer, ", ", 2) < 0)
+ goto error;
+ }
+ first = 0;
+
+ s = PyObject_Repr(key);
+ if (s == NULL)
+ goto error;
+ res = _PyUnicodeWriter_WriteStr(&writer, s);
+ Py_DECREF(s);
+ if (res < 0)
+ goto error;
+
+ if (_PyUnicodeWriter_WriteASCIIString(&writer, ": ", 2) < 0)
+ goto error;
+
+ s = PyObject_Repr(value);
+ if (s == NULL)
+ goto error;
+ res = _PyUnicodeWriter_WriteStr(&writer, s);
+ Py_DECREF(s);
+ if (res < 0)
+ goto error;
+
+ Py_CLEAR(key);
+ Py_CLEAR(value);
+ }
+
+ writer.overallocate = 0;
+ if (_PyUnicodeWriter_WriteChar(&writer, '}') < 0)
+ goto error;
+
+ Py_ReprLeave((PyObject *)mp);
+
+ return _PyUnicodeWriter_Finish(&writer);
+
+error:
+ Py_ReprLeave((PyObject *)mp);
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(key);
+ Py_XDECREF(value);
+ return NULL;
+}
+
+static Py_ssize_t
+dict_length(PyDictObject *mp)
+{
+ return mp->ma_used;
+}
+
+static PyObject *
+dict_subscript(PyDictObject *mp, PyObject *key)
+{
+ PyObject *v;
+ Py_ssize_t ix;
+ Py_hash_t hash;
+ PyObject **value_addr;
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return NULL;
+ if (ix == DKIX_EMPTY || *value_addr == NULL) {
+ if (!PyDict_CheckExact(mp)) {
+ /* Look up __missing__ method if we're a subclass. */
+ PyObject *missing, *res;
+ _Py_IDENTIFIER(__missing__);
+ missing = _PyObject_LookupSpecial((PyObject *)mp, &PyId___missing__);
+ if (missing != NULL) {
+ res = PyObject_CallFunctionObjArgs(missing,
+ key, NULL);
+ Py_DECREF(missing);
+ return res;
+ }
+ else if (PyErr_Occurred())
+ return NULL;
+ }
+ _PyErr_SetKeyError(key);
+ return NULL;
+ }
+ v = *value_addr;
+ Py_INCREF(v);
+ return v;
+}
+
+static int
+dict_ass_sub(PyDictObject *mp, PyObject *v, PyObject *w)
+{
+ if (w == NULL)
+ return PyDict_DelItem((PyObject *)mp, v);
+ else
+ return PyDict_SetItem((PyObject *)mp, v, w);
+}
+
+static PyMappingMethods dict_as_mapping = {
+ (lenfunc)dict_length, /*mp_length*/
+ (binaryfunc)dict_subscript, /*mp_subscript*/
+ (objobjargproc)dict_ass_sub, /*mp_ass_subscript*/
+};
+
+static PyObject *
+dict_keys(PyDictObject *mp)
+{
+ PyObject *v;
+ Py_ssize_t i, j;
+ PyDictKeyEntry *ep;
+ Py_ssize_t size, n, offset;
+ PyObject **value_ptr;
+
+ again:
+ n = mp->ma_used;
+ v = PyList_New(n);
+ if (v == NULL)
+ return NULL;
+ if (n != mp->ma_used) {
+ /* Durnit. The allocations caused the dict to resize.
+ * Just start over, this shouldn't normally happen.
+ */
+ Py_DECREF(v);
+ goto again;
+ }
+ ep = DK_ENTRIES(mp->ma_keys);
+ size = mp->ma_keys->dk_nentries;
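+ /* For a split table the values live in ma_values (stride: one pointer);
+ * for a combined table they are stored inline in the entries (stride:
+ * sizeof(PyDictKeyEntry)). value_ptr/offset let the loop below walk either
+ * layout. */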
+ if (mp->ma_values) {
+ value_ptr = mp->ma_values;
+ offset = sizeof(PyObject *);
+ }
+ else {
+ value_ptr = &ep[0].me_value;
+ offset = sizeof(PyDictKeyEntry);
+ }
+ for (i = 0, j = 0; i < size; i++) {
+ if (*value_ptr != NULL) {
+ PyObject *key = ep[i].me_key;
+ Py_INCREF(key);
+ PyList_SET_ITEM(v, j, key);
+ j++;
+ }
+ value_ptr = (PyObject **)(((char *)value_ptr) + offset);
+ }
+ assert(j == n);
+ return v;
+}
+
+static PyObject *
+dict_values(PyDictObject *mp)
+{
+ PyObject *v;
+ Py_ssize_t i, j;
+ PyDictKeyEntry *ep;
+ Py_ssize_t size, n, offset;
+ PyObject **value_ptr;
+
+ again:
+ n = mp->ma_used;
+ v = PyList_New(n);
+ if (v == NULL)
+ return NULL;
+ if (n != mp->ma_used) {
+ /* Durnit. The allocations caused the dict to resize.
+ * Just start over, this shouldn't normally happen.
+ */
+ Py_DECREF(v);
+ goto again;
+ }
+ ep = DK_ENTRIES(mp->ma_keys);
+ size = mp->ma_keys->dk_nentries;
+ if (mp->ma_values) {
+ value_ptr = mp->ma_values;
+ offset = sizeof(PyObject *);
+ }
+ else {
+ value_ptr = &ep[0].me_value;
+ offset = sizeof(PyDictKeyEntry);
+ }
+ for (i = 0, j = 0; i < size; i++) {
+ PyObject *value = *value_ptr;
+ value_ptr = (PyObject **)(((char *)value_ptr) + offset);
+ if (value != NULL) {
+ Py_INCREF(value);
+ PyList_SET_ITEM(v, j, value);
+ j++;
+ }
+ }
+ assert(j == n);
+ return v;
+}
+
+static PyObject *
+dict_items(PyDictObject *mp)
+{
+ PyObject *v;
+ Py_ssize_t i, j, n;
+ Py_ssize_t size, offset;
+ PyObject *item, *key;
+ PyDictKeyEntry *ep;
+ PyObject **value_ptr;
+
+ /* Preallocate the list of tuples, to avoid allocations during
+ * the loop over the items, which could trigger GC, which
+ * could resize the dict. :-(
+ */
+ again:
+ n = mp->ma_used;
+ v = PyList_New(n);
+ if (v == NULL)
+ return NULL;
+ for (i = 0; i < n; i++) {
+ item = PyTuple_New(2);
+ if (item == NULL) {
+ Py_DECREF(v);
+ return NULL;
+ }
+ PyList_SET_ITEM(v, i, item);
+ }
+ if (n != mp->ma_used) {
+ /* Durnit. The allocations caused the dict to resize.
+ * Just start over, this shouldn't normally happen.
+ */
+ Py_DECREF(v);
+ goto again;
+ }
+ /* Nothing we do below makes any function calls. */
+ ep = DK_ENTRIES(mp->ma_keys);
+ size = mp->ma_keys->dk_nentries;
+ if (mp->ma_values) {
+ value_ptr = mp->ma_values;
+ offset = sizeof(PyObject *);
+ }
+ else {
+ value_ptr = &ep[0].me_value;
+ offset = sizeof(PyDictKeyEntry);
+ }
+ for (i = 0, j = 0; i < size; i++) {
+ PyObject *value = *value_ptr;
+ value_ptr = (PyObject **)(((char *)value_ptr) + offset);
+ if (value != NULL) {
+ key = ep[i].me_key;
+ item = PyList_GET_ITEM(v, j);
+ Py_INCREF(key);
+ PyTuple_SET_ITEM(item, 0, key);
+ Py_INCREF(value);
+ PyTuple_SET_ITEM(item, 1, value);
+ j++;
+ }
+ }
+ assert(j == n);
+ return v;
+}
+
+/*[clinic input]
+@classmethod
+dict.fromkeys
+ iterable: object
+ value: object=None
+ /
+
+Returns a new dict with keys from iterable and values equal to value.
+[clinic start generated code]*/
+
+static PyObject *
+dict_fromkeys_impl(PyTypeObject *type, PyObject *iterable, PyObject *value)
+/*[clinic end generated code: output=8fb98e4b10384999 input=b85a667f9bf4669d]*/
+{
+ return _PyDict_FromKeys((PyObject *)type, iterable, value);
+}
+
+static int
+dict_update_common(PyObject *self, PyObject *args, PyObject *kwds,
+ const char *methname)
+{
+ PyObject *arg = NULL;
+ int result = 0;
+
+ if (!PyArg_UnpackTuple(args, methname, 0, 1, &arg))
+ result = -1;
+
+ else if (arg != NULL) {
+ _Py_IDENTIFIER(keys);
+ if (_PyObject_HasAttrId(arg, &PyId_keys))
+ result = PyDict_Merge(self, arg, 1);
+ else
+ result = PyDict_MergeFromSeq2(self, arg, 1);
+ }
+ if (result == 0 && kwds != NULL) {
+ if (PyArg_ValidateKeywordArguments(kwds))
+ result = PyDict_Merge(self, kwds, 1);
+ else
+ result = -1;
+ }
+ return result;
+}
+
+static PyObject *
+dict_update(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ if (dict_update_common(self, args, kwds, "update") != -1)
+ Py_RETURN_NONE;
+ return NULL;
+}
+
+/* Update unconditionally replaces existing items.
+ Merge has a 3rd argument 'override'; if set, it acts like Update,
+ otherwise it leaves existing items unchanged.
+
+ PyDict_{Update,Merge} update/merge from a mapping object.
+
+ PyDict_MergeFromSeq2 updates/merges from any iterable object
+ producing iterable objects of length 2.
+*/
+
+int
+PyDict_MergeFromSeq2(PyObject *d, PyObject *seq2, int override)
+{
+ PyObject *it; /* iter(seq2) */
+ Py_ssize_t i; /* index into seq2 of current element */
+ PyObject *item; /* seq2[i] */
+ PyObject *fast; /* item as a 2-tuple or 2-list */
+
+ assert(d != NULL);
+ assert(PyDict_Check(d));
+ assert(seq2 != NULL);
+
+ it = PyObject_GetIter(seq2);
+ if (it == NULL)
+ return -1;
+
+ for (i = 0; ; ++i) {
+ PyObject *key, *value;
+ Py_ssize_t n;
+
+ fast = NULL;
+ item = PyIter_Next(it);
+ if (item == NULL) {
+ if (PyErr_Occurred())
+ goto Fail;
+ break;
+ }
+
+ /* Convert item to sequence, and verify length 2. */
+ fast = PySequence_Fast(item, "");
+ if (fast == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError))
+ PyErr_Format(PyExc_TypeError,
+ "cannot convert dictionary update "
+ "sequence element #%zd to a sequence",
+ i);
+ goto Fail;
+ }
+ n = PySequence_Fast_GET_SIZE(fast);
+ if (n != 2) {
+ PyErr_Format(PyExc_ValueError,
+ "dictionary update sequence element #%zd "
+ "has length %zd; 2 is required",
+ i, n);
+ goto Fail;
+ }
+
+ /* Update/merge with this (key, value) pair. */
+ key = PySequence_Fast_GET_ITEM(fast, 0);
+ value = PySequence_Fast_GET_ITEM(fast, 1);
+ Py_INCREF(key);
+ Py_INCREF(value);
+ if (override || PyDict_GetItem(d, key) == NULL) {
+ int status = PyDict_SetItem(d, key, value);
+ if (status < 0) {
+ Py_DECREF(key);
+ Py_DECREF(value);
+ goto Fail;
+ }
+ }
+ Py_DECREF(key);
+ Py_DECREF(value);
+ Py_DECREF(fast);
+ Py_DECREF(item);
+ }
+
+ i = 0;
+ assert(_PyDict_CheckConsistency((PyDictObject *)d));
+ goto Return;
+Fail:
+ Py_XDECREF(item);
+ Py_XDECREF(fast);
+ i = -1;
+Return:
+ Py_DECREF(it);
+ return Py_SAFE_DOWNCAST(i, Py_ssize_t, int);
+}
+
+static int
+dict_merge(PyObject *a, PyObject *b, int override)
+{
+ PyDictObject *mp, *other;
+ Py_ssize_t i, n;
+ PyDictKeyEntry *entry, *ep0;
+
+ assert(0 <= override && override <= 2);
+
+ /* We accept for the argument either a concrete dictionary object,
+ * or an abstract "mapping" object. For the former, we can do
+ * things quite efficiently. For the latter, we only require that
+ * PyMapping_Keys() and PyObject_GetItem() be supported.
+ */
+ if (a == NULL || !PyDict_Check(a) || b == NULL) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ mp = (PyDictObject*)a;
+ if (PyDict_Check(b) && (Py_TYPE(b)->tp_iter == (getiterfunc)dict_iter)) {
+ other = (PyDictObject*)b;
+ if (other == mp || other->ma_used == 0)
+ /* a.update(a) or a.update({}); nothing to do */
+ return 0;
+ if (mp->ma_used == 0)
+ /* Since the target dict is empty, PyDict_GetItem()
+ * always returns NULL. Setting override to 1
+ * skips the unnecessary test.
+ */
+ override = 1;
+ /* Do one big resize at the start, rather than
+ * incrementally resizing as we insert new items. Expect
+ * that there will be no (or few) overlapping keys.
+ */
+ if (USABLE_FRACTION(mp->ma_keys->dk_size) < other->ma_used) {
+ if (dictresize(mp, ESTIMATE_SIZE(mp->ma_used + other->ma_used))) {
+ return -1;
+ }
+ }
+ ep0 = DK_ENTRIES(other->ma_keys);
+ for (i = 0, n = other->ma_keys->dk_nentries; i < n; i++) {
+ PyObject *key, *value;
+ Py_hash_t hash;
+ entry = &ep0[i];
+ key = entry->me_key;
+ hash = entry->me_hash;
+ if (other->ma_values)
+ value = other->ma_values[i];
+ else
+ value = entry->me_value;
+
+ if (value != NULL) {
+ int err = 0;
+ Py_INCREF(key);
+ Py_INCREF(value);
+ if (override == 1)
+ err = insertdict(mp, key, hash, value);
+ else if (_PyDict_GetItem_KnownHash(a, key, hash) == NULL) {
+ if (PyErr_Occurred()) {
+ Py_DECREF(value);
+ Py_DECREF(key);
+ return -1;
+ }
+ err = insertdict(mp, key, hash, value);
+ }
+ else if (override != 0) {
+ _PyErr_SetKeyError(key);
+ Py_DECREF(value);
+ Py_DECREF(key);
+ return -1;
+ }
+ Py_DECREF(value);
+ Py_DECREF(key);
+ if (err != 0)
+ return -1;
+
+ if (n != other->ma_keys->dk_nentries) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "dict mutated during update");
+ return -1;
+ }
+ }
+ }
+ }
+ else {
+ /* Do it the generic, slower way */
+ PyObject *keys = PyMapping_Keys(b);
+ PyObject *iter;
+ PyObject *key, *value;
+ int status;
+
+ if (keys == NULL)
+ /* Docstring says this is equivalent to E.keys() so
+ * if E doesn't have a .keys() method we want
+ * AttributeError to percolate up. Might as well
+ * do the same for any other error.
+ */
+ return -1;
+
+ iter = PyObject_GetIter(keys);
+ Py_DECREF(keys);
+ if (iter == NULL)
+ return -1;
+
+ for (key = PyIter_Next(iter); key; key = PyIter_Next(iter)) {
+ if (override != 1 && PyDict_GetItem(a, key) != NULL) {
+ if (override != 0) {
+ _PyErr_SetKeyError(key);
+ Py_DECREF(key);
+ Py_DECREF(iter);
+ return -1;
+ }
+ Py_DECREF(key);
+ continue;
+ }
+ value = PyObject_GetItem(b, key);
+ if (value == NULL) {
+ Py_DECREF(iter);
+ Py_DECREF(key);
+ return -1;
+ }
+ status = PyDict_SetItem(a, key, value);
+ Py_DECREF(key);
+ Py_DECREF(value);
+ if (status < 0) {
+ Py_DECREF(iter);
+ return -1;
+ }
+ }
+ Py_DECREF(iter);
+ if (PyErr_Occurred())
+ /* Iterator completed, via error */
+ return -1;
+ }
+ assert(_PyDict_CheckConsistency((PyDictObject *)a));
+ return 0;
+}
+
+int
+PyDict_Update(PyObject *a, PyObject *b)
+{
+ return dict_merge(a, b, 1);
+}
+
+int
+PyDict_Merge(PyObject *a, PyObject *b, int override)
+{
+ /* XXX Deprecate override not in (0, 1). */
+ return dict_merge(a, b, override != 0);
+}
+
+int
+_PyDict_MergeEx(PyObject *a, PyObject *b, int override)
+{
+ return dict_merge(a, b, override);
+}
+
+static PyObject *
+dict_copy(PyDictObject *mp)
+{
+ return PyDict_Copy((PyObject*)mp);
+}
+
+PyObject *
+PyDict_Copy(PyObject *o)
+{
+ PyObject *copy;
+ PyDictObject *mp;
+ Py_ssize_t i, n;
+
+ if (o == NULL || !PyDict_Check(o)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ mp = (PyDictObject *)o;
+ if (_PyDict_HasSplitTable(mp)) {
+ PyDictObject *split_copy;
+ Py_ssize_t size = USABLE_FRACTION(DK_SIZE(mp->ma_keys));
+ PyObject **newvalues;
+ newvalues = new_values(size);
+ if (newvalues == NULL)
+ return PyErr_NoMemory();
+ split_copy = PyObject_GC_New(PyDictObject, &PyDict_Type);
+ if (split_copy == NULL) {
+ free_values(newvalues);
+ return NULL;
+ }
+ split_copy->ma_values = newvalues;
+ split_copy->ma_keys = mp->ma_keys;
+ split_copy->ma_used = mp->ma_used;
+ split_copy->ma_version_tag = DICT_NEXT_VERSION();
+ DK_INCREF(mp->ma_keys);
+ for (i = 0, n = size; i < n; i++) {
+ PyObject *value = mp->ma_values[i];
+ Py_XINCREF(value);
+ split_copy->ma_values[i] = value;
+ }
+ if (_PyObject_GC_IS_TRACKED(mp))
+ _PyObject_GC_TRACK(split_copy);
+ return (PyObject *)split_copy;
+ }
+ copy = PyDict_New();
+ if (copy == NULL)
+ return NULL;
+ if (PyDict_Merge(copy, o, 1) == 0)
+ return copy;
+ Py_DECREF(copy);
+ return NULL;
+}
+
+Py_ssize_t
+PyDict_Size(PyObject *mp)
+{
+ if (mp == NULL || !PyDict_Check(mp)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ return ((PyDictObject *)mp)->ma_used;
+}
+
+PyObject *
+PyDict_Keys(PyObject *mp)
+{
+ if (mp == NULL || !PyDict_Check(mp)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ return dict_keys((PyDictObject *)mp);
+}
+
+PyObject *
+PyDict_Values(PyObject *mp)
+{
+ if (mp == NULL || !PyDict_Check(mp)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ return dict_values((PyDictObject *)mp);
+}
+
+PyObject *
+PyDict_Items(PyObject *mp)
+{
+ if (mp == NULL || !PyDict_Check(mp)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ return dict_items((PyDictObject *)mp);
+}
+
+/* Return 1 if dicts equal, 0 if not, -1 if error.
+ * Gets out as soon as any difference is detected.
+ * Uses only Py_EQ comparison.
+ */
+static int
+dict_equal(PyDictObject *a, PyDictObject *b)
+{
+ Py_ssize_t i;
+
+ if (a->ma_used != b->ma_used)
+ /* can't be equal if # of entries differ */
+ return 0;
+ /* Same # of entries -- check all of 'em. Exit early on any diff. */
+ for (i = 0; i < a->ma_keys->dk_nentries; i++) {
+ PyDictKeyEntry *ep = &DK_ENTRIES(a->ma_keys)[i];
+ PyObject *aval;
+ if (a->ma_values)
+ aval = a->ma_values[i];
+ else
+ aval = ep->me_value;
+ if (aval != NULL) {
+ int cmp;
+ PyObject *bval;
+ PyObject **vaddr;
+ PyObject *key = ep->me_key;
+ /* temporarily bump aval's refcount to ensure it stays
+ alive until we're done with it */
+ Py_INCREF(aval);
+ /* ditto for key */
+ Py_INCREF(key);
+ /* reuse the known hash value */
+ if ((b->ma_keys->dk_lookup)(b, key, ep->me_hash, &vaddr, NULL) < 0)
+ bval = NULL;
+ else
+ bval = *vaddr;
+ if (bval == NULL) {
+ Py_DECREF(key);
+ Py_DECREF(aval);
+ if (PyErr_Occurred())
+ return -1;
+ return 0;
+ }
+ cmp = PyObject_RichCompareBool(aval, bval, Py_EQ);
+ Py_DECREF(key);
+ Py_DECREF(aval);
+ if (cmp <= 0) /* error or not equal */
+ return cmp;
+ }
+ }
+ return 1;
+}
+
+static PyObject *
+dict_richcompare(PyObject *v, PyObject *w, int op)
+{
+ int cmp;
+ PyObject *res;
+
+ if (!PyDict_Check(v) || !PyDict_Check(w)) {
+ res = Py_NotImplemented;
+ }
+ else if (op == Py_EQ || op == Py_NE) {
+ cmp = dict_equal((PyDictObject *)v, (PyDictObject *)w);
+ if (cmp < 0)
+ return NULL;
+ res = (cmp == (op == Py_EQ)) ? Py_True : Py_False;
+ }
+ else
+ res = Py_NotImplemented;
+ Py_INCREF(res);
+ return res;
+}
+
+/*[clinic input]
+
+@coexist
+dict.__contains__
+
+ key: object
+ /
+
+True if D has a key k, else False.
+[clinic start generated code]*/
+
+static PyObject *
+dict___contains__(PyDictObject *self, PyObject *key)
+/*[clinic end generated code: output=a3d03db709ed6e6b input=b852b2a19b51ab24]*/
+{
+ register PyDictObject *mp = self;
+ Py_hash_t hash;
+ Py_ssize_t ix;
+ PyObject **value_addr;
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return NULL;
+ if (ix == DKIX_EMPTY || *value_addr == NULL)
+ Py_RETURN_FALSE;
+ Py_RETURN_TRUE;
+}
+
+static PyObject *
+dict_get(PyDictObject *mp, PyObject *args)
+{
+ PyObject *key;
+ PyObject *failobj = Py_None;
+ PyObject *val = NULL;
+ Py_hash_t hash;
+ Py_ssize_t ix;
+ PyObject **value_addr;
+
+ if (!PyArg_UnpackTuple(args, "get", 1, 2, &key, &failobj))
+ return NULL;
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return NULL;
+ if (ix == DKIX_EMPTY || *value_addr == NULL)
+ val = failobj;
+ else
+ val = *value_addr;
+ Py_INCREF(val);
+ return val;
+}
+
+PyObject *
+PyDict_SetDefault(PyObject *d, PyObject *key, PyObject *defaultobj)
+{
+ PyDictObject *mp = (PyDictObject *)d;
+ PyObject *value;
+ Py_hash_t hash;
+ Py_ssize_t hashpos, ix;
+ PyObject **value_addr;
+
+ if (!PyDict_Check(d)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return NULL;
+ }
+
+ if (mp->ma_values != NULL && !PyUnicode_CheckExact(key)) {
+ if (insertion_resize(mp) < 0)
+ return NULL;
+ }
+
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
+ if (ix == DKIX_ERROR)
+ return NULL;
+
+ if (_PyDict_HasSplitTable(mp) &&
+ ((ix >= 0 && *value_addr == NULL && mp->ma_used != ix) ||
+ (ix == DKIX_EMPTY && mp->ma_used != mp->ma_keys->dk_nentries))) {
+ if (insertion_resize(mp) < 0) {
+ return NULL;
+ }
+ find_empty_slot(mp, key, hash, &value_addr, &hashpos);
+ ix = DKIX_EMPTY;
+ }
+
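+ /* The insertion below mirrors insertdict(), but keeps the stored value at
+ * hand so the caller receives either the existing value or defaultobj. */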
+ if (ix == DKIX_EMPTY) {
+ PyDictKeyEntry *ep, *ep0;
+ value = defaultobj;
+ if (mp->ma_keys->dk_usable <= 0) {
+ if (insertion_resize(mp) < 0) {
+ return NULL;
+ }
+ find_empty_slot(mp, key, hash, &value_addr, &hashpos);
+ }
+ ep0 = DK_ENTRIES(mp->ma_keys);
+ ep = &ep0[mp->ma_keys->dk_nentries];
+ dk_set_index(mp->ma_keys, hashpos, mp->ma_keys->dk_nentries);
+ Py_INCREF(key);
+ Py_INCREF(value);
+ MAINTAIN_TRACKING(mp, key, value);
+ ep->me_key = key;
+ ep->me_hash = hash;
+ if (mp->ma_values) {
+ assert(mp->ma_values[mp->ma_keys->dk_nentries] == NULL);
+ mp->ma_values[mp->ma_keys->dk_nentries] = value;
+ }
+ else {
+ ep->me_value = value;
+ }
+ mp->ma_used++;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ mp->ma_keys->dk_usable--;
+ mp->ma_keys->dk_nentries++;
+ assert(mp->ma_keys->dk_usable >= 0);
+ }
+ else if (*value_addr == NULL) {
+ value = defaultobj;
+ assert(_PyDict_HasSplitTable(mp));
+ assert(ix == mp->ma_used);
+ Py_INCREF(value);
+ MAINTAIN_TRACKING(mp, key, value);
+ *value_addr = value;
+ mp->ma_used++;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ }
+ else {
+ value = *value_addr;
+ }
+
+ assert(_PyDict_CheckConsistency(mp));
+ return value;
+}
+
+static PyObject *
+dict_setdefault(PyDictObject *mp, PyObject *args)
+{
+ PyObject *key, *val;
+ PyObject *defaultobj = Py_None;
+
+ if (!PyArg_UnpackTuple(args, "setdefault", 1, 2, &key, &defaultobj))
+ return NULL;
+
+ val = PyDict_SetDefault((PyObject *)mp, key, defaultobj);
+ Py_XINCREF(val);
+ return val;
+}
+
+static PyObject *
+dict_clear(PyDictObject *mp)
+{
+ PyDict_Clear((PyObject *)mp);
+ Py_RETURN_NONE;
+}
+
+static PyObject *
+dict_pop(PyDictObject *mp, PyObject *args)
+{
+ PyObject *key, *deflt = NULL;
+
+ if (!PyArg_UnpackTuple(args, "pop", 1, 2, &key, &deflt))
+ return NULL;
+
+ return _PyDict_Pop((PyObject*)mp, key, deflt);
+}
+
+static PyObject *
+dict_popitem(PyDictObject *mp)
+{
+ Py_ssize_t i, j;
+ PyDictKeyEntry *ep0, *ep;
+ PyObject *res;
+
+ /* Allocate the result tuple before checking the size. Believe it
+ * or not, this allocation could trigger a garbage collection which
+ * could empty the dict, so if we checked the size first and that
+ * happened, the result would be an infinite loop (searching for an
+ * entry that no longer exists). Note that the usual popitem()
+ * idiom is "while d: k, v = d.popitem()", so needing to throw the
+ * tuple away if the dict *is* empty isn't a significant
+ * inefficiency -- possible, but unlikely in practice.
+ */
+ res = PyTuple_New(2);
+ if (res == NULL)
+ return NULL;
+ if (mp->ma_used == 0) {
+ Py_DECREF(res);
+ PyErr_SetString(PyExc_KeyError,
+ "popitem(): dictionary is empty");
+ return NULL;
+ }
+ /* Convert split table to combined table */
+ if (mp->ma_keys->dk_lookup == lookdict_split) {
+ if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
+ Py_DECREF(res);
+ return NULL;
+ }
+ }
+ ENSURE_ALLOWS_DELETIONS(mp);
+
+ /* Pop last item */
+ ep0 = DK_ENTRIES(mp->ma_keys);
+ i = mp->ma_keys->dk_nentries - 1;
+ while (i >= 0 && ep0[i].me_value == NULL) {
+ i--;
+ }
+ assert(i >= 0);
+
+ ep = &ep0[i];
+ j = lookdict_index(mp->ma_keys, ep->me_hash, i);
+ assert(j >= 0);
+ assert(dk_get_index(mp->ma_keys, j) == i);
+ dk_set_index(mp->ma_keys, j, DKIX_DUMMY);
+
+ PyTuple_SET_ITEM(res, 0, ep->me_key);
+ PyTuple_SET_ITEM(res, 1, ep->me_value);
+ ep->me_key = NULL;
+ ep->me_value = NULL;
+ /* We can't do dk_usable++ since there is a DKIX_DUMMY left in the indices */
+ mp->ma_keys->dk_nentries = i;
+ mp->ma_used--;
+ mp->ma_version_tag = DICT_NEXT_VERSION();
+ assert(_PyDict_CheckConsistency(mp));
+ return res;
+}
+
+static int
+dict_traverse(PyObject *op, visitproc visit, void *arg)
+{
+ PyDictObject *mp = (PyDictObject *)op;
+ PyDictKeysObject *keys = mp->ma_keys;
+ PyDictKeyEntry *entries = DK_ENTRIES(keys);
+ Py_ssize_t i, n = keys->dk_nentries;
+
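+ /* Keys need to be visited only for the general-purpose lookdict table:
+ * unicode-only tables hold exact str keys, which cannot take part in a
+ * reference cycle, so for them only the values are visited. */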
+ if (keys->dk_lookup == lookdict) {
+ for (i = 0; i < n; i++) {
+ if (entries[i].me_value != NULL) {
+ Py_VISIT(entries[i].me_value);
+ Py_VISIT(entries[i].me_key);
+ }
+ }
+ }
+ else {
+ if (mp->ma_values != NULL) {
+ for (i = 0; i < n; i++) {
+ Py_VISIT(mp->ma_values[i]);
+ }
+ }
+ else {
+ for (i = 0; i < n; i++) {
+ Py_VISIT(entries[i].me_value);
+ }
+ }
+ }
+ return 0;
+}
+
+static int
+dict_tp_clear(PyObject *op)
+{
+ PyDict_Clear(op);
+ return 0;
+}
+
+static PyObject *dictiter_new(PyDictObject *, PyTypeObject *);
+
+Py_ssize_t
+_PyDict_SizeOf(PyDictObject *mp)
+{
+ Py_ssize_t size, usable, res;
+
+ size = DK_SIZE(mp->ma_keys);
+ usable = USABLE_FRACTION(size);
+
+ res = _PyObject_SIZE(Py_TYPE(mp));
+ if (mp->ma_values)
+ res += usable * sizeof(PyObject*);
+ /* If the dictionary is split, the keys portion is accounted-for
+ in the type object. */
+ if (mp->ma_keys->dk_refcnt == 1)
+ res += (sizeof(PyDictKeysObject)
+ + DK_IXSIZE(mp->ma_keys) * size
+ + sizeof(PyDictKeyEntry) * usable);
+ return res;
+}
+
+Py_ssize_t
+_PyDict_KeysSize(PyDictKeysObject *keys)
+{
+ return (sizeof(PyDictKeysObject)
+ + DK_IXSIZE(keys) * DK_SIZE(keys)
+ + USABLE_FRACTION(DK_SIZE(keys)) * sizeof(PyDictKeyEntry));
+}
+
+static PyObject *
+dict_sizeof(PyDictObject *mp)
+{
+ return PyLong_FromSsize_t(_PyDict_SizeOf(mp));
+}
+
+PyDoc_STRVAR(getitem__doc__, "x.__getitem__(y) <==> x[y]");
+
+PyDoc_STRVAR(sizeof__doc__,
+"D.__sizeof__() -> size of D in memory, in bytes");
+
+PyDoc_STRVAR(get__doc__,
+"D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.");
+
+PyDoc_STRVAR(setdefault_doc__,
+"D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D");
+
+PyDoc_STRVAR(pop__doc__,
+"D.pop(k[,d]) -> v, remove specified key and return the corresponding value.\n\
+If key is not found, d is returned if given, otherwise KeyError is raised");
+
+PyDoc_STRVAR(popitem__doc__,
+"D.popitem() -> (k, v), remove and return some (key, value) pair as a\n\
+2-tuple; but raise KeyError if D is empty.");
+
+PyDoc_STRVAR(update__doc__,
+"D.update([E, ]**F) -> None. Update D from dict/iterable E and F.\n\
+If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\n\
+If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\n\
+In either case, this is followed by: for k in F: D[k] = F[k]");
+
+PyDoc_STRVAR(clear__doc__,
+"D.clear() -> None. Remove all items from D.");
+
+PyDoc_STRVAR(copy__doc__,
+"D.copy() -> a shallow copy of D");
+
+/* Forward */
+static PyObject *dictkeys_new(PyObject *);
+static PyObject *dictitems_new(PyObject *);
+static PyObject *dictvalues_new(PyObject *);
+
+PyDoc_STRVAR(keys__doc__,
+ "D.keys() -> a set-like object providing a view on D's keys");
+PyDoc_STRVAR(items__doc__,
+ "D.items() -> a set-like object providing a view on D's items");
+PyDoc_STRVAR(values__doc__,
+ "D.values() -> an object providing a view on D's values");
+
+static PyMethodDef mapp_methods[] = {
+ DICT___CONTAINS___METHODDEF
+ {"__getitem__", (PyCFunction)dict_subscript, METH_O | METH_COEXIST,
+ getitem__doc__},
+ {"__sizeof__", (PyCFunction)dict_sizeof, METH_NOARGS,
+ sizeof__doc__},
+ {"get", (PyCFunction)dict_get, METH_VARARGS,
+ get__doc__},
+ {"setdefault", (PyCFunction)dict_setdefault, METH_VARARGS,
+ setdefault_doc__},
+ {"pop", (PyCFunction)dict_pop, METH_VARARGS,
+ pop__doc__},
+ {"popitem", (PyCFunction)dict_popitem, METH_NOARGS,
+ popitem__doc__},
+ {"keys", (PyCFunction)dictkeys_new, METH_NOARGS,
+ keys__doc__},
+ {"items", (PyCFunction)dictitems_new, METH_NOARGS,
+ items__doc__},
+ {"values", (PyCFunction)dictvalues_new, METH_NOARGS,
+ values__doc__},
+ {"update", (PyCFunction)dict_update, METH_VARARGS | METH_KEYWORDS,
+ update__doc__},
+ DICT_FROMKEYS_METHODDEF
+ {"clear", (PyCFunction)dict_clear, METH_NOARGS,
+ clear__doc__},
+ {"copy", (PyCFunction)dict_copy, METH_NOARGS,
+ copy__doc__},
+ {NULL, NULL} /* sentinel */
+};
+
+/* Return 1 if `key` is in dict `op`, 0 if not, and -1 on error. */
+int
+PyDict_Contains(PyObject *op, PyObject *key)
+{
+ Py_hash_t hash;
+ Py_ssize_t ix;
+ PyDictObject *mp = (PyDictObject *)op;
+ PyObject **value_addr;
+
+ if (!PyUnicode_CheckExact(key) ||
+ (hash = ((PyASCIIObject *) key)->hash) == -1) {
+ hash = PyObject_Hash(key);
+ if (hash == -1)
+ return -1;
+ }
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return -1;
+ return (ix != DKIX_EMPTY && *value_addr != NULL);
+}
+
+/* Internal version of PyDict_Contains used when the hash value is already known */
+int
+_PyDict_Contains(PyObject *op, PyObject *key, Py_hash_t hash)
+{
+ PyDictObject *mp = (PyDictObject *)op;
+ PyObject **value_addr;
+ Py_ssize_t ix;
+
+ ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
+ if (ix == DKIX_ERROR)
+ return -1;
+ return (ix != DKIX_EMPTY && *value_addr != NULL);
+}
+
+/* Hack to implement "key in dict" */
+static PySequenceMethods dict_as_sequence = {
+ 0, /* sq_length */
+ 0, /* sq_concat */
+ 0, /* sq_repeat */
+ 0, /* sq_item */
+ 0, /* sq_slice */
+ 0, /* sq_ass_item */
+ 0, /* sq_ass_slice */
+ PyDict_Contains, /* sq_contains */
+ 0, /* sq_inplace_concat */
+ 0, /* sq_inplace_repeat */
+};
+
+static PyObject *
+dict_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *self;
+ PyDictObject *d;
+
+ assert(type != NULL && type->tp_alloc != NULL);
+ self = type->tp_alloc(type, 0);
+ if (self == NULL)
+ return NULL;
+ d = (PyDictObject *)self;
+
+ /* The object has been implicitly tracked by tp_alloc */
+ if (type == &PyDict_Type)
+ _PyObject_GC_UNTRACK(d);
+
+ d->ma_used = 0;
+ d->ma_version_tag = DICT_NEXT_VERSION();
+ d->ma_keys = new_keys_object(PyDict_MINSIZE);
+ if (d->ma_keys == NULL) {
+ Py_DECREF(self);
+ return NULL;
+ }
+ assert(_PyDict_CheckConsistency(d));
+ return self;
+}
+
+static int
+dict_init(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ return dict_update_common(self, args, kwds, "dict");
+}
+
+static PyObject *
+dict_iter(PyDictObject *dict)
+{
+ return dictiter_new(dict, &PyDictIterKey_Type);
+}
+
+PyDoc_STRVAR(dictionary_doc,
+"dict() -> new empty dictionary\n"
+"dict(mapping) -> new dictionary initialized from a mapping object's\n"
+" (key, value) pairs\n"
+"dict(iterable) -> new dictionary initialized as if via:\n"
+" d = {}\n"
+" for k, v in iterable:\n"
+" d[k] = v\n"
+"dict(**kwargs) -> new dictionary initialized with the name=value pairs\n"
+" in the keyword argument list. For example: dict(one=1, two=2)");
+
+PyTypeObject PyDict_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict",
+ sizeof(PyDictObject),
+ 0,
+ (destructor)dict_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)dict_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ &dict_as_sequence, /* tp_as_sequence */
+ &dict_as_mapping, /* tp_as_mapping */
+ PyObject_HashNotImplemented, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_BASETYPE | Py_TPFLAGS_DICT_SUBCLASS, /* tp_flags */
+ dictionary_doc, /* tp_doc */
+ dict_traverse, /* tp_traverse */
+ dict_tp_clear, /* tp_clear */
+ dict_richcompare, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ (getiterfunc)dict_iter, /* tp_iter */
+ 0, /* tp_iternext */
+ mapp_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ dict_init, /* tp_init */
+ PyType_GenericAlloc, /* tp_alloc */
+ dict_new, /* tp_new */
+ PyObject_GC_Del, /* tp_free */
+};
+
+PyObject *
+_PyDict_GetItemId(PyObject *dp, struct _Py_Identifier *key)
+{
+ PyObject *kv;
+ kv = _PyUnicode_FromId(key); /* borrowed */
+ if (kv == NULL) {
+ PyErr_Clear();
+ return NULL;
+ }
+ return PyDict_GetItem(dp, kv);
+}
+
+/* For backward compatibility with old dictionary interface */
+
+PyObject *
+PyDict_GetItemString(PyObject *v, const char *key)
+{
+ PyObject *kv, *rv;
+ kv = PyUnicode_FromString(key);
+ if (kv == NULL) {
+ PyErr_Clear();
+ return NULL;
+ }
+ rv = PyDict_GetItem(v, kv);
+ Py_DECREF(kv);
+ return rv;
+}
+
+int
+_PyDict_SetItemId(PyObject *v, struct _Py_Identifier *key, PyObject *item)
+{
+ PyObject *kv;
+ kv = _PyUnicode_FromId(key); /* borrowed */
+ if (kv == NULL)
+ return -1;
+ return PyDict_SetItem(v, kv, item);
+}
+
+int
+PyDict_SetItemString(PyObject *v, const char *key, PyObject *item)
+{
+ PyObject *kv;
+ int err;
+ kv = PyUnicode_FromString(key);
+ if (kv == NULL)
+ return -1;
+ PyUnicode_InternInPlace(&kv); /* XXX Should we really? */
+ err = PyDict_SetItem(v, kv, item);
+ Py_DECREF(kv);
+ return err;
+}
+
+int
+_PyDict_DelItemId(PyObject *v, _Py_Identifier *key)
+{
+ PyObject *kv = _PyUnicode_FromId(key); /* borrowed */
+ if (kv == NULL)
+ return -1;
+ return PyDict_DelItem(v, kv);
+}
+
+int
+PyDict_DelItemString(PyObject *v, const char *key)
+{
+ PyObject *kv;
+ int err;
+ kv = PyUnicode_FromString(key);
+ if (kv == NULL)
+ return -1;
+ err = PyDict_DelItem(v, kv);
+ Py_DECREF(kv);
+ return err;
+}
+
+/* Dictionary iterator types */
+
+typedef struct {
+ PyObject_HEAD
+ PyDictObject *di_dict; /* Set to NULL when iterator is exhausted */
+ Py_ssize_t di_used;
+ Py_ssize_t di_pos;
+ PyObject* di_result; /* reusable result tuple for iteritems */
+ Py_ssize_t len;
+} dictiterobject;
+
+static PyObject *
+dictiter_new(PyDictObject *dict, PyTypeObject *itertype)
+{
+ dictiterobject *di;
+ di = PyObject_GC_New(dictiterobject, itertype);
+ if (di == NULL)
+ return NULL;
+ Py_INCREF(dict);
+ di->di_dict = dict;
+ di->di_used = dict->ma_used;
+ di->di_pos = 0;
+ di->len = dict->ma_used;
+ if (itertype == &PyDictIterItem_Type) {
+ di->di_result = PyTuple_Pack(2, Py_None, Py_None);
+ if (di->di_result == NULL) {
+ Py_DECREF(di);
+ return NULL;
+ }
+ }
+ else
+ di->di_result = NULL;
+ _PyObject_GC_TRACK(di);
+ return (PyObject *)di;
+}
+
+static void
+dictiter_dealloc(dictiterobject *di)
+{
+ /* bpo-31095: UnTrack is needed before calling any callbacks */
+ _PyObject_GC_UNTRACK(di);
+ Py_XDECREF(di->di_dict);
+ Py_XDECREF(di->di_result);
+ PyObject_GC_Del(di);
+}
+
+static int
+dictiter_traverse(dictiterobject *di, visitproc visit, void *arg)
+{
+ Py_VISIT(di->di_dict);
+ Py_VISIT(di->di_result);
+ return 0;
+}
+
+static PyObject *
+dictiter_len(dictiterobject *di)
+{
+ Py_ssize_t len = 0;
+ if (di->di_dict != NULL && di->di_used == di->di_dict->ma_used)
+ len = di->len;
+ return PyLong_FromSize_t(len);
+}
+
+PyDoc_STRVAR(length_hint_doc,
+ "Private method returning an estimate of len(list(it)).");
+
+static PyObject *
+dictiter_reduce(dictiterobject *di);
+
+PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
+
+static PyMethodDef dictiter_methods[] = {
+ {"__length_hint__", (PyCFunction)dictiter_len, METH_NOARGS,
+ length_hint_doc},
+ {"__reduce__", (PyCFunction)dictiter_reduce, METH_NOARGS,
+ reduce_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+static PyObject*
+dictiter_iternextkey(dictiterobject *di)
+{
+ PyObject *key;
+ Py_ssize_t i, n;
+ PyDictKeysObject *k;
+ PyDictObject *d = di->di_dict;
+
+ if (d == NULL)
+ return NULL;
+ assert (PyDict_Check(d));
+
+ if (di->di_used != d->ma_used) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "dictionary changed size during iteration");
+ di->di_used = -1; /* Make this state sticky */
+ return NULL;
+ }
+
+ i = di->di_pos;
+ k = d->ma_keys;
+ n = k->dk_nentries;
+ if (d->ma_values) {
+ PyObject **value_ptr = &d->ma_values[i];
+ while (i < n && *value_ptr == NULL) {
+ value_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ key = DK_ENTRIES(k)[i].me_key;
+ }
+ else {
+ PyDictKeyEntry *entry_ptr = &DK_ENTRIES(k)[i];
+ while (i < n && entry_ptr->me_value == NULL) {
+ entry_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ key = entry_ptr->me_key;
+ }
+ di->di_pos = i+1;
+ di->len--;
+ Py_INCREF(key);
+ return key;
+
+fail:
+ di->di_dict = NULL;
+ Py_DECREF(d);
+ return NULL;
+}
+
+PyTypeObject PyDictIterKey_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_keyiterator", /* tp_name */
+ sizeof(dictiterobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictiter_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictiter_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)dictiter_iternextkey, /* tp_iternext */
+ dictiter_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+dictiter_iternextvalue(dictiterobject *di)
+{
+ PyObject *value;
+ Py_ssize_t i, n;
+ PyDictObject *d = di->di_dict;
+
+ if (d == NULL)
+ return NULL;
+ assert (PyDict_Check(d));
+
+ if (di->di_used != d->ma_used) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "dictionary changed size during iteration");
+ di->di_used = -1; /* Make this state sticky */
+ return NULL;
+ }
+
+ i = di->di_pos;
+ n = d->ma_keys->dk_nentries;
+ if (d->ma_values) {
+ PyObject **value_ptr = &d->ma_values[i];
+ while (i < n && *value_ptr == NULL) {
+ value_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ value = *value_ptr;
+ }
+ else {
+ PyDictKeyEntry *entry_ptr = &DK_ENTRIES(d->ma_keys)[i];
+ while (i < n && entry_ptr->me_value == NULL) {
+ entry_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ value = entry_ptr->me_value;
+ }
+ di->di_pos = i+1;
+ di->len--;
+ Py_INCREF(value);
+ return value;
+
+fail:
+ di->di_dict = NULL;
+ Py_DECREF(d);
+ return NULL;
+}
+
+PyTypeObject PyDictIterValue_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_valueiterator", /* tp_name */
+ sizeof(dictiterobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictiter_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictiter_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)dictiter_iternextvalue, /* tp_iternext */
+ dictiter_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+dictiter_iternextitem(dictiterobject *di)
+{
+ PyObject *key, *value, *result;
+ Py_ssize_t i, n;
+ PyDictObject *d = di->di_dict;
+
+ if (d == NULL)
+ return NULL;
+ assert (PyDict_Check(d));
+
+ if (di->di_used != d->ma_used) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "dictionary changed size during iteration");
+ di->di_used = -1; /* Make this state sticky */
+ return NULL;
+ }
+
+ i = di->di_pos;
+ n = d->ma_keys->dk_nentries;
+ if (d->ma_values) {
+ PyObject **value_ptr = &d->ma_values[i];
+ while (i < n && *value_ptr == NULL) {
+ value_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ key = DK_ENTRIES(d->ma_keys)[i].me_key;
+ value = *value_ptr;
+ }
+ else {
+ PyDictKeyEntry *entry_ptr = &DK_ENTRIES(d->ma_keys)[i];
+ while (i < n && entry_ptr->me_value == NULL) {
+ entry_ptr++;
+ i++;
+ }
+ if (i >= n)
+ goto fail;
+ key = entry_ptr->me_key;
+ value = entry_ptr->me_value;
+ }
+ di->di_pos = i+1;
+ di->len--;
+ Py_INCREF(key);
+ Py_INCREF(value);
+ result = di->di_result;
+ if (Py_REFCNT(result) == 1) {
+ PyObject *oldkey = PyTuple_GET_ITEM(result, 0);
+ PyObject *oldvalue = PyTuple_GET_ITEM(result, 1);
+ PyTuple_SET_ITEM(result, 0, key); /* steals reference */
+ PyTuple_SET_ITEM(result, 1, value); /* steals reference */
+ Py_INCREF(result);
+ Py_DECREF(oldkey);
+ Py_DECREF(oldvalue);
+ }
+ else {
+ result = PyTuple_New(2);
+ if (result == NULL)
+ return NULL;
+ PyTuple_SET_ITEM(result, 0, key); /* steals reference */
+ PyTuple_SET_ITEM(result, 1, value); /* steals reference */
+ }
+ return result;
+
+fail:
+ di->di_dict = NULL;
+ Py_DECREF(d);
+ return NULL;
+}
+
+PyTypeObject PyDictIterItem_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_itemiterator", /* tp_name */
+ sizeof(dictiterobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictiter_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictiter_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)dictiter_iternextitem, /* tp_iternext */
+ dictiter_methods, /* tp_methods */
+ 0,
+};
+
+
+static PyObject *
+dictiter_reduce(dictiterobject *di)
+{
+ PyObject *list;
+ dictiterobject tmp;
+
+ list = PyList_New(0);
+ if (!list)
+ return NULL;
+
+    /* copy the iterator state */
+ tmp = *di;
+ Py_XINCREF(tmp.di_dict);
+
+ /* iterate the temporary into a list */
+ for(;;) {
+ PyObject *element = 0;
+ if (Py_TYPE(di) == &PyDictIterItem_Type)
+ element = dictiter_iternextitem(&tmp);
+ else if (Py_TYPE(di) == &PyDictIterKey_Type)
+ element = dictiter_iternextkey(&tmp);
+ else if (Py_TYPE(di) == &PyDictIterValue_Type)
+ element = dictiter_iternextvalue(&tmp);
+ else
+ assert(0);
+ if (element) {
+ if (PyList_Append(list, element)) {
+ Py_DECREF(element);
+ Py_DECREF(list);
+ Py_XDECREF(tmp.di_dict);
+ return NULL;
+ }
+ Py_DECREF(element);
+ } else
+ break;
+ }
+ Py_XDECREF(tmp.di_dict);
+ /* check for error */
+ if (tmp.di_dict != NULL) {
+ /* we have an error */
+ Py_DECREF(list);
+ return NULL;
+ }
+ return Py_BuildValue("N(N)", _PyObject_GetBuiltin("iter"), list);
+}
+
+/***********************************************/
+/* View objects for keys(), items(), values(). */
+/***********************************************/
+
+/* The instance lay-out is the same for all three; but the type differs. */
+
+static void
+dictview_dealloc(_PyDictViewObject *dv)
+{
+ /* bpo-31095: UnTrack is needed before calling any callbacks */
+ _PyObject_GC_UNTRACK(dv);
+ Py_XDECREF(dv->dv_dict);
+ PyObject_GC_Del(dv);
+}
+
+static int
+dictview_traverse(_PyDictViewObject *dv, visitproc visit, void *arg)
+{
+ Py_VISIT(dv->dv_dict);
+ return 0;
+}
+
+static Py_ssize_t
+dictview_len(_PyDictViewObject *dv)
+{
+ Py_ssize_t len = 0;
+ if (dv->dv_dict != NULL)
+ len = dv->dv_dict->ma_used;
+ return len;
+}
+
+PyObject *
+_PyDictView_New(PyObject *dict, PyTypeObject *type)
+{
+ _PyDictViewObject *dv;
+ if (dict == NULL) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (!PyDict_Check(dict)) {
+ /* XXX Get rid of this restriction later */
+ PyErr_Format(PyExc_TypeError,
+ "%s() requires a dict argument, not '%s'",
+ type->tp_name, dict->ob_type->tp_name);
+ return NULL;
+ }
+ dv = PyObject_GC_New(_PyDictViewObject, type);
+ if (dv == NULL)
+ return NULL;
+ Py_INCREF(dict);
+ dv->dv_dict = (PyDictObject *)dict;
+ _PyObject_GC_TRACK(dv);
+ return (PyObject *)dv;
+}
+
+/* TODO(guido): The view objects are not complete:
+
+ * support more set operations
+ * support arbitrary mappings?
+ - either these should be static or exported in dictobject.h
+ - if public then they should probably be in builtins
+*/
+
+/* Return 1 if self is a subset of other, iterating over self;
+ 0 if not; -1 if an error occurred. */
+static int
+all_contained_in(PyObject *self, PyObject *other)
+{
+ PyObject *iter = PyObject_GetIter(self);
+ int ok = 1;
+
+ if (iter == NULL)
+ return -1;
+ for (;;) {
+ PyObject *next = PyIter_Next(iter);
+ if (next == NULL) {
+ if (PyErr_Occurred())
+ ok = -1;
+ break;
+ }
+ ok = PySequence_Contains(other, next);
+ Py_DECREF(next);
+ if (ok <= 0)
+ break;
+ }
+ Py_DECREF(iter);
+ return ok;
+}
+
+static PyObject *
+dictview_richcompare(PyObject *self, PyObject *other, int op)
+{
+ Py_ssize_t len_self, len_other;
+ int ok;
+ PyObject *result;
+
+ assert(self != NULL);
+ assert(PyDictViewSet_Check(self));
+ assert(other != NULL);
+
+ if (!PyAnySet_Check(other) && !PyDictViewSet_Check(other))
+ Py_RETURN_NOTIMPLEMENTED;
+
+ len_self = PyObject_Size(self);
+ if (len_self < 0)
+ return NULL;
+ len_other = PyObject_Size(other);
+ if (len_other < 0)
+ return NULL;
+
+ ok = 0;
+ switch(op) {
+
+ case Py_NE:
+ case Py_EQ:
+ if (len_self == len_other)
+ ok = all_contained_in(self, other);
+ if (op == Py_NE && ok >= 0)
+ ok = !ok;
+ break;
+
+ case Py_LT:
+ if (len_self < len_other)
+ ok = all_contained_in(self, other);
+ break;
+
+ case Py_LE:
+ if (len_self <= len_other)
+ ok = all_contained_in(self, other);
+ break;
+
+ case Py_GT:
+ if (len_self > len_other)
+ ok = all_contained_in(other, self);
+ break;
+
+ case Py_GE:
+ if (len_self >= len_other)
+ ok = all_contained_in(other, self);
+ break;
+
+ }
+ if (ok < 0)
+ return NULL;
+ result = ok ? Py_True : Py_False;
+ Py_INCREF(result);
+ return result;
+}
+
+static PyObject *
+dictview_repr(_PyDictViewObject *dv)
+{
+ PyObject *seq;
+ PyObject *result = NULL;
+ Py_ssize_t rc;
+
+ rc = Py_ReprEnter((PyObject *)dv);
+ if (rc != 0) {
+ return rc > 0 ? PyUnicode_FromString("...") : NULL;
+ }
+ seq = PySequence_List((PyObject *)dv);
+ if (seq == NULL) {
+ goto Done;
+ }
+ result = PyUnicode_FromFormat("%s(%R)", Py_TYPE(dv)->tp_name, seq);
+ Py_DECREF(seq);
+
+Done:
+ Py_ReprLeave((PyObject *)dv);
+ return result;
+}
+
+/*** dict_keys ***/
+
+static PyObject *
+dictkeys_iter(_PyDictViewObject *dv)
+{
+ if (dv->dv_dict == NULL) {
+ Py_RETURN_NONE;
+ }
+ return dictiter_new(dv->dv_dict, &PyDictIterKey_Type);
+}
+
+static int
+dictkeys_contains(_PyDictViewObject *dv, PyObject *obj)
+{
+ if (dv->dv_dict == NULL)
+ return 0;
+ return PyDict_Contains((PyObject *)dv->dv_dict, obj);
+}
+
+static PySequenceMethods dictkeys_as_sequence = {
+ (lenfunc)dictview_len, /* sq_length */
+ 0, /* sq_concat */
+ 0, /* sq_repeat */
+ 0, /* sq_item */
+ 0, /* sq_slice */
+ 0, /* sq_ass_item */
+ 0, /* sq_ass_slice */
+ (objobjproc)dictkeys_contains, /* sq_contains */
+};
+
+static PyObject*
+dictviews_sub(PyObject* self, PyObject *other)
+{
+ PyObject *result = PySet_New(self);
+ PyObject *tmp;
+ _Py_IDENTIFIER(difference_update);
+
+ if (result == NULL)
+ return NULL;
+
+ tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_difference_update, other, NULL);
+ if (tmp == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ Py_DECREF(tmp);
+ return result;
+}
+
+PyObject*
+_PyDictView_Intersect(PyObject* self, PyObject *other)
+{
+ PyObject *result = PySet_New(self);
+ PyObject *tmp;
+ _Py_IDENTIFIER(intersection_update);
+
+ if (result == NULL)
+ return NULL;
+
+ tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_intersection_update, other, NULL);
+ if (tmp == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ Py_DECREF(tmp);
+ return result;
+}
+
+static PyObject*
+dictviews_or(PyObject* self, PyObject *other)
+{
+ PyObject *result = PySet_New(self);
+ PyObject *tmp;
+ _Py_IDENTIFIER(update);
+
+ if (result == NULL)
+ return NULL;
+
+ tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_update, other, NULL);
+ if (tmp == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ Py_DECREF(tmp);
+ return result;
+}
+
+static PyObject*
+dictviews_xor(PyObject* self, PyObject *other)
+{
+ PyObject *result = PySet_New(self);
+ PyObject *tmp;
+ _Py_IDENTIFIER(symmetric_difference_update);
+
+ if (result == NULL)
+ return NULL;
+
+ tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_symmetric_difference_update, other, NULL);
+ if (tmp == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ Py_DECREF(tmp);
+ return result;
+}
+
+static PyNumberMethods dictviews_as_number = {
+ 0, /*nb_add*/
+ (binaryfunc)dictviews_sub, /*nb_subtract*/
+ 0, /*nb_multiply*/
+ 0, /*nb_remainder*/
+ 0, /*nb_divmod*/
+ 0, /*nb_power*/
+ 0, /*nb_negative*/
+ 0, /*nb_positive*/
+ 0, /*nb_absolute*/
+ 0, /*nb_bool*/
+ 0, /*nb_invert*/
+ 0, /*nb_lshift*/
+ 0, /*nb_rshift*/
+ (binaryfunc)_PyDictView_Intersect, /*nb_and*/
+ (binaryfunc)dictviews_xor, /*nb_xor*/
+ (binaryfunc)dictviews_or, /*nb_or*/
+};
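+
+/* Illustrative note: these slots back the set algebra on dict views; e.g. the
+   Python-level expression d1.keys() & d2.keys() is routed to
+   _PyDictView_Intersect() and always returns a new set, never a view. */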
+
+static PyObject*
+dictviews_isdisjoint(PyObject *self, PyObject *other)
+{
+ PyObject *it;
+ PyObject *item = NULL;
+
+ if (self == other) {
+ if (dictview_len((_PyDictViewObject *)self) == 0)
+ Py_RETURN_TRUE;
+ else
+ Py_RETURN_FALSE;
+ }
+
+ /* Iterate over the shorter object (only if other is a set,
+ * because PySequence_Contains may be expensive otherwise): */
+ if (PyAnySet_Check(other) || PyDictViewSet_Check(other)) {
+ Py_ssize_t len_self = dictview_len((_PyDictViewObject *)self);
+ Py_ssize_t len_other = PyObject_Size(other);
+ if (len_other == -1)
+ return NULL;
+
+ if ((len_other > len_self)) {
+ PyObject *tmp = other;
+ other = self;
+ self = tmp;
+ }
+ }
+
+ it = PyObject_GetIter(other);
+ if (it == NULL)
+ return NULL;
+
+ while ((item = PyIter_Next(it)) != NULL) {
+ int contains = PySequence_Contains(self, item);
+ Py_DECREF(item);
+ if (contains == -1) {
+ Py_DECREF(it);
+ return NULL;
+ }
+
+ if (contains) {
+ Py_DECREF(it);
+ Py_RETURN_FALSE;
+ }
+ }
+ Py_DECREF(it);
+ if (PyErr_Occurred())
+ return NULL; /* PyIter_Next raised an exception. */
+ Py_RETURN_TRUE;
+}
+
+PyDoc_STRVAR(isdisjoint_doc,
+"Return True if the view and the given iterable have a null intersection.");
+
+static PyMethodDef dictkeys_methods[] = {
+ {"isdisjoint", (PyCFunction)dictviews_isdisjoint, METH_O,
+ isdisjoint_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+PyTypeObject PyDictKeys_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_keys", /* tp_name */
+ sizeof(_PyDictViewObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictview_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)dictview_repr, /* tp_repr */
+ &dictviews_as_number, /* tp_as_number */
+ &dictkeys_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictview_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ dictview_richcompare, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ (getiterfunc)dictkeys_iter, /* tp_iter */
+ 0, /* tp_iternext */
+ dictkeys_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+dictkeys_new(PyObject *dict)
+{
+ return _PyDictView_New(dict, &PyDictKeys_Type);
+}
+
+/*** dict_items ***/
+
+static PyObject *
+dictitems_iter(_PyDictViewObject *dv)
+{
+ if (dv->dv_dict == NULL) {
+ Py_RETURN_NONE;
+ }
+ return dictiter_new(dv->dv_dict, &PyDictIterItem_Type);
+}
+
+static int
+dictitems_contains(_PyDictViewObject *dv, PyObject *obj)
+{
+ int result;
+ PyObject *key, *value, *found;
+ if (dv->dv_dict == NULL)
+ return 0;
+ if (!PyTuple_Check(obj) || PyTuple_GET_SIZE(obj) != 2)
+ return 0;
+ key = PyTuple_GET_ITEM(obj, 0);
+ value = PyTuple_GET_ITEM(obj, 1);
+ found = PyDict_GetItemWithError((PyObject *)dv->dv_dict, key);
+ if (found == NULL) {
+ if (PyErr_Occurred())
+ return -1;
+ return 0;
+ }
+ Py_INCREF(found);
+ result = PyObject_RichCompareBool(value, found, Py_EQ);
+ Py_DECREF(found);
+ return result;
+}
+
+static PySequenceMethods dictitems_as_sequence = {
+ (lenfunc)dictview_len, /* sq_length */
+ 0, /* sq_concat */
+ 0, /* sq_repeat */
+ 0, /* sq_item */
+ 0, /* sq_slice */
+ 0, /* sq_ass_item */
+ 0, /* sq_ass_slice */
+ (objobjproc)dictitems_contains, /* sq_contains */
+};
+
+static PyMethodDef dictitems_methods[] = {
+ {"isdisjoint", (PyCFunction)dictviews_isdisjoint, METH_O,
+ isdisjoint_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+PyTypeObject PyDictItems_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_items", /* tp_name */
+ sizeof(_PyDictViewObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictview_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)dictview_repr, /* tp_repr */
+ &dictviews_as_number, /* tp_as_number */
+ &dictitems_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictview_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ dictview_richcompare, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ (getiterfunc)dictitems_iter, /* tp_iter */
+ 0, /* tp_iternext */
+ dictitems_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+dictitems_new(PyObject *dict)
+{
+ return _PyDictView_New(dict, &PyDictItems_Type);
+}
+
+/*** dict_values ***/
+
+static PyObject *
+dictvalues_iter(_PyDictViewObject *dv)
+{
+ if (dv->dv_dict == NULL) {
+ Py_RETURN_NONE;
+ }
+ return dictiter_new(dv->dv_dict, &PyDictIterValue_Type);
+}
+
+static PySequenceMethods dictvalues_as_sequence = {
+ (lenfunc)dictview_len, /* sq_length */
+ 0, /* sq_concat */
+ 0, /* sq_repeat */
+ 0, /* sq_item */
+ 0, /* sq_slice */
+ 0, /* sq_ass_item */
+ 0, /* sq_ass_slice */
+ (objobjproc)0, /* sq_contains */
+};
+
+static PyMethodDef dictvalues_methods[] = {
+ {NULL, NULL} /* sentinel */
+};
+
+PyTypeObject PyDictValues_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "dict_values", /* tp_name */
+ sizeof(_PyDictViewObject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)dictview_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)dictview_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ &dictvalues_as_sequence, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)dictview_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ (getiterfunc)dictvalues_iter, /* tp_iter */
+ 0, /* tp_iternext */
+ dictvalues_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+dictvalues_new(PyObject *dict)
+{
+ return _PyDictView_New(dict, &PyDictValues_Type);
+}
+
+/* Returns NULL if a new PyDictKeysObject cannot be allocated,
+   but does not set an error */
+PyDictKeysObject *
+_PyDict_NewKeysForClass(void)
+{
+ PyDictKeysObject *keys = new_keys_object(PyDict_MINSIZE);
+ if (keys == NULL)
+ PyErr_Clear();
+ else
+ keys->dk_lookup = lookdict_split;
+ return keys;
+}
+
+#define CACHED_KEYS(tp) (((PyHeapTypeObject*)tp)->ht_cached_keys)
+
+PyObject *
+PyObject_GenericGetDict(PyObject *obj, void *context)
+{
+ PyObject *dict, **dictptr = _PyObject_GetDictPtr(obj);
+ if (dictptr == NULL) {
+ PyErr_SetString(PyExc_AttributeError,
+ "This object has no __dict__");
+ return NULL;
+ }
+ dict = *dictptr;
+ if (dict == NULL) {
+ PyTypeObject *tp = Py_TYPE(obj);
+ if ((tp->tp_flags & Py_TPFLAGS_HEAPTYPE) && CACHED_KEYS(tp)) {
+ DK_INCREF(CACHED_KEYS(tp));
+ *dictptr = dict = new_dict_with_shared_keys(CACHED_KEYS(tp));
+ }
+ else {
+ *dictptr = dict = PyDict_New();
+ }
+ }
+ Py_XINCREF(dict);
+ return dict;
+}
+
+int
+_PyObjectDict_SetItem(PyTypeObject *tp, PyObject **dictptr,
+ PyObject *key, PyObject *value)
+{
+ PyObject *dict;
+ int res;
+ PyDictKeysObject *cached;
+
+ assert(dictptr != NULL);
+ if ((tp->tp_flags & Py_TPFLAGS_HEAPTYPE) && (cached = CACHED_KEYS(tp))) {
+ assert(dictptr != NULL);
+ dict = *dictptr;
+ if (dict == NULL) {
+ DK_INCREF(cached);
+ dict = new_dict_with_shared_keys(cached);
+ if (dict == NULL)
+ return -1;
+ *dictptr = dict;
+ }
+ if (value == NULL) {
+ res = PyDict_DelItem(dict, key);
+            // Since a key-sharing dict doesn't allow deletion, PyDict_DelItem()
+            // always converts the dict to the combined form.
+ if ((cached = CACHED_KEYS(tp)) != NULL) {
+ CACHED_KEYS(tp) = NULL;
+ DK_DECREF(cached);
+ }
+ }
+ else {
+ int was_shared = (cached == ((PyDictObject *)dict)->ma_keys);
+ res = PyDict_SetItem(dict, key, value);
+ if (was_shared &&
+ (cached = CACHED_KEYS(tp)) != NULL &&
+ cached != ((PyDictObject *)dict)->ma_keys) {
+ /* PyDict_SetItem() may call dictresize and convert split table
+ * into combined table. In such case, convert it to split
+ * table again and update type's shared key only when this is
+ * the only dict sharing key with the type.
+ *
+ * This is to allow using shared key in class like this:
+ *
+ * class C:
+ * def __init__(self):
+ * # one dict resize happens
+ * self.a, self.b, self.c = 1, 2, 3
+ * self.d, self.e, self.f = 4, 5, 6
+ * a = C()
+ */
+ if (cached->dk_refcnt == 1) {
+ CACHED_KEYS(tp) = make_keys_shared(dict);
+ }
+ else {
+ CACHED_KEYS(tp) = NULL;
+ }
+ DK_DECREF(cached);
+ if (CACHED_KEYS(tp) == NULL && PyErr_Occurred())
+ return -1;
+ }
+ }
+ } else {
+ dict = *dictptr;
+ if (dict == NULL) {
+ dict = PyDict_New();
+ if (dict == NULL)
+ return -1;
+ *dictptr = dict;
+ }
+ if (value == NULL) {
+ res = PyDict_DelItem(dict, key);
+ } else {
+ res = PyDict_SetItem(dict, key, value);
+ }
+ }
+ return res;
+}
+
+void
+_PyDictKeys_DecRef(PyDictKeysObject *keys)
+{
+ DK_DECREF(keys);
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
new file mode 100644
index 00000000..2b6449c7
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
@@ -0,0 +1,3114 @@
+/* Memoryview object implementation */
+
+#include "Python.h"
+#include "pystrhex.h"
+#include <stddef.h>
+
+
+/****************************************************************************/
+/* ManagedBuffer Object */
+/****************************************************************************/
+
+/*
+ ManagedBuffer Object:
+ ---------------------
+
+ The purpose of this object is to facilitate the handling of chained
+ memoryviews that have the same underlying exporting object. PEP-3118
+ allows the underlying object to change while a view is exported. This
+ could lead to unexpected results when constructing a new memoryview
+ from an existing memoryview.
+
+ Rather than repeatedly redirecting buffer requests to the original base
+ object, all chained memoryviews use a single buffer snapshot. This
+ snapshot is generated by the constructor _PyManagedBuffer_FromObject().
+
+ Ownership rules:
+ ----------------
+
+ The master buffer inside a managed buffer is filled in by the original
+ base object. shape, strides, suboffsets and format are read-only for
+ all consumers.
+
+ A memoryview's buffer is a private copy of the exporter's buffer. shape,
+ strides and suboffsets belong to the memoryview and are thus writable.
+
+ If a memoryview itself exports several buffers via memory_getbuf(), all
+ buffer copies share shape, strides and suboffsets. In this case, the
+ arrays are NOT writable.
+
+ Reference count assumptions:
+ ----------------------------
+
+ The 'obj' member of a Py_buffer must either be NULL or refer to the
+ exporting base object. In the Python codebase, all getbufferprocs
+ return a new reference to view.obj (example: bytes_buffer_getbuffer()).
+
+ PyBuffer_Release() decrements view.obj (if non-NULL), so the
+ releasebufferprocs must NOT decrement view.obj.
+*/
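+
+/* Illustrative sketch (hypothetical embedding code): taking a view of a view
+   registers both memoryviews with the same managed buffer instead of
+   re-querying the exporter:
+
+       PyObject *m1 = PyMemoryView_FromObject(base);
+       PyObject *m2 = PyMemoryView_FromObject(m1);
+
+   m1 and m2 then share the single buffer snapshot described above. */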
+
+
+#define CHECK_MBUF_RELEASED(mbuf) \
+ if (((_PyManagedBufferObject *)mbuf)->flags&_Py_MANAGED_BUFFER_RELEASED) { \
+ PyErr_SetString(PyExc_ValueError, \
+ "operation forbidden on released memoryview object"); \
+ return NULL; \
+ }
+
+
+static _PyManagedBufferObject *
+mbuf_alloc(void)
+{
+ _PyManagedBufferObject *mbuf;
+
+ mbuf = (_PyManagedBufferObject *)
+ PyObject_GC_New(_PyManagedBufferObject, &_PyManagedBuffer_Type);
+ if (mbuf == NULL)
+ return NULL;
+ mbuf->flags = 0;
+ mbuf->exports = 0;
+ mbuf->master.obj = NULL;
+ _PyObject_GC_TRACK(mbuf);
+
+ return mbuf;
+}
+
+static PyObject *
+_PyManagedBuffer_FromObject(PyObject *base)
+{
+ _PyManagedBufferObject *mbuf;
+
+ mbuf = mbuf_alloc();
+ if (mbuf == NULL)
+ return NULL;
+
+ if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) {
+ mbuf->master.obj = NULL;
+ Py_DECREF(mbuf);
+ return NULL;
+ }
+
+ return (PyObject *)mbuf;
+}
+
+static void
+mbuf_release(_PyManagedBufferObject *self)
+{
+ if (self->flags&_Py_MANAGED_BUFFER_RELEASED)
+ return;
+
+ /* NOTE: at this point self->exports can still be > 0 if this function
+ is called from mbuf_clear() to break up a reference cycle. */
+ self->flags |= _Py_MANAGED_BUFFER_RELEASED;
+
+ /* PyBuffer_Release() decrements master->obj and sets it to NULL. */
+ _PyObject_GC_UNTRACK(self);
+ PyBuffer_Release(&self->master);
+}
+
+static void
+mbuf_dealloc(_PyManagedBufferObject *self)
+{
+ assert(self->exports == 0);
+ mbuf_release(self);
+ if (self->flags&_Py_MANAGED_BUFFER_FREE_FORMAT)
+ PyMem_Free(self->master.format);
+ PyObject_GC_Del(self);
+}
+
+static int
+mbuf_traverse(_PyManagedBufferObject *self, visitproc visit, void *arg)
+{
+ Py_VISIT(self->master.obj);
+ return 0;
+}
+
+static int
+mbuf_clear(_PyManagedBufferObject *self)
+{
+ assert(self->exports >= 0);
+ mbuf_release(self);
+ return 0;
+}
+
+PyTypeObject _PyManagedBuffer_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "managedbuffer",
+ sizeof(_PyManagedBufferObject),
+ 0,
+ (destructor)mbuf_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)mbuf_traverse, /* tp_traverse */
+ (inquiry)mbuf_clear /* tp_clear */
+};
+
+
+/****************************************************************************/
+/* MemoryView Object */
+/****************************************************************************/
+
+/* In the process of breaking reference cycles mbuf_release() can be
+ called before memory_release(). */
+#define BASE_INACCESSIBLE(mv) \
+ (((PyMemoryViewObject *)mv)->flags&_Py_MEMORYVIEW_RELEASED || \
+ ((PyMemoryViewObject *)mv)->mbuf->flags&_Py_MANAGED_BUFFER_RELEASED)
+
+#define CHECK_RELEASED(mv) \
+ if (BASE_INACCESSIBLE(mv)) { \
+ PyErr_SetString(PyExc_ValueError, \
+ "operation forbidden on released memoryview object"); \
+ return NULL; \
+ }
+
+#define CHECK_RELEASED_INT(mv) \
+ if (BASE_INACCESSIBLE(mv)) { \
+ PyErr_SetString(PyExc_ValueError, \
+ "operation forbidden on released memoryview object"); \
+ return -1; \
+ }
+
+#define CHECK_LIST_OR_TUPLE(v) \
+ if (!PyList_Check(v) && !PyTuple_Check(v)) { \
+ PyErr_SetString(PyExc_TypeError, \
+ #v " must be a list or a tuple"); \
+ return NULL; \
+ }
+
+#define VIEW_ADDR(mv) (&((PyMemoryViewObject *)mv)->view)
+
+/* Check for the presence of suboffsets in the first dimension. */
+#define HAVE_PTR(suboffsets, dim) (suboffsets && suboffsets[dim] >= 0)
+/* Adjust ptr if suboffsets are present. */
+#define ADJUST_PTR(ptr, suboffsets, dim) \
+ (HAVE_PTR(suboffsets, dim) ? *((char**)ptr) + suboffsets[dim] : ptr)
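+/* Example: for a PIL-style buffer whose first dimension stores pointers,
+   suboffsets[0] >= 0, so ADJUST_PTR dereferences the stored pointer and adds
+   suboffsets[0]; for an ordinary strided array suboffsets is NULL and ptr is
+   returned unchanged. */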
+
+/* Memoryview buffer properties */
+#define MV_C_CONTIGUOUS(flags) (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C))
+#define MV_F_CONTIGUOUS(flags) \
+ (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_FORTRAN))
+#define MV_ANY_CONTIGUOUS(flags) \
+ (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN))
+
+/* Fast contiguity test. Caller must ensure suboffsets==NULL and ndim==1. */
+#define MV_CONTIGUOUS_NDIM1(view) \
+ ((view)->shape[0] == 1 || (view)->strides[0] == (view)->itemsize)
+
+/* getbuffer() requests */
+#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT)
+#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS)
+#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS)
+#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS)
+#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES)
+#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND)
+#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE)
+#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT)
+
+
+PyDoc_STRVAR(memory_doc,
+"memoryview(object)\n--\n\
+\n\
+Create a new memoryview object which references the given object.");
+
+
+/**************************************************************************/
+/* Copy memoryview buffers */
+/**************************************************************************/
+
+/* The functions in this section take a source and a destination buffer
+ with the same logical structure: format, itemsize, ndim and shape
+ are identical, with ndim > 0.
+
+ NOTE: All buffers are assumed to have PyBUF_FULL information, which
+ is the case for memoryviews! */
+
+
+/* Assumptions: ndim >= 1. The macro tests for a corner case that should
+ perhaps be explicitly forbidden in the PEP. */
+#define HAVE_SUBOFFSETS_IN_LAST_DIM(view) \
+ (view->suboffsets && view->suboffsets[dest->ndim-1] >= 0)
+
+static int
+last_dim_is_contiguous(const Py_buffer *dest, const Py_buffer *src)
+{
+ assert(dest->ndim > 0 && src->ndim > 0);
+ return (!HAVE_SUBOFFSETS_IN_LAST_DIM(dest) &&
+ !HAVE_SUBOFFSETS_IN_LAST_DIM(src) &&
+ dest->strides[dest->ndim-1] == dest->itemsize &&
+ src->strides[src->ndim-1] == src->itemsize);
+}
+
+/* This is not a general function for determining format equivalence.
+ It is used in copy_single() and copy_buffer() to weed out non-matching
+ formats. Skipping the '@' character is specifically used in slice
+ assignments, where the lvalue is already known to have a single character
+ format. This is a performance hack that could be rewritten (if properly
+ benchmarked). */
+static int
+equiv_format(const Py_buffer *dest, const Py_buffer *src)
+{
+ const char *dfmt, *sfmt;
+
+ assert(dest->format && src->format);
+ dfmt = dest->format[0] == '@' ? dest->format+1 : dest->format;
+ sfmt = src->format[0] == '@' ? src->format+1 : src->format;
+
+ if (strcmp(dfmt, sfmt) != 0 ||
+ dest->itemsize != src->itemsize) {
+ return 0;
+ }
+
+ return 1;
+}
+
+/* Two shapes are equivalent if they are either equal or identical up
+ to a zero element at the same position. For example, in NumPy arrays
+ the shapes [1, 0, 5] and [1, 0, 7] are equivalent. */
+static int
+equiv_shape(const Py_buffer *dest, const Py_buffer *src)
+{
+ int i;
+
+ if (dest->ndim != src->ndim)
+ return 0;
+
+ for (i = 0; i < dest->ndim; i++) {
+ if (dest->shape[i] != src->shape[i])
+ return 0;
+ if (dest->shape[i] == 0)
+ break;
+ }
+
+ return 1;
+}
+
+/* Check that the logical structure of the destination and source buffers
+ is identical. */
+static int
+equiv_structure(const Py_buffer *dest, const Py_buffer *src)
+{
+ if (!equiv_format(dest, src) ||
+ !equiv_shape(dest, src)) {
+ PyErr_SetString(PyExc_ValueError,
+ "memoryview assignment: lvalue and rvalue have different "
+ "structures");
+ return 0;
+ }
+
+ return 1;
+}
+
+/* Base case for recursive multi-dimensional copying. Contiguous arrays are
+ copied with very little overhead. Assumptions: ndim == 1, mem == NULL or
+ sizeof(mem) == shape[0] * itemsize. */
+static void
+copy_base(const Py_ssize_t *shape, Py_ssize_t itemsize,
+ char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
+ char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
+ char *mem)
+{
+ if (mem == NULL) { /* contiguous */
+ Py_ssize_t size = shape[0] * itemsize;
+ if (dptr + size < sptr || sptr + size < dptr)
+ memcpy(dptr, sptr, size); /* no overlapping */
+ else
+ memmove(dptr, sptr, size);
+ }
+ else {
+ char *p;
+ Py_ssize_t i;
+ for (i=0, p=mem; i < shape[0]; p+=itemsize, sptr+=sstrides[0], i++) {
+ char *xsptr = ADJUST_PTR(sptr, ssuboffsets, 0);
+ memcpy(p, xsptr, itemsize);
+ }
+ for (i=0, p=mem; i < shape[0]; p+=itemsize, dptr+=dstrides[0], i++) {
+ char *xdptr = ADJUST_PTR(dptr, dsuboffsets, 0);
+ memcpy(xdptr, p, itemsize);
+ }
+ }
+
+}
+
+/* Recursively copy a source buffer to a destination buffer. The two buffers
+ have the same ndim, shape and itemsize. */
+static void
+copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize,
+ char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
+ char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
+ char *mem)
+{
+ Py_ssize_t i;
+
+ assert(ndim >= 1);
+
+ if (ndim == 1) {
+ copy_base(shape, itemsize,
+ dptr, dstrides, dsuboffsets,
+ sptr, sstrides, ssuboffsets,
+ mem);
+ return;
+ }
+
+ for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) {
+ char *xdptr = ADJUST_PTR(dptr, dsuboffsets, 0);
+ char *xsptr = ADJUST_PTR(sptr, ssuboffsets, 0);
+
+ copy_rec(shape+1, ndim-1, itemsize,
+ xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL,
+ xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL,
+ mem);
+ }
+}
+
+/* Faster copying of one-dimensional arrays. */
+static int
+copy_single(Py_buffer *dest, Py_buffer *src)
+{
+ char *mem = NULL;
+
+ assert(dest->ndim == 1);
+
+ if (!equiv_structure(dest, src))
+ return -1;
+
+ if (!last_dim_is_contiguous(dest, src)) {
+ mem = PyMem_Malloc(dest->shape[0] * dest->itemsize);
+ if (mem == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ }
+
+ copy_base(dest->shape, dest->itemsize,
+ dest->buf, dest->strides, dest->suboffsets,
+ src->buf, src->strides, src->suboffsets,
+ mem);
+
+ if (mem)
+ PyMem_Free(mem);
+
+ return 0;
+}
+
+/* Recursively copy src to dest. Both buffers must have the same basic
+ structure. Copying is atomic, the function never fails with a partial
+ copy. */
+static int
+copy_buffer(Py_buffer *dest, Py_buffer *src)
+{
+ char *mem = NULL;
+
+ assert(dest->ndim > 0);
+
+ if (!equiv_structure(dest, src))
+ return -1;
+
+ if (!last_dim_is_contiguous(dest, src)) {
+ mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize);
+ if (mem == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ }
+
+ copy_rec(dest->shape, dest->ndim, dest->itemsize,
+ dest->buf, dest->strides, dest->suboffsets,
+ src->buf, src->strides, src->suboffsets,
+ mem);
+
+ if (mem)
+ PyMem_Free(mem);
+
+ return 0;
+}
+
+/* Initialize strides for a C-contiguous array. */
+static void
+init_strides_from_shape(Py_buffer *view)
+{
+ Py_ssize_t i;
+
+ assert(view->ndim > 0);
+
+ view->strides[view->ndim-1] = view->itemsize;
+ for (i = view->ndim-2; i >= 0; i--)
+ view->strides[i] = view->strides[i+1] * view->shape[i+1];
+}
+
+/* Initialize strides for a Fortran-contiguous array. */
+static void
+init_fortran_strides_from_shape(Py_buffer *view)
+{
+ Py_ssize_t i;
+
+ assert(view->ndim > 0);
+
+ view->strides[0] = view->itemsize;
+ for (i = 1; i < view->ndim; i++)
+ view->strides[i] = view->strides[i-1] * view->shape[i-1];
+}
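+
+/* Worked example (illustrative): for shape = {2, 3} and itemsize = 8,
+   init_strides_from_shape() produces C-order strides {24, 8}, while
+   init_fortran_strides_from_shape() produces Fortran-order strides {8, 16}. */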
+
+/* Copy src to a contiguous representation. order is one of 'C', 'F' (Fortran)
+ or 'A' (Any). Assumptions: src has PyBUF_FULL information, src->ndim >= 1,
+ len(mem) == src->len. */
+static int
+buffer_to_contiguous(char *mem, Py_buffer *src, char order)
+{
+ Py_buffer dest;
+ Py_ssize_t *strides;
+ int ret;
+
+ assert(src->ndim >= 1);
+ assert(src->shape != NULL);
+ assert(src->strides != NULL);
+
+ strides = PyMem_Malloc(src->ndim * (sizeof *src->strides));
+ if (strides == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+
+ /* initialize dest */
+ dest = *src;
+ dest.buf = mem;
+ /* shape is constant and shared: the logical representation of the
+ array is unaltered. */
+
+ /* The physical representation determined by strides (and possibly
+ suboffsets) may change. */
+ dest.strides = strides;
+ if (order == 'C' || order == 'A') {
+ init_strides_from_shape(&dest);
+ }
+ else {
+ init_fortran_strides_from_shape(&dest);
+ }
+
+ dest.suboffsets = NULL;
+
+ ret = copy_buffer(&dest, src);
+
+ PyMem_Free(strides);
+ return ret;
+}
+
+
+/****************************************************************************/
+/* Constructors */
+/****************************************************************************/
+
+/* Initialize values that are shared with the managed buffer. */
+static void
+init_shared_values(Py_buffer *dest, const Py_buffer *src)
+{
+ dest->obj = src->obj;
+ dest->buf = src->buf;
+ dest->len = src->len;
+ dest->itemsize = src->itemsize;
+ dest->readonly = src->readonly;
+ dest->format = src->format ? src->format : "B";
+ dest->internal = src->internal;
+}
+
+/* Copy shape and strides. Reconstruct missing values. */
+static void
+init_shape_strides(Py_buffer *dest, const Py_buffer *src)
+{
+ Py_ssize_t i;
+
+ if (src->ndim == 0) {
+ dest->shape = NULL;
+ dest->strides = NULL;
+ return;
+ }
+ if (src->ndim == 1) {
+ dest->shape[0] = src->shape ? src->shape[0] : src->len / src->itemsize;
+ dest->strides[0] = src->strides ? src->strides[0] : src->itemsize;
+ return;
+ }
+
+ for (i = 0; i < src->ndim; i++)
+ dest->shape[i] = src->shape[i];
+ if (src->strides) {
+ for (i = 0; i < src->ndim; i++)
+ dest->strides[i] = src->strides[i];
+ }
+ else {
+ init_strides_from_shape(dest);
+ }
+}
+
+static void
+init_suboffsets(Py_buffer *dest, const Py_buffer *src)
+{
+ Py_ssize_t i;
+
+ if (src->suboffsets == NULL) {
+ dest->suboffsets = NULL;
+ return;
+ }
+ for (i = 0; i < src->ndim; i++)
+ dest->suboffsets[i] = src->suboffsets[i];
+}
+
+/* len = product(shape) * itemsize */
+static void
+init_len(Py_buffer *view)
+{
+ Py_ssize_t i, len;
+
+ len = 1;
+ for (i = 0; i < view->ndim; i++)
+ len *= view->shape[i];
+ len *= view->itemsize;
+
+ view->len = len;
+}
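+
+/* Worked example (illustrative): shape = {2, 3} with itemsize = 8 gives
+   len = 2 * 3 * 8 = 48 bytes. */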
+
+/* Initialize memoryview buffer properties. */
+static void
+init_flags(PyMemoryViewObject *mv)
+{
+ const Py_buffer *view = &mv->view;
+ int flags = 0;
+
+ switch (view->ndim) {
+ case 0:
+ flags |= (_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|
+ _Py_MEMORYVIEW_FORTRAN);
+ break;
+ case 1:
+ if (MV_CONTIGUOUS_NDIM1(view))
+ flags |= (_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
+ break;
+ default:
+ if (PyBuffer_IsContiguous(view, 'C'))
+ flags |= _Py_MEMORYVIEW_C;
+ if (PyBuffer_IsContiguous(view, 'F'))
+ flags |= _Py_MEMORYVIEW_FORTRAN;
+ break;
+ }
+
+ if (view->suboffsets) {
+ flags |= _Py_MEMORYVIEW_PIL;
+ flags &= ~(_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
+ }
+
+ mv->flags = flags;
+}
+
+/* Allocate a new memoryview and perform basic initialization. New memoryviews
+ are exclusively created through the mbuf_add functions. */
+static PyMemoryViewObject *
+memory_alloc(int ndim)
+{
+ PyMemoryViewObject *mv;
+
+ mv = (PyMemoryViewObject *)
+ PyObject_GC_NewVar(PyMemoryViewObject, &PyMemoryView_Type, 3*ndim);
+ if (mv == NULL)
+ return NULL;
+
+ mv->mbuf = NULL;
+ mv->hash = -1;
+ mv->flags = 0;
+ mv->exports = 0;
+ mv->view.ndim = ndim;
+ mv->view.shape = mv->ob_array;
+ mv->view.strides = mv->ob_array + ndim;
+ mv->view.suboffsets = mv->ob_array + 2 * ndim;
+ mv->weakreflist = NULL;
+
+ _PyObject_GC_TRACK(mv);
+ return mv;
+}
+
+/*
+ Return a new memoryview that is registered with mbuf. If src is NULL,
+ use mbuf->master as the underlying buffer. Otherwise, use src.
+
+ The new memoryview has full buffer information: shape and strides
+ are always present, suboffsets as needed. Arrays are copied to
+ the memoryview's ob_array field.
+ */
+static PyObject *
+mbuf_add_view(_PyManagedBufferObject *mbuf, const Py_buffer *src)
+{
+ PyMemoryViewObject *mv;
+ Py_buffer *dest;
+
+ if (src == NULL)
+ src = &mbuf->master;
+
+ if (src->ndim > PyBUF_MAX_NDIM) {
+ PyErr_SetString(PyExc_ValueError,
+ "memoryview: number of dimensions must not exceed "
+ Py_STRINGIFY(PyBUF_MAX_NDIM));
+ return NULL;
+ }
+
+ mv = memory_alloc(src->ndim);
+ if (mv == NULL)
+ return NULL;
+
+ dest = &mv->view;
+ init_shared_values(dest, src);
+ init_shape_strides(dest, src);
+ init_suboffsets(dest, src);
+ init_flags(mv);
+
+ mv->mbuf = mbuf;
+ Py_INCREF(mbuf);
+ mbuf->exports++;
+
+ return (PyObject *)mv;
+}
+
+/* Register an incomplete view: shape, strides, suboffsets and flags still
+ need to be initialized. Use 'ndim' instead of src->ndim to determine the
+ size of the memoryview's ob_array.
+
+ Assumption: ndim <= PyBUF_MAX_NDIM. */
+static PyObject *
+mbuf_add_incomplete_view(_PyManagedBufferObject *mbuf, const Py_buffer *src,
+ int ndim)
+{
+ PyMemoryViewObject *mv;
+ Py_buffer *dest;
+
+ if (src == NULL)
+ src = &mbuf->master;
+
+ assert(ndim <= PyBUF_MAX_NDIM);
+
+ mv = memory_alloc(ndim);
+ if (mv == NULL)
+ return NULL;
+
+ dest = &mv->view;
+ init_shared_values(dest, src);
+
+ mv->mbuf = mbuf;
+ Py_INCREF(mbuf);
+ mbuf->exports++;
+
+ return (PyObject *)mv;
+}
+
+/* Expose a raw memory area as a view of contiguous bytes. flags can be
+ PyBUF_READ or PyBUF_WRITE. view->format is set to "B" (unsigned bytes).
+ The memoryview has complete buffer information. */
+PyObject *
+PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
+{
+ _PyManagedBufferObject *mbuf;
+ PyObject *mv;
+ int readonly;
+
+ assert(mem != NULL);
+ assert(flags == PyBUF_READ || flags == PyBUF_WRITE);
+
+ mbuf = mbuf_alloc();
+ if (mbuf == NULL)
+ return NULL;
+
+ readonly = (flags == PyBUF_WRITE) ? 0 : 1;
+ (void)PyBuffer_FillInfo(&mbuf->master, NULL, mem, size, readonly,
+ PyBUF_FULL_RO);
+
+ mv = mbuf_add_view(mbuf, NULL);
+ Py_DECREF(mbuf);
+
+ return mv;
+}
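+
+/* Illustrative usage sketch (hypothetical embedding code):
+
+       static char scratch[256];
+       PyObject *mv = PyMemoryView_FromMemory(scratch, sizeof scratch,
+                                              PyBUF_WRITE);
+
+   The memory is not copied, so the caller must keep 'scratch' valid for the
+   lifetime of the returned view. */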
+
+/* Create a memoryview from a given Py_buffer. For simple byte views,
+ PyMemoryView_FromMemory() should be used instead.
+ This function is the only entry point that can create a master buffer
+ without full information. Because of this fact init_shape_strides()
+ must be able to reconstruct missing values. */
+PyObject *
+PyMemoryView_FromBuffer(Py_buffer *info)
+{
+ _PyManagedBufferObject *mbuf;
+ PyObject *mv;
+
+ if (info->buf == NULL) {
+ PyErr_SetString(PyExc_ValueError,
+ "PyMemoryView_FromBuffer(): info->buf must not be NULL");
+ return NULL;
+ }
+
+ mbuf = mbuf_alloc();
+ if (mbuf == NULL)
+ return NULL;
+
+ /* info->obj is either NULL or a borrowed reference. This reference
+ should not be decremented in PyBuffer_Release(). */
+ mbuf->master = *info;
+ mbuf->master.obj = NULL;
+
+ mv = mbuf_add_view(mbuf, NULL);
+ Py_DECREF(mbuf);
+
+ return mv;
+}
+
+/* Create a memoryview from an object that implements the buffer protocol.
+ If the object is a memoryview, the new memoryview must be registered
+ with the same managed buffer. Otherwise, a new managed buffer is created. */
+PyObject *
+PyMemoryView_FromObject(PyObject *v)
+{
+ _PyManagedBufferObject *mbuf;
+
+ if (PyMemoryView_Check(v)) {
+ PyMemoryViewObject *mv = (PyMemoryViewObject *)v;
+ CHECK_RELEASED(mv);
+ return mbuf_add_view(mv->mbuf, &mv->view);
+ }
+ else if (PyObject_CheckBuffer(v)) {
+ PyObject *ret;
+ mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(v);
+ if (mbuf == NULL)
+ return NULL;
+ ret = mbuf_add_view(mbuf, NULL);
+ Py_DECREF(mbuf);
+ return ret;
+ }
+
+ PyErr_Format(PyExc_TypeError,
+ "memoryview: a bytes-like object is required, not '%.200s'",
+ Py_TYPE(v)->tp_name);
+ return NULL;
+}
+
+/* Copy the format string from a base object that might vanish. */
+static int
+mbuf_copy_format(_PyManagedBufferObject *mbuf, const char *fmt)
+{
+ if (fmt != NULL) {
+ char *cp = PyMem_Malloc(strlen(fmt)+1);
+ if (cp == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ mbuf->master.format = strcpy(cp, fmt);
+ mbuf->flags |= _Py_MANAGED_BUFFER_FREE_FORMAT;
+ }
+
+ return 0;
+}
+
+/*
+ Return a memoryview that is based on a contiguous copy of src.
+ Assumptions: src has PyBUF_FULL_RO information, src->ndim > 0.
+
+ Ownership rules:
+ 1) As usual, the returned memoryview has a private copy
+ of src->shape, src->strides and src->suboffsets.
+ 2) src->format is copied to the master buffer and released
+ in mbuf_dealloc(). The releasebufferproc of the bytes
+ object is NULL, so it does not matter that mbuf_release()
+ passes the altered format pointer to PyBuffer_Release().
+*/
+static PyObject *
+memory_from_contiguous_copy(Py_buffer *src, char order)
+{
+ _PyManagedBufferObject *mbuf;
+ PyMemoryViewObject *mv;
+ PyObject *bytes;
+ Py_buffer *dest;
+ int i;
+
+ assert(src->ndim > 0);
+ assert(src->shape != NULL);
+
+ bytes = PyBytes_FromStringAndSize(NULL, src->len);
+ if (bytes == NULL)
+ return NULL;
+
+ mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(bytes);
+ Py_DECREF(bytes);
+ if (mbuf == NULL)
+ return NULL;
+
+ if (mbuf_copy_format(mbuf, src->format) < 0) {
+ Py_DECREF(mbuf);
+ return NULL;
+ }
+
+ mv = (PyMemoryViewObject *)mbuf_add_incomplete_view(mbuf, NULL, src->ndim);
+ Py_DECREF(mbuf);
+ if (mv == NULL)
+ return NULL;
+
+ dest = &mv->view;
+
+ /* shared values are initialized correctly except for itemsize */
+ dest->itemsize = src->itemsize;
+
+ /* shape and strides */
+ for (i = 0; i < src->ndim; i++) {
+ dest->shape[i] = src->shape[i];
+ }
+ if (order == 'C' || order == 'A') {
+ init_strides_from_shape(dest);
+ }
+ else {
+ init_fortran_strides_from_shape(dest);
+ }
+ /* suboffsets */
+ dest->suboffsets = NULL;
+
+ /* flags */
+ init_flags(mv);
+
+ if (copy_buffer(dest, src) < 0) {
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ return (PyObject *)mv;
+}
+
+/*
+ Return a new memoryview object based on a contiguous exporter with
+ buffertype={PyBUF_READ, PyBUF_WRITE} and order={'C', 'F'ortran, or 'A'ny}.
+ The logical structure of the input and output buffers is the same
+ (i.e. tolist(input) == tolist(output)), but the physical layout in
+ memory can be explicitly chosen.
+
+ As usual, if buffertype=PyBUF_WRITE, the exporter's buffer must be writable,
+ otherwise it may be writable or read-only.
+
+ If the exporter is already contiguous with the desired target order,
+ the memoryview will be directly based on the exporter.
+
+ Otherwise, if the buffertype is PyBUF_READ, the memoryview will be
+ based on a new bytes object. If order={'C', 'A'ny}, use 'C' order,
+ 'F'ortran order otherwise.
+*/
+PyObject *
+PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
+{
+ PyMemoryViewObject *mv;
+ PyObject *ret;
+ Py_buffer *view;
+
+ assert(buffertype == PyBUF_READ || buffertype == PyBUF_WRITE);
+ assert(order == 'C' || order == 'F' || order == 'A');
+
+ mv = (PyMemoryViewObject *)PyMemoryView_FromObject(obj);
+ if (mv == NULL)
+ return NULL;
+
+ view = &mv->view;
+ if (buffertype == PyBUF_WRITE && view->readonly) {
+ PyErr_SetString(PyExc_BufferError,
+ "underlying buffer is not writable");
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ if (PyBuffer_IsContiguous(view, order))
+ return (PyObject *)mv;
+
+ if (buffertype == PyBUF_WRITE) {
+ PyErr_SetString(PyExc_BufferError,
+ "writable contiguous buffer requested "
+ "for a non-contiguous object.");
+ Py_DECREF(mv);
+ return NULL;
+ }
+
+ ret = memory_from_contiguous_copy(view, order);
+ Py_DECREF(mv);
+ return ret;
+}
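+
+/* Illustrative sketch (hypothetical caller): request a read-only C-contiguous
+   view of an arbitrary exporter; a bytes-backed copy is made only when the
+   exporter is not already contiguous:
+
+       PyObject *c = PyMemoryView_GetContiguous(obj, PyBUF_READ, 'C');
+*/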
+
+
+static PyObject *
+memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
+{
+ PyObject *obj;
+ static char *kwlist[] = {"object", NULL};
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memoryview", kwlist,
+ &obj)) {
+ return NULL;
+ }
+
+ return PyMemoryView_FromObject(obj);
+}
+
+
+/****************************************************************************/
+/* Previously in abstract.c */
+/****************************************************************************/
+
+typedef struct {
+ Py_buffer view;
+ Py_ssize_t array[1];
+} Py_buffer_full;
+
+int
+PyBuffer_ToContiguous(void *buf, Py_buffer *src, Py_ssize_t len, char order)
+{
+ Py_buffer_full *fb = NULL;
+ int ret;
+
+ assert(order == 'C' || order == 'F' || order == 'A');
+
+ if (len != src->len) {
+ PyErr_SetString(PyExc_ValueError,
+ "PyBuffer_ToContiguous: len != view->len");
+ return -1;
+ }
+
+ if (PyBuffer_IsContiguous(src, order)) {
+ memcpy((char *)buf, src->buf, len);
+ return 0;
+ }
+
+ /* buffer_to_contiguous() assumes PyBUF_FULL */
+ fb = PyMem_Malloc(sizeof *fb + 3 * src->ndim * (sizeof *fb->array));
+ if (fb == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ fb->view.ndim = src->ndim;
+ fb->view.shape = fb->array;
+ fb->view.strides = fb->array + src->ndim;
+ fb->view.suboffsets = fb->array + 2 * src->ndim;
+
+ init_shared_values(&fb->view, src);
+ init_shape_strides(&fb->view, src);
+ init_suboffsets(&fb->view, src);
+
+ src = &fb->view;
+
+ ret = buffer_to_contiguous(buf, src, order);
+ PyMem_Free(fb);
+ return ret;
+}
+
+
+/****************************************************************************/
+/* Release/GC management */
+/****************************************************************************/
+
+/* Inform the managed buffer that this particular memoryview will not access
+ the underlying buffer again. If no other memoryviews are registered with
+ the managed buffer, the underlying buffer is released instantly and
+ marked as inaccessible for both the memoryview and the managed buffer.
+
+ This function fails if the memoryview itself has exported buffers. */
+static int
+_memory_release(PyMemoryViewObject *self)
+{
+ if (self->flags & _Py_MEMORYVIEW_RELEASED)
+ return 0;
+
+ if (self->exports == 0) {
+ self->flags |= _Py_MEMORYVIEW_RELEASED;
+ assert(self->mbuf->exports > 0);
+ if (--self->mbuf->exports == 0)
+ mbuf_release(self->mbuf);
+ return 0;
+ }
+ if (self->exports > 0) {
+ PyErr_Format(PyExc_BufferError,
+ "memoryview has %zd exported buffer%s", self->exports,
+ self->exports==1 ? "" : "s");
+ return -1;
+ }
+
+ Py_FatalError("_memory_release(): negative export count");
+ return -1;
+}
+
+static PyObject *
+memory_release(PyMemoryViewObject *self, PyObject *noargs)
+{
+ if (_memory_release(self) < 0)
+ return NULL;
+ Py_RETURN_NONE;
+}
+
+static void
+memory_dealloc(PyMemoryViewObject *self)
+{
+ assert(self->exports == 0);
+ _PyObject_GC_UNTRACK(self);
+ (void)_memory_release(self);
+ Py_CLEAR(self->mbuf);
+ if (self->weakreflist != NULL)
+ PyObject_ClearWeakRefs((PyObject *) self);
+ PyObject_GC_Del(self);
+}
+
+static int
+memory_traverse(PyMemoryViewObject *self, visitproc visit, void *arg)
+{
+ Py_VISIT(self->mbuf);
+ return 0;
+}
+
+static int
+memory_clear(PyMemoryViewObject *self)
+{
+ (void)_memory_release(self);
+ Py_CLEAR(self->mbuf);
+ return 0;
+}
+
+static PyObject *
+memory_enter(PyObject *self, PyObject *args)
+{
+ CHECK_RELEASED(self);
+ Py_INCREF(self);
+ return self;
+}
+
+static PyObject *
+memory_exit(PyObject *self, PyObject *args)
+{
+ return memory_release((PyMemoryViewObject *)self, NULL);
+}
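+
+/* Illustrative note: at the Python level the release machinery above means
+
+       m = memoryview(b"abc")
+       m.release()
+       m.release()        # second call is a no-op
+       m[0]               # raises ValueError: the view has been released
+
+   and "with memoryview(obj) as m: ..." releases the buffer on exit through
+   __exit__ (memory_exit). */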
+
+
+/****************************************************************************/
+/* Casting format and shape */
+/****************************************************************************/
+
+#define IS_BYTE_FORMAT(f) (f == 'b' || f == 'B' || f == 'c')
+
+static Py_ssize_t
+get_native_fmtchar(char *result, const char *fmt)
+{
+ Py_ssize_t size = -1;
+
+ if (fmt[0] == '@') fmt++;
+
+ switch (fmt[0]) {
+ case 'c': case 'b': case 'B': size = sizeof(char); break;
+ case 'h': case 'H': size = sizeof(short); break;
+ case 'i': case 'I': size = sizeof(int); break;
+ case 'l': case 'L': size = sizeof(long); break;
+ case 'q': case 'Q': size = sizeof(long long); break;
+ case 'n': case 'N': size = sizeof(Py_ssize_t); break;
+ case 'f': size = sizeof(float); break;
+ case 'd': size = sizeof(double); break;
+ case '?': size = sizeof(_Bool); break;
+ case 'P': size = sizeof(void *); break;
+ }
+
+ if (size > 0 && fmt[1] == '\0') {
+ *result = fmt[0];
+ return size;
+ }
+
+ return -1;
+}
+
+static const char *
+get_native_fmtstr(const char *fmt)
+{
+ int at = 0;
+
+ if (fmt[0] == '@') {
+ at = 1;
+ fmt++;
+ }
+ if (fmt[0] == '\0' || fmt[1] != '\0') {
+ return NULL;
+ }
+
+#define RETURN(s) do { return at ? "@" s : s; } while (0)
+
+ switch (fmt[0]) {
+ case 'c': RETURN("c");
+ case 'b': RETURN("b");
+ case 'B': RETURN("B");
+ case 'h': RETURN("h");
+ case 'H': RETURN("H");
+ case 'i': RETURN("i");
+ case 'I': RETURN("I");
+ case 'l': RETURN("l");
+ case 'L': RETURN("L");
+ case 'q': RETURN("q");
+ case 'Q': RETURN("Q");
+ case 'n': RETURN("n");
+ case 'N': RETURN("N");
+ case 'f': RETURN("f");
+ case 'd': RETURN("d");
+ case '?': RETURN("?");
+ case 'P': RETURN("P");
+ }
+
+ return NULL;
+}
+
+
+/* Cast a memoryview's data type to 'format'. The input array must be
+ C-contiguous. At least one of input-format, output-format must have
+ byte size. The output array is 1-D, with the same byte length as the
+ input array. Thus, view->len must be a multiple of the new itemsize. */
+static int
+cast_to_1D(PyMemoryViewObject *mv, PyObject *format)
+{
+ Py_buffer *view = &mv->view;
+ PyObject *asciifmt;
+ char srcchar, destchar;
+ Py_ssize_t itemsize;
+ int ret = -1;
+
+ assert(view->ndim >= 1);
+ assert(Py_SIZE(mv) == 3*view->ndim);
+ assert(view->shape == mv->ob_array);
+ assert(view->strides == mv->ob_array + view->ndim);
+ assert(view->suboffsets == mv->ob_array + 2*view->ndim);
+
+ asciifmt = PyUnicode_AsASCIIString(format);
+ if (asciifmt == NULL)
+ return ret;
+
+ itemsize = get_native_fmtchar(&destchar, PyBytes_AS_STRING(asciifmt));
+ if (itemsize < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "memoryview: destination format must be a native single "
+ "character format prefixed with an optional '@'");
+ goto out;
+ }
+
+ if ((get_native_fmtchar(&srcchar, view->format) < 0 ||
+ !IS_BYTE_FORMAT(srcchar)) && !IS_BYTE_FORMAT(destchar)) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: cannot cast between two non-byte formats");
+ goto out;
+ }
+ if (view->len % itemsize) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: length is not a multiple of itemsize");
+ goto out;
+ }
+
+ view->format = (char *)get_native_fmtstr(PyBytes_AS_STRING(asciifmt));
+ if (view->format == NULL) {
+ /* NOT_REACHED: get_native_fmtchar() already validates the format. */
+ PyErr_SetString(PyExc_RuntimeError,
+ "memoryview: internal error");
+ goto out;
+ }
+ view->itemsize = itemsize;
+
+ view->ndim = 1;
+ view->shape[0] = view->len / view->itemsize;
+ view->strides[0] = view->itemsize;
+ view->suboffsets = NULL;
+
+ init_flags(mv);
+
+ ret = 0;
+
+out:
+ Py_DECREF(asciifmt);
+ return ret;
+}
+
+/* The memoryview must have space for 3*len(seq) elements. */
+static Py_ssize_t
+copy_shape(Py_ssize_t *shape, const PyObject *seq, Py_ssize_t ndim,
+ Py_ssize_t itemsize)
+{
+ Py_ssize_t x, i;
+ Py_ssize_t len = itemsize;
+
+ for (i = 0; i < ndim; i++) {
+ PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i);
+ if (!PyLong_Check(tmp)) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview.cast(): elements of shape must be integers");
+ return -1;
+ }
+ x = PyLong_AsSsize_t(tmp);
+ if (x == -1 && PyErr_Occurred()) {
+ return -1;
+ }
+ if (x <= 0) {
+ /* In general elements of shape may be 0, but not for casting. */
+ PyErr_Format(PyExc_ValueError,
+ "memoryview.cast(): elements of shape must be integers > 0");
+ return -1;
+ }
+ if (x > PY_SSIZE_T_MAX / len) {
+ PyErr_Format(PyExc_ValueError,
+ "memoryview.cast(): product(shape) > SSIZE_MAX");
+ return -1;
+ }
+ len *= x;
+ shape[i] = x;
+ }
+
+ return len;
+}
+
+/* Cast a 1-D array to a new shape. The result array will be C-contiguous.
+ If the result array does not have exactly the same byte length as the
+ input array, raise ValueError. */
+static int
+cast_to_ND(PyMemoryViewObject *mv, const PyObject *shape, int ndim)
+{
+ Py_buffer *view = &mv->view;
+ Py_ssize_t len;
+
+ assert(view->ndim == 1); /* ndim from cast_to_1D() */
+ assert(Py_SIZE(mv) == 3*(ndim==0?1:ndim)); /* ndim of result array */
+ assert(view->shape == mv->ob_array);
+ assert(view->strides == mv->ob_array + (ndim==0?1:ndim));
+ assert(view->suboffsets == NULL);
+
+ view->ndim = ndim;
+ if (view->ndim == 0) {
+ view->shape = NULL;
+ view->strides = NULL;
+ len = view->itemsize;
+ }
+ else {
+ len = copy_shape(view->shape, shape, ndim, view->itemsize);
+ if (len < 0)
+ return -1;
+ init_strides_from_shape(view);
+ }
+
+ if (view->len != len) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: product(shape) * itemsize != buffer size");
+ return -1;
+ }
+
+ init_flags(mv);
+
+ return 0;
+}
+
+static int
+zero_in_shape(PyMemoryViewObject *mv)
+{
+ Py_buffer *view = &mv->view;
+ Py_ssize_t i;
+
+ for (i = 0; i < view->ndim; i++)
+ if (view->shape[i] == 0)
+ return 1;
+
+ return 0;
+}
+
+/*
+ Cast a copy of 'self' to a different view. The input view must
+ be C-contiguous. The function always casts the input view to a
+ 1-D output according to 'format'. At least one of input-format,
+ output-format must have byte size.
+
+ If 'shape' is given, the 1-D view from the previous step will
+ be cast to a C-contiguous view with new shape and strides.
+
+ All casts must result in views that will have the exact byte
+ size of the original input. Otherwise, an error is raised.
+*/
+static PyObject *
+memory_cast(PyMemoryViewObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"format", "shape", NULL};
+ PyMemoryViewObject *mv = NULL;
+ PyObject *shape = NULL;
+ PyObject *format;
+ Py_ssize_t ndim = 1;
+
+ CHECK_RELEASED(self);
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist,
+ &format, &shape)) {
+ return NULL;
+ }
+ if (!PyUnicode_Check(format)) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: format argument must be a string");
+ return NULL;
+ }
+ if (!MV_C_CONTIGUOUS(self->flags)) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: casts are restricted to C-contiguous views");
+ return NULL;
+ }
+ if ((shape || self->view.ndim != 1) && zero_in_shape(self)) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: cannot cast view with zeros in shape or strides");
+ return NULL;
+ }
+ if (shape) {
+ CHECK_LIST_OR_TUPLE(shape)
+ ndim = PySequence_Fast_GET_SIZE(shape);
+ if (ndim > PyBUF_MAX_NDIM) {
+ PyErr_SetString(PyExc_ValueError,
+ "memoryview: number of dimensions must not exceed "
+ Py_STRINGIFY(PyBUF_MAX_NDIM));
+ return NULL;
+ }
+ if (self->view.ndim != 1 && ndim != 1) {
+ PyErr_SetString(PyExc_TypeError,
+ "memoryview: cast must be 1D -> ND or ND -> 1D");
+ return NULL;
+ }
+ }
+
+ mv = (PyMemoryViewObject *)
+ mbuf_add_incomplete_view(self->mbuf, &self->view, ndim==0 ? 1 : (int)ndim);
+ if (mv == NULL)
+ return NULL;
+
+ if (cast_to_1D(mv, format) < 0)
+ goto error;
+ if (shape && cast_to_ND(mv, shape, (int)ndim) < 0)
+ goto error;
+
+ return (PyObject *)mv;
+
+error:
+ Py_DECREF(mv);
+ return NULL;
+}
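+
+/* Illustrative note: with the restrictions enforced above, a byte view can
+   be re-cast, for example (assuming sizeof(int) == 4):
+
+       m = memoryview(b"\x00" * 8)       # format 'B', 8 bytes
+       m.cast('I', shape=[2])            # 1-D view of two unsigned ints
+       m.cast('I').cast('d')             # TypeError: two non-byte formats
+
+   and every cast must preserve the total byte length of the view. */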
+
+
+/**************************************************************************/
+/* getbuffer */
+/**************************************************************************/
+
+static int
+memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags)
+{
+ Py_buffer *base = &self->view;
+ int baseflags = self->flags;
+
+ CHECK_RELEASED_INT(self);
+
+ /* start with complete information */
+ *view = *base;
+ view->obj = NULL;
+
+ if (REQ_WRITABLE(flags) && base->readonly) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer is not writable");
+ return -1;
+ }
+ if (!REQ_FORMAT(flags)) {
+ /* NULL indicates that the buffer's data type has been cast to 'B'.
+ view->itemsize is the _previous_ itemsize. If shape is present,
+ the equality product(shape) * itemsize = len still holds at this
+ point. The equality calcsize(format) = itemsize does _not_ hold
+ from here on! */
+ view->format = NULL;
+ }
+
+ if (REQ_C_CONTIGUOUS(flags) && !MV_C_CONTIGUOUS(baseflags)) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer is not C-contiguous");
+ return -1;
+ }
+ if (REQ_F_CONTIGUOUS(flags) && !MV_F_CONTIGUOUS(baseflags)) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer is not Fortran contiguous");
+ return -1;
+ }
+ if (REQ_ANY_CONTIGUOUS(flags) && !MV_ANY_CONTIGUOUS(baseflags)) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer is not contiguous");
+ return -1;
+ }
+ if (!REQ_INDIRECT(flags) && (baseflags & _Py_MEMORYVIEW_PIL)) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer requires suboffsets");
+ return -1;
+ }
+ if (!REQ_STRIDES(flags)) {
+ if (!MV_C_CONTIGUOUS(baseflags)) {
+ PyErr_SetString(PyExc_BufferError,
+ "memoryview: underlying buffer is not C-contiguous");
+ return -1;
+ }
+ view->strides = NULL;
+ }
+ if (!REQ_SHAPE(flags)) {
+ /* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point buf is C-contiguous,
+ so base->buf = ndbuf->data. */
+ if (view->format != NULL) {
+ /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do
+ not make sense. */
+ PyErr_Format(PyExc_BufferError,
+ "memoryview: cannot cast to unsigned bytes if the format flag "
+ "is present");
+ return -1;
+ }
+ /* product(shape) * itemsize = len and calcsize(format) = itemsize
+ do _not_ hold from here on! */
+ view->ndim = 1;
+ view->shape = NULL;
+ }
+
+
+ view->obj = (PyObject *)self;
+ Py_INCREF(view->obj);
+ self->exports++;
+
+ return 0;
+}
+
+static void
+memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view)
+{
+ self->exports--;
+ return;
+ /* PyBuffer_Release() decrements view->obj after this function returns. */
+}
+
+/* Buffer methods */
+static PyBufferProcs memory_as_buffer = {
+ (getbufferproc)memory_getbuf, /* bf_getbuffer */
+ (releasebufferproc)memory_releasebuf, /* bf_releasebuffer */
+};
+
+
+/****************************************************************************/
+/* Optimized pack/unpack for all native format specifiers */
+/****************************************************************************/
+
+/*
+ Fix exceptions:
+ 1) Include format string in the error message.
+ 2) OverflowError -> ValueError.
+ 3) The error message from PyNumber_Index() is not ideal.
+*/
+static int
+type_error_int(const char *fmt)
+{
+ PyErr_Format(PyExc_TypeError,
+ "memoryview: invalid type for format '%s'", fmt);
+ return -1;
+}
+
+static int
+value_error_int(const char *fmt)
+{
+ PyErr_Format(PyExc_ValueError,
+ "memoryview: invalid value for format '%s'", fmt);
+ return -1;
+}
+
+static int
+fix_error_int(const char *fmt)
+{
+ assert(PyErr_Occurred());
+ if (PyErr_ExceptionMatches(PyExc_TypeError)) {
+ PyErr_Clear();
+ return type_error_int(fmt);
+ }
+ else if (PyErr_ExceptionMatches(PyExc_OverflowError) ||
+ PyErr_ExceptionMatches(PyExc_ValueError)) {
+ PyErr_Clear();
+ return value_error_int(fmt);
+ }
+
+ return -1;
+}
+
+/* Accept integer objects or objects with an __index__() method. */
+static long
+pylong_as_ld(PyObject *item)
+{
+ PyObject *tmp;
+ long ld;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return -1;
+
+ ld = PyLong_AsLong(tmp);
+ Py_DECREF(tmp);
+ return ld;
+}
+
+static unsigned long
+pylong_as_lu(PyObject *item)
+{
+ PyObject *tmp;
+ unsigned long lu;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return (unsigned long)-1;
+
+ lu = PyLong_AsUnsignedLong(tmp);
+ Py_DECREF(tmp);
+ return lu;
+}
+
+static long long
+pylong_as_lld(PyObject *item)
+{
+ PyObject *tmp;
+ long long lld;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return -1;
+
+ lld = PyLong_AsLongLong(tmp);
+ Py_DECREF(tmp);
+ return lld;
+}
+
+static unsigned long long
+pylong_as_llu(PyObject *item)
+{
+ PyObject *tmp;
+ unsigned long long llu;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return (unsigned long long)-1;
+
+ llu = PyLong_AsUnsignedLongLong(tmp);
+ Py_DECREF(tmp);
+ return llu;
+}
+
+static Py_ssize_t
+pylong_as_zd(PyObject *item)
+{
+ PyObject *tmp;
+ Py_ssize_t zd;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return -1;
+
+ zd = PyLong_AsSsize_t(tmp);
+ Py_DECREF(tmp);
+ return zd;
+}
+
+static size_t
+pylong_as_zu(PyObject *item)
+{
+ PyObject *tmp;
+ size_t zu;
+
+ tmp = PyNumber_Index(item);
+ if (tmp == NULL)
+ return (size_t)-1;
+
+ zu = PyLong_AsSize_t(tmp);
+ Py_DECREF(tmp);
+ return zu;
+}
+
+/* Timings with the ndarray from _testbuffer.c indicate that using the
+ struct module is around 15x slower than the two functions below. */
+
+#define UNPACK_SINGLE(dest, ptr, type) \
+ do { \
+ type x; \
+ memcpy((char *)&x, ptr, sizeof x); \
+ dest = x; \
+ } while (0)
+
+/* Unpack a single item. 'fmt' can be any native format character in struct
+ module syntax. This function is very sensitive to small changes. With this
+ layout gcc automatically generates a fast jump table. */
+static PyObject *
+unpack_single(const char *ptr, const char *fmt)
+{
+ unsigned long long llu;
+ unsigned long lu;
+ size_t zu;
+ long long lld;
+ long ld;
+ Py_ssize_t zd;
+ double d;
+ unsigned char uc;
+ void *p;
+
+ switch (fmt[0]) {
+
+ /* signed integers and fast path for 'B' */
+ case 'B': uc = *((unsigned char *)ptr); goto convert_uc;
+ case 'b': ld = *((signed char *)ptr); goto convert_ld;
+ case 'h': UNPACK_SINGLE(ld, ptr, short); goto convert_ld;
+ case 'i': UNPACK_SINGLE(ld, ptr, int); goto convert_ld;
+ case 'l': UNPACK_SINGLE(ld, ptr, long); goto convert_ld;
+
+ /* boolean */
+ case '?': UNPACK_SINGLE(ld, ptr, _Bool); goto convert_bool;
+
+ /* unsigned integers */
+ case 'H': UNPACK_SINGLE(lu, ptr, unsigned short); goto convert_lu;
+ case 'I': UNPACK_SINGLE(lu, ptr, unsigned int); goto convert_lu;
+ case 'L': UNPACK_SINGLE(lu, ptr, unsigned long); goto convert_lu;
+
+ /* native 64-bit */
+ case 'q': UNPACK_SINGLE(lld, ptr, long long); goto convert_lld;
+ case 'Q': UNPACK_SINGLE(llu, ptr, unsigned long long); goto convert_llu;
+
+ /* ssize_t and size_t */
+ case 'n': UNPACK_SINGLE(zd, ptr, Py_ssize_t); goto convert_zd;
+ case 'N': UNPACK_SINGLE(zu, ptr, size_t); goto convert_zu;
+
+ /* floats */
+ case 'f': UNPACK_SINGLE(d, ptr, float); goto convert_double;
+ case 'd': UNPACK_SINGLE(d, ptr, double); goto convert_double;
+
+ /* bytes object */
+ case 'c': goto convert_bytes;
+
+ /* pointer */
+ case 'P': UNPACK_SINGLE(p, ptr, void *); goto convert_pointer;
+
+ /* default */
+ default: goto err_format;
+ }
+
+convert_uc:
+ /* PyLong_FromUnsignedLong() is slower */
+ return PyLong_FromLong(uc);
+convert_ld:
+ return PyLong_FromLong(ld);
+convert_lu:
+ return PyLong_FromUnsignedLong(lu);
+convert_lld:
+ return PyLong_FromLongLong(lld);
+convert_llu:
+ return PyLong_FromUnsignedLongLong(llu);
+convert_zd:
+ return PyLong_FromSsize_t(zd);
+convert_zu:
+ return PyLong_FromSize_t(zu);
+convert_double:
+ return PyFloat_FromDouble(d);
+convert_bool:
+ return PyBool_FromLong(ld);
+convert_bytes:
+ return PyBytes_FromStringAndSize(ptr, 1);
+convert_pointer:
+ return PyLong_FromVoidPtr(p);
+err_format:
+ PyErr_Format(PyExc_NotImplementedError,
+ "memoryview: format %s not supported", fmt);
+ return NULL;
+}
+
+#define PACK_SINGLE(ptr, src, type) \
+ do { \
+ type x; \
+ x = (type)src; \
+ memcpy(ptr, (char *)&x, sizeof x); \
+ } while (0)
+
+/* Pack a single item. 'fmt' can be any native format character in
+ struct module syntax. */
+static int
+pack_single(char *ptr, PyObject *item, const char *fmt)
+{
+ unsigned long long llu;
+ unsigned long lu;
+ size_t zu;
+ long long lld;
+ long ld;
+ Py_ssize_t zd;
+ double d;
+ void *p;
+
+ switch (fmt[0]) {
+ /* signed integers */
+ case 'b': case 'h': case 'i': case 'l':
+ ld = pylong_as_ld(item);
+ if (ld == -1 && PyErr_Occurred())
+ goto err_occurred;
+ switch (fmt[0]) {
+ case 'b':
+ if (ld < SCHAR_MIN || ld > SCHAR_MAX) goto err_range;
+ *((signed char *)ptr) = (signed char)ld; break;
+ case 'h':
+ if (ld < SHRT_MIN || ld > SHRT_MAX) goto err_range;
+ PACK_SINGLE(ptr, ld, short); break;
+ case 'i':
+ if (ld < INT_MIN || ld > INT_MAX) goto err_range;
+ PACK_SINGLE(ptr, ld, int); break;
+ default: /* 'l' */
+ PACK_SINGLE(ptr, ld, long); break;
+ }
+ break;
+
+ /* unsigned integers */
+ case 'B': case 'H': case 'I': case 'L':
+ lu = pylong_as_lu(item);
+ if (lu == (unsigned long)-1 && PyErr_Occurred())
+ goto err_occurred;
+ switch (fmt[0]) {
+ case 'B':
+ if (lu > UCHAR_MAX) goto err_range;
+ *((unsigned char *)ptr) = (unsigned char)lu; break;
+ case 'H':
+ if (lu > USHRT_MAX) goto err_range;
+ PACK_SINGLE(ptr, lu, unsigned short); break;
+ case 'I':
+ if (lu > UINT_MAX) goto err_range;
+ PACK_SINGLE(ptr, lu, unsigned int); break;
+ default: /* 'L' */
+ PACK_SINGLE(ptr, lu, unsigned long); break;
+ }
+ break;
+
+ /* native 64-bit */
+ case 'q':
+ lld = pylong_as_lld(item);
+ if (lld == -1 && PyErr_Occurred())
+ goto err_occurred;
+ PACK_SINGLE(ptr, lld, long long);
+ break;
+ case 'Q':
+ llu = pylong_as_llu(item);
+ if (llu == (unsigned long long)-1 && PyErr_Occurred())
+ goto err_occurred;
+ PACK_SINGLE(ptr, llu, unsigned long long);
+ break;
+
+ /* ssize_t and size_t */
+ case 'n':
+ zd = pylong_as_zd(item);
+ if (zd == -1 && PyErr_Occurred())
+ goto err_occurred;
+ PACK_SINGLE(ptr, zd, Py_ssize_t);
+ break;
+ case 'N':
+ zu = pylong_as_zu(item);
+ if (zu == (size_t)-1 && PyErr_Occurred())
+ goto err_occurred;
+ PACK_SINGLE(ptr, zu, size_t);
+ break;
+
+ /* floats */
+ case 'f': case 'd':
+ d = PyFloat_AsDouble(item);
+ if (d == -1.0 && PyErr_Occurred())
+ goto err_occurred;
+ if (fmt[0] == 'f') {
+ PACK_SINGLE(ptr, d, float);
+ }
+ else {
+ PACK_SINGLE(ptr, d, double);
+ }
+ break;
+
+ /* bool */
+ case '?':
+ ld = PyObject_IsTrue(item);
+ if (ld < 0)
+ return -1; /* preserve original error */
+ PACK_SINGLE(ptr, ld, _Bool);
+ break;
+
+ /* bytes object */
+ case 'c':
+ if (!PyBytes_Check(item))
+ return type_error_int(fmt);
+ if (PyBytes_GET_SIZE(item) != 1)
+ return value_error_int(fmt);
+ *ptr = PyBytes_AS_STRING(item)[0];
+ break;
+
+ /* pointer */
+ case 'P':
+ p = PyLong_AsVoidPtr(item);
+ if (p == NULL && PyErr_Occurred())
+ goto err_occurred;
+ PACK_SINGLE(ptr, p, void *);
+ break;
+
+ /* default */
+ default: goto err_format;
+ }
+
+ return 0;
+
+err_occurred:
+ return fix_error_int(fmt);
+err_range:
+ return value_error_int(fmt);
+err_format:
+ PyErr_Format(PyExc_NotImplementedError,
+ "memoryview: format %s not supported", fmt);
+ return -1;
+}
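+
+/* Illustrative note: the pack/unpack fast paths above back ordinary item
+   access, for example:
+
+       m = memoryview(bytearray(1))      # format 'B'
+       m[0] = 65                         # pack_single() stores one byte
+       m[0]                              # unpack_single() returns 65
+       m[0] = 256                        # ValueError: invalid value for format 'B'
+       m[0] = 1.5                        # TypeError: invalid type for format 'B'
+
+   OverflowError from the conversion helpers is rewritten as ValueError. */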
+
+
+/****************************************************************************/
+/* unpack using the struct module */
+/****************************************************************************/
+
+/* For reasonable performance it is necessary to cache all objects required
+ for unpacking. An unpacker can handle the format passed to unpack_from().
+ Invariant: All pointer fields of the struct should either be NULL or valid
+ pointers. */
+struct unpacker {
+ PyObject *unpack_from; /* Struct.unpack_from(format) */
+ PyObject *mview; /* cached memoryview */
+ char *item; /* buffer for mview */
+ Py_ssize_t itemsize; /* len(item) */
+};
+
+static struct unpacker *
+unpacker_new(void)
+{
+ struct unpacker *x = PyMem_Malloc(sizeof *x);
+
+ if (x == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ x->unpack_from = NULL;
+ x->mview = NULL;
+ x->item = NULL;
+ x->itemsize = 0;
+
+ return x;
+}
+
+static void
+unpacker_free(struct unpacker *x)
+{
+ if (x) {
+ Py_XDECREF(x->unpack_from);
+ Py_XDECREF(x->mview);
+ PyMem_Free(x->item);
+ PyMem_Free(x);
+ }
+}
+
+/* Return a new unpacker for the given format. */
+static struct unpacker *
+struct_get_unpacker(const char *fmt, Py_ssize_t itemsize)
+{
+ PyObject *structmodule; /* XXX cache these two */
+ PyObject *Struct = NULL; /* XXX in globals? */
+ PyObject *structobj = NULL;
+ PyObject *format = NULL;
+ struct unpacker *x = NULL;
+
+ structmodule = PyImport_ImportModule("struct");
+ if (structmodule == NULL)
+ return NULL;
+
+ Struct = PyObject_GetAttrString(structmodule, "Struct");
+ Py_DECREF(structmodule);
+ if (Struct == NULL)
+ return NULL;
+
+ x = unpacker_new();
+ if (x == NULL)
+ goto error;
+
+ format = PyBytes_FromString(fmt);
+ if (format == NULL)
+ goto error;
+
+ structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL);
+ if (structobj == NULL)
+ goto error;
+
+ x->unpack_from = PyObject_GetAttrString(structobj, "unpack_from");
+ if (x->unpack_from == NULL)
+ goto error;
+
+ x->item = PyMem_Malloc(itemsize);
+ if (x->item == NULL) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ x->itemsize = itemsize;
+
+ x->mview = PyMemoryView_FromMemory(x->item, itemsize, PyBUF_WRITE);
+ if (x->mview == NULL)
+ goto error;
+
+
+out:
+ Py_XDECREF(Struct);
+ Py_XDECREF(format);
+ Py_XDECREF(structobj);
+ return x;
+
+error:
+ unpacker_free(x);
+ x = NULL;
+ goto out;
+}
+
+/* unpack a single item */
+static PyObject *
+struct_unpack_single(const char *ptr, struct unpacker *x)
+{
+ PyObject *v;
+
+ memcpy(x->item, ptr, x->itemsize);
+ v = PyObject_CallFunctionObjArgs(x->unpack_from, x->mview, NULL);
+ if (v == NULL)
+ return NULL;
+
+ if (PyTuple_GET_SIZE(v) == 1) {
+ PyObject *tmp = PyTuple_GET_ITEM(v, 0);
+ Py_INCREF(tmp);
+ Py_DECREF(v);
+ return tmp;
+ }
+
+ return v;
+}
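+
+/* Illustrative note: for formats without a fast path, the unpacker above is
+   roughly the cached equivalent of
+
+       struct.Struct(view.format).unpack_from(item)
+
+   with single-element tuples collapsed to the element itself. */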
+
+
+/****************************************************************************/
+/* Representations */
+/****************************************************************************/
+
+/* allow explicit form of native format */
+static const char *
+adjust_fmt(const Py_buffer *view)
+{
+ const char *fmt;
+
+ fmt = (view->format[0] == '@') ? view->format+1 : view->format;
+ if (fmt[0] && fmt[1] == '\0')
+ return fmt;
+
+ PyErr_Format(PyExc_NotImplementedError,
+ "memoryview: unsupported format %s", view->format);
+ return NULL;
+}
+
+/* Base case for multi-dimensional unpacking. Assumption: ndim == 1. */
+static PyObject *
+tolist_base(const char *ptr, const Py_ssize_t *shape,
+ const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
+ const char *fmt)
+{
+ PyObject *lst, *item;
+ Py_ssize_t i;
+
+ lst = PyList_New(shape[0]);
+ if (lst == NULL)
+ return NULL;
+
+ for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
+ const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
+ item = unpack_single(xptr, fmt);
+ if (item == NULL) {
+ Py_DECREF(lst);
+ return NULL;
+ }
+ PyList_SET_ITEM(lst, i, item);
+ }
+
+ return lst;
+}
+
+/* Unpack a multi-dimensional array into a nested list.
+ Assumption: ndim >= 1. */
+static PyObject *
+tolist_rec(const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape,
+ const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
+ const char *fmt)
+{
+ PyObject *lst, *item;
+ Py_ssize_t i;
+
+ assert(ndim >= 1);
+ assert(shape != NULL);
+ assert(strides != NULL);
+
+ if (ndim == 1)
+ return tolist_base(ptr, shape, strides, suboffsets, fmt);
+
+ lst = PyList_New(shape[0]);
+ if (lst == NULL)
+ return NULL;
+
+ for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
+ const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
+ item = tolist_rec(xptr, ndim-1, shape+1,
+ strides+1, suboffsets ? suboffsets+1 : NULL,
+ fmt);
+ if (item == NULL) {
+ Py_DECREF(lst);
+ return NULL;
+ }
+ PyList_SET_ITEM(lst, i, item);
+ }
+
+ return lst;
+}
+
+/* Return a list representation of the memoryview. Currently only buffers
+ with native format strings are supported. */
+static PyObject *
+memory_tolist(PyMemoryViewObject *mv, PyObject *noargs)
+{
+ const Py_buffer *view = &(mv->view);
+ const char *fmt;
+
+ CHECK_RELEASED(mv);
+
+ fmt = adjust_fmt(view);
+ if (fmt == NULL)
+ return NULL;
+ if (view->ndim == 0) {
+ return unpack_single(view->buf, fmt);
+ }
+ else if (view->ndim == 1) {
+ return tolist_base(view->buf, view->shape,
+ view->strides, view->suboffsets,
+ fmt);
+ }
+ else {
+ return tolist_rec(view->buf, view->ndim, view->shape,
+ view->strides, view->suboffsets,
+ fmt);
+ }
+}
+
+static PyObject *
+memory_tobytes(PyMemoryViewObject *self, PyObject *dummy)
+{
+ Py_buffer *src = VIEW_ADDR(self);
+ PyObject *bytes = NULL;
+
+ CHECK_RELEASED(self);
+
+ if (MV_C_CONTIGUOUS(self->flags)) {
+ return PyBytes_FromStringAndSize(src->buf, src->len);
+ }
+
+ bytes = PyBytes_FromStringAndSize(NULL, src->len);
+ if (bytes == NULL)
+ return NULL;
+
+ if (buffer_to_contiguous(PyBytes_AS_STRING(bytes), src, 'C') < 0) {
+ Py_DECREF(bytes);
+ return NULL;
+ }
+
+ return bytes;
+}
+
+static PyObject *
+memory_hex(PyMemoryViewObject *self, PyObject *dummy)
+{
+ Py_buffer *src = VIEW_ADDR(self);
+ PyObject *bytes;
+ PyObject *ret;
+
+ CHECK_RELEASED(self);
+
+ if (MV_C_CONTIGUOUS(self->flags)) {
+ return _Py_strhex(src->buf, src->len);
+ }
+
+ bytes = memory_tobytes(self, dummy);
+ if (bytes == NULL)
+ return NULL;
+
+ ret = _Py_strhex(PyBytes_AS_STRING(bytes), Py_SIZE(bytes));
+ Py_DECREF(bytes);
+
+ return ret;
+}
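+
+/* Illustrative note:
+
+       m = memoryview(b"ab")
+       m.tolist()         # [97, 98]
+       m.tobytes()        # b'ab'
+       m.hex()            # '6162'
+
+   Non-contiguous views are first copied into C-contiguous order. */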
+
+static PyObject *
+memory_repr(PyMemoryViewObject *self)
+{
+ if (self->flags & _Py_MEMORYVIEW_RELEASED)
+ return PyUnicode_FromFormat("<released memory at %p>", self);
+ else
+ return PyUnicode_FromFormat("<memory at %p>", self);
+}
+
+
+/**************************************************************************/
+/* Indexing and slicing */
+/**************************************************************************/
+
+static char *
+lookup_dimension(Py_buffer *view, char *ptr, int dim, Py_ssize_t index)
+{
+ Py_ssize_t nitems; /* items in the given dimension */
+
+ assert(view->shape);
+ assert(view->strides);
+
+ nitems = view->shape[dim];
+ if (index < 0) {
+ index += nitems;
+ }
+ if (index < 0 || index >= nitems) {
+ PyErr_Format(PyExc_IndexError,
+ "index out of bounds on dimension %d", dim + 1);
+ return NULL;
+ }
+
+ ptr += view->strides[dim] * index;
+
+ ptr = ADJUST_PTR(ptr, view->suboffsets, dim);
+
+ return ptr;
+}
+
+/* Get the pointer to the item at index. */
+static char *
+ptr_from_index(Py_buffer *view, Py_ssize_t index)
+{
+ char *ptr = (char *)view->buf;
+ return lookup_dimension(view, ptr, 0, index);
+}
+
+/* Get the pointer to the item at tuple. */
+static char *
+ptr_from_tuple(Py_buffer *view, PyObject *tup)
+{
+ char *ptr = (char *)view->buf;
+ Py_ssize_t dim, nindices = PyTuple_GET_SIZE(tup);
+
+ if (nindices > view->ndim) {
+ PyErr_Format(PyExc_TypeError,
+ "cannot index %zd-dimension view with %zd-element tuple",
+ view->ndim, nindices);
+ return NULL;
+ }
+
+ for (dim = 0; dim < nindices; dim++) {
+ Py_ssize_t index;
+ index = PyNumber_AsSsize_t(PyTuple_GET_ITEM(tup, dim),
+ PyExc_IndexError);
+ if (index == -1 && PyErr_Occurred())
+ return NULL;
+ ptr = lookup_dimension(view, ptr, (int)dim, index);
+ if (ptr == NULL)
+ return NULL;
+ }
+ return ptr;
+}
+
+/* Return the item at index. In a one-dimensional view, this is an object
+ with the type specified by view->format. Otherwise, the item is a sub-view.
+ The function is used in memory_subscript() and memory_as_sequence. */
+static PyObject *
+memory_item(PyMemoryViewObject *self, Py_ssize_t index)
+{
+ Py_buffer *view = &(self->view);
+ const char *fmt;
+
+ CHECK_RELEASED(self);
+
+ fmt = adjust_fmt(view);
+ if (fmt == NULL)
+ return NULL;
+
+ if (view->ndim == 0) {
+ PyErr_SetString(PyExc_TypeError, "invalid indexing of 0-dim memory");
+ return NULL;
+ }
+ if (view->ndim == 1) {
+ char *ptr = ptr_from_index(view, index);
+ if (ptr == NULL)
+ return NULL;
+ return unpack_single(ptr, fmt);
+ }
+
+ PyErr_SetString(PyExc_NotImplementedError,
+ "multi-dimensional sub-views are not implemented");
+ return NULL;
+}
+
+/* Return the item at position *key* (a tuple of indices). */
+static PyObject *
+memory_item_multi(PyMemoryViewObject *self, PyObject *tup)
+{
+ Py_buffer *view = &(self->view);
+ const char *fmt;
+ Py_ssize_t nindices = PyTuple_GET_SIZE(tup);
+ char *ptr;
+
+ CHECK_RELEASED(self);
+
+ fmt = adjust_fmt(view);
+ if (fmt == NULL)
+ return NULL;
+
+ if (nindices < view->ndim) {
+ PyErr_SetString(PyExc_NotImplementedError,
+ "sub-views are not implemented");
+ return NULL;
+ }
+ ptr = ptr_from_tuple(view, tup);
+ if (ptr == NULL)
+ return NULL;
+ return unpack_single(ptr, fmt);
+}
+
+static int
+init_slice(Py_buffer *base, PyObject *key, int dim)
+{
+ Py_ssize_t start, stop, step, slicelength;
+
+ if (PySlice_Unpack(key, &start, &stop, &step) < 0) {
+ return -1;
+ }
+ slicelength = PySlice_AdjustIndices(base->shape[dim], &start, &stop, step);
+
+
+ if (base->suboffsets == NULL || dim == 0) {
+ adjust_buf:
+ base->buf = (char *)base->buf + base->strides[dim] * start;
+ }
+ else {
+ Py_ssize_t n = dim-1;
+ while (n >= 0 && base->suboffsets[n] < 0)
+ n--;
+ if (n < 0)
+ goto adjust_buf; /* all suboffsets are negative */
+ base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start;
+ }
+ base->shape[dim] = slicelength;
+ base->strides[dim] = base->strides[dim] * step;
+
+ return 0;
+}
+
+static int
+is_multislice(PyObject *key)
+{
+ Py_ssize_t size, i;
+
+ if (!PyTuple_Check(key))
+ return 0;
+ size = PyTuple_GET_SIZE(key);
+ if (size == 0)
+ return 0;
+
+ for (i = 0; i < size; i++) {
+ PyObject *x = PyTuple_GET_ITEM(key, i);
+ if (!PySlice_Check(x))
+ return 0;
+ }
+ return 1;
+}
+
+static Py_ssize_t
+is_multiindex(PyObject *key)
+{
+ Py_ssize_t size, i;
+
+ if (!PyTuple_Check(key))
+ return 0;
+ size = PyTuple_GET_SIZE(key);
+ for (i = 0; i < size; i++) {
+ PyObject *x = PyTuple_GET_ITEM(key, i);
+ if (!PyIndex_Check(x))
+ return 0;
+ }
+ return 1;
+}
+
+/* mv[obj] returns an object holding the data for one element if obj
+ fully indexes the memoryview or another memoryview object if it
+ does not.
+
+ 0-d memoryview objects can be referenced using mv[...] or mv[()]
+ but not with anything else. */
+static PyObject *
+memory_subscript(PyMemoryViewObject *self, PyObject *key)
+{
+ Py_buffer *view;
+ view = &(self->view);
+
+ CHECK_RELEASED(self);
+
+ if (view->ndim == 0) {
+ if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) {
+ const char *fmt = adjust_fmt(view);
+ if (fmt == NULL)
+ return NULL;
+ return unpack_single(view->buf, fmt);
+ }
+ else if (key == Py_Ellipsis) {
+ Py_INCREF(self);
+ return (PyObject *)self;
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "invalid indexing of 0-dim memory");
+ return NULL;
+ }
+ }
+
+ if (PyIndex_Check(key)) {
+ Py_ssize_t index;
+ index = PyNumber_AsSsize_t(key, PyExc_IndexError);
+ if (index == -1 && PyErr_Occurred())
+ return NULL;
+ return memory_item(self, index);
+ }
+ else if (PySlice_Check(key)) {
+ PyMemoryViewObject *sliced;
+
+ sliced = (PyMemoryViewObject *)mbuf_add_view(self->mbuf, view);
+ if (sliced == NULL)
+ return NULL;
+
+ if (init_slice(&sliced->view, key, 0) < 0) {
+ Py_DECREF(sliced);
+ return NULL;
+ }
+ init_len(&sliced->view);
+ init_flags(sliced);
+
+ return (PyObject *)sliced;
+ }
+ else if (is_multiindex(key)) {
+ return memory_item_multi(self, key);
+ }
+ else if (is_multislice(key)) {
+ PyErr_SetString(PyExc_NotImplementedError,
+ "multi-dimensional slicing is not implemented");
+ return NULL;
+ }
+
+ PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
+ return NULL;
+}
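+
+/* Illustrative note: for a 1-D view, m[i] returns a single unpacked item and
+   m[i:j] returns another memoryview that shares the same managed buffer.
+   A 0-D view is reachable only through m[()] (the item) or m[...] (the view
+   itself). Multi-dimensional views accept a full tuple of indices, e.g.
+   nd[1, 2], while sub-view and multi-dimensional slicing requests raise
+   NotImplementedError. */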
+
+static int
+memory_ass_sub(PyMemoryViewObject *self, PyObject *key, PyObject *value)
+{
+ Py_buffer *view = &(self->view);
+ Py_buffer src;
+ const char *fmt;
+ char *ptr;
+
+ CHECK_RELEASED_INT(self);
+
+ fmt = adjust_fmt(view);
+ if (fmt == NULL)
+ return -1;
+
+ if (view->readonly) {
+ PyErr_SetString(PyExc_TypeError, "cannot modify read-only memory");
+ return -1;
+ }
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError, "cannot delete memory");
+ return -1;
+ }
+ if (view->ndim == 0) {
+ if (key == Py_Ellipsis ||
+ (PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) {
+ ptr = (char *)view->buf;
+ return pack_single(ptr, value, fmt);
+ }
+ else {
+ PyErr_SetString(PyExc_TypeError,
+ "invalid indexing of 0-dim memory");
+ return -1;
+ }
+ }
+
+ if (PyIndex_Check(key)) {
+ Py_ssize_t index;
+ if (1 < view->ndim) {
+ PyErr_SetString(PyExc_NotImplementedError,
+ "sub-views are not implemented");
+ return -1;
+ }
+ index = PyNumber_AsSsize_t(key, PyExc_IndexError);
+ if (index == -1 && PyErr_Occurred())
+ return -1;
+ ptr = ptr_from_index(view, index);
+ if (ptr == NULL)
+ return -1;
+ return pack_single(ptr, value, fmt);
+ }
+ /* one-dimensional: fast path */
+ if (PySlice_Check(key) && view->ndim == 1) {
+ Py_buffer dest; /* sliced view */
+ Py_ssize_t arrays[3];
+ int ret = -1;
+
+ /* rvalue must be an exporter */
+ if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) < 0)
+ return ret;
+
+ dest = *view;
+ dest.shape = &arrays[0]; dest.shape[0] = view->shape[0];
+ dest.strides = &arrays[1]; dest.strides[0] = view->strides[0];
+ if (view->suboffsets) {
+ dest.suboffsets = &arrays[2]; dest.suboffsets[0] = view->suboffsets[0];
+ }
+
+ if (init_slice(&dest, key, 0) < 0)
+ goto end_block;
+ dest.len = dest.shape[0] * dest.itemsize;
+
+ ret = copy_single(&dest, &src);
+
+ end_block:
+ PyBuffer_Release(&src);
+ return ret;
+ }
+ if (is_multiindex(key)) {
+ char *ptr;
+ if (PyTuple_GET_SIZE(key) < view->ndim) {
+ PyErr_SetString(PyExc_NotImplementedError,
+ "sub-views are not implemented");
+ return -1;
+ }
+ ptr = ptr_from_tuple(view, key);
+ if (ptr == NULL)
+ return -1;
+ return pack_single(ptr, value, fmt);
+ }
+ if (PySlice_Check(key) || is_multislice(key)) {
+ /* Call memory_subscript() to produce a sliced lvalue, then copy
+ rvalue into lvalue. This is already implemented in _testbuffer.c. */
+ PyErr_SetString(PyExc_NotImplementedError,
+ "memoryview slice assignments are currently restricted "
+ "to ndim = 1");
+ return -1;
+ }
+
+ PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
+ return -1;
+}
+
+static Py_ssize_t
+memory_length(PyMemoryViewObject *self)
+{
+ CHECK_RELEASED_INT(self);
+ return self->view.ndim == 0 ? 1 : self->view.shape[0];
+}
+
+/* As mapping */
+static PyMappingMethods memory_as_mapping = {
+ (lenfunc)memory_length, /* mp_length */
+ (binaryfunc)memory_subscript, /* mp_subscript */
+ (objobjargproc)memory_ass_sub, /* mp_ass_subscript */
+};
+
+/* As sequence */
+static PySequenceMethods memory_as_sequence = {
+ (lenfunc)memory_length, /* sq_length */
+ 0, /* sq_concat */
+ 0, /* sq_repeat */
+ (ssizeargfunc)memory_item, /* sq_item */
+};
+
+
+/**************************************************************************/
+/* Comparisons */
+/**************************************************************************/
+
+#define MV_COMPARE_EX -1 /* exception */
+#define MV_COMPARE_NOT_IMPL -2 /* not implemented */
+
+/* Translate a StructError to "not equal". Preserve other exceptions. */
+static int
+fix_struct_error_int(void)
+{
+ assert(PyErr_Occurred());
+ /* XXX Cannot get at StructError directly? */
+ if (PyErr_ExceptionMatches(PyExc_ImportError) ||
+ PyErr_ExceptionMatches(PyExc_MemoryError)) {
+ return MV_COMPARE_EX;
+ }
+ /* StructError: invalid or unknown format -> not equal */
+ PyErr_Clear();
+ return 0;
+}
+
+/* Unpack and compare single items of p and q using the struct module. */
+static int
+struct_unpack_cmp(const char *p, const char *q,
+ struct unpacker *unpack_p, struct unpacker *unpack_q)
+{
+ PyObject *v, *w;
+ int ret;
+
+ /* At this point any exception from the struct module should not be
+ StructError, since both formats have been accepted already. */
+ v = struct_unpack_single(p, unpack_p);
+ if (v == NULL)
+ return MV_COMPARE_EX;
+
+ w = struct_unpack_single(q, unpack_q);
+ if (w == NULL) {
+ Py_DECREF(v);
+ return MV_COMPARE_EX;
+ }
+
+ /* MV_COMPARE_EX == -1: exceptions are preserved */
+ ret = PyObject_RichCompareBool(v, w, Py_EQ);
+ Py_DECREF(v);
+ Py_DECREF(w);
+
+ return ret;
+}
+
+/* Unpack and compare single items of p and q. If both p and q have the same
+ single element native format, the comparison uses a fast path (gcc creates
+ a jump table and converts memcpy into simple assignments on x86/x64).
+
+ Otherwise, the comparison is delegated to the struct module, which is
+ 30-60x slower. */
+#define CMP_SINGLE(p, q, type) \
+ do { \
+ type x; \
+ type y; \
+ memcpy((char *)&x, p, sizeof x); \
+ memcpy((char *)&y, q, sizeof y); \
+ equal = (x == y); \
+ } while (0)
+
+static int
+unpack_cmp(const char *p, const char *q, char fmt,
+ struct unpacker *unpack_p, struct unpacker *unpack_q)
+{
+ int equal;
+
+ switch (fmt) {
+
+ /* signed integers and fast path for 'B' */
+ case 'B': return *((unsigned char *)p) == *((unsigned char *)q);
+ case 'b': return *((signed char *)p) == *((signed char *)q);
+ case 'h': CMP_SINGLE(p, q, short); return equal;
+ case 'i': CMP_SINGLE(p, q, int); return equal;
+ case 'l': CMP_SINGLE(p, q, long); return equal;
+
+ /* boolean */
+ case '?': CMP_SINGLE(p, q, _Bool); return equal;
+
+ /* unsigned integers */
+ case 'H': CMP_SINGLE(p, q, unsigned short); return equal;
+ case 'I': CMP_SINGLE(p, q, unsigned int); return equal;
+ case 'L': CMP_SINGLE(p, q, unsigned long); return equal;
+
+ /* native 64-bit */
+ case 'q': CMP_SINGLE(p, q, long long); return equal;
+ case 'Q': CMP_SINGLE(p, q, unsigned long long); return equal;
+
+ /* ssize_t and size_t */
+ case 'n': CMP_SINGLE(p, q, Py_ssize_t); return equal;
+ case 'N': CMP_SINGLE(p, q, size_t); return equal;
+
+ /* floats */
+ /* XXX DBL_EPSILON? */
+ case 'f': CMP_SINGLE(p, q, float); return equal;
+ case 'd': CMP_SINGLE(p, q, double); return equal;
+
+ /* bytes object */
+ case 'c': return *p == *q;
+
+ /* pointer */
+ case 'P': CMP_SINGLE(p, q, void *); return equal;
+
+ /* use the struct module */
+ case '_':
+ assert(unpack_p);
+ assert(unpack_q);
+ return struct_unpack_cmp(p, q, unpack_p, unpack_q);
+ }
+
+ /* NOT REACHED */
+ PyErr_SetString(PyExc_RuntimeError,
+ "memoryview: internal error in richcompare");
+ return MV_COMPARE_EX;
+}
+
+/* Base case for recursive array comparisons. Assumption: ndim == 1. */
+static int
+cmp_base(const char *p, const char *q, const Py_ssize_t *shape,
+ const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
+ const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
+ char fmt, struct unpacker *unpack_p, struct unpacker *unpack_q)
+{
+ Py_ssize_t i;
+ int equal;
+
+ for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
+ const char *xp = ADJUST_PTR(p, psuboffsets, 0);
+ const char *xq = ADJUST_PTR(q, qsuboffsets, 0);
+ equal = unpack_cmp(xp, xq, fmt, unpack_p, unpack_q);
+ if (equal <= 0)
+ return equal;
+ }
+
+ return 1;
+}
+
+/* Recursively compare two multi-dimensional arrays that have the same
+ logical structure. Assumption: ndim >= 1. */
+static int
+cmp_rec(const char *p, const char *q,
+ Py_ssize_t ndim, const Py_ssize_t *shape,
+ const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
+ const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
+ char fmt, struct unpacker *unpack_p, struct unpacker *unpack_q)
+{
+ Py_ssize_t i;
+ int equal;
+
+ assert(ndim >= 1);
+ assert(shape != NULL);
+ assert(pstrides != NULL);
+ assert(qstrides != NULL);
+
+ if (ndim == 1) {
+ return cmp_base(p, q, shape,
+ pstrides, psuboffsets,
+ qstrides, qsuboffsets,
+ fmt, unpack_p, unpack_q);
+ }
+
+ for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
+ const char *xp = ADJUST_PTR(p, psuboffsets, 0);
+ const char *xq = ADJUST_PTR(q, qsuboffsets, 0);
+ equal = cmp_rec(xp, xq, ndim-1, shape+1,
+ pstrides+1, psuboffsets ? psuboffsets+1 : NULL,
+ qstrides+1, qsuboffsets ? qsuboffsets+1 : NULL,
+ fmt, unpack_p, unpack_q);
+ if (equal <= 0)
+ return equal;
+ }
+
+ return 1;
+}
+
+static PyObject *
+memory_richcompare(PyObject *v, PyObject *w, int op)
+{
+ PyObject *res;
+ Py_buffer wbuf, *vv;
+ Py_buffer *ww = NULL;
+ struct unpacker *unpack_v = NULL;
+ struct unpacker *unpack_w = NULL;
+ char vfmt, wfmt;
+ int equal = MV_COMPARE_NOT_IMPL;
+
+ if (op != Py_EQ && op != Py_NE)
+ goto result; /* Py_NotImplemented */
+
+ assert(PyMemoryView_Check(v));
+ if (BASE_INACCESSIBLE(v)) {
+ equal = (v == w);
+ goto result;
+ }
+ vv = VIEW_ADDR(v);
+
+ if (PyMemoryView_Check(w)) {
+ if (BASE_INACCESSIBLE(w)) {
+ equal = (v == w);
+ goto result;
+ }
+ ww = VIEW_ADDR(w);
+ }
+ else {
+ if (PyObject_GetBuffer(w, &wbuf, PyBUF_FULL_RO) < 0) {
+ PyErr_Clear();
+ goto result; /* Py_NotImplemented */
+ }
+ ww = &wbuf;
+ }
+
+ if (!equiv_shape(vv, ww)) {
+ PyErr_Clear();
+ equal = 0;
+ goto result;
+ }
+
+ /* Use fast unpacking for identical primitive C type formats. */
+ if (get_native_fmtchar(&vfmt, vv->format) < 0)
+ vfmt = '_';
+ if (get_native_fmtchar(&wfmt, ww->format) < 0)
+ wfmt = '_';
+ if (vfmt == '_' || wfmt == '_' || vfmt != wfmt) {
+ /* Use struct module unpacking. NOTE: Even for equal format strings,
+ memcmp() cannot be used for item comparison since it would give
+ incorrect results in the case of NaNs or uninitialized padding
+ bytes. */
+ vfmt = '_';
+ unpack_v = struct_get_unpacker(vv->format, vv->itemsize);
+ if (unpack_v == NULL) {
+ equal = fix_struct_error_int();
+ goto result;
+ }
+ unpack_w = struct_get_unpacker(ww->format, ww->itemsize);
+ if (unpack_w == NULL) {
+ equal = fix_struct_error_int();
+ goto result;
+ }
+ }
+
+ if (vv->ndim == 0) {
+ equal = unpack_cmp(vv->buf, ww->buf,
+ vfmt, unpack_v, unpack_w);
+ }
+ else if (vv->ndim == 1) {
+ equal = cmp_base(vv->buf, ww->buf, vv->shape,
+ vv->strides, vv->suboffsets,
+ ww->strides, ww->suboffsets,
+ vfmt, unpack_v, unpack_w);
+ }
+ else {
+ equal = cmp_rec(vv->buf, ww->buf, vv->ndim, vv->shape,
+ vv->strides, vv->suboffsets,
+ ww->strides, ww->suboffsets,
+ vfmt, unpack_v, unpack_w);
+ }
+
+result:
+ if (equal < 0) {
+ if (equal == MV_COMPARE_NOT_IMPL)
+ res = Py_NotImplemented;
+ else /* exception */
+ res = NULL;
+ }
+ else if ((equal && op == Py_EQ) || (!equal && op == Py_NE))
+ res = Py_True;
+ else
+ res = Py_False;
+
+ if (ww == &wbuf)
+ PyBuffer_Release(ww);
+
+ unpacker_free(unpack_v);
+ unpacker_free(unpack_w);
+
+ Py_XINCREF(res);
+ return res;
+}
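+
+/* Illustrative note: only == and != are supported; comparison is
+   element-wise after the shape check, for example:
+
+       memoryview(b"abc") == b"abc"      # True
+       memoryview(b"abc") == b"abcd"     # False (shapes differ)
+
+   Ordering comparisons return Py_NotImplemented, and released views compare
+   equal only to themselves. */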
+
+/**************************************************************************/
+/* Hash */
+/**************************************************************************/
+
+static Py_hash_t
+memory_hash(PyMemoryViewObject *self)
+{
+ if (self->hash == -1) {
+ Py_buffer *view = &self->view;
+ char *mem = view->buf;
+ Py_ssize_t ret;
+ char fmt;
+
+ CHECK_RELEASED_INT(self);
+
+ if (!view->readonly) {
+ PyErr_SetString(PyExc_ValueError,
+ "cannot hash writable memoryview object");
+ return -1;
+ }
+ ret = get_native_fmtchar(&fmt, view->format);
+ if (ret < 0 || !IS_BYTE_FORMAT(fmt)) {
+ PyErr_SetString(PyExc_ValueError,
+ "memoryview: hashing is restricted to formats 'B', 'b' or 'c'");
+ return -1;
+ }
+ if (view->obj != NULL && PyObject_Hash(view->obj) == -1) {
+ /* Keep the original error message */
+ return -1;
+ }
+
+ if (!MV_C_CONTIGUOUS(self->flags)) {
+ mem = PyMem_Malloc(view->len);
+ if (mem == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ if (buffer_to_contiguous(mem, view, 'C') < 0) {
+ PyMem_Free(mem);
+ return -1;
+ }
+ }
+
+ /* Can't fail */
+ self->hash = _Py_HashBytes(mem, view->len);
+
+ if (mem != view->buf)
+ PyMem_Free(mem);
+ }
+
+ return self->hash;
+}
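+
+/* Illustrative note: hashing is defined only for read-only views with
+   format 'B', 'b' or 'c', and matches the hash of the equivalent bytes
+   object, for example:
+
+       hash(memoryview(b"abc")) == hash(b"abc")    # True
+       hash(memoryview(bytearray(b"abc")))         # ValueError: writable view
+
+   The result is cached in self->hash after the first call. */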
+
+
+/**************************************************************************/
+/* getters */
+/**************************************************************************/
+
+static PyObject *
+_IntTupleFromSsizet(int len, Py_ssize_t *vals)
+{
+ int i;
+ PyObject *o;
+ PyObject *intTuple;
+
+ if (vals == NULL)
+ return PyTuple_New(0);
+
+ intTuple = PyTuple_New(len);
+ if (!intTuple)
+ return NULL;
+ for (i=0; i<len; i++) {
+ o = PyLong_FromSsize_t(vals[i]);
+ if (!o) {
+ Py_DECREF(intTuple);
+ return NULL;
+ }
+ PyTuple_SET_ITEM(intTuple, i, o);
+ }
+ return intTuple;
+}
+
+static PyObject *
+memory_obj_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ Py_buffer *view = &self->view;
+
+ CHECK_RELEASED(self);
+ if (view->obj == NULL) {
+ Py_RETURN_NONE;
+ }
+ Py_INCREF(view->obj);
+ return view->obj;
+}
+
+static PyObject *
+memory_nbytes_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return PyLong_FromSsize_t(self->view.len);
+}
+
+static PyObject *
+memory_format_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return PyUnicode_FromString(self->view.format);
+}
+
+static PyObject *
+memory_itemsize_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return PyLong_FromSsize_t(self->view.itemsize);
+}
+
+static PyObject *
+memory_shape_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return _IntTupleFromSsizet(self->view.ndim, self->view.shape);
+}
+
+static PyObject *
+memory_strides_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return _IntTupleFromSsizet(self->view.ndim, self->view.strides);
+}
+
+static PyObject *
+memory_suboffsets_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets);
+}
+
+static PyObject *
+memory_readonly_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return PyBool_FromLong(self->view.readonly);
+}
+
+static PyObject *
+memory_ndim_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
+{
+ CHECK_RELEASED(self);
+ return PyLong_FromLong(self->view.ndim);
+}
+
+static PyObject *
+memory_c_contiguous(PyMemoryViewObject *self, PyObject *dummy)
+{
+ CHECK_RELEASED(self);
+ return PyBool_FromLong(MV_C_CONTIGUOUS(self->flags));
+}
+
+static PyObject *
+memory_f_contiguous(PyMemoryViewObject *self, PyObject *dummy)
+{
+ CHECK_RELEASED(self);
+ return PyBool_FromLong(MV_F_CONTIGUOUS(self->flags));
+}
+
+static PyObject *
+memory_contiguous(PyMemoryViewObject *self, PyObject *dummy)
+{
+ CHECK_RELEASED(self);
+ return PyBool_FromLong(MV_ANY_CONTIGUOUS(self->flags));
+}
+
+PyDoc_STRVAR(memory_obj_doc,
+ "The underlying object of the memoryview.");
+PyDoc_STRVAR(memory_nbytes_doc,
+ "The amount of space in bytes that the array would use in\n"
+ " a contiguous representation.");
+PyDoc_STRVAR(memory_readonly_doc,
+ "A bool indicating whether the memory is read only.");
+PyDoc_STRVAR(memory_itemsize_doc,
+ "The size in bytes of each element of the memoryview.");
+PyDoc_STRVAR(memory_format_doc,
+ "A string containing the format (in struct module style)\n"
+ " for each element in the view.");
+PyDoc_STRVAR(memory_ndim_doc,
+ "An integer indicating how many dimensions of a multi-dimensional\n"
+ " array the memory represents.");
+PyDoc_STRVAR(memory_shape_doc,
+ "A tuple of ndim integers giving the shape of the memory\n"
+ " as an N-dimensional array.");
+PyDoc_STRVAR(memory_strides_doc,
+ "A tuple of ndim integers giving the size in bytes to access\n"
+ " each element for each dimension of the array.");
+PyDoc_STRVAR(memory_suboffsets_doc,
+ "A tuple of integers used internally for PIL-style arrays.");
+PyDoc_STRVAR(memory_c_contiguous_doc,
+ "A bool indicating whether the memory is C contiguous.");
+PyDoc_STRVAR(memory_f_contiguous_doc,
+ "A bool indicating whether the memory is Fortran contiguous.");
+PyDoc_STRVAR(memory_contiguous_doc,
+ "A bool indicating whether the memory is contiguous.");
+
+
+static PyGetSetDef memory_getsetlist[] = {
+ {"obj", (getter)memory_obj_get, NULL, memory_obj_doc},
+ {"nbytes", (getter)memory_nbytes_get, NULL, memory_nbytes_doc},
+ {"readonly", (getter)memory_readonly_get, NULL, memory_readonly_doc},
+ {"itemsize", (getter)memory_itemsize_get, NULL, memory_itemsize_doc},
+ {"format", (getter)memory_format_get, NULL, memory_format_doc},
+ {"ndim", (getter)memory_ndim_get, NULL, memory_ndim_doc},
+ {"shape", (getter)memory_shape_get, NULL, memory_shape_doc},
+ {"strides", (getter)memory_strides_get, NULL, memory_strides_doc},
+ {"suboffsets", (getter)memory_suboffsets_get, NULL, memory_suboffsets_doc},
+ {"c_contiguous", (getter)memory_c_contiguous, NULL, memory_c_contiguous_doc},
+ {"f_contiguous", (getter)memory_f_contiguous, NULL, memory_f_contiguous_doc},
+ {"contiguous", (getter)memory_contiguous, NULL, memory_contiguous_doc},
+ {NULL, NULL, NULL, NULL},
+};
+
+PyDoc_STRVAR(memory_release_doc,
+"release($self, /)\n--\n\
+\n\
+Release the underlying buffer exposed by the memoryview object.");
+PyDoc_STRVAR(memory_tobytes_doc,
+"tobytes($self, /)\n--\n\
+\n\
+Return the data in the buffer as a byte string.");
+PyDoc_STRVAR(memory_hex_doc,
+"hex($self, /)\n--\n\
+\n\
+Return the data in the buffer as a string of hexadecimal numbers.");
+PyDoc_STRVAR(memory_tolist_doc,
+"tolist($self, /)\n--\n\
+\n\
+Return the data in the buffer as a list of elements.");
+PyDoc_STRVAR(memory_cast_doc,
+"cast($self, /, format, *, shape)\n--\n\
+\n\
+Cast a memoryview to a new format or shape.");
+
+static PyMethodDef memory_methods[] = {
+ {"release", (PyCFunction)memory_release, METH_NOARGS, memory_release_doc},
+ {"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, memory_tobytes_doc},
+ {"hex", (PyCFunction)memory_hex, METH_NOARGS, memory_hex_doc},
+ {"tolist", (PyCFunction)memory_tolist, METH_NOARGS, memory_tolist_doc},
+ {"cast", (PyCFunction)memory_cast, METH_VARARGS|METH_KEYWORDS, memory_cast_doc},
+ {"__enter__", memory_enter, METH_NOARGS, NULL},
+ {"__exit__", memory_exit, METH_VARARGS, NULL},
+ {NULL, NULL}
+};
+
+
+PyTypeObject PyMemoryView_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "memoryview", /* tp_name */
+ offsetof(PyMemoryViewObject, ob_array), /* tp_basicsize */
+ sizeof(Py_ssize_t), /* tp_itemsize */
+ (destructor)memory_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ (reprfunc)memory_repr, /* tp_repr */
+ 0, /* tp_as_number */
+ &memory_as_sequence, /* tp_as_sequence */
+ &memory_as_mapping, /* tp_as_mapping */
+ (hashfunc)memory_hash, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ &memory_as_buffer, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
+ memory_doc, /* tp_doc */
+ (traverseproc)memory_traverse, /* tp_traverse */
+ (inquiry)memory_clear, /* tp_clear */
+ memory_richcompare, /* tp_richcompare */
+ offsetof(PyMemoryViewObject, weakreflist),/* tp_weaklistoffset */
+ 0, /* tp_iter */
+ 0, /* tp_iternext */
+ memory_methods, /* tp_methods */
+ 0, /* tp_members */
+ memory_getsetlist, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ memory_new, /* tp_new */
+};
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
new file mode 100644
index 00000000..97b307da
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
@@ -0,0 +1,2082 @@
+
+/* Generic object operations; and implementation of None */
+
+#include "Python.h"
+#include "frameobject.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+_Py_IDENTIFIER(Py_Repr);
+_Py_IDENTIFIER(__bytes__);
+_Py_IDENTIFIER(__dir__);
+_Py_IDENTIFIER(__isabstractmethod__);
+_Py_IDENTIFIER(builtins);
+
+#ifdef Py_REF_DEBUG
+Py_ssize_t _Py_RefTotal;
+
+Py_ssize_t
+_Py_GetRefTotal(void)
+{
+ PyObject *o;
+ Py_ssize_t total = _Py_RefTotal;
+ o = _PySet_Dummy;
+ if (o != NULL)
+ total -= o->ob_refcnt;
+ return total;
+}
+
+void
+_PyDebug_PrintTotalRefs(void) {
+ PyObject *xoptions, *value;
+ _Py_IDENTIFIER(showrefcount);
+
+ xoptions = PySys_GetXOptions();
+ if (xoptions == NULL)
+ return;
+ value = _PyDict_GetItemId(xoptions, &PyId_showrefcount);
+ if (value == Py_True)
+ fprintf(stderr,
+ "[%" PY_FORMAT_SIZE_T "d refs, "
+ "%" PY_FORMAT_SIZE_T "d blocks]\n",
+ _Py_GetRefTotal(), _Py_GetAllocatedBlocks());
+}
+#endif /* Py_REF_DEBUG */
+
+/* Object allocation routines used by NEWOBJ and NEWVAROBJ macros.
+ These are used by the individual routines for object creation.
+ Do not call them otherwise, they do not initialize the object! */
+
+#ifdef Py_TRACE_REFS
+/* Head of circular doubly-linked list of all objects. These are linked
+ * together via the _ob_prev and _ob_next members of a PyObject, which
+ * exist only in a Py_TRACE_REFS build.
+ */
+static PyObject refchain = {&refchain, &refchain};
+
+/* Insert op at the front of the list of all objects. If force is true,
+ * op is added even if _ob_prev and _ob_next are non-NULL already. If
+ * force is false and _ob_prev or _ob_next are non-NULL, do nothing.
+ * force should be true if and only if op points to freshly allocated,
+ * uninitialized memory, or you've unlinked op from the list and are
+ * relinking it into the front.
+ * Note that objects are normally added to the list via _Py_NewReference,
+ * which is called by PyObject_Init. Not all objects are initialized that
+ * way, though; exceptions include statically allocated type objects, and
+ * statically allocated singletons (like Py_True and Py_None).
+ */
+void
+_Py_AddToAllObjects(PyObject *op, int force)
+{
+#ifdef Py_DEBUG
+ if (!force) {
+ /* If it's initialized memory, op must be in or out of
+ * the list unambiguously.
+ */
+ assert((op->_ob_prev == NULL) == (op->_ob_next == NULL));
+ }
+#endif
+ if (force || op->_ob_prev == NULL) {
+ op->_ob_next = refchain._ob_next;
+ op->_ob_prev = &refchain;
+ refchain._ob_next->_ob_prev = op;
+ refchain._ob_next = op;
+ }
+}
+#endif /* Py_TRACE_REFS */
+
+#ifdef COUNT_ALLOCS
+static PyTypeObject *type_list;
+/* All types are added to type_list, at least when
+ they get one object created. That makes them
+ immortal, which unfortunately contributes to
+ garbage itself. If unlist_types_without_objects
+ is set, they will be removed from the type_list
+ once the last object is deallocated. */
+static int unlist_types_without_objects;
+extern Py_ssize_t tuple_zero_allocs, fast_tuple_allocs;
+extern Py_ssize_t quick_int_allocs, quick_neg_int_allocs;
+extern Py_ssize_t null_strings, one_strings;
+void
+dump_counts(FILE* f)
+{
+ PyTypeObject *tp;
+ PyObject *xoptions, *value;
+ _Py_IDENTIFIER(showalloccount);
+
+ xoptions = PySys_GetXOptions();
+ if (xoptions == NULL)
+ return;
+ value = _PyDict_GetItemId(xoptions, &PyId_showalloccount);
+ if (value != Py_True)
+ return;
+
+ for (tp = type_list; tp; tp = tp->tp_next)
+ fprintf(f, "%s alloc'd: %" PY_FORMAT_SIZE_T "d, "
+ "freed: %" PY_FORMAT_SIZE_T "d, "
+ "max in use: %" PY_FORMAT_SIZE_T "d\n",
+ tp->tp_name, tp->tp_allocs, tp->tp_frees,
+ tp->tp_maxalloc);
+ fprintf(f, "fast tuple allocs: %" PY_FORMAT_SIZE_T "d, "
+ "empty: %" PY_FORMAT_SIZE_T "d\n",
+ fast_tuple_allocs, tuple_zero_allocs);
+ fprintf(f, "fast int allocs: pos: %" PY_FORMAT_SIZE_T "d, "
+ "neg: %" PY_FORMAT_SIZE_T "d\n",
+ quick_int_allocs, quick_neg_int_allocs);
+ fprintf(f, "null strings: %" PY_FORMAT_SIZE_T "d, "
+ "1-strings: %" PY_FORMAT_SIZE_T "d\n",
+ null_strings, one_strings);
+}
+
+PyObject *
+get_counts(void)
+{
+ PyTypeObject *tp;
+ PyObject *result;
+ PyObject *v;
+
+ result = PyList_New(0);
+ if (result == NULL)
+ return NULL;
+ for (tp = type_list; tp; tp = tp->tp_next) {
+ v = Py_BuildValue("(snnn)", tp->tp_name, tp->tp_allocs,
+ tp->tp_frees, tp->tp_maxalloc);
+ if (v == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ if (PyList_Append(result, v) < 0) {
+ Py_DECREF(v);
+ Py_DECREF(result);
+ return NULL;
+ }
+ Py_DECREF(v);
+ }
+ return result;
+}
+
+void
+inc_count(PyTypeObject *tp)
+{
+ if (tp->tp_next == NULL && tp->tp_prev == NULL) {
+ /* first time; insert in linked list */
+ if (tp->tp_next != NULL) /* sanity check */
+ Py_FatalError("XXX inc_count sanity check");
+ if (type_list)
+ type_list->tp_prev = tp;
+ tp->tp_next = type_list;
+ /* Note that as of Python 2.2, heap-allocated type objects
+ * can go away, but this code requires that they stay alive
+ * until program exit. That's why we're careful with
+ * refcounts here. type_list gets a new reference to tp,
+ * while ownership of the reference type_list used to hold
+ * (if any) was transferred to tp->tp_next in the line above.
+ * tp is thus effectively immortal after this.
+ */
+ Py_INCREF(tp);
+ type_list = tp;
+#ifdef Py_TRACE_REFS
+ /* Also insert in the doubly-linked list of all objects,
+ * if not already there.
+ */
+ _Py_AddToAllObjects((PyObject *)tp, 0);
+#endif
+ }
+ tp->tp_allocs++;
+ if (tp->tp_allocs - tp->tp_frees > tp->tp_maxalloc)
+ tp->tp_maxalloc = tp->tp_allocs - tp->tp_frees;
+}
+
+void dec_count(PyTypeObject *tp)
+{
+ tp->tp_frees++;
+ if (unlist_types_without_objects &&
+ tp->tp_allocs == tp->tp_frees) {
+ /* unlink the type from type_list */
+ if (tp->tp_prev)
+ tp->tp_prev->tp_next = tp->tp_next;
+ else
+ type_list = tp->tp_next;
+ if (tp->tp_next)
+ tp->tp_next->tp_prev = tp->tp_prev;
+ tp->tp_next = tp->tp_prev = NULL;
+ Py_DECREF(tp);
+ }
+}
+
+#endif
+
+#ifdef Py_REF_DEBUG
+/* Log a fatal error; doesn't return. */
+void
+_Py_NegativeRefcount(const char *fname, int lineno, PyObject *op)
+{
+ char buf[300];
+
+ PyOS_snprintf(buf, sizeof(buf),
+ "%s:%i object at %p has negative ref count "
+ "%" PY_FORMAT_SIZE_T "d",
+ fname, lineno, op, op->ob_refcnt);
+ Py_FatalError(buf);
+}
+
+#endif /* Py_REF_DEBUG */
+
+void
+Py_IncRef(PyObject *o)
+{
+ Py_XINCREF(o);
+}
+
+void
+Py_DecRef(PyObject *o)
+{
+ Py_XDECREF(o);
+}
+
+PyObject *
+PyObject_Init(PyObject *op, PyTypeObject *tp)
+{
+ if (op == NULL)
+ return PyErr_NoMemory();
+ /* Any changes should be reflected in PyObject_INIT (objimpl.h) */
+ Py_TYPE(op) = tp;
+ _Py_NewReference(op);
+ return op;
+}
+
+PyVarObject *
+PyObject_InitVar(PyVarObject *op, PyTypeObject *tp, Py_ssize_t size)
+{
+ if (op == NULL)
+ return (PyVarObject *) PyErr_NoMemory();
+ /* Any changes should be reflected in PyObject_INIT_VAR */
+ op->ob_size = size;
+ Py_TYPE(op) = tp;
+ _Py_NewReference((PyObject *)op);
+ return op;
+}
+
+PyObject *
+_PyObject_New(PyTypeObject *tp)
+{
+ PyObject *op;
+ op = (PyObject *) PyObject_MALLOC(_PyObject_SIZE(tp));
+ if (op == NULL)
+ return PyErr_NoMemory();
+ return PyObject_INIT(op, tp);
+}
+
+PyVarObject *
+_PyObject_NewVar(PyTypeObject *tp, Py_ssize_t nitems)
+{
+ PyVarObject *op;
+ const size_t size = _PyObject_VAR_SIZE(tp, nitems);
+ op = (PyVarObject *) PyObject_MALLOC(size);
+ if (op == NULL)
+ return (PyVarObject *)PyErr_NoMemory();
+ return PyObject_INIT_VAR(op, tp, nitems);
+}
+
+void
+PyObject_CallFinalizer(PyObject *self)
+{
+ PyTypeObject *tp = Py_TYPE(self);
+
+ /* The former could happen on heaptypes created from the C API, e.g.
+ PyType_FromSpec(). */
+ if (!PyType_HasFeature(tp, Py_TPFLAGS_HAVE_FINALIZE) ||
+ tp->tp_finalize == NULL)
+ return;
+ /* tp_finalize should only be called once. */
+ if (PyType_IS_GC(tp) && _PyGC_FINALIZED(self))
+ return;
+
+ tp->tp_finalize(self);
+ if (PyType_IS_GC(tp))
+ _PyGC_SET_FINALIZED(self, 1);
+}
+
+int
+PyObject_CallFinalizerFromDealloc(PyObject *self)
+{
+ Py_ssize_t refcnt;
+
+ /* Temporarily resurrect the object. */
+ if (self->ob_refcnt != 0) {
+ Py_FatalError("PyObject_CallFinalizerFromDealloc called on "
+ "object with a non-zero refcount");
+ }
+ self->ob_refcnt = 1;
+
+ PyObject_CallFinalizer(self);
+
+ /* Undo the temporary resurrection; can't use DECREF here, it would
+ * cause a recursive call.
+ */
+ assert(self->ob_refcnt > 0);
+ if (--self->ob_refcnt == 0)
+ return 0; /* this is the normal path out */
+
+ /* tp_finalize resurrected it! Make it look like the original Py_DECREF
+ * never happened.
+ */
+ refcnt = self->ob_refcnt;
+ _Py_NewReference(self);
+ self->ob_refcnt = refcnt;
+
+ if (PyType_IS_GC(Py_TYPE(self))) {
+ assert(_PyGC_REFS(self) != _PyGC_REFS_UNTRACKED);
+ }
+ /* If Py_REF_DEBUG, _Py_NewReference bumped _Py_RefTotal, so
+ * we need to undo that. */
+ _Py_DEC_REFTOTAL;
+ /* If Py_TRACE_REFS, _Py_NewReference re-added self to the object
+ * chain, so no more to do there.
+ * If COUNT_ALLOCS, the original decref bumped tp_frees, and
+ * _Py_NewReference bumped tp_allocs: both of those need to be
+ * undone.
+ */
+#ifdef COUNT_ALLOCS
+ --Py_TYPE(self)->tp_frees;
+ --Py_TYPE(self)->tp_allocs;
+#endif
+ return -1;
+}
+
+int
+PyObject_Print(PyObject *op, FILE *fp, int flags)
+{
+ int ret = 0;
+ if (PyErr_CheckSignals())
+ return -1;
+#ifdef USE_STACKCHECK
+ if (PyOS_CheckStack()) {
+ PyErr_SetString(PyExc_MemoryError, "stack overflow");
+ return -1;
+ }
+#endif
+ clearerr(fp); /* Clear any previous error condition */
+ if (op == NULL) {
+ Py_BEGIN_ALLOW_THREADS
+ fprintf(fp, "<nil>");
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ if (op->ob_refcnt <= 0)
+ /* XXX(twouters) cast refcount to long until %zd is
+ universally available */
+ Py_BEGIN_ALLOW_THREADS
+ fprintf(fp, "<refcnt %ld at %p>",
+ (long)op->ob_refcnt, op);
+ Py_END_ALLOW_THREADS
+ else {
+ PyObject *s;
+ if (flags & Py_PRINT_RAW)
+ s = PyObject_Str(op);
+ else
+ s = PyObject_Repr(op);
+ if (s == NULL)
+ ret = -1;
+ else if (PyBytes_Check(s)) {
+ fwrite(PyBytes_AS_STRING(s), 1,
+ PyBytes_GET_SIZE(s), fp);
+ }
+ else if (PyUnicode_Check(s)) {
+ PyObject *t;
+ t = PyUnicode_AsEncodedString(s, "utf-8", "backslashreplace");
+ if (t == NULL) {
+ ret = -1;
+ }
+ else {
+ fwrite(PyBytes_AS_STRING(t), 1,
+ PyBytes_GET_SIZE(t), fp);
+ Py_DECREF(t);
+ }
+ }
+ else {
+ PyErr_Format(PyExc_TypeError,
+ "str() or repr() returned '%.100s'",
+ s->ob_type->tp_name);
+ ret = -1;
+ }
+ Py_XDECREF(s);
+ }
+ }
+ if (ret == 0) {
+ if (ferror(fp)) {
+ PyErr_SetFromErrno(PyExc_IOError);
+ clearerr(fp);
+ ret = -1;
+ }
+ }
+ return ret;
+}
+
+/* For debugging convenience. Set a breakpoint here and call it from your DLL */
+void
+_Py_BreakPoint(void)
+{
+}
+
+
+/* Heuristic check for whether the object's memory has been deallocated.
+   It relies on the debug hooks of the Python memory allocators, which fill
+   freed memory with DEADBYTE (0xDB).
+
+   The function can be used to prevent a segmentation fault when dereferencing
+   pointers such as 0xdbdbdbdbdbdbdbdb: such a pointer is very unlikely to be
+   mapped in memory. */
+int
+_PyObject_IsFreed(PyObject *op)
+{
+ uintptr_t ptr = (uintptr_t)op;
+ if (_PyMem_IsFreed(&ptr, sizeof(ptr))) {
+ return 1;
+ }
+ int freed = _PyMem_IsFreed(&op->ob_type, sizeof(op->ob_type));
+    /* Ignore op->ob_refcnt: its value can have been modified
+       by Py_INCREF() and Py_DECREF(). */
+#ifdef Py_TRACE_REFS
+ freed &= _PyMem_IsFreed(&op->_ob_next, sizeof(op->_ob_next));
+ freed &= _PyMem_IsFreed(&op->_ob_prev, sizeof(op->_ob_prev));
+#endif
+ return freed;
+}
+
+
+/* For debugging convenience. See Misc/gdbinit for some useful gdb hooks */
+void
+_PyObject_Dump(PyObject* op)
+{
+ if (op == NULL) {
+ fprintf(stderr, "<NULL object>\n");
+ fflush(stderr);
+ return;
+ }
+
+ if (_PyObject_IsFreed(op)) {
+ /* It seems like the object memory has been freed:
+ don't access it to prevent a segmentation fault. */
+ fprintf(stderr, "<freed object>\n");
+ return;
+ }
+
+ PyGILState_STATE gil;
+ PyObject *error_type, *error_value, *error_traceback;
+
+ fprintf(stderr, "object : ");
+ fflush(stderr);
+#ifdef WITH_THREAD
+ gil = PyGILState_Ensure();
+#endif
+ PyErr_Fetch(&error_type, &error_value, &error_traceback);
+ (void)PyObject_Print(op, stderr, 0);
+ fflush(stderr);
+ PyErr_Restore(error_type, error_value, error_traceback);
+#ifdef WITH_THREAD
+ PyGILState_Release(gil);
+#endif
+ /* XXX(twouters) cast refcount to long until %zd is
+ universally available */
+ fprintf(stderr, "\n"
+ "type : %s\n"
+ "refcount: %ld\n"
+ "address : %p\n",
+ Py_TYPE(op)==NULL ? "NULL" : Py_TYPE(op)->tp_name,
+ (long)op->ob_refcnt,
+ op);
+ fflush(stderr);
+}
+
+PyObject *
+PyObject_Repr(PyObject *v)
+{
+ PyObject *res;
+ if (PyErr_CheckSignals())
+ return NULL;
+#ifdef USE_STACKCHECK
+ if (PyOS_CheckStack()) {
+ PyErr_SetString(PyExc_MemoryError, "stack overflow");
+ return NULL;
+ }
+#endif
+ if (v == NULL)
+ return PyUnicode_FromString("<NULL>");
+ if (Py_TYPE(v)->tp_repr == NULL)
+ return PyUnicode_FromFormat("<%s object at %p>",
+ v->ob_type->tp_name, v);
+
+#ifdef Py_DEBUG
+ /* PyObject_Repr() must not be called with an exception set,
+ because it may clear it (directly or indirectly) and so the
+ caller loses its exception */
+ assert(!PyErr_Occurred());
+#endif
+
+ /* It is possible for a type to have a tp_repr representation that loops
+ infinitely. */
+ if (Py_EnterRecursiveCall(" while getting the repr of an object"))
+ return NULL;
+ res = (*v->ob_type->tp_repr)(v);
+ Py_LeaveRecursiveCall();
+ if (res == NULL)
+ return NULL;
+ if (!PyUnicode_Check(res)) {
+ PyErr_Format(PyExc_TypeError,
+ "__repr__ returned non-string (type %.200s)",
+ res->ob_type->tp_name);
+ Py_DECREF(res);
+ return NULL;
+ }
+#ifndef Py_DEBUG
+ if (PyUnicode_READY(res) < 0)
+ return NULL;
+#endif
+ return res;
+}
+
+PyObject *
+PyObject_Str(PyObject *v)
+{
+ PyObject *res;
+ if (PyErr_CheckSignals())
+ return NULL;
+#ifdef USE_STACKCHECK
+ if (PyOS_CheckStack()) {
+ PyErr_SetString(PyExc_MemoryError, "stack overflow");
+ return NULL;
+ }
+#endif
+ if (v == NULL)
+ return PyUnicode_FromString("<NULL>");
+ if (PyUnicode_CheckExact(v)) {
+#ifndef Py_DEBUG
+ if (PyUnicode_READY(v) < 0)
+ return NULL;
+#endif
+ Py_INCREF(v);
+ return v;
+ }
+ if (Py_TYPE(v)->tp_str == NULL)
+ return PyObject_Repr(v);
+
+#ifdef Py_DEBUG
+ /* PyObject_Str() must not be called with an exception set,
+ because it may clear it (directly or indirectly) and so the
+ caller loses its exception */
+ assert(!PyErr_Occurred());
+#endif
+
+ /* It is possible for a type to have a tp_str representation that loops
+ infinitely. */
+ if (Py_EnterRecursiveCall(" while getting the str of an object"))
+ return NULL;
+ res = (*Py_TYPE(v)->tp_str)(v);
+ Py_LeaveRecursiveCall();
+ if (res == NULL)
+ return NULL;
+ if (!PyUnicode_Check(res)) {
+ PyErr_Format(PyExc_TypeError,
+ "__str__ returned non-string (type %.200s)",
+ Py_TYPE(res)->tp_name);
+ Py_DECREF(res);
+ return NULL;
+ }
+#ifndef Py_DEBUG
+ if (PyUnicode_READY(res) < 0)
+ return NULL;
+#endif
+ assert(_PyUnicode_CheckConsistency(res, 1));
+ return res;
+}
+
+PyObject *
+PyObject_ASCII(PyObject *v)
+{
+ PyObject *repr, *ascii, *res;
+
+ repr = PyObject_Repr(v);
+ if (repr == NULL)
+ return NULL;
+
+ if (PyUnicode_IS_ASCII(repr))
+ return repr;
+
+ /* repr is guaranteed to be a PyUnicode object by PyObject_Repr */
+ ascii = _PyUnicode_AsASCIIString(repr, "backslashreplace");
+ Py_DECREF(repr);
+ if (ascii == NULL)
+ return NULL;
+
+ res = PyUnicode_DecodeASCII(
+ PyBytes_AS_STRING(ascii),
+ PyBytes_GET_SIZE(ascii),
+ NULL);
+
+ Py_DECREF(ascii);
+ return res;
+}
+
+PyObject *
+PyObject_Bytes(PyObject *v)
+{
+ PyObject *result, *func;
+
+ if (v == NULL)
+ return PyBytes_FromString("<NULL>");
+
+ if (PyBytes_CheckExact(v)) {
+ Py_INCREF(v);
+ return v;
+ }
+
+ func = _PyObject_LookupSpecial(v, &PyId___bytes__);
+ if (func != NULL) {
+ result = PyObject_CallFunctionObjArgs(func, NULL);
+ Py_DECREF(func);
+ if (result == NULL)
+ return NULL;
+ if (!PyBytes_Check(result)) {
+ PyErr_Format(PyExc_TypeError,
+ "__bytes__ returned non-bytes (type %.200s)",
+ Py_TYPE(result)->tp_name);
+ Py_DECREF(result);
+ return NULL;
+ }
+ return result;
+ }
+ else if (PyErr_Occurred())
+ return NULL;
+ return PyBytes_FromObject(v);
+}
+
+/* For Python 3.0.1 and later, the old three-way comparison has been
+ completely removed in favour of rich comparisons. PyObject_Compare() and
+ PyObject_Cmp() are gone, and the builtin cmp function no longer exists.
+ The old tp_compare slot has been renamed to tp_reserved, and should no
+ longer be used. Use tp_richcompare instead.
+
+ See (*) below for practical amendments.
+
+ tp_richcompare gets called with a first argument of the appropriate type
+ and a second object of an arbitrary type. We never do any kind of
+ coercion.
+
+ The tp_richcompare slot should return an object, as follows:
+
+ NULL if an exception occurred
+ NotImplemented if the requested comparison is not implemented
+ any other false value if the requested comparison is false
+ any other true value if the requested comparison is true
+
+ The PyObject_RichCompare[Bool]() wrappers raise TypeError when they get
+ NotImplemented.
+
+ (*) Practical amendments:
+
+ - If rich comparison returns NotImplemented, == and != are decided by
+ comparing the object pointer (i.e. falling back to the base object
+ implementation).
+
+*/
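+
+/* Illustrative sketch (hypothetical "Pair" extension type, not part of this
+ * file): a minimal tp_richcompare that follows the protocol described above,
+ * returning Py_NotImplemented for operands and operators it does not handle:
+ *
+ *   static PyObject *
+ *   pair_richcompare(PyObject *self, PyObject *other, int op)
+ *   {
+ *       int equal;
+ *       if (!Pair_Check(other) || (op != Py_EQ && op != Py_NE))
+ *           Py_RETURN_NOTIMPLEMENTED;
+ *       equal = ((PairObject *)self)->first == ((PairObject *)other)->first
+ *               && ((PairObject *)self)->second == ((PairObject *)other)->second;
+ *       if (op == Py_NE)
+ *           equal = !equal;
+ *       return PyBool_FromLong(equal);
+ *   }
+ */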
+
+/* Map rich comparison operators to their swapped version, e.g. LT <--> GT */
+int _Py_SwappedOp[] = {Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, Py_LE};
+
+static const char * const opstrings[] = {"<", "<=", "==", "!=", ">", ">="};
+
+/* Perform a rich comparison, raising TypeError when the requested comparison
+ operator is not supported. */
+static PyObject *
+do_richcompare(PyObject *v, PyObject *w, int op)
+{
+ richcmpfunc f;
+ PyObject *res;
+ int checked_reverse_op = 0;
+
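+    /* Give the right operand's reflected comparison the first try when its
+       type is a proper subclass of the left operand's type, so that the
+       subclass can take precedence. */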
+ if (v->ob_type != w->ob_type &&
+ PyType_IsSubtype(w->ob_type, v->ob_type) &&
+ (f = w->ob_type->tp_richcompare) != NULL) {
+ checked_reverse_op = 1;
+ res = (*f)(w, v, _Py_SwappedOp[op]);
+ if (res != Py_NotImplemented)
+ return res;
+ Py_DECREF(res);
+ }
+ if ((f = v->ob_type->tp_richcompare) != NULL) {
+ res = (*f)(v, w, op);
+ if (res != Py_NotImplemented)
+ return res;
+ Py_DECREF(res);
+ }
+ if (!checked_reverse_op && (f = w->ob_type->tp_richcompare) != NULL) {
+ res = (*f)(w, v, _Py_SwappedOp[op]);
+ if (res != Py_NotImplemented)
+ return res;
+ Py_DECREF(res);
+ }
+ /* If neither object implements it, provide a sensible default
+ for == and !=, but raise an exception for ordering. */
+ switch (op) {
+ case Py_EQ:
+ res = (v == w) ? Py_True : Py_False;
+ break;
+ case Py_NE:
+ res = (v != w) ? Py_True : Py_False;
+ break;
+ default:
+ PyErr_Format(PyExc_TypeError,
+ "'%s' not supported between instances of '%.100s' and '%.100s'",
+ opstrings[op],
+ v->ob_type->tp_name,
+ w->ob_type->tp_name);
+ return NULL;
+ }
+ Py_INCREF(res);
+ return res;
+}
+
+/* Perform a rich comparison with object result. This wraps do_richcompare()
+ with a check for NULL arguments and a recursion check. */
+
+PyObject *
+PyObject_RichCompare(PyObject *v, PyObject *w, int op)
+{
+ PyObject *res;
+
+ assert(Py_LT <= op && op <= Py_GE);
+ if (v == NULL || w == NULL) {
+ if (!PyErr_Occurred())
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (Py_EnterRecursiveCall(" in comparison"))
+ return NULL;
+ res = do_richcompare(v, w, op);
+ Py_LeaveRecursiveCall();
+ return res;
+}
+
+/* Perform a rich comparison with integer result. This wraps
+ PyObject_RichCompare(), returning -1 for error, 0 for false, 1 for true. */
+int
+PyObject_RichCompareBool(PyObject *v, PyObject *w, int op)
+{
+ PyObject *res;
+ int ok;
+
+ /* Quick result when objects are the same.
+ Guarantees that identity implies equality. */
+ if (v == w) {
+ if (op == Py_EQ)
+ return 1;
+ else if (op == Py_NE)
+ return 0;
+ }
+
+ res = PyObject_RichCompare(v, w, op);
+ if (res == NULL)
+ return -1;
+ if (PyBool_Check(res))
+ ok = (res == Py_True);
+ else
+ ok = PyObject_IsTrue(res);
+ Py_DECREF(res);
+ return ok;
+}
+
+Py_hash_t
+PyObject_HashNotImplemented(PyObject *v)
+{
+ PyErr_Format(PyExc_TypeError, "unhashable type: '%.200s'",
+ Py_TYPE(v)->tp_name);
+ return -1;
+}
+
+Py_hash_t
+PyObject_Hash(PyObject *v)
+{
+ PyTypeObject *tp = Py_TYPE(v);
+ if (tp->tp_hash != NULL)
+ return (*tp->tp_hash)(v);
+ /* To keep to the general practice that inheriting
+ * solely from object in C code should work without
+ * an explicit call to PyType_Ready, we implicitly call
+ * PyType_Ready here and then check the tp_hash slot again
+ */
+ if (tp->tp_dict == NULL) {
+ if (PyType_Ready(tp) < 0)
+ return -1;
+ if (tp->tp_hash != NULL)
+ return (*tp->tp_hash)(v);
+ }
+ /* Otherwise, the object can't be hashed */
+ return PyObject_HashNotImplemented(v);
+}
+
+PyObject *
+PyObject_GetAttrString(PyObject *v, const char *name)
+{
+ PyObject *w, *res;
+
+ if (Py_TYPE(v)->tp_getattr != NULL)
+ return (*Py_TYPE(v)->tp_getattr)(v, (char*)name);
+ w = PyUnicode_InternFromString(name);
+ if (w == NULL)
+ return NULL;
+ res = PyObject_GetAttr(v, w);
+ Py_DECREF(w);
+ return res;
+}
+
+int
+PyObject_HasAttrString(PyObject *v, const char *name)
+{
+ PyObject *res = PyObject_GetAttrString(v, name);
+ if (res != NULL) {
+ Py_DECREF(res);
+ return 1;
+ }
+ PyErr_Clear();
+ return 0;
+}
+
+int
+PyObject_SetAttrString(PyObject *v, const char *name, PyObject *w)
+{
+ PyObject *s;
+ int res;
+
+ if (Py_TYPE(v)->tp_setattr != NULL)
+ return (*Py_TYPE(v)->tp_setattr)(v, (char*)name, w);
+ s = PyUnicode_InternFromString(name);
+ if (s == NULL)
+ return -1;
+ res = PyObject_SetAttr(v, s, w);
+ Py_XDECREF(s);
+ return res;
+}
+
+int
+_PyObject_IsAbstract(PyObject *obj)
+{
+ int res;
+ PyObject* isabstract;
+
+ if (obj == NULL)
+ return 0;
+
+ isabstract = _PyObject_GetAttrId(obj, &PyId___isabstractmethod__);
+ if (isabstract == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+ PyErr_Clear();
+ return 0;
+ }
+ return -1;
+ }
+ res = PyObject_IsTrue(isabstract);
+ Py_DECREF(isabstract);
+ return res;
+}
+
+PyObject *
+_PyObject_GetAttrId(PyObject *v, _Py_Identifier *name)
+{
+ PyObject *result;
+ PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
+ if (!oname)
+ return NULL;
+ result = PyObject_GetAttr(v, oname);
+ return result;
+}
+
+int
+_PyObject_HasAttrId(PyObject *v, _Py_Identifier *name)
+{
+ int result;
+ PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
+ if (!oname)
+ return -1;
+ result = PyObject_HasAttr(v, oname);
+ return result;
+}
+
+int
+_PyObject_SetAttrId(PyObject *v, _Py_Identifier *name, PyObject *w)
+{
+ int result;
+ PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
+ if (!oname)
+ return -1;
+ result = PyObject_SetAttr(v, oname, w);
+ return result;
+}
+
+PyObject *
+PyObject_GetAttr(PyObject *v, PyObject *name)
+{
+ PyTypeObject *tp = Py_TYPE(v);
+
+ if (!PyUnicode_Check(name)) {
+ PyErr_Format(PyExc_TypeError,
+ "attribute name must be string, not '%.200s'",
+ name->ob_type->tp_name);
+ return NULL;
+ }
+ if (tp->tp_getattro != NULL)
+ return (*tp->tp_getattro)(v, name);
+ if (tp->tp_getattr != NULL) {
+ char *name_str = PyUnicode_AsUTF8(name);
+ if (name_str == NULL)
+ return NULL;
+ return (*tp->tp_getattr)(v, name_str);
+ }
+ PyErr_Format(PyExc_AttributeError,
+ "'%.50s' object has no attribute '%U'",
+ tp->tp_name, name);
+ return NULL;
+}
+
+int
+PyObject_HasAttr(PyObject *v, PyObject *name)
+{
+ PyObject *res = PyObject_GetAttr(v, name);
+ if (res != NULL) {
+ Py_DECREF(res);
+ return 1;
+ }
+ PyErr_Clear();
+ return 0;
+}
+
+int
+PyObject_SetAttr(PyObject *v, PyObject *name, PyObject *value)
+{
+ PyTypeObject *tp = Py_TYPE(v);
+ int err;
+
+ if (!PyUnicode_Check(name)) {
+ PyErr_Format(PyExc_TypeError,
+ "attribute name must be string, not '%.200s'",
+ name->ob_type->tp_name);
+ return -1;
+ }
+ Py_INCREF(name);
+
+ PyUnicode_InternInPlace(&name);
+ if (tp->tp_setattro != NULL) {
+ err = (*tp->tp_setattro)(v, name, value);
+ Py_DECREF(name);
+ return err;
+ }
+ if (tp->tp_setattr != NULL) {
+ char *name_str = PyUnicode_AsUTF8(name);
+ if (name_str == NULL)
+ return -1;
+ err = (*tp->tp_setattr)(v, name_str, value);
+ Py_DECREF(name);
+ return err;
+ }
+ Py_DECREF(name);
+ assert(name->ob_refcnt >= 1);
+ if (tp->tp_getattr == NULL && tp->tp_getattro == NULL)
+ PyErr_Format(PyExc_TypeError,
+ "'%.100s' object has no attributes "
+ "(%s .%U)",
+ tp->tp_name,
+ value==NULL ? "del" : "assign to",
+ name);
+ else
+ PyErr_Format(PyExc_TypeError,
+ "'%.100s' object has only read-only attributes "
+ "(%s .%U)",
+ tp->tp_name,
+ value==NULL ? "del" : "assign to",
+ name);
+ return -1;
+}
+
+/* Helper to get a pointer to an object's __dict__ slot, if any */
+
+PyObject **
+_PyObject_GetDictPtr(PyObject *obj)
+{
+ Py_ssize_t dictoffset;
+ PyTypeObject *tp = Py_TYPE(obj);
+
+ dictoffset = tp->tp_dictoffset;
+ if (dictoffset == 0)
+ return NULL;
+ if (dictoffset < 0) {
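+        /* A negative tp_dictoffset means the __dict__ pointer is stored after
+           the object's variable-size data; compute its actual offset from the
+           end of the allocation. */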
+ Py_ssize_t tsize;
+ size_t size;
+
+ tsize = ((PyVarObject *)obj)->ob_size;
+ if (tsize < 0)
+ tsize = -tsize;
+ size = _PyObject_VAR_SIZE(tp, tsize);
+
+ dictoffset += (long)size;
+ assert(dictoffset > 0);
+ assert(dictoffset % SIZEOF_VOID_P == 0);
+ }
+ return (PyObject **) ((char *)obj + dictoffset);
+}
+
+PyObject *
+PyObject_SelfIter(PyObject *obj)
+{
+ Py_INCREF(obj);
+ return obj;
+}
+
+/* Convenience function to get a builtin from its name */
+PyObject *
+_PyObject_GetBuiltin(const char *name)
+{
+ PyObject *mod_name, *mod, *attr;
+
+ mod_name = _PyUnicode_FromId(&PyId_builtins); /* borrowed */
+ if (mod_name == NULL)
+ return NULL;
+ mod = PyImport_Import(mod_name);
+ if (mod == NULL)
+ return NULL;
+ attr = PyObject_GetAttrString(mod, name);
+ Py_DECREF(mod);
+ return attr;
+}
+
+/* Helper used when the __next__ method is removed from a type:
+ tp_iternext is never NULL and can be safely called without checking
+ on every iteration.
+ */
+
+PyObject *
+_PyObject_NextNotImplemented(PyObject *self)
+{
+ PyErr_Format(PyExc_TypeError,
+ "'%.200s' object is not iterable",
+ Py_TYPE(self)->tp_name);
+ return NULL;
+}
+
+/* Generic GetAttr functions - put these in your tp_[gs]etattro slot */
+
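+/* Attribute lookup order implemented below: data descriptors found on the
+   type take precedence over the instance __dict__, which in turn takes
+   precedence over non-data descriptors and plain class attributes. */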
+PyObject *
+_PyObject_GenericGetAttrWithDict(PyObject *obj, PyObject *name, PyObject *dict)
+{
+ PyTypeObject *tp = Py_TYPE(obj);
+ PyObject *descr = NULL;
+ PyObject *res = NULL;
+ descrgetfunc f;
+ Py_ssize_t dictoffset;
+ PyObject **dictptr;
+
+ if (!PyUnicode_Check(name)){
+ PyErr_Format(PyExc_TypeError,
+ "attribute name must be string, not '%.200s'",
+ name->ob_type->tp_name);
+ return NULL;
+ }
+ Py_INCREF(name);
+
+ if (tp->tp_dict == NULL) {
+ if (PyType_Ready(tp) < 0)
+ goto done;
+ }
+
+ descr = _PyType_Lookup(tp, name);
+
+ f = NULL;
+ if (descr != NULL) {
+ Py_INCREF(descr);
+ f = descr->ob_type->tp_descr_get;
+ if (f != NULL && PyDescr_IsData(descr)) {
+ res = f(descr, obj, (PyObject *)obj->ob_type);
+ goto done;
+ }
+ }
+
+ if (dict == NULL) {
+ /* Inline _PyObject_GetDictPtr */
+ dictoffset = tp->tp_dictoffset;
+ if (dictoffset != 0) {
+ if (dictoffset < 0) {
+ Py_ssize_t tsize;
+ size_t size;
+
+ tsize = ((PyVarObject *)obj)->ob_size;
+ if (tsize < 0)
+ tsize = -tsize;
+ size = _PyObject_VAR_SIZE(tp, tsize);
+ assert(size <= PY_SSIZE_T_MAX);
+
+ dictoffset += (Py_ssize_t)size;
+ assert(dictoffset > 0);
+ assert(dictoffset % SIZEOF_VOID_P == 0);
+ }
+ dictptr = (PyObject **) ((char *)obj + dictoffset);
+ dict = *dictptr;
+ }
+ }
+ if (dict != NULL) {
+ Py_INCREF(dict);
+ res = PyDict_GetItem(dict, name);
+ if (res != NULL) {
+ Py_INCREF(res);
+ Py_DECREF(dict);
+ goto done;
+ }
+ Py_DECREF(dict);
+ }
+
+ if (f != NULL) {
+ res = f(descr, obj, (PyObject *)Py_TYPE(obj));
+ goto done;
+ }
+
+ if (descr != NULL) {
+ res = descr;
+ descr = NULL;
+ goto done;
+ }
+
+ PyErr_Format(PyExc_AttributeError,
+ "'%.50s' object has no attribute '%U'",
+ tp->tp_name, name);
+ done:
+ Py_XDECREF(descr);
+ Py_DECREF(name);
+ return res;
+}
+
+PyObject *
+PyObject_GenericGetAttr(PyObject *obj, PyObject *name)
+{
+ return _PyObject_GenericGetAttrWithDict(obj, name, NULL);
+}
+
+int
+_PyObject_GenericSetAttrWithDict(PyObject *obj, PyObject *name,
+ PyObject *value, PyObject *dict)
+{
+ PyTypeObject *tp = Py_TYPE(obj);
+ PyObject *descr;
+ descrsetfunc f;
+ PyObject **dictptr;
+ int res = -1;
+
+ if (!PyUnicode_Check(name)){
+ PyErr_Format(PyExc_TypeError,
+ "attribute name must be string, not '%.200s'",
+ name->ob_type->tp_name);
+ return -1;
+ }
+
+ if (tp->tp_dict == NULL && PyType_Ready(tp) < 0)
+ return -1;
+
+ Py_INCREF(name);
+
+ descr = _PyType_Lookup(tp, name);
+
+ if (descr != NULL) {
+ Py_INCREF(descr);
+ f = descr->ob_type->tp_descr_set;
+ if (f != NULL) {
+ res = f(descr, obj, value);
+ goto done;
+ }
+ }
+
+ if (dict == NULL) {
+ dictptr = _PyObject_GetDictPtr(obj);
+ if (dictptr == NULL) {
+ if (descr == NULL) {
+ PyErr_Format(PyExc_AttributeError,
+ "'%.100s' object has no attribute '%U'",
+ tp->tp_name, name);
+ }
+ else {
+ PyErr_Format(PyExc_AttributeError,
+ "'%.50s' object attribute '%U' is read-only",
+ tp->tp_name, name);
+ }
+ goto done;
+ }
+ res = _PyObjectDict_SetItem(tp, dictptr, name, value);
+ }
+ else {
+ Py_INCREF(dict);
+ if (value == NULL)
+ res = PyDict_DelItem(dict, name);
+ else
+ res = PyDict_SetItem(dict, name, value);
+ Py_DECREF(dict);
+ }
+ if (res < 0 && PyErr_ExceptionMatches(PyExc_KeyError))
+ PyErr_SetObject(PyExc_AttributeError, name);
+
+ done:
+ Py_XDECREF(descr);
+ Py_DECREF(name);
+ return res;
+}
+
+int
+PyObject_GenericSetAttr(PyObject *obj, PyObject *name, PyObject *value)
+{
+ return _PyObject_GenericSetAttrWithDict(obj, name, value, NULL);
+}
+
+int
+PyObject_GenericSetDict(PyObject *obj, PyObject *value, void *context)
+{
+ PyObject **dictptr = _PyObject_GetDictPtr(obj);
+ if (dictptr == NULL) {
+ PyErr_SetString(PyExc_AttributeError,
+ "This object has no __dict__");
+ return -1;
+ }
+ if (value == NULL) {
+ PyErr_SetString(PyExc_TypeError, "cannot delete __dict__");
+ return -1;
+ }
+ if (!PyDict_Check(value)) {
+ PyErr_Format(PyExc_TypeError,
+ "__dict__ must be set to a dictionary, "
+ "not a '%.200s'", Py_TYPE(value)->tp_name);
+ return -1;
+ }
+ Py_INCREF(value);
+ Py_XSETREF(*dictptr, value);
+ return 0;
+}
+
+
+/* Test a value used as a condition, e.g., in a for or if statement.
+   Return -1 if an error occurred. */
+
+int
+PyObject_IsTrue(PyObject *v)
+{
+ Py_ssize_t res;
+ if (v == Py_True)
+ return 1;
+ if (v == Py_False)
+ return 0;
+ if (v == Py_None)
+ return 0;
+ else if (v->ob_type->tp_as_number != NULL &&
+ v->ob_type->tp_as_number->nb_bool != NULL)
+ res = (*v->ob_type->tp_as_number->nb_bool)(v);
+ else if (v->ob_type->tp_as_mapping != NULL &&
+ v->ob_type->tp_as_mapping->mp_length != NULL)
+ res = (*v->ob_type->tp_as_mapping->mp_length)(v);
+ else if (v->ob_type->tp_as_sequence != NULL &&
+ v->ob_type->tp_as_sequence->sq_length != NULL)
+ res = (*v->ob_type->tp_as_sequence->sq_length)(v);
+ else
+ return 1;
+ /* if it is negative, it should be either -1 or -2 */
+ return (res > 0) ? 1 : Py_SAFE_DOWNCAST(res, Py_ssize_t, int);
+}
+
+/* equivalent of 'not v'
+ Return -1 if an error occurred */
+
+int
+PyObject_Not(PyObject *v)
+{
+ int res;
+ res = PyObject_IsTrue(v);
+ if (res < 0)
+ return res;
+ return res == 0;
+}
+
+/* Test whether an object can be called */
+
+int
+PyCallable_Check(PyObject *x)
+{
+ if (x == NULL)
+ return 0;
+ return x->ob_type->tp_call != NULL;
+}
+
+
+/* Helper for PyObject_Dir without arguments: returns the local scope. */
+static PyObject *
+_dir_locals(void)
+{
+ PyObject *names;
+ PyObject *locals;
+
+ locals = PyEval_GetLocals();
+ if (locals == NULL)
+ return NULL;
+
+ names = PyMapping_Keys(locals);
+ if (!names)
+ return NULL;
+ if (!PyList_Check(names)) {
+ PyErr_Format(PyExc_TypeError,
+ "dir(): expected keys() of locals to be a list, "
+ "not '%.200s'", Py_TYPE(names)->tp_name);
+ Py_DECREF(names);
+ return NULL;
+ }
+ if (PyList_Sort(names)) {
+ Py_DECREF(names);
+ return NULL;
+ }
+ /* the locals don't need to be DECREF'd */
+ return names;
+}
+
+/* Helper for PyObject_Dir: object introspection. */
+static PyObject *
+_dir_object(PyObject *obj)
+{
+ PyObject *result, *sorted;
+ PyObject *dirfunc = _PyObject_LookupSpecial(obj, &PyId___dir__);
+
+ assert(obj);
+ if (dirfunc == NULL) {
+ if (!PyErr_Occurred())
+ PyErr_SetString(PyExc_TypeError, "object does not provide __dir__");
+ return NULL;
+ }
+ /* use __dir__ */
+ result = PyObject_CallFunctionObjArgs(dirfunc, NULL);
+ Py_DECREF(dirfunc);
+ if (result == NULL)
+ return NULL;
+ /* return sorted(result) */
+ sorted = PySequence_List(result);
+ Py_DECREF(result);
+ if (sorted == NULL)
+ return NULL;
+ if (PyList_Sort(sorted)) {
+ Py_DECREF(sorted);
+ return NULL;
+ }
+ return sorted;
+}
+
+/* Implementation of dir() -- if obj is NULL, returns the names in the current
+ (local) scope. Otherwise, performs introspection of the object: returns a
+ sorted list of attribute names (supposedly) accessible from the object
+*/
+PyObject *
+PyObject_Dir(PyObject *obj)
+{
+ return (obj == NULL) ? _dir_locals() : _dir_object(obj);
+}
+
+/*
+None is a non-NULL undefined value.
+There is (and should be!) no way to create other objects of this type,
+so there is exactly one (which is indestructible, by the way).
+*/
+
+/* ARGSUSED */
+static PyObject *
+none_repr(PyObject *op)
+{
+ return PyUnicode_FromString("None");
+}
+
+/* ARGSUSED */
+static void
+none_dealloc(PyObject* ignore)
+{
+ /* This should never get called, but we also don't want to SEGV if
+ * we accidentally decref None out of existence.
+ */
+ Py_FatalError("deallocating None");
+}
+
+static PyObject *
+none_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
+{
+ if (PyTuple_GET_SIZE(args) || (kwargs && PyDict_Size(kwargs))) {
+ PyErr_SetString(PyExc_TypeError, "NoneType takes no arguments");
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+static int
+none_bool(PyObject *v)
+{
+ return 0;
+}
+
+static PyNumberMethods none_as_number = {
+ 0, /* nb_add */
+ 0, /* nb_subtract */
+ 0, /* nb_multiply */
+ 0, /* nb_remainder */
+ 0, /* nb_divmod */
+ 0, /* nb_power */
+ 0, /* nb_negative */
+ 0, /* nb_positive */
+ 0, /* nb_absolute */
+ (inquiry)none_bool, /* nb_bool */
+ 0, /* nb_invert */
+ 0, /* nb_lshift */
+ 0, /* nb_rshift */
+ 0, /* nb_and */
+ 0, /* nb_xor */
+ 0, /* nb_or */
+ 0, /* nb_int */
+ 0, /* nb_reserved */
+ 0, /* nb_float */
+ 0, /* nb_inplace_add */
+ 0, /* nb_inplace_subtract */
+ 0, /* nb_inplace_multiply */
+ 0, /* nb_inplace_remainder */
+ 0, /* nb_inplace_power */
+ 0, /* nb_inplace_lshift */
+ 0, /* nb_inplace_rshift */
+ 0, /* nb_inplace_and */
+ 0, /* nb_inplace_xor */
+ 0, /* nb_inplace_or */
+ 0, /* nb_floor_divide */
+ 0, /* nb_true_divide */
+ 0, /* nb_inplace_floor_divide */
+ 0, /* nb_inplace_true_divide */
+ 0, /* nb_index */
+};
+
+PyTypeObject _PyNone_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "NoneType",
+ 0,
+ 0,
+ none_dealloc, /*tp_dealloc*/ /*never called*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+ 0, /*tp_reserved*/
+ none_repr, /*tp_repr*/
+ &none_as_number, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash */
+ 0, /*tp_call */
+ 0, /*tp_str */
+ 0, /*tp_getattro */
+ 0, /*tp_setattro */
+ 0, /*tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /*tp_flags */
+ 0, /*tp_doc */
+ 0, /*tp_traverse */
+ 0, /*tp_clear */
+ 0, /*tp_richcompare */
+ 0, /*tp_weaklistoffset */
+ 0, /*tp_iter */
+ 0, /*tp_iternext */
+ 0, /*tp_methods */
+ 0, /*tp_members */
+ 0, /*tp_getset */
+ 0, /*tp_base */
+ 0, /*tp_dict */
+ 0, /*tp_descr_get */
+ 0, /*tp_descr_set */
+ 0, /*tp_dictoffset */
+ 0, /*tp_init */
+ 0, /*tp_alloc */
+ none_new, /*tp_new */
+};
+
+PyObject _Py_NoneStruct = {
+ _PyObject_EXTRA_INIT
+ 1, &_PyNone_Type
+};
+
+/* NotImplemented is an object that can be used to signal that an
+ operation is not implemented for the given type combination. */
+
+static PyObject *
+NotImplemented_repr(PyObject *op)
+{
+ return PyUnicode_FromString("NotImplemented");
+}
+
+static PyObject *
+NotImplemented_reduce(PyObject *op)
+{
+ return PyUnicode_FromString("NotImplemented");
+}
+
+static PyMethodDef notimplemented_methods[] = {
+ {"__reduce__", (PyCFunction)NotImplemented_reduce, METH_NOARGS, NULL},
+ {NULL, NULL}
+};
+
+static PyObject *
+notimplemented_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
+{
+ if (PyTuple_GET_SIZE(args) || (kwargs && PyDict_Size(kwargs))) {
+ PyErr_SetString(PyExc_TypeError, "NotImplementedType takes no arguments");
+ return NULL;
+ }
+ Py_RETURN_NOTIMPLEMENTED;
+}
+
+static void
+notimplemented_dealloc(PyObject* ignore)
+{
+ /* This should never get called, but we also don't want to SEGV if
+ * we accidentally decref NotImplemented out of existence.
+ */
+ Py_FatalError("deallocating NotImplemented");
+}
+
+PyTypeObject _PyNotImplemented_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "NotImplementedType",
+ 0,
+ 0,
+ notimplemented_dealloc, /*tp_dealloc*/ /*never called*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+ 0, /*tp_reserved*/
+ NotImplemented_repr, /*tp_repr*/
+ 0, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash */
+ 0, /*tp_call */
+ 0, /*tp_str */
+ 0, /*tp_getattro */
+ 0, /*tp_setattro */
+ 0, /*tp_as_buffer */
+ Py_TPFLAGS_DEFAULT, /*tp_flags */
+ 0, /*tp_doc */
+ 0, /*tp_traverse */
+ 0, /*tp_clear */
+ 0, /*tp_richcompare */
+ 0, /*tp_weaklistoffset */
+ 0, /*tp_iter */
+ 0, /*tp_iternext */
+ notimplemented_methods, /*tp_methods */
+ 0, /*tp_members */
+ 0, /*tp_getset */
+ 0, /*tp_base */
+ 0, /*tp_dict */
+ 0, /*tp_descr_get */
+ 0, /*tp_descr_set */
+ 0, /*tp_dictoffset */
+ 0, /*tp_init */
+ 0, /*tp_alloc */
+ notimplemented_new, /*tp_new */
+};
+
+PyObject _Py_NotImplementedStruct = {
+ _PyObject_EXTRA_INIT
+ 1, &_PyNotImplemented_Type
+};
+
+void
+_Py_ReadyTypes(void)
+{
+ if (PyType_Ready(&PyBaseObject_Type) < 0)
+ Py_FatalError("Can't initialize object type");
+
+ if (PyType_Ready(&PyType_Type) < 0)
+ Py_FatalError("Can't initialize type type");
+
+ if (PyType_Ready(&_PyWeakref_RefType) < 0)
+ Py_FatalError("Can't initialize weakref type");
+
+ if (PyType_Ready(&_PyWeakref_CallableProxyType) < 0)
+ Py_FatalError("Can't initialize callable weakref proxy type");
+
+ if (PyType_Ready(&_PyWeakref_ProxyType) < 0)
+ Py_FatalError("Can't initialize weakref proxy type");
+
+ if (PyType_Ready(&PyLong_Type) < 0)
+ Py_FatalError("Can't initialize int type");
+
+ if (PyType_Ready(&PyBool_Type) < 0)
+ Py_FatalError("Can't initialize bool type");
+
+ if (PyType_Ready(&PyByteArray_Type) < 0)
+ Py_FatalError("Can't initialize bytearray type");
+
+ if (PyType_Ready(&PyBytes_Type) < 0)
+ Py_FatalError("Can't initialize 'str'");
+
+ if (PyType_Ready(&PyList_Type) < 0)
+ Py_FatalError("Can't initialize list type");
+
+ if (PyType_Ready(&_PyNone_Type) < 0)
+ Py_FatalError("Can't initialize None type");
+
+ if (PyType_Ready(&_PyNotImplemented_Type) < 0)
+ Py_FatalError("Can't initialize NotImplemented type");
+
+ if (PyType_Ready(&PyTraceBack_Type) < 0)
+ Py_FatalError("Can't initialize traceback type");
+
+ if (PyType_Ready(&PySuper_Type) < 0)
+ Py_FatalError("Can't initialize super type");
+
+ if (PyType_Ready(&PyRange_Type) < 0)
+ Py_FatalError("Can't initialize range type");
+
+ if (PyType_Ready(&PyDict_Type) < 0)
+ Py_FatalError("Can't initialize dict type");
+
+ if (PyType_Ready(&PyDictKeys_Type) < 0)
+ Py_FatalError("Can't initialize dict keys type");
+
+ if (PyType_Ready(&PyDictValues_Type) < 0)
+ Py_FatalError("Can't initialize dict values type");
+
+ if (PyType_Ready(&PyDictItems_Type) < 0)
+ Py_FatalError("Can't initialize dict items type");
+
+ if (PyType_Ready(&PyODict_Type) < 0)
+ Py_FatalError("Can't initialize OrderedDict type");
+
+ if (PyType_Ready(&PyODictKeys_Type) < 0)
+ Py_FatalError("Can't initialize odict_keys type");
+
+ if (PyType_Ready(&PyODictItems_Type) < 0)
+ Py_FatalError("Can't initialize odict_items type");
+
+ if (PyType_Ready(&PyODictValues_Type) < 0)
+ Py_FatalError("Can't initialize odict_values type");
+
+ if (PyType_Ready(&PyODictIter_Type) < 0)
+ Py_FatalError("Can't initialize odict_keyiterator type");
+
+ if (PyType_Ready(&PySet_Type) < 0)
+ Py_FatalError("Can't initialize set type");
+
+ if (PyType_Ready(&PyUnicode_Type) < 0)
+ Py_FatalError("Can't initialize str type");
+
+ if (PyType_Ready(&PySlice_Type) < 0)
+ Py_FatalError("Can't initialize slice type");
+
+ if (PyType_Ready(&PyStaticMethod_Type) < 0)
+ Py_FatalError("Can't initialize static method type");
+
+ if (PyType_Ready(&PyComplex_Type) < 0)
+ Py_FatalError("Can't initialize complex type");
+
+ if (PyType_Ready(&PyFloat_Type) < 0)
+ Py_FatalError("Can't initialize float type");
+
+ if (PyType_Ready(&PyFrozenSet_Type) < 0)
+ Py_FatalError("Can't initialize frozenset type");
+
+ if (PyType_Ready(&PyProperty_Type) < 0)
+ Py_FatalError("Can't initialize property type");
+
+ if (PyType_Ready(&_PyManagedBuffer_Type) < 0)
+ Py_FatalError("Can't initialize managed buffer type");
+
+ if (PyType_Ready(&PyMemoryView_Type) < 0)
+ Py_FatalError("Can't initialize memoryview type");
+
+ if (PyType_Ready(&PyTuple_Type) < 0)
+ Py_FatalError("Can't initialize tuple type");
+
+ if (PyType_Ready(&PyEnum_Type) < 0)
+ Py_FatalError("Can't initialize enumerate type");
+
+ if (PyType_Ready(&PyReversed_Type) < 0)
+ Py_FatalError("Can't initialize reversed type");
+
+ if (PyType_Ready(&PyStdPrinter_Type) < 0)
+ Py_FatalError("Can't initialize StdPrinter");
+
+ if (PyType_Ready(&PyCode_Type) < 0)
+ Py_FatalError("Can't initialize code type");
+
+ if (PyType_Ready(&PyFrame_Type) < 0)
+ Py_FatalError("Can't initialize frame type");
+
+ if (PyType_Ready(&PyCFunction_Type) < 0)
+ Py_FatalError("Can't initialize builtin function type");
+
+ if (PyType_Ready(&PyMethod_Type) < 0)
+ Py_FatalError("Can't initialize method type");
+
+ if (PyType_Ready(&PyFunction_Type) < 0)
+ Py_FatalError("Can't initialize function type");
+
+ if (PyType_Ready(&PyDictProxy_Type) < 0)
+ Py_FatalError("Can't initialize dict proxy type");
+
+ if (PyType_Ready(&PyGen_Type) < 0)
+ Py_FatalError("Can't initialize generator type");
+
+ if (PyType_Ready(&PyGetSetDescr_Type) < 0)
+ Py_FatalError("Can't initialize get-set descriptor type");
+
+ if (PyType_Ready(&PyWrapperDescr_Type) < 0)
+ Py_FatalError("Can't initialize wrapper type");
+
+ if (PyType_Ready(&_PyMethodWrapper_Type) < 0)
+ Py_FatalError("Can't initialize method wrapper type");
+
+ if (PyType_Ready(&PyEllipsis_Type) < 0)
+ Py_FatalError("Can't initialize ellipsis type");
+
+ if (PyType_Ready(&PyMemberDescr_Type) < 0)
+ Py_FatalError("Can't initialize member descriptor type");
+
+ if (PyType_Ready(&_PyNamespace_Type) < 0)
+ Py_FatalError("Can't initialize namespace type");
+
+ if (PyType_Ready(&PyCapsule_Type) < 0)
+ Py_FatalError("Can't initialize capsule type");
+
+ if (PyType_Ready(&PyLongRangeIter_Type) < 0)
+ Py_FatalError("Can't initialize long range iterator type");
+
+ if (PyType_Ready(&PyCell_Type) < 0)
+ Py_FatalError("Can't initialize cell type");
+
+ if (PyType_Ready(&PyInstanceMethod_Type) < 0)
+ Py_FatalError("Can't initialize instance method type");
+
+ if (PyType_Ready(&PyClassMethodDescr_Type) < 0)
+ Py_FatalError("Can't initialize class method descr type");
+
+ if (PyType_Ready(&PyMethodDescr_Type) < 0)
+ Py_FatalError("Can't initialize method descr type");
+
+ if (PyType_Ready(&PyCallIter_Type) < 0)
+ Py_FatalError("Can't initialize call iter type");
+
+ if (PyType_Ready(&PySeqIter_Type) < 0)
+ Py_FatalError("Can't initialize sequence iterator type");
+
+ if (PyType_Ready(&PyCoro_Type) < 0)
+ Py_FatalError("Can't initialize coroutine type");
+
+ if (PyType_Ready(&_PyCoroWrapper_Type) < 0)
+ Py_FatalError("Can't initialize coroutine wrapper type");
+}
+
+
+#ifdef Py_TRACE_REFS
+
+void
+_Py_NewReference(PyObject *op)
+{
+ _Py_INC_REFTOTAL;
+ op->ob_refcnt = 1;
+ _Py_AddToAllObjects(op, 1);
+ _Py_INC_TPALLOCS(op);
+}
+
+void
+_Py_ForgetReference(PyObject *op)
+{
+#ifdef SLOW_UNREF_CHECK
+ PyObject *p;
+#endif
+ if (op->ob_refcnt < 0)
+ Py_FatalError("UNREF negative refcnt");
+ if (op == &refchain ||
+ op->_ob_prev->_ob_next != op || op->_ob_next->_ob_prev != op) {
+ fprintf(stderr, "* ob\n");
+ _PyObject_Dump(op);
+ fprintf(stderr, "* op->_ob_prev->_ob_next\n");
+ _PyObject_Dump(op->_ob_prev->_ob_next);
+ fprintf(stderr, "* op->_ob_next->_ob_prev\n");
+ _PyObject_Dump(op->_ob_next->_ob_prev);
+ Py_FatalError("UNREF invalid object");
+ }
+#ifdef SLOW_UNREF_CHECK
+ for (p = refchain._ob_next; p != &refchain; p = p->_ob_next) {
+ if (p == op)
+ break;
+ }
+ if (p == &refchain) /* Not found */
+ Py_FatalError("UNREF unknown object");
+#endif
+ op->_ob_next->_ob_prev = op->_ob_prev;
+ op->_ob_prev->_ob_next = op->_ob_next;
+ op->_ob_next = op->_ob_prev = NULL;
+ _Py_INC_TPFREES(op);
+}
+
+void
+_Py_Dealloc(PyObject *op)
+{
+ destructor dealloc = Py_TYPE(op)->tp_dealloc;
+ _Py_ForgetReference(op);
+ (*dealloc)(op);
+}
+
+/* Print all live objects. Because PyObject_Print is called, the
+ * interpreter must be in a healthy state.
+ */
+void
+_Py_PrintReferences(FILE *fp)
+{
+ PyObject *op;
+ fprintf(fp, "Remaining objects:\n");
+ for (op = refchain._ob_next; op != &refchain; op = op->_ob_next) {
+ fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] ", op, op->ob_refcnt);
+ if (PyObject_Print(op, fp, 0) != 0)
+ PyErr_Clear();
+ putc('\n', fp);
+ }
+}
+
+/* Print the addresses of all live objects. Unlike _Py_PrintReferences, this
+ * doesn't make any calls to the Python C API, so it is always safe to call.
+ */
+void
+_Py_PrintReferenceAddresses(FILE *fp)
+{
+ PyObject *op;
+ fprintf(fp, "Remaining object addresses:\n");
+ for (op = refchain._ob_next; op != &refchain; op = op->_ob_next)
+ fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] %s\n", op,
+ op->ob_refcnt, Py_TYPE(op)->tp_name);
+}
+
+PyObject *
+_Py_GetObjects(PyObject *self, PyObject *args)
+{
+ int i, n;
+ PyObject *t = NULL;
+ PyObject *res, *op;
+
+ if (!PyArg_ParseTuple(args, "i|O", &n, &t))
+ return NULL;
+ op = refchain._ob_next;
+ res = PyList_New(0);
+ if (res == NULL)
+ return NULL;
+ for (i = 0; (n == 0 || i < n) && op != &refchain; i++) {
+ while (op == self || op == args || op == res || op == t ||
+ (t != NULL && Py_TYPE(op) != (PyTypeObject *) t)) {
+ op = op->_ob_next;
+ if (op == &refchain)
+ return res;
+ }
+ if (PyList_Append(res, op) < 0) {
+ Py_DECREF(res);
+ return NULL;
+ }
+ op = op->_ob_next;
+ }
+ return res;
+}
+
+#endif
+
+
+/* Hack to force loading of abstract.o */
+Py_ssize_t (*_Py_abstract_hack)(PyObject *) = PyObject_Size;
+
+
+void
+_PyObject_DebugTypeStats(FILE *out)
+{
+ _PyCFunction_DebugMallocStats(out);
+ _PyDict_DebugMallocStats(out);
+ _PyFloat_DebugMallocStats(out);
+ _PyFrame_DebugMallocStats(out);
+ _PyList_DebugMallocStats(out);
+ _PyMethod_DebugMallocStats(out);
+ _PyTuple_DebugMallocStats(out);
+}
+
+/* These methods are used to control infinite recursion in repr, str, print,
+ etc. Container objects that may recursively contain themselves,
+ e.g. builtin dictionaries and lists, should use Py_ReprEnter() and
+ Py_ReprLeave() to avoid infinite recursion.
+
+ Py_ReprEnter() returns 0 the first time it is called for a particular
+ object and 1 every time thereafter. It returns -1 if an exception
+ occurred. Py_ReprLeave() has no return value.
+
+ See dictobject.c and listobject.c for examples of use.
+*/
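+
+/* Illustrative sketch (hypothetical container type and helper, not part of
+ * this file): a typical tp_repr guarded against self-reference:
+ *
+ *   static PyObject *
+ *   mybag_repr(PyObject *self)
+ *   {
+ *       PyObject *result;
+ *       int rc = Py_ReprEnter(self);
+ *       if (rc != 0)
+ *           return rc > 0 ? PyUnicode_FromString("mybag(...)") : NULL;
+ *       result = mybag_build_repr(self);    <- hypothetical helper
+ *       Py_ReprLeave(self);
+ *       return result;
+ *   }
+ */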
+
+int
+Py_ReprEnter(PyObject *obj)
+{
+ PyObject *dict;
+ PyObject *list;
+ Py_ssize_t i;
+
+ dict = PyThreadState_GetDict();
+ /* Ignore a missing thread-state, so that this function can be called
+ early on startup. */
+ if (dict == NULL)
+ return 0;
+ list = _PyDict_GetItemId(dict, &PyId_Py_Repr);
+ if (list == NULL) {
+ list = PyList_New(0);
+ if (list == NULL)
+ return -1;
+ if (_PyDict_SetItemId(dict, &PyId_Py_Repr, list) < 0)
+ return -1;
+ Py_DECREF(list);
+ }
+ i = PyList_GET_SIZE(list);
+ while (--i >= 0) {
+ if (PyList_GET_ITEM(list, i) == obj)
+ return 1;
+ }
+ if (PyList_Append(list, obj) < 0)
+ return -1;
+ return 0;
+}
+
+void
+Py_ReprLeave(PyObject *obj)
+{
+ PyObject *dict;
+ PyObject *list;
+ Py_ssize_t i;
+ PyObject *error_type, *error_value, *error_traceback;
+
+ PyErr_Fetch(&error_type, &error_value, &error_traceback);
+
+ dict = PyThreadState_GetDict();
+ if (dict == NULL)
+ goto finally;
+
+ list = _PyDict_GetItemId(dict, &PyId_Py_Repr);
+ if (list == NULL || !PyList_Check(list))
+ goto finally;
+
+ i = PyList_GET_SIZE(list);
+ /* Count backwards because we always expect obj to be list[-1] */
+ while (--i >= 0) {
+ if (PyList_GET_ITEM(list, i) == obj) {
+ PyList_SetSlice(list, i, i + 1, NULL);
+ break;
+ }
+ }
+
+finally:
+ /* ignore exceptions because there is no way to report them. */
+ PyErr_Restore(error_type, error_value, error_traceback);
+}
+
+/* Trashcan support. */
+
+/* Current call-stack depth of tp_dealloc calls. */
+int _PyTrash_delete_nesting = 0;
+
+/* List of objects that still need to be cleaned up, singly linked via their
+ * gc headers' gc_prev pointers.
+ */
+PyObject *_PyTrash_delete_later = NULL;
+
+/* Add op to the _PyTrash_delete_later list. Called when the current
+ * call-stack depth gets large. op must be a currently untracked gc'ed
+ * object, with refcount 0. Py_DECREF must already have been called on it.
+ */
+void
+_PyTrash_deposit_object(PyObject *op)
+{
+ assert(PyObject_IS_GC(op));
+ assert(_PyGC_REFS(op) == _PyGC_REFS_UNTRACKED);
+ assert(op->ob_refcnt == 0);
+ _Py_AS_GC(op)->gc.gc_prev = (PyGC_Head *)_PyTrash_delete_later;
+ _PyTrash_delete_later = op;
+}
+
+/* The equivalent API, using per-thread state recursion info */
+void
+_PyTrash_thread_deposit_object(PyObject *op)
+{
+ PyThreadState *tstate = PyThreadState_GET();
+ assert(PyObject_IS_GC(op));
+ assert(_PyGC_REFS(op) == _PyGC_REFS_UNTRACKED);
+ assert(op->ob_refcnt == 0);
+ _Py_AS_GC(op)->gc.gc_prev = (PyGC_Head *) tstate->trash_delete_later;
+ tstate->trash_delete_later = op;
+}
+
+/* Deallocate all the objects in the _PyTrash_delete_later list. Called when
+ * the call-stack unwinds again.
+ */
+void
+_PyTrash_destroy_chain(void)
+{
+ while (_PyTrash_delete_later) {
+ PyObject *op = _PyTrash_delete_later;
+ destructor dealloc = Py_TYPE(op)->tp_dealloc;
+
+ _PyTrash_delete_later =
+ (PyObject*) _Py_AS_GC(op)->gc.gc_prev;
+
+ /* Call the deallocator directly. This used to try to
+ * fool Py_DECREF into calling it indirectly, but
+ * Py_DECREF was already called on this object, and in
+ * assorted non-release builds calling Py_DECREF again ends
+ * up distorting allocation statistics.
+ */
+ assert(op->ob_refcnt == 0);
+ ++_PyTrash_delete_nesting;
+ (*dealloc)(op);
+ --_PyTrash_delete_nesting;
+ }
+}
+
+/* The equivalent API, using per-thread state recursion info */
+void
+_PyTrash_thread_destroy_chain(void)
+{
+ PyThreadState *tstate = PyThreadState_GET();
+ while (tstate->trash_delete_later) {
+ PyObject *op = tstate->trash_delete_later;
+ destructor dealloc = Py_TYPE(op)->tp_dealloc;
+
+ tstate->trash_delete_later =
+ (PyObject*) _Py_AS_GC(op)->gc.gc_prev;
+
+ /* Call the deallocator directly. This used to try to
+ * fool Py_DECREF into calling it indirectly, but
+ * Py_DECREF was already called on this object, and in
+ * assorted non-release builds calling Py_DECREF again ends
+ * up distorting allocation statistics.
+ */
+ assert(op->ob_refcnt == 0);
+ ++tstate->trash_delete_nesting;
+ (*dealloc)(op);
+ --tstate->trash_delete_nesting;
+ }
+}
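+
+/* Illustrative sketch: deallocators of container types normally drive the
+ * machinery above through the trashcan macros in object.h rather than calling
+ * these helpers directly ("MyBag" type and mybag_clear_members are
+ * hypothetical):
+ *
+ *   static void
+ *   mybag_dealloc(PyObject *self)
+ *   {
+ *       PyObject_GC_UnTrack(self);
+ *       Py_TRASHCAN_SAFE_BEGIN(self)
+ *       mybag_clear_members(self);
+ *       Py_TYPE(self)->tp_free(self);
+ *       Py_TRASHCAN_SAFE_END(self)
+ *   }
+ */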
+
+#ifndef Py_TRACE_REFS
+/* For Py_LIMITED_API, we need an out-of-line version of _Py_Dealloc.
+ Define this here, so we can undefine the macro. */
+#undef _Py_Dealloc
+PyAPI_FUNC(void) _Py_Dealloc(PyObject *);
+void
+_Py_Dealloc(PyObject *op)
+{
+ _Py_INC_TPFREES(op) _Py_COUNT_ALLOCS_COMMA
+ (*Py_TYPE(op)->tp_dealloc)(op);
+}
+#endif
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
new file mode 100644
index 00000000..15c63eda
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
@@ -0,0 +1,701 @@
+#if STRINGLIB_IS_UNICODE
+# error "transmogrify.h only compatible with byte-wise strings"
+#endif
+
+/* The more complicated methods. Parts of these should be pulled out into the
+   shared code in bytes_methods.c to cut down on duplicate code bloat. */
+
+static PyObject *
+return_self(PyObject *self)
+{
+#if !STRINGLIB_MUTABLE
+ if (STRINGLIB_CHECK_EXACT(self)) {
+ Py_INCREF(self);
+ return self;
+ }
+#endif
+ return STRINGLIB_NEW(STRINGLIB_STR(self), STRINGLIB_LEN(self));
+}
+
+static PyObject*
+stringlib_expandtabs(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ const char *e, *p;
+ char *q;
+ Py_ssize_t i, j;
+ PyObject *u;
+ static char *kwlist[] = {"tabsize", 0};
+ int tabsize = 8;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:expandtabs",
+ kwlist, &tabsize))
+ return NULL;
+
+ /* First pass: determine size of output string */
+ i = j = 0;
+ e = STRINGLIB_STR(self) + STRINGLIB_LEN(self);
+ for (p = STRINGLIB_STR(self); p < e; p++) {
+ if (*p == '\t') {
+ if (tabsize > 0) {
+ Py_ssize_t incr = tabsize - (j % tabsize);
+ if (j > PY_SSIZE_T_MAX - incr)
+ goto overflow;
+ j += incr;
+ }
+ }
+ else {
+ if (j > PY_SSIZE_T_MAX - 1)
+ goto overflow;
+ j++;
+ if (*p == '\n' || *p == '\r') {
+ if (i > PY_SSIZE_T_MAX - j)
+ goto overflow;
+ i += j;
+ j = 0;
+ }
+ }
+ }
+
+ if (i > PY_SSIZE_T_MAX - j)
+ goto overflow;
+
+ /* Second pass: create output string and fill it */
+ u = STRINGLIB_NEW(NULL, i + j);
+ if (!u)
+ return NULL;
+
+ j = 0;
+ q = STRINGLIB_STR(u);
+
+ for (p = STRINGLIB_STR(self); p < e; p++) {
+ if (*p == '\t') {
+ if (tabsize > 0) {
+ i = tabsize - (j % tabsize);
+ j += i;
+ while (i--)
+ *q++ = ' ';
+ }
+ }
+ else {
+ j++;
+ *q++ = *p;
+ if (*p == '\n' || *p == '\r')
+ j = 0;
+ }
+ }
+
+ return u;
+ overflow:
+ PyErr_SetString(PyExc_OverflowError, "result too long");
+ return NULL;
+}
+
+static PyObject *
+pad(PyObject *self, Py_ssize_t left, Py_ssize_t right, char fill)
+{
+ PyObject *u;
+
+ if (left < 0)
+ left = 0;
+ if (right < 0)
+ right = 0;
+
+ if (left == 0 && right == 0) {
+ return return_self(self);
+ }
+
+ u = STRINGLIB_NEW(NULL, left + STRINGLIB_LEN(self) + right);
+ if (u) {
+ if (left)
+ memset(STRINGLIB_STR(u), fill, left);
+ memcpy(STRINGLIB_STR(u) + left,
+ STRINGLIB_STR(self),
+ STRINGLIB_LEN(self));
+ if (right)
+ memset(STRINGLIB_STR(u) + left + STRINGLIB_LEN(self),
+ fill, right);
+ }
+
+ return u;
+}
+
+static PyObject *
+stringlib_ljust(PyObject *self, PyObject *args)
+{
+ Py_ssize_t width;
+ char fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|c:ljust", &width, &fillchar))
+ return NULL;
+
+ if (STRINGLIB_LEN(self) >= width) {
+ return return_self(self);
+ }
+
+ return pad(self, 0, width - STRINGLIB_LEN(self), fillchar);
+}
+
+
+static PyObject *
+stringlib_rjust(PyObject *self, PyObject *args)
+{
+ Py_ssize_t width;
+ char fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|c:rjust", &width, &fillchar))
+ return NULL;
+
+ if (STRINGLIB_LEN(self) >= width) {
+ return return_self(self);
+ }
+
+ return pad(self, width - STRINGLIB_LEN(self), 0, fillchar);
+}
+
+
+static PyObject *
+stringlib_center(PyObject *self, PyObject *args)
+{
+ Py_ssize_t marg, left;
+ Py_ssize_t width;
+ char fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|c:center", &width, &fillchar))
+ return NULL;
+
+ if (STRINGLIB_LEN(self) >= width) {
+ return return_self(self);
+ }
+
+ marg = width - STRINGLIB_LEN(self);
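+    /* Split the margin in two; when the margin is odd, the extra fill byte
+       goes on the left for odd widths and on the right for even widths
+       (matching bytes.center() / str.center()). */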
+ left = marg / 2 + (marg & width & 1);
+
+ return pad(self, left, marg - left, fillchar);
+}
+
+static PyObject *
+stringlib_zfill(PyObject *self, PyObject *args)
+{
+ Py_ssize_t fill;
+ PyObject *s;
+ char *p;
+ Py_ssize_t width;
+
+ if (!PyArg_ParseTuple(args, "n:zfill", &width))
+ return NULL;
+
+ if (STRINGLIB_LEN(self) >= width) {
+ return return_self(self);
+ }
+
+ fill = width - STRINGLIB_LEN(self);
+
+ s = pad(self, fill, 0, '0');
+
+ if (s == NULL)
+ return NULL;
+
+ p = STRINGLIB_STR(s);
+ if (p[fill] == '+' || p[fill] == '-') {
+ /* move sign to beginning of string */
+ p[0] = p[fill];
+ p[fill] = '0';
+ }
+
+ return s;
+}
+
+
+/* find and count characters and substrings */
+
+#define findchar(target, target_len, c) \
+ ((char *)memchr((const void *)(target), c, target_len))
+
+
+static Py_ssize_t
+countchar(const char *target, Py_ssize_t target_len, char c,
+ Py_ssize_t maxcount)
+{
+ Py_ssize_t count = 0;
+ const char *start = target;
+ const char *end = target + target_len;
+
+ while ((start = findchar(start, end - start, c)) != NULL) {
+ count++;
+ if (count >= maxcount)
+ break;
+ start += 1;
+ }
+ return count;
+}
+
+
+/* Algorithms for different cases of string replacement */
+
+/* len(self)>=1, from="", len(to)>=1, maxcount>=1 */
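+/* For example, b"abc".replace(b"", b"-") == b"-a-b-c-": an empty 'from'
+   inserts a copy of 'to' before every character and one more at the end. */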
+static PyObject *
+stringlib_replace_interleave(PyObject *self,
+ const char *to_s, Py_ssize_t to_len,
+ Py_ssize_t maxcount)
+{
+ const char *self_s;
+ char *result_s;
+ Py_ssize_t self_len, result_len;
+ Py_ssize_t count, i;
+ PyObject *result;
+
+ self_len = STRINGLIB_LEN(self);
+
+ /* 1 at the end plus 1 after every character;
+ count = min(maxcount, self_len + 1) */
+ if (maxcount <= self_len) {
+ count = maxcount;
+ }
+ else {
+ /* Can't overflow: self_len + 1 <= maxcount <= PY_SSIZE_T_MAX. */
+ count = self_len + 1;
+ }
+
+ /* Check for overflow */
+ /* result_len = count * to_len + self_len; */
+ assert(count > 0);
+ if (to_len > (PY_SSIZE_T_MAX - self_len) / count) {
+ PyErr_SetString(PyExc_OverflowError,
+ "replace bytes are too long");
+ return NULL;
+ }
+ result_len = count * to_len + self_len;
+ result = STRINGLIB_NEW(NULL, result_len);
+ if (result == NULL) {
+ return NULL;
+ }
+
+ self_s = STRINGLIB_STR(self);
+ result_s = STRINGLIB_STR(result);
+
+ if (to_len > 1) {
+ /* Lay the first one down (guaranteed this will occur) */
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ count -= 1;
+
+ for (i = 0; i < count; i++) {
+ *result_s++ = *self_s++;
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ }
+ }
+ else {
+ result_s[0] = to_s[0];
+ result_s += to_len;
+ count -= 1;
+ for (i = 0; i < count; i++) {
+ *result_s++ = *self_s++;
+ result_s[0] = to_s[0];
+ result_s += to_len;
+ }
+ }
+
+ /* Copy the rest of the original string */
+ memcpy(result_s, self_s, self_len - i);
+
+ return result;
+}
+
+/* Special case for deleting a single character */
+/* len(self)>=1, len(from)==1, to="", maxcount>=1 */
+static PyObject *
+stringlib_replace_delete_single_character(PyObject *self,
+ char from_c, Py_ssize_t maxcount)
+{
+ const char *self_s, *start, *next, *end;
+ char *result_s;
+ Py_ssize_t self_len, result_len;
+ Py_ssize_t count;
+ PyObject *result;
+
+ self_len = STRINGLIB_LEN(self);
+ self_s = STRINGLIB_STR(self);
+
+ count = countchar(self_s, self_len, from_c, maxcount);
+ if (count == 0) {
+ return return_self(self);
+ }
+
+ result_len = self_len - count; /* from_len == 1 */
+ assert(result_len>=0);
+
+ result = STRINGLIB_NEW(NULL, result_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+
+ start = self_s;
+ end = self_s + self_len;
+ while (count-- > 0) {
+ next = findchar(start, end - start, from_c);
+ if (next == NULL)
+ break;
+ memcpy(result_s, start, next - start);
+ result_s += (next - start);
+ start = next + 1;
+ }
+ memcpy(result_s, start, end - start);
+
+ return result;
+}
+
+/* len(self)>=1, len(from)>=2, to="", maxcount>=1 */
+
+static PyObject *
+stringlib_replace_delete_substring(PyObject *self,
+ const char *from_s, Py_ssize_t from_len,
+ Py_ssize_t maxcount)
+{
+ const char *self_s, *start, *next, *end;
+ char *result_s;
+ Py_ssize_t self_len, result_len;
+ Py_ssize_t count, offset;
+ PyObject *result;
+
+ self_len = STRINGLIB_LEN(self);
+ self_s = STRINGLIB_STR(self);
+
+ count = stringlib_count(self_s, self_len,
+ from_s, from_len,
+ maxcount);
+
+ if (count == 0) {
+ /* no matches */
+ return return_self(self);
+ }
+
+ result_len = self_len - (count * from_len);
+ assert (result_len>=0);
+
+ result = STRINGLIB_NEW(NULL, result_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+
+ start = self_s;
+ end = self_s + self_len;
+ while (count-- > 0) {
+ offset = stringlib_find(start, end - start,
+ from_s, from_len,
+ 0);
+ if (offset == -1)
+ break;
+ next = start + offset;
+
+ memcpy(result_s, start, next - start);
+
+ result_s += (next - start);
+ start = next + from_len;
+ }
+ memcpy(result_s, start, end - start);
+ return result;
+}
+
+/* len(self)>=1, len(from)==len(to)==1, maxcount>=1 */
+static PyObject *
+stringlib_replace_single_character_in_place(PyObject *self,
+ char from_c, char to_c,
+ Py_ssize_t maxcount)
+{
+ const char *self_s, *end;
+ char *result_s, *start, *next;
+ Py_ssize_t self_len;
+ PyObject *result;
+
+ /* The result string will be the same size */
+ self_s = STRINGLIB_STR(self);
+ self_len = STRINGLIB_LEN(self);
+
+ next = findchar(self_s, self_len, from_c);
+
+ if (next == NULL) {
+ /* No matches; return the original bytes */
+ return return_self(self);
+ }
+
+ /* Need to make a new bytes */
+ result = STRINGLIB_NEW(NULL, self_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+ memcpy(result_s, self_s, self_len);
+
+ /* change everything in-place, starting with this one */
+ start = result_s + (next - self_s);
+ *start = to_c;
+ start++;
+ end = result_s + self_len;
+
+ while (--maxcount > 0) {
+ next = findchar(start, end - start, from_c);
+ if (next == NULL)
+ break;
+ *next = to_c;
+ start = next + 1;
+ }
+
+ return result;
+}
+
+/* len(self)>=1, len(from)==len(to)>=2, maxcount>=1 */
+static PyObject *
+stringlib_replace_substring_in_place(PyObject *self,
+ const char *from_s, Py_ssize_t from_len,
+ const char *to_s, Py_ssize_t to_len,
+ Py_ssize_t maxcount)
+{
+ const char *self_s, *end;
+ char *result_s, *start;
+ Py_ssize_t self_len, offset;
+ PyObject *result;
+
+ /* The result bytes will be the same size */
+
+ self_s = STRINGLIB_STR(self);
+ self_len = STRINGLIB_LEN(self);
+
+ offset = stringlib_find(self_s, self_len,
+ from_s, from_len,
+ 0);
+ if (offset == -1) {
+ /* No matches; return the original bytes */
+ return return_self(self);
+ }
+
+ /* Need to make a new bytes */
+ result = STRINGLIB_NEW(NULL, self_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+ memcpy(result_s, self_s, self_len);
+
+ /* change everything in-place, starting with this one */
+ start = result_s + offset;
+ memcpy(start, to_s, from_len);
+ start += from_len;
+ end = result_s + self_len;
+
+    while (--maxcount > 0) {
+ offset = stringlib_find(start, end - start,
+ from_s, from_len,
+ 0);
+ if (offset == -1)
+ break;
+ memcpy(start + offset, to_s, from_len);
+ start += offset + from_len;
+ }
+
+ return result;
+}
+
+/* len(self)>=1, len(from)==1, len(to)>=2, maxcount>=1 */
+static PyObject *
+stringlib_replace_single_character(PyObject *self,
+ char from_c,
+ const char *to_s, Py_ssize_t to_len,
+ Py_ssize_t maxcount)
+{
+ const char *self_s, *start, *next, *end;
+ char *result_s;
+ Py_ssize_t self_len, result_len;
+ Py_ssize_t count;
+ PyObject *result;
+
+ self_s = STRINGLIB_STR(self);
+ self_len = STRINGLIB_LEN(self);
+
+ count = countchar(self_s, self_len, from_c, maxcount);
+ if (count == 0) {
+ /* no matches, return unchanged */
+ return return_self(self);
+ }
+
+ /* use the difference between current and new, hence the "-1" */
+ /* result_len = self_len + count * (to_len-1) */
+ assert(count > 0);
+ if (to_len - 1 > (PY_SSIZE_T_MAX - self_len) / count) {
+ PyErr_SetString(PyExc_OverflowError, "replace bytes is too long");
+ return NULL;
+ }
+ result_len = self_len + count * (to_len - 1);
+
+ result = STRINGLIB_NEW(NULL, result_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+
+ start = self_s;
+ end = self_s + self_len;
+ while (count-- > 0) {
+ next = findchar(start, end - start, from_c);
+ if (next == NULL)
+ break;
+
+ if (next == start) {
+ /* replace with the 'to' */
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ start += 1;
+ } else {
+ /* copy the unchanged old then the 'to' */
+ memcpy(result_s, start, next - start);
+ result_s += (next - start);
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ start = next + 1;
+ }
+ }
+    /* Copy the remaining bytes */
+ memcpy(result_s, start, end - start);
+
+ return result;
+}
+
+/* len(self)>=1, len(from)>=2, len(to)>=2, maxcount>=1 */
+static PyObject *
+stringlib_replace_substring(PyObject *self,
+ const char *from_s, Py_ssize_t from_len,
+ const char *to_s, Py_ssize_t to_len,
+ Py_ssize_t maxcount)
+{
+ const char *self_s, *start, *next, *end;
+ char *result_s;
+ Py_ssize_t self_len, result_len;
+ Py_ssize_t count, offset;
+ PyObject *result;
+
+ self_s = STRINGLIB_STR(self);
+ self_len = STRINGLIB_LEN(self);
+
+ count = stringlib_count(self_s, self_len,
+ from_s, from_len,
+ maxcount);
+
+ if (count == 0) {
+ /* no matches, return unchanged */
+ return return_self(self);
+ }
+
+ /* Check for overflow */
+ /* result_len = self_len + count * (to_len-from_len) */
+ assert(count > 0);
+ if (to_len - from_len > (PY_SSIZE_T_MAX - self_len) / count) {
+ PyErr_SetString(PyExc_OverflowError, "replace bytes is too long");
+ return NULL;
+ }
+ result_len = self_len + count * (to_len - from_len);
+
+ result = STRINGLIB_NEW(NULL, result_len);
+ if (result == NULL) {
+ return NULL;
+ }
+ result_s = STRINGLIB_STR(result);
+
+ start = self_s;
+ end = self_s + self_len;
+ while (count-- > 0) {
+ offset = stringlib_find(start, end - start,
+ from_s, from_len,
+ 0);
+ if (offset == -1)
+ break;
+ next = start + offset;
+ if (next == start) {
+ /* replace with the 'to' */
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ start += from_len;
+ } else {
+ /* copy the unchanged old then the 'to' */
+ memcpy(result_s, start, next - start);
+ result_s += (next - start);
+ memcpy(result_s, to_s, to_len);
+ result_s += to_len;
+ start = next + from_len;
+ }
+ }
+    /* Copy the remaining bytes */
+ memcpy(result_s, start, end - start);
+
+ return result;
+}
+
+
+static PyObject *
+stringlib_replace(PyObject *self,
+ const char *from_s, Py_ssize_t from_len,
+ const char *to_s, Py_ssize_t to_len,
+ Py_ssize_t maxcount)
+{
+ if (maxcount < 0) {
+ maxcount = PY_SSIZE_T_MAX;
+ } else if (maxcount == 0 || STRINGLIB_LEN(self) == 0) {
+ /* nothing to do; return the original bytes */
+ return return_self(self);
+ }
+
+ /* Handle zero-length special cases */
+ if (from_len == 0) {
+ if (to_len == 0) {
+ /* nothing to do; return the original bytes */
+ return return_self(self);
+ }
+ /* insert the 'to' bytes everywhere. */
+ /* >>> b"Python".replace(b"", b".") */
+ /* b'.P.y.t.h.o.n.' */
+ return stringlib_replace_interleave(self, to_s, to_len, maxcount);
+ }
+
+ /* Except for b"".replace(b"", b"A") == b"A" there is no way beyond this */
+ /* point for an empty self bytes to generate a non-empty bytes */
+ /* Special case so the remaining code always gets a non-empty bytes */
+ if (STRINGLIB_LEN(self) == 0) {
+ return return_self(self);
+ }
+
+ if (to_len == 0) {
+ /* delete all occurrences of 'from' bytes */
+ if (from_len == 1) {
+ return stringlib_replace_delete_single_character(
+ self, from_s[0], maxcount);
+ } else {
+ return stringlib_replace_delete_substring(
+ self, from_s, from_len, maxcount);
+ }
+ }
+
+ /* Handle special case where both bytes have the same length */
+
+ if (from_len == to_len) {
+ if (from_len == 1) {
+ return stringlib_replace_single_character_in_place(
+ self, from_s[0], to_s[0], maxcount);
+ } else {
+ return stringlib_replace_substring_in_place(
+ self, from_s, from_len, to_s, to_len, maxcount);
+ }
+ }
+
+ /* Otherwise use the more generic algorithms */
+ if (from_len == 1) {
+ return stringlib_replace_single_character(
+ self, from_s[0], to_s, to_len, maxcount);
+ } else {
+ /* len('from')>=2, len('to')>=1 */
+ return stringlib_replace_substring(
+ self, from_s, from_len, to_s, to_len, maxcount);
+ }
+}
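+
+/* Quick reference for the dispatch above (the bytes literals are only
+   illustrative examples, not part of the algorithm):
+       b"abcabc".replace(b"b", b"")    -> stringlib_replace_delete_single_character
+       b"abcabc".replace(b"bc", b"")   -> stringlib_replace_delete_substring
+       b"abcabc".replace(b"b", b"x")   -> stringlib_replace_single_character_in_place
+       b"abcabc".replace(b"bc", b"xy") -> stringlib_replace_substring_in_place
+       b"abcabc".replace(b"b", b"xy")  -> stringlib_replace_single_character
+       b"abcabc".replace(b"bc", b"x")  -> stringlib_replace_substring
+*/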
+
+#undef findchar
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
new file mode 100644
index 00000000..1fdd5ec1
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
@@ -0,0 +1,15773 @@
+/*
+
+Unicode implementation based on original code by Fredrik Lundh,
+modified by Marc-Andre Lemburg <mal@lemburg.com>.
+
+Major speed upgrades to the method implementations at the Reykjavik
+NeedForSpeed sprint, by Fredrik Lundh and Andrew Dalke.
+
+Copyright (c) Corporation for National Research Initiatives.
+
+--------------------------------------------------------------------
+The original string type implementation is:
+
+ Copyright (c) 1999 by Secret Labs AB
+ Copyright (c) 1999 by Fredrik Lundh
+
+By obtaining, using, and/or copying this software and/or its
+associated documentation, you agree that you have read, understood,
+and will comply with the following terms and conditions:
+
+Permission to use, copy, modify, and distribute this software and its
+associated documentation for any purpose and without fee is hereby
+granted, provided that the above copyright notice appears in all
+copies, and that both that copyright notice and this permission notice
+appear in supporting documentation, and that the name of Secret Labs
+AB or the author not be used in advertising or publicity pertaining to
+distribution of the software without specific, written prior
+permission.
+
+SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO
+THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
+FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
+OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+--------------------------------------------------------------------
+
+*/
+
+#define PY_SSIZE_T_CLEAN
+#include "Python.h"
+#include "ucnhash.h"
+#include "bytes_methods.h"
+#include "stringlib/eq.h"
+
+#ifdef MS_WINDOWS
+#include <windows.h>
+#endif
+
+/*[clinic input]
+class str "PyUnicodeObject *" "&PyUnicode_Type"
+[clinic start generated code]*/
+/*[clinic end generated code: output=da39a3ee5e6b4b0d input=604e916854800fa8]*/
+
+/* --- Globals ------------------------------------------------------------
+
+NOTE: In the interpreter's initialization phase, some globals are currently
+ initialized dynamically as needed. In the process Unicode objects may
+ be created before the Unicode type is ready.
+
+*/
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Maximum code point of Unicode 6.0: 0x10ffff (1,114,111) */
+#define MAX_UNICODE 0x10ffff
+
+#ifdef Py_DEBUG
+# define _PyUnicode_CHECK(op) _PyUnicode_CheckConsistency(op, 0)
+#else
+# define _PyUnicode_CHECK(op) PyUnicode_Check(op)
+#endif
+
+#define _PyUnicode_UTF8(op) \
+ (((PyCompactUnicodeObject*)(op))->utf8)
+#define PyUnicode_UTF8(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ assert(PyUnicode_IS_READY(op)), \
+ PyUnicode_IS_COMPACT_ASCII(op) ? \
+ ((char*)((PyASCIIObject*)(op) + 1)) : \
+ _PyUnicode_UTF8(op))
+#define _PyUnicode_UTF8_LENGTH(op) \
+ (((PyCompactUnicodeObject*)(op))->utf8_length)
+#define PyUnicode_UTF8_LENGTH(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ assert(PyUnicode_IS_READY(op)), \
+ PyUnicode_IS_COMPACT_ASCII(op) ? \
+ ((PyASCIIObject*)(op))->length : \
+ _PyUnicode_UTF8_LENGTH(op))
+#define _PyUnicode_WSTR(op) \
+ (((PyASCIIObject*)(op))->wstr)
+#define _PyUnicode_WSTR_LENGTH(op) \
+ (((PyCompactUnicodeObject*)(op))->wstr_length)
+#define _PyUnicode_LENGTH(op) \
+ (((PyASCIIObject *)(op))->length)
+#define _PyUnicode_STATE(op) \
+ (((PyASCIIObject *)(op))->state)
+#define _PyUnicode_HASH(op) \
+ (((PyASCIIObject *)(op))->hash)
+#define _PyUnicode_KIND(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ ((PyASCIIObject *)(op))->state.kind)
+#define _PyUnicode_GET_LENGTH(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ ((PyASCIIObject *)(op))->length)
+#define _PyUnicode_DATA_ANY(op) \
+ (((PyUnicodeObject*)(op))->data.any)
+
+#undef PyUnicode_READY
+#define PyUnicode_READY(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ (PyUnicode_IS_READY(op) ? \
+ 0 : \
+ _PyUnicode_Ready(op)))
+
+#define _PyUnicode_SHARE_UTF8(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ assert(!PyUnicode_IS_COMPACT_ASCII(op)), \
+ (_PyUnicode_UTF8(op) == PyUnicode_DATA(op)))
+#define _PyUnicode_SHARE_WSTR(op) \
+ (assert(_PyUnicode_CHECK(op)), \
+ (_PyUnicode_WSTR(unicode) == PyUnicode_DATA(op)))
+
+/* true if the Unicode object has an allocated UTF-8 memory block
+ (not shared with other data) */
+#define _PyUnicode_HAS_UTF8_MEMORY(op) \
+ ((!PyUnicode_IS_COMPACT_ASCII(op) \
+ && _PyUnicode_UTF8(op) \
+ && _PyUnicode_UTF8(op) != PyUnicode_DATA(op)))
+
+/* true if the Unicode object has an allocated wstr memory block
+ (not shared with other data) */
+#define _PyUnicode_HAS_WSTR_MEMORY(op) \
+ ((_PyUnicode_WSTR(op) && \
+ (!PyUnicode_IS_READY(op) || \
+ _PyUnicode_WSTR(op) != PyUnicode_DATA(op))))
+
+/* Generic helper macro to convert characters of different types.
+ from_type and to_type have to be valid type names, begin and end
+ are pointers to the source characters which should be of type
+ "from_type *". to is a pointer of type "to_type *" and points to the
+   buffer where the result characters are written. */
+#define _PyUnicode_CONVERT_BYTES(from_type, to_type, begin, end, to) \
+ do { \
+ to_type *_to = (to_type *)(to); \
+ const from_type *_iter = (from_type *)(begin); \
+ const from_type *_end = (from_type *)(end); \
+ Py_ssize_t n = (_end) - (_iter); \
+ const from_type *_unrolled_end = \
+ _iter + _Py_SIZE_ROUND_DOWN(n, 4); \
+ while (_iter < (_unrolled_end)) { \
+ _to[0] = (to_type) _iter[0]; \
+ _to[1] = (to_type) _iter[1]; \
+ _to[2] = (to_type) _iter[2]; \
+ _to[3] = (to_type) _iter[3]; \
+ _iter += 4; _to += 4; \
+ } \
+ while (_iter < (_end)) \
+ *_to++ = (to_type) *_iter++; \
+ } while (0)
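+
+/* Example use (a sketch): widening n UCS1 characters into a UCS2 buffer:
+       _PyUnicode_CONVERT_BYTES(Py_UCS1, Py_UCS2, src, src + n, dst);
+   copies the characters one by one, casting each Py_UCS1 to Py_UCS2. */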
+
+#ifdef MS_WINDOWS
+   /* On Windows, overallocating by 50% works best */
+# define OVERALLOCATE_FACTOR 2
+#else
+   /* On Linux, overallocating by 25% works best */
+# define OVERALLOCATE_FACTOR 4
+#endif
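+
+/* The writer code later in this file grows its allocations roughly as
+       newlen += newlen / OVERALLOCATE_FACTOR;
+   so a factor of 2 corresponds to ~50% extra space and a factor of 4 to
+   ~25% extra space. */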
+
+/* This dictionary holds all interned unicode strings. Note that references
+ to strings in this dictionary are *not* counted in the string's ob_refcnt.
+ When the interned string reaches a refcnt of 0 the string deallocation
+ function will delete the reference from this dictionary.
+
+   Another way to look at this is to say that the actual reference
+ count of a string is: s->ob_refcnt + (s->state ? 2 : 0)
+*/
+static PyObject *interned = NULL;
+
+/* The empty Unicode object is shared to improve performance. */
+static PyObject *unicode_empty = NULL;
+
+#define _Py_INCREF_UNICODE_EMPTY() \
+ do { \
+ if (unicode_empty != NULL) \
+ Py_INCREF(unicode_empty); \
+ else { \
+ unicode_empty = PyUnicode_New(0, 0); \
+ if (unicode_empty != NULL) { \
+ Py_INCREF(unicode_empty); \
+ assert(_PyUnicode_CheckConsistency(unicode_empty, 1)); \
+ } \
+ } \
+ } while (0)
+
+#define _Py_RETURN_UNICODE_EMPTY() \
+ do { \
+ _Py_INCREF_UNICODE_EMPTY(); \
+ return unicode_empty; \
+ } while (0)
+
+#define FILL(kind, data, value, start, length) \
+ do { \
+ assert(0 <= start); \
+ assert(kind != PyUnicode_WCHAR_KIND); \
+ switch (kind) { \
+ case PyUnicode_1BYTE_KIND: { \
+ assert(value <= 0xff); \
+ Py_UCS1 ch = (unsigned char)value; \
+ Py_UCS1 *to = (Py_UCS1 *)data + start; \
+ memset(to, ch, length); \
+ break; \
+ } \
+ case PyUnicode_2BYTE_KIND: { \
+ assert(value <= 0xffff); \
+ Py_UCS2 ch = (Py_UCS2)value; \
+ Py_UCS2 *to = (Py_UCS2 *)data + start; \
+ const Py_UCS2 *end = to + length; \
+ for (; to < end; ++to) *to = ch; \
+ break; \
+ } \
+ case PyUnicode_4BYTE_KIND: { \
+ assert(value <= MAX_UNICODE); \
+ Py_UCS4 ch = value; \
+ Py_UCS4 * to = (Py_UCS4 *)data + start; \
+ const Py_UCS4 *end = to + length; \
+ for (; to < end; ++to) *to = ch; \
+ break; \
+ } \
+ default: assert(0); \
+ } \
+ } while (0)
+
+
+/* Forward declaration */
+static int
+_PyUnicodeWriter_WriteCharInline(_PyUnicodeWriter *writer, Py_UCS4 ch);
+
+/* List of static strings. */
+static _Py_Identifier *static_strings = NULL;
+
+/* Single character Unicode strings in the Latin-1 range are being
+ shared as well. */
+static PyObject *unicode_latin1[256] = {NULL};
+
+/* Fast detection of the most frequent whitespace characters */
+const unsigned char _Py_ascii_whitespace[] = {
+ 0, 0, 0, 0, 0, 0, 0, 0,
+/* case 0x0009: * CHARACTER TABULATION */
+/* case 0x000A: * LINE FEED */
+/* case 0x000B: * LINE TABULATION */
+/* case 0x000C: * FORM FEED */
+/* case 0x000D: * CARRIAGE RETURN */
+ 0, 1, 1, 1, 1, 1, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+/* case 0x001C: * FILE SEPARATOR */
+/* case 0x001D: * GROUP SEPARATOR */
+/* case 0x001E: * RECORD SEPARATOR */
+/* case 0x001F: * UNIT SEPARATOR */
+ 0, 0, 0, 0, 1, 1, 1, 1,
+/* case 0x0020: * SPACE */
+ 1, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0
+};
+
+/* forward */
+static PyUnicodeObject *_PyUnicode_New(Py_ssize_t length);
+static PyObject* get_latin1_char(unsigned char ch);
+static int unicode_modifiable(PyObject *unicode);
+
+
+static PyObject *
+_PyUnicode_FromUCS1(const Py_UCS1 *s, Py_ssize_t size);
+static PyObject *
+_PyUnicode_FromUCS2(const Py_UCS2 *s, Py_ssize_t size);
+static PyObject *
+_PyUnicode_FromUCS4(const Py_UCS4 *s, Py_ssize_t size);
+
+static PyObject *
+unicode_encode_call_errorhandler(const char *errors,
+ PyObject **errorHandler,const char *encoding, const char *reason,
+ PyObject *unicode, PyObject **exceptionObject,
+ Py_ssize_t startpos, Py_ssize_t endpos, Py_ssize_t *newpos);
+
+static void
+raise_encode_exception(PyObject **exceptionObject,
+ const char *encoding,
+ PyObject *unicode,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ const char *reason);
+
+/* Same for linebreaks */
+static const unsigned char ascii_linebreak[] = {
+ 0, 0, 0, 0, 0, 0, 0, 0,
+/* 0x000A, * LINE FEED */
+/* 0x000B, * LINE TABULATION */
+/* 0x000C, * FORM FEED */
+/* 0x000D, * CARRIAGE RETURN */
+ 0, 0, 1, 1, 1, 1, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+/* 0x001C, * FILE SEPARATOR */
+/* 0x001D, * GROUP SEPARATOR */
+/* 0x001E, * RECORD SEPARATOR */
+ 0, 0, 0, 0, 1, 1, 1, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0
+};
+
+#include "clinic/unicodeobject.c.h"
+
+typedef enum {
+ _Py_ERROR_UNKNOWN=0,
+ _Py_ERROR_STRICT,
+ _Py_ERROR_SURROGATEESCAPE,
+ _Py_ERROR_REPLACE,
+ _Py_ERROR_IGNORE,
+ _Py_ERROR_BACKSLASHREPLACE,
+ _Py_ERROR_SURROGATEPASS,
+ _Py_ERROR_XMLCHARREFREPLACE,
+ _Py_ERROR_OTHER
+} _Py_error_handler;
+
+static _Py_error_handler
+get_error_handler(const char *errors)
+{
+ if (errors == NULL || strcmp(errors, "strict") == 0) {
+ return _Py_ERROR_STRICT;
+ }
+ if (strcmp(errors, "surrogateescape") == 0) {
+ return _Py_ERROR_SURROGATEESCAPE;
+ }
+ if (strcmp(errors, "replace") == 0) {
+ return _Py_ERROR_REPLACE;
+ }
+ if (strcmp(errors, "ignore") == 0) {
+ return _Py_ERROR_IGNORE;
+ }
+ if (strcmp(errors, "backslashreplace") == 0) {
+ return _Py_ERROR_BACKSLASHREPLACE;
+ }
+ if (strcmp(errors, "surrogatepass") == 0) {
+ return _Py_ERROR_SURROGATEPASS;
+ }
+ if (strcmp(errors, "xmlcharrefreplace") == 0) {
+ return _Py_ERROR_XMLCHARREFREPLACE;
+ }
+ return _Py_ERROR_OTHER;
+}
+
+/* The max unicode value is always 0x10FFFF while using the PEP-393 API.
+ This function is kept for backward compatibility with the old API. */
+Py_UNICODE
+PyUnicode_GetMax(void)
+{
+#ifdef Py_UNICODE_WIDE
+ return 0x10FFFF;
+#else
+ /* This is actually an illegal character, so it should
+ not be passed to unichr. */
+ return 0xFFFF;
+#endif
+}
+
+#ifdef Py_DEBUG
+int
+_PyUnicode_CheckConsistency(PyObject *op, int check_content)
+{
+ PyASCIIObject *ascii;
+ unsigned int kind;
+
+ assert(PyUnicode_Check(op));
+
+ ascii = (PyASCIIObject *)op;
+ kind = ascii->state.kind;
+
+ if (ascii->state.ascii == 1 && ascii->state.compact == 1) {
+ assert(kind == PyUnicode_1BYTE_KIND);
+ assert(ascii->state.ready == 1);
+ }
+ else {
+ PyCompactUnicodeObject *compact = (PyCompactUnicodeObject *)op;
+ void *data;
+
+ if (ascii->state.compact == 1) {
+ data = compact + 1;
+ assert(kind == PyUnicode_1BYTE_KIND
+ || kind == PyUnicode_2BYTE_KIND
+ || kind == PyUnicode_4BYTE_KIND);
+ assert(ascii->state.ascii == 0);
+ assert(ascii->state.ready == 1);
+ assert (compact->utf8 != data);
+ }
+ else {
+ PyUnicodeObject *unicode = (PyUnicodeObject *)op;
+
+ data = unicode->data.any;
+ if (kind == PyUnicode_WCHAR_KIND) {
+ assert(ascii->length == 0);
+ assert(ascii->hash == -1);
+ assert(ascii->state.compact == 0);
+ assert(ascii->state.ascii == 0);
+ assert(ascii->state.ready == 0);
+ assert(ascii->state.interned == SSTATE_NOT_INTERNED);
+ assert(ascii->wstr != NULL);
+ assert(data == NULL);
+ assert(compact->utf8 == NULL);
+ }
+ else {
+ assert(kind == PyUnicode_1BYTE_KIND
+ || kind == PyUnicode_2BYTE_KIND
+ || kind == PyUnicode_4BYTE_KIND);
+ assert(ascii->state.compact == 0);
+ assert(ascii->state.ready == 1);
+ assert(data != NULL);
+ if (ascii->state.ascii) {
+ assert (compact->utf8 == data);
+ assert (compact->utf8_length == ascii->length);
+ }
+ else
+ assert (compact->utf8 != data);
+ }
+ }
+ if (kind != PyUnicode_WCHAR_KIND) {
+ if (
+#if SIZEOF_WCHAR_T == 2
+ kind == PyUnicode_2BYTE_KIND
+#else
+ kind == PyUnicode_4BYTE_KIND
+#endif
+ )
+ {
+ assert(ascii->wstr == data);
+ assert(compact->wstr_length == ascii->length);
+ } else
+ assert(ascii->wstr != data);
+ }
+
+ if (compact->utf8 == NULL)
+ assert(compact->utf8_length == 0);
+ if (ascii->wstr == NULL)
+ assert(compact->wstr_length == 0);
+ }
+ /* check that the best kind is used */
+ if (check_content && kind != PyUnicode_WCHAR_KIND)
+ {
+ Py_ssize_t i;
+ Py_UCS4 maxchar = 0;
+ void *data;
+ Py_UCS4 ch;
+
+ data = PyUnicode_DATA(ascii);
+ for (i=0; i < ascii->length; i++)
+ {
+ ch = PyUnicode_READ(kind, data, i);
+ if (ch > maxchar)
+ maxchar = ch;
+ }
+ if (kind == PyUnicode_1BYTE_KIND) {
+ if (ascii->state.ascii == 0) {
+ assert(maxchar >= 128);
+ assert(maxchar <= 255);
+ }
+ else
+ assert(maxchar < 128);
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ assert(maxchar >= 0x100);
+ assert(maxchar <= 0xFFFF);
+ }
+ else {
+ assert(maxchar >= 0x10000);
+ assert(maxchar <= MAX_UNICODE);
+ }
+ assert(PyUnicode_READ(kind, data, ascii->length) == 0);
+ }
+ return 1;
+}
+#endif
+
+static PyObject*
+unicode_result_wchar(PyObject *unicode)
+{
+#ifndef Py_DEBUG
+ Py_ssize_t len;
+
+ len = _PyUnicode_WSTR_LENGTH(unicode);
+ if (len == 0) {
+ Py_DECREF(unicode);
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ if (len == 1) {
+ wchar_t ch = _PyUnicode_WSTR(unicode)[0];
+ if ((Py_UCS4)ch < 256) {
+ PyObject *latin1_char = get_latin1_char((unsigned char)ch);
+ Py_DECREF(unicode);
+ return latin1_char;
+ }
+ }
+
+ if (_PyUnicode_Ready(unicode) < 0) {
+ Py_DECREF(unicode);
+ return NULL;
+ }
+#else
+ assert(Py_REFCNT(unicode) == 1);
+
+ /* don't make the result ready in debug mode to ensure that the caller
+ makes the string ready before using it */
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+#endif
+ return unicode;
+}
+
+static PyObject*
+unicode_result_ready(PyObject *unicode)
+{
+ Py_ssize_t length;
+
+ length = PyUnicode_GET_LENGTH(unicode);
+ if (length == 0) {
+ if (unicode != unicode_empty) {
+ Py_DECREF(unicode);
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+ return unicode_empty;
+ }
+
+ if (length == 1) {
+ void *data = PyUnicode_DATA(unicode);
+ int kind = PyUnicode_KIND(unicode);
+ Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
+ if (ch < 256) {
+ PyObject *latin1_char = unicode_latin1[ch];
+ if (latin1_char != NULL) {
+ if (unicode != latin1_char) {
+ Py_INCREF(latin1_char);
+ Py_DECREF(unicode);
+ }
+ return latin1_char;
+ }
+ else {
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ Py_INCREF(unicode);
+ unicode_latin1[ch] = unicode;
+ return unicode;
+ }
+ }
+ }
+
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ return unicode;
+}
+
+static PyObject*
+unicode_result(PyObject *unicode)
+{
+ assert(_PyUnicode_CHECK(unicode));
+ if (PyUnicode_IS_READY(unicode))
+ return unicode_result_ready(unicode);
+ else
+ return unicode_result_wchar(unicode);
+}
+
+static PyObject*
+unicode_result_unchanged(PyObject *unicode)
+{
+ if (PyUnicode_CheckExact(unicode)) {
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ Py_INCREF(unicode);
+ return unicode;
+ }
+ else
+ /* Subtype -- return genuine unicode string with the same value. */
+ return _PyUnicode_Copy(unicode);
+}
+
+/* Implementation of the "backslashreplace" error handler for 8-bit encodings:
+ ASCII, Latin1, UTF-8, etc. */
+static char*
+backslashreplace(_PyBytesWriter *writer, char *str,
+ PyObject *unicode, Py_ssize_t collstart, Py_ssize_t collend)
+{
+ Py_ssize_t size, i;
+ Py_UCS4 ch;
+ enum PyUnicode_Kind kind;
+ void *data;
+
+ assert(PyUnicode_IS_READY(unicode));
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+
+ size = 0;
+ /* determine replacement size */
+ for (i = collstart; i < collend; ++i) {
+ Py_ssize_t incr;
+
+ ch = PyUnicode_READ(kind, data, i);
+ if (ch < 0x100)
+ incr = 2+2;
+ else if (ch < 0x10000)
+ incr = 2+4;
+ else {
+ assert(ch <= MAX_UNICODE);
+ incr = 2+8;
+ }
+ if (size > PY_SSIZE_T_MAX - incr) {
+ PyErr_SetString(PyExc_OverflowError,
+ "encoded result is too long for a Python string");
+ return NULL;
+ }
+ size += incr;
+ }
+
+ str = _PyBytesWriter_Prepare(writer, str, size);
+ if (str == NULL)
+ return NULL;
+
+ /* generate replacement */
+ for (i = collstart; i < collend; ++i) {
+ ch = PyUnicode_READ(kind, data, i);
+ *str++ = '\\';
+ if (ch >= 0x00010000) {
+ *str++ = 'U';
+ *str++ = Py_hexdigits[(ch>>28)&0xf];
+ *str++ = Py_hexdigits[(ch>>24)&0xf];
+ *str++ = Py_hexdigits[(ch>>20)&0xf];
+ *str++ = Py_hexdigits[(ch>>16)&0xf];
+ *str++ = Py_hexdigits[(ch>>12)&0xf];
+ *str++ = Py_hexdigits[(ch>>8)&0xf];
+ }
+ else if (ch >= 0x100) {
+ *str++ = 'u';
+ *str++ = Py_hexdigits[(ch>>12)&0xf];
+ *str++ = Py_hexdigits[(ch>>8)&0xf];
+ }
+ else
+ *str++ = 'x';
+ *str++ = Py_hexdigits[(ch>>4)&0xf];
+ *str++ = Py_hexdigits[ch&0xf];
+ }
+ return str;
+}
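+
+/* For reference, the replacement produced above is "\xNN" (2+2 bytes) for
+   ch < 0x100, "\uNNNN" (2+4 bytes) for ch < 0x10000, and "\UNNNNNNNN"
+   (2+8 bytes) otherwise, matching the incr values used to pre-size the
+   output. */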
+
+/* Implementation of the "xmlcharrefreplace" error handler for 8-bit encodings:
+ ASCII, Latin1, UTF-8, etc. */
+static char*
+xmlcharrefreplace(_PyBytesWriter *writer, char *str,
+ PyObject *unicode, Py_ssize_t collstart, Py_ssize_t collend)
+{
+ Py_ssize_t size, i;
+ Py_UCS4 ch;
+ enum PyUnicode_Kind kind;
+ void *data;
+
+ assert(PyUnicode_IS_READY(unicode));
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+
+ size = 0;
+ /* determine replacement size */
+ for (i = collstart; i < collend; ++i) {
+ Py_ssize_t incr;
+
+ ch = PyUnicode_READ(kind, data, i);
+ if (ch < 10)
+ incr = 2+1+1;
+ else if (ch < 100)
+ incr = 2+2+1;
+ else if (ch < 1000)
+ incr = 2+3+1;
+ else if (ch < 10000)
+ incr = 2+4+1;
+ else if (ch < 100000)
+ incr = 2+5+1;
+ else if (ch < 1000000)
+ incr = 2+6+1;
+ else {
+ assert(ch <= MAX_UNICODE);
+ incr = 2+7+1;
+ }
+ if (size > PY_SSIZE_T_MAX - incr) {
+ PyErr_SetString(PyExc_OverflowError,
+ "encoded result is too long for a Python string");
+ return NULL;
+ }
+ size += incr;
+ }
+
+ str = _PyBytesWriter_Prepare(writer, str, size);
+ if (str == NULL)
+ return NULL;
+
+ /* generate replacement */
+ for (i = collstart; i < collend; ++i) {
+ str += sprintf(str, "&#%d;", PyUnicode_READ(kind, data, i));
+ }
+ return str;
+}
+
+/* --- Bloom Filters ----------------------------------------------------- */
+
+/* Stuff to implement simple "bloom filters" for Unicode characters.
+   To keep things simple, we use a single bitmask, using the least
+   significant bits of each unicode character as the bit index. */
+
+/* the linebreak mask is set up by Unicode_Init below */
+
+#if LONG_BIT >= 128
+#define BLOOM_WIDTH 128
+#elif LONG_BIT >= 64
+#define BLOOM_WIDTH 64
+#elif LONG_BIT >= 32
+#define BLOOM_WIDTH 32
+#else
+#error "LONG_BIT is smaller than 32"
+#endif
+
+#define BLOOM_MASK unsigned long
+
+static BLOOM_MASK bloom_linebreak = ~(BLOOM_MASK)0;
+
+#define BLOOM(mask, ch) ((mask & (1UL << ((ch) & (BLOOM_WIDTH - 1)))))
+
+#define BLOOM_LINEBREAK(ch) \
+ ((ch) < 128U ? ascii_linebreak[(ch)] : \
+ (BLOOM(bloom_linebreak, (ch)) && Py_UNICODE_ISLINEBREAK(ch)))
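+
+/* Usage sketch: a cleared bit proves a character is absent from the set the
+   mask was built from; a set bit may be a false positive, so callers pair
+   the cheap test with an exact one, e.g.
+       if (BLOOM(mask, ch) && <exact membership test>) ...
+   as BLOOM_LINEBREAK does with Py_UNICODE_ISLINEBREAK(). */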
+
+static BLOOM_MASK
+make_bloom_mask(int kind, void* ptr, Py_ssize_t len)
+{
+#define BLOOM_UPDATE(TYPE, MASK, PTR, LEN) \
+ do { \
+ TYPE *data = (TYPE *)PTR; \
+ TYPE *end = data + LEN; \
+ Py_UCS4 ch; \
+ for (; data != end; data++) { \
+ ch = *data; \
+ MASK |= (1UL << (ch & (BLOOM_WIDTH - 1))); \
+ } \
+ break; \
+ } while (0)
+
+ /* calculate simple bloom-style bitmask for a given unicode string */
+
+ BLOOM_MASK mask;
+
+ mask = 0;
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ BLOOM_UPDATE(Py_UCS1, mask, ptr, len);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ BLOOM_UPDATE(Py_UCS2, mask, ptr, len);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ BLOOM_UPDATE(Py_UCS4, mask, ptr, len);
+ break;
+ default:
+ assert(0);
+ }
+ return mask;
+
+#undef BLOOM_UPDATE
+}
+
+static int
+ensure_unicode(PyObject *obj)
+{
+ if (!PyUnicode_Check(obj)) {
+ PyErr_Format(PyExc_TypeError,
+ "must be str, not %.100s",
+ Py_TYPE(obj)->tp_name);
+ return -1;
+ }
+ return PyUnicode_READY(obj);
+}
+
+/* Compilation of templated routines */
+
+#include "stringlib/asciilib.h"
+#include "stringlib/fastsearch.h"
+#include "stringlib/partition.h"
+#include "stringlib/split.h"
+#include "stringlib/count.h"
+#include "stringlib/find.h"
+#include "stringlib/find_max_char.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs1lib.h"
+#include "stringlib/fastsearch.h"
+#include "stringlib/partition.h"
+#include "stringlib/split.h"
+#include "stringlib/count.h"
+#include "stringlib/find.h"
+#include "stringlib/replace.h"
+#include "stringlib/find_max_char.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs2lib.h"
+#include "stringlib/fastsearch.h"
+#include "stringlib/partition.h"
+#include "stringlib/split.h"
+#include "stringlib/count.h"
+#include "stringlib/find.h"
+#include "stringlib/replace.h"
+#include "stringlib/find_max_char.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs4lib.h"
+#include "stringlib/fastsearch.h"
+#include "stringlib/partition.h"
+#include "stringlib/split.h"
+#include "stringlib/count.h"
+#include "stringlib/find.h"
+#include "stringlib/replace.h"
+#include "stringlib/find_max_char.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/unicodedefs.h"
+#include "stringlib/fastsearch.h"
+#include "stringlib/count.h"
+#include "stringlib/find.h"
+#include "stringlib/undef.h"
+
+/* --- Unicode Object ----------------------------------------------------- */
+
+static PyObject *
+fixup(PyObject *self, Py_UCS4 (*fixfct)(PyObject *s));
+
+static Py_ssize_t
+findchar(const void *s, int kind,
+ Py_ssize_t size, Py_UCS4 ch,
+ int direction)
+{
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ if ((Py_UCS1) ch != ch)
+ return -1;
+ if (direction > 0)
+ return ucs1lib_find_char((Py_UCS1 *) s, size, (Py_UCS1) ch);
+ else
+ return ucs1lib_rfind_char((Py_UCS1 *) s, size, (Py_UCS1) ch);
+ case PyUnicode_2BYTE_KIND:
+ if ((Py_UCS2) ch != ch)
+ return -1;
+ if (direction > 0)
+ return ucs2lib_find_char((Py_UCS2 *) s, size, (Py_UCS2) ch);
+ else
+ return ucs2lib_rfind_char((Py_UCS2 *) s, size, (Py_UCS2) ch);
+ case PyUnicode_4BYTE_KIND:
+ if (direction > 0)
+ return ucs4lib_find_char((Py_UCS4 *) s, size, ch);
+ else
+ return ucs4lib_rfind_char((Py_UCS4 *) s, size, ch);
+ default:
+ assert(0);
+ return -1;
+ }
+}
+
+#ifdef Py_DEBUG
+/* Fill the data of a Unicode string with invalid characters to detect bugs
+ earlier.
+
+ _PyUnicode_CheckConsistency(str, 1) detects invalid characters, at least for
+ ASCII and UCS-4 strings. U+00FF is invalid in ASCII and U+FFFFFFFF is an
+ invalid character in Unicode 6.0. */
+static void
+unicode_fill_invalid(PyObject *unicode, Py_ssize_t old_length)
+{
+ int kind = PyUnicode_KIND(unicode);
+ Py_UCS1 *data = PyUnicode_1BYTE_DATA(unicode);
+ Py_ssize_t length = _PyUnicode_LENGTH(unicode);
+ if (length <= old_length)
+ return;
+ memset(data + old_length * kind, 0xff, (length - old_length) * kind);
+}
+#endif
+
+static PyObject*
+resize_compact(PyObject *unicode, Py_ssize_t length)
+{
+ Py_ssize_t char_size;
+ Py_ssize_t struct_size;
+ Py_ssize_t new_size;
+ int share_wstr;
+ PyObject *new_unicode;
+#ifdef Py_DEBUG
+ Py_ssize_t old_length = _PyUnicode_LENGTH(unicode);
+#endif
+
+ assert(unicode_modifiable(unicode));
+ assert(PyUnicode_IS_READY(unicode));
+ assert(PyUnicode_IS_COMPACT(unicode));
+
+ char_size = PyUnicode_KIND(unicode);
+ if (PyUnicode_IS_ASCII(unicode))
+ struct_size = sizeof(PyASCIIObject);
+ else
+ struct_size = sizeof(PyCompactUnicodeObject);
+ share_wstr = _PyUnicode_SHARE_WSTR(unicode);
+
+ if (length > ((PY_SSIZE_T_MAX - struct_size) / char_size - 1)) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ new_size = (struct_size + (length + 1) * char_size);
+
+ if (_PyUnicode_HAS_UTF8_MEMORY(unicode)) {
+ PyObject_DEL(_PyUnicode_UTF8(unicode));
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ }
+ _Py_DEC_REFTOTAL;
+ _Py_ForgetReference(unicode);
+
+ new_unicode = (PyObject *)PyObject_REALLOC(unicode, new_size);
+ if (new_unicode == NULL) {
+ _Py_NewReference(unicode);
+ PyErr_NoMemory();
+ return NULL;
+ }
+ unicode = new_unicode;
+ _Py_NewReference(unicode);
+
+ _PyUnicode_LENGTH(unicode) = length;
+ if (share_wstr) {
+ _PyUnicode_WSTR(unicode) = PyUnicode_DATA(unicode);
+ if (!PyUnicode_IS_ASCII(unicode))
+ _PyUnicode_WSTR_LENGTH(unicode) = length;
+ }
+ else if (_PyUnicode_HAS_WSTR_MEMORY(unicode)) {
+ PyObject_DEL(_PyUnicode_WSTR(unicode));
+ _PyUnicode_WSTR(unicode) = NULL;
+ if (!PyUnicode_IS_ASCII(unicode))
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+ }
+#ifdef Py_DEBUG
+ unicode_fill_invalid(unicode, old_length);
+#endif
+ PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode),
+ length, 0);
+ assert(_PyUnicode_CheckConsistency(unicode, 0));
+ return unicode;
+}
+
+static int
+resize_inplace(PyObject *unicode, Py_ssize_t length)
+{
+ wchar_t *wstr;
+ Py_ssize_t new_size;
+ assert(!PyUnicode_IS_COMPACT(unicode));
+ assert(Py_REFCNT(unicode) == 1);
+
+ if (PyUnicode_IS_READY(unicode)) {
+ Py_ssize_t char_size;
+ int share_wstr, share_utf8;
+ void *data;
+#ifdef Py_DEBUG
+ Py_ssize_t old_length = _PyUnicode_LENGTH(unicode);
+#endif
+
+ data = _PyUnicode_DATA_ANY(unicode);
+ char_size = PyUnicode_KIND(unicode);
+ share_wstr = _PyUnicode_SHARE_WSTR(unicode);
+ share_utf8 = _PyUnicode_SHARE_UTF8(unicode);
+
+ if (length > (PY_SSIZE_T_MAX / char_size - 1)) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ new_size = (length + 1) * char_size;
+
+ if (!share_utf8 && _PyUnicode_HAS_UTF8_MEMORY(unicode))
+ {
+ PyObject_DEL(_PyUnicode_UTF8(unicode));
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ }
+
+ data = (PyObject *)PyObject_REALLOC(data, new_size);
+ if (data == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_DATA_ANY(unicode) = data;
+ if (share_wstr) {
+ _PyUnicode_WSTR(unicode) = data;
+ _PyUnicode_WSTR_LENGTH(unicode) = length;
+ }
+ if (share_utf8) {
+ _PyUnicode_UTF8(unicode) = data;
+ _PyUnicode_UTF8_LENGTH(unicode) = length;
+ }
+ _PyUnicode_LENGTH(unicode) = length;
+ PyUnicode_WRITE(PyUnicode_KIND(unicode), data, length, 0);
+#ifdef Py_DEBUG
+ unicode_fill_invalid(unicode, old_length);
+#endif
+ if (share_wstr || _PyUnicode_WSTR(unicode) == NULL) {
+ assert(_PyUnicode_CheckConsistency(unicode, 0));
+ return 0;
+ }
+ }
+ assert(_PyUnicode_WSTR(unicode) != NULL);
+
+ /* check for integer overflow */
+ if (length > PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(wchar_t) - 1) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ new_size = sizeof(wchar_t) * (length + 1);
+ wstr = _PyUnicode_WSTR(unicode);
+ wstr = PyObject_REALLOC(wstr, new_size);
+ if (!wstr) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_WSTR(unicode) = wstr;
+ _PyUnicode_WSTR(unicode)[length] = 0;
+ _PyUnicode_WSTR_LENGTH(unicode) = length;
+ assert(_PyUnicode_CheckConsistency(unicode, 0));
+ return 0;
+}
+
+static PyObject*
+resize_copy(PyObject *unicode, Py_ssize_t length)
+{
+ Py_ssize_t copy_length;
+ if (_PyUnicode_KIND(unicode) != PyUnicode_WCHAR_KIND) {
+ PyObject *copy;
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+
+ copy = PyUnicode_New(length, PyUnicode_MAX_CHAR_VALUE(unicode));
+ if (copy == NULL)
+ return NULL;
+
+ copy_length = Py_MIN(length, PyUnicode_GET_LENGTH(unicode));
+ _PyUnicode_FastCopyCharacters(copy, 0, unicode, 0, copy_length);
+ return copy;
+ }
+ else {
+ PyObject *w;
+
+ w = (PyObject*)_PyUnicode_New(length);
+ if (w == NULL)
+ return NULL;
+ copy_length = _PyUnicode_WSTR_LENGTH(unicode);
+ copy_length = Py_MIN(copy_length, length);
+ memcpy(_PyUnicode_WSTR(w), _PyUnicode_WSTR(unicode),
+ copy_length * sizeof(wchar_t));
+ return w;
+ }
+}
+
+/* We allocate one more byte to make sure the string is
+   U+0000 terminated; some code (e.g. new_identifier)
+ relies on that.
+
+   XXX This allocator could further be enhanced by ensuring that the
+ free list never reduces its size below 1.
+
+*/
+
+static PyUnicodeObject *
+_PyUnicode_New(Py_ssize_t length)
+{
+ PyUnicodeObject *unicode;
+ size_t new_size;
+
+ /* Optimization for empty strings */
+ if (length == 0 && unicode_empty != NULL) {
+ Py_INCREF(unicode_empty);
+ return (PyUnicodeObject*)unicode_empty;
+ }
+
+ /* Ensure we won't overflow the size. */
+ if (length > ((PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(Py_UNICODE)) - 1)) {
+ return (PyUnicodeObject *)PyErr_NoMemory();
+ }
+ if (length < 0) {
+ PyErr_SetString(PyExc_SystemError,
+ "Negative size passed to _PyUnicode_New");
+ return NULL;
+ }
+
+ unicode = PyObject_New(PyUnicodeObject, &PyUnicode_Type);
+ if (unicode == NULL)
+ return NULL;
+ new_size = sizeof(Py_UNICODE) * ((size_t)length + 1);
+
+ _PyUnicode_WSTR_LENGTH(unicode) = length;
+ _PyUnicode_HASH(unicode) = -1;
+ _PyUnicode_STATE(unicode).interned = 0;
+ _PyUnicode_STATE(unicode).kind = 0;
+ _PyUnicode_STATE(unicode).compact = 0;
+ _PyUnicode_STATE(unicode).ready = 0;
+ _PyUnicode_STATE(unicode).ascii = 0;
+ _PyUnicode_DATA_ANY(unicode) = NULL;
+ _PyUnicode_LENGTH(unicode) = 0;
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+
+ _PyUnicode_WSTR(unicode) = (Py_UNICODE*) PyObject_MALLOC(new_size);
+ if (!_PyUnicode_WSTR(unicode)) {
+ Py_DECREF(unicode);
+ PyErr_NoMemory();
+ return NULL;
+ }
+
+ /* Initialize the first element to guard against cases where
+ * the caller fails before initializing str -- unicode_resize()
+ * reads str[0], and the Keep-Alive optimization can keep memory
+ * allocated for str alive across a call to unicode_dealloc(unicode).
+ * We don't want unicode_resize to read uninitialized memory in
+ * that case.
+ */
+ _PyUnicode_WSTR(unicode)[0] = 0;
+ _PyUnicode_WSTR(unicode)[length] = 0;
+
+ assert(_PyUnicode_CheckConsistency((PyObject *)unicode, 0));
+ return unicode;
+}
+
+static const char*
+unicode_kind_name(PyObject *unicode)
+{
+ /* don't check consistency: unicode_kind_name() is called from
+ _PyUnicode_Dump() */
+ if (!PyUnicode_IS_COMPACT(unicode))
+ {
+ if (!PyUnicode_IS_READY(unicode))
+ return "wstr";
+ switch (PyUnicode_KIND(unicode))
+ {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(unicode))
+ return "legacy ascii";
+ else
+ return "legacy latin1";
+ case PyUnicode_2BYTE_KIND:
+ return "legacy UCS2";
+ case PyUnicode_4BYTE_KIND:
+ return "legacy UCS4";
+ default:
+ return "<legacy invalid kind>";
+ }
+ }
+ assert(PyUnicode_IS_READY(unicode));
+ switch (PyUnicode_KIND(unicode)) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(unicode))
+ return "ascii";
+ else
+ return "latin1";
+ case PyUnicode_2BYTE_KIND:
+ return "UCS2";
+ case PyUnicode_4BYTE_KIND:
+ return "UCS4";
+ default:
+ return "<invalid compact kind>";
+ }
+}
+
+#ifdef Py_DEBUG
+/* Functions wrapping macros for use in debugger */
+char *_PyUnicode_utf8(void *unicode){
+ return PyUnicode_UTF8(unicode);
+}
+
+void *_PyUnicode_compact_data(void *unicode) {
+ return _PyUnicode_COMPACT_DATA(unicode);
+}
+void *_PyUnicode_data(void *unicode){
+ printf("obj %p\n", unicode);
+ printf("compact %d\n", PyUnicode_IS_COMPACT(unicode));
+ printf("compact ascii %d\n", PyUnicode_IS_COMPACT_ASCII(unicode));
+ printf("ascii op %p\n", ((void*)((PyASCIIObject*)(unicode) + 1)));
+ printf("compact op %p\n", ((void*)((PyCompactUnicodeObject*)(unicode) + 1)));
+ printf("compact data %p\n", _PyUnicode_COMPACT_DATA(unicode));
+ return PyUnicode_DATA(unicode);
+}
+
+void
+_PyUnicode_Dump(PyObject *op)
+{
+ PyASCIIObject *ascii = (PyASCIIObject *)op;
+ PyCompactUnicodeObject *compact = (PyCompactUnicodeObject *)op;
+ PyUnicodeObject *unicode = (PyUnicodeObject *)op;
+ void *data;
+
+ if (ascii->state.compact)
+ {
+ if (ascii->state.ascii)
+ data = (ascii + 1);
+ else
+ data = (compact + 1);
+ }
+ else
+ data = unicode->data.any;
+ printf("%s: len=%" PY_FORMAT_SIZE_T "u, ",
+ unicode_kind_name(op), ascii->length);
+
+ if (ascii->wstr == data)
+ printf("shared ");
+ printf("wstr=%p", ascii->wstr);
+
+ if (!(ascii->state.ascii == 1 && ascii->state.compact == 1)) {
+ printf(" (%" PY_FORMAT_SIZE_T "u), ", compact->wstr_length);
+ if (!ascii->state.compact && compact->utf8 == unicode->data.any)
+ printf("shared ");
+ printf("utf8=%p (%" PY_FORMAT_SIZE_T "u)",
+ compact->utf8, compact->utf8_length);
+ }
+ printf(", data=%p\n", data);
+}
+#endif
+
+PyObject *
+PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)
+{
+ PyObject *obj;
+ PyCompactUnicodeObject *unicode;
+ void *data;
+ enum PyUnicode_Kind kind;
+ int is_sharing, is_ascii;
+ Py_ssize_t char_size;
+ Py_ssize_t struct_size;
+
+ /* Optimization for empty strings */
+ if (size == 0 && unicode_empty != NULL) {
+ Py_INCREF(unicode_empty);
+ return unicode_empty;
+ }
+
+ is_ascii = 0;
+ is_sharing = 0;
+ struct_size = sizeof(PyCompactUnicodeObject);
+ if (maxchar < 128) {
+ kind = PyUnicode_1BYTE_KIND;
+ char_size = 1;
+ is_ascii = 1;
+ struct_size = sizeof(PyASCIIObject);
+ }
+ else if (maxchar < 256) {
+ kind = PyUnicode_1BYTE_KIND;
+ char_size = 1;
+ }
+ else if (maxchar < 65536) {
+ kind = PyUnicode_2BYTE_KIND;
+ char_size = 2;
+ if (sizeof(wchar_t) == 2)
+ is_sharing = 1;
+ }
+ else {
+ if (maxchar > MAX_UNICODE) {
+ PyErr_SetString(PyExc_SystemError,
+ "invalid maximum character passed to PyUnicode_New");
+ return NULL;
+ }
+ kind = PyUnicode_4BYTE_KIND;
+ char_size = 4;
+ if (sizeof(wchar_t) == 4)
+ is_sharing = 1;
+ }
+
+ /* Ensure we won't overflow the size. */
+ if (size < 0) {
+ PyErr_SetString(PyExc_SystemError,
+ "Negative size passed to PyUnicode_New");
+ return NULL;
+ }
+ if (size > ((PY_SSIZE_T_MAX - struct_size) / char_size - 1))
+ return PyErr_NoMemory();
+
+ /* Duplicated allocation code from _PyObject_New() instead of a call to
+ * PyObject_New() so we are able to allocate space for the object and
+     * its data buffer.
+ */
+ obj = (PyObject *) PyObject_MALLOC(struct_size + (size + 1) * char_size);
+ if (obj == NULL)
+ return PyErr_NoMemory();
+ obj = PyObject_INIT(obj, &PyUnicode_Type);
+ if (obj == NULL)
+ return NULL;
+
+ unicode = (PyCompactUnicodeObject *)obj;
+ if (is_ascii)
+ data = ((PyASCIIObject*)obj) + 1;
+ else
+ data = unicode + 1;
+ _PyUnicode_LENGTH(unicode) = size;
+ _PyUnicode_HASH(unicode) = -1;
+ _PyUnicode_STATE(unicode).interned = 0;
+ _PyUnicode_STATE(unicode).kind = kind;
+ _PyUnicode_STATE(unicode).compact = 1;
+ _PyUnicode_STATE(unicode).ready = 1;
+ _PyUnicode_STATE(unicode).ascii = is_ascii;
+ if (is_ascii) {
+ ((char*)data)[size] = 0;
+ _PyUnicode_WSTR(unicode) = NULL;
+ }
+ else if (kind == PyUnicode_1BYTE_KIND) {
+ ((char*)data)[size] = 0;
+ _PyUnicode_WSTR(unicode) = NULL;
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+ unicode->utf8 = NULL;
+ unicode->utf8_length = 0;
+ }
+ else {
+ unicode->utf8 = NULL;
+ unicode->utf8_length = 0;
+ if (kind == PyUnicode_2BYTE_KIND)
+ ((Py_UCS2*)data)[size] = 0;
+ else /* kind == PyUnicode_4BYTE_KIND */
+ ((Py_UCS4*)data)[size] = 0;
+ if (is_sharing) {
+ _PyUnicode_WSTR_LENGTH(unicode) = size;
+ _PyUnicode_WSTR(unicode) = (wchar_t *)data;
+ }
+ else {
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+ _PyUnicode_WSTR(unicode) = NULL;
+ }
+ }
+#ifdef Py_DEBUG
+ unicode_fill_invalid((PyObject*)unicode, 0);
+#endif
+ assert(_PyUnicode_CheckConsistency((PyObject*)unicode, 0));
+ return obj;
+}
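+
+/* Memory layout produced above (a sketch): compact strings store their
+   character data immediately after the header, followed by one terminating
+   null of the character width:
+       ASCII              [PyASCIIObject         ][1-byte chars][0]
+       Latin-1 (compact)  [PyCompactUnicodeObject][1-byte chars][0]
+       UCS2/UCS4 compact  [PyCompactUnicodeObject][2/4-byte chars][0]
+*/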
+
+#if SIZEOF_WCHAR_T == 2
+/* Helper function to convert a 16-bit wchar_t representation to UCS4; this
+   will decode surrogate pairs. The other conversions are implemented as
+   macros for efficiency.
+
+ This function assumes that unicode can hold one more code point than wstr
+ characters for a terminating null character. */
+static void
+unicode_convert_wchar_to_ucs4(const wchar_t *begin, const wchar_t *end,
+ PyObject *unicode)
+{
+ const wchar_t *iter;
+ Py_UCS4 *ucs4_out;
+
+ assert(unicode != NULL);
+ assert(_PyUnicode_CHECK(unicode));
+ assert(_PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND);
+ ucs4_out = PyUnicode_4BYTE_DATA(unicode);
+
+ for (iter = begin; iter < end; ) {
+ assert(ucs4_out < (PyUnicode_4BYTE_DATA(unicode) +
+ _PyUnicode_GET_LENGTH(unicode)));
+ if (Py_UNICODE_IS_HIGH_SURROGATE(iter[0])
+ && (iter+1) < end
+ && Py_UNICODE_IS_LOW_SURROGATE(iter[1]))
+ {
+ *ucs4_out++ = Py_UNICODE_JOIN_SURROGATES(iter[0], iter[1]);
+ iter += 2;
+ }
+ else {
+ *ucs4_out++ = *iter;
+ iter++;
+ }
+ }
+ assert(ucs4_out == (PyUnicode_4BYTE_DATA(unicode) +
+ _PyUnicode_GET_LENGTH(unicode)));
+
+}
+#endif
+
+static int
+unicode_check_modifiable(PyObject *unicode)
+{
+ if (!unicode_modifiable(unicode)) {
+ PyErr_SetString(PyExc_SystemError,
+ "Cannot modify a string currently used");
+ return -1;
+ }
+ return 0;
+}
+
+static int
+_copy_characters(PyObject *to, Py_ssize_t to_start,
+ PyObject *from, Py_ssize_t from_start,
+ Py_ssize_t how_many, int check_maxchar)
+{
+ unsigned int from_kind, to_kind;
+ void *from_data, *to_data;
+
+ assert(0 <= how_many);
+ assert(0 <= from_start);
+ assert(0 <= to_start);
+ assert(PyUnicode_Check(from));
+ assert(PyUnicode_IS_READY(from));
+ assert(from_start + how_many <= PyUnicode_GET_LENGTH(from));
+
+ assert(PyUnicode_Check(to));
+ assert(PyUnicode_IS_READY(to));
+ assert(to_start + how_many <= PyUnicode_GET_LENGTH(to));
+
+ if (how_many == 0)
+ return 0;
+
+ from_kind = PyUnicode_KIND(from);
+ from_data = PyUnicode_DATA(from);
+ to_kind = PyUnicode_KIND(to);
+ to_data = PyUnicode_DATA(to);
+
+#ifdef Py_DEBUG
+ if (!check_maxchar
+ && PyUnicode_MAX_CHAR_VALUE(from) > PyUnicode_MAX_CHAR_VALUE(to))
+ {
+ const Py_UCS4 to_maxchar = PyUnicode_MAX_CHAR_VALUE(to);
+ Py_UCS4 ch;
+ Py_ssize_t i;
+ for (i=0; i < how_many; i++) {
+ ch = PyUnicode_READ(from_kind, from_data, from_start + i);
+ assert(ch <= to_maxchar);
+ }
+ }
+#endif
+
+ if (from_kind == to_kind) {
+ if (check_maxchar
+ && !PyUnicode_IS_ASCII(from) && PyUnicode_IS_ASCII(to))
+ {
+            /* Writing Latin-1 characters into an ASCII string requires
+               checking that all written characters are pure ASCII */
+ Py_UCS4 max_char;
+ max_char = ucs1lib_find_max_char(from_data,
+ (Py_UCS1*)from_data + how_many);
+ if (max_char >= 128)
+ return -1;
+ }
+ memcpy((char*)to_data + to_kind * to_start,
+ (char*)from_data + from_kind * from_start,
+ to_kind * how_many);
+ }
+ else if (from_kind == PyUnicode_1BYTE_KIND
+ && to_kind == PyUnicode_2BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS2,
+ PyUnicode_1BYTE_DATA(from) + from_start,
+ PyUnicode_1BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_2BYTE_DATA(to) + to_start
+ );
+ }
+ else if (from_kind == PyUnicode_1BYTE_KIND
+ && to_kind == PyUnicode_4BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS4,
+ PyUnicode_1BYTE_DATA(from) + from_start,
+ PyUnicode_1BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_4BYTE_DATA(to) + to_start
+ );
+ }
+ else if (from_kind == PyUnicode_2BYTE_KIND
+ && to_kind == PyUnicode_4BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS2, Py_UCS4,
+ PyUnicode_2BYTE_DATA(from) + from_start,
+ PyUnicode_2BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_4BYTE_DATA(to) + to_start
+ );
+ }
+ else {
+ assert (PyUnicode_MAX_CHAR_VALUE(from) > PyUnicode_MAX_CHAR_VALUE(to));
+
+ if (!check_maxchar) {
+ if (from_kind == PyUnicode_2BYTE_KIND
+ && to_kind == PyUnicode_1BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS2, Py_UCS1,
+ PyUnicode_2BYTE_DATA(from) + from_start,
+ PyUnicode_2BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_1BYTE_DATA(to) + to_start
+ );
+ }
+ else if (from_kind == PyUnicode_4BYTE_KIND
+ && to_kind == PyUnicode_1BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS4, Py_UCS1,
+ PyUnicode_4BYTE_DATA(from) + from_start,
+ PyUnicode_4BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_1BYTE_DATA(to) + to_start
+ );
+ }
+ else if (from_kind == PyUnicode_4BYTE_KIND
+ && to_kind == PyUnicode_2BYTE_KIND)
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS4, Py_UCS2,
+ PyUnicode_4BYTE_DATA(from) + from_start,
+ PyUnicode_4BYTE_DATA(from) + from_start + how_many,
+ PyUnicode_2BYTE_DATA(to) + to_start
+ );
+ }
+ else {
+ assert(0);
+ return -1;
+ }
+ }
+ else {
+ const Py_UCS4 to_maxchar = PyUnicode_MAX_CHAR_VALUE(to);
+ Py_UCS4 ch;
+ Py_ssize_t i;
+
+ for (i=0; i < how_many; i++) {
+ ch = PyUnicode_READ(from_kind, from_data, from_start + i);
+ if (ch > to_maxchar)
+ return -1;
+ PyUnicode_WRITE(to_kind, to_data, to_start + i, ch);
+ }
+ }
+ }
+ return 0;
+}
+
+void
+_PyUnicode_FastCopyCharacters(
+ PyObject *to, Py_ssize_t to_start,
+ PyObject *from, Py_ssize_t from_start, Py_ssize_t how_many)
+{
+ (void)_copy_characters(to, to_start, from, from_start, how_many, 0);
+}
+
+Py_ssize_t
+PyUnicode_CopyCharacters(PyObject *to, Py_ssize_t to_start,
+ PyObject *from, Py_ssize_t from_start,
+ Py_ssize_t how_many)
+{
+ int err;
+
+ if (!PyUnicode_Check(from) || !PyUnicode_Check(to)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+
+ if (PyUnicode_READY(from) == -1)
+ return -1;
+ if (PyUnicode_READY(to) == -1)
+ return -1;
+
+ if ((size_t)from_start > (size_t)PyUnicode_GET_LENGTH(from)) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return -1;
+ }
+ if ((size_t)to_start > (size_t)PyUnicode_GET_LENGTH(to)) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return -1;
+ }
+ if (how_many < 0) {
+ PyErr_SetString(PyExc_SystemError, "how_many cannot be negative");
+ return -1;
+ }
+ how_many = Py_MIN(PyUnicode_GET_LENGTH(from)-from_start, how_many);
+ if (to_start + how_many > PyUnicode_GET_LENGTH(to)) {
+ PyErr_Format(PyExc_SystemError,
+ "Cannot write %zi characters at %zi "
+ "in a string of %zi characters",
+ how_many, to_start, PyUnicode_GET_LENGTH(to));
+ return -1;
+ }
+
+ if (how_many == 0)
+ return 0;
+
+ if (unicode_check_modifiable(to))
+ return -1;
+
+ err = _copy_characters(to, to_start, from, from_start, how_many, 1);
+ if (err) {
+ PyErr_Format(PyExc_SystemError,
+ "Cannot copy %s characters "
+ "into a string of %s characters",
+ unicode_kind_name(from),
+ unicode_kind_name(to));
+ return -1;
+ }
+ return how_many;
+}
+
+/* Find the maximum code point and count the number of surrogate pairs so a
+ correct string length can be computed before converting a string to UCS4.
+ This function counts single surrogates as a character and not as a pair.
+
+ Return 0 on success, or -1 on error. */
+static int
+find_maxchar_surrogates(const wchar_t *begin, const wchar_t *end,
+ Py_UCS4 *maxchar, Py_ssize_t *num_surrogates)
+{
+ const wchar_t *iter;
+ Py_UCS4 ch;
+
+ assert(num_surrogates != NULL && maxchar != NULL);
+ *num_surrogates = 0;
+ *maxchar = 0;
+
+ for (iter = begin; iter < end; ) {
+#if SIZEOF_WCHAR_T == 2
+ if (Py_UNICODE_IS_HIGH_SURROGATE(iter[0])
+ && (iter+1) < end
+ && Py_UNICODE_IS_LOW_SURROGATE(iter[1]))
+ {
+ ch = Py_UNICODE_JOIN_SURROGATES(iter[0], iter[1]);
+ ++(*num_surrogates);
+ iter += 2;
+ }
+ else
+#endif
+ {
+ ch = *iter;
+ iter++;
+ }
+ if (ch > *maxchar) {
+ *maxchar = ch;
+ if (*maxchar > MAX_UNICODE) {
+ PyErr_Format(PyExc_ValueError,
+ "character U+%x is not in range [U+0000; U+10ffff]",
+ ch);
+ return -1;
+ }
+ }
+ }
+ return 0;
+}
+
+int
+_PyUnicode_Ready(PyObject *unicode)
+{
+ wchar_t *end;
+ Py_UCS4 maxchar = 0;
+ Py_ssize_t num_surrogates;
+#if SIZEOF_WCHAR_T == 2
+ Py_ssize_t length_wo_surrogates;
+#endif
+
+ /* _PyUnicode_Ready() is only intended for old-style API usage where
+ strings were created using _PyObject_New() and where no canonical
+       representation (the str field) has been set yet, i.e. strings
+       which are not yet ready. */
+ assert(_PyUnicode_CHECK(unicode));
+ assert(_PyUnicode_KIND(unicode) == PyUnicode_WCHAR_KIND);
+ assert(_PyUnicode_WSTR(unicode) != NULL);
+ assert(_PyUnicode_DATA_ANY(unicode) == NULL);
+ assert(_PyUnicode_UTF8(unicode) == NULL);
+ /* Actually, it should neither be interned nor be anything else: */
+ assert(_PyUnicode_STATE(unicode).interned == SSTATE_NOT_INTERNED);
+
+ end = _PyUnicode_WSTR(unicode) + _PyUnicode_WSTR_LENGTH(unicode);
+ if (find_maxchar_surrogates(_PyUnicode_WSTR(unicode), end,
+ &maxchar, &num_surrogates) == -1)
+ return -1;
+
+ if (maxchar < 256) {
+ _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(_PyUnicode_WSTR_LENGTH(unicode) + 1);
+ if (!_PyUnicode_DATA_ANY(unicode)) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_CONVERT_BYTES(wchar_t, unsigned char,
+ _PyUnicode_WSTR(unicode), end,
+ PyUnicode_1BYTE_DATA(unicode));
+ PyUnicode_1BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
+ _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
+ _PyUnicode_STATE(unicode).kind = PyUnicode_1BYTE_KIND;
+ if (maxchar < 128) {
+ _PyUnicode_STATE(unicode).ascii = 1;
+ _PyUnicode_UTF8(unicode) = _PyUnicode_DATA_ANY(unicode);
+ _PyUnicode_UTF8_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
+ }
+ else {
+ _PyUnicode_STATE(unicode).ascii = 0;
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ }
+ PyObject_FREE(_PyUnicode_WSTR(unicode));
+ _PyUnicode_WSTR(unicode) = NULL;
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+ }
+ /* In this case we might have to convert down from 4-byte native
+ wchar_t to 2-byte unicode. */
+ else if (maxchar < 65536) {
+ assert(num_surrogates == 0 &&
+ "FindMaxCharAndNumSurrogatePairs() messed up");
+
+#if SIZEOF_WCHAR_T == 2
+ /* We can share representations and are done. */
+ _PyUnicode_DATA_ANY(unicode) = _PyUnicode_WSTR(unicode);
+ PyUnicode_2BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
+ _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
+ _PyUnicode_STATE(unicode).kind = PyUnicode_2BYTE_KIND;
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+#else
+ /* sizeof(wchar_t) == 4 */
+ _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(
+ 2 * (_PyUnicode_WSTR_LENGTH(unicode) + 1));
+ if (!_PyUnicode_DATA_ANY(unicode)) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_CONVERT_BYTES(wchar_t, Py_UCS2,
+ _PyUnicode_WSTR(unicode), end,
+ PyUnicode_2BYTE_DATA(unicode));
+ PyUnicode_2BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
+ _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
+ _PyUnicode_STATE(unicode).kind = PyUnicode_2BYTE_KIND;
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ PyObject_FREE(_PyUnicode_WSTR(unicode));
+ _PyUnicode_WSTR(unicode) = NULL;
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+#endif
+ }
+    /* maxchar exceeds 16 bits, we need 4 bytes for unicode characters */
+ else {
+#if SIZEOF_WCHAR_T == 2
+        /* in case the native representation is 2 bytes, we need to allocate a
+ new normalized 4-byte version. */
+ length_wo_surrogates = _PyUnicode_WSTR_LENGTH(unicode) - num_surrogates;
+ if (length_wo_surrogates > PY_SSIZE_T_MAX / 4 - 1) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(4 * (length_wo_surrogates + 1));
+ if (!_PyUnicode_DATA_ANY(unicode)) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ _PyUnicode_LENGTH(unicode) = length_wo_surrogates;
+ _PyUnicode_STATE(unicode).kind = PyUnicode_4BYTE_KIND;
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ /* unicode_convert_wchar_to_ucs4() requires a ready string */
+ _PyUnicode_STATE(unicode).ready = 1;
+ unicode_convert_wchar_to_ucs4(_PyUnicode_WSTR(unicode), end, unicode);
+ PyObject_FREE(_PyUnicode_WSTR(unicode));
+ _PyUnicode_WSTR(unicode) = NULL;
+ _PyUnicode_WSTR_LENGTH(unicode) = 0;
+#else
+ assert(num_surrogates == 0);
+
+ _PyUnicode_DATA_ANY(unicode) = _PyUnicode_WSTR(unicode);
+ _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
+ _PyUnicode_UTF8(unicode) = NULL;
+ _PyUnicode_UTF8_LENGTH(unicode) = 0;
+ _PyUnicode_STATE(unicode).kind = PyUnicode_4BYTE_KIND;
+#endif
+ PyUnicode_4BYTE_DATA(unicode)[_PyUnicode_LENGTH(unicode)] = '\0';
+ }
+ _PyUnicode_STATE(unicode).ready = 1;
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ return 0;
+}
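+
+/* Summary of the conversion above: maxchar < 256 gives a fresh 1-byte
+   (ascii/latin1) buffer and the wstr block is freed; maxchar < 65536 gives a
+   2-byte buffer, shared with wstr when sizeof(wchar_t) == 2; anything larger
+   gives a 4-byte buffer, shared with wstr when sizeof(wchar_t) == 4,
+   otherwise surrogate pairs from the 2-byte wstr form are joined into single
+   code points. */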
+
+static void
+unicode_dealloc(PyObject *unicode)
+{
+ switch (PyUnicode_CHECK_INTERNED(unicode)) {
+ case SSTATE_NOT_INTERNED:
+ break;
+
+ case SSTATE_INTERNED_MORTAL:
+ /* revive dead object temporarily for DelItem */
+ Py_REFCNT(unicode) = 3;
+ if (PyDict_DelItem(interned, unicode) != 0)
+ Py_FatalError(
+ "deletion of interned string failed");
+ break;
+
+ case SSTATE_INTERNED_IMMORTAL:
+ Py_FatalError("Immortal interned string died.");
+ /* fall through */
+
+ default:
+ Py_FatalError("Inconsistent interned string state.");
+ }
+
+ if (_PyUnicode_HAS_WSTR_MEMORY(unicode))
+ PyObject_DEL(_PyUnicode_WSTR(unicode));
+ if (_PyUnicode_HAS_UTF8_MEMORY(unicode))
+ PyObject_DEL(_PyUnicode_UTF8(unicode));
+ if (!PyUnicode_IS_COMPACT(unicode) && _PyUnicode_DATA_ANY(unicode))
+ PyObject_DEL(_PyUnicode_DATA_ANY(unicode));
+
+ Py_TYPE(unicode)->tp_free(unicode);
+}
+
+#ifdef Py_DEBUG
+static int
+unicode_is_singleton(PyObject *unicode)
+{
+ PyASCIIObject *ascii = (PyASCIIObject *)unicode;
+ if (unicode == unicode_empty)
+ return 1;
+ if (ascii->state.kind != PyUnicode_WCHAR_KIND && ascii->length == 1)
+ {
+ Py_UCS4 ch = PyUnicode_READ_CHAR(unicode, 0);
+ if (ch < 256 && unicode_latin1[ch] == unicode)
+ return 1;
+ }
+ return 0;
+}
+#endif
+
+static int
+unicode_modifiable(PyObject *unicode)
+{
+ assert(_PyUnicode_CHECK(unicode));
+ if (Py_REFCNT(unicode) != 1)
+ return 0;
+ if (_PyUnicode_HASH(unicode) != -1)
+ return 0;
+ if (PyUnicode_CHECK_INTERNED(unicode))
+ return 0;
+ if (!PyUnicode_CheckExact(unicode))
+ return 0;
+#ifdef Py_DEBUG
+ /* singleton refcount is greater than 1 */
+ assert(!unicode_is_singleton(unicode));
+#endif
+ return 1;
+}
+
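+/* Resize *p_unicode to the requested length: substitute the shared empty
+   string for length 0, resize in place when the object is modifiable, and
+   otherwise replace it with a resized copy. Return 0 on success, -1 on error. */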
+static int
+unicode_resize(PyObject **p_unicode, Py_ssize_t length)
+{
+ PyObject *unicode;
+ Py_ssize_t old_length;
+
+ assert(p_unicode != NULL);
+ unicode = *p_unicode;
+
+ assert(unicode != NULL);
+ assert(PyUnicode_Check(unicode));
+ assert(0 <= length);
+
+ if (_PyUnicode_KIND(unicode) == PyUnicode_WCHAR_KIND)
+ old_length = PyUnicode_WSTR_LENGTH(unicode);
+ else
+ old_length = PyUnicode_GET_LENGTH(unicode);
+ if (old_length == length)
+ return 0;
+
+ if (length == 0) {
+ _Py_INCREF_UNICODE_EMPTY();
+ if (!unicode_empty)
+ return -1;
+ Py_SETREF(*p_unicode, unicode_empty);
+ return 0;
+ }
+
+ if (!unicode_modifiable(unicode)) {
+ PyObject *copy = resize_copy(unicode, length);
+ if (copy == NULL)
+ return -1;
+ Py_SETREF(*p_unicode, copy);
+ return 0;
+ }
+
+ if (PyUnicode_IS_COMPACT(unicode)) {
+ PyObject *new_unicode = resize_compact(unicode, length);
+ if (new_unicode == NULL)
+ return -1;
+ *p_unicode = new_unicode;
+ return 0;
+ }
+ return resize_inplace(unicode, length);
+}
+
+int
+PyUnicode_Resize(PyObject **p_unicode, Py_ssize_t length)
+{
+ PyObject *unicode;
+ if (p_unicode == NULL) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ unicode = *p_unicode;
+ if (unicode == NULL || !PyUnicode_Check(unicode) || length < 0)
+ {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ return unicode_resize(p_unicode, length);
+}
+
+/* Copy an ASCII or latin1 char* string into a Python Unicode string.
+
+ WARNING: The function doesn't copy the terminating null character and
+ doesn't check the maximum character (may write a latin1 character in an
+ ASCII string). */
+static void
+unicode_write_cstr(PyObject *unicode, Py_ssize_t index,
+ const char *str, Py_ssize_t len)
+{
+ enum PyUnicode_Kind kind = PyUnicode_KIND(unicode);
+ void *data = PyUnicode_DATA(unicode);
+ const char *end = str + len;
+
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND: {
+ assert(index + len <= PyUnicode_GET_LENGTH(unicode));
+#ifdef Py_DEBUG
+ if (PyUnicode_IS_ASCII(unicode)) {
+ Py_UCS4 maxchar = ucs1lib_find_max_char(
+ (const Py_UCS1*)str,
+ (const Py_UCS1*)str + len);
+ assert(maxchar < 128);
+ }
+#endif
+ memcpy((char *) data + index, str, len);
+ break;
+ }
+ case PyUnicode_2BYTE_KIND: {
+ Py_UCS2 *start = (Py_UCS2 *)data + index;
+ Py_UCS2 *ucs2 = start;
+ assert(index <= PyUnicode_GET_LENGTH(unicode));
+
+ for (; str < end; ++ucs2, ++str)
+ *ucs2 = (Py_UCS2)*str;
+
+ assert((ucs2 - start) <= PyUnicode_GET_LENGTH(unicode));
+ break;
+ }
+ default: {
+ Py_UCS4 *start = (Py_UCS4 *)data + index;
+ Py_UCS4 *ucs4 = start;
+ assert(kind == PyUnicode_4BYTE_KIND);
+ assert(index <= PyUnicode_GET_LENGTH(unicode));
+
+ for (; str < end; ++ucs4, ++str)
+ *ucs4 = (Py_UCS4)*str;
+
+ assert((ucs4 - start) <= PyUnicode_GET_LENGTH(unicode));
+ }
+ }
+}
+
+static PyObject*
+get_latin1_char(unsigned char ch)
+{
+ PyObject *unicode = unicode_latin1[ch];
+ if (!unicode) {
+ unicode = PyUnicode_New(1, ch);
+ if (!unicode)
+ return NULL;
+ PyUnicode_1BYTE_DATA(unicode)[0] = ch;
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ unicode_latin1[ch] = unicode;
+ }
+ Py_INCREF(unicode);
+ return unicode;
+}
+
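+/* Create a 1-character string for an arbitrary code point, sharing the
+   cached latin-1 singletons for code points below 256. */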
+static PyObject*
+unicode_char(Py_UCS4 ch)
+{
+ PyObject *unicode;
+
+ assert(ch <= MAX_UNICODE);
+
+ if (ch < 256)
+ return get_latin1_char(ch);
+
+ unicode = PyUnicode_New(1, ch);
+ if (unicode == NULL)
+ return NULL;
+ switch (PyUnicode_KIND(unicode)) {
+ case PyUnicode_1BYTE_KIND:
+ PyUnicode_1BYTE_DATA(unicode)[0] = (Py_UCS1)ch;
+ break;
+ case PyUnicode_2BYTE_KIND:
+ PyUnicode_2BYTE_DATA(unicode)[0] = (Py_UCS2)ch;
+ break;
+ default:
+ assert(PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND);
+ PyUnicode_4BYTE_DATA(unicode)[0] = ch;
+ }
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ return unicode;
+}
+
+PyObject *
+PyUnicode_FromUnicode(const Py_UNICODE *u, Py_ssize_t size)
+{
+ PyObject *unicode;
+ Py_UCS4 maxchar = 0;
+ Py_ssize_t num_surrogates;
+
+ if (u == NULL)
+ return (PyObject*)_PyUnicode_New(size);
+
+ /* If the Unicode data is known at construction time, we can apply
+ some optimizations which share commonly used objects. */
+
+ /* Optimization for empty strings */
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+
+ /* Single character Unicode objects in the Latin-1 range are
+ shared when using this constructor */
+ if (size == 1 && (Py_UCS4)*u < 256)
+ return get_latin1_char((unsigned char)*u);
+
+ /* If not empty and not single character, copy the Unicode data
+ into the new object */
+ if (find_maxchar_surrogates(u, u + size,
+ &maxchar, &num_surrogates) == -1)
+ return NULL;
+
+ unicode = PyUnicode_New(size - num_surrogates, maxchar);
+ if (!unicode)
+ return NULL;
+
+ switch (PyUnicode_KIND(unicode)) {
+ case PyUnicode_1BYTE_KIND:
+ _PyUnicode_CONVERT_BYTES(Py_UNICODE, unsigned char,
+ u, u + size, PyUnicode_1BYTE_DATA(unicode));
+ break;
+ case PyUnicode_2BYTE_KIND:
+#if Py_UNICODE_SIZE == 2
+ memcpy(PyUnicode_2BYTE_DATA(unicode), u, size * 2);
+#else
+ _PyUnicode_CONVERT_BYTES(Py_UNICODE, Py_UCS2,
+ u, u + size, PyUnicode_2BYTE_DATA(unicode));
+#endif
+ break;
+ case PyUnicode_4BYTE_KIND:
+#if SIZEOF_WCHAR_T == 2
+ /* This is the only case which has to process surrogates, thus
+ a simple copy loop is not enough and we need a function. */
+ unicode_convert_wchar_to_ucs4(u, u + size, unicode);
+#else
+ assert(num_surrogates == 0);
+ memcpy(PyUnicode_4BYTE_DATA(unicode), u, size * 4);
+#endif
+ break;
+ default:
+ assert(0 && "Impossible state");
+ }
+
+ return unicode_result(unicode);
+}
+
+PyObject *
+PyUnicode_FromStringAndSize(const char *u, Py_ssize_t size)
+{
+ if (size < 0) {
+ PyErr_SetString(PyExc_SystemError,
+ "Negative size passed to PyUnicode_FromStringAndSize");
+ return NULL;
+ }
+ if (u != NULL)
+ return PyUnicode_DecodeUTF8Stateful(u, size, NULL, NULL);
+ else
+ return (PyObject *)_PyUnicode_New(size);
+}
+
+PyObject *
+PyUnicode_FromString(const char *u)
+{
+ size_t size = strlen(u);
+ if (size > PY_SSIZE_T_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "input too long");
+ return NULL;
+ }
+ return PyUnicode_DecodeUTF8Stateful(u, (Py_ssize_t)size, NULL, NULL);
+}
+
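+/* Return the interned str for a _Py_Identifier, creating it on first use and
+   chaining the identifier into static_strings so that
+   _PyUnicode_ClearStaticStrings() can release it later. */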
+PyObject *
+_PyUnicode_FromId(_Py_Identifier *id)
+{
+ if (!id->object) {
+ id->object = PyUnicode_DecodeUTF8Stateful(id->string,
+ strlen(id->string),
+ NULL, NULL);
+ if (!id->object)
+ return NULL;
+ PyUnicode_InternInPlace(&id->object);
+ assert(!id->next);
+ id->next = static_strings;
+ static_strings = id;
+ }
+ return id->object;
+}
+
+void
+_PyUnicode_ClearStaticStrings()
+{
+ _Py_Identifier *tmp, *s = static_strings;
+ while (s) {
+ Py_CLEAR(s->object);
+ tmp = s->next;
+ s->next = NULL;
+ s = tmp;
+ }
+ static_strings = NULL;
+}
+
+/* Internal function, doesn't check maximum character */
+
+PyObject*
+_PyUnicode_FromASCII(const char *buffer, Py_ssize_t size)
+{
+ const unsigned char *s = (const unsigned char *)buffer;
+ PyObject *unicode;
+ if (size == 1) {
+#ifdef Py_DEBUG
+ assert((unsigned char)s[0] < 128);
+#endif
+ return get_latin1_char(s[0]);
+ }
+ unicode = PyUnicode_New(size, 127);
+ if (!unicode)
+ return NULL;
+ memcpy(PyUnicode_1BYTE_DATA(unicode), s, size);
+ assert(_PyUnicode_CheckConsistency(unicode, 1));
+ return unicode;
+}
+
+static Py_UCS4
+kind_maxchar_limit(unsigned int kind)
+{
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ return 0x80;
+ case PyUnicode_2BYTE_KIND:
+ return 0x100;
+ case PyUnicode_4BYTE_KIND:
+ return 0x10000;
+ default:
+ assert(0 && "invalid kind");
+ return MAX_UNICODE;
+ }
+}
+
+static Py_UCS4
+align_maxchar(Py_UCS4 maxchar)
+{
+ if (maxchar <= 127)
+ return 127;
+ else if (maxchar <= 255)
+ return 255;
+ else if (maxchar <= 65535)
+ return 65535;
+ else
+ return MAX_UNICODE;
+}
+
+static PyObject*
+_PyUnicode_FromUCS1(const Py_UCS1* u, Py_ssize_t size)
+{
+ PyObject *res;
+ unsigned char max_char;
+
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ assert(size > 0);
+ if (size == 1)
+ return get_latin1_char(u[0]);
+
+ max_char = ucs1lib_find_max_char(u, u + size);
+ res = PyUnicode_New(size, max_char);
+ if (!res)
+ return NULL;
+ memcpy(PyUnicode_1BYTE_DATA(res), u, size);
+ assert(_PyUnicode_CheckConsistency(res, 1));
+ return res;
+}
+
+static PyObject*
+_PyUnicode_FromUCS2(const Py_UCS2 *u, Py_ssize_t size)
+{
+ PyObject *res;
+ Py_UCS2 max_char;
+
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ assert(size > 0);
+ if (size == 1)
+ return unicode_char(u[0]);
+
+ max_char = ucs2lib_find_max_char(u, u + size);
+ res = PyUnicode_New(size, max_char);
+ if (!res)
+ return NULL;
+ if (max_char >= 256)
+ memcpy(PyUnicode_2BYTE_DATA(res), u, sizeof(Py_UCS2)*size);
+ else {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS2, Py_UCS1, u, u + size, PyUnicode_1BYTE_DATA(res));
+ }
+ assert(_PyUnicode_CheckConsistency(res, 1));
+ return res;
+}
+
+static PyObject*
+_PyUnicode_FromUCS4(const Py_UCS4 *u, Py_ssize_t size)
+{
+ PyObject *res;
+ Py_UCS4 max_char;
+
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ assert(size > 0);
+ if (size == 1)
+ return unicode_char(u[0]);
+
+ max_char = ucs4lib_find_max_char(u, u + size);
+ res = PyUnicode_New(size, max_char);
+ if (!res)
+ return NULL;
+ if (max_char < 256)
+ _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS1, u, u + size,
+ PyUnicode_1BYTE_DATA(res));
+ else if (max_char < 0x10000)
+ _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS2, u, u + size,
+ PyUnicode_2BYTE_DATA(res));
+ else
+ memcpy(PyUnicode_4BYTE_DATA(res), u, sizeof(Py_UCS4)*size);
+ assert(_PyUnicode_CheckConsistency(res, 1));
+ return res;
+}
+
+PyObject*
+PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)
+{
+ if (size < 0) {
+ PyErr_SetString(PyExc_ValueError, "size must be positive");
+ return NULL;
+ }
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ return _PyUnicode_FromUCS1(buffer, size);
+ case PyUnicode_2BYTE_KIND:
+ return _PyUnicode_FromUCS2(buffer, size);
+ case PyUnicode_4BYTE_KIND:
+ return _PyUnicode_FromUCS4(buffer, size);
+ default:
+ PyErr_SetString(PyExc_SystemError, "invalid kind");
+ return NULL;
+ }
+}
+
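+/* Return the maximum character needed to store unicode[start:end]; the
+   string's cached maximum (or 127 for ASCII strings and empty ranges) is used
+   when a scan can be avoided. */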
+Py_UCS4
+_PyUnicode_FindMaxChar(PyObject *unicode, Py_ssize_t start, Py_ssize_t end)
+{
+ enum PyUnicode_Kind kind;
+ void *startptr, *endptr;
+
+ assert(PyUnicode_IS_READY(unicode));
+ assert(0 <= start);
+ assert(end <= PyUnicode_GET_LENGTH(unicode));
+ assert(start <= end);
+
+ if (start == 0 && end == PyUnicode_GET_LENGTH(unicode))
+ return PyUnicode_MAX_CHAR_VALUE(unicode);
+
+ if (start == end)
+ return 127;
+
+ if (PyUnicode_IS_ASCII(unicode))
+ return 127;
+
+ kind = PyUnicode_KIND(unicode);
+ startptr = PyUnicode_DATA(unicode);
+ endptr = (char *)startptr + end * kind;
+ startptr = (char *)startptr + start * kind;
+ switch(kind) {
+ case PyUnicode_1BYTE_KIND:
+ return ucs1lib_find_max_char(startptr, endptr);
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_find_max_char(startptr, endptr);
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_find_max_char(startptr, endptr);
+ default:
+ assert(0);
+ return 0;
+ }
+}
+
+/* Ensure that a string uses the most efficient storage, if it is not the
+   case: create a new string of the right kind. Write NULL into *p_unicode
+ on error. */
+static void
+unicode_adjust_maxchar(PyObject **p_unicode)
+{
+ PyObject *unicode, *copy;
+ Py_UCS4 max_char;
+ Py_ssize_t len;
+ unsigned int kind;
+
+ assert(p_unicode != NULL);
+ unicode = *p_unicode;
+ assert(PyUnicode_IS_READY(unicode));
+ if (PyUnicode_IS_ASCII(unicode))
+ return;
+
+ len = PyUnicode_GET_LENGTH(unicode);
+ kind = PyUnicode_KIND(unicode);
+ if (kind == PyUnicode_1BYTE_KIND) {
+ const Py_UCS1 *u = PyUnicode_1BYTE_DATA(unicode);
+ max_char = ucs1lib_find_max_char(u, u + len);
+ if (max_char >= 128)
+ return;
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ const Py_UCS2 *u = PyUnicode_2BYTE_DATA(unicode);
+ max_char = ucs2lib_find_max_char(u, u + len);
+ if (max_char >= 256)
+ return;
+ }
+ else {
+ const Py_UCS4 *u = PyUnicode_4BYTE_DATA(unicode);
+ assert(kind == PyUnicode_4BYTE_KIND);
+ max_char = ucs4lib_find_max_char(u, u + len);
+ if (max_char >= 0x10000)
+ return;
+ }
+ copy = PyUnicode_New(len, max_char);
+ if (copy != NULL)
+ _PyUnicode_FastCopyCharacters(copy, 0, unicode, 0, len);
+ Py_DECREF(unicode);
+ *p_unicode = copy;
+}
+
+PyObject*
+_PyUnicode_Copy(PyObject *unicode)
+{
+ Py_ssize_t length;
+ PyObject *copy;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+
+ length = PyUnicode_GET_LENGTH(unicode);
+ copy = PyUnicode_New(length, PyUnicode_MAX_CHAR_VALUE(unicode));
+ if (!copy)
+ return NULL;
+ assert(PyUnicode_KIND(copy) == PyUnicode_KIND(unicode));
+
+ memcpy(PyUnicode_DATA(copy), PyUnicode_DATA(unicode),
+ length * PyUnicode_KIND(unicode));
+ assert(_PyUnicode_CheckConsistency(copy, 1));
+ return copy;
+}
+
+
+/* Widen Unicode objects to larger buffers. Don't write terminating null
+ character. Return NULL on error. */
+
+void*
+_PyUnicode_AsKind(PyObject *s, unsigned int kind)
+{
+ Py_ssize_t len;
+ void *result;
+ unsigned int skind;
+
+ if (PyUnicode_READY(s) == -1)
+ return NULL;
+
+ len = PyUnicode_GET_LENGTH(s);
+ skind = PyUnicode_KIND(s);
+ if (skind >= kind) {
+ PyErr_SetString(PyExc_SystemError, "invalid widening attempt");
+ return NULL;
+ }
+ switch (kind) {
+ case PyUnicode_2BYTE_KIND:
+ result = PyMem_New(Py_UCS2, len);
+ if (!result)
+ return PyErr_NoMemory();
+ assert(skind == PyUnicode_1BYTE_KIND);
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS2,
+ PyUnicode_1BYTE_DATA(s),
+ PyUnicode_1BYTE_DATA(s) + len,
+ result);
+ return result;
+ case PyUnicode_4BYTE_KIND:
+ result = PyMem_New(Py_UCS4, len);
+ if (!result)
+ return PyErr_NoMemory();
+ if (skind == PyUnicode_2BYTE_KIND) {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS2, Py_UCS4,
+ PyUnicode_2BYTE_DATA(s),
+ PyUnicode_2BYTE_DATA(s) + len,
+ result);
+ }
+ else {
+ assert(skind == PyUnicode_1BYTE_KIND);
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS4,
+ PyUnicode_1BYTE_DATA(s),
+ PyUnicode_1BYTE_DATA(s) + len,
+ result);
+ }
+ return result;
+ default:
+ break;
+ }
+ PyErr_SetString(PyExc_SystemError, "invalid kind");
+ return NULL;
+}
+
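+/* Copy a string into a Py_UCS4 buffer, allocating one when target is NULL
+   and optionally appending a terminating null; shared helper for
+   PyUnicode_AsUCS4() and PyUnicode_AsUCS4Copy(). */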
+static Py_UCS4*
+as_ucs4(PyObject *string, Py_UCS4 *target, Py_ssize_t targetsize,
+ int copy_null)
+{
+ int kind;
+ void *data;
+ Py_ssize_t len, targetlen;
+ if (PyUnicode_READY(string) == -1)
+ return NULL;
+ kind = PyUnicode_KIND(string);
+ data = PyUnicode_DATA(string);
+ len = PyUnicode_GET_LENGTH(string);
+ targetlen = len;
+ if (copy_null)
+ targetlen++;
+ if (!target) {
+ target = PyMem_New(Py_UCS4, targetlen);
+ if (!target) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ }
+ else {
+ if (targetsize < targetlen) {
+ PyErr_Format(PyExc_SystemError,
+ "string is longer than the buffer");
+ if (copy_null && 0 < targetsize)
+ target[0] = 0;
+ return NULL;
+ }
+ }
+ if (kind == PyUnicode_1BYTE_KIND) {
+ Py_UCS1 *start = (Py_UCS1 *) data;
+ _PyUnicode_CONVERT_BYTES(Py_UCS1, Py_UCS4, start, start + len, target);
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ Py_UCS2 *start = (Py_UCS2 *) data;
+ _PyUnicode_CONVERT_BYTES(Py_UCS2, Py_UCS4, start, start + len, target);
+ }
+ else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ memcpy(target, data, len * sizeof(Py_UCS4));
+ }
+ if (copy_null)
+ target[len] = 0;
+ return target;
+}
+
+Py_UCS4*
+PyUnicode_AsUCS4(PyObject *string, Py_UCS4 *target, Py_ssize_t targetsize,
+ int copy_null)
+{
+ if (target == NULL || targetsize < 0) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ return as_ucs4(string, target, targetsize, copy_null);
+}
+
+Py_UCS4*
+PyUnicode_AsUCS4Copy(PyObject *string)
+{
+ return as_ucs4(string, NULL, 0, 1);
+}
+
+#ifdef HAVE_WCHAR_H
+
+PyObject *
+PyUnicode_FromWideChar(const wchar_t *w, Py_ssize_t size)
+{
+ if (w == NULL) {
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ if (size == -1) {
+ size = wcslen(w);
+ }
+
+ return PyUnicode_FromUnicode(w, size);
+}
+
+#endif /* HAVE_WCHAR_H */
+
+/* maximum number of characters required for output of %lld or %p.
+ We need at most ceil(log10(256)*SIZEOF_LONG_LONG) digits,
+ plus 1 for the sign. 53/22 is an upper bound for log10(256). */
+#define MAX_LONG_LONG_CHARS (2 + (SIZEOF_LONG_LONG*53-1) / 22)
+
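+/* Write a str argument of PyUnicode_FromFormatV() to the writer, applying
+   the optional precision (truncation) and width (space padding). */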
+static int
+unicode_fromformat_write_str(_PyUnicodeWriter *writer, PyObject *str,
+ Py_ssize_t width, Py_ssize_t precision)
+{
+ Py_ssize_t length, fill, arglen;
+ Py_UCS4 maxchar;
+
+ if (PyUnicode_READY(str) == -1)
+ return -1;
+
+ length = PyUnicode_GET_LENGTH(str);
+ if ((precision == -1 || precision >= length)
+ && width <= length)
+ return _PyUnicodeWriter_WriteStr(writer, str);
+
+ if (precision != -1)
+ length = Py_MIN(precision, length);
+
+ arglen = Py_MAX(length, width);
+ if (PyUnicode_MAX_CHAR_VALUE(str) > writer->maxchar)
+ maxchar = _PyUnicode_FindMaxChar(str, 0, length);
+ else
+ maxchar = writer->maxchar;
+
+ if (_PyUnicodeWriter_Prepare(writer, arglen, maxchar) == -1)
+ return -1;
+
+ if (width > length) {
+ fill = width - length;
+ if (PyUnicode_Fill(writer->buffer, writer->pos, fill, ' ') == -1)
+ return -1;
+ writer->pos += fill;
+ }
+
+ _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
+ str, 0, length);
+ writer->pos += length;
+ return 0;
+}
+
+static int
+unicode_fromformat_write_cstr(_PyUnicodeWriter *writer, const char *str,
+ Py_ssize_t width, Py_ssize_t precision)
+{
+ /* UTF-8 */
+ Py_ssize_t length;
+ PyObject *unicode;
+ int res;
+
+ length = strlen(str);
+ if (precision != -1)
+ length = Py_MIN(length, precision);
+ unicode = PyUnicode_DecodeUTF8Stateful(str, length, "replace", NULL);
+ if (unicode == NULL)
+ return -1;
+
+ res = unicode_fromformat_write_str(writer, unicode, width, -1);
+ Py_DECREF(unicode);
+ return res;
+}
+
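+/* Parse and emit one '%' conversion of PyUnicode_FromFormatV(). Return a
+   pointer just past the conversion specifier, or NULL on error. */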
+static const char*
+unicode_fromformat_arg(_PyUnicodeWriter *writer,
+ const char *f, va_list *vargs)
+{
+ const char *p;
+ Py_ssize_t len;
+ int zeropad;
+ Py_ssize_t width;
+ Py_ssize_t precision;
+ int longflag;
+ int longlongflag;
+ int size_tflag;
+ Py_ssize_t fill;
+
+ p = f;
+ f++;
+ zeropad = 0;
+ if (*f == '0') {
+ zeropad = 1;
+ f++;
+ }
+
+ /* parse the width.precision part, e.g. "%2.5s" => width=2, precision=5 */
+ width = -1;
+ if (Py_ISDIGIT((unsigned)*f)) {
+ width = *f - '0';
+ f++;
+ while (Py_ISDIGIT((unsigned)*f)) {
+ if (width > (PY_SSIZE_T_MAX - ((int)*f - '0')) / 10) {
+ PyErr_SetString(PyExc_ValueError,
+ "width too big");
+ return NULL;
+ }
+ width = (width * 10) + (*f - '0');
+ f++;
+ }
+ }
+ precision = -1;
+ if (*f == '.') {
+ f++;
+ if (Py_ISDIGIT((unsigned)*f)) {
+ precision = (*f - '0');
+ f++;
+ while (Py_ISDIGIT((unsigned)*f)) {
+ if (precision > (PY_SSIZE_T_MAX - ((int)*f - '0')) / 10) {
+ PyErr_SetString(PyExc_ValueError,
+ "precision too big");
+ return NULL;
+ }
+ precision = (precision * 10) + (*f - '0');
+ f++;
+ }
+ }
+ if (*f == '%') {
+ /* "%.3%s" => f points to "3" */
+ f--;
+ }
+ }
+ if (*f == '\0') {
+ /* bogus format "%.123" => go backward, f points to "3" */
+ f--;
+ }
+
+ /* Handle %ld, %lu, %lld and %llu. */
+ longflag = 0;
+ longlongflag = 0;
+ size_tflag = 0;
+ if (*f == 'l') {
+ if (f[1] == 'd' || f[1] == 'u' || f[1] == 'i') {
+ longflag = 1;
+ ++f;
+ }
+ else if (f[1] == 'l' &&
+ (f[2] == 'd' || f[2] == 'u' || f[2] == 'i')) {
+ longlongflag = 1;
+ f += 2;
+ }
+ }
+ /* handle the size_t flag. */
+ else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u' || f[1] == 'i')) {
+ size_tflag = 1;
+ ++f;
+ }
+
+ if (f[1] == '\0')
+ writer->overallocate = 0;
+
+ switch (*f) {
+ case 'c':
+ {
+ int ordinal = va_arg(*vargs, int);
+ if (ordinal < 0 || ordinal > MAX_UNICODE) {
+ PyErr_SetString(PyExc_OverflowError,
+ "character argument not in range(0x110000)");
+ return NULL;
+ }
+ if (_PyUnicodeWriter_WriteCharInline(writer, ordinal) < 0)
+ return NULL;
+ break;
+ }
+
+ case 'i':
+ case 'd':
+ case 'u':
+ case 'x':
+ {
+ /* used by sprintf */
+ char buffer[MAX_LONG_LONG_CHARS];
+ Py_ssize_t arglen;
+
+ if (*f == 'u') {
+ if (longflag)
+ len = sprintf(buffer, "%lu",
+ va_arg(*vargs, unsigned long));
+ else if (longlongflag)
+ len = sprintf(buffer, "%llu",
+ va_arg(*vargs, unsigned long long));
+ else if (size_tflag)
+ len = sprintf(buffer, "%" PY_FORMAT_SIZE_T "u",
+ va_arg(*vargs, size_t));
+ else
+ len = sprintf(buffer, "%u",
+ va_arg(*vargs, unsigned int));
+ }
+ else if (*f == 'x') {
+ len = sprintf(buffer, "%x", va_arg(*vargs, int));
+ }
+ else {
+ if (longflag)
+ len = sprintf(buffer, "%li",
+ va_arg(*vargs, long));
+ else if (longlongflag)
+ len = sprintf(buffer, "%lli",
+ va_arg(*vargs, long long));
+ else if (size_tflag)
+ len = sprintf(buffer, "%" PY_FORMAT_SIZE_T "i",
+ va_arg(*vargs, Py_ssize_t));
+ else
+ len = sprintf(buffer, "%i",
+ va_arg(*vargs, int));
+ }
+ assert(len >= 0);
+
+ if (precision < len)
+ precision = len;
+
+ arglen = Py_MAX(precision, width);
+ if (_PyUnicodeWriter_Prepare(writer, arglen, 127) == -1)
+ return NULL;
+
+ if (width > precision) {
+ Py_UCS4 fillchar;
+ fill = width - precision;
+ fillchar = zeropad?'0':' ';
+ if (PyUnicode_Fill(writer->buffer, writer->pos, fill, fillchar) == -1)
+ return NULL;
+ writer->pos += fill;
+ }
+ if (precision > len) {
+ fill = precision - len;
+ if (PyUnicode_Fill(writer->buffer, writer->pos, fill, '0') == -1)
+ return NULL;
+ writer->pos += fill;
+ }
+
+ if (_PyUnicodeWriter_WriteASCIIString(writer, buffer, len) < 0)
+ return NULL;
+ break;
+ }
+
+ case 'p':
+ {
+ char number[MAX_LONG_LONG_CHARS];
+
+ len = sprintf(number, "%p", va_arg(*vargs, void*));
+ assert(len >= 0);
+
+ /* %p is ill-defined: ensure leading 0x. */
+ if (number[1] == 'X')
+ number[1] = 'x';
+ else if (number[1] != 'x') {
+ memmove(number + 2, number,
+ strlen(number) + 1);
+ number[0] = '0';
+ number[1] = 'x';
+ len += 2;
+ }
+
+ if (_PyUnicodeWriter_WriteASCIIString(writer, number, len) < 0)
+ return NULL;
+ break;
+ }
+
+ case 's':
+ {
+ /* UTF-8 */
+ const char *s = va_arg(*vargs, const char*);
+ if (unicode_fromformat_write_cstr(writer, s, width, precision) < 0)
+ return NULL;
+ break;
+ }
+
+ case 'U':
+ {
+ PyObject *obj = va_arg(*vargs, PyObject *);
+ assert(obj && _PyUnicode_CHECK(obj));
+
+ if (unicode_fromformat_write_str(writer, obj, width, precision) == -1)
+ return NULL;
+ break;
+ }
+
+ case 'V':
+ {
+ PyObject *obj = va_arg(*vargs, PyObject *);
+ const char *str = va_arg(*vargs, const char *);
+ if (obj) {
+ assert(_PyUnicode_CHECK(obj));
+ if (unicode_fromformat_write_str(writer, obj, width, precision) == -1)
+ return NULL;
+ }
+ else {
+ assert(str != NULL);
+ if (unicode_fromformat_write_cstr(writer, str, width, precision) < 0)
+ return NULL;
+ }
+ break;
+ }
+
+ case 'S':
+ {
+ PyObject *obj = va_arg(*vargs, PyObject *);
+ PyObject *str;
+ assert(obj);
+ str = PyObject_Str(obj);
+ if (!str)
+ return NULL;
+ if (unicode_fromformat_write_str(writer, str, width, precision) == -1) {
+ Py_DECREF(str);
+ return NULL;
+ }
+ Py_DECREF(str);
+ break;
+ }
+
+ case 'R':
+ {
+ PyObject *obj = va_arg(*vargs, PyObject *);
+ PyObject *repr;
+ assert(obj);
+ repr = PyObject_Repr(obj);
+ if (!repr)
+ return NULL;
+ if (unicode_fromformat_write_str(writer, repr, width, precision) == -1) {
+ Py_DECREF(repr);
+ return NULL;
+ }
+ Py_DECREF(repr);
+ break;
+ }
+
+ case 'A':
+ {
+ PyObject *obj = va_arg(*vargs, PyObject *);
+ PyObject *ascii;
+ assert(obj);
+ ascii = PyObject_ASCII(obj);
+ if (!ascii)
+ return NULL;
+ if (unicode_fromformat_write_str(writer, ascii, width, precision) == -1) {
+ Py_DECREF(ascii);
+ return NULL;
+ }
+ Py_DECREF(ascii);
+ break;
+ }
+
+ case '%':
+ if (_PyUnicodeWriter_WriteCharInline(writer, '%') < 0)
+ return NULL;
+ break;
+
+ default:
+ /* if we stumble upon an unknown formatting code, copy the rest
+ of the format string to the output string. (we cannot just
+ skip the code, since there's no way to know what's in the
+ argument list) */
+ len = strlen(p);
+ if (_PyUnicodeWriter_WriteLatin1String(writer, p, len) == -1)
+ return NULL;
+ f = p+len;
+ return f;
+ }
+
+ f++;
+ return f;
+}
+
+PyObject *
+PyUnicode_FromFormatV(const char *format, va_list vargs)
+{
+ va_list vargs2;
+ const char *f;
+ _PyUnicodeWriter writer;
+
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = strlen(format) + 100;
+ writer.overallocate = 1;
+
+    // Copy varargs so a reference can be passed to a subfunction.
+ va_copy(vargs2, vargs);
+
+ for (f = format; *f; ) {
+ if (*f == '%') {
+ f = unicode_fromformat_arg(&writer, f, &vargs2);
+ if (f == NULL)
+ goto fail;
+ }
+ else {
+ const char *p;
+ Py_ssize_t len;
+
+ p = f;
+ do
+ {
+ if ((unsigned char)*p > 127) {
+ PyErr_Format(PyExc_ValueError,
+ "PyUnicode_FromFormatV() expects an ASCII-encoded format "
+ "string, got a non-ASCII byte: 0x%02x",
+ (unsigned char)*p);
+ goto fail;
+ }
+ p++;
+ }
+ while (*p != '\0' && *p != '%');
+ len = p - f;
+
+ if (*p == '\0')
+ writer.overallocate = 0;
+
+ if (_PyUnicodeWriter_WriteASCIIString(&writer, f, len) < 0)
+ goto fail;
+
+ f = p;
+ }
+ }
+ va_end(vargs2);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ fail:
+ va_end(vargs2);
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_FromFormat(const char *format, ...)
+{
+ PyObject* ret;
+ va_list vargs;
+
+#ifdef HAVE_STDARG_PROTOTYPES
+ va_start(vargs, format);
+#else
+ va_start(vargs);
+#endif
+ ret = PyUnicode_FromFormatV(format, vargs);
+ va_end(vargs);
+ return ret;
+}
+
+#ifdef HAVE_WCHAR_H
+
+/* Helper function for PyUnicode_AsWideChar() and PyUnicode_AsWideCharString():
+ convert a Unicode object to a wide character string.
+
+ - If w is NULL: return the number of wide characters (including the null
+ character) required to convert the unicode object. Ignore size argument.
+
+ - Otherwise: return the number of wide characters (excluding the null
+ character) written into w. Write at most size wide characters (including
+ the null character). */
+static Py_ssize_t
+unicode_aswidechar(PyObject *unicode,
+ wchar_t *w,
+ Py_ssize_t size)
+{
+ Py_ssize_t res;
+ const wchar_t *wstr;
+
+ wstr = PyUnicode_AsUnicodeAndSize(unicode, &res);
+ if (wstr == NULL)
+ return -1;
+
+ if (w != NULL) {
+ if (size > res)
+ size = res + 1;
+ else
+ res = size;
+ memcpy(w, wstr, size * sizeof(wchar_t));
+ return res;
+ }
+ else
+ return res + 1;
+}
+
+Py_ssize_t
+PyUnicode_AsWideChar(PyObject *unicode,
+ wchar_t *w,
+ Py_ssize_t size)
+{
+ if (unicode == NULL) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ return unicode_aswidechar(unicode, w, size);
+}
+
+wchar_t*
+PyUnicode_AsWideCharString(PyObject *unicode,
+ Py_ssize_t *size)
+{
+ wchar_t* buffer;
+ Py_ssize_t buflen;
+
+ if (unicode == NULL) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ buflen = unicode_aswidechar(unicode, NULL, 0);
+ if (buflen == -1)
+ return NULL;
+ buffer = PyMem_NEW(wchar_t, buflen);
+ if (buffer == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ buflen = unicode_aswidechar(unicode, buffer, buflen);
+ if (buflen == -1) {
+ PyMem_FREE(buffer);
+ return NULL;
+ }
+ if (size != NULL)
+ *size = buflen;
+ return buffer;
+}
+
+wchar_t*
+_PyUnicode_AsWideCharString(PyObject *unicode)
+{
+ const wchar_t *wstr;
+ wchar_t *buffer;
+ Py_ssize_t buflen;
+
+ if (unicode == NULL) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ wstr = PyUnicode_AsUnicodeAndSize(unicode, &buflen);
+ if (wstr == NULL) {
+ return NULL;
+ }
+ if (wcslen(wstr) != (size_t)buflen) {
+ PyErr_SetString(PyExc_ValueError,
+ "embedded null character");
+ return NULL;
+ }
+
+ buffer = PyMem_NEW(wchar_t, buflen + 1);
+ if (buffer == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ memcpy(buffer, wstr, (buflen + 1) * sizeof(wchar_t));
+ return buffer;
+}
+
+#endif /* HAVE_WCHAR_H */
+
+PyObject *
+PyUnicode_FromOrdinal(int ordinal)
+{
+ if (ordinal < 0 || ordinal > MAX_UNICODE) {
+ PyErr_SetString(PyExc_ValueError,
+ "chr() arg not in range(0x110000)");
+ return NULL;
+ }
+
+ return unicode_char((Py_UCS4)ordinal);
+}
+
+PyObject *
+PyUnicode_FromObject(PyObject *obj)
+{
+ /* XXX Perhaps we should make this API an alias of
+ PyObject_Str() instead ?! */
+ if (PyUnicode_CheckExact(obj)) {
+ if (PyUnicode_READY(obj) == -1)
+ return NULL;
+ Py_INCREF(obj);
+ return obj;
+ }
+ if (PyUnicode_Check(obj)) {
+ /* For a Unicode subtype that's not a Unicode object,
+ return a true Unicode object with the same data. */
+ return _PyUnicode_Copy(obj);
+ }
+ PyErr_Format(PyExc_TypeError,
+ "Can't convert '%.100s' object to str implicitly",
+ Py_TYPE(obj)->tp_name);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_FromEncodedObject(PyObject *obj,
+ const char *encoding,
+ const char *errors)
+{
+ Py_buffer buffer;
+ PyObject *v;
+
+ if (obj == NULL) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ /* Decoding bytes objects is the most common case and should be fast */
+ if (PyBytes_Check(obj)) {
+ if (PyBytes_GET_SIZE(obj) == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ v = PyUnicode_Decode(
+ PyBytes_AS_STRING(obj), PyBytes_GET_SIZE(obj),
+ encoding, errors);
+ return v;
+ }
+
+ if (PyUnicode_Check(obj)) {
+ PyErr_SetString(PyExc_TypeError,
+ "decoding str is not supported");
+ return NULL;
+ }
+
+ /* Retrieve a bytes buffer view through the PEP 3118 buffer interface */
+ if (PyObject_GetBuffer(obj, &buffer, PyBUF_SIMPLE) < 0) {
+ PyErr_Format(PyExc_TypeError,
+ "decoding to str: need a bytes-like object, %.80s found",
+ Py_TYPE(obj)->tp_name);
+ return NULL;
+ }
+
+ if (buffer.len == 0) {
+ PyBuffer_Release(&buffer);
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ v = PyUnicode_Decode((char*) buffer.buf, buffer.len, encoding, errors);
+ PyBuffer_Release(&buffer);
+ return v;
+}
+
+/* Normalize an encoding name: similar to encodings.normalize_encoding(), but
+ also convert to lowercase. Return 1 on success, or 0 on error (encoding is
+ longer than lower_len-1). */
+int
+_Py_normalize_encoding(const char *encoding,
+ char *lower,
+ size_t lower_len)
+{
+ const char *e;
+ char *l;
+ char *l_end;
+ int punct;
+
+ assert(encoding != NULL);
+
+ e = encoding;
+ l = lower;
+ l_end = &lower[lower_len - 1];
+ punct = 0;
+ while (1) {
+ char c = *e;
+ if (c == 0) {
+ break;
+ }
+
+ if (Py_ISALNUM(c) || c == '.') {
+ if (punct && l != lower) {
+ if (l == l_end) {
+ return 0;
+ }
+ *l++ = '_';
+ }
+ punct = 0;
+
+ if (l == l_end) {
+ return 0;
+ }
+ *l++ = Py_TOLOWER(c);
+ }
+ else {
+ punct = 1;
+ }
+
+ e++;
+ }
+ *l = '\0';
+ return 1;
+}
+
+PyObject *
+PyUnicode_Decode(const char *s,
+ Py_ssize_t size,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *buffer = NULL, *unicode;
+ Py_buffer info;
+    char buflower[11]; /* sizeof("iso_8859_1") == 11, longest normalized shortcut */
+
+ if (encoding == NULL) {
+ return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
+ }
+
+ /* Shortcuts for common default encodings */
+ if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
+ char *lower = buflower;
+
+ /* Fast paths */
+ if (lower[0] == 'u' && lower[1] == 't' && lower[2] == 'f') {
+ lower += 3;
+ if (*lower == '_') {
+ /* Match "utf8" and "utf_8" */
+ lower++;
+ }
+
+ if (lower[0] == '8' && lower[1] == 0) {
+ return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
+ }
+ else if (lower[0] == '1' && lower[1] == '6' && lower[2] == 0) {
+ return PyUnicode_DecodeUTF16(s, size, errors, 0);
+ }
+ else if (lower[0] == '3' && lower[1] == '2' && lower[2] == 0) {
+ return PyUnicode_DecodeUTF32(s, size, errors, 0);
+ }
+ }
+ else {
+ if (strcmp(lower, "ascii") == 0
+ || strcmp(lower, "us_ascii") == 0) {
+ return PyUnicode_DecodeASCII(s, size, errors);
+ }
+ #ifdef MS_WINDOWS
+ else if (strcmp(lower, "mbcs") == 0) {
+ return PyUnicode_DecodeMBCS(s, size, errors);
+ }
+ #endif
+ else if (strcmp(lower, "latin1") == 0
+ || strcmp(lower, "latin_1") == 0
+ || strcmp(lower, "iso_8859_1") == 0
+ || strcmp(lower, "iso8859_1") == 0) {
+ return PyUnicode_DecodeLatin1(s, size, errors);
+ }
+ }
+ }
+
+ /* Decode via the codec registry */
+ buffer = NULL;
+ if (PyBuffer_FillInfo(&info, NULL, (void *)s, size, 1, PyBUF_FULL_RO) < 0)
+ goto onError;
+ buffer = PyMemoryView_FromBuffer(&info);
+ if (buffer == NULL)
+ goto onError;
+ unicode = _PyCodec_DecodeText(buffer, encoding, errors);
+ if (unicode == NULL)
+ goto onError;
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_Format(PyExc_TypeError,
+ "'%.400s' decoder returned '%.400s' instead of 'str'; "
+ "use codecs.decode() to decode to arbitrary types",
+ encoding,
+ Py_TYPE(unicode)->tp_name);
+ Py_DECREF(unicode);
+ goto onError;
+ }
+ Py_DECREF(buffer);
+ return unicode_result(unicode);
+
+ onError:
+ Py_XDECREF(buffer);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_AsDecodedObject(PyObject *unicode,
+ const char *encoding,
+ const char *errors)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+
+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "PyUnicode_AsDecodedObject() is deprecated; "
+ "use PyCodec_Decode() to decode from str", 1) < 0)
+ return NULL;
+
+ if (encoding == NULL)
+ encoding = PyUnicode_GetDefaultEncoding();
+
+ /* Decode via the codec registry */
+ return PyCodec_Decode(unicode, encoding, errors);
+}
+
+PyObject *
+PyUnicode_AsDecodedUnicode(PyObject *unicode,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *v;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ goto onError;
+ }
+
+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "PyUnicode_AsDecodedUnicode() is deprecated; "
+ "use PyCodec_Decode() to decode from str to str", 1) < 0)
+ return NULL;
+
+ if (encoding == NULL)
+ encoding = PyUnicode_GetDefaultEncoding();
+
+ /* Decode via the codec registry */
+ v = PyCodec_Decode(unicode, encoding, errors);
+ if (v == NULL)
+ goto onError;
+ if (!PyUnicode_Check(v)) {
+ PyErr_Format(PyExc_TypeError,
+ "'%.400s' decoder returned '%.400s' instead of 'str'; "
+ "use codecs.decode() to decode to arbitrary types",
+ encoding,
+ Py_TYPE(unicode)->tp_name);
+ Py_DECREF(v);
+ goto onError;
+ }
+ return unicode_result(v);
+
+ onError:
+ return NULL;
+}
+
+PyObject *
+PyUnicode_Encode(const Py_UNICODE *s,
+ Py_ssize_t size,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *v, *unicode;
+
+ unicode = PyUnicode_FromUnicode(s, size);
+ if (unicode == NULL)
+ return NULL;
+ v = PyUnicode_AsEncodedString(unicode, encoding, errors);
+ Py_DECREF(unicode);
+ return v;
+}
+
+PyObject *
+PyUnicode_AsEncodedObject(PyObject *unicode,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *v;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ goto onError;
+ }
+
+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "PyUnicode_AsEncodedObject() is deprecated; "
+ "use PyUnicode_AsEncodedString() to encode from str to bytes "
+ "or PyCodec_Encode() for generic encoding", 1) < 0)
+ return NULL;
+
+ if (encoding == NULL)
+ encoding = PyUnicode_GetDefaultEncoding();
+
+ /* Encode via the codec registry */
+ v = PyCodec_Encode(unicode, encoding, errors);
+ if (v == NULL)
+ goto onError;
+ return v;
+
+ onError:
+ return NULL;
+}
+
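+/* Find the first wide character that wcstombs() cannot encode and return its
+   index, or 0 if no unencodable character is found. */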
+static size_t
+wcstombs_errorpos(const wchar_t *wstr)
+{
+ size_t len;
+#if SIZEOF_WCHAR_T == 2
+ wchar_t buf[3];
+#else
+ wchar_t buf[2];
+#endif
+ char outbuf[MB_LEN_MAX];
+ const wchar_t *start, *previous;
+
+#if SIZEOF_WCHAR_T == 2
+ buf[2] = 0;
+#else
+ buf[1] = 0;
+#endif
+ start = wstr;
+ while (*wstr != L'\0')
+ {
+ previous = wstr;
+#if SIZEOF_WCHAR_T == 2
+ if (Py_UNICODE_IS_HIGH_SURROGATE(wstr[0])
+ && Py_UNICODE_IS_LOW_SURROGATE(wstr[1]))
+ {
+ buf[0] = wstr[0];
+ buf[1] = wstr[1];
+ wstr += 2;
+ }
+ else {
+ buf[0] = *wstr;
+ buf[1] = 0;
+ wstr++;
+ }
+#else
+ buf[0] = *wstr;
+ wstr++;
+#endif
+ len = wcstombs(outbuf, buf, sizeof(outbuf));
+ if (len == (size_t)-1)
+ return previous - start;
+ }
+
+ /* failed to find the unencodable character */
+ return 0;
+}
+
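+/* Translate an error handler name for the locale codec: set *surrogateescape
+   to 1 for "surrogateescape", to 0 for "strict" (the default), and reject any
+   other handler with ValueError. */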
+static int
+locale_error_handler(const char *errors, int *surrogateescape)
+{
+ _Py_error_handler error_handler = get_error_handler(errors);
+ switch (error_handler)
+ {
+ case _Py_ERROR_STRICT:
+ *surrogateescape = 0;
+ return 0;
+ case _Py_ERROR_SURROGATEESCAPE:
+ *surrogateescape = 1;
+ return 0;
+ default:
+ PyErr_Format(PyExc_ValueError,
+ "only 'strict' and 'surrogateescape' error handlers "
+ "are supported, not '%s'",
+ errors);
+ return -1;
+ }
+}
+
+static PyObject *
+unicode_encode_locale(PyObject *unicode, const char *errors,
+ int current_locale)
+{
+ Py_ssize_t wlen, wlen2;
+ wchar_t *wstr;
+ PyObject *bytes = NULL;
+ char *errmsg;
+ PyObject *reason = NULL;
+ PyObject *exc;
+ size_t error_pos;
+ int surrogateescape;
+
+ if (locale_error_handler(errors, &surrogateescape) < 0)
+ return NULL;
+
+ wstr = PyUnicode_AsWideCharString(unicode, &wlen);
+ if (wstr == NULL)
+ return NULL;
+
+ wlen2 = wcslen(wstr);
+ if (wlen2 != wlen) {
+ PyMem_Free(wstr);
+ PyErr_SetString(PyExc_ValueError, "embedded null character");
+ return NULL;
+ }
+
+ if (surrogateescape) {
+ /* "surrogateescape" error handler */
+ char *str;
+
+ str = _Py_EncodeLocaleEx(wstr, &error_pos, current_locale);
+ if (str == NULL) {
+ if (error_pos == (size_t)-1) {
+ PyErr_NoMemory();
+ PyMem_Free(wstr);
+ return NULL;
+ }
+ else {
+ goto encode_error;
+ }
+ }
+ PyMem_Free(wstr);
+
+ bytes = PyBytes_FromString(str);
+ PyMem_Free(str);
+ }
+ else {
+ /* strict mode */
+ size_t len, len2;
+
+ len = wcstombs(NULL, wstr, 0);
+ if (len == (size_t)-1) {
+ error_pos = (size_t)-1;
+ goto encode_error;
+ }
+
+ bytes = PyBytes_FromStringAndSize(NULL, len);
+ if (bytes == NULL) {
+ PyMem_Free(wstr);
+ return NULL;
+ }
+
+ len2 = wcstombs(PyBytes_AS_STRING(bytes), wstr, len+1);
+ if (len2 == (size_t)-1 || len2 > len) {
+ error_pos = (size_t)-1;
+ goto encode_error;
+ }
+ PyMem_Free(wstr);
+ }
+ return bytes;
+
+encode_error:
+ errmsg = strerror(errno);
+ assert(errmsg != NULL);
+
+ if (error_pos == (size_t)-1)
+ error_pos = wcstombs_errorpos(wstr);
+
+ PyMem_Free(wstr);
+ Py_XDECREF(bytes);
+
+ if (errmsg != NULL) {
+ size_t errlen;
+ wstr = Py_DecodeLocale(errmsg, &errlen);
+ if (wstr != NULL) {
+ reason = PyUnicode_FromWideChar(wstr, errlen);
+ PyMem_RawFree(wstr);
+ } else
+ errmsg = NULL;
+ }
+ if (errmsg == NULL)
+ reason = PyUnicode_FromString(
+ "wcstombs() encountered an unencodable "
+ "wide character");
+ if (reason == NULL)
+ return NULL;
+
+ exc = PyObject_CallFunction(PyExc_UnicodeEncodeError, "sOnnO",
+ "locale", unicode,
+ (Py_ssize_t)error_pos,
+ (Py_ssize_t)(error_pos+1),
+ reason);
+ Py_DECREF(reason);
+ if (exc != NULL) {
+ PyCodec_StrictErrors(exc);
+ Py_XDECREF(exc);
+ }
+ return NULL;
+}
+
+PyObject *
+PyUnicode_EncodeLocale(PyObject *unicode, const char *errors)
+{
+ return unicode_encode_locale(unicode, errors, 1);
+}
+
+PyObject *
+PyUnicode_EncodeFSDefault(PyObject *unicode)
+{
+#if defined(__APPLE__)
+ return _PyUnicode_AsUTF8String(unicode, Py_FileSystemDefaultEncodeErrors);
+#else
+ PyInterpreterState *interp = PyThreadState_GET()->interp;
+    /* Bootstrap check: if the filesystem codec is implemented in Python, we
+       cannot use it to encode and decode filenames before it is loaded.
+       Loading the Python codec itself requires encoding at least its own
+       filename, so use the C version of the locale codec until the codec
+       registry is initialized and the Python codec is loaded.
+
+       Py_FileSystemDefaultEncoding is shared between all interpreters; we
+       cannot rely on it alone, so also check interp->fscodec_initialized for
+       subinterpreters. */
+ if (Py_FileSystemDefaultEncoding && interp->fscodec_initialized) {
+ return PyUnicode_AsEncodedString(unicode,
+ Py_FileSystemDefaultEncoding,
+ Py_FileSystemDefaultEncodeErrors);
+ }
+ else {
+ return unicode_encode_locale(unicode,
+ Py_FileSystemDefaultEncodeErrors, 0);
+ }
+#endif
+}
+
+PyObject *
+PyUnicode_AsEncodedString(PyObject *unicode,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *v;
+    char buflower[11]; /* sizeof("iso_8859_1") == 11, longest normalized shortcut */
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+
+ if (encoding == NULL) {
+ return _PyUnicode_AsUTF8String(unicode, errors);
+ }
+
+ /* Shortcuts for common default encodings */
+ if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
+ char *lower = buflower;
+
+ /* Fast paths */
+ if (lower[0] == 'u' && lower[1] == 't' && lower[2] == 'f') {
+ lower += 3;
+ if (*lower == '_') {
+ /* Match "utf8" and "utf_8" */
+ lower++;
+ }
+
+ if (lower[0] == '8' && lower[1] == 0) {
+ return _PyUnicode_AsUTF8String(unicode, errors);
+ }
+ else if (lower[0] == '1' && lower[1] == '6' && lower[2] == 0) {
+ return _PyUnicode_EncodeUTF16(unicode, errors, 0);
+ }
+ else if (lower[0] == '3' && lower[1] == '2' && lower[2] == 0) {
+ return _PyUnicode_EncodeUTF32(unicode, errors, 0);
+ }
+ }
+ else {
+ if (strcmp(lower, "ascii") == 0
+ || strcmp(lower, "us_ascii") == 0) {
+ return _PyUnicode_AsASCIIString(unicode, errors);
+ }
+#ifdef MS_WINDOWS
+ else if (strcmp(lower, "mbcs") == 0) {
+ return PyUnicode_EncodeCodePage(CP_ACP, unicode, errors);
+ }
+#endif
+ else if (strcmp(lower, "latin1") == 0 ||
+ strcmp(lower, "latin_1") == 0 ||
+ strcmp(lower, "iso_8859_1") == 0 ||
+ strcmp(lower, "iso8859_1") == 0) {
+ return _PyUnicode_AsLatin1String(unicode, errors);
+ }
+ }
+ }
+
+ /* Encode via the codec registry */
+ v = _PyCodec_EncodeText(unicode, encoding, errors);
+ if (v == NULL)
+ return NULL;
+
+ /* The normal path */
+ if (PyBytes_Check(v))
+ return v;
+
+ /* If the codec returns a buffer, raise a warning and convert to bytes */
+ if (PyByteArray_Check(v)) {
+ int error;
+ PyObject *b;
+
+ error = PyErr_WarnFormat(PyExc_RuntimeWarning, 1,
+ "encoder %s returned bytearray instead of bytes; "
+ "use codecs.encode() to encode to arbitrary types",
+ encoding);
+ if (error) {
+ Py_DECREF(v);
+ return NULL;
+ }
+
+ b = PyBytes_FromStringAndSize(PyByteArray_AS_STRING(v), Py_SIZE(v));
+ Py_DECREF(v);
+ return b;
+ }
+
+ PyErr_Format(PyExc_TypeError,
+ "'%.400s' encoder returned '%.400s' instead of 'bytes'; "
+ "use codecs.encode() to encode to arbitrary types",
+ encoding,
+ Py_TYPE(v)->tp_name);
+ Py_DECREF(v);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_AsEncodedUnicode(PyObject *unicode,
+ const char *encoding,
+ const char *errors)
+{
+ PyObject *v;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ goto onError;
+ }
+
+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "PyUnicode_AsEncodedUnicode() is deprecated; "
+ "use PyCodec_Encode() to encode from str to str", 1) < 0)
+ return NULL;
+
+ if (encoding == NULL)
+ encoding = PyUnicode_GetDefaultEncoding();
+
+ /* Encode via the codec registry */
+ v = PyCodec_Encode(unicode, encoding, errors);
+ if (v == NULL)
+ goto onError;
+ if (!PyUnicode_Check(v)) {
+ PyErr_Format(PyExc_TypeError,
+ "'%.400s' encoder returned '%.400s' instead of 'str'; "
+ "use codecs.encode() to encode to arbitrary types",
+ encoding,
+ Py_TYPE(v)->tp_name);
+ Py_DECREF(v);
+ goto onError;
+ }
+ return v;
+
+ onError:
+ return NULL;
+}
+
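+/* Find the first byte sequence that mbrtowc() cannot decode and return its
+   offset, or 0 when no undecodable sequence is found (or mbrtowc() is not
+   available). */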
+static size_t
+mbstowcs_errorpos(const char *str, size_t len)
+{
+#ifdef HAVE_MBRTOWC
+ const char *start = str;
+ mbstate_t mbs;
+ size_t converted;
+ wchar_t ch;
+
+ memset(&mbs, 0, sizeof mbs);
+ while (len)
+ {
+ converted = mbrtowc(&ch, str, len, &mbs);
+ if (converted == 0)
+ /* Reached end of string */
+ break;
+ if (converted == (size_t)-1 || converted == (size_t)-2) {
+ /* Conversion error or incomplete character */
+ return str - start;
+ }
+ else {
+ str += converted;
+ len -= converted;
+ }
+ }
+ /* failed to find the undecodable byte sequence */
+ return 0;
+#endif
+ return 0;
+}
+
+static PyObject*
+unicode_decode_locale(const char *str, Py_ssize_t len,
+ const char *errors, int current_locale)
+{
+ wchar_t smallbuf[256];
+ size_t smallbuf_len = Py_ARRAY_LENGTH(smallbuf);
+ wchar_t *wstr;
+ size_t wlen, wlen2;
+ PyObject *unicode;
+ int surrogateescape;
+ size_t error_pos;
+ char *errmsg;
+ PyObject *reason = NULL; /* initialize to prevent gcc warning */
+ PyObject *exc;
+
+ if (locale_error_handler(errors, &surrogateescape) < 0)
+ return NULL;
+
+ if (str[len] != '\0' || (size_t)len != strlen(str)) {
+ PyErr_SetString(PyExc_ValueError, "embedded null byte");
+ return NULL;
+ }
+
+ if (surrogateescape) {
+ /* "surrogateescape" error handler */
+ wstr = _Py_DecodeLocaleEx(str, &wlen, current_locale);
+ if (wstr == NULL) {
+ if (wlen == (size_t)-1)
+ PyErr_NoMemory();
+ else
+ PyErr_SetFromErrno(PyExc_OSError);
+ return NULL;
+ }
+
+ unicode = PyUnicode_FromWideChar(wstr, wlen);
+ PyMem_RawFree(wstr);
+ }
+ else {
+ /* strict mode */
+#ifndef HAVE_BROKEN_MBSTOWCS
+ wlen = mbstowcs(NULL, str, 0);
+#else
+ wlen = len;
+#endif
+ if (wlen == (size_t)-1)
+ goto decode_error;
+ if (wlen+1 <= smallbuf_len) {
+ wstr = smallbuf;
+ }
+ else {
+ wstr = PyMem_New(wchar_t, wlen+1);
+ if (!wstr)
+ return PyErr_NoMemory();
+ }
+
+ wlen2 = mbstowcs(wstr, str, wlen+1);
+ if (wlen2 == (size_t)-1) {
+ if (wstr != smallbuf)
+ PyMem_Free(wstr);
+ goto decode_error;
+ }
+#ifdef HAVE_BROKEN_MBSTOWCS
+ assert(wlen2 == wlen);
+#endif
+ unicode = PyUnicode_FromWideChar(wstr, wlen2);
+ if (wstr != smallbuf)
+ PyMem_Free(wstr);
+ }
+ return unicode;
+
+decode_error:
+ reason = NULL;
+ errmsg = strerror(errno);
+ assert(errmsg != NULL);
+
+ error_pos = mbstowcs_errorpos(str, len);
+ if (errmsg != NULL) {
+ size_t errlen;
+ wstr = Py_DecodeLocale(errmsg, &errlen);
+ if (wstr != NULL) {
+ reason = PyUnicode_FromWideChar(wstr, errlen);
+ PyMem_RawFree(wstr);
+ }
+ }
+ if (reason == NULL)
+ reason = PyUnicode_FromString(
+ "mbstowcs() encountered an invalid multibyte sequence");
+ if (reason == NULL)
+ return NULL;
+
+ exc = PyObject_CallFunction(PyExc_UnicodeDecodeError, "sy#nnO",
+ "locale", str, len,
+ (Py_ssize_t)error_pos,
+ (Py_ssize_t)(error_pos+1),
+ reason);
+ Py_DECREF(reason);
+ if (exc != NULL) {
+ PyCodec_StrictErrors(exc);
+ Py_XDECREF(exc);
+ }
+ return NULL;
+}
+
+PyObject*
+PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t size,
+ const char *errors)
+{
+ return unicode_decode_locale(str, size, errors, 1);
+}
+
+PyObject*
+PyUnicode_DecodeLocale(const char *str, const char *errors)
+{
+ Py_ssize_t size = (Py_ssize_t)strlen(str);
+ return unicode_decode_locale(str, size, errors, 1);
+}
+
+
+PyObject*
+PyUnicode_DecodeFSDefault(const char *s) {
+ Py_ssize_t size = (Py_ssize_t)strlen(s);
+ return PyUnicode_DecodeFSDefaultAndSize(s, size);
+}
+
+PyObject*
+PyUnicode_DecodeFSDefaultAndSize(const char *s, Py_ssize_t size)
+{
+#if defined(__APPLE__)
+ return PyUnicode_DecodeUTF8Stateful(s, size, Py_FileSystemDefaultEncodeErrors, NULL);
+#else
+ PyInterpreterState *interp = PyThreadState_GET()->interp;
+    /* Bootstrap check: if the filesystem codec is implemented in Python, we
+       cannot use it to encode and decode filenames before it is loaded.
+       Loading the Python codec itself requires encoding at least its own
+       filename, so use the C version of the locale codec until the codec
+       registry is initialized and the Python codec is loaded.
+
+       Py_FileSystemDefaultEncoding is shared between all interpreters; we
+       cannot rely on it alone, so also check interp->fscodec_initialized for
+       subinterpreters. */
+ if (Py_FileSystemDefaultEncoding && interp->fscodec_initialized) {
+ return PyUnicode_Decode(s, size,
+ Py_FileSystemDefaultEncoding,
+ Py_FileSystemDefaultEncodeErrors);
+ }
+ else {
+ return unicode_decode_locale(s, size,
+ Py_FileSystemDefaultEncodeErrors, 0);
+ }
+#endif
+}
+
+
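+/* "O&" argument converter: apply os.fspath() to the argument and encode str
+   results with the filesystem encoding, rejecting embedded null bytes. The
+   converted bytes object is stored in *addr. */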
+int
+PyUnicode_FSConverter(PyObject* arg, void* addr)
+{
+ PyObject *path = NULL;
+ PyObject *output = NULL;
+ Py_ssize_t size;
+ void *data;
+ if (arg == NULL) {
+ Py_DECREF(*(PyObject**)addr);
+ *(PyObject**)addr = NULL;
+ return 1;
+ }
+ path = PyOS_FSPath(arg);
+ if (path == NULL) {
+ return 0;
+ }
+ if (PyBytes_Check(path)) {
+ output = path;
+ }
+ else { // PyOS_FSPath() guarantees its returned value is bytes or str.
+ output = PyUnicode_EncodeFSDefault(path);
+ Py_DECREF(path);
+ if (!output) {
+ return 0;
+ }
+ assert(PyBytes_Check(output));
+ }
+
+ size = PyBytes_GET_SIZE(output);
+ data = PyBytes_AS_STRING(output);
+ if ((size_t)size != strlen(data)) {
+ PyErr_SetString(PyExc_ValueError, "embedded null byte");
+ Py_DECREF(output);
+ return 0;
+ }
+ *(PyObject**)addr = output;
+ return Py_CLEANUP_SUPPORTED;
+}
+
+
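+/* "O&" argument converter: accept str, bytes, buffer objects or os.PathLike
+   and decode to str with the filesystem encoding, rejecting embedded null
+   characters. The resulting str is stored in *addr. */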
+int
+PyUnicode_FSDecoder(PyObject* arg, void* addr)
+{
+ int is_buffer = 0;
+ PyObject *path = NULL;
+ PyObject *output = NULL;
+ if (arg == NULL) {
+ Py_DECREF(*(PyObject**)addr);
+ *(PyObject**)addr = NULL;
+ return 1;
+ }
+
+ is_buffer = PyObject_CheckBuffer(arg);
+ if (!is_buffer) {
+ path = PyOS_FSPath(arg);
+ if (path == NULL) {
+ return 0;
+ }
+ }
+ else {
+ path = arg;
+ Py_INCREF(arg);
+ }
+
+ if (PyUnicode_Check(path)) {
+ if (PyUnicode_READY(path) == -1) {
+ Py_DECREF(path);
+ return 0;
+ }
+ output = path;
+ }
+ else if (PyBytes_Check(path) || is_buffer) {
+ PyObject *path_bytes = NULL;
+
+ if (!PyBytes_Check(path) &&
+ PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
+ "path should be string, bytes, or os.PathLike, not %.200s",
+ Py_TYPE(arg)->tp_name)) {
+ Py_DECREF(path);
+ return 0;
+ }
+ path_bytes = PyBytes_FromObject(path);
+ Py_DECREF(path);
+ if (!path_bytes) {
+ return 0;
+ }
+ output = PyUnicode_DecodeFSDefaultAndSize(PyBytes_AS_STRING(path_bytes),
+ PyBytes_GET_SIZE(path_bytes));
+ Py_DECREF(path_bytes);
+ if (!output) {
+ return 0;
+ }
+ }
+ else {
+ PyErr_Format(PyExc_TypeError,
+ "path should be string, bytes, or os.PathLike, not %.200s",
+ Py_TYPE(arg)->tp_name);
+ Py_DECREF(path);
+ return 0;
+ }
+ if (PyUnicode_READY(output) == -1) {
+ Py_DECREF(output);
+ return 0;
+ }
+ if (findchar(PyUnicode_DATA(output), PyUnicode_KIND(output),
+ PyUnicode_GET_LENGTH(output), 0, 1) >= 0) {
+ PyErr_SetString(PyExc_ValueError, "embedded null character");
+ Py_DECREF(output);
+ return 0;
+ }
+ *(PyObject**)addr = output;
+ return Py_CLEANUP_SUPPORTED;
+}
+
+
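+/* Return the UTF-8 representation of a str as a NUL-terminated char* owned
+   by the string object, encoding and caching it on first use. */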
+char*
+PyUnicode_AsUTF8AndSize(PyObject *unicode, Py_ssize_t *psize)
+{
+ PyObject *bytes;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+
+ if (PyUnicode_UTF8(unicode) == NULL) {
+ assert(!PyUnicode_IS_COMPACT_ASCII(unicode));
+ bytes = _PyUnicode_AsUTF8String(unicode, NULL);
+ if (bytes == NULL)
+ return NULL;
+ _PyUnicode_UTF8(unicode) = PyObject_MALLOC(PyBytes_GET_SIZE(bytes) + 1);
+ if (_PyUnicode_UTF8(unicode) == NULL) {
+ PyErr_NoMemory();
+ Py_DECREF(bytes);
+ return NULL;
+ }
+ _PyUnicode_UTF8_LENGTH(unicode) = PyBytes_GET_SIZE(bytes);
+ memcpy(_PyUnicode_UTF8(unicode),
+ PyBytes_AS_STRING(bytes),
+ _PyUnicode_UTF8_LENGTH(unicode) + 1);
+ Py_DECREF(bytes);
+ }
+
+ if (psize)
+ *psize = PyUnicode_UTF8_LENGTH(unicode);
+ return PyUnicode_UTF8(unicode);
+}
+
+char*
+PyUnicode_AsUTF8(PyObject *unicode)
+{
+ return PyUnicode_AsUTF8AndSize(unicode, NULL);
+}
+
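+/* Return the deprecated Py_UNICODE (wchar_t) representation of a str,
+   creating and caching the wstr buffer on first use; if size is not NULL it
+   receives the length in wide characters. */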
+Py_UNICODE *
+PyUnicode_AsUnicodeAndSize(PyObject *unicode, Py_ssize_t *size)
+{
+ const unsigned char *one_byte;
+#if SIZEOF_WCHAR_T == 4
+ const Py_UCS2 *two_bytes;
+#else
+ const Py_UCS4 *four_bytes;
+ const Py_UCS4 *ucs4_end;
+ Py_ssize_t num_surrogates;
+#endif
+ wchar_t *w;
+ wchar_t *wchar_end;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (_PyUnicode_WSTR(unicode) == NULL) {
+ /* Non-ASCII compact unicode object */
+ assert(_PyUnicode_KIND(unicode) != 0);
+ assert(PyUnicode_IS_READY(unicode));
+
+ if (PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND) {
+#if SIZEOF_WCHAR_T == 2
+ four_bytes = PyUnicode_4BYTE_DATA(unicode);
+ ucs4_end = four_bytes + _PyUnicode_LENGTH(unicode);
+ num_surrogates = 0;
+
+ for (; four_bytes < ucs4_end; ++four_bytes) {
+ if (*four_bytes > 0xFFFF)
+ ++num_surrogates;
+ }
+
+ _PyUnicode_WSTR(unicode) = (wchar_t *) PyObject_MALLOC(
+ sizeof(wchar_t) * (_PyUnicode_LENGTH(unicode) + 1 + num_surrogates));
+ if (!_PyUnicode_WSTR(unicode)) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ _PyUnicode_WSTR_LENGTH(unicode) = _PyUnicode_LENGTH(unicode) + num_surrogates;
+
+ w = _PyUnicode_WSTR(unicode);
+ wchar_end = w + _PyUnicode_WSTR_LENGTH(unicode);
+ four_bytes = PyUnicode_4BYTE_DATA(unicode);
+ for (; four_bytes < ucs4_end; ++four_bytes, ++w) {
+ if (*four_bytes > 0xFFFF) {
+ assert(*four_bytes <= MAX_UNICODE);
+ /* encode surrogate pair in this case */
+ *w++ = Py_UNICODE_HIGH_SURROGATE(*four_bytes);
+ *w = Py_UNICODE_LOW_SURROGATE(*four_bytes);
+ }
+ else
+ *w = *four_bytes;
+
+ if (w > wchar_end) {
+ assert(0 && "Miscalculated string end");
+ }
+ }
+ *w = 0;
+#else
+ /* sizeof(wchar_t) == 4 */
+ Py_FatalError("Impossible unicode object state, wstr and str "
+ "should share memory already.");
+ return NULL;
+#endif
+ }
+ else {
+ if ((size_t)_PyUnicode_LENGTH(unicode) >
+ PY_SSIZE_T_MAX / sizeof(wchar_t) - 1) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ _PyUnicode_WSTR(unicode) = (wchar_t *) PyObject_MALLOC(sizeof(wchar_t) *
+ (_PyUnicode_LENGTH(unicode) + 1));
+ if (!_PyUnicode_WSTR(unicode)) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ if (!PyUnicode_IS_COMPACT_ASCII(unicode))
+ _PyUnicode_WSTR_LENGTH(unicode) = _PyUnicode_LENGTH(unicode);
+ w = _PyUnicode_WSTR(unicode);
+ wchar_end = w + _PyUnicode_LENGTH(unicode);
+
+ if (PyUnicode_KIND(unicode) == PyUnicode_1BYTE_KIND) {
+ one_byte = PyUnicode_1BYTE_DATA(unicode);
+ for (; w < wchar_end; ++one_byte, ++w)
+ *w = *one_byte;
+ /* null-terminate the wstr */
+ *w = 0;
+ }
+ else if (PyUnicode_KIND(unicode) == PyUnicode_2BYTE_KIND) {
+#if SIZEOF_WCHAR_T == 4
+ two_bytes = PyUnicode_2BYTE_DATA(unicode);
+ for (; w < wchar_end; ++two_bytes, ++w)
+ *w = *two_bytes;
+ /* null-terminate the wstr */
+ *w = 0;
+#else
+ /* sizeof(wchar_t) == 2 */
+ PyObject_FREE(_PyUnicode_WSTR(unicode));
+ _PyUnicode_WSTR(unicode) = NULL;
+ Py_FatalError("Impossible unicode object state, wstr "
+ "and str should share memory already.");
+ return NULL;
+#endif
+ }
+ else {
+ assert(0 && "This should never happen.");
+ }
+ }
+ }
+ if (size != NULL)
+ *size = PyUnicode_WSTR_LENGTH(unicode);
+ return _PyUnicode_WSTR(unicode);
+}
+
+Py_UNICODE *
+PyUnicode_AsUnicode(PyObject *unicode)
+{
+ return PyUnicode_AsUnicodeAndSize(unicode, NULL);
+}
+
+const Py_UNICODE *
+_PyUnicode_AsUnicode(PyObject *unicode)
+{
+ Py_ssize_t size;
+ const Py_UNICODE *wstr;
+
+ wstr = PyUnicode_AsUnicodeAndSize(unicode, &size);
+ if (wstr && wcslen(wstr) != (size_t)size) {
+ PyErr_SetString(PyExc_ValueError, "embedded null character");
+ return NULL;
+ }
+ return wstr;
+}
+
+
+Py_ssize_t
+PyUnicode_GetSize(PyObject *unicode)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ goto onError;
+ }
+ return PyUnicode_GET_SIZE(unicode);
+
+ onError:
+ return -1;
+}
+
+Py_ssize_t
+PyUnicode_GetLength(PyObject *unicode)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return -1;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return -1;
+ return PyUnicode_GET_LENGTH(unicode);
+}
+
+Py_UCS4
+PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)
+{
+ void *data;
+ int kind;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return (Py_UCS4)-1;
+ }
+ if (PyUnicode_READY(unicode) == -1) {
+ return (Py_UCS4)-1;
+ }
+ if (index < 0 || index >= PyUnicode_GET_LENGTH(unicode)) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return (Py_UCS4)-1;
+ }
+ data = PyUnicode_DATA(unicode);
+ kind = PyUnicode_KIND(unicode);
+ return PyUnicode_READ(kind, data, index);
+}
+
+int
+PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 ch)
+{
+ if (!PyUnicode_Check(unicode) || !PyUnicode_IS_COMPACT(unicode)) {
+ PyErr_BadArgument();
+ return -1;
+ }
+ assert(PyUnicode_IS_READY(unicode));
+ if (index < 0 || index >= PyUnicode_GET_LENGTH(unicode)) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return -1;
+ }
+ if (unicode_check_modifiable(unicode))
+ return -1;
+ if (ch > PyUnicode_MAX_CHAR_VALUE(unicode)) {
+ PyErr_SetString(PyExc_ValueError, "character out of range");
+ return -1;
+ }
+ PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode),
+ index, ch);
+ return 0;
+}
+
+const char *
+PyUnicode_GetDefaultEncoding(void)
+{
+ return "utf-8";
+}
+
+/* create or adjust a UnicodeDecodeError */
+static void
+make_decode_exception(PyObject **exceptionObject,
+ const char *encoding,
+ const char *input, Py_ssize_t length,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ const char *reason)
+{
+ if (*exceptionObject == NULL) {
+ *exceptionObject = PyUnicodeDecodeError_Create(
+ encoding, input, length, startpos, endpos, reason);
+ }
+ else {
+ if (PyUnicodeDecodeError_SetStart(*exceptionObject, startpos))
+ goto onError;
+ if (PyUnicodeDecodeError_SetEnd(*exceptionObject, endpos))
+ goto onError;
+ if (PyUnicodeDecodeError_SetReason(*exceptionObject, reason))
+ goto onError;
+ }
+ return;
+
+onError:
+ Py_CLEAR(*exceptionObject);
+}
+
+#ifdef MS_WINDOWS
+/* error handling callback helper:
+ build arguments, call the callback and check the arguments,
+ if no exception occurred, copy the replacement to the output
+ and adjust various state variables.
+ return 0 on success, -1 on error
+*/
+
+static int
+unicode_decode_call_errorhandler_wchar(
+ const char *errors, PyObject **errorHandler,
+ const char *encoding, const char *reason,
+ const char **input, const char **inend, Py_ssize_t *startinpos,
+ Py_ssize_t *endinpos, PyObject **exceptionObject, const char **inptr,
+ PyObject **output, Py_ssize_t *outpos)
+{
+ static const char *argparse = "O!n;decoding error handler must return (str, int) tuple";
+
+ PyObject *restuple = NULL;
+ PyObject *repunicode = NULL;
+ Py_ssize_t outsize;
+ Py_ssize_t insize;
+ Py_ssize_t requiredsize;
+ Py_ssize_t newpos;
+ PyObject *inputobj = NULL;
+ wchar_t *repwstr;
+ Py_ssize_t repwlen;
+
+ assert (_PyUnicode_KIND(*output) == PyUnicode_WCHAR_KIND);
+ outsize = _PyUnicode_WSTR_LENGTH(*output);
+
+ if (*errorHandler == NULL) {
+ *errorHandler = PyCodec_LookupError(errors);
+ if (*errorHandler == NULL)
+ goto onError;
+ }
+
+ make_decode_exception(exceptionObject,
+ encoding,
+ *input, *inend - *input,
+ *startinpos, *endinpos,
+ reason);
+ if (*exceptionObject == NULL)
+ goto onError;
+
+ restuple = PyObject_CallFunctionObjArgs(*errorHandler, *exceptionObject, NULL);
+ if (restuple == NULL)
+ goto onError;
+ if (!PyTuple_Check(restuple)) {
+ PyErr_SetString(PyExc_TypeError, &argparse[4]);
+ goto onError;
+ }
+ if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type, &repunicode, &newpos))
+ goto onError;
+
+ /* Copy back the bytes variables, which might have been modified by the
+ callback */
+ inputobj = PyUnicodeDecodeError_GetObject(*exceptionObject);
+ if (!inputobj)
+ goto onError;
+ if (!PyBytes_Check(inputobj)) {
+ PyErr_Format(PyExc_TypeError, "exception attribute object must be bytes");
+ }
+ *input = PyBytes_AS_STRING(inputobj);
+ insize = PyBytes_GET_SIZE(inputobj);
+ *inend = *input + insize;
+ /* we can DECREF safely, as the exception has another reference,
+ so the object won't go away. */
+ Py_DECREF(inputobj);
+
+ if (newpos<0)
+ newpos = insize+newpos;
+ if (newpos<0 || newpos>insize) {
+ PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", newpos);
+ goto onError;
+ }
+
+ repwstr = PyUnicode_AsUnicodeAndSize(repunicode, &repwlen);
+ if (repwstr == NULL)
+ goto onError;
+ /* need more space? (at least enough for what we
+ have+the replacement+the rest of the string (starting
+ at the new input position), so we won't have to check space
+ when there are no errors in the rest of the string) */
+ requiredsize = *outpos;
+ if (requiredsize > PY_SSIZE_T_MAX - repwlen)
+ goto overflow;
+ requiredsize += repwlen;
+ if (requiredsize > PY_SSIZE_T_MAX - (insize - newpos))
+ goto overflow;
+ requiredsize += insize - newpos;
+ if (requiredsize > outsize) {
+ if (outsize <= PY_SSIZE_T_MAX/2 && requiredsize < 2*outsize)
+ requiredsize = 2*outsize;
+ if (unicode_resize(output, requiredsize) < 0)
+ goto onError;
+ }
+ wcsncpy(_PyUnicode_WSTR(*output) + *outpos, repwstr, repwlen);
+ *outpos += repwlen;
+ *endinpos = newpos;
+ *inptr = *input + newpos;
+
+ /* we made it! */
+ Py_XDECREF(restuple);
+ return 0;
+
+ overflow:
+ PyErr_SetString(PyExc_OverflowError,
+ "decoded result is too long for a Python string");
+
+ onError:
+ Py_XDECREF(restuple);
+ return -1;
+}
+#endif /* MS_WINDOWS */
+
+static int
+unicode_decode_call_errorhandler_writer(
+ const char *errors, PyObject **errorHandler,
+ const char *encoding, const char *reason,
+ const char **input, const char **inend, Py_ssize_t *startinpos,
+ Py_ssize_t *endinpos, PyObject **exceptionObject, const char **inptr,
+ _PyUnicodeWriter *writer /* PyObject **output, Py_ssize_t *outpos */)
+{
+ static const char *argparse = "O!n;decoding error handler must return (str, int) tuple";
+
+ PyObject *restuple = NULL;
+ PyObject *repunicode = NULL;
+ Py_ssize_t insize;
+ Py_ssize_t newpos;
+ Py_ssize_t replen;
+ Py_ssize_t remain;
+ PyObject *inputobj = NULL;
+ int need_to_grow = 0;
+ const char *new_inptr;
+
+ if (*errorHandler == NULL) {
+ *errorHandler = PyCodec_LookupError(errors);
+ if (*errorHandler == NULL)
+ goto onError;
+ }
+
+ make_decode_exception(exceptionObject,
+ encoding,
+ *input, *inend - *input,
+ *startinpos, *endinpos,
+ reason);
+ if (*exceptionObject == NULL)
+ goto onError;
+
+ restuple = PyObject_CallFunctionObjArgs(*errorHandler, *exceptionObject, NULL);
+ if (restuple == NULL)
+ goto onError;
+ if (!PyTuple_Check(restuple)) {
+ PyErr_SetString(PyExc_TypeError, &argparse[4]);
+ goto onError;
+ }
+ if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type, &repunicode, &newpos))
+ goto onError;
+
+ /* Copy back the bytes variables, which might have been modified by the
+ callback */
+ inputobj = PyUnicodeDecodeError_GetObject(*exceptionObject);
+ if (!inputobj)
+ goto onError;
+ if (!PyBytes_Check(inputobj)) {
+ PyErr_Format(PyExc_TypeError, "exception attribute object must be bytes");
+ }
+ remain = *inend - *input - *endinpos;
+ *input = PyBytes_AS_STRING(inputobj);
+ insize = PyBytes_GET_SIZE(inputobj);
+ *inend = *input + insize;
+ /* we can DECREF safely, as the exception has another reference,
+ so the object won't go away. */
+ Py_DECREF(inputobj);
+
+ if (newpos<0)
+ newpos = insize+newpos;
+ if (newpos<0 || newpos>insize) {
+ PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", newpos);
+ goto onError;
+ }
+
+ if (PyUnicode_READY(repunicode) < 0)
+ goto onError;
+ replen = PyUnicode_GET_LENGTH(repunicode);
+ if (replen > 1) {
+ writer->min_length += replen - 1;
+ need_to_grow = 1;
+ }
+ new_inptr = *input + newpos;
+ if (*inend - new_inptr > remain) {
+ /* We don't know the decoding algorithm here so we make the worst
+ assumption that one byte decodes to one unicode character.
+ If unfortunately one byte could decode to more unicode characters,
+ the decoder may write out-of-bound then. Is it possible for the
+ algorithms using this function? */
+ writer->min_length += *inend - new_inptr - remain;
+ need_to_grow = 1;
+ }
+ if (need_to_grow) {
+ writer->overallocate = 1;
+ if (_PyUnicodeWriter_Prepare(writer, writer->min_length - writer->pos,
+ PyUnicode_MAX_CHAR_VALUE(repunicode)) == -1)
+ goto onError;
+ }
+ if (_PyUnicodeWriter_WriteStr(writer, repunicode) == -1)
+ goto onError;
+
+ *endinpos = newpos;
+ *inptr = new_inptr;
+
+ /* we made it! */
+ Py_XDECREF(restuple);
+ return 0;
+
+ onError:
+ Py_XDECREF(restuple);
+ return -1;
+}
+
+/* --- UTF-7 Codec -------------------------------------------------------- */
+
+/* See RFC2152 for details. We encode conservatively and decode liberally. */
+
+/* Three simple macros defining base-64. */
+
+/* Is c a base-64 character? */
+
+#define IS_BASE64(c) \
+ (((c) >= 'A' && (c) <= 'Z') || \
+ ((c) >= 'a' && (c) <= 'z') || \
+ ((c) >= '0' && (c) <= '9') || \
+ (c) == '+' || (c) == '/')
+
+/* given that c is a base-64 character, what is its base-64 value? */
+
+#define FROM_BASE64(c) \
+ (((c) >= 'A' && (c) <= 'Z') ? (c) - 'A' : \
+ ((c) >= 'a' && (c) <= 'z') ? (c) - 'a' + 26 : \
+ ((c) >= '0' && (c) <= '9') ? (c) - '0' + 52 : \
+ (c) == '+' ? 62 : 63)
+
+/* What is the base-64 character of the bottom 6 bits of n? */
+
+#define TO_BASE64(n) \
+ ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"[(n) & 0x3f])
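+/* For example: FROM_BASE64('A') == 0, FROM_BASE64('a') == 26,
+ * FROM_BASE64('0') == 52, FROM_BASE64('+') == 62, FROM_BASE64('/') == 63,
+ * and TO_BASE64(0) == 'A'.  This is the standard base-64 alphabet that
+ * RFC2152 reuses for the shifted sections. */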
+
+/* DECODE_DIRECT: this byte encountered in a UTF-7 string should be
+ * decoded as itself. We are permissive on decoding; the only ASCII
+ * byte not decoding to itself is the + which begins a base64
+ * string. */
+
+#define DECODE_DIRECT(c) \
+ ((c) <= 127 && (c) != '+')
+
+/* The UTF-7 encoder treats ASCII characters differently according to
+ * whether they are Set D, Set O, Whitespace, or special (i.e. none of
+ * the above). See RFC2152. This array identifies these different
+ * sets:
+ * 0 : "Set D"
+ * alphanumeric and '(),-./:?
+ * 1 : "Set O"
+ * !"#$%&*;<=>@[]^_`{|}
+ * 2 : "whitespace"
+ * ht nl cr sp
+ * 3 : special (must be base64 encoded)
+ * everything else (i.e. +\~ and non-printing codes 0-8 11-12 14-31 127)
+ */
+
+static
+char utf7_category[128] = {
+/* nul soh stx etx eot enq ack bel bs ht nl vt np cr so si */
+ 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 3, 3, 2, 3, 3,
+/* dle dc1 dc2 dc3 dc4 nak syn etb can em sub esc fs gs rs us */
+ 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
+/* sp ! " # $ % & ' ( ) * + , - . / */
+ 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 3, 0, 0, 0, 0,
+/* 0 1 2 3 4 5 6 7 8 9 : ; < = > ? */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0,
+/* @ A B C D E F G H I J K L M N O */
+ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+/* P Q R S T U V W X Y Z [ \ ] ^ _ */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 1, 1,
+/* ` a b c d e f g h i j k l m n o */
+ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+/* p q r s t u v w x y z { | } ~ del */
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 3, 3,
+};
+
+/* ENCODE_DIRECT: this character should be encoded as itself. The
+ * answer depends on whether we are encoding set O as itself, and also
+ * on whether we are encoding whitespace as itself. RFC2152 makes it
+ * clear that the answers to these questions vary between
+ * applications, so this code needs to be flexible. */
+
+#define ENCODE_DIRECT(c, directO, directWS) \
+ ((c) < 128 && (c) > 0 && \
+ ((utf7_category[(c)] == 0) || \
+ (directWS && (utf7_category[(c)] == 2)) || \
+ (directO && (utf7_category[(c)] == 1))))
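+
+/* Worked example: the decoder turns "+ACE-" into "!".  '+' opens a base-64
+ * section; 'A','C','E' contribute the bits 000000 000010 000100, whose first
+ * 16 bits form 0x0021 ('!') with two zero padding bits left over; '-' closes
+ * the section.  "+-" decodes to a literal '+'. */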
+
+PyObject *
+PyUnicode_DecodeUTF7(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ return PyUnicode_DecodeUTF7Stateful(s, size, errors, NULL);
+}
+
+/* The decoder. The only state we preserve is our read position,
+ * i.e. how many characters we have consumed. So if we end in the
+ * middle of a shift sequence we have to back off the read position
+ * and the output to the beginning of the sequence, otherwise we lose
+ * all the shift state (seen bits, number of bits seen, high
+ * surrogate). */
+
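+/* For example, U+1F600 arrives in the base-64 stream as the UTF-16 pair
+ * 0xD83D 0xDE00; the first unit is held in 'surrogate' until the second
+ * arrives and Py_UNICODE_JOIN_SURROGATES() recombines them. */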
+PyObject *
+PyUnicode_DecodeUTF7Stateful(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ Py_ssize_t *consumed)
+{
+ const char *starts = s;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ const char *e;
+ _PyUnicodeWriter writer;
+ const char *errmsg = "";
+ int inShift = 0;
+ Py_ssize_t shiftOutStart;
+ unsigned int base64bits = 0;
+ unsigned long base64buffer = 0;
+ Py_UCS4 surrogate = 0;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+
+ if (size == 0) {
+ if (consumed)
+ *consumed = 0;
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ /* Start off assuming it's all ASCII. Widen later as necessary. */
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+
+ shiftOutStart = 0;
+ e = s + size;
+
+ while (s < e) {
+ Py_UCS4 ch;
+ restart:
+ ch = (unsigned char) *s;
+
+ if (inShift) { /* in a base-64 section */
+ if (IS_BASE64(ch)) { /* consume a base-64 character */
+ base64buffer = (base64buffer << 6) | FROM_BASE64(ch);
+ base64bits += 6;
+ s++;
+ if (base64bits >= 16) {
+ /* we have enough bits for a UTF-16 value */
+ Py_UCS4 outCh = (Py_UCS4)(base64buffer >> (base64bits-16));
+ base64bits -= 16;
+ base64buffer &= (1 << base64bits) - 1; /* clear high bits */
+ assert(outCh <= 0xffff);
+ if (surrogate) {
+ /* expecting a second surrogate */
+ if (Py_UNICODE_IS_LOW_SURROGATE(outCh)) {
+ Py_UCS4 ch2 = Py_UNICODE_JOIN_SURROGATES(surrogate, outCh);
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch2) < 0)
+ goto onError;
+ surrogate = 0;
+ continue;
+ }
+ else {
+ if (_PyUnicodeWriter_WriteCharInline(&writer, surrogate) < 0)
+ goto onError;
+ surrogate = 0;
+ }
+ }
+ if (Py_UNICODE_IS_HIGH_SURROGATE(outCh)) {
+ /* first surrogate */
+ surrogate = outCh;
+ }
+ else {
+ if (_PyUnicodeWriter_WriteCharInline(&writer, outCh) < 0)
+ goto onError;
+ }
+ }
+ }
+ else { /* now leaving a base-64 section */
+ inShift = 0;
+ if (base64bits > 0) { /* left-over bits */
+ if (base64bits >= 6) {
+ /* We've seen at least one base-64 character */
+ s++;
+ errmsg = "partial character in shift sequence";
+ goto utf7Error;
+ }
+ else {
+ /* Some bits remain; they should be zero */
+ if (base64buffer != 0) {
+ s++;
+ errmsg = "non-zero padding bits in shift sequence";
+ goto utf7Error;
+ }
+ }
+ }
+ if (surrogate && DECODE_DIRECT(ch)) {
+ if (_PyUnicodeWriter_WriteCharInline(&writer, surrogate) < 0)
+ goto onError;
+ }
+ surrogate = 0;
+ if (ch == '-') {
+ /* '-' is absorbed; other terminating
+ characters are preserved */
+ s++;
+ }
+ }
+ }
+ else if ( ch == '+' ) {
+ startinpos = s-starts;
+ s++; /* consume '+' */
+ if (s < e && *s == '-') { /* '+-' encodes '+' */
+ s++;
+ if (_PyUnicodeWriter_WriteCharInline(&writer, '+') < 0)
+ goto onError;
+ }
+ else { /* begin base64-encoded section */
+ inShift = 1;
+ surrogate = 0;
+ shiftOutStart = writer.pos;
+ base64bits = 0;
+ base64buffer = 0;
+ }
+ }
+ else if (DECODE_DIRECT(ch)) { /* character decodes as itself */
+ s++;
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
+ goto onError;
+ }
+ else {
+ startinpos = s-starts;
+ s++;
+ errmsg = "unexpected special character";
+ goto utf7Error;
+ }
+ continue;
+utf7Error:
+ endinpos = s-starts;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "utf7", errmsg,
+ &starts, &e, &startinpos, &endinpos, &exc, &s,
+ &writer))
+ goto onError;
+ }
+
+ /* end of string */
+
+ if (inShift && !consumed) { /* in shift sequence, no more to follow */
+ /* if we're in an inconsistent state, that's an error */
+ inShift = 0;
+ if (surrogate ||
+ (base64bits >= 6) ||
+ (base64bits > 0 && base64buffer != 0)) {
+ endinpos = size;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "utf7", "unterminated shift sequence",
+ &starts, &e, &startinpos, &endinpos, &exc, &s,
+ &writer))
+ goto onError;
+ if (s < e)
+ goto restart;
+ }
+ }
+
+ /* return state */
+ if (consumed) {
+ if (inShift) {
+ *consumed = startinpos;
+ if (writer.pos != shiftOutStart && writer.maxchar > 127) {
+ PyObject *result = PyUnicode_FromKindAndData(
+ writer.kind, writer.data, shiftOutStart);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ _PyUnicodeWriter_Dealloc(&writer);
+ return result;
+ }
+ writer.pos = shiftOutStart; /* back off output */
+ }
+ else {
+ *consumed = s-starts;
+ }
+ }
+
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+}
+
+
+PyObject *
+_PyUnicode_EncodeUTF7(PyObject *str,
+ int base64SetO,
+ int base64WhiteSpace,
+ const char *errors)
+{
+ int kind;
+ void *data;
+ Py_ssize_t len;
+ PyObject *v;
+ int inShift = 0;
+ Py_ssize_t i;
+ unsigned int base64bits = 0;
+ unsigned long base64buffer = 0;
+ char * out;
+ char * start;
+
+ if (PyUnicode_READY(str) == -1)
+ return NULL;
+ kind = PyUnicode_KIND(str);
+ data = PyUnicode_DATA(str);
+ len = PyUnicode_GET_LENGTH(str);
+
+ if (len == 0)
+ return PyBytes_FromStringAndSize(NULL, 0);
+
+ /* It might be possible to tighten this worst case */
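+    /* (A rough bound: a single non-BMP character can expand to '+', six
+       base-64 characters for its surrogate pair, and a closing '-', i.e.
+       8 bytes.) */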
+ if (len > PY_SSIZE_T_MAX / 8)
+ return PyErr_NoMemory();
+ v = PyBytes_FromStringAndSize(NULL, len * 8);
+ if (v == NULL)
+ return NULL;
+
+ start = out = PyBytes_AS_STRING(v);
+ for (i = 0; i < len; ++i) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+
+ if (inShift) {
+ if (ENCODE_DIRECT(ch, !base64SetO, !base64WhiteSpace)) {
+ /* shifting out */
+ if (base64bits) { /* output remaining bits */
+ *out++ = TO_BASE64(base64buffer << (6-base64bits));
+ base64buffer = 0;
+ base64bits = 0;
+ }
+ inShift = 0;
+ /* Characters not in the BASE64 set implicitly unshift the sequence
+ so no '-' is required, except if the character is itself a '-' */
+ if (IS_BASE64(ch) || ch == '-') {
+ *out++ = '-';
+ }
+ *out++ = (char) ch;
+ }
+ else {
+ goto encode_char;
+ }
+ }
+ else { /* not in a shift sequence */
+ if (ch == '+') {
+ *out++ = '+';
+ *out++ = '-';
+ }
+ else if (ENCODE_DIRECT(ch, !base64SetO, !base64WhiteSpace)) {
+ *out++ = (char) ch;
+ }
+ else {
+ *out++ = '+';
+ inShift = 1;
+ goto encode_char;
+ }
+ }
+ continue;
+encode_char:
+ if (ch >= 0x10000) {
+ assert(ch <= MAX_UNICODE);
+
+ /* code first surrogate */
+ base64bits += 16;
+ base64buffer = (base64buffer << 16) | Py_UNICODE_HIGH_SURROGATE(ch);
+ while (base64bits >= 6) {
+ *out++ = TO_BASE64(base64buffer >> (base64bits-6));
+ base64bits -= 6;
+ }
+ /* prepare second surrogate */
+ ch = Py_UNICODE_LOW_SURROGATE(ch);
+ }
+ base64bits += 16;
+ base64buffer = (base64buffer << 16) | ch;
+ while (base64bits >= 6) {
+ *out++ = TO_BASE64(base64buffer >> (base64bits-6));
+ base64bits -= 6;
+ }
+ }
+ if (base64bits)
+ *out++= TO_BASE64(base64buffer << (6-base64bits) );
+ if (inShift)
+ *out++ = '-';
+ if (_PyBytes_Resize(&v, out - start) < 0)
+ return NULL;
+ return v;
+}
+PyObject *
+PyUnicode_EncodeUTF7(const Py_UNICODE *s,
+ Py_ssize_t size,
+ int base64SetO,
+ int base64WhiteSpace,
+ const char *errors)
+{
+ PyObject *result;
+ PyObject *tmp = PyUnicode_FromUnicode(s, size);
+ if (tmp == NULL)
+ return NULL;
+ result = _PyUnicode_EncodeUTF7(tmp, base64SetO,
+ base64WhiteSpace, errors);
+ Py_DECREF(tmp);
+ return result;
+}
+
+#undef IS_BASE64
+#undef FROM_BASE64
+#undef TO_BASE64
+#undef DECODE_DIRECT
+#undef ENCODE_DIRECT
+
+/* --- UTF-8 Codec -------------------------------------------------------- */
+
+PyObject *
+PyUnicode_DecodeUTF8(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
+}
+
+#include "stringlib/asciilib.h"
+#include "stringlib/codecs.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs1lib.h"
+#include "stringlib/codecs.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs2lib.h"
+#include "stringlib/codecs.h"
+#include "stringlib/undef.h"
+
+#include "stringlib/ucs4lib.h"
+#include "stringlib/codecs.h"
+#include "stringlib/undef.h"
+
+/* Mask to quickly check whether a C 'long' contains a
+ non-ASCII, UTF8-encoded char. */
+#if (SIZEOF_LONG == 8)
+# define ASCII_CHAR_MASK 0x8080808080808080UL
+#elif (SIZEOF_LONG == 4)
+# define ASCII_CHAR_MASK 0x80808080UL
+#else
+# error C 'long' size should be either 4 or 8!
+#endif
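+
+/* The mask selects the high bit of every byte in a long: if
+   (value & ASCII_CHAR_MASK) is zero, every byte in 'value' is < 0x80 and
+   the whole word can be copied as ASCII in one step. */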
+
+static Py_ssize_t
+ascii_decode(const char *start, const char *end, Py_UCS1 *dest)
+{
+ const char *p = start;
+ const char *aligned_end = (const char *) _Py_ALIGN_DOWN(end, SIZEOF_LONG);
+
+ /*
+ * Issue #17237: m68k is a bit different from most architectures in
+ * that objects do not use "natural alignment" - for example, int and
+ * long are only aligned at 2-byte boundaries. Therefore the assert()
+ * won't work; also, tests have shown that skipping the "optimised
+ * version" will even speed up m68k.
+ */
+#if !defined(__m68k__)
+#if SIZEOF_LONG <= SIZEOF_VOID_P
+ assert(_Py_IS_ALIGNED(dest, SIZEOF_LONG));
+ if (_Py_IS_ALIGNED(p, SIZEOF_LONG)) {
+ /* Fast path, see in STRINGLIB(utf8_decode) for
+ an explanation. */
+ /* Help allocation */
+ const char *_p = p;
+ Py_UCS1 * q = dest;
+ while (_p < aligned_end) {
+ unsigned long value = *(const unsigned long *) _p;
+ if (value & ASCII_CHAR_MASK)
+ break;
+ *((unsigned long *)q) = value;
+ _p += SIZEOF_LONG;
+ q += SIZEOF_LONG;
+ }
+ p = _p;
+ while (p < end) {
+ if ((unsigned char)*p & 0x80)
+ break;
+ *q++ = *p++;
+ }
+ return p - start;
+ }
+#endif
+#endif
+ while (p < end) {
+ /* Fast path, see in STRINGLIB(utf8_decode) in stringlib/codecs.h
+ for an explanation. */
+ if (_Py_IS_ALIGNED(p, SIZEOF_LONG)) {
+ /* Help allocation */
+ const char *_p = p;
+ while (_p < aligned_end) {
+ unsigned long value = *(unsigned long *) _p;
+ if (value & ASCII_CHAR_MASK)
+ break;
+ _p += SIZEOF_LONG;
+ }
+ p = _p;
+ if (_p == end)
+ break;
+ }
+ if ((unsigned char)*p & 0x80)
+ break;
+ ++p;
+ }
+ memcpy(dest, start, p - start);
+ return p - start;
+}
+
+PyObject *
+PyUnicode_DecodeUTF8Stateful(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ Py_ssize_t *consumed)
+{
+ _PyUnicodeWriter writer;
+ const char *starts = s;
+ const char *end = s + size;
+
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ const char *errmsg = "";
+ PyObject *error_handler_obj = NULL;
+ PyObject *exc = NULL;
+ _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
+
+ if (size == 0) {
+ if (consumed)
+ *consumed = 0;
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ /* ASCII is equivalent to the first 128 ordinals in Unicode. */
+ if (size == 1 && (unsigned char)s[0] < 128) {
+ if (consumed)
+ *consumed = 1;
+ return get_latin1_char((unsigned char)s[0]);
+ }
+
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+ if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
+ goto onError;
+
+ writer.pos = ascii_decode(s, end, writer.data);
+ s += writer.pos;
+ while (s < end) {
+ Py_UCS4 ch;
+ int kind = writer.kind;
+
+ if (kind == PyUnicode_1BYTE_KIND) {
+ if (PyUnicode_IS_ASCII(writer.buffer))
+ ch = asciilib_utf8_decode(&s, end, writer.data, &writer.pos);
+ else
+ ch = ucs1lib_utf8_decode(&s, end, writer.data, &writer.pos);
+ } else if (kind == PyUnicode_2BYTE_KIND) {
+ ch = ucs2lib_utf8_decode(&s, end, writer.data, &writer.pos);
+ } else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ ch = ucs4lib_utf8_decode(&s, end, writer.data, &writer.pos);
+ }
+
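+        /* The stringlib decoders return 0 when the input is exhausted or
+           ends in an incomplete sequence, 1 for an invalid start byte,
+           2-4 for an invalid continuation byte (the bad sequence spans
+           ch-1 bytes, as used below), and otherwise the decoded character
+           that no longer fits the current writer kind. */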
+ switch (ch) {
+ case 0:
+ if (s == end || consumed)
+ goto End;
+ errmsg = "unexpected end of data";
+ startinpos = s - starts;
+ endinpos = end - starts;
+ break;
+ case 1:
+ errmsg = "invalid start byte";
+ startinpos = s - starts;
+ endinpos = startinpos + 1;
+ break;
+ case 2:
+ case 3:
+ case 4:
+ errmsg = "invalid continuation byte";
+ startinpos = s - starts;
+ endinpos = startinpos + ch - 1;
+ break;
+ default:
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
+ goto onError;
+ continue;
+ }
+
+ if (error_handler == _Py_ERROR_UNKNOWN)
+ error_handler = get_error_handler(errors);
+
+ switch (error_handler) {
+ case _Py_ERROR_IGNORE:
+ s += (endinpos - startinpos);
+ break;
+
+ case _Py_ERROR_REPLACE:
+ if (_PyUnicodeWriter_WriteCharInline(&writer, 0xfffd) < 0)
+ goto onError;
+ s += (endinpos - startinpos);
+ break;
+
+ case _Py_ERROR_SURROGATEESCAPE:
+ {
+ Py_ssize_t i;
+
+ if (_PyUnicodeWriter_PrepareKind(&writer, PyUnicode_2BYTE_KIND) < 0)
+ goto onError;
+ for (i=startinpos; i<endinpos; i++) {
+ ch = (Py_UCS4)(unsigned char)(starts[i]);
+ PyUnicode_WRITE(writer.kind, writer.data, writer.pos,
+ ch + 0xdc00);
+ writer.pos++;
+ }
+ s += (endinpos - startinpos);
+ break;
+ }
+
+ default:
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &error_handler_obj,
+ "utf-8", errmsg,
+ &starts, &end, &startinpos, &endinpos, &exc, &s,
+ &writer))
+ goto onError;
+ }
+ }
+
+End:
+ if (consumed)
+ *consumed = s - starts;
+
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+onError:
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+}
+
+#if defined(__APPLE__) || defined(__ANDROID__)
+
+/* Simplified UTF-8 decoder using surrogateescape error handler,
+ used to decode the command line arguments on Mac OS X and Android.
+
+ Return a pointer to a newly allocated wide character string (use
+ PyMem_RawFree() to free the memory), or NULL on memory allocation error. */
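+
+/* For example, an undecodable byte 0xFF is stored as the lone surrogate
+   0xDCFF, so the original byte can later be recovered by re-encoding with
+   the "surrogateescape" error handler. */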
+
+wchar_t*
+_Py_DecodeUTF8_surrogateescape(const char *s, Py_ssize_t size)
+{
+ const char *e;
+ wchar_t *unicode;
+ Py_ssize_t outpos;
+
+ /* Note: size will always be longer than the resulting Unicode
+ character count */
+ if (PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(wchar_t) < (size + 1))
+ return NULL;
+ unicode = PyMem_RawMalloc((size + 1) * sizeof(wchar_t));
+ if (!unicode)
+ return NULL;
+
+ /* Unpack UTF-8 encoded data */
+ e = s + size;
+ outpos = 0;
+ while (s < e) {
+ Py_UCS4 ch;
+#if SIZEOF_WCHAR_T == 4
+ ch = ucs4lib_utf8_decode(&s, e, (Py_UCS4 *)unicode, &outpos);
+#else
+ ch = ucs2lib_utf8_decode(&s, e, (Py_UCS2 *)unicode, &outpos);
+#endif
+ if (ch > 0xFF) {
+#if SIZEOF_WCHAR_T == 4
+ assert(0);
+#else
+ assert(ch > 0xFFFF && ch <= MAX_UNICODE);
+ /* compute and append the two surrogates: */
+ unicode[outpos++] = (wchar_t)Py_UNICODE_HIGH_SURROGATE(ch);
+ unicode[outpos++] = (wchar_t)Py_UNICODE_LOW_SURROGATE(ch);
+#endif
+ }
+ else {
+ if (!ch && s == e)
+ break;
+ /* surrogateescape */
+ unicode[outpos++] = 0xDC00 + (unsigned char)*s++;
+ }
+ }
+ unicode[outpos] = L'\0';
+ return unicode;
+}
+
+#endif /* __APPLE__ or __ANDROID__ */
+
+/* Primary internal function which creates utf8 encoded bytes objects.
+
+ Allocation strategy: if the string is short, convert into a stack buffer
+ and allocate exactly as much space needed at the end. Else allocate the
+ maximum possible needed (4 result bytes per Unicode character), and return
+ the excess memory at the end.
+*/
+PyObject *
+_PyUnicode_AsUTF8String(PyObject *unicode, const char *errors)
+{
+ enum PyUnicode_Kind kind;
+ void *data;
+ Py_ssize_t size;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+
+ if (PyUnicode_UTF8(unicode))
+ return PyBytes_FromStringAndSize(PyUnicode_UTF8(unicode),
+ PyUnicode_UTF8_LENGTH(unicode));
+
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+ size = PyUnicode_GET_LENGTH(unicode);
+
+ switch (kind) {
+ default:
+ assert(0);
+ case PyUnicode_1BYTE_KIND:
+ /* the string cannot be ASCII, or PyUnicode_UTF8() would be set */
+ assert(!PyUnicode_IS_ASCII(unicode));
+ return ucs1lib_utf8_encoder(unicode, data, size, errors);
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_utf8_encoder(unicode, data, size, errors);
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_utf8_encoder(unicode, data, size, errors);
+ }
+}
+
+PyObject *
+PyUnicode_EncodeUTF8(const Py_UNICODE *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ PyObject *v, *unicode;
+
+ unicode = PyUnicode_FromUnicode(s, size);
+ if (unicode == NULL)
+ return NULL;
+ v = _PyUnicode_AsUTF8String(unicode, errors);
+ Py_DECREF(unicode);
+ return v;
+}
+
+PyObject *
+PyUnicode_AsUTF8String(PyObject *unicode)
+{
+ return _PyUnicode_AsUTF8String(unicode, NULL);
+}
+
+/* --- UTF-32 Codec ------------------------------------------------------- */
+
+PyObject *
+PyUnicode_DecodeUTF32(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ int *byteorder)
+{
+ return PyUnicode_DecodeUTF32Stateful(s, size, errors, byteorder, NULL);
+}
+
+PyObject *
+PyUnicode_DecodeUTF32Stateful(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ int *byteorder,
+ Py_ssize_t *consumed)
+{
+ const char *starts = s;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ _PyUnicodeWriter writer;
+ const unsigned char *q, *e;
+ int le, bo = 0; /* assume native ordering by default */
+ const char *encoding;
+ const char *errmsg = "";
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+
+ q = (unsigned char *)s;
+ e = q + size;
+
+ if (byteorder)
+ bo = *byteorder;
+
+ /* Check for BOM marks (U+FEFF) in the input and adjust current
+ byte order setting accordingly. In native mode, the leading BOM
+ mark is skipped, in all other modes, it is copied to the output
+ stream as-is (giving a ZWNBSP character). */
+ if (bo == 0 && size >= 4) {
+ Py_UCS4 bom = ((unsigned int)q[3] << 24) | (q[2] << 16) | (q[1] << 8) | q[0];
+ if (bom == 0x0000FEFF) {
+ bo = -1;
+ q += 4;
+ }
+ else if (bom == 0xFFFE0000) {
+ bo = 1;
+ q += 4;
+ }
+ if (byteorder)
+ *byteorder = bo;
+ }
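+    /* For example, the bytes FF FE 00 00 compose to 0x0000FEFF above and
+       select little-endian (bo = -1), while 00 00 FE FF compose to
+       0xFFFE0000 and select big-endian (bo = 1). */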
+
+ if (q == e) {
+ if (consumed)
+ *consumed = size;
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+#ifdef WORDS_BIGENDIAN
+ le = bo < 0;
+#else
+ le = bo <= 0;
+#endif
+ encoding = le ? "utf-32-le" : "utf-32-be";
+
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = (e - q + 3) / 4;
+ if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
+ goto onError;
+
+ while (1) {
+ Py_UCS4 ch = 0;
+ Py_UCS4 maxch = PyUnicode_MAX_CHAR_VALUE(writer.buffer);
+
+ if (e - q >= 4) {
+ enum PyUnicode_Kind kind = writer.kind;
+ void *data = writer.data;
+ const unsigned char *last = e - 4;
+ Py_ssize_t pos = writer.pos;
+ if (le) {
+ do {
+ ch = ((unsigned int)q[3] << 24) | (q[2] << 16) | (q[1] << 8) | q[0];
+ if (ch > maxch)
+ break;
+ if (kind != PyUnicode_1BYTE_KIND &&
+ Py_UNICODE_IS_SURROGATE(ch))
+ break;
+ PyUnicode_WRITE(kind, data, pos++, ch);
+ q += 4;
+ } while (q <= last);
+ }
+ else {
+ do {
+ ch = ((unsigned int)q[0] << 24) | (q[1] << 16) | (q[2] << 8) | q[3];
+ if (ch > maxch)
+ break;
+ if (kind != PyUnicode_1BYTE_KIND &&
+ Py_UNICODE_IS_SURROGATE(ch))
+ break;
+ PyUnicode_WRITE(kind, data, pos++, ch);
+ q += 4;
+ } while (q <= last);
+ }
+ writer.pos = pos;
+ }
+
+ if (Py_UNICODE_IS_SURROGATE(ch)) {
+ errmsg = "code point in surrogate code point range(0xd800, 0xe000)";
+ startinpos = ((const char *)q) - starts;
+ endinpos = startinpos + 4;
+ }
+ else if (ch <= maxch) {
+ if (q == e || consumed)
+ break;
+ /* remaining bytes at the end? (size should be divisible by 4) */
+ errmsg = "truncated data";
+ startinpos = ((const char *)q) - starts;
+ endinpos = ((const char *)e) - starts;
+ }
+ else {
+ if (ch < 0x110000) {
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
+ goto onError;
+ q += 4;
+ continue;
+ }
+ errmsg = "code point not in range(0x110000)";
+ startinpos = ((const char *)q) - starts;
+ endinpos = startinpos + 4;
+ }
+
+ /* The remaining input chars are ignored if the callback
+ chooses to skip the input */
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ encoding, errmsg,
+ &starts, (const char **)&e, &startinpos, &endinpos, &exc, (const char **)&q,
+ &writer))
+ goto onError;
+ }
+
+ if (consumed)
+ *consumed = (const char *)q-starts;
+
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return NULL;
+}
+
+PyObject *
+_PyUnicode_EncodeUTF32(PyObject *str,
+ const char *errors,
+ int byteorder)
+{
+ enum PyUnicode_Kind kind;
+ const void *data;
+ Py_ssize_t len;
+ PyObject *v;
+ uint32_t *out;
+#if PY_LITTLE_ENDIAN
+ int native_ordering = byteorder <= 0;
+#else
+ int native_ordering = byteorder >= 0;
+#endif
+ const char *encoding;
+ Py_ssize_t nsize, pos;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ PyObject *rep = NULL;
+
+ if (!PyUnicode_Check(str)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(str) == -1)
+ return NULL;
+ kind = PyUnicode_KIND(str);
+ data = PyUnicode_DATA(str);
+ len = PyUnicode_GET_LENGTH(str);
+
+ if (len > PY_SSIZE_T_MAX / 4 - (byteorder == 0))
+ return PyErr_NoMemory();
+ nsize = len + (byteorder == 0);
+ v = PyBytes_FromStringAndSize(NULL, nsize * 4);
+ if (v == NULL)
+ return NULL;
+
+ /* output buffer is 4-bytes aligned */
+ assert(_Py_IS_ALIGNED(PyBytes_AS_STRING(v), 4));
+ out = (uint32_t *)PyBytes_AS_STRING(v);
+ if (byteorder == 0)
+ *out++ = 0xFEFF;
+ if (len == 0)
+ goto done;
+
+ if (byteorder == -1)
+        /* We don't know the decoding algorithm here, so we make the
+           worst-case assumption that one byte decodes to one Unicode
+           character.  If one byte could instead decode to several
+           characters, the decoder might then write out of bounds.  Is
+           that possible for the algorithms using this function? */
+
+ if (kind == PyUnicode_1BYTE_KIND) {
+ ucs1lib_utf32_encode((const Py_UCS1 *)data, len, &out, native_ordering);
+ goto done;
+ }
+
+ pos = 0;
+ while (pos < len) {
+ Py_ssize_t repsize, moreunits;
+
+ if (kind == PyUnicode_2BYTE_KIND) {
+ pos += ucs2lib_utf32_encode((const Py_UCS2 *)data + pos, len - pos,
+ &out, native_ordering);
+ }
+ else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ pos += ucs4lib_utf32_encode((const Py_UCS4 *)data + pos, len - pos,
+ &out, native_ordering);
+ }
+ if (pos == len)
+ break;
+
+ rep = unicode_encode_call_errorhandler(
+ errors, &errorHandler,
+ encoding, "surrogates not allowed",
+ str, &exc, pos, pos + 1, &pos);
+ if (!rep)
+ goto error;
+
+ if (PyBytes_Check(rep)) {
+ repsize = PyBytes_GET_SIZE(rep);
+ if (repsize & 3) {
+ raise_encode_exception(&exc, encoding,
+ str, pos - 1, pos,
+ "surrogates not allowed");
+ goto error;
+ }
+ moreunits = repsize / 4;
+ }
+ else {
+ assert(PyUnicode_Check(rep));
+ if (PyUnicode_READY(rep) < 0)
+ goto error;
+ moreunits = repsize = PyUnicode_GET_LENGTH(rep);
+ if (!PyUnicode_IS_ASCII(rep)) {
+ raise_encode_exception(&exc, encoding,
+ str, pos - 1, pos,
+ "surrogates not allowed");
+ goto error;
+ }
+ }
+
+ /* four bytes are reserved for each surrogate */
+ if (moreunits > 1) {
+ Py_ssize_t outpos = out - (uint32_t*) PyBytes_AS_STRING(v);
+ if (moreunits >= (PY_SSIZE_T_MAX - PyBytes_GET_SIZE(v)) / 4) {
+ /* integer overflow */
+ PyErr_NoMemory();
+ goto error;
+ }
+ if (_PyBytes_Resize(&v, PyBytes_GET_SIZE(v) + 4 * (moreunits - 1)) < 0)
+ goto error;
+ out = (uint32_t*) PyBytes_AS_STRING(v) + outpos;
+ }
+
+ if (PyBytes_Check(rep)) {
+ memcpy(out, PyBytes_AS_STRING(rep), repsize);
+ out += moreunits;
+ } else /* rep is unicode */ {
+ assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
+ ucs1lib_utf32_encode(PyUnicode_1BYTE_DATA(rep), repsize,
+ &out, native_ordering);
+ }
+
+ Py_CLEAR(rep);
+ }
+
+    /* Cut back to the size actually needed. This is necessary when, for
+       example, a string containing isolated surrogates is encoded and the
+       'ignore' handler is used. */
+ nsize = (unsigned char*) out - (unsigned char*) PyBytes_AS_STRING(v);
+ if (nsize != PyBytes_GET_SIZE(v))
+ _PyBytes_Resize(&v, nsize);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ done:
+ return v;
+ error:
+ Py_XDECREF(rep);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ Py_XDECREF(v);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_EncodeUTF32(const Py_UNICODE *s,
+ Py_ssize_t size,
+ const char *errors,
+ int byteorder)
+{
+ PyObject *result;
+ PyObject *tmp = PyUnicode_FromUnicode(s, size);
+ if (tmp == NULL)
+ return NULL;
+ result = _PyUnicode_EncodeUTF32(tmp, errors, byteorder);
+ Py_DECREF(tmp);
+ return result;
+}
+
+PyObject *
+PyUnicode_AsUTF32String(PyObject *unicode)
+{
+ return _PyUnicode_EncodeUTF32(unicode, NULL, 0);
+}
+
+/* --- UTF-16 Codec ------------------------------------------------------- */
+
+PyObject *
+PyUnicode_DecodeUTF16(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ int *byteorder)
+{
+ return PyUnicode_DecodeUTF16Stateful(s, size, errors, byteorder, NULL);
+}
+
+PyObject *
+PyUnicode_DecodeUTF16Stateful(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ int *byteorder,
+ Py_ssize_t *consumed)
+{
+ const char *starts = s;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ _PyUnicodeWriter writer;
+ const unsigned char *q, *e;
+ int bo = 0; /* assume native ordering by default */
+ int native_ordering;
+ const char *errmsg = "";
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ const char *encoding;
+
+ q = (unsigned char *)s;
+ e = q + size;
+
+ if (byteorder)
+ bo = *byteorder;
+
+ /* Check for BOM marks (U+FEFF) in the input and adjust current
+ byte order setting accordingly. In native mode, the leading BOM
+ mark is skipped, in all other modes, it is copied to the output
+ stream as-is (giving a ZWNBSP character). */
+ if (bo == 0 && size >= 2) {
+ const Py_UCS4 bom = (q[1] << 8) | q[0];
+ if (bom == 0xFEFF) {
+ q += 2;
+ bo = -1;
+ }
+ else if (bom == 0xFFFE) {
+ q += 2;
+ bo = 1;
+ }
+ if (byteorder)
+ *byteorder = bo;
+ }
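+    /* For example, the bytes FF FE select little-endian (bo = -1) and
+       FE FF select big-endian (bo = 1); the BOM itself is skipped. */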
+
+ if (q == e) {
+ if (consumed)
+ *consumed = size;
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+#if PY_LITTLE_ENDIAN
+ native_ordering = bo <= 0;
+ encoding = bo <= 0 ? "utf-16-le" : "utf-16-be";
+#else
+ native_ordering = bo >= 0;
+ encoding = bo >= 0 ? "utf-16-be" : "utf-16-le";
+#endif
+
+    /* Note: size will normally be larger than the resulting Unicode
+       character count.  The error handler will take care of resizing
+       when needed. */
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = (e - q + 1) / 2;
+ if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
+ goto onError;
+
+ while (1) {
+ Py_UCS4 ch = 0;
+ if (e - q >= 2) {
+ int kind = writer.kind;
+ if (kind == PyUnicode_1BYTE_KIND) {
+ if (PyUnicode_IS_ASCII(writer.buffer))
+ ch = asciilib_utf16_decode(&q, e,
+ (Py_UCS1*)writer.data, &writer.pos,
+ native_ordering);
+ else
+ ch = ucs1lib_utf16_decode(&q, e,
+ (Py_UCS1*)writer.data, &writer.pos,
+ native_ordering);
+ } else if (kind == PyUnicode_2BYTE_KIND) {
+ ch = ucs2lib_utf16_decode(&q, e,
+ (Py_UCS2*)writer.data, &writer.pos,
+ native_ordering);
+ } else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ ch = ucs4lib_utf16_decode(&q, e,
+ (Py_UCS4*)writer.data, &writer.pos,
+ native_ordering);
+ }
+ }
+
+ switch (ch)
+ {
+ case 0:
+ /* remaining byte at the end? (size should be even) */
+ if (q == e || consumed)
+ goto End;
+ errmsg = "truncated data";
+ startinpos = ((const char *)q) - starts;
+ endinpos = ((const char *)e) - starts;
+ break;
+ /* The remaining input chars are ignored if the callback
+ chooses to skip the input */
+ case 1:
+ q -= 2;
+ if (consumed)
+ goto End;
+ errmsg = "unexpected end of data";
+ startinpos = ((const char *)q) - starts;
+ endinpos = ((const char *)e) - starts;
+ break;
+ case 2:
+ errmsg = "illegal encoding";
+ startinpos = ((const char *)q) - 2 - starts;
+ endinpos = startinpos + 2;
+ break;
+ case 3:
+ errmsg = "illegal UTF-16 surrogate";
+ startinpos = ((const char *)q) - 4 - starts;
+ endinpos = startinpos + 2;
+ break;
+ default:
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
+ goto onError;
+ continue;
+ }
+
+ if (unicode_decode_call_errorhandler_writer(
+ errors,
+ &errorHandler,
+ encoding, errmsg,
+ &starts,
+ (const char **)&e,
+ &startinpos,
+ &endinpos,
+ &exc,
+ (const char **)&q,
+ &writer))
+ goto onError;
+ }
+
+End:
+ if (consumed)
+ *consumed = (const char *)q-starts;
+
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return NULL;
+}
+
+PyObject *
+_PyUnicode_EncodeUTF16(PyObject *str,
+ const char *errors,
+ int byteorder)
+{
+ enum PyUnicode_Kind kind;
+ const void *data;
+ Py_ssize_t len;
+ PyObject *v;
+ unsigned short *out;
+ Py_ssize_t pairs;
+#if PY_BIG_ENDIAN
+ int native_ordering = byteorder >= 0;
+#else
+ int native_ordering = byteorder <= 0;
+#endif
+ const char *encoding;
+ Py_ssize_t nsize, pos;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ PyObject *rep = NULL;
+
+ if (!PyUnicode_Check(str)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(str) == -1)
+ return NULL;
+ kind = PyUnicode_KIND(str);
+ data = PyUnicode_DATA(str);
+ len = PyUnicode_GET_LENGTH(str);
+
+ pairs = 0;
+ if (kind == PyUnicode_4BYTE_KIND) {
+ const Py_UCS4 *in = (const Py_UCS4 *)data;
+ const Py_UCS4 *end = in + len;
+ while (in < end) {
+ if (*in++ >= 0x10000) {
+ pairs++;
+ }
+ }
+ }
+ if (len > PY_SSIZE_T_MAX / 2 - pairs - (byteorder == 0)) {
+ return PyErr_NoMemory();
+ }
+ nsize = len + pairs + (byteorder == 0);
+ v = PyBytes_FromStringAndSize(NULL, nsize * 2);
+ if (v == NULL) {
+ return NULL;
+ }
+
+ /* output buffer is 2-bytes aligned */
+ assert(_Py_IS_ALIGNED(PyBytes_AS_STRING(v), 2));
+ out = (unsigned short *)PyBytes_AS_STRING(v);
+ if (byteorder == 0) {
+ *out++ = 0xFEFF;
+ }
+ if (len == 0) {
+ goto done;
+ }
+
+ if (kind == PyUnicode_1BYTE_KIND) {
+ ucs1lib_utf16_encode((const Py_UCS1 *)data, len, &out, native_ordering);
+ goto done;
+ }
+
+ if (byteorder < 0) {
+ encoding = "utf-16-le";
+ }
+ else if (byteorder > 0) {
+ encoding = "utf-16-be";
+ }
+ else {
+ encoding = "utf-16";
+ }
+
+ pos = 0;
+ while (pos < len) {
+ Py_ssize_t repsize, moreunits;
+
+ if (kind == PyUnicode_2BYTE_KIND) {
+ pos += ucs2lib_utf16_encode((const Py_UCS2 *)data + pos, len - pos,
+ &out, native_ordering);
+ }
+ else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ pos += ucs4lib_utf16_encode((const Py_UCS4 *)data + pos, len - pos,
+ &out, native_ordering);
+ }
+ if (pos == len)
+ break;
+
+ rep = unicode_encode_call_errorhandler(
+ errors, &errorHandler,
+ encoding, "surrogates not allowed",
+ str, &exc, pos, pos + 1, &pos);
+ if (!rep)
+ goto error;
+
+ if (PyBytes_Check(rep)) {
+ repsize = PyBytes_GET_SIZE(rep);
+ if (repsize & 1) {
+ raise_encode_exception(&exc, encoding,
+ str, pos - 1, pos,
+ "surrogates not allowed");
+ goto error;
+ }
+ moreunits = repsize / 2;
+ }
+ else {
+ assert(PyUnicode_Check(rep));
+ if (PyUnicode_READY(rep) < 0)
+ goto error;
+ moreunits = repsize = PyUnicode_GET_LENGTH(rep);
+ if (!PyUnicode_IS_ASCII(rep)) {
+ raise_encode_exception(&exc, encoding,
+ str, pos - 1, pos,
+ "surrogates not allowed");
+ goto error;
+ }
+ }
+
+ /* two bytes are reserved for each surrogate */
+ if (moreunits > 1) {
+ Py_ssize_t outpos = out - (unsigned short*) PyBytes_AS_STRING(v);
+ if (moreunits >= (PY_SSIZE_T_MAX - PyBytes_GET_SIZE(v)) / 2) {
+ /* integer overflow */
+ PyErr_NoMemory();
+ goto error;
+ }
+ if (_PyBytes_Resize(&v, PyBytes_GET_SIZE(v) + 2 * (moreunits - 1)) < 0)
+ goto error;
+ out = (unsigned short*) PyBytes_AS_STRING(v) + outpos;
+ }
+
+ if (PyBytes_Check(rep)) {
+ memcpy(out, PyBytes_AS_STRING(rep), repsize);
+ out += moreunits;
+ } else /* rep is unicode */ {
+ assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
+ ucs1lib_utf16_encode(PyUnicode_1BYTE_DATA(rep), repsize,
+ &out, native_ordering);
+ }
+
+ Py_CLEAR(rep);
+ }
+
+    /* Cut back to the size actually needed. This is necessary when, for
+       example, a string containing isolated surrogates is encoded and the
+       'ignore' handler is used. */
+ nsize = (unsigned char*) out - (unsigned char*) PyBytes_AS_STRING(v);
+ if (nsize != PyBytes_GET_SIZE(v))
+ _PyBytes_Resize(&v, nsize);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ done:
+ return v;
+ error:
+ Py_XDECREF(rep);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ Py_XDECREF(v);
+ return NULL;
+#undef STORECHAR
+}
+
+PyObject *
+PyUnicode_EncodeUTF16(const Py_UNICODE *s,
+ Py_ssize_t size,
+ const char *errors,
+ int byteorder)
+{
+ PyObject *result;
+ PyObject *tmp = PyUnicode_FromUnicode(s, size);
+ if (tmp == NULL)
+ return NULL;
+ result = _PyUnicode_EncodeUTF16(tmp, errors, byteorder);
+ Py_DECREF(tmp);
+ return result;
+}
+
+PyObject *
+PyUnicode_AsUTF16String(PyObject *unicode)
+{
+ return _PyUnicode_EncodeUTF16(unicode, NULL, 0);
+}
+
+/* --- Unicode Escape Codec ----------------------------------------------- */
+
+static _PyUnicode_Name_CAPI *ucnhash_CAPI = NULL;
+
+PyObject *
+_PyUnicode_DecodeUnicodeEscape(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ const char **first_invalid_escape)
+{
+ const char *starts = s;
+ _PyUnicodeWriter writer;
+ const char *end;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+
+ // so we can remember if we've seen an invalid escape char or not
+ *first_invalid_escape = NULL;
+
+ if (size == 0) {
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+ /* Escaped strings will always be longer than the resulting
+ Unicode string, so we start with size here and then reduce the
+ length after conversion to the true value.
+ (but if the error callback returns a long replacement string
+ we'll have to allocate more space) */
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+ if (_PyUnicodeWriter_Prepare(&writer, size, 127) < 0) {
+ goto onError;
+ }
+
+ end = s + size;
+ while (s < end) {
+ unsigned char c = (unsigned char) *s++;
+ Py_UCS4 ch;
+ int count;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ const char *message;
+
+#define WRITE_ASCII_CHAR(ch) \
+ do { \
+ assert(ch <= 127); \
+ assert(writer.pos < writer.size); \
+ PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
+ } while(0)
+
+#define WRITE_CHAR(ch) \
+ do { \
+ if (ch <= writer.maxchar) { \
+ assert(writer.pos < writer.size); \
+ PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
+ } \
+ else if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0) { \
+ goto onError; \
+ } \
+ } while(0)
+
+ /* Non-escape characters are interpreted as Unicode ordinals */
+ if (c != '\\') {
+ WRITE_CHAR(c);
+ continue;
+ }
+
+ startinpos = s - starts - 1;
+ /* \ - Escapes */
+ if (s >= end) {
+ message = "\\ at end of string";
+ goto error;
+ }
+ c = (unsigned char) *s++;
+
+ assert(writer.pos < writer.size);
+ switch (c) {
+
+ /* \x escapes */
+ case '\n': continue;
+ case '\\': WRITE_ASCII_CHAR('\\'); continue;
+ case '\'': WRITE_ASCII_CHAR('\''); continue;
+ case '\"': WRITE_ASCII_CHAR('\"'); continue;
+ case 'b': WRITE_ASCII_CHAR('\b'); continue;
+ /* FF */
+ case 'f': WRITE_ASCII_CHAR('\014'); continue;
+ case 't': WRITE_ASCII_CHAR('\t'); continue;
+ case 'n': WRITE_ASCII_CHAR('\n'); continue;
+ case 'r': WRITE_ASCII_CHAR('\r'); continue;
+ /* VT */
+ case 'v': WRITE_ASCII_CHAR('\013'); continue;
+ /* BEL, not classic C */
+ case 'a': WRITE_ASCII_CHAR('\007'); continue;
+
+ /* \OOO (octal) escapes */
+ case '0': case '1': case '2': case '3':
+ case '4': case '5': case '6': case '7':
+ ch = c - '0';
+ if (s < end && '0' <= *s && *s <= '7') {
+ ch = (ch<<3) + *s++ - '0';
+ if (s < end && '0' <= *s && *s <= '7') {
+ ch = (ch<<3) + *s++ - '0';
+ }
+ }
+ WRITE_CHAR(ch);
+ continue;
+
+ /* hex escapes */
+ /* \xXX */
+ case 'x':
+ count = 2;
+ message = "truncated \\xXX escape";
+ goto hexescape;
+
+ /* \uXXXX */
+ case 'u':
+ count = 4;
+ message = "truncated \\uXXXX escape";
+ goto hexescape;
+
+ /* \UXXXXXXXX */
+ case 'U':
+ count = 8;
+ message = "truncated \\UXXXXXXXX escape";
+ hexescape:
+ for (ch = 0; count && s < end; ++s, --count) {
+ c = (unsigned char)*s;
+ ch <<= 4;
+ if (c >= '0' && c <= '9') {
+ ch += c - '0';
+ }
+ else if (c >= 'a' && c <= 'f') {
+ ch += c - ('a' - 10);
+ }
+ else if (c >= 'A' && c <= 'F') {
+ ch += c - ('A' - 10);
+ }
+ else {
+ break;
+ }
+ }
+ if (count) {
+ goto error;
+ }
+
+ /* when we get here, ch is a 32-bit unicode character */
+ if (ch > MAX_UNICODE) {
+ message = "illegal Unicode character";
+ goto error;
+ }
+
+ WRITE_CHAR(ch);
+ continue;
+
+ /* \N{name} */
+ case 'N':
+ if (ucnhash_CAPI == NULL) {
+ /* load the unicode data module */
+ ucnhash_CAPI = (_PyUnicode_Name_CAPI *)PyCapsule_Import(
+ PyUnicodeData_CAPSULE_NAME, 1);
+ if (ucnhash_CAPI == NULL) {
+ PyErr_SetString(
+ PyExc_UnicodeError,
+ "\\N escapes not supported (can't load unicodedata module)"
+ );
+ goto onError;
+ }
+ }
+
+ message = "malformed \\N character escape";
+ if (s < end && *s == '{') {
+ const char *start = ++s;
+ size_t namelen;
+ /* look for the closing brace */
+ while (s < end && *s != '}')
+ s++;
+ namelen = s - start;
+ if (namelen && s < end) {
+ /* found a name. look it up in the unicode database */
+ s++;
+ ch = 0xffffffff; /* in case 'getcode' messes up */
+ if (namelen <= INT_MAX &&
+ ucnhash_CAPI->getcode(NULL, start, (int)namelen,
+ &ch, 0)) {
+ assert(ch <= MAX_UNICODE);
+ WRITE_CHAR(ch);
+ continue;
+ }
+ message = "unknown Unicode character name";
+ }
+ }
+ goto error;
+
+ default:
+ if (*first_invalid_escape == NULL) {
+ *first_invalid_escape = s-1; /* Back up one char, since we've
+ already incremented s. */
+ }
+ WRITE_ASCII_CHAR('\\');
+ WRITE_CHAR(c);
+ continue;
+ }
+
+ error:
+ endinpos = s-starts;
+ writer.min_length = end - s + writer.pos;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "unicodeescape", message,
+ &starts, &end, &startinpos, &endinpos, &exc, &s,
+ &writer)) {
+ goto onError;
+ }
+ assert(end - s <= writer.size - writer.pos);
+
+#undef WRITE_ASCII_CHAR
+#undef WRITE_CHAR
+ }
+
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_DecodeUnicodeEscape(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ const char *first_invalid_escape;
+ PyObject *result = _PyUnicode_DecodeUnicodeEscape(s, size, errors,
+ &first_invalid_escape);
+ if (result == NULL)
+ return NULL;
+ if (first_invalid_escape != NULL) {
+ if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
+ "invalid escape sequence '\\%c'",
+ (unsigned char)*first_invalid_escape) < 0) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ }
+ return result;
+}
+
+/* Return a Unicode-Escape string version of the Unicode object. */
+
+PyObject *
+PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
+{
+ Py_ssize_t i, len;
+ PyObject *repr;
+ char *p;
+ enum PyUnicode_Kind kind;
+ void *data;
+ Py_ssize_t expandsize;
+
+ /* Initial allocation is based on the longest-possible character
+ escape.
+
+ For UCS1 strings it's '\xxx', 4 bytes per source character.
+ For UCS2 strings it's '\uxxxx', 6 bytes per source character.
+ For UCS4 strings it's '\U00xxxxxx', 10 bytes per source character.
+ */
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1) {
+ return NULL;
+ }
+
+ len = PyUnicode_GET_LENGTH(unicode);
+ if (len == 0) {
+ return PyBytes_FromStringAndSize(NULL, 0);
+ }
+
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+ /* 4 byte characters can take up 10 bytes, 2 byte characters can take up 6
+ bytes, and 1 byte characters 4. */
+ expandsize = kind * 2 + 2;
+ if (len > PY_SSIZE_T_MAX / expandsize) {
+ return PyErr_NoMemory();
+ }
+ repr = PyBytes_FromStringAndSize(NULL, expandsize * len);
+ if (repr == NULL) {
+ return NULL;
+ }
+
+ p = PyBytes_AS_STRING(repr);
+ for (i = 0; i < len; i++) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+
+ /* U+0000-U+00ff range */
+ if (ch < 0x100) {
+ if (ch >= ' ' && ch < 127) {
+ if (ch != '\\') {
+ /* Copy printable US ASCII as-is */
+ *p++ = (char) ch;
+ }
+ /* Escape backslashes */
+ else {
+ *p++ = '\\';
+ *p++ = '\\';
+ }
+ }
+
+            /* Map special whitespace to '\t', '\n', '\r' */
+ else if (ch == '\t') {
+ *p++ = '\\';
+ *p++ = 't';
+ }
+ else if (ch == '\n') {
+ *p++ = '\\';
+ *p++ = 'n';
+ }
+ else if (ch == '\r') {
+ *p++ = '\\';
+ *p++ = 'r';
+ }
+
+ /* Map non-printable US ASCII and 8-bit characters to '\xHH' */
+ else {
+ *p++ = '\\';
+ *p++ = 'x';
+ *p++ = Py_hexdigits[(ch >> 4) & 0x000F];
+ *p++ = Py_hexdigits[ch & 0x000F];
+ }
+ }
+ /* U+0100-U+ffff range: Map 16-bit characters to '\uHHHH' */
+ else if (ch < 0x10000) {
+ *p++ = '\\';
+ *p++ = 'u';
+ *p++ = Py_hexdigits[(ch >> 12) & 0x000F];
+ *p++ = Py_hexdigits[(ch >> 8) & 0x000F];
+ *p++ = Py_hexdigits[(ch >> 4) & 0x000F];
+ *p++ = Py_hexdigits[ch & 0x000F];
+ }
+ /* U+010000-U+10ffff range: Map 21-bit characters to '\U00HHHHHH' */
+ else {
+
+ /* Make sure that the first two digits are zero */
+ assert(ch <= MAX_UNICODE && MAX_UNICODE <= 0x10ffff);
+ *p++ = '\\';
+ *p++ = 'U';
+ *p++ = '0';
+ *p++ = '0';
+ *p++ = Py_hexdigits[(ch >> 20) & 0x0000000F];
+ *p++ = Py_hexdigits[(ch >> 16) & 0x0000000F];
+ *p++ = Py_hexdigits[(ch >> 12) & 0x0000000F];
+ *p++ = Py_hexdigits[(ch >> 8) & 0x0000000F];
+ *p++ = Py_hexdigits[(ch >> 4) & 0x0000000F];
+ *p++ = Py_hexdigits[ch & 0x0000000F];
+ }
+ }
+
+ assert(p - PyBytes_AS_STRING(repr) > 0);
+ if (_PyBytes_Resize(&repr, p - PyBytes_AS_STRING(repr)) < 0) {
+ return NULL;
+ }
+ return repr;
+}
+
+PyObject *
+PyUnicode_EncodeUnicodeEscape(const Py_UNICODE *s,
+ Py_ssize_t size)
+{
+ PyObject *result;
+ PyObject *tmp = PyUnicode_FromUnicode(s, size);
+ if (tmp == NULL) {
+ return NULL;
+ }
+
+ result = PyUnicode_AsUnicodeEscapeString(tmp);
+ Py_DECREF(tmp);
+ return result;
+}
+
+/* --- Raw Unicode Escape Codec ------------------------------------------- */
+
+PyObject *
+PyUnicode_DecodeRawUnicodeEscape(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ const char *starts = s;
+ _PyUnicodeWriter writer;
+ const char *end;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+
+ if (size == 0) {
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ /* Escaped strings will always be longer than the resulting
+ Unicode string, so we start with size here and then reduce the
+ length after conversion to the true value. (But decoding error
+ handler might have to resize the string) */
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+ if (_PyUnicodeWriter_Prepare(&writer, size, 127) < 0) {
+ goto onError;
+ }
+
+ end = s + size;
+ while (s < end) {
+ unsigned char c = (unsigned char) *s++;
+ Py_UCS4 ch;
+ int count;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ const char *message;
+
+#define WRITE_CHAR(ch) \
+ do { \
+ if (ch <= writer.maxchar) { \
+ assert(writer.pos < writer.size); \
+ PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
+ } \
+ else if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0) { \
+ goto onError; \
+ } \
+ } while(0)
+
+ /* Non-escape characters are interpreted as Unicode ordinals */
+ if (c != '\\' || s >= end) {
+ WRITE_CHAR(c);
+ continue;
+ }
+
+ c = (unsigned char) *s++;
+ if (c == 'u') {
+ count = 4;
+ message = "truncated \\uXXXX escape";
+ }
+ else if (c == 'U') {
+ count = 8;
+ message = "truncated \\UXXXXXXXX escape";
+ }
+ else {
+ assert(writer.pos < writer.size);
+ PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, '\\');
+ WRITE_CHAR(c);
+ continue;
+ }
+ startinpos = s - starts - 2;
+
+ /* \uHHHH with 4 hex digits, \U00HHHHHH with 8 */
+ for (ch = 0; count && s < end; ++s, --count) {
+ c = (unsigned char)*s;
+ ch <<= 4;
+ if (c >= '0' && c <= '9') {
+ ch += c - '0';
+ }
+ else if (c >= 'a' && c <= 'f') {
+ ch += c - ('a' - 10);
+ }
+ else if (c >= 'A' && c <= 'F') {
+ ch += c - ('A' - 10);
+ }
+ else {
+ break;
+ }
+ }
+ if (!count) {
+ if (ch <= MAX_UNICODE) {
+ WRITE_CHAR(ch);
+ continue;
+ }
+ message = "\\Uxxxxxxxx out of range";
+ }
+
+ endinpos = s-starts;
+ writer.min_length = end - s + writer.pos;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "rawunicodeescape", message,
+ &starts, &end, &startinpos, &endinpos, &exc, &s,
+ &writer)) {
+ goto onError;
+ }
+ assert(end - s <= writer.size - writer.pos);
+
+#undef WRITE_CHAR
+ }
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return NULL;
+
+}
+
+
+PyObject *
+PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
+{
+ PyObject *repr;
+ char *p;
+ Py_ssize_t expandsize, pos;
+ int kind;
+ void *data;
+ Py_ssize_t len;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1) {
+ return NULL;
+ }
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+ len = PyUnicode_GET_LENGTH(unicode);
+ if (kind == PyUnicode_1BYTE_KIND) {
+ return PyBytes_FromStringAndSize(data, len);
+ }
+
+ /* 4 byte characters can take up 10 bytes, 2 byte characters can take up 6
+ bytes, and 1 byte characters 4. */
+ expandsize = kind * 2 + 2;
+
+ if (len > PY_SSIZE_T_MAX / expandsize) {
+ return PyErr_NoMemory();
+ }
+ repr = PyBytes_FromStringAndSize(NULL, expandsize * len);
+ if (repr == NULL) {
+ return NULL;
+ }
+ if (len == 0) {
+ return repr;
+ }
+
+ p = PyBytes_AS_STRING(repr);
+ for (pos = 0; pos < len; pos++) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, pos);
+
+ /* U+0000-U+00ff range: Copy 8-bit characters as-is */
+ if (ch < 0x100) {
+ *p++ = (char) ch;
+ }
+        /* U+0100-U+ffff range: Map 16-bit characters to '\uHHHH' */
+ else if (ch < 0x10000) {
+ *p++ = '\\';
+ *p++ = 'u';
+ *p++ = Py_hexdigits[(ch >> 12) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 8) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 4) & 0xf];
+ *p++ = Py_hexdigits[ch & 15];
+ }
+        /* U+010000-U+10ffff range: Map 21-bit characters to '\U00HHHHHH' */
+ else {
+ assert(ch <= MAX_UNICODE && MAX_UNICODE <= 0x10ffff);
+ *p++ = '\\';
+ *p++ = 'U';
+ *p++ = '0';
+ *p++ = '0';
+ *p++ = Py_hexdigits[(ch >> 20) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 16) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 12) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 8) & 0xf];
+ *p++ = Py_hexdigits[(ch >> 4) & 0xf];
+ *p++ = Py_hexdigits[ch & 15];
+ }
+ }
+
+ assert(p > PyBytes_AS_STRING(repr));
+ if (_PyBytes_Resize(&repr, p - PyBytes_AS_STRING(repr)) < 0) {
+ return NULL;
+ }
+ return repr;
+}
+
+PyObject *
+PyUnicode_EncodeRawUnicodeEscape(const Py_UNICODE *s,
+ Py_ssize_t size)
+{
+ PyObject *result;
+ PyObject *tmp = PyUnicode_FromUnicode(s, size);
+ if (tmp == NULL)
+ return NULL;
+ result = PyUnicode_AsRawUnicodeEscapeString(tmp);
+ Py_DECREF(tmp);
+ return result;
+}
+
+/* --- Unicode Internal Codec ------------------------------------------- */
+
+PyObject *
+_PyUnicode_DecodeUnicodeInternal(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ const char *starts = s;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ _PyUnicodeWriter writer;
+ const char *end;
+ const char *reason;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+
+ if (PyErr_WarnEx(PyExc_DeprecationWarning,
+ "unicode_internal codec has been deprecated",
+ 1))
+ return NULL;
+
+ if (size < 0) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+
+ _PyUnicodeWriter_Init(&writer);
+ if (size / Py_UNICODE_SIZE > PY_SSIZE_T_MAX - 1) {
+ PyErr_NoMemory();
+ goto onError;
+ }
+ writer.min_length = (size + (Py_UNICODE_SIZE - 1)) / Py_UNICODE_SIZE;
+
+ end = s + size;
+ while (s < end) {
+ Py_UNICODE uch;
+ Py_UCS4 ch;
+ if (end - s < Py_UNICODE_SIZE) {
+ endinpos = end-starts;
+ reason = "truncated input";
+ goto error;
+ }
+ /* We copy the raw representation one byte at a time because the
+ pointer may be unaligned (see test_codeccallbacks). */
+ ((char *) &uch)[0] = s[0];
+ ((char *) &uch)[1] = s[1];
+#ifdef Py_UNICODE_WIDE
+ ((char *) &uch)[2] = s[2];
+ ((char *) &uch)[3] = s[3];
+#endif
+ ch = uch;
+#ifdef Py_UNICODE_WIDE
+ /* We have to sanity check the raw data, otherwise doom looms for
+ some malformed UCS-4 data. */
+ if (ch > 0x10ffff) {
+ endinpos = s - starts + Py_UNICODE_SIZE;
+ reason = "illegal code point (> 0x10FFFF)";
+ goto error;
+ }
+#endif
+ s += Py_UNICODE_SIZE;
+#ifndef Py_UNICODE_WIDE
+ if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && end - s >= Py_UNICODE_SIZE)
+ {
+ Py_UNICODE uch2;
+ ((char *) &uch2)[0] = s[0];
+ ((char *) &uch2)[1] = s[1];
+ if (Py_UNICODE_IS_LOW_SURROGATE(uch2))
+ {
+ ch = Py_UNICODE_JOIN_SURROGATES(uch, uch2);
+ s += Py_UNICODE_SIZE;
+ }
+ }
+#endif
+
+ if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
+ goto onError;
+ continue;
+
+ error:
+ startinpos = s - starts;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "unicode_internal", reason,
+ &starts, &end, &startinpos, &endinpos, &exc, &s,
+ &writer))
+ goto onError;
+ }
+
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return NULL;
+}
+
+/* --- Latin-1 Codec ------------------------------------------------------ */
+
+PyObject *
+PyUnicode_DecodeLatin1(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ /* Latin-1 is equivalent to the first 256 ordinals in Unicode. */
+ return _PyUnicode_FromUCS1((unsigned char*)s, size);
+}
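+
+/* Illustrative usage (sketch): every byte maps 1:1 to the code point with the
+ same value, so Latin-1 decoding cannot fail and "errors" is effectively
+ ignored here:
+
+ PyObject *u = PyUnicode_DecodeLatin1("caf\xe9", 4, "strict");
+ // u is the four-character string c, a, f, U+00E9.
+*/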
+
+/* create or adjust a UnicodeEncodeError */
+static void
+make_encode_exception(PyObject **exceptionObject,
+ const char *encoding,
+ PyObject *unicode,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ const char *reason)
+{
+ if (*exceptionObject == NULL) {
+ *exceptionObject = PyObject_CallFunction(
+ PyExc_UnicodeEncodeError, "sOnns",
+ encoding, unicode, startpos, endpos, reason);
+ }
+ else {
+ if (PyUnicodeEncodeError_SetStart(*exceptionObject, startpos))
+ goto onError;
+ if (PyUnicodeEncodeError_SetEnd(*exceptionObject, endpos))
+ goto onError;
+ if (PyUnicodeEncodeError_SetReason(*exceptionObject, reason))
+ goto onError;
+ return;
+ onError:
+ Py_CLEAR(*exceptionObject);
+ }
+}
+
+/* raises a UnicodeEncodeError */
+static void
+raise_encode_exception(PyObject **exceptionObject,
+ const char *encoding,
+ PyObject *unicode,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ const char *reason)
+{
+ make_encode_exception(exceptionObject,
+ encoding, unicode, startpos, endpos, reason);
+ if (*exceptionObject != NULL)
+ PyCodec_StrictErrors(*exceptionObject);
+}
+
+/* error handling callback helper:
+ build arguments, call the callback and check the arguments,
+ put the result into newpos and return the replacement string, which
+ has to be freed by the caller */
+static PyObject *
+unicode_encode_call_errorhandler(const char *errors,
+ PyObject **errorHandler,
+ const char *encoding, const char *reason,
+ PyObject *unicode, PyObject **exceptionObject,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ Py_ssize_t *newpos)
+{
+ static const char *argparse = "On;encoding error handler must return (str/bytes, int) tuple";
+ Py_ssize_t len;
+ PyObject *restuple;
+ PyObject *resunicode;
+
+ if (*errorHandler == NULL) {
+ *errorHandler = PyCodec_LookupError(errors);
+ if (*errorHandler == NULL)
+ return NULL;
+ }
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ len = PyUnicode_GET_LENGTH(unicode);
+
+ make_encode_exception(exceptionObject,
+ encoding, unicode, startpos, endpos, reason);
+ if (*exceptionObject == NULL)
+ return NULL;
+
+ restuple = PyObject_CallFunctionObjArgs(
+ *errorHandler, *exceptionObject, NULL);
+ if (restuple == NULL)
+ return NULL;
+ if (!PyTuple_Check(restuple)) {
+ PyErr_SetString(PyExc_TypeError, &argparse[3]);
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ if (!PyArg_ParseTuple(restuple, argparse,
+ &resunicode, newpos)) {
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ if (!PyUnicode_Check(resunicode) && !PyBytes_Check(resunicode)) {
+ PyErr_SetString(PyExc_TypeError, &argparse[3]);
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ if (*newpos<0)
+ *newpos = len + *newpos;
+ if (*newpos<0 || *newpos>len) {
+ PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", *newpos);
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ Py_INCREF(resunicode);
+ Py_DECREF(restuple);
+ return resunicode;
+}
+
+static PyObject *
+unicode_encode_ucs1(PyObject *unicode,
+ const char *errors,
+ const Py_UCS4 limit)
+{
+ /* input state */
+ Py_ssize_t pos=0, size;
+ int kind;
+ void *data;
+ /* pointer into the output */
+ char *str;
+ const char *encoding = (limit == 256) ? "latin-1" : "ascii";
+ const char *reason = (limit == 256) ? "ordinal not in range(256)" : "ordinal not in range(128)";
+ PyObject *error_handler_obj = NULL;
+ PyObject *exc = NULL;
+ _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
+ PyObject *rep = NULL;
+ /* output object */
+ _PyBytesWriter writer;
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ size = PyUnicode_GET_LENGTH(unicode);
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+ /* allocate enough for a simple encoding without
+ replacements, if we need more, we'll resize */
+ if (size == 0)
+ return PyBytes_FromStringAndSize(NULL, 0);
+
+ _PyBytesWriter_Init(&writer);
+ str = _PyBytesWriter_Alloc(&writer, size);
+ if (str == NULL)
+ return NULL;
+
+ while (pos < size) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, pos);
+
+ /* can we encode this? */
+ if (ch < limit) {
+ /* no overflow check, because we know that the space is enough */
+ *str++ = (char)ch;
+ ++pos;
+ }
+ else {
+ Py_ssize_t newpos, i;
+ /* startpos for collecting unencodable chars */
+ Py_ssize_t collstart = pos;
+ Py_ssize_t collend = collstart + 1;
+ /* find all unencodable characters */
+
+ while ((collend < size) && (PyUnicode_READ(kind, data, collend) >= limit))
+ ++collend;
+
+ /* Only overallocate the buffer if it's not the last write */
+ writer.overallocate = (collend < size);
+
+ /* cache callback name lookup (if not done yet, i.e. it's the first error) */
+ if (error_handler == _Py_ERROR_UNKNOWN)
+ error_handler = get_error_handler(errors);
+
+ switch (error_handler) {
+ case _Py_ERROR_STRICT:
+ raise_encode_exception(&exc, encoding, unicode, collstart, collend, reason);
+ goto onError;
+
+ case _Py_ERROR_REPLACE:
+ memset(str, '?', collend - collstart);
+ str += (collend - collstart);
+ /* fall through */
+ case _Py_ERROR_IGNORE:
+ pos = collend;
+ break;
+
+ case _Py_ERROR_BACKSLASHREPLACE:
+ /* subtract preallocated bytes */
+ writer.min_size -= (collend - collstart);
+ str = backslashreplace(&writer, str,
+ unicode, collstart, collend);
+ if (str == NULL)
+ goto onError;
+ pos = collend;
+ break;
+
+ case _Py_ERROR_XMLCHARREFREPLACE:
+ /* subtract preallocated bytes */
+ writer.min_size -= (collend - collstart);
+ str = xmlcharrefreplace(&writer, str,
+ unicode, collstart, collend);
+ if (str == NULL)
+ goto onError;
+ pos = collend;
+ break;
+
+ case _Py_ERROR_SURROGATEESCAPE:
+ for (i = collstart; i < collend; ++i) {
+ ch = PyUnicode_READ(kind, data, i);
+ if (ch < 0xdc80 || 0xdcff < ch) {
+ /* Not a UTF-8b surrogate */
+ break;
+ }
+ *str++ = (char)(ch - 0xdc00);
+ ++pos;
+ }
+ if (i >= collend)
+ break;
+ collstart = pos;
+ assert(collstart != collend);
+ /* fall through */
+
+ default:
+ rep = unicode_encode_call_errorhandler(errors, &error_handler_obj,
+ encoding, reason, unicode, &exc,
+ collstart, collend, &newpos);
+ if (rep == NULL)
+ goto onError;
+
+ /* subtract preallocated bytes */
+ writer.min_size -= 1;
+
+ if (PyBytes_Check(rep)) {
+ /* Directly copy bytes result to output. */
+ str = _PyBytesWriter_WriteBytes(&writer, str,
+ PyBytes_AS_STRING(rep),
+ PyBytes_GET_SIZE(rep));
+ }
+ else {
+ assert(PyUnicode_Check(rep));
+
+ if (PyUnicode_READY(rep) < 0)
+ goto onError;
+
+ if (PyUnicode_IS_ASCII(rep)) {
+ /* Fast path: all characters are smaller than limit */
+ assert(limit >= 128);
+ assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
+ str = _PyBytesWriter_WriteBytes(&writer, str,
+ PyUnicode_DATA(rep),
+ PyUnicode_GET_LENGTH(rep));
+ }
+ else {
+ Py_ssize_t repsize = PyUnicode_GET_LENGTH(rep);
+
+ str = _PyBytesWriter_Prepare(&writer, str, repsize);
+ if (str == NULL)
+ goto onError;
+
+ /* check if there is anything unencodable in the
+ replacement and copy it to the output */
+ for (i = 0; repsize-->0; ++i, ++str) {
+ ch = PyUnicode_READ_CHAR(rep, i);
+ if (ch >= limit) {
+ raise_encode_exception(&exc, encoding, unicode,
+ pos, pos+1, reason);
+ goto onError;
+ }
+ *str = (char)ch;
+ }
+ }
+ }
+ if (str == NULL)
+ goto onError;
+
+ pos = newpos;
+ Py_CLEAR(rep);
+ }
+
+ /* If overallocation was disabled, ensure that it was the last
+ write. Otherwise, we missed an optimization */
+ assert(writer.overallocate || pos == size);
+ }
+ }
+
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ return _PyBytesWriter_Finish(&writer, str);
+
+ onError:
+ Py_XDECREF(rep);
+ _PyBytesWriter_Dealloc(&writer);
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ return NULL;
+}
+
+/* Deprecated */
+PyObject *
+PyUnicode_EncodeLatin1(const Py_UNICODE *p,
+ Py_ssize_t size,
+ const char *errors)
+{
+ PyObject *result;
+ PyObject *unicode = PyUnicode_FromUnicode(p, size);
+ if (unicode == NULL)
+ return NULL;
+ result = unicode_encode_ucs1(unicode, errors, 256);
+ Py_DECREF(unicode);
+ return result;
+}
+
+PyObject *
+_PyUnicode_AsLatin1String(PyObject *unicode, const char *errors)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ /* Fast path: if it is a one-byte string, construct
+ bytes object directly. */
+ if (PyUnicode_KIND(unicode) == PyUnicode_1BYTE_KIND)
+ return PyBytes_FromStringAndSize(PyUnicode_DATA(unicode),
+ PyUnicode_GET_LENGTH(unicode));
+ /* Non-Latin-1 characters present. Defer to above function to
+ raise the exception. */
+ return unicode_encode_ucs1(unicode, errors, 256);
+}
+
+PyObject*
+PyUnicode_AsLatin1String(PyObject *unicode)
+{
+ return _PyUnicode_AsLatin1String(unicode, NULL);
+}
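+
+/* Illustrative behaviour (sketch; "text" stands for any str object holding
+ U+20AC followed by "50"):
+
+ PyObject *b1 = _PyUnicode_AsLatin1String(text, "strict"); // NULL + UnicodeEncodeError
+ PyObject *b2 = _PyUnicode_AsLatin1String(text, "replace"); // b"?50"
+ PyObject *b3 = _PyUnicode_AsLatin1String(text, "backslashreplace"); // b"\u20ac50"
+*/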
+
+/* --- 7-bit ASCII Codec -------------------------------------------------- */
+
+PyObject *
+PyUnicode_DecodeASCII(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ const char *starts = s;
+ _PyUnicodeWriter writer;
+ int kind;
+ void *data;
+ Py_ssize_t startinpos;
+ Py_ssize_t endinpos;
+ Py_ssize_t outpos;
+ const char *e;
+ PyObject *error_handler_obj = NULL;
+ PyObject *exc = NULL;
+ _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
+
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+
+ /* ASCII is equivalent to the first 128 ordinals in Unicode. */
+ if (size == 1 && (unsigned char)s[0] < 128)
+ return get_latin1_char((unsigned char)s[0]);
+
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+ if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) < 0)
+ return NULL;
+
+ e = s + size;
+ data = writer.data;
+ outpos = ascii_decode(s, e, (Py_UCS1 *)data);
+ writer.pos = outpos;
+ if (writer.pos == size)
+ return _PyUnicodeWriter_Finish(&writer);
+
+ s += writer.pos;
+ kind = writer.kind;
+ while (s < e) {
+ unsigned char c = (unsigned char)*s;
+ if (c < 128) {
+ PyUnicode_WRITE(kind, data, writer.pos, c);
+ writer.pos++;
+ ++s;
+ continue;
+ }
+
+ /* byte outside range 0x00..0x7f: call the error handler */
+
+ if (error_handler == _Py_ERROR_UNKNOWN)
+ error_handler = get_error_handler(errors);
+
+ switch (error_handler)
+ {
+ case _Py_ERROR_REPLACE:
+ case _Py_ERROR_SURROGATEESCAPE:
+ /* Fast-path: the error handler only writes one character,
+ but we may switch to UCS2 at the first write */
+ if (_PyUnicodeWriter_PrepareKind(&writer, PyUnicode_2BYTE_KIND) < 0)
+ goto onError;
+ kind = writer.kind;
+ data = writer.data;
+
+ if (error_handler == _Py_ERROR_REPLACE)
+ PyUnicode_WRITE(kind, data, writer.pos, 0xfffd);
+ else
+ PyUnicode_WRITE(kind, data, writer.pos, c + 0xdc00);
+ writer.pos++;
+ ++s;
+ break;
+
+ case _Py_ERROR_IGNORE:
+ ++s;
+ break;
+
+ default:
+ startinpos = s-starts;
+ endinpos = startinpos + 1;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &error_handler_obj,
+ "ascii", "ordinal not in range(128)",
+ &starts, &e, &startinpos, &endinpos, &exc, &s,
+ &writer))
+ goto onError;
+ kind = writer.kind;
+ data = writer.data;
+ }
+ }
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(error_handler_obj);
+ Py_XDECREF(exc);
+ return NULL;
+}
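+
+/* Illustrative behaviour (sketch) for PyUnicode_DecodeASCII("ab\xe9", 3, errors):
+ - "strict" -> NULL with a UnicodeDecodeError (byte 0xE9 is > 0x7F);
+ - "replace" -> "ab" followed by U+FFFD;
+ - "surrogateescape" -> "ab" followed by U+DCE9 (0xE9 + 0xDC00, see above);
+ - "ignore" -> "ab".
+*/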
+
+/* Deprecated */
+PyObject *
+PyUnicode_EncodeASCII(const Py_UNICODE *p,
+ Py_ssize_t size,
+ const char *errors)
+{
+ PyObject *result;
+ PyObject *unicode = PyUnicode_FromUnicode(p, size);
+ if (unicode == NULL)
+ return NULL;
+ result = unicode_encode_ucs1(unicode, errors, 128);
+ Py_DECREF(unicode);
+ return result;
+}
+
+PyObject *
+_PyUnicode_AsASCIIString(PyObject *unicode, const char *errors)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ /* Fast path: if it is an ASCII-only string, construct bytes object
+ directly. Else defer to above function to raise the exception. */
+ if (PyUnicode_IS_ASCII(unicode))
+ return PyBytes_FromStringAndSize(PyUnicode_DATA(unicode),
+ PyUnicode_GET_LENGTH(unicode));
+ return unicode_encode_ucs1(unicode, errors, 128);
+}
+
+PyObject *
+PyUnicode_AsASCIIString(PyObject *unicode)
+{
+ return _PyUnicode_AsASCIIString(unicode, NULL);
+}
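+
+/* Illustrative usage (sketch; "text" stands for any str object):
+
+ PyObject *b = PyUnicode_AsASCIIString(text);
+ // Pure-ASCII strings take the fast path above and are copied directly;
+ // any code point >= 128 raises UnicodeEncodeError under strict errors.
+*/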
+
+#ifdef MS_WINDOWS
+
+/* --- MBCS codecs for Windows -------------------------------------------- */
+
+#if SIZEOF_INT < SIZEOF_SIZE_T
+#define NEED_RETRY
+#endif
+
+#ifndef WC_ERR_INVALID_CHARS
+# define WC_ERR_INVALID_CHARS 0x0080
+#endif
+
+static const char*
+code_page_name(UINT code_page, PyObject **obj)
+{
+ *obj = NULL;
+ if (code_page == CP_ACP)
+ return "mbcs";
+ if (code_page == CP_UTF7)
+ return "CP_UTF7";
+ if (code_page == CP_UTF8)
+ return "CP_UTF8";
+
+ *obj = PyBytes_FromFormat("cp%u", code_page);
+ if (*obj == NULL)
+ return NULL;
+ return PyBytes_AS_STRING(*obj);
+}
+
+static DWORD
+decode_code_page_flags(UINT code_page)
+{
+ if (code_page == CP_UTF7) {
+ /* The CP_UTF7 decoder only supports flags=0 */
+ return 0;
+ }
+ else
+ return MB_ERR_INVALID_CHARS;
+}
+
+/*
+ * Decode a byte string from a Windows code page into a unicode object in
+ * strict mode.
+ *
+ * Returns the number of consumed bytes on success, returns -2 on a decode
+ * error, or raises an OSError and returns -1 on any other error.
+ */
+static int
+decode_code_page_strict(UINT code_page,
+ PyObject **v,
+ const char *in,
+ int insize)
+{
+ const DWORD flags = decode_code_page_flags(code_page);
+ wchar_t *out;
+ DWORD outsize;
+
+ /* First get the size of the result */
+ assert(insize > 0);
+ outsize = MultiByteToWideChar(code_page, flags, in, insize, NULL, 0);
+ if (outsize <= 0)
+ goto error;
+
+ if (*v == NULL) {
+ /* Create unicode object */
+ /* FIXME: don't use _PyUnicode_New(), but allocate a wchar_t* buffer */
+ *v = (PyObject*)_PyUnicode_New(outsize);
+ if (*v == NULL)
+ return -1;
+ out = PyUnicode_AS_UNICODE(*v);
+ }
+ else {
+ /* Extend unicode object */
+ Py_ssize_t n = PyUnicode_GET_SIZE(*v);
+ if (unicode_resize(v, n + outsize) < 0)
+ return -1;
+ out = PyUnicode_AS_UNICODE(*v) + n;
+ }
+
+ /* Do the conversion */
+ outsize = MultiByteToWideChar(code_page, flags, in, insize, out, outsize);
+ if (outsize <= 0)
+ goto error;
+ return insize;
+
+error:
+ if (GetLastError() == ERROR_NO_UNICODE_TRANSLATION)
+ return -2;
+ PyErr_SetFromWindowsErr(0);
+ return -1;
+}
+
+/*
+ * Decode a byte string from a code page into a unicode object with an error
+ * handler.
+ *
+ * Returns the number of consumed bytes on success, or raises an OSError or
+ * UnicodeDecodeError and returns -1 on error.
+ */
+static int
+decode_code_page_errors(UINT code_page,
+ PyObject **v,
+ const char *in, const int size,
+ const char *errors, int final)
+{
+ const char *startin = in;
+ const char *endin = in + size;
+ const DWORD flags = decode_code_page_flags(code_page);
+ /* Ideally, we should get reason from FormatMessage. This is the Windows
+ 2000 English version of the message. */
+ const char *reason = "No mapping for the Unicode character exists "
+ "in the target code page.";
+ /* each step cannot decode more than 1 character, but a character can be
+ represented as a surrogate pair */
+ wchar_t buffer[2], *out;
+ int insize;
+ Py_ssize_t outsize;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ PyObject *encoding_obj = NULL;
+ const char *encoding;
+ DWORD err;
+ int ret = -1;
+
+ assert(size > 0);
+
+ encoding = code_page_name(code_page, &encoding_obj);
+ if (encoding == NULL)
+ return -1;
+
+ if ((errors == NULL || strcmp(errors, "strict") == 0) && final) {
+ /* The last error was ERROR_NO_UNICODE_TRANSLATION, so we raise a
+ UnicodeDecodeError. */
+ make_decode_exception(&exc, encoding, in, size, 0, 0, reason);
+ if (exc != NULL) {
+ PyCodec_StrictErrors(exc);
+ Py_CLEAR(exc);
+ }
+ goto error;
+ }
+
+ if (*v == NULL) {
+ /* Create unicode object */
+ if (size > PY_SSIZE_T_MAX / (Py_ssize_t)Py_ARRAY_LENGTH(buffer)) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ /* FIXME: don't use _PyUnicode_New(), but allocate a wchar_t* buffer */
+ *v = (PyObject*)_PyUnicode_New(size * Py_ARRAY_LENGTH(buffer));
+ if (*v == NULL)
+ goto error;
+ out = PyUnicode_AS_UNICODE(*v);
+ }
+ else {
+ /* Extend unicode object */
+ Py_ssize_t n = PyUnicode_GET_SIZE(*v);
+ if (size > (PY_SSIZE_T_MAX - n) / (Py_ssize_t)Py_ARRAY_LENGTH(buffer)) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ if (unicode_resize(v, n + size * Py_ARRAY_LENGTH(buffer)) < 0)
+ goto error;
+ out = PyUnicode_AS_UNICODE(*v) + n;
+ }
+
+ /* Decode the byte string character per character */
+ while (in < endin)
+ {
+ /* Decode a character */
+ insize = 1;
+ do
+ {
+ outsize = MultiByteToWideChar(code_page, flags,
+ in, insize,
+ buffer, Py_ARRAY_LENGTH(buffer));
+ if (outsize > 0)
+ break;
+ err = GetLastError();
+ if (err != ERROR_NO_UNICODE_TRANSLATION
+ && err != ERROR_INSUFFICIENT_BUFFER)
+ {
+ PyErr_SetFromWindowsErr(0);
+ goto error;
+ }
+ insize++;
+ }
+ /* 4=maximum length of a UTF-8 sequence */
+ while (insize <= 4 && (in + insize) <= endin);
+
+ if (outsize <= 0) {
+ Py_ssize_t startinpos, endinpos, outpos;
+
+ /* last character in partial decode? */
+ if (in + insize >= endin && !final)
+ break;
+
+ startinpos = in - startin;
+ endinpos = startinpos + 1;
+ outpos = out - PyUnicode_AS_UNICODE(*v);
+ if (unicode_decode_call_errorhandler_wchar(
+ errors, &errorHandler,
+ encoding, reason,
+ &startin, &endin, &startinpos, &endinpos, &exc, &in,
+ v, &outpos))
+ {
+ goto error;
+ }
+ out = PyUnicode_AS_UNICODE(*v) + outpos;
+ }
+ else {
+ in += insize;
+ memcpy(out, buffer, outsize * sizeof(wchar_t));
+ out += outsize;
+ }
+ }
+
+ /* write a NUL character at the end */
+ *out = 0;
+
+ /* Extend unicode object */
+ outsize = out - PyUnicode_AS_UNICODE(*v);
+ assert(outsize <= PyUnicode_WSTR_LENGTH(*v));
+ if (unicode_resize(v, outsize) < 0)
+ goto error;
+ /* (in - startin) <= size and size is an int */
+ ret = Py_SAFE_DOWNCAST(in - startin, Py_ssize_t, int);
+
+error:
+ Py_XDECREF(encoding_obj);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return ret;
+}
+
+static PyObject *
+decode_code_page_stateful(int code_page,
+ const char *s, Py_ssize_t size,
+ const char *errors, Py_ssize_t *consumed)
+{
+ PyObject *v = NULL;
+ int chunk_size, final, converted, done;
+
+ if (code_page < 0) {
+ PyErr_SetString(PyExc_ValueError, "invalid code page number");
+ return NULL;
+ }
+ if (size < 0) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ if (consumed)
+ *consumed = 0;
+
+ do
+ {
+#ifdef NEED_RETRY
+ if (size > INT_MAX) {
+ chunk_size = INT_MAX;
+ final = 0;
+ done = 0;
+ }
+ else
+#endif
+ {
+ chunk_size = (int)size;
+ final = (consumed == NULL);
+ done = 1;
+ }
+
+ if (chunk_size == 0 && done) {
+ if (v != NULL)
+ break;
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ converted = decode_code_page_strict(code_page, &v,
+ s, chunk_size);
+ if (converted == -2)
+ converted = decode_code_page_errors(code_page, &v,
+ s, chunk_size,
+ errors, final);
+ assert(converted != 0 || done);
+
+ if (converted < 0) {
+ Py_XDECREF(v);
+ return NULL;
+ }
+
+ if (consumed)
+ *consumed += converted;
+
+ s += converted;
+ size -= converted;
+ } while (!done);
+
+ return unicode_result(v);
+}
+
+PyObject *
+PyUnicode_DecodeCodePageStateful(int code_page,
+ const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ Py_ssize_t *consumed)
+{
+ return decode_code_page_stateful(code_page, s, size, errors, consumed);
+}
+
+PyObject *
+PyUnicode_DecodeMBCSStateful(const char *s,
+ Py_ssize_t size,
+ const char *errors,
+ Py_ssize_t *consumed)
+{
+ return decode_code_page_stateful(CP_ACP, s, size, errors, consumed);
+}
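+
+/* Illustrative usage (sketch; only compiled when MS_WINDOWS is defined, see
+ above; "buf"/"buflen" stand for the caller's byte buffer):
+
+ Py_ssize_t consumed;
+ PyObject *u = PyUnicode_DecodeMBCSStateful(buf, buflen, "strict", &consumed);
+ // With a non-NULL "consumed", a trailing incomplete multi-byte sequence is
+ // not an error: it is left undecoded and excluded from *consumed.
+*/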
+
+PyObject *
+PyUnicode_DecodeMBCS(const char *s,
+ Py_ssize_t size,
+ const char *errors)
+{
+ return PyUnicode_DecodeMBCSStateful(s, size, errors, NULL);
+}
+
+static DWORD
+encode_code_page_flags(UINT code_page, const char *errors)
+{
+ if (code_page == CP_UTF8) {
+ return WC_ERR_INVALID_CHARS;
+ }
+ else if (code_page == CP_UTF7) {
+ /* CP_UTF7 only supports flags=0 */
+ return 0;
+ }
+ else {
+ if (errors != NULL && strcmp(errors, "replace") == 0)
+ return 0;
+ else
+ return WC_NO_BEST_FIT_CHARS;
+ }
+}
+
+/*
+ * Encode a Unicode string to a Windows code page into a byte string in strict
+ * mode.
+ *
+ * Returns 0 on success, returns -2 on an encode error, or raises an OSError
+ * and returns -1 on any other error.
+ */
+static int
+encode_code_page_strict(UINT code_page, PyObject **outbytes,
+ PyObject *unicode, Py_ssize_t offset, int len,
+ const char* errors)
+{
+ BOOL usedDefaultChar = FALSE;
+ BOOL *pusedDefaultChar = &usedDefaultChar;
+ int outsize;
+ wchar_t *p;
+ Py_ssize_t size;
+ const DWORD flags = encode_code_page_flags(code_page, NULL);
+ char *out;
+ /* Create a substring so that we can get the UTF-16 representation
+ of just the slice under consideration. */
+ PyObject *substring;
+
+ assert(len > 0);
+
+ if (code_page != CP_UTF8 && code_page != CP_UTF7)
+ pusedDefaultChar = &usedDefaultChar;
+ else
+ pusedDefaultChar = NULL;
+
+ substring = PyUnicode_Substring(unicode, offset, offset+len);
+ if (substring == NULL)
+ return -1;
+ p = PyUnicode_AsUnicodeAndSize(substring, &size);
+ if (p == NULL) {
+ Py_DECREF(substring);
+ return -1;
+ }
+ assert(size <= INT_MAX);
+
+ /* First get the size of the result */
+ outsize = WideCharToMultiByte(code_page, flags,
+ p, (int)size,
+ NULL, 0,
+ NULL, pusedDefaultChar);
+ if (outsize <= 0)
+ goto error;
+ /* If we used a default char, then we failed! */
+ if (pusedDefaultChar && *pusedDefaultChar) {
+ Py_DECREF(substring);
+ return -2;
+ }
+
+ if (*outbytes == NULL) {
+ /* Create string object */
+ *outbytes = PyBytes_FromStringAndSize(NULL, outsize);
+ if (*outbytes == NULL) {
+ Py_DECREF(substring);
+ return -1;
+ }
+ out = PyBytes_AS_STRING(*outbytes);
+ }
+ else {
+ /* Extend string object */
+ const Py_ssize_t n = PyBytes_Size(*outbytes);
+ if (outsize > PY_SSIZE_T_MAX - n) {
+ PyErr_NoMemory();
+ Py_DECREF(substring);
+ return -1;
+ }
+ if (_PyBytes_Resize(outbytes, n + outsize) < 0) {
+ Py_DECREF(substring);
+ return -1;
+ }
+ out = PyBytes_AS_STRING(*outbytes) + n;
+ }
+
+ /* Do the conversion */
+ outsize = WideCharToMultiByte(code_page, flags,
+ p, (int)size,
+ out, outsize,
+ NULL, pusedDefaultChar);
+ Py_CLEAR(substring);
+ if (outsize <= 0)
+ goto error;
+ if (pusedDefaultChar && *pusedDefaultChar)
+ return -2;
+ return 0;
+
+error:
+ Py_XDECREF(substring);
+ if (GetLastError() == ERROR_NO_UNICODE_TRANSLATION)
+ return -2;
+ PyErr_SetFromWindowsErr(0);
+ return -1;
+}
+
+/*
+ * Encode a Unicode string to a Windows code page into a byte string using an
+ * error handler.
+ *
+ * Returns 0 on success, or raises an OSError or UnicodeEncodeError and
+ * returns -1 on error.
+ */
+static int
+encode_code_page_errors(UINT code_page, PyObject **outbytes,
+ PyObject *unicode, Py_ssize_t unicode_offset,
+ Py_ssize_t insize, const char* errors)
+{
+ const DWORD flags = encode_code_page_flags(code_page, errors);
+ Py_ssize_t pos = unicode_offset;
+ Py_ssize_t endin = unicode_offset + insize;
+ /* Ideally, we should get reason from FormatMessage. This is the Windows
+ 2000 English version of the message. */
+ const char *reason = "invalid character";
+ /* 4=maximum length of a UTF-8 sequence */
+ char buffer[4];
+ BOOL usedDefaultChar = FALSE, *pusedDefaultChar;
+ Py_ssize_t outsize;
+ char *out;
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ PyObject *encoding_obj = NULL;
+ const char *encoding;
+ Py_ssize_t newpos, newoutsize;
+ PyObject *rep;
+ int ret = -1;
+
+ assert(insize > 0);
+
+ encoding = code_page_name(code_page, &encoding_obj);
+ if (encoding == NULL)
+ return -1;
+
+ if (errors == NULL || strcmp(errors, "strict") == 0) {
+ /* The last error was ERROR_NO_UNICODE_TRANSLATION,
+ so we raise a UnicodeEncodeError. */
+ make_encode_exception(&exc, encoding, unicode, 0, 0, reason);
+ if (exc != NULL) {
+ PyCodec_StrictErrors(exc);
+ Py_DECREF(exc);
+ }
+ Py_XDECREF(encoding_obj);
+ return -1;
+ }
+
+ if (code_page != CP_UTF8 && code_page != CP_UTF7)
+ pusedDefaultChar = &usedDefaultChar;
+ else
+ pusedDefaultChar = NULL;
+
+ if (Py_ARRAY_LENGTH(buffer) > PY_SSIZE_T_MAX / insize) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ outsize = insize * Py_ARRAY_LENGTH(buffer);
+
+ if (*outbytes == NULL) {
+ /* Create string object */
+ *outbytes = PyBytes_FromStringAndSize(NULL, outsize);
+ if (*outbytes == NULL)
+ goto error;
+ out = PyBytes_AS_STRING(*outbytes);
+ }
+ else {
+ /* Extend string object */
+ Py_ssize_t n = PyBytes_Size(*outbytes);
+ if (n > PY_SSIZE_T_MAX - outsize) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ if (_PyBytes_Resize(outbytes, n + outsize) < 0)
+ goto error;
+ out = PyBytes_AS_STRING(*outbytes) + n;
+ }
+
+ /* Encode the string character per character */
+ while (pos < endin)
+ {
+ Py_UCS4 ch = PyUnicode_READ_CHAR(unicode, pos);
+ wchar_t chars[2];
+ int charsize;
+ if (ch < 0x10000) {
+ chars[0] = (wchar_t)ch;
+ charsize = 1;
+ }
+ else {
+ chars[0] = Py_UNICODE_HIGH_SURROGATE(ch);
+ chars[1] = Py_UNICODE_LOW_SURROGATE(ch);
+ charsize = 2;
+ }
+
+ outsize = WideCharToMultiByte(code_page, flags,
+ chars, charsize,
+ buffer, Py_ARRAY_LENGTH(buffer),
+ NULL, pusedDefaultChar);
+ if (outsize > 0) {
+ if (pusedDefaultChar == NULL || !(*pusedDefaultChar))
+ {
+ pos++;
+ memcpy(out, buffer, outsize);
+ out += outsize;
+ continue;
+ }
+ }
+ else if (GetLastError() != ERROR_NO_UNICODE_TRANSLATION) {
+ PyErr_SetFromWindowsErr(0);
+ goto error;
+ }
+
+ rep = unicode_encode_call_errorhandler(
+ errors, &errorHandler, encoding, reason,
+ unicode, &exc,
+ pos, pos + 1, &newpos);
+ if (rep == NULL)
+ goto error;
+ pos = newpos;
+
+ if (PyBytes_Check(rep)) {
+ outsize = PyBytes_GET_SIZE(rep);
+ if (outsize != 1) {
+ Py_ssize_t offset = out - PyBytes_AS_STRING(*outbytes);
+ newoutsize = PyBytes_GET_SIZE(*outbytes) + (outsize - 1);
+ if (_PyBytes_Resize(outbytes, newoutsize) < 0) {
+ Py_DECREF(rep);
+ goto error;
+ }
+ out = PyBytes_AS_STRING(*outbytes) + offset;
+ }
+ memcpy(out, PyBytes_AS_STRING(rep), outsize);
+ out += outsize;
+ }
+ else {
+ Py_ssize_t i;
+ enum PyUnicode_Kind kind;
+ void *data;
+
+ if (PyUnicode_READY(rep) == -1) {
+ Py_DECREF(rep);
+ goto error;
+ }
+
+ outsize = PyUnicode_GET_LENGTH(rep);
+ if (outsize != 1) {
+ Py_ssize_t offset = out - PyBytes_AS_STRING(*outbytes);
+ newoutsize = PyBytes_GET_SIZE(*outbytes) + (outsize - 1);
+ if (_PyBytes_Resize(outbytes, newoutsize) < 0) {
+ Py_DECREF(rep);
+ goto error;
+ }
+ out = PyBytes_AS_STRING(*outbytes) + offset;
+ }
+ kind = PyUnicode_KIND(rep);
+ data = PyUnicode_DATA(rep);
+ for (i=0; i < outsize; i++) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (ch > 127) {
+ raise_encode_exception(&exc,
+ encoding, unicode,
+ pos, pos + 1,
+ "unable to encode error handler result to ASCII");
+ Py_DECREF(rep);
+ goto error;
+ }
+ *out = (unsigned char)ch;
+ out++;
+ }
+ }
+ Py_DECREF(rep);
+ }
+ /* write a NUL byte */
+ *out = 0;
+ outsize = out - PyBytes_AS_STRING(*outbytes);
+ assert(outsize <= PyBytes_GET_SIZE(*outbytes));
+ if (_PyBytes_Resize(outbytes, outsize) < 0)
+ goto error;
+ ret = 0;
+
+error:
+ Py_XDECREF(encoding_obj);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return ret;
+}
+
+static PyObject *
+encode_code_page(int code_page,
+ PyObject *unicode,
+ const char *errors)
+{
+ Py_ssize_t len;
+ PyObject *outbytes = NULL;
+ Py_ssize_t offset;
+ int chunk_len, ret, done;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ len = PyUnicode_GET_LENGTH(unicode);
+
+ if (code_page < 0) {
+ PyErr_SetString(PyExc_ValueError, "invalid code page number");
+ return NULL;
+ }
+
+ if (len == 0)
+ return PyBytes_FromStringAndSize(NULL, 0);
+
+ offset = 0;
+ do
+ {
+#ifdef NEED_RETRY
+ /* UTF-16 encoding may double the size, so use only INT_MAX/2
+ chunks. */
+ if (len > INT_MAX/2) {
+ chunk_len = INT_MAX/2;
+ done = 0;
+ }
+ else
+#endif
+ {
+ chunk_len = (int)len;
+ done = 1;
+ }
+
+ ret = encode_code_page_strict(code_page, &outbytes,
+ unicode, offset, chunk_len,
+ errors);
+ if (ret == -2)
+ ret = encode_code_page_errors(code_page, &outbytes,
+ unicode, offset,
+ chunk_len, errors);
+ if (ret < 0) {
+ Py_XDECREF(outbytes);
+ return NULL;
+ }
+
+ offset += chunk_len;
+ len -= chunk_len;
+ } while (!done);
+
+ return outbytes;
+}
+
+PyObject *
+PyUnicode_EncodeMBCS(const Py_UNICODE *p,
+ Py_ssize_t size,
+ const char *errors)
+{
+ PyObject *unicode, *res;
+ unicode = PyUnicode_FromUnicode(p, size);
+ if (unicode == NULL)
+ return NULL;
+ res = encode_code_page(CP_ACP, unicode, errors);
+ Py_DECREF(unicode);
+ return res;
+}
+
+PyObject *
+PyUnicode_EncodeCodePage(int code_page,
+ PyObject *unicode,
+ const char *errors)
+{
+ return encode_code_page(code_page, unicode, errors);
+}
+
+PyObject *
+PyUnicode_AsMBCSString(PyObject *unicode)
+{
+ return PyUnicode_EncodeCodePage(CP_ACP, unicode, NULL);
+}
+
+#undef NEED_RETRY
+
+#endif /* MS_WINDOWS */
+
+/* --- Character Mapping Codec -------------------------------------------- */
+
+static int
+charmap_decode_string(const char *s,
+ Py_ssize_t size,
+ PyObject *mapping,
+ const char *errors,
+ _PyUnicodeWriter *writer)
+{
+ const char *starts = s;
+ const char *e;
+ Py_ssize_t startinpos, endinpos;
+ PyObject *errorHandler = NULL, *exc = NULL;
+ Py_ssize_t maplen;
+ enum PyUnicode_Kind mapkind;
+ void *mapdata;
+ Py_UCS4 x;
+ unsigned char ch;
+
+ if (PyUnicode_READY(mapping) == -1)
+ return -1;
+
+ maplen = PyUnicode_GET_LENGTH(mapping);
+ mapdata = PyUnicode_DATA(mapping);
+ mapkind = PyUnicode_KIND(mapping);
+
+ e = s + size;
+
+ if (mapkind == PyUnicode_1BYTE_KIND && maplen >= 256) {
+ /* fast-path for cp037, cp500 and iso8859_1 encodings. iso8859_1
+ * is disabled in encoding aliases, latin1 is preferred because
+ * its implementation is faster. */
+ Py_UCS1 *mapdata_ucs1 = (Py_UCS1 *)mapdata;
+ Py_UCS1 *outdata = (Py_UCS1 *)writer->data;
+ Py_UCS4 maxchar = writer->maxchar;
+
+ assert (writer->kind == PyUnicode_1BYTE_KIND);
+ while (s < e) {
+ ch = *s;
+ x = mapdata_ucs1[ch];
+ if (x > maxchar) {
+ if (_PyUnicodeWriter_Prepare(writer, 1, 0xff) == -1)
+ goto onError;
+ maxchar = writer->maxchar;
+ outdata = (Py_UCS1 *)writer->data;
+ }
+ outdata[writer->pos] = x;
+ writer->pos++;
+ ++s;
+ }
+ return 0;
+ }
+
+ while (s < e) {
+ if (mapkind == PyUnicode_2BYTE_KIND && maplen >= 256) {
+ enum PyUnicode_Kind outkind = writer->kind;
+ Py_UCS2 *mapdata_ucs2 = (Py_UCS2 *)mapdata;
+ if (outkind == PyUnicode_1BYTE_KIND) {
+ Py_UCS1 *outdata = (Py_UCS1 *)writer->data;
+ Py_UCS4 maxchar = writer->maxchar;
+ while (s < e) {
+ ch = *s;
+ x = mapdata_ucs2[ch];
+ if (x > maxchar)
+ goto Error;
+ outdata[writer->pos] = x;
+ writer->pos++;
+ ++s;
+ }
+ break;
+ }
+ else if (outkind == PyUnicode_2BYTE_KIND) {
+ Py_UCS2 *outdata = (Py_UCS2 *)writer->data;
+ while (s < e) {
+ ch = *s;
+ x = mapdata_ucs2[ch];
+ if (x == 0xFFFE)
+ goto Error;
+ outdata[writer->pos] = x;
+ writer->pos++;
+ ++s;
+ }
+ break;
+ }
+ }
+ ch = *s;
+
+ if (ch < maplen)
+ x = PyUnicode_READ(mapkind, mapdata, ch);
+ else
+ x = 0xfffe; /* invalid value */
+Error:
+ if (x == 0xfffe)
+ {
+ /* undefined mapping */
+ startinpos = s-starts;
+ endinpos = startinpos+1;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "charmap", "character maps to <undefined>",
+ &starts, &e, &startinpos, &endinpos, &exc, &s,
+ writer)) {
+ goto onError;
+ }
+ continue;
+ }
+
+ if (_PyUnicodeWriter_WriteCharInline(writer, x) < 0)
+ goto onError;
+ ++s;
+ }
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return 0;
+
+onError:
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return -1;
+}
+
+static int
+charmap_decode_mapping(const char *s,
+ Py_ssize_t size,
+ PyObject *mapping,
+ const char *errors,
+ _PyUnicodeWriter *writer)
+{
+ const char *starts = s;
+ const char *e;
+ Py_ssize_t startinpos, endinpos;
+ PyObject *errorHandler = NULL, *exc = NULL;
+ unsigned char ch;
+ PyObject *key, *item = NULL;
+
+ e = s + size;
+
+ while (s < e) {
+ ch = *s;
+
+ /* Get mapping (char ordinal -> integer, Unicode char or None) */
+ key = PyLong_FromLong((long)ch);
+ if (key == NULL)
+ goto onError;
+
+ item = PyObject_GetItem(mapping, key);
+ Py_DECREF(key);
+ if (item == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_LookupError)) {
+ /* No mapping found means: mapping is undefined. */
+ PyErr_Clear();
+ goto Undefined;
+ } else
+ goto onError;
+ }
+
+ /* Apply mapping */
+ if (item == Py_None)
+ goto Undefined;
+ if (PyLong_Check(item)) {
+ long value = PyLong_AS_LONG(item);
+ if (value == 0xFFFE)
+ goto Undefined;
+ if (value < 0 || value > MAX_UNICODE) {
+ PyErr_Format(PyExc_TypeError,
+ "character mapping must be in range(0x%lx)",
+ (unsigned long)MAX_UNICODE + 1);
+ goto onError;
+ }
+
+ if (_PyUnicodeWriter_WriteCharInline(writer, value) < 0)
+ goto onError;
+ }
+ else if (PyUnicode_Check(item)) {
+ if (PyUnicode_READY(item) == -1)
+ goto onError;
+ if (PyUnicode_GET_LENGTH(item) == 1) {
+ Py_UCS4 value = PyUnicode_READ_CHAR(item, 0);
+ if (value == 0xFFFE)
+ goto Undefined;
+ if (_PyUnicodeWriter_WriteCharInline(writer, value) < 0)
+ goto onError;
+ }
+ else {
+ writer->overallocate = 1;
+ if (_PyUnicodeWriter_WriteStr(writer, item) == -1)
+ goto onError;
+ }
+ }
+ else {
+ /* wrong return value */
+ PyErr_SetString(PyExc_TypeError,
+ "character mapping must return integer, None or str");
+ goto onError;
+ }
+ Py_CLEAR(item);
+ ++s;
+ continue;
+
+Undefined:
+ /* undefined mapping */
+ Py_CLEAR(item);
+ startinpos = s-starts;
+ endinpos = startinpos+1;
+ if (unicode_decode_call_errorhandler_writer(
+ errors, &errorHandler,
+ "charmap", "character maps to <undefined>",
+ &starts, &e, &startinpos, &endinpos, &exc, &s,
+ writer)) {
+ goto onError;
+ }
+ }
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return 0;
+
+onError:
+ Py_XDECREF(item);
+ Py_XDECREF(errorHandler);
+ Py_XDECREF(exc);
+ return -1;
+}
+
+PyObject *
+PyUnicode_DecodeCharmap(const char *s,
+ Py_ssize_t size,
+ PyObject *mapping,
+ const char *errors)
+{
+ _PyUnicodeWriter writer;
+
+ /* Default to Latin-1 */
+ if (mapping == NULL)
+ return PyUnicode_DecodeLatin1(s, size, errors);
+
+ if (size == 0)
+ _Py_RETURN_UNICODE_EMPTY();
+ _PyUnicodeWriter_Init(&writer);
+ writer.min_length = size;
+ if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
+ goto onError;
+
+ if (PyUnicode_CheckExact(mapping)) {
+ if (charmap_decode_string(s, size, mapping, errors, &writer) < 0)
+ goto onError;
+ }
+ else {
+ if (charmap_decode_mapping(s, size, mapping, errors, &writer) < 0)
+ goto onError;
+ }
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+}
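+
+/* Illustrative usage (sketch; "table" stands for a 256-character str used as
+ the decoding table, where table[byte] is the decoded character and U+FFFE
+ marks an undefined byte):
+
+ PyObject *u = PyUnicode_DecodeCharmap(buf, buflen, table, "strict");
+ // A byte whose table entry is U+FFFE (or None for dict mappings) triggers
+ // the "charmap" error handler with reason "character maps to <undefined>".
+*/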
+
+/* Charmap encoding: the lookup table */
+
+struct encoding_map {
+ PyObject_HEAD
+ unsigned char level1[32];
+ int count2, count3;
+ unsigned char level23[1];
+};
+
+static PyObject*
+encoding_map_size(PyObject *obj, PyObject* args)
+{
+ struct encoding_map *map = (struct encoding_map*)obj;
+ return PyLong_FromLong(sizeof(*map) - 1 + 16*map->count2 +
+ 128*map->count3);
+}
+
+static PyMethodDef encoding_map_methods[] = {
+ {"size", encoding_map_size, METH_NOARGS,
+ PyDoc_STR("Return the size (in bytes) of this object") },
+ { 0 }
+};
+
+static void
+encoding_map_dealloc(PyObject* o)
+{
+ PyObject_FREE(o);
+}
+
+static PyTypeObject EncodingMapType = {
+ PyVarObject_HEAD_INIT(NULL, 0)
+ "EncodingMap", /*tp_name*/
+ sizeof(struct encoding_map), /*tp_basicsize*/
+ 0, /*tp_itemsize*/
+ /* methods */
+ encoding_map_dealloc, /*tp_dealloc*/
+ 0, /*tp_print*/
+ 0, /*tp_getattr*/
+ 0, /*tp_setattr*/
+ 0, /*tp_reserved*/
+ 0, /*tp_repr*/
+ 0, /*tp_as_number*/
+ 0, /*tp_as_sequence*/
+ 0, /*tp_as_mapping*/
+ 0, /*tp_hash*/
+ 0, /*tp_call*/
+ 0, /*tp_str*/
+ 0, /*tp_getattro*/
+ 0, /*tp_setattro*/
+ 0, /*tp_as_buffer*/
+ Py_TPFLAGS_DEFAULT, /*tp_flags*/
+ 0, /*tp_doc*/
+ 0, /*tp_traverse*/
+ 0, /*tp_clear*/
+ 0, /*tp_richcompare*/
+ 0, /*tp_weaklistoffset*/
+ 0, /*tp_iter*/
+ 0, /*tp_iternext*/
+ encoding_map_methods, /*tp_methods*/
+ 0, /*tp_members*/
+ 0, /*tp_getset*/
+ 0, /*tp_base*/
+ 0, /*tp_dict*/
+ 0, /*tp_descr_get*/
+ 0, /*tp_descr_set*/
+ 0, /*tp_dictoffset*/
+ 0, /*tp_init*/
+ 0, /*tp_alloc*/
+ 0, /*tp_new*/
+ 0, /*tp_free*/
+ 0, /*tp_is_gc*/
+};
+
+PyObject*
+PyUnicode_BuildEncodingMap(PyObject* string)
+{
+ PyObject *result;
+ struct encoding_map *mresult;
+ int i;
+ int need_dict = 0;
+ unsigned char level1[32];
+ unsigned char level2[512];
+ unsigned char *mlevel1, *mlevel2, *mlevel3;
+ int count2 = 0, count3 = 0;
+ int kind;
+ void *data;
+ Py_ssize_t length;
+ Py_UCS4 ch;
+
+ if (!PyUnicode_Check(string) || !PyUnicode_GET_LENGTH(string)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ kind = PyUnicode_KIND(string);
+ data = PyUnicode_DATA(string);
+ length = PyUnicode_GET_LENGTH(string);
+ length = Py_MIN(length, 256);
+ memset(level1, 0xFF, sizeof level1);
+ memset(level2, 0xFF, sizeof level2);
+
+ /* If there isn't a one-to-one mapping of NULL to \0,
+ or if there are non-BMP characters, we need to use
+ a mapping dictionary. */
+ if (PyUnicode_READ(kind, data, 0) != 0)
+ need_dict = 1;
+ for (i = 1; i < length; i++) {
+ int l1, l2;
+ ch = PyUnicode_READ(kind, data, i);
+ if (ch == 0 || ch > 0xFFFF) {
+ need_dict = 1;
+ break;
+ }
+ if (ch == 0xFFFE)
+ /* unmapped character */
+ continue;
+ l1 = ch >> 11;
+ l2 = ch >> 7;
+ if (level1[l1] == 0xFF)
+ level1[l1] = count2++;
+ if (level2[l2] == 0xFF)
+ level2[l2] = count3++;
+ }
+
+ if (count2 >= 0xFF || count3 >= 0xFF)
+ need_dict = 1;
+
+ if (need_dict) {
+ PyObject *result = PyDict_New();
+ PyObject *key, *value;
+ if (!result)
+ return NULL;
+ for (i = 0; i < length; i++) {
+ key = PyLong_FromLong(PyUnicode_READ(kind, data, i));
+ value = PyLong_FromLong(i);
+ if (!key || !value)
+ goto failed1;
+ if (PyDict_SetItem(result, key, value) == -1)
+ goto failed1;
+ Py_DECREF(key);
+ Py_DECREF(value);
+ }
+ return result;
+ failed1:
+ Py_XDECREF(key);
+ Py_XDECREF(value);
+ Py_DECREF(result);
+ return NULL;
+ }
+
+ /* Create a three-level trie */
+ result = PyObject_MALLOC(sizeof(struct encoding_map) +
+ 16*count2 + 128*count3 - 1);
+ if (!result)
+ return PyErr_NoMemory();
+ PyObject_Init(result, &EncodingMapType);
+ mresult = (struct encoding_map*)result;
+ mresult->count2 = count2;
+ mresult->count3 = count3;
+ mlevel1 = mresult->level1;
+ mlevel2 = mresult->level23;
+ mlevel3 = mresult->level23 + 16*count2;
+ memcpy(mlevel1, level1, 32);
+ memset(mlevel2, 0xFF, 16*count2);
+ memset(mlevel3, 0, 128*count3);
+ count3 = 0;
+ for (i = 1; i < length; i++) {
+ int o1, o2, o3, i2, i3;
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (ch == 0xFFFE)
+ /* unmapped character */
+ continue;
+ o1 = ch>>11;
+ o2 = (ch>>7) & 0xF;
+ i2 = 16*mlevel1[o1] + o2;
+ if (mlevel2[i2] == 0xFF)
+ mlevel2[i2] = count3++;
+ o3 = ch & 0x7F;
+ i3 = 128*mlevel2[i2] + o3;
+ mlevel3[i3] = i;
+ }
+ return result;
+}
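+
+/* Worked example (illustrative): for a BMP code point c the trie is indexed
+ with l1 = c >> 11 (32 level-1 slots), l2 = (c >> 7) & 0xF (16 entries per
+ level-2 block) and l3 = c & 0x7F (128 entries per level-3 block).
+ For c = 0xE9 (U+00E9): l1 = 0, l2 = 1, l3 = 0x69, so the byte value is read
+ from level23[16*count2 + 128*level23[16*level1[0] + 1] + 0x69] -- exactly
+ the walk performed by encoding_map_lookup() below.
+*/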
+
+static int
+encoding_map_lookup(Py_UCS4 c, PyObject *mapping)
+{
+ struct encoding_map *map = (struct encoding_map*)mapping;
+ int l1 = c>>11;
+ int l2 = (c>>7) & 0xF;
+ int l3 = c & 0x7F;
+ int i;
+
+ if (c > 0xFFFF)
+ return -1;
+ if (c == 0)
+ return 0;
+ /* level 1*/
+ i = map->level1[l1];
+ if (i == 0xFF) {
+ return -1;
+ }
+ /* level 2*/
+ i = map->level23[16*i+l2];
+ if (i == 0xFF) {
+ return -1;
+ }
+ /* level 3 */
+ i = map->level23[16*map->count2 + 128*i + l3];
+ if (i == 0) {
+ return -1;
+ }
+ return i;
+}
+
+/* Look up the character c in the mapping. If the character
+ can't be found, Py_None is returned (or NULL, if another
+ error occurred). */
+static PyObject *
+charmapencode_lookup(Py_UCS4 c, PyObject *mapping)
+{
+ PyObject *w = PyLong_FromLong((long)c);
+ PyObject *x;
+
+ if (w == NULL)
+ return NULL;
+ x = PyObject_GetItem(mapping, w);
+ Py_DECREF(w);
+ if (x == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_LookupError)) {
+ /* No mapping found means: mapping is undefined. */
+ PyErr_Clear();
+ x = Py_None;
+ Py_INCREF(x);
+ return x;
+ } else
+ return NULL;
+ }
+ else if (x == Py_None)
+ return x;
+ else if (PyLong_Check(x)) {
+ long value = PyLong_AS_LONG(x);
+ if (value < 0 || value > 255) {
+ PyErr_SetString(PyExc_TypeError,
+ "character mapping must be in range(256)");
+ Py_DECREF(x);
+ return NULL;
+ }
+ return x;
+ }
+ else if (PyBytes_Check(x))
+ return x;
+ else {
+ /* wrong return value */
+ PyErr_Format(PyExc_TypeError,
+ "character mapping must return integer, bytes or None, not %.400s",
+ x->ob_type->tp_name);
+ Py_DECREF(x);
+ return NULL;
+ }
+}
+
+static int
+charmapencode_resize(PyObject **outobj, Py_ssize_t *outpos, Py_ssize_t requiredsize)
+{
+ Py_ssize_t outsize = PyBytes_GET_SIZE(*outobj);
+ /* exponentially overallocate to minimize reallocations */
+ if (requiredsize < 2*outsize)
+ requiredsize = 2*outsize;
+ if (_PyBytes_Resize(outobj, requiredsize))
+ return -1;
+ return 0;
+}
+
+typedef enum charmapencode_result {
+ enc_SUCCESS, enc_FAILED, enc_EXCEPTION
+} charmapencode_result;
+/* Look up the character, write the mapped byte(s) into the output bytes
+ object and adjust the output position. Resize the output bytes object if
+ not enough space is available. Return enc_SUCCESS if something was
+ written, enc_FAILED if the mapping was undefined (in which case nothing
+ was written), or enc_EXCEPTION if a lookup or reallocation error
+ occurred. */
+static charmapencode_result
+charmapencode_output(Py_UCS4 c, PyObject *mapping,
+ PyObject **outobj, Py_ssize_t *outpos)
+{
+ PyObject *rep;
+ char *outstart;
+ Py_ssize_t outsize = PyBytes_GET_SIZE(*outobj);
+
+ if (Py_TYPE(mapping) == &EncodingMapType) {
+ int res = encoding_map_lookup(c, mapping);
+ Py_ssize_t requiredsize = *outpos+1;
+ if (res == -1)
+ return enc_FAILED;
+ if (outsize<requiredsize)
+ if (charmapencode_resize(outobj, outpos, requiredsize))
+ return enc_EXCEPTION;
+ outstart = PyBytes_AS_STRING(*outobj);
+ outstart[(*outpos)++] = (char)res;
+ return enc_SUCCESS;
+ }
+
+ rep = charmapencode_lookup(c, mapping);
+ if (rep==NULL)
+ return enc_EXCEPTION;
+ else if (rep==Py_None) {
+ Py_DECREF(rep);
+ return enc_FAILED;
+ } else {
+ if (PyLong_Check(rep)) {
+ Py_ssize_t requiredsize = *outpos+1;
+ if (outsize<requiredsize)
+ if (charmapencode_resize(outobj, outpos, requiredsize)) {
+ Py_DECREF(rep);
+ return enc_EXCEPTION;
+ }
+ outstart = PyBytes_AS_STRING(*outobj);
+ outstart[(*outpos)++] = (char)PyLong_AS_LONG(rep);
+ }
+ else {
+ const char *repchars = PyBytes_AS_STRING(rep);
+ Py_ssize_t repsize = PyBytes_GET_SIZE(rep);
+ Py_ssize_t requiredsize = *outpos+repsize;
+ if (outsize<requiredsize)
+ if (charmapencode_resize(outobj, outpos, requiredsize)) {
+ Py_DECREF(rep);
+ return enc_EXCEPTION;
+ }
+ outstart = PyBytes_AS_STRING(*outobj);
+ memcpy(outstart + *outpos, repchars, repsize);
+ *outpos += repsize;
+ }
+ }
+ Py_DECREF(rep);
+ return enc_SUCCESS;
+}
+
+/* handle an error in PyUnicode_EncodeCharmap
+ Return 0 on success, -1 on error */
+static int
+charmap_encoding_error(
+ PyObject *unicode, Py_ssize_t *inpos, PyObject *mapping,
+ PyObject **exceptionObject,
+ _Py_error_handler *error_handler, PyObject **error_handler_obj, const char *errors,
+ PyObject **res, Py_ssize_t *respos)
+{
+ PyObject *repunicode = NULL; /* initialize to prevent gcc warning */
+ Py_ssize_t size, repsize;
+ Py_ssize_t newpos;
+ enum PyUnicode_Kind kind;
+ void *data;
+ Py_ssize_t index;
+ /* startpos for collecting unencodable chars */
+ Py_ssize_t collstartpos = *inpos;
+ Py_ssize_t collendpos = *inpos+1;
+ Py_ssize_t collpos;
+ char *encoding = "charmap";
+ char *reason = "character maps to <undefined>";
+ charmapencode_result x;
+ Py_UCS4 ch;
+ int val;
+
+ if (PyUnicode_READY(unicode) == -1)
+ return -1;
+ size = PyUnicode_GET_LENGTH(unicode);
+ /* find all unencodable characters */
+ while (collendpos < size) {
+ PyObject *rep;
+ if (Py_TYPE(mapping) == &EncodingMapType) {
+ ch = PyUnicode_READ_CHAR(unicode, collendpos);
+ val = encoding_map_lookup(ch, mapping);
+ if (val != -1)
+ break;
+ ++collendpos;
+ continue;
+ }
+
+ ch = PyUnicode_READ_CHAR(unicode, collendpos);
+ rep = charmapencode_lookup(ch, mapping);
+ if (rep==NULL)
+ return -1;
+ else if (rep!=Py_None) {
+ Py_DECREF(rep);
+ break;
+ }
+ Py_DECREF(rep);
+ ++collendpos;
+ }
+ /* cache callback name lookup
+ * (if not done yet, i.e. it's the first error) */
+ if (*error_handler == _Py_ERROR_UNKNOWN)
+ *error_handler = get_error_handler(errors);
+
+ switch (*error_handler) {
+ case _Py_ERROR_STRICT:
+ raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
+ return -1;
+
+ case _Py_ERROR_REPLACE:
+ for (collpos = collstartpos; collpos<collendpos; ++collpos) {
+ x = charmapencode_output('?', mapping, res, respos);
+ if (x==enc_EXCEPTION) {
+ return -1;
+ }
+ else if (x==enc_FAILED) {
+ raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
+ return -1;
+ }
+ }
+ /* fall through */
+ case _Py_ERROR_IGNORE:
+ *inpos = collendpos;
+ break;
+
+ case _Py_ERROR_XMLCHARREFREPLACE:
+ /* generate an "&#NNN;" replacement and feed it through the charmap */
+ for (collpos = collstartpos; collpos < collendpos; ++collpos) {
+ char buffer[2+29+1+1];
+ char *cp;
+ sprintf(buffer, "&#%d;", (int)PyUnicode_READ_CHAR(unicode, collpos));
+ for (cp = buffer; *cp; ++cp) {
+ x = charmapencode_output(*cp, mapping, res, respos);
+ if (x==enc_EXCEPTION)
+ return -1;
+ else if (x==enc_FAILED) {
+ raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
+ return -1;
+ }
+ }
+ }
+ *inpos = collendpos;
+ break;
+
+ default:
+ repunicode = unicode_encode_call_errorhandler(errors, error_handler_obj,
+ encoding, reason, unicode, exceptionObject,
+ collstartpos, collendpos, &newpos);
+ if (repunicode == NULL)
+ return -1;
+ if (PyBytes_Check(repunicode)) {
+ /* Directly copy bytes result to output. */
+ Py_ssize_t outsize = PyBytes_Size(*res);
+ Py_ssize_t requiredsize;
+ repsize = PyBytes_Size(repunicode);
+ requiredsize = *respos + repsize;
+ if (requiredsize > outsize)
+ /* Make room for all additional bytes. */
+ if (charmapencode_resize(res, respos, requiredsize)) {
+ Py_DECREF(repunicode);
+ return -1;
+ }
+ memcpy(PyBytes_AsString(*res) + *respos,
+ PyBytes_AsString(repunicode), repsize);
+ *respos += repsize;
+ *inpos = newpos;
+ Py_DECREF(repunicode);
+ break;
+ }
+ /* generate replacement */
+ if (PyUnicode_READY(repunicode) == -1) {
+ Py_DECREF(repunicode);
+ return -1;
+ }
+ repsize = PyUnicode_GET_LENGTH(repunicode);
+ data = PyUnicode_DATA(repunicode);
+ kind = PyUnicode_KIND(repunicode);
+ for (index = 0; index < repsize; index++) {
+ Py_UCS4 repch = PyUnicode_READ(kind, data, index);
+ x = charmapencode_output(repch, mapping, res, respos);
+ if (x==enc_EXCEPTION) {
+ Py_DECREF(repunicode);
+ return -1;
+ }
+ else if (x==enc_FAILED) {
+ Py_DECREF(repunicode);
+ raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
+ return -1;
+ }
+ }
+ *inpos = newpos;
+ Py_DECREF(repunicode);
+ }
+ return 0;
+}
+
+PyObject *
+_PyUnicode_EncodeCharmap(PyObject *unicode,
+ PyObject *mapping,
+ const char *errors)
+{
+ /* output object */
+ PyObject *res = NULL;
+ /* current input position */
+ Py_ssize_t inpos = 0;
+ Py_ssize_t size;
+ /* current output position */
+ Py_ssize_t respos = 0;
+ PyObject *error_handler_obj = NULL;
+ PyObject *exc = NULL;
+ _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
+ void *data;
+ int kind;
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ size = PyUnicode_GET_LENGTH(unicode);
+ data = PyUnicode_DATA(unicode);
+ kind = PyUnicode_KIND(unicode);
+
+ /* Default to Latin-1 */
+ if (mapping == NULL)
+ return unicode_encode_ucs1(unicode, errors, 256);
+
+ /* allocate enough for a simple encoding without
+ replacements, if we need more, we'll resize */
+ res = PyBytes_FromStringAndSize(NULL, size);
+ if (res == NULL)
+ goto onError;
+ if (size == 0)
+ return res;
+
+ while (inpos<size) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, inpos);
+ /* try to encode it */
+ charmapencode_result x = charmapencode_output(ch, mapping, &res, &respos);
+ if (x==enc_EXCEPTION) /* error */
+ goto onError;
+ if (x==enc_FAILED) { /* unencodable character */
+ if (charmap_encoding_error(unicode, &inpos, mapping,
+ &exc,
+ &error_handler, &error_handler_obj, errors,
+ &res, &respos)) {
+ goto onError;
+ }
+ }
+ else
+ /* done with this character => adjust input position */
+ ++inpos;
+ }
+
+ /* Resize if we allocated too much */
+ if (respos<PyBytes_GET_SIZE(res))
+ if (_PyBytes_Resize(&res, respos) < 0)
+ goto onError;
+
+ Py_XDECREF(exc);
+ Py_XDECREF(error_handler_obj);
+ return res;
+
+ onError:
+ Py_XDECREF(res);
+ Py_XDECREF(exc);
+ Py_XDECREF(error_handler_obj);
+ return NULL;
+}
+
+/* Deprecated */
+PyObject *
+PyUnicode_EncodeCharmap(const Py_UNICODE *p,
+ Py_ssize_t size,
+ PyObject *mapping,
+ const char *errors)
+{
+ PyObject *result;
+ PyObject *unicode = PyUnicode_FromUnicode(p, size);
+ if (unicode == NULL)
+ return NULL;
+ result = _PyUnicode_EncodeCharmap(unicode, mapping, errors);
+ Py_DECREF(unicode);
+ return result;
+}
+
+PyObject *
+PyUnicode_AsCharmapString(PyObject *unicode,
+ PyObject *mapping)
+{
+ if (!PyUnicode_Check(unicode) || mapping == NULL) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ return _PyUnicode_EncodeCharmap(unicode, mapping, NULL);
+}
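+
+/* Illustrative usage (sketch; "text" and "mapping" stand for caller-provided
+ objects; "mapping" is either an EncodingMap built by
+ PyUnicode_BuildEncodingMap() or any mapping of code point -> int/bytes/None):
+
+ PyObject *b = PyUnicode_AsCharmapString(text, mapping);
+ // Strict errors: a character whose lookup yields None (or is missing from
+ // an EncodingMap) raises UnicodeEncodeError("charmap", ...).
+*/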
+
+/* create or adjust a UnicodeTranslateError */
+static void
+make_translate_exception(PyObject **exceptionObject,
+ PyObject *unicode,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ const char *reason)
+{
+ if (*exceptionObject == NULL) {
+ *exceptionObject = _PyUnicodeTranslateError_Create(
+ unicode, startpos, endpos, reason);
+ }
+ else {
+ if (PyUnicodeTranslateError_SetStart(*exceptionObject, startpos))
+ goto onError;
+ if (PyUnicodeTranslateError_SetEnd(*exceptionObject, endpos))
+ goto onError;
+ if (PyUnicodeTranslateError_SetReason(*exceptionObject, reason))
+ goto onError;
+ return;
+ onError:
+ Py_CLEAR(*exceptionObject);
+ }
+}
+
+/* error handling callback helper:
+ build arguments, call the callback and check the arguments,
+ put the result into newpos and return the replacement string, which
+ has to be freed by the caller */
+static PyObject *
+unicode_translate_call_errorhandler(const char *errors,
+ PyObject **errorHandler,
+ const char *reason,
+ PyObject *unicode, PyObject **exceptionObject,
+ Py_ssize_t startpos, Py_ssize_t endpos,
+ Py_ssize_t *newpos)
+{
+ static const char *argparse = "O!n;translating error handler must return (str, int) tuple";
+
+ Py_ssize_t i_newpos;
+ PyObject *restuple;
+ PyObject *resunicode;
+
+ if (*errorHandler == NULL) {
+ *errorHandler = PyCodec_LookupError(errors);
+ if (*errorHandler == NULL)
+ return NULL;
+ }
+
+ make_translate_exception(exceptionObject,
+ unicode, startpos, endpos, reason);
+ if (*exceptionObject == NULL)
+ return NULL;
+
+ restuple = PyObject_CallFunctionObjArgs(
+ *errorHandler, *exceptionObject, NULL);
+ if (restuple == NULL)
+ return NULL;
+ if (!PyTuple_Check(restuple)) {
+ PyErr_SetString(PyExc_TypeError, &argparse[4]);
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type,
+ &resunicode, &i_newpos)) {
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ if (i_newpos<0)
+ *newpos = PyUnicode_GET_LENGTH(unicode)+i_newpos;
+ else
+ *newpos = i_newpos;
+ if (*newpos<0 || *newpos>PyUnicode_GET_LENGTH(unicode)) {
+ PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", *newpos);
+ Py_DECREF(restuple);
+ return NULL;
+ }
+ Py_INCREF(resunicode);
+ Py_DECREF(restuple);
+ return resunicode;
+}
+
+/* Look up the character c in the mapping and put the result in *result,
+ which must be decrefed by the caller (it is set to NULL when the mapping
+ has no entry, meaning a 1:1 mapping should be used).
+ Return 0 on success, -1 on error */
+static int
+charmaptranslate_lookup(Py_UCS4 c, PyObject *mapping, PyObject **result)
+{
+ PyObject *w = PyLong_FromLong((long)c);
+ PyObject *x;
+
+ if (w == NULL)
+ return -1;
+ x = PyObject_GetItem(mapping, w);
+ Py_DECREF(w);
+ if (x == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_LookupError)) {
+ /* No mapping found means: use 1:1 mapping. */
+ PyErr_Clear();
+ *result = NULL;
+ return 0;
+ } else
+ return -1;
+ }
+ else if (x == Py_None) {
+ *result = x;
+ return 0;
+ }
+ else if (PyLong_Check(x)) {
+ long value = PyLong_AS_LONG(x);
+ if (value < 0 || value > MAX_UNICODE) {
+ PyErr_Format(PyExc_ValueError,
+ "character mapping must be in range(0x%x)",
+ MAX_UNICODE+1);
+ Py_DECREF(x);
+ return -1;
+ }
+ *result = x;
+ return 0;
+ }
+ else if (PyUnicode_Check(x)) {
+ *result = x;
+ return 0;
+ }
+ else {
+ /* wrong return value */
+ PyErr_SetString(PyExc_TypeError,
+ "character mapping must return integer, None or str");
+ Py_DECREF(x);
+ return -1;
+ }
+}
+
+/* Look up the character and write the result into the writer.
+ Return 1 if the result was written into the writer, return 0 if the mapping
+ was undefined, raise an exception and return -1 on error. */
+static int
+charmaptranslate_output(Py_UCS4 ch, PyObject *mapping,
+ _PyUnicodeWriter *writer)
+{
+ PyObject *item;
+
+ if (charmaptranslate_lookup(ch, mapping, &item))
+ return -1;
+
+ if (item == NULL) {
+ /* not found => default to 1:1 mapping */
+ if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0) {
+ return -1;
+ }
+ return 1;
+ }
+
+ if (item == Py_None) {
+ Py_DECREF(item);
+ return 0;
+ }
+
+ if (PyLong_Check(item)) {
+ long ch = (Py_UCS4)PyLong_AS_LONG(item);
+ /* PyLong_AS_LONG() cannot fail, charmaptranslate_lookup() already
+ used it */
+ if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0) {
+ Py_DECREF(item);
+ return -1;
+ }
+ Py_DECREF(item);
+ return 1;
+ }
+
+ if (!PyUnicode_Check(item)) {
+ Py_DECREF(item);
+ return -1;
+ }
+
+ if (_PyUnicodeWriter_WriteStr(writer, item) < 0) {
+ Py_DECREF(item);
+ return -1;
+ }
+
+ Py_DECREF(item);
+ return 1;
+}
+
+static int
+unicode_fast_translate_lookup(PyObject *mapping, Py_UCS1 ch,
+ Py_UCS1 *translate)
+{
+ PyObject *item = NULL;
+ int ret = 0;
+
+ if (charmaptranslate_lookup(ch, mapping, &item)) {
+ return -1;
+ }
+
+ if (item == Py_None) {
+ /* deletion */
+ translate[ch] = 0xfe;
+ }
+ else if (item == NULL) {
+ /* not found => default to 1:1 mapping */
+ translate[ch] = ch;
+ return 1;
+ }
+ else if (PyLong_Check(item)) {
+ long replace = PyLong_AS_LONG(item);
+ /* PyLong_AS_LONG() cannot fail, charmaptranslate_lookup() already
+ used it */
+ if (127 < replace) {
+ /* invalid character or character outside ASCII:
+ skip the fast translate */
+ goto exit;
+ }
+ translate[ch] = (Py_UCS1)replace;
+ }
+ else if (PyUnicode_Check(item)) {
+ Py_UCS4 replace;
+
+ if (PyUnicode_READY(item) == -1) {
+ Py_DECREF(item);
+ return -1;
+ }
+ if (PyUnicode_GET_LENGTH(item) != 1)
+ goto exit;
+
+ replace = PyUnicode_READ_CHAR(item, 0);
+ if (replace > 127)
+ goto exit;
+ translate[ch] = (Py_UCS1)replace;
+ }
+ else {
+ /* not None, NULL, long or unicode */
+ goto exit;
+ }
+ ret = 1;
+
+ exit:
+ Py_DECREF(item);
+ return ret;
+}
+
+/* Fast path for ascii => ascii translation. Return 1 if the whole string
+ was translated into writer, return 0 if the input string was partially
+ translated into writer, raise an exception and return -1 on error. */
+static int
+unicode_fast_translate(PyObject *input, PyObject *mapping,
+ _PyUnicodeWriter *writer, int ignore,
+ Py_ssize_t *input_pos)
+{
+ Py_UCS1 ascii_table[128], ch, ch2;
+ Py_ssize_t len;
+ Py_UCS1 *in, *end, *out;
+ int res = 0;
+
+ len = PyUnicode_GET_LENGTH(input);
+
+ memset(ascii_table, 0xff, 128);
+
+ in = PyUnicode_1BYTE_DATA(input);
+ end = in + len;
+
+ assert(PyUnicode_IS_ASCII(writer->buffer));
+ assert(PyUnicode_GET_LENGTH(writer->buffer) == len);
+ out = PyUnicode_1BYTE_DATA(writer->buffer);
+
+ for (; in < end; in++) {
+ ch = *in;
+ ch2 = ascii_table[ch];
+ if (ch2 == 0xff) {
+ int translate = unicode_fast_translate_lookup(mapping, ch,
+ ascii_table);
+ if (translate < 0)
+ return -1;
+ if (translate == 0)
+ goto exit;
+ ch2 = ascii_table[ch];
+ }
+ if (ch2 == 0xfe) {
+ if (ignore)
+ continue;
+ goto exit;
+ }
+ assert(ch2 < 128);
+ *out = ch2;
+ out++;
+ }
+ res = 1;
+
+exit:
+ writer->pos = out - PyUnicode_1BYTE_DATA(writer->buffer);
+ *input_pos = in - PyUnicode_1BYTE_DATA(input);
+ return res;
+}
+
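+/* Translate 'input' through 'mapping', an object whose __getitem__ maps
+ code points to code points, strings or None. Missing keys leave the
+ character unchanged; characters that map to None are reported to the
+ 'errors' handler (and simply dropped for errors == "ignore"). ASCII
+ input is first run through the unicode_fast_translate() fast path. */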
+static PyObject *
+_PyUnicode_TranslateCharmap(PyObject *input,
+ PyObject *mapping,
+ const char *errors)
+{
+ /* input object */
+ char *data;
+ Py_ssize_t size, i;
+ int kind;
+ /* output buffer */
+ _PyUnicodeWriter writer;
+ /* error handler */
+ char *reason = "character maps to <undefined>";
+ PyObject *errorHandler = NULL;
+ PyObject *exc = NULL;
+ int ignore;
+ int res;
+
+ if (mapping == NULL) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+
+ if (PyUnicode_READY(input) == -1)
+ return NULL;
+ data = (char*)PyUnicode_DATA(input);
+ kind = PyUnicode_KIND(input);
+ size = PyUnicode_GET_LENGTH(input);
+
+ if (size == 0)
+ return PyUnicode_FromObject(input);
+
+ /* allocate enough for a simple 1:1 translation without
+ replacements, if we need more, we'll resize */
+ _PyUnicodeWriter_Init(&writer);
+ if (_PyUnicodeWriter_Prepare(&writer, size, 127) == -1)
+ goto onError;
+
+ ignore = (errors != NULL && strcmp(errors, "ignore") == 0);
+
+ if (PyUnicode_READY(input) == -1)
+ return NULL;
+ if (PyUnicode_IS_ASCII(input)) {
+ res = unicode_fast_translate(input, mapping, &writer, ignore, &i);
+ if (res < 0) {
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+ }
+ if (res == 1)
+ return _PyUnicodeWriter_Finish(&writer);
+ }
+ else {
+ i = 0;
+ }
+
+ while (i<size) {
+ /* try to encode it */
+ int translate;
+ PyObject *repunicode = NULL; /* initialize to prevent gcc warning */
+ Py_ssize_t newpos;
+ /* startpos for collecting untranslatable chars */
+ Py_ssize_t collstart;
+ Py_ssize_t collend;
+ Py_UCS4 ch;
+
+ ch = PyUnicode_READ(kind, data, i);
+ translate = charmaptranslate_output(ch, mapping, &writer);
+ if (translate < 0)
+ goto onError;
+
+ if (translate != 0) {
+ /* it worked => adjust input pointer */
+ ++i;
+ continue;
+ }
+
+ /* untranslatable character */
+ collstart = i;
+ collend = i+1;
+
+ /* find all untranslatable characters */
+ while (collend < size) {
+ PyObject *x;
+ ch = PyUnicode_READ(kind, data, collend);
+ if (charmaptranslate_lookup(ch, mapping, &x))
+ goto onError;
+ Py_XDECREF(x);
+ if (x != Py_None)
+ break;
+ ++collend;
+ }
+
+ if (ignore) {
+ i = collend;
+ }
+ else {
+ repunicode = unicode_translate_call_errorhandler(errors, &errorHandler,
+ reason, input, &exc,
+ collstart, collend, &newpos);
+ if (repunicode == NULL)
+ goto onError;
+ if (_PyUnicodeWriter_WriteStr(&writer, repunicode) < 0) {
+ Py_DECREF(repunicode);
+ goto onError;
+ }
+ Py_DECREF(repunicode);
+ i = newpos;
+ }
+ }
+ Py_XDECREF(exc);
+ Py_XDECREF(errorHandler);
+ return _PyUnicodeWriter_Finish(&writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&writer);
+ Py_XDECREF(exc);
+ Py_XDECREF(errorHandler);
+ return NULL;
+}
+
+/* Deprecated. Use PyUnicode_Translate instead. */
+PyObject *
+PyUnicode_TranslateCharmap(const Py_UNICODE *p,
+ Py_ssize_t size,
+ PyObject *mapping,
+ const char *errors)
+{
+ PyObject *result;
+ PyObject *unicode = PyUnicode_FromUnicode(p, size);
+ if (!unicode)
+ return NULL;
+ result = _PyUnicode_TranslateCharmap(unicode, mapping, errors);
+ Py_DECREF(unicode);
+ return result;
+}
+
+PyObject *
+PyUnicode_Translate(PyObject *str,
+ PyObject *mapping,
+ const char *errors)
+{
+ if (ensure_unicode(str) < 0)
+ return NULL;
+ return _PyUnicode_TranslateCharmap(str, mapping, errors);
+}
+
+static Py_UCS4
+fix_decimal_and_space_to_ascii(PyObject *self)
+{
+ /* No need to call PyUnicode_READY(self) because this function is only
+ called as a callback from fixup() which does it already. */
+ const Py_ssize_t len = PyUnicode_GET_LENGTH(self);
+ const int kind = PyUnicode_KIND(self);
+ void *data = PyUnicode_DATA(self);
+ Py_UCS4 maxchar = 127, ch, fixed;
+ int modified = 0;
+ Py_ssize_t i;
+
+ for (i = 0; i < len; ++i) {
+ ch = PyUnicode_READ(kind, data, i);
+ fixed = 0;
+ if (ch > 127) {
+ if (Py_UNICODE_ISSPACE(ch))
+ fixed = ' ';
+ else {
+ const int decimal = Py_UNICODE_TODECIMAL(ch);
+ if (decimal >= 0)
+ fixed = '0' + decimal;
+ }
+ if (fixed != 0) {
+ modified = 1;
+ maxchar = Py_MAX(maxchar, fixed);
+ PyUnicode_WRITE(kind, data, i, fixed);
+ }
+ else
+ maxchar = Py_MAX(maxchar, ch);
+ }
+ }
+
+ return (modified) ? maxchar : 0;
+}
+
+PyObject *
+_PyUnicode_TransformDecimalAndSpaceToASCII(PyObject *unicode)
+{
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+ if (PyUnicode_MAX_CHAR_VALUE(unicode) <= 127) {
+ /* If the string is already ASCII, just return the same string */
+ Py_INCREF(unicode);
+ return unicode;
+ }
+ return fixup(unicode, fix_decimal_and_space_to_ascii);
+}
+
+PyObject *
+PyUnicode_TransformDecimalToASCII(Py_UNICODE *s,
+ Py_ssize_t length)
+{
+ PyObject *decimal;
+ Py_ssize_t i;
+ Py_UCS4 maxchar;
+ enum PyUnicode_Kind kind;
+ void *data;
+
+ maxchar = 127;
+ for (i = 0; i < length; i++) {
+ Py_UCS4 ch = s[i];
+ if (ch > 127) {
+ int decimal = Py_UNICODE_TODECIMAL(ch);
+ if (decimal >= 0)
+ ch = '0' + decimal;
+ maxchar = Py_MAX(maxchar, ch);
+ }
+ }
+
+ /* Copy to a new string */
+ decimal = PyUnicode_New(length, maxchar);
+ if (decimal == NULL)
+ return decimal;
+ kind = PyUnicode_KIND(decimal);
+ data = PyUnicode_DATA(decimal);
+ /* Iterate over code points */
+ for (i = 0; i < length; i++) {
+ Py_UCS4 ch = s[i];
+ if (ch > 127) {
+ int decimal = Py_UNICODE_TODECIMAL(ch);
+ if (decimal >= 0)
+ ch = '0' + decimal;
+ }
+ PyUnicode_WRITE(kind, data, i, ch);
+ }
+ return unicode_result(decimal);
+}
+/* --- Decimal Encoder ---------------------------------------------------- */
+
+int
+PyUnicode_EncodeDecimal(Py_UNICODE *s,
+ Py_ssize_t length,
+ char *output,
+ const char *errors)
+{
+ PyObject *unicode;
+ Py_ssize_t i;
+ enum PyUnicode_Kind kind;
+ void *data;
+
+ if (output == NULL) {
+ PyErr_BadArgument();
+ return -1;
+ }
+
+ unicode = PyUnicode_FromUnicode(s, length);
+ if (unicode == NULL)
+ return -1;
+
+ if (PyUnicode_READY(unicode) == -1) {
+ Py_DECREF(unicode);
+ return -1;
+ }
+ kind = PyUnicode_KIND(unicode);
+ data = PyUnicode_DATA(unicode);
+
+ for (i=0; i < length; ) {
+ PyObject *exc;
+ Py_UCS4 ch;
+ int decimal;
+ Py_ssize_t startpos;
+
+ ch = PyUnicode_READ(kind, data, i);
+
+ if (Py_UNICODE_ISSPACE(ch)) {
+ *output++ = ' ';
+ i++;
+ continue;
+ }
+ decimal = Py_UNICODE_TODECIMAL(ch);
+ if (decimal >= 0) {
+ *output++ = '0' + decimal;
+ i++;
+ continue;
+ }
+ if (0 < ch && ch < 256) {
+ *output++ = (char)ch;
+ i++;
+ continue;
+ }
+
+ startpos = i;
+ exc = NULL;
+ raise_encode_exception(&exc, "decimal", unicode,
+ startpos, startpos+1,
+ "invalid decimal Unicode string");
+ Py_XDECREF(exc);
+ Py_DECREF(unicode);
+ return -1;
+ }
+ /* 0-terminate the output string */
+ *output++ = '\0';
+ Py_DECREF(unicode);
+ return 0;
+}
+
+/* --- Helpers ------------------------------------------------------------ */
+
+/* helper macro to fixup start/end slice values */
+#define ADJUST_INDICES(start, end, len) \
+ if (end > len) \
+ end = len; \
+ else if (end < 0) { \
+ end += len; \
+ if (end < 0) \
+ end = 0; \
+ } \
+ if (start < 0) { \
+ start += len; \
+ if (start < 0) \
+ start = 0; \
+ }
+
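+/* Kind-aware substring search over s1[start:end]: forward when direction
+ is > 0, backward otherwise. Returns the index of the match, -1 if s2 is
+ not found (or cannot occur), and -2 on error. */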
+static Py_ssize_t
+any_find_slice(PyObject* s1, PyObject* s2,
+ Py_ssize_t start,
+ Py_ssize_t end,
+ int direction)
+{
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2, result;
+
+ kind1 = PyUnicode_KIND(s1);
+ kind2 = PyUnicode_KIND(s2);
+ if (kind1 < kind2)
+ return -1;
+
+ len1 = PyUnicode_GET_LENGTH(s1);
+ len2 = PyUnicode_GET_LENGTH(s2);
+ ADJUST_INDICES(start, end, len1);
+ if (end - start < len2)
+ return -1;
+
+ buf1 = PyUnicode_DATA(s1);
+ buf2 = PyUnicode_DATA(s2);
+ if (len2 == 1) {
+ Py_UCS4 ch = PyUnicode_READ(kind2, buf2, 0);
+ result = findchar((const char *)buf1 + kind1*start,
+ kind1, end - start, ch, direction);
+ if (result == -1)
+ return -1;
+ else
+ return start + result;
+ }
+
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(s2, kind1);
+ if (!buf2)
+ return -2;
+ }
+
+ if (direction > 0) {
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(s1) && PyUnicode_IS_ASCII(s2))
+ result = asciilib_find_slice(buf1, len1, buf2, len2, start, end);
+ else
+ result = ucs1lib_find_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ result = ucs2lib_find_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ result = ucs4lib_find_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ default:
+ assert(0); result = -2;
+ }
+ }
+ else {
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(s1) && PyUnicode_IS_ASCII(s2))
+ result = asciilib_rfind_slice(buf1, len1, buf2, len2, start, end);
+ else
+ result = ucs1lib_rfind_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ result = ucs2lib_rfind_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ result = ucs4lib_rfind_slice(buf1, len1, buf2, len2, start, end);
+ break;
+ default:
+ assert(0); result = -2;
+ }
+ }
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return result;
+}
+
+/* _PyUnicode_InsertThousandsGrouping() helper functions */
+#include "stringlib/localeutil.h"
+
+/**
+ * InsertThousandsGrouping:
+ * @writer: Unicode writer.
+ * @n_buffer: Number of characters available in the output buffer.
+ * @digits: Digits we're reading from. If @writer is NULL (counting mode),
+ * this is NULL and unused.
+ * @d_pos: Start of digits string.
+ * @n_digits: The number of digits in the string, in which we want
+ * to put the grouping chars.
+ * @min_width: The minimum width of the digits in the output string.
+ * Output will be zero-padded on the left to fill.
+ * @grouping: see definition in localeconv().
+ * @thousands_sep: see definition in localeconv().
+ *
+ * There are 2 modes: counting and filling. If @writer is NULL,
+ * we are in counting mode, else filling mode.
+ * If counting, the required buffer size is returned.
+ * If filling, we know the buffer will be large enough, so we don't
+ * need to pass in the buffer size.
+ * Inserts thousand grouping characters (as defined by grouping and
+ * thousands_sep) into @writer.
+ *
+ * Return value: -1 on error, number of characters otherwise.
+ **/
+Py_ssize_t
+_PyUnicode_InsertThousandsGrouping(
+ _PyUnicodeWriter *writer,
+ Py_ssize_t n_buffer,
+ PyObject *digits,
+ Py_ssize_t d_pos,
+ Py_ssize_t n_digits,
+ Py_ssize_t min_width,
+ const char *grouping,
+ PyObject *thousands_sep,
+ Py_UCS4 *maxchar)
+{
+ if (writer) {
+ assert(digits != NULL);
+ assert(maxchar == NULL);
+ }
+ else {
+ assert(digits == NULL);
+ assert(maxchar != NULL);
+ }
+ assert(0 <= d_pos);
+ assert(0 <= n_digits);
+ assert(0 <= min_width);
+ assert(grouping != NULL);
+
+ if (digits != NULL) {
+ if (PyUnicode_READY(digits) == -1) {
+ return -1;
+ }
+ }
+ if (PyUnicode_READY(thousands_sep) == -1) {
+ return -1;
+ }
+
+ Py_ssize_t count = 0;
+ Py_ssize_t n_zeros;
+ int loop_broken = 0;
+ int use_separator = 0; /* First time through, don't append the
+ separator. They only go between
+ groups. */
+ Py_ssize_t buffer_pos;
+ Py_ssize_t digits_pos;
+ Py_ssize_t len;
+ Py_ssize_t n_chars;
+ Py_ssize_t remaining = n_digits; /* Number of chars remaining to
+ be looked at */
+ /* A generator that returns all of the grouping widths, until it
+ returns 0. */
+ GroupGenerator groupgen;
+ GroupGenerator_init(&groupgen, grouping);
+ const Py_ssize_t thousands_sep_len = PyUnicode_GET_LENGTH(thousands_sep);
+
+ /* if digits are not grouped, thousands separator
+ should be an empty string */
+ assert(!(grouping[0] == CHAR_MAX && thousands_sep_len != 0));
+
+ digits_pos = d_pos + n_digits;
+ if (writer) {
+ buffer_pos = writer->pos + n_buffer;
+ assert(buffer_pos <= PyUnicode_GET_LENGTH(writer->buffer));
+ assert(digits_pos <= PyUnicode_GET_LENGTH(digits));
+ }
+ else {
+ buffer_pos = n_buffer;
+ }
+
+ if (!writer) {
+ *maxchar = 127;
+ }
+
+ while ((len = GroupGenerator_next(&groupgen)) > 0) {
+ len = Py_MIN(len, Py_MAX(Py_MAX(remaining, min_width), 1));
+ n_zeros = Py_MAX(0, len - remaining);
+ n_chars = Py_MAX(0, Py_MIN(remaining, len));
+
+ /* Use n_zeros zeros and n_chars chars */
+
+ /* Count only, don't do anything. */
+ count += (use_separator ? thousands_sep_len : 0) + n_zeros + n_chars;
+
+ /* Copy into the writer. */
+ InsertThousandsGrouping_fill(writer, &buffer_pos,
+ digits, &digits_pos,
+ n_chars, n_zeros,
+ use_separator ? thousands_sep : NULL,
+ thousands_sep_len, maxchar);
+
+ /* Use a separator next time. */
+ use_separator = 1;
+
+ remaining -= n_chars;
+ min_width -= len;
+
+ if (remaining <= 0 && min_width <= 0) {
+ loop_broken = 1;
+ break;
+ }
+ min_width -= thousands_sep_len;
+ }
+ if (!loop_broken) {
+ /* We left the loop without using a break statement. */
+
+ len = Py_MAX(Py_MAX(remaining, min_width), 1);
+ n_zeros = Py_MAX(0, len - remaining);
+ n_chars = Py_MAX(0, Py_MIN(remaining, len));
+
+ /* Use n_zeros zeros and n_chars chars */
+ count += (use_separator ? thousands_sep_len : 0) + n_zeros + n_chars;
+
+ /* Copy into the writer. */
+ InsertThousandsGrouping_fill(writer, &buffer_pos,
+ digits, &digits_pos,
+ n_chars, n_zeros,
+ use_separator ? thousands_sep : NULL,
+ thousands_sep_len, maxchar);
+ }
+ return count;
+}
+
+
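+/* C API counterpart of str.count(): count non-overlapping occurrences of
+ 'substr' within 'str[start:end]'. Returns -1 on error. */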
+Py_ssize_t
+PyUnicode_Count(PyObject *str,
+ PyObject *substr,
+ Py_ssize_t start,
+ Py_ssize_t end)
+{
+ Py_ssize_t result;
+ int kind1, kind2;
+ void *buf1 = NULL, *buf2 = NULL;
+ Py_ssize_t len1, len2;
+
+ if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
+ return -1;
+
+ kind1 = PyUnicode_KIND(str);
+ kind2 = PyUnicode_KIND(substr);
+ if (kind1 < kind2)
+ return 0;
+
+ len1 = PyUnicode_GET_LENGTH(str);
+ len2 = PyUnicode_GET_LENGTH(substr);
+ ADJUST_INDICES(start, end, len1);
+ if (end - start < len2)
+ return 0;
+
+ buf1 = PyUnicode_DATA(str);
+ buf2 = PyUnicode_DATA(substr);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(substr, kind1);
+ if (!buf2)
+ goto onError;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(str) && PyUnicode_IS_ASCII(substr))
+ result = asciilib_count(
+ ((Py_UCS1*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ else
+ result = ucs1lib_count(
+ ((Py_UCS1*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ case PyUnicode_2BYTE_KIND:
+ result = ucs2lib_count(
+ ((Py_UCS2*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ case PyUnicode_4BYTE_KIND:
+ result = ucs4lib_count(
+ ((Py_UCS4*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ default:
+ assert(0); result = 0;
+ }
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return result;
+ onError:
+ if (kind2 != kind1 && buf2)
+ PyMem_Free(buf2);
+ return -1;
+}
+
+Py_ssize_t
+PyUnicode_Find(PyObject *str,
+ PyObject *substr,
+ Py_ssize_t start,
+ Py_ssize_t end,
+ int direction)
+{
+ if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
+ return -2;
+
+ return any_find_slice(str, substr, start, end, direction);
+}
+
+Py_ssize_t
+PyUnicode_FindChar(PyObject *str, Py_UCS4 ch,
+ Py_ssize_t start, Py_ssize_t end,
+ int direction)
+{
+ int kind;
+ Py_ssize_t result;
+ if (PyUnicode_READY(str) == -1)
+ return -2;
+ if (start < 0 || end < 0) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return -2;
+ }
+ if (end > PyUnicode_GET_LENGTH(str))
+ end = PyUnicode_GET_LENGTH(str);
+ if (start >= end)
+ return -1;
+ kind = PyUnicode_KIND(str);
+ result = findchar(PyUnicode_1BYTE_DATA(str) + kind*start,
+ kind, end-start, ch, direction);
+ if (result == -1)
+ return -1;
+ else
+ return start + result;
+}
+
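+/* Check whether 'substring' matches 'self' at the start (direction <= 0)
+ or at the end (direction > 0) of the slice [start:end]. Returns 1 on a
+ match, 0 otherwise, and -1 on error. */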
+static int
+tailmatch(PyObject *self,
+ PyObject *substring,
+ Py_ssize_t start,
+ Py_ssize_t end,
+ int direction)
+{
+ int kind_self;
+ int kind_sub;
+ void *data_self;
+ void *data_sub;
+ Py_ssize_t offset;
+ Py_ssize_t i;
+ Py_ssize_t end_sub;
+
+ if (PyUnicode_READY(self) == -1 ||
+ PyUnicode_READY(substring) == -1)
+ return -1;
+
+ ADJUST_INDICES(start, end, PyUnicode_GET_LENGTH(self));
+ end -= PyUnicode_GET_LENGTH(substring);
+ if (end < start)
+ return 0;
+
+ if (PyUnicode_GET_LENGTH(substring) == 0)
+ return 1;
+
+ kind_self = PyUnicode_KIND(self);
+ data_self = PyUnicode_DATA(self);
+ kind_sub = PyUnicode_KIND(substring);
+ data_sub = PyUnicode_DATA(substring);
+ end_sub = PyUnicode_GET_LENGTH(substring) - 1;
+
+ if (direction > 0)
+ offset = end;
+ else
+ offset = start;
+
+ if (PyUnicode_READ(kind_self, data_self, offset) ==
+ PyUnicode_READ(kind_sub, data_sub, 0) &&
+ PyUnicode_READ(kind_self, data_self, offset + end_sub) ==
+ PyUnicode_READ(kind_sub, data_sub, end_sub)) {
+ /* If both are of the same kind, memcmp is sufficient */
+ if (kind_self == kind_sub) {
+ return ! memcmp((char *)data_self +
+ (offset * PyUnicode_KIND(substring)),
+ data_sub,
+ PyUnicode_GET_LENGTH(substring) *
+ PyUnicode_KIND(substring));
+ }
+ /* otherwise we have to compare each character by first accessing it */
+ else {
+ /* We do not need to compare 0 and len(substring)-1 because
+ the if statement above has already ensured that they are equal
+ when we end up here. */
+ for (i = 1; i < end_sub; ++i) {
+ if (PyUnicode_READ(kind_self, data_self, offset + i) !=
+ PyUnicode_READ(kind_sub, data_sub, i))
+ return 0;
+ }
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
+Py_ssize_t
+PyUnicode_Tailmatch(PyObject *str,
+ PyObject *substr,
+ Py_ssize_t start,
+ Py_ssize_t end,
+ int direction)
+{
+ if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
+ return -1;
+
+ return tailmatch(str, substr, start, end, direction);
+}
+
+/* Apply fixfct filter to the Unicode object self and return a
+ reference to the modified object */
+
+static PyObject *
+fixup(PyObject *self,
+ Py_UCS4 (*fixfct)(PyObject *s))
+{
+ PyObject *u;
+ Py_UCS4 maxchar_old, maxchar_new = 0;
+ PyObject *v;
+
+ u = _PyUnicode_Copy(self);
+ if (u == NULL)
+ return NULL;
+ maxchar_old = PyUnicode_MAX_CHAR_VALUE(u);
+
+ /* Fix functions return the new maximum character in a string.
+ If the kind of the resulting unicode object does not change,
+ everything is fine. Otherwise we need to change the string kind
+ and re-run the fix function. */
+ maxchar_new = fixfct(u);
+
+ if (maxchar_new == 0) {
+ /* no changes */;
+ if (PyUnicode_CheckExact(self)) {
+ Py_DECREF(u);
+ Py_INCREF(self);
+ return self;
+ }
+ else
+ return u;
+ }
+
+ maxchar_new = align_maxchar(maxchar_new);
+
+ if (maxchar_new == maxchar_old)
+ return u;
+
+ /* In case the maximum character changed, we need to
+ convert the string to the new category. */
+ v = PyUnicode_New(PyUnicode_GET_LENGTH(self), maxchar_new);
+ if (v == NULL) {
+ Py_DECREF(u);
+ return NULL;
+ }
+ if (maxchar_new > maxchar_old) {
+ /* If the maxchar increased so that the kind changed, not all
+ characters are representable anymore and we need to fix the
+ string again. This only happens in very few cases. */
+ _PyUnicode_FastCopyCharacters(v, 0,
+ self, 0, PyUnicode_GET_LENGTH(self));
+ maxchar_old = fixfct(v);
+ assert(maxchar_old > 0 && maxchar_old <= maxchar_new);
+ }
+ else {
+ _PyUnicode_FastCopyCharacters(v, 0,
+ u, 0, PyUnicode_GET_LENGTH(self));
+ }
+ Py_DECREF(u);
+ assert(_PyUnicode_CheckConsistency(v, 1));
+ return v;
+}
+
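+/* Fast path for case conversion of pure ASCII strings: the conversion is
+ done bytewise with _Py_bytes_upper()/_Py_bytes_lower(). */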
+static PyObject *
+ascii_upper_or_lower(PyObject *self, int lower)
+{
+ Py_ssize_t len = PyUnicode_GET_LENGTH(self);
+ char *resdata, *data = PyUnicode_DATA(self);
+ PyObject *res;
+
+ res = PyUnicode_New(len, 127);
+ if (res == NULL)
+ return NULL;
+ resdata = PyUnicode_DATA(res);
+ if (lower)
+ _Py_bytes_lower(resdata, data, len);
+ else
+ _Py_bytes_upper(resdata, data, len);
+ return res;
+}
+
+static Py_UCS4
+handle_capital_sigma(int kind, void *data, Py_ssize_t length, Py_ssize_t i)
+{
+ Py_ssize_t j;
+ int final_sigma;
+ Py_UCS4 c = 0; /* initialize to prevent gcc warning */
+ /* U+03A3 is in the Final_Sigma context when, it is found like this:
+
+ \p{cased}\p{case-ignorable}*U+03A3!(\p{case-ignorable}*\p{cased})
+
+ where ! is a negation and \p{xxx} is a character with property xxx.
+ */
+ for (j = i - 1; j >= 0; j--) {
+ c = PyUnicode_READ(kind, data, j);
+ if (!_PyUnicode_IsCaseIgnorable(c))
+ break;
+ }
+ final_sigma = j >= 0 && _PyUnicode_IsCased(c);
+ if (final_sigma) {
+ for (j = i + 1; j < length; j++) {
+ c = PyUnicode_READ(kind, data, j);
+ if (!_PyUnicode_IsCaseIgnorable(c))
+ break;
+ }
+ final_sigma = j == length || !_PyUnicode_IsCased(c);
+ }
+ return (final_sigma) ? 0x3C2 : 0x3C3;
+}
+
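+/* Lower-case a single code point, routing U+03A3 (capital sigma) through
+ the Final_Sigma rule implemented above. Writes the result into 'mapped'
+ and returns the number of code points produced. */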
+static int
+lower_ucs4(int kind, void *data, Py_ssize_t length, Py_ssize_t i,
+ Py_UCS4 c, Py_UCS4 *mapped)
+{
+ /* Obscure special case. */
+ if (c == 0x3A3) {
+ mapped[0] = handle_capital_sigma(kind, data, length, i);
+ return 1;
+ }
+ return _PyUnicode_ToLowerFull(c, mapped);
+}
+
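+/* The do_* kernels below implement the individual case transformations.
+ Each one reads 'length' code points of the given kind from 'data',
+ appends the transformed code points to 'res' (a character can expand to
+ at most 3 code points), updates '*maxchar' and returns the new length. */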
+static Py_ssize_t
+do_capitalize(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
+{
+ Py_ssize_t i, k = 0;
+ int n_res, j;
+ Py_UCS4 c, mapped[3];
+
+ c = PyUnicode_READ(kind, data, 0);
+ n_res = _PyUnicode_ToUpperFull(c, mapped);
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+ for (i = 1; i < length; i++) {
+ c = PyUnicode_READ(kind, data, i);
+ n_res = lower_ucs4(kind, data, length, i, c, mapped);
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+ }
+ return k;
+}
+
+static Py_ssize_t
+do_swapcase(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar) {
+ Py_ssize_t i, k = 0;
+
+ for (i = 0; i < length; i++) {
+ Py_UCS4 c = PyUnicode_READ(kind, data, i), mapped[3];
+ int n_res, j;
+ if (Py_UNICODE_ISUPPER(c)) {
+ n_res = lower_ucs4(kind, data, length, i, c, mapped);
+ }
+ else if (Py_UNICODE_ISLOWER(c)) {
+ n_res = _PyUnicode_ToUpperFull(c, mapped);
+ }
+ else {
+ n_res = 1;
+ mapped[0] = c;
+ }
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+ }
+ return k;
+}
+
+static Py_ssize_t
+do_upper_or_lower(int kind, void *data, Py_ssize_t length, Py_UCS4 *res,
+ Py_UCS4 *maxchar, int lower)
+{
+ Py_ssize_t i, k = 0;
+
+ for (i = 0; i < length; i++) {
+ Py_UCS4 c = PyUnicode_READ(kind, data, i), mapped[3];
+ int n_res, j;
+ if (lower)
+ n_res = lower_ucs4(kind, data, length, i, c, mapped);
+ else
+ n_res = _PyUnicode_ToUpperFull(c, mapped);
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+ }
+ return k;
+}
+
+static Py_ssize_t
+do_upper(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
+{
+ return do_upper_or_lower(kind, data, length, res, maxchar, 0);
+}
+
+static Py_ssize_t
+do_lower(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
+{
+ return do_upper_or_lower(kind, data, length, res, maxchar, 1);
+}
+
+static Py_ssize_t
+do_casefold(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
+{
+ Py_ssize_t i, k = 0;
+
+ for (i = 0; i < length; i++) {
+ Py_UCS4 c = PyUnicode_READ(kind, data, i);
+ Py_UCS4 mapped[3];
+ int j, n_res = _PyUnicode_ToFoldedFull(c, mapped);
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+ }
+ return k;
+}
+
+static Py_ssize_t
+do_title(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
+{
+ Py_ssize_t i, k = 0;
+ int previous_is_cased;
+
+ previous_is_cased = 0;
+ for (i = 0; i < length; i++) {
+ const Py_UCS4 c = PyUnicode_READ(kind, data, i);
+ Py_UCS4 mapped[3];
+ int n_res, j;
+
+ if (previous_is_cased)
+ n_res = lower_ucs4(kind, data, length, i, c, mapped);
+ else
+ n_res = _PyUnicode_ToTitleFull(c, mapped);
+
+ for (j = 0; j < n_res; j++) {
+ *maxchar = Py_MAX(*maxchar, mapped[j]);
+ res[k++] = mapped[j];
+ }
+
+ previous_is_cased = _PyUnicode_IsCased(c);
+ }
+ return k;
+}
+
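+/* Driver shared by the case-mapping methods: run 'perform' into a
+ temporary UCS4 buffer sized for the worst-case 3x expansion, then copy
+ the result into a new string of the narrowest kind that holds 'maxchar'. */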
+static PyObject *
+case_operation(PyObject *self,
+ Py_ssize_t (*perform)(int, void *, Py_ssize_t, Py_UCS4 *, Py_UCS4 *))
+{
+ PyObject *res = NULL;
+ Py_ssize_t length, newlength = 0;
+ int kind, outkind;
+ void *data, *outdata;
+ Py_UCS4 maxchar = 0, *tmp, *tmpend;
+
+ assert(PyUnicode_IS_READY(self));
+
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+ length = PyUnicode_GET_LENGTH(self);
+ if ((size_t) length > PY_SSIZE_T_MAX / (3 * sizeof(Py_UCS4))) {
+ PyErr_SetString(PyExc_OverflowError, "string is too long");
+ return NULL;
+ }
+ tmp = PyMem_MALLOC(sizeof(Py_UCS4) * 3 * length);
+ if (tmp == NULL)
+ return PyErr_NoMemory();
+ newlength = perform(kind, data, length, tmp, &maxchar);
+ res = PyUnicode_New(newlength, maxchar);
+ if (res == NULL)
+ goto leave;
+ tmpend = tmp + newlength;
+ outdata = PyUnicode_DATA(res);
+ outkind = PyUnicode_KIND(res);
+ switch (outkind) {
+ case PyUnicode_1BYTE_KIND:
+ _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS1, tmp, tmpend, outdata);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS2, tmp, tmpend, outdata);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ memcpy(outdata, tmp, sizeof(Py_UCS4) * newlength);
+ break;
+ default:
+ assert(0);
+ break;
+ }
+ leave:
+ PyMem_FREE(tmp);
+ return res;
+}
+
+PyObject *
+PyUnicode_Join(PyObject *separator, PyObject *seq)
+{
+ PyObject *res;
+ PyObject *fseq;
+ Py_ssize_t seqlen;
+ PyObject **items;
+
+ fseq = PySequence_Fast(seq, "can only join an iterable");
+ if (fseq == NULL) {
+ return NULL;
+ }
+
+ /* NOTE: the following code can't call back into Python code,
+ * so we are sure that fseq won't be mutated.
+ */
+
+ items = PySequence_Fast_ITEMS(fseq);
+ seqlen = PySequence_Fast_GET_SIZE(fseq);
+ res = _PyUnicode_JoinArray(separator, items, seqlen);
+ Py_DECREF(fseq);
+ return res;
+}
+
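+/* Core of str.join(): a first pass type-checks the items and computes the
+ total length and widest character; a second pass then copies separators
+ and items either with raw memcpy() (when all strings share one kind) or
+ with _PyUnicode_FastCopyCharacters(). */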
+PyObject *
+_PyUnicode_JoinArray(PyObject *separator, PyObject **items, Py_ssize_t seqlen)
+{
+ PyObject *res = NULL; /* the result */
+ PyObject *sep = NULL;
+ Py_ssize_t seplen;
+ PyObject *item;
+ Py_ssize_t sz, i, res_offset;
+ Py_UCS4 maxchar;
+ Py_UCS4 item_maxchar;
+ int use_memcpy;
+ unsigned char *res_data = NULL, *sep_data = NULL;
+ PyObject *last_obj;
+ unsigned int kind = 0;
+
+ /* If empty sequence, return u"". */
+ if (seqlen == 0) {
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ /* If singleton sequence with an exact Unicode, return that. */
+ last_obj = NULL;
+ if (seqlen == 1) {
+ if (PyUnicode_CheckExact(items[0])) {
+ res = items[0];
+ Py_INCREF(res);
+ return res;
+ }
+ seplen = 0;
+ maxchar = 0;
+ }
+ else {
+ /* Set up sep and seplen */
+ if (separator == NULL) {
+ /* fall back to a blank space separator */
+ sep = PyUnicode_FromOrdinal(' ');
+ if (!sep)
+ goto onError;
+ seplen = 1;
+ maxchar = 32;
+ }
+ else {
+ if (!PyUnicode_Check(separator)) {
+ PyErr_Format(PyExc_TypeError,
+ "separator: expected str instance,"
+ " %.80s found",
+ Py_TYPE(separator)->tp_name);
+ goto onError;
+ }
+ if (PyUnicode_READY(separator))
+ goto onError;
+ sep = separator;
+ seplen = PyUnicode_GET_LENGTH(separator);
+ maxchar = PyUnicode_MAX_CHAR_VALUE(separator);
+ /* inc refcount to keep this code path symmetric with the
+ above case of a blank separator */
+ Py_INCREF(sep);
+ }
+ last_obj = sep;
+ }
+
+ /* There are at least two things to join, or else we have a subclass
+ * of str in the sequence.
+ * Do a pre-pass to figure out the total amount of space we'll
+ * need (sz), and see whether all arguments are strings.
+ */
+ sz = 0;
+#ifdef Py_DEBUG
+ use_memcpy = 0;
+#else
+ use_memcpy = 1;
+#endif
+ for (i = 0; i < seqlen; i++) {
+ size_t add_sz;
+ item = items[i];
+ if (!PyUnicode_Check(item)) {
+ PyErr_Format(PyExc_TypeError,
+ "sequence item %zd: expected str instance,"
+ " %.80s found",
+ i, Py_TYPE(item)->tp_name);
+ goto onError;
+ }
+ if (PyUnicode_READY(item) == -1)
+ goto onError;
+ add_sz = PyUnicode_GET_LENGTH(item);
+ item_maxchar = PyUnicode_MAX_CHAR_VALUE(item);
+ maxchar = Py_MAX(maxchar, item_maxchar);
+ if (i != 0) {
+ add_sz += seplen;
+ }
+ if (add_sz > (size_t)(PY_SSIZE_T_MAX - sz)) {
+ PyErr_SetString(PyExc_OverflowError,
+ "join() result is too long for a Python string");
+ goto onError;
+ }
+ sz += add_sz;
+ if (use_memcpy && last_obj != NULL) {
+ if (PyUnicode_KIND(last_obj) != PyUnicode_KIND(item))
+ use_memcpy = 0;
+ }
+ last_obj = item;
+ }
+
+ res = PyUnicode_New(sz, maxchar);
+ if (res == NULL)
+ goto onError;
+
+ /* Catenate everything. */
+#ifdef Py_DEBUG
+ use_memcpy = 0;
+#else
+ if (use_memcpy) {
+ res_data = PyUnicode_1BYTE_DATA(res);
+ kind = PyUnicode_KIND(res);
+ if (seplen != 0)
+ sep_data = PyUnicode_1BYTE_DATA(sep);
+ }
+#endif
+ if (use_memcpy) {
+ for (i = 0; i < seqlen; ++i) {
+ Py_ssize_t itemlen;
+ item = items[i];
+
+ /* Copy item, and maybe the separator. */
+ if (i && seplen != 0) {
+ memcpy(res_data,
+ sep_data,
+ kind * seplen);
+ res_data += kind * seplen;
+ }
+
+ itemlen = PyUnicode_GET_LENGTH(item);
+ if (itemlen != 0) {
+ memcpy(res_data,
+ PyUnicode_DATA(item),
+ kind * itemlen);
+ res_data += kind * itemlen;
+ }
+ }
+ assert(res_data == PyUnicode_1BYTE_DATA(res)
+ + kind * PyUnicode_GET_LENGTH(res));
+ }
+ else {
+ for (i = 0, res_offset = 0; i < seqlen; ++i) {
+ Py_ssize_t itemlen;
+ item = items[i];
+
+ /* Copy item, and maybe the separator. */
+ if (i && seplen != 0) {
+ _PyUnicode_FastCopyCharacters(res, res_offset, sep, 0, seplen);
+ res_offset += seplen;
+ }
+
+ itemlen = PyUnicode_GET_LENGTH(item);
+ if (itemlen != 0) {
+ _PyUnicode_FastCopyCharacters(res, res_offset, item, 0, itemlen);
+ res_offset += itemlen;
+ }
+ }
+ assert(res_offset == PyUnicode_GET_LENGTH(res));
+ }
+
+ Py_XDECREF(sep);
+ assert(_PyUnicode_CheckConsistency(res, 1));
+ return res;
+
+ onError:
+ Py_XDECREF(sep);
+ Py_XDECREF(res);
+ return NULL;
+}
+
+void
+_PyUnicode_FastFill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length,
+ Py_UCS4 fill_char)
+{
+ const enum PyUnicode_Kind kind = PyUnicode_KIND(unicode);
+ void *data = PyUnicode_DATA(unicode);
+ assert(PyUnicode_IS_READY(unicode));
+ assert(unicode_modifiable(unicode));
+ assert(fill_char <= PyUnicode_MAX_CHAR_VALUE(unicode));
+ assert(start >= 0);
+ assert(start + length <= PyUnicode_GET_LENGTH(unicode));
+ FILL(kind, data, fill_char, start, length);
+}
+
+Py_ssize_t
+PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length,
+ Py_UCS4 fill_char)
+{
+ Py_ssize_t maxlen;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadInternalCall();
+ return -1;
+ }
+ if (PyUnicode_READY(unicode) == -1)
+ return -1;
+ if (unicode_check_modifiable(unicode))
+ return -1;
+
+ if (start < 0) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return -1;
+ }
+ if (fill_char > PyUnicode_MAX_CHAR_VALUE(unicode)) {
+ PyErr_SetString(PyExc_ValueError,
+ "fill character is bigger than "
+ "the string maximum character");
+ return -1;
+ }
+
+ maxlen = PyUnicode_GET_LENGTH(unicode) - start;
+ length = Py_MIN(maxlen, length);
+ if (length <= 0)
+ return 0;
+
+ _PyUnicode_FastFill(unicode, start, length, fill_char);
+ return length;
+}
+
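+/* Return a copy of 'self' padded with 'left' copies of 'fill' on the left
+ and 'right' copies on the right; with no padding requested the original
+ string is returned unchanged. */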
+static PyObject *
+pad(PyObject *self,
+ Py_ssize_t left,
+ Py_ssize_t right,
+ Py_UCS4 fill)
+{
+ PyObject *u;
+ Py_UCS4 maxchar;
+ int kind;
+ void *data;
+
+ if (left < 0)
+ left = 0;
+ if (right < 0)
+ right = 0;
+
+ if (left == 0 && right == 0)
+ return unicode_result_unchanged(self);
+
+ if (left > PY_SSIZE_T_MAX - _PyUnicode_LENGTH(self) ||
+ right > PY_SSIZE_T_MAX - (left + _PyUnicode_LENGTH(self))) {
+ PyErr_SetString(PyExc_OverflowError, "padded string is too long");
+ return NULL;
+ }
+ maxchar = PyUnicode_MAX_CHAR_VALUE(self);
+ maxchar = Py_MAX(maxchar, fill);
+ u = PyUnicode_New(left + _PyUnicode_LENGTH(self) + right, maxchar);
+ if (!u)
+ return NULL;
+
+ kind = PyUnicode_KIND(u);
+ data = PyUnicode_DATA(u);
+ if (left)
+ FILL(kind, data, fill, 0, left);
+ if (right)
+ FILL(kind, data, fill, left + _PyUnicode_LENGTH(self), right);
+ _PyUnicode_FastCopyCharacters(u, left, self, 0, _PyUnicode_LENGTH(self));
+ assert(_PyUnicode_CheckConsistency(u, 1));
+ return u;
+}
+
+PyObject *
+PyUnicode_Splitlines(PyObject *string, int keepends)
+{
+ PyObject *list;
+
+ if (ensure_unicode(string) < 0)
+ return NULL;
+
+ switch (PyUnicode_KIND(string)) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(string))
+ list = asciilib_splitlines(
+ string, PyUnicode_1BYTE_DATA(string),
+ PyUnicode_GET_LENGTH(string), keepends);
+ else
+ list = ucs1lib_splitlines(
+ string, PyUnicode_1BYTE_DATA(string),
+ PyUnicode_GET_LENGTH(string), keepends);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ list = ucs2lib_splitlines(
+ string, PyUnicode_2BYTE_DATA(string),
+ PyUnicode_GET_LENGTH(string), keepends);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ list = ucs4lib_splitlines(
+ string, PyUnicode_4BYTE_DATA(string),
+ PyUnicode_GET_LENGTH(string), keepends);
+ break;
+ default:
+ assert(0);
+ list = 0;
+ }
+ return list;
+}
+
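+/* Implementation behind str.split(): with substring == NULL, split on
+ runs of whitespace; otherwise split on 'substring', widening its buffer
+ to the kind of 'self' when needed and dispatching to the matching
+ ascii/ucs1/ucs2/ucs4 stringlib routine. rsplit() below mirrors this
+ logic from the right-hand end. */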
+static PyObject *
+split(PyObject *self,
+ PyObject *substring,
+ Py_ssize_t maxcount)
+{
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2;
+ PyObject* out;
+
+ if (maxcount < 0)
+ maxcount = PY_SSIZE_T_MAX;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (substring == NULL)
+ switch (PyUnicode_KIND(self)) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(self))
+ return asciilib_split_whitespace(
+ self, PyUnicode_1BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ else
+ return ucs1lib_split_whitespace(
+ self, PyUnicode_1BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_split_whitespace(
+ self, PyUnicode_2BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_split_whitespace(
+ self, PyUnicode_4BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ default:
+ assert(0);
+ return NULL;
+ }
+
+ if (PyUnicode_READY(substring) == -1)
+ return NULL;
+
+ kind1 = PyUnicode_KIND(self);
+ kind2 = PyUnicode_KIND(substring);
+ len1 = PyUnicode_GET_LENGTH(self);
+ len2 = PyUnicode_GET_LENGTH(substring);
+ if (kind1 < kind2 || len1 < len2) {
+ out = PyList_New(1);
+ if (out == NULL)
+ return NULL;
+ Py_INCREF(self);
+ PyList_SET_ITEM(out, 0, self);
+ return out;
+ }
+ buf1 = PyUnicode_DATA(self);
+ buf2 = PyUnicode_DATA(substring);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(substring, kind1);
+ if (!buf2)
+ return NULL;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(self) && PyUnicode_IS_ASCII(substring))
+ out = asciilib_split(
+ self, buf1, len1, buf2, len2, maxcount);
+ else
+ out = ucs1lib_split(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ out = ucs2lib_split(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ out = ucs4lib_split(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ default:
+ out = NULL;
+ }
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+ return out;
+}
+
+static PyObject *
+rsplit(PyObject *self,
+ PyObject *substring,
+ Py_ssize_t maxcount)
+{
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2;
+ PyObject* out;
+
+ if (maxcount < 0)
+ maxcount = PY_SSIZE_T_MAX;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (substring == NULL)
+ switch (PyUnicode_KIND(self)) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(self))
+ return asciilib_rsplit_whitespace(
+ self, PyUnicode_1BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ else
+ return ucs1lib_rsplit_whitespace(
+ self, PyUnicode_1BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_rsplit_whitespace(
+ self, PyUnicode_2BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_rsplit_whitespace(
+ self, PyUnicode_4BYTE_DATA(self),
+ PyUnicode_GET_LENGTH(self), maxcount
+ );
+ default:
+ assert(0);
+ return NULL;
+ }
+
+ if (PyUnicode_READY(substring) == -1)
+ return NULL;
+
+ kind1 = PyUnicode_KIND(self);
+ kind2 = PyUnicode_KIND(substring);
+ len1 = PyUnicode_GET_LENGTH(self);
+ len2 = PyUnicode_GET_LENGTH(substring);
+ if (kind1 < kind2 || len1 < len2) {
+ out = PyList_New(1);
+ if (out == NULL)
+ return NULL;
+ Py_INCREF(self);
+ PyList_SET_ITEM(out, 0, self);
+ return out;
+ }
+ buf1 = PyUnicode_DATA(self);
+ buf2 = PyUnicode_DATA(substring);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(substring, kind1);
+ if (!buf2)
+ return NULL;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(self) && PyUnicode_IS_ASCII(substring))
+ out = asciilib_rsplit(
+ self, buf1, len1, buf2, len2, maxcount);
+ else
+ out = ucs1lib_rsplit(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ out = ucs2lib_rsplit(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ out = ucs4lib_rsplit(
+ self, buf1, len1, buf2, len2, maxcount);
+ break;
+ default:
+ out = NULL;
+ }
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+ return out;
+}
+
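+/* Thin wrappers that dispatch find/count to the stringlib implementation
+ matching 'kind'; both buffers must already use that kind. */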
+static Py_ssize_t
+anylib_find(int kind, PyObject *str1, void *buf1, Py_ssize_t len1,
+ PyObject *str2, void *buf2, Py_ssize_t len2, Py_ssize_t offset)
+{
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(str1) && PyUnicode_IS_ASCII(str2))
+ return asciilib_find(buf1, len1, buf2, len2, offset);
+ else
+ return ucs1lib_find(buf1, len1, buf2, len2, offset);
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_find(buf1, len1, buf2, len2, offset);
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_find(buf1, len1, buf2, len2, offset);
+ }
+ assert(0);
+ return -1;
+}
+
+static Py_ssize_t
+anylib_count(int kind, PyObject *sstr, void* sbuf, Py_ssize_t slen,
+ PyObject *str1, void *buf1, Py_ssize_t len1, Py_ssize_t maxcount)
+{
+ switch (kind) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(sstr) && PyUnicode_IS_ASCII(str1))
+ return asciilib_count(sbuf, slen, buf1, len1, maxcount);
+ else
+ return ucs1lib_count(sbuf, slen, buf1, len1, maxcount);
+ case PyUnicode_2BYTE_KIND:
+ return ucs2lib_count(sbuf, slen, buf1, len1, maxcount);
+ case PyUnicode_4BYTE_KIND:
+ return ucs4lib_count(sbuf, slen, buf1, len1, maxcount);
+ }
+ assert(0);
+ return 0;
+}
+
+static void
+replace_1char_inplace(PyObject *u, Py_ssize_t pos,
+ Py_UCS4 u1, Py_UCS4 u2, Py_ssize_t maxcount)
+{
+ int kind = PyUnicode_KIND(u);
+ void *data = PyUnicode_DATA(u);
+ Py_ssize_t len = PyUnicode_GET_LENGTH(u);
+ if (kind == PyUnicode_1BYTE_KIND) {
+ ucs1lib_replace_1char_inplace((Py_UCS1 *)data + pos,
+ (Py_UCS1 *)data + len,
+ u1, u2, maxcount);
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ ucs2lib_replace_1char_inplace((Py_UCS2 *)data + pos,
+ (Py_UCS2 *)data + len,
+ u1, u2, maxcount);
+ }
+ else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ ucs4lib_replace_1char_inplace((Py_UCS4 *)data + pos,
+ (Py_UCS4 *)data + len,
+ u1, u2, maxcount);
+ }
+}
+
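+/* Implementation behind str.replace(): a single-character replacement is
+ done in place on a copy, an equal-length replacement overwrites each
+ match in a copy, and a length-changing replacement rebuilds the string
+ at its new size. If nothing matches, the string is returned unchanged. */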
+static PyObject *
+replace(PyObject *self, PyObject *str1,
+ PyObject *str2, Py_ssize_t maxcount)
+{
+ PyObject *u;
+ char *sbuf = PyUnicode_DATA(self);
+ char *buf1 = PyUnicode_DATA(str1);
+ char *buf2 = PyUnicode_DATA(str2);
+ int srelease = 0, release1 = 0, release2 = 0;
+ int skind = PyUnicode_KIND(self);
+ int kind1 = PyUnicode_KIND(str1);
+ int kind2 = PyUnicode_KIND(str2);
+ Py_ssize_t slen = PyUnicode_GET_LENGTH(self);
+ Py_ssize_t len1 = PyUnicode_GET_LENGTH(str1);
+ Py_ssize_t len2 = PyUnicode_GET_LENGTH(str2);
+ int mayshrink;
+ Py_UCS4 maxchar, maxchar_str1, maxchar_str2;
+
+ if (maxcount < 0)
+ maxcount = PY_SSIZE_T_MAX;
+ else if (maxcount == 0 || slen == 0)
+ goto nothing;
+
+ if (str1 == str2)
+ goto nothing;
+
+ maxchar = PyUnicode_MAX_CHAR_VALUE(self);
+ maxchar_str1 = PyUnicode_MAX_CHAR_VALUE(str1);
+ if (maxchar < maxchar_str1)
+ /* substring too wide to be present */
+ goto nothing;
+ maxchar_str2 = PyUnicode_MAX_CHAR_VALUE(str2);
+ /* Replacing str1 with str2 may cause a maxchar reduction in the
+ result string. */
+ mayshrink = (maxchar_str2 < maxchar_str1) && (maxchar == maxchar_str1);
+ maxchar = Py_MAX(maxchar, maxchar_str2);
+
+ if (len1 == len2) {
+ /* same length */
+ if (len1 == 0)
+ goto nothing;
+ if (len1 == 1) {
+ /* replace characters */
+ Py_UCS4 u1, u2;
+ Py_ssize_t pos;
+
+ u1 = PyUnicode_READ(kind1, buf1, 0);
+ pos = findchar(sbuf, skind, slen, u1, 1);
+ if (pos < 0)
+ goto nothing;
+ u2 = PyUnicode_READ(kind2, buf2, 0);
+ u = PyUnicode_New(slen, maxchar);
+ if (!u)
+ goto error;
+
+ _PyUnicode_FastCopyCharacters(u, 0, self, 0, slen);
+ replace_1char_inplace(u, pos, u1, u2, maxcount);
+ }
+ else {
+ int rkind = skind;
+ char *res;
+ Py_ssize_t i;
+
+ if (kind1 < rkind) {
+ /* widen substring */
+ buf1 = _PyUnicode_AsKind(str1, rkind);
+ if (!buf1) goto error;
+ release1 = 1;
+ }
+ i = anylib_find(rkind, self, sbuf, slen, str1, buf1, len1, 0);
+ if (i < 0)
+ goto nothing;
+ if (rkind > kind2) {
+ /* widen replacement */
+ buf2 = _PyUnicode_AsKind(str2, rkind);
+ if (!buf2) goto error;
+ release2 = 1;
+ }
+ else if (rkind < kind2) {
+ /* widen self and buf1 */
+ rkind = kind2;
+ if (release1) PyMem_Free(buf1);
+ release1 = 0;
+ sbuf = _PyUnicode_AsKind(self, rkind);
+ if (!sbuf) goto error;
+ srelease = 1;
+ buf1 = _PyUnicode_AsKind(str1, rkind);
+ if (!buf1) goto error;
+ release1 = 1;
+ }
+ u = PyUnicode_New(slen, maxchar);
+ if (!u)
+ goto error;
+ assert(PyUnicode_KIND(u) == rkind);
+ res = PyUnicode_DATA(u);
+
+ memcpy(res, sbuf, rkind * slen);
+ /* change everything in-place, starting with this one */
+ memcpy(res + rkind * i,
+ buf2,
+ rkind * len2);
+ i += len1;
+
+ while ( --maxcount > 0) {
+ i = anylib_find(rkind, self,
+ sbuf+rkind*i, slen-i,
+ str1, buf1, len1, i);
+ if (i == -1)
+ break;
+ memcpy(res + rkind * i,
+ buf2,
+ rkind * len2);
+ i += len1;
+ }
+ }
+ }
+ else {
+ Py_ssize_t n, i, j, ires;
+ Py_ssize_t new_size;
+ int rkind = skind;
+ char *res;
+
+ if (kind1 < rkind) {
+ /* widen substring */
+ buf1 = _PyUnicode_AsKind(str1, rkind);
+ if (!buf1) goto error;
+ release1 = 1;
+ }
+ n = anylib_count(rkind, self, sbuf, slen, str1, buf1, len1, maxcount);
+ if (n == 0)
+ goto nothing;
+ if (kind2 < rkind) {
+ /* widen replacement */
+ buf2 = _PyUnicode_AsKind(str2, rkind);
+ if (!buf2) goto error;
+ release2 = 1;
+ }
+ else if (kind2 > rkind) {
+ /* widen self and buf1 */
+ rkind = kind2;
+ sbuf = _PyUnicode_AsKind(self, rkind);
+ if (!sbuf) goto error;
+ srelease = 1;
+ if (release1) PyMem_Free(buf1);
+ release1 = 0;
+ buf1 = _PyUnicode_AsKind(str1, rkind);
+ if (!buf1) goto error;
+ release1 = 1;
+ }
+ /* new_size = PyUnicode_GET_LENGTH(self) + n * (PyUnicode_GET_LENGTH(str2) -
+ PyUnicode_GET_LENGTH(str1)); */
+ if (len1 < len2 && len2 - len1 > (PY_SSIZE_T_MAX - slen) / n) {
+ PyErr_SetString(PyExc_OverflowError,
+ "replace string is too long");
+ goto error;
+ }
+ new_size = slen + n * (len2 - len1);
+ if (new_size == 0) {
+ _Py_INCREF_UNICODE_EMPTY();
+ if (!unicode_empty)
+ goto error;
+ u = unicode_empty;
+ goto done;
+ }
+ if (new_size > (PY_SSIZE_T_MAX / rkind)) {
+ PyErr_SetString(PyExc_OverflowError,
+ "replace string is too long");
+ goto error;
+ }
+ u = PyUnicode_New(new_size, maxchar);
+ if (!u)
+ goto error;
+ assert(PyUnicode_KIND(u) == rkind);
+ res = PyUnicode_DATA(u);
+ ires = i = 0;
+ if (len1 > 0) {
+ while (n-- > 0) {
+ /* look for next match */
+ j = anylib_find(rkind, self,
+ sbuf + rkind * i, slen-i,
+ str1, buf1, len1, i);
+ if (j == -1)
+ break;
+ else if (j > i) {
+ /* copy unchanged part [i:j] */
+ memcpy(res + rkind * ires,
+ sbuf + rkind * i,
+ rkind * (j-i));
+ ires += j - i;
+ }
+ /* copy substitution string */
+ if (len2 > 0) {
+ memcpy(res + rkind * ires,
+ buf2,
+ rkind * len2);
+ ires += len2;
+ }
+ i = j + len1;
+ }
+ if (i < slen)
+ /* copy tail [i:] */
+ memcpy(res + rkind * ires,
+ sbuf + rkind * i,
+ rkind * (slen-i));
+ }
+ else {
+ /* interleave */
+ while (n > 0) {
+ memcpy(res + rkind * ires,
+ buf2,
+ rkind * len2);
+ ires += len2;
+ if (--n <= 0)
+ break;
+ memcpy(res + rkind * ires,
+ sbuf + rkind * i,
+ rkind);
+ ires++;
+ i++;
+ }
+ memcpy(res + rkind * ires,
+ sbuf + rkind * i,
+ rkind * (slen-i));
+ }
+ }
+
+ if (mayshrink) {
+ unicode_adjust_maxchar(&u);
+ if (u == NULL)
+ goto error;
+ }
+
+ done:
+ if (srelease)
+ PyMem_FREE(sbuf);
+ if (release1)
+ PyMem_FREE(buf1);
+ if (release2)
+ PyMem_FREE(buf2);
+ assert(_PyUnicode_CheckConsistency(u, 1));
+ return u;
+
+ nothing:
+ /* nothing to replace; return original string (when possible) */
+ if (srelease)
+ PyMem_FREE(sbuf);
+ if (release1)
+ PyMem_FREE(buf1);
+ if (release2)
+ PyMem_FREE(buf2);
+ return unicode_result_unchanged(self);
+
+ error:
+ if (srelease && sbuf)
+ PyMem_FREE(sbuf);
+ if (release1 && buf1)
+ PyMem_FREE(buf1);
+ if (release2 && buf2)
+ PyMem_FREE(buf2);
+ return NULL;
+}
+
+/* --- Unicode Object Methods --------------------------------------------- */
+
+PyDoc_STRVAR(title__doc__,
+ "S.title() -> str\n\
+\n\
+Return a titlecased version of S, i.e. words start with title case\n\
+characters, all remaining cased characters have lower case.");
+
+static PyObject*
+unicode_title(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ return case_operation(self, do_title);
+}
+
+PyDoc_STRVAR(capitalize__doc__,
+ "S.capitalize() -> str\n\
+\n\
+Return a capitalized version of S, i.e. make the first character\n\
+have upper case and the rest lower case.");
+
+static PyObject*
+unicode_capitalize(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ if (PyUnicode_GET_LENGTH(self) == 0)
+ return unicode_result_unchanged(self);
+ return case_operation(self, do_capitalize);
+}
+
+PyDoc_STRVAR(casefold__doc__,
+ "S.casefold() -> str\n\
+\n\
+Return a version of S suitable for caseless comparisons.");
+
+static PyObject *
+unicode_casefold(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ if (PyUnicode_IS_ASCII(self))
+ return ascii_upper_or_lower(self, 1);
+ return case_operation(self, do_casefold);
+}
+
+
+/* Argument converter. Accepts a single Unicode character. */
+
+static int
+convert_uc(PyObject *obj, void *addr)
+{
+ Py_UCS4 *fillcharloc = (Py_UCS4 *)addr;
+
+ if (!PyUnicode_Check(obj)) {
+ PyErr_Format(PyExc_TypeError,
+ "The fill character must be a unicode character, "
+ "not %.100s", Py_TYPE(obj)->tp_name);
+ return 0;
+ }
+ if (PyUnicode_READY(obj) < 0)
+ return 0;
+ if (PyUnicode_GET_LENGTH(obj) != 1) {
+ PyErr_SetString(PyExc_TypeError,
+ "The fill character must be exactly one character long");
+ return 0;
+ }
+ *fillcharloc = PyUnicode_READ_CHAR(obj, 0);
+ return 1;
+}
+
+PyDoc_STRVAR(center__doc__,
+ "S.center(width[, fillchar]) -> str\n\
+\n\
+Return S centered in a string of length width. Padding is\n\
+done using the specified fill character (default is a space)");
+
+static PyObject *
+unicode_center(PyObject *self, PyObject *args)
+{
+ Py_ssize_t marg, left;
+ Py_ssize_t width;
+ Py_UCS4 fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|O&:center", &width, convert_uc, &fillchar))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (PyUnicode_GET_LENGTH(self) >= width)
+ return unicode_result_unchanged(self);
+
+ marg = width - PyUnicode_GET_LENGTH(self);
+ left = marg / 2 + (marg & width & 1);
+
+ return pad(self, left, marg - left, fillchar);
+}
+
+/* This function assumes that str1 and str2 are readied by the caller. */
+
+static int
+unicode_compare(PyObject *str1, PyObject *str2)
+{
+#define COMPARE(TYPE1, TYPE2) \
+ do { \
+ TYPE1* p1 = (TYPE1 *)data1; \
+ TYPE2* p2 = (TYPE2 *)data2; \
+ TYPE1* end = p1 + len; \
+ Py_UCS4 c1, c2; \
+ for (; p1 != end; p1++, p2++) { \
+ c1 = *p1; \
+ c2 = *p2; \
+ if (c1 != c2) \
+ return (c1 < c2) ? -1 : 1; \
+ } \
+ } \
+ while (0)
+
+ int kind1, kind2;
+ void *data1, *data2;
+ Py_ssize_t len1, len2, len;
+
+ kind1 = PyUnicode_KIND(str1);
+ kind2 = PyUnicode_KIND(str2);
+ data1 = PyUnicode_DATA(str1);
+ data2 = PyUnicode_DATA(str2);
+ len1 = PyUnicode_GET_LENGTH(str1);
+ len2 = PyUnicode_GET_LENGTH(str2);
+ len = Py_MIN(len1, len2);
+
+ switch(kind1) {
+ case PyUnicode_1BYTE_KIND:
+ {
+ switch(kind2) {
+ case PyUnicode_1BYTE_KIND:
+ {
+ int cmp = memcmp(data1, data2, len);
+ /* normalize result of memcmp() into the range [-1; 1] */
+ if (cmp < 0)
+ return -1;
+ if (cmp > 0)
+ return 1;
+ break;
+ }
+ case PyUnicode_2BYTE_KIND:
+ COMPARE(Py_UCS1, Py_UCS2);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ COMPARE(Py_UCS1, Py_UCS4);
+ break;
+ default:
+ assert(0);
+ }
+ break;
+ }
+ case PyUnicode_2BYTE_KIND:
+ {
+ switch(kind2) {
+ case PyUnicode_1BYTE_KIND:
+ COMPARE(Py_UCS2, Py_UCS1);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ {
+ COMPARE(Py_UCS2, Py_UCS2);
+ break;
+ }
+ case PyUnicode_4BYTE_KIND:
+ COMPARE(Py_UCS2, Py_UCS4);
+ break;
+ default:
+ assert(0);
+ }
+ break;
+ }
+ case PyUnicode_4BYTE_KIND:
+ {
+ switch(kind2) {
+ case PyUnicode_1BYTE_KIND:
+ COMPARE(Py_UCS4, Py_UCS1);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ COMPARE(Py_UCS4, Py_UCS2);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ {
+#if defined(HAVE_WMEMCMP) && SIZEOF_WCHAR_T == 4
+ int cmp = wmemcmp((wchar_t *)data1, (wchar_t *)data2, len);
+ /* normalize result of wmemcmp() into the range [-1; 1] */
+ if (cmp < 0)
+ return -1;
+ if (cmp > 0)
+ return 1;
+#else
+ COMPARE(Py_UCS4, Py_UCS4);
+#endif
+ break;
+ }
+ default:
+ assert(0);
+ }
+ break;
+ }
+ default:
+ assert(0);
+ }
+
+ if (len1 == len2)
+ return 0;
+ if (len1 < len2)
+ return -1;
+ else
+ return 1;
+
+#undef COMPARE
+}
+
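+/* Equality-only comparison: ready canonical strings are equal exactly
+ when they have the same length, the same kind and identical raw data,
+ so a single memcmp() over len * kind bytes is enough. */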
+static int
+unicode_compare_eq(PyObject *str1, PyObject *str2)
+{
+ int kind;
+ void *data1, *data2;
+ Py_ssize_t len;
+ int cmp;
+
+ len = PyUnicode_GET_LENGTH(str1);
+ if (PyUnicode_GET_LENGTH(str2) != len)
+ return 0;
+ kind = PyUnicode_KIND(str1);
+ if (PyUnicode_KIND(str2) != kind)
+ return 0;
+ data1 = PyUnicode_DATA(str1);
+ data2 = PyUnicode_DATA(str2);
+
+ cmp = memcmp(data1, data2, len * kind);
+ return (cmp == 0);
+}
+
+
+int
+PyUnicode_Compare(PyObject *left, PyObject *right)
+{
+ if (PyUnicode_Check(left) && PyUnicode_Check(right)) {
+ if (PyUnicode_READY(left) == -1 ||
+ PyUnicode_READY(right) == -1)
+ return -1;
+
+ /* a string is equal to itself */
+ if (left == right)
+ return 0;
+
+ return unicode_compare(left, right);
+ }
+ PyErr_Format(PyExc_TypeError,
+ "Can't compare %.100s and %.100s",
+ left->ob_type->tp_name,
+ right->ob_type->tp_name);
+ return -1;
+}
+
+int
+PyUnicode_CompareWithASCIIString(PyObject* uni, const char* str)
+{
+ Py_ssize_t i;
+ int kind;
+ Py_UCS4 chr;
+ const unsigned char *ustr = (const unsigned char *)str;
+
+ assert(_PyUnicode_CHECK(uni));
+ if (!PyUnicode_IS_READY(uni)) {
+ const wchar_t *ws = _PyUnicode_WSTR(uni);
+ /* Compare Unicode string and source character set string */
+ for (i = 0; (chr = ws[i]) && ustr[i]; i++) {
+ if (chr != ustr[i])
+ return (chr < ustr[i]) ? -1 : 1;
+ }
+ /* This check keeps Python strings that end in '\0' from comparing equal
+ to C strings identical up to that point. */
+ if (_PyUnicode_WSTR_LENGTH(uni) != i || chr)
+ return 1; /* uni is longer */
+ if (ustr[i])
+ return -1; /* str is longer */
+ return 0;
+ }
+ kind = PyUnicode_KIND(uni);
+ if (kind == PyUnicode_1BYTE_KIND) {
+ const void *data = PyUnicode_1BYTE_DATA(uni);
+ size_t len1 = (size_t)PyUnicode_GET_LENGTH(uni);
+ size_t len, len2 = strlen(str);
+ int cmp;
+
+ len = Py_MIN(len1, len2);
+ cmp = memcmp(data, str, len);
+ if (cmp != 0) {
+ if (cmp < 0)
+ return -1;
+ else
+ return 1;
+ }
+ if (len1 > len2)
+ return 1; /* uni is longer */
+ if (len1 < len2)
+ return -1; /* str is longer */
+ return 0;
+ }
+ else {
+ void *data = PyUnicode_DATA(uni);
+ /* Compare Unicode string and source character set string */
+ for (i = 0; (chr = PyUnicode_READ(kind, data, i)) && str[i]; i++)
+ if (chr != (unsigned char)str[i])
+ return (chr < (unsigned char)(str[i])) ? -1 : 1;
+ /* This check keeps Python strings that end in '\0' from comparing equal
+ to C strings identical up to that point. */
+ if (PyUnicode_GET_LENGTH(uni) != i || chr)
+ return 1; /* uni is longer */
+ if (str[i])
+ return -1; /* str is longer */
+ return 0;
+ }
+}
+
+static int
+non_ready_unicode_equal_to_ascii_string(PyObject *unicode, const char *str)
+{
+ size_t i, len;
+ const wchar_t *p;
+ len = (size_t)_PyUnicode_WSTR_LENGTH(unicode);
+ if (strlen(str) != len)
+ return 0;
+ p = _PyUnicode_WSTR(unicode);
+ assert(p);
+ for (i = 0; i < len; i++) {
+ unsigned char c = (unsigned char)str[i];
+ if (c >= 128 || p[i] != (wchar_t)c)
+ return 0;
+ }
+ return 1;
+}
+
+int
+_PyUnicode_EqualToASCIIString(PyObject *unicode, const char *str)
+{
+ size_t len;
+ assert(_PyUnicode_CHECK(unicode));
+ assert(str);
+#ifndef NDEBUG
+ for (const char *p = str; *p; p++) {
+ assert((unsigned char)*p < 128);
+ }
+#endif
+ if (PyUnicode_READY(unicode) == -1) {
+ /* Memory error or bad data */
+ PyErr_Clear();
+ return non_ready_unicode_equal_to_ascii_string(unicode, str);
+ }
+ if (!PyUnicode_IS_ASCII(unicode))
+ return 0;
+ len = (size_t)PyUnicode_GET_LENGTH(unicode);
+ return strlen(str) == len &&
+ memcmp(PyUnicode_1BYTE_DATA(unicode), str, len) == 0;
+}
+
+int
+_PyUnicode_EqualToASCIIId(PyObject *left, _Py_Identifier *right)
+{
+ PyObject *right_uni;
+ Py_hash_t hash;
+
+ assert(_PyUnicode_CHECK(left));
+ assert(right->string);
+#ifndef NDEBUG
+ for (const char *p = right->string; *p; p++) {
+ assert((unsigned char)*p < 128);
+ }
+#endif
+
+ if (PyUnicode_READY(left) == -1) {
+ /* memory error or bad data */
+ PyErr_Clear();
+ return non_ready_unicode_equal_to_ascii_string(left, right->string);
+ }
+
+ if (!PyUnicode_IS_ASCII(left))
+ return 0;
+
+ right_uni = _PyUnicode_FromId(right); /* borrowed */
+ if (right_uni == NULL) {
+ /* memory error or bad data */
+ PyErr_Clear();
+ return _PyUnicode_EqualToASCIIString(left, right->string);
+ }
+
+ if (left == right_uni)
+ return 1;
+
+ if (PyUnicode_CHECK_INTERNED(left))
+ return 0;
+
+ assert(_PyUnicode_HASH(right_uni) != -1);
+ hash = _PyUnicode_HASH(left);
+ if (hash != -1 && hash != _PyUnicode_HASH(right_uni))
+ return 0;
+
+ return unicode_compare_eq(left, right_uni);
+}
+
+#define TEST_COND(cond) \
+ ((cond) ? Py_True : Py_False)
+
+PyObject *
+PyUnicode_RichCompare(PyObject *left, PyObject *right, int op)
+{
+ int result;
+ PyObject *v;
+
+ if (!PyUnicode_Check(left) || !PyUnicode_Check(right))
+ Py_RETURN_NOTIMPLEMENTED;
+
+ if (PyUnicode_READY(left) == -1 ||
+ PyUnicode_READY(right) == -1)
+ return NULL;
+
+ if (left == right) {
+ switch (op) {
+ case Py_EQ:
+ case Py_LE:
+ case Py_GE:
+ /* a string is equal to itself */
+ v = Py_True;
+ break;
+ case Py_NE:
+ case Py_LT:
+ case Py_GT:
+ v = Py_False;
+ break;
+ default:
+ PyErr_BadArgument();
+ return NULL;
+ }
+ }
+ else if (op == Py_EQ || op == Py_NE) {
+ result = unicode_compare_eq(left, right);
+ result ^= (op == Py_NE);
+ v = TEST_COND(result);
+ }
+ else {
+ result = unicode_compare(left, right);
+
+ /* Convert the return value to a Boolean */
+ switch (op) {
+ case Py_LE:
+ v = TEST_COND(result <= 0);
+ break;
+ case Py_GE:
+ v = TEST_COND(result >= 0);
+ break;
+ case Py_LT:
+ v = TEST_COND(result == -1);
+ break;
+ case Py_GT:
+ v = TEST_COND(result == 1);
+ break;
+ default:
+ PyErr_BadArgument();
+ return NULL;
+ }
+ }
+ Py_INCREF(v);
+ return v;
+}
+
+int
+_PyUnicode_EQ(PyObject *aa, PyObject *bb)
+{
+ return unicode_eq(aa, bb);
+}
+
+int
+PyUnicode_Contains(PyObject *str, PyObject *substr)
+{
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2;
+ int result;
+
+ if (!PyUnicode_Check(substr)) {
+ PyErr_Format(PyExc_TypeError,
+ "'in <string>' requires string as left operand, not %.100s",
+ Py_TYPE(substr)->tp_name);
+ return -1;
+ }
+ if (PyUnicode_READY(substr) == -1)
+ return -1;
+ if (ensure_unicode(str) < 0)
+ return -1;
+
+ kind1 = PyUnicode_KIND(str);
+ kind2 = PyUnicode_KIND(substr);
+ if (kind1 < kind2)
+ return 0;
+ len1 = PyUnicode_GET_LENGTH(str);
+ len2 = PyUnicode_GET_LENGTH(substr);
+ if (len1 < len2)
+ return 0;
+ buf1 = PyUnicode_DATA(str);
+ buf2 = PyUnicode_DATA(substr);
+ if (len2 == 1) {
+ Py_UCS4 ch = PyUnicode_READ(kind2, buf2, 0);
+ result = findchar((const char *)buf1, kind1, len1, ch, 1) != -1;
+ return result;
+ }
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(substr, kind1);
+ if (!buf2)
+ return -1;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ result = ucs1lib_find(buf1, len1, buf2, len2, 0) != -1;
+ break;
+ case PyUnicode_2BYTE_KIND:
+ result = ucs2lib_find(buf1, len1, buf2, len2, 0) != -1;
+ break;
+ case PyUnicode_4BYTE_KIND:
+ result = ucs4lib_find(buf1, len1, buf2, len2, 0) != -1;
+ break;
+ default:
+ result = -1;
+ assert(0);
+ }
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return result;
+}
+
+/* Concatenate two str objects, giving a new Unicode object. */
+
+PyObject *
+PyUnicode_Concat(PyObject *left, PyObject *right)
+{
+ PyObject *result;
+ Py_UCS4 maxchar, maxchar2;
+ Py_ssize_t left_len, right_len, new_len;
+
+ if (ensure_unicode(left) < 0 || ensure_unicode(right) < 0)
+ return NULL;
+
+ /* Shortcuts */
+ if (left == unicode_empty)
+ return PyUnicode_FromObject(right);
+ if (right == unicode_empty)
+ return PyUnicode_FromObject(left);
+
+ left_len = PyUnicode_GET_LENGTH(left);
+ right_len = PyUnicode_GET_LENGTH(right);
+ if (left_len > PY_SSIZE_T_MAX - right_len) {
+ PyErr_SetString(PyExc_OverflowError,
+ "strings are too large to concat");
+ return NULL;
+ }
+ new_len = left_len + right_len;
+
+ maxchar = PyUnicode_MAX_CHAR_VALUE(left);
+ maxchar2 = PyUnicode_MAX_CHAR_VALUE(right);
+ maxchar = Py_MAX(maxchar, maxchar2);
+
+ /* Concat the two Unicode strings */
+ result = PyUnicode_New(new_len, maxchar);
+ if (result == NULL)
+ return NULL;
+ _PyUnicode_FastCopyCharacters(result, 0, left, 0, left_len);
+ _PyUnicode_FastCopyCharacters(result, left_len, right, 0, right_len);
+ assert(_PyUnicode_CheckConsistency(result, 1));
+ return result;
+}
+
+void
+PyUnicode_Append(PyObject **p_left, PyObject *right)
+{
+ PyObject *left, *res;
+ Py_UCS4 maxchar, maxchar2;
+ Py_ssize_t left_len, right_len, new_len;
+
+ if (p_left == NULL) {
+ if (!PyErr_Occurred())
+ PyErr_BadInternalCall();
+ return;
+ }
+ left = *p_left;
+ if (right == NULL || left == NULL
+ || !PyUnicode_Check(left) || !PyUnicode_Check(right)) {
+ if (!PyErr_Occurred())
+ PyErr_BadInternalCall();
+ goto error;
+ }
+
+ if (PyUnicode_READY(left) == -1)
+ goto error;
+ if (PyUnicode_READY(right) == -1)
+ goto error;
+
+ /* Shortcuts */
+ if (left == unicode_empty) {
+ Py_DECREF(left);
+ Py_INCREF(right);
+ *p_left = right;
+ return;
+ }
+ if (right == unicode_empty)
+ return;
+
+ left_len = PyUnicode_GET_LENGTH(left);
+ right_len = PyUnicode_GET_LENGTH(right);
+ if (left_len > PY_SSIZE_T_MAX - right_len) {
+ PyErr_SetString(PyExc_OverflowError,
+ "strings are too large to concat");
+ goto error;
+ }
+ new_len = left_len + right_len;
+
+ if (unicode_modifiable(left)
+ && PyUnicode_CheckExact(right)
+ && PyUnicode_KIND(right) <= PyUnicode_KIND(left)
+        /* Don't resize for ascii += latin1. Converting ascii to latin1 requires
+           changing the structure size, but characters are stored just after
+           the structure, so all characters would have to be moved, which is
+           not much different from duplicating the string. */
+ && !(PyUnicode_IS_ASCII(left) && !PyUnicode_IS_ASCII(right)))
+ {
+ /* append inplace */
+ if (unicode_resize(p_left, new_len) != 0)
+ goto error;
+
+ /* copy 'right' into the newly allocated area of 'left' */
+ _PyUnicode_FastCopyCharacters(*p_left, left_len, right, 0, right_len);
+ }
+ else {
+ maxchar = PyUnicode_MAX_CHAR_VALUE(left);
+ maxchar2 = PyUnicode_MAX_CHAR_VALUE(right);
+ maxchar = Py_MAX(maxchar, maxchar2);
+
+ /* Concat the two Unicode strings */
+ res = PyUnicode_New(new_len, maxchar);
+ if (res == NULL)
+ goto error;
+ _PyUnicode_FastCopyCharacters(res, 0, left, 0, left_len);
+ _PyUnicode_FastCopyCharacters(res, left_len, right, 0, right_len);
+ Py_DECREF(left);
+ *p_left = res;
+ }
+ assert(_PyUnicode_CheckConsistency(*p_left, 1));
+ return;
+
+error:
+ Py_CLEAR(*p_left);
+}
+
+void
+PyUnicode_AppendAndDel(PyObject **pleft, PyObject *right)
+{
+ PyUnicode_Append(pleft, right);
+ Py_XDECREF(right);
+}
+
+/*
+Wraps stringlib_parse_args_finds() and additionally ensures that the
+first argument is a unicode object.
+*/
+
+static int
+parse_args_finds_unicode(const char * function_name, PyObject *args,
+ PyObject **substring,
+ Py_ssize_t *start, Py_ssize_t *end)
+{
+ if(stringlib_parse_args_finds(function_name, args, substring,
+ start, end)) {
+ if (ensure_unicode(*substring) < 0)
+ return 0;
+ return 1;
+ }
+ return 0;
+}
+
+PyDoc_STRVAR(count__doc__,
+ "S.count(sub[, start[, end]]) -> int\n\
+\n\
+Return the number of non-overlapping occurrences of substring sub in\n\
+string S[start:end]. Optional arguments start and end are\n\
+interpreted as in slice notation.");
+
+static PyObject *
+unicode_count(PyObject *self, PyObject *args)
+{
+ PyObject *substring = NULL; /* initialize to fix a compiler warning */
+ Py_ssize_t start = 0;
+ Py_ssize_t end = PY_SSIZE_T_MAX;
+ PyObject *result;
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2, iresult;
+
+ if (!parse_args_finds_unicode("count", args, &substring, &start, &end))
+ return NULL;
+
+ kind1 = PyUnicode_KIND(self);
+ kind2 = PyUnicode_KIND(substring);
+ if (kind1 < kind2)
+ return PyLong_FromLong(0);
+
+ len1 = PyUnicode_GET_LENGTH(self);
+ len2 = PyUnicode_GET_LENGTH(substring);
+ ADJUST_INDICES(start, end, len1);
+ if (end - start < len2)
+ return PyLong_FromLong(0);
+
+ buf1 = PyUnicode_DATA(self);
+ buf2 = PyUnicode_DATA(substring);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(substring, kind1);
+ if (!buf2)
+ return NULL;
+ }
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ iresult = ucs1lib_count(
+ ((Py_UCS1*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ case PyUnicode_2BYTE_KIND:
+ iresult = ucs2lib_count(
+ ((Py_UCS2*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ case PyUnicode_4BYTE_KIND:
+ iresult = ucs4lib_count(
+ ((Py_UCS4*)buf1) + start, end - start,
+ buf2, len2, PY_SSIZE_T_MAX
+ );
+ break;
+ default:
+ assert(0); iresult = 0;
+ }
+
+ result = PyLong_FromSsize_t(iresult);
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return result;
+}
+
+PyDoc_STRVAR(encode__doc__,
+ "S.encode(encoding='utf-8', errors='strict') -> bytes\n\
+\n\
+Encode S using the codec registered for encoding. Default encoding\n\
+is 'utf-8'. errors may be given to set a different error\n\
+handling scheme. Default is 'strict' meaning that encoding errors raise\n\
+a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and\n\
+'xmlcharrefreplace' as well as any other name registered with\n\
+codecs.register_error that can handle UnicodeEncodeErrors.");
+
+static PyObject *
+unicode_encode(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+ static char *kwlist[] = {"encoding", "errors", 0};
+ char *encoding = NULL;
+ char *errors = NULL;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|ss:encode",
+ kwlist, &encoding, &errors))
+ return NULL;
+ return PyUnicode_AsEncodedString(self, encoding, errors);
+}
+
+PyDoc_STRVAR(expandtabs__doc__,
+ "S.expandtabs(tabsize=8) -> str\n\
+\n\
+Return a copy of S where all tab characters are expanded using spaces.\n\
+If tabsize is not given, a tab size of 8 characters is assumed.");
+
+static PyObject*
+unicode_expandtabs(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ Py_ssize_t i, j, line_pos, src_len, incr;
+ Py_UCS4 ch;
+ PyObject *u;
+ void *src_data, *dest_data;
+ static char *kwlist[] = {"tabsize", 0};
+ int tabsize = 8;
+ int kind;
+ int found;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:expandtabs",
+ kwlist, &tabsize))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ /* First pass: determine size of output string */
+ src_len = PyUnicode_GET_LENGTH(self);
+ i = j = line_pos = 0;
+ kind = PyUnicode_KIND(self);
+ src_data = PyUnicode_DATA(self);
+ found = 0;
+ for (; i < src_len; i++) {
+ ch = PyUnicode_READ(kind, src_data, i);
+ if (ch == '\t') {
+ found = 1;
+ if (tabsize > 0) {
+ incr = tabsize - (line_pos % tabsize); /* cannot overflow */
+ if (j > PY_SSIZE_T_MAX - incr)
+ goto overflow;
+ line_pos += incr;
+ j += incr;
+ }
+ }
+ else {
+ if (j > PY_SSIZE_T_MAX - 1)
+ goto overflow;
+ line_pos++;
+ j++;
+ if (ch == '\n' || ch == '\r')
+ line_pos = 0;
+ }
+ }
+ if (!found)
+ return unicode_result_unchanged(self);
+
+ /* Second pass: create output string and fill it */
+ u = PyUnicode_New(j, PyUnicode_MAX_CHAR_VALUE(self));
+ if (!u)
+ return NULL;
+ dest_data = PyUnicode_DATA(u);
+
+ i = j = line_pos = 0;
+
+ for (; i < src_len; i++) {
+ ch = PyUnicode_READ(kind, src_data, i);
+ if (ch == '\t') {
+ if (tabsize > 0) {
+ incr = tabsize - (line_pos % tabsize);
+ line_pos += incr;
+ FILL(kind, dest_data, ' ', j, incr);
+ j += incr;
+ }
+ }
+ else {
+ line_pos++;
+ PyUnicode_WRITE(kind, dest_data, j, ch);
+ j++;
+ if (ch == '\n' || ch == '\r')
+ line_pos = 0;
+ }
+ }
+ assert (j == PyUnicode_GET_LENGTH(u));
+ return unicode_result(u);
+
+ overflow:
+ PyErr_SetString(PyExc_OverflowError, "new string is too long");
+ return NULL;
+}
+
+PyDoc_STRVAR(find__doc__,
+ "S.find(sub[, start[, end]]) -> int\n\
+\n\
+Return the lowest index in S where substring sub is found,\n\
+such that sub is contained within S[start:end]. Optional\n\
+arguments start and end are interpreted as in slice notation.\n\
+\n\
+Return -1 on failure.");
+
+static PyObject *
+unicode_find(PyObject *self, PyObject *args)
+{
+ /* initialize variables to prevent gcc warning */
+ PyObject *substring = NULL;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = 0;
+ Py_ssize_t result;
+
+ if (!parse_args_finds_unicode("find", args, &substring, &start, &end))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ result = any_find_slice(self, substring, start, end, 1);
+
+ if (result == -2)
+ return NULL;
+
+ return PyLong_FromSsize_t(result);
+}
+
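+/* Return the character at 'index' as a new string of length one, raising
+   IndexError when the index is out of range. */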
+static PyObject *
+unicode_getitem(PyObject *self, Py_ssize_t index)
+{
+ void *data;
+ enum PyUnicode_Kind kind;
+ Py_UCS4 ch;
+
+ if (!PyUnicode_Check(self)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ if (PyUnicode_READY(self) == -1) {
+ return NULL;
+ }
+ if (index < 0 || index >= PyUnicode_GET_LENGTH(self)) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return NULL;
+ }
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+ ch = PyUnicode_READ(kind, data, index);
+ return unicode_char(ch);
+}
+
+/* Believe it or not, this produces the same value for ASCII strings
+ as bytes_hash(). */
+static Py_hash_t
+unicode_hash(PyObject *self)
+{
+ Py_ssize_t len;
+ Py_uhash_t x; /* Unsigned for defined overflow behavior. */
+
+#ifdef Py_DEBUG
+ assert(_Py_HashSecret_Initialized);
+#endif
+ if (_PyUnicode_HASH(self) != -1)
+ return _PyUnicode_HASH(self);
+ if (PyUnicode_READY(self) == -1)
+ return -1;
+ len = PyUnicode_GET_LENGTH(self);
+ /*
+ We make the hash of the empty string be 0, rather than using
+ (prefix ^ suffix), since this slightly obfuscates the hash secret
+ */
+ if (len == 0) {
+ _PyUnicode_HASH(self) = 0;
+ return 0;
+ }
+ x = _Py_HashBytes(PyUnicode_DATA(self),
+ PyUnicode_GET_LENGTH(self) * PyUnicode_KIND(self));
+ _PyUnicode_HASH(self) = x;
+ return x;
+}
+
+PyDoc_STRVAR(index__doc__,
+ "S.index(sub[, start[, end]]) -> int\n\
+\n\
+Return the lowest index in S where substring sub is found, \n\
+such that sub is contained within S[start:end]. Optional\n\
+arguments start and end are interpreted as in slice notation.\n\
+\n\
+Raises ValueError when the substring is not found.");
+
+static PyObject *
+unicode_index(PyObject *self, PyObject *args)
+{
+ /* initialize variables to prevent gcc warning */
+ Py_ssize_t result;
+ PyObject *substring = NULL;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = 0;
+
+ if (!parse_args_finds_unicode("index", args, &substring, &start, &end))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ result = any_find_slice(self, substring, start, end, 1);
+
+ if (result == -2)
+ return NULL;
+
+ if (result < 0) {
+ PyErr_SetString(PyExc_ValueError, "substring not found");
+ return NULL;
+ }
+
+ return PyLong_FromSsize_t(result);
+}
+
+PyDoc_STRVAR(islower__doc__,
+ "S.islower() -> bool\n\
+\n\
+Return True if all cased characters in S are lowercase and there is\n\
+at least one cased character in S, False otherwise.");
+
+static PyObject*
+unicode_islower(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+ int cased;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISLOWER(PyUnicode_READ(kind, data, 0)));
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ cased = 0;
+ for (i = 0; i < length; i++) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+
+ if (Py_UNICODE_ISUPPER(ch) || Py_UNICODE_ISTITLE(ch))
+ return PyBool_FromLong(0);
+ else if (!cased && Py_UNICODE_ISLOWER(ch))
+ cased = 1;
+ }
+ return PyBool_FromLong(cased);
+}
+
+PyDoc_STRVAR(isupper__doc__,
+ "S.isupper() -> bool\n\
+\n\
+Return True if all cased characters in S are uppercase and there is\n\
+at least one cased character in S, False otherwise.");
+
+static PyObject*
+unicode_isupper(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+ int cased;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISUPPER(PyUnicode_READ(kind, data, 0)) != 0);
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ cased = 0;
+ for (i = 0; i < length; i++) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+
+ if (Py_UNICODE_ISLOWER(ch) || Py_UNICODE_ISTITLE(ch))
+ return PyBool_FromLong(0);
+ else if (!cased && Py_UNICODE_ISUPPER(ch))
+ cased = 1;
+ }
+ return PyBool_FromLong(cased);
+}
+
+PyDoc_STRVAR(istitle__doc__,
+ "S.istitle() -> bool\n\
+\n\
+Return True if S is a titlecased string and there is at least one\n\
+character in S, i.e. upper- and titlecase characters may only\n\
+follow uncased characters and lowercase characters only cased ones.\n\
+Return False otherwise.");
+
+static PyObject*
+unicode_istitle(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+ int cased, previous_is_cased;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
+ return PyBool_FromLong((Py_UNICODE_ISTITLE(ch) != 0) ||
+ (Py_UNICODE_ISUPPER(ch) != 0));
+ }
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ cased = 0;
+ previous_is_cased = 0;
+ for (i = 0; i < length; i++) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+
+ if (Py_UNICODE_ISUPPER(ch) || Py_UNICODE_ISTITLE(ch)) {
+ if (previous_is_cased)
+ return PyBool_FromLong(0);
+ previous_is_cased = 1;
+ cased = 1;
+ }
+ else if (Py_UNICODE_ISLOWER(ch)) {
+ if (!previous_is_cased)
+ return PyBool_FromLong(0);
+ previous_is_cased = 1;
+ cased = 1;
+ }
+ else
+ previous_is_cased = 0;
+ }
+ return PyBool_FromLong(cased);
+}
+
+PyDoc_STRVAR(isspace__doc__,
+ "S.isspace() -> bool\n\
+\n\
+Return True if all characters in S are whitespace\n\
+and there is at least one character in S, False otherwise.");
+
+static PyObject*
+unicode_isspace(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, 0)));
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < length; i++) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (!Py_UNICODE_ISSPACE(ch))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+PyDoc_STRVAR(isalpha__doc__,
+ "S.isalpha() -> bool\n\
+\n\
+Return True if all characters in S are alphabetic\n\
+and there is at least one character in S, False otherwise.");
+
+static PyObject*
+unicode_isalpha(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISALPHA(PyUnicode_READ(kind, data, 0)));
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < length; i++) {
+ if (!Py_UNICODE_ISALPHA(PyUnicode_READ(kind, data, i)))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+PyDoc_STRVAR(isalnum__doc__,
+ "S.isalnum() -> bool\n\
+\n\
+Return True if all characters in S are alphanumeric\n\
+and there is at least one character in S, False otherwise.");
+
+static PyObject*
+unicode_isalnum(PyObject *self)
+{
+ int kind;
+ void *data;
+ Py_ssize_t len, i;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+ len = PyUnicode_GET_LENGTH(self);
+
+ /* Shortcut for single character strings */
+ if (len == 1) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
+ return PyBool_FromLong(Py_UNICODE_ISALNUM(ch));
+ }
+
+ /* Special case for empty strings */
+ if (len == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < len; i++) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (!Py_UNICODE_ISALNUM(ch))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+PyDoc_STRVAR(isdecimal__doc__,
+ "S.isdecimal() -> bool\n\
+\n\
+Return True if there are only decimal characters in S,\n\
+False otherwise.");
+
+static PyObject*
+unicode_isdecimal(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISDECIMAL(PyUnicode_READ(kind, data, 0)));
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < length; i++) {
+ if (!Py_UNICODE_ISDECIMAL(PyUnicode_READ(kind, data, i)))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+PyDoc_STRVAR(isdigit__doc__,
+ "S.isdigit() -> bool\n\
+\n\
+Return True if all characters in S are digits\n\
+and there is at least one character in S, False otherwise.");
+
+static PyObject*
+unicode_isdigit(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1) {
+ const Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
+ return PyBool_FromLong(Py_UNICODE_ISDIGIT(ch));
+ }
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < length; i++) {
+ if (!Py_UNICODE_ISDIGIT(PyUnicode_READ(kind, data, i)))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+PyDoc_STRVAR(isnumeric__doc__,
+ "S.isnumeric() -> bool\n\
+\n\
+Return True if there are only numeric characters in S,\n\
+False otherwise.");
+
+static PyObject*
+unicode_isnumeric(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISNUMERIC(PyUnicode_READ(kind, data, 0)));
+
+ /* Special case for empty strings */
+ if (length == 0)
+ return PyBool_FromLong(0);
+
+ for (i = 0; i < length; i++) {
+ if (!Py_UNICODE_ISNUMERIC(PyUnicode_READ(kind, data, i)))
+ return PyBool_FromLong(0);
+ }
+ return PyBool_FromLong(1);
+}
+
+int
+PyUnicode_IsIdentifier(PyObject *self)
+{
+ int kind;
+ void *data;
+ Py_ssize_t i;
+ Py_UCS4 first;
+
+ if (PyUnicode_READY(self) == -1) {
+ Py_FatalError("identifier not ready");
+ return 0;
+ }
+
+ /* Special case for empty strings */
+ if (PyUnicode_GET_LENGTH(self) == 0)
+ return 0;
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* PEP 3131 says that the first character must be in
+ XID_Start and subsequent characters in XID_Continue,
+       and for the ASCII range, the 2.x rules apply (i.e.
+ start with letters and underscore, continue with
+ letters, digits, underscore). However, given the current
+ definition of XID_Start and XID_Continue, it is sufficient
+ to check just for these, except that _ must be allowed
+ as starting an identifier. */
+ first = PyUnicode_READ(kind, data, 0);
+ if (!_PyUnicode_IsXidStart(first) && first != 0x5F /* LOW LINE */)
+ return 0;
+
+ for (i = 1; i < PyUnicode_GET_LENGTH(self); i++)
+ if (!_PyUnicode_IsXidContinue(PyUnicode_READ(kind, data, i)))
+ return 0;
+ return 1;
+}
+
+PyDoc_STRVAR(isidentifier__doc__,
+ "S.isidentifier() -> bool\n\
+\n\
+Return True if S is a valid identifier according\n\
+to the language definition.\n\
+\n\
+Use keyword.iskeyword() to test for reserved identifiers\n\
+such as \"def\" and \"class\".\n");
+
+static PyObject*
+unicode_isidentifier(PyObject *self)
+{
+ return PyBool_FromLong(PyUnicode_IsIdentifier(self));
+}
+
+PyDoc_STRVAR(isprintable__doc__,
+ "S.isprintable() -> bool\n\
+\n\
+Return True if all characters in S are considered\n\
+printable in repr() or S is empty, False otherwise.");
+
+static PyObject*
+unicode_isprintable(PyObject *self)
+{
+ Py_ssize_t i, length;
+ int kind;
+ void *data;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ length = PyUnicode_GET_LENGTH(self);
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+
+ /* Shortcut for single character strings */
+ if (length == 1)
+ return PyBool_FromLong(
+ Py_UNICODE_ISPRINTABLE(PyUnicode_READ(kind, data, 0)));
+
+ for (i = 0; i < length; i++) {
+ if (!Py_UNICODE_ISPRINTABLE(PyUnicode_READ(kind, data, i))) {
+ Py_RETURN_FALSE;
+ }
+ }
+ Py_RETURN_TRUE;
+}
+
+PyDoc_STRVAR(join__doc__,
+ "S.join(iterable) -> str\n\
+\n\
+Return a string which is the concatenation of the strings in the\n\
+iterable. The separator between elements is S.");
+
+static PyObject*
+unicode_join(PyObject *self, PyObject *data)
+{
+ return PyUnicode_Join(self, data);
+}
+
+static Py_ssize_t
+unicode_length(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return -1;
+ return PyUnicode_GET_LENGTH(self);
+}
+
+PyDoc_STRVAR(ljust__doc__,
+ "S.ljust(width[, fillchar]) -> str\n\
+\n\
+Return S left-justified in a Unicode string of length width. Padding is\n\
+done using the specified fill character (default is a space).");
+
+static PyObject *
+unicode_ljust(PyObject *self, PyObject *args)
+{
+ Py_ssize_t width;
+ Py_UCS4 fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|O&:ljust", &width, convert_uc, &fillchar))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (PyUnicode_GET_LENGTH(self) >= width)
+ return unicode_result_unchanged(self);
+
+ return pad(self, 0, width - PyUnicode_GET_LENGTH(self), fillchar);
+}
+
+PyDoc_STRVAR(lower__doc__,
+ "S.lower() -> str\n\
+\n\
+Return a copy of the string S converted to lowercase.");
+
+static PyObject*
+unicode_lower(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ if (PyUnicode_IS_ASCII(self))
+ return ascii_upper_or_lower(self, 1);
+ return case_operation(self, do_lower);
+}
+
+#define LEFTSTRIP 0
+#define RIGHTSTRIP 1
+#define BOTHSTRIP 2
+
+/* Arrays indexed by above */
+static const char * const stripformat[] = {"|O:lstrip", "|O:rstrip", "|O:strip"};
+
+#define STRIPNAME(i) (stripformat[i]+3)
+
+/* externally visible for str.strip(unicode) */
+PyObject *
+_PyUnicode_XStrip(PyObject *self, int striptype, PyObject *sepobj)
+{
+ void *data;
+ int kind;
+ Py_ssize_t i, j, len;
+ BLOOM_MASK sepmask;
+ Py_ssize_t seplen;
+
+ if (PyUnicode_READY(self) == -1 || PyUnicode_READY(sepobj) == -1)
+ return NULL;
+
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_DATA(self);
+ len = PyUnicode_GET_LENGTH(self);
+ seplen = PyUnicode_GET_LENGTH(sepobj);
+ sepmask = make_bloom_mask(PyUnicode_KIND(sepobj),
+ PyUnicode_DATA(sepobj),
+ seplen);
+
+ i = 0;
+ if (striptype != RIGHTSTRIP) {
+ while (i < len) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (!BLOOM(sepmask, ch))
+ break;
+ if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0)
+ break;
+ i++;
+ }
+ }
+
+ j = len;
+ if (striptype != LEFTSTRIP) {
+ j--;
+ while (j >= i) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, j);
+ if (!BLOOM(sepmask, ch))
+ break;
+ if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0)
+ break;
+ j--;
+ }
+
+ j++;
+ }
+
+ return PyUnicode_Substring(self, i, j);
+}
+
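+/* Return self[start:end] as a new string.  'end' is clamped to the string
+   length, and the original object is returned unchanged when the slice
+   covers the whole string. */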
+PyObject*
+PyUnicode_Substring(PyObject *self, Py_ssize_t start, Py_ssize_t end)
+{
+ unsigned char *data;
+ int kind;
+ Py_ssize_t length;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ length = PyUnicode_GET_LENGTH(self);
+ end = Py_MIN(end, length);
+
+ if (start == 0 && end == length)
+ return unicode_result_unchanged(self);
+
+ if (start < 0 || end < 0) {
+ PyErr_SetString(PyExc_IndexError, "string index out of range");
+ return NULL;
+ }
+ if (start >= length || end < start)
+ _Py_RETURN_UNICODE_EMPTY();
+
+ length = end - start;
+ if (PyUnicode_IS_ASCII(self)) {
+ data = PyUnicode_1BYTE_DATA(self);
+ return _PyUnicode_FromASCII((char*)(data + start), length);
+ }
+ else {
+ kind = PyUnicode_KIND(self);
+ data = PyUnicode_1BYTE_DATA(self);
+ return PyUnicode_FromKindAndData(kind,
+ data + kind * start,
+ length);
+ }
+}
+
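+/* Strip whitespace from the left end, the right end, or both, depending on
+   striptype; ASCII-only strings take a fast path through the
+   _Py_ascii_whitespace table. */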
+static PyObject *
+do_strip(PyObject *self, int striptype)
+{
+ Py_ssize_t len, i, j;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ len = PyUnicode_GET_LENGTH(self);
+
+ if (PyUnicode_IS_ASCII(self)) {
+ Py_UCS1 *data = PyUnicode_1BYTE_DATA(self);
+
+ i = 0;
+ if (striptype != RIGHTSTRIP) {
+ while (i < len) {
+ Py_UCS1 ch = data[i];
+ if (!_Py_ascii_whitespace[ch])
+ break;
+ i++;
+ }
+ }
+
+ j = len;
+ if (striptype != LEFTSTRIP) {
+ j--;
+ while (j >= i) {
+ Py_UCS1 ch = data[j];
+ if (!_Py_ascii_whitespace[ch])
+ break;
+ j--;
+ }
+ j++;
+ }
+ }
+ else {
+ int kind = PyUnicode_KIND(self);
+ void *data = PyUnicode_DATA(self);
+
+ i = 0;
+ if (striptype != RIGHTSTRIP) {
+ while (i < len) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, i);
+ if (!Py_UNICODE_ISSPACE(ch))
+ break;
+ i++;
+ }
+ }
+
+ j = len;
+ if (striptype != LEFTSTRIP) {
+ j--;
+ while (j >= i) {
+ Py_UCS4 ch = PyUnicode_READ(kind, data, j);
+ if (!Py_UNICODE_ISSPACE(ch))
+ break;
+ j--;
+ }
+ j++;
+ }
+ }
+
+ return PyUnicode_Substring(self, i, j);
+}
+
+
+static PyObject *
+do_argstrip(PyObject *self, int striptype, PyObject *args)
+{
+ PyObject *sep = NULL;
+
+ if (!PyArg_ParseTuple(args, stripformat[striptype], &sep))
+ return NULL;
+
+ if (sep != NULL && sep != Py_None) {
+ if (PyUnicode_Check(sep))
+ return _PyUnicode_XStrip(self, striptype, sep);
+ else {
+ PyErr_Format(PyExc_TypeError,
+ "%s arg must be None or str",
+ STRIPNAME(striptype));
+ return NULL;
+ }
+ }
+
+ return do_strip(self, striptype);
+}
+
+
+PyDoc_STRVAR(strip__doc__,
+ "S.strip([chars]) -> str\n\
+\n\
+Return a copy of the string S with leading and trailing\n\
+whitespace removed.\n\
+If chars is given and not None, remove characters in chars instead.");
+
+static PyObject *
+unicode_strip(PyObject *self, PyObject *args)
+{
+ if (PyTuple_GET_SIZE(args) == 0)
+ return do_strip(self, BOTHSTRIP); /* Common case */
+ else
+ return do_argstrip(self, BOTHSTRIP, args);
+}
+
+
+PyDoc_STRVAR(lstrip__doc__,
+ "S.lstrip([chars]) -> str\n\
+\n\
+Return a copy of the string S with leading whitespace removed.\n\
+If chars is given and not None, remove characters in chars instead.");
+
+static PyObject *
+unicode_lstrip(PyObject *self, PyObject *args)
+{
+ if (PyTuple_GET_SIZE(args) == 0)
+ return do_strip(self, LEFTSTRIP); /* Common case */
+ else
+ return do_argstrip(self, LEFTSTRIP, args);
+}
+
+
+PyDoc_STRVAR(rstrip__doc__,
+ "S.rstrip([chars]) -> str\n\
+\n\
+Return a copy of the string S with trailing whitespace removed.\n\
+If chars is given and not None, remove characters in chars instead.");
+
+static PyObject *
+unicode_rstrip(PyObject *self, PyObject *args)
+{
+ if (PyTuple_GET_SIZE(args) == 0)
+ return do_strip(self, RIGHTSTRIP); /* Common case */
+ else
+ return do_argstrip(self, RIGHTSTRIP, args);
+}
+
+
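+/* Repeat the string 'len' times (str * n).  A one-character string is filled
+   directly; longer strings are built by repeatedly doubling the already
+   copied prefix with memcpy. */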
+static PyObject*
+unicode_repeat(PyObject *str, Py_ssize_t len)
+{
+ PyObject *u;
+ Py_ssize_t nchars, n;
+
+ if (len < 1)
+ _Py_RETURN_UNICODE_EMPTY();
+
+ /* no repeat, return original string */
+ if (len == 1)
+ return unicode_result_unchanged(str);
+
+ if (PyUnicode_READY(str) == -1)
+ return NULL;
+
+ if (PyUnicode_GET_LENGTH(str) > PY_SSIZE_T_MAX / len) {
+ PyErr_SetString(PyExc_OverflowError,
+ "repeated string is too long");
+ return NULL;
+ }
+ nchars = len * PyUnicode_GET_LENGTH(str);
+
+ u = PyUnicode_New(nchars, PyUnicode_MAX_CHAR_VALUE(str));
+ if (!u)
+ return NULL;
+ assert(PyUnicode_KIND(u) == PyUnicode_KIND(str));
+
+ if (PyUnicode_GET_LENGTH(str) == 1) {
+ const int kind = PyUnicode_KIND(str);
+ const Py_UCS4 fill_char = PyUnicode_READ(kind, PyUnicode_DATA(str), 0);
+ if (kind == PyUnicode_1BYTE_KIND) {
+ void *to = PyUnicode_DATA(u);
+ memset(to, (unsigned char)fill_char, len);
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ Py_UCS2 *ucs2 = PyUnicode_2BYTE_DATA(u);
+ for (n = 0; n < len; ++n)
+ ucs2[n] = fill_char;
+ } else {
+ Py_UCS4 *ucs4 = PyUnicode_4BYTE_DATA(u);
+ assert(kind == PyUnicode_4BYTE_KIND);
+ for (n = 0; n < len; ++n)
+ ucs4[n] = fill_char;
+ }
+ }
+ else {
+ /* number of characters copied this far */
+ Py_ssize_t done = PyUnicode_GET_LENGTH(str);
+ const Py_ssize_t char_size = PyUnicode_KIND(str);
+ char *to = (char *) PyUnicode_DATA(u);
+ memcpy(to, PyUnicode_DATA(str),
+ PyUnicode_GET_LENGTH(str) * char_size);
+ while (done < nchars) {
+ n = (done <= nchars-done) ? done : nchars-done;
+ memcpy(to + (done * char_size), to, n * char_size);
+ done += n;
+ }
+ }
+
+ assert(_PyUnicode_CheckConsistency(u, 1));
+ return u;
+}
+
+PyObject *
+PyUnicode_Replace(PyObject *str,
+ PyObject *substr,
+ PyObject *replstr,
+ Py_ssize_t maxcount)
+{
+ if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0 ||
+ ensure_unicode(replstr) < 0)
+ return NULL;
+ return replace(str, substr, replstr, maxcount);
+}
+
+PyDoc_STRVAR(replace__doc__,
+ "S.replace(old, new[, count]) -> str\n\
+\n\
+Return a copy of S with all occurrences of substring\n\
+old replaced by new. If the optional argument count is\n\
+given, only the first count occurrences are replaced.");
+
+static PyObject*
+unicode_replace(PyObject *self, PyObject *args)
+{
+ PyObject *str1;
+ PyObject *str2;
+ Py_ssize_t maxcount = -1;
+
+ if (!PyArg_ParseTuple(args, "UU|n:replace", &str1, &str2, &maxcount))
+ return NULL;
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ return replace(self, str1, str2, maxcount);
+}
+
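+/* repr(str): a first pass computes the output length, the quote character
+   and the maximum code point; a second pass writes the quoted, escaped
+   result. */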
+static PyObject *
+unicode_repr(PyObject *unicode)
+{
+ PyObject *repr;
+ Py_ssize_t isize;
+ Py_ssize_t osize, squote, dquote, i, o;
+ Py_UCS4 max, quote;
+ int ikind, okind, unchanged;
+ void *idata, *odata;
+
+ if (PyUnicode_READY(unicode) == -1)
+ return NULL;
+
+ isize = PyUnicode_GET_LENGTH(unicode);
+ idata = PyUnicode_DATA(unicode);
+
+ /* Compute length of output, quote characters, and
+ maximum character */
+ osize = 0;
+ max = 127;
+ squote = dquote = 0;
+ ikind = PyUnicode_KIND(unicode);
+ for (i = 0; i < isize; i++) {
+ Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
+ Py_ssize_t incr = 1;
+ switch (ch) {
+ case '\'': squote++; break;
+ case '"': dquote++; break;
+ case '\\': case '\t': case '\r': case '\n':
+ incr = 2;
+ break;
+ default:
+ /* Fast-path ASCII */
+ if (ch < ' ' || ch == 0x7f)
+ incr = 4; /* \xHH */
+ else if (ch < 0x7f)
+ ;
+ else if (Py_UNICODE_ISPRINTABLE(ch))
+ max = ch > max ? ch : max;
+ else if (ch < 0x100)
+ incr = 4; /* \xHH */
+ else if (ch < 0x10000)
+ incr = 6; /* \uHHHH */
+ else
+                incr = 10; /* \UHHHHHHHH */
+ }
+ if (osize > PY_SSIZE_T_MAX - incr) {
+ PyErr_SetString(PyExc_OverflowError,
+ "string is too long to generate repr");
+ return NULL;
+ }
+ osize += incr;
+ }
+
+ quote = '\'';
+ unchanged = (osize == isize);
+ if (squote) {
+ unchanged = 0;
+ if (dquote)
+ /* Both squote and dquote present. Use squote,
+ and escape them */
+ osize += squote;
+ else
+ quote = '"';
+ }
+ osize += 2; /* quotes */
+
+ repr = PyUnicode_New(osize, max);
+ if (repr == NULL)
+ return NULL;
+ okind = PyUnicode_KIND(repr);
+ odata = PyUnicode_DATA(repr);
+
+ PyUnicode_WRITE(okind, odata, 0, quote);
+ PyUnicode_WRITE(okind, odata, osize-1, quote);
+ if (unchanged) {
+ _PyUnicode_FastCopyCharacters(repr, 1,
+ unicode, 0,
+ isize);
+ }
+ else {
+ for (i = 0, o = 1; i < isize; i++) {
+ Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
+
+ /* Escape quotes and backslashes */
+ if ((ch == quote) || (ch == '\\')) {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ PyUnicode_WRITE(okind, odata, o++, ch);
+ continue;
+ }
+
+            /* Map special whitespace to '\t', '\n', '\r' */
+ if (ch == '\t') {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ PyUnicode_WRITE(okind, odata, o++, 't');
+ }
+ else if (ch == '\n') {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ PyUnicode_WRITE(okind, odata, o++, 'n');
+ }
+ else if (ch == '\r') {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ PyUnicode_WRITE(okind, odata, o++, 'r');
+ }
+
+ /* Map non-printable US ASCII to '\xhh' */
+ else if (ch < ' ' || ch == 0x7F) {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ PyUnicode_WRITE(okind, odata, o++, 'x');
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0x000F]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0x000F]);
+ }
+
+ /* Copy ASCII characters as-is */
+ else if (ch < 0x7F) {
+ PyUnicode_WRITE(okind, odata, o++, ch);
+ }
+
+ /* Non-ASCII characters */
+ else {
+ /* Map Unicode whitespace and control characters
+ (categories Z* and C* except ASCII space)
+ */
+ if (!Py_UNICODE_ISPRINTABLE(ch)) {
+ PyUnicode_WRITE(okind, odata, o++, '\\');
+ /* Map 8-bit characters to '\xhh' */
+ if (ch <= 0xff) {
+ PyUnicode_WRITE(okind, odata, o++, 'x');
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0x000F]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0x000F]);
+ }
+ /* Map 16-bit characters to '\uxxxx' */
+ else if (ch <= 0xffff) {
+ PyUnicode_WRITE(okind, odata, o++, 'u');
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 12) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 8) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0xF]);
+ }
+ /* Map 21-bit characters to '\U00xxxxxx' */
+ else {
+ PyUnicode_WRITE(okind, odata, o++, 'U');
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 28) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 24) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 20) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 16) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 12) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 8) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0xF]);
+ PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0xF]);
+ }
+ }
+ /* Copy characters as-is */
+ else {
+ PyUnicode_WRITE(okind, odata, o++, ch);
+ }
+ }
+ }
+ }
+ /* Closing quote already added at the beginning */
+ assert(_PyUnicode_CheckConsistency(repr, 1));
+ return repr;
+}
+
+PyDoc_STRVAR(rfind__doc__,
+ "S.rfind(sub[, start[, end]]) -> int\n\
+\n\
+Return the highest index in S where substring sub is found,\n\
+such that sub is contained within S[start:end]. Optional\n\
+arguments start and end are interpreted as in slice notation.\n\
+\n\
+Return -1 on failure.");
+
+static PyObject *
+unicode_rfind(PyObject *self, PyObject *args)
+{
+ /* initialize variables to prevent gcc warning */
+ PyObject *substring = NULL;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = 0;
+ Py_ssize_t result;
+
+ if (!parse_args_finds_unicode("rfind", args, &substring, &start, &end))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ result = any_find_slice(self, substring, start, end, -1);
+
+ if (result == -2)
+ return NULL;
+
+ return PyLong_FromSsize_t(result);
+}
+
+PyDoc_STRVAR(rindex__doc__,
+ "S.rindex(sub[, start[, end]]) -> int\n\
+\n\
+Return the highest index in S where substring sub is found,\n\
+such that sub is contained within S[start:end]. Optional\n\
+arguments start and end are interpreted as in slice notation.\n\
+\n\
+Raises ValueError when the substring is not found.");
+
+static PyObject *
+unicode_rindex(PyObject *self, PyObject *args)
+{
+ /* initialize variables to prevent gcc warning */
+ PyObject *substring = NULL;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = 0;
+ Py_ssize_t result;
+
+ if (!parse_args_finds_unicode("rindex", args, &substring, &start, &end))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ result = any_find_slice(self, substring, start, end, -1);
+
+ if (result == -2)
+ return NULL;
+
+ if (result < 0) {
+ PyErr_SetString(PyExc_ValueError, "substring not found");
+ return NULL;
+ }
+
+ return PyLong_FromSsize_t(result);
+}
+
+PyDoc_STRVAR(rjust__doc__,
+ "S.rjust(width[, fillchar]) -> str\n\
+\n\
+Return S right-justified in a string of length width. Padding is\n\
+done using the specified fill character (default is a space).");
+
+static PyObject *
+unicode_rjust(PyObject *self, PyObject *args)
+{
+ Py_ssize_t width;
+ Py_UCS4 fillchar = ' ';
+
+ if (!PyArg_ParseTuple(args, "n|O&:rjust", &width, convert_uc, &fillchar))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (PyUnicode_GET_LENGTH(self) >= width)
+ return unicode_result_unchanged(self);
+
+ return pad(self, width - PyUnicode_GET_LENGTH(self), 0, fillchar);
+}
+
+PyObject *
+PyUnicode_Split(PyObject *s, PyObject *sep, Py_ssize_t maxsplit)
+{
+ if (ensure_unicode(s) < 0 || (sep != NULL && ensure_unicode(sep) < 0))
+ return NULL;
+
+ return split(s, sep, maxsplit);
+}
+
+PyDoc_STRVAR(split__doc__,
+ "S.split(sep=None, maxsplit=-1) -> list of strings\n\
+\n\
+Return a list of the words in S, using sep as the\n\
+delimiter string. If maxsplit is given, at most maxsplit\n\
+splits are done. If sep is not specified or is None, any\n\
+whitespace string is a separator and empty strings are\n\
+removed from the result.");
+
+static PyObject*
+unicode_split(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"sep", "maxsplit", 0};
+ PyObject *substring = Py_None;
+ Py_ssize_t maxcount = -1;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|On:split",
+ kwlist, &substring, &maxcount))
+ return NULL;
+
+ if (substring == Py_None)
+ return split(self, NULL, maxcount);
+
+ if (PyUnicode_Check(substring))
+ return split(self, substring, maxcount);
+
+ PyErr_Format(PyExc_TypeError,
+ "must be str or None, not %.100s",
+ Py_TYPE(substring)->tp_name);
+ return NULL;
+}
+
+PyObject *
+PyUnicode_Partition(PyObject *str_obj, PyObject *sep_obj)
+{
+ PyObject* out;
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2;
+
+ if (ensure_unicode(str_obj) < 0 || ensure_unicode(sep_obj) < 0)
+ return NULL;
+
+ kind1 = PyUnicode_KIND(str_obj);
+ kind2 = PyUnicode_KIND(sep_obj);
+ len1 = PyUnicode_GET_LENGTH(str_obj);
+ len2 = PyUnicode_GET_LENGTH(sep_obj);
+ if (kind1 < kind2 || len1 < len2) {
+ _Py_INCREF_UNICODE_EMPTY();
+ if (!unicode_empty)
+ out = NULL;
+ else {
+ out = PyTuple_Pack(3, str_obj, unicode_empty, unicode_empty);
+ Py_DECREF(unicode_empty);
+ }
+ return out;
+ }
+ buf1 = PyUnicode_DATA(str_obj);
+ buf2 = PyUnicode_DATA(sep_obj);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(sep_obj, kind1);
+ if (!buf2)
+ return NULL;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(str_obj) && PyUnicode_IS_ASCII(sep_obj))
+ out = asciilib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ else
+ out = ucs1lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ out = ucs2lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ out = ucs4lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ default:
+ assert(0);
+ out = 0;
+ }
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return out;
+}
+
+
+PyObject *
+PyUnicode_RPartition(PyObject *str_obj, PyObject *sep_obj)
+{
+ PyObject* out;
+ int kind1, kind2;
+ void *buf1, *buf2;
+ Py_ssize_t len1, len2;
+
+ if (ensure_unicode(str_obj) < 0 || ensure_unicode(sep_obj) < 0)
+ return NULL;
+
+ kind1 = PyUnicode_KIND(str_obj);
+ kind2 = PyUnicode_KIND(sep_obj);
+ len1 = PyUnicode_GET_LENGTH(str_obj);
+ len2 = PyUnicode_GET_LENGTH(sep_obj);
+ if (kind1 < kind2 || len1 < len2) {
+ _Py_INCREF_UNICODE_EMPTY();
+ if (!unicode_empty)
+ out = NULL;
+ else {
+ out = PyTuple_Pack(3, unicode_empty, unicode_empty, str_obj);
+ Py_DECREF(unicode_empty);
+ }
+ return out;
+ }
+ buf1 = PyUnicode_DATA(str_obj);
+ buf2 = PyUnicode_DATA(sep_obj);
+ if (kind2 != kind1) {
+ buf2 = _PyUnicode_AsKind(sep_obj, kind1);
+ if (!buf2)
+ return NULL;
+ }
+
+ switch (kind1) {
+ case PyUnicode_1BYTE_KIND:
+ if (PyUnicode_IS_ASCII(str_obj) && PyUnicode_IS_ASCII(sep_obj))
+ out = asciilib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ else
+ out = ucs1lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ case PyUnicode_2BYTE_KIND:
+ out = ucs2lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ case PyUnicode_4BYTE_KIND:
+ out = ucs4lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
+ break;
+ default:
+ assert(0);
+ out = 0;
+ }
+
+ if (kind2 != kind1)
+ PyMem_Free(buf2);
+
+ return out;
+}
+
+PyDoc_STRVAR(partition__doc__,
+ "S.partition(sep) -> (head, sep, tail)\n\
+\n\
+Search for the separator sep in S, and return the part before it,\n\
+the separator itself, and the part after it. If the separator is not\n\
+found, return S and two empty strings.");
+
+static PyObject*
+unicode_partition(PyObject *self, PyObject *separator)
+{
+ return PyUnicode_Partition(self, separator);
+}
+
+PyDoc_STRVAR(rpartition__doc__,
+ "S.rpartition(sep) -> (head, sep, tail)\n\
+\n\
+Search for the separator sep in S, starting at the end of S, and return\n\
+the part before it, the separator itself, and the part after it. If the\n\
+separator is not found, return two empty strings and S.");
+
+static PyObject*
+unicode_rpartition(PyObject *self, PyObject *separator)
+{
+ return PyUnicode_RPartition(self, separator);
+}
+
+PyObject *
+PyUnicode_RSplit(PyObject *s, PyObject *sep, Py_ssize_t maxsplit)
+{
+ if (ensure_unicode(s) < 0 || (sep != NULL && ensure_unicode(sep) < 0))
+ return NULL;
+
+ return rsplit(s, sep, maxsplit);
+}
+
+PyDoc_STRVAR(rsplit__doc__,
+ "S.rsplit(sep=None, maxsplit=-1) -> list of strings\n\
+\n\
+Return a list of the words in S, using sep as the\n\
+delimiter string, starting at the end of the string and\n\
+working to the front. If maxsplit is given, at most maxsplit\n\
+splits are done. If sep is not specified, any whitespace string\n\
+is a separator.");
+
+static PyObject*
+unicode_rsplit(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"sep", "maxsplit", 0};
+ PyObject *substring = Py_None;
+ Py_ssize_t maxcount = -1;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|On:rsplit",
+ kwlist, &substring, &maxcount))
+ return NULL;
+
+ if (substring == Py_None)
+ return rsplit(self, NULL, maxcount);
+
+ if (PyUnicode_Check(substring))
+ return rsplit(self, substring, maxcount);
+
+ PyErr_Format(PyExc_TypeError,
+ "must be str or None, not %.100s",
+ Py_TYPE(substring)->tp_name);
+ return NULL;
+}
+
+PyDoc_STRVAR(splitlines__doc__,
+ "S.splitlines([keepends]) -> list of strings\n\
+\n\
+Return a list of the lines in S, breaking at line boundaries.\n\
+Line breaks are not included in the resulting list unless keepends\n\
+is given and true.");
+
+static PyObject*
+unicode_splitlines(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"keepends", 0};
+ int keepends = 0;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:splitlines",
+ kwlist, &keepends))
+ return NULL;
+
+ return PyUnicode_Splitlines(self, keepends);
+}
+
+static
+PyObject *unicode_str(PyObject *self)
+{
+ return unicode_result_unchanged(self);
+}
+
+PyDoc_STRVAR(swapcase__doc__,
+ "S.swapcase() -> str\n\
+\n\
+Return a copy of S with uppercase characters converted to lowercase\n\
+and vice versa.");
+
+static PyObject*
+unicode_swapcase(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ return case_operation(self, do_swapcase);
+}
+
+/*[clinic input]
+
+@staticmethod
+str.maketrans as unicode_maketrans
+
+ x: object
+
+ y: unicode=NULL
+
+ z: unicode=NULL
+
+ /
+
+Return a translation table usable for str.translate().
+
+If there is only one argument, it must be a dictionary mapping Unicode
+ordinals (integers) or characters to Unicode ordinals, strings or None.
+Character keys will be then converted to ordinals.
+If there are two arguments, they must be strings of equal length, and
+in the resulting dictionary, each character in x will be mapped to the
+character at the same position in y. If there is a third argument, it
+must be a string, whose characters will be mapped to None in the result.
+[clinic start generated code]*/
+
+static PyObject *
+unicode_maketrans_impl(PyObject *x, PyObject *y, PyObject *z)
+/*[clinic end generated code: output=a925c89452bd5881 input=7bfbf529a293c6c5]*/
+{
+ PyObject *new = NULL, *key, *value;
+ Py_ssize_t i = 0;
+ int res;
+
+ new = PyDict_New();
+ if (!new)
+ return NULL;
+ if (y != NULL) {
+ int x_kind, y_kind, z_kind;
+ void *x_data, *y_data, *z_data;
+
+ /* x must be a string too, of equal length */
+ if (!PyUnicode_Check(x)) {
+ PyErr_SetString(PyExc_TypeError, "first maketrans argument must "
+ "be a string if there is a second argument");
+ goto err;
+ }
+ if (PyUnicode_GET_LENGTH(x) != PyUnicode_GET_LENGTH(y)) {
+ PyErr_SetString(PyExc_ValueError, "the first two maketrans "
+ "arguments must have equal length");
+ goto err;
+ }
+ /* create entries for translating chars in x to those in y */
+ x_kind = PyUnicode_KIND(x);
+ y_kind = PyUnicode_KIND(y);
+ x_data = PyUnicode_DATA(x);
+ y_data = PyUnicode_DATA(y);
+ for (i = 0; i < PyUnicode_GET_LENGTH(x); i++) {
+ key = PyLong_FromLong(PyUnicode_READ(x_kind, x_data, i));
+ if (!key)
+ goto err;
+ value = PyLong_FromLong(PyUnicode_READ(y_kind, y_data, i));
+ if (!value) {
+ Py_DECREF(key);
+ goto err;
+ }
+ res = PyDict_SetItem(new, key, value);
+ Py_DECREF(key);
+ Py_DECREF(value);
+ if (res < 0)
+ goto err;
+ }
+ /* create entries for deleting chars in z */
+ if (z != NULL) {
+ z_kind = PyUnicode_KIND(z);
+ z_data = PyUnicode_DATA(z);
+ for (i = 0; i < PyUnicode_GET_LENGTH(z); i++) {
+ key = PyLong_FromLong(PyUnicode_READ(z_kind, z_data, i));
+ if (!key)
+ goto err;
+ res = PyDict_SetItem(new, key, Py_None);
+ Py_DECREF(key);
+ if (res < 0)
+ goto err;
+ }
+ }
+ } else {
+ int kind;
+ void *data;
+
+ /* x must be a dict */
+ if (!PyDict_CheckExact(x)) {
+ PyErr_SetString(PyExc_TypeError, "if you give only one argument "
+ "to maketrans it must be a dict");
+ goto err;
+ }
+ /* copy entries into the new dict, converting string keys to int keys */
+ while (PyDict_Next(x, &i, &key, &value)) {
+ if (PyUnicode_Check(key)) {
+ /* convert string keys to integer keys */
+ PyObject *newkey;
+ if (PyUnicode_GET_LENGTH(key) != 1) {
+ PyErr_SetString(PyExc_ValueError, "string keys in translate "
+ "table must be of length 1");
+ goto err;
+ }
+ kind = PyUnicode_KIND(key);
+ data = PyUnicode_DATA(key);
+ newkey = PyLong_FromLong(PyUnicode_READ(kind, data, 0));
+ if (!newkey)
+ goto err;
+ res = PyDict_SetItem(new, newkey, value);
+ Py_DECREF(newkey);
+ if (res < 0)
+ goto err;
+ } else if (PyLong_Check(key)) {
+ /* just keep integer keys */
+ if (PyDict_SetItem(new, key, value) < 0)
+ goto err;
+ } else {
+ PyErr_SetString(PyExc_TypeError, "keys in translate table must "
+ "be strings or integers");
+ goto err;
+ }
+ }
+ }
+ return new;
+ err:
+ Py_DECREF(new);
+ return NULL;
+}
+
+PyDoc_STRVAR(translate__doc__,
+ "S.translate(table) -> str\n\
+\n\
+Return a copy of the string S in which each character has been mapped\n\
+through the given translation table. The table must implement\n\
+lookup/indexing via __getitem__, for instance a dictionary or list,\n\
+mapping Unicode ordinals to Unicode ordinals, strings, or None. If\n\
+this operation raises LookupError, the character is left untouched.\n\
+Characters mapped to None are deleted.");
+
+static PyObject*
+unicode_translate(PyObject *self, PyObject *table)
+{
+ return _PyUnicode_TranslateCharmap(self, table, "ignore");
+}
+
+PyDoc_STRVAR(upper__doc__,
+ "S.upper() -> str\n\
+\n\
+Return a copy of S converted to uppercase.");
+
+static PyObject*
+unicode_upper(PyObject *self)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ if (PyUnicode_IS_ASCII(self))
+ return ascii_upper_or_lower(self, 0);
+ return case_operation(self, do_upper);
+}
+
+PyDoc_STRVAR(zfill__doc__,
+ "S.zfill(width) -> str\n\
+\n\
+Pad a numeric string S with zeros on the left, to fill a field\n\
+of the specified width. The string S is never truncated.");
+
+static PyObject *
+unicode_zfill(PyObject *self, PyObject *args)
+{
+ Py_ssize_t fill;
+ PyObject *u;
+ Py_ssize_t width;
+ int kind;
+ void *data;
+ Py_UCS4 chr;
+
+ if (!PyArg_ParseTuple(args, "n:zfill", &width))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (PyUnicode_GET_LENGTH(self) >= width)
+ return unicode_result_unchanged(self);
+
+ fill = width - PyUnicode_GET_LENGTH(self);
+
+ u = pad(self, fill, 0, '0');
+
+ if (u == NULL)
+ return NULL;
+
+ kind = PyUnicode_KIND(u);
+ data = PyUnicode_DATA(u);
+ chr = PyUnicode_READ(kind, data, fill);
+
+ if (chr == '+' || chr == '-') {
+ /* move sign to beginning of string */
+ PyUnicode_WRITE(kind, data, 0, chr);
+ PyUnicode_WRITE(kind, data, fill, '0');
+ }
+
+ assert(_PyUnicode_CheckConsistency(u, 1));
+ return u;
+}
+
+#if 0
+static PyObject *
+unicode__decimal2ascii(PyObject *self)
+{
+ return PyUnicode_TransformDecimalAndSpaceToASCII(self);
+}
+#endif
+
+PyDoc_STRVAR(startswith__doc__,
+ "S.startswith(prefix[, start[, end]]) -> bool\n\
+\n\
+Return True if S starts with the specified prefix, False otherwise.\n\
+With optional start, test S beginning at that position.\n\
+With optional end, stop comparing S at that position.\n\
+prefix can also be a tuple of strings to try.");
+
+static PyObject *
+unicode_startswith(PyObject *self,
+ PyObject *args)
+{
+ PyObject *subobj;
+ PyObject *substring;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = PY_SSIZE_T_MAX;
+ int result;
+
+ if (!stringlib_parse_args_finds("startswith", args, &subobj, &start, &end))
+ return NULL;
+ if (PyTuple_Check(subobj)) {
+ Py_ssize_t i;
+ for (i = 0; i < PyTuple_GET_SIZE(subobj); i++) {
+ substring = PyTuple_GET_ITEM(subobj, i);
+ if (!PyUnicode_Check(substring)) {
+ PyErr_Format(PyExc_TypeError,
+ "tuple for startswith must only contain str, "
+ "not %.100s",
+ Py_TYPE(substring)->tp_name);
+ return NULL;
+ }
+ result = tailmatch(self, substring, start, end, -1);
+ if (result == -1)
+ return NULL;
+ if (result) {
+ Py_RETURN_TRUE;
+ }
+ }
+ /* nothing matched */
+ Py_RETURN_FALSE;
+ }
+ if (!PyUnicode_Check(subobj)) {
+ PyErr_Format(PyExc_TypeError,
+ "startswith first arg must be str or "
+ "a tuple of str, not %.100s", Py_TYPE(subobj)->tp_name);
+ return NULL;
+ }
+ result = tailmatch(self, subobj, start, end, -1);
+ if (result == -1)
+ return NULL;
+ return PyBool_FromLong(result);
+}
+
+
+PyDoc_STRVAR(endswith__doc__,
+ "S.endswith(suffix[, start[, end]]) -> bool\n\
+\n\
+Return True if S ends with the specified suffix, False otherwise.\n\
+With optional start, test S beginning at that position.\n\
+With optional end, stop comparing S at that position.\n\
+suffix can also be a tuple of strings to try.");
+
+static PyObject *
+unicode_endswith(PyObject *self,
+ PyObject *args)
+{
+ PyObject *subobj;
+ PyObject *substring;
+ Py_ssize_t start = 0;
+ Py_ssize_t end = PY_SSIZE_T_MAX;
+ int result;
+
+ if (!stringlib_parse_args_finds("endswith", args, &subobj, &start, &end))
+ return NULL;
+ if (PyTuple_Check(subobj)) {
+ Py_ssize_t i;
+ for (i = 0; i < PyTuple_GET_SIZE(subobj); i++) {
+ substring = PyTuple_GET_ITEM(subobj, i);
+ if (!PyUnicode_Check(substring)) {
+ PyErr_Format(PyExc_TypeError,
+ "tuple for endswith must only contain str, "
+ "not %.100s",
+ Py_TYPE(substring)->tp_name);
+ return NULL;
+ }
+ result = tailmatch(self, substring, start, end, +1);
+ if (result == -1)
+ return NULL;
+ if (result) {
+ Py_RETURN_TRUE;
+ }
+ }
+ Py_RETURN_FALSE;
+ }
+ if (!PyUnicode_Check(subobj)) {
+ PyErr_Format(PyExc_TypeError,
+ "endswith first arg must be str or "
+ "a tuple of str, not %.100s", Py_TYPE(subobj)->tp_name);
+ return NULL;
+ }
+ result = tailmatch(self, subobj, start, end, +1);
+ if (result == -1)
+ return NULL;
+ return PyBool_FromLong(result);
+}
+
+static void
+_PyUnicodeWriter_Update(_PyUnicodeWriter *writer)
+{
+ writer->maxchar = PyUnicode_MAX_CHAR_VALUE(writer->buffer);
+ writer->data = PyUnicode_DATA(writer->buffer);
+
+ if (!writer->readonly) {
+ writer->kind = PyUnicode_KIND(writer->buffer);
+ writer->size = PyUnicode_GET_LENGTH(writer->buffer);
+ }
+ else {
+ /* use a value smaller than PyUnicode_1BYTE_KIND() so
+ _PyUnicodeWriter_PrepareKind() will copy the buffer. */
+ writer->kind = PyUnicode_WCHAR_KIND;
+ assert(writer->kind <= PyUnicode_1BYTE_KIND);
+
+ /* Copy-on-write mode: set buffer size to 0 so
+ * _PyUnicodeWriter_Prepare() will copy (and enlarge) the buffer on
+ * next write. */
+ writer->size = 0;
+ }
+}
+
+void
+_PyUnicodeWriter_Init(_PyUnicodeWriter *writer)
+{
+ memset(writer, 0, sizeof(*writer));
+
+ /* ASCII is the bare minimum */
+ writer->min_char = 127;
+
+ /* use a value smaller than PyUnicode_1BYTE_KIND() so
+ _PyUnicodeWriter_PrepareKind() will copy the buffer. */
+ writer->kind = PyUnicode_WCHAR_KIND;
+ assert(writer->kind <= PyUnicode_1BYTE_KIND);
+}
+
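+/* Grow, and widen if 'maxchar' requires it, the writer buffer so that
+   'length' more characters can be written at writer->pos; when
+   writer->overallocate is set the new size is padded to limit the number
+   of reallocations. */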
+int
+_PyUnicodeWriter_PrepareInternal(_PyUnicodeWriter *writer,
+ Py_ssize_t length, Py_UCS4 maxchar)
+{
+ Py_ssize_t newlen;
+ PyObject *newbuffer;
+
+ assert(maxchar <= MAX_UNICODE);
+
+ /* ensure that the _PyUnicodeWriter_Prepare macro was used */
+ assert((maxchar > writer->maxchar && length >= 0)
+ || length > 0);
+
+ if (length > PY_SSIZE_T_MAX - writer->pos) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ newlen = writer->pos + length;
+
+ maxchar = Py_MAX(maxchar, writer->min_char);
+
+ if (writer->buffer == NULL) {
+ assert(!writer->readonly);
+ if (writer->overallocate
+ && newlen <= (PY_SSIZE_T_MAX - newlen / OVERALLOCATE_FACTOR)) {
+ /* overallocate to limit the number of realloc() */
+ newlen += newlen / OVERALLOCATE_FACTOR;
+ }
+ if (newlen < writer->min_length)
+ newlen = writer->min_length;
+
+ writer->buffer = PyUnicode_New(newlen, maxchar);
+ if (writer->buffer == NULL)
+ return -1;
+ }
+ else if (newlen > writer->size) {
+ if (writer->overallocate
+ && newlen <= (PY_SSIZE_T_MAX - newlen / OVERALLOCATE_FACTOR)) {
+ /* overallocate to limit the number of realloc() */
+ newlen += newlen / OVERALLOCATE_FACTOR;
+ }
+ if (newlen < writer->min_length)
+ newlen = writer->min_length;
+
+ if (maxchar > writer->maxchar || writer->readonly) {
+ /* resize + widen */
+ maxchar = Py_MAX(maxchar, writer->maxchar);
+ newbuffer = PyUnicode_New(newlen, maxchar);
+ if (newbuffer == NULL)
+ return -1;
+ _PyUnicode_FastCopyCharacters(newbuffer, 0,
+ writer->buffer, 0, writer->pos);
+ Py_DECREF(writer->buffer);
+ writer->readonly = 0;
+ }
+ else {
+ newbuffer = resize_compact(writer->buffer, newlen);
+ if (newbuffer == NULL)
+ return -1;
+ }
+ writer->buffer = newbuffer;
+ }
+ else if (maxchar > writer->maxchar) {
+ assert(!writer->readonly);
+ newbuffer = PyUnicode_New(writer->size, maxchar);
+ if (newbuffer == NULL)
+ return -1;
+ _PyUnicode_FastCopyCharacters(newbuffer, 0,
+ writer->buffer, 0, writer->pos);
+ Py_SETREF(writer->buffer, newbuffer);
+ }
+ _PyUnicodeWriter_Update(writer);
+ return 0;
+
+#undef OVERALLOCATE_FACTOR
+}
+
+int
+_PyUnicodeWriter_PrepareKindInternal(_PyUnicodeWriter *writer,
+ enum PyUnicode_Kind kind)
+{
+ Py_UCS4 maxchar;
+
+ /* ensure that the _PyUnicodeWriter_PrepareKind macro was used */
+ assert(writer->kind < kind);
+
+ switch (kind)
+ {
+ case PyUnicode_1BYTE_KIND: maxchar = 0xff; break;
+ case PyUnicode_2BYTE_KIND: maxchar = 0xffff; break;
+ case PyUnicode_4BYTE_KIND: maxchar = 0x10ffff; break;
+ default:
+ assert(0 && "invalid kind");
+ return -1;
+ }
+
+ return _PyUnicodeWriter_PrepareInternal(writer, 0, maxchar);
+}
+
+static int
+_PyUnicodeWriter_WriteCharInline(_PyUnicodeWriter *writer, Py_UCS4 ch)
+{
+ assert(ch <= MAX_UNICODE);
+ if (_PyUnicodeWriter_Prepare(writer, 1, ch) < 0)
+ return -1;
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos, ch);
+ writer->pos++;
+ return 0;
+}
+
+int
+_PyUnicodeWriter_WriteChar(_PyUnicodeWriter *writer, Py_UCS4 ch)
+{
+ return _PyUnicodeWriter_WriteCharInline(writer, ch);
+}
+
+int
+_PyUnicodeWriter_WriteStr(_PyUnicodeWriter *writer, PyObject *str)
+{
+ Py_UCS4 maxchar;
+ Py_ssize_t len;
+
+ if (PyUnicode_READY(str) == -1)
+ return -1;
+ len = PyUnicode_GET_LENGTH(str);
+ if (len == 0)
+ return 0;
+ maxchar = PyUnicode_MAX_CHAR_VALUE(str);
+ if (maxchar > writer->maxchar || len > writer->size - writer->pos) {
+ if (writer->buffer == NULL && !writer->overallocate) {
+ assert(_PyUnicode_CheckConsistency(str, 1));
+ writer->readonly = 1;
+ Py_INCREF(str);
+ writer->buffer = str;
+ _PyUnicodeWriter_Update(writer);
+ writer->pos += len;
+ return 0;
+ }
+ if (_PyUnicodeWriter_PrepareInternal(writer, len, maxchar) == -1)
+ return -1;
+ }
+ _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
+ str, 0, len);
+ writer->pos += len;
+ return 0;
+}
+
+int
+_PyUnicodeWriter_WriteSubstring(_PyUnicodeWriter *writer, PyObject *str,
+ Py_ssize_t start, Py_ssize_t end)
+{
+ Py_UCS4 maxchar;
+ Py_ssize_t len;
+
+ if (PyUnicode_READY(str) == -1)
+ return -1;
+
+ assert(0 <= start);
+ assert(end <= PyUnicode_GET_LENGTH(str));
+ assert(start <= end);
+
+ if (end == 0)
+ return 0;
+
+ if (start == 0 && end == PyUnicode_GET_LENGTH(str))
+ return _PyUnicodeWriter_WriteStr(writer, str);
+
+ if (PyUnicode_MAX_CHAR_VALUE(str) > writer->maxchar)
+ maxchar = _PyUnicode_FindMaxChar(str, start, end);
+ else
+ maxchar = writer->maxchar;
+ len = end - start;
+
+ if (_PyUnicodeWriter_Prepare(writer, len, maxchar) < 0)
+ return -1;
+
+ _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
+ str, start, len);
+ writer->pos += len;
+ return 0;
+}
+
+int
+_PyUnicodeWriter_WriteASCIIString(_PyUnicodeWriter *writer,
+ const char *ascii, Py_ssize_t len)
+{
+ if (len == -1)
+ len = strlen(ascii);
+
+ assert(ucs1lib_find_max_char((Py_UCS1*)ascii, (Py_UCS1*)ascii + len) < 128);
+
+ if (writer->buffer == NULL && !writer->overallocate) {
+ PyObject *str;
+
+ str = _PyUnicode_FromASCII(ascii, len);
+ if (str == NULL)
+ return -1;
+
+ writer->readonly = 1;
+ writer->buffer = str;
+ _PyUnicodeWriter_Update(writer);
+ writer->pos += len;
+ return 0;
+ }
+
+ if (_PyUnicodeWriter_Prepare(writer, len, 127) == -1)
+ return -1;
+
+ switch (writer->kind)
+ {
+ case PyUnicode_1BYTE_KIND:
+ {
+ const Py_UCS1 *str = (const Py_UCS1 *)ascii;
+ Py_UCS1 *data = writer->data;
+
+ memcpy(data + writer->pos, str, len);
+ break;
+ }
+ case PyUnicode_2BYTE_KIND:
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS2,
+ ascii, ascii + len,
+ (Py_UCS2 *)writer->data + writer->pos);
+ break;
+ }
+ case PyUnicode_4BYTE_KIND:
+ {
+ _PyUnicode_CONVERT_BYTES(
+ Py_UCS1, Py_UCS4,
+ ascii, ascii + len,
+ (Py_UCS4 *)writer->data + writer->pos);
+ break;
+ }
+ default:
+ assert(0);
+ }
+
+ writer->pos += len;
+ return 0;
+}
+
+int
+_PyUnicodeWriter_WriteLatin1String(_PyUnicodeWriter *writer,
+ const char *str, Py_ssize_t len)
+{
+ Py_UCS4 maxchar;
+
+ maxchar = ucs1lib_find_max_char((Py_UCS1*)str, (Py_UCS1*)str + len);
+ if (_PyUnicodeWriter_Prepare(writer, len, maxchar) == -1)
+ return -1;
+ unicode_write_cstr(writer->buffer, writer->pos, str, len);
+ writer->pos += len;
+ return 0;
+}
+
+PyObject *
+_PyUnicodeWriter_Finish(_PyUnicodeWriter *writer)
+{
+ PyObject *str;
+
+ if (writer->pos == 0) {
+ Py_CLEAR(writer->buffer);
+ _Py_RETURN_UNICODE_EMPTY();
+ }
+
+ str = writer->buffer;
+ writer->buffer = NULL;
+
+ if (writer->readonly) {
+ assert(PyUnicode_GET_LENGTH(str) == writer->pos);
+ return str;
+ }
+
+ if (PyUnicode_GET_LENGTH(str) != writer->pos) {
+ PyObject *str2;
+ str2 = resize_compact(str, writer->pos);
+ if (str2 == NULL) {
+ Py_DECREF(str);
+ return NULL;
+ }
+ str = str2;
+ }
+
+ assert(_PyUnicode_CheckConsistency(str, 1));
+ return unicode_result_ready(str);
+}
+
+void
+_PyUnicodeWriter_Dealloc(_PyUnicodeWriter *writer)
+{
+ Py_CLEAR(writer->buffer);
+}
+
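+#if 0
+/* Minimal usage sketch of the _PyUnicodeWriter API above (illustrative
+   only, never compiled or called; the helper name is made up).  It mirrors
+   the pattern used by the formatting code later in this file: init the
+   writer, append data, then Finish() on success or Dealloc() on error. */
+static PyObject *
+writer_usage_sketch(void)
+{
+    _PyUnicodeWriter writer;
+
+    _PyUnicodeWriter_Init(&writer);
+    writer.overallocate = 1;            /* amortize reallocations */
+    if (_PyUnicodeWriter_WriteASCIIString(&writer, "pi = ", -1) < 0)
+        goto onError;
+    if (_PyUnicodeWriter_WriteChar(&writer, 0x03C0) < 0)
+        goto onError;                   /* U+03C0 widens the buffer to UCS2 */
+    return _PyUnicodeWriter_Finish(&writer);  /* trims to the exact length */
+onError:
+    _PyUnicodeWriter_Dealloc(&writer);
+    return NULL;
+}
+#endif
+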
+#include "stringlib/unicode_format.h"
+
+PyDoc_STRVAR(format__doc__,
+ "S.format(*args, **kwargs) -> str\n\
+\n\
+Return a formatted version of S, using substitutions from args and kwargs.\n\
+The substitutions are identified by braces ('{' and '}').");
+
+PyDoc_STRVAR(format_map__doc__,
+ "S.format_map(mapping) -> str\n\
+\n\
+Return a formatted version of S, using substitutions from mapping.\n\
+The substitutions are identified by braces ('{' and '}').");
+
+static PyObject *
+unicode__format__(PyObject* self, PyObject* args)
+{
+ PyObject *format_spec;
+ _PyUnicodeWriter writer;
+ int ret;
+
+ if (!PyArg_ParseTuple(args, "U:__format__", &format_spec))
+ return NULL;
+
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+ _PyUnicodeWriter_Init(&writer);
+ ret = _PyUnicode_FormatAdvancedWriter(&writer,
+ self, format_spec, 0,
+ PyUnicode_GET_LENGTH(format_spec));
+ if (ret == -1) {
+ _PyUnicodeWriter_Dealloc(&writer);
+ return NULL;
+ }
+ return _PyUnicodeWriter_Finish(&writer);
+}
+
+PyDoc_STRVAR(p_format__doc__,
+ "S.__format__(format_spec) -> str\n\
+\n\
+Return a formatted version of S as described by format_spec.");
+
+static PyObject *
+unicode__sizeof__(PyObject *v)
+{
+ Py_ssize_t size;
+
+ /* If it's a compact object, account for base structure +
+ character data. */
+ if (PyUnicode_IS_COMPACT_ASCII(v))
+ size = sizeof(PyASCIIObject) + PyUnicode_GET_LENGTH(v) + 1;
+ else if (PyUnicode_IS_COMPACT(v))
+ size = sizeof(PyCompactUnicodeObject) +
+ (PyUnicode_GET_LENGTH(v) + 1) * PyUnicode_KIND(v);
+ else {
+ /* If it is a two-block object, account for base object, and
+ for character block if present. */
+ size = sizeof(PyUnicodeObject);
+ if (_PyUnicode_DATA_ANY(v))
+ size += (PyUnicode_GET_LENGTH(v) + 1) *
+ PyUnicode_KIND(v);
+ }
+ /* If the wstr pointer is present, account for it unless it is shared
+ with the data pointer. Check if the data is not shared. */
+ if (_PyUnicode_HAS_WSTR_MEMORY(v))
+ size += (PyUnicode_WSTR_LENGTH(v) + 1) * sizeof(wchar_t);
+ if (_PyUnicode_HAS_UTF8_MEMORY(v))
+ size += PyUnicode_UTF8_LENGTH(v) + 1;
+
+ return PyLong_FromSsize_t(size);
+}
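+
+/* Worked example for unicode__sizeof__(): a compact ASCII string such as
+   "abc" reports sizeof(PyASCIIObject) + 3 + 1 bytes, while a compact UCS2
+   string of length n reports sizeof(PyCompactUnicodeObject) + (n + 1) * 2;
+   any separately cached wchar_t or UTF-8 copies are added on top. */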
+
+PyDoc_STRVAR(sizeof__doc__,
+ "S.__sizeof__() -> size of S in memory, in bytes");
+
+static PyObject *
+unicode_getnewargs(PyObject *v)
+{
+ PyObject *copy = _PyUnicode_Copy(v);
+ if (!copy)
+ return NULL;
+ return Py_BuildValue("(N)", copy);
+}
+
+static PyMethodDef unicode_methods[] = {
+ {"encode", (PyCFunction) unicode_encode, METH_VARARGS | METH_KEYWORDS, encode__doc__},
+ {"replace", (PyCFunction) unicode_replace, METH_VARARGS, replace__doc__},
+ {"split", (PyCFunction) unicode_split, METH_VARARGS | METH_KEYWORDS, split__doc__},
+ {"rsplit", (PyCFunction) unicode_rsplit, METH_VARARGS | METH_KEYWORDS, rsplit__doc__},
+ {"join", (PyCFunction) unicode_join, METH_O, join__doc__},
+ {"capitalize", (PyCFunction) unicode_capitalize, METH_NOARGS, capitalize__doc__},
+ {"casefold", (PyCFunction) unicode_casefold, METH_NOARGS, casefold__doc__},
+ {"title", (PyCFunction) unicode_title, METH_NOARGS, title__doc__},
+ {"center", (PyCFunction) unicode_center, METH_VARARGS, center__doc__},
+ {"count", (PyCFunction) unicode_count, METH_VARARGS, count__doc__},
+ {"expandtabs", (PyCFunction) unicode_expandtabs,
+ METH_VARARGS | METH_KEYWORDS, expandtabs__doc__},
+ {"find", (PyCFunction) unicode_find, METH_VARARGS, find__doc__},
+ {"partition", (PyCFunction) unicode_partition, METH_O, partition__doc__},
+ {"index", (PyCFunction) unicode_index, METH_VARARGS, index__doc__},
+ {"ljust", (PyCFunction) unicode_ljust, METH_VARARGS, ljust__doc__},
+ {"lower", (PyCFunction) unicode_lower, METH_NOARGS, lower__doc__},
+ {"lstrip", (PyCFunction) unicode_lstrip, METH_VARARGS, lstrip__doc__},
+ {"rfind", (PyCFunction) unicode_rfind, METH_VARARGS, rfind__doc__},
+ {"rindex", (PyCFunction) unicode_rindex, METH_VARARGS, rindex__doc__},
+ {"rjust", (PyCFunction) unicode_rjust, METH_VARARGS, rjust__doc__},
+ {"rstrip", (PyCFunction) unicode_rstrip, METH_VARARGS, rstrip__doc__},
+ {"rpartition", (PyCFunction) unicode_rpartition, METH_O, rpartition__doc__},
+ {"splitlines", (PyCFunction) unicode_splitlines,
+ METH_VARARGS | METH_KEYWORDS, splitlines__doc__},
+ {"strip", (PyCFunction) unicode_strip, METH_VARARGS, strip__doc__},
+ {"swapcase", (PyCFunction) unicode_swapcase, METH_NOARGS, swapcase__doc__},
+ {"translate", (PyCFunction) unicode_translate, METH_O, translate__doc__},
+ {"upper", (PyCFunction) unicode_upper, METH_NOARGS, upper__doc__},
+ {"startswith", (PyCFunction) unicode_startswith, METH_VARARGS, startswith__doc__},
+ {"endswith", (PyCFunction) unicode_endswith, METH_VARARGS, endswith__doc__},
+ {"islower", (PyCFunction) unicode_islower, METH_NOARGS, islower__doc__},
+ {"isupper", (PyCFunction) unicode_isupper, METH_NOARGS, isupper__doc__},
+ {"istitle", (PyCFunction) unicode_istitle, METH_NOARGS, istitle__doc__},
+ {"isspace", (PyCFunction) unicode_isspace, METH_NOARGS, isspace__doc__},
+ {"isdecimal", (PyCFunction) unicode_isdecimal, METH_NOARGS, isdecimal__doc__},
+ {"isdigit", (PyCFunction) unicode_isdigit, METH_NOARGS, isdigit__doc__},
+ {"isnumeric", (PyCFunction) unicode_isnumeric, METH_NOARGS, isnumeric__doc__},
+ {"isalpha", (PyCFunction) unicode_isalpha, METH_NOARGS, isalpha__doc__},
+ {"isalnum", (PyCFunction) unicode_isalnum, METH_NOARGS, isalnum__doc__},
+ {"isidentifier", (PyCFunction) unicode_isidentifier, METH_NOARGS, isidentifier__doc__},
+ {"isprintable", (PyCFunction) unicode_isprintable, METH_NOARGS, isprintable__doc__},
+ {"zfill", (PyCFunction) unicode_zfill, METH_VARARGS, zfill__doc__},
+ {"format", (PyCFunction) do_string_format, METH_VARARGS | METH_KEYWORDS, format__doc__},
+ {"format_map", (PyCFunction) do_string_format_map, METH_O, format_map__doc__},
+ {"__format__", (PyCFunction) unicode__format__, METH_VARARGS, p_format__doc__},
+ UNICODE_MAKETRANS_METHODDEF
+ {"__sizeof__", (PyCFunction) unicode__sizeof__, METH_NOARGS, sizeof__doc__},
+#if 0
+ /* These methods are just used for debugging the implementation. */
+ {"_decimal2ascii", (PyCFunction) unicode__decimal2ascii, METH_NOARGS},
+#endif
+
+ {"__getnewargs__", (PyCFunction)unicode_getnewargs, METH_NOARGS},
+ {NULL, NULL}
+};
+
+static PyObject *
+unicode_mod(PyObject *v, PyObject *w)
+{
+ if (!PyUnicode_Check(v))
+ Py_RETURN_NOTIMPLEMENTED;
+ return PyUnicode_Format(v, w);
+}
+
+static PyNumberMethods unicode_as_number = {
+ 0, /*nb_add*/
+ 0, /*nb_subtract*/
+ 0, /*nb_multiply*/
+ unicode_mod, /*nb_remainder*/
+};
+
+static PySequenceMethods unicode_as_sequence = {
+ (lenfunc) unicode_length, /* sq_length */
+ PyUnicode_Concat, /* sq_concat */
+ (ssizeargfunc) unicode_repeat, /* sq_repeat */
+ (ssizeargfunc) unicode_getitem, /* sq_item */
+ 0, /* sq_slice */
+ 0, /* sq_ass_item */
+ 0, /* sq_ass_slice */
+ PyUnicode_Contains, /* sq_contains */
+};
+
+static PyObject*
+unicode_subscript(PyObject* self, PyObject* item)
+{
+ if (PyUnicode_READY(self) == -1)
+ return NULL;
+
+ if (PyIndex_Check(item)) {
+ Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
+ if (i == -1 && PyErr_Occurred())
+ return NULL;
+ if (i < 0)
+ i += PyUnicode_GET_LENGTH(self);
+ return unicode_getitem(self, i);
+ } else if (PySlice_Check(item)) {
+ Py_ssize_t start, stop, step, slicelength, cur, i;
+ PyObject *result;
+ void *src_data, *dest_data;
+ int src_kind, dest_kind;
+ Py_UCS4 ch, max_char, kind_limit;
+
+ if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
+ return NULL;
+ }
+ slicelength = PySlice_AdjustIndices(PyUnicode_GET_LENGTH(self),
+ &start, &stop, step);
+
+ if (slicelength <= 0) {
+ _Py_RETURN_UNICODE_EMPTY();
+ } else if (start == 0 && step == 1 &&
+ slicelength == PyUnicode_GET_LENGTH(self)) {
+ return unicode_result_unchanged(self);
+ } else if (step == 1) {
+ return PyUnicode_Substring(self,
+ start, start + slicelength);
+ }
+ /* General case */
+ src_kind = PyUnicode_KIND(self);
+ src_data = PyUnicode_DATA(self);
+ if (!PyUnicode_IS_ASCII(self)) {
+ kind_limit = kind_maxchar_limit(src_kind);
+ max_char = 0;
+ for (cur = start, i = 0; i < slicelength; cur += step, i++) {
+ ch = PyUnicode_READ(src_kind, src_data, cur);
+ if (ch > max_char) {
+ max_char = ch;
+ if (max_char >= kind_limit)
+ break;
+ }
+ }
+ }
+ else
+ max_char = 127;
+ result = PyUnicode_New(slicelength, max_char);
+ if (result == NULL)
+ return NULL;
+ dest_kind = PyUnicode_KIND(result);
+ dest_data = PyUnicode_DATA(result);
+
+ for (cur = start, i = 0; i < slicelength; cur += step, i++) {
+ Py_UCS4 ch = PyUnicode_READ(src_kind, src_data, cur);
+ PyUnicode_WRITE(dest_kind, dest_data, i, ch);
+ }
+ assert(_PyUnicode_CheckConsistency(result, 1));
+ return result;
+ } else {
+ PyErr_SetString(PyExc_TypeError, "string indices must be integers");
+ return NULL;
+ }
+}
+
+static PyMappingMethods unicode_as_mapping = {
+ (lenfunc)unicode_length, /* mp_length */
+ (binaryfunc)unicode_subscript, /* mp_subscript */
+ (objobjargproc)0, /* mp_ass_subscript */
+};
+
+
+/* Helpers for PyUnicode_Format() */
+
+struct unicode_formatter_t {
+ PyObject *args;
+ int args_owned;
+ Py_ssize_t arglen, argidx;
+ PyObject *dict;
+
+ enum PyUnicode_Kind fmtkind;
+ Py_ssize_t fmtcnt, fmtpos;
+ void *fmtdata;
+ PyObject *fmtstr;
+
+ _PyUnicodeWriter writer;
+};
+
+struct unicode_format_arg_t {
+ Py_UCS4 ch;
+ int flags;
+ Py_ssize_t width;
+ int prec;
+ int sign;
+};
+
+static PyObject *
+unicode_format_getnextarg(struct unicode_formatter_t *ctx)
+{
+ Py_ssize_t argidx = ctx->argidx;
+
+ if (argidx < ctx->arglen) {
+ ctx->argidx++;
+ if (ctx->arglen < 0)
+ return ctx->args;
+ else
+ return PyTuple_GetItem(ctx->args, argidx);
+ }
+ PyErr_SetString(PyExc_TypeError,
+ "not enough arguments for format string");
+ return NULL;
+}
+
+/* Returns a new reference to a PyUnicode object, or NULL on failure. */
+
+/* Format a float into the writer if the writer is not NULL, or into *p_output
+ otherwise.
+
+ Return 0 on success, raise an exception and return -1 on error. */
+static int
+formatfloat(PyObject *v, struct unicode_format_arg_t *arg,
+ PyObject **p_output,
+ _PyUnicodeWriter *writer)
+{
+ char *p;
+ double x;
+ Py_ssize_t len;
+ int prec;
+ int dtoa_flags;
+
+ x = PyFloat_AsDouble(v);
+ if (x == -1.0 && PyErr_Occurred())
+ return -1;
+
+ prec = arg->prec;
+ if (prec < 0)
+ prec = 6;
+
+ if (arg->flags & F_ALT)
+ dtoa_flags = Py_DTSF_ALT;
+ else
+ dtoa_flags = 0;
+ p = PyOS_double_to_string(x, arg->ch, prec, dtoa_flags, NULL);
+ if (p == NULL)
+ return -1;
+ len = strlen(p);
+ if (writer) {
+ if (_PyUnicodeWriter_WriteASCIIString(writer, p, len) < 0) {
+ PyMem_Free(p);
+ return -1;
+ }
+ }
+ else
+ *p_output = _PyUnicode_FromASCII(p, len);
+ PyMem_Free(p);
+ return 0;
+}
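+
+/* Worked example for formatfloat(): "%.3f" % 2.5 arrives here with
+   arg->ch == 'f' and arg->prec == 3, so PyOS_double_to_string(2.5, 'f',
+   3, 0, NULL) produces "2.500"; it is then either streamed into the
+   writer or stored in *p_output for padding, depending on the caller. */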
+
+/* formatlong() emulates the format codes d, u, o, x and X, and
+ * the F_ALT flag, for Python's long (unbounded) ints. It's not used for
+ * Python's regular ints.
+ * Return value: a new PyUnicodeObject*, or NULL if error.
+ * The output string is of the form
+ * "-"? ("0x" | "0X")? digit+
+ * "0x"/"0X" are present only for x and X conversions, with F_ALT
+ * set in flags. The case of hex digits will be correct.
+ * There will be at least prec digits, zero-filled on the left if
+ * necessary to get that many.
+ * val object to be converted
+ * flags bitmask of format flags; only F_ALT is looked at
+ * prec minimum number of digits; 0-fill on left if needed
+ * type a character in [duoxX]; u acts the same as d
+ *
+ * CAUTION: o, x and X conversions on regular ints can never
+ * produce a '-' sign, but can for Python's unbounded ints.
+ */
+PyObject *
+_PyUnicode_FormatLong(PyObject *val, int alt, int prec, int type)
+{
+ PyObject *result = NULL;
+ char *buf;
+ Py_ssize_t i;
+ int sign; /* 1 if '-', else 0 */
+ int len; /* number of characters */
+ Py_ssize_t llen;
+ int numdigits; /* len == numnondigits + numdigits */
+ int numnondigits = 0;
+
+ /* Avoid exceeding SSIZE_T_MAX */
+ if (prec > INT_MAX-3) {
+ PyErr_SetString(PyExc_OverflowError,
+ "precision too large");
+ return NULL;
+ }
+
+ assert(PyLong_Check(val));
+
+ switch (type) {
+ default:
+ assert(!"'type' not in [diuoxX]");
+ case 'd':
+ case 'i':
+ case 'u':
+ /* int and int subclasses should print numerically when a numeric */
+ /* format code is used (see issue18780) */
+ result = PyNumber_ToBase(val, 10);
+ break;
+ case 'o':
+ numnondigits = 2;
+ result = PyNumber_ToBase(val, 8);
+ break;
+ case 'x':
+ case 'X':
+ numnondigits = 2;
+ result = PyNumber_ToBase(val, 16);
+ break;
+ }
+ if (!result)
+ return NULL;
+
+ assert(unicode_modifiable(result));
+ assert(PyUnicode_IS_READY(result));
+ assert(PyUnicode_IS_ASCII(result));
+
+ /* To modify the string in-place, there can only be one reference. */
+ if (Py_REFCNT(result) != 1) {
+ Py_DECREF(result);
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ buf = PyUnicode_DATA(result);
+ llen = PyUnicode_GET_LENGTH(result);
+ if (llen > INT_MAX) {
+ Py_DECREF(result);
+ PyErr_SetString(PyExc_ValueError,
+ "string too large in _PyUnicode_FormatLong");
+ return NULL;
+ }
+ len = (int)llen;
+ sign = buf[0] == '-';
+ numnondigits += sign;
+ numdigits = len - numnondigits;
+ assert(numdigits > 0);
+
+ /* Get rid of base marker unless F_ALT */
+ if (((alt) == 0 &&
+ (type == 'o' || type == 'x' || type == 'X'))) {
+ assert(buf[sign] == '0');
+ assert(buf[sign+1] == 'x' || buf[sign+1] == 'X' ||
+ buf[sign+1] == 'o');
+ numnondigits -= 2;
+ buf += 2;
+ len -= 2;
+ if (sign)
+ buf[0] = '-';
+ assert(len == numnondigits + numdigits);
+ assert(numdigits > 0);
+ }
+
+ /* Fill with leading zeroes to meet minimum width. */
+ if (prec > numdigits) {
+ PyObject *r1 = PyBytes_FromStringAndSize(NULL,
+ numnondigits + prec);
+ char *b1;
+ if (!r1) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ b1 = PyBytes_AS_STRING(r1);
+ for (i = 0; i < numnondigits; ++i)
+ *b1++ = *buf++;
+ for (i = 0; i < prec - numdigits; i++)
+ *b1++ = '0';
+ for (i = 0; i < numdigits; i++)
+ *b1++ = *buf++;
+ *b1 = '\0';
+ Py_DECREF(result);
+ result = r1;
+ buf = PyBytes_AS_STRING(result);
+ len = numnondigits + prec;
+ }
+
+ /* Fix up case for hex conversions. */
+ if (type == 'X') {
+        /* Need to convert all lower case letters to upper case,
+ and need to convert 0x to 0X (and -0x to -0X). */
+ for (i = 0; i < len; i++)
+ if (buf[i] >= 'a' && buf[i] <= 'x')
+ buf[i] -= 'a'-'A';
+ }
+ if (!PyUnicode_Check(result)
+ || buf != PyUnicode_DATA(result)) {
+ PyObject *unicode;
+ unicode = _PyUnicode_FromASCII(buf, len);
+ Py_DECREF(result);
+ result = unicode;
+ }
+ else if (len != PyUnicode_GET_LENGTH(result)) {
+ if (PyUnicode_Resize(&result, len) < 0)
+ Py_CLEAR(result);
+ }
+ return result;
+}
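+
+/* Worked example for _PyUnicode_FormatLong(): "%#.4X" % 255 calls it with
+   alt=1, prec=4 and type='X'.  PyNumber_ToBase(255, 16) yields "0xff",
+   the base marker is kept because F_ALT is set, the digits are zero-filled
+   to four, and the case fix-up produces "0X00FF". */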
+
+/* Format an integer or a float as an integer.
+ * Return 1 if the number has been formatted into the writer,
+ * 0 if the number has been formatted into *p_output
+ * -1 and raise an exception on error */
+static int
+mainformatlong(PyObject *v,
+ struct unicode_format_arg_t *arg,
+ PyObject **p_output,
+ _PyUnicodeWriter *writer)
+{
+ PyObject *iobj, *res;
+ char type = (char)arg->ch;
+
+ if (!PyNumber_Check(v))
+ goto wrongtype;
+
+ /* make sure number is a type of integer for o, x, and X */
+ if (!PyLong_Check(v)) {
+ if (type == 'o' || type == 'x' || type == 'X') {
+ iobj = PyNumber_Index(v);
+ if (iobj == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError))
+ goto wrongtype;
+ return -1;
+ }
+ }
+ else {
+ iobj = PyNumber_Long(v);
+ if (iobj == NULL ) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError))
+ goto wrongtype;
+ return -1;
+ }
+ }
+ assert(PyLong_Check(iobj));
+ }
+ else {
+ iobj = v;
+ Py_INCREF(iobj);
+ }
+
+ if (PyLong_CheckExact(v)
+ && arg->width == -1 && arg->prec == -1
+ && !(arg->flags & (F_SIGN | F_BLANK))
+ && type != 'X')
+ {
+ /* Fast path */
+ int alternate = arg->flags & F_ALT;
+ int base;
+
+ switch(type)
+ {
+ default:
+ assert(0 && "'type' not in [diuoxX]");
+ case 'd':
+ case 'i':
+ case 'u':
+ base = 10;
+ break;
+ case 'o':
+ base = 8;
+ break;
+ case 'x':
+ case 'X':
+ base = 16;
+ break;
+ }
+
+ if (_PyLong_FormatWriter(writer, v, base, alternate) == -1) {
+ Py_DECREF(iobj);
+ return -1;
+ }
+ Py_DECREF(iobj);
+ return 1;
+ }
+
+ res = _PyUnicode_FormatLong(iobj, arg->flags & F_ALT, arg->prec, type);
+ Py_DECREF(iobj);
+ if (res == NULL)
+ return -1;
+ *p_output = res;
+ return 0;
+
+wrongtype:
+ switch(type)
+ {
+ case 'o':
+ case 'x':
+ case 'X':
+ PyErr_Format(PyExc_TypeError,
+ "%%%c format: an integer is required, "
+ "not %.200s",
+ type, Py_TYPE(v)->tp_name);
+ break;
+ default:
+ PyErr_Format(PyExc_TypeError,
+ "%%%c format: a number is required, "
+ "not %.200s",
+ type, Py_TYPE(v)->tp_name);
+ break;
+ }
+ return -1;
+}
+
+static Py_UCS4
+formatchar(PyObject *v)
+{
+ /* presume that the buffer is at least 3 characters long */
+ if (PyUnicode_Check(v)) {
+ if (PyUnicode_GET_LENGTH(v) == 1) {
+ return PyUnicode_READ_CHAR(v, 0);
+ }
+ goto onError;
+ }
+ else {
+ PyObject *iobj;
+ long x;
+ /* make sure number is a type of integer */
+ if (!PyLong_Check(v)) {
+ iobj = PyNumber_Index(v);
+ if (iobj == NULL) {
+ goto onError;
+ }
+ x = PyLong_AsLong(iobj);
+ Py_DECREF(iobj);
+ }
+ else {
+ x = PyLong_AsLong(v);
+ }
+ if (x == -1 && PyErr_Occurred())
+ goto onError;
+
+ if (x < 0 || x > MAX_UNICODE) {
+ PyErr_SetString(PyExc_OverflowError,
+ "%c arg not in range(0x110000)");
+ return (Py_UCS4) -1;
+ }
+
+ return (Py_UCS4) x;
+ }
+
+ onError:
+ PyErr_SetString(PyExc_TypeError,
+ "%c requires int or char");
+ return (Py_UCS4) -1;
+}
+
+/* Parse options of an argument: flags, width, precision.
+ Handle also "%(name)" syntax.
+
+   Return 0 on success; the parsed flags, width and precision are
+   stored in *arg. Raise an exception and return -1 on error. */
+static int
+unicode_format_arg_parse(struct unicode_formatter_t *ctx,
+ struct unicode_format_arg_t *arg)
+{
+#define FORMAT_READ(ctx) \
+ PyUnicode_READ((ctx)->fmtkind, (ctx)->fmtdata, (ctx)->fmtpos)
+
+ PyObject *v;
+
+ if (arg->ch == '(') {
+ /* Get argument value from a dictionary. Example: "%(name)s". */
+ Py_ssize_t keystart;
+ Py_ssize_t keylen;
+ PyObject *key;
+ int pcount = 1;
+
+ if (ctx->dict == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "format requires a mapping");
+ return -1;
+ }
+ ++ctx->fmtpos;
+ --ctx->fmtcnt;
+ keystart = ctx->fmtpos;
+ /* Skip over balanced parentheses */
+ while (pcount > 0 && --ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ if (arg->ch == ')')
+ --pcount;
+ else if (arg->ch == '(')
+ ++pcount;
+ ctx->fmtpos++;
+ }
+ keylen = ctx->fmtpos - keystart - 1;
+ if (ctx->fmtcnt < 0 || pcount > 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "incomplete format key");
+ return -1;
+ }
+ key = PyUnicode_Substring(ctx->fmtstr,
+ keystart, keystart + keylen);
+ if (key == NULL)
+ return -1;
+ if (ctx->args_owned) {
+ ctx->args_owned = 0;
+ Py_DECREF(ctx->args);
+ }
+ ctx->args = PyObject_GetItem(ctx->dict, key);
+ Py_DECREF(key);
+ if (ctx->args == NULL)
+ return -1;
+ ctx->args_owned = 1;
+ ctx->arglen = -1;
+ ctx->argidx = -2;
+ }
+
+ /* Parse flags. Example: "%+i" => flags=F_SIGN. */
+ while (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ switch (arg->ch) {
+ case '-': arg->flags |= F_LJUST; continue;
+ case '+': arg->flags |= F_SIGN; continue;
+ case ' ': arg->flags |= F_BLANK; continue;
+ case '#': arg->flags |= F_ALT; continue;
+ case '0': arg->flags |= F_ZERO; continue;
+ }
+ break;
+ }
+
+ /* Parse width. Example: "%10s" => width=10 */
+ if (arg->ch == '*') {
+ v = unicode_format_getnextarg(ctx);
+ if (v == NULL)
+ return -1;
+ if (!PyLong_Check(v)) {
+ PyErr_SetString(PyExc_TypeError,
+ "* wants int");
+ return -1;
+ }
+ arg->width = PyLong_AsSsize_t(v);
+ if (arg->width == -1 && PyErr_Occurred())
+ return -1;
+ if (arg->width < 0) {
+ arg->flags |= F_LJUST;
+ arg->width = -arg->width;
+ }
+ if (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ }
+ }
+ else if (arg->ch >= '0' && arg->ch <= '9') {
+ arg->width = arg->ch - '0';
+ while (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ if (arg->ch < '0' || arg->ch > '9')
+ break;
+ /* Since arg->ch is unsigned, the RHS would end up as unsigned,
+ mixing signed and unsigned comparison. Since arg->ch is between
+ '0' and '9', casting to int is safe. */
+ if (arg->width > (PY_SSIZE_T_MAX - ((int)arg->ch - '0')) / 10) {
+ PyErr_SetString(PyExc_ValueError,
+ "width too big");
+ return -1;
+ }
+ arg->width = arg->width*10 + (arg->ch - '0');
+ }
+ }
+
+ /* Parse precision. Example: "%.3f" => prec=3 */
+ if (arg->ch == '.') {
+ arg->prec = 0;
+ if (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ }
+ if (arg->ch == '*') {
+ v = unicode_format_getnextarg(ctx);
+ if (v == NULL)
+ return -1;
+ if (!PyLong_Check(v)) {
+ PyErr_SetString(PyExc_TypeError,
+ "* wants int");
+ return -1;
+ }
+ arg->prec = _PyLong_AsInt(v);
+ if (arg->prec == -1 && PyErr_Occurred())
+ return -1;
+ if (arg->prec < 0)
+ arg->prec = 0;
+ if (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ }
+ }
+ else if (arg->ch >= '0' && arg->ch <= '9') {
+ arg->prec = arg->ch - '0';
+ while (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ if (arg->ch < '0' || arg->ch > '9')
+ break;
+ if (arg->prec > (INT_MAX - ((int)arg->ch - '0')) / 10) {
+ PyErr_SetString(PyExc_ValueError,
+ "precision too big");
+ return -1;
+ }
+ arg->prec = arg->prec*10 + (arg->ch - '0');
+ }
+ }
+ }
+
+ /* Ignore "h", "l" and "L" format prefix (ex: "%hi" or "%ls") */
+ if (ctx->fmtcnt >= 0) {
+ if (arg->ch == 'h' || arg->ch == 'l' || arg->ch == 'L') {
+ if (--ctx->fmtcnt >= 0) {
+ arg->ch = FORMAT_READ(ctx);
+ ctx->fmtpos++;
+ }
+ }
+ }
+ if (ctx->fmtcnt < 0) {
+ PyErr_SetString(PyExc_ValueError,
+ "incomplete format");
+ return -1;
+ }
+ return 0;
+
+#undef FORMAT_READ
+}
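+
+/* Worked example for unicode_format_arg_parse(): for "%(count)08d" the
+   key "count" is first extracted and looked up in ctx->dict, then the
+   '0' flag sets F_ZERO and the width is read as 8; no '.' follows, so
+   the precision stays at -1 and arg->ch ends up at the conversion 'd'. */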
+
+/* Format one argument. Supported conversion specifiers:
+
+ - "s", "r", "a": any type
+ - "i", "d", "u": int or float
+ - "o", "x", "X": int
+ - "e", "E", "f", "F", "g", "G": float
+ - "c": int or str (1 character)
+
+ When possible, the output is written directly into the Unicode writer
+ (ctx->writer). A string is created when padding is required.
+
+ Return 0 if the argument has been formatted into *p_str,
+ 1 if the argument has been written into ctx->writer,
+ -1 on error. */
+static int
+unicode_format_arg_format(struct unicode_formatter_t *ctx,
+ struct unicode_format_arg_t *arg,
+ PyObject **p_str)
+{
+ PyObject *v;
+ _PyUnicodeWriter *writer = &ctx->writer;
+
+ if (ctx->fmtcnt == 0)
+ ctx->writer.overallocate = 0;
+
+ if (arg->ch == '%') {
+ if (_PyUnicodeWriter_WriteCharInline(writer, '%') < 0)
+ return -1;
+ return 1;
+ }
+
+ v = unicode_format_getnextarg(ctx);
+ if (v == NULL)
+ return -1;
+
+
+ switch (arg->ch) {
+ case 's':
+ case 'r':
+ case 'a':
+ if (PyLong_CheckExact(v) && arg->width == -1 && arg->prec == -1) {
+ /* Fast path */
+ if (_PyLong_FormatWriter(writer, v, 10, arg->flags & F_ALT) == -1)
+ return -1;
+ return 1;
+ }
+
+ if (PyUnicode_CheckExact(v) && arg->ch == 's') {
+ *p_str = v;
+ Py_INCREF(*p_str);
+ }
+ else {
+ if (arg->ch == 's')
+ *p_str = PyObject_Str(v);
+ else if (arg->ch == 'r')
+ *p_str = PyObject_Repr(v);
+ else
+ *p_str = PyObject_ASCII(v);
+ }
+ break;
+
+ case 'i':
+ case 'd':
+ case 'u':
+ case 'o':
+ case 'x':
+ case 'X':
+ {
+ int ret = mainformatlong(v, arg, p_str, writer);
+ if (ret != 0)
+ return ret;
+ arg->sign = 1;
+ break;
+ }
+
+ case 'e':
+ case 'E':
+ case 'f':
+ case 'F':
+ case 'g':
+ case 'G':
+ if (arg->width == -1 && arg->prec == -1
+ && !(arg->flags & (F_SIGN | F_BLANK)))
+ {
+ /* Fast path */
+ if (formatfloat(v, arg, NULL, writer) == -1)
+ return -1;
+ return 1;
+ }
+
+ arg->sign = 1;
+ if (formatfloat(v, arg, p_str, NULL) == -1)
+ return -1;
+ break;
+
+ case 'c':
+ {
+ Py_UCS4 ch = formatchar(v);
+ if (ch == (Py_UCS4) -1)
+ return -1;
+ if (arg->width == -1 && arg->prec == -1) {
+ /* Fast path */
+ if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0)
+ return -1;
+ return 1;
+ }
+ *p_str = PyUnicode_FromOrdinal(ch);
+ break;
+ }
+
+ default:
+ PyErr_Format(PyExc_ValueError,
+ "unsupported format character '%c' (0x%x) "
+ "at index %zd",
+ (31<=arg->ch && arg->ch<=126) ? (char)arg->ch : '?',
+ (int)arg->ch,
+ ctx->fmtpos - 1);
+ return -1;
+ }
+ if (*p_str == NULL)
+ return -1;
+ assert (PyUnicode_Check(*p_str));
+ return 0;
+}
+
+static int
+unicode_format_arg_output(struct unicode_formatter_t *ctx,
+ struct unicode_format_arg_t *arg,
+ PyObject *str)
+{
+ Py_ssize_t len;
+ enum PyUnicode_Kind kind;
+ void *pbuf;
+ Py_ssize_t pindex;
+ Py_UCS4 signchar;
+ Py_ssize_t buflen;
+ Py_UCS4 maxchar;
+ Py_ssize_t sublen;
+ _PyUnicodeWriter *writer = &ctx->writer;
+ Py_UCS4 fill;
+
+ fill = ' ';
+ if (arg->sign && arg->flags & F_ZERO)
+ fill = '0';
+
+ if (PyUnicode_READY(str) == -1)
+ return -1;
+
+ len = PyUnicode_GET_LENGTH(str);
+ if ((arg->width == -1 || arg->width <= len)
+ && (arg->prec == -1 || arg->prec >= len)
+ && !(arg->flags & (F_SIGN | F_BLANK)))
+ {
+ /* Fast path */
+ if (_PyUnicodeWriter_WriteStr(writer, str) == -1)
+ return -1;
+ return 0;
+ }
+
+ /* Truncate the string for "s", "r" and "a" formats
+ if the precision is set */
+ if (arg->ch == 's' || arg->ch == 'r' || arg->ch == 'a') {
+ if (arg->prec >= 0 && len > arg->prec)
+ len = arg->prec;
+ }
+
+ /* Adjust sign and width */
+ kind = PyUnicode_KIND(str);
+ pbuf = PyUnicode_DATA(str);
+ pindex = 0;
+ signchar = '\0';
+ if (arg->sign) {
+ Py_UCS4 ch = PyUnicode_READ(kind, pbuf, pindex);
+ if (ch == '-' || ch == '+') {
+ signchar = ch;
+ len--;
+ pindex++;
+ }
+ else if (arg->flags & F_SIGN)
+ signchar = '+';
+ else if (arg->flags & F_BLANK)
+ signchar = ' ';
+ else
+ arg->sign = 0;
+ }
+ if (arg->width < len)
+ arg->width = len;
+
+ /* Prepare the writer */
+ maxchar = writer->maxchar;
+ if (!(arg->flags & F_LJUST)) {
+ if (arg->sign) {
+ if ((arg->width-1) > len)
+ maxchar = Py_MAX(maxchar, fill);
+ }
+ else {
+ if (arg->width > len)
+ maxchar = Py_MAX(maxchar, fill);
+ }
+ }
+ if (PyUnicode_MAX_CHAR_VALUE(str) > maxchar) {
+ Py_UCS4 strmaxchar = _PyUnicode_FindMaxChar(str, 0, pindex+len);
+ maxchar = Py_MAX(maxchar, strmaxchar);
+ }
+
+ buflen = arg->width;
+ if (arg->sign && len == arg->width)
+ buflen++;
+ if (_PyUnicodeWriter_Prepare(writer, buflen, maxchar) == -1)
+ return -1;
+
+ /* Write the sign if needed */
+ if (arg->sign) {
+ if (fill != ' ') {
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos, signchar);
+ writer->pos += 1;
+ }
+ if (arg->width > len)
+ arg->width--;
+ }
+
+ /* Write the numeric prefix for "x", "X" and "o" formats
+ if the alternate form is used.
+ For example, write "0x" for the "%#x" format. */
+ if ((arg->flags & F_ALT) && (arg->ch == 'x' || arg->ch == 'X' || arg->ch == 'o')) {
+ assert(PyUnicode_READ(kind, pbuf, pindex) == '0');
+ assert(PyUnicode_READ(kind, pbuf, pindex + 1) == arg->ch);
+ if (fill != ' ') {
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos, '0');
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos+1, arg->ch);
+ writer->pos += 2;
+ pindex += 2;
+ }
+ arg->width -= 2;
+ if (arg->width < 0)
+ arg->width = 0;
+ len -= 2;
+ }
+
+ /* Pad left with the fill character if needed */
+ if (arg->width > len && !(arg->flags & F_LJUST)) {
+ sublen = arg->width - len;
+ FILL(writer->kind, writer->data, fill, writer->pos, sublen);
+ writer->pos += sublen;
+ arg->width = len;
+ }
+
+ /* If padding with spaces: write sign if needed and/or numeric prefix if
+ the alternate form is used */
+ if (fill == ' ') {
+ if (arg->sign) {
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos, signchar);
+ writer->pos += 1;
+ }
+ if ((arg->flags & F_ALT) && (arg->ch == 'x' || arg->ch == 'X' || arg->ch == 'o')) {
+ assert(PyUnicode_READ(kind, pbuf, pindex) == '0');
+ assert(PyUnicode_READ(kind, pbuf, pindex+1) == arg->ch);
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos, '0');
+ PyUnicode_WRITE(writer->kind, writer->data, writer->pos+1, arg->ch);
+ writer->pos += 2;
+ pindex += 2;
+ }
+ }
+
+ /* Write characters */
+ if (len) {
+ _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
+ str, pindex, len);
+ writer->pos += len;
+ }
+
+ /* Pad right with the fill character if needed */
+ if (arg->width > len) {
+ sublen = arg->width - len;
+ FILL(writer->kind, writer->data, ' ', writer->pos, sublen);
+ writer->pos += sublen;
+ }
+ return 0;
+}
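+
+/* Worked example for unicode_format_arg_output(): "%+08.2f" % 3.14159
+   reaches this point with str = "3.14", F_SIGN | F_ZERO set and width 8.
+   The fill character becomes '0', the '+' sign is written first, three
+   '0' pad characters follow, then the digits: "+0003.14". */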
+
+/* Helper of PyUnicode_Format(): format one arg.
+ Return 0 on success, raise an exception and return -1 on error. */
+static int
+unicode_format_arg(struct unicode_formatter_t *ctx)
+{
+ struct unicode_format_arg_t arg;
+ PyObject *str;
+ int ret;
+
+ arg.ch = PyUnicode_READ(ctx->fmtkind, ctx->fmtdata, ctx->fmtpos);
+ arg.flags = 0;
+ arg.width = -1;
+ arg.prec = -1;
+ arg.sign = 0;
+ str = NULL;
+
+ ret = unicode_format_arg_parse(ctx, &arg);
+ if (ret == -1)
+ return -1;
+
+ ret = unicode_format_arg_format(ctx, &arg, &str);
+ if (ret == -1)
+ return -1;
+
+ if (ret != 1) {
+ ret = unicode_format_arg_output(ctx, &arg, str);
+ Py_DECREF(str);
+ if (ret == -1)
+ return -1;
+ }
+
+ if (ctx->dict && (ctx->argidx < ctx->arglen) && arg.ch != '%') {
+ PyErr_SetString(PyExc_TypeError,
+ "not all arguments converted during string formatting");
+ return -1;
+ }
+ return 0;
+}
+
+PyObject *
+PyUnicode_Format(PyObject *format, PyObject *args)
+{
+ struct unicode_formatter_t ctx;
+
+ if (format == NULL || args == NULL) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+
+ if (ensure_unicode(format) < 0)
+ return NULL;
+
+ ctx.fmtstr = format;
+ ctx.fmtdata = PyUnicode_DATA(ctx.fmtstr);
+ ctx.fmtkind = PyUnicode_KIND(ctx.fmtstr);
+ ctx.fmtcnt = PyUnicode_GET_LENGTH(ctx.fmtstr);
+ ctx.fmtpos = 0;
+
+ _PyUnicodeWriter_Init(&ctx.writer);
+ ctx.writer.min_length = ctx.fmtcnt + 100;
+ ctx.writer.overallocate = 1;
+
+ if (PyTuple_Check(args)) {
+ ctx.arglen = PyTuple_Size(args);
+ ctx.argidx = 0;
+ }
+ else {
+ ctx.arglen = -1;
+ ctx.argidx = -2;
+ }
+ ctx.args_owned = 0;
+ if (PyMapping_Check(args) && !PyTuple_Check(args) && !PyUnicode_Check(args))
+ ctx.dict = args;
+ else
+ ctx.dict = NULL;
+ ctx.args = args;
+
+ while (--ctx.fmtcnt >= 0) {
+ if (PyUnicode_READ(ctx.fmtkind, ctx.fmtdata, ctx.fmtpos) != '%') {
+ Py_ssize_t nonfmtpos;
+
+ nonfmtpos = ctx.fmtpos++;
+ while (ctx.fmtcnt >= 0 &&
+ PyUnicode_READ(ctx.fmtkind, ctx.fmtdata, ctx.fmtpos) != '%') {
+ ctx.fmtpos++;
+ ctx.fmtcnt--;
+ }
+ if (ctx.fmtcnt < 0) {
+ ctx.fmtpos--;
+ ctx.writer.overallocate = 0;
+ }
+
+ if (_PyUnicodeWriter_WriteSubstring(&ctx.writer, ctx.fmtstr,
+ nonfmtpos, ctx.fmtpos) < 0)
+ goto onError;
+ }
+ else {
+ ctx.fmtpos++;
+ if (unicode_format_arg(&ctx) == -1)
+ goto onError;
+ }
+ }
+
+ if (ctx.argidx < ctx.arglen && !ctx.dict) {
+ PyErr_SetString(PyExc_TypeError,
+ "not all arguments converted during string formatting");
+ goto onError;
+ }
+
+ if (ctx.args_owned) {
+ Py_DECREF(ctx.args);
+ }
+ return _PyUnicodeWriter_Finish(&ctx.writer);
+
+ onError:
+ _PyUnicodeWriter_Dealloc(&ctx.writer);
+ if (ctx.args_owned) {
+ Py_DECREF(ctx.args);
+ }
+ return NULL;
+}
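+
+/* Worked example for PyUnicode_Format(): "%s: %d" % ("count", 3) copies
+   the literal run ": " with _PyUnicodeWriter_WriteSubstring() and hands
+   each '%' specifier to unicode_format_arg(), which consumes the next
+   tuple item; _PyUnicodeWriter_Finish() then trims the result to
+   "count: 3". */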
+
+static PyObject *
+unicode_subtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
+
+static PyObject *
+unicode_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *x = NULL;
+ static char *kwlist[] = {"object", "encoding", "errors", 0};
+ char *encoding = NULL;
+ char *errors = NULL;
+
+ if (type != &PyUnicode_Type)
+ return unicode_subtype_new(type, args, kwds);
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "|Oss:str",
+ kwlist, &x, &encoding, &errors))
+ return NULL;
+ if (x == NULL)
+ _Py_RETURN_UNICODE_EMPTY();
+ if (encoding == NULL && errors == NULL)
+ return PyObject_Str(x);
+ else
+ return PyUnicode_FromEncodedObject(x, encoding, errors);
+}
+
+static PyObject *
+unicode_subtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *unicode, *self;
+ Py_ssize_t length, char_size;
+ int share_wstr, share_utf8;
+ unsigned int kind;
+ void *data;
+
+ assert(PyType_IsSubtype(type, &PyUnicode_Type));
+
+ unicode = unicode_new(&PyUnicode_Type, args, kwds);
+ if (unicode == NULL)
+ return NULL;
+ assert(_PyUnicode_CHECK(unicode));
+ if (PyUnicode_READY(unicode) == -1) {
+ Py_DECREF(unicode);
+ return NULL;
+ }
+
+ self = type->tp_alloc(type, 0);
+ if (self == NULL) {
+ Py_DECREF(unicode);
+ return NULL;
+ }
+ kind = PyUnicode_KIND(unicode);
+ length = PyUnicode_GET_LENGTH(unicode);
+
+ _PyUnicode_LENGTH(self) = length;
+#ifdef Py_DEBUG
+ _PyUnicode_HASH(self) = -1;
+#else
+ _PyUnicode_HASH(self) = _PyUnicode_HASH(unicode);
+#endif
+ _PyUnicode_STATE(self).interned = 0;
+ _PyUnicode_STATE(self).kind = kind;
+ _PyUnicode_STATE(self).compact = 0;
+ _PyUnicode_STATE(self).ascii = _PyUnicode_STATE(unicode).ascii;
+ _PyUnicode_STATE(self).ready = 1;
+ _PyUnicode_WSTR(self) = NULL;
+ _PyUnicode_UTF8_LENGTH(self) = 0;
+ _PyUnicode_UTF8(self) = NULL;
+ _PyUnicode_WSTR_LENGTH(self) = 0;
+ _PyUnicode_DATA_ANY(self) = NULL;
+
+ share_utf8 = 0;
+ share_wstr = 0;
+ if (kind == PyUnicode_1BYTE_KIND) {
+ char_size = 1;
+ if (PyUnicode_MAX_CHAR_VALUE(unicode) < 128)
+ share_utf8 = 1;
+ }
+ else if (kind == PyUnicode_2BYTE_KIND) {
+ char_size = 2;
+ if (sizeof(wchar_t) == 2)
+ share_wstr = 1;
+ }
+ else {
+ assert(kind == PyUnicode_4BYTE_KIND);
+ char_size = 4;
+ if (sizeof(wchar_t) == 4)
+ share_wstr = 1;
+ }
+
+ /* Ensure we won't overflow the length. */
+ if (length > (PY_SSIZE_T_MAX / char_size - 1)) {
+ PyErr_NoMemory();
+ goto onError;
+ }
+ data = PyObject_MALLOC((length + 1) * char_size);
+ if (data == NULL) {
+ PyErr_NoMemory();
+ goto onError;
+ }
+
+ _PyUnicode_DATA_ANY(self) = data;
+ if (share_utf8) {
+ _PyUnicode_UTF8_LENGTH(self) = length;
+ _PyUnicode_UTF8(self) = data;
+ }
+ if (share_wstr) {
+ _PyUnicode_WSTR_LENGTH(self) = length;
+ _PyUnicode_WSTR(self) = (wchar_t *)data;
+ }
+
+ memcpy(data, PyUnicode_DATA(unicode),
+ kind * (length + 1));
+ assert(_PyUnicode_CheckConsistency(self, 1));
+#ifdef Py_DEBUG
+ _PyUnicode_HASH(self) = _PyUnicode_HASH(unicode);
+#endif
+ Py_DECREF(unicode);
+ return self;
+
+onError:
+ Py_DECREF(unicode);
+ Py_DECREF(self);
+ return NULL;
+}
+
+PyDoc_STRVAR(unicode_doc,
+"str(object='') -> str\n\
+str(bytes_or_buffer[, encoding[, errors]]) -> str\n\
+\n\
+Create a new string object from the given object. If encoding or\n\
+errors is specified, then the object must expose a data buffer\n\
+that will be decoded using the given encoding and error handler.\n\
+Otherwise, returns the result of object.__str__() (if defined)\n\
+or repr(object).\n\
+encoding defaults to sys.getdefaultencoding().\n\
+errors defaults to 'strict'.");
+
+static PyObject *unicode_iter(PyObject *seq);
+
+PyTypeObject PyUnicode_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "str", /* tp_name */
+ sizeof(PyUnicodeObject), /* tp_size */
+ 0, /* tp_itemsize */
+ /* Slots */
+ (destructor)unicode_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ unicode_repr, /* tp_repr */
+ &unicode_as_number, /* tp_as_number */
+ &unicode_as_sequence, /* tp_as_sequence */
+ &unicode_as_mapping, /* tp_as_mapping */
+ (hashfunc) unicode_hash, /* tp_hash*/
+ 0, /* tp_call*/
+ (reprfunc) unicode_str, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE |
+ Py_TPFLAGS_UNICODE_SUBCLASS, /* tp_flags */
+ unicode_doc, /* tp_doc */
+ 0, /* tp_traverse */
+ 0, /* tp_clear */
+ PyUnicode_RichCompare, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ unicode_iter, /* tp_iter */
+ 0, /* tp_iternext */
+ unicode_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ &PyBaseObject_Type, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ 0, /* tp_alloc */
+ unicode_new, /* tp_new */
+ PyObject_Del, /* tp_free */
+};
+
+/* Initialize the Unicode implementation */
+
+int _PyUnicode_Init(void)
+{
+ /* XXX - move this array to unicodectype.c ? */
+ Py_UCS2 linebreak[] = {
+ 0x000A, /* LINE FEED */
+ 0x000D, /* CARRIAGE RETURN */
+ 0x001C, /* FILE SEPARATOR */
+ 0x001D, /* GROUP SEPARATOR */
+ 0x001E, /* RECORD SEPARATOR */
+ 0x0085, /* NEXT LINE */
+ 0x2028, /* LINE SEPARATOR */
+ 0x2029, /* PARAGRAPH SEPARATOR */
+ };
+
+ /* Init the implementation */
+ _Py_INCREF_UNICODE_EMPTY();
+ if (!unicode_empty)
+ Py_FatalError("Can't create empty string");
+ Py_DECREF(unicode_empty);
+
+ if (PyType_Ready(&PyUnicode_Type) < 0)
+ Py_FatalError("Can't initialize 'unicode'");
+
+ /* initialize the linebreak bloom filter */
+ bloom_linebreak = make_bloom_mask(
+ PyUnicode_2BYTE_KIND, linebreak,
+ Py_ARRAY_LENGTH(linebreak));
+
+ if (PyType_Ready(&EncodingMapType) < 0)
+ Py_FatalError("Can't initialize encoding map type");
+
+ if (PyType_Ready(&PyFieldNameIter_Type) < 0)
+ Py_FatalError("Can't initialize field name iterator type");
+
+ if (PyType_Ready(&PyFormatterIter_Type) < 0)
+ Py_FatalError("Can't initialize formatter iter type");
+
+ return 0;
+}
+
+/* Finalize the Unicode implementation */
+
+int
+PyUnicode_ClearFreeList(void)
+{
+ return 0;
+}
+
+void
+_PyUnicode_Fini(void)
+{
+ int i;
+
+ Py_CLEAR(unicode_empty);
+
+ for (i = 0; i < 256; i++)
+ Py_CLEAR(unicode_latin1[i]);
+ _PyUnicode_ClearStaticStrings();
+ (void)PyUnicode_ClearFreeList();
+}
+
+void
+PyUnicode_InternInPlace(PyObject **p)
+{
+ PyObject *s = *p;
+ PyObject *t;
+#ifdef Py_DEBUG
+ assert(s != NULL);
+ assert(_PyUnicode_CHECK(s));
+#else
+ if (s == NULL || !PyUnicode_Check(s))
+ return;
+#endif
+ /* If it's a subclass, we don't really know what putting
+ it in the interned dict might do. */
+ if (!PyUnicode_CheckExact(s))
+ return;
+ if (PyUnicode_CHECK_INTERNED(s))
+ return;
+ if (interned == NULL) {
+ interned = PyDict_New();
+ if (interned == NULL) {
+ PyErr_Clear(); /* Don't leave an exception */
+ return;
+ }
+ }
+ Py_ALLOW_RECURSION
+ t = PyDict_SetDefault(interned, s, s);
+ Py_END_ALLOW_RECURSION
+ if (t == NULL) {
+ PyErr_Clear();
+ return;
+ }
+ if (t != s) {
+ Py_INCREF(t);
+ Py_SETREF(*p, t);
+ return;
+ }
+ /* The two references in interned are not counted by refcnt.
+ The deallocator will take care of this */
+ Py_REFCNT(s) -= 2;
+ _PyUnicode_STATE(s).interned = SSTATE_INTERNED_MORTAL;
+}
+
+void
+PyUnicode_InternImmortal(PyObject **p)
+{
+ PyUnicode_InternInPlace(p);
+ if (PyUnicode_CHECK_INTERNED(*p) != SSTATE_INTERNED_IMMORTAL) {
+ _PyUnicode_STATE(*p).interned = SSTATE_INTERNED_IMMORTAL;
+ Py_INCREF(*p);
+ }
+}
+
+PyObject *
+PyUnicode_InternFromString(const char *cp)
+{
+ PyObject *s = PyUnicode_FromString(cp);
+ if (s == NULL)
+ return NULL;
+ PyUnicode_InternInPlace(&s);
+ return s;
+}
+
+void
+_Py_ReleaseInternedUnicodeStrings(void)
+{
+ PyObject *keys;
+ PyObject *s;
+ Py_ssize_t i, n;
+ Py_ssize_t immortal_size = 0, mortal_size = 0;
+
+ if (interned == NULL || !PyDict_Check(interned))
+ return;
+ keys = PyDict_Keys(interned);
+ if (keys == NULL || !PyList_Check(keys)) {
+ PyErr_Clear();
+ return;
+ }
+
+ /* Since _Py_ReleaseInternedUnicodeStrings() is intended to help a leak
+ detector, interned unicode strings are not forcibly deallocated;
+ rather, we give them their stolen references back, and then clear
+ and DECREF the interned dict. */
+
+ n = PyList_GET_SIZE(keys);
+ fprintf(stderr, "releasing %" PY_FORMAT_SIZE_T "d interned strings\n",
+ n);
+ for (i = 0; i < n; i++) {
+ s = PyList_GET_ITEM(keys, i);
+ if (PyUnicode_READY(s) == -1) {
+ assert(0 && "could not ready string");
+ fprintf(stderr, "could not ready string\n");
+ }
+ switch (PyUnicode_CHECK_INTERNED(s)) {
+ case SSTATE_NOT_INTERNED:
+ /* XXX Shouldn't happen */
+ break;
+ case SSTATE_INTERNED_IMMORTAL:
+ Py_REFCNT(s) += 1;
+ immortal_size += PyUnicode_GET_LENGTH(s);
+ break;
+ case SSTATE_INTERNED_MORTAL:
+ Py_REFCNT(s) += 2;
+ mortal_size += PyUnicode_GET_LENGTH(s);
+ break;
+ default:
+ Py_FatalError("Inconsistent interned string state.");
+ }
+ _PyUnicode_STATE(s).interned = SSTATE_NOT_INTERNED;
+ }
+ fprintf(stderr, "total size of all interned strings: "
+ "%" PY_FORMAT_SIZE_T "d/%" PY_FORMAT_SIZE_T "d "
+ "mortal/immortal\n", mortal_size, immortal_size);
+ Py_DECREF(keys);
+ PyDict_Clear(interned);
+ Py_CLEAR(interned);
+}
+
+
+/********************* Unicode Iterator **************************/
+
+typedef struct {
+ PyObject_HEAD
+ Py_ssize_t it_index;
+ PyObject *it_seq; /* Set to NULL when iterator is exhausted */
+} unicodeiterobject;
+
+static void
+unicodeiter_dealloc(unicodeiterobject *it)
+{
+ _PyObject_GC_UNTRACK(it);
+ Py_XDECREF(it->it_seq);
+ PyObject_GC_Del(it);
+}
+
+static int
+unicodeiter_traverse(unicodeiterobject *it, visitproc visit, void *arg)
+{
+ Py_VISIT(it->it_seq);
+ return 0;
+}
+
+static PyObject *
+unicodeiter_next(unicodeiterobject *it)
+{
+ PyObject *seq, *item;
+
+ assert(it != NULL);
+ seq = it->it_seq;
+ if (seq == NULL)
+ return NULL;
+ assert(_PyUnicode_CHECK(seq));
+
+ if (it->it_index < PyUnicode_GET_LENGTH(seq)) {
+ int kind = PyUnicode_KIND(seq);
+ void *data = PyUnicode_DATA(seq);
+ Py_UCS4 chr = PyUnicode_READ(kind, data, it->it_index);
+ item = PyUnicode_FromOrdinal(chr);
+ if (item != NULL)
+ ++it->it_index;
+ return item;
+ }
+
+ it->it_seq = NULL;
+ Py_DECREF(seq);
+ return NULL;
+}
+
+static PyObject *
+unicodeiter_len(unicodeiterobject *it)
+{
+ Py_ssize_t len = 0;
+ if (it->it_seq)
+ len = PyUnicode_GET_LENGTH(it->it_seq) - it->it_index;
+ return PyLong_FromSsize_t(len);
+}
+
+PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list(it)).");
+
+static PyObject *
+unicodeiter_reduce(unicodeiterobject *it)
+{
+ if (it->it_seq != NULL) {
+ return Py_BuildValue("N(O)n", _PyObject_GetBuiltin("iter"),
+ it->it_seq, it->it_index);
+ } else {
+ PyObject *u = PyUnicode_FromUnicode(NULL, 0);
+ if (u == NULL)
+ return NULL;
+ return Py_BuildValue("N(N)", _PyObject_GetBuiltin("iter"), u);
+ }
+}
+
+PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
+
+static PyObject *
+unicodeiter_setstate(unicodeiterobject *it, PyObject *state)
+{
+ Py_ssize_t index = PyLong_AsSsize_t(state);
+ if (index == -1 && PyErr_Occurred())
+ return NULL;
+ if (it->it_seq != NULL) {
+ if (index < 0)
+ index = 0;
+ else if (index > PyUnicode_GET_LENGTH(it->it_seq))
+ index = PyUnicode_GET_LENGTH(it->it_seq); /* iterator truncated */
+ it->it_index = index;
+ }
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(setstate_doc, "Set state information for unpickling.");
+
+static PyMethodDef unicodeiter_methods[] = {
+ {"__length_hint__", (PyCFunction)unicodeiter_len, METH_NOARGS,
+ length_hint_doc},
+ {"__reduce__", (PyCFunction)unicodeiter_reduce, METH_NOARGS,
+ reduce_doc},
+ {"__setstate__", (PyCFunction)unicodeiter_setstate, METH_O,
+ setstate_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+PyTypeObject PyUnicodeIter_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "str_iterator", /* tp_name */
+ sizeof(unicodeiterobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)unicodeiter_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
+ 0, /* tp_doc */
+ (traverseproc)unicodeiter_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)unicodeiter_next, /* tp_iternext */
+ unicodeiter_methods, /* tp_methods */
+ 0,
+};
+
+static PyObject *
+unicode_iter(PyObject *seq)
+{
+ unicodeiterobject *it;
+
+ if (!PyUnicode_Check(seq)) {
+ PyErr_BadInternalCall();
+ return NULL;
+ }
+ if (PyUnicode_READY(seq) == -1)
+ return NULL;
+ it = PyObject_GC_New(unicodeiterobject, &PyUnicodeIter_Type);
+ if (it == NULL)
+ return NULL;
+ it->it_index = 0;
+ Py_INCREF(seq);
+ it->it_seq = seq;
+ _PyObject_GC_TRACK(it);
+ return (PyObject *)it;
+}
+
+
+size_t
+Py_UNICODE_strlen(const Py_UNICODE *u)
+{
+ int res = 0;
+ while(*u++)
+ res++;
+ return res;
+}
+
+Py_UNICODE*
+Py_UNICODE_strcpy(Py_UNICODE *s1, const Py_UNICODE *s2)
+{
+ Py_UNICODE *u = s1;
+ while ((*u++ = *s2++));
+ return s1;
+}
+
+Py_UNICODE*
+Py_UNICODE_strncpy(Py_UNICODE *s1, const Py_UNICODE *s2, size_t n)
+{
+ Py_UNICODE *u = s1;
+ while ((*u++ = *s2++))
+ if (n-- == 0)
+ break;
+ return s1;
+}
+
+Py_UNICODE*
+Py_UNICODE_strcat(Py_UNICODE *s1, const Py_UNICODE *s2)
+{
+ Py_UNICODE *u1 = s1;
+ u1 += Py_UNICODE_strlen(u1);
+ Py_UNICODE_strcpy(u1, s2);
+ return s1;
+}
+
+int
+Py_UNICODE_strcmp(const Py_UNICODE *s1, const Py_UNICODE *s2)
+{
+ while (*s1 && *s2 && *s1 == *s2)
+ s1++, s2++;
+ if (*s1 && *s2)
+ return (*s1 < *s2) ? -1 : +1;
+ if (*s1)
+ return 1;
+ if (*s2)
+ return -1;
+ return 0;
+}
+
+int
+Py_UNICODE_strncmp(const Py_UNICODE *s1, const Py_UNICODE *s2, size_t n)
+{
+ Py_UNICODE u1, u2;
+ for (; n != 0; n--) {
+ u1 = *s1;
+ u2 = *s2;
+ if (u1 != u2)
+ return (u1 < u2) ? -1 : +1;
+ if (u1 == '\0')
+ return 0;
+ s1++;
+ s2++;
+ }
+ return 0;
+}
+
+Py_UNICODE*
+Py_UNICODE_strchr(const Py_UNICODE *s, Py_UNICODE c)
+{
+ const Py_UNICODE *p;
+ for (p = s; *p; p++)
+ if (*p == c)
+ return (Py_UNICODE*)p;
+ return NULL;
+}
+
+Py_UNICODE*
+Py_UNICODE_strrchr(const Py_UNICODE *s, Py_UNICODE c)
+{
+ const Py_UNICODE *p;
+ p = s + Py_UNICODE_strlen(s);
+ while (p != s) {
+ p--;
+ if (*p == c)
+ return (Py_UNICODE*)p;
+ }
+ return NULL;
+}
+
+Py_UNICODE*
+PyUnicode_AsUnicodeCopy(PyObject *unicode)
+{
+ Py_UNICODE *u, *copy;
+ Py_ssize_t len, size;
+
+ if (!PyUnicode_Check(unicode)) {
+ PyErr_BadArgument();
+ return NULL;
+ }
+ u = PyUnicode_AsUnicodeAndSize(unicode, &len);
+ if (u == NULL)
+ return NULL;
+ /* Ensure we won't overflow the size. */
+ if (len > ((PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(Py_UNICODE)) - 1)) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ size = len + 1; /* copy the null character */
+ size *= sizeof(Py_UNICODE);
+ copy = PyMem_Malloc(size);
+ if (copy == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ memcpy(copy, u, size);
+ return copy;
+}
+
+/* A _string module, to export formatter_parser and formatter_field_name_split
+ to the string.Formatter class implemented in Python. */
+
+static PyMethodDef _string_methods[] = {
+ {"formatter_field_name_split", (PyCFunction) formatter_field_name_split,
+ METH_O, PyDoc_STR("split the argument as a field name")},
+ {"formatter_parser", (PyCFunction) formatter_parser,
+ METH_O, PyDoc_STR("parse the argument as a format string")},
+ {NULL, NULL}
+};
+
+static struct PyModuleDef _string_module = {
+ PyModuleDef_HEAD_INIT,
+ "_string",
+ PyDoc_STR("string helper module"),
+ 0,
+ _string_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyInit__string(void)
+{
+ return PyModule_Create(&_string_module);
+}
+
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
new file mode 100644
index 00000000..deb243ec
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
@@ -0,0 +1,2794 @@
+/** @file
+ builtin module.
+
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+/* Built-in functions */
+
+#include "Python.h"
+#include "Python-ast.h"
+
+#include "node.h"
+#include "code.h"
+
+#include "asdl.h"
+#include "ast.h"
+
+#include <ctype.h>
+
+#ifdef HAVE_LANGINFO_H
+#include <langinfo.h> /* CODESET */
+#endif
+
+/* The default encoding used by the platform file system APIs
+ Can remain NULL for all platforms that don't have such a concept
+
+ Don't forget to modify PyUnicode_DecodeFSDefault() if you touch any of the
+ values for Py_FileSystemDefaultEncoding!
+*/
+#if defined(__APPLE__)
+const char *Py_FileSystemDefaultEncoding = "utf-8";
+int Py_HasFileSystemDefaultEncoding = 1;
+#elif defined(MS_WINDOWS)
+/* may be changed by initfsencoding(), but should never be free()d */
+const char *Py_FileSystemDefaultEncoding = "utf-8";
+int Py_HasFileSystemDefaultEncoding = 1;
+#elif defined(UEFI_MSVC_64)
+const char *Py_FileSystemDefaultEncoding = "utf-8";
+int Py_HasFileSystemDefaultEncoding = 1;
+#elif defined(UEFI_MSVC_32)
+const char *Py_FileSystemDefaultEncoding = "utf-8";
+int Py_HasFileSystemDefaultEncoding = 1;
+#else
+const char *Py_FileSystemDefaultEncoding = NULL; /* set by initfsencoding() */
+int Py_HasFileSystemDefaultEncoding = 0;
+#endif
+const char *Py_FileSystemDefaultEncodeErrors = "surrogateescape";
+
+_Py_IDENTIFIER(__builtins__);
+_Py_IDENTIFIER(__dict__);
+_Py_IDENTIFIER(__prepare__);
+_Py_IDENTIFIER(__round__);
+_Py_IDENTIFIER(encoding);
+_Py_IDENTIFIER(errors);
+_Py_IDENTIFIER(fileno);
+_Py_IDENTIFIER(flush);
+_Py_IDENTIFIER(metaclass);
+_Py_IDENTIFIER(sort);
+_Py_IDENTIFIER(stdin);
+_Py_IDENTIFIER(stdout);
+_Py_IDENTIFIER(stderr);
+
+#include "clinic/bltinmodule.c.h"
+
+/* AC: cannot convert yet, waiting for *args support */
+static PyObject *
+builtin___build_class__(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *func, *name, *bases, *mkw, *meta, *winner, *prep, *ns;
+ PyObject *cls = NULL, *cell = NULL;
+ Py_ssize_t nargs;
+ int isclass = 0; /* initialize to prevent gcc warning */
+
+ assert(args != NULL);
+ if (!PyTuple_Check(args)) {
+ PyErr_SetString(PyExc_TypeError,
+ "__build_class__: args is not a tuple");
+ return NULL;
+ }
+ nargs = PyTuple_GET_SIZE(args);
+ if (nargs < 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "__build_class__: not enough arguments");
+ return NULL;
+ }
+ func = PyTuple_GET_ITEM(args, 0); /* Better be callable */
+ if (!PyFunction_Check(func)) {
+ PyErr_SetString(PyExc_TypeError,
+ "__build_class__: func must be a function");
+ return NULL;
+ }
+ name = PyTuple_GET_ITEM(args, 1);
+ if (!PyUnicode_Check(name)) {
+ PyErr_SetString(PyExc_TypeError,
+ "__build_class__: name is not a string");
+ return NULL;
+ }
+ bases = PyTuple_GetSlice(args, 2, nargs);
+ if (bases == NULL)
+ return NULL;
+
+ if (kwds == NULL) {
+ meta = NULL;
+ mkw = NULL;
+ }
+ else {
+ mkw = PyDict_Copy(kwds); /* Don't modify kwds passed in! */
+ if (mkw == NULL) {
+ Py_DECREF(bases);
+ return NULL;
+ }
+ meta = _PyDict_GetItemId(mkw, &PyId_metaclass);
+ if (meta != NULL) {
+ Py_INCREF(meta);
+ if (_PyDict_DelItemId(mkw, &PyId_metaclass) < 0) {
+ Py_DECREF(meta);
+ Py_DECREF(mkw);
+ Py_DECREF(bases);
+ return NULL;
+ }
+ /* metaclass is explicitly given, check if it's indeed a class */
+ isclass = PyType_Check(meta);
+ }
+ }
+ if (meta == NULL) {
+ /* if there are no bases, use type: */
+ if (PyTuple_GET_SIZE(bases) == 0) {
+ meta = (PyObject *) (&PyType_Type);
+ }
+ /* else get the type of the first base */
+ else {
+ PyObject *base0 = PyTuple_GET_ITEM(bases, 0);
+ meta = (PyObject *) (base0->ob_type);
+ }
+ Py_INCREF(meta);
+ isclass = 1; /* meta is really a class */
+ }
+
+ if (isclass) {
+ /* meta is really a class, so check for a more derived
+ metaclass, or possible metaclass conflicts: */
+ winner = (PyObject *)_PyType_CalculateMetaclass((PyTypeObject *)meta,
+ bases);
+ if (winner == NULL) {
+ Py_DECREF(meta);
+ Py_XDECREF(mkw);
+ Py_DECREF(bases);
+ return NULL;
+ }
+ if (winner != meta) {
+ Py_DECREF(meta);
+ meta = winner;
+ Py_INCREF(meta);
+ }
+ }
+ /* else: meta is not a class, so we cannot do the metaclass
+ calculation, so we will use the explicitly given object as it is */
+ prep = _PyObject_GetAttrId(meta, &PyId___prepare__);
+ if (prep == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+ PyErr_Clear();
+ ns = PyDict_New();
+ }
+ else {
+ Py_DECREF(meta);
+ Py_XDECREF(mkw);
+ Py_DECREF(bases);
+ return NULL;
+ }
+ }
+ else {
+ PyObject *pargs[2] = {name, bases};
+ ns = _PyObject_FastCallDict(prep, pargs, 2, mkw);
+ Py_DECREF(prep);
+ }
+ if (ns == NULL) {
+ Py_DECREF(meta);
+ Py_XDECREF(mkw);
+ Py_DECREF(bases);
+ return NULL;
+ }
+ if (!PyMapping_Check(ns)) {
+ PyErr_Format(PyExc_TypeError,
+ "%.200s.__prepare__() must return a mapping, not %.200s",
+ isclass ? ((PyTypeObject *)meta)->tp_name : "<metaclass>",
+ Py_TYPE(ns)->tp_name);
+ goto error;
+ }
+ cell = PyEval_EvalCodeEx(PyFunction_GET_CODE(func), PyFunction_GET_GLOBALS(func), ns,
+ NULL, 0, NULL, 0, NULL, 0, NULL,
+ PyFunction_GET_CLOSURE(func));
+ if (cell != NULL) {
+ PyObject *margs[3] = {name, bases, ns};
+ cls = _PyObject_FastCallDict(meta, margs, 3, mkw);
+ if (cls != NULL && PyType_Check(cls) && PyCell_Check(cell)) {
+ PyObject *cell_cls = PyCell_GET(cell);
+ if (cell_cls != cls) {
+ /* TODO: In 3.7, DeprecationWarning will become RuntimeError.
+ * At that point, cell_error won't be needed.
+ */
+ int cell_error;
+ if (cell_cls == NULL) {
+ const char *msg =
+ "__class__ not set defining %.200R as %.200R. "
+ "Was __classcell__ propagated to type.__new__?";
+ cell_error = PyErr_WarnFormat(
+ PyExc_DeprecationWarning, 1, msg, name, cls);
+ } else {
+ const char *msg =
+ "__class__ set to %.200R defining %.200R as %.200R";
+ PyErr_Format(PyExc_TypeError, msg, cell_cls, name, cls);
+ cell_error = 1;
+ }
+ if (cell_error) {
+ Py_DECREF(cls);
+ cls = NULL;
+ goto error;
+ } else {
+ /* Fill in the cell, since type.__new__ didn't do it */
+ PyCell_Set(cell, cls);
+ }
+ }
+ }
+ }
+error:
+ Py_XDECREF(cell);
+ Py_DECREF(ns);
+ Py_DECREF(meta);
+ Py_XDECREF(mkw);
+ Py_DECREF(bases);
+ return cls;
+}
+
+PyDoc_STRVAR(build_class_doc,
+"__build_class__(func, name, *bases, metaclass=None, **kwds) -> class\n\
+\n\
+Internal helper function used by the class statement.");
+
+static PyObject *
+builtin___import__(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"name", "globals", "locals", "fromlist",
+ "level", 0};
+ PyObject *name, *globals = NULL, *locals = NULL, *fromlist = NULL;
+ int level = 0;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "U|OOOi:__import__",
+ kwlist, &name, &globals, &locals, &fromlist, &level))
+ return NULL;
+ return PyImport_ImportModuleLevelObject(name, globals, locals,
+ fromlist, level);
+}
+
+PyDoc_STRVAR(import_doc,
+"__import__(name, globals=None, locals=None, fromlist=(), level=0) -> module\n\
+\n\
+Import a module. Because this function is meant for use by the Python\n\
+interpreter and not for general use, it is better to use\n\
+importlib.import_module() to programmatically import a module.\n\
+\n\
+The globals argument is only used to determine the context;\n\
+they are not modified. The locals argument is unused. The fromlist\n\
+should be a list of names to emulate ``from name import ...'', or an\n\
+empty list to emulate ``import name''.\n\
+When importing a module from a package, note that __import__('A.B', ...)\n\
+returns package A when fromlist is empty, but its submodule B when\n\
+fromlist is not empty. The level argument is used to determine whether to\n\
+perform absolute or relative imports: 0 is absolute, while a positive number\n\
+is the number of parent directories to search relative to the current module.");
+
+
+/*[clinic input]
+abs as builtin_abs
+
+ x: object
+ /
+
+Return the absolute value of the argument.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_abs(PyObject *module, PyObject *x)
+/*[clinic end generated code: output=b1b433b9e51356f5 input=bed4ca14e29c20d1]*/
+{
+ return PyNumber_Absolute(x);
+}
+
+/*[clinic input]
+all as builtin_all
+
+ iterable: object
+ /
+
+Return True if bool(x) is True for all values x in the iterable.
+
+If the iterable is empty, return True.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_all(PyObject *module, PyObject *iterable)
+/*[clinic end generated code: output=ca2a7127276f79b3 input=1a7c5d1bc3438a21]*/
+{
+ PyObject *it, *item;
+ PyObject *(*iternext)(PyObject *);
+ int cmp;
+
+ it = PyObject_GetIter(iterable);
+ if (it == NULL)
+ return NULL;
+ iternext = *Py_TYPE(it)->tp_iternext;
+
+ for (;;) {
+ item = iternext(it);
+ if (item == NULL)
+ break;
+ cmp = PyObject_IsTrue(item);
+ Py_DECREF(item);
+ if (cmp < 0) {
+ Py_DECREF(it);
+ return NULL;
+ }
+ if (cmp == 0) {
+ Py_DECREF(it);
+ Py_RETURN_FALSE;
+ }
+ }
+ Py_DECREF(it);
+ if (PyErr_Occurred()) {
+ if (PyErr_ExceptionMatches(PyExc_StopIteration))
+ PyErr_Clear();
+ else
+ return NULL;
+ }
+ Py_RETURN_TRUE;
+}
+
+/*[clinic input]
+any as builtin_any
+
+ iterable: object
+ /
+
+Return True if bool(x) is True for any x in the iterable.
+
+If the iterable is empty, return False.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_any(PyObject *module, PyObject *iterable)
+/*[clinic end generated code: output=fa65684748caa60e input=41d7451c23384f24]*/
+{
+ PyObject *it, *item;
+ PyObject *(*iternext)(PyObject *);
+ int cmp;
+
+ it = PyObject_GetIter(iterable);
+ if (it == NULL)
+ return NULL;
+ iternext = *Py_TYPE(it)->tp_iternext;
+
+ for (;;) {
+ item = iternext(it);
+ if (item == NULL)
+ break;
+ cmp = PyObject_IsTrue(item);
+ Py_DECREF(item);
+ if (cmp < 0) {
+ Py_DECREF(it);
+ return NULL;
+ }
+ if (cmp > 0) {
+ Py_DECREF(it);
+ Py_RETURN_TRUE;
+ }
+ }
+ Py_DECREF(it);
+ if (PyErr_Occurred()) {
+ if (PyErr_ExceptionMatches(PyExc_StopIteration))
+ PyErr_Clear();
+ else
+ return NULL;
+ }
+ Py_RETURN_FALSE;
+}
+
+/*[clinic input]
+ascii as builtin_ascii
+
+ obj: object
+ /
+
+Return an ASCII-only representation of an object.
+
+As repr(), return a string containing a printable representation of an
+object, but escape the non-ASCII characters in the string returned by
+repr() using \\x, \\u or \\U escapes. This generates a string similar
+to that returned by repr() in Python 2.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_ascii(PyObject *module, PyObject *obj)
+/*[clinic end generated code: output=6d37b3f0984c7eb9 input=4c62732e1b3a3cc9]*/
+{
+ return PyObject_ASCII(obj);
+}
+
+
+/*[clinic input]
+bin as builtin_bin
+
+ number: object
+ /
+
+Return the binary representation of an integer.
+
+ >>> bin(2796202)
+ '0b1010101010101010101010'
+[clinic start generated code]*/
+
+static PyObject *
+builtin_bin(PyObject *module, PyObject *number)
+/*[clinic end generated code: output=b6fc4ad5e649f4f7 input=53f8a0264bacaf90]*/
+{
+ return PyNumber_ToBase(number, 2);
+}
+
+
+/*[clinic input]
+callable as builtin_callable
+
+ obj: object
+ /
+
+Return whether the object is callable (i.e., some kind of function).
+
+Note that classes are callable, as are instances of classes with a
+__call__() method.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_callable(PyObject *module, PyObject *obj)
+/*[clinic end generated code: output=2b095d59d934cb7e input=1423bab99cc41f58]*/
+{
+ return PyBool_FromLong((long)PyCallable_Check(obj));
+}
+
+
+typedef struct {
+ PyObject_HEAD
+ PyObject *func;
+ PyObject *it;
+} filterobject;
+
+static PyObject *
+filter_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *func, *seq;
+ PyObject *it;
+ filterobject *lz;
+
+ if (type == &PyFilter_Type && !_PyArg_NoKeywords("filter()", kwds))
+ return NULL;
+
+ if (!PyArg_UnpackTuple(args, "filter", 2, 2, &func, &seq))
+ return NULL;
+
+ /* Get iterator. */
+ it = PyObject_GetIter(seq);
+ if (it == NULL)
+ return NULL;
+
+ /* create filterobject structure */
+ lz = (filterobject *)type->tp_alloc(type, 0);
+ if (lz == NULL) {
+ Py_DECREF(it);
+ return NULL;
+ }
+ Py_INCREF(func);
+ lz->func = func;
+ lz->it = it;
+
+ return (PyObject *)lz;
+}
+
+static void
+filter_dealloc(filterobject *lz)
+{
+ PyObject_GC_UnTrack(lz);
+ Py_XDECREF(lz->func);
+ Py_XDECREF(lz->it);
+ Py_TYPE(lz)->tp_free(lz);
+}
+
+static int
+filter_traverse(filterobject *lz, visitproc visit, void *arg)
+{
+ Py_VISIT(lz->it);
+ Py_VISIT(lz->func);
+ return 0;
+}
+
+static PyObject *
+filter_next(filterobject *lz)
+{
+ PyObject *item;
+ PyObject *it = lz->it;
+ long ok;
+ PyObject *(*iternext)(PyObject *);
+ int checktrue = lz->func == Py_None || lz->func == (PyObject *)&PyBool_Type;
+
+ iternext = *Py_TYPE(it)->tp_iternext;
+ for (;;) {
+ item = iternext(it);
+ if (item == NULL)
+ return NULL;
+
+ if (checktrue) {
+ ok = PyObject_IsTrue(item);
+ } else {
+ PyObject *good;
+ good = PyObject_CallFunctionObjArgs(lz->func, item, NULL);
+ if (good == NULL) {
+ Py_DECREF(item);
+ return NULL;
+ }
+ ok = PyObject_IsTrue(good);
+ Py_DECREF(good);
+ }
+ if (ok > 0)
+ return item;
+ Py_DECREF(item);
+ if (ok < 0)
+ return NULL;
+ }
+}
+
+static PyObject *
+filter_reduce(filterobject *lz)
+{
+ return Py_BuildValue("O(OO)", Py_TYPE(lz), lz->func, lz->it);
+}
+
+PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
+
+static PyMethodDef filter_methods[] = {
+ {"__reduce__", (PyCFunction)filter_reduce, METH_NOARGS, reduce_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+PyDoc_STRVAR(filter_doc,
+"filter(function or None, iterable) --> filter object\n\
+\n\
+Return an iterator yielding those items of iterable for which function(item)\n\
+is true. If function is None, return the items that are true.");
+
+PyTypeObject PyFilter_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "filter", /* tp_name */
+ sizeof(filterobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)filter_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_BASETYPE, /* tp_flags */
+ filter_doc, /* tp_doc */
+ (traverseproc)filter_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)filter_next, /* tp_iternext */
+ filter_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ PyType_GenericAlloc, /* tp_alloc */
+ filter_new, /* tp_new */
+ PyObject_GC_Del, /* tp_free */
+};
+
+
+/*[clinic input]
+format as builtin_format
+
+ value: object
+ format_spec: unicode(c_default="NULL") = ''
+ /
+
+Return value.__format__(format_spec)
+
+format_spec defaults to the empty string.
+See the Format Specification Mini-Language section of help('FORMATTING') for
+details.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_format_impl(PyObject *module, PyObject *value, PyObject *format_spec)
+/*[clinic end generated code: output=2f40bdfa4954b077 input=88339c93ea522b33]*/
+{
+ return PyObject_Format(value, format_spec);
+}
+
+/*[clinic input]
+chr as builtin_chr
+
+ i: int
+ /
+
+Return a Unicode string of one character with ordinal i; 0 <= i <= 0x10ffff.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_chr_impl(PyObject *module, int i)
+/*[clinic end generated code: output=c733afcd200afcb7 input=3f604ef45a70750d]*/
+{
+ return PyUnicode_FromOrdinal(i);
+}
+
+
+static const char *
+source_as_string(PyObject *cmd, const char *funcname, const char *what, PyCompilerFlags *cf, PyObject **cmd_copy)
+{
+ const char *str;
+ Py_ssize_t size;
+ Py_buffer view;
+
+ *cmd_copy = NULL;
+ if (PyUnicode_Check(cmd)) {
+ cf->cf_flags |= PyCF_IGNORE_COOKIE;
+ str = PyUnicode_AsUTF8AndSize(cmd, &size);
+ if (str == NULL)
+ return NULL;
+ }
+ else if (PyBytes_Check(cmd)) {
+ str = PyBytes_AS_STRING(cmd);
+ size = PyBytes_GET_SIZE(cmd);
+ }
+ else if (PyByteArray_Check(cmd)) {
+ str = PyByteArray_AS_STRING(cmd);
+ size = PyByteArray_GET_SIZE(cmd);
+ }
+ else if (PyObject_GetBuffer(cmd, &view, PyBUF_SIMPLE) == 0) {
+ /* Copy to NUL-terminated buffer. */
+ *cmd_copy = PyBytes_FromStringAndSize(
+ (const char *)view.buf, view.len);
+ PyBuffer_Release(&view);
+ if (*cmd_copy == NULL) {
+ return NULL;
+ }
+ str = PyBytes_AS_STRING(*cmd_copy);
+ size = PyBytes_GET_SIZE(*cmd_copy);
+ }
+ else {
+ PyErr_Format(PyExc_TypeError,
+ "%s() arg 1 must be a %s object",
+ funcname, what);
+ return NULL;
+ }
+
+ if (strlen(str) != (size_t)size) {
+ PyErr_SetString(PyExc_ValueError,
+ "source code string cannot contain null bytes");
+ Py_CLEAR(*cmd_copy);
+ return NULL;
+ }
+ return str;
+}
+
+/*[clinic input]
+compile as builtin_compile
+
+ source: object
+ filename: object(converter="PyUnicode_FSDecoder")
+ mode: str
+ flags: int = 0
+ dont_inherit: int(c_default="0") = False
+ optimize: int = -1
+
+Compile source into a code object that can be executed by exec() or eval().
+
+The source code may represent a Python module, statement or expression.
+The filename will be used for run-time error messages.
+The mode must be 'exec' to compile a module, 'single' to compile a
+single (interactive) statement, or 'eval' to compile an expression.
+The flags argument, if present, controls which future statements influence
+the compilation of the code.
+The dont_inherit argument, if true, stops the compilation inheriting
+the effects of any future statements in effect in the code calling
+compile; if absent or false these statements do influence the compilation,
+in addition to any features explicitly specified.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_compile_impl(PyObject *module, PyObject *source, PyObject *filename,
+ const char *mode, int flags, int dont_inherit,
+ int optimize)
+/*[clinic end generated code: output=1fa176e33452bb63 input=9d53e8cfb3c86414]*/
+{
+ PyObject *source_copy;
+ const char *str;
+ int compile_mode = -1;
+ int is_ast;
+ PyCompilerFlags cf;
+ int start[] = {Py_file_input, Py_eval_input, Py_single_input};
+ PyObject *result;
+
+ cf.cf_flags = flags | PyCF_SOURCE_IS_UTF8;
+
+ if (flags &
+ ~(PyCF_MASK | PyCF_MASK_OBSOLETE | PyCF_DONT_IMPLY_DEDENT | PyCF_ONLY_AST))
+ {
+ PyErr_SetString(PyExc_ValueError,
+ "compile(): unrecognised flags");
+ goto error;
+ }
+ /* XXX Warn if (supplied_flags & PyCF_MASK_OBSOLETE) != 0? */
+
+ if (optimize < -1 || optimize > 2) {
+ PyErr_SetString(PyExc_ValueError,
+ "compile(): invalid optimize value");
+ goto error;
+ }
+
+ if (!dont_inherit) {
+ PyEval_MergeCompilerFlags(&cf);
+ }
+
+ if (strcmp(mode, "exec") == 0)
+ compile_mode = 0;
+ else if (strcmp(mode, "eval") == 0)
+ compile_mode = 1;
+ else if (strcmp(mode, "single") == 0)
+ compile_mode = 2;
+ else {
+ PyErr_SetString(PyExc_ValueError,
+ "compile() mode must be 'exec', 'eval' or 'single'");
+ goto error;
+ }
+
+ is_ast = PyAST_Check(source);
+ if (is_ast == -1)
+ goto error;
+ if (is_ast) {
+ if (flags & PyCF_ONLY_AST) {
+ Py_INCREF(source);
+ result = source;
+ }
+ else {
+ PyArena *arena;
+ mod_ty mod;
+
+ arena = PyArena_New();
+ if (arena == NULL)
+ goto error;
+ mod = PyAST_obj2mod(source, arena, compile_mode);
+ if (mod == NULL) {
+ PyArena_Free(arena);
+ goto error;
+ }
+ if (!PyAST_Validate(mod)) {
+ PyArena_Free(arena);
+ goto error;
+ }
+ result = (PyObject*)PyAST_CompileObject(mod, filename,
+ &cf, optimize, arena);
+ PyArena_Free(arena);
+ }
+ goto finally;
+ }
+
+ str = source_as_string(source, "compile", "string, bytes or AST", &cf, &source_copy);
+ if (str == NULL)
+ goto error;
+
+ result = Py_CompileStringObject(str, filename, start[compile_mode], &cf, optimize);
+ Py_XDECREF(source_copy);
+ goto finally;
+
+error:
+ result = NULL;
+finally:
+ Py_DECREF(filename);
+ return result;
+}
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
+static PyObject *
+builtin_dir(PyObject *self, PyObject *args)
+{
+ PyObject *arg = NULL;
+
+ if (!PyArg_UnpackTuple(args, "dir", 0, 1, &arg))
+ return NULL;
+ return PyObject_Dir(arg);
+}
+
+PyDoc_STRVAR(dir_doc,
+"dir([object]) -> list of strings\n"
+"\n"
+"If called without an argument, return the names in the current scope.\n"
+"Else, return an alphabetized list of names comprising (some of) the attributes\n"
+"of the given object, and of attributes reachable from it.\n"
+"If the object supplies a method named __dir__, it will be used; otherwise\n"
+"the default dir() logic is used and returns:\n"
+" for a module object: the module's attributes.\n"
+" for a class object: its attributes, and recursively the attributes\n"
+" of its bases.\n"
+" for any other object: its attributes, its class's attributes, and\n"
+" recursively the attributes of its class's base classes.");
+
+/*[clinic input]
+divmod as builtin_divmod
+
+ x: object
+ y: object
+ /
+
+Return the tuple (x//y, x%y). Invariant: div*y + mod == x.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_divmod_impl(PyObject *module, PyObject *x, PyObject *y)
+/*[clinic end generated code: output=b06d8a5f6e0c745e input=175ad9c84ff41a85]*/
+{
+ return PyNumber_Divmod(x, y);
+}
+
+
+/*[clinic input]
+eval as builtin_eval
+
+ source: object
+ globals: object = None
+ locals: object = None
+ /
+
+Evaluate the given source in the context of globals and locals.
+
+The source may be a string representing a Python expression
+or a code object as returned by compile().
+The globals must be a dictionary and locals can be any mapping,
+defaulting to the current globals and locals.
+If only globals is given, locals defaults to it.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_eval_impl(PyObject *module, PyObject *source, PyObject *globals,
+ PyObject *locals)
+/*[clinic end generated code: output=0a0824aa70093116 input=11ee718a8640e527]*/
+{
+ PyObject *result, *source_copy;
+ const char *str;
+ PyCompilerFlags cf;
+
+ if (locals != Py_None && !PyMapping_Check(locals)) {
+ PyErr_SetString(PyExc_TypeError, "locals must be a mapping");
+ return NULL;
+ }
+ if (globals != Py_None && !PyDict_Check(globals)) {
+ PyErr_SetString(PyExc_TypeError, PyMapping_Check(globals) ?
+ "globals must be a real dict; try eval(expr, {}, mapping)"
+ : "globals must be a dict");
+ return NULL;
+ }
+ if (globals == Py_None) {
+ globals = PyEval_GetGlobals();
+ if (locals == Py_None) {
+ locals = PyEval_GetLocals();
+ if (locals == NULL)
+ return NULL;
+ }
+ }
+ else if (locals == Py_None)
+ locals = globals;
+
+ if (globals == NULL || locals == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "eval must be given globals and locals "
+ "when called without a frame");
+ return NULL;
+ }
+
+ if (_PyDict_GetItemId(globals, &PyId___builtins__) == NULL) {
+ if (_PyDict_SetItemId(globals, &PyId___builtins__,
+ PyEval_GetBuiltins()) != 0)
+ return NULL;
+ }
+
+ if (PyCode_Check(source)) {
+ if (PyCode_GetNumFree((PyCodeObject *)source) > 0) {
+ PyErr_SetString(PyExc_TypeError,
+ "code object passed to eval() may not contain free variables");
+ return NULL;
+ }
+ return PyEval_EvalCode(source, globals, locals);
+ }
+
+ cf.cf_flags = PyCF_SOURCE_IS_UTF8;
+ str = source_as_string(source, "eval", "string, bytes or code", &cf, &source_copy);
+ if (str == NULL)
+ return NULL;
+
+ while (*str == ' ' || *str == '\t')
+ str++;
+
+ (void)PyEval_MergeCompilerFlags(&cf);
+ result = PyRun_StringFlags(str, Py_eval_input, globals, locals, &cf);
+ Py_XDECREF(source_copy);
+ return result;
+}
+
+/*[clinic input]
+exec as builtin_exec
+
+ source: object
+ globals: object = None
+ locals: object = None
+ /
+
+Execute the given source in the context of globals and locals.
+
+The source may be a string representing one or more Python statements
+or a code object as returned by compile().
+The globals must be a dictionary and locals can be any mapping,
+defaulting to the current globals and locals.
+If only globals is given, locals defaults to it.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_exec_impl(PyObject *module, PyObject *source, PyObject *globals,
+ PyObject *locals)
+/*[clinic end generated code: output=3c90efc6ab68ef5d input=01ca3e1c01692829]*/
+{
+ PyObject *v;
+
+ if (globals == Py_None) {
+ globals = PyEval_GetGlobals();
+ if (locals == Py_None) {
+ locals = PyEval_GetLocals();
+ if (locals == NULL)
+ return NULL;
+ }
+ if (!globals || !locals) {
+ PyErr_SetString(PyExc_SystemError,
+ "globals and locals cannot be NULL");
+ return NULL;
+ }
+ }
+ else if (locals == Py_None)
+ locals = globals;
+
+ if (!PyDict_Check(globals)) {
+ PyErr_Format(PyExc_TypeError, "exec() globals must be a dict, not %.100s",
+ globals->ob_type->tp_name);
+ return NULL;
+ }
+ if (!PyMapping_Check(locals)) {
+ PyErr_Format(PyExc_TypeError,
+ "locals must be a mapping or None, not %.100s",
+ locals->ob_type->tp_name);
+ return NULL;
+ }
+ if (_PyDict_GetItemId(globals, &PyId___builtins__) == NULL) {
+ if (_PyDict_SetItemId(globals, &PyId___builtins__,
+ PyEval_GetBuiltins()) != 0)
+ return NULL;
+ }
+
+ if (PyCode_Check(source)) {
+ if (PyCode_GetNumFree((PyCodeObject *)source) > 0) {
+ PyErr_SetString(PyExc_TypeError,
+ "code object passed to exec() may not "
+ "contain free variables");
+ return NULL;
+ }
+ v = PyEval_EvalCode(source, globals, locals);
+ }
+ else {
+ PyObject *source_copy;
+ const char *str;
+ PyCompilerFlags cf;
+ cf.cf_flags = PyCF_SOURCE_IS_UTF8;
+ str = source_as_string(source, "exec",
+ "string, bytes or code", &cf,
+ &source_copy);
+ if (str == NULL)
+ return NULL;
+ if (PyEval_MergeCompilerFlags(&cf))
+ v = PyRun_StringFlags(str, Py_file_input, globals,
+ locals, &cf);
+ else
+ v = PyRun_String(str, Py_file_input, globals, locals);
+ Py_XDECREF(source_copy);
+ }
+ if (v == NULL)
+ return NULL;
+ Py_DECREF(v);
+ Py_RETURN_NONE;
+}
+
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
+static PyObject *
+builtin_getattr(PyObject *self, PyObject *args)
+{
+ PyObject *v, *result, *dflt = NULL;
+ PyObject *name;
+
+ if (!PyArg_UnpackTuple(args, "getattr", 2, 3, &v, &name, &dflt))
+ return NULL;
+
+ if (!PyUnicode_Check(name)) {
+ PyErr_SetString(PyExc_TypeError,
+ "getattr(): attribute name must be string");
+ return NULL;
+ }
+ result = PyObject_GetAttr(v, name);
+ if (result == NULL && dflt != NULL &&
+ PyErr_ExceptionMatches(PyExc_AttributeError))
+ {
+ PyErr_Clear();
+ Py_INCREF(dflt);
+ result = dflt;
+ }
+ return result;
+}
+
+PyDoc_STRVAR(getattr_doc,
+"getattr(object, name[, default]) -> value\n\
+\n\
+Get a named attribute from an object; getattr(x, 'y') is equivalent to x.y.\n\
+When a default argument is given, it is returned when the attribute doesn't\n\
+exist; without it, an exception is raised in that case.");
+
+
+/*[clinic input]
+globals as builtin_globals
+
+Return the dictionary containing the current scope's global variables.
+
+NOTE: Updates to this dictionary *will* affect name lookups in the current
+global scope and vice-versa.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_globals_impl(PyObject *module)
+/*[clinic end generated code: output=e5dd1527067b94d2 input=9327576f92bb48ba]*/
+{
+ PyObject *d;
+
+ d = PyEval_GetGlobals();
+ Py_XINCREF(d);
+ return d;
+}
+
+
+/*[clinic input]
+hasattr as builtin_hasattr
+
+ obj: object
+ name: object
+ /
+
+Return whether the object has an attribute with the given name.
+
+This is done by calling getattr(obj, name) and catching AttributeError.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_hasattr_impl(PyObject *module, PyObject *obj, PyObject *name)
+/*[clinic end generated code: output=a7aff2090a4151e5 input=0faec9787d979542]*/
+{
+ PyObject *v;
+
+ if (!PyUnicode_Check(name)) {
+ PyErr_SetString(PyExc_TypeError,
+ "hasattr(): attribute name must be string");
+ return NULL;
+ }
+ v = PyObject_GetAttr(obj, name);
+ if (v == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
+ PyErr_Clear();
+ Py_RETURN_FALSE;
+ }
+ return NULL;
+ }
+ Py_DECREF(v);
+ Py_RETURN_TRUE;
+}
+
+
+/* AC: gdb's integration with CPython relies on builtin_id having
+ * the *exact* parameter names of "self" and "v", so we ensure we
+ * preserve those names rather than using the AC defaults.
+ */
+/*[clinic input]
+id as builtin_id
+
+ self: self(type="PyModuleDef *")
+ obj as v: object
+ /
+
+Return the identity of an object.
+
+This is guaranteed to be unique among simultaneously existing objects.
+(CPython uses the object's memory address.)
+[clinic start generated code]*/
+
+static PyObject *
+builtin_id(PyModuleDef *self, PyObject *v)
+/*[clinic end generated code: output=0aa640785f697f65 input=5a534136419631f4]*/
+{
+ return PyLong_FromVoidPtr(v);
+}
+
+
+/* map object ************************************************************/
+
+typedef struct {
+ PyObject_HEAD
+ PyObject *iters;
+ PyObject *func;
+} mapobject;
+
+static PyObject *
+map_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ PyObject *it, *iters, *func;
+ mapobject *lz;
+ Py_ssize_t numargs, i;
+
+ if (type == &PyMap_Type && !_PyArg_NoKeywords("map()", kwds))
+ return NULL;
+
+ numargs = PyTuple_Size(args);
+ if (numargs < 2) {
+ PyErr_SetString(PyExc_TypeError,
+ "map() must have at least two arguments.");
+ return NULL;
+ }
+
+ iters = PyTuple_New(numargs-1);
+ if (iters == NULL)
+ return NULL;
+
+ for (i=1 ; i<numargs ; i++) {
+ /* Get iterator. */
+ it = PyObject_GetIter(PyTuple_GET_ITEM(args, i));
+ if (it == NULL) {
+ Py_DECREF(iters);
+ return NULL;
+ }
+ PyTuple_SET_ITEM(iters, i-1, it);
+ }
+
+ /* create mapobject structure */
+ lz = (mapobject *)type->tp_alloc(type, 0);
+ if (lz == NULL) {
+ Py_DECREF(iters);
+ return NULL;
+ }
+ lz->iters = iters;
+ func = PyTuple_GET_ITEM(args, 0);
+ Py_INCREF(func);
+ lz->func = func;
+
+ return (PyObject *)lz;
+}
+
+static void
+map_dealloc(mapobject *lz)
+{
+ PyObject_GC_UnTrack(lz);
+ Py_XDECREF(lz->iters);
+ Py_XDECREF(lz->func);
+ Py_TYPE(lz)->tp_free(lz);
+}
+
+static int
+map_traverse(mapobject *lz, visitproc visit, void *arg)
+{
+ Py_VISIT(lz->iters);
+ Py_VISIT(lz->func);
+ return 0;
+}
+
+static PyObject *
+map_next(mapobject *lz)
+{
+ PyObject *small_stack[5];
+ PyObject **stack;
+ Py_ssize_t niters, nargs, i;
+ PyObject *result = NULL;
+
+ niters = PyTuple_GET_SIZE(lz->iters);
+ if (niters <= (Py_ssize_t)Py_ARRAY_LENGTH(small_stack)) {
+ stack = small_stack;
+ }
+ else {
+ stack = PyMem_Malloc(niters * sizeof(stack[0]));
+ if (stack == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ }
+
+ nargs = 0;
+ for (i=0; i < niters; i++) {
+ PyObject *it = PyTuple_GET_ITEM(lz->iters, i);
+ PyObject *val = Py_TYPE(it)->tp_iternext(it);
+ if (val == NULL) {
+ goto exit;
+ }
+ stack[i] = val;
+ nargs++;
+ }
+
+ result = _PyObject_FastCall(lz->func, stack, nargs);
+
+exit:
+ for (i=0; i < nargs; i++) {
+ Py_DECREF(stack[i]);
+ }
+ if (stack != small_stack) {
+ PyMem_Free(stack);
+ }
+ return result;
+}
+
+static PyObject *
+map_reduce(mapobject *lz)
+{
+ Py_ssize_t numargs = PyTuple_GET_SIZE(lz->iters);
+ PyObject *args = PyTuple_New(numargs+1);
+ Py_ssize_t i;
+ if (args == NULL)
+ return NULL;
+ Py_INCREF(lz->func);
+ PyTuple_SET_ITEM(args, 0, lz->func);
+ for (i = 0; i<numargs; i++){
+ PyObject *it = PyTuple_GET_ITEM(lz->iters, i);
+ Py_INCREF(it);
+ PyTuple_SET_ITEM(args, i+1, it);
+ }
+
+ return Py_BuildValue("ON", Py_TYPE(lz), args);
+}
+
+static PyMethodDef map_methods[] = {
+ {"__reduce__", (PyCFunction)map_reduce, METH_NOARGS, reduce_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+
+PyDoc_STRVAR(map_doc,
+"map(func, *iterables) --> map object\n\
+\n\
+Make an iterator that computes the function using arguments from\n\
+each of the iterables. Stops when the shortest iterable is exhausted.");
+
+PyTypeObject PyMap_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "map", /* tp_name */
+ sizeof(mapobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)map_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_BASETYPE, /* tp_flags */
+ map_doc, /* tp_doc */
+ (traverseproc)map_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)map_next, /* tp_iternext */
+ map_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ PyType_GenericAlloc, /* tp_alloc */
+ map_new, /* tp_new */
+ PyObject_GC_Del, /* tp_free */
+};
+
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
+static PyObject *
+builtin_next(PyObject *self, PyObject *args)
+{
+ PyObject *it, *res;
+ PyObject *def = NULL;
+
+ if (!PyArg_UnpackTuple(args, "next", 1, 2, &it, &def))
+ return NULL;
+ if (!PyIter_Check(it)) {
+ PyErr_Format(PyExc_TypeError,
+ "'%.200s' object is not an iterator",
+ it->ob_type->tp_name);
+ return NULL;
+ }
+
+ res = (*it->ob_type->tp_iternext)(it);
+ if (res != NULL) {
+ return res;
+ } else if (def != NULL) {
+ if (PyErr_Occurred()) {
+ if(!PyErr_ExceptionMatches(PyExc_StopIteration))
+ return NULL;
+ PyErr_Clear();
+ }
+ Py_INCREF(def);
+ return def;
+ } else if (PyErr_Occurred()) {
+ return NULL;
+ } else {
+ PyErr_SetNone(PyExc_StopIteration);
+ return NULL;
+ }
+}
+
+PyDoc_STRVAR(next_doc,
+"next(iterator[, default])\n\
+\n\
+Return the next item from the iterator. If default is given and the iterator\n\
+is exhausted, it is returned instead of raising StopIteration.");
+
+
+/*[clinic input]
+setattr as builtin_setattr
+
+ obj: object
+ name: object
+ value: object
+ /
+
+Sets the named attribute on the given object to the specified value.
+
+setattr(x, 'y', v) is equivalent to ``x.y = v''
+[clinic start generated code]*/
+
+static PyObject *
+builtin_setattr_impl(PyObject *module, PyObject *obj, PyObject *name,
+ PyObject *value)
+/*[clinic end generated code: output=dc2ce1d1add9acb4 input=bd2b7ca6875a1899]*/
+{
+ if (PyObject_SetAttr(obj, name, value) != 0)
+ return NULL;
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+/*[clinic input]
+delattr as builtin_delattr
+
+ obj: object
+ name: object
+ /
+
+Deletes the named attribute from the given object.
+
+delattr(x, 'y') is equivalent to ``del x.y''
+[clinic start generated code]*/
+
+static PyObject *
+builtin_delattr_impl(PyObject *module, PyObject *obj, PyObject *name)
+/*[clinic end generated code: output=85134bc58dff79fa input=db16685d6b4b9410]*/
+{
+ if (PyObject_SetAttr(obj, name, (PyObject *)NULL) != 0)
+ return NULL;
+ Py_INCREF(Py_None);
+ return Py_None;
+}
+
+
+/*[clinic input]
+hash as builtin_hash
+
+ obj: object
+ /
+
+Return the hash value for the given object.
+
+Two objects that compare equal must also have the same hash value, but the
+reverse is not necessarily true.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_hash(PyObject *module, PyObject *obj)
+/*[clinic end generated code: output=237668e9d7688db7 input=58c48be822bf9c54]*/
+{
+ Py_hash_t x;
+
+ x = PyObject_Hash(obj);
+ if (x == -1)
+ return NULL;
+ return PyLong_FromSsize_t(x);
+}
+
+
+/*[clinic input]
+hex as builtin_hex
+
+ number: object
+ /
+
+Return the hexadecimal representation of an integer.
+
+ >>> hex(12648430)
+ '0xc0ffee'
+[clinic start generated code]*/
+
+static PyObject *
+builtin_hex(PyObject *module, PyObject *number)
+/*[clinic end generated code: output=e46b612169099408 input=e645aff5fc7d540e]*/
+{
+ return PyNumber_ToBase(number, 16);
+}
+
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
+static PyObject *
+builtin_iter(PyObject *self, PyObject *args)
+{
+ PyObject *v, *w = NULL;
+
+ if (!PyArg_UnpackTuple(args, "iter", 1, 2, &v, &w))
+ return NULL;
+ if (w == NULL)
+ return PyObject_GetIter(v);
+ if (!PyCallable_Check(v)) {
+ PyErr_SetString(PyExc_TypeError,
+ "iter(v, w): v must be callable");
+ return NULL;
+ }
+ return PyCallIter_New(v, w);
+}
+
+PyDoc_STRVAR(iter_doc,
+"iter(iterable) -> iterator\n\
+iter(callable, sentinel) -> iterator\n\
+\n\
+Get an iterator from an object. In the first form, the argument must\n\
+supply its own iterator, or be a sequence.\n\
+In the second form, the callable is called until it returns the sentinel.");
+
+
+/*[clinic input]
+len as builtin_len
+
+ obj: object
+ /
+
+Return the number of items in a container.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_len(PyObject *module, PyObject *obj)
+/*[clinic end generated code: output=fa7a270d314dfb6c input=bc55598da9e9c9b5]*/
+{
+ Py_ssize_t res;
+
+ res = PyObject_Size(obj);
+ if (res < 0 && PyErr_Occurred())
+ return NULL;
+ return PyLong_FromSsize_t(res);
+}
+
+
+/*[clinic input]
+locals as builtin_locals
+
+Return a dictionary containing the current scope's local variables.
+
+NOTE: Whether or not updates to this dictionary will affect name lookups in
+the local scope and vice-versa is *implementation dependent* and not
+covered by any backwards compatibility guarantees.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_locals_impl(PyObject *module)
+/*[clinic end generated code: output=b46c94015ce11448 input=7874018d478d5c4b]*/
+{
+ PyObject *d;
+
+ d = PyEval_GetLocals();
+ Py_XINCREF(d);
+ return d;
+}
+
+
+static PyObject *
+min_max(PyObject *args, PyObject *kwds, int op)
+{
+ PyObject *v, *it, *item, *val, *maxitem, *maxval, *keyfunc=NULL;
+ PyObject *emptytuple, *defaultval = NULL;
+ static char *kwlist[] = {"key", "default", NULL};
+ const char *name = op == Py_LT ? "min" : "max";
+ const int positional = PyTuple_Size(args) > 1;
+ int ret;
+
+ if (positional)
+ v = args;
+ else if (!PyArg_UnpackTuple(args, name, 1, 1, &v))
+ return NULL;
+
+ emptytuple = PyTuple_New(0);
+ if (emptytuple == NULL)
+ return NULL;
+ ret = PyArg_ParseTupleAndKeywords(emptytuple, kwds, "|$OO", kwlist,
+ &keyfunc, &defaultval);
+ Py_DECREF(emptytuple);
+ if (!ret)
+ return NULL;
+
+ if (positional && defaultval != NULL) {
+ PyErr_Format(PyExc_TypeError,
+ "Cannot specify a default for %s() with multiple "
+ "positional arguments", name);
+ return NULL;
+ }
+
+ it = PyObject_GetIter(v);
+ if (it == NULL) {
+ return NULL;
+ }
+
+ maxitem = NULL; /* the result */
+ maxval = NULL; /* the value associated with the result */
+ while (( item = PyIter_Next(it) )) {
+ /* get the value from the key function */
+ if (keyfunc != NULL) {
+ val = PyObject_CallFunctionObjArgs(keyfunc, item, NULL);
+ if (val == NULL)
+ goto Fail_it_item;
+ }
+ /* no key function; the value is the item */
+ else {
+ val = item;
+ Py_INCREF(val);
+ }
+
+ /* maximum value and item are unset; set them */
+ if (maxval == NULL) {
+ maxitem = item;
+ maxval = val;
+ }
+ /* maximum value and item are set; update them as necessary */
+ else {
+ int cmp = PyObject_RichCompareBool(val, maxval, op);
+ if (cmp < 0)
+ goto Fail_it_item_and_val;
+ else if (cmp > 0) {
+ Py_DECREF(maxval);
+ Py_DECREF(maxitem);
+ maxval = val;
+ maxitem = item;
+ }
+ else {
+ Py_DECREF(item);
+ Py_DECREF(val);
+ }
+ }
+ }
+ if (PyErr_Occurred())
+ goto Fail_it;
+ if (maxval == NULL) {
+ assert(maxitem == NULL);
+ if (defaultval != NULL) {
+ Py_INCREF(defaultval);
+ maxitem = defaultval;
+ } else {
+ PyErr_Format(PyExc_ValueError,
+ "%s() arg is an empty sequence", name);
+ }
+ }
+ else
+ Py_DECREF(maxval);
+ Py_DECREF(it);
+ return maxitem;
+
+Fail_it_item_and_val:
+ Py_DECREF(val);
+Fail_it_item:
+ Py_DECREF(item);
+Fail_it:
+ Py_XDECREF(maxval);
+ Py_XDECREF(maxitem);
+ Py_DECREF(it);
+ return NULL;
+}
+
+/* AC: cannot convert yet, waiting for *args support */
+static PyObject *
+builtin_min(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ return min_max(args, kwds, Py_LT);
+}
+
+PyDoc_STRVAR(min_doc,
+"min(iterable, *[, default=obj, key=func]) -> value\n\
+min(arg1, arg2, *args, *[, key=func]) -> value\n\
+\n\
+With a single iterable argument, return its smallest item. The\n\
+default keyword-only argument specifies an object to return if\n\
+the provided iterable is empty.\n\
+With two or more arguments, return the smallest argument.");
+
+
+/* AC: cannot convert yet, waiting for *args support */
+static PyObject *
+builtin_max(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ return min_max(args, kwds, Py_GT);
+}
+
+PyDoc_STRVAR(max_doc,
+"max(iterable, *[, default=obj, key=func]) -> value\n\
+max(arg1, arg2, *args, *[, key=func]) -> value\n\
+\n\
+With a single iterable argument, return its biggest item. The\n\
+default keyword-only argument specifies an object to return if\n\
+the provided iterable is empty.\n\
+With two or more arguments, return the largest argument.");
+
+
+/*[clinic input]
+oct as builtin_oct
+
+ number: object
+ /
+
+Return the octal representation of an integer.
+
+ >>> oct(342391)
+ '0o1234567'
+[clinic start generated code]*/
+
+static PyObject *
+builtin_oct(PyObject *module, PyObject *number)
+/*[clinic end generated code: output=40a34656b6875352 input=ad6b274af4016c72]*/
+{
+ return PyNumber_ToBase(number, 8);
+}
+
+
+/*[clinic input]
+ord as builtin_ord
+
+ c: object
+ /
+
+Return the Unicode code point for a one-character string.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_ord(PyObject *module, PyObject *c)
+/*[clinic end generated code: output=4fa5e87a323bae71 input=3064e5d6203ad012]*/
+{
+ long ord;
+ Py_ssize_t size;
+
+ if (PyBytes_Check(c)) {
+ size = PyBytes_GET_SIZE(c);
+ if (size == 1) {
+ ord = (long)((unsigned char)*PyBytes_AS_STRING(c));
+ return PyLong_FromLong(ord);
+ }
+ }
+ else if (PyUnicode_Check(c)) {
+ if (PyUnicode_READY(c) == -1)
+ return NULL;
+ size = PyUnicode_GET_LENGTH(c);
+ if (size == 1) {
+ ord = (long)PyUnicode_READ_CHAR(c, 0);
+ return PyLong_FromLong(ord);
+ }
+ }
+ else if (PyByteArray_Check(c)) {
+ /* XXX Hopefully this is temporary */
+ size = PyByteArray_GET_SIZE(c);
+ if (size == 1) {
+ ord = (long)((unsigned char)*PyByteArray_AS_STRING(c));
+ return PyLong_FromLong(ord);
+ }
+ }
+ else {
+ PyErr_Format(PyExc_TypeError,
+ "ord() expected string of length 1, but " \
+ "%.200s found", c->ob_type->tp_name);
+ return NULL;
+ }
+
+ PyErr_Format(PyExc_TypeError,
+ "ord() expected a character, "
+ "but string of length %zd found",
+ size);
+ return NULL;
+}
+
+
+/*[clinic input]
+pow as builtin_pow
+
+ x: object
+ y: object
+ z: object = None
+ /
+
+Equivalent to x**y (with two arguments) or x**y % z (with three arguments)
+
+Some types, such as ints, are able to use a more efficient algorithm when
+invoked using the three argument form.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_pow_impl(PyObject *module, PyObject *x, PyObject *y, PyObject *z)
+/*[clinic end generated code: output=50a14d5d130d404b input=653d57d38d41fc07]*/
+{
+ return PyNumber_Power(x, y, z);
+}
+
+
+/* AC: cannot convert yet, waiting for *args support */
+static PyObject *
+builtin_print(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ static char *kwlist[] = {"sep", "end", "file", "flush", 0};
+ static PyObject *dummy_args;
+ PyObject *sep = NULL, *end = NULL, *file = NULL, *flush = NULL;
+ int i, err;
+
+ if (dummy_args == NULL && !(dummy_args = PyTuple_New(0)))
+ return NULL;
+ if (!PyArg_ParseTupleAndKeywords(dummy_args, kwds, "|OOOO:print",
+ kwlist, &sep, &end, &file, &flush))
+ return NULL;
+ if (file == NULL || file == Py_None) {
+ file = _PySys_GetObjectId(&PyId_stdout);
+ if (file == NULL) {
+ PyErr_SetString(PyExc_RuntimeError, "lost sys.stdout");
+ return NULL;
+ }
+
+ /* sys.stdout may be None when FILE* stdout isn't connected */
+ if (file == Py_None)
+ Py_RETURN_NONE;
+ }
+
+ if (sep == Py_None) {
+ sep = NULL;
+ }
+ else if (sep && !PyUnicode_Check(sep)) {
+ PyErr_Format(PyExc_TypeError,
+ "sep must be None or a string, not %.200s",
+ sep->ob_type->tp_name);
+ return NULL;
+ }
+ if (end == Py_None) {
+ end = NULL;
+ }
+ else if (end && !PyUnicode_Check(end)) {
+ PyErr_Format(PyExc_TypeError,
+ "end must be None or a string, not %.200s",
+ end->ob_type->tp_name);
+ return NULL;
+ }
+
+ for (i = 0; i < PyTuple_Size(args); i++) {
+ if (i > 0) {
+ if (sep == NULL)
+ err = PyFile_WriteString(" ", file);
+ else
+ err = PyFile_WriteObject(sep, file,
+ Py_PRINT_RAW);
+ if (err)
+ return NULL;
+ }
+ err = PyFile_WriteObject(PyTuple_GetItem(args, i), file,
+ Py_PRINT_RAW);
+ if (err)
+ return NULL;
+ }
+
+ if (end == NULL)
+ err = PyFile_WriteString("\n", file);
+ else
+ err = PyFile_WriteObject(end, file, Py_PRINT_RAW);
+ if (err)
+ return NULL;
+
+ if (flush != NULL) {
+ PyObject *tmp;
+ int do_flush = PyObject_IsTrue(flush);
+ if (do_flush == -1)
+ return NULL;
+ else if (do_flush) {
+ tmp = _PyObject_CallMethodId(file, &PyId_flush, NULL);
+ if (tmp == NULL)
+ return NULL;
+ else
+ Py_DECREF(tmp);
+ }
+ }
+
+ Py_RETURN_NONE;
+}
+
+PyDoc_STRVAR(print_doc,
+"print(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n\
+\n\
+Prints the values to a stream, or to sys.stdout by default.\n\
+Optional keyword arguments:\n\
+file: a file-like object (stream); defaults to the current sys.stdout.\n\
+sep: string inserted between values, default a space.\n\
+end: string appended after the last value, default a newline.\n\
+flush: whether to forcibly flush the stream.");
+
+
+/*[clinic input]
+input as builtin_input
+
+ prompt: object(c_default="NULL") = None
+ /
+
+Read a string from standard input. The trailing newline is stripped.
+
+The prompt string, if given, is printed to standard output without a
+trailing newline before reading input.
+
+If the user hits EOF (*nix: Ctrl-D, Windows: Ctrl-Z+Return), raise EOFError.
+On *nix systems, readline is used if available.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_input_impl(PyObject *module, PyObject *prompt)
+/*[clinic end generated code: output=83db5a191e7a0d60 input=5e8bb70c2908fe3c]*/
+{
+ PyObject *fin = _PySys_GetObjectId(&PyId_stdin);
+ PyObject *fout = _PySys_GetObjectId(&PyId_stdout);
+ PyObject *ferr = _PySys_GetObjectId(&PyId_stderr);
+ PyObject *tmp;
+ long fd;
+ int tty;
+
+ /* Check that stdin/out/err are intact */
+ if (fin == NULL || fin == Py_None) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "input(): lost sys.stdin");
+ return NULL;
+ }
+ if (fout == NULL || fout == Py_None) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "input(): lost sys.stdout");
+ return NULL;
+ }
+ if (ferr == NULL || ferr == Py_None) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "input(): lost sys.stderr");
+ return NULL;
+ }
+
+ /* First of all, flush stderr */
+ tmp = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
+ if (tmp == NULL)
+ PyErr_Clear();
+ else
+ Py_DECREF(tmp);
+
+ /* We should only use (GNU) readline if Python's sys.stdin and
+ sys.stdout are the same as C's stdin and stdout, because we
+ need to pass it those. */
+ tmp = _PyObject_CallMethodId(fin, &PyId_fileno, NULL);
+ if (tmp == NULL) {
+ PyErr_Clear();
+ tty = 0;
+ }
+ else {
+ fd = PyLong_AsLong(tmp);
+ Py_DECREF(tmp);
+ if (fd < 0 && PyErr_Occurred())
+ return NULL;
+ tty = fd == fileno(stdin) && isatty(fd);
+ }
+ if (tty) {
+ tmp = _PyObject_CallMethodId(fout, &PyId_fileno, NULL);
+ if (tmp == NULL) {
+ PyErr_Clear();
+ tty = 0;
+ }
+ else {
+ fd = PyLong_AsLong(tmp);
+ Py_DECREF(tmp);
+ if (fd < 0 && PyErr_Occurred())
+ return NULL;
+ tty = fd == fileno(stdout) && isatty(fd);
+ }
+ }
+
+ /* If we're interactive, use (GNU) readline */
+ if (tty) {
+ PyObject *po = NULL;
+ char *promptstr;
+ char *s = NULL;
+ PyObject *stdin_encoding = NULL, *stdin_errors = NULL;
+ PyObject *stdout_encoding = NULL, *stdout_errors = NULL;
+ char *stdin_encoding_str, *stdin_errors_str;
+ PyObject *result;
+ size_t len;
+
+ /* stdin is a text stream, so it must have an encoding. */
+ stdin_encoding = _PyObject_GetAttrId(fin, &PyId_encoding);
+ stdin_errors = _PyObject_GetAttrId(fin, &PyId_errors);
+ if (!stdin_encoding || !stdin_errors ||
+ !PyUnicode_Check(stdin_encoding) ||
+ !PyUnicode_Check(stdin_errors)) {
+ tty = 0;
+ goto _readline_errors;
+ }
+ stdin_encoding_str = PyUnicode_AsUTF8(stdin_encoding);
+ stdin_errors_str = PyUnicode_AsUTF8(stdin_errors);
+ if (!stdin_encoding_str || !stdin_errors_str)
+ goto _readline_errors;
+ tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
+ if (tmp == NULL)
+ PyErr_Clear();
+ else
+ Py_DECREF(tmp);
+ if (prompt != NULL) {
+ /* We have a prompt, encode it as stdout would */
+ char *stdout_encoding_str, *stdout_errors_str;
+ PyObject *stringpo;
+ stdout_encoding = _PyObject_GetAttrId(fout, &PyId_encoding);
+ stdout_errors = _PyObject_GetAttrId(fout, &PyId_errors);
+ if (!stdout_encoding || !stdout_errors ||
+ !PyUnicode_Check(stdout_encoding) ||
+ !PyUnicode_Check(stdout_errors)) {
+ tty = 0;
+ goto _readline_errors;
+ }
+ stdout_encoding_str = PyUnicode_AsUTF8(stdout_encoding);
+ stdout_errors_str = PyUnicode_AsUTF8(stdout_errors);
+ if (!stdout_encoding_str || !stdout_errors_str)
+ goto _readline_errors;
+ stringpo = PyObject_Str(prompt);
+ if (stringpo == NULL)
+ goto _readline_errors;
+ po = PyUnicode_AsEncodedString(stringpo,
+ stdout_encoding_str, stdout_errors_str);
+ Py_CLEAR(stdout_encoding);
+ Py_CLEAR(stdout_errors);
+ Py_CLEAR(stringpo);
+ if (po == NULL)
+ goto _readline_errors;
+ assert(PyBytes_Check(po));
+ promptstr = PyBytes_AS_STRING(po);
+ }
+ else {
+ po = NULL;
+ promptstr = "";
+ }
+ s = PyOS_Readline(stdin, stdout, promptstr);
+ if (s == NULL) {
+ PyErr_CheckSignals();
+ if (!PyErr_Occurred())
+ PyErr_SetNone(PyExc_KeyboardInterrupt);
+ goto _readline_errors;
+ }
+
+ len = strlen(s);
+ if (len == 0) {
+ PyErr_SetNone(PyExc_EOFError);
+ result = NULL;
+ }
+ else {
+ if (len > PY_SSIZE_T_MAX) {
+ PyErr_SetString(PyExc_OverflowError,
+ "input: input too long");
+ result = NULL;
+ }
+ else {
+ len--; /* strip trailing '\n' */
+ if (len != 0 && s[len-1] == '\r')
+ len--; /* strip trailing '\r' */
+ result = PyUnicode_Decode(s, len, stdin_encoding_str,
+ stdin_errors_str);
+ }
+ }
+ Py_DECREF(stdin_encoding);
+ Py_DECREF(stdin_errors);
+ Py_XDECREF(po);
+ PyMem_FREE(s);
+ return result;
+
+ _readline_errors:
+ Py_XDECREF(stdin_encoding);
+ Py_XDECREF(stdout_encoding);
+ Py_XDECREF(stdin_errors);
+ Py_XDECREF(stdout_errors);
+ Py_XDECREF(po);
+ if (tty)
+ return NULL;
+
+ PyErr_Clear();
+ }
+
+ /* Fallback if we're not interactive */
+ if (prompt != NULL) {
+ if (PyFile_WriteObject(prompt, fout, Py_PRINT_RAW) != 0)
+ return NULL;
+ }
+ tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
+ if (tmp == NULL)
+ PyErr_Clear();
+ else
+ Py_DECREF(tmp);
+ return PyFile_GetLine(fin, -1);
+}
+
+
+/*[clinic input]
+repr as builtin_repr
+
+ obj: object
+ /
+
+Return the canonical string representation of the object.
+
+For many object types, including most builtins, eval(repr(obj)) == obj.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_repr(PyObject *module, PyObject *obj)
+/*[clinic end generated code: output=7ed3778c44fd0194 input=1c9e6d66d3e3be04]*/
+{
+ return PyObject_Repr(obj);
+}
+
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect
+ * or a semantic change to accept None for "ndigits"
+ */
+static PyObject *
+builtin_round(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *ndigits = NULL;
+ static char *kwlist[] = {"number", "ndigits", 0};
+ PyObject *number, *round, *result;
+
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O:round",
+ kwlist, &number, &ndigits))
+ return NULL;
+
+ if (Py_TYPE(number)->tp_dict == NULL) {
+ if (PyType_Ready(Py_TYPE(number)) < 0)
+ return NULL;
+ }
+
+ round = _PyObject_LookupSpecial(number, &PyId___round__);
+ if (round == NULL) {
+ if (!PyErr_Occurred())
+ PyErr_Format(PyExc_TypeError,
+ "type %.100s doesn't define __round__ method",
+ Py_TYPE(number)->tp_name);
+ return NULL;
+ }
+
+ if (ndigits == NULL || ndigits == Py_None)
+ result = PyObject_CallFunctionObjArgs(round, NULL);
+ else
+ result = PyObject_CallFunctionObjArgs(round, ndigits, NULL);
+ Py_DECREF(round);
+ return result;
+}
+
+PyDoc_STRVAR(round_doc,
+"round(number[, ndigits]) -> number\n\
+\n\
+Round a number to a given precision in decimal digits (default 0 digits).\n\
+This returns an int when called with one argument, otherwise the\n\
+same type as the number. ndigits may be negative.");
+
+
+/*AC: we need to keep the kwds dict intact to easily call into the
+ * list.sort method, which isn't currently supported in AC. So we just use
+ * the initially generated signature with a custom implementation.
+ */
+/* [disabled clinic input]
+sorted as builtin_sorted
+
+ iterable as seq: object
+ key as keyfunc: object = None
+ reverse: object = False
+
+Return a new list containing all items from the iterable in ascending order.
+
+A custom key function can be supplied to customize the sort order, and the
+reverse flag can be set to request the result in descending order.
+[end disabled clinic input]*/
+
+PyDoc_STRVAR(builtin_sorted__doc__,
+"sorted($module, iterable, /, *, key=None, reverse=False)\n"
+"--\n"
+"\n"
+"Return a new list containing all items from the iterable in ascending order.\n"
+"\n"
+"A custom key function can be supplied to customize the sort order, and the\n"
+"reverse flag can be set to request the result in descending order.");
+
+#define BUILTIN_SORTED_METHODDEF \
+ {"sorted", (PyCFunction)builtin_sorted, METH_VARARGS|METH_KEYWORDS, builtin_sorted__doc__},
+
+static PyObject *
+builtin_sorted(PyObject *self, PyObject *args, PyObject *kwds)
+{
+ PyObject *newlist, *v, *seq, *keyfunc=NULL, **newargs;
+ PyObject *callable;
+ static char *kwlist[] = {"", "key", "reverse", 0};
+ int reverse;
+ Py_ssize_t nargs;
+
+ /* args 1-3 should match listsort in Objects/listobject.c */
+ if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|Oi:sorted",
+ kwlist, &seq, &keyfunc, &reverse))
+ return NULL;
+
+ newlist = PySequence_List(seq);
+ if (newlist == NULL)
+ return NULL;
+
+ callable = _PyObject_GetAttrId(newlist, &PyId_sort);
+ if (callable == NULL) {
+ Py_DECREF(newlist);
+ return NULL;
+ }
+
+ assert(PyTuple_GET_SIZE(args) >= 1);
+ newargs = &PyTuple_GET_ITEM(args, 1);
+ nargs = PyTuple_GET_SIZE(args) - 1;
+ v = _PyObject_FastCallDict(callable, newargs, nargs, kwds);
+ Py_DECREF(callable);
+ if (v == NULL) {
+ Py_DECREF(newlist);
+ return NULL;
+ }
+ Py_DECREF(v);
+ return newlist;
+}
+
+
+/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
+static PyObject *
+builtin_vars(PyObject *self, PyObject *args)
+{
+ PyObject *v = NULL;
+ PyObject *d;
+
+ if (!PyArg_UnpackTuple(args, "vars", 0, 1, &v))
+ return NULL;
+ if (v == NULL) {
+ d = PyEval_GetLocals();
+ if (d == NULL)
+ return NULL;
+ Py_INCREF(d);
+ }
+ else {
+ d = _PyObject_GetAttrId(v, &PyId___dict__);
+ if (d == NULL) {
+ PyErr_SetString(PyExc_TypeError,
+ "vars() argument must have __dict__ attribute");
+ return NULL;
+ }
+ }
+ return d;
+}
+
+PyDoc_STRVAR(vars_doc,
+"vars([object]) -> dictionary\n\
+\n\
+Without arguments, equivalent to locals().\n\
+With an argument, equivalent to object.__dict__.");
+
+
+/*[clinic input]
+sum as builtin_sum
+
+ iterable: object
+ start: object(c_default="NULL") = 0
+ /
+
+Return the sum of a 'start' value (default: 0) plus an iterable of numbers
+
+When the iterable is empty, return the start value.
+This function is intended specifically for use with numeric values and may
+reject non-numeric types.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_sum_impl(PyObject *module, PyObject *iterable, PyObject *start)
+/*[clinic end generated code: output=df758cec7d1d302f input=3b5b7a9d7611c73a]*/
+{
+ PyObject *result = start;
+ PyObject *temp, *item, *iter;
+
+ iter = PyObject_GetIter(iterable);
+ if (iter == NULL)
+ return NULL;
+
+ if (result == NULL) {
+ result = PyLong_FromLong(0);
+ if (result == NULL) {
+ Py_DECREF(iter);
+ return NULL;
+ }
+ } else {
+ /* reject string values for 'start' parameter */
+ if (PyUnicode_Check(result)) {
+ PyErr_SetString(PyExc_TypeError,
+ "sum() can't sum strings [use ''.join(seq) instead]");
+ Py_DECREF(iter);
+ return NULL;
+ }
+ if (PyBytes_Check(result)) {
+ PyErr_SetString(PyExc_TypeError,
+ "sum() can't sum bytes [use b''.join(seq) instead]");
+ Py_DECREF(iter);
+ return NULL;
+ }
+ if (PyByteArray_Check(result)) {
+ PyErr_SetString(PyExc_TypeError,
+ "sum() can't sum bytearray [use b''.join(seq) instead]");
+ Py_DECREF(iter);
+ return NULL;
+ }
+ Py_INCREF(result);
+ }
+
+#ifndef SLOW_SUM
+ /* Fast addition by keeping temporary sums in C instead of new Python objects.
+ Assumes all inputs are the same type. If the assumption fails, default
+ to the more general routine.
+ */
+ if (PyLong_CheckExact(result)) {
+ int overflow;
+ long i_result = PyLong_AsLongAndOverflow(result, &overflow);
+ /* If this already overflowed, don't even enter the loop. */
+ if (overflow == 0) {
+ Py_DECREF(result);
+ result = NULL;
+ }
+ while(result == NULL) {
+ item = PyIter_Next(iter);
+ if (item == NULL) {
+ Py_DECREF(iter);
+ if (PyErr_Occurred())
+ return NULL;
+ return PyLong_FromLong(i_result);
+ }
+ if (PyLong_CheckExact(item)) {
+ long b = PyLong_AsLongAndOverflow(item, &overflow);
+ long x = i_result + b;
+ if (overflow == 0 && ((x^i_result) >= 0 || (x^b) >= 0)) {
+ i_result = x;
+ Py_DECREF(item);
+ continue;
+ }
+ }
+ /* Either overflowed or is not an int. Restore real objects and process normally */
+ result = PyLong_FromLong(i_result);
+ if (result == NULL) {
+ Py_DECREF(item);
+ Py_DECREF(iter);
+ return NULL;
+ }
+ temp = PyNumber_Add(result, item);
+ Py_DECREF(result);
+ Py_DECREF(item);
+ result = temp;
+ if (result == NULL) {
+ Py_DECREF(iter);
+ return NULL;
+ }
+ }
+ }
+
+ if (PyFloat_CheckExact(result)) {
+ double f_result = PyFloat_AS_DOUBLE(result);
+ Py_DECREF(result);
+ result = NULL;
+ while(result == NULL) {
+ item = PyIter_Next(iter);
+ if (item == NULL) {
+ Py_DECREF(iter);
+ if (PyErr_Occurred())
+ return NULL;
+ return PyFloat_FromDouble(f_result);
+ }
+ if (PyFloat_CheckExact(item)) {
+ PyFPE_START_PROTECT("add", Py_DECREF(item); Py_DECREF(iter); return 0)
+ f_result += PyFloat_AS_DOUBLE(item);
+ PyFPE_END_PROTECT(f_result)
+ Py_DECREF(item);
+ continue;
+ }
+ if (PyLong_CheckExact(item)) {
+ long value;
+ int overflow;
+ value = PyLong_AsLongAndOverflow(item, &overflow);
+ if (!overflow) {
+ PyFPE_START_PROTECT("add", Py_DECREF(item); Py_DECREF(iter); return 0)
+ f_result += (double)value;
+ PyFPE_END_PROTECT(f_result)
+ Py_DECREF(item);
+ continue;
+ }
+ }
+ result = PyFloat_FromDouble(f_result);
+ if (result == NULL) {
+ Py_DECREF(item);
+ Py_DECREF(iter);
+ return NULL;
+ }
+ temp = PyNumber_Add(result, item);
+ Py_DECREF(result);
+ Py_DECREF(item);
+ result = temp;
+ if (result == NULL) {
+ Py_DECREF(iter);
+ return NULL;
+ }
+ }
+ }
+#endif
+
+ for(;;) {
+ item = PyIter_Next(iter);
+ if (item == NULL) {
+ /* error, or end-of-sequence */
+ if (PyErr_Occurred()) {
+ Py_DECREF(result);
+ result = NULL;
+ }
+ break;
+ }
+ /* It's tempting to use PyNumber_InPlaceAdd instead of
+ PyNumber_Add here, to avoid quadratic running time
+ when doing 'sum(list_of_lists, [])'. However, this
+ would produce a change in behaviour: a snippet like
+
+ empty = []
+ sum([[x] for x in range(10)], empty)
+
+ would change the value of empty. */
+ temp = PyNumber_Add(result, item);
+ Py_DECREF(result);
+ Py_DECREF(item);
+ result = temp;
+ if (result == NULL)
+ break;
+ }
+ Py_DECREF(iter);
+ return result;
+}
+
+
+/*[clinic input]
+isinstance as builtin_isinstance
+
+ obj: object
+ class_or_tuple: object
+ /
+
+Return whether an object is an instance of a class or of a subclass thereof.
+
+A tuple, as in ``isinstance(x, (A, B, ...))``, may be given as the target to
+check against. This is equivalent to ``isinstance(x, A) or isinstance(x, B)
+or ...`` etc.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_isinstance_impl(PyObject *module, PyObject *obj,
+ PyObject *class_or_tuple)
+/*[clinic end generated code: output=6faf01472c13b003 input=ffa743db1daf7549]*/
+{
+ int retval;
+
+ retval = PyObject_IsInstance(obj, class_or_tuple);
+ if (retval < 0)
+ return NULL;
+ return PyBool_FromLong(retval);
+}
+
+
+/*[clinic input]
+issubclass as builtin_issubclass
+
+ cls: object
+ class_or_tuple: object
+ /
+
+Return whether 'cls' is derived from another class or is the same class.
+
+A tuple, as in ``issubclass(x, (A, B, ...))``, may be given as the target to
+check against. This is equivalent to ``issubclass(x, A) or issubclass(x, B)
+or ...`` etc.
+[clinic start generated code]*/
+
+static PyObject *
+builtin_issubclass_impl(PyObject *module, PyObject *cls,
+ PyObject *class_or_tuple)
+/*[clinic end generated code: output=358412410cd7a250 input=af5f35e9ceaddaf6]*/
+{
+ int retval;
+
+ retval = PyObject_IsSubclass(cls, class_or_tuple);
+ if (retval < 0)
+ return NULL;
+ return PyBool_FromLong(retval);
+}
+
+
+typedef struct {
+ PyObject_HEAD
+ Py_ssize_t tuplesize;
+ PyObject *ittuple; /* tuple of iterators */
+ PyObject *result;
+} zipobject;
+
+static PyObject *
+zip_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
+{
+ zipobject *lz;
+ Py_ssize_t i;
+ PyObject *ittuple; /* tuple of iterators */
+ PyObject *result;
+ Py_ssize_t tuplesize = PySequence_Length(args);
+
+ if (type == &PyZip_Type && !_PyArg_NoKeywords("zip()", kwds))
+ return NULL;
+
+ /* args must be a tuple */
+ assert(PyTuple_Check(args));
+
+ /* obtain iterators */
+ ittuple = PyTuple_New(tuplesize);
+ if (ittuple == NULL)
+ return NULL;
+ for (i=0; i < tuplesize; ++i) {
+ PyObject *item = PyTuple_GET_ITEM(args, i);
+ PyObject *it = PyObject_GetIter(item);
+ if (it == NULL) {
+ if (PyErr_ExceptionMatches(PyExc_TypeError))
+ PyErr_Format(PyExc_TypeError,
+ "zip argument #%zd must support iteration",
+ i+1);
+ Py_DECREF(ittuple);
+ return NULL;
+ }
+ PyTuple_SET_ITEM(ittuple, i, it);
+ }
+
+ /* create a result holder */
+ result = PyTuple_New(tuplesize);
+ if (result == NULL) {
+ Py_DECREF(ittuple);
+ return NULL;
+ }
+ for (i=0 ; i < tuplesize ; i++) {
+ Py_INCREF(Py_None);
+ PyTuple_SET_ITEM(result, i, Py_None);
+ }
+
+ /* create zipobject structure */
+ lz = (zipobject *)type->tp_alloc(type, 0);
+ if (lz == NULL) {
+ Py_DECREF(ittuple);
+ Py_DECREF(result);
+ return NULL;
+ }
+ lz->ittuple = ittuple;
+ lz->tuplesize = tuplesize;
+ lz->result = result;
+
+ return (PyObject *)lz;
+}
+
+static void
+zip_dealloc(zipobject *lz)
+{
+ PyObject_GC_UnTrack(lz);
+ Py_XDECREF(lz->ittuple);
+ Py_XDECREF(lz->result);
+ Py_TYPE(lz)->tp_free(lz);
+}
+
+static int
+zip_traverse(zipobject *lz, visitproc visit, void *arg)
+{
+ Py_VISIT(lz->ittuple);
+ Py_VISIT(lz->result);
+ return 0;
+}
+
+static PyObject *
+zip_next(zipobject *lz)
+{
+ Py_ssize_t i;
+ Py_ssize_t tuplesize = lz->tuplesize;
+ PyObject *result = lz->result;
+ PyObject *it;
+ PyObject *item;
+ PyObject *olditem;
+
+ if (tuplesize == 0)
+ return NULL;
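+ /* If no other code holds a reference to the previously returned tuple,
+ recycle it in place instead of allocating a fresh tuple on every step. */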
+ if (Py_REFCNT(result) == 1) {
+ Py_INCREF(result);
+ for (i=0 ; i < tuplesize ; i++) {
+ it = PyTuple_GET_ITEM(lz->ittuple, i);
+ item = (*Py_TYPE(it)->tp_iternext)(it);
+ if (item == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ olditem = PyTuple_GET_ITEM(result, i);
+ PyTuple_SET_ITEM(result, i, item);
+ Py_DECREF(olditem);
+ }
+ } else {
+ result = PyTuple_New(tuplesize);
+ if (result == NULL)
+ return NULL;
+ for (i=0 ; i < tuplesize ; i++) {
+ it = PyTuple_GET_ITEM(lz->ittuple, i);
+ item = (*Py_TYPE(it)->tp_iternext)(it);
+ if (item == NULL) {
+ Py_DECREF(result);
+ return NULL;
+ }
+ PyTuple_SET_ITEM(result, i, item);
+ }
+ }
+ return result;
+}
+
+static PyObject *
+zip_reduce(zipobject *lz)
+{
+ /* Just recreate the zip with the internal iterator tuple */
+ return Py_BuildValue("OO", Py_TYPE(lz), lz->ittuple);
+}
+
+static PyMethodDef zip_methods[] = {
+ {"__reduce__", (PyCFunction)zip_reduce, METH_NOARGS, reduce_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+PyDoc_STRVAR(zip_doc,
+"zip(iter1 [,iter2 [...]]) --> zip object\n\
+\n\
+Return a zip object whose .__next__() method returns a tuple where\n\
+the i-th element comes from the i-th iterable argument. The .__next__()\n\
+method continues until the shortest iterable in the argument sequence\n\
+is exhausted and then it raises StopIteration.");
+
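+/* A minimal C-API sketch of the behaviour documented above (hypothetical
+ caller code, assuming an initialized interpreter): iteration stops as
+ soon as the shortest argument is exhausted.
+
+ PyObject *a = Py_BuildValue("[iii]", 1, 2, 3);
+ PyObject *b = Py_BuildValue("[ii]", 4, 5);
+ PyObject *z = PyObject_CallFunctionObjArgs((PyObject *)&PyZip_Type, a, b, NULL);
+ PyObject *pair;
+ while ((pair = PyIter_Next(z)) != NULL) { // yields (1, 4), then (2, 5)
+ Py_DECREF(pair);
+ }
+ Py_XDECREF(z);
+ Py_DECREF(a);
+ Py_DECREF(b);
+*/
+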
+PyTypeObject PyZip_Type = {
+ PyVarObject_HEAD_INIT(&PyType_Type, 0)
+ "zip", /* tp_name */
+ sizeof(zipobject), /* tp_basicsize */
+ 0, /* tp_itemsize */
+ /* methods */
+ (destructor)zip_dealloc, /* tp_dealloc */
+ 0, /* tp_print */
+ 0, /* tp_getattr */
+ 0, /* tp_setattr */
+ 0, /* tp_reserved */
+ 0, /* tp_repr */
+ 0, /* tp_as_number */
+ 0, /* tp_as_sequence */
+ 0, /* tp_as_mapping */
+ 0, /* tp_hash */
+ 0, /* tp_call */
+ 0, /* tp_str */
+ PyObject_GenericGetAttr, /* tp_getattro */
+ 0, /* tp_setattro */
+ 0, /* tp_as_buffer */
+ Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_BASETYPE, /* tp_flags */
+ zip_doc, /* tp_doc */
+ (traverseproc)zip_traverse, /* tp_traverse */
+ 0, /* tp_clear */
+ 0, /* tp_richcompare */
+ 0, /* tp_weaklistoffset */
+ PyObject_SelfIter, /* tp_iter */
+ (iternextfunc)zip_next, /* tp_iternext */
+ zip_methods, /* tp_methods */
+ 0, /* tp_members */
+ 0, /* tp_getset */
+ 0, /* tp_base */
+ 0, /* tp_dict */
+ 0, /* tp_descr_get */
+ 0, /* tp_descr_set */
+ 0, /* tp_dictoffset */
+ 0, /* tp_init */
+ PyType_GenericAlloc, /* tp_alloc */
+ zip_new, /* tp_new */
+ PyObject_GC_Del, /* tp_free */
+};
+
+
+static PyMethodDef builtin_methods[] = {
+ {"__build_class__", (PyCFunction)builtin___build_class__,
+ METH_VARARGS | METH_KEYWORDS, build_class_doc},
+ {"__import__", (PyCFunction)builtin___import__, METH_VARARGS | METH_KEYWORDS, import_doc},
+ BUILTIN_ABS_METHODDEF
+ BUILTIN_ALL_METHODDEF
+ BUILTIN_ANY_METHODDEF
+ BUILTIN_ASCII_METHODDEF
+ BUILTIN_BIN_METHODDEF
+ BUILTIN_CALLABLE_METHODDEF
+ BUILTIN_CHR_METHODDEF
+ BUILTIN_COMPILE_METHODDEF
+ BUILTIN_DELATTR_METHODDEF
+ {"dir", builtin_dir, METH_VARARGS, dir_doc},
+ BUILTIN_DIVMOD_METHODDEF
+ BUILTIN_EVAL_METHODDEF
+ BUILTIN_EXEC_METHODDEF
+ BUILTIN_FORMAT_METHODDEF
+ {"getattr", builtin_getattr, METH_VARARGS, getattr_doc},
+ BUILTIN_GLOBALS_METHODDEF
+ BUILTIN_HASATTR_METHODDEF
+ BUILTIN_HASH_METHODDEF
+ BUILTIN_HEX_METHODDEF
+ BUILTIN_ID_METHODDEF
+ BUILTIN_INPUT_METHODDEF
+ BUILTIN_ISINSTANCE_METHODDEF
+ BUILTIN_ISSUBCLASS_METHODDEF
+ {"iter", builtin_iter, METH_VARARGS, iter_doc},
+ BUILTIN_LEN_METHODDEF
+ BUILTIN_LOCALS_METHODDEF
+ {"max", (PyCFunction)builtin_max, METH_VARARGS | METH_KEYWORDS, max_doc},
+ {"min", (PyCFunction)builtin_min, METH_VARARGS | METH_KEYWORDS, min_doc},
+ {"next", (PyCFunction)builtin_next, METH_VARARGS, next_doc},
+ BUILTIN_OCT_METHODDEF
+ BUILTIN_ORD_METHODDEF
+ BUILTIN_POW_METHODDEF
+ {"print", (PyCFunction)builtin_print, METH_VARARGS | METH_KEYWORDS, print_doc},
+ BUILTIN_REPR_METHODDEF
+ {"round", (PyCFunction)builtin_round, METH_VARARGS | METH_KEYWORDS, round_doc},
+ BUILTIN_SETATTR_METHODDEF
+ BUILTIN_SORTED_METHODDEF
+ BUILTIN_SUM_METHODDEF
+ {"vars", builtin_vars, METH_VARARGS, vars_doc},
+ {NULL, NULL},
+};
+
+PyDoc_STRVAR(builtin_doc,
+"Built-in functions, exceptions, and other objects.\n\
+\n\
+Noteworthy: None is the `nil' object; Ellipsis represents `...' in slices.");
+
+static struct PyModuleDef builtinsmodule = {
+ PyModuleDef_HEAD_INIT,
+ "builtins",
+ builtin_doc,
+ -1, /* multiple "initialization" just copies the module dict. */
+ builtin_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+
+PyObject *
+_PyBuiltin_Init(void)
+{
+ PyObject *mod, *dict, *debug;
+
+ if (PyType_Ready(&PyFilter_Type) < 0 ||
+ PyType_Ready(&PyMap_Type) < 0 ||
+ PyType_Ready(&PyZip_Type) < 0)
+ return NULL;
+
+ mod = PyModule_Create(&builtinsmodule);
+ if (mod == NULL)
+ return NULL;
+ dict = PyModule_GetDict(mod);
+
+#ifdef Py_TRACE_REFS
+ /* "builtins" exposes a number of statically allocated objects
+ * that, before this code was added in 2.3, never showed up in
+ * the list of "all objects" maintained by Py_TRACE_REFS. As a
+ * result, programs leaking references to None and False (etc)
+ * couldn't be diagnosed by examining sys.getobjects(0).
+ */
+#define ADD_TO_ALL(OBJECT) _Py_AddToAllObjects((PyObject *)(OBJECT), 0)
+#else
+#define ADD_TO_ALL(OBJECT) (void)0
+#endif
+
+#define SETBUILTIN(NAME, OBJECT) \
+ if (PyDict_SetItemString(dict, NAME, (PyObject *)OBJECT) < 0) \
+ return NULL; \
+ ADD_TO_ALL(OBJECT)
+
+ SETBUILTIN("None", Py_None);
+ SETBUILTIN("Ellipsis", Py_Ellipsis);
+ SETBUILTIN("NotImplemented", Py_NotImplemented);
+ SETBUILTIN("False", Py_False);
+ SETBUILTIN("True", Py_True);
+ SETBUILTIN("bool", &PyBool_Type);
+ SETBUILTIN("memoryview", &PyMemoryView_Type);
+ SETBUILTIN("bytearray", &PyByteArray_Type);
+ SETBUILTIN("bytes", &PyBytes_Type);
+ SETBUILTIN("classmethod", &PyClassMethod_Type);
+ SETBUILTIN("complex", &PyComplex_Type);
+ SETBUILTIN("dict", &PyDict_Type);
+ SETBUILTIN("enumerate", &PyEnum_Type);
+ SETBUILTIN("filter", &PyFilter_Type);
+ SETBUILTIN("float", &PyFloat_Type);
+ SETBUILTIN("frozenset", &PyFrozenSet_Type);
+ SETBUILTIN("property", &PyProperty_Type);
+ SETBUILTIN("int", &PyLong_Type);
+ SETBUILTIN("list", &PyList_Type);
+ SETBUILTIN("map", &PyMap_Type);
+ SETBUILTIN("object", &PyBaseObject_Type);
+ SETBUILTIN("range", &PyRange_Type);
+ SETBUILTIN("reversed", &PyReversed_Type);
+ SETBUILTIN("set", &PySet_Type);
+ SETBUILTIN("slice", &PySlice_Type);
+ SETBUILTIN("staticmethod", &PyStaticMethod_Type);
+ SETBUILTIN("str", &PyUnicode_Type);
+ SETBUILTIN("super", &PySuper_Type);
+ SETBUILTIN("tuple", &PyTuple_Type);
+ SETBUILTIN("type", &PyType_Type);
+ SETBUILTIN("zip", &PyZip_Type);
+ debug = PyBool_FromLong(Py_OptimizeFlag == 0);
+ if (PyDict_SetItemString(dict, "__debug__", debug) < 0) {
+ Py_DECREF(debug);
+ return NULL;
+ }
+ Py_DECREF(debug);
+
+ return mod;
+#undef ADD_TO_ALL
+#undef SETBUILTIN
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
new file mode 100644
index 00000000..3367e296
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
@@ -0,0 +1,1767 @@
+/** @file
+ File Utilities
+
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+#include "Python.h"
+#include "osdefs.h"
+#include <locale.h>
+
+#ifdef MS_WINDOWS
+# include <malloc.h>
+# include <windows.h>
+extern int winerror_to_errno(int);
+#endif
+
+#ifdef HAVE_LANGINFO_H
+#include <langinfo.h>
+#endif
+
+#ifdef HAVE_SYS_IOCTL_H
+#include <sys/ioctl.h>
+#endif
+
+#ifdef HAVE_FCNTL_H
+#include <fcntl.h>
+#endif /* HAVE_FCNTL_H */
+
+#if defined(__APPLE__) || defined(__ANDROID__)
+extern wchar_t* _Py_DecodeUTF8_surrogateescape(const char *s, Py_ssize_t size);
+#endif
+
+#ifdef O_CLOEXEC
+/* Does open() support the O_CLOEXEC flag? Possible values:
+
+ -1: unknown
+ 0: open() ignores O_CLOEXEC flag, ex: Linux kernel older than 2.6.23
+ 1: open() supports O_CLOEXEC flag, close-on-exec is set
+
+ The flag is used by _Py_open(), _Py_open_noraise(), io.FileIO
+ and os.open(). */
+int _Py_open_cloexec_works = -1;
+#endif
+
+PyObject *
+_Py_device_encoding(int fd)
+{
+#if defined(MS_WINDOWS)
+ UINT cp;
+#endif
+ int valid;
+ _Py_BEGIN_SUPPRESS_IPH
+ valid = isatty(fd);
+ _Py_END_SUPPRESS_IPH
+ if (!valid)
+ Py_RETURN_NONE;
+
+#if defined(MS_WINDOWS)
+ if (fd == 0)
+ cp = GetConsoleCP();
+ else if (fd == 1 || fd == 2)
+ cp = GetConsoleOutputCP();
+ else
+ cp = 0;
+ /* GetConsoleCP() and GetConsoleOutputCP() return 0 if the application
+ has no console */
+ if (cp != 0)
+ return PyUnicode_FromFormat("cp%u", (unsigned int)cp);
+#elif defined(CODESET)
+ {
+ char *codeset = nl_langinfo(CODESET);
+ if (codeset != NULL && codeset[0] != 0)
+ return PyUnicode_FromString(codeset);
+ }
+#endif
+ Py_RETURN_NONE;
+}
+
+#if !defined(__APPLE__) && !defined(__ANDROID__) && !defined(MS_WINDOWS)
+
+#define USE_FORCE_ASCII
+
+extern int _Py_normalize_encoding(const char *, char *, size_t);
+
+/* Workaround FreeBSD and OpenIndiana locale encoding issue with the C locale.
+ On these operating systems, nl_langinfo(CODESET) announces an alias of the
+ ASCII encoding, whereas mbstowcs() and wcstombs() functions use the
+ ISO-8859-1 encoding. The problem is that os.fsencode() and os.fsdecode() use
+ locale.getpreferredencoding() codec. For example, if command line arguments
+ are decoded by mbstowcs() and encoded back by os.fsencode(), we get a
+ UnicodeEncodeError instead of retrieving the original byte string.
+
+ The workaround is enabled if setlocale(LC_CTYPE, NULL) returns "C",
+ nl_langinfo(CODESET) announces "ascii" (or an alias to ASCII), and at least
+ one byte in range 0x80-0xff can be decoded from the locale encoding. The
+ workaround is also enabled on error, for example if getting the locale
+ failed.
+
+ Values of force_ascii:
+
+ 1: the workaround is used: Py_EncodeLocale() uses
+ encode_ascii_surrogateescape() and Py_DecodeLocale() uses
+ decode_ascii_surrogateescape()
+ 0: the workaround is not used: Py_EncodeLocale() uses wcstombs() and
+ Py_DecodeLocale() uses mbstowcs()
+ -1: unknown, need to call check_force_ascii() to get the value
+*/
+static int force_ascii = -1;
+
+static int
+check_force_ascii(void)
+{
+ char *loc;
+#if defined(HAVE_LANGINFO_H) && defined(CODESET)
+ char *codeset, **alias;
+ char encoding[20]; /* longest name: "iso_646.irv_1991\0" */
+ int is_ascii;
+ unsigned int i;
+ char* ascii_aliases[] = {
+ "ascii",
+ /* Aliases from Lib/encodings/aliases.py */
+ "646",
+ "ansi_x3.4_1968",
+ "ansi_x3.4_1986",
+ "ansi_x3_4_1968",
+ "cp367",
+ "csascii",
+ "ibm367",
+ "iso646_us",
+ "iso_646.irv_1991",
+ "iso_ir_6",
+ "us",
+ "us_ascii",
+ NULL
+ };
+#endif
+
+ loc = setlocale(LC_CTYPE, NULL);
+ if (loc == NULL)
+ goto error;
+ if (strcmp(loc, "C") != 0 && strcmp(loc, "POSIX") != 0) {
+ /* the LC_CTYPE locale is different than C */
+ return 0;
+ }
+
+#if defined(HAVE_LANGINFO_H) && defined(CODESET)
+ codeset = nl_langinfo(CODESET);
+ if (!codeset || codeset[0] == '\0') {
+ /* CODESET is not set or empty */
+ goto error;
+ }
+ if (!_Py_normalize_encoding(codeset, encoding, sizeof(encoding)))
+ goto error;
+
+ is_ascii = 0;
+ for (alias=ascii_aliases; *alias != NULL; alias++) {
+ if (strcmp(encoding, *alias) == 0) {
+ is_ascii = 1;
+ break;
+ }
+ }
+ if (!is_ascii) {
+ /* nl_langinfo(CODESET) is not "ascii" or an alias of ASCII */
+ return 0;
+ }
+
+ for (i=0x80; i<0xff; i++) {
+ unsigned char ch;
+ wchar_t wch;
+ size_t res;
+
+ ch = (unsigned char)i;
+ res = mbstowcs(&wch, (char*)&ch, 1);
+ if (res != (size_t)-1) {
+ /* decoding a non-ASCII character from the locale encoding succeeded:
+ the locale encoding is not ASCII; force the ASCII workaround */
+ return 1;
+ }
+ }
+ /* None of the bytes in the range 0x80-0xff can be decoded from the locale
+ encoding: the locale encoding is really ASCII */
+ return 0;
+#else
+ /* nl_langinfo(CODESET) is not available: always force ASCII */
+ return 1;
+#endif
+
+error:
+ /* if an error occurred, force the ASCII encoding */
+ return 1;
+}
+
+static char*
+encode_ascii_surrogateescape(const wchar_t *text, size_t *error_pos)
+{
+ char *result = NULL, *out;
+ size_t len, i;
+ wchar_t ch;
+
+ if (error_pos != NULL)
+ *error_pos = (size_t)-1;
+
+ len = wcslen(text);
+
+ result = PyMem_Malloc(len + 1); /* +1 for NUL byte */
+ if (result == NULL)
+ return NULL;
+
+ out = result;
+ for (i=0; i<len; i++) {
+ ch = text[i];
+
+ if (ch <= 0x7f) {
+ /* ASCII character */
+ *out++ = (char)ch;
+ }
+ else if (0xdc80 <= ch && ch <= 0xdcff) {
+ /* UTF-8b surrogate */
+ *out++ = (char)(ch - 0xdc00);
+ }
+ else {
+ if (error_pos != NULL)
+ *error_pos = i;
+ PyMem_Free(result);
+ return NULL;
+ }
+ }
+ *out = '\0';
+ return result;
+}
+#endif /* !defined(__APPLE__) && !defined(__ANDROID__) && !defined(MS_WINDOWS) */
+
+#if !defined(HAVE_MBRTOWC) || defined(USE_FORCE_ASCII)
+static wchar_t*
+decode_ascii_surrogateescape(const char *arg, size_t *size)
+{
+ wchar_t *res;
+ unsigned char *in;
+ wchar_t *out;
+ size_t argsize = strlen(arg) + 1;
+
+ if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
+ return NULL;
+ res = PyMem_RawMalloc(argsize*sizeof(wchar_t));
+ if (!res)
+ return NULL;
+
+ in = (unsigned char*)arg;
+ out = res;
+ while(*in)
+ if(*in < 128)
+ *out++ = *in++;
+ else
+ *out++ = 0xdc00 + *in++;
+ *out = 0;
+ if (size != NULL)
+ *size = out - res;
+ return res;
+}
+#endif
+
+
+static wchar_t*
+decode_current_locale(const char* arg, size_t *size)
+{
+ wchar_t *res;
+ size_t argsize;
+ size_t count;
+#ifdef HAVE_MBRTOWC
+ unsigned char *in;
+ wchar_t *out;
+ mbstate_t mbs;
+#endif
+
+#ifdef HAVE_BROKEN_MBSTOWCS
+ /* Some platforms have a broken implementation of
+ * mbstowcs which does not count the characters that
+ * would result from conversion. Use an upper bound.
+ */
+ argsize = strlen(arg);
+#else
+ argsize = mbstowcs(NULL, arg, 0);
+#endif
+ if (argsize != (size_t)-1) {
+ if (argsize == PY_SSIZE_T_MAX)
+ goto oom;
+ argsize += 1;
+ if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
+ goto oom;
+ res = (wchar_t *)PyMem_RawMalloc(argsize*sizeof(wchar_t));
+ if (!res)
+ goto oom;
+ count = mbstowcs(res, arg, argsize);
+ if (count != (size_t)-1) {
+ wchar_t *tmp;
+ /* Only use the result if it contains no
+ surrogate characters. */
+ for (tmp = res; *tmp != 0 &&
+ !Py_UNICODE_IS_SURROGATE(*tmp); tmp++)
+ ;
+ if (*tmp == 0) {
+ if (size != NULL)
+ *size = count;
+ return res;
+ }
+ }
+ PyMem_RawFree(res);
+ }
+ /* Conversion failed. Fall back to escaping with surrogateescape. */
+#ifdef HAVE_MBRTOWC
+ /* Try conversion with mbrtowc (C99), and escape non-decodable bytes. */
+
+ /* Overallocate; as multi-byte characters are in the argument, the
+ actual output could use less memory. */
+ argsize = strlen(arg) + 1;
+ if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
+ goto oom;
+ res = (wchar_t*)PyMem_RawMalloc(argsize*sizeof(wchar_t));
+ if (!res)
+ goto oom;
+ in = (unsigned char*)arg;
+ out = res;
+ memset(&mbs, 0, sizeof mbs);
+ while (argsize) {
+ size_t converted = mbrtowc(out, (char*)in, argsize, &mbs);
+ if (converted == 0)
+ /* Reached end of string; null char stored. */
+ break;
+ if (converted == (size_t)-2) {
+ /* Incomplete character. This should never happen,
+ since we provide everything that we have -
+ unless there is a bug in the C library, or I
+ misunderstood how mbrtowc works. */
+ PyMem_RawFree(res);
+ if (size != NULL)
+ *size = (size_t)-2;
+ return NULL;
+ }
+ if (converted == (size_t)-1) {
+ /* Conversion error. Escape as UTF-8b, and start over
+ in the initial shift state. */
+ *out++ = 0xdc00 + *in++;
+ argsize--;
+ memset(&mbs, 0, sizeof mbs);
+ continue;
+ }
+ if (Py_UNICODE_IS_SURROGATE(*out)) {
+ /* Surrogate character. Escape the original
+ byte sequence with surrogateescape. */
+ argsize -= converted;
+ while (converted--)
+ *out++ = 0xdc00 + *in++;
+ continue;
+ }
+ /* successfully converted some bytes */
+ in += converted;
+ argsize -= converted;
+ out++;
+ }
+ if (size != NULL)
+ *size = out - res;
+#else /* HAVE_MBRTOWC */
+ /* Cannot use C locale for escaping; manually escape as if charset
+ is ASCII (i.e. escape all bytes > 128). This will still roundtrip
+ correctly in the locale's charset, which must be an ASCII superset. */
+ res = decode_ascii_surrogateescape(arg, size);
+ if (res == NULL)
+ goto oom;
+#endif /* HAVE_MBRTOWC */
+ return res;
+
+oom:
+ if (size != NULL)
+ *size = (size_t)-1;
+ return NULL;
+}
+
+
+static wchar_t*
+decode_locale(const char* arg, size_t *size, int current_locale)
+{
+ if (current_locale) {
+ return decode_current_locale(arg, size);
+ }
+
+#if defined(__APPLE__) || defined(__ANDROID__)
+ wchar_t *wstr;
+ wstr = _Py_DecodeUTF8_surrogateescape(arg, strlen(arg));
+ if (size != NULL) {
+ if (wstr != NULL)
+ *size = wcslen(wstr);
+ else
+ *size = (size_t)-1;
+ }
+ return wstr;
+#else
+
+#ifdef USE_FORCE_ASCII
+ if (force_ascii == -1) {
+ force_ascii = check_force_ascii();
+ }
+
+ if (force_ascii) {
+ /* force ASCII encoding to workaround mbstowcs() issue */
+ wchar_t *res = decode_ascii_surrogateescape(arg, size);
+ if (res == NULL) {
+ if (size != NULL)
+ *size = (size_t)-1;
+ return NULL;
+ }
+ return res;
+ }
+#endif
+
+ return decode_current_locale(arg, size);
+#endif /* __APPLE__ or __ANDROID__ */
+}
+
+
+/* Decode a byte string from the locale encoding with the
+ surrogateescape error handler: undecodable bytes are decoded as characters
+ in range U+DC80..U+DCFF. If a byte sequence can be decoded as a surrogate
+ character, escape the bytes using the surrogateescape error handler instead
+ of decoding them.
+
+ Return a pointer to a newly allocated wide character string, use
+ PyMem_RawFree() to free the memory. If size is not NULL, write the number of
+ wide characters excluding the null character into *size.
+
+ Return NULL on decoding error or memory allocation error. If *size* is not
+ NULL, *size is set to (size_t)-1 on memory error or set to (size_t)-2 on
+ decoding error.
+
+ Decoding errors should never happen, unless there is a bug in the C
+ library.
+
+ Use the Py_EncodeLocale() function to encode the character string back to a
+ byte string. */
+wchar_t*
+Py_DecodeLocale(const char* arg, size_t *size)
+{
+ return decode_locale(arg, size, 0);
+}
+
+
+wchar_t*
+_Py_DecodeLocaleEx(const char* arg, size_t *size, int current_locale)
+{
+ return decode_locale(arg, size, current_locale);
+}
+
+
+static char*
+encode_current_locale(const wchar_t *text, size_t *error_pos)
+{
+ const size_t len = wcslen(text);
+ char *result = NULL, *bytes = NULL;
+ size_t i, size, converted;
+ wchar_t c, buf[2];
+
+ /* The function works in two steps:
+ 1. compute the length of the output buffer in bytes (size)
+ 2. output the bytes */
+ size = 0;
+ buf[1] = 0;
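+ /* First pass: bytes == NULL, so only `size` (the number of bytes needed)
+ is accumulated. Second pass: bytes points into `result` and `size`
+ counts the space that is still available. */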
+ while (1) {
+ for (i=0; i < len; i++) {
+ c = text[i];
+ if (c >= 0xdc80 && c <= 0xdcff) {
+ /* UTF-8b surrogate */
+ if (bytes != NULL) {
+ *bytes++ = c - 0xdc00;
+ size--;
+ }
+ else
+ size++;
+ continue;
+ }
+ else {
+ buf[0] = c;
+ if (bytes != NULL)
+ converted = wcstombs(bytes, buf, size);
+ else
+ converted = wcstombs(NULL, buf, 0);
+ if (converted == (size_t)-1) {
+ if (result != NULL)
+ PyMem_Free(result);
+ if (error_pos != NULL)
+ *error_pos = i;
+ return NULL;
+ }
+ if (bytes != NULL) {
+ bytes += converted;
+ size -= converted;
+ }
+ else
+ size += converted;
+ }
+ }
+ if (result != NULL) {
+ *bytes = '\0';
+ break;
+ }
+
+ size += 1; /* nul byte at the end */
+ result = PyMem_Malloc(size);
+ if (result == NULL) {
+ if (error_pos != NULL)
+ *error_pos = (size_t)-1;
+ return NULL;
+ }
+ bytes = result;
+ }
+ return result;
+}
+
+
+static char*
+encode_locale(const wchar_t *text, size_t *error_pos, int current_locale)
+{
+ if (current_locale) {
+ return encode_current_locale(text, error_pos);
+ }
+
+#if defined(__APPLE__) || defined(__ANDROID__)
+ Py_ssize_t len;
+ PyObject *unicode, *bytes = NULL;
+ char *cpath;
+
+ unicode = PyUnicode_FromWideChar(text, wcslen(text));
+ if (unicode == NULL)
+ return NULL;
+
+ bytes = _PyUnicode_AsUTF8String(unicode, "surrogateescape");
+ Py_DECREF(unicode);
+ if (bytes == NULL) {
+ PyErr_Clear();
+ if (error_pos != NULL)
+ *error_pos = (size_t)-1;
+ return NULL;
+ }
+
+ len = PyBytes_GET_SIZE(bytes);
+ cpath = PyMem_Malloc(len+1);
+ if (cpath == NULL) {
+ PyErr_Clear();
+ Py_DECREF(bytes);
+ if (error_pos != NULL)
+ *error_pos = (size_t)-1;
+ return NULL;
+ }
+ memcpy(cpath, PyBytes_AsString(bytes), len + 1);
+ Py_DECREF(bytes);
+ return cpath;
+#else /* __APPLE__ || __ANDROID__ */
+
+#ifdef USE_FORCE_ASCII
+ if (force_ascii == -1) {
+ force_ascii = check_force_ascii();
+ }
+
+ if (force_ascii) {
+ return encode_ascii_surrogateescape(text, error_pos);
+ }
+#endif
+
+ return encode_current_locale(text, error_pos);
+#endif /* __APPLE__ or __ANDROID__ */
+}
+
+
+/* Encode a wide character string to the locale encoding with the
+ surrogateescape error handler: surrogate characters in the range
+ U+DC80..U+DCFF are converted to bytes 0x80..0xFF.
+
+ Return a pointer to a newly allocated byte string, use PyMem_Free() to free
+ the memory. Return NULL on encoding or memory allocation error.
+
+ If error_pos is not NULL, *error_pos is set to the index of the invalid
+ character on encoding error, or set to (size_t)-1 otherwise.
+
+ Use the Py_DecodeLocale() function to decode the bytes string back to a wide
+ character string. */
+char*
+Py_EncodeLocale(const wchar_t *text, size_t *error_pos)
+{
+ return encode_locale(text, error_pos, 0);
+}
+
+
+char*
+_Py_EncodeLocaleEx(const wchar_t *text, size_t *error_pos, int current_locale)
+{
+ return encode_locale(text, error_pos, current_locale);
+}
+
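+/* A minimal round-trip sketch of the two public helpers above
+ (hypothetical caller code): undecodable bytes survive a decode/encode
+ cycle unchanged thanks to the surrogateescape convention.
+
+ size_t wlen;
+ wchar_t *wpath = Py_DecodeLocale("caf\xe9.txt", &wlen); // 0xE9 may decode to U+DCE9
+ if (wpath != NULL) {
+ char *bytes = Py_EncodeLocale(wpath, NULL); // restores the 0xE9 byte
+ PyMem_Free(bytes);
+ PyMem_RawFree(wpath); // decode result uses the raw allocator
+ }
+*/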
+
+#ifdef MS_WINDOWS
+static __int64 secs_between_epochs = 11644473600; /* Seconds between 1.1.1601 and 1.1.1970 */
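+/* 11644473600 s = (369 years * 365 days + 89 leap days) * 86400 s/day,
+ i.e. the number of seconds from 1601-01-01 to 1970-01-01. */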
+
+static void
+FILE_TIME_to_time_t_nsec(FILETIME *in_ptr, time_t *time_out, int* nsec_out)
+{
+ /* XXX endianness. Shouldn't matter, as all Windows implementations are little-endian */
+ /* Cannot simply cast and dereference in_ptr,
+ since it might not be aligned properly */
+ __int64 in;
+ memcpy(&in, in_ptr, sizeof(in));
+ *nsec_out = (int)(in % 10000000) * 100; /* FILETIME is in units of 100 nsec. */
+ *time_out = Py_SAFE_DOWNCAST((in / 10000000) - secs_between_epochs, __int64, time_t);
+}
+
+void
+_Py_time_t_to_FILE_TIME(time_t time_in, int nsec_in, FILETIME *out_ptr)
+{
+ /* XXX endianness */
+ __int64 out;
+ out = time_in + secs_between_epochs;
+ out = out * 10000000 + nsec_in / 100;
+ memcpy(out_ptr, &out, sizeof(out));
+}
+
+/* Below, we *know* that ugo+r is 0444 */
+#if _S_IREAD != 0400
+#error Unsupported C library
+#endif
+static int
+attributes_to_mode(DWORD attr)
+{
+ int m = 0;
+ if (attr & FILE_ATTRIBUTE_DIRECTORY)
+ m |= _S_IFDIR | 0111; /* IFEXEC for user,group,other */
+ else
+ m |= _S_IFREG;
+ if (attr & FILE_ATTRIBUTE_READONLY)
+ m |= 0444;
+ else
+ m |= 0666;
+ return m;
+}
+
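+/* Convert Win32 BY_HANDLE_FILE_INFORMATION (plus the reparse tag) into the
+ portable _Py_stat_struct layout used by the rest of the interpreter. */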
+void
+_Py_attribute_data_to_stat(BY_HANDLE_FILE_INFORMATION *info, ULONG reparse_tag,
+ struct _Py_stat_struct *result)
+{
+ memset(result, 0, sizeof(*result));
+ result->st_mode = attributes_to_mode(info->dwFileAttributes);
+ result->st_size = (((__int64)info->nFileSizeHigh)<<32) + info->nFileSizeLow;
+ result->st_dev = info->dwVolumeSerialNumber;
+ result->st_rdev = result->st_dev;
+ FILE_TIME_to_time_t_nsec(&info->ftCreationTime, &result->st_ctime, &result->st_ctime_nsec);
+ FILE_TIME_to_time_t_nsec(&info->ftLastWriteTime, &result->st_mtime, &result->st_mtime_nsec);
+ FILE_TIME_to_time_t_nsec(&info->ftLastAccessTime, &result->st_atime, &result->st_atime_nsec);
+ result->st_nlink = info->nNumberOfLinks;
+ result->st_ino = (((uint64_t)info->nFileIndexHigh) << 32) + info->nFileIndexLow;
+ if (reparse_tag == IO_REPARSE_TAG_SYMLINK) {
+ /* first clear the S_IFMT bits */
+ result->st_mode ^= (result->st_mode & S_IFMT);
+ /* now set the bits that make this a symlink */
+ result->st_mode |= S_IFLNK;
+ }
+ result->st_file_attributes = info->dwFileAttributes;
+}
+#endif
+
+/* Return information about a file.
+
+ On POSIX, use fstat().
+
+ On Windows, use GetFileType() and GetFileInformationByHandle() which support
+ files larger than 2 GB. fstat() may fail with EOVERFLOW on files larger
+ than 2 GB because the file size type is a signed 32-bit integer: see issue
+ #23152.
+
+ On Windows, set the last Windows error and return nonzero on error. On
+ POSIX, set errno and return nonzero on error. Fill status and return 0 on
+ success. */
+int
+_Py_fstat_noraise(int fd, struct _Py_stat_struct *status)
+{
+#ifdef MS_WINDOWS
+ BY_HANDLE_FILE_INFORMATION info;
+ HANDLE h;
+ int type;
+
+ _Py_BEGIN_SUPPRESS_IPH
+ h = (HANDLE)_get_osfhandle(fd);
+ _Py_END_SUPPRESS_IPH
+
+ if (h == INVALID_HANDLE_VALUE) {
+ /* errno is already set by _get_osfhandle, but we also set
+ the Win32 error for callers who expect that */
+ SetLastError(ERROR_INVALID_HANDLE);
+ return -1;
+ }
+ memset(status, 0, sizeof(*status));
+
+ type = GetFileType(h);
+ if (type == FILE_TYPE_UNKNOWN) {
+ DWORD error = GetLastError();
+ if (error != 0) {
+ errno = winerror_to_errno(error);
+ return -1;
+ }
+ /* else: valid but unknown file */
+ }
+
+ if (type != FILE_TYPE_DISK) {
+ if (type == FILE_TYPE_CHAR)
+ status->st_mode = _S_IFCHR;
+ else if (type == FILE_TYPE_PIPE)
+ status->st_mode = _S_IFIFO;
+ return 0;
+ }
+
+ if (!GetFileInformationByHandle(h, &info)) {
+ /* The Win32 error is already set, but we also set errno for
+ callers who expect it */
+ errno = winerror_to_errno(GetLastError());
+ return -1;
+ }
+
+ _Py_attribute_data_to_stat(&info, 0, status);
+ /* specific to fstat() */
+ status->st_ino = (((uint64_t)info.nFileIndexHigh) << 32) + info.nFileIndexLow;
+ return 0;
+#else
+ return fstat(fd, status);
+#endif
+}
+
+/* Return information about a file.
+
+ On POSIX, use fstat().
+
+ On Windows, use GetFileType() and GetFileInformationByHandle() which support
+ files larger than 2 GB. fstat() may fail with EOVERFLOW on files larger
+ than 2 GB because the file size type is a signed 32-bit integer: see issue
+ #23152.
+
+ Raise an exception and return -1 on error. On Windows, set the last Windows
+ error on error. On POSIX, set errno on error. Fill status and return 0 on
+ success.
+
+ Release the GIL to call GetFileType() and GetFileInformationByHandle(), or
+ to call fstat(). The caller must hold the GIL. */
+int
+_Py_fstat(int fd, struct _Py_stat_struct *status)
+{
+ int res;
+
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+ Py_BEGIN_ALLOW_THREADS
+ res = _Py_fstat_noraise(fd, status);
+ Py_END_ALLOW_THREADS
+
+ if (res != 0) {
+#ifdef MS_WINDOWS
+ PyErr_SetFromWindowsErr(0);
+#else
+ PyErr_SetFromErrno(PyExc_OSError);
+#endif
+ return -1;
+ }
+ return 0;
+}
+
+/* Call _wstat() on Windows, or encode the path to the filesystem encoding and
+ call stat() otherwise. Only fill st_mode attribute on Windows.
+
+ Return 0 on success, -1 on _wstat() / stat() error, -2 if an exception was
+ raised. */
+
+int
+_Py_stat(PyObject *path, struct stat *statbuf)
+{
+#ifdef MS_WINDOWS
+ int err;
+ struct _stat wstatbuf;
+ const wchar_t *wpath;
+
+ wpath = _PyUnicode_AsUnicode(path);
+ if (wpath == NULL)
+ return -2;
+
+ err = _wstat(wpath, &wstatbuf);
+ if (!err)
+ statbuf->st_mode = wstatbuf.st_mode;
+ return err;
+#else
+ int ret;
+ PyObject *bytes;
+ char *cpath;
+
+ bytes = PyUnicode_EncodeFSDefault(path);
+ if (bytes == NULL)
+ return -2;
+
+ /* check for embedded null bytes */
+ if (PyBytes_AsStringAndSize(bytes, &cpath, NULL) == -1) {
+ Py_DECREF(bytes);
+ return -2;
+ }
+
+ ret = stat(cpath, statbuf);
+ Py_DECREF(bytes);
+ return ret;
+#endif
+}
+
+
+/* This function MUST be kept async-signal-safe on POSIX when raise=0. */
+static int
+get_inheritable(int fd, int raise)
+{
+#ifdef MS_WINDOWS
+ HANDLE handle;
+ DWORD flags;
+
+ _Py_BEGIN_SUPPRESS_IPH
+ handle = (HANDLE)_get_osfhandle(fd);
+ _Py_END_SUPPRESS_IPH
+ if (handle == INVALID_HANDLE_VALUE) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ if (!GetHandleInformation(handle, &flags)) {
+ if (raise)
+ PyErr_SetFromWindowsErr(0);
+ return -1;
+ }
+
+ return (flags & HANDLE_FLAG_INHERIT);
+#else
+ int flags;
+
+ flags = fcntl(fd, F_GETFD, 0);
+ if (flags == -1) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ return !(flags & FD_CLOEXEC);
+#endif
+}
+
+/* Get the inheritable flag of the specified file descriptor.
+ Return 1 if the file descriptor can be inherited, 0 if it cannot,
+ raise an exception and return -1 on error. */
+int
+_Py_get_inheritable(int fd)
+{
+ return get_inheritable(fd, 1);
+}
+
+
+/* This function MUST be kept async-signal-safe on POSIX when raise=0. */
+static int
+set_inheritable(int fd, int inheritable, int raise, int *atomic_flag_works)
+{
+#ifdef MS_WINDOWS
+ HANDLE handle;
+ DWORD flags;
+#else
+#if defined(HAVE_SYS_IOCTL_H) && defined(FIOCLEX) && defined(FIONCLEX)
+ static int ioctl_works = -1;
+ int request;
+ int err;
+#endif
+ int flags, new_flags;
+ int res;
+#endif
+
+ /* atomic_flag_works can only be used to make the file descriptor
+ non-inheritable */
+ assert(!(atomic_flag_works != NULL && inheritable));
+
+ if (atomic_flag_works != NULL && !inheritable) {
+ if (*atomic_flag_works == -1) {
+ int isInheritable = get_inheritable(fd, raise);
+ if (isInheritable == -1)
+ return -1;
+ *atomic_flag_works = !isInheritable;
+ }
+
+ if (*atomic_flag_works)
+ return 0;
+ }
+
+#ifdef MS_WINDOWS
+ _Py_BEGIN_SUPPRESS_IPH
+ handle = (HANDLE)_get_osfhandle(fd);
+ _Py_END_SUPPRESS_IPH
+ if (handle == INVALID_HANDLE_VALUE) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ if (inheritable)
+ flags = HANDLE_FLAG_INHERIT;
+ else
+ flags = 0;
+ if (!SetHandleInformation(handle, HANDLE_FLAG_INHERIT, flags)) {
+ if (raise)
+ PyErr_SetFromWindowsErr(0);
+ return -1;
+ }
+ return 0;
+
+#else
+
+#if defined(HAVE_SYS_IOCTL_H) && defined(FIOCLEX) && defined(FIONCLEX)
+ if (ioctl_works != 0 && raise != 0) {
+ /* fast-path: ioctl() only requires one syscall */
+ /* caveat: raise=0 indicates that we must be async-signal-safe,
+ * so we avoid ioctl() and skip the fast-path. */
+ if (inheritable)
+ request = FIONCLEX;
+ else
+ request = FIOCLEX;
+ err = ioctl(fd, request, NULL);
+ if (!err) {
+ ioctl_works = 1;
+ return 0;
+ }
+
+ if (errno != ENOTTY && errno != EACCES) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ else {
+ /* Issue #22258: Here, ENOTTY means "Inappropriate ioctl for
+ device". The ioctl is declared but not supported by the kernel.
+ Remember that ioctl() doesn't work. It is the case on
+ Illumos-based OS for example.
+
+ Issue #27057: When SELinux policy disallows ioctl it will fail
+ with EACCES. While FIOCLEX is safe operation it may be
+ unavailable because ioctl was denied altogether.
+ This can be the case on Android. */
+ ioctl_works = 0;
+ }
+ /* fallback to fcntl() if ioctl() does not work */
+ }
+#endif
+
+ /* slow-path: fcntl() requires two syscalls */
+ flags = fcntl(fd, F_GETFD);
+ if (flags < 0) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ if (inheritable) {
+ new_flags = flags & ~FD_CLOEXEC;
+ }
+ else {
+ new_flags = flags | FD_CLOEXEC;
+ }
+
+ if (new_flags == flags) {
+ /* FD_CLOEXEC flag already set/cleared: nothing to do */
+ return 0;
+ }
+
+ res = fcntl(fd, F_SETFD, new_flags);
+ if (res < 0) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ return 0;
+#endif
+}
+
+/* Make the file descriptor non-inheritable.
+ Return 0 on success, set errno and return -1 on error. */
+static int
+make_non_inheritable(int fd)
+{
+ return set_inheritable(fd, 0, 0, NULL);
+}
+
+/* Set the inheritable flag of the specified file descriptor.
+ On success: return 0, on error: raise an exception and return -1.
+
+ If atomic_flag_works is not NULL:
+
+ * if *atomic_flag_works==-1, check whether the descriptor is already
+ non-inheritable: if it is, set *atomic_flag_works to 1, otherwise set it
+ to 0 and clear the inheritable flag
+ * if *atomic_flag_works==1: do nothing
+ * if *atomic_flag_works==0: set inheritable flag to False
+
+ Set atomic_flag_works to NULL if no atomic flag was used to create the
+ file descriptor.
+
+ atomic_flag_works can only be used to make a file descriptor
+ non-inheritable: atomic_flag_works must be NULL if inheritable=1. */
+int
+_Py_set_inheritable(int fd, int inheritable, int *atomic_flag_works)
+{
+ return set_inheritable(fd, inheritable, 1, atomic_flag_works);
+}
+
+/* Same as _Py_set_inheritable() but on error, set errno and
+ don't raise an exception.
+ This function is async-signal-safe. */
+int
+_Py_set_inheritable_async_safe(int fd, int inheritable, int *atomic_flag_works)
+{
+ return set_inheritable(fd, inheritable, 0, atomic_flag_works);
+}
+
+static int
+_Py_open_impl(const char *pathname, int flags, int gil_held)
+{
+ int fd;
+ int async_err = 0;
+#ifndef MS_WINDOWS
+ int *atomic_flag_works;
+#endif
+
+#ifdef MS_WINDOWS
+ flags |= O_NOINHERIT;
+#elif defined(O_CLOEXEC)
+ atomic_flag_works = &_Py_open_cloexec_works;
+ flags |= O_CLOEXEC;
+#else
+ atomic_flag_works = NULL;
+#endif
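+
+ /* When O_CLOEXEC is honoured, the descriptor is made non-inheritable
+ atomically at open() time; _Py_open_cloexec_works caches (via
+ set_inheritable() below) whether that actually worked, so the extra
+ fcntl() call can be skipped on later opens. */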
+
+ if (gil_held) {
+ do {
+ Py_BEGIN_ALLOW_THREADS
+#ifdef UEFI_C_SOURCE
+ fd = open(pathname, flags, 0);
+#else
+ fd = open(pathname, flags);
+#endif
+ Py_END_ALLOW_THREADS
+ } while (fd < 0
+ && errno == EINTR && !(async_err = PyErr_CheckSignals()));
+ if (async_err)
+ return -1;
+ if (fd < 0) {
+ PyErr_SetFromErrnoWithFilename(PyExc_OSError, pathname);
+ return -1;
+ }
+ }
+ else {
+#ifdef UEFI_C_SOURCE
+ fd = open(pathname, flags, 0);
+#else
+ fd = open(pathname, flags);
+#endif
+ if (fd < 0)
+ return -1;
+ }
+
+#ifndef MS_WINDOWS
+ if (set_inheritable(fd, 0, gil_held, atomic_flag_works) < 0) {
+ close(fd);
+ return -1;
+ }
+#endif
+
+ return fd;
+}
+
+/* Open a file with the specified flags (wrapper to open() function).
+ Return a file descriptor on success. Raise an exception and return -1 on
+ error.
+
+ The file descriptor is created non-inheritable.
+
+ When interrupted by a signal (open() fails with EINTR), retry the syscall,
+ except if the Python signal handler raises an exception.
+
+ Release the GIL to call open(). The caller must hold the GIL. */
+int
+_Py_open(const char *pathname, int flags)
+{
+#ifdef WITH_THREAD
+ /* _Py_open() must be called with the GIL held. */
+ assert(PyGILState_Check());
+#endif
+ return _Py_open_impl(pathname, flags, 1);
+}
+
+/* Open a file with the specified flags (wrapper to open() function).
+ Return a file descriptor on success. Set errno and return -1 on error.
+
+ The file descriptor is created non-inheritable.
+
+ If interrupted by a signal, fail with EINTR. */
+int
+_Py_open_noraise(const char *pathname, int flags)
+{
+ return _Py_open_impl(pathname, flags, 0);
+}
+
+/* Open a file. Use _wfopen() on Windows, encode the path to the locale
+ encoding and use fopen() otherwise.
+
+ The file descriptor is created non-inheritable.
+
+ If interrupted by a signal, fail with EINTR. */
+FILE *
+_Py_wfopen(const wchar_t *path, const wchar_t *mode)
+{
+ FILE *f;
+#ifndef MS_WINDOWS
+ char *cpath;
+ char cmode[10];
+ size_t r;
+ r = wcstombs(cmode, mode, 10);
+ if (r == (size_t)-1 || r >= 10) {
+ errno = EINVAL;
+ return NULL;
+ }
+ cpath = Py_EncodeLocale(path, NULL);
+ if (cpath == NULL)
+ return NULL;
+ f = fopen(cpath, cmode);
+ PyMem_Free(cpath);
+#else
+ f = _wfopen(path, mode);
+#endif
+ if (f == NULL)
+ return NULL;
+ if (make_non_inheritable(fileno(f)) < 0) {
+ fclose(f);
+ return NULL;
+ }
+ return f;
+}
+
+/* Wrapper to fopen().
+
+ The file descriptor is created non-inheritable.
+
+ If interrupted by a signal, fail with EINTR. */
+FILE*
+_Py_fopen(const char *pathname, const char *mode)
+{
+ FILE *f = fopen(pathname, mode);
+ if (f == NULL)
+ return NULL;
+ if (make_non_inheritable(fileno(f)) < 0) {
+ fclose(f);
+ return NULL;
+ }
+ return f;
+}
+
+/* Open a file. Call _wfopen() on Windows, or encode the path to the filesystem
+ encoding and call fopen() otherwise.
+
+ Return the new file object on success. Raise an exception and return NULL
+ on error.
+
+ The file descriptor is created non-inheritable.
+
+ When interrupted by a signal (open() fails with EINTR), retry the syscall,
+ except if the Python signal handler raises an exception.
+
+ Release the GIL to call _wfopen() or fopen(). The caller must hold
+ the GIL. */
+FILE*
+_Py_fopen_obj(PyObject *path, const char *mode)
+{
+ FILE *f;
+ int async_err = 0;
+#ifdef MS_WINDOWS
+ const wchar_t *wpath;
+ wchar_t wmode[10];
+ int usize;
+
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+ if (!PyUnicode_Check(path)) {
+ PyErr_Format(PyExc_TypeError,
+ "str file path expected under Windows, got %R",
+ Py_TYPE(path));
+ return NULL;
+ }
+ wpath = _PyUnicode_AsUnicode(path);
+ if (wpath == NULL)
+ return NULL;
+
+ usize = MultiByteToWideChar(CP_ACP, 0, mode, -1,
+ wmode, Py_ARRAY_LENGTH(wmode));
+ if (usize == 0) {
+ PyErr_SetFromWindowsErr(0);
+ return NULL;
+ }
+
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ f = _wfopen(wpath, wmode);
+ Py_END_ALLOW_THREADS
+ } while (f == NULL
+ && errno == EINTR && !(async_err = PyErr_CheckSignals()));
+#else
+ PyObject *bytes;
+ char *path_bytes;
+
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+ if (!PyUnicode_FSConverter(path, &bytes))
+ return NULL;
+ path_bytes = PyBytes_AS_STRING(bytes);
+
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ f = fopen(path_bytes, mode);
+ Py_END_ALLOW_THREADS
+ } while (f == NULL
+ && errno == EINTR && !(async_err = PyErr_CheckSignals()));
+
+ Py_DECREF(bytes);
+#endif
+ if (async_err)
+ return NULL;
+
+ if (f == NULL) {
+ PyErr_SetFromErrnoWithFilenameObject(PyExc_OSError, path);
+ return NULL;
+ }
+
+ if (set_inheritable(fileno(f), 0, 1, NULL) < 0) {
+ fclose(f);
+ return NULL;
+ }
+ return f;
+}
+
+/* Read count bytes from fd into buf.
+
+ On success, return the number of bytes read; it can be lower than count.
+ If the current file offset is at or past the end of file, no bytes are read,
+ and read() returns zero.
+
+ On error, raise an exception, set errno and return -1.
+
+ When interrupted by a signal (read() fails with EINTR), retry the syscall.
+ If the Python signal handler raises an exception, the function returns -1
+ (the syscall is not retried).
+
+ Release the GIL to call read(). The caller must hold the GIL. */
+Py_ssize_t
+_Py_read(int fd, void *buf, size_t count)
+{
+ Py_ssize_t n;
+ int err;
+ int async_err = 0;
+
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+ /* _Py_read() must not be called with an exception set, otherwise the
+ * caller may think that read() was interrupted by a signal and the signal
+ * handler raised an exception. */
+ assert(!PyErr_Occurred());
+
+ if (count > _PY_READ_MAX) {
+ count = _PY_READ_MAX;
+ }
+
+ _Py_BEGIN_SUPPRESS_IPH
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+#ifdef MS_WINDOWS
+ n = read(fd, buf, (int)count);
+#else
+ n = read(fd, buf, count);
+#endif
+ /* save/restore errno because PyErr_CheckSignals()
+ * and PyErr_SetFromErrno() can modify it */
+ err = errno;
+ Py_END_ALLOW_THREADS
+ } while (n < 0 && err == EINTR &&
+ !(async_err = PyErr_CheckSignals()));
+ _Py_END_SUPPRESS_IPH
+
+ if (async_err) {
+ /* read() was interrupted by a signal (failed with EINTR)
+ * and the Python signal handler raised an exception */
+ errno = err;
+ assert(errno == EINTR && PyErr_Occurred());
+ return -1;
+ }
+ if (n < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ errno = err;
+ return -1;
+ }
+
+ return n;
+}
+
+static Py_ssize_t
+_Py_write_impl(int fd, const void *buf, size_t count, int gil_held)
+{
+ Py_ssize_t n = 0;
+ int err;
+ int async_err = 0;
+
+ _Py_BEGIN_SUPPRESS_IPH
+#ifdef MS_WINDOWS
+ if (count > 32767 && isatty(fd)) {
+ /* Issue #11395: the Windows console returns an error (12: not
+ enough space error) on writing into stdout if stdout mode is
+ binary and the length is greater than 66,000 bytes (or less,
+ depending on heap usage). */
+ count = 32767;
+ }
+#endif
+ if (count > _PY_WRITE_MAX) {
+ count = _PY_WRITE_MAX;
+ }
+
+ if (gil_held) {
+ do {
+ Py_BEGIN_ALLOW_THREADS
+ errno = 0;
+#ifdef MS_WINDOWS
+ n = write(fd, buf, (int)count);
+#elif UEFI_C_SOURCE
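+ /* UEFI-specific: write() may transfer fewer bytes than requested, so
+ accumulate the byte count in n and keep looping (see the while
+ condition below) until the whole buffer has been written. */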
+ n += write(fd, ((char *)buf + n), count - n);
+#else
+ n = write(fd, buf, count);
+#endif
+ /* save/restore errno because PyErr_CheckSignals()
+ * and PyErr_SetFromErrno() can modify it */
+ err = errno;
+ Py_END_ALLOW_THREADS
+#ifdef UEFI_C_SOURCE
+ } while ((n < 0 && err == EINTR &&
+ !(async_err = PyErr_CheckSignals())) || (n < count));
+ }
+#else
+ } while (n < 0 && err == EINTR &&
+ !(async_err = PyErr_CheckSignals()));
+ }
+#endif
+ else {
+ do {
+ errno = 0;
+#ifdef MS_WINDOWS
+ n = write(fd, buf, (int)count);
+#else
+ n = write(fd, buf, count);
+#endif
+ err = errno;
+ } while (n < 0 && err == EINTR);
+ }
+ _Py_END_SUPPRESS_IPH
+
+ if (async_err) {
+ /* write() was interrupted by a signal (failed with EINTR)
+ and the Python signal handler raised an exception (if gil_held is
+ nonzero). */
+ errno = err;
+ assert(errno == EINTR && (!gil_held || PyErr_Occurred()));
+ return -1;
+ }
+ if (n < 0) {
+ if (gil_held)
+ PyErr_SetFromErrno(PyExc_OSError);
+ errno = err;
+ return -1;
+ }
+
+ return n;
+}
+
+/* Write count bytes of buf into fd.
+
+ On success, return the number of bytes written; it can be lower than count,
+ including 0. On error, raise an exception, set errno and return -1.
+
+ When interrupted by a signal (write() fails with EINTR), retry the syscall.
+ If the Python signal handler raises an exception, the function returns -1
+ (the syscall is not retried).
+
+ Release the GIL to call write(). The caller must hold the GIL. */
+Py_ssize_t
+_Py_write(int fd, const void *buf, size_t count)
+{
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+ /* _Py_write() must not be called with an exception set, otherwise the
+ * caller may think that write() was interrupted by a signal and the signal
+ * handler raised an exception. */
+ assert(!PyErr_Occurred());
+
+ return _Py_write_impl(fd, buf, count, 1);
+}
+
+/* Write count bytes of buf into fd.
+ *
+ * On success, return the number of bytes written; it can be lower than count,
+ * including 0. On error, set errno and return -1.
+ *
+ * When interrupted by a signal (write() fails with EINTR), retry the syscall
+ * without calling the Python signal handler. */
+Py_ssize_t
+_Py_write_noraise(int fd, const void *buf, size_t count)
+{
+ return _Py_write_impl(fd, buf, count, 0);
+}
+
+#ifdef HAVE_READLINK
+
+/* Read value of symbolic link. Encode the path to the locale encoding, decode
+ the result from the locale encoding. Return -1 on error. */
+
+int
+_Py_wreadlink(const wchar_t *path, wchar_t *buf, size_t bufsiz)
+{
+ char *cpath;
+ char cbuf[MAXPATHLEN];
+ wchar_t *wbuf;
+ int res;
+ size_t r1;
+
+ cpath = Py_EncodeLocale(path, NULL);
+ if (cpath == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+ res = (int)readlink(cpath, cbuf, Py_ARRAY_LENGTH(cbuf));
+ PyMem_Free(cpath);
+ if (res == -1)
+ return -1;
+ if (res == Py_ARRAY_LENGTH(cbuf)) {
+ errno = EINVAL;
+ return -1;
+ }
+ cbuf[res] = '\0'; /* buf will be null terminated */
+ wbuf = Py_DecodeLocale(cbuf, &r1);
+ if (wbuf == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+ if (bufsiz <= r1) {
+ PyMem_RawFree(wbuf);
+ errno = EINVAL;
+ return -1;
+ }
+ wcsncpy(buf, wbuf, bufsiz);
+ PyMem_RawFree(wbuf);
+ return (int)r1;
+}
+#endif
+
+#ifdef HAVE_REALPATH
+
+/* Return the canonicalized absolute pathname. Encode path to the locale
+ encoding, decode the result from the locale encoding.
+ Return NULL on error. */
+
+wchar_t*
+_Py_wrealpath(const wchar_t *path,
+ wchar_t *resolved_path, size_t resolved_path_size)
+{
+ char *cpath;
+ char cresolved_path[MAXPATHLEN];
+ wchar_t *wresolved_path;
+ char *res;
+ size_t r;
+ cpath = Py_EncodeLocale(path, NULL);
+ if (cpath == NULL) {
+ errno = EINVAL;
+ return NULL;
+ }
+ res = realpath(cpath, cresolved_path);
+ PyMem_Free(cpath);
+ if (res == NULL)
+ return NULL;
+
+ wresolved_path = Py_DecodeLocale(cresolved_path, &r);
+ if (wresolved_path == NULL) {
+ errno = EINVAL;
+ return NULL;
+ }
+ if (resolved_path_size <= r) {
+ PyMem_RawFree(wresolved_path);
+ errno = EINVAL;
+ return NULL;
+ }
+ wcsncpy(resolved_path, wresolved_path, resolved_path_size);
+ PyMem_RawFree(wresolved_path);
+ return resolved_path;
+}
+#endif
+
+/* Get the current directory. size is the buffer size in wide characters
+ including the null character. Decode the path from the locale encoding.
+ Return NULL on error. */
+
+wchar_t*
+_Py_wgetcwd(wchar_t *buf, size_t size)
+{
+#ifdef MS_WINDOWS
+ int isize = (int)Py_MIN(size, INT_MAX);
+ return _wgetcwd(buf, isize);
+#else
+ char fname[MAXPATHLEN];
+ wchar_t *wname;
+ size_t len;
+
+ if (getcwd(fname, Py_ARRAY_LENGTH(fname)) == NULL)
+ return NULL;
+ wname = Py_DecodeLocale(fname, &len);
+ if (wname == NULL)
+ return NULL;
+ if (size <= len) {
+ PyMem_RawFree(wname);
+ return NULL;
+ }
+ wcsncpy(buf, wname, size);
+ PyMem_RawFree(wname);
+ return buf;
+#endif
+}
+
+/* Duplicate a file descriptor. The new file descriptor is created as
+ non-inheritable. Return a new file descriptor on success, raise an OSError
+ exception and return -1 on error.
+
+ The GIL is released to call dup(). The caller must hold the GIL. */
+int
+_Py_dup(int fd)
+{
+#ifdef MS_WINDOWS
+ HANDLE handle;
+ DWORD ftype;
+#endif
+
+#ifdef WITH_THREAD
+ assert(PyGILState_Check());
+#endif
+
+#ifdef MS_WINDOWS
+ _Py_BEGIN_SUPPRESS_IPH
+ handle = (HANDLE)_get_osfhandle(fd);
+ _Py_END_SUPPRESS_IPH
+ if (handle == INVALID_HANDLE_VALUE) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ /* get the file type, ignore the error if it failed */
+ ftype = GetFileType(handle);
+
+ Py_BEGIN_ALLOW_THREADS
+ _Py_BEGIN_SUPPRESS_IPH
+ fd = dup(fd);
+ _Py_END_SUPPRESS_IPH
+ Py_END_ALLOW_THREADS
+ if (fd < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ /* Character files like the console cannot be made non-inheritable */
+ if (ftype != FILE_TYPE_CHAR) {
+ if (_Py_set_inheritable(fd, 0, NULL) < 0) {
+ _Py_BEGIN_SUPPRESS_IPH
+ close(fd);
+ _Py_END_SUPPRESS_IPH
+ return -1;
+ }
+ }
+#elif defined(HAVE_FCNTL_H) && defined(F_DUPFD_CLOEXEC)
+ Py_BEGIN_ALLOW_THREADS
+ _Py_BEGIN_SUPPRESS_IPH
+ fd = fcntl(fd, F_DUPFD_CLOEXEC, 0);
+ _Py_END_SUPPRESS_IPH
+ Py_END_ALLOW_THREADS
+ if (fd < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+#else
+ Py_BEGIN_ALLOW_THREADS
+ _Py_BEGIN_SUPPRESS_IPH
+ fd = dup(fd);
+ _Py_END_SUPPRESS_IPH
+ Py_END_ALLOW_THREADS
+ if (fd < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ if (_Py_set_inheritable(fd, 0, NULL) < 0) {
+ _Py_BEGIN_SUPPRESS_IPH
+ close(fd);
+ _Py_END_SUPPRESS_IPH
+ return -1;
+ }
+#endif
+ return fd;
+}
+
+#ifndef MS_WINDOWS
+/* Get the blocking mode of the file descriptor.
+ Return 0 if the O_NONBLOCK flag is set, 1 if the flag is cleared,
+ raise an exception and return -1 on error. */
+int
+_Py_get_blocking(int fd)
+{
+ int flags;
+ _Py_BEGIN_SUPPRESS_IPH
+ flags = fcntl(fd, F_GETFL, 0);
+ _Py_END_SUPPRESS_IPH
+ if (flags < 0) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+
+ return !(flags & O_NONBLOCK);
+}
+
+/* Set the blocking mode of the specified file descriptor.
+
+ Set the O_NONBLOCK flag if blocking is False, clear the O_NONBLOCK flag
+ otherwise.
+
+ Return 0 on success, raise an exception and return -1 on error. */
+int
+_Py_set_blocking(int fd, int blocking)
+{
+#if defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO)
+ int arg = !blocking;
+ if (ioctl(fd, FIONBIO, &arg) < 0)
+ goto error;
+#else
+ int flags, res;
+
+ _Py_BEGIN_SUPPRESS_IPH
+ flags = fcntl(fd, F_GETFL, 0);
+ if (flags >= 0) {
+ if (blocking)
+ flags = flags & (~O_NONBLOCK);
+ else
+ flags = flags | O_NONBLOCK;
+
+ res = fcntl(fd, F_SETFL, flags);
+ } else {
+ res = -1;
+ }
+ _Py_END_SUPPRESS_IPH
+
+ if (res < 0)
+ goto error;
+#endif
+ return 0;
+
+error:
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+}
+#endif
+
+
+int
+_Py_GetLocaleconvNumeric(PyObject **decimal_point, PyObject **thousands_sep,
+ const char **grouping)
+{
+ int res = -1;
+
+ struct lconv *lc = localeconv();
+
+ int change_locale = 0;
+ if (decimal_point != NULL &&
+ (strlen(lc->decimal_point) > 1 || ((unsigned char)lc->decimal_point[0]) > 127))
+ {
+ change_locale = 1;
+ }
+ if (thousands_sep != NULL &&
+ (strlen(lc->thousands_sep) > 1 || ((unsigned char)lc->thousands_sep[0]) > 127))
+ {
+ change_locale = 1;
+ }
+
+ /* Keep a copy of the LC_CTYPE locale */
+ char *oldloc = NULL, *loc = NULL;
+ if (change_locale) {
+ oldloc = setlocale(LC_CTYPE, NULL);
+ if (!oldloc) {
+ PyErr_SetString(PyExc_RuntimeWarning, "failed to get LC_CTYPE locale");
+ return -1;
+ }
+
+ oldloc = _PyMem_Strdup(oldloc);
+ if (!oldloc) {
+ PyErr_NoMemory();
+ return -1;
+ }
+
+ loc = setlocale(LC_NUMERIC, NULL);
+ if (loc != NULL && strcmp(loc, oldloc) == 0) {
+ loc = NULL;
+ }
+
+ if (loc != NULL) {
+ /* Temporarily set the LC_CTYPE locale to the LC_NUMERIC locale,
+ but only if the two differ and decimal_point and/or thousands_sep
+ are non-ASCII or longer than one byte */
+ setlocale(LC_CTYPE, loc);
+ }
+ }
+
+ if (decimal_point != NULL) {
+ *decimal_point = PyUnicode_DecodeLocale(lc->decimal_point, NULL);
+ if (*decimal_point == NULL) {
+ goto error;
+ }
+ }
+ if (thousands_sep != NULL) {
+ *thousands_sep = PyUnicode_DecodeLocale(lc->thousands_sep, NULL);
+ if (*thousands_sep == NULL) {
+ goto error;
+ }
+ }
+
+ if (grouping != NULL) {
+ *grouping = lc->grouping;
+ }
+
+ res = 0;
+
+error:
+ if (loc != NULL) {
+ setlocale(LC_CTYPE, oldloc);
+ }
+ PyMem_Free(oldloc);
+ return res;
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
new file mode 100644
index 00000000..6c01d6ac
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
@@ -0,0 +1,38 @@
+/** @file
+ Return the copyright string. This is updated manually.
+
+ Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+#include "Python.h"
+
+static const char cprt[] =
+"\
+Copyright (c) 2010-2021 Intel Corporation.\n\
+All Rights Reserved.\n\
+\n\
+Copyright (c) 2001-2018 Python Software Foundation.\n\
+All Rights Reserved.\n\
+\n\
+Copyright (c) 2000 BeOpen.com.\n\
+All Rights Reserved.\n\
+\n\
+Copyright (c) 1995-2001 Corporation for National Research Initiatives.\n\
+All Rights Reserved.\n\
+\n\
+Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam.\n\
+All Rights Reserved.";
+
+const char *
+Py_GetCopyright(void)
+{
+ return cprt;
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
new file mode 100644
index 00000000..25a4f885
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
@@ -0,0 +1,2431 @@
+/* Auto-generated by Programs/_freeze_importlib.c */
+const unsigned char _Py_M__importlib_external[] = {
+ 99,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,
+ 0,64,0,0,0,115,248,1,0,0,100,0,90,0,100,91,
+ 90,1,100,92,90,2,101,2,101,1,23,0,90,3,100,4,
+ 100,5,132,0,90,4,100,6,100,7,132,0,90,5,100,8,
+ 100,9,132,0,90,6,100,10,100,11,132,0,90,7,100,12,
+ 100,13,132,0,90,8,100,14,100,15,132,0,90,9,100,16,
+ 100,17,132,0,90,10,100,18,100,19,132,0,90,11,100,20,
+ 100,21,132,0,90,12,100,93,100,23,100,24,132,1,90,13,
+ 101,14,101,13,106,15,131,1,90,16,100,25,106,17,100,26,
+ 100,27,131,2,100,28,23,0,90,18,101,19,106,20,101,18,
+ 100,27,131,2,90,21,100,29,90,22,100,30,90,23,100,31,
+ 103,1,90,24,100,32,103,1,90,25,101,25,4,0,90,26,
+ 90,27,100,94,100,33,100,34,156,1,100,35,100,36,132,3,
+ 90,28,100,37,100,38,132,0,90,29,100,39,100,40,132,0,
+ 90,30,100,41,100,42,132,0,90,31,100,43,100,44,132,0,
+ 90,32,100,45,100,46,132,0,90,33,100,47,100,48,132,0,
+ 90,34,100,95,100,49,100,50,132,1,90,35,100,96,100,51,
+ 100,52,132,1,90,36,100,97,100,54,100,55,132,1,90,37,
+ 100,56,100,57,132,0,90,38,101,39,131,0,90,40,100,98,
+ 100,33,101,40,100,58,156,2,100,59,100,60,132,3,90,41,
+ 71,0,100,61,100,62,132,0,100,62,131,2,90,42,71,0,
+ 100,63,100,64,132,0,100,64,131,2,90,43,71,0,100,65,
+ 100,66,132,0,100,66,101,43,131,3,90,44,71,0,100,67,
+ 100,68,132,0,100,68,131,2,90,45,71,0,100,69,100,70,
+ 132,0,100,70,101,45,101,44,131,4,90,46,71,0,100,71,
+ 100,72,132,0,100,72,101,45,101,43,131,4,90,47,103,0,
+ 90,48,71,0,100,73,100,74,132,0,100,74,101,45,101,43,
+ 131,4,90,49,71,0,100,75,100,76,132,0,100,76,131,2,
+ 90,50,71,0,100,77,100,78,132,0,100,78,131,2,90,51,
+ 71,0,100,79,100,80,132,0,100,80,131,2,90,52,71,0,
+ 100,81,100,82,132,0,100,82,131,2,90,53,100,99,100,83,
+ 100,84,132,1,90,54,100,85,100,86,132,0,90,55,100,87,
+ 100,88,132,0,90,56,100,89,100,90,132,0,90,57,100,33,
+ 83,0,41,100,97,94,1,0,0,67,111,114,101,32,105,109,
+ 112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,32,
+ 112,97,116,104,45,98,97,115,101,100,32,105,109,112,111,114,
+ 116,46,10,10,84,104,105,115,32,109,111,100,117,108,101,32,
+ 105,115,32,78,79,84,32,109,101,97,110,116,32,116,111,32,
+ 98,101,32,100,105,114,101,99,116,108,121,32,105,109,112,111,
+ 114,116,101,100,33,32,73,116,32,104,97,115,32,98,101,101,
+ 110,32,100,101,115,105,103,110,101,100,32,115,117,99,104,10,
+ 116,104,97,116,32,105,116,32,99,97,110,32,98,101,32,98,
+ 111,111,116,115,116,114,97,112,112,101,100,32,105,110,116,111,
+ 32,80,121,116,104,111,110,32,97,115,32,116,104,101,32,105,
+ 109,112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,
+ 32,105,109,112,111,114,116,46,32,65,115,10,115,117,99,104,
+ 32,105,116,32,114,101,113,117,105,114,101,115,32,116,104,101,
+ 32,105,110,106,101,99,116,105,111,110,32,111,102,32,115,112,
+ 101,99,105,102,105,99,32,109,111,100,117,108,101,115,32,97,
+ 110,100,32,97,116,116,114,105,98,117,116,101,115,32,105,110,
+ 32,111,114,100,101,114,32,116,111,10,119,111,114,107,46,32,
+ 79,110,101,32,115,104,111,117,108,100,32,117,115,101,32,105,
+ 109,112,111,114,116,108,105,98,32,97,115,32,116,104,101,32,
+ 112,117,98,108,105,99,45,102,97,99,105,110,103,32,118,101,
+ 114,115,105,111,110,32,111,102,32,116,104,105,115,32,109,111,
+ 100,117,108,101,46,10,10,218,3,119,105,110,218,6,99,121,
+ 103,119,105,110,218,6,100,97,114,119,105,110,99,0,0,0,
+ 0,0,0,0,0,1,0,0,0,3,0,0,0,3,0,0,
+ 0,115,60,0,0,0,116,0,106,1,106,2,116,3,131,1,
+ 114,48,116,0,106,1,106,2,116,4,131,1,114,30,100,1,
+ 137,0,110,4,100,2,137,0,135,0,102,1,100,3,100,4,
+ 132,8,125,0,110,8,100,5,100,4,132,0,125,0,124,0,
+ 83,0,41,6,78,90,12,80,89,84,72,79,78,67,65,83,
+ 69,79,75,115,12,0,0,0,80,89,84,72,79,78,67,65,
+ 83,69,79,75,99,0,0,0,0,0,0,0,0,0,0,0,
+ 0,2,0,0,0,19,0,0,0,115,10,0,0,0,136,0,
+ 116,0,106,1,107,6,83,0,41,1,122,53,84,114,117,101,
+ 32,105,102,32,102,105,108,101,110,97,109,101,115,32,109,117,
+ 115,116,32,98,101,32,99,104,101,99,107,101,100,32,99,97,
+ 115,101,45,105,110,115,101,110,115,105,116,105,118,101,108,121,
+ 46,41,2,218,3,95,111,115,90,7,101,110,118,105,114,111,
+ 110,169,0,41,1,218,3,107,101,121,114,4,0,0,0,250,
+ 38,60,102,114,111,122,101,110,32,105,109,112,111,114,116,108,
+ 105,98,46,95,98,111,111,116,115,116,114,97,112,95,101,120,
+ 116,101,114,110,97,108,62,218,11,95,114,101,108,97,120,95,
+ 99,97,115,101,37,0,0,0,115,2,0,0,0,0,2,122,
+ 37,95,109,97,107,101,95,114,101,108,97,120,95,99,97,115,
+ 101,46,60,108,111,99,97,108,115,62,46,95,114,101,108,97,
+ 120,95,99,97,115,101,99,0,0,0,0,0,0,0,0,0,
+ 0,0,0,1,0,0,0,83,0,0,0,115,4,0,0,0,
+ 100,1,83,0,41,2,122,53,84,114,117,101,32,105,102,32,
+ 102,105,108,101,110,97,109,101,115,32,109,117,115,116,32,98,
+ 101,32,99,104,101,99,107,101,100,32,99,97,115,101,45,105,
+ 110,115,101,110,115,105,116,105,118,101,108,121,46,70,114,4,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,4,0,
+ 0,0,114,6,0,0,0,114,7,0,0,0,41,0,0,0,
+ 115,2,0,0,0,0,2,41,5,218,3,115,121,115,218,8,
+ 112,108,97,116,102,111,114,109,218,10,115,116,97,114,116,115,
+ 119,105,116,104,218,27,95,67,65,83,69,95,73,78,83,69,
+ 78,83,73,84,73,86,69,95,80,76,65,84,70,79,82,77,
+ 83,218,35,95,67,65,83,69,95,73,78,83,69,78,83,73,
+ 84,73,86,69,95,80,76,65,84,70,79,82,77,83,95,83,
+ 84,82,95,75,69,89,41,1,114,7,0,0,0,114,4,0,
+ 0,0,41,1,114,5,0,0,0,114,6,0,0,0,218,16,
+ 95,109,97,107,101,95,114,101,108,97,120,95,99,97,115,101,
+ 30,0,0,0,115,14,0,0,0,0,1,12,1,12,1,6,
+ 2,4,2,14,4,8,3,114,13,0,0,0,99,1,0,0,
+ 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
+ 0,115,20,0,0,0,116,0,124,0,131,1,100,1,64,0,
+ 106,1,100,2,100,3,131,2,83,0,41,4,122,42,67,111,
+ 110,118,101,114,116,32,97,32,51,50,45,98,105,116,32,105,
+ 110,116,101,103,101,114,32,116,111,32,108,105,116,116,108,101,
+ 45,101,110,100,105,97,110,46,108,3,0,0,0,255,127,255,
+ 127,3,0,233,4,0,0,0,218,6,108,105,116,116,108,101,
+ 41,2,218,3,105,110,116,218,8,116,111,95,98,121,116,101,
+ 115,41,1,218,1,120,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,218,7,95,119,95,108,111,110,103,47,0,
+ 0,0,115,2,0,0,0,0,2,114,19,0,0,0,99,1,
+ 0,0,0,0,0,0,0,1,0,0,0,3,0,0,0,67,
+ 0,0,0,115,12,0,0,0,116,0,106,1,124,0,100,1,
+ 131,2,83,0,41,2,122,47,67,111,110,118,101,114,116,32,
+ 52,32,98,121,116,101,115,32,105,110,32,108,105,116,116,108,
+ 101,45,101,110,100,105,97,110,32,116,111,32,97,110,32,105,
+ 110,116,101,103,101,114,46,114,15,0,0,0,41,2,114,16,
+ 0,0,0,218,10,102,114,111,109,95,98,121,116,101,115,41,
+ 1,90,9,105,110,116,95,98,121,116,101,115,114,4,0,0,
+ 0,114,4,0,0,0,114,6,0,0,0,218,7,95,114,95,
+ 108,111,110,103,52,0,0,0,115,2,0,0,0,0,2,114,
+ 21,0,0,0,99,0,0,0,0,0,0,0,0,1,0,0,
+ 0,3,0,0,0,71,0,0,0,115,20,0,0,0,116,0,
+ 106,1,100,1,100,2,132,0,124,0,68,0,131,1,131,1,
+ 83,0,41,3,122,31,82,101,112,108,97,99,101,109,101,110,
+ 116,32,102,111,114,32,111,115,46,112,97,116,104,46,106,111,
+ 105,110,40,41,46,99,1,0,0,0,0,0,0,0,2,0,
+ 0,0,4,0,0,0,83,0,0,0,115,26,0,0,0,103,
+ 0,124,0,93,18,125,1,124,1,114,4,124,1,106,0,116,
+ 1,131,1,145,2,113,4,83,0,114,4,0,0,0,41,2,
+ 218,6,114,115,116,114,105,112,218,15,112,97,116,104,95,115,
+ 101,112,97,114,97,116,111,114,115,41,2,218,2,46,48,218,
+ 4,112,97,114,116,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,250,10,60,108,105,115,116,99,111,109,112,62,
+ 59,0,0,0,115,2,0,0,0,6,1,122,30,95,112,97,
+ 116,104,95,106,111,105,110,46,60,108,111,99,97,108,115,62,
+ 46,60,108,105,115,116,99,111,109,112,62,41,2,218,8,112,
+ 97,116,104,95,115,101,112,218,4,106,111,105,110,41,1,218,
+ 10,112,97,116,104,95,112,97,114,116,115,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,218,10,95,112,97,116,
+ 104,95,106,111,105,110,57,0,0,0,115,4,0,0,0,0,
+ 2,10,1,114,30,0,0,0,99,1,0,0,0,0,0,0,
+ 0,5,0,0,0,5,0,0,0,67,0,0,0,115,96,0,
+ 0,0,116,0,116,1,131,1,100,1,107,2,114,36,124,0,
+ 106,2,116,3,131,1,92,3,125,1,125,2,125,3,124,1,
+ 124,3,102,2,83,0,120,50,116,4,124,0,131,1,68,0,
+ 93,38,125,4,124,4,116,1,107,6,114,46,124,0,106,5,
+ 124,4,100,1,100,2,141,2,92,2,125,1,125,3,124,1,
+ 124,3,102,2,83,0,113,46,87,0,100,3,124,0,102,2,
+ 83,0,41,4,122,32,82,101,112,108,97,99,101,109,101,110,
+ 116,32,102,111,114,32,111,115,46,112,97,116,104,46,115,112,
+ 108,105,116,40,41,46,233,1,0,0,0,41,1,90,8,109,
+ 97,120,115,112,108,105,116,218,0,41,6,218,3,108,101,110,
+ 114,23,0,0,0,218,10,114,112,97,114,116,105,116,105,111,
+ 110,114,27,0,0,0,218,8,114,101,118,101,114,115,101,100,
+ 218,6,114,115,112,108,105,116,41,5,218,4,112,97,116,104,
+ 90,5,102,114,111,110,116,218,1,95,218,4,116,97,105,108,
+ 114,18,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,218,11,95,112,97,116,104,95,115,112,108,105,
+ 116,63,0,0,0,115,16,0,0,0,0,2,12,1,16,1,
+ 8,1,14,1,8,1,18,1,12,1,114,40,0,0,0,99,
+ 1,0,0,0,0,0,0,0,1,0,0,0,2,0,0,0,
+ 67,0,0,0,115,10,0,0,0,116,0,106,1,124,0,131,
+ 1,83,0,41,1,122,126,83,116,97,116,32,116,104,101,32,
+ 112,97,116,104,46,10,10,32,32,32,32,77,97,100,101,32,
+ 97,32,115,101,112,97,114,97,116,101,32,102,117,110,99,116,
+ 105,111,110,32,116,111,32,109,97,107,101,32,105,116,32,101,
+ 97,115,105,101,114,32,116,111,32,111,118,101,114,114,105,100,
+ 101,32,105,110,32,101,120,112,101,114,105,109,101,110,116,115,
+ 10,32,32,32,32,40,101,46,103,46,32,99,97,99,104,101,
+ 32,115,116,97,116,32,114,101,115,117,108,116,115,41,46,10,
+ 10,32,32,32,32,41,2,114,3,0,0,0,90,4,115,116,
+ 97,116,41,1,114,37,0,0,0,114,4,0,0,0,114,4,
+ 0,0,0,114,6,0,0,0,218,10,95,112,97,116,104,95,
+ 115,116,97,116,75,0,0,0,115,2,0,0,0,0,7,114,
+ 41,0,0,0,99,2,0,0,0,0,0,0,0,3,0,0,
+ 0,11,0,0,0,67,0,0,0,115,48,0,0,0,121,12,
+ 116,0,124,0,131,1,125,2,87,0,110,20,4,0,116,1,
+ 107,10,114,32,1,0,1,0,1,0,100,1,83,0,88,0,
+ 124,2,106,2,100,2,64,0,124,1,107,2,83,0,41,3,
+ 122,49,84,101,115,116,32,119,104,101,116,104,101,114,32,116,
+ 104,101,32,112,97,116,104,32,105,115,32,116,104,101,32,115,
+ 112,101,99,105,102,105,101,100,32,109,111,100,101,32,116,121,
+ 112,101,46,70,105,0,240,0,0,41,3,114,41,0,0,0,
+ 218,7,79,83,69,114,114,111,114,218,7,115,116,95,109,111,
+ 100,101,41,3,114,37,0,0,0,218,4,109,111,100,101,90,
+ 9,115,116,97,116,95,105,110,102,111,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,218,18,95,112,97,116,104,
+ 95,105,115,95,109,111,100,101,95,116,121,112,101,85,0,0,
+ 0,115,10,0,0,0,0,2,2,1,12,1,14,1,6,1,
+ 114,45,0,0,0,99,1,0,0,0,0,0,0,0,1,0,
+ 0,0,3,0,0,0,67,0,0,0,115,10,0,0,0,116,
+ 0,124,0,100,1,131,2,83,0,41,2,122,31,82,101,112,
+ 108,97,99,101,109,101,110,116,32,102,111,114,32,111,115,46,
+ 112,97,116,104,46,105,115,102,105,108,101,46,105,0,128,0,
+ 0,41,1,114,45,0,0,0,41,1,114,37,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,12,
+ 95,112,97,116,104,95,105,115,102,105,108,101,94,0,0,0,
+ 115,2,0,0,0,0,2,114,46,0,0,0,99,1,0,0,
+ 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
+ 0,115,22,0,0,0,124,0,115,12,116,0,106,1,131,0,
+ 125,0,116,2,124,0,100,1,131,2,83,0,41,2,122,30,
+ 82,101,112,108,97,99,101,109,101,110,116,32,102,111,114,32,
+ 111,115,46,112,97,116,104,46,105,115,100,105,114,46,105,0,
+ 64,0,0,41,3,114,3,0,0,0,218,6,103,101,116,99,
+ 119,100,114,45,0,0,0,41,1,114,37,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,218,11,95,
+ 112,97,116,104,95,105,115,100,105,114,99,0,0,0,115,6,
+ 0,0,0,0,2,4,1,8,1,114,48,0,0,0,233,182,
+ 1,0,0,99,3,0,0,0,0,0,0,0,6,0,0,0,
+ 17,0,0,0,67,0,0,0,115,162,0,0,0,100,1,106,
+ 0,124,0,116,1,124,0,131,1,131,2,125,3,116,2,106,
+ 3,124,3,116,2,106,4,116,2,106,5,66,0,116,2,106,
+ 6,66,0,124,2,100,2,64,0,131,3,125,4,121,50,116,
+ 7,106,8,124,4,100,3,131,2,143,16,125,5,124,5,106,
+ 9,124,1,131,1,1,0,87,0,100,4,81,0,82,0,88,
+ 0,116,2,106,10,124,3,124,0,131,2,1,0,87,0,110,
+ 58,4,0,116,11,107,10,114,156,1,0,1,0,1,0,121,
+ 14,116,2,106,12,124,3,131,1,1,0,87,0,110,20,4,
+ 0,116,11,107,10,114,148,1,0,1,0,1,0,89,0,110,
+ 2,88,0,130,0,89,0,110,2,88,0,100,4,83,0,41,
+ 5,122,162,66,101,115,116,45,101,102,102,111,114,116,32,102,
+ 117,110,99,116,105,111,110,32,116,111,32,119,114,105,116,101,
+ 32,100,97,116,97,32,116,111,32,97,32,112,97,116,104,32,
+ 97,116,111,109,105,99,97,108,108,121,46,10,32,32,32,32,
+ 66,101,32,112,114,101,112,97,114,101,100,32,116,111,32,104,
+ 97,110,100,108,101,32,97,32,70,105,108,101,69,120,105,115,
+ 116,115,69,114,114,111,114,32,105,102,32,99,111,110,99,117,
+ 114,114,101,110,116,32,119,114,105,116,105,110,103,32,111,102,
+ 32,116,104,101,10,32,32,32,32,116,101,109,112,111,114,97,
+ 114,121,32,102,105,108,101,32,105,115,32,97,116,116,101,109,
+ 112,116,101,100,46,122,5,123,125,46,123,125,105,182,1,0,
+ 0,90,2,119,98,78,41,13,218,6,102,111,114,109,97,116,
+ 218,2,105,100,114,3,0,0,0,90,4,111,112,101,110,90,
+ 6,79,95,69,88,67,76,90,7,79,95,67,82,69,65,84,
+ 90,8,79,95,87,82,79,78,76,89,218,3,95,105,111,218,
+ 6,70,105,108,101,73,79,218,5,119,114,105,116,101,90,6,
+ 114,101,110,97,109,101,114,42,0,0,0,90,6,117,110,108,
+ 105,110,107,41,6,114,37,0,0,0,218,4,100,97,116,97,
+ 114,44,0,0,0,90,8,112,97,116,104,95,116,109,112,90,
+ 2,102,100,218,4,102,105,108,101,114,4,0,0,0,114,4,
+ 0,0,0,114,6,0,0,0,218,13,95,119,114,105,116,101,
+ 95,97,116,111,109,105,99,106,0,0,0,115,26,0,0,0,
+ 0,5,16,1,6,1,26,1,2,3,14,1,20,1,16,1,
+ 14,1,2,1,14,1,14,1,6,1,114,57,0,0,0,105,
+ 51,13,0,0,233,2,0,0,0,114,15,0,0,0,115,2,
+ 0,0,0,13,10,90,11,95,95,112,121,99,97,99,104,101,
+ 95,95,122,4,111,112,116,45,122,3,46,112,121,122,4,46,
+ 112,121,99,78,41,1,218,12,111,112,116,105,109,105,122,97,
+ 116,105,111,110,99,2,0,0,0,1,0,0,0,11,0,0,
+ 0,6,0,0,0,67,0,0,0,115,234,0,0,0,124,1,
+ 100,1,107,9,114,52,116,0,106,1,100,2,116,2,131,2,
+ 1,0,124,2,100,1,107,9,114,40,100,3,125,3,116,3,
+ 124,3,131,1,130,1,124,1,114,48,100,4,110,2,100,5,
+ 125,2,116,4,124,0,131,1,92,2,125,4,125,5,124,5,
+ 106,5,100,6,131,1,92,3,125,6,125,7,125,8,116,6,
+ 106,7,106,8,125,9,124,9,100,1,107,8,114,104,116,9,
+ 100,7,131,1,130,1,100,4,106,10,124,6,114,116,124,6,
+ 110,2,124,8,124,7,124,9,103,3,131,1,125,10,124,2,
+ 100,1,107,8,114,162,116,6,106,11,106,12,100,8,107,2,
+ 114,154,100,4,125,2,110,8,116,6,106,11,106,12,125,2,
+ 116,13,124,2,131,1,125,2,124,2,100,4,107,3,114,214,
+ 124,2,106,14,131,0,115,200,116,15,100,9,106,16,124,2,
+ 131,1,131,1,130,1,100,10,106,16,124,10,116,17,124,2,
+ 131,3,125,10,116,18,124,4,116,19,124,10,116,20,100,8,
+ 25,0,23,0,131,3,83,0,41,11,97,254,2,0,0,71,
+ 105,118,101,110,32,116,104,101,32,112,97,116,104,32,116,111,
+ 32,97,32,46,112,121,32,102,105,108,101,44,32,114,101,116,
+ 117,114,110,32,116,104,101,32,112,97,116,104,32,116,111,32,
+ 105,116,115,32,46,112,121,99,32,102,105,108,101,46,10,10,
+ 32,32,32,32,84,104,101,32,46,112,121,32,102,105,108,101,
+ 32,100,111,101,115,32,110,111,116,32,110,101,101,100,32,116,
+ 111,32,101,120,105,115,116,59,32,116,104,105,115,32,115,105,
+ 109,112,108,121,32,114,101,116,117,114,110,115,32,116,104,101,
+ 32,112,97,116,104,32,116,111,32,116,104,101,10,32,32,32,
+ 32,46,112,121,99,32,102,105,108,101,32,99,97,108,99,117,
+ 108,97,116,101,100,32,97,115,32,105,102,32,116,104,101,32,
+ 46,112,121,32,102,105,108,101,32,119,101,114,101,32,105,109,
+ 112,111,114,116,101,100,46,10,10,32,32,32,32,84,104,101,
+ 32,39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,
+ 112,97,114,97,109,101,116,101,114,32,99,111,110,116,114,111,
+ 108,115,32,116,104,101,32,112,114,101,115,117,109,101,100,32,
+ 111,112,116,105,109,105,122,97,116,105,111,110,32,108,101,118,
+ 101,108,32,111,102,10,32,32,32,32,116,104,101,32,98,121,
+ 116,101,99,111,100,101,32,102,105,108,101,46,32,73,102,32,
+ 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,105,
+ 115,32,110,111,116,32,78,111,110,101,44,32,116,104,101,32,
+ 115,116,114,105,110,103,32,114,101,112,114,101,115,101,110,116,
+ 97,116,105,111,110,10,32,32,32,32,111,102,32,116,104,101,
+ 32,97,114,103,117,109,101,110,116,32,105,115,32,116,97,107,
+ 101,110,32,97,110,100,32,118,101,114,105,102,105,101,100,32,
+ 116,111,32,98,101,32,97,108,112,104,97,110,117,109,101,114,
+ 105,99,32,40,101,108,115,101,32,86,97,108,117,101,69,114,
+ 114,111,114,10,32,32,32,32,105,115,32,114,97,105,115,101,
+ 100,41,46,10,10,32,32,32,32,84,104,101,32,100,101,98,
+ 117,103,95,111,118,101,114,114,105,100,101,32,112,97,114,97,
+ 109,101,116,101,114,32,105,115,32,100,101,112,114,101,99,97,
+ 116,101,100,46,32,73,102,32,100,101,98,117,103,95,111,118,
+ 101,114,114,105,100,101,32,105,115,32,110,111,116,32,78,111,
+ 110,101,44,10,32,32,32,32,97,32,84,114,117,101,32,118,
+ 97,108,117,101,32,105,115,32,116,104,101,32,115,97,109,101,
+ 32,97,115,32,115,101,116,116,105,110,103,32,39,111,112,116,
+ 105,109,105,122,97,116,105,111,110,39,32,116,111,32,116,104,
+ 101,32,101,109,112,116,121,32,115,116,114,105,110,103,10,32,
+ 32,32,32,119,104,105,108,101,32,97,32,70,97,108,115,101,
+ 32,118,97,108,117,101,32,105,115,32,101,113,117,105,118,97,
+ 108,101,110,116,32,116,111,32,115,101,116,116,105,110,103,32,
+ 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,116,
+ 111,32,39,49,39,46,10,10,32,32,32,32,73,102,32,115,
+ 121,115,46,105,109,112,108,101,109,101,110,116,97,116,105,111,
+ 110,46,99,97,99,104,101,95,116,97,103,32,105,115,32,78,
+ 111,110,101,32,116,104,101,110,32,78,111,116,73,109,112,108,
+ 101,109,101,110,116,101,100,69,114,114,111,114,32,105,115,32,
+ 114,97,105,115,101,100,46,10,10,32,32,32,32,78,122,70,
+ 116,104,101,32,100,101,98,117,103,95,111,118,101,114,114,105,
+ 100,101,32,112,97,114,97,109,101,116,101,114,32,105,115,32,
+ 100,101,112,114,101,99,97,116,101,100,59,32,117,115,101,32,
+ 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,105,
+ 110,115,116,101,97,100,122,50,100,101,98,117,103,95,111,118,
+ 101,114,114,105,100,101,32,111,114,32,111,112,116,105,109,105,
+ 122,97,116,105,111,110,32,109,117,115,116,32,98,101,32,115,
+ 101,116,32,116,111,32,78,111,110,101,114,32,0,0,0,114,
+ 31,0,0,0,218,1,46,122,36,115,121,115,46,105,109,112,
+ 108,101,109,101,110,116,97,116,105,111,110,46,99,97,99,104,
+ 101,95,116,97,103,32,105,115,32,78,111,110,101,233,0,0,
+ 0,0,122,24,123,33,114,125,32,105,115,32,110,111,116,32,
+ 97,108,112,104,97,110,117,109,101,114,105,99,122,7,123,125,
+ 46,123,125,123,125,41,21,218,9,95,119,97,114,110,105,110,
+ 103,115,218,4,119,97,114,110,218,18,68,101,112,114,101,99,
+ 97,116,105,111,110,87,97,114,110,105,110,103,218,9,84,121,
+ 112,101,69,114,114,111,114,114,40,0,0,0,114,34,0,0,
+ 0,114,8,0,0,0,218,14,105,109,112,108,101,109,101,110,
+ 116,97,116,105,111,110,218,9,99,97,99,104,101,95,116,97,
+ 103,218,19,78,111,116,73,109,112,108,101,109,101,110,116,101,
+ 100,69,114,114,111,114,114,28,0,0,0,218,5,102,108,97,
+ 103,115,218,8,111,112,116,105,109,105,122,101,218,3,115,116,
+ 114,218,7,105,115,97,108,110,117,109,218,10,86,97,108,117,
+ 101,69,114,114,111,114,114,50,0,0,0,218,4,95,79,80,
+ 84,114,30,0,0,0,218,8,95,80,89,67,65,67,72,69,
+ 218,17,66,89,84,69,67,79,68,69,95,83,85,70,70,73,
+ 88,69,83,41,11,114,37,0,0,0,90,14,100,101,98,117,
+ 103,95,111,118,101,114,114,105,100,101,114,59,0,0,0,218,
+ 7,109,101,115,115,97,103,101,218,4,104,101,97,100,114,39,
+ 0,0,0,90,4,98,97,115,101,218,3,115,101,112,218,4,
+ 114,101,115,116,90,3,116,97,103,90,15,97,108,109,111,115,
+ 116,95,102,105,108,101,110,97,109,101,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,218,17,99,97,99,104,101,
+ 95,102,114,111,109,95,115,111,117,114,99,101,7,1,0,0,
+ 115,46,0,0,0,0,18,8,1,6,1,6,1,8,1,4,
+ 1,8,1,12,2,12,1,16,1,8,1,8,1,8,1,24,
+ 1,8,1,12,1,6,2,8,1,8,1,8,1,8,1,14,
+ 1,14,1,114,81,0,0,0,99,1,0,0,0,0,0,0,
+ 0,8,0,0,0,5,0,0,0,67,0,0,0,115,220,0,
+ 0,0,116,0,106,1,106,2,100,1,107,8,114,20,116,3,
+ 100,2,131,1,130,1,116,4,124,0,131,1,92,2,125,1,
+ 125,2,116,4,124,1,131,1,92,2,125,1,125,3,124,3,
+ 116,5,107,3,114,68,116,6,100,3,106,7,116,5,124,0,
+ 131,2,131,1,130,1,124,2,106,8,100,4,131,1,125,4,
+ 124,4,100,11,107,7,114,102,116,6,100,7,106,7,124,2,
+ 131,1,131,1,130,1,110,86,124,4,100,6,107,2,114,188,
+ 124,2,106,9,100,4,100,5,131,2,100,12,25,0,125,5,
+ 124,5,106,10,116,11,131,1,115,150,116,6,100,8,106,7,
+ 116,11,131,1,131,1,130,1,124,5,116,12,116,11,131,1,
+ 100,1,133,2,25,0,125,6,124,6,106,13,131,0,115,188,
+ 116,6,100,9,106,7,124,5,131,1,131,1,130,1,124,2,
+ 106,14,100,4,131,1,100,10,25,0,125,7,116,15,124,1,
+ 124,7,116,16,100,10,25,0,23,0,131,2,83,0,41,13,
+ 97,110,1,0,0,71,105,118,101,110,32,116,104,101,32,112,
+ 97,116,104,32,116,111,32,97,32,46,112,121,99,46,32,102,
+ 105,108,101,44,32,114,101,116,117,114,110,32,116,104,101,32,
+ 112,97,116,104,32,116,111,32,105,116,115,32,46,112,121,32,
+ 102,105,108,101,46,10,10,32,32,32,32,84,104,101,32,46,
+ 112,121,99,32,102,105,108,101,32,100,111,101,115,32,110,111,
+ 116,32,110,101,101,100,32,116,111,32,101,120,105,115,116,59,
+ 32,116,104,105,115,32,115,105,109,112,108,121,32,114,101,116,
+ 117,114,110,115,32,116,104,101,32,112,97,116,104,32,116,111,
+ 10,32,32,32,32,116,104,101,32,46,112,121,32,102,105,108,
+ 101,32,99,97,108,99,117,108,97,116,101,100,32,116,111,32,
+ 99,111,114,114,101,115,112,111,110,100,32,116,111,32,116,104,
+ 101,32,46,112,121,99,32,102,105,108,101,46,32,32,73,102,
+ 32,112,97,116,104,32,100,111,101,115,10,32,32,32,32,110,
+ 111,116,32,99,111,110,102,111,114,109,32,116,111,32,80,69,
+ 80,32,51,49,52,55,47,52,56,56,32,102,111,114,109,97,
+ 116,44,32,86,97,108,117,101,69,114,114,111,114,32,119,105,
+ 108,108,32,98,101,32,114,97,105,115,101,100,46,32,73,102,
+ 10,32,32,32,32,115,121,115,46,105,109,112,108,101,109,101,
+ 110,116,97,116,105,111,110,46,99,97,99,104,101,95,116,97,
+ 103,32,105,115,32,78,111,110,101,32,116,104,101,110,32,78,
+ 111,116,73,109,112,108,101,109,101,110,116,101,100,69,114,114,
+ 111,114,32,105,115,32,114,97,105,115,101,100,46,10,10,32,
+ 32,32,32,78,122,36,115,121,115,46,105,109,112,108,101,109,
+ 101,110,116,97,116,105,111,110,46,99,97,99,104,101,95,116,
+ 97,103,32,105,115,32,78,111,110,101,122,37,123,125,32,110,
+ 111,116,32,98,111,116,116,111,109,45,108,101,118,101,108,32,
+ 100,105,114,101,99,116,111,114,121,32,105,110,32,123,33,114,
+ 125,114,60,0,0,0,114,58,0,0,0,233,3,0,0,0,
+ 122,33,101,120,112,101,99,116,101,100,32,111,110,108,121,32,
+ 50,32,111,114,32,51,32,100,111,116,115,32,105,110,32,123,
+ 33,114,125,122,57,111,112,116,105,109,105,122,97,116,105,111,
+ 110,32,112,111,114,116,105,111,110,32,111,102,32,102,105,108,
+ 101,110,97,109,101,32,100,111,101,115,32,110,111,116,32,115,
+ 116,97,114,116,32,119,105,116,104,32,123,33,114,125,122,52,
+ 111,112,116,105,109,105,122,97,116,105,111,110,32,108,101,118,
+ 101,108,32,123,33,114,125,32,105,115,32,110,111,116,32,97,
+ 110,32,97,108,112,104,97,110,117,109,101,114,105,99,32,118,
+ 97,108,117,101,114,61,0,0,0,62,2,0,0,0,114,58,
+ 0,0,0,114,82,0,0,0,233,254,255,255,255,41,17,114,
+ 8,0,0,0,114,66,0,0,0,114,67,0,0,0,114,68,
+ 0,0,0,114,40,0,0,0,114,75,0,0,0,114,73,0,
+ 0,0,114,50,0,0,0,218,5,99,111,117,110,116,114,36,
+ 0,0,0,114,10,0,0,0,114,74,0,0,0,114,33,0,
+ 0,0,114,72,0,0,0,218,9,112,97,114,116,105,116,105,
+ 111,110,114,30,0,0,0,218,15,83,79,85,82,67,69,95,
+ 83,85,70,70,73,88,69,83,41,8,114,37,0,0,0,114,
+ 78,0,0,0,90,16,112,121,99,97,99,104,101,95,102,105,
+ 108,101,110,97,109,101,90,7,112,121,99,97,99,104,101,90,
+ 9,100,111,116,95,99,111,117,110,116,114,59,0,0,0,90,
+ 9,111,112,116,95,108,101,118,101,108,90,13,98,97,115,101,
+ 95,102,105,108,101,110,97,109,101,114,4,0,0,0,114,4,
+ 0,0,0,114,6,0,0,0,218,17,115,111,117,114,99,101,
+ 95,102,114,111,109,95,99,97,99,104,101,52,1,0,0,115,
+ 44,0,0,0,0,9,12,1,8,2,12,1,12,1,8,1,
+ 6,1,10,1,10,1,8,1,6,1,10,1,8,1,16,1,
+ 10,1,6,1,8,1,16,1,8,1,6,1,8,1,14,1,
+ 114,87,0,0,0,99,1,0,0,0,0,0,0,0,5,0,
+ 0,0,12,0,0,0,67,0,0,0,115,128,0,0,0,116,
+ 0,124,0,131,1,100,1,107,2,114,16,100,2,83,0,124,
+ 0,106,1,100,3,131,1,92,3,125,1,125,2,125,3,124,
+ 1,12,0,115,58,124,3,106,2,131,0,100,7,100,8,133,
+ 2,25,0,100,6,107,3,114,62,124,0,83,0,121,12,116,
+ 3,124,0,131,1,125,4,87,0,110,36,4,0,116,4,116,
+ 5,102,2,107,10,114,110,1,0,1,0,1,0,124,0,100,
+ 2,100,9,133,2,25,0,125,4,89,0,110,2,88,0,116,
+ 6,124,4,131,1,114,124,124,4,83,0,124,0,83,0,41,
+ 10,122,188,67,111,110,118,101,114,116,32,97,32,98,121,116,
+ 101,99,111,100,101,32,102,105,108,101,32,112,97,116,104,32,
+ 116,111,32,97,32,115,111,117,114,99,101,32,112,97,116,104,
+ 32,40,105,102,32,112,111,115,115,105,98,108,101,41,46,10,
+ 10,32,32,32,32,84,104,105,115,32,102,117,110,99,116,105,
+ 111,110,32,101,120,105,115,116,115,32,112,117,114,101,108,121,
+ 32,102,111,114,32,98,97,99,107,119,97,114,100,115,45,99,
+ 111,109,112,97,116,105,98,105,108,105,116,121,32,102,111,114,
+ 10,32,32,32,32,80,121,73,109,112,111,114,116,95,69,120,
+ 101,99,67,111,100,101,77,111,100,117,108,101,87,105,116,104,
+ 70,105,108,101,110,97,109,101,115,40,41,32,105,110,32,116,
+ 104,101,32,67,32,65,80,73,46,10,10,32,32,32,32,114,
+ 61,0,0,0,78,114,60,0,0,0,114,82,0,0,0,114,
+ 31,0,0,0,90,2,112,121,233,253,255,255,255,233,255,255,
+ 255,255,114,89,0,0,0,41,7,114,33,0,0,0,114,34,
+ 0,0,0,218,5,108,111,119,101,114,114,87,0,0,0,114,
+ 68,0,0,0,114,73,0,0,0,114,46,0,0,0,41,5,
+ 218,13,98,121,116,101,99,111,100,101,95,112,97,116,104,114,
+ 80,0,0,0,114,38,0,0,0,90,9,101,120,116,101,110,
+ 115,105,111,110,218,11,115,111,117,114,99,101,95,112,97,116,
+ 104,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 218,15,95,103,101,116,95,115,111,117,114,99,101,102,105,108,
+ 101,86,1,0,0,115,20,0,0,0,0,7,12,1,4,1,
+ 16,1,26,1,4,1,2,1,12,1,18,1,18,1,114,93,
+ 0,0,0,99,1,0,0,0,0,0,0,0,1,0,0,0,
+ 11,0,0,0,67,0,0,0,115,72,0,0,0,124,0,106,
+ 0,116,1,116,2,131,1,131,1,114,46,121,8,116,3,124,
+ 0,131,1,83,0,4,0,116,4,107,10,114,42,1,0,1,
+ 0,1,0,89,0,113,68,88,0,110,22,124,0,106,0,116,
+ 1,116,5,131,1,131,1,114,64,124,0,83,0,100,0,83,
+ 0,100,0,83,0,41,1,78,41,6,218,8,101,110,100,115,
+ 119,105,116,104,218,5,116,117,112,108,101,114,86,0,0,0,
+ 114,81,0,0,0,114,68,0,0,0,114,76,0,0,0,41,
+ 1,218,8,102,105,108,101,110,97,109,101,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,218,11,95,103,101,116,
+ 95,99,97,99,104,101,100,105,1,0,0,115,16,0,0,0,
+ 0,1,14,1,2,1,8,1,14,1,8,1,14,1,4,2,
+ 114,97,0,0,0,99,1,0,0,0,0,0,0,0,2,0,
+ 0,0,11,0,0,0,67,0,0,0,115,52,0,0,0,121,
+ 14,116,0,124,0,131,1,106,1,125,1,87,0,110,24,4,
+ 0,116,2,107,10,114,38,1,0,1,0,1,0,100,1,125,
+ 1,89,0,110,2,88,0,124,1,100,2,79,0,125,1,124,
+ 1,83,0,41,3,122,51,67,97,108,99,117,108,97,116,101,
+ 32,116,104,101,32,109,111,100,101,32,112,101,114,109,105,115,
+ 115,105,111,110,115,32,102,111,114,32,97,32,98,121,116,101,
+ 99,111,100,101,32,102,105,108,101,46,105,182,1,0,0,233,
+ 128,0,0,0,41,3,114,41,0,0,0,114,43,0,0,0,
+ 114,42,0,0,0,41,2,114,37,0,0,0,114,44,0,0,
+ 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 218,10,95,99,97,108,99,95,109,111,100,101,117,1,0,0,
+ 115,12,0,0,0,0,2,2,1,14,1,14,1,10,3,8,
+ 1,114,99,0,0,0,99,1,0,0,0,0,0,0,0,3,
+ 0,0,0,11,0,0,0,3,0,0,0,115,68,0,0,0,
+ 100,6,135,0,102,1,100,2,100,3,132,9,125,1,121,10,
+ 116,0,106,1,125,2,87,0,110,28,4,0,116,2,107,10,
+ 114,52,1,0,1,0,1,0,100,4,100,5,132,0,125,2,
+ 89,0,110,2,88,0,124,2,124,1,136,0,131,2,1,0,
+ 124,1,83,0,41,7,122,252,68,101,99,111,114,97,116,111,
+ 114,32,116,111,32,118,101,114,105,102,121,32,116,104,97,116,
+ 32,116,104,101,32,109,111,100,117,108,101,32,98,101,105,110,
+ 103,32,114,101,113,117,101,115,116,101,100,32,109,97,116,99,
+ 104,101,115,32,116,104,101,32,111,110,101,32,116,104,101,10,
+ 32,32,32,32,108,111,97,100,101,114,32,99,97,110,32,104,
+ 97,110,100,108,101,46,10,10,32,32,32,32,84,104,101,32,
+ 102,105,114,115,116,32,97,114,103,117,109,101,110,116,32,40,
+ 115,101,108,102,41,32,109,117,115,116,32,100,101,102,105,110,
+ 101,32,95,110,97,109,101,32,119,104,105,99,104,32,116,104,
+ 101,32,115,101,99,111,110,100,32,97,114,103,117,109,101,110,
+ 116,32,105,115,10,32,32,32,32,99,111,109,112,97,114,101,
+ 100,32,97,103,97,105,110,115,116,46,32,73,102,32,116,104,
+ 101,32,99,111,109,112,97,114,105,115,111,110,32,102,97,105,
+ 108,115,32,116,104,101,110,32,73,109,112,111,114,116,69,114,
+ 114,111,114,32,105,115,32,114,97,105,115,101,100,46,10,10,
+ 32,32,32,32,78,99,2,0,0,0,0,0,0,0,4,0,
+ 0,0,4,0,0,0,31,0,0,0,115,66,0,0,0,124,
+ 1,100,0,107,8,114,16,124,0,106,0,125,1,110,32,124,
+ 0,106,0,124,1,107,3,114,48,116,1,100,1,124,0,106,
+ 0,124,1,102,2,22,0,124,1,100,2,141,2,130,1,136,
+ 0,124,0,124,1,102,2,124,2,158,2,124,3,142,1,83,
+ 0,41,3,78,122,30,108,111,97,100,101,114,32,102,111,114,
+ 32,37,115,32,99,97,110,110,111,116,32,104,97,110,100,108,
+ 101,32,37,115,41,1,218,4,110,97,109,101,41,2,114,100,
+ 0,0,0,218,11,73,109,112,111,114,116,69,114,114,111,114,
+ 41,4,218,4,115,101,108,102,114,100,0,0,0,218,4,97,
+ 114,103,115,90,6,107,119,97,114,103,115,41,1,218,6,109,
+ 101,116,104,111,100,114,4,0,0,0,114,6,0,0,0,218,
+ 19,95,99,104,101,99,107,95,110,97,109,101,95,119,114,97,
+ 112,112,101,114,137,1,0,0,115,12,0,0,0,0,1,8,
+ 1,8,1,10,1,4,1,18,1,122,40,95,99,104,101,99,
+ 107,95,110,97,109,101,46,60,108,111,99,97,108,115,62,46,
+ 95,99,104,101,99,107,95,110,97,109,101,95,119,114,97,112,
+ 112,101,114,99,2,0,0,0,0,0,0,0,3,0,0,0,
+ 7,0,0,0,83,0,0,0,115,60,0,0,0,120,40,100,
+ 5,68,0,93,32,125,2,116,0,124,1,124,2,131,2,114,
+ 6,116,1,124,0,124,2,116,2,124,1,124,2,131,2,131,
+ 3,1,0,113,6,87,0,124,0,106,3,106,4,124,1,106,
+ 3,131,1,1,0,100,0,83,0,41,6,78,218,10,95,95,
+ 109,111,100,117,108,101,95,95,218,8,95,95,110,97,109,101,
+ 95,95,218,12,95,95,113,117,97,108,110,97,109,101,95,95,
+ 218,7,95,95,100,111,99,95,95,41,4,114,106,0,0,0,
+ 114,107,0,0,0,114,108,0,0,0,114,109,0,0,0,41,
+ 5,218,7,104,97,115,97,116,116,114,218,7,115,101,116,97,
+ 116,116,114,218,7,103,101,116,97,116,116,114,218,8,95,95,
+ 100,105,99,116,95,95,218,6,117,112,100,97,116,101,41,3,
+ 90,3,110,101,119,90,3,111,108,100,218,7,114,101,112,108,
+ 97,99,101,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,5,95,119,114,97,112,148,1,0,0,115,8,0,
+ 0,0,0,1,10,1,10,1,22,1,122,26,95,99,104,101,
+ 99,107,95,110,97,109,101,46,60,108,111,99,97,108,115,62,
+ 46,95,119,114,97,112,41,1,78,41,3,218,10,95,98,111,
+ 111,116,115,116,114,97,112,114,116,0,0,0,218,9,78,97,
+ 109,101,69,114,114,111,114,41,3,114,104,0,0,0,114,105,
+ 0,0,0,114,116,0,0,0,114,4,0,0,0,41,1,114,
+ 104,0,0,0,114,6,0,0,0,218,11,95,99,104,101,99,
+ 107,95,110,97,109,101,129,1,0,0,115,14,0,0,0,0,
+ 8,14,7,2,1,10,1,14,2,14,5,10,1,114,119,0,
+ 0,0,99,2,0,0,0,0,0,0,0,5,0,0,0,4,
+ 0,0,0,67,0,0,0,115,60,0,0,0,124,0,106,0,
+ 124,1,131,1,92,2,125,2,125,3,124,2,100,1,107,8,
+ 114,56,116,1,124,3,131,1,114,56,100,2,125,4,116,2,
+ 106,3,124,4,106,4,124,3,100,3,25,0,131,1,116,5,
+ 131,2,1,0,124,2,83,0,41,4,122,155,84,114,121,32,
+ 116,111,32,102,105,110,100,32,97,32,108,111,97,100,101,114,
+ 32,102,111,114,32,116,104,101,32,115,112,101,99,105,102,105,
+ 101,100,32,109,111,100,117,108,101,32,98,121,32,100,101,108,
+ 101,103,97,116,105,110,103,32,116,111,10,32,32,32,32,115,
+ 101,108,102,46,102,105,110,100,95,108,111,97,100,101,114,40,
+ 41,46,10,10,32,32,32,32,84,104,105,115,32,109,101,116,
+ 104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,101,
+ 100,32,105,110,32,102,97,118,111,114,32,111,102,32,102,105,
+ 110,100,101,114,46,102,105,110,100,95,115,112,101,99,40,41,
+ 46,10,10,32,32,32,32,78,122,44,78,111,116,32,105,109,
+ 112,111,114,116,105,110,103,32,100,105,114,101,99,116,111,114,
+ 121,32,123,125,58,32,109,105,115,115,105,110,103,32,95,95,
+ 105,110,105,116,95,95,114,61,0,0,0,41,6,218,11,102,
+ 105,110,100,95,108,111,97,100,101,114,114,33,0,0,0,114,
+ 62,0,0,0,114,63,0,0,0,114,50,0,0,0,218,13,
+ 73,109,112,111,114,116,87,97,114,110,105,110,103,41,5,114,
+ 102,0,0,0,218,8,102,117,108,108,110,97,109,101,218,6,
+ 108,111,97,100,101,114,218,8,112,111,114,116,105,111,110,115,
+ 218,3,109,115,103,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,218,17,95,102,105,110,100,95,109,111,100,117,
+ 108,101,95,115,104,105,109,157,1,0,0,115,10,0,0,0,
+ 0,10,14,1,16,1,4,1,22,1,114,126,0,0,0,99,
+ 4,0,0,0,0,0,0,0,11,0,0,0,19,0,0,0,
+ 67,0,0,0,115,136,1,0,0,105,0,125,4,124,2,100,
+ 1,107,9,114,22,124,2,124,4,100,2,60,0,110,4,100,
+ 3,125,2,124,3,100,1,107,9,114,42,124,3,124,4,100,
+ 4,60,0,124,0,100,1,100,5,133,2,25,0,125,5,124,
+ 0,100,5,100,6,133,2,25,0,125,6,124,0,100,6,100,
+ 7,133,2,25,0,125,7,124,5,116,0,107,3,114,124,100,
+ 8,106,1,124,2,124,5,131,2,125,8,116,2,106,3,100,
+ 9,124,8,131,2,1,0,116,4,124,8,102,1,124,4,142,
+ 1,130,1,110,86,116,5,124,6,131,1,100,5,107,3,114,
+ 168,100,10,106,1,124,2,131,1,125,8,116,2,106,3,100,
+ 9,124,8,131,2,1,0,116,6,124,8,131,1,130,1,110,
+ 42,116,5,124,7,131,1,100,5,107,3,114,210,100,11,106,
+ 1,124,2,131,1,125,8,116,2,106,3,100,9,124,8,131,
+ 2,1,0,116,6,124,8,131,1,130,1,124,1,100,1,107,
+ 9,144,1,114,124,121,16,116,7,124,1,100,12,25,0,131,
+ 1,125,9,87,0,110,22,4,0,116,8,107,10,144,1,114,
+ 2,1,0,1,0,1,0,89,0,110,50,88,0,116,9,124,
+ 6,131,1,124,9,107,3,144,1,114,52,100,13,106,1,124,
+ 2,131,1,125,8,116,2,106,3,100,9,124,8,131,2,1,
+ 0,116,4,124,8,102,1,124,4,142,1,130,1,121,16,124,
+ 1,100,14,25,0,100,15,64,0,125,10,87,0,110,22,4,
+ 0,116,8,107,10,144,1,114,90,1,0,1,0,1,0,89,
+ 0,110,34,88,0,116,9,124,7,131,1,124,10,107,3,144,
+ 1,114,124,116,4,100,13,106,1,124,2,131,1,102,1,124,
+ 4,142,1,130,1,124,0,100,7,100,1,133,2,25,0,83,
+ 0,41,16,97,122,1,0,0,86,97,108,105,100,97,116,101,
+ 32,116,104,101,32,104,101,97,100,101,114,32,111,102,32,116,
+ 104,101,32,112,97,115,115,101,100,45,105,110,32,98,121,116,
+ 101,99,111,100,101,32,97,103,97,105,110,115,116,32,115,111,
+ 117,114,99,101,95,115,116,97,116,115,32,40,105,102,10,32,
+ 32,32,32,103,105,118,101,110,41,32,97,110,100,32,114,101,
+ 116,117,114,110,105,110,103,32,116,104,101,32,98,121,116,101,
+ 99,111,100,101,32,116,104,97,116,32,99,97,110,32,98,101,
+ 32,99,111,109,112,105,108,101,100,32,98,121,32,99,111,109,
+ 112,105,108,101,40,41,46,10,10,32,32,32,32,65,108,108,
+ 32,111,116,104,101,114,32,97,114,103,117,109,101,110,116,115,
+ 32,97,114,101,32,117,115,101,100,32,116,111,32,101,110,104,
+ 97,110,99,101,32,101,114,114,111,114,32,114,101,112,111,114,
+ 116,105,110,103,46,10,10,32,32,32,32,73,109,112,111,114,
+ 116,69,114,114,111,114,32,105,115,32,114,97,105,115,101,100,
+ 32,119,104,101,110,32,116,104,101,32,109,97,103,105,99,32,
+ 110,117,109,98,101,114,32,105,115,32,105,110,99,111,114,114,
+ 101,99,116,32,111,114,32,116,104,101,32,98,121,116,101,99,
+ 111,100,101,32,105,115,10,32,32,32,32,102,111,117,110,100,
+ 32,116,111,32,98,101,32,115,116,97,108,101,46,32,69,79,
+ 70,69,114,114,111,114,32,105,115,32,114,97,105,115,101,100,
+ 32,119,104,101,110,32,116,104,101,32,100,97,116,97,32,105,
+ 115,32,102,111,117,110,100,32,116,111,32,98,101,10,32,32,
+ 32,32,116,114,117,110,99,97,116,101,100,46,10,10,32,32,
+ 32,32,78,114,100,0,0,0,122,10,60,98,121,116,101,99,
+ 111,100,101,62,114,37,0,0,0,114,14,0,0,0,233,8,
+ 0,0,0,233,12,0,0,0,122,30,98,97,100,32,109,97,
+ 103,105,99,32,110,117,109,98,101,114,32,105,110,32,123,33,
+ 114,125,58,32,123,33,114,125,122,2,123,125,122,43,114,101,
+ 97,99,104,101,100,32,69,79,70,32,119,104,105,108,101,32,
+ 114,101,97,100,105,110,103,32,116,105,109,101,115,116,97,109,
+ 112,32,105,110,32,123,33,114,125,122,48,114,101,97,99,104,
+ 101,100,32,69,79,70,32,119,104,105,108,101,32,114,101,97,
+ 100,105,110,103,32,115,105,122,101,32,111,102,32,115,111,117,
+ 114,99,101,32,105,110,32,123,33,114,125,218,5,109,116,105,
+ 109,101,122,26,98,121,116,101,99,111,100,101,32,105,115,32,
+ 115,116,97,108,101,32,102,111,114,32,123,33,114,125,218,4,
+ 115,105,122,101,108,3,0,0,0,255,127,255,127,3,0,41,
+ 10,218,12,77,65,71,73,67,95,78,85,77,66,69,82,114,
+ 50,0,0,0,114,117,0,0,0,218,16,95,118,101,114,98,
+ 111,115,101,95,109,101,115,115,97,103,101,114,101,0,0,0,
+ 114,33,0,0,0,218,8,69,79,70,69,114,114,111,114,114,
+ 16,0,0,0,218,8,75,101,121,69,114,114,111,114,114,21,
+ 0,0,0,41,11,114,55,0,0,0,218,12,115,111,117,114,
+ 99,101,95,115,116,97,116,115,114,100,0,0,0,114,37,0,
+ 0,0,90,11,101,120,99,95,100,101,116,97,105,108,115,90,
+ 5,109,97,103,105,99,90,13,114,97,119,95,116,105,109,101,
+ 115,116,97,109,112,90,8,114,97,119,95,115,105,122,101,114,
+ 77,0,0,0,218,12,115,111,117,114,99,101,95,109,116,105,
+ 109,101,218,11,115,111,117,114,99,101,95,115,105,122,101,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,25,
+ 95,118,97,108,105,100,97,116,101,95,98,121,116,101,99,111,
+ 100,101,95,104,101,97,100,101,114,174,1,0,0,115,76,0,
+ 0,0,0,11,4,1,8,1,10,3,4,1,8,1,8,1,
+ 12,1,12,1,12,1,8,1,12,1,12,1,14,1,12,1,
+ 10,1,12,1,10,1,12,1,10,1,12,1,8,1,10,1,
+ 2,1,16,1,16,1,6,2,14,1,10,1,12,1,12,1,
+ 2,1,16,1,16,1,6,2,14,1,12,1,6,1,114,138,
+ 0,0,0,99,4,0,0,0,0,0,0,0,5,0,0,0,
+ 5,0,0,0,67,0,0,0,115,80,0,0,0,116,0,106,
+ 1,124,0,131,1,125,4,116,2,124,4,116,3,131,2,114,
+ 56,116,4,106,5,100,1,124,2,131,2,1,0,124,3,100,
+ 2,107,9,114,52,116,6,106,7,124,4,124,3,131,2,1,
+ 0,124,4,83,0,116,8,100,3,106,9,124,2,131,1,124,
+ 1,124,2,100,4,141,3,130,1,100,2,83,0,41,5,122,
+ 60,67,111,109,112,105,108,101,32,98,121,116,101,99,111,100,
+ 101,32,97,115,32,114,101,116,117,114,110,101,100,32,98,121,
+ 32,95,118,97,108,105,100,97,116,101,95,98,121,116,101,99,
+ 111,100,101,95,104,101,97,100,101,114,40,41,46,122,21,99,
+ 111,100,101,32,111,98,106,101,99,116,32,102,114,111,109,32,
+ 123,33,114,125,78,122,23,78,111,110,45,99,111,100,101,32,
+ 111,98,106,101,99,116,32,105,110,32,123,33,114,125,41,2,
+ 114,100,0,0,0,114,37,0,0,0,41,10,218,7,109,97,
+ 114,115,104,97,108,90,5,108,111,97,100,115,218,10,105,115,
+ 105,110,115,116,97,110,99,101,218,10,95,99,111,100,101,95,
+ 116,121,112,101,114,117,0,0,0,114,132,0,0,0,218,4,
+ 95,105,109,112,90,16,95,102,105,120,95,99,111,95,102,105,
+ 108,101,110,97,109,101,114,101,0,0,0,114,50,0,0,0,
+ 41,5,114,55,0,0,0,114,100,0,0,0,114,91,0,0,
+ 0,114,92,0,0,0,218,4,99,111,100,101,114,4,0,0,
+ 0,114,4,0,0,0,114,6,0,0,0,218,17,95,99,111,
+ 109,112,105,108,101,95,98,121,116,101,99,111,100,101,229,1,
+ 0,0,115,16,0,0,0,0,2,10,1,10,1,12,1,8,
+ 1,12,1,4,2,10,1,114,144,0,0,0,114,61,0,0,
+ 0,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
+ 0,0,67,0,0,0,115,56,0,0,0,116,0,116,1,131,
+ 1,125,3,124,3,106,2,116,3,124,1,131,1,131,1,1,
+ 0,124,3,106,2,116,3,124,2,131,1,131,1,1,0,124,
+ 3,106,2,116,4,106,5,124,0,131,1,131,1,1,0,124,
+ 3,83,0,41,1,122,80,67,111,109,112,105,108,101,32,97,
+ 32,99,111,100,101,32,111,98,106,101,99,116,32,105,110,116,
+ 111,32,98,121,116,101,99,111,100,101,32,102,111,114,32,119,
+ 114,105,116,105,110,103,32,111,117,116,32,116,111,32,97,32,
+ 98,121,116,101,45,99,111,109,112,105,108,101,100,10,32,32,
+ 32,32,102,105,108,101,46,41,6,218,9,98,121,116,101,97,
+ 114,114,97,121,114,131,0,0,0,218,6,101,120,116,101,110,
+ 100,114,19,0,0,0,114,139,0,0,0,90,5,100,117,109,
+ 112,115,41,4,114,143,0,0,0,114,129,0,0,0,114,137,
+ 0,0,0,114,55,0,0,0,114,4,0,0,0,114,4,0,
+ 0,0,114,6,0,0,0,218,17,95,99,111,100,101,95,116,
+ 111,95,98,121,116,101,99,111,100,101,241,1,0,0,115,10,
+ 0,0,0,0,3,8,1,14,1,14,1,16,1,114,147,0,
+ 0,0,99,1,0,0,0,0,0,0,0,5,0,0,0,4,
+ 0,0,0,67,0,0,0,115,62,0,0,0,100,1,100,2,
+ 108,0,125,1,116,1,106,2,124,0,131,1,106,3,125,2,
+ 124,1,106,4,124,2,131,1,125,3,116,1,106,5,100,2,
+ 100,3,131,2,125,4,124,4,106,6,124,0,106,6,124,3,
+ 100,1,25,0,131,1,131,1,83,0,41,4,122,121,68,101,
+ 99,111,100,101,32,98,121,116,101,115,32,114,101,112,114,101,
+ 115,101,110,116,105,110,103,32,115,111,117,114,99,101,32,99,
+ 111,100,101,32,97,110,100,32,114,101,116,117,114,110,32,116,
+ 104,101,32,115,116,114,105,110,103,46,10,10,32,32,32,32,
+ 85,110,105,118,101,114,115,97,108,32,110,101,119,108,105,110,
+ 101,32,115,117,112,112,111,114,116,32,105,115,32,117,115,101,
+ 100,32,105,110,32,116,104,101,32,100,101,99,111,100,105,110,
+ 103,46,10,32,32,32,32,114,61,0,0,0,78,84,41,7,
+ 218,8,116,111,107,101,110,105,122,101,114,52,0,0,0,90,
+ 7,66,121,116,101,115,73,79,90,8,114,101,97,100,108,105,
+ 110,101,90,15,100,101,116,101,99,116,95,101,110,99,111,100,
+ 105,110,103,90,25,73,110,99,114,101,109,101,110,116,97,108,
+ 78,101,119,108,105,110,101,68,101,99,111,100,101,114,218,6,
+ 100,101,99,111,100,101,41,5,218,12,115,111,117,114,99,101,
+ 95,98,121,116,101,115,114,148,0,0,0,90,21,115,111,117,
+ 114,99,101,95,98,121,116,101,115,95,114,101,97,100,108,105,
+ 110,101,218,8,101,110,99,111,100,105,110,103,90,15,110,101,
+ 119,108,105,110,101,95,100,101,99,111,100,101,114,114,4,0,
+ 0,0,114,4,0,0,0,114,6,0,0,0,218,13,100,101,
+ 99,111,100,101,95,115,111,117,114,99,101,251,1,0,0,115,
+ 10,0,0,0,0,5,8,1,12,1,10,1,12,1,114,152,
+ 0,0,0,41,2,114,123,0,0,0,218,26,115,117,98,109,
+ 111,100,117,108,101,95,115,101,97,114,99,104,95,108,111,99,
+ 97,116,105,111,110,115,99,2,0,0,0,2,0,0,0,9,
+ 0,0,0,19,0,0,0,67,0,0,0,115,12,1,0,0,
+ 124,1,100,1,107,8,114,60,100,2,125,1,116,0,124,2,
+ 100,3,131,2,114,64,121,14,124,2,106,1,124,0,131,1,
+ 125,1,87,0,113,64,4,0,116,2,107,10,114,56,1,0,
+ 1,0,1,0,89,0,113,64,88,0,110,4,124,1,125,1,
+ 116,3,106,4,124,0,124,2,124,1,100,4,141,3,125,4,
+ 100,5,124,4,95,5,124,2,100,1,107,8,114,150,120,54,
+ 116,6,131,0,68,0,93,40,92,2,125,5,125,6,124,1,
+ 106,7,116,8,124,6,131,1,131,1,114,102,124,5,124,0,
+ 124,1,131,2,125,2,124,2,124,4,95,9,80,0,113,102,
+ 87,0,100,1,83,0,124,3,116,10,107,8,114,216,116,0,
+ 124,2,100,6,131,2,114,222,121,14,124,2,106,11,124,0,
+ 131,1,125,7,87,0,110,20,4,0,116,2,107,10,114,202,
+ 1,0,1,0,1,0,89,0,113,222,88,0,124,7,114,222,
+ 103,0,124,4,95,12,110,6,124,3,124,4,95,12,124,4,
+ 106,12,103,0,107,2,144,1,114,8,124,1,144,1,114,8,
+ 116,13,124,1,131,1,100,7,25,0,125,8,124,4,106,12,
+ 106,14,124,8,131,1,1,0,124,4,83,0,41,8,97,61,
+ 1,0,0,82,101,116,117,114,110,32,97,32,109,111,100,117,
+ 108,101,32,115,112,101,99,32,98,97,115,101,100,32,111,110,
+ 32,97,32,102,105,108,101,32,108,111,99,97,116,105,111,110,
+ 46,10,10,32,32,32,32,84,111,32,105,110,100,105,99,97,
+ 116,101,32,116,104,97,116,32,116,104,101,32,109,111,100,117,
+ 108,101,32,105,115,32,97,32,112,97,99,107,97,103,101,44,
+ 32,115,101,116,10,32,32,32,32,115,117,98,109,111,100,117,
+ 108,101,95,115,101,97,114,99,104,95,108,111,99,97,116,105,
+ 111,110,115,32,116,111,32,97,32,108,105,115,116,32,111,102,
+ 32,100,105,114,101,99,116,111,114,121,32,112,97,116,104,115,
+ 46,32,32,65,110,10,32,32,32,32,101,109,112,116,121,32,
+ 108,105,115,116,32,105,115,32,115,117,102,102,105,99,105,101,
+ 110,116,44,32,116,104,111,117,103,104,32,105,116,115,32,110,
+ 111,116,32,111,116,104,101,114,119,105,115,101,32,117,115,101,
+ 102,117,108,32,116,111,32,116,104,101,10,32,32,32,32,105,
+ 109,112,111,114,116,32,115,121,115,116,101,109,46,10,10,32,
+ 32,32,32,84,104,101,32,108,111,97,100,101,114,32,109,117,
+ 115,116,32,116,97,107,101,32,97,32,115,112,101,99,32,97,
+ 115,32,105,116,115,32,111,110,108,121,32,95,95,105,110,105,
+ 116,95,95,40,41,32,97,114,103,46,10,10,32,32,32,32,
+ 78,122,9,60,117,110,107,110,111,119,110,62,218,12,103,101,
+ 116,95,102,105,108,101,110,97,109,101,41,1,218,6,111,114,
+ 105,103,105,110,84,218,10,105,115,95,112,97,99,107,97,103,
+ 101,114,61,0,0,0,41,15,114,110,0,0,0,114,154,0,
+ 0,0,114,101,0,0,0,114,117,0,0,0,218,10,77,111,
+ 100,117,108,101,83,112,101,99,90,13,95,115,101,116,95,102,
+ 105,108,101,97,116,116,114,218,27,95,103,101,116,95,115,117,
+ 112,112,111,114,116,101,100,95,102,105,108,101,95,108,111,97,
+ 100,101,114,115,114,94,0,0,0,114,95,0,0,0,114,123,
+ 0,0,0,218,9,95,80,79,80,85,76,65,84,69,114,156,
+ 0,0,0,114,153,0,0,0,114,40,0,0,0,218,6,97,
+ 112,112,101,110,100,41,9,114,100,0,0,0,90,8,108,111,
+ 99,97,116,105,111,110,114,123,0,0,0,114,153,0,0,0,
+ 218,4,115,112,101,99,218,12,108,111,97,100,101,114,95,99,
+ 108,97,115,115,218,8,115,117,102,102,105,120,101,115,114,156,
+ 0,0,0,90,7,100,105,114,110,97,109,101,114,4,0,0,
+ 0,114,4,0,0,0,114,6,0,0,0,218,23,115,112,101,
+ 99,95,102,114,111,109,95,102,105,108,101,95,108,111,99,97,
+ 116,105,111,110,12,2,0,0,115,62,0,0,0,0,12,8,
+ 4,4,1,10,2,2,1,14,1,14,1,8,3,4,7,16,
+ 1,6,3,8,1,16,1,14,1,10,1,6,1,6,2,4,
+ 3,8,2,10,1,2,1,14,1,14,1,6,2,4,1,8,
+ 2,6,1,12,1,6,1,12,1,12,2,114,164,0,0,0,
+ 99,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,
+ 0,64,0,0,0,115,80,0,0,0,101,0,90,1,100,0,
+ 90,2,100,1,90,3,100,2,90,4,100,3,90,5,100,4,
+ 90,6,101,7,100,5,100,6,132,0,131,1,90,8,101,7,
+ 100,7,100,8,132,0,131,1,90,9,101,7,100,14,100,10,
+ 100,11,132,1,131,1,90,10,101,7,100,15,100,12,100,13,
+ 132,1,131,1,90,11,100,9,83,0,41,16,218,21,87,105,
+ 110,100,111,119,115,82,101,103,105,115,116,114,121,70,105,110,
+ 100,101,114,122,62,77,101,116,97,32,112,97,116,104,32,102,
+ 105,110,100,101,114,32,102,111,114,32,109,111,100,117,108,101,
+ 115,32,100,101,99,108,97,114,101,100,32,105,110,32,116,104,
+ 101,32,87,105,110,100,111,119,115,32,114,101,103,105,115,116,
+ 114,121,46,122,59,83,111,102,116,119,97,114,101,92,80,121,
+ 116,104,111,110,92,80,121,116,104,111,110,67,111,114,101,92,
+ 123,115,121,115,95,118,101,114,115,105,111,110,125,92,77,111,
+ 100,117,108,101,115,92,123,102,117,108,108,110,97,109,101,125,
+ 122,65,83,111,102,116,119,97,114,101,92,80,121,116,104,111,
+ 110,92,80,121,116,104,111,110,67,111,114,101,92,123,115,121,
+ 115,95,118,101,114,115,105,111,110,125,92,77,111,100,117,108,
+ 101,115,92,123,102,117,108,108,110,97,109,101,125,92,68,101,
+ 98,117,103,70,99,2,0,0,0,0,0,0,0,2,0,0,
+ 0,11,0,0,0,67,0,0,0,115,50,0,0,0,121,14,
+ 116,0,106,1,116,0,106,2,124,1,131,2,83,0,4,0,
+ 116,3,107,10,114,44,1,0,1,0,1,0,116,0,106,1,
+ 116,0,106,4,124,1,131,2,83,0,88,0,100,0,83,0,
+ 41,1,78,41,5,218,7,95,119,105,110,114,101,103,90,7,
+ 79,112,101,110,75,101,121,90,17,72,75,69,89,95,67,85,
+ 82,82,69,78,84,95,85,83,69,82,114,42,0,0,0,90,
+ 18,72,75,69,89,95,76,79,67,65,76,95,77,65,67,72,
+ 73,78,69,41,2,218,3,99,108,115,114,5,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,14,
+ 95,111,112,101,110,95,114,101,103,105,115,116,114,121,92,2,
+ 0,0,115,8,0,0,0,0,2,2,1,14,1,14,1,122,
+ 36,87,105,110,100,111,119,115,82,101,103,105,115,116,114,121,
+ 70,105,110,100,101,114,46,95,111,112,101,110,95,114,101,103,
+ 105,115,116,114,121,99,2,0,0,0,0,0,0,0,6,0,
+ 0,0,16,0,0,0,67,0,0,0,115,112,0,0,0,124,
+ 0,106,0,114,14,124,0,106,1,125,2,110,6,124,0,106,
+ 2,125,2,124,2,106,3,124,1,100,1,116,4,106,5,100,
+ 0,100,2,133,2,25,0,22,0,100,3,141,2,125,3,121,
+ 38,124,0,106,6,124,3,131,1,143,18,125,4,116,7,106,
+ 8,124,4,100,4,131,2,125,5,87,0,100,0,81,0,82,
+ 0,88,0,87,0,110,20,4,0,116,9,107,10,114,106,1,
+ 0,1,0,1,0,100,0,83,0,88,0,124,5,83,0,41,
+ 5,78,122,5,37,100,46,37,100,114,58,0,0,0,41,2,
+ 114,122,0,0,0,90,11,115,121,115,95,118,101,114,115,105,
+ 111,110,114,32,0,0,0,41,10,218,11,68,69,66,85,71,
+ 95,66,85,73,76,68,218,18,82,69,71,73,83,84,82,89,
+ 95,75,69,89,95,68,69,66,85,71,218,12,82,69,71,73,
+ 83,84,82,89,95,75,69,89,114,50,0,0,0,114,8,0,
+ 0,0,218,12,118,101,114,115,105,111,110,95,105,110,102,111,
+ 114,168,0,0,0,114,166,0,0,0,90,10,81,117,101,114,
+ 121,86,97,108,117,101,114,42,0,0,0,41,6,114,167,0,
+ 0,0,114,122,0,0,0,90,12,114,101,103,105,115,116,114,
+ 121,95,107,101,121,114,5,0,0,0,90,4,104,107,101,121,
+ 218,8,102,105,108,101,112,97,116,104,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,218,16,95,115,101,97,114,
+ 99,104,95,114,101,103,105,115,116,114,121,99,2,0,0,115,
+ 22,0,0,0,0,2,6,1,8,2,6,1,6,1,22,1,
+ 2,1,12,1,26,1,14,1,6,1,122,38,87,105,110,100,
+ 111,119,115,82,101,103,105,115,116,114,121,70,105,110,100,101,
+ 114,46,95,115,101,97,114,99,104,95,114,101,103,105,115,116,
+ 114,121,78,99,4,0,0,0,0,0,0,0,8,0,0,0,
+ 14,0,0,0,67,0,0,0,115,120,0,0,0,124,0,106,
+ 0,124,1,131,1,125,4,124,4,100,0,107,8,114,22,100,
+ 0,83,0,121,12,116,1,124,4,131,1,1,0,87,0,110,
+ 20,4,0,116,2,107,10,114,54,1,0,1,0,1,0,100,
+ 0,83,0,88,0,120,58,116,3,131,0,68,0,93,48,92,
+ 2,125,5,125,6,124,4,106,4,116,5,124,6,131,1,131,
+ 1,114,64,116,6,106,7,124,1,124,5,124,1,124,4,131,
+ 2,124,4,100,1,141,3,125,7,124,7,83,0,113,64,87,
+ 0,100,0,83,0,41,2,78,41,1,114,155,0,0,0,41,
+ 8,114,174,0,0,0,114,41,0,0,0,114,42,0,0,0,
+ 114,158,0,0,0,114,94,0,0,0,114,95,0,0,0,114,
+ 117,0,0,0,218,16,115,112,101,99,95,102,114,111,109,95,
+ 108,111,97,100,101,114,41,8,114,167,0,0,0,114,122,0,
+ 0,0,114,37,0,0,0,218,6,116,97,114,103,101,116,114,
+ 173,0,0,0,114,123,0,0,0,114,163,0,0,0,114,161,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,9,102,105,110,100,95,115,112,101,99,114,2,0,
+ 0,115,26,0,0,0,0,2,10,1,8,1,4,1,2,1,
+ 12,1,14,1,6,1,16,1,14,1,6,1,8,1,8,1,
+ 122,31,87,105,110,100,111,119,115,82,101,103,105,115,116,114,
+ 121,70,105,110,100,101,114,46,102,105,110,100,95,115,112,101,
+ 99,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
+ 0,0,67,0,0,0,115,34,0,0,0,124,0,106,0,124,
+ 1,124,2,131,2,125,3,124,3,100,1,107,9,114,26,124,
+ 3,106,1,83,0,100,1,83,0,100,1,83,0,41,2,122,
+ 108,70,105,110,100,32,109,111,100,117,108,101,32,110,97,109,
+ 101,100,32,105,110,32,116,104,101,32,114,101,103,105,115,116,
+ 114,121,46,10,10,32,32,32,32,32,32,32,32,84,104,105,
+ 115,32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,
+ 101,99,97,116,101,100,46,32,32,85,115,101,32,101,120,101,
+ 99,95,109,111,100,117,108,101,40,41,32,105,110,115,116,101,
+ 97,100,46,10,10,32,32,32,32,32,32,32,32,78,41,2,
+ 114,177,0,0,0,114,123,0,0,0,41,4,114,167,0,0,
+ 0,114,122,0,0,0,114,37,0,0,0,114,161,0,0,0,
+ 114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,
+ 11,102,105,110,100,95,109,111,100,117,108,101,130,2,0,0,
+ 115,8,0,0,0,0,7,12,1,8,1,6,2,122,33,87,
+ 105,110,100,111,119,115,82,101,103,105,115,116,114,121,70,105,
+ 110,100,101,114,46,102,105,110,100,95,109,111,100,117,108,101,
+ 41,2,78,78,41,1,78,41,12,114,107,0,0,0,114,106,
+ 0,0,0,114,108,0,0,0,114,109,0,0,0,114,171,0,
+ 0,0,114,170,0,0,0,114,169,0,0,0,218,11,99,108,
+ 97,115,115,109,101,116,104,111,100,114,168,0,0,0,114,174,
+ 0,0,0,114,177,0,0,0,114,178,0,0,0,114,4,0,
+ 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,165,0,0,0,80,2,0,0,115,20,0,0,0,8,
+ 2,4,3,4,3,4,2,4,2,12,7,12,15,2,1,12,
+ 15,2,1,114,165,0,0,0,99,0,0,0,0,0,0,0,
+ 0,0,0,0,0,2,0,0,0,64,0,0,0,115,48,0,
+ 0,0,101,0,90,1,100,0,90,2,100,1,90,3,100,2,
+ 100,3,132,0,90,4,100,4,100,5,132,0,90,5,100,6,
+ 100,7,132,0,90,6,100,8,100,9,132,0,90,7,100,10,
+ 83,0,41,11,218,13,95,76,111,97,100,101,114,66,97,115,
+ 105,99,115,122,83,66,97,115,101,32,99,108,97,115,115,32,
+ 111,102,32,99,111,109,109,111,110,32,99,111,100,101,32,110,
+ 101,101,100,101,100,32,98,121,32,98,111,116,104,32,83,111,
+ 117,114,99,101,76,111,97,100,101,114,32,97,110,100,10,32,
+ 32,32,32,83,111,117,114,99,101,108,101,115,115,70,105,108,
+ 101,76,111,97,100,101,114,46,99,2,0,0,0,0,0,0,
+ 0,5,0,0,0,3,0,0,0,67,0,0,0,115,64,0,
+ 0,0,116,0,124,0,106,1,124,1,131,1,131,1,100,1,
+ 25,0,125,2,124,2,106,2,100,2,100,1,131,2,100,3,
+ 25,0,125,3,124,1,106,3,100,2,131,1,100,4,25,0,
+ 125,4,124,3,100,5,107,2,111,62,124,4,100,5,107,3,
+ 83,0,41,6,122,141,67,111,110,99,114,101,116,101,32,105,
+ 109,112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,
+ 32,73,110,115,112,101,99,116,76,111,97,100,101,114,46,105,
+ 115,95,112,97,99,107,97,103,101,32,98,121,32,99,104,101,
+ 99,107,105,110,103,32,105,102,10,32,32,32,32,32,32,32,
+ 32,116,104,101,32,112,97,116,104,32,114,101,116,117,114,110,
+ 101,100,32,98,121,32,103,101,116,95,102,105,108,101,110,97,
+ 109,101,32,104,97,115,32,97,32,102,105,108,101,110,97,109,
+ 101,32,111,102,32,39,95,95,105,110,105,116,95,95,46,112,
+ 121,39,46,114,31,0,0,0,114,60,0,0,0,114,61,0,
+ 0,0,114,58,0,0,0,218,8,95,95,105,110,105,116,95,
+ 95,41,4,114,40,0,0,0,114,154,0,0,0,114,36,0,
+ 0,0,114,34,0,0,0,41,5,114,102,0,0,0,114,122,
+ 0,0,0,114,96,0,0,0,90,13,102,105,108,101,110,97,
+ 109,101,95,98,97,115,101,90,9,116,97,105,108,95,110,97,
+ 109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,156,0,0,0,149,2,0,0,115,8,0,0,0,0,
+ 3,18,1,16,1,14,1,122,24,95,76,111,97,100,101,114,
+ 66,97,115,105,99,115,46,105,115,95,112,97,99,107,97,103,
+ 101,99,2,0,0,0,0,0,0,0,2,0,0,0,1,0,
+ 0,0,67,0,0,0,115,4,0,0,0,100,1,83,0,41,
+ 2,122,42,85,115,101,32,100,101,102,97,117,108,116,32,115,
+ 101,109,97,110,116,105,99,115,32,102,111,114,32,109,111,100,
+ 117,108,101,32,99,114,101,97,116,105,111,110,46,78,114,4,
+ 0,0,0,41,2,114,102,0,0,0,114,161,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,13,
+ 99,114,101,97,116,101,95,109,111,100,117,108,101,157,2,0,
+ 0,115,0,0,0,0,122,27,95,76,111,97,100,101,114,66,
+ 97,115,105,99,115,46,99,114,101,97,116,101,95,109,111,100,
+ 117,108,101,99,2,0,0,0,0,0,0,0,3,0,0,0,
+ 4,0,0,0,67,0,0,0,115,56,0,0,0,124,0,106,
+ 0,124,1,106,1,131,1,125,2,124,2,100,1,107,8,114,
+ 36,116,2,100,2,106,3,124,1,106,1,131,1,131,1,130,
+ 1,116,4,106,5,116,6,124,2,124,1,106,7,131,3,1,
+ 0,100,1,83,0,41,3,122,19,69,120,101,99,117,116,101,
+ 32,116,104,101,32,109,111,100,117,108,101,46,78,122,52,99,
+ 97,110,110,111,116,32,108,111,97,100,32,109,111,100,117,108,
+ 101,32,123,33,114,125,32,119,104,101,110,32,103,101,116,95,
+ 99,111,100,101,40,41,32,114,101,116,117,114,110,115,32,78,
+ 111,110,101,41,8,218,8,103,101,116,95,99,111,100,101,114,
+ 107,0,0,0,114,101,0,0,0,114,50,0,0,0,114,117,
+ 0,0,0,218,25,95,99,97,108,108,95,119,105,116,104,95,
+ 102,114,97,109,101,115,95,114,101,109,111,118,101,100,218,4,
+ 101,120,101,99,114,113,0,0,0,41,3,114,102,0,0,0,
+ 218,6,109,111,100,117,108,101,114,143,0,0,0,114,4,0,
+ 0,0,114,4,0,0,0,114,6,0,0,0,218,11,101,120,
+ 101,99,95,109,111,100,117,108,101,160,2,0,0,115,10,0,
+ 0,0,0,2,12,1,8,1,6,1,10,1,122,25,95,76,
+ 111,97,100,101,114,66,97,115,105,99,115,46,101,120,101,99,
+ 95,109,111,100,117,108,101,99,2,0,0,0,0,0,0,0,
+ 2,0,0,0,3,0,0,0,67,0,0,0,115,12,0,0,
+ 0,116,0,106,1,124,0,124,1,131,2,83,0,41,1,122,
+ 26,84,104,105,115,32,109,111,100,117,108,101,32,105,115,32,
+ 100,101,112,114,101,99,97,116,101,100,46,41,2,114,117,0,
+ 0,0,218,17,95,108,111,97,100,95,109,111,100,117,108,101,
+ 95,115,104,105,109,41,2,114,102,0,0,0,114,122,0,0,
+ 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 218,11,108,111,97,100,95,109,111,100,117,108,101,168,2,0,
+ 0,115,2,0,0,0,0,2,122,25,95,76,111,97,100,101,
+ 114,66,97,115,105,99,115,46,108,111,97,100,95,109,111,100,
+ 117,108,101,78,41,8,114,107,0,0,0,114,106,0,0,0,
+ 114,108,0,0,0,114,109,0,0,0,114,156,0,0,0,114,
+ 182,0,0,0,114,187,0,0,0,114,189,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,114,180,0,0,0,144,2,0,0,115,10,0,0,0,
+ 8,3,4,2,8,8,8,3,8,8,114,180,0,0,0,99,
+ 0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,
+ 64,0,0,0,115,74,0,0,0,101,0,90,1,100,0,90,
+ 2,100,1,100,2,132,0,90,3,100,3,100,4,132,0,90,
+ 4,100,5,100,6,132,0,90,5,100,7,100,8,132,0,90,
+ 6,100,9,100,10,132,0,90,7,100,18,100,12,156,1,100,
+ 13,100,14,132,2,90,8,100,15,100,16,132,0,90,9,100,
+ 17,83,0,41,19,218,12,83,111,117,114,99,101,76,111,97,
+ 100,101,114,99,2,0,0,0,0,0,0,0,2,0,0,0,
+ 1,0,0,0,67,0,0,0,115,8,0,0,0,116,0,130,
+ 1,100,1,83,0,41,2,122,178,79,112,116,105,111,110,97,
+ 108,32,109,101,116,104,111,100,32,116,104,97,116,32,114,101,
+ 116,117,114,110,115,32,116,104,101,32,109,111,100,105,102,105,
+ 99,97,116,105,111,110,32,116,105,109,101,32,40,97,110,32,
+ 105,110,116,41,32,102,111,114,32,116,104,101,10,32,32,32,
+ 32,32,32,32,32,115,112,101,99,105,102,105,101,100,32,112,
+ 97,116,104,44,32,119,104,101,114,101,32,112,97,116,104,32,
+ 105,115,32,97,32,115,116,114,46,10,10,32,32,32,32,32,
+ 32,32,32,82,97,105,115,101,115,32,73,79,69,114,114,111,
+ 114,32,119,104,101,110,32,116,104,101,32,112,97,116,104,32,
+ 99,97,110,110,111,116,32,98,101,32,104,97,110,100,108,101,
+ 100,46,10,32,32,32,32,32,32,32,32,78,41,1,218,7,
+ 73,79,69,114,114,111,114,41,2,114,102,0,0,0,114,37,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,10,112,97,116,104,95,109,116,105,109,101,175,2,
+ 0,0,115,2,0,0,0,0,6,122,23,83,111,117,114,99,
+ 101,76,111,97,100,101,114,46,112,97,116,104,95,109,116,105,
+ 109,101,99,2,0,0,0,0,0,0,0,2,0,0,0,3,
+ 0,0,0,67,0,0,0,115,14,0,0,0,100,1,124,0,
+ 106,0,124,1,131,1,105,1,83,0,41,2,97,170,1,0,
+ 0,79,112,116,105,111,110,97,108,32,109,101,116,104,111,100,
+ 32,114,101,116,117,114,110,105,110,103,32,97,32,109,101,116,
+ 97,100,97,116,97,32,100,105,99,116,32,102,111,114,32,116,
+ 104,101,32,115,112,101,99,105,102,105,101,100,32,112,97,116,
+ 104,10,32,32,32,32,32,32,32,32,116,111,32,98,121,32,
+ 116,104,101,32,112,97,116,104,32,40,115,116,114,41,46,10,
+ 32,32,32,32,32,32,32,32,80,111,115,115,105,98,108,101,
+ 32,107,101,121,115,58,10,32,32,32,32,32,32,32,32,45,
+ 32,39,109,116,105,109,101,39,32,40,109,97,110,100,97,116,
+ 111,114,121,41,32,105,115,32,116,104,101,32,110,117,109,101,
+ 114,105,99,32,116,105,109,101,115,116,97,109,112,32,111,102,
+ 32,108,97,115,116,32,115,111,117,114,99,101,10,32,32,32,
+ 32,32,32,32,32,32,32,99,111,100,101,32,109,111,100,105,
+ 102,105,99,97,116,105,111,110,59,10,32,32,32,32,32,32,
+ 32,32,45,32,39,115,105,122,101,39,32,40,111,112,116,105,
+ 111,110,97,108,41,32,105,115,32,116,104,101,32,115,105,122,
+ 101,32,105,110,32,98,121,116,101,115,32,111,102,32,116,104,
+ 101,32,115,111,117,114,99,101,32,99,111,100,101,46,10,10,
+ 32,32,32,32,32,32,32,32,73,109,112,108,101,109,101,110,
+ 116,105,110,103,32,116,104,105,115,32,109,101,116,104,111,100,
+ 32,97,108,108,111,119,115,32,116,104,101,32,108,111,97,100,
+ 101,114,32,116,111,32,114,101,97,100,32,98,121,116,101,99,
+ 111,100,101,32,102,105,108,101,115,46,10,32,32,32,32,32,
+ 32,32,32,82,97,105,115,101,115,32,73,79,69,114,114,111,
+ 114,32,119,104,101,110,32,116,104,101,32,112,97,116,104,32,
+ 99,97,110,110,111,116,32,98,101,32,104,97,110,100,108,101,
+ 100,46,10,32,32,32,32,32,32,32,32,114,129,0,0,0,
+ 41,1,114,192,0,0,0,41,2,114,102,0,0,0,114,37,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,10,112,97,116,104,95,115,116,97,116,115,183,2,
+ 0,0,115,2,0,0,0,0,11,122,23,83,111,117,114,99,
+ 101,76,111,97,100,101,114,46,112,97,116,104,95,115,116,97,
+ 116,115,99,4,0,0,0,0,0,0,0,4,0,0,0,3,
+ 0,0,0,67,0,0,0,115,12,0,0,0,124,0,106,0,
+ 124,2,124,3,131,2,83,0,41,1,122,228,79,112,116,105,
+ 111,110,97,108,32,109,101,116,104,111,100,32,119,104,105,99,
+ 104,32,119,114,105,116,101,115,32,100,97,116,97,32,40,98,
+ 121,116,101,115,41,32,116,111,32,97,32,102,105,108,101,32,
+ 112,97,116,104,32,40,97,32,115,116,114,41,46,10,10,32,
+ 32,32,32,32,32,32,32,73,109,112,108,101,109,101,110,116,
+ 105,110,103,32,116,104,105,115,32,109,101,116,104,111,100,32,
+ 97,108,108,111,119,115,32,102,111,114,32,116,104,101,32,119,
+ 114,105,116,105,110,103,32,111,102,32,98,121,116,101,99,111,
+ 100,101,32,102,105,108,101,115,46,10,10,32,32,32,32,32,
+ 32,32,32,84,104,101,32,115,111,117,114,99,101,32,112,97,
+ 116,104,32,105,115,32,110,101,101,100,101,100,32,105,110,32,
+ 111,114,100,101,114,32,116,111,32,99,111,114,114,101,99,116,
+ 108,121,32,116,114,97,110,115,102,101,114,32,112,101,114,109,
+ 105,115,115,105,111,110,115,10,32,32,32,32,32,32,32,32,
+ 41,1,218,8,115,101,116,95,100,97,116,97,41,4,114,102,
+ 0,0,0,114,92,0,0,0,90,10,99,97,99,104,101,95,
+ 112,97,116,104,114,55,0,0,0,114,4,0,0,0,114,4,
+ 0,0,0,114,6,0,0,0,218,15,95,99,97,99,104,101,
+ 95,98,121,116,101,99,111,100,101,196,2,0,0,115,2,0,
+ 0,0,0,8,122,28,83,111,117,114,99,101,76,111,97,100,
+ 101,114,46,95,99,97,99,104,101,95,98,121,116,101,99,111,
+ 100,101,99,3,0,0,0,0,0,0,0,3,0,0,0,1,
+ 0,0,0,67,0,0,0,115,4,0,0,0,100,1,83,0,
+ 41,2,122,150,79,112,116,105,111,110,97,108,32,109,101,116,
+ 104,111,100,32,119,104,105,99,104,32,119,114,105,116,101,115,
+ 32,100,97,116,97,32,40,98,121,116,101,115,41,32,116,111,
+ 32,97,32,102,105,108,101,32,112,97,116,104,32,40,97,32,
+ 115,116,114,41,46,10,10,32,32,32,32,32,32,32,32,73,
+ 109,112,108,101,109,101,110,116,105,110,103,32,116,104,105,115,
+ 32,109,101,116,104,111,100,32,97,108,108,111,119,115,32,102,
+ 111,114,32,116,104,101,32,119,114,105,116,105,110,103,32,111,
+ 102,32,98,121,116,101,99,111,100,101,32,102,105,108,101,115,
+ 46,10,32,32,32,32,32,32,32,32,78,114,4,0,0,0,
+ 41,3,114,102,0,0,0,114,37,0,0,0,114,55,0,0,
+ 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 114,194,0,0,0,206,2,0,0,115,0,0,0,0,122,21,
+ 83,111,117,114,99,101,76,111,97,100,101,114,46,115,101,116,
+ 95,100,97,116,97,99,2,0,0,0,0,0,0,0,5,0,
+ 0,0,16,0,0,0,67,0,0,0,115,82,0,0,0,124,
+ 0,106,0,124,1,131,1,125,2,121,14,124,0,106,1,124,
+ 2,131,1,125,3,87,0,110,48,4,0,116,2,107,10,114,
+ 72,1,0,125,4,1,0,122,20,116,3,100,1,124,1,100,
+ 2,141,2,124,4,130,2,87,0,89,0,100,3,100,3,125,
+ 4,126,4,88,0,110,2,88,0,116,4,124,3,131,1,83,
+ 0,41,4,122,52,67,111,110,99,114,101,116,101,32,105,109,
+ 112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,32,
+ 73,110,115,112,101,99,116,76,111,97,100,101,114,46,103,101,
+ 116,95,115,111,117,114,99,101,46,122,39,115,111,117,114,99,
+ 101,32,110,111,116,32,97,118,97,105,108,97,98,108,101,32,
+ 116,104,114,111,117,103,104,32,103,101,116,95,100,97,116,97,
+ 40,41,41,1,114,100,0,0,0,78,41,5,114,154,0,0,
+ 0,218,8,103,101,116,95,100,97,116,97,114,42,0,0,0,
+ 114,101,0,0,0,114,152,0,0,0,41,5,114,102,0,0,
+ 0,114,122,0,0,0,114,37,0,0,0,114,150,0,0,0,
+ 218,3,101,120,99,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,218,10,103,101,116,95,115,111,117,114,99,101,
+ 213,2,0,0,115,14,0,0,0,0,2,10,1,2,1,14,
+ 1,16,1,4,1,28,1,122,23,83,111,117,114,99,101,76,
+ 111,97,100,101,114,46,103,101,116,95,115,111,117,114,99,101,
+ 114,31,0,0,0,41,1,218,9,95,111,112,116,105,109,105,
+ 122,101,99,3,0,0,0,1,0,0,0,4,0,0,0,8,
+ 0,0,0,67,0,0,0,115,22,0,0,0,116,0,106,1,
+ 116,2,124,1,124,2,100,1,100,2,124,3,100,3,141,6,
+ 83,0,41,4,122,130,82,101,116,117,114,110,32,116,104,101,
+ 32,99,111,100,101,32,111,98,106,101,99,116,32,99,111,109,
+ 112,105,108,101,100,32,102,114,111,109,32,115,111,117,114,99,
+ 101,46,10,10,32,32,32,32,32,32,32,32,84,104,101,32,
+ 39,100,97,116,97,39,32,97,114,103,117,109,101,110,116,32,
+ 99,97,110,32,98,101,32,97,110,121,32,111,98,106,101,99,
+ 116,32,116,121,112,101,32,116,104,97,116,32,99,111,109,112,
+ 105,108,101,40,41,32,115,117,112,112,111,114,116,115,46,10,
+ 32,32,32,32,32,32,32,32,114,185,0,0,0,84,41,2,
+ 218,12,100,111,110,116,95,105,110,104,101,114,105,116,114,70,
+ 0,0,0,41,3,114,117,0,0,0,114,184,0,0,0,218,
+ 7,99,111,109,112,105,108,101,41,4,114,102,0,0,0,114,
+ 55,0,0,0,114,37,0,0,0,114,199,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,218,14,115,
+ 111,117,114,99,101,95,116,111,95,99,111,100,101,223,2,0,
+ 0,115,4,0,0,0,0,5,12,1,122,27,83,111,117,114,
+ 99,101,76,111,97,100,101,114,46,115,111,117,114,99,101,95,
+ 116,111,95,99,111,100,101,99,2,0,0,0,0,0,0,0,
+ 10,0,0,0,43,0,0,0,67,0,0,0,115,94,1,0,
+ 0,124,0,106,0,124,1,131,1,125,2,100,1,125,3,121,
+ 12,116,1,124,2,131,1,125,4,87,0,110,24,4,0,116,
+ 2,107,10,114,50,1,0,1,0,1,0,100,1,125,4,89,
+ 0,110,162,88,0,121,14,124,0,106,3,124,2,131,1,125,
+ 5,87,0,110,20,4,0,116,4,107,10,114,86,1,0,1,
+ 0,1,0,89,0,110,126,88,0,116,5,124,5,100,2,25,
+ 0,131,1,125,3,121,14,124,0,106,6,124,4,131,1,125,
+ 6,87,0,110,20,4,0,116,7,107,10,114,134,1,0,1,
+ 0,1,0,89,0,110,78,88,0,121,20,116,8,124,6,124,
+ 5,124,1,124,4,100,3,141,4,125,7,87,0,110,24,4,
+ 0,116,9,116,10,102,2,107,10,114,180,1,0,1,0,1,
+ 0,89,0,110,32,88,0,116,11,106,12,100,4,124,4,124,
+ 2,131,3,1,0,116,13,124,7,124,1,124,4,124,2,100,
+ 5,141,4,83,0,124,0,106,6,124,2,131,1,125,8,124,
+ 0,106,14,124,8,124,2,131,2,125,9,116,11,106,12,100,
+ 6,124,2,131,2,1,0,116,15,106,16,12,0,144,1,114,
+ 90,124,4,100,1,107,9,144,1,114,90,124,3,100,1,107,
+ 9,144,1,114,90,116,17,124,9,124,3,116,18,124,8,131,
+ 1,131,3,125,6,121,30,124,0,106,19,124,2,124,4,124,
+ 6,131,3,1,0,116,11,106,12,100,7,124,4,131,2,1,
+ 0,87,0,110,22,4,0,116,2,107,10,144,1,114,88,1,
+ 0,1,0,1,0,89,0,110,2,88,0,124,9,83,0,41,
+ 8,122,190,67,111,110,99,114,101,116,101,32,105,109,112,108,
+ 101,109,101,110,116,97,116,105,111,110,32,111,102,32,73,110,
+ 115,112,101,99,116,76,111,97,100,101,114,46,103,101,116,95,
+ 99,111,100,101,46,10,10,32,32,32,32,32,32,32,32,82,
+ 101,97,100,105,110,103,32,111,102,32,98,121,116,101,99,111,
+ 100,101,32,114,101,113,117,105,114,101,115,32,112,97,116,104,
+ 95,115,116,97,116,115,32,116,111,32,98,101,32,105,109,112,
+ 108,101,109,101,110,116,101,100,46,32,84,111,32,119,114,105,
+ 116,101,10,32,32,32,32,32,32,32,32,98,121,116,101,99,
+ 111,100,101,44,32,115,101,116,95,100,97,116,97,32,109,117,
+ 115,116,32,97,108,115,111,32,98,101,32,105,109,112,108,101,
+ 109,101,110,116,101,100,46,10,10,32,32,32,32,32,32,32,
+ 32,78,114,129,0,0,0,41,3,114,135,0,0,0,114,100,
+ 0,0,0,114,37,0,0,0,122,13,123,125,32,109,97,116,
+ 99,104,101,115,32,123,125,41,3,114,100,0,0,0,114,91,
+ 0,0,0,114,92,0,0,0,122,19,99,111,100,101,32,111,
+ 98,106,101,99,116,32,102,114,111,109,32,123,125,122,10,119,
+ 114,111,116,101,32,123,33,114,125,41,20,114,154,0,0,0,
+ 114,81,0,0,0,114,68,0,0,0,114,193,0,0,0,114,
+ 191,0,0,0,114,16,0,0,0,114,196,0,0,0,114,42,
+ 0,0,0,114,138,0,0,0,114,101,0,0,0,114,133,0,
+ 0,0,114,117,0,0,0,114,132,0,0,0,114,144,0,0,
+ 0,114,202,0,0,0,114,8,0,0,0,218,19,100,111,110,
+ 116,95,119,114,105,116,101,95,98,121,116,101,99,111,100,101,
+ 114,147,0,0,0,114,33,0,0,0,114,195,0,0,0,41,
+ 10,114,102,0,0,0,114,122,0,0,0,114,92,0,0,0,
+ 114,136,0,0,0,114,91,0,0,0,218,2,115,116,114,55,
+ 0,0,0,218,10,98,121,116,101,115,95,100,97,116,97,114,
+ 150,0,0,0,90,11,99,111,100,101,95,111,98,106,101,99,
+ 116,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 114,183,0,0,0,231,2,0,0,115,78,0,0,0,0,7,
+ 10,1,4,1,2,1,12,1,14,1,10,2,2,1,14,1,
+ 14,1,6,2,12,1,2,1,14,1,14,1,6,2,2,1,
+ 4,1,4,1,12,1,18,1,6,2,8,1,6,1,6,1,
+ 2,1,8,1,10,1,12,1,12,1,20,1,10,1,6,1,
+ 10,1,2,1,14,1,16,1,16,1,6,1,122,21,83,111,
+ 117,114,99,101,76,111,97,100,101,114,46,103,101,116,95,99,
+ 111,100,101,78,114,89,0,0,0,41,10,114,107,0,0,0,
+ 114,106,0,0,0,114,108,0,0,0,114,192,0,0,0,114,
+ 193,0,0,0,114,195,0,0,0,114,194,0,0,0,114,198,
+ 0,0,0,114,202,0,0,0,114,183,0,0,0,114,4,0,
+ 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,190,0,0,0,173,2,0,0,115,14,0,0,0,8,
+ 2,8,8,8,13,8,10,8,7,8,10,14,8,114,190,0,
+ 0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,4,
+ 0,0,0,0,0,0,0,115,80,0,0,0,101,0,90,1,
+ 100,0,90,2,100,1,90,3,100,2,100,3,132,0,90,4,
+ 100,4,100,5,132,0,90,5,100,6,100,7,132,0,90,6,
+ 101,7,135,0,102,1,100,8,100,9,132,8,131,1,90,8,
+ 101,7,100,10,100,11,132,0,131,1,90,9,100,12,100,13,
+ 132,0,90,10,135,0,4,0,90,11,83,0,41,14,218,10,
+ 70,105,108,101,76,111,97,100,101,114,122,103,66,97,115,101,
+ 32,102,105,108,101,32,108,111,97,100,101,114,32,99,108,97,
+ 115,115,32,119,104,105,99,104,32,105,109,112,108,101,109,101,
+ 110,116,115,32,116,104,101,32,108,111,97,100,101,114,32,112,
+ 114,111,116,111,99,111,108,32,109,101,116,104,111,100,115,32,
+ 116,104,97,116,10,32,32,32,32,114,101,113,117,105,114,101,
+ 32,102,105,108,101,32,115,121,115,116,101,109,32,117,115,97,
+ 103,101,46,99,3,0,0,0,0,0,0,0,3,0,0,0,
+ 2,0,0,0,67,0,0,0,115,16,0,0,0,124,1,124,
+ 0,95,0,124,2,124,0,95,1,100,1,83,0,41,2,122,
+ 75,67,97,99,104,101,32,116,104,101,32,109,111,100,117,108,
+ 101,32,110,97,109,101,32,97,110,100,32,116,104,101,32,112,
+ 97,116,104,32,116,111,32,116,104,101,32,102,105,108,101,32,
+ 102,111,117,110,100,32,98,121,32,116,104,101,10,32,32,32,
+ 32,32,32,32,32,102,105,110,100,101,114,46,78,41,2,114,
+ 100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,0,
+ 114,122,0,0,0,114,37,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,181,0,0,0,32,3,
+ 0,0,115,4,0,0,0,0,3,6,1,122,19,70,105,108,
+ 101,76,111,97,100,101,114,46,95,95,105,110,105,116,95,95,
+ 99,2,0,0,0,0,0,0,0,2,0,0,0,2,0,0,
+ 0,67,0,0,0,115,24,0,0,0,124,0,106,0,124,1,
+ 106,0,107,2,111,22,124,0,106,1,124,1,106,1,107,2,
+ 83,0,41,1,78,41,2,218,9,95,95,99,108,97,115,115,
+ 95,95,114,113,0,0,0,41,2,114,102,0,0,0,218,5,
+ 111,116,104,101,114,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,218,6,95,95,101,113,95,95,38,3,0,0,
+ 115,4,0,0,0,0,1,12,1,122,17,70,105,108,101,76,
+ 111,97,100,101,114,46,95,95,101,113,95,95,99,1,0,0,
+ 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
+ 0,115,20,0,0,0,116,0,124,0,106,1,131,1,116,0,
+ 124,0,106,2,131,1,65,0,83,0,41,1,78,41,3,218,
+ 4,104,97,115,104,114,100,0,0,0,114,37,0,0,0,41,
+ 1,114,102,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,218,8,95,95,104,97,115,104,95,95,42,
+ 3,0,0,115,2,0,0,0,0,1,122,19,70,105,108,101,
+ 76,111,97,100,101,114,46,95,95,104,97,115,104,95,95,99,
+ 2,0,0,0,0,0,0,0,2,0,0,0,3,0,0,0,
+ 3,0,0,0,115,16,0,0,0,116,0,116,1,124,0,131,
+ 2,106,2,124,1,131,1,83,0,41,1,122,100,76,111,97,
+ 100,32,97,32,109,111,100,117,108,101,32,102,114,111,109,32,
+ 97,32,102,105,108,101,46,10,10,32,32,32,32,32,32,32,
+ 32,84,104,105,115,32,109,101,116,104,111,100,32,105,115,32,
+ 100,101,112,114,101,99,97,116,101,100,46,32,32,85,115,101,
+ 32,101,120,101,99,95,109,111,100,117,108,101,40,41,32,105,
+ 110,115,116,101,97,100,46,10,10,32,32,32,32,32,32,32,
+ 32,41,3,218,5,115,117,112,101,114,114,206,0,0,0,114,
+ 189,0,0,0,41,2,114,102,0,0,0,114,122,0,0,0,
+ 41,1,114,207,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,189,0,0,0,45,3,0,0,115,2,0,0,0,0,
+ 10,122,22,70,105,108,101,76,111,97,100,101,114,46,108,111,
+ 97,100,95,109,111,100,117,108,101,99,2,0,0,0,0,0,
+ 0,0,2,0,0,0,1,0,0,0,67,0,0,0,115,6,
+ 0,0,0,124,0,106,0,83,0,41,1,122,58,82,101,116,
+ 117,114,110,32,116,104,101,32,112,97,116,104,32,116,111,32,
+ 116,104,101,32,115,111,117,114,99,101,32,102,105,108,101,32,
+ 97,115,32,102,111,117,110,100,32,98,121,32,116,104,101,32,
+ 102,105,110,100,101,114,46,41,1,114,37,0,0,0,41,2,
+ 114,102,0,0,0,114,122,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,154,0,0,0,57,3,
+ 0,0,115,2,0,0,0,0,3,122,23,70,105,108,101,76,
+ 111,97,100,101,114,46,103,101,116,95,102,105,108,101,110,97,
+ 109,101,99,2,0,0,0,0,0,0,0,3,0,0,0,9,
+ 0,0,0,67,0,0,0,115,32,0,0,0,116,0,106,1,
+ 124,1,100,1,131,2,143,10,125,2,124,2,106,2,131,0,
+ 83,0,81,0,82,0,88,0,100,2,83,0,41,3,122,39,
+ 82,101,116,117,114,110,32,116,104,101,32,100,97,116,97,32,
+ 102,114,111,109,32,112,97,116,104,32,97,115,32,114,97,119,
+ 32,98,121,116,101,115,46,218,1,114,78,41,3,114,52,0,
+ 0,0,114,53,0,0,0,90,4,114,101,97,100,41,3,114,
+ 102,0,0,0,114,37,0,0,0,114,56,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,114,196,0,
+ 0,0,62,3,0,0,115,4,0,0,0,0,2,14,1,122,
+ 19,70,105,108,101,76,111,97,100,101,114,46,103,101,116,95,
+ 100,97,116,97,41,12,114,107,0,0,0,114,106,0,0,0,
+ 114,108,0,0,0,114,109,0,0,0,114,181,0,0,0,114,
+ 209,0,0,0,114,211,0,0,0,114,119,0,0,0,114,189,
+ 0,0,0,114,154,0,0,0,114,196,0,0,0,90,13,95,
+ 95,99,108,97,115,115,99,101,108,108,95,95,114,4,0,0,
+ 0,114,4,0,0,0,41,1,114,207,0,0,0,114,6,0,
+ 0,0,114,206,0,0,0,27,3,0,0,115,14,0,0,0,
+ 8,3,4,2,8,6,8,4,8,3,16,12,12,5,114,206,
+ 0,0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,
+ 3,0,0,0,64,0,0,0,115,46,0,0,0,101,0,90,
+ 1,100,0,90,2,100,1,90,3,100,2,100,3,132,0,90,
+ 4,100,4,100,5,132,0,90,5,100,6,100,7,156,1,100,
+ 8,100,9,132,2,90,6,100,10,83,0,41,11,218,16,83,
+ 111,117,114,99,101,70,105,108,101,76,111,97,100,101,114,122,
+ 62,67,111,110,99,114,101,116,101,32,105,109,112,108,101,109,
+ 101,110,116,97,116,105,111,110,32,111,102,32,83,111,117,114,
+ 99,101,76,111,97,100,101,114,32,117,115,105,110,103,32,116,
+ 104,101,32,102,105,108,101,32,115,121,115,116,101,109,46,99,
+ 2,0,0,0,0,0,0,0,3,0,0,0,3,0,0,0,
+ 67,0,0,0,115,22,0,0,0,116,0,124,1,131,1,125,
+ 2,124,2,106,1,124,2,106,2,100,1,156,2,83,0,41,
+ 2,122,33,82,101,116,117,114,110,32,116,104,101,32,109,101,
+ 116,97,100,97,116,97,32,102,111,114,32,116,104,101,32,112,
+ 97,116,104,46,41,2,114,129,0,0,0,114,130,0,0,0,
+ 41,3,114,41,0,0,0,218,8,115,116,95,109,116,105,109,
+ 101,90,7,115,116,95,115,105,122,101,41,3,114,102,0,0,
+ 0,114,37,0,0,0,114,204,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,193,0,0,0,72,
+ 3,0,0,115,4,0,0,0,0,2,8,1,122,27,83,111,
+ 117,114,99,101,70,105,108,101,76,111,97,100,101,114,46,112,
+ 97,116,104,95,115,116,97,116,115,99,4,0,0,0,0,0,
+ 0,0,5,0,0,0,5,0,0,0,67,0,0,0,115,24,
+ 0,0,0,116,0,124,1,131,1,125,4,124,0,106,1,124,
+ 2,124,3,124,4,100,1,141,3,83,0,41,2,78,41,1,
+ 218,5,95,109,111,100,101,41,2,114,99,0,0,0,114,194,
+ 0,0,0,41,5,114,102,0,0,0,114,92,0,0,0,114,
+ 91,0,0,0,114,55,0,0,0,114,44,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,114,195,0,
+ 0,0,77,3,0,0,115,4,0,0,0,0,2,8,1,122,
+ 32,83,111,117,114,99,101,70,105,108,101,76,111,97,100,101,
+ 114,46,95,99,97,99,104,101,95,98,121,116,101,99,111,100,
+ 101,105,182,1,0,0,41,1,114,216,0,0,0,99,3,0,
+ 0,0,1,0,0,0,9,0,0,0,17,0,0,0,67,0,
+ 0,0,115,250,0,0,0,116,0,124,1,131,1,92,2,125,
+ 4,125,5,103,0,125,6,120,40,124,4,114,56,116,1,124,
+ 4,131,1,12,0,114,56,116,0,124,4,131,1,92,2,125,
+ 4,125,7,124,6,106,2,124,7,131,1,1,0,113,18,87,
+ 0,120,108,116,3,124,6,131,1,68,0,93,96,125,7,116,
+ 4,124,4,124,7,131,2,125,4,121,14,116,5,106,6,124,
+ 4,131,1,1,0,87,0,113,68,4,0,116,7,107,10,114,
+ 118,1,0,1,0,1,0,119,68,89,0,113,68,4,0,116,
+ 8,107,10,114,162,1,0,125,8,1,0,122,18,116,9,106,
+ 10,100,1,124,4,124,8,131,3,1,0,100,2,83,0,100,
+ 2,125,8,126,8,88,0,113,68,88,0,113,68,87,0,121,
+ 28,116,11,124,1,124,2,124,3,131,3,1,0,116,9,106,
+ 10,100,3,124,1,131,2,1,0,87,0,110,48,4,0,116,
+ 8,107,10,114,244,1,0,125,8,1,0,122,20,116,9,106,
+ 10,100,1,124,1,124,8,131,3,1,0,87,0,89,0,100,
+ 2,100,2,125,8,126,8,88,0,110,2,88,0,100,2,83,
+ 0,41,4,122,27,87,114,105,116,101,32,98,121,116,101,115,
+ 32,100,97,116,97,32,116,111,32,97,32,102,105,108,101,46,
+ 122,27,99,111,117,108,100,32,110,111,116,32,99,114,101,97,
+ 116,101,32,123,33,114,125,58,32,123,33,114,125,78,122,12,
+ 99,114,101,97,116,101,100,32,123,33,114,125,41,12,114,40,
+ 0,0,0,114,48,0,0,0,114,160,0,0,0,114,35,0,
+ 0,0,114,30,0,0,0,114,3,0,0,0,90,5,109,107,
+ 100,105,114,218,15,70,105,108,101,69,120,105,115,116,115,69,
+ 114,114,111,114,114,42,0,0,0,114,117,0,0,0,114,132,
+ 0,0,0,114,57,0,0,0,41,9,114,102,0,0,0,114,
+ 37,0,0,0,114,55,0,0,0,114,216,0,0,0,218,6,
+ 112,97,114,101,110,116,114,96,0,0,0,114,29,0,0,0,
+ 114,25,0,0,0,114,197,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,194,0,0,0,82,3,
+ 0,0,115,42,0,0,0,0,2,12,1,4,2,16,1,12,
+ 1,14,2,14,1,10,1,2,1,14,1,14,2,6,1,16,
+ 3,6,1,8,1,20,1,2,1,12,1,16,1,16,2,8,
+ 1,122,25,83,111,117,114,99,101,70,105,108,101,76,111,97,
+ 100,101,114,46,115,101,116,95,100,97,116,97,78,41,7,114,
+ 107,0,0,0,114,106,0,0,0,114,108,0,0,0,114,109,
+ 0,0,0,114,193,0,0,0,114,195,0,0,0,114,194,0,
+ 0,0,114,4,0,0,0,114,4,0,0,0,114,4,0,0,
+ 0,114,6,0,0,0,114,214,0,0,0,68,3,0,0,115,
+ 8,0,0,0,8,2,4,2,8,5,8,5,114,214,0,0,
+ 0,99,0,0,0,0,0,0,0,0,0,0,0,0,2,0,
+ 0,0,64,0,0,0,115,32,0,0,0,101,0,90,1,100,
+ 0,90,2,100,1,90,3,100,2,100,3,132,0,90,4,100,
+ 4,100,5,132,0,90,5,100,6,83,0,41,7,218,20,83,
+ 111,117,114,99,101,108,101,115,115,70,105,108,101,76,111,97,
+ 100,101,114,122,45,76,111,97,100,101,114,32,119,104,105,99,
+ 104,32,104,97,110,100,108,101,115,32,115,111,117,114,99,101,
+ 108,101,115,115,32,102,105,108,101,32,105,109,112,111,114,116,
+ 115,46,99,2,0,0,0,0,0,0,0,5,0,0,0,5,
+ 0,0,0,67,0,0,0,115,48,0,0,0,124,0,106,0,
+ 124,1,131,1,125,2,124,0,106,1,124,2,131,1,125,3,
+ 116,2,124,3,124,1,124,2,100,1,141,3,125,4,116,3,
+ 124,4,124,1,124,2,100,2,141,3,83,0,41,3,78,41,
+ 2,114,100,0,0,0,114,37,0,0,0,41,2,114,100,0,
+ 0,0,114,91,0,0,0,41,4,114,154,0,0,0,114,196,
+ 0,0,0,114,138,0,0,0,114,144,0,0,0,41,5,114,
+ 102,0,0,0,114,122,0,0,0,114,37,0,0,0,114,55,
+ 0,0,0,114,205,0,0,0,114,4,0,0,0,114,4,0,
+ 0,0,114,6,0,0,0,114,183,0,0,0,117,3,0,0,
+ 115,8,0,0,0,0,1,10,1,10,1,14,1,122,29,83,
+ 111,117,114,99,101,108,101,115,115,70,105,108,101,76,111,97,
+ 100,101,114,46,103,101,116,95,99,111,100,101,99,2,0,0,
+ 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
+ 0,115,4,0,0,0,100,1,83,0,41,2,122,39,82,101,
+ 116,117,114,110,32,78,111,110,101,32,97,115,32,116,104,101,
+ 114,101,32,105,115,32,110,111,32,115,111,117,114,99,101,32,
+ 99,111,100,101,46,78,114,4,0,0,0,41,2,114,102,0,
+ 0,0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,
+ 0,114,6,0,0,0,114,198,0,0,0,123,3,0,0,115,
+ 2,0,0,0,0,2,122,31,83,111,117,114,99,101,108,101,
+ 115,115,70,105,108,101,76,111,97,100,101,114,46,103,101,116,
+ 95,115,111,117,114,99,101,78,41,6,114,107,0,0,0,114,
+ 106,0,0,0,114,108,0,0,0,114,109,0,0,0,114,183,
+ 0,0,0,114,198,0,0,0,114,4,0,0,0,114,4,0,
+ 0,0,114,4,0,0,0,114,6,0,0,0,114,219,0,0,
+ 0,113,3,0,0,115,6,0,0,0,8,2,4,2,8,6,
+ 114,219,0,0,0,99,0,0,0,0,0,0,0,0,0,0,
+ 0,0,3,0,0,0,64,0,0,0,115,92,0,0,0,101,
+ 0,90,1,100,0,90,2,100,1,90,3,100,2,100,3,132,
+ 0,90,4,100,4,100,5,132,0,90,5,100,6,100,7,132,
+ 0,90,6,100,8,100,9,132,0,90,7,100,10,100,11,132,
+ 0,90,8,100,12,100,13,132,0,90,9,100,14,100,15,132,
+ 0,90,10,100,16,100,17,132,0,90,11,101,12,100,18,100,
+ 19,132,0,131,1,90,13,100,20,83,0,41,21,218,19,69,
+ 120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,
+ 101,114,122,93,76,111,97,100,101,114,32,102,111,114,32,101,
+ 120,116,101,110,115,105,111,110,32,109,111,100,117,108,101,115,
+ 46,10,10,32,32,32,32,84,104,101,32,99,111,110,115,116,
+ 114,117,99,116,111,114,32,105,115,32,100,101,115,105,103,110,
+ 101,100,32,116,111,32,119,111,114,107,32,119,105,116,104,32,
+ 70,105,108,101,70,105,110,100,101,114,46,10,10,32,32,32,
+ 32,99,3,0,0,0,0,0,0,0,3,0,0,0,2,0,
+ 0,0,67,0,0,0,115,16,0,0,0,124,1,124,0,95,
+ 0,124,2,124,0,95,1,100,0,83,0,41,1,78,41,2,
+ 114,100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,
+ 0,114,100,0,0,0,114,37,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,181,0,0,0,140,
+ 3,0,0,115,4,0,0,0,0,1,6,1,122,28,69,120,
+ 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
+ 114,46,95,95,105,110,105,116,95,95,99,2,0,0,0,0,
+ 0,0,0,2,0,0,0,2,0,0,0,67,0,0,0,115,
+ 24,0,0,0,124,0,106,0,124,1,106,0,107,2,111,22,
+ 124,0,106,1,124,1,106,1,107,2,83,0,41,1,78,41,
+ 2,114,207,0,0,0,114,113,0,0,0,41,2,114,102,0,
+ 0,0,114,208,0,0,0,114,4,0,0,0,114,4,0,0,
+ 0,114,6,0,0,0,114,209,0,0,0,144,3,0,0,115,
+ 4,0,0,0,0,1,12,1,122,26,69,120,116,101,110,115,
+ 105,111,110,70,105,108,101,76,111,97,100,101,114,46,95,95,
+ 101,113,95,95,99,1,0,0,0,0,0,0,0,1,0,0,
+ 0,3,0,0,0,67,0,0,0,115,20,0,0,0,116,0,
+ 124,0,106,1,131,1,116,0,124,0,106,2,131,1,65,0,
+ 83,0,41,1,78,41,3,114,210,0,0,0,114,100,0,0,
+ 0,114,37,0,0,0,41,1,114,102,0,0,0,114,4,0,
+ 0,0,114,4,0,0,0,114,6,0,0,0,114,211,0,0,
+ 0,148,3,0,0,115,2,0,0,0,0,1,122,28,69,120,
+ 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
+ 114,46,95,95,104,97,115,104,95,95,99,2,0,0,0,0,
+ 0,0,0,3,0,0,0,4,0,0,0,67,0,0,0,115,
+ 36,0,0,0,116,0,106,1,116,2,106,3,124,1,131,2,
+ 125,2,116,0,106,4,100,1,124,1,106,5,124,0,106,6,
+ 131,3,1,0,124,2,83,0,41,2,122,38,67,114,101,97,
+ 116,101,32,97,110,32,117,110,105,116,105,97,108,105,122,101,
+ 100,32,101,120,116,101,110,115,105,111,110,32,109,111,100,117,
+ 108,101,122,38,101,120,116,101,110,115,105,111,110,32,109,111,
+ 100,117,108,101,32,123,33,114,125,32,108,111,97,100,101,100,
+ 32,102,114,111,109,32,123,33,114,125,41,7,114,117,0,0,
+ 0,114,184,0,0,0,114,142,0,0,0,90,14,99,114,101,
+ 97,116,101,95,100,121,110,97,109,105,99,114,132,0,0,0,
+ 114,100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,
+ 0,114,161,0,0,0,114,186,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,182,0,0,0,151,
+ 3,0,0,115,10,0,0,0,0,2,4,1,10,1,6,1,
+ 12,1,122,33,69,120,116,101,110,115,105,111,110,70,105,108,
+ 101,76,111,97,100,101,114,46,99,114,101,97,116,101,95,109,
+ 111,100,117,108,101,99,2,0,0,0,0,0,0,0,2,0,
+ 0,0,4,0,0,0,67,0,0,0,115,36,0,0,0,116,
+ 0,106,1,116,2,106,3,124,1,131,2,1,0,116,0,106,
+ 4,100,1,124,0,106,5,124,0,106,6,131,3,1,0,100,
+ 2,83,0,41,3,122,30,73,110,105,116,105,97,108,105,122,
+ 101,32,97,110,32,101,120,116,101,110,115,105,111,110,32,109,
+ 111,100,117,108,101,122,40,101,120,116,101,110,115,105,111,110,
+ 32,109,111,100,117,108,101,32,123,33,114,125,32,101,120,101,
+ 99,117,116,101,100,32,102,114,111,109,32,123,33,114,125,78,
+ 41,7,114,117,0,0,0,114,184,0,0,0,114,142,0,0,
+ 0,90,12,101,120,101,99,95,100,121,110,97,109,105,99,114,
+ 132,0,0,0,114,100,0,0,0,114,37,0,0,0,41,2,
+ 114,102,0,0,0,114,186,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,187,0,0,0,159,3,
+ 0,0,115,6,0,0,0,0,2,14,1,6,1,122,31,69,
+ 120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,
+ 101,114,46,101,120,101,99,95,109,111,100,117,108,101,99,2,
+ 0,0,0,0,0,0,0,2,0,0,0,4,0,0,0,3,
+ 0,0,0,115,36,0,0,0,116,0,124,0,106,1,131,1,
+ 100,1,25,0,137,0,116,2,135,0,102,1,100,2,100,3,
+ 132,8,116,3,68,0,131,1,131,1,83,0,41,4,122,49,
+ 82,101,116,117,114,110,32,84,114,117,101,32,105,102,32,116,
+ 104,101,32,101,120,116,101,110,115,105,111,110,32,109,111,100,
+ 117,108,101,32,105,115,32,97,32,112,97,99,107,97,103,101,
+ 46,114,31,0,0,0,99,1,0,0,0,0,0,0,0,2,
+ 0,0,0,4,0,0,0,51,0,0,0,115,26,0,0,0,
+ 124,0,93,18,125,1,136,0,100,0,124,1,23,0,107,2,
+ 86,0,1,0,113,2,100,1,83,0,41,2,114,181,0,0,
+ 0,78,114,4,0,0,0,41,2,114,24,0,0,0,218,6,
+ 115,117,102,102,105,120,41,1,218,9,102,105,108,101,95,110,
+ 97,109,101,114,4,0,0,0,114,6,0,0,0,250,9,60,
+ 103,101,110,101,120,112,114,62,168,3,0,0,115,2,0,0,
+ 0,4,1,122,49,69,120,116,101,110,115,105,111,110,70,105,
+ 108,101,76,111,97,100,101,114,46,105,115,95,112,97,99,107,
+ 97,103,101,46,60,108,111,99,97,108,115,62,46,60,103,101,
+ 110,101,120,112,114,62,41,4,114,40,0,0,0,114,37,0,
+ 0,0,218,3,97,110,121,218,18,69,88,84,69,78,83,73,
+ 79,78,95,83,85,70,70,73,88,69,83,41,2,114,102,0,
+ 0,0,114,122,0,0,0,114,4,0,0,0,41,1,114,222,
+ 0,0,0,114,6,0,0,0,114,156,0,0,0,165,3,0,
+ 0,115,6,0,0,0,0,2,14,1,12,1,122,30,69,120,
+ 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
+ 114,46,105,115,95,112,97,99,107,97,103,101,99,2,0,0,
+ 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
+ 0,115,4,0,0,0,100,1,83,0,41,2,122,63,82,101,
+ 116,117,114,110,32,78,111,110,101,32,97,115,32,97,110,32,
+ 101,120,116,101,110,115,105,111,110,32,109,111,100,117,108,101,
+ 32,99,97,110,110,111,116,32,99,114,101,97,116,101,32,97,
+ 32,99,111,100,101,32,111,98,106,101,99,116,46,78,114,4,
+ 0,0,0,41,2,114,102,0,0,0,114,122,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,114,183,
+ 0,0,0,171,3,0,0,115,2,0,0,0,0,2,122,28,
+ 69,120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,
+ 100,101,114,46,103,101,116,95,99,111,100,101,99,2,0,0,
+ 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
+ 0,115,4,0,0,0,100,1,83,0,41,2,122,53,82,101,
+ 116,117,114,110,32,78,111,110,101,32,97,115,32,101,120,116,
+ 101,110,115,105,111,110,32,109,111,100,117,108,101,115,32,104,
+ 97,118,101,32,110,111,32,115,111,117,114,99,101,32,99,111,
+ 100,101,46,78,114,4,0,0,0,41,2,114,102,0,0,0,
+ 114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,114,198,0,0,0,175,3,0,0,115,2,0,
+ 0,0,0,2,122,30,69,120,116,101,110,115,105,111,110,70,
+ 105,108,101,76,111,97,100,101,114,46,103,101,116,95,115,111,
+ 117,114,99,101,99,2,0,0,0,0,0,0,0,2,0,0,
+ 0,1,0,0,0,67,0,0,0,115,6,0,0,0,124,0,
+ 106,0,83,0,41,1,122,58,82,101,116,117,114,110,32,116,
+ 104,101,32,112,97,116,104,32,116,111,32,116,104,101,32,115,
+ 111,117,114,99,101,32,102,105,108,101,32,97,115,32,102,111,
+ 117,110,100,32,98,121,32,116,104,101,32,102,105,110,100,101,
+ 114,46,41,1,114,37,0,0,0,41,2,114,102,0,0,0,
+ 114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,114,154,0,0,0,179,3,0,0,115,2,0,
+ 0,0,0,3,122,32,69,120,116,101,110,115,105,111,110,70,
+ 105,108,101,76,111,97,100,101,114,46,103,101,116,95,102,105,
+ 108,101,110,97,109,101,78,41,14,114,107,0,0,0,114,106,
+ 0,0,0,114,108,0,0,0,114,109,0,0,0,114,181,0,
+ 0,0,114,209,0,0,0,114,211,0,0,0,114,182,0,0,
+ 0,114,187,0,0,0,114,156,0,0,0,114,183,0,0,0,
+ 114,198,0,0,0,114,119,0,0,0,114,154,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,114,220,0,0,0,132,3,0,0,115,20,0,0,
+ 0,8,6,4,2,8,4,8,4,8,3,8,8,8,6,8,
+ 6,8,4,8,4,114,220,0,0,0,99,0,0,0,0,0,
+ 0,0,0,0,0,0,0,2,0,0,0,64,0,0,0,115,
+ 96,0,0,0,101,0,90,1,100,0,90,2,100,1,90,3,
+ 100,2,100,3,132,0,90,4,100,4,100,5,132,0,90,5,
+ 100,6,100,7,132,0,90,6,100,8,100,9,132,0,90,7,
+ 100,10,100,11,132,0,90,8,100,12,100,13,132,0,90,9,
+ 100,14,100,15,132,0,90,10,100,16,100,17,132,0,90,11,
+ 100,18,100,19,132,0,90,12,100,20,100,21,132,0,90,13,
+ 100,22,83,0,41,23,218,14,95,78,97,109,101,115,112,97,
+ 99,101,80,97,116,104,97,38,1,0,0,82,101,112,114,101,
+ 115,101,110,116,115,32,97,32,110,97,109,101,115,112,97,99,
+ 101,32,112,97,99,107,97,103,101,39,115,32,112,97,116,104,
+ 46,32,32,73,116,32,117,115,101,115,32,116,104,101,32,109,
+ 111,100,117,108,101,32,110,97,109,101,10,32,32,32,32,116,
+ 111,32,102,105,110,100,32,105,116,115,32,112,97,114,101,110,
+ 116,32,109,111,100,117,108,101,44,32,97,110,100,32,102,114,
+ 111,109,32,116,104,101,114,101,32,105,116,32,108,111,111,107,
+ 115,32,117,112,32,116,104,101,32,112,97,114,101,110,116,39,
+ 115,10,32,32,32,32,95,95,112,97,116,104,95,95,46,32,
+ 32,87,104,101,110,32,116,104,105,115,32,99,104,97,110,103,
+ 101,115,44,32,116,104,101,32,109,111,100,117,108,101,39,115,
+ 32,111,119,110,32,112,97,116,104,32,105,115,32,114,101,99,
+ 111,109,112,117,116,101,100,44,10,32,32,32,32,117,115,105,
+ 110,103,32,112,97,116,104,95,102,105,110,100,101,114,46,32,
+ 32,70,111,114,32,116,111,112,45,108,101,118,101,108,32,109,
+ 111,100,117,108,101,115,44,32,116,104,101,32,112,97,114,101,
+ 110,116,32,109,111,100,117,108,101,39,115,32,112,97,116,104,
+ 10,32,32,32,32,105,115,32,115,121,115,46,112,97,116,104,
+ 46,99,4,0,0,0,0,0,0,0,4,0,0,0,2,0,
+ 0,0,67,0,0,0,115,36,0,0,0,124,1,124,0,95,
+ 0,124,2,124,0,95,1,116,2,124,0,106,3,131,0,131,
+ 1,124,0,95,4,124,3,124,0,95,5,100,0,83,0,41,
+ 1,78,41,6,218,5,95,110,97,109,101,218,5,95,112,97,
+ 116,104,114,95,0,0,0,218,16,95,103,101,116,95,112,97,
+ 114,101,110,116,95,112,97,116,104,218,17,95,108,97,115,116,
+ 95,112,97,114,101,110,116,95,112,97,116,104,218,12,95,112,
+ 97,116,104,95,102,105,110,100,101,114,41,4,114,102,0,0,
+ 0,114,100,0,0,0,114,37,0,0,0,218,11,112,97,116,
+ 104,95,102,105,110,100,101,114,114,4,0,0,0,114,4,0,
+ 0,0,114,6,0,0,0,114,181,0,0,0,192,3,0,0,
+ 115,8,0,0,0,0,1,6,1,6,1,14,1,122,23,95,
+ 78,97,109,101,115,112,97,99,101,80,97,116,104,46,95,95,
+ 105,110,105,116,95,95,99,1,0,0,0,0,0,0,0,4,
+ 0,0,0,3,0,0,0,67,0,0,0,115,38,0,0,0,
+ 124,0,106,0,106,1,100,1,131,1,92,3,125,1,125,2,
+ 125,3,124,2,100,2,107,2,114,30,100,6,83,0,124,1,
+ 100,5,102,2,83,0,41,7,122,62,82,101,116,117,114,110,
+ 115,32,97,32,116,117,112,108,101,32,111,102,32,40,112,97,
+ 114,101,110,116,45,109,111,100,117,108,101,45,110,97,109,101,
+ 44,32,112,97,114,101,110,116,45,112,97,116,104,45,97,116,
+ 116,114,45,110,97,109,101,41,114,60,0,0,0,114,32,0,
+ 0,0,114,8,0,0,0,114,37,0,0,0,90,8,95,95,
+ 112,97,116,104,95,95,41,2,114,8,0,0,0,114,37,0,
+ 0,0,41,2,114,227,0,0,0,114,34,0,0,0,41,4,
+ 114,102,0,0,0,114,218,0,0,0,218,3,100,111,116,90,
+ 2,109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,23,95,102,105,110,100,95,112,97,114,101,110,116,
+ 95,112,97,116,104,95,110,97,109,101,115,198,3,0,0,115,
+ 8,0,0,0,0,2,18,1,8,2,4,3,122,38,95,78,
+ 97,109,101,115,112,97,99,101,80,97,116,104,46,95,102,105,
+ 110,100,95,112,97,114,101,110,116,95,112,97,116,104,95,110,
+ 97,109,101,115,99,1,0,0,0,0,0,0,0,3,0,0,
+ 0,3,0,0,0,67,0,0,0,115,28,0,0,0,124,0,
+ 106,0,131,0,92,2,125,1,125,2,116,1,116,2,106,3,
+ 124,1,25,0,124,2,131,2,83,0,41,1,78,41,4,114,
+ 234,0,0,0,114,112,0,0,0,114,8,0,0,0,218,7,
+ 109,111,100,117,108,101,115,41,3,114,102,0,0,0,90,18,
+ 112,97,114,101,110,116,95,109,111,100,117,108,101,95,110,97,
+ 109,101,90,14,112,97,116,104,95,97,116,116,114,95,110,97,
+ 109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,229,0,0,0,208,3,0,0,115,4,0,0,0,0,
+ 1,12,1,122,31,95,78,97,109,101,115,112,97,99,101,80,
+ 97,116,104,46,95,103,101,116,95,112,97,114,101,110,116,95,
+ 112,97,116,104,99,1,0,0,0,0,0,0,0,3,0,0,
+ 0,3,0,0,0,67,0,0,0,115,80,0,0,0,116,0,
+ 124,0,106,1,131,0,131,1,125,1,124,1,124,0,106,2,
+ 107,3,114,74,124,0,106,3,124,0,106,4,124,1,131,2,
+ 125,2,124,2,100,0,107,9,114,68,124,2,106,5,100,0,
+ 107,8,114,68,124,2,106,6,114,68,124,2,106,6,124,0,
+ 95,7,124,1,124,0,95,2,124,0,106,7,83,0,41,1,
+ 78,41,8,114,95,0,0,0,114,229,0,0,0,114,230,0,
+ 0,0,114,231,0,0,0,114,227,0,0,0,114,123,0,0,
+ 0,114,153,0,0,0,114,228,0,0,0,41,3,114,102,0,
+ 0,0,90,11,112,97,114,101,110,116,95,112,97,116,104,114,
+ 161,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,218,12,95,114,101,99,97,108,99,117,108,97,116,
+ 101,212,3,0,0,115,16,0,0,0,0,2,12,1,10,1,
+ 14,3,18,1,6,1,8,1,6,1,122,27,95,78,97,109,
+ 101,115,112,97,99,101,80,97,116,104,46,95,114,101,99,97,
+ 108,99,117,108,97,116,101,99,1,0,0,0,0,0,0,0,
+ 1,0,0,0,2,0,0,0,67,0,0,0,115,12,0,0,
+ 0,116,0,124,0,106,1,131,0,131,1,83,0,41,1,78,
+ 41,2,218,4,105,116,101,114,114,236,0,0,0,41,1,114,
+ 102,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,218,8,95,95,105,116,101,114,95,95,225,3,0,
+ 0,115,2,0,0,0,0,1,122,23,95,78,97,109,101,115,
+ 112,97,99,101,80,97,116,104,46,95,95,105,116,101,114,95,
+ 95,99,3,0,0,0,0,0,0,0,3,0,0,0,3,0,
+ 0,0,67,0,0,0,115,14,0,0,0,124,2,124,0,106,
+ 0,124,1,60,0,100,0,83,0,41,1,78,41,1,114,228,
+ 0,0,0,41,3,114,102,0,0,0,218,5,105,110,100,101,
+ 120,114,37,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,218,11,95,95,115,101,116,105,116,101,109,
+ 95,95,228,3,0,0,115,2,0,0,0,0,1,122,26,95,
+ 78,97,109,101,115,112,97,99,101,80,97,116,104,46,95,95,
+ 115,101,116,105,116,101,109,95,95,99,1,0,0,0,0,0,
+ 0,0,1,0,0,0,2,0,0,0,67,0,0,0,115,12,
+ 0,0,0,116,0,124,0,106,1,131,0,131,1,83,0,41,
+ 1,78,41,2,114,33,0,0,0,114,236,0,0,0,41,1,
+ 114,102,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,218,7,95,95,108,101,110,95,95,231,3,0,
+ 0,115,2,0,0,0,0,1,122,22,95,78,97,109,101,115,
+ 112,97,99,101,80,97,116,104,46,95,95,108,101,110,95,95,
+ 99,1,0,0,0,0,0,0,0,1,0,0,0,2,0,0,
+ 0,67,0,0,0,115,12,0,0,0,100,1,106,0,124,0,
+ 106,1,131,1,83,0,41,2,78,122,20,95,78,97,109,101,
+ 115,112,97,99,101,80,97,116,104,40,123,33,114,125,41,41,
+ 2,114,50,0,0,0,114,228,0,0,0,41,1,114,102,0,
+ 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,218,8,95,95,114,101,112,114,95,95,234,3,0,0,115,
+ 2,0,0,0,0,1,122,23,95,78,97,109,101,115,112,97,
+ 99,101,80,97,116,104,46,95,95,114,101,112,114,95,95,99,
+ 2,0,0,0,0,0,0,0,2,0,0,0,2,0,0,0,
+ 67,0,0,0,115,12,0,0,0,124,1,124,0,106,0,131,
+ 0,107,6,83,0,41,1,78,41,1,114,236,0,0,0,41,
+ 2,114,102,0,0,0,218,4,105,116,101,109,114,4,0,0,
+ 0,114,4,0,0,0,114,6,0,0,0,218,12,95,95,99,
+ 111,110,116,97,105,110,115,95,95,237,3,0,0,115,2,0,
+ 0,0,0,1,122,27,95,78,97,109,101,115,112,97,99,101,
+ 80,97,116,104,46,95,95,99,111,110,116,97,105,110,115,95,
+ 95,99,2,0,0,0,0,0,0,0,2,0,0,0,2,0,
+ 0,0,67,0,0,0,115,16,0,0,0,124,0,106,0,106,
+ 1,124,1,131,1,1,0,100,0,83,0,41,1,78,41,2,
+ 114,228,0,0,0,114,160,0,0,0,41,2,114,102,0,0,
+ 0,114,243,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,114,160,0,0,0,240,3,0,0,115,2,
+ 0,0,0,0,1,122,21,95,78,97,109,101,115,112,97,99,
+ 101,80,97,116,104,46,97,112,112,101,110,100,78,41,14,114,
+ 107,0,0,0,114,106,0,0,0,114,108,0,0,0,114,109,
+ 0,0,0,114,181,0,0,0,114,234,0,0,0,114,229,0,
+ 0,0,114,236,0,0,0,114,238,0,0,0,114,240,0,0,
+ 0,114,241,0,0,0,114,242,0,0,0,114,244,0,0,0,
+ 114,160,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,226,0,0,0,185,3,
+ 0,0,115,22,0,0,0,8,5,4,2,8,6,8,10,8,
+ 4,8,13,8,3,8,3,8,3,8,3,8,3,114,226,0,
+ 0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,3,
+ 0,0,0,64,0,0,0,115,80,0,0,0,101,0,90,1,
+ 100,0,90,2,100,1,100,2,132,0,90,3,101,4,100,3,
+ 100,4,132,0,131,1,90,5,100,5,100,6,132,0,90,6,
+ 100,7,100,8,132,0,90,7,100,9,100,10,132,0,90,8,
+ 100,11,100,12,132,0,90,9,100,13,100,14,132,0,90,10,
+ 100,15,100,16,132,0,90,11,100,17,83,0,41,18,218,16,
+ 95,78,97,109,101,115,112,97,99,101,76,111,97,100,101,114,
+ 99,4,0,0,0,0,0,0,0,4,0,0,0,4,0,0,
+ 0,67,0,0,0,115,18,0,0,0,116,0,124,1,124,2,
+ 124,3,131,3,124,0,95,1,100,0,83,0,41,1,78,41,
+ 2,114,226,0,0,0,114,228,0,0,0,41,4,114,102,0,
+ 0,0,114,100,0,0,0,114,37,0,0,0,114,232,0,0,
+ 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 114,181,0,0,0,246,3,0,0,115,2,0,0,0,0,1,
+ 122,25,95,78,97,109,101,115,112,97,99,101,76,111,97,100,
+ 101,114,46,95,95,105,110,105,116,95,95,99,2,0,0,0,
+ 0,0,0,0,2,0,0,0,2,0,0,0,67,0,0,0,
+ 115,12,0,0,0,100,1,106,0,124,1,106,1,131,1,83,
+ 0,41,2,122,115,82,101,116,117,114,110,32,114,101,112,114,
+ 32,102,111,114,32,116,104,101,32,109,111,100,117,108,101,46,
+ 10,10,32,32,32,32,32,32,32,32,84,104,101,32,109,101,
+ 116,104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,
+ 101,100,46,32,32,84,104,101,32,105,109,112,111,114,116,32,
+ 109,97,99,104,105,110,101,114,121,32,100,111,101,115,32,116,
+ 104,101,32,106,111,98,32,105,116,115,101,108,102,46,10,10,
+ 32,32,32,32,32,32,32,32,122,25,60,109,111,100,117,108,
+ 101,32,123,33,114,125,32,40,110,97,109,101,115,112,97,99,
+ 101,41,62,41,2,114,50,0,0,0,114,107,0,0,0,41,
+ 2,114,167,0,0,0,114,186,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,218,11,109,111,100,117,
+ 108,101,95,114,101,112,114,249,3,0,0,115,2,0,0,0,
+ 0,7,122,28,95,78,97,109,101,115,112,97,99,101,76,111,
+ 97,100,101,114,46,109,111,100,117,108,101,95,114,101,112,114,
+ 99,2,0,0,0,0,0,0,0,2,0,0,0,1,0,0,
+ 0,67,0,0,0,115,4,0,0,0,100,1,83,0,41,2,
+ 78,84,114,4,0,0,0,41,2,114,102,0,0,0,114,122,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,114,156,0,0,0,2,4,0,0,115,2,0,0,0,
+ 0,1,122,27,95,78,97,109,101,115,112,97,99,101,76,111,
+ 97,100,101,114,46,105,115,95,112,97,99,107,97,103,101,99,
+ 2,0,0,0,0,0,0,0,2,0,0,0,1,0,0,0,
+ 67,0,0,0,115,4,0,0,0,100,1,83,0,41,2,78,
+ 114,32,0,0,0,114,4,0,0,0,41,2,114,102,0,0,
+ 0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,114,198,0,0,0,5,4,0,0,115,2,
+ 0,0,0,0,1,122,27,95,78,97,109,101,115,112,97,99,
+ 101,76,111,97,100,101,114,46,103,101,116,95,115,111,117,114,
+ 99,101,99,2,0,0,0,0,0,0,0,2,0,0,0,6,
+ 0,0,0,67,0,0,0,115,16,0,0,0,116,0,100,1,
+ 100,2,100,3,100,4,100,5,141,4,83,0,41,6,78,114,
+ 32,0,0,0,122,8,60,115,116,114,105,110,103,62,114,185,
+ 0,0,0,84,41,1,114,200,0,0,0,41,1,114,201,0,
+ 0,0,41,2,114,102,0,0,0,114,122,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,114,183,0,
+ 0,0,8,4,0,0,115,2,0,0,0,0,1,122,25,95,
+ 78,97,109,101,115,112,97,99,101,76,111,97,100,101,114,46,
+ 103,101,116,95,99,111,100,101,99,2,0,0,0,0,0,0,
+ 0,2,0,0,0,1,0,0,0,67,0,0,0,115,4,0,
+ 0,0,100,1,83,0,41,2,122,42,85,115,101,32,100,101,
+ 102,97,117,108,116,32,115,101,109,97,110,116,105,99,115,32,
+ 102,111,114,32,109,111,100,117,108,101,32,99,114,101,97,116,
+ 105,111,110,46,78,114,4,0,0,0,41,2,114,102,0,0,
+ 0,114,161,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,114,182,0,0,0,11,4,0,0,115,0,
+ 0,0,0,122,30,95,78,97,109,101,115,112,97,99,101,76,
+ 111,97,100,101,114,46,99,114,101,97,116,101,95,109,111,100,
+ 117,108,101,99,2,0,0,0,0,0,0,0,2,0,0,0,
+ 1,0,0,0,67,0,0,0,115,4,0,0,0,100,0,83,
+ 0,41,1,78,114,4,0,0,0,41,2,114,102,0,0,0,
+ 114,186,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,114,187,0,0,0,14,4,0,0,115,2,0,
+ 0,0,0,1,122,28,95,78,97,109,101,115,112,97,99,101,
+ 76,111,97,100,101,114,46,101,120,101,99,95,109,111,100,117,
+ 108,101,99,2,0,0,0,0,0,0,0,2,0,0,0,3,
+ 0,0,0,67,0,0,0,115,26,0,0,0,116,0,106,1,
+ 100,1,124,0,106,2,131,2,1,0,116,0,106,3,124,0,
+ 124,1,131,2,83,0,41,2,122,98,76,111,97,100,32,97,
+ 32,110,97,109,101,115,112,97,99,101,32,109,111,100,117,108,
+ 101,46,10,10,32,32,32,32,32,32,32,32,84,104,105,115,
+ 32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,101,
+ 99,97,116,101,100,46,32,32,85,115,101,32,101,120,101,99,
+ 95,109,111,100,117,108,101,40,41,32,105,110,115,116,101,97,
+ 100,46,10,10,32,32,32,32,32,32,32,32,122,38,110,97,
+ 109,101,115,112,97,99,101,32,109,111,100,117,108,101,32,108,
+ 111,97,100,101,100,32,119,105,116,104,32,112,97,116,104,32,
+ 123,33,114,125,41,4,114,117,0,0,0,114,132,0,0,0,
+ 114,228,0,0,0,114,188,0,0,0,41,2,114,102,0,0,
+ 0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,114,189,0,0,0,17,4,0,0,115,6,
+ 0,0,0,0,7,6,1,8,1,122,28,95,78,97,109,101,
+ 115,112,97,99,101,76,111,97,100,101,114,46,108,111,97,100,
+ 95,109,111,100,117,108,101,78,41,12,114,107,0,0,0,114,
+ 106,0,0,0,114,108,0,0,0,114,181,0,0,0,114,179,
+ 0,0,0,114,246,0,0,0,114,156,0,0,0,114,198,0,
+ 0,0,114,183,0,0,0,114,182,0,0,0,114,187,0,0,
+ 0,114,189,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,245,0,0,0,245,
+ 3,0,0,115,16,0,0,0,8,1,8,3,12,9,8,3,
+ 8,3,8,3,8,3,8,3,114,245,0,0,0,99,0,0,
+ 0,0,0,0,0,0,0,0,0,0,4,0,0,0,64,0,
+ 0,0,115,106,0,0,0,101,0,90,1,100,0,90,2,100,
+ 1,90,3,101,4,100,2,100,3,132,0,131,1,90,5,101,
+ 4,100,4,100,5,132,0,131,1,90,6,101,4,100,6,100,
+ 7,132,0,131,1,90,7,101,4,100,8,100,9,132,0,131,
+ 1,90,8,101,4,100,17,100,11,100,12,132,1,131,1,90,
+ 9,101,4,100,18,100,13,100,14,132,1,131,1,90,10,101,
+ 4,100,19,100,15,100,16,132,1,131,1,90,11,100,10,83,
+ 0,41,20,218,10,80,97,116,104,70,105,110,100,101,114,122,
+ 62,77,101,116,97,32,112,97,116,104,32,102,105,110,100,101,
+ 114,32,102,111,114,32,115,121,115,46,112,97,116,104,32,97,
+ 110,100,32,112,97,99,107,97,103,101,32,95,95,112,97,116,
+ 104,95,95,32,97,116,116,114,105,98,117,116,101,115,46,99,
+ 1,0,0,0,0,0,0,0,2,0,0,0,4,0,0,0,
+ 67,0,0,0,115,42,0,0,0,120,36,116,0,106,1,106,
+ 2,131,0,68,0,93,22,125,1,116,3,124,1,100,1,131,
+ 2,114,12,124,1,106,4,131,0,1,0,113,12,87,0,100,
+ 2,83,0,41,3,122,125,67,97,108,108,32,116,104,101,32,
+ 105,110,118,97,108,105,100,97,116,101,95,99,97,99,104,101,
+ 115,40,41,32,109,101,116,104,111,100,32,111,110,32,97,108,
+ 108,32,112,97,116,104,32,101,110,116,114,121,32,102,105,110,
+ 100,101,114,115,10,32,32,32,32,32,32,32,32,115,116,111,
+ 114,101,100,32,105,110,32,115,121,115,46,112,97,116,104,95,
+ 105,109,112,111,114,116,101,114,95,99,97,99,104,101,115,32,
+ 40,119,104,101,114,101,32,105,109,112,108,101,109,101,110,116,
+ 101,100,41,46,218,17,105,110,118,97,108,105,100,97,116,101,
+ 95,99,97,99,104,101,115,78,41,5,114,8,0,0,0,218,
+ 19,112,97,116,104,95,105,109,112,111,114,116,101,114,95,99,
+ 97,99,104,101,218,6,118,97,108,117,101,115,114,110,0,0,
+ 0,114,248,0,0,0,41,2,114,167,0,0,0,218,6,102,
+ 105,110,100,101,114,114,4,0,0,0,114,4,0,0,0,114,
+ 6,0,0,0,114,248,0,0,0,35,4,0,0,115,6,0,
+ 0,0,0,4,16,1,10,1,122,28,80,97,116,104,70,105,
+ 110,100,101,114,46,105,110,118,97,108,105,100,97,116,101,95,
+ 99,97,99,104,101,115,99,2,0,0,0,0,0,0,0,3,
+ 0,0,0,12,0,0,0,67,0,0,0,115,86,0,0,0,
+ 116,0,106,1,100,1,107,9,114,30,116,0,106,1,12,0,
+ 114,30,116,2,106,3,100,2,116,4,131,2,1,0,120,50,
+ 116,0,106,1,68,0,93,36,125,2,121,8,124,2,124,1,
+ 131,1,83,0,4,0,116,5,107,10,114,72,1,0,1,0,
+ 1,0,119,38,89,0,113,38,88,0,113,38,87,0,100,1,
+ 83,0,100,1,83,0,41,3,122,46,83,101,97,114,99,104,
+ 32,115,121,115,46,112,97,116,104,95,104,111,111,107,115,32,
+ 102,111,114,32,97,32,102,105,110,100,101,114,32,102,111,114,
+ 32,39,112,97,116,104,39,46,78,122,23,115,121,115,46,112,
+ 97,116,104,95,104,111,111,107,115,32,105,115,32,101,109,112,
+ 116,121,41,6,114,8,0,0,0,218,10,112,97,116,104,95,
+ 104,111,111,107,115,114,62,0,0,0,114,63,0,0,0,114,
+ 121,0,0,0,114,101,0,0,0,41,3,114,167,0,0,0,
+ 114,37,0,0,0,90,4,104,111,111,107,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,218,11,95,112,97,116,
+ 104,95,104,111,111,107,115,43,4,0,0,115,16,0,0,0,
+ 0,3,18,1,12,1,12,1,2,1,8,1,14,1,12,2,
+ 122,22,80,97,116,104,70,105,110,100,101,114,46,95,112,97,
+ 116,104,95,104,111,111,107,115,99,2,0,0,0,0,0,0,
+ 0,3,0,0,0,19,0,0,0,67,0,0,0,115,102,0,
+ 0,0,124,1,100,1,107,2,114,42,121,12,116,0,106,1,
+ 131,0,125,1,87,0,110,20,4,0,116,2,107,10,114,40,
+ 1,0,1,0,1,0,100,2,83,0,88,0,121,14,116,3,
+ 106,4,124,1,25,0,125,2,87,0,110,40,4,0,116,5,
+ 107,10,114,96,1,0,1,0,1,0,124,0,106,6,124,1,
+ 131,1,125,2,124,2,116,3,106,4,124,1,60,0,89,0,
+ 110,2,88,0,124,2,83,0,41,3,122,210,71,101,116,32,
+ 116,104,101,32,102,105,110,100,101,114,32,102,111,114,32,116,
+ 104,101,32,112,97,116,104,32,101,110,116,114,121,32,102,114,
+ 111,109,32,115,121,115,46,112,97,116,104,95,105,109,112,111,
+ 114,116,101,114,95,99,97,99,104,101,46,10,10,32,32,32,
+ 32,32,32,32,32,73,102,32,116,104,101,32,112,97,116,104,
+ 32,101,110,116,114,121,32,105,115,32,110,111,116,32,105,110,
+ 32,116,104,101,32,99,97,99,104,101,44,32,102,105,110,100,
+ 32,116,104,101,32,97,112,112,114,111,112,114,105,97,116,101,
+ 32,102,105,110,100,101,114,10,32,32,32,32,32,32,32,32,
+ 97,110,100,32,99,97,99,104,101,32,105,116,46,32,73,102,
+ 32,110,111,32,102,105,110,100,101,114,32,105,115,32,97,118,
+ 97,105,108,97,98,108,101,44,32,115,116,111,114,101,32,78,
+ 111,110,101,46,10,10,32,32,32,32,32,32,32,32,114,32,
+ 0,0,0,78,41,7,114,3,0,0,0,114,47,0,0,0,
+ 218,17,70,105,108,101,78,111,116,70,111,117,110,100,69,114,
+ 114,111,114,114,8,0,0,0,114,249,0,0,0,114,134,0,
+ 0,0,114,253,0,0,0,41,3,114,167,0,0,0,114,37,
+ 0,0,0,114,251,0,0,0,114,4,0,0,0,114,4,0,
+ 0,0,114,6,0,0,0,218,20,95,112,97,116,104,95,105,
+ 109,112,111,114,116,101,114,95,99,97,99,104,101,56,4,0,
+ 0,115,22,0,0,0,0,8,8,1,2,1,12,1,14,3,
+ 6,1,2,1,14,1,14,1,10,1,16,1,122,31,80,97,
+ 116,104,70,105,110,100,101,114,46,95,112,97,116,104,95,105,
+ 109,112,111,114,116,101,114,95,99,97,99,104,101,99,3,0,
+ 0,0,0,0,0,0,6,0,0,0,3,0,0,0,67,0,
+ 0,0,115,82,0,0,0,116,0,124,2,100,1,131,2,114,
+ 26,124,2,106,1,124,1,131,1,92,2,125,3,125,4,110,
+ 14,124,2,106,2,124,1,131,1,125,3,103,0,125,4,124,
+ 3,100,0,107,9,114,60,116,3,106,4,124,1,124,3,131,
+ 2,83,0,116,3,106,5,124,1,100,0,131,2,125,5,124,
+ 4,124,5,95,6,124,5,83,0,41,2,78,114,120,0,0,
+ 0,41,7,114,110,0,0,0,114,120,0,0,0,114,178,0,
+ 0,0,114,117,0,0,0,114,175,0,0,0,114,157,0,0,
+ 0,114,153,0,0,0,41,6,114,167,0,0,0,114,122,0,
+ 0,0,114,251,0,0,0,114,123,0,0,0,114,124,0,0,
+ 0,114,161,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,6,0,0,0,218,16,95,108,101,103,97,99,121,95,103,
+ 101,116,95,115,112,101,99,78,4,0,0,115,18,0,0,0,
+ 0,4,10,1,16,2,10,1,4,1,8,1,12,1,12,1,
+ 6,1,122,27,80,97,116,104,70,105,110,100,101,114,46,95,
+ 108,101,103,97,99,121,95,103,101,116,95,115,112,101,99,78,
+ 99,4,0,0,0,0,0,0,0,9,0,0,0,5,0,0,
+ 0,67,0,0,0,115,170,0,0,0,103,0,125,4,120,160,
+ 124,2,68,0,93,130,125,5,116,0,124,5,116,1,116,2,
+ 102,2,131,2,115,30,113,10,124,0,106,3,124,5,131,1,
+ 125,6,124,6,100,1,107,9,114,10,116,4,124,6,100,2,
+ 131,2,114,72,124,6,106,5,124,1,124,3,131,2,125,7,
+ 110,12,124,0,106,6,124,1,124,6,131,2,125,7,124,7,
+ 100,1,107,8,114,94,113,10,124,7,106,7,100,1,107,9,
+ 114,108,124,7,83,0,124,7,106,8,125,8,124,8,100,1,
+ 107,8,114,130,116,9,100,3,131,1,130,1,124,4,106,10,
+ 124,8,131,1,1,0,113,10,87,0,116,11,106,12,124,1,
+ 100,1,131,2,125,7,124,4,124,7,95,8,124,7,83,0,
+ 100,1,83,0,41,4,122,63,70,105,110,100,32,116,104,101,
+ 32,108,111,97,100,101,114,32,111,114,32,110,97,109,101,115,
+ 112,97,99,101,95,112,97,116,104,32,102,111,114,32,116,104,
+ 105,115,32,109,111,100,117,108,101,47,112,97,99,107,97,103,
+ 101,32,110,97,109,101,46,78,114,177,0,0,0,122,19,115,
+ 112,101,99,32,109,105,115,115,105,110,103,32,108,111,97,100,
+ 101,114,41,13,114,140,0,0,0,114,71,0,0,0,218,5,
+ 98,121,116,101,115,114,255,0,0,0,114,110,0,0,0,114,
+ 177,0,0,0,114,0,1,0,0,114,123,0,0,0,114,153,
+ 0,0,0,114,101,0,0,0,114,146,0,0,0,114,117,0,
+ 0,0,114,157,0,0,0,41,9,114,167,0,0,0,114,122,
+ 0,0,0,114,37,0,0,0,114,176,0,0,0,218,14,110,
+ 97,109,101,115,112,97,99,101,95,112,97,116,104,90,5,101,
+ 110,116,114,121,114,251,0,0,0,114,161,0,0,0,114,124,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,9,95,103,101,116,95,115,112,101,99,93,4,0,
+ 0,115,40,0,0,0,0,5,4,1,10,1,14,1,2,1,
+ 10,1,8,1,10,1,14,2,12,1,8,1,2,1,10,1,
+ 4,1,6,1,8,1,8,5,14,2,12,1,6,1,122,20,
+ 80,97,116,104,70,105,110,100,101,114,46,95,103,101,116,95,
+ 115,112,101,99,99,4,0,0,0,0,0,0,0,6,0,0,
+ 0,4,0,0,0,67,0,0,0,115,100,0,0,0,124,2,
+ 100,1,107,8,114,14,116,0,106,1,125,2,124,0,106,2,
+ 124,1,124,2,124,3,131,3,125,4,124,4,100,1,107,8,
+ 114,40,100,1,83,0,124,4,106,3,100,1,107,8,114,92,
+ 124,4,106,4,125,5,124,5,114,86,100,2,124,4,95,5,
+ 116,6,124,1,124,5,124,0,106,2,131,3,124,4,95,4,
+ 124,4,83,0,100,1,83,0,110,4,124,4,83,0,100,1,
+ 83,0,41,3,122,141,84,114,121,32,116,111,32,102,105,110,
+ 100,32,97,32,115,112,101,99,32,102,111,114,32,39,102,117,
+ 108,108,110,97,109,101,39,32,111,110,32,115,121,115,46,112,
+ 97,116,104,32,111,114,32,39,112,97,116,104,39,46,10,10,
+ 32,32,32,32,32,32,32,32,84,104,101,32,115,101,97,114,
+ 99,104,32,105,115,32,98,97,115,101,100,32,111,110,32,115,
+ 121,115,46,112,97,116,104,95,104,111,111,107,115,32,97,110,
+ 100,32,115,121,115,46,112,97,116,104,95,105,109,112,111,114,
+ 116,101,114,95,99,97,99,104,101,46,10,32,32,32,32,32,
+ 32,32,32,78,90,9,110,97,109,101,115,112,97,99,101,41,
+ 7,114,8,0,0,0,114,37,0,0,0,114,3,1,0,0,
+ 114,123,0,0,0,114,153,0,0,0,114,155,0,0,0,114,
+ 226,0,0,0,41,6,114,167,0,0,0,114,122,0,0,0,
+ 114,37,0,0,0,114,176,0,0,0,114,161,0,0,0,114,
+ 2,1,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,114,177,0,0,0,125,4,0,0,115,26,0,0,
+ 0,0,6,8,1,6,1,14,1,8,1,4,1,10,1,6,
+ 1,4,3,6,1,16,1,4,2,6,2,122,20,80,97,116,
+ 104,70,105,110,100,101,114,46,102,105,110,100,95,115,112,101,
+ 99,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
+ 0,0,67,0,0,0,115,30,0,0,0,124,0,106,0,124,
+ 1,124,2,131,2,125,3,124,3,100,1,107,8,114,24,100,
+ 1,83,0,124,3,106,1,83,0,41,2,122,170,102,105,110,
+ 100,32,116,104,101,32,109,111,100,117,108,101,32,111,110,32,
+ 115,121,115,46,112,97,116,104,32,111,114,32,39,112,97,116,
+ 104,39,32,98,97,115,101,100,32,111,110,32,115,121,115,46,
+ 112,97,116,104,95,104,111,111,107,115,32,97,110,100,10,32,
+ 32,32,32,32,32,32,32,115,121,115,46,112,97,116,104,95,
+ 105,109,112,111,114,116,101,114,95,99,97,99,104,101,46,10,
+ 10,32,32,32,32,32,32,32,32,84,104,105,115,32,109,101,
+ 116,104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,
+ 101,100,46,32,32,85,115,101,32,102,105,110,100,95,115,112,
+ 101,99,40,41,32,105,110,115,116,101,97,100,46,10,10,32,
+ 32,32,32,32,32,32,32,78,41,2,114,177,0,0,0,114,
+ 123,0,0,0,41,4,114,167,0,0,0,114,122,0,0,0,
+ 114,37,0,0,0,114,161,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,114,178,0,0,0,149,4,
+ 0,0,115,8,0,0,0,0,8,12,1,8,1,4,1,122,
+ 22,80,97,116,104,70,105,110,100,101,114,46,102,105,110,100,
+ 95,109,111,100,117,108,101,41,1,78,41,2,78,78,41,1,
+ 78,41,12,114,107,0,0,0,114,106,0,0,0,114,108,0,
+ 0,0,114,109,0,0,0,114,179,0,0,0,114,248,0,0,
+ 0,114,253,0,0,0,114,255,0,0,0,114,0,1,0,0,
+ 114,3,1,0,0,114,177,0,0,0,114,178,0,0,0,114,
+ 4,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,114,247,0,0,0,31,4,0,0,115,22,0,0,
+ 0,8,2,4,2,12,8,12,13,12,22,12,15,2,1,12,
+ 31,2,1,12,23,2,1,114,247,0,0,0,99,0,0,0,
+ 0,0,0,0,0,0,0,0,0,3,0,0,0,64,0,0,
+ 0,115,90,0,0,0,101,0,90,1,100,0,90,2,100,1,
+ 90,3,100,2,100,3,132,0,90,4,100,4,100,5,132,0,
+ 90,5,101,6,90,7,100,6,100,7,132,0,90,8,100,8,
+ 100,9,132,0,90,9,100,19,100,11,100,12,132,1,90,10,
+ 100,13,100,14,132,0,90,11,101,12,100,15,100,16,132,0,
+ 131,1,90,13,100,17,100,18,132,0,90,14,100,10,83,0,
+ 41,20,218,10,70,105,108,101,70,105,110,100,101,114,122,172,
+ 70,105,108,101,45,98,97,115,101,100,32,102,105,110,100,101,
+ 114,46,10,10,32,32,32,32,73,110,116,101,114,97,99,116,
+ 105,111,110,115,32,119,105,116,104,32,116,104,101,32,102,105,
+ 108,101,32,115,121,115,116,101,109,32,97,114,101,32,99,97,
+ 99,104,101,100,32,102,111,114,32,112,101,114,102,111,114,109,
+ 97,110,99,101,44,32,98,101,105,110,103,10,32,32,32,32,
+ 114,101,102,114,101,115,104,101,100,32,119,104,101,110,32,116,
+ 104,101,32,100,105,114,101,99,116,111,114,121,32,116,104,101,
+ 32,102,105,110,100,101,114,32,105,115,32,104,97,110,100,108,
+ 105,110,103,32,104,97,115,32,98,101,101,110,32,109,111,100,
+ 105,102,105,101,100,46,10,10,32,32,32,32,99,2,0,0,
+ 0,0,0,0,0,5,0,0,0,5,0,0,0,7,0,0,
+ 0,115,88,0,0,0,103,0,125,3,120,40,124,2,68,0,
+ 93,32,92,2,137,0,125,4,124,3,106,0,135,0,102,1,
+ 100,1,100,2,132,8,124,4,68,0,131,1,131,1,1,0,
+ 113,10,87,0,124,3,124,0,95,1,124,1,112,58,100,3,
+ 124,0,95,2,100,6,124,0,95,3,116,4,131,0,124,0,
+ 95,5,116,4,131,0,124,0,95,6,100,5,83,0,41,7,
+ 122,154,73,110,105,116,105,97,108,105,122,101,32,119,105,116,
+ 104,32,116,104,101,32,112,97,116,104,32,116,111,32,115,101,
+ 97,114,99,104,32,111,110,32,97,110,100,32,97,32,118,97,
+ 114,105,97,98,108,101,32,110,117,109,98,101,114,32,111,102,
+ 10,32,32,32,32,32,32,32,32,50,45,116,117,112,108,101,
+ 115,32,99,111,110,116,97,105,110,105,110,103,32,116,104,101,
+ 32,108,111,97,100,101,114,32,97,110,100,32,116,104,101,32,
+ 102,105,108,101,32,115,117,102,102,105,120,101,115,32,116,104,
+ 101,32,108,111,97,100,101,114,10,32,32,32,32,32,32,32,
+ 32,114,101,99,111,103,110,105,122,101,115,46,99,1,0,0,
+ 0,0,0,0,0,2,0,0,0,3,0,0,0,51,0,0,
+ 0,115,22,0,0,0,124,0,93,14,125,1,124,1,136,0,
+ 102,2,86,0,1,0,113,2,100,0,83,0,41,1,78,114,
+ 4,0,0,0,41,2,114,24,0,0,0,114,221,0,0,0,
+ 41,1,114,123,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,223,0,0,0,178,4,0,0,115,2,0,0,0,4,
+ 0,122,38,70,105,108,101,70,105,110,100,101,114,46,95,95,
+ 105,110,105,116,95,95,46,60,108,111,99,97,108,115,62,46,
+ 60,103,101,110,101,120,112,114,62,114,60,0,0,0,114,31,
+ 0,0,0,78,114,89,0,0,0,41,7,114,146,0,0,0,
+ 218,8,95,108,111,97,100,101,114,115,114,37,0,0,0,218,
+ 11,95,112,97,116,104,95,109,116,105,109,101,218,3,115,101,
+ 116,218,11,95,112,97,116,104,95,99,97,99,104,101,218,19,
+ 95,114,101,108,97,120,101,100,95,112,97,116,104,95,99,97,
+ 99,104,101,41,5,114,102,0,0,0,114,37,0,0,0,218,
+ 14,108,111,97,100,101,114,95,100,101,116,97,105,108,115,90,
+ 7,108,111,97,100,101,114,115,114,163,0,0,0,114,4,0,
+ 0,0,41,1,114,123,0,0,0,114,6,0,0,0,114,181,
+ 0,0,0,172,4,0,0,115,16,0,0,0,0,4,4,1,
+ 14,1,28,1,6,2,10,1,6,1,8,1,122,19,70,105,
+ 108,101,70,105,110,100,101,114,46,95,95,105,110,105,116,95,
+ 95,99,1,0,0,0,0,0,0,0,1,0,0,0,2,0,
+ 0,0,67,0,0,0,115,10,0,0,0,100,3,124,0,95,
+ 0,100,2,83,0,41,4,122,31,73,110,118,97,108,105,100,
+ 97,116,101,32,116,104,101,32,100,105,114,101,99,116,111,114,
+ 121,32,109,116,105,109,101,46,114,31,0,0,0,78,114,89,
+ 0,0,0,41,1,114,6,1,0,0,41,1,114,102,0,0,
+ 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
+ 114,248,0,0,0,186,4,0,0,115,2,0,0,0,0,2,
+ 122,28,70,105,108,101,70,105,110,100,101,114,46,105,110,118,
+ 97,108,105,100,97,116,101,95,99,97,99,104,101,115,99,2,
+ 0,0,0,0,0,0,0,3,0,0,0,2,0,0,0,67,
+ 0,0,0,115,42,0,0,0,124,0,106,0,124,1,131,1,
+ 125,2,124,2,100,1,107,8,114,26,100,1,103,0,102,2,
+ 83,0,124,2,106,1,124,2,106,2,112,38,103,0,102,2,
+ 83,0,41,2,122,197,84,114,121,32,116,111,32,102,105,110,
+ 100,32,97,32,108,111,97,100,101,114,32,102,111,114,32,116,
+ 104,101,32,115,112,101,99,105,102,105,101,100,32,109,111,100,
+ 117,108,101,44,32,111,114,32,116,104,101,32,110,97,109,101,
+ 115,112,97,99,101,10,32,32,32,32,32,32,32,32,112,97,
+ 99,107,97,103,101,32,112,111,114,116,105,111,110,115,46,32,
+ 82,101,116,117,114,110,115,32,40,108,111,97,100,101,114,44,
+ 32,108,105,115,116,45,111,102,45,112,111,114,116,105,111,110,
+ 115,41,46,10,10,32,32,32,32,32,32,32,32,84,104,105,
+ 115,32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,
+ 101,99,97,116,101,100,46,32,32,85,115,101,32,102,105,110,
+ 100,95,115,112,101,99,40,41,32,105,110,115,116,101,97,100,
+ 46,10,10,32,32,32,32,32,32,32,32,78,41,3,114,177,
+ 0,0,0,114,123,0,0,0,114,153,0,0,0,41,3,114,
+ 102,0,0,0,114,122,0,0,0,114,161,0,0,0,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,114,120,0,
+ 0,0,192,4,0,0,115,8,0,0,0,0,7,10,1,8,
+ 1,8,1,122,22,70,105,108,101,70,105,110,100,101,114,46,
+ 102,105,110,100,95,108,111,97,100,101,114,99,6,0,0,0,
+ 0,0,0,0,7,0,0,0,6,0,0,0,67,0,0,0,
+ 115,26,0,0,0,124,1,124,2,124,3,131,2,125,6,116,
+ 0,124,2,124,3,124,6,124,4,100,1,141,4,83,0,41,
+ 2,78,41,2,114,123,0,0,0,114,153,0,0,0,41,1,
+ 114,164,0,0,0,41,7,114,102,0,0,0,114,162,0,0,
+ 0,114,122,0,0,0,114,37,0,0,0,90,4,115,109,115,
+ 108,114,176,0,0,0,114,123,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,3,1,0,0,204,
+ 4,0,0,115,6,0,0,0,0,1,10,1,8,1,122,20,
+ 70,105,108,101,70,105,110,100,101,114,46,95,103,101,116,95,
+ 115,112,101,99,78,99,3,0,0,0,0,0,0,0,14,0,
+ 0,0,15,0,0,0,67,0,0,0,115,98,1,0,0,100,
+ 1,125,3,124,1,106,0,100,2,131,1,100,3,25,0,125,
+ 4,121,24,116,1,124,0,106,2,112,34,116,3,106,4,131,
+ 0,131,1,106,5,125,5,87,0,110,24,4,0,116,6,107,
+ 10,114,66,1,0,1,0,1,0,100,10,125,5,89,0,110,
+ 2,88,0,124,5,124,0,106,7,107,3,114,92,124,0,106,
+ 8,131,0,1,0,124,5,124,0,95,7,116,9,131,0,114,
+ 114,124,0,106,10,125,6,124,4,106,11,131,0,125,7,110,
+ 10,124,0,106,12,125,6,124,4,125,7,124,7,124,6,107,
+ 6,114,218,116,13,124,0,106,2,124,4,131,2,125,8,120,
+ 72,124,0,106,14,68,0,93,54,92,2,125,9,125,10,100,
+ 5,124,9,23,0,125,11,116,13,124,8,124,11,131,2,125,
+ 12,116,15,124,12,131,1,114,152,124,0,106,16,124,10,124,
+ 1,124,12,124,8,103,1,124,2,131,5,83,0,113,152,87,
+ 0,116,17,124,8,131,1,125,3,120,88,124,0,106,14,68,
+ 0,93,78,92,2,125,9,125,10,116,13,124,0,106,2,124,
+ 4,124,9,23,0,131,2,125,12,116,18,106,19,100,6,124,
+ 12,100,3,100,7,141,3,1,0,124,7,124,9,23,0,124,
+ 6,107,6,114,226,116,15,124,12,131,1,114,226,124,0,106,
+ 16,124,10,124,1,124,12,100,8,124,2,131,5,83,0,113,
+ 226,87,0,124,3,144,1,114,94,116,18,106,19,100,9,124,
+ 8,131,2,1,0,116,18,106,20,124,1,100,8,131,2,125,
+ 13,124,8,103,1,124,13,95,21,124,13,83,0,100,8,83,
+ 0,41,11,122,111,84,114,121,32,116,111,32,102,105,110,100,
+ 32,97,32,115,112,101,99,32,102,111,114,32,116,104,101,32,
+ 115,112,101,99,105,102,105,101,100,32,109,111,100,117,108,101,
+ 46,10,10,32,32,32,32,32,32,32,32,82,101,116,117,114,
+ 110,115,32,116,104,101,32,109,97,116,99,104,105,110,103,32,
+ 115,112,101,99,44,32,111,114,32,78,111,110,101,32,105,102,
+ 32,110,111,116,32,102,111,117,110,100,46,10,32,32,32,32,
+ 32,32,32,32,70,114,60,0,0,0,114,58,0,0,0,114,
+ 31,0,0,0,114,181,0,0,0,122,9,116,114,121,105,110,
+ 103,32,123,125,41,1,90,9,118,101,114,98,111,115,105,116,
+ 121,78,122,25,112,111,115,115,105,98,108,101,32,110,97,109,
+ 101,115,112,97,99,101,32,102,111,114,32,123,125,114,89,0,
+ 0,0,41,22,114,34,0,0,0,114,41,0,0,0,114,37,
+ 0,0,0,114,3,0,0,0,114,47,0,0,0,114,215,0,
+ 0,0,114,42,0,0,0,114,6,1,0,0,218,11,95,102,
+ 105,108,108,95,99,97,99,104,101,114,7,0,0,0,114,9,
+ 1,0,0,114,90,0,0,0,114,8,1,0,0,114,30,0,
+ 0,0,114,5,1,0,0,114,46,0,0,0,114,3,1,0,
+ 0,114,48,0,0,0,114,117,0,0,0,114,132,0,0,0,
+ 114,157,0,0,0,114,153,0,0,0,41,14,114,102,0,0,
+ 0,114,122,0,0,0,114,176,0,0,0,90,12,105,115,95,
+ 110,97,109,101,115,112,97,99,101,90,11,116,97,105,108,95,
+ 109,111,100,117,108,101,114,129,0,0,0,90,5,99,97,99,
+ 104,101,90,12,99,97,99,104,101,95,109,111,100,117,108,101,
+ 90,9,98,97,115,101,95,112,97,116,104,114,221,0,0,0,
+ 114,162,0,0,0,90,13,105,110,105,116,95,102,105,108,101,
+ 110,97,109,101,90,9,102,117,108,108,95,112,97,116,104,114,
+ 161,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,114,177,0,0,0,209,4,0,0,115,70,0,0,
+ 0,0,5,4,1,14,1,2,1,24,1,14,1,10,1,10,
+ 1,8,1,6,2,6,1,6,1,10,2,6,1,4,2,8,
+ 1,12,1,16,1,8,1,10,1,8,1,24,4,8,2,16,
+ 1,16,1,16,1,12,1,8,1,10,1,12,1,6,1,12,
+ 1,12,1,8,1,4,1,122,20,70,105,108,101,70,105,110,
+ 100,101,114,46,102,105,110,100,95,115,112,101,99,99,1,0,
+ 0,0,0,0,0,0,9,0,0,0,13,0,0,0,67,0,
+ 0,0,115,194,0,0,0,124,0,106,0,125,1,121,22,116,
+ 1,106,2,124,1,112,22,116,1,106,3,131,0,131,1,125,
+ 2,87,0,110,30,4,0,116,4,116,5,116,6,102,3,107,
+ 10,114,58,1,0,1,0,1,0,103,0,125,2,89,0,110,
+ 2,88,0,116,7,106,8,106,9,100,1,131,1,115,84,116,
+ 10,124,2,131,1,124,0,95,11,110,78,116,10,131,0,125,
+ 3,120,64,124,2,68,0,93,56,125,4,124,4,106,12,100,
+ 2,131,1,92,3,125,5,125,6,125,7,124,6,114,138,100,
+ 3,106,13,124,5,124,7,106,14,131,0,131,2,125,8,110,
+ 4,124,5,125,8,124,3,106,15,124,8,131,1,1,0,113,
+ 96,87,0,124,3,124,0,95,11,116,7,106,8,106,9,116,
+ 16,131,1,114,190,100,4,100,5,132,0,124,2,68,0,131,
+ 1,124,0,95,17,100,6,83,0,41,7,122,68,70,105,108,
+ 108,32,116,104,101,32,99,97,99,104,101,32,111,102,32,112,
+ 111,116,101,110,116,105,97,108,32,109,111,100,117,108,101,115,
+ 32,97,110,100,32,112,97,99,107,97,103,101,115,32,102,111,
+ 114,32,116,104,105,115,32,100,105,114,101,99,116,111,114,121,
+ 46,114,0,0,0,0,114,60,0,0,0,122,5,123,125,46,
+ 123,125,99,1,0,0,0,0,0,0,0,2,0,0,0,3,
+ 0,0,0,83,0,0,0,115,20,0,0,0,104,0,124,0,
+ 93,12,125,1,124,1,106,0,131,0,146,2,113,4,83,0,
+ 114,4,0,0,0,41,1,114,90,0,0,0,41,2,114,24,
+ 0,0,0,90,2,102,110,114,4,0,0,0,114,4,0,0,
+ 0,114,6,0,0,0,250,9,60,115,101,116,99,111,109,112,
+ 62,30,5,0,0,115,2,0,0,0,6,0,122,41,70,105,
+ 108,101,70,105,110,100,101,114,46,95,102,105,108,108,95,99,
+ 97,99,104,101,46,60,108,111,99,97,108,115,62,46,60,115,
+ 101,116,99,111,109,112,62,78,41,18,114,37,0,0,0,114,
+ 3,0,0,0,90,7,108,105,115,116,100,105,114,114,47,0,
+ 0,0,114,254,0,0,0,218,15,80,101,114,109,105,115,115,
+ 105,111,110,69,114,114,111,114,218,18,78,111,116,65,68,105,
+ 114,101,99,116,111,114,121,69,114,114,111,114,114,8,0,0,
+ 0,114,9,0,0,0,114,10,0,0,0,114,7,1,0,0,
+ 114,8,1,0,0,114,85,0,0,0,114,50,0,0,0,114,
+ 90,0,0,0,218,3,97,100,100,114,11,0,0,0,114,9,
+ 1,0,0,41,9,114,102,0,0,0,114,37,0,0,0,90,
+ 8,99,111,110,116,101,110,116,115,90,21,108,111,119,101,114,
+ 95,115,117,102,102,105,120,95,99,111,110,116,101,110,116,115,
+ 114,243,0,0,0,114,100,0,0,0,114,233,0,0,0,114,
+ 221,0,0,0,90,8,110,101,119,95,110,97,109,101,114,4,
+ 0,0,0,114,4,0,0,0,114,6,0,0,0,114,11,1,
+ 0,0,1,5,0,0,115,34,0,0,0,0,2,6,1,2,
+ 1,22,1,20,3,10,3,12,1,12,7,6,1,10,1,16,
+ 1,4,1,18,2,4,1,14,1,6,1,12,1,122,22,70,
+ 105,108,101,70,105,110,100,101,114,46,95,102,105,108,108,95,
+ 99,97,99,104,101,99,1,0,0,0,0,0,0,0,3,0,
+ 0,0,3,0,0,0,7,0,0,0,115,18,0,0,0,135,
+ 0,135,1,102,2,100,1,100,2,132,8,125,2,124,2,83,
+ 0,41,3,97,20,1,0,0,65,32,99,108,97,115,115,32,
+ 109,101,116,104,111,100,32,119,104,105,99,104,32,114,101,116,
+ 117,114,110,115,32,97,32,99,108,111,115,117,114,101,32,116,
+ 111,32,117,115,101,32,111,110,32,115,121,115,46,112,97,116,
+ 104,95,104,111,111,107,10,32,32,32,32,32,32,32,32,119,
+ 104,105,99,104,32,119,105,108,108,32,114,101,116,117,114,110,
+ 32,97,110,32,105,110,115,116,97,110,99,101,32,117,115,105,
+ 110,103,32,116,104,101,32,115,112,101,99,105,102,105,101,100,
+ 32,108,111,97,100,101,114,115,32,97,110,100,32,116,104,101,
+ 32,112,97,116,104,10,32,32,32,32,32,32,32,32,99,97,
+ 108,108,101,100,32,111,110,32,116,104,101,32,99,108,111,115,
+ 117,114,101,46,10,10,32,32,32,32,32,32,32,32,73,102,
+ 32,116,104,101,32,112,97,116,104,32,99,97,108,108,101,100,
+ 32,111,110,32,116,104,101,32,99,108,111,115,117,114,101,32,
+ 105,115,32,110,111,116,32,97,32,100,105,114,101,99,116,111,
+ 114,121,44,32,73,109,112,111,114,116,69,114,114,111,114,32,
+ 105,115,10,32,32,32,32,32,32,32,32,114,97,105,115,101,
+ 100,46,10,10,32,32,32,32,32,32,32,32,99,1,0,0,
+ 0,0,0,0,0,1,0,0,0,4,0,0,0,19,0,0,
+ 0,115,34,0,0,0,116,0,124,0,131,1,115,20,116,1,
+ 100,1,124,0,100,2,141,2,130,1,136,0,124,0,102,1,
+ 136,1,158,2,142,0,83,0,41,3,122,45,80,97,116,104,
+ 32,104,111,111,107,32,102,111,114,32,105,109,112,111,114,116,
+ 108,105,98,46,109,97,99,104,105,110,101,114,121,46,70,105,
+ 108,101,70,105,110,100,101,114,46,122,30,111,110,108,121,32,
+ 100,105,114,101,99,116,111,114,105,101,115,32,97,114,101,32,
+ 115,117,112,112,111,114,116,101,100,41,1,114,37,0,0,0,
+ 41,2,114,48,0,0,0,114,101,0,0,0,41,1,114,37,
+ 0,0,0,41,2,114,167,0,0,0,114,10,1,0,0,114,
+ 4,0,0,0,114,6,0,0,0,218,24,112,97,116,104,95,
+ 104,111,111,107,95,102,111,114,95,70,105,108,101,70,105,110,
+ 100,101,114,42,5,0,0,115,6,0,0,0,0,2,8,1,
+ 12,1,122,54,70,105,108,101,70,105,110,100,101,114,46,112,
+ 97,116,104,95,104,111,111,107,46,60,108,111,99,97,108,115,
+ 62,46,112,97,116,104,95,104,111,111,107,95,102,111,114,95,
+ 70,105,108,101,70,105,110,100,101,114,114,4,0,0,0,41,
+ 3,114,167,0,0,0,114,10,1,0,0,114,16,1,0,0,
+ 114,4,0,0,0,41,2,114,167,0,0,0,114,10,1,0,
+ 0,114,6,0,0,0,218,9,112,97,116,104,95,104,111,111,
+ 107,32,5,0,0,115,4,0,0,0,0,10,14,6,122,20,
+ 70,105,108,101,70,105,110,100,101,114,46,112,97,116,104,95,
+ 104,111,111,107,99,1,0,0,0,0,0,0,0,1,0,0,
+ 0,2,0,0,0,67,0,0,0,115,12,0,0,0,100,1,
+ 106,0,124,0,106,1,131,1,83,0,41,2,78,122,16,70,
+ 105,108,101,70,105,110,100,101,114,40,123,33,114,125,41,41,
+ 2,114,50,0,0,0,114,37,0,0,0,41,1,114,102,0,
+ 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
+ 0,114,242,0,0,0,50,5,0,0,115,2,0,0,0,0,
+ 1,122,19,70,105,108,101,70,105,110,100,101,114,46,95,95,
+ 114,101,112,114,95,95,41,1,78,41,15,114,107,0,0,0,
+ 114,106,0,0,0,114,108,0,0,0,114,109,0,0,0,114,
+ 181,0,0,0,114,248,0,0,0,114,126,0,0,0,114,178,
+ 0,0,0,114,120,0,0,0,114,3,1,0,0,114,177,0,
+ 0,0,114,11,1,0,0,114,179,0,0,0,114,17,1,0,
+ 0,114,242,0,0,0,114,4,0,0,0,114,4,0,0,0,
+ 114,4,0,0,0,114,6,0,0,0,114,4,1,0,0,163,
+ 4,0,0,115,20,0,0,0,8,7,4,2,8,14,8,4,
+ 4,2,8,12,8,5,10,48,8,31,12,18,114,4,1,0,
+ 0,99,4,0,0,0,0,0,0,0,6,0,0,0,11,0,
+ 0,0,67,0,0,0,115,146,0,0,0,124,0,106,0,100,
+ 1,131,1,125,4,124,0,106,0,100,2,131,1,125,5,124,
+ 4,115,66,124,5,114,36,124,5,106,1,125,4,110,30,124,
+ 2,124,3,107,2,114,56,116,2,124,1,124,2,131,2,125,
+ 4,110,10,116,3,124,1,124,2,131,2,125,4,124,5,115,
+ 84,116,4,124,1,124,2,124,4,100,3,141,3,125,5,121,
+ 36,124,5,124,0,100,2,60,0,124,4,124,0,100,1,60,
+ 0,124,2,124,0,100,4,60,0,124,3,124,0,100,5,60,
+ 0,87,0,110,20,4,0,116,5,107,10,114,140,1,0,1,
+ 0,1,0,89,0,110,2,88,0,100,0,83,0,41,6,78,
+ 218,10,95,95,108,111,97,100,101,114,95,95,218,8,95,95,
+ 115,112,101,99,95,95,41,1,114,123,0,0,0,90,8,95,
+ 95,102,105,108,101,95,95,90,10,95,95,99,97,99,104,101,
+ 100,95,95,41,6,218,3,103,101,116,114,123,0,0,0,114,
+ 219,0,0,0,114,214,0,0,0,114,164,0,0,0,218,9,
+ 69,120,99,101,112,116,105,111,110,41,6,90,2,110,115,114,
+ 100,0,0,0,90,8,112,97,116,104,110,97,109,101,90,9,
+ 99,112,97,116,104,110,97,109,101,114,123,0,0,0,114,161,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,218,14,95,102,105,120,95,117,112,95,109,111,100,117,
+ 108,101,56,5,0,0,115,34,0,0,0,0,2,10,1,10,
+ 1,4,1,4,1,8,1,8,1,12,2,10,1,4,1,14,
+ 1,2,1,8,1,8,1,8,1,12,1,14,2,114,22,1,
+ 0,0,99,0,0,0,0,0,0,0,0,3,0,0,0,3,
+ 0,0,0,67,0,0,0,115,38,0,0,0,116,0,116,1,
+ 106,2,131,0,102,2,125,0,116,3,116,4,102,2,125,1,
+ 116,5,116,6,102,2,125,2,124,0,124,1,124,2,103,3,
+ 83,0,41,1,122,95,82,101,116,117,114,110,115,32,97,32,
+ 108,105,115,116,32,111,102,32,102,105,108,101,45,98,97,115,
+ 101,100,32,109,111,100,117,108,101,32,108,111,97,100,101,114,
+ 115,46,10,10,32,32,32,32,69,97,99,104,32,105,116,101,
+ 109,32,105,115,32,97,32,116,117,112,108,101,32,40,108,111,
+ 97,100,101,114,44,32,115,117,102,102,105,120,101,115,41,46,
+ 10,32,32,32,32,41,7,114,220,0,0,0,114,142,0,0,
+ 0,218,18,101,120,116,101,110,115,105,111,110,95,115,117,102,
+ 102,105,120,101,115,114,214,0,0,0,114,86,0,0,0,114,
+ 219,0,0,0,114,76,0,0,0,41,3,90,10,101,120,116,
+ 101,110,115,105,111,110,115,90,6,115,111,117,114,99,101,90,
+ 8,98,121,116,101,99,111,100,101,114,4,0,0,0,114,4,
+ 0,0,0,114,6,0,0,0,114,158,0,0,0,79,5,0,
+ 0,115,8,0,0,0,0,5,12,1,8,1,8,1,114,158,
+ 0,0,0,99,1,0,0,0,0,0,0,0,12,0,0,0,
+ 12,0,0,0,67,0,0,0,115,198,1,0,0,124,0,97,
+ 0,116,0,106,1,97,1,116,0,106,2,97,2,116,1,106,
+ 3,116,4,25,0,125,1,120,56,100,27,68,0,93,48,125,
+ 2,124,2,116,1,106,3,107,7,114,58,116,0,106,5,124,
+ 2,131,1,125,3,110,10,116,1,106,3,124,2,25,0,125,
+ 3,116,6,124,1,124,2,124,3,131,3,1,0,113,32,87,
+ 0,100,5,100,6,103,1,102,2,100,7,100,8,100,6,103,
+ 2,102,2,100,9,100,8,100,6,103,2,102,2,102,3,125,
+ 4,120,118,124,4,68,0,93,102,92,2,125,5,125,6,116,
+ 7,100,10,100,11,132,0,124,6,68,0,131,1,131,1,115,
+ 152,116,8,130,1,124,6,100,12,25,0,125,7,124,5,116,
+ 1,106,3,107,6,114,184,116,1,106,3,124,5,25,0,125,
+ 8,80,0,113,122,121,16,116,0,106,5,124,5,131,1,125,
+ 8,80,0,87,0,113,122,4,0,116,9,107,10,114,222,1,
+ 0,1,0,1,0,119,122,89,0,113,122,88,0,113,122,87,
+ 0,116,9,100,13,131,1,130,1,116,6,124,1,100,14,124,
+ 8,131,3,1,0,116,6,124,1,100,15,124,7,131,3,1,
+ 0,116,6,124,1,100,16,100,17,106,10,124,6,131,1,131,
+ 3,1,0,121,14,116,0,106,5,100,18,131,1,125,9,87,
+ 0,110,26,4,0,116,9,107,10,144,1,114,62,1,0,1,
+ 0,1,0,100,19,125,9,89,0,110,2,88,0,116,6,124,
+ 1,100,18,124,9,131,3,1,0,116,0,106,5,100,20,131,
+ 1,125,10,116,6,124,1,100,20,124,10,131,3,1,0,124,
+ 5,100,7,107,2,144,1,114,130,116,0,106,5,100,21,131,
+ 1,125,11,116,6,124,1,100,22,124,11,131,3,1,0,116,
+ 6,124,1,100,23,116,11,131,0,131,3,1,0,116,12,106,
+ 13,116,2,106,14,131,0,131,1,1,0,124,5,100,7,107,
+ 2,144,1,114,194,116,15,106,16,100,24,131,1,1,0,100,
+ 25,116,12,107,6,144,1,114,194,100,26,116,17,95,18,100,
+ 19,83,0,41,28,122,205,83,101,116,117,112,32,116,104,101,
+ 32,112,97,116,104,45,98,97,115,101,100,32,105,109,112,111,
+ 114,116,101,114,115,32,102,111,114,32,105,109,112,111,114,116,
+ 108,105,98,32,98,121,32,105,109,112,111,114,116,105,110,103,
+ 32,110,101,101,100,101,100,10,32,32,32,32,98,117,105,108,
+ 116,45,105,110,32,109,111,100,117,108,101,115,32,97,110,100,
+ 32,105,110,106,101,99,116,105,110,103,32,116,104,101,109,32,
+ 105,110,116,111,32,116,104,101,32,103,108,111,98,97,108,32,
+ 110,97,109,101,115,112,97,99,101,46,10,10,32,32,32,32,
+ 79,116,104,101,114,32,99,111,109,112,111,110,101,110,116,115,
+ 32,97,114,101,32,101,120,116,114,97,99,116,101,100,32,102,
+ 114,111,109,32,116,104,101,32,99,111,114,101,32,98,111,111,
+ 116,115,116,114,97,112,32,109,111,100,117,108,101,46,10,10,
+ 32,32,32,32,114,52,0,0,0,114,62,0,0,0,218,8,
+ 98,117,105,108,116,105,110,115,114,139,0,0,0,90,5,112,
+ 111,115,105,120,250,1,47,90,2,110,116,250,1,92,90,4,
+ 101,100,107,50,99,1,0,0,0,0,0,0,0,2,0,0,
+ 0,3,0,0,0,115,0,0,0,115,26,0,0,0,124,0,
+ 93,18,125,1,116,0,124,1,131,1,100,0,107,2,86,0,
+ 1,0,113,2,100,1,83,0,41,2,114,31,0,0,0,78,
+ 41,1,114,33,0,0,0,41,2,114,24,0,0,0,114,79,
+ 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
+ 0,0,114,223,0,0,0,115,5,0,0,115,2,0,0,0,
+ 4,0,122,25,95,115,101,116,117,112,46,60,108,111,99,97,
+ 108,115,62,46,60,103,101,110,101,120,112,114,62,114,61,0,
+ 0,0,122,38,105,109,112,111,114,116,108,105,98,32,114,101,
+ 113,117,105,114,101,115,32,112,111,115,105,120,32,111,114,32,
+ 110,116,32,111,114,32,101,100,107,50,114,3,0,0,0,114,
+ 27,0,0,0,114,23,0,0,0,114,32,0,0,0,90,7,
+ 95,116,104,114,101,97,100,78,90,8,95,119,101,97,107,114,
+ 101,102,90,6,119,105,110,114,101,103,114,166,0,0,0,114,
+ 7,0,0,0,122,4,46,112,121,119,122,6,95,100,46,112,
+ 121,100,84,41,4,114,52,0,0,0,114,62,0,0,0,114,
+ 24,1,0,0,114,139,0,0,0,41,19,114,117,0,0,0,
+ 114,8,0,0,0,114,142,0,0,0,114,235,0,0,0,114,
+ 107,0,0,0,90,18,95,98,117,105,108,116,105,110,95,102,
+ 114,111,109,95,110,97,109,101,114,111,0,0,0,218,3,97,
+ 108,108,218,14,65,115,115,101,114,116,105,111,110,69,114,114,
+ 111,114,114,101,0,0,0,114,28,0,0,0,114,13,0,0,
+ 0,114,225,0,0,0,114,146,0,0,0,114,23,1,0,0,
+ 114,86,0,0,0,114,160,0,0,0,114,165,0,0,0,114,
+ 169,0,0,0,41,12,218,17,95,98,111,111,116,115,116,114,
+ 97,112,95,109,111,100,117,108,101,90,11,115,101,108,102,95,
+ 109,111,100,117,108,101,90,12,98,117,105,108,116,105,110,95,
+ 110,97,109,101,90,14,98,117,105,108,116,105,110,95,109,111,
+ 100,117,108,101,90,10,111,115,95,100,101,116,97,105,108,115,
+ 90,10,98,117,105,108,116,105,110,95,111,115,114,23,0,0,
+ 0,114,27,0,0,0,90,9,111,115,95,109,111,100,117,108,
+ 101,90,13,116,104,114,101,97,100,95,109,111,100,117,108,101,
+ 90,14,119,101,97,107,114,101,102,95,109,111,100,117,108,101,
+ 90,13,119,105,110,114,101,103,95,109,111,100,117,108,101,114,
+ 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,6,
+ 95,115,101,116,117,112,90,5,0,0,115,82,0,0,0,0,
+ 8,4,1,6,1,6,3,10,1,10,1,10,1,12,2,10,
+ 1,16,3,32,1,14,2,22,1,8,1,10,1,10,1,4,
+ 2,2,1,10,1,6,1,14,1,12,2,8,1,12,1,12,
+ 1,18,3,2,1,14,1,16,2,10,1,12,3,10,1,12,
+ 3,10,1,10,1,12,3,14,1,14,1,10,1,10,1,10,
+ 1,114,30,1,0,0,99,1,0,0,0,0,0,0,0,2,
+ 0,0,0,3,0,0,0,67,0,0,0,115,50,0,0,0,
+ 116,0,124,0,131,1,1,0,116,1,131,0,125,1,116,2,
+ 106,3,106,4,116,5,106,6,124,1,142,0,103,1,131,1,
+ 1,0,116,2,106,7,106,8,116,9,131,1,1,0,100,1,
+ 83,0,41,2,122,41,73,110,115,116,97,108,108,32,116,104,
+ 101,32,112,97,116,104,45,98,97,115,101,100,32,105,109,112,
+ 111,114,116,32,99,111,109,112,111,110,101,110,116,115,46,78,
+ 41,10,114,30,1,0,0,114,158,0,0,0,114,8,0,0,
+ 0,114,252,0,0,0,114,146,0,0,0,114,4,1,0,0,
+ 114,17,1,0,0,218,9,109,101,116,97,95,112,97,116,104,
+ 114,160,0,0,0,114,247,0,0,0,41,2,114,29,1,0,
+ 0,90,17,115,117,112,112,111,114,116,101,100,95,108,111,97,
+ 100,101,114,115,114,4,0,0,0,114,4,0,0,0,114,6,
+ 0,0,0,218,8,95,105,110,115,116,97,108,108,158,5,0,
+ 0,115,8,0,0,0,0,2,8,1,6,1,20,1,114,32,
+ 1,0,0,41,1,114,0,0,0,0,41,2,114,1,0,0,
+ 0,114,2,0,0,0,41,1,114,49,0,0,0,41,1,78,
+ 41,3,78,78,78,41,3,78,78,78,41,2,114,61,0,0,
+ 0,114,61,0,0,0,41,1,78,41,1,78,41,58,114,109,
+ 0,0,0,114,12,0,0,0,90,37,95,67,65,83,69,95,
+ 73,78,83,69,78,83,73,84,73,86,69,95,80,76,65,84,
+ 70,79,82,77,83,95,66,89,84,69,83,95,75,69,89,114,
+ 11,0,0,0,114,13,0,0,0,114,19,0,0,0,114,21,
+ 0,0,0,114,30,0,0,0,114,40,0,0,0,114,41,0,
+ 0,0,114,45,0,0,0,114,46,0,0,0,114,48,0,0,
+ 0,114,57,0,0,0,218,4,116,121,112,101,218,8,95,95,
+ 99,111,100,101,95,95,114,141,0,0,0,114,17,0,0,0,
+ 114,131,0,0,0,114,16,0,0,0,114,20,0,0,0,90,
+ 17,95,82,65,87,95,77,65,71,73,67,95,78,85,77,66,
+ 69,82,114,75,0,0,0,114,74,0,0,0,114,86,0,0,
+ 0,114,76,0,0,0,90,23,68,69,66,85,71,95,66,89,
+ 84,69,67,79,68,69,95,83,85,70,70,73,88,69,83,90,
+ 27,79,80,84,73,77,73,90,69,68,95,66,89,84,69,67,
+ 79,68,69,95,83,85,70,70,73,88,69,83,114,81,0,0,
+ 0,114,87,0,0,0,114,93,0,0,0,114,97,0,0,0,
+ 114,99,0,0,0,114,119,0,0,0,114,126,0,0,0,114,
+ 138,0,0,0,114,144,0,0,0,114,147,0,0,0,114,152,
+ 0,0,0,218,6,111,98,106,101,99,116,114,159,0,0,0,
+ 114,164,0,0,0,114,165,0,0,0,114,180,0,0,0,114,
+ 190,0,0,0,114,206,0,0,0,114,214,0,0,0,114,219,
+ 0,0,0,114,225,0,0,0,114,220,0,0,0,114,226,0,
+ 0,0,114,245,0,0,0,114,247,0,0,0,114,4,1,0,
+ 0,114,22,1,0,0,114,158,0,0,0,114,30,1,0,0,
+ 114,32,1,0,0,114,4,0,0,0,114,4,0,0,0,114,
+ 4,0,0,0,114,6,0,0,0,218,8,60,109,111,100,117,
+ 108,101,62,8,0,0,0,115,108,0,0,0,4,16,4,1,
+ 4,1,2,1,6,3,8,17,8,5,8,5,8,6,8,12,
+ 8,10,8,9,8,5,8,7,10,22,10,123,16,1,12,2,
+ 4,1,4,2,6,2,6,2,8,2,16,45,8,34,8,19,
+ 8,12,8,12,8,28,8,17,10,55,10,12,10,10,8,14,
+ 6,3,4,1,14,67,14,64,14,29,16,110,14,41,18,45,
+ 18,16,4,3,18,53,14,60,14,42,14,127,0,5,14,127,
+ 0,22,10,23,8,11,8,68,
+};
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
new file mode 100644
index 00000000..dbe75e3b
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
@@ -0,0 +1,1861 @@
+
+/* Write Python objects to files and read them back.
+ This is primarily intended for writing and reading compiled Python code,
+ even though dicts, lists, sets and frozensets, not commonly seen in
+ code objects, are supported.
+ Version 3 of this protocol properly supports circular links
+ and sharing. */
+
+#define PY_SSIZE_T_CLEAN
+
+#include "Python.h"
+#include "longintrepr.h"
+#include "code.h"
+#include "marshal.h"
+#include "../Modules/hashtable.h"
+
+/* High water mark to determine when the marshalled object is dangerously deep
+ * and risks coring the interpreter. When the object stack gets this deep,
+ * raise an exception instead of continuing.
+ * On Windows debug builds, reduce this value.
+ */
+#if defined(MS_WINDOWS) && defined(_DEBUG)
+#define MAX_MARSHAL_STACK_DEPTH 1000
+#else
+#define MAX_MARSHAL_STACK_DEPTH 2000
+#endif
+
+#define TYPE_NULL '0'
+#define TYPE_NONE 'N'
+#define TYPE_FALSE 'F'
+#define TYPE_TRUE 'T'
+#define TYPE_STOPITER 'S'
+#define TYPE_ELLIPSIS '.'
+#define TYPE_INT 'i'
+/* TYPE_INT64 is not generated anymore.
+ Supported for backward compatibility only. */
+#define TYPE_INT64 'I'
+#define TYPE_FLOAT 'f'
+#define TYPE_BINARY_FLOAT 'g'
+#define TYPE_COMPLEX 'x'
+#define TYPE_BINARY_COMPLEX 'y'
+#define TYPE_LONG 'l'
+#define TYPE_STRING 's'
+#define TYPE_INTERNED 't'
+#define TYPE_REF 'r'
+#define TYPE_TUPLE '('
+#define TYPE_LIST '['
+#define TYPE_DICT '{'
+#define TYPE_CODE 'c'
+#define TYPE_UNICODE 'u'
+#define TYPE_UNKNOWN '?'
+#define TYPE_SET '<'
+#define TYPE_FROZENSET '>'
+#define FLAG_REF '\x80' /* with a type, add obj to index */
+
+#define TYPE_ASCII 'a'
+#define TYPE_ASCII_INTERNED 'A'
+#define TYPE_SMALL_TUPLE ')'
+#define TYPE_SHORT_ASCII 'z'
+#define TYPE_SHORT_ASCII_INTERNED 'Z'
+
+#define WFERR_OK 0
+#define WFERR_UNMARSHALLABLE 1
+#define WFERR_NESTEDTOODEEP 2
+#define WFERR_NOMEMORY 3
+
+typedef struct {
+ FILE *fp;
+ int error; /* see WFERR_* values */
+ int depth;
+ PyObject *str;
+ char *ptr;
+ char *end;
+ char *buf;
+ _Py_hashtable_t *hashtable;
+ int version;
+} WFILE;
+
+#define w_byte(c, p) do { \
+ if ((p)->ptr != (p)->end || w_reserve((p), 1)) \
+ *(p)->ptr++ = (c); \
+ } while(0)
+
+static void
+w_flush(WFILE *p)
+{
+ assert(p->fp != NULL);
+ fwrite(p->buf, 1, p->ptr - p->buf, p->fp);
+ p->ptr = p->buf;
+}
+
+static int
+w_reserve(WFILE *p, Py_ssize_t needed)
+{
+ Py_ssize_t pos, size, delta;
+ if (p->ptr == NULL)
+ return 0; /* An error already occurred */
+ if (p->fp != NULL) {
+ w_flush(p);
+ return needed <= p->end - p->ptr;
+ }
+ assert(p->str != NULL);
+ pos = p->ptr - p->buf;
+ size = PyBytes_Size(p->str);
+ if (size > 16*1024*1024)
+ delta = (size >> 3); /* 12.5% overallocation */
+ else
+ delta = size + 1024;
+ delta = Py_MAX(delta, needed);
+ if (delta > PY_SSIZE_T_MAX - size) {
+ p->error = WFERR_NOMEMORY;
+ return 0;
+ }
+ size += delta;
+ if (_PyBytes_Resize(&p->str, size) != 0) {
+ p->ptr = p->buf = p->end = NULL;
+ return 0;
+ }
+ else {
+ p->buf = PyBytes_AS_STRING(p->str);
+ p->ptr = p->buf + pos;
+ p->end = p->buf + size;
+ return 1;
+ }
+}
+
+static void
+w_string(const char *s, Py_ssize_t n, WFILE *p)
+{
+ Py_ssize_t m;
+ if (!n || p->ptr == NULL)
+ return;
+ m = p->end - p->ptr;
+ if (p->fp != NULL) {
+ if (n <= m) {
+ memcpy(p->ptr, s, n);
+ p->ptr += n;
+ }
+ else {
+ w_flush(p);
+ fwrite(s, 1, n, p->fp);
+ }
+ }
+ else {
+ if (n <= m || w_reserve(p, n - m)) {
+ memcpy(p->ptr, s, n);
+ p->ptr += n;
+ }
+ }
+}
+
+static void
+w_short(int x, WFILE *p)
+{
+ w_byte((char)( x & 0xff), p);
+ w_byte((char)((x>> 8) & 0xff), p);
+}
+
+static void
+w_long(long x, WFILE *p)
+{
+ w_byte((char)( x & 0xff), p);
+ w_byte((char)((x>> 8) & 0xff), p);
+ w_byte((char)((x>>16) & 0xff), p);
+ w_byte((char)((x>>24) & 0xff), p);
+}
+
+#define SIZE32_MAX 0x7FFFFFFF
+
+#if SIZEOF_SIZE_T > 4
+# define W_SIZE(n, p) do { \
+ if ((n) > SIZE32_MAX) { \
+ (p)->depth--; \
+ (p)->error = WFERR_UNMARSHALLABLE; \
+ return; \
+ } \
+ w_long((long)(n), p); \
+ } while(0)
+#else
+# define W_SIZE w_long
+#endif
+
+static void
+w_pstring(const char *s, Py_ssize_t n, WFILE *p)
+{
+ W_SIZE(n, p);
+ w_string(s, n, p);
+}
+
+static void
+w_short_pstring(const char *s, Py_ssize_t n, WFILE *p)
+{
+ w_byte(Py_SAFE_DOWNCAST(n, Py_ssize_t, unsigned char), p);
+ w_string(s, n, p);
+}
+
+/* We assume that Python ints are stored internally in base some power of
+ 2**15; for the sake of portability we'll always read and write them in base
+ exactly 2**15. */
+
+#define PyLong_MARSHAL_SHIFT 15
+#define PyLong_MARSHAL_BASE ((short)1 << PyLong_MARSHAL_SHIFT)
+#define PyLong_MARSHAL_MASK (PyLong_MARSHAL_BASE - 1)
+#if PyLong_SHIFT % PyLong_MARSHAL_SHIFT != 0
+#error "PyLong_SHIFT must be a multiple of PyLong_MARSHAL_SHIFT"
+#endif
+#define PyLong_MARSHAL_RATIO (PyLong_SHIFT / PyLong_MARSHAL_SHIFT)
+
+#define W_TYPE(t, p) do { \
+ w_byte((t) | flag, (p)); \
+} while(0)
+
+static void
+w_PyLong(const PyLongObject *ob, char flag, WFILE *p)
+{
+ Py_ssize_t i, j, n, l;
+ digit d;
+
+ W_TYPE(TYPE_LONG, p);
+ if (Py_SIZE(ob) == 0) {
+ w_long((long)0, p);
+ return;
+ }
+
+ /* set l to number of base PyLong_MARSHAL_BASE digits */
+ n = Py_ABS(Py_SIZE(ob));
+ l = (n-1) * PyLong_MARSHAL_RATIO;
+ d = ob->ob_digit[n-1];
+ assert(d != 0); /* a PyLong is always normalized */
+ do {
+ d >>= PyLong_MARSHAL_SHIFT;
+ l++;
+ } while (d != 0);
+ if (l > SIZE32_MAX) {
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p);
+
+ for (i=0; i < n-1; i++) {
+ d = ob->ob_digit[i];
+ for (j=0; j < PyLong_MARSHAL_RATIO; j++) {
+ w_short(d & PyLong_MARSHAL_MASK, p);
+ d >>= PyLong_MARSHAL_SHIFT;
+ }
+ assert (d == 0);
+ }
+ d = ob->ob_digit[n-1];
+ do {
+ w_short(d & PyLong_MARSHAL_MASK, p);
+ d >>= PyLong_MARSHAL_SHIFT;
+ } while (d != 0);
+}
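
The net effect of w_PyLong() is that an int is re-encoded in base 2**15, least-significant digit first, one w_short() per marshal digit, with the digit count (negated for negative values) written up front. A minimal standalone sketch of that decomposition, for illustration only (not part of the diff; plain C, no Python headers, and the value 1000000000 is just an arbitrary example):

    #include <stdint.h>
    #include <stdio.h>

    #define MARSHAL_SHIFT 15
    #define MARSHAL_BASE  (1u << MARSHAL_SHIFT)      /* 32768 */
    #define MARSHAL_MASK  (MARSHAL_BASE - 1)

    /* Split a non-negative value into base-2**15 digits, least significant
       first -- the same digits w_PyLong() emits through w_short(). */
    static int split_digits(uint64_t v, uint16_t out[], int max)
    {
        int n = 0;
        do {
            out[n++] = (uint16_t)(v & MARSHAL_MASK);
            v >>= MARSHAL_SHIFT;
        } while (v != 0 && n < max);
        return n;   /* w_PyLong() writes this count, negated when the int is negative */
    }

    int main(void)
    {
        uint16_t d[8];
        int i, n = split_digits(1000000000u, d, 8);
        for (i = 0; i < n; i++)
            printf("digit[%d] = %u\n", i, d[i]);     /* prints 18944, then 30517 */
        return 0;
    }
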
+
+static int
+w_ref(PyObject *v, char *flag, WFILE *p)
+{
+ _Py_hashtable_entry_t *entry;
+ int w;
+
+ if (p->version < 3 || p->hashtable == NULL)
+ return 0; /* not writing object references */
+
+ /* if it has only one reference, it definitely isn't shared */
+ if (Py_REFCNT(v) == 1)
+ return 0;
+
+ entry = _Py_HASHTABLE_GET_ENTRY(p->hashtable, v);
+ if (entry != NULL) {
+ /* write the reference index to the stream */
+ _Py_HASHTABLE_ENTRY_READ_DATA(p->hashtable, entry, w);
+ /* we don't store "long" indices in the dict */
+ assert(0 <= w && w <= 0x7fffffff);
+ w_byte(TYPE_REF, p);
+ w_long(w, p);
+ return 1;
+ } else {
+ size_t s = p->hashtable->entries;
+ /* we don't support long indices */
+ if (s >= 0x7fffffff) {
+ PyErr_SetString(PyExc_ValueError, "too many objects");
+ goto err;
+ }
+ w = (int)s;
+ Py_INCREF(v);
+ if (_Py_HASHTABLE_SET(p->hashtable, v, w) < 0) {
+ Py_DECREF(v);
+ goto err;
+ }
+ *flag |= FLAG_REF;
+ return 0;
+ }
+err:
+ p->error = WFERR_UNMARSHALLABLE;
+ return 1;
+}
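
The effect of w_ref() is easiest to see from the public C API: with version 3 or later a shared object is written once with FLAG_REF set, and every later occurrence becomes a five-byte TYPE_REF (one type byte plus a 32-bit index). A hedged sketch, for illustration only (not part of the diff; assumes a host build linked against Python 3.6, and the string content is arbitrary):

    #include <Python.h>
    #include <marshal.h>

    int main(void)
    {
        Py_Initialize();

        /* a tuple holding the same object twice */
        PyObject *s = PyUnicode_FromString("shared payload, long enough to notice");
        PyObject *t = PyTuple_Pack(2, s, s);

        PyObject *v2 = PyMarshal_WriteObjectToString(t, 2);  /* no ref sharing   */
        PyObject *v3 = PyMarshal_WriteObjectToString(t, 3);  /* TYPE_REF emitted */

        /* version 3 is shorter: the second element is a TYPE_REF, not a copy */
        printf("v2=%zd bytes, v3=%zd bytes\n",
               PyBytes_GET_SIZE(v2), PyBytes_GET_SIZE(v3));

        Py_DECREF(v2); Py_DECREF(v3); Py_DECREF(t); Py_DECREF(s);
        Py_Finalize();
        return 0;
    }
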
+
+static void
+w_complex_object(PyObject *v, char flag, WFILE *p);
+
+static void
+w_object(PyObject *v, WFILE *p)
+{
+ char flag = '\0';
+
+ p->depth++;
+
+ if (p->depth > MAX_MARSHAL_STACK_DEPTH) {
+ p->error = WFERR_NESTEDTOODEEP;
+ }
+ else if (v == NULL) {
+ w_byte(TYPE_NULL, p);
+ }
+ else if (v == Py_None) {
+ w_byte(TYPE_NONE, p);
+ }
+ else if (v == PyExc_StopIteration) {
+ w_byte(TYPE_STOPITER, p);
+ }
+ else if (v == Py_Ellipsis) {
+ w_byte(TYPE_ELLIPSIS, p);
+ }
+ else if (v == Py_False) {
+ w_byte(TYPE_FALSE, p);
+ }
+ else if (v == Py_True) {
+ w_byte(TYPE_TRUE, p);
+ }
+ else if (!w_ref(v, &flag, p))
+ w_complex_object(v, flag, p);
+
+ p->depth--;
+}
+
+static void
+w_complex_object(PyObject *v, char flag, WFILE *p)
+{
+ Py_ssize_t i, n;
+
+ if (PyLong_CheckExact(v)) {
+ long x = PyLong_AsLong(v);
+ if ((x == -1) && PyErr_Occurred()) {
+ PyLongObject *ob = (PyLongObject *)v;
+ PyErr_Clear();
+ w_PyLong(ob, flag, p);
+ }
+ else {
+#if SIZEOF_LONG > 4
+ long y = Py_ARITHMETIC_RIGHT_SHIFT(long, x, 31);
+ if (y && y != -1) {
+ /* Too large for TYPE_INT */
+ w_PyLong((PyLongObject*)v, flag, p);
+ }
+ else
+#endif
+ {
+ W_TYPE(TYPE_INT, p);
+ w_long(x, p);
+ }
+ }
+ }
+ else if (PyFloat_CheckExact(v)) {
+ if (p->version > 1) {
+ unsigned char buf[8];
+ if (_PyFloat_Pack8(PyFloat_AsDouble(v),
+ buf, 1) < 0) {
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ W_TYPE(TYPE_BINARY_FLOAT, p);
+ w_string((char*)buf, 8, p);
+ }
+ else {
+ char *buf = PyOS_double_to_string(PyFloat_AS_DOUBLE(v),
+ 'g', 17, 0, NULL);
+ if (!buf) {
+ p->error = WFERR_NOMEMORY;
+ return;
+ }
+ n = strlen(buf);
+ W_TYPE(TYPE_FLOAT, p);
+ w_byte((int)n, p);
+ w_string(buf, n, p);
+ PyMem_Free(buf);
+ }
+ }
+ else if (PyComplex_CheckExact(v)) {
+ if (p->version > 1) {
+ unsigned char buf[8];
+ if (_PyFloat_Pack8(PyComplex_RealAsDouble(v),
+ buf, 1) < 0) {
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ W_TYPE(TYPE_BINARY_COMPLEX, p);
+ w_string((char*)buf, 8, p);
+ if (_PyFloat_Pack8(PyComplex_ImagAsDouble(v),
+ buf, 1) < 0) {
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ w_string((char*)buf, 8, p);
+ }
+ else {
+ char *buf;
+ W_TYPE(TYPE_COMPLEX, p);
+ buf = PyOS_double_to_string(PyComplex_RealAsDouble(v),
+ 'g', 17, 0, NULL);
+ if (!buf) {
+ p->error = WFERR_NOMEMORY;
+ return;
+ }
+ n = strlen(buf);
+ w_byte((int)n, p);
+ w_string(buf, n, p);
+ PyMem_Free(buf);
+ buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v),
+ 'g', 17, 0, NULL);
+ if (!buf) {
+ p->error = WFERR_NOMEMORY;
+ return;
+ }
+ n = strlen(buf);
+ w_byte((int)n, p);
+ w_string(buf, n, p);
+ PyMem_Free(buf);
+ }
+ }
+ else if (PyBytes_CheckExact(v)) {
+ W_TYPE(TYPE_STRING, p);
+ w_pstring(PyBytes_AS_STRING(v), PyBytes_GET_SIZE(v), p);
+ }
+ else if (PyUnicode_CheckExact(v)) {
+ if (p->version >= 4 && PyUnicode_IS_ASCII(v)) {
+ int is_short = PyUnicode_GET_LENGTH(v) < 256;
+ if (is_short) {
+ if (PyUnicode_CHECK_INTERNED(v))
+ W_TYPE(TYPE_SHORT_ASCII_INTERNED, p);
+ else
+ W_TYPE(TYPE_SHORT_ASCII, p);
+ w_short_pstring((char *) PyUnicode_1BYTE_DATA(v),
+ PyUnicode_GET_LENGTH(v), p);
+ }
+ else {
+ if (PyUnicode_CHECK_INTERNED(v))
+ W_TYPE(TYPE_ASCII_INTERNED, p);
+ else
+ W_TYPE(TYPE_ASCII, p);
+ w_pstring((char *) PyUnicode_1BYTE_DATA(v),
+ PyUnicode_GET_LENGTH(v), p);
+ }
+ }
+ else {
+ PyObject *utf8;
+ utf8 = PyUnicode_AsEncodedString(v, "utf8", "surrogatepass");
+ if (utf8 == NULL) {
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ if (p->version >= 3 && PyUnicode_CHECK_INTERNED(v))
+ W_TYPE(TYPE_INTERNED, p);
+ else
+ W_TYPE(TYPE_UNICODE, p);
+ w_pstring(PyBytes_AS_STRING(utf8), PyBytes_GET_SIZE(utf8), p);
+ Py_DECREF(utf8);
+ }
+ }
+ else if (PyTuple_CheckExact(v)) {
+ n = PyTuple_Size(v);
+ if (p->version >= 4 && n < 256) {
+ W_TYPE(TYPE_SMALL_TUPLE, p);
+ w_byte((unsigned char)n, p);
+ }
+ else {
+ W_TYPE(TYPE_TUPLE, p);
+ W_SIZE(n, p);
+ }
+ for (i = 0; i < n; i++) {
+ w_object(PyTuple_GET_ITEM(v, i), p);
+ }
+ }
+ else if (PyList_CheckExact(v)) {
+ W_TYPE(TYPE_LIST, p);
+ n = PyList_GET_SIZE(v);
+ W_SIZE(n, p);
+ for (i = 0; i < n; i++) {
+ w_object(PyList_GET_ITEM(v, i), p);
+ }
+ }
+ else if (PyDict_CheckExact(v)) {
+ Py_ssize_t pos;
+ PyObject *key, *value;
+ W_TYPE(TYPE_DICT, p);
+ /* This one is NULL object terminated! */
+ pos = 0;
+ while (PyDict_Next(v, &pos, &key, &value)) {
+ w_object(key, p);
+ w_object(value, p);
+ }
+ w_object((PyObject *)NULL, p);
+ }
+ else if (PyAnySet_CheckExact(v)) {
+ PyObject *value, *it;
+
+ if (PyObject_TypeCheck(v, &PySet_Type))
+ W_TYPE(TYPE_SET, p);
+ else
+ W_TYPE(TYPE_FROZENSET, p);
+ n = PyObject_Size(v);
+ if (n == -1) {
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ W_SIZE(n, p);
+ it = PyObject_GetIter(v);
+ if (it == NULL) {
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ while ((value = PyIter_Next(it)) != NULL) {
+ w_object(value, p);
+ Py_DECREF(value);
+ }
+ Py_DECREF(it);
+ if (PyErr_Occurred()) {
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ }
+ else if (PyCode_Check(v)) {
+ PyCodeObject *co = (PyCodeObject *)v;
+ W_TYPE(TYPE_CODE, p);
+ w_long(co->co_argcount, p);
+ w_long(co->co_kwonlyargcount, p);
+ w_long(co->co_nlocals, p);
+ w_long(co->co_stacksize, p);
+ w_long(co->co_flags, p);
+ w_object(co->co_code, p);
+ w_object(co->co_consts, p);
+ w_object(co->co_names, p);
+ w_object(co->co_varnames, p);
+ w_object(co->co_freevars, p);
+ w_object(co->co_cellvars, p);
+ w_object(co->co_filename, p);
+ w_object(co->co_name, p);
+ w_long(co->co_firstlineno, p);
+ w_object(co->co_lnotab, p);
+ }
+ else if (PyObject_CheckBuffer(v)) {
+ /* Write unknown bytes-like objects as a bytes object */
+ Py_buffer view;
+ if (PyObject_GetBuffer(v, &view, PyBUF_SIMPLE) != 0) {
+ w_byte(TYPE_UNKNOWN, p);
+ p->depth--;
+ p->error = WFERR_UNMARSHALLABLE;
+ return;
+ }
+ W_TYPE(TYPE_STRING, p);
+ w_pstring(view.buf, view.len, p);
+ PyBuffer_Release(&view);
+ }
+ else {
+ W_TYPE(TYPE_UNKNOWN, p);
+ p->error = WFERR_UNMARSHALLABLE;
+ }
+}
+
+static int
+w_init_refs(WFILE *wf, int version)
+{
+ if (version >= 3) {
+ wf->hashtable = _Py_hashtable_new(sizeof(PyObject *), sizeof(int),
+ _Py_hashtable_hash_ptr,
+ _Py_hashtable_compare_direct);
+ if (wf->hashtable == NULL) {
+ PyErr_NoMemory();
+ return -1;
+ }
+ }
+ return 0;
+}
+
+static int
+w_decref_entry(_Py_hashtable_t *ht, _Py_hashtable_entry_t *entry,
+ void *Py_UNUSED(data))
+{
+ PyObject *entry_key;
+
+ _Py_HASHTABLE_ENTRY_READ_KEY(ht, entry, entry_key);
+ Py_XDECREF(entry_key);
+ return 0;
+}
+
+static void
+w_clear_refs(WFILE *wf)
+{
+ if (wf->hashtable != NULL) {
+ _Py_hashtable_foreach(wf->hashtable, w_decref_entry, NULL);
+ _Py_hashtable_destroy(wf->hashtable);
+ }
+}
+
+/* version currently has no effect for writing ints. */
+void
+PyMarshal_WriteLongToFile(long x, FILE *fp, int version)
+{
+ char buf[4];
+ WFILE wf;
+ memset(&wf, 0, sizeof(wf));
+ wf.fp = fp;
+ wf.ptr = wf.buf = buf;
+ wf.end = wf.ptr + sizeof(buf);
+ wf.error = WFERR_OK;
+ wf.version = version;
+ w_long(x, &wf);
+ w_flush(&wf);
+}
+
+void
+PyMarshal_WriteObjectToFile(PyObject *x, FILE *fp, int version)
+{
+ char buf[BUFSIZ];
+ WFILE wf;
+ memset(&wf, 0, sizeof(wf));
+ wf.fp = fp;
+ wf.ptr = wf.buf = buf;
+ wf.end = wf.ptr + sizeof(buf);
+ wf.error = WFERR_OK;
+ wf.version = version;
+ if (w_init_refs(&wf, version))
+ return; /* caller must check PyErr_Occurred() */
+ w_object(x, &wf);
+ w_clear_refs(&wf);
+ w_flush(&wf);
+}
+
+typedef struct {
+ FILE *fp;
+ int depth;
+ PyObject *readable; /* Stream-like object being read from */
+ PyObject *current_filename;
+ char *ptr;
+ char *end;
+ char *buf;
+ Py_ssize_t buf_size;
+ PyObject *refs; /* a list */
+} RFILE;
+
+static const char *
+r_string(Py_ssize_t n, RFILE *p)
+{
+ Py_ssize_t read = -1;
+
+ if (p->ptr != NULL) {
+ /* Fast path for loads() */
+ char *res = p->ptr;
+ Py_ssize_t left = p->end - p->ptr;
+ if (left < n) {
+ PyErr_SetString(PyExc_EOFError,
+ "marshal data too short");
+ return NULL;
+ }
+ p->ptr += n;
+ return res;
+ }
+ if (p->buf == NULL) {
+ p->buf = PyMem_MALLOC(n);
+ if (p->buf == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ p->buf_size = n;
+ }
+ else if (p->buf_size < n) {
+ char *tmp = PyMem_REALLOC(p->buf, n);
+ if (tmp == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ p->buf = tmp;
+ p->buf_size = n;
+ }
+
+ if (!p->readable) {
+ assert(p->fp != NULL);
+ read = fread(p->buf, 1, n, p->fp);
+ }
+ else {
+ _Py_IDENTIFIER(readinto);
+ PyObject *res, *mview;
+ Py_buffer buf;
+
+ if (PyBuffer_FillInfo(&buf, NULL, p->buf, n, 0, PyBUF_CONTIG) == -1)
+ return NULL;
+ mview = PyMemoryView_FromBuffer(&buf);
+ if (mview == NULL)
+ return NULL;
+
+ res = _PyObject_CallMethodId(p->readable, &PyId_readinto, "N", mview);
+ if (res != NULL) {
+ read = PyNumber_AsSsize_t(res, PyExc_ValueError);
+ Py_DECREF(res);
+ }
+ }
+ if (read != n) {
+ if (!PyErr_Occurred()) {
+ if (read > n)
+ PyErr_Format(PyExc_ValueError,
+ "read() returned too much data: "
+ "%zd bytes requested, %zd returned",
+ n, read);
+ else
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where not expected");
+ }
+ return NULL;
+ }
+ return p->buf;
+}
+
+static int
+r_byte(RFILE *p)
+{
+ int c = EOF;
+
+ if (p->ptr != NULL) {
+ if (p->ptr < p->end)
+ c = (unsigned char) *p->ptr++;
+ return c;
+ }
+ if (!p->readable) {
+ assert(p->fp);
+ c = getc(p->fp);
+ }
+ else {
+ const char *ptr = r_string(1, p);
+ if (ptr != NULL)
+ c = *(unsigned char *) ptr;
+ }
+ return c;
+}
+
+static int
+r_short(RFILE *p)
+{
+ short x = -1;
+ const unsigned char *buffer;
+
+ buffer = (const unsigned char *) r_string(2, p);
+ if (buffer != NULL) {
+ x = buffer[0];
+ x |= buffer[1] << 8;
+ /* Sign-extension, in case short is greater than 16 bits */
+ x |= -(x & 0x8000);
+ }
+ return x;
+}
+
+static long
+r_long(RFILE *p)
+{
+ long x = -1;
+ const unsigned char *buffer;
+
+ buffer = (const unsigned char *) r_string(4, p);
+ if (buffer != NULL) {
+ x = buffer[0];
+ x |= (long)buffer[1] << 8;
+ x |= (long)buffer[2] << 16;
+ x |= (long)buffer[3] << 24;
+#if SIZEOF_LONG > 4
+ /* Sign extension for 64-bit machines */
+ x |= -(x & 0x80000000L);
+#endif
+ }
+ return x;
+}
+
+/* r_long64 deals with the TYPE_INT64 code. */
+static PyObject *
+r_long64(RFILE *p)
+{
+ const unsigned char *buffer = (const unsigned char *) r_string(8, p);
+ if (buffer == NULL) {
+ return NULL;
+ }
+ return _PyLong_FromByteArray(buffer, 8,
+ 1 /* little endian */,
+ 1 /* signed */);
+}
+
+static PyObject *
+r_PyLong(RFILE *p)
+{
+ PyLongObject *ob;
+ long n, size, i;
+ int j, md, shorts_in_top_digit;
+ digit d;
+
+ n = r_long(p);
+ if (PyErr_Occurred())
+ return NULL;
+ if (n == 0)
+ return (PyObject *)_PyLong_New(0);
+ if (n < -SIZE32_MAX || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError,
+ "bad marshal data (long size out of range)");
+ return NULL;
+ }
+
+ size = 1 + (Py_ABS(n) - 1) / PyLong_MARSHAL_RATIO;
+ shorts_in_top_digit = 1 + (Py_ABS(n) - 1) % PyLong_MARSHAL_RATIO;
+ ob = _PyLong_New(size);
+ if (ob == NULL)
+ return NULL;
+
+ Py_SIZE(ob) = n > 0 ? size : -size;
+
+ for (i = 0; i < size-1; i++) {
+ d = 0;
+ for (j=0; j < PyLong_MARSHAL_RATIO; j++) {
+ md = r_short(p);
+ if (PyErr_Occurred()) {
+ Py_DECREF(ob);
+ return NULL;
+ }
+ if (md < 0 || md > PyLong_MARSHAL_BASE)
+ goto bad_digit;
+ d += (digit)md << j*PyLong_MARSHAL_SHIFT;
+ }
+ ob->ob_digit[i] = d;
+ }
+
+ d = 0;
+ for (j=0; j < shorts_in_top_digit; j++) {
+ md = r_short(p);
+ if (PyErr_Occurred()) {
+ Py_DECREF(ob);
+ return NULL;
+ }
+ if (md < 0 || md > PyLong_MARSHAL_BASE)
+ goto bad_digit;
+ /* topmost marshal digit should be nonzero */
+ if (md == 0 && j == shorts_in_top_digit - 1) {
+ Py_DECREF(ob);
+ PyErr_SetString(PyExc_ValueError,
+ "bad marshal data (unnormalized long data)");
+ return NULL;
+ }
+ d += (digit)md << j*PyLong_MARSHAL_SHIFT;
+ }
+ if (PyErr_Occurred()) {
+ Py_DECREF(ob);
+ return NULL;
+ }
+ /* top digit should be nonzero, else the resulting PyLong won't be
+ normalized */
+ ob->ob_digit[size-1] = d;
+ return (PyObject *)ob;
+ bad_digit:
+ Py_DECREF(ob);
+ PyErr_SetString(PyExc_ValueError,
+ "bad marshal data (digit out of range in long)");
+ return NULL;
+}
+
+/* allocate the reflist index for a new object. Return -1 on failure */
+static Py_ssize_t
+r_ref_reserve(int flag, RFILE *p)
+{
+ if (flag) { /* currently only FLAG_REF is defined */
+ Py_ssize_t idx = PyList_GET_SIZE(p->refs);
+ if (idx >= 0x7ffffffe) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (index list too large)");
+ return -1;
+ }
+ if (PyList_Append(p->refs, Py_None) < 0)
+ return -1;
+ return idx;
+ } else
+ return 0;
+}
+
+/* insert the new object 'o' to the reflist at previously
+ * allocated index 'idx'.
+ * 'o' can be NULL, in which case nothing is done.
+ * if 'o' was non-NULL, and the function succeeds, 'o' is returned.
+ * if 'o' was non-NULL, and the function fails, 'o' is released and
+ * NULL returned. This simplifies error checking at the call site since
+ * a single test for NULL for the function result is enough.
+ */
+static PyObject *
+r_ref_insert(PyObject *o, Py_ssize_t idx, int flag, RFILE *p)
+{
+ if (o != NULL && flag) { /* currently only FLAG_REF is defined */
+ PyObject *tmp = PyList_GET_ITEM(p->refs, idx);
+ Py_INCREF(o);
+ PyList_SET_ITEM(p->refs, idx, o);
+ Py_DECREF(tmp);
+ }
+ return o;
+}
+
+/* combination of both above, used when an object can be
+ * created whenever it is seen in the file, as opposed to
+ * after having loaded its sub-objects.
+ */
+static PyObject *
+r_ref(PyObject *o, int flag, RFILE *p)
+{
+ assert(flag & FLAG_REF);
+ if (o == NULL)
+ return NULL;
+ if (PyList_Append(p->refs, o) < 0) {
+ Py_DECREF(o); /* release the new object */
+ return NULL;
+ }
+ return o;
+}
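
The reserve/insert split matters for self-referential containers: r_object() registers (or reserves a slot for) the container before reading its children, so a TYPE_REF met inside the container resolves back to the container itself. A hedged round-trip sketch, for illustration only (not part of the diff; assumes a host build linked against Python 3.6):

    #include <Python.h>
    #include <marshal.h>

    int main(void)
    {
        Py_Initialize();

        /* l = []; l.append(l)  -- a self-referential list */
        PyObject *l = PyList_New(0);
        PyList_Append(l, l);

        /* version >= 3 records object references, so the cycle survives */
        PyObject *buf  = PyMarshal_WriteObjectToString(l, 3);
        PyObject *back = PyMarshal_ReadObjectFromString(PyBytes_AS_STRING(buf),
                                                        PyBytes_GET_SIZE(buf));

        /* back[0] is back itself, reconstructed through TYPE_REF */
        printf("cycle restored: %d\n", PyList_GET_ITEM(back, 0) == back);

        Py_DECREF(back); Py_DECREF(buf); Py_DECREF(l);
        Py_Finalize();
        return 0;
    }
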
+
+static PyObject *
+r_object(RFILE *p)
+{
+ /* NULL is a valid return value; it does not necessarily mean that
+ an exception is set. */
+ PyObject *v, *v2;
+ Py_ssize_t idx = 0;
+ long i, n;
+ int type, code = r_byte(p);
+ int flag, is_interned = 0;
+ PyObject *retval = NULL;
+
+ if (code == EOF) {
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where object expected");
+ return NULL;
+ }
+
+ p->depth++;
+
+ if (p->depth > MAX_MARSHAL_STACK_DEPTH) {
+ p->depth--;
+ PyErr_SetString(PyExc_ValueError, "recursion limit exceeded");
+ return NULL;
+ }
+
+ flag = code & FLAG_REF;
+ type = code & ~FLAG_REF;
+
+#define R_REF(O) do{\
+ if (flag) \
+ O = r_ref(O, flag, p);\
+} while (0)
+
+ switch (type) {
+
+ case TYPE_NULL:
+ break;
+
+ case TYPE_NONE:
+ Py_INCREF(Py_None);
+ retval = Py_None;
+ break;
+
+ case TYPE_STOPITER:
+ Py_INCREF(PyExc_StopIteration);
+ retval = PyExc_StopIteration;
+ break;
+
+ case TYPE_ELLIPSIS:
+ Py_INCREF(Py_Ellipsis);
+ retval = Py_Ellipsis;
+ break;
+
+ case TYPE_FALSE:
+ Py_INCREF(Py_False);
+ retval = Py_False;
+ break;
+
+ case TYPE_TRUE:
+ Py_INCREF(Py_True);
+ retval = Py_True;
+ break;
+
+ case TYPE_INT:
+ n = r_long(p);
+ retval = PyErr_Occurred() ? NULL : PyLong_FromLong(n);
+ R_REF(retval);
+ break;
+
+ case TYPE_INT64:
+ retval = r_long64(p);
+ R_REF(retval);
+ break;
+
+ case TYPE_LONG:
+ retval = r_PyLong(p);
+ R_REF(retval);
+ break;
+
+ case TYPE_FLOAT:
+ {
+ char buf[256];
+ const char *ptr;
+ double dx;
+ n = r_byte(p);
+ if (n == EOF) {
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where object expected");
+ break;
+ }
+ ptr = r_string(n, p);
+ if (ptr == NULL)
+ break;
+ memcpy(buf, ptr, n);
+ buf[n] = '\0';
+ dx = PyOS_string_to_double(buf, NULL, NULL);
+ if (dx == -1.0 && PyErr_Occurred())
+ break;
+ retval = PyFloat_FromDouble(dx);
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_BINARY_FLOAT:
+ {
+ const unsigned char *buf;
+ double x;
+ buf = (const unsigned char *) r_string(8, p);
+ if (buf == NULL)
+ break;
+ x = _PyFloat_Unpack8(buf, 1);
+ if (x == -1.0 && PyErr_Occurred())
+ break;
+ retval = PyFloat_FromDouble(x);
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_COMPLEX:
+ {
+ char buf[256];
+ const char *ptr;
+ Py_complex c;
+ n = r_byte(p);
+ if (n == EOF) {
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where object expected");
+ break;
+ }
+ ptr = r_string(n, p);
+ if (ptr == NULL)
+ break;
+ memcpy(buf, ptr, n);
+ buf[n] = '\0';
+ c.real = PyOS_string_to_double(buf, NULL, NULL);
+ if (c.real == -1.0 && PyErr_Occurred())
+ break;
+ n = r_byte(p);
+ if (n == EOF) {
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where object expected");
+ break;
+ }
+ ptr = r_string(n, p);
+ if (ptr == NULL)
+ break;
+ memcpy(buf, ptr, n);
+ buf[n] = '\0';
+ c.imag = PyOS_string_to_double(buf, NULL, NULL);
+ if (c.imag == -1.0 && PyErr_Occurred())
+ break;
+ retval = PyComplex_FromCComplex(c);
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_BINARY_COMPLEX:
+ {
+ const unsigned char *buf;
+ Py_complex c;
+ buf = (const unsigned char *) r_string(8, p);
+ if (buf == NULL)
+ break;
+ c.real = _PyFloat_Unpack8(buf, 1);
+ if (c.real == -1.0 && PyErr_Occurred())
+ break;
+ buf = (const unsigned char *) r_string(8, p);
+ if (buf == NULL)
+ break;
+ c.imag = _PyFloat_Unpack8(buf, 1);
+ if (c.imag == -1.0 && PyErr_Occurred())
+ break;
+ retval = PyComplex_FromCComplex(c);
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_STRING:
+ {
+ const char *ptr;
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (bytes object size out of range)");
+ break;
+ }
+ v = PyBytes_FromStringAndSize((char *)NULL, n);
+ if (v == NULL)
+ break;
+ ptr = r_string(n, p);
+ if (ptr == NULL) {
+ Py_DECREF(v);
+ break;
+ }
+ memcpy(PyBytes_AS_STRING(v), ptr, n);
+ retval = v;
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_ASCII_INTERNED:
+ is_interned = 1;
+ /* fall through */
+ case TYPE_ASCII:
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)");
+ break;
+ }
+ goto _read_ascii;
+
+ case TYPE_SHORT_ASCII_INTERNED:
+ is_interned = 1;
+ /* fall through */
+ case TYPE_SHORT_ASCII:
+ n = r_byte(p);
+ if (n == EOF) {
+ PyErr_SetString(PyExc_EOFError,
+ "EOF read where object expected");
+ break;
+ }
+ _read_ascii:
+ {
+ const char *ptr;
+ ptr = r_string(n, p);
+ if (ptr == NULL)
+ break;
+ v = PyUnicode_FromKindAndData(PyUnicode_1BYTE_KIND, ptr, n);
+ if (v == NULL)
+ break;
+ if (is_interned)
+ PyUnicode_InternInPlace(&v);
+ retval = v;
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_INTERNED:
+ is_interned = 1;
+ /* fall through */
+ case TYPE_UNICODE:
+ {
+ const char *buffer;
+
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)");
+ break;
+ }
+ if (n != 0) {
+ buffer = r_string(n, p);
+ if (buffer == NULL)
+ break;
+ v = PyUnicode_DecodeUTF8(buffer, n, "surrogatepass");
+ }
+ else {
+ v = PyUnicode_New(0, 0);
+ }
+ if (v == NULL)
+ break;
+ if (is_interned)
+ PyUnicode_InternInPlace(&v);
+ retval = v;
+ R_REF(retval);
+ break;
+ }
+
+ case TYPE_SMALL_TUPLE:
+ n = (unsigned char) r_byte(p);
+ if (PyErr_Occurred())
+ break;
+ goto _read_tuple;
+ case TYPE_TUPLE:
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)");
+ break;
+ }
+ _read_tuple:
+ v = PyTuple_New(n);
+ R_REF(v);
+ if (v == NULL)
+ break;
+
+ for (i = 0; i < n; i++) {
+ v2 = r_object(p);
+ if ( v2 == NULL ) {
+ if (!PyErr_Occurred())
+ PyErr_SetString(PyExc_TypeError,
+ "NULL object in marshal data for tuple");
+ Py_DECREF(v);
+ v = NULL;
+ break;
+ }
+ PyTuple_SET_ITEM(v, i, v2);
+ }
+ retval = v;
+ break;
+
+ case TYPE_LIST:
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)");
+ break;
+ }
+ v = PyList_New(n);
+ R_REF(v);
+ if (v == NULL)
+ break;
+ for (i = 0; i < n; i++) {
+ v2 = r_object(p);
+ if ( v2 == NULL ) {
+ if (!PyErr_Occurred())
+ PyErr_SetString(PyExc_TypeError,
+ "NULL object in marshal data for list");
+ Py_DECREF(v);
+ v = NULL;
+ break;
+ }
+ PyList_SET_ITEM(v, i, v2);
+ }
+ retval = v;
+ break;
+
+ case TYPE_DICT:
+ v = PyDict_New();
+ R_REF(v);
+ if (v == NULL)
+ break;
+ for (;;) {
+ PyObject *key, *val;
+ key = r_object(p);
+ if (key == NULL)
+ break;
+ val = r_object(p);
+ if (val == NULL) {
+ Py_DECREF(key);
+ break;
+ }
+ if (PyDict_SetItem(v, key, val) < 0) {
+ Py_DECREF(key);
+ Py_DECREF(val);
+ break;
+ }
+ Py_DECREF(key);
+ Py_DECREF(val);
+ }
+ if (PyErr_Occurred()) {
+ Py_DECREF(v);
+ v = NULL;
+ }
+ retval = v;
+ break;
+
+ case TYPE_SET:
+ case TYPE_FROZENSET:
+ n = r_long(p);
+ if (PyErr_Occurred())
+ break;
+ if (n < 0 || n > SIZE32_MAX) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)");
+ break;
+ }
+
+ if (n == 0 && type == TYPE_FROZENSET) {
+ /* call frozenset() to get the empty frozenset singleton */
+ v = PyObject_CallFunction((PyObject*)&PyFrozenSet_Type, NULL);
+ if (v == NULL)
+ break;
+ R_REF(v);
+ retval = v;
+ }
+ else {
+ v = (type == TYPE_SET) ? PySet_New(NULL) : PyFrozenSet_New(NULL);
+ if (type == TYPE_SET) {
+ R_REF(v);
+ } else {
+ /* must use delayed registration of frozensets because they must
+ * be initialized with a refcount of 1
+ */
+ idx = r_ref_reserve(flag, p);
+ if (idx < 0)
+ Py_CLEAR(v); /* signal error */
+ }
+ if (v == NULL)
+ break;
+
+ for (i = 0; i < n; i++) {
+ v2 = r_object(p);
+ if ( v2 == NULL ) {
+ if (!PyErr_Occurred())
+ PyErr_SetString(PyExc_TypeError,
+ "NULL object in marshal data for set");
+ Py_DECREF(v);
+ v = NULL;
+ break;
+ }
+ if (PySet_Add(v, v2) == -1) {
+ Py_DECREF(v);
+ Py_DECREF(v2);
+ v = NULL;
+ break;
+ }
+ Py_DECREF(v2);
+ }
+ if (type != TYPE_SET)
+ v = r_ref_insert(v, idx, flag, p);
+ retval = v;
+ }
+ break;
+
+ case TYPE_CODE:
+ {
+ int argcount;
+ int kwonlyargcount;
+ int nlocals;
+ int stacksize;
+ int flags;
+ PyObject *code = NULL;
+ PyObject *consts = NULL;
+ PyObject *names = NULL;
+ PyObject *varnames = NULL;
+ PyObject *freevars = NULL;
+ PyObject *cellvars = NULL;
+ PyObject *filename = NULL;
+ PyObject *name = NULL;
+ int firstlineno;
+ PyObject *lnotab = NULL;
+
+ idx = r_ref_reserve(flag, p);
+ if (idx < 0)
+ break;
+
+ v = NULL;
+
+ /* XXX ignore long->int overflows for now */
+ argcount = (int)r_long(p);
+ if (PyErr_Occurred())
+ goto code_error;
+ kwonlyargcount = (int)r_long(p);
+ if (PyErr_Occurred())
+ goto code_error;
+ nlocals = (int)r_long(p);
+ if (PyErr_Occurred())
+ goto code_error;
+ stacksize = (int)r_long(p);
+ if (PyErr_Occurred())
+ goto code_error;
+ flags = (int)r_long(p);
+ if (PyErr_Occurred())
+ goto code_error;
+ code = r_object(p);
+ if (code == NULL)
+ goto code_error;
+ consts = r_object(p);
+ if (consts == NULL)
+ goto code_error;
+ names = r_object(p);
+ if (names == NULL)
+ goto code_error;
+ varnames = r_object(p);
+ if (varnames == NULL)
+ goto code_error;
+ freevars = r_object(p);
+ if (freevars == NULL)
+ goto code_error;
+ cellvars = r_object(p);
+ if (cellvars == NULL)
+ goto code_error;
+ filename = r_object(p);
+ if (filename == NULL)
+ goto code_error;
+ if (PyUnicode_CheckExact(filename)) {
+ if (p->current_filename != NULL) {
+ if (!PyUnicode_Compare(filename, p->current_filename)) {
+ Py_DECREF(filename);
+ Py_INCREF(p->current_filename);
+ filename = p->current_filename;
+ }
+ }
+ else {
+ p->current_filename = filename;
+ }
+ }
+ name = r_object(p);
+ if (name == NULL)
+ goto code_error;
+ firstlineno = (int)r_long(p);
+ if (firstlineno == -1 && PyErr_Occurred())
+ break;
+ lnotab = r_object(p);
+ if (lnotab == NULL)
+ goto code_error;
+
+ v = (PyObject *) PyCode_New(
+ argcount, kwonlyargcount,
+ nlocals, stacksize, flags,
+ code, consts, names, varnames,
+ freevars, cellvars, filename, name,
+ firstlineno, lnotab);
+ v = r_ref_insert(v, idx, flag, p);
+
+ code_error:
+ Py_XDECREF(code);
+ Py_XDECREF(consts);
+ Py_XDECREF(names);
+ Py_XDECREF(varnames);
+ Py_XDECREF(freevars);
+ Py_XDECREF(cellvars);
+ Py_XDECREF(filename);
+ Py_XDECREF(name);
+ Py_XDECREF(lnotab);
+ }
+ retval = v;
+ break;
+
+ case TYPE_REF:
+ n = r_long(p);
+ if (n < 0 || n >= PyList_GET_SIZE(p->refs)) {
+ if (n == -1 && PyErr_Occurred())
+ break;
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (invalid reference)");
+ break;
+ }
+ v = PyList_GET_ITEM(p->refs, n);
+ if (v == Py_None) {
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (invalid reference)");
+ break;
+ }
+ Py_INCREF(v);
+ retval = v;
+ break;
+
+ default:
+ /* Bogus data got written, which isn't ideal.
+ This will let you keep working and recover. */
+ PyErr_SetString(PyExc_ValueError, "bad marshal data (unknown type code)");
+ break;
+
+ }
+ p->depth--;
+ return retval;
+}
+
+static PyObject *
+read_object(RFILE *p)
+{
+ PyObject *v;
+ if (PyErr_Occurred()) {
+ fprintf(stderr, "XXX readobject called with exception set\n");
+ return NULL;
+ }
+ v = r_object(p);
+ if (v == NULL && !PyErr_Occurred())
+ PyErr_SetString(PyExc_TypeError, "NULL object in marshal data for object");
+ return v;
+}
+
+int
+PyMarshal_ReadShortFromFile(FILE *fp)
+{
+ RFILE rf;
+ int res;
+ assert(fp);
+ rf.readable = NULL;
+ rf.fp = fp;
+ rf.current_filename = NULL;
+ rf.end = rf.ptr = NULL;
+ rf.buf = NULL;
+ res = r_short(&rf);
+ if (rf.buf != NULL)
+ PyMem_FREE(rf.buf);
+ return res;
+}
+
+long
+PyMarshal_ReadLongFromFile(FILE *fp)
+{
+ RFILE rf;
+ long res;
+ rf.fp = fp;
+ rf.readable = NULL;
+ rf.current_filename = NULL;
+ rf.ptr = rf.end = NULL;
+ rf.buf = NULL;
+ res = r_long(&rf);
+ if (rf.buf != NULL)
+ PyMem_FREE(rf.buf);
+ return res;
+}
+
+/* Return size of file in bytes; < 0 if unknown or INT_MAX if too big */
+static off_t
+getfilesize(FILE *fp)
+{
+ struct _Py_stat_struct st;
+ if (_Py_fstat_noraise(fileno(fp), &st) != 0)
+ return -1;
+#if SIZEOF_OFF_T == 4
+ else if (st.st_size >= INT_MAX)
+ return (off_t)INT_MAX;
+#endif
+ else
+ return (off_t)st.st_size;
+}
+
+/* If we can get the size of the file up-front, and it's reasonably small,
+ * read it in one gulp and delegate to ...FromString() instead. Much quicker
+ * than reading a byte at a time from file; speeds .pyc imports.
+ * CAUTION: since this may read the entire remainder of the file, don't
+ * call it unless you know you're done with the file.
+ */
+PyObject *
+PyMarshal_ReadLastObjectFromFile(FILE *fp)
+{
+/* REASONABLE_FILE_LIMIT is by definition something big enough for Tkinter.pyc. */
+#define REASONABLE_FILE_LIMIT (1L << 18)
+ off_t filesize;
+ filesize = getfilesize(fp);
+ if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) {
+ char* pBuf = (char *)PyMem_MALLOC(filesize);
+ if (pBuf != NULL) {
+ size_t n = fread(pBuf, 1, (size_t)filesize, fp);
+ PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n);
+ PyMem_FREE(pBuf);
+ return v;
+ }
+
+ }
+ /* We don't have fstat, or we do but the file is larger than
+ * REASONABLE_FILE_LIMIT or malloc failed -- read a byte at a time.
+ */
+ return PyMarshal_ReadObjectFromFile(fp);
+
+#undef REASONABLE_FILE_LIMIT
+}
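
This is the entry point a .pyc-style loader uses once the header has been consumed. A hedged sketch of that pattern, for illustration only (not part of the diff; assumes the CPython 3.6 layout of a 12-byte .pyc header -- magic, mtime, source size -- and 'example.pyc' is a hypothetical path):

    #include <Python.h>
    #include <marshal.h>
    #include <stdio.h>

    int main(void)
    {
        Py_Initialize();

        FILE *fp = fopen("example.pyc", "rb");            /* hypothetical file */
        if (fp != NULL) {
            long magic = PyMarshal_ReadLongFromFile(fp);  /* 4-byte magic       */
            (void)PyMarshal_ReadLongFromFile(fp);         /* 4-byte mtime       */
            (void)PyMarshal_ReadLongFromFile(fp);         /* 4-byte source size */

            /* the remainder of the file is one marshalled code object */
            PyObject *code = PyMarshal_ReadLastObjectFromFile(fp);
            fclose(fp);

            printf("magic=%ld, got code object: %s\n",
                   magic, code && PyCode_Check(code) ? "yes" : "no");
            Py_XDECREF(code);
        }

        Py_Finalize();
        return 0;
    }
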
+
+PyObject *
+PyMarshal_ReadObjectFromFile(FILE *fp)
+{
+ RFILE rf;
+ PyObject *result;
+ rf.fp = fp;
+ rf.readable = NULL;
+ rf.current_filename = NULL;
+ rf.depth = 0;
+ rf.ptr = rf.end = NULL;
+ rf.buf = NULL;
+ rf.refs = PyList_New(0);
+ if (rf.refs == NULL)
+ return NULL;
+ result = r_object(&rf);
+ Py_DECREF(rf.refs);
+ if (rf.buf != NULL)
+ PyMem_FREE(rf.buf);
+ return result;
+}
+
+PyObject *
+PyMarshal_ReadObjectFromString(const char *str, Py_ssize_t len)
+{
+ RFILE rf;
+ PyObject *result;
+ rf.fp = NULL;
+ rf.readable = NULL;
+ rf.current_filename = NULL;
+ rf.ptr = (char *)str;
+ rf.end = (char *)str + len;
+ rf.buf = NULL;
+ rf.depth = 0;
+ rf.refs = PyList_New(0);
+ if (rf.refs == NULL)
+ return NULL;
+ result = r_object(&rf);
+ Py_DECREF(rf.refs);
+ if (rf.buf != NULL)
+ PyMem_FREE(rf.buf);
+ return result;
+}
+
+PyObject *
+PyMarshal_WriteObjectToString(PyObject *x, int version)
+{
+ WFILE wf;
+
+ memset(&wf, 0, sizeof(wf));
+ wf.str = PyBytes_FromStringAndSize((char *)NULL, 50);
+ if (wf.str == NULL)
+ return NULL;
+ wf.ptr = wf.buf = PyBytes_AS_STRING((PyBytesObject *)wf.str);
+ wf.end = wf.ptr + PyBytes_Size(wf.str);
+ wf.error = WFERR_OK;
+ wf.version = version;
+ if (w_init_refs(&wf, version)) {
+ Py_DECREF(wf.str);
+ return NULL;
+ }
+ w_object(x, &wf);
+ w_clear_refs(&wf);
+ if (wf.str != NULL) {
+ char *base = PyBytes_AS_STRING((PyBytesObject *)wf.str);
+ if (wf.ptr - base > PY_SSIZE_T_MAX) {
+ Py_DECREF(wf.str);
+ PyErr_SetString(PyExc_OverflowError,
+ "too much marshal data for a bytes object");
+ return NULL;
+ }
+ if (_PyBytes_Resize(&wf.str, (Py_ssize_t)(wf.ptr - base)) < 0)
+ return NULL;
+ }
+ if (wf.error != WFERR_OK) {
+ Py_XDECREF(wf.str);
+ if (wf.error == WFERR_NOMEMORY)
+ PyErr_NoMemory();
+ else
+ PyErr_SetString(PyExc_ValueError,
+ (wf.error==WFERR_UNMARSHALLABLE)?"unmarshallable object"
+ :"object too deeply nested to marshal");
+ return NULL;
+ }
+ return wf.str;
+}
+
+/* And an interface for Python programs... */
+
+static PyObject *
+marshal_dump(PyObject *self, PyObject *args)
+{
+ /* XXX Quick hack -- need to do this differently */
+ PyObject *x;
+ PyObject *f;
+ int version = Py_MARSHAL_VERSION;
+ PyObject *s;
+ PyObject *res;
+ _Py_IDENTIFIER(write);
+
+ if (!PyArg_ParseTuple(args, "OO|i:dump", &x, &f, &version))
+ return NULL;
+ s = PyMarshal_WriteObjectToString(x, version);
+ if (s == NULL)
+ return NULL;
+ res = _PyObject_CallMethodId(f, &PyId_write, "O", s);
+ Py_DECREF(s);
+ return res;
+}
+
+PyDoc_STRVAR(dump_doc,
+"dump(value, file[, version])\n\
+\n\
+Write the value on the open file. The value must be a supported type.\n\
+The file must be a writeable binary file.\n\
+\n\
+If the value has (or contains an object that has) an unsupported type, a\n\
+ValueError exception is raised - but garbage data will also be written\n\
+to the file. The object will not be properly read back by load().\n\
+\n\
+The version argument indicates the data format that dump should use.");
+
+static PyObject *
+marshal_load(PyObject *self, PyObject *f)
+{
+ PyObject *data, *result;
+ _Py_IDENTIFIER(read);
+ RFILE rf;
+
+ /*
+ * Make a call to the read method, but read zero bytes.
+ * This is to ensure that the object passed in at least
+ * has a read method which returns bytes.
+ * This can be removed if we guarantee good error handling
+ * for r_string()
+ */
+ data = _PyObject_CallMethodId(f, &PyId_read, "i", 0);
+ if (data == NULL)
+ return NULL;
+ if (!PyBytes_Check(data)) {
+ PyErr_Format(PyExc_TypeError,
+ "f.read() returned not bytes but %.100s",
+ data->ob_type->tp_name);
+ result = NULL;
+ }
+ else {
+ rf.depth = 0;
+ rf.fp = NULL;
+ rf.readable = f;
+ rf.current_filename = NULL;
+ rf.ptr = rf.end = NULL;
+ rf.buf = NULL;
+ if ((rf.refs = PyList_New(0)) != NULL) {
+ result = read_object(&rf);
+ Py_DECREF(rf.refs);
+ if (rf.buf != NULL)
+ PyMem_FREE(rf.buf);
+ } else
+ result = NULL;
+ }
+ Py_DECREF(data);
+ return result;
+}
+
+PyDoc_STRVAR(load_doc,
+"load(file)\n\
+\n\
+Read one value from the open file and return it. If no valid value is\n\
+read (e.g. because the data has a different Python version's\n\
+incompatible marshal format), raise EOFError, ValueError or TypeError.\n\
+The file must be a readable binary file.\n\
+\n\
+Note: If an object containing an unsupported type was marshalled with\n\
+dump(), load() will substitute None for the unmarshallable type.");
+
+
+static PyObject *
+marshal_dumps(PyObject *self, PyObject *args)
+{
+ PyObject *x;
+ int version = Py_MARSHAL_VERSION;
+ if (!PyArg_ParseTuple(args, "O|i:dumps", &x, &version))
+ return NULL;
+ return PyMarshal_WriteObjectToString(x, version);
+}
+
+PyDoc_STRVAR(dumps_doc,
+"dumps(value[, version])\n\
+\n\
+Return the bytes object that would be written to a file by dump(value, file).\n\
+The value must be a supported type. Raise a ValueError exception if\n\
+value has (or contains an object that has) an unsupported type.\n\
+\n\
+The version argument indicates the data format that dumps should use.");
+
+
+static PyObject *
+marshal_loads(PyObject *self, PyObject *args)
+{
+ RFILE rf;
+ Py_buffer p;
+ char *s;
+ Py_ssize_t n;
+ PyObject* result;
+ if (!PyArg_ParseTuple(args, "y*:loads", &p))
+ return NULL;
+ s = p.buf;
+ n = p.len;
+ rf.fp = NULL;
+ rf.readable = NULL;
+ rf.current_filename = NULL;
+ rf.ptr = s;
+ rf.end = s + n;
+ rf.depth = 0;
+ if ((rf.refs = PyList_New(0)) == NULL)
+ return NULL;
+ result = read_object(&rf);
+ PyBuffer_Release(&p);
+ Py_DECREF(rf.refs);
+ return result;
+}
+
+PyDoc_STRVAR(loads_doc,
+"loads(bytes)\n\
+\n\
+Convert the bytes-like object to a value. If no valid value is found,\n\
+raise EOFError, ValueError or TypeError. Extra bytes in the input are\n\
+ignored.");
+
+static PyMethodDef marshal_methods[] = {
+ {"dump", marshal_dump, METH_VARARGS, dump_doc},
+ {"load", marshal_load, METH_O, load_doc},
+ {"dumps", marshal_dumps, METH_VARARGS, dumps_doc},
+ {"loads", marshal_loads, METH_VARARGS, loads_doc},
+ {NULL, NULL} /* sentinel */
+};
+
+
+PyDoc_STRVAR(module_doc,
+"This module contains functions that can read and write Python values in\n\
+a binary format. The format is specific to Python, but independent of\n\
+machine architecture issues.\n\
+\n\
+Not all Python object types are supported; in general, only objects\n\
+whose value is independent from a particular invocation of Python can be\n\
+written and read by this module. The following types are supported:\n\
+None, integers, floating point numbers, strings, bytes, bytearrays,\n\
+tuples, lists, sets, dictionaries, and code objects, where it\n\
+should be understood that tuples, lists and dictionaries are only\n\
+supported as long as the values contained therein are themselves\n\
+supported; and recursive lists and dictionaries should not be written\n\
+(they will cause infinite loops).\n\
+\n\
+Variables:\n\
+\n\
+version -- indicates the format that the module uses. Version 0 is the\n\
+ historical format, version 1 shares interned strings and version 2\n\
+ uses a binary format for floating point numbers.\n\
+ Version 3 shares common object references (New in version 3.4).\n\
+\n\
+Functions:\n\
+\n\
+dump() -- write value to a file\n\
+load() -- read value from a file\n\
+dumps() -- marshal value as a bytes object\n\
+loads() -- read value from a bytes-like object");
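
A hedged sketch of the dumps()/loads() round trip the docstrings above describe, driven from C so it stays in the same language as the rest of this file (illustration only, not part of the diff; assumes a host build linked against Python 3.6):

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();

        /* dumps() and loads() round trip, as described in module_doc */
        PyRun_SimpleString(
            "import marshal\n"
            "blob = marshal.dumps({'answer': 42, 'pi': 3.14159})\n"
            "assert marshal.loads(blob) == {'answer': 42, 'pi': 3.14159}\n"
            "print('marshal round trip OK,', len(blob), 'bytes')\n");

        Py_Finalize();
        return 0;
    }
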
+
+
+
+static struct PyModuleDef marshalmodule = {
+ PyModuleDef_HEAD_INIT,
+ "marshal",
+ module_doc,
+ 0,
+ marshal_methods,
+ NULL,
+ NULL,
+ NULL,
+ NULL
+};
+
+PyMODINIT_FUNC
+PyMarshal_Init(void)
+{
+ PyObject *mod = PyModule_Create(&marshalmodule);
+ if (mod == NULL)
+ return NULL;
+ PyModule_AddIntConstant(mod, "version", Py_MARSHAL_VERSION);
+ return mod;
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
new file mode 100644
index 00000000..32ae0e46
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
@@ -0,0 +1,437 @@
+/** @file
+ Hash utility functions.
+
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+/* Set of hash utility functions to help maintain the invariant that
+ if a==b then hash(a)==hash(b)
+
+ All the utility functions (_Py_Hash*()) return "-1" to signify an error.
+*/
+#include "Python.h"
+
+#ifdef __APPLE__
+# include <libkern/OSByteOrder.h>
+#elif defined(HAVE_LE64TOH) && defined(HAVE_ENDIAN_H)
+# include <endian.h>
+#elif defined(HAVE_LE64TOH) && defined(HAVE_SYS_ENDIAN_H)
+# include <sys/endian.h>
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+_Py_HashSecret_t _Py_HashSecret;
+
+#if Py_HASH_ALGORITHM == Py_HASH_EXTERNAL
+extern PyHash_FuncDef PyHash_Func;
+#else
+static PyHash_FuncDef PyHash_Func;
+#endif
+
+/* Count _Py_HashBytes() calls */
+#ifdef Py_HASH_STATS
+#define Py_HASH_STATS_MAX 32
+static Py_ssize_t hashstats[Py_HASH_STATS_MAX + 1] = {0};
+#endif
+
+/* For numeric types, the hash of a number x is based on the reduction
+ of x modulo the prime P = 2**_PyHASH_BITS - 1. It's designed so that
+ hash(x) == hash(y) whenever x and y are numerically equal, even if
+ x and y have different types.
+
+ A quick summary of the hashing strategy:
+
+ (1) First define the 'reduction of x modulo P' for any rational
+ number x; this is a standard extension of the usual notion of
+ reduction modulo P for integers. If x == p/q (written in lowest
+ terms), the reduction is interpreted as the reduction of p times
+ the inverse of the reduction of q, all modulo P; if q is exactly
+ divisible by P then define the reduction to be infinity. So we've
+ got a well-defined map
+
+ reduce : { rational numbers } -> { 0, 1, 2, ..., P-1, infinity }.
+
+ (2) Now for a rational number x, define hash(x) by:
+
+ reduce(x) if x >= 0
+ -reduce(-x) if x < 0
+
+ If the result of the reduction is infinity (this is impossible for
+ integers, floats and Decimals) then use the predefined hash value
+ _PyHASH_INF for x >= 0, or -_PyHASH_INF for x < 0, instead.
+ _PyHASH_INF, -_PyHASH_INF and _PyHASH_NAN are also used for the
+ hashes of float and Decimal infinities and nans.
+
+ A selling point for the above strategy is that it makes it possible
+ to compute hashes of decimal and binary floating-point numbers
+ efficiently, even if the exponent of the binary or decimal number
+ is large. The key point is that
+
+ reduce(x * y) == reduce(x) * reduce(y) (modulo _PyHASH_MODULUS)
+
+ provided that {reduce(x), reduce(y)} != {0, infinity}. The reduction of a
+ binary or decimal float is never infinity, since the denominator is a power
+ of 2 (for binary) or a divisor of a power of 10 (for decimal). So we have,
+ for nonnegative x,
+
+ reduce(x * 2**e) == reduce(x) * reduce(2**e) % _PyHASH_MODULUS
+
+ reduce(x * 10**e) == reduce(x) * reduce(10**e) % _PyHASH_MODULUS
+
+ and reduce(10**e) can be computed efficiently by the usual modular
+ exponentiation algorithm. For reduce(2**e) it's even better: since
+ P is of the form 2**n-1, reduce(2**e) is 2**(e mod n), and multiplication
+ by 2**(e mod n) modulo 2**n-1 just amounts to a rotation of bits.
+
+ */
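
The last point -- that multiplying by 2**e modulo P = 2**_PyHASH_BITS - 1 is a bit rotation -- is exactly what _Py_HashDouble() below relies on. A standalone check, for illustration only (not part of the diff; assumes a 64-bit build, where _PyHASH_BITS is 61, and a compiler that provides __uint128_t for the naive comparison):

    #include <stdint.h>
    #include <stdio.h>

    #define BITS 61
    #define MOD  (((uint64_t)1 << BITS) - 1)        /* P = 2**61 - 1 */

    /* rotate a 61-bit value left by e, i.e. multiply by 2**e modulo P */
    static uint64_t rot(uint64_t x, unsigned e)
    {
        e %= BITS;
        return ((x << e) & MOD) | (x >> (BITS - e));
    }

    int main(void)
    {
        uint64_t x = MOD - 3;                       /* any value in [0, P) */
        unsigned e = 7;
        uint64_t naive = (uint64_t)(((__uint128_t)x << e) % MOD);
        printf("rot=%llu naive=%llu\n",             /* the two values agree */
               (unsigned long long)rot(x, e),
               (unsigned long long)naive);
        return 0;
    }
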
+
+Py_hash_t
+_Py_HashDouble(double v)
+{
+ int e, sign;
+ double m;
+ Py_uhash_t x, y;
+
+ if (!Py_IS_FINITE(v)) {
+ if (Py_IS_INFINITY(v))
+ return v > 0 ? _PyHASH_INF : -_PyHASH_INF;
+ else
+ return _PyHASH_NAN;
+ }
+
+ m = frexp(v, &e);
+
+ sign = 1;
+ if (m < 0) {
+ sign = -1;
+ m = -m;
+ }
+
+ /* process 28 bits at a time; this should work well both for binary
+ and hexadecimal floating point. */
+ x = 0;
+ while (m) {
+ x = ((x << 28) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - 28);
+ m *= 268435456.0; /* 2**28 */
+ e -= 28;
+ y = (Py_uhash_t)m; /* pull out integer part */
+ m -= y;
+ x += y;
+ if (x >= _PyHASH_MODULUS)
+ x -= _PyHASH_MODULUS;
+ }
+
+ /* adjust for the exponent; first reduce it modulo _PyHASH_BITS */
+ e = e >= 0 ? e % _PyHASH_BITS : _PyHASH_BITS-1-((-1-e) % _PyHASH_BITS);
+ x = ((x << e) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - e);
+
+ x = x * sign;
+ if (x == (Py_uhash_t)-1)
+ x = (Py_uhash_t)-2;
+ return (Py_hash_t)x;
+}
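
The result is the cross-type invariant described at the top of this file: numerically equal values hash equally regardless of type. A hedged check through the public C API, for illustration only (not part of the diff; assumes a host build linked against Python 3.6, and 7 is an arbitrary value):

    #include <Python.h>

    int main(void)
    {
        Py_Initialize();

        PyObject *i = PyLong_FromLong(7);
        PyObject *f = PyFloat_FromDouble(7.0);

        /* both reduce 7 modulo 2**_PyHASH_BITS - 1, so the hashes match */
        printf("hash(7)=%ld  hash(7.0)=%ld\n",
               (long)PyObject_Hash(i), (long)PyObject_Hash(f));

        Py_DECREF(i); Py_DECREF(f);
        Py_Finalize();
        return 0;
    }
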
+
+Py_hash_t
+_Py_HashPointer(void *p)
+{
+ Py_hash_t x;
+ size_t y = (size_t)p;
+ /* bottom 3 or 4 bits are likely to be 0; rotate y by 4 to avoid
+ excessive hash collisions for dicts and sets */
+ y = (y >> 4) | (y << (8 * SIZEOF_VOID_P - 4));
+ x = (Py_hash_t)y;
+ if (x == -1)
+ x = -2;
+ return x;
+}
+
+Py_hash_t
+_Py_HashBytes(const void *src, Py_ssize_t len)
+{
+ Py_hash_t x;
+ /*
+ We make the hash of the empty string be 0, rather than using
+ (prefix ^ suffix), since this slightly obfuscates the hash secret
+ */
+ if (len == 0) {
+ return 0;
+ }
+
+#ifdef Py_HASH_STATS
+ hashstats[(len <= Py_HASH_STATS_MAX) ? len : 0]++;
+#endif
+
+#if Py_HASH_CUTOFF > 0
+ if (len < Py_HASH_CUTOFF) {
+ /* Optimize hashing of very small strings with inline DJBX33A. */
+ Py_uhash_t hash;
+ const unsigned char *p = src;
+ hash = 5381; /* DJBX33A starts with 5381 */
+
+ switch(len) {
+ /* ((hash << 5) + hash) + *p == hash * 33 + *p */
+ case 7: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 6: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 5: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 4: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 3: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 2: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
+ case 1: hash = ((hash << 5) + hash) + *p++; break;
+ default:
+ assert(0);
+ }
+ hash ^= len;
+ hash ^= (Py_uhash_t) _Py_HashSecret.djbx33a.suffix;
+ x = (Py_hash_t)hash;
+ }
+ else
+#endif /* Py_HASH_CUTOFF */
+ x = PyHash_Func.hash(src, len);
+
+ if (x == -1)
+ return -2;
+ return x;
+}
+
+void
+_PyHash_Fini(void)
+{
+#ifdef Py_HASH_STATS
+ int i;
+ Py_ssize_t total = 0;
+ char *fmt = "%2i %8" PY_FORMAT_SIZE_T "d %8" PY_FORMAT_SIZE_T "d\n";
+
+ fprintf(stderr, "len calls total\n");
+ for (i = 1; i <= Py_HASH_STATS_MAX; i++) {
+ total += hashstats[i];
+ fprintf(stderr, fmt, i, hashstats[i], total);
+ }
+ total += hashstats[0];
+ fprintf(stderr, "> %8" PY_FORMAT_SIZE_T "d %8" PY_FORMAT_SIZE_T "d\n",
+ hashstats[0], total);
+#endif
+}
+
+PyHash_FuncDef *
+PyHash_GetFuncDef(void)
+{
+ return &PyHash_Func;
+}
+
+/* Optimized memcpy() for Windows */
+#ifdef _MSC_VER
+# if SIZEOF_PY_UHASH_T == 4
+# define PY_UHASH_CPY(dst, src) do { \
+ dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3]; \
+ } while(0)
+# elif SIZEOF_PY_UHASH_T == 8
+# define PY_UHASH_CPY(dst, src) do { \
+ dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3]; \
+ dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7]; \
+ } while(0)
+# else
+# error SIZEOF_PY_UHASH_T must be 4 or 8
+# endif /* SIZEOF_PY_UHASH_T */
+#else /* not Windows */
+# define PY_UHASH_CPY(dst, src) memcpy(dst, src, SIZEOF_PY_UHASH_T)
+#endif /* _MSC_VER */
+
+
+#if Py_HASH_ALGORITHM == Py_HASH_FNV
+/* **************************************************************************
+ * Modified Fowler-Noll-Vo (FNV) hash function
+ */
+static Py_hash_t
+fnv(const void *src, Py_ssize_t len)
+{
+ const unsigned char *p = src;
+ Py_uhash_t x;
+ Py_ssize_t remainder, blocks;
+ union {
+ Py_uhash_t value;
+ unsigned char bytes[SIZEOF_PY_UHASH_T];
+ } block;
+
+#ifdef Py_DEBUG
+ assert(_Py_HashSecret_Initialized);
+#endif
+ remainder = len % SIZEOF_PY_UHASH_T;
+ if (remainder == 0) {
+ /* Process at least one block byte by byte to reduce hash collisions
+ * for strings with common prefixes. */
+ remainder = SIZEOF_PY_UHASH_T;
+ }
+ blocks = (len - remainder) / SIZEOF_PY_UHASH_T;
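+    /* Editorial example: on a 64-bit build (SIZEOF_PY_UHASH_T == 8) a
+       16-byte input gives remainder == 8 and blocks == 1, so the final
+       8 bytes are folded in one byte at a time rather than as a block. */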
+
+ x = (Py_uhash_t) _Py_HashSecret.fnv.prefix;
+ x ^= (Py_uhash_t) *p << 7;
+ while (blocks--) {
+ PY_UHASH_CPY(block.bytes, p);
+ x = (_PyHASH_MULTIPLIER * x) ^ block.value;
+ p += SIZEOF_PY_UHASH_T;
+ }
+ /* add remainder */
+ for (; remainder > 0; remainder--)
+ x = (_PyHASH_MULTIPLIER * x) ^ (Py_uhash_t) *p++;
+ x ^= (Py_uhash_t) len;
+ x ^= (Py_uhash_t) _Py_HashSecret.fnv.suffix;
+ if (x == (Py_uhash_t) -1) {
+ x = (Py_uhash_t) -2;
+ }
+ return x;
+}
+
+static PyHash_FuncDef PyHash_Func = {fnv, "fnv", 8 * SIZEOF_PY_HASH_T,
+ 16 * SIZEOF_PY_HASH_T};
+
+#endif /* Py_HASH_ALGORITHM == Py_HASH_FNV */
+
+
+#if Py_HASH_ALGORITHM == Py_HASH_SIPHASH24
+/* **************************************************************************
+ <MIT License>
+ Copyright (c) 2013 Marek Majkowski <marek@popcount.org>
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
+ </MIT License>
+
+ Original location:
+ https://github.com/majek/csiphash/
+
+ Solution inspired by code from:
+ Samuel Neves (supercop/crypto_auth/siphash24/little)
+ djb (supercop/crypto_auth/siphash24/little2)
+ Jean-Philippe Aumasson (https://131002.net/siphash/siphash24.c)
+
+ Modified for Python by Christian Heimes:
+ - C89 / MSVC compatibility
+ - _rotl64() on Windows
+ - letoh64() fallback
+*/
+
+/* byte swap little endian to host endian
+ * Endian conversion not only ensures that the hash function returns the same
+ * value on all platforms. It is also required for a good dispersion of
+ * the hash values' least significant bits.
+ */
+#if PY_LITTLE_ENDIAN
+# define _le64toh(x) ((uint64_t)(x))
+#elif defined(__APPLE__)
+# define _le64toh(x) OSSwapLittleToHostInt64(x)
+#elif defined(HAVE_LETOH64)
+# define _le64toh(x) le64toh(x)
+#else
+# define _le64toh(x) (((uint64_t)(x) << 56) | \
+ (((uint64_t)(x) << 40) & 0xff000000000000ULL) | \
+ (((uint64_t)(x) << 24) & 0xff0000000000ULL) | \
+ (((uint64_t)(x) << 8) & 0xff00000000ULL) | \
+ (((uint64_t)(x) >> 8) & 0xff000000ULL) | \
+ (((uint64_t)(x) >> 24) & 0xff0000ULL) | \
+ (((uint64_t)(x) >> 40) & 0xff00ULL) | \
+ ((uint64_t)(x) >> 56))
+#endif
+
+
+#if defined(_MSC_VER) && !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
+# define ROTATE(x, b) _rotl64(x, b)
+#else
+# define ROTATE(x, b) (uint64_t)( ((x) << (b)) | ( (x) >> (64 - (b))) )
+#endif
+
+#define HALF_ROUND(a,b,c,d,s,t) \
+ a += b; c += d; \
+ b = ROTATE(b, s) ^ a; \
+ d = ROTATE(d, t) ^ c; \
+ a = ROTATE(a, 32);
+
+#define DOUBLE_ROUND(v0,v1,v2,v3) \
+ HALF_ROUND(v0,v1,v2,v3,13,16); \
+ HALF_ROUND(v2,v1,v0,v3,17,21); \
+ HALF_ROUND(v0,v1,v2,v3,13,16); \
+ HALF_ROUND(v2,v1,v0,v3,17,21);
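+
+/* Editorial note: DOUBLE_ROUND is two SipRounds, so siphash24() below
+   implements SipHash-2-4 -- one DOUBLE_ROUND (c == 2 rounds) per 64-bit
+   message block and two DOUBLE_ROUNDs (d == 4 rounds) in the finalization
+   that follows v2 ^= 0xff. */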
+
+
+static Py_hash_t
+siphash24(const void *src, Py_ssize_t src_sz) {
+ uint64_t k0 = _le64toh(_Py_HashSecret.siphash.k0);
+ uint64_t k1 = _le64toh(_Py_HashSecret.siphash.k1);
+ uint64_t b = (uint64_t)src_sz << 56;
+ const uint8_t *in = (uint8_t*)src;
+
+ uint64_t v0 = k0 ^ 0x736f6d6570736575ULL;
+ uint64_t v1 = k1 ^ 0x646f72616e646f6dULL;
+ uint64_t v2 = k0 ^ 0x6c7967656e657261ULL;
+ uint64_t v3 = k1 ^ 0x7465646279746573ULL;
+
+ uint64_t t;
+ uint8_t *pt;
+
+ while (src_sz >= 8) {
+ uint64_t mi;
+ memcpy(&mi, in, sizeof(mi));
+ mi = _le64toh(mi);
+ in += sizeof(mi);
+ src_sz -= sizeof(mi);
+ v3 ^= mi;
+ DOUBLE_ROUND(v0,v1,v2,v3);
+ v0 ^= mi;
+ }
+
+ t = 0;
+ pt = (uint8_t *)&t;
+ switch (src_sz) {
+ case 7: pt[6] = in[6]; /* fall through */
+ case 6: pt[5] = in[5]; /* fall through */
+ case 5: pt[4] = in[4]; /* fall through */
+ case 4: memcpy(pt, in, sizeof(uint32_t)); break;
+ case 3: pt[2] = in[2]; /* fall through */
+ case 2: pt[1] = in[1]; /* fall through */
+ case 1: pt[0] = in[0]; /* fall through */
+ }
+ b |= _le64toh(t);
+
+ v3 ^= b;
+ DOUBLE_ROUND(v0,v1,v2,v3);
+ v0 ^= b;
+ v2 ^= 0xff;
+ DOUBLE_ROUND(v0,v1,v2,v3);
+ DOUBLE_ROUND(v0,v1,v2,v3);
+
+ /* modified */
+ t = (v0 ^ v1) ^ (v2 ^ v3);
+ return (Py_hash_t)t;
+}
+
+static PyHash_FuncDef PyHash_Func = {siphash24, "siphash24", 64, 128};
+
+#endif /* Py_HASH_ALGORITHM == Py_HASH_SIPHASH24 */
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
new file mode 100644
index 00000000..919b5c18
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
@@ -0,0 +1,1726 @@
+/** @file
+ Python interpreter top-level routines, including init/exit
+
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+#include "Python.h"
+
+#include "Python-ast.h"
+#undef Yield /* undefine macro conflicting with winbase.h */
+#include "grammar.h"
+#include "node.h"
+#include "token.h"
+#include "parsetok.h"
+#include "errcode.h"
+#include "code.h"
+#include "symtable.h"
+#include "ast.h"
+#include "marshal.h"
+#include "osdefs.h"
+#include <locale.h>
+
+#ifdef HAVE_SIGNAL_H
+#include <signal.h>
+#endif
+
+#ifdef MS_WINDOWS
+#include "malloc.h" /* for alloca */
+#endif
+
+#ifdef HAVE_LANGINFO_H
+#include <langinfo.h>
+#endif
+
+#ifdef MS_WINDOWS
+#undef BYTE
+#include "windows.h"
+
+extern PyTypeObject PyWindowsConsoleIO_Type;
+#define PyWindowsConsoleIO_Check(op) (PyObject_TypeCheck((op), &PyWindowsConsoleIO_Type))
+#endif
+
+_Py_IDENTIFIER(flush);
+_Py_IDENTIFIER(name);
+_Py_IDENTIFIER(stdin);
+_Py_IDENTIFIER(stdout);
+_Py_IDENTIFIER(stderr);
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+extern wchar_t *Py_GetPath(void);
+
+extern grammar _PyParser_Grammar; /* From graminit.c */
+
+/* Forward */
+static void initmain(PyInterpreterState *interp);
+static int initfsencoding(PyInterpreterState *interp);
+static void initsite(void);
+static int initstdio(void);
+static void initsigs(void);
+static void call_py_exitfuncs(void);
+static void wait_for_thread_shutdown(void);
+static void call_ll_exitfuncs(void);
+extern int _PyUnicode_Init(void);
+extern int _PyStructSequence_Init(void);
+extern void _PyUnicode_Fini(void);
+extern int _PyLong_Init(void);
+extern void PyLong_Fini(void);
+extern int _PyFaulthandler_Init(void);
+extern void _PyFaulthandler_Fini(void);
+extern void _PyHash_Fini(void);
+extern int _PyTraceMalloc_Init(void);
+extern int _PyTraceMalloc_Fini(void);
+
+#ifdef WITH_THREAD
+extern void _PyGILState_Init(PyInterpreterState *, PyThreadState *);
+extern void _PyGILState_Fini(void);
+#endif /* WITH_THREAD */
+
+/* Global configuration variable declarations are in pydebug.h */
+/* XXX (ncoghlan): move those declarations to pylifecycle.h? */
+int Py_DebugFlag; /* Needed by parser.c */
+int Py_VerboseFlag; /* Needed by import.c */
+int Py_QuietFlag; /* Needed by sysmodule.c */
+int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
+int Py_InspectFlag; /* Needed to determine whether to exit at SystemExit */
+int Py_OptimizeFlag = 0; /* Needed by compile.c */
+int Py_NoSiteFlag; /* Suppress 'import site' */
+int Py_BytesWarningFlag; /* Warn on str(bytes) and str(buffer) */
+int Py_UseClassExceptionsFlag = 1; /* Needed by bltinmodule.c: deprecated */
+int Py_FrozenFlag; /* Needed by getpath.c */
+int Py_IgnoreEnvironmentFlag; /* e.g. PYTHONPATH, PYTHONHOME */
+int Py_DontWriteBytecodeFlag; /* Suppress writing bytecode files (*.pyc) */
+int Py_NoUserSiteDirectory = 0; /* for -s and site.py */
+int Py_UnbufferedStdioFlag = 0; /* Unbuffered binary std{in,out,err} */
+int Py_HashRandomizationFlag = 0; /* for -R and PYTHONHASHSEED */
+int Py_IsolatedFlag = 0; /* for -I, isolate from user's env */
+#ifdef MS_WINDOWS
+int Py_LegacyWindowsFSEncodingFlag = 0; /* Uses mbcs instead of utf-8 */
+int Py_LegacyWindowsStdioFlag = 0; /* Uses FileIO instead of WindowsConsoleIO */
+#endif
+
+PyThreadState *_Py_Finalizing = NULL;
+
+/* Hack to force loading of object files */
+int (*_PyOS_mystrnicmp_hack)(const char *, const char *, Py_ssize_t) = \
+ PyOS_mystrnicmp; /* Python/pystrcmp.o */
+
+/* PyModule_GetWarningsModule is no longer necessary as of 2.6
+since _warnings is builtin. This API should not be used. */
+PyObject *
+PyModule_GetWarningsModule(void)
+{
+ return PyImport_ImportModule("warnings");
+}
+
+static int initialized = 0;
+
+/* API to access the initialized flag -- useful for esoteric use */
+
+int
+Py_IsInitialized(void)
+{
+ return initialized;
+}
+
+/* Helper to allow an embedding application to override the normal
+ * mechanism that attempts to figure out an appropriate IO encoding
+ */
+
+static char *_Py_StandardStreamEncoding = NULL;
+static char *_Py_StandardStreamErrors = NULL;
+
+int
+Py_SetStandardStreamEncoding(const char *encoding, const char *errors)
+{
+ if (Py_IsInitialized()) {
+ /* This is too late to have any effect */
+ return -1;
+ }
+ /* Can't call PyErr_NoMemory() on errors, as Python hasn't been
+ * initialised yet.
+ *
+ * However, the raw memory allocators are initialised appropriately
+ * as C static variables, so _PyMem_RawStrdup is OK even though
+ * Py_Initialize hasn't been called yet.
+ */
+ if (encoding) {
+ _Py_StandardStreamEncoding = _PyMem_RawStrdup(encoding);
+ if (!_Py_StandardStreamEncoding) {
+ return -2;
+ }
+ }
+ if (errors) {
+ _Py_StandardStreamErrors = _PyMem_RawStrdup(errors);
+ if (!_Py_StandardStreamErrors) {
+ if (_Py_StandardStreamEncoding) {
+ PyMem_RawFree(_Py_StandardStreamEncoding);
+ }
+ return -3;
+ }
+ }
+#ifdef MS_WINDOWS
+ if (_Py_StandardStreamEncoding) {
+ /* Overriding the stream encoding implies legacy streams */
+ Py_LegacyWindowsStdioFlag = 1;
+ }
+#endif
+ return 0;
+}
+
+/* Global initializations. Can be undone by Py_FinalizeEx(). Don't
+ call this twice without an intervening Py_FinalizeEx() call. When
+ initializations fail, a fatal error is issued and the function does
+ not return. On return, the first thread and interpreter state have
+ been created.
+
+ Locking: you must hold the interpreter lock while calling this.
+ (If the lock has not yet been initialized, that's equivalent to
+ having the lock, but you cannot use multiple threads.)
+
+*/
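+
+/* Minimal embedding sketch (editorial illustration only, not part of this
+   patch):
+
+       Py_Initialize();
+       PyRun_SimpleString("print('hello from the UEFI shell')");
+       if (Py_FinalizeEx() < 0)
+           status = 120;
+
+   Py_Initialize() does not report failure through a return value; it calls
+   Py_FatalError() instead. */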
+
+static int
+add_flag(int flag, const char *envs)
+{
+ int env = atoi(envs);
+ if (flag < env)
+ flag = env;
+ if (flag < 1)
+ flag = 1;
+ return flag;
+}
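+
+/* Editorial note: e.g. PYTHONVERBOSE=2 raises Py_VerboseFlag to at least 2,
+   while a non-numeric, non-empty value still forces the flag to 1, because
+   atoi() returns 0 and add_flag() clamps the result to a minimum of 1. */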
+
+static char*
+get_codec_name(const char *encoding)
+{
+ char *name_utf8, *name_str;
+ PyObject *codec, *name = NULL;
+
+ codec = _PyCodec_Lookup(encoding);
+ if (!codec)
+ goto error;
+
+ name = _PyObject_GetAttrId(codec, &PyId_name);
+ Py_CLEAR(codec);
+ if (!name)
+ goto error;
+
+ name_utf8 = PyUnicode_AsUTF8(name);
+ if (name_utf8 == NULL)
+ goto error;
+ name_str = _PyMem_RawStrdup(name_utf8);
+ Py_DECREF(name);
+ if (name_str == NULL) {
+ PyErr_NoMemory();
+ return NULL;
+ }
+ return name_str;
+
+error:
+ Py_XDECREF(codec);
+ Py_XDECREF(name);
+ return NULL;
+}
+
+static char*
+get_locale_encoding(void)
+{
+#ifdef MS_WINDOWS
+ char codepage[100];
+ PyOS_snprintf(codepage, sizeof(codepage), "cp%d", GetACP());
+ return get_codec_name(codepage);
+#elif defined(HAVE_LANGINFO_H) && defined(CODESET)
+ char* codeset = nl_langinfo(CODESET);
+ if (!codeset || codeset[0] == '\0') {
+ PyErr_SetString(PyExc_ValueError, "CODESET is not set or empty");
+ return NULL;
+ }
+ return get_codec_name(codeset);
+#elif defined(__ANDROID__)
+ return get_codec_name("UTF-8");
+#else
+ PyErr_SetNone(PyExc_NotImplementedError);
+ return NULL;
+#endif
+}
+
+static void
+import_init(PyInterpreterState *interp, PyObject *sysmod)
+{
+ PyObject *importlib;
+ PyObject *impmod;
+ PyObject *sys_modules;
+ PyObject *value;
+
+ /* Import _importlib through its frozen version, _frozen_importlib. */
+ if (PyImport_ImportFrozenModule("_frozen_importlib") <= 0) {
+ Py_FatalError("Py_Initialize: can't import _frozen_importlib");
+ }
+ else if (Py_VerboseFlag) {
+ PySys_FormatStderr("import _frozen_importlib # frozen\n");
+ }
+ importlib = PyImport_AddModule("_frozen_importlib");
+ if (importlib == NULL) {
+ Py_FatalError("Py_Initialize: couldn't get _frozen_importlib from "
+ "sys.modules");
+ }
+ interp->importlib = importlib;
+ Py_INCREF(interp->importlib);
+
+ interp->import_func = PyDict_GetItemString(interp->builtins, "__import__");
+ if (interp->import_func == NULL)
+ Py_FatalError("Py_Initialize: __import__ not found");
+ Py_INCREF(interp->import_func);
+
+ /* Import the _imp module */
+ impmod = PyInit_imp();
+ if (impmod == NULL) {
+ Py_FatalError("Py_Initialize: can't import _imp");
+ }
+ else if (Py_VerboseFlag) {
+ PySys_FormatStderr("import _imp # builtin\n");
+ }
+ sys_modules = PyImport_GetModuleDict();
+ if (Py_VerboseFlag) {
+ PySys_FormatStderr("import sys # builtin\n");
+ }
+ if (PyDict_SetItemString(sys_modules, "_imp", impmod) < 0) {
+ Py_FatalError("Py_Initialize: can't save _imp to sys.modules");
+ }
+
+ /* Install importlib as the implementation of import */
+ value = PyObject_CallMethod(importlib, "_install", "OO", sysmod, impmod);
+ if (value == NULL) {
+ PyErr_Print();
+ Py_FatalError("Py_Initialize: importlib install failed");
+ }
+ Py_DECREF(value);
+ Py_DECREF(impmod);
+
+ _PyImportZip_Init();
+}
+
+
+void
+_Py_InitializeEx_Private(int install_sigs, int install_importlib)
+{
+ PyInterpreterState *interp;
+ PyThreadState *tstate;
+ PyObject *bimod, *sysmod, *pstderr;
+ char *p;
+ extern void _Py_ReadyTypes(void);
+
+ if (initialized)
+ return;
+ initialized = 1;
+ _Py_Finalizing = NULL;
+
+#ifdef HAVE_SETLOCALE
+ /* Set up the LC_CTYPE locale, so we can obtain
+ the locale's charset without having to switch
+ locales. */
+ setlocale(LC_CTYPE, "");
+#endif
+
+ if ((p = Py_GETENV("PYTHONDEBUG")) && *p != '\0')
+ Py_DebugFlag = add_flag(Py_DebugFlag, p);
+ if ((p = Py_GETENV("PYTHONVERBOSE")) && *p != '\0')
+ Py_VerboseFlag = add_flag(Py_VerboseFlag, p);
+ if ((p = Py_GETENV("PYTHONOPTIMIZE")) && *p != '\0')
+ Py_OptimizeFlag = add_flag(Py_OptimizeFlag, p);
+ if ((p = Py_GETENV("PYTHONDONTWRITEBYTECODE")) && *p != '\0')
+ Py_DontWriteBytecodeFlag = add_flag(Py_DontWriteBytecodeFlag, p);
+#ifdef MS_WINDOWS
+ if ((p = Py_GETENV("PYTHONLEGACYWINDOWSFSENCODING")) && *p != '\0')
+ Py_LegacyWindowsFSEncodingFlag = add_flag(Py_LegacyWindowsFSEncodingFlag, p);
+ if ((p = Py_GETENV("PYTHONLEGACYWINDOWSSTDIO")) && *p != '\0')
+ Py_LegacyWindowsStdioFlag = add_flag(Py_LegacyWindowsStdioFlag, p);
+#endif
+
+ _PyRandom_Init();
+
+ interp = PyInterpreterState_New();
+ if (interp == NULL)
+ Py_FatalError("Py_Initialize: can't make first interpreter");
+
+ tstate = PyThreadState_New(interp);
+ if (tstate == NULL)
+ Py_FatalError("Py_Initialize: can't make first thread");
+ (void) PyThreadState_Swap(tstate);
+
+#ifdef WITH_THREAD
+ /* We can't call _PyEval_FiniThreads() in Py_FinalizeEx because
+ destroying the GIL might fail when it is being referenced from
+ another running thread (see issue #9901).
+ Instead we destroy the previously created GIL here, which ensures
+ that we can call Py_Initialize / Py_FinalizeEx multiple times. */
+ _PyEval_FiniThreads();
+
+ /* Auto-thread-state API */
+ _PyGILState_Init(interp, tstate);
+#endif /* WITH_THREAD */
+
+ _Py_ReadyTypes();
+
+ if (!_PyFrame_Init())
+ Py_FatalError("Py_Initialize: can't init frames");
+
+ if (!_PyLong_Init())
+ Py_FatalError("Py_Initialize: can't init longs");
+
+ if (!PyByteArray_Init())
+ Py_FatalError("Py_Initialize: can't init bytearray");
+
+ if (!_PyFloat_Init())
+ Py_FatalError("Py_Initialize: can't init float");
+
+ interp->modules = PyDict_New();
+ if (interp->modules == NULL)
+ Py_FatalError("Py_Initialize: can't make modules dictionary");
+
+ /* Init Unicode implementation; relies on the codec registry */
+ if (_PyUnicode_Init() < 0)
+ Py_FatalError("Py_Initialize: can't initialize unicode");
+ if (_PyStructSequence_Init() < 0)
+ Py_FatalError("Py_Initialize: can't initialize structseq");
+
+ bimod = _PyBuiltin_Init();
+ if (bimod == NULL)
+ Py_FatalError("Py_Initialize: can't initialize builtins modules");
+ _PyImport_FixupBuiltin(bimod, "builtins");
+ interp->builtins = PyModule_GetDict(bimod);
+ if (interp->builtins == NULL)
+ Py_FatalError("Py_Initialize: can't initialize builtins dict");
+ Py_INCREF(interp->builtins);
+
+ /* initialize builtin exceptions */
+ _PyExc_Init(bimod);
+
+ sysmod = _PySys_Init();
+ if (sysmod == NULL)
+ Py_FatalError("Py_Initialize: can't initialize sys");
+ interp->sysdict = PyModule_GetDict(sysmod);
+ if (interp->sysdict == NULL)
+ Py_FatalError("Py_Initialize: can't initialize sys dict");
+ Py_INCREF(interp->sysdict);
+ _PyImport_FixupBuiltin(sysmod, "sys");
+ PySys_SetPath(Py_GetPath());
+ PyDict_SetItemString(interp->sysdict, "modules",
+ interp->modules);
+
+ /* Set up a preliminary stderr printer until we have enough
+ infrastructure for the io module in place. */
+ pstderr = PyFile_NewStdPrinter(fileno(stderr));
+ if (pstderr == NULL)
+ Py_FatalError("Py_Initialize: can't set preliminary stderr");
+ _PySys_SetObjectId(&PyId_stderr, pstderr);
+ PySys_SetObject("__stderr__", pstderr);
+ Py_DECREF(pstderr);
+
+ _PyImport_Init();
+
+ _PyImportHooks_Init();
+
+ /* Initialize _warnings. */
+ _PyWarnings_Init();
+
+ if (!install_importlib)
+ return;
+
+ if (_PyTime_Init() < 0)
+ Py_FatalError("Py_Initialize: can't initialize time");
+
+ import_init(interp, sysmod);
+
+ /* initialize the faulthandler module */
+ if (_PyFaulthandler_Init())
+ Py_FatalError("Py_Initialize: can't initialize faulthandler");
+#ifndef UEFI_C_SOURCE
+ if (initfsencoding(interp) < 0)
+ Py_FatalError("Py_Initialize: unable to load the file system codec");
+
+ if (install_sigs)
+ initsigs(); /* Signal handling stuff, including initintr() */
+
+ if (_PyTraceMalloc_Init() < 0)
+ Py_FatalError("Py_Initialize: can't initialize tracemalloc");
+#endif
+ initmain(interp); /* Module __main__ */
+ if (initstdio() < 0)
+ Py_FatalError(
+ "Py_Initialize: can't initialize sys standard streams");
+
+ /* Initialize warnings. */
+ if (PySys_HasWarnOptions()) {
+ PyObject *warnings_module = PyImport_ImportModule("warnings");
+ if (warnings_module == NULL) {
+ fprintf(stderr, "'import warnings' failed; traceback:\n");
+ PyErr_Print();
+ }
+ Py_XDECREF(warnings_module);
+ }
+
+ if (!Py_NoSiteFlag)
+ initsite(); /* Module site */
+}
+
+void
+Py_InitializeEx(int install_sigs)
+{
+ _Py_InitializeEx_Private(install_sigs, 1);
+}
+
+void
+Py_Initialize(void)
+{
+ Py_InitializeEx(1);
+}
+
+
+#ifdef COUNT_ALLOCS
+extern void dump_counts(FILE*);
+#endif
+
+/* Flush stdout and stderr */
+
+static int
+file_is_closed(PyObject *fobj)
+{
+ int r;
+ PyObject *tmp = PyObject_GetAttrString(fobj, "closed");
+ if (tmp == NULL) {
+ PyErr_Clear();
+ return 0;
+ }
+ r = PyObject_IsTrue(tmp);
+ Py_DECREF(tmp);
+ if (r < 0)
+ PyErr_Clear();
+ return r > 0;
+}
+
+static int
+flush_std_files(void)
+{
+ PyObject *fout = _PySys_GetObjectId(&PyId_stdout);
+ PyObject *ferr = _PySys_GetObjectId(&PyId_stderr);
+ PyObject *tmp;
+ int status = 0;
+
+ if (fout != NULL && fout != Py_None && !file_is_closed(fout)) {
+ tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
+ if (tmp == NULL) {
+ PyErr_WriteUnraisable(fout);
+ status = -1;
+ }
+ else
+ Py_DECREF(tmp);
+ }
+
+ if (ferr != NULL && ferr != Py_None && !file_is_closed(ferr)) {
+ tmp = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
+ if (tmp == NULL) {
+ PyErr_Clear();
+ status = -1;
+ }
+ else
+ Py_DECREF(tmp);
+ }
+
+ return status;
+}
+
+/* Undo the effect of Py_Initialize().
+
+ Beware: if multiple interpreter and/or thread states exist, these
+ are not wiped out; only the current thread and interpreter state
+ are deleted. But since everything else is deleted, those other
+ interpreter and thread states should no longer be used.
+
+ (XXX We should do better, e.g. wipe out all interpreters and
+ threads.)
+
+ Locking: as above.
+
+*/
+
+int
+Py_FinalizeEx(void)
+{
+ PyInterpreterState *interp;
+ PyThreadState *tstate;
+ int status = 0;
+
+ if (!initialized)
+ return status;
+
+ wait_for_thread_shutdown();
+
+ /* The interpreter is still entirely intact at this point, and the
+ * exit funcs may be relying on that. In particular, if some thread
+ * or exit func is still waiting to do an import, the import machinery
+ * expects Py_IsInitialized() to return true. So don't say the
+ * interpreter is uninitialized until after the exit funcs have run.
+     * Note that threading.py uses an exit func to do a join on all the
+     * threads created through it, so this also protects pending imports in
+     * the threads created via threading.
+ */
+ call_py_exitfuncs();
+
+ /* Get current thread state and interpreter pointer */
+ tstate = PyThreadState_GET();
+ interp = tstate->interp;
+
+ /* Remaining threads (e.g. daemon threads) will automatically exit
+ after taking the GIL (in PyEval_RestoreThread()). */
+ _Py_Finalizing = tstate;
+ initialized = 0;
+
+ /* Flush sys.stdout and sys.stderr */
+ if (flush_std_files() < 0) {
+ status = -1;
+ }
+
+ /* Disable signal handling */
+ PyOS_FiniInterrupts();
+
+ /* Collect garbage. This may call finalizers; it's nice to call these
+ * before all modules are destroyed.
+ * XXX If a __del__ or weakref callback is triggered here, and tries to
+ * XXX import a module, bad things can happen, because Python no
+ * XXX longer believes it's initialized.
+ * XXX Fatal Python error: Interpreter not initialized (version mismatch?)
+ * XXX is easy to provoke that way. I've also seen, e.g.,
+ * XXX Exception exceptions.ImportError: 'No module named sha'
+ * XXX in <function callback at 0x008F5718> ignored
+ * XXX but I'm unclear on exactly how that one happens. In any case,
+ * XXX I haven't seen a real-life report of either of these.
+ */
+ _PyGC_CollectIfEnabled();
+#ifdef COUNT_ALLOCS
+ /* With COUNT_ALLOCS, it helps to run GC multiple times:
+ each collection might release some types from the type
+ list, so they become garbage. */
+ while (_PyGC_CollectIfEnabled() > 0)
+ /* nothing */;
+#endif
+ /* Destroy all modules */
+ PyImport_Cleanup();
+
+ /* Flush sys.stdout and sys.stderr (again, in case more was printed) */
+ if (flush_std_files() < 0) {
+ status = -1;
+ }
+
+ /* Collect final garbage. This disposes of cycles created by
+ * class definitions, for example.
+ * XXX This is disabled because it caused too many problems. If
+ * XXX a __del__ or weakref callback triggers here, Python code has
+ * XXX a hard time running, because even the sys module has been
+ * XXX cleared out (sys.stdout is gone, sys.excepthook is gone, etc).
+ * XXX One symptom is a sequence of information-free messages
+ * XXX coming from threads (if a __del__ or callback is invoked,
+ * XXX other threads can execute too, and any exception they encounter
+ * XXX triggers a comedy of errors as subsystem after subsystem
+ * XXX fails to find what it *expects* to find in sys to help report
+ * XXX the exception and consequent unexpected failures). I've also
+ * XXX seen segfaults then, after adding print statements to the
+ * XXX Python code getting called.
+ */
+#if 0
+ _PyGC_CollectIfEnabled();
+#endif
+
+    /* Disable tracemalloc after all Python objects have been destroyed,
+       so it is possible to use tracemalloc in object destructors. */
+ _PyTraceMalloc_Fini();
+
+ /* Destroy the database used by _PyImport_{Fixup,Find}Extension */
+ _PyImport_Fini();
+
+ /* Cleanup typeobject.c's internal caches. */
+ _PyType_Fini();
+
+ /* unload faulthandler module */
+ _PyFaulthandler_Fini();
+
+ /* Debugging stuff */
+#ifdef COUNT_ALLOCS
+ dump_counts(stderr);
+#endif
+ /* dump hash stats */
+ _PyHash_Fini();
+
+ _PY_DEBUG_PRINT_TOTAL_REFS();
+
+#ifdef Py_TRACE_REFS
+ /* Display all objects still alive -- this can invoke arbitrary
+ * __repr__ overrides, so requires a mostly-intact interpreter.
+ * Alas, a lot of stuff may still be alive now that will be cleaned
+ * up later.
+ */
+ if (Py_GETENV("PYTHONDUMPREFS"))
+ _Py_PrintReferences(stderr);
+#endif /* Py_TRACE_REFS */
+
+ /* Clear interpreter state and all thread states. */
+ PyInterpreterState_Clear(interp);
+
+ /* Now we decref the exception classes. After this point nothing
+ can raise an exception. That's okay, because each Fini() method
+ below has been checked to make sure no exceptions are ever
+ raised.
+ */
+
+ _PyExc_Fini();
+
+ /* Sundry finalizers */
+ PyMethod_Fini();
+ PyFrame_Fini();
+ PyCFunction_Fini();
+ PyTuple_Fini();
+ PyList_Fini();
+ PySet_Fini();
+ PyBytes_Fini();
+ PyByteArray_Fini();
+ PyLong_Fini();
+ PyFloat_Fini();
+ PyDict_Fini();
+ PySlice_Fini();
+ _PyGC_Fini();
+ _PyRandom_Fini();
+ _PyArg_Fini();
+ PyAsyncGen_Fini();
+
+ /* Cleanup Unicode implementation */
+ _PyUnicode_Fini();
+
+ /* reset file system default encoding */
+ if (!Py_HasFileSystemDefaultEncoding && Py_FileSystemDefaultEncoding) {
+ PyMem_RawFree((char*)Py_FileSystemDefaultEncoding);
+ Py_FileSystemDefaultEncoding = NULL;
+ }
+
+ /* XXX Still allocated:
+ - various static ad-hoc pointers to interned strings
+ - int and float free list blocks
+ - whatever various modules and libraries allocate
+ */
+
+ PyGrammar_RemoveAccelerators(&_PyParser_Grammar);
+
+ /* Cleanup auto-thread-state */
+#ifdef WITH_THREAD
+ _PyGILState_Fini();
+#endif /* WITH_THREAD */
+
+ /* Delete current thread. After this, many C API calls become crashy. */
+ PyThreadState_Swap(NULL);
+
+ PyInterpreterState_Delete(interp);
+
+#ifdef Py_TRACE_REFS
+ /* Display addresses (& refcnts) of all objects still alive.
+ * An address can be used to find the repr of the object, printed
+ * above by _Py_PrintReferences.
+ */
+ if (Py_GETENV("PYTHONDUMPREFS"))
+ _Py_PrintReferenceAddresses(stderr);
+#endif /* Py_TRACE_REFS */
+#ifdef WITH_PYMALLOC
+ if (_PyMem_PymallocEnabled()) {
+ char *opt = Py_GETENV("PYTHONMALLOCSTATS");
+ if (opt != NULL && *opt != '\0')
+ _PyObject_DebugMallocStats(stderr);
+ }
+#endif
+
+ call_ll_exitfuncs();
+ return status;
+}
+
+void
+Py_Finalize(void)
+{
+ Py_FinalizeEx();
+}
+
+/* Create and initialize a new interpreter and thread, and return the
+ new thread. This requires that Py_Initialize() has been called
+ first.
+
+ Unsuccessful initialization yields a NULL pointer. Note that *no*
+ exception information is available even in this case -- the
+ exception information is held in the thread, and there is no
+ thread.
+
+ Locking: as above.
+
+*/
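+
+/* Editorial usage sketch (assumes a single-threaded embedder; the names are
+   illustrative). Py_NewInterpreter() returns NULL on failure, and
+   Py_EndInterpreter() requires its argument to be the current thread state:
+
+       PyThreadState *main_tstate = PyThreadState_Get();
+       PyThreadState *sub = Py_NewInterpreter();
+       if (sub != NULL) {
+           PyRun_SimpleString("import sys; print(sys.path)");
+           Py_EndInterpreter(sub);
+       }
+       PyThreadState_Swap(main_tstate);
+*/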
+
+PyThreadState *
+Py_NewInterpreter(void)
+{
+ PyInterpreterState *interp;
+ PyThreadState *tstate, *save_tstate;
+ PyObject *bimod, *sysmod;
+
+ if (!initialized)
+ Py_FatalError("Py_NewInterpreter: call Py_Initialize first");
+
+#ifdef WITH_THREAD
+ /* Issue #10915, #15751: The GIL API doesn't work with multiple
+ interpreters: disable PyGILState_Check(). */
+ _PyGILState_check_enabled = 0;
+#endif
+
+ interp = PyInterpreterState_New();
+ if (interp == NULL)
+ return NULL;
+
+ tstate = PyThreadState_New(interp);
+ if (tstate == NULL) {
+ PyInterpreterState_Delete(interp);
+ return NULL;
+ }
+
+ save_tstate = PyThreadState_Swap(tstate);
+
+ /* XXX The following is lax in error checking */
+
+ interp->modules = PyDict_New();
+
+ bimod = _PyImport_FindBuiltin("builtins");
+ if (bimod != NULL) {
+ interp->builtins = PyModule_GetDict(bimod);
+ if (interp->builtins == NULL)
+ goto handle_error;
+ Py_INCREF(interp->builtins);
+ }
+ else if (PyErr_Occurred()) {
+ goto handle_error;
+ }
+
+ /* initialize builtin exceptions */
+ _PyExc_Init(bimod);
+
+ sysmod = _PyImport_FindBuiltin("sys");
+ if (bimod != NULL && sysmod != NULL) {
+ PyObject *pstderr;
+
+ interp->sysdict = PyModule_GetDict(sysmod);
+ if (interp->sysdict == NULL)
+ goto handle_error;
+ Py_INCREF(interp->sysdict);
+ PySys_SetPath(Py_GetPath());
+ PyDict_SetItemString(interp->sysdict, "modules",
+ interp->modules);
+ /* Set up a preliminary stderr printer until we have enough
+ infrastructure for the io module in place. */
+ pstderr = PyFile_NewStdPrinter(fileno(stderr));
+ if (pstderr == NULL)
+ Py_FatalError("Py_Initialize: can't set preliminary stderr");
+ _PySys_SetObjectId(&PyId_stderr, pstderr);
+ PySys_SetObject("__stderr__", pstderr);
+ Py_DECREF(pstderr);
+
+ _PyImportHooks_Init();
+
+ import_init(interp, sysmod);
+
+ if (initfsencoding(interp) < 0)
+ goto handle_error;
+
+ if (initstdio() < 0)
+ Py_FatalError(
+ "Py_Initialize: can't initialize sys standard streams");
+ initmain(interp);
+ if (!Py_NoSiteFlag)
+ initsite();
+ }
+
+ if (!PyErr_Occurred())
+ return tstate;
+
+handle_error:
+ /* Oops, it didn't work. Undo it all. */
+
+ PyErr_PrintEx(0);
+ PyThreadState_Clear(tstate);
+ PyThreadState_Swap(save_tstate);
+ PyThreadState_Delete(tstate);
+ PyInterpreterState_Delete(interp);
+
+ return NULL;
+}
+
+/* Delete an interpreter and its last thread. This requires that the
+ given thread state is current, that the thread has no remaining
+ frames, and that it is its interpreter's only remaining thread.
+ It is a fatal error to violate these constraints.
+
+ (Py_FinalizeEx() doesn't have these constraints -- it zaps
+ everything, regardless.)
+
+ Locking: as above.
+
+*/
+
+void
+Py_EndInterpreter(PyThreadState *tstate)
+{
+ PyInterpreterState *interp = tstate->interp;
+
+ if (tstate != PyThreadState_GET())
+ Py_FatalError("Py_EndInterpreter: thread is not current");
+ if (tstate->frame != NULL)
+ Py_FatalError("Py_EndInterpreter: thread still has a frame");
+
+ wait_for_thread_shutdown();
+
+ if (tstate != interp->tstate_head || tstate->next != NULL)
+ Py_FatalError("Py_EndInterpreter: not the last thread");
+
+ PyImport_Cleanup();
+ PyInterpreterState_Clear(interp);
+ PyThreadState_Swap(NULL);
+ PyInterpreterState_Delete(interp);
+}
+
+#ifdef MS_WINDOWS
+static wchar_t *progname = L"python";
+#else
+static wchar_t *progname = L"python3";
+#endif
+
+void
+Py_SetProgramName(wchar_t *pn)
+{
+ if (pn && *pn)
+ progname = pn;
+}
+
+wchar_t *
+Py_GetProgramName(void)
+{
+ return progname;
+}
+
+static wchar_t *default_home = NULL;
+static wchar_t env_home[MAXPATHLEN+1];
+
+void
+Py_SetPythonHome(wchar_t *home)
+{
+ default_home = home;
+}
+
+wchar_t *
+Py_GetPythonHome(void)
+{
+ wchar_t *home = default_home;
+ if (home == NULL && !Py_IgnoreEnvironmentFlag) {
+ char* chome = Py_GETENV("PYTHONHOME");
+ if (chome) {
+ size_t size = Py_ARRAY_LENGTH(env_home);
+ size_t r = mbstowcs(env_home, chome, size);
+ if (r != (size_t)-1 && r < size)
+ home = env_home;
+ }
+
+ }
+ return home;
+}
+
+/* Create __main__ module */
+
+static void
+initmain(PyInterpreterState *interp)
+{
+ PyObject *m, *d, *loader, *ann_dict;
+ m = PyImport_AddModule("__main__");
+ if (m == NULL)
+ Py_FatalError("can't create __main__ module");
+ d = PyModule_GetDict(m);
+ ann_dict = PyDict_New();
+ if ((ann_dict == NULL) ||
+ (PyDict_SetItemString(d, "__annotations__", ann_dict) < 0)) {
+ Py_FatalError("Failed to initialize __main__.__annotations__");
+ }
+ Py_DECREF(ann_dict);
+ if (PyDict_GetItemString(d, "__builtins__") == NULL) {
+ PyObject *bimod = PyImport_ImportModule("builtins");
+ if (bimod == NULL) {
+ Py_FatalError("Failed to retrieve builtins module");
+ }
+ if (PyDict_SetItemString(d, "__builtins__", bimod) < 0) {
+ Py_FatalError("Failed to initialize __main__.__builtins__");
+ }
+ Py_DECREF(bimod);
+ }
+ /* Main is a little special - imp.is_builtin("__main__") will return
+ * False, but BuiltinImporter is still the most appropriate initial
+ * setting for its __loader__ attribute. A more suitable value will
+ * be set if __main__ gets further initialized later in the startup
+ * process.
+ */
+ loader = PyDict_GetItemString(d, "__loader__");
+ if (loader == NULL || loader == Py_None) {
+ PyObject *loader = PyObject_GetAttrString(interp->importlib,
+ "BuiltinImporter");
+ if (loader == NULL) {
+ Py_FatalError("Failed to retrieve BuiltinImporter");
+ }
+ if (PyDict_SetItemString(d, "__loader__", loader) < 0) {
+ Py_FatalError("Failed to initialize __main__.__loader__");
+ }
+ Py_DECREF(loader);
+ }
+}
+
+static int
+initfsencoding(PyInterpreterState *interp)
+{
+ PyObject *codec;
+
+#ifdef UEFI_MSVC_64
+ Py_FileSystemDefaultEncoding = "utf-8";
+ Py_FileSystemDefaultEncodeErrors = "surrogatepass";
+#elif defined(UEFI_MSVC_32)
+ Py_FileSystemDefaultEncoding = "utf-8";
+ Py_FileSystemDefaultEncodeErrors = "surrogatepass";
+#elif defined(MS_WINDOWS)
+ if (Py_LegacyWindowsFSEncodingFlag)
+ {
+ Py_FileSystemDefaultEncoding = "mbcs";
+ Py_FileSystemDefaultEncodeErrors = "replace";
+ }
+ else
+ {
+ Py_FileSystemDefaultEncoding = "utf-8";
+ Py_FileSystemDefaultEncodeErrors = "surrogatepass";
+ }
+#else
+ if (Py_FileSystemDefaultEncoding == NULL)
+ {
+ Py_FileSystemDefaultEncoding = get_locale_encoding();
+ if (Py_FileSystemDefaultEncoding == NULL)
+ Py_FatalError("Py_Initialize: Unable to get the locale encoding");
+
+ Py_HasFileSystemDefaultEncoding = 0;
+ interp->fscodec_initialized = 1;
+ return 0;
+ }
+#endif
+
+ /* the encoding is mbcs, utf-8 or ascii */
+ codec = _PyCodec_Lookup(Py_FileSystemDefaultEncoding);
+ if (!codec) {
+        /* Such an error can only occur in critical situations: out of
+         * memory, importing a module of the standard library failed,
+         * etc. */
+ return -1;
+ }
+ Py_DECREF(codec);
+ interp->fscodec_initialized = 1;
+ return 0;
+}
+
+/* Import the site module (not into __main__ though) */
+
+static void
+initsite(void)
+{
+ PyObject *m;
+ m = PyImport_ImportModule("site");
+ if (m == NULL) {
+ fprintf(stderr, "Failed to import the site module\n");
+ PyErr_Print();
+ Py_Finalize();
+ exit(1);
+ }
+ else {
+ Py_DECREF(m);
+ }
+}
+
+/* Check if a file descriptor is valid or not.
+ Return 0 if the file descriptor is invalid, return non-zero otherwise. */
+static int
+is_valid_fd(int fd)
+{
+#ifdef __APPLE__
+ /* bpo-30225: On macOS Tiger, when stdout is redirected to a pipe
+       and the other side of the pipe is closed, dup(1) succeeds, whereas
+       fstat(1, &st) fails with EBADF. Prefer fstat() over dup() to detect
+       such an error. */
+ struct stat st;
+ return (fstat(fd, &st) == 0);
+#else
+ int fd2;
+ if (fd < 0)
+ return 0;
+ _Py_BEGIN_SUPPRESS_IPH
+    /* Prefer dup() over fstat(). fstat() can require input/output operations,
+       whereas dup() doesn't, and there is only a low risk of EMFILE/ENFILE
+       at Python startup. */
+ fd2 = dup(fd);
+ if (fd2 >= 0)
+ close(fd2);
+ _Py_END_SUPPRESS_IPH
+ return fd2 >= 0;
+#endif
+}
+
+/* returns Py_None if the fd is not valid */
+static PyObject*
+create_stdio(PyObject* io,
+ int fd, int write_mode, const char* name,
+ const char* encoding, const char* errors)
+{
+ PyObject *buf = NULL, *stream = NULL, *text = NULL, *raw = NULL, *res;
+ const char* mode;
+ const char* newline;
+ PyObject *line_buffering;
+ int buffering, isatty;
+ _Py_IDENTIFIER(open);
+ _Py_IDENTIFIER(isatty);
+ _Py_IDENTIFIER(TextIOWrapper);
+ _Py_IDENTIFIER(mode);
+
+ if (!is_valid_fd(fd))
+ Py_RETURN_NONE;
+
+ /* stdin is always opened in buffered mode, first because it shouldn't
+ make a difference in common use cases, second because TextIOWrapper
+ depends on the presence of a read1() method which only exists on
+ buffered streams.
+ */
+ if (Py_UnbufferedStdioFlag && write_mode)
+ buffering = 0;
+ else
+ buffering = -1;
+ if (write_mode)
+ mode = "wb";
+ else
+ mode = "rb";
+ buf = _PyObject_CallMethodId(io, &PyId_open, "isiOOOi",
+ fd, mode, buffering,
+ Py_None, Py_None, /* encoding, errors */
+ Py_None, 0); /* newline, closefd */
+ if (buf == NULL)
+ goto error;
+
+ if (buffering) {
+ _Py_IDENTIFIER(raw);
+ raw = _PyObject_GetAttrId(buf, &PyId_raw);
+ if (raw == NULL)
+ goto error;
+ }
+ else {
+ raw = buf;
+ Py_INCREF(raw);
+ }
+
+#ifdef MS_WINDOWS
+ /* Windows console IO is always UTF-8 encoded */
+ if (PyWindowsConsoleIO_Check(raw))
+ encoding = "utf-8";
+#endif
+
+ text = PyUnicode_FromString(name);
+ if (text == NULL || _PyObject_SetAttrId(raw, &PyId_name, text) < 0)
+ goto error;
+ res = _PyObject_CallMethodId(raw, &PyId_isatty, NULL);
+ if (res == NULL)
+ goto error;
+ isatty = PyObject_IsTrue(res);
+ Py_DECREF(res);
+ if (isatty == -1)
+ goto error;
+ if (isatty || Py_UnbufferedStdioFlag)
+ line_buffering = Py_True;
+ else
+ line_buffering = Py_False;
+
+ Py_CLEAR(raw);
+ Py_CLEAR(text);
+
+#ifdef MS_WINDOWS
+ /* sys.stdin: enable universal newline mode, translate "\r\n" and "\r"
+ newlines to "\n".
+ sys.stdout and sys.stderr: translate "\n" to "\r\n". */
+ newline = NULL;
+#else
+ /* sys.stdin: split lines at "\n".
+ sys.stdout and sys.stderr: don't translate newlines (use "\n"). */
+ newline = "\n";
+#endif
+
+ stream = _PyObject_CallMethodId(io, &PyId_TextIOWrapper, "OsssO",
+ buf, encoding, errors,
+ newline, line_buffering);
+ Py_CLEAR(buf);
+ if (stream == NULL)
+ goto error;
+
+ if (write_mode)
+ mode = "w";
+ else
+ mode = "r";
+ text = PyUnicode_FromString(mode);
+ if (!text || _PyObject_SetAttrId(stream, &PyId_mode, text) < 0)
+ goto error;
+ Py_CLEAR(text);
+ return stream;
+
+error:
+ Py_XDECREF(buf);
+ Py_XDECREF(stream);
+ Py_XDECREF(text);
+ Py_XDECREF(raw);
+
+ if (PyErr_ExceptionMatches(PyExc_OSError) && !is_valid_fd(fd)) {
+ /* Issue #24891: the file descriptor was closed after the first
+ is_valid_fd() check was called. Ignore the OSError and set the
+ stream to None. */
+ PyErr_Clear();
+ Py_RETURN_NONE;
+ }
+ return NULL;
+}
+
+/* Initialize sys.stdin, stdout, stderr and builtins.open */
+static int
+initstdio(void)
+{
+ PyObject *iomod = NULL, *wrapper;
+ PyObject *bimod = NULL;
+ PyObject *m;
+ PyObject *std = NULL;
+ int status = 0, fd;
+ PyObject * encoding_attr;
+ char *pythonioencoding = NULL, *encoding, *errors;
+
+ /* Hack to avoid a nasty recursion issue when Python is invoked
+ in verbose mode: pre-import the Latin-1 and UTF-8 codecs */
+ if ((m = PyImport_ImportModule("encodings.utf_8")) == NULL) {
+ goto error;
+ }
+ Py_DECREF(m);
+
+ if (!(m = PyImport_ImportModule("encodings.latin_1"))) {
+ goto error;
+ }
+ Py_DECREF(m);
+
+ if (!(bimod = PyImport_ImportModule("builtins"))) {
+ goto error;
+ }
+
+ if (!(iomod = PyImport_ImportModule("io"))) {
+ goto error;
+ }
+ if (!(wrapper = PyObject_GetAttrString(iomod, "OpenWrapper"))) {
+ goto error;
+ }
+
+ /* Set builtins.open */
+ if (PyObject_SetAttrString(bimod, "open", wrapper) == -1) {
+ Py_DECREF(wrapper);
+ goto error;
+ }
+ Py_DECREF(wrapper);
+
+ encoding = _Py_StandardStreamEncoding;
+ errors = _Py_StandardStreamErrors;
+ if (!encoding || !errors) {
+ pythonioencoding = Py_GETENV("PYTHONIOENCODING");
+ if (pythonioencoding) {
+ char *err;
+ pythonioencoding = _PyMem_Strdup(pythonioencoding);
+ if (pythonioencoding == NULL) {
+ PyErr_NoMemory();
+ goto error;
+ }
+ err = strchr(pythonioencoding, ':');
+ if (err) {
+ *err = '\0';
+ err++;
+ if (*err && !errors) {
+ errors = err;
+ }
+ }
+ if (*pythonioencoding && !encoding) {
+ encoding = pythonioencoding;
+ }
+ }
+ if (!errors && !(pythonioencoding && *pythonioencoding)) {
+ /* When the LC_CTYPE locale is the POSIX locale ("C locale"),
+ stdin and stdout use the surrogateescape error handler by
+ default, instead of the strict error handler. */
+ char *loc = setlocale(LC_CTYPE, NULL);
+ if (loc != NULL && strcmp(loc, "C") == 0)
+ errors = "surrogateescape";
+ }
+ }
+
+ /* Set sys.stdin */
+ fd = fileno(stdin);
+ /* Under some conditions stdin, stdout and stderr may not be connected
+ * and fileno() may point to an invalid file descriptor. For example
+ * GUI apps don't have valid standard streams by default.
+ */
+ std = create_stdio(iomod, fd, 0, "<stdin>", encoding, errors);
+ if (std == NULL)
+ goto error;
+ PySys_SetObject("__stdin__", std);
+ _PySys_SetObjectId(&PyId_stdin, std);
+ Py_DECREF(std);
+#ifdef UEFI_C_SOURCE
+    /* The UEFI shell doesn't have a separate stderr stream, so point
+       sys.stdout at the preliminary stderr printer installed earlier. */
+    std = PySys_GetObject("stderr");
+    PySys_SetObject("__stdout__", std);
+    _PySys_SetObjectId(&PyId_stdout, std);
+    /* PySys_GetObject() returns a borrowed reference, so no Py_DECREF here. */
+#endif
+
+#ifndef UEFI_C_SOURCE // Couldn't get this code working in the EFI shell; the interpreter application hangs here
+ /* Set sys.stdout */
+ fd = fileno(stdout);
+ std = create_stdio(iomod, fd, 1, "<stdout>", encoding, errors);
+ if (std == NULL)
+ goto error;
+ PySys_SetObject("__stdout__", std);
+ _PySys_SetObjectId(&PyId_stdout, std);
+ Py_DECREF(std);
+#endif
+
+
+#if 0 /* Disable this if you have trouble debugging bootstrap stuff */
+ /* Set sys.stderr, replaces the preliminary stderr */
+ fd = fileno(stderr);
+ std = create_stdio(iomod, fd, 1, "<stderr>", encoding, "backslashreplace");
+ if (std == NULL)
+ goto error;
+
+ /* Same as hack above, pre-import stderr's codec to avoid recursion
+ when import.c tries to write to stderr in verbose mode. */
+ encoding_attr = PyObject_GetAttrString(std, "encoding");
+ if (encoding_attr != NULL) {
+ const char * std_encoding;
+ std_encoding = PyUnicode_AsUTF8(encoding_attr);
+ if (std_encoding != NULL) {
+ PyObject *codec_info = _PyCodec_Lookup(std_encoding);
+ Py_XDECREF(codec_info);
+ }
+ Py_DECREF(encoding_attr);
+ }
+ PyErr_Clear(); /* Not a fatal error if codec isn't available */
+
+ if (PySys_SetObject("__stderr__", std) < 0) {
+ Py_DECREF(std);
+ goto error;
+ }
+ if (_PySys_SetObjectId(&PyId_stderr, std) < 0) {
+ Py_DECREF(std);
+ goto error;
+ }
+ Py_DECREF(std);
+#endif
+
+ if (0) {
+ error:
+ status = -1;
+ }
+
+ /* We won't need them anymore. */
+ if (_Py_StandardStreamEncoding) {
+ PyMem_RawFree(_Py_StandardStreamEncoding);
+ _Py_StandardStreamEncoding = NULL;
+ }
+ if (_Py_StandardStreamErrors) {
+ PyMem_RawFree(_Py_StandardStreamErrors);
+ _Py_StandardStreamErrors = NULL;
+ }
+ PyMem_Free(pythonioencoding);
+ Py_XDECREF(bimod);
+ Py_XDECREF(iomod);
+ return status;
+}
+
+
+static void
+_Py_FatalError_DumpTracebacks(int fd)
+{
+ fputc('\n', stderr);
+ fflush(stderr);
+
+ /* display the current Python stack */
+ _Py_DumpTracebackThreads(fd, NULL, NULL);
+}
+
+/* Print the current exception (if an exception is set) with its traceback,
+ or display the current Python stack.
+
+ Don't call PyErr_PrintEx() and the except hook, because Py_FatalError() is
+ called on catastrophic cases.
+
+ Return 1 if the traceback was displayed, 0 otherwise. */
+
+static int
+_Py_FatalError_PrintExc(int fd)
+{
+ PyObject *ferr, *res;
+ PyObject *exception, *v, *tb;
+ int has_tb;
+
+ PyErr_Fetch(&exception, &v, &tb);
+ if (exception == NULL) {
+ /* No current exception */
+ return 0;
+ }
+
+ ferr = _PySys_GetObjectId(&PyId_stderr);
+ if (ferr == NULL || ferr == Py_None) {
+ /* sys.stderr is not set yet or set to None,
+ no need to try to display the exception */
+ return 0;
+ }
+
+ PyErr_NormalizeException(&exception, &v, &tb);
+ if (tb == NULL) {
+ tb = Py_None;
+ Py_INCREF(tb);
+ }
+ PyException_SetTraceback(v, tb);
+ if (exception == NULL) {
+ /* PyErr_NormalizeException() failed */
+ return 0;
+ }
+
+ has_tb = (tb != Py_None);
+ PyErr_Display(exception, v, tb);
+ Py_XDECREF(exception);
+ Py_XDECREF(v);
+ Py_XDECREF(tb);
+
+ /* sys.stderr may be buffered: call sys.stderr.flush() */
+ res = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
+ if (res == NULL)
+ PyErr_Clear();
+ else
+ Py_DECREF(res);
+
+ return has_tb;
+}
+
+/* Print fatal error message and abort */
+
+void
+Py_FatalError(const char *msg)
+{
+ const int fd = fileno(stderr);
+ static int reentrant = 0;
+ PyThreadState *tss_tstate = NULL;
+#ifdef MS_WINDOWS
+ size_t len;
+ WCHAR* buffer;
+ size_t i;
+#endif
+
+ if (reentrant) {
+ /* Py_FatalError() caused a second fatal error.
+ Example: flush_std_files() raises a recursion error. */
+ goto exit;
+ }
+ reentrant = 1;
+
+ fprintf(stderr, "Fatal Python error: %s\n", msg);
+ fflush(stderr); /* it helps in Windows debug build */
+
+#ifdef WITH_THREAD
+ /* Check if the current thread has a Python thread state
+ and holds the GIL */
+ tss_tstate = PyGILState_GetThisThreadState();
+ if (tss_tstate != NULL) {
+ PyThreadState *tstate = PyThreadState_GET();
+ if (tss_tstate != tstate) {
+ /* The Python thread does not hold the GIL */
+ tss_tstate = NULL;
+ }
+ }
+ else {
+ /* Py_FatalError() has been called from a C thread
+ which has no Python thread state. */
+ }
+#endif
+ int has_tstate_and_gil = (tss_tstate != NULL);
+
+ if (has_tstate_and_gil) {
+ /* If an exception is set, print the exception with its traceback */
+ if (!_Py_FatalError_PrintExc(fd)) {
+ /* No exception is set, or an exception is set without traceback */
+ _Py_FatalError_DumpTracebacks(fd);
+ }
+ }
+ else {
+ _Py_FatalError_DumpTracebacks(fd);
+ }
+
+ /* The main purpose of faulthandler is to display the traceback. We already
+ * did our best to display it. So faulthandler can now be disabled.
+ * (Don't trigger it on abort().) */
+ _PyFaulthandler_Fini();
+
+    /* Check if the current Python thread holds the GIL */
+ if (has_tstate_and_gil) {
+ /* Flush sys.stdout and sys.stderr */
+ flush_std_files();
+ }
+
+#ifdef MS_WINDOWS
+ len = strlen(msg);
+
+ /* Convert the message to wchar_t. This uses a simple one-to-one
+       conversion, assuming that this error message actually uses ASCII
+ only. If this ceases to be true, we will have to convert. */
+ buffer = alloca( (len+1) * (sizeof *buffer));
+ for( i=0; i<=len; ++i)
+ buffer[i] = msg[i];
+ OutputDebugStringW(L"Fatal Python error: ");
+ OutputDebugStringW(buffer);
+ OutputDebugStringW(L"\n");
+#endif /* MS_WINDOWS */
+
+exit:
+#if defined(MS_WINDOWS) && defined(_DEBUG)
+ DebugBreak();
+#endif
+ abort();
+}
+
+/* Clean up and exit */
+
+#ifdef WITH_THREAD
+# include "pythread.h"
+#endif
+
+static void (*pyexitfunc)(void) = NULL;
+/* For the atexit module. */
+void _Py_PyAtExit(void (*func)(void))
+{
+ pyexitfunc = func;
+}
+
+static void
+call_py_exitfuncs(void)
+{
+ if (pyexitfunc == NULL)
+ return;
+
+ (*pyexitfunc)();
+ PyErr_Clear();
+}
+
+/* Wait until threading._shutdown completes, provided
+ the threading module was imported in the first place.
+ The shutdown routine will wait until all non-daemon
+ "threading" threads have completed. */
+static void
+wait_for_thread_shutdown(void)
+{
+#ifdef WITH_THREAD
+ _Py_IDENTIFIER(_shutdown);
+ PyObject *result;
+ PyThreadState *tstate = PyThreadState_GET();
+ PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
+ "threading");
+ if (threading == NULL) {
+ /* threading not imported */
+ PyErr_Clear();
+ return;
+ }
+ result = _PyObject_CallMethodId(threading, &PyId__shutdown, NULL);
+ if (result == NULL) {
+ PyErr_WriteUnraisable(threading);
+ }
+ else {
+ Py_DECREF(result);
+ }
+ Py_DECREF(threading);
+#endif
+}
+
+#define NEXITFUNCS 32
+static void (*exitfuncs[NEXITFUNCS])(void);
+static int nexitfuncs = 0;
+
+int Py_AtExit(void (*func)(void))
+{
+ if (nexitfuncs >= NEXITFUNCS)
+ return -1;
+ exitfuncs[nexitfuncs++] = func;
+ return 0;
+}
+
+static void
+call_ll_exitfuncs(void)
+{
+ while (nexitfuncs > 0)
+ (*exitfuncs[--nexitfuncs])();
+
+ fflush(stdout);
+ fflush(stderr);
+}
+
+void
+Py_Exit(int sts)
+{
+ if (Py_FinalizeEx() < 0) {
+ sts = 120;
+ }
+
+ exit(sts);
+}
+
+static void
+initsigs(void)
+{
+#ifdef SIGPIPE
+ PyOS_setsig(SIGPIPE, SIG_IGN);
+#endif
+#ifdef SIGXFZ
+ PyOS_setsig(SIGXFZ, SIG_IGN);
+#endif
+#ifdef SIGXFSZ
+ PyOS_setsig(SIGXFSZ, SIG_IGN);
+#endif
+ PyOS_InitInterrupts(); /* May imply initsignal() */
+ if (PyErr_Occurred()) {
+ Py_FatalError("Py_Initialize: can't import signal");
+ }
+}
+
+
+/* Restore signals that the interpreter has called SIG_IGN on to SIG_DFL.
+ *
+ * All of the code in this function must only use async-signal-safe functions,
+ * listed at `man 7 signal` or
+ * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html.
+ */
+void
+_Py_RestoreSignals(void)
+{
+#ifdef SIGPIPE
+ PyOS_setsig(SIGPIPE, SIG_DFL);
+#endif
+#ifdef SIGXFZ
+ PyOS_setsig(SIGXFZ, SIG_DFL);
+#endif
+#ifdef SIGXFSZ
+ PyOS_setsig(SIGXFSZ, SIG_DFL);
+#endif
+}
+
+
+/*
+ * The file descriptor fd is considered ``interactive'' if either
+ * a) isatty(fd) is TRUE, or
+ * b) the -i flag was given, and the filename associated with
+ * the descriptor is NULL or "<stdin>" or "???".
+ */
+int
+Py_FdIsInteractive(FILE *fp, const char *filename)
+{
+ if (isatty((int)fileno(fp)))
+ return 1;
+ if (!Py_InteractiveFlag)
+ return 0;
+ return (filename == NULL) ||
+ (strcmp(filename, "<stdin>") == 0) ||
+ (strcmp(filename, "???") == 0);
+}
+
+
+/* Wrappers around sigaction() or signal(). */
+
+PyOS_sighandler_t
+PyOS_getsig(int sig)
+{
+#ifdef HAVE_SIGACTION
+ struct sigaction context;
+ if (sigaction(sig, NULL, &context) == -1)
+ return SIG_ERR;
+ return context.sa_handler;
+#else
+ PyOS_sighandler_t handler;
+/* Special signal handling for the secure CRT in Visual Studio 2005 */
+#if defined(_MSC_VER) && _MSC_VER >= 1400
+ switch (sig) {
+ /* Only these signals are valid */
+ case SIGINT:
+ case SIGILL:
+ case SIGFPE:
+ case SIGSEGV:
+ case SIGTERM:
+ case SIGBREAK:
+ case SIGABRT:
+ break;
+ /* Don't call signal() with other values or it will assert */
+ default:
+ return SIG_ERR;
+ }
+#endif /* _MSC_VER && _MSC_VER >= 1400 */
+ handler = signal(sig, SIG_IGN);
+ if (handler != SIG_ERR)
+ signal(sig, handler);
+ return handler;
+#endif
+}
+
+/*
+ * All of the code in this function must only use async-signal-safe functions,
+ * listed at `man 7 signal` or
+ * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html.
+ */
+PyOS_sighandler_t
+PyOS_setsig(int sig, PyOS_sighandler_t handler)
+{
+#ifdef HAVE_SIGACTION
+ /* Some code in Modules/signalmodule.c depends on sigaction() being
+ * used here if HAVE_SIGACTION is defined. Fix that if this code
+ * changes to invalidate that assumption.
+ */
+ struct sigaction context, ocontext;
+ context.sa_handler = handler;
+ sigemptyset(&context.sa_mask);
+ context.sa_flags = 0;
+ if (sigaction(sig, &context, &ocontext) == -1)
+ return SIG_ERR;
+ return ocontext.sa_handler;
+#else
+ PyOS_sighandler_t oldhandler;
+ oldhandler = signal(sig, handler);
+#ifdef HAVE_SIGINTERRUPT
+ siginterrupt(sig, 1);
+#endif
+ return oldhandler;
+#endif
+}
+
+#ifdef __cplusplus
+}
+#endif
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
new file mode 100644
index 00000000..df5f0eda
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
@@ -0,0 +1,969 @@
+/** @file
+ Thread and interpreter state structures and their interfaces
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+#include "Python.h"
+
+#define GET_TSTATE() \
+ ((PyThreadState*)_Py_atomic_load_relaxed(&_PyThreadState_Current))
+#define SET_TSTATE(value) \
+ _Py_atomic_store_relaxed(&_PyThreadState_Current, (uintptr_t)(value))
+#define GET_INTERP_STATE() \
+ (GET_TSTATE()->interp)
+
+
+/* --------------------------------------------------------------------------
+CAUTION
+
+Always use PyMem_RawMalloc() and PyMem_RawFree() directly in this file. A
+number of these functions are advertised as safe to call when the GIL isn't
+held, and in a debug build Python redirects (e.g.) PyMem_NEW (etc) to Python's
+debugging obmalloc functions. Those aren't thread-safe (they rely on the GIL
+to avoid the expense of doing their own locking).
+-------------------------------------------------------------------------- */
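+
+/* Editorial illustration of the rule above: inside this file allocate with
+
+       PyThreadState *t = (PyThreadState *)PyMem_RawMalloc(sizeof(*t));
+       ...
+       PyMem_RawFree(t);
+
+   rather than PyMem_Malloc()/PyObject_Malloc(), which may only be called
+   while the GIL is held. */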
+
+#ifdef HAVE_DLOPEN
+#ifdef HAVE_DLFCN_H
+#include <dlfcn.h>
+#endif
+#if !HAVE_DECL_RTLD_LAZY
+#define RTLD_LAZY 1
+#endif
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+int _PyGILState_check_enabled = 1;
+
+#ifdef WITH_THREAD
+#include "pythread.h"
+static PyThread_type_lock head_mutex = NULL; /* Protects interp->tstate_head */
+#define HEAD_INIT() (void)(head_mutex || (head_mutex = PyThread_allocate_lock()))
+#define HEAD_LOCK() PyThread_acquire_lock(head_mutex, WAIT_LOCK)
+#define HEAD_UNLOCK() PyThread_release_lock(head_mutex)
+
+/* The single PyInterpreterState used by this process'
+ GILState implementation
+*/
+static PyInterpreterState *autoInterpreterState = NULL;
+static int autoTLSkey = -1;
+#else
+#define HEAD_INIT() /* Nothing */
+#define HEAD_LOCK() /* Nothing */
+#define HEAD_UNLOCK() /* Nothing */
+#endif
+
+static PyInterpreterState *interp_head = NULL;
+static __PyCodeExtraState *coextra_head = NULL;
+
+/* Assuming the current thread holds the GIL, this is the
+ PyThreadState for the current thread. */
+_Py_atomic_address _PyThreadState_Current = {0};
+PyThreadFrameGetter _PyThreadState_GetFrame = NULL;
+
+#ifdef WITH_THREAD
+static void _PyGILState_NoteThreadState(PyThreadState* tstate);
+#endif
+
+
+PyInterpreterState *
+PyInterpreterState_New(void)
+{
+ PyInterpreterState *interp = (PyInterpreterState *)
+ PyMem_RawMalloc(sizeof(PyInterpreterState));
+
+ if (interp != NULL) {
+ __PyCodeExtraState* coextra = PyMem_RawMalloc(sizeof(__PyCodeExtraState));
+ if (coextra == NULL) {
+ PyMem_RawFree(interp);
+ return NULL;
+ }
+
+ HEAD_INIT();
+#ifdef WITH_THREAD
+ if (head_mutex == NULL)
+ Py_FatalError("Can't initialize threads for interpreter");
+#endif
+ interp->modules = NULL;
+ interp->modules_by_index = NULL;
+ interp->sysdict = NULL;
+ interp->builtins = NULL;
+ interp->builtins_copy = NULL;
+ interp->tstate_head = NULL;
+ interp->codec_search_path = NULL;
+ interp->codec_search_cache = NULL;
+ interp->codec_error_registry = NULL;
+ interp->codecs_initialized = 0;
+ interp->fscodec_initialized = 0;
+ interp->importlib = NULL;
+ interp->import_func = NULL;
+ interp->eval_frame = _PyEval_EvalFrameDefault;
+ coextra->co_extra_user_count = 0;
+ coextra->interp = interp;
+#ifdef HAVE_DLOPEN
+#if HAVE_DECL_RTLD_NOW
+ interp->dlopenflags = RTLD_NOW;
+#else
+ interp->dlopenflags = RTLD_LAZY;
+#endif
+#endif
+
+ HEAD_LOCK();
+ interp->next = interp_head;
+ interp_head = interp;
+ coextra->next = coextra_head;
+ coextra_head = coextra;
+ HEAD_UNLOCK();
+ }
+
+ return interp;
+}
+
+
+void
+PyInterpreterState_Clear(PyInterpreterState *interp)
+{
+ PyThreadState *p;
+ HEAD_LOCK();
+ for (p = interp->tstate_head; p != NULL; p = p->next)
+ PyThreadState_Clear(p);
+ HEAD_UNLOCK();
+ Py_CLEAR(interp->codec_search_path);
+ Py_CLEAR(interp->codec_search_cache);
+ Py_CLEAR(interp->codec_error_registry);
+ Py_CLEAR(interp->modules);
+ Py_CLEAR(interp->modules_by_index);
+ Py_CLEAR(interp->sysdict);
+ Py_CLEAR(interp->builtins);
+ Py_CLEAR(interp->builtins_copy);
+ Py_CLEAR(interp->importlib);
+ Py_CLEAR(interp->import_func);
+}
+
+
+static void
+zapthreads(PyInterpreterState *interp)
+{
+ PyThreadState *p;
+ /* No need to lock the mutex here because this should only happen
+ when the threads are all really dead (XXX famous last words). */
+ while ((p = interp->tstate_head) != NULL) {
+ PyThreadState_Delete(p);
+ }
+}
+
+
+void
+PyInterpreterState_Delete(PyInterpreterState *interp)
+{
+ PyInterpreterState **p;
+ __PyCodeExtraState **pextra;
+ __PyCodeExtraState* extra;
+ zapthreads(interp);
+ HEAD_LOCK();
+ for (p = &interp_head; /* N/A */; p = &(*p)->next) {
+ if (*p == NULL)
+ Py_FatalError(
+ "PyInterpreterState_Delete: invalid interp");
+ if (*p == interp)
+ break;
+ }
+ if (interp->tstate_head != NULL)
+ Py_FatalError("PyInterpreterState_Delete: remaining threads");
+ *p = interp->next;
+
+ for (pextra = &coextra_head; ; pextra = &(*pextra)->next) {
+ if (*pextra == NULL)
+ Py_FatalError(
+ "PyInterpreterState_Delete: invalid extra");
+ extra = *pextra;
+ if (extra->interp == interp) {
+ *pextra = extra->next;
+ PyMem_RawFree(extra);
+ break;
+ }
+ }
+ HEAD_UNLOCK();
+ PyMem_RawFree(interp);
+#ifdef WITH_THREAD
+ if (interp_head == NULL && head_mutex != NULL) {
+ PyThread_free_lock(head_mutex);
+ head_mutex = NULL;
+ }
+#endif
+}
+
+
+/* Default implementation for _PyThreadState_GetFrame */
+static struct _frame *
+threadstate_getframe(PyThreadState *self)
+{
+ return self->frame;
+}
+
+static PyThreadState *
+new_threadstate(PyInterpreterState *interp, int init)
+{
+ PyThreadState *tstate = (PyThreadState *)PyMem_RawMalloc(sizeof(PyThreadState));
+
+ if (_PyThreadState_GetFrame == NULL)
+ _PyThreadState_GetFrame = threadstate_getframe;
+
+ if (tstate != NULL) {
+ tstate->interp = interp;
+
+ tstate->frame = NULL;
+ tstate->recursion_depth = 0;
+ tstate->overflowed = 0;
+ tstate->recursion_critical = 0;
+ tstate->tracing = 0;
+ tstate->use_tracing = 0;
+ tstate->gilstate_counter = 0;
+ tstate->async_exc = NULL;
+#ifdef WITH_THREAD
+ tstate->thread_id = PyThread_get_thread_ident();
+#else
+ tstate->thread_id = 0;
+#endif
+
+ tstate->dict = NULL;
+
+ tstate->curexc_type = NULL;
+ tstate->curexc_value = NULL;
+ tstate->curexc_traceback = NULL;
+
+ tstate->exc_type = NULL;
+ tstate->exc_value = NULL;
+ tstate->exc_traceback = NULL;
+
+ tstate->c_profilefunc = NULL;
+ tstate->c_tracefunc = NULL;
+ tstate->c_profileobj = NULL;
+ tstate->c_traceobj = NULL;
+
+ tstate->trash_delete_nesting = 0;
+ tstate->trash_delete_later = NULL;
+ tstate->on_delete = NULL;
+ tstate->on_delete_data = NULL;
+
+ tstate->coroutine_wrapper = NULL;
+ tstate->in_coroutine_wrapper = 0;
+
+ tstate->async_gen_firstiter = NULL;
+ tstate->async_gen_finalizer = NULL;
+
+ if (init)
+ _PyThreadState_Init(tstate);
+
+ HEAD_LOCK();
+ tstate->prev = NULL;
+ tstate->next = interp->tstate_head;
+ if (tstate->next)
+ tstate->next->prev = tstate;
+ interp->tstate_head = tstate;
+ HEAD_UNLOCK();
+ }
+
+ return tstate;
+}
+
+PyThreadState *
+PyThreadState_New(PyInterpreterState *interp)
+{
+ return new_threadstate(interp, 1);
+}
+
+PyThreadState *
+_PyThreadState_Prealloc(PyInterpreterState *interp)
+{
+ return new_threadstate(interp, 0);
+}
+
+void
+_PyThreadState_Init(PyThreadState *tstate)
+{
+#ifdef WITH_THREAD
+ _PyGILState_NoteThreadState(tstate);
+#endif
+}
+
+PyObject*
+PyState_FindModule(struct PyModuleDef* module)
+{
+ Py_ssize_t index = module->m_base.m_index;
+ PyInterpreterState *state = GET_INTERP_STATE();
+ PyObject *res;
+ if (module->m_slots) {
+ return NULL;
+ }
+ if (index == 0)
+ return NULL;
+ if (state->modules_by_index == NULL)
+ return NULL;
+ if (index >= PyList_GET_SIZE(state->modules_by_index))
+ return NULL;
+ res = PyList_GET_ITEM(state->modules_by_index, index);
+ return res==Py_None ? NULL : res;
+}
+
+int
+_PyState_AddModule(PyObject* module, struct PyModuleDef* def)
+{
+ PyInterpreterState *state;
+ if (!def) {
+ assert(PyErr_Occurred());
+ return -1;
+ }
+ if (def->m_slots) {
+ PyErr_SetString(PyExc_SystemError,
+ "PyState_AddModule called on module with slots");
+ return -1;
+ }
+ state = GET_INTERP_STATE();
+ if (!state->modules_by_index) {
+ state->modules_by_index = PyList_New(0);
+ if (!state->modules_by_index)
+ return -1;
+ }
+ while(PyList_GET_SIZE(state->modules_by_index) <= def->m_base.m_index)
+ if (PyList_Append(state->modules_by_index, Py_None) < 0)
+ return -1;
+ Py_INCREF(module);
+ return PyList_SetItem(state->modules_by_index,
+ def->m_base.m_index, module);
+}
+
+int
+PyState_AddModule(PyObject* module, struct PyModuleDef* def)
+{
+ Py_ssize_t index;
+ PyInterpreterState *state = GET_INTERP_STATE();
+ if (!def) {
+ Py_FatalError("PyState_AddModule: Module Definition is NULL");
+ return -1;
+ }
+ index = def->m_base.m_index;
+ if (state->modules_by_index) {
+ if(PyList_GET_SIZE(state->modules_by_index) >= index) {
+ if(module == PyList_GET_ITEM(state->modules_by_index, index)) {
+ Py_FatalError("PyState_AddModule: Module already added!");
+ return -1;
+ }
+ }
+ }
+ return _PyState_AddModule(module, def);
+}
+
+int
+PyState_RemoveModule(struct PyModuleDef* def)
+{
+ PyInterpreterState *state;
+ Py_ssize_t index = def->m_base.m_index;
+ if (def->m_slots) {
+ PyErr_SetString(PyExc_SystemError,
+ "PyState_RemoveModule called on module with slots");
+ return -1;
+ }
+ state = GET_INTERP_STATE();
+ if (index == 0) {
+ Py_FatalError("PyState_RemoveModule: Module index invalid.");
+ return -1;
+ }
+ if (state->modules_by_index == NULL) {
+        Py_FatalError("PyState_RemoveModule: Interpreters module-list not accessible.");
+ return -1;
+ }
+ if (index > PyList_GET_SIZE(state->modules_by_index)) {
+ Py_FatalError("PyState_RemoveModule: Module index out of bounds.");
+ return -1;
+ }
+ Py_INCREF(Py_None);
+ return PyList_SetItem(state->modules_by_index, index, Py_None);
+}
+
+/* used by import.c:PyImport_Cleanup */
+void
+_PyState_ClearModules(void)
+{
+ PyInterpreterState *state = GET_INTERP_STATE();
+ if (state->modules_by_index) {
+ Py_ssize_t i;
+ for (i = 0; i < PyList_GET_SIZE(state->modules_by_index); i++) {
+ PyObject *m = PyList_GET_ITEM(state->modules_by_index, i);
+ if (PyModule_Check(m)) {
+ /* cleanup the saved copy of module dicts */
+ PyModuleDef *md = PyModule_GetDef(m);
+ if (md)
+ Py_CLEAR(md->m_base.m_copy);
+ }
+ }
+ /* Setting modules_by_index to NULL could be dangerous, so we
+ clear the list instead. */
+ if (PyList_SetSlice(state->modules_by_index,
+ 0, PyList_GET_SIZE(state->modules_by_index),
+ NULL))
+ PyErr_WriteUnraisable(state->modules_by_index);
+ }
+}
+
+void
+PyThreadState_Clear(PyThreadState *tstate)
+{
+ if (Py_VerboseFlag && tstate->frame != NULL)
+ fprintf(stderr,
+ "PyThreadState_Clear: warning: thread still has a frame\n");
+
+ Py_CLEAR(tstate->frame);
+
+ Py_CLEAR(tstate->dict);
+ Py_CLEAR(tstate->async_exc);
+
+ Py_CLEAR(tstate->curexc_type);
+ Py_CLEAR(tstate->curexc_value);
+ Py_CLEAR(tstate->curexc_traceback);
+
+ Py_CLEAR(tstate->exc_type);
+ Py_CLEAR(tstate->exc_value);
+ Py_CLEAR(tstate->exc_traceback);
+
+ tstate->c_profilefunc = NULL;
+ tstate->c_tracefunc = NULL;
+ Py_CLEAR(tstate->c_profileobj);
+ Py_CLEAR(tstate->c_traceobj);
+
+ Py_CLEAR(tstate->coroutine_wrapper);
+ Py_CLEAR(tstate->async_gen_firstiter);
+ Py_CLEAR(tstate->async_gen_finalizer);
+}
+
+
+/* Common code for PyThreadState_Delete() and PyThreadState_DeleteCurrent() */
+static void
+tstate_delete_common(PyThreadState *tstate)
+{
+ PyInterpreterState *interp;
+ if (tstate == NULL)
+ Py_FatalError("PyThreadState_Delete: NULL tstate");
+ interp = tstate->interp;
+ if (interp == NULL)
+ Py_FatalError("PyThreadState_Delete: NULL interp");
+ HEAD_LOCK();
+ if (tstate->prev)
+ tstate->prev->next = tstate->next;
+ else
+ interp->tstate_head = tstate->next;
+ if (tstate->next)
+ tstate->next->prev = tstate->prev;
+ HEAD_UNLOCK();
+ if (tstate->on_delete != NULL) {
+ tstate->on_delete(tstate->on_delete_data);
+ }
+ PyMem_RawFree(tstate);
+}
+
+
+void
+PyThreadState_Delete(PyThreadState *tstate)
+{
+ if (tstate == GET_TSTATE())
+ Py_FatalError("PyThreadState_Delete: tstate is still current");
+#ifdef WITH_THREAD
+ if (autoInterpreterState && PyThread_get_key_value(autoTLSkey) == tstate)
+ PyThread_delete_key_value(autoTLSkey);
+#endif /* WITH_THREAD */
+ tstate_delete_common(tstate);
+}
+
+
+#ifdef WITH_THREAD
+void
+PyThreadState_DeleteCurrent()
+{
+ PyThreadState *tstate = GET_TSTATE();
+ if (tstate == NULL)
+ Py_FatalError(
+ "PyThreadState_DeleteCurrent: no current tstate");
+ tstate_delete_common(tstate);
+ if (autoInterpreterState && PyThread_get_key_value(autoTLSkey) == tstate)
+ PyThread_delete_key_value(autoTLSkey);
+ SET_TSTATE(NULL);
+ PyEval_ReleaseLock();
+}
+#endif /* WITH_THREAD */
+
+
+/*
+ * Delete all thread states except the one passed as argument.
+ * Note that, if there is a current thread state, it *must* be the one
+ * passed as argument. Also, this won't touch any other interpreters
+ * than the current one, since we don't know which thread state should
+ * be kept in those other interpreters.
+ */
+void
+_PyThreadState_DeleteExcept(PyThreadState *tstate)
+{
+ PyInterpreterState *interp = tstate->interp;
+ PyThreadState *p, *next, *garbage;
+ HEAD_LOCK();
+ /* Remove all thread states, except tstate, from the linked list of
+ thread states. This will allow calling PyThreadState_Clear()
+ without holding the lock. */
+ garbage = interp->tstate_head;
+ if (garbage == tstate)
+ garbage = tstate->next;
+ if (tstate->prev)
+ tstate->prev->next = tstate->next;
+ if (tstate->next)
+ tstate->next->prev = tstate->prev;
+ tstate->prev = tstate->next = NULL;
+ interp->tstate_head = tstate;
+ HEAD_UNLOCK();
+ /* Clear and deallocate all stale thread states. Even if this
+ executes Python code, we should be safe since it executes
+ in the current thread, not one of the stale threads. */
+ for (p = garbage; p; p = next) {
+ next = p->next;
+ PyThreadState_Clear(p);
+ PyMem_RawFree(p);
+ }
+}
+
+
+PyThreadState *
+_PyThreadState_UncheckedGet(void)
+{
+ return GET_TSTATE();
+}
+
+
+PyThreadState *
+PyThreadState_Get(void)
+{
+ PyThreadState *tstate = GET_TSTATE();
+ if (tstate == NULL)
+ Py_FatalError("PyThreadState_Get: no current thread");
+
+ return tstate;
+}
+
+
+PyThreadState *
+PyThreadState_Swap(PyThreadState *newts)
+{
+ PyThreadState *oldts = GET_TSTATE();
+
+ SET_TSTATE(newts);
+ /* It should not be possible for more than one thread state
+ to be used for a thread. Check this the best we can in debug
+ builds.
+ */
+#if defined(Py_DEBUG) && defined(WITH_THREAD)
+ if (newts) {
+ /* This can be called from PyEval_RestoreThread(). Similar
+ to it, we need to ensure errno doesn't change.
+ */
+ int err = errno;
+ PyThreadState *check = PyGILState_GetThisThreadState();
+ if (check && check->interp == newts->interp && check != newts)
+ Py_FatalError("Invalid thread state for this thread");
+ errno = err;
+ }
+#endif
+ return oldts;
+}
+
+__PyCodeExtraState*
+__PyCodeExtraState_Get(void) {
+ PyInterpreterState* interp = PyThreadState_Get()->interp;
+
+ HEAD_LOCK();
+ for (__PyCodeExtraState* cur = coextra_head; cur != NULL; cur = cur->next) {
+ if (cur->interp == interp) {
+ HEAD_UNLOCK();
+ return cur;
+ }
+ }
+ HEAD_UNLOCK();
+
+ Py_FatalError("__PyCodeExtraState_Get: no code state for interpreter");
+ return NULL;
+}
+
+/* An extension mechanism to store arbitrary additional per-thread state.
+ PyThreadState_GetDict() returns a dictionary that can be used to hold such
+ state; the caller should pick a unique key and store its state there. If
+ PyThreadState_GetDict() returns NULL, an exception has *not* been raised
+ and the caller should assume no per-thread state is available. */
+
+PyObject *
+PyThreadState_GetDict(void)
+{
+ PyThreadState *tstate = GET_TSTATE();
+ if (tstate == NULL)
+ return NULL;
+
+ if (tstate->dict == NULL) {
+ PyObject *d;
+ tstate->dict = d = PyDict_New();
+ if (d == NULL)
+ PyErr_Clear();
+ }
+ return tstate->dict;
+}
+
+
+/* Asynchronously raise an exception in a thread.
+ Requested by Just van Rossum and Alex Martelli.
+ To prevent naive misuse, you must write your own extension
+ to call this, or use ctypes. Must be called with the GIL held.
+ Returns the number of tstates modified (normally 1, but 0 if `id` didn't
+ match any known thread id). Can be called with exc=NULL to clear an
+ existing async exception. This raises no exceptions. */
+
+int
+PyThreadState_SetAsyncExc(long id, PyObject *exc) {
+ PyInterpreterState *interp = GET_INTERP_STATE();
+ PyThreadState *p;
+
+ /* Although the GIL is held, a few C API functions can be called
+ * without the GIL held, and in particular some that create and
+ * destroy thread and interpreter states. Those can mutate the
+ * list of thread states we're traversing, so to prevent that we lock
+ * head_mutex for the duration.
+ */
+ HEAD_LOCK();
+ for (p = interp->tstate_head; p != NULL; p = p->next) {
+ if (p->thread_id == id) {
+ /* Tricky: we need to decref the current value
+ * (if any) in p->async_exc, but that can in turn
+ * allow arbitrary Python code to run, including
+ * perhaps calls to this function. To prevent
+ * deadlock, we need to release head_mutex before
+ * the decref.
+ */
+ PyObject *old_exc = p->async_exc;
+ Py_XINCREF(exc);
+ p->async_exc = exc;
+ HEAD_UNLOCK();
+ Py_XDECREF(old_exc);
+ _PyEval_SignalAsyncExc();
+ return 1;
+ }
+ }
+ HEAD_UNLOCK();
+ return 0;
+}
+
+
+/* Routines for advanced debuggers, requested by David Beazley.
+ Don't use unless you know what you are doing! */
+
+PyInterpreterState *
+PyInterpreterState_Head(void)
+{
+ return interp_head;
+}
+
+PyInterpreterState *
+PyInterpreterState_Next(PyInterpreterState *interp) {
+ return interp->next;
+}
+
+PyThreadState *
+PyInterpreterState_ThreadHead(PyInterpreterState *interp) {
+ return interp->tstate_head;
+}
+
+PyThreadState *
+PyThreadState_Next(PyThreadState *tstate) {
+ return tstate->next;
+}
+
+/* The implementation of sys._current_frames(). This is intended to be
+ called with the GIL held, as it will be when called via
+ sys._current_frames(). It's possible it would work fine even without
+   the GIL held, but we haven't thought enough about that.
+*/
+PyObject *
+_PyThread_CurrentFrames(void)
+{
+ PyObject *result;
+ PyInterpreterState *i;
+
+ result = PyDict_New();
+ if (result == NULL)
+ return NULL;
+
+ /* for i in all interpreters:
+ * for t in all of i's thread states:
+ * if t's frame isn't NULL, map t's id to its frame
+ * Because these lists can mutate even when the GIL is held, we
+ * need to grab head_mutex for the duration.
+ */
+ HEAD_LOCK();
+ for (i = interp_head; i != NULL; i = i->next) {
+ PyThreadState *t;
+ for (t = i->tstate_head; t != NULL; t = t->next) {
+ PyObject *id;
+ int stat;
+ struct _frame *frame = t->frame;
+ if (frame == NULL)
+ continue;
+ id = PyLong_FromLong(t->thread_id);
+ if (id == NULL)
+ goto Fail;
+ stat = PyDict_SetItem(result, id, (PyObject *)frame);
+ Py_DECREF(id);
+ if (stat < 0)
+ goto Fail;
+ }
+ }
+ HEAD_UNLOCK();
+ return result;
+
+ Fail:
+ HEAD_UNLOCK();
+ Py_DECREF(result);
+ return NULL;
+}
+
+/* Python "auto thread state" API. */
+#ifdef WITH_THREAD
+
+/* Keep this as a static, as it is not reliable! It can only
+ ever be compared to the state for the *current* thread.
+ * If not equal, then it doesn't matter that the actual
+ value may change immediately after comparison, as it can't
+ possibly change to the current thread's state.
+ * If equal, then the current thread holds the lock, so the value can't
+ change until we yield the lock.
+*/
+static int
+PyThreadState_IsCurrent(PyThreadState *tstate)
+{
+ /* Must be the tstate for this thread */
+ assert(PyGILState_GetThisThreadState()==tstate);
+ return tstate == GET_TSTATE();
+}
+
+/* Internal initialization/finalization functions called by
+ Py_Initialize/Py_FinalizeEx
+*/
+void
+_PyGILState_Init(PyInterpreterState *i, PyThreadState *t)
+{
+ assert(i && t); /* must init with valid states */
+ autoTLSkey = PyThread_create_key();
+ if (autoTLSkey == -1)
+ Py_FatalError("Could not allocate TLS entry");
+ autoInterpreterState = i;
+ assert(PyThread_get_key_value(autoTLSkey) == NULL);
+ assert(t->gilstate_counter == 0);
+
+ _PyGILState_NoteThreadState(t);
+}
+
+PyInterpreterState *
+_PyGILState_GetInterpreterStateUnsafe(void)
+{
+ return autoInterpreterState;
+}
+
+void
+_PyGILState_Fini(void)
+{
+ PyThread_delete_key(autoTLSkey);
+ autoTLSkey = -1;
+ autoInterpreterState = NULL;
+}
+
+/* Reset the TLS key - called by PyOS_AfterFork().
+ * This should not be necessary, but some - buggy - pthread implementations
+ * don't reset TLS upon fork(), see issue #10517.
+ */
+void
+_PyGILState_Reinit(void)
+{
+#ifdef WITH_THREAD
+ head_mutex = NULL;
+ HEAD_INIT();
+#endif
+ PyThreadState *tstate = PyGILState_GetThisThreadState();
+ PyThread_delete_key(autoTLSkey);
+ if ((autoTLSkey = PyThread_create_key()) == -1)
+ Py_FatalError("Could not allocate TLS entry");
+
+ /* If the thread had an associated auto thread state, reassociate it with
+ * the new key. */
+ if (tstate && PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
+ Py_FatalError("Couldn't create autoTLSkey mapping");
+}
+
+/* When a thread state is created for a thread by some mechanism other than
+ PyGILState_Ensure, it's important that the GILState machinery knows about
+ it so it doesn't try to create another thread state for the thread (this is
+ a better fix for SF bug #1010677 than the first one attempted).
+*/
+static void
+_PyGILState_NoteThreadState(PyThreadState* tstate)
+{
+ /* If autoTLSkey isn't initialized, this must be the very first
+ threadstate created in Py_Initialize(). Don't do anything for now
+ (we'll be back here when _PyGILState_Init is called). */
+ if (!autoInterpreterState)
+ return;
+
+ /* Stick the thread state for this thread in thread local storage.
+
+ The only situation where you can legitimately have more than one
+ thread state for an OS level thread is when there are multiple
+ interpreters.
+
+ You shouldn't really be using the PyGILState_ APIs anyway (see issues
+ #10915 and #15751).
+
+ The first thread state created for that given OS level thread will
+ "win", which seems reasonable behaviour.
+ */
+ if (PyThread_get_key_value(autoTLSkey) == NULL) {
+ if (PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
+ Py_FatalError("Couldn't create autoTLSkey mapping");
+ }
+
+ /* PyGILState_Release must not try to delete this thread state. */
+ tstate->gilstate_counter = 1;
+}
+
+/* The public functions */
+PyThreadState *
+PyGILState_GetThisThreadState(void)
+{
+ if (autoInterpreterState == NULL)
+ return NULL;
+ return (PyThreadState *)PyThread_get_key_value(autoTLSkey);
+}
+
+int
+PyGILState_Check(void)
+{
+ PyThreadState *tstate;
+
+ if (!_PyGILState_check_enabled)
+ return 1;
+
+ if (autoTLSkey == -1)
+ return 1;
+
+ tstate = GET_TSTATE();
+ if (tstate == NULL)
+ return 0;
+
+ return (tstate == PyGILState_GetThisThreadState());
+}
+
+PyGILState_STATE
+PyGILState_Ensure(void)
+{
+ int current;
+ PyThreadState *tcur;
+ int need_init_threads = 0;
+
+ /* Note that we do not auto-init Python here - apart from
+ potential races with 2 threads auto-initializing, pep-311
+ spells out other issues. Embedders are expected to have
+ called Py_Initialize() and usually PyEval_InitThreads().
+ */
+ assert(autoInterpreterState); /* Py_Initialize() hasn't been called! */
+ tcur = (PyThreadState *)PyThread_get_key_value(autoTLSkey);
+ if (tcur == NULL) {
+ need_init_threads = 1;
+
+ /* Create a new thread state for this thread */
+ tcur = PyThreadState_New(autoInterpreterState);
+ if (tcur == NULL)
+ Py_FatalError("Couldn't create thread-state for new thread");
+ /* This is our thread state! We'll need to delete it in the
+ matching call to PyGILState_Release(). */
+ tcur->gilstate_counter = 0;
+ current = 0; /* new thread state is never current */
+ }
+ else {
+ current = PyThreadState_IsCurrent(tcur);
+ }
+
+ if (current == 0) {
+ PyEval_RestoreThread(tcur);
+ }
+
+ /* Update our counter in the thread-state - no need for locks:
+ - tcur will remain valid as we hold the GIL.
+ - the counter is safe as we are the only thread "allowed"
+ to modify this value
+ */
+ ++tcur->gilstate_counter;
+
+ if (need_init_threads) {
+ /* At startup, Python has no concrete GIL. If PyGILState_Ensure() is
+           called from a new thread for the first time, we need to create the
+ GIL. */
+ PyEval_InitThreads();
+ }
+
+ return current ? PyGILState_LOCKED : PyGILState_UNLOCKED;
+}
+
+void
+PyGILState_Release(PyGILState_STATE oldstate)
+{
+ PyThreadState *tcur = (PyThreadState *)PyThread_get_key_value(
+ autoTLSkey);
+ if (tcur == NULL)
+ Py_FatalError("auto-releasing thread-state, "
+ "but no thread-state for this thread");
+ /* We must hold the GIL and have our thread state current */
+ /* XXX - remove the check - the assert should be fine,
+ but while this is very new (April 2003), the extra check
+ by release-only users can't hurt.
+ */
+ if (! PyThreadState_IsCurrent(tcur))
+ Py_FatalError("This thread state must be current when releasing");
+ assert(PyThreadState_IsCurrent(tcur));
+ --tcur->gilstate_counter;
+ assert(tcur->gilstate_counter >= 0); /* illegal counter value */
+
+ /* If we're going to destroy this thread-state, we must
+ * clear it while the GIL is held, as destructors may run.
+ */
+ if (tcur->gilstate_counter == 0) {
+ /* can't have been locked when we created it */
+ assert(oldstate == PyGILState_UNLOCKED);
+ PyThreadState_Clear(tcur);
+ /* Delete the thread-state. Note this releases the GIL too!
+ * It's vital that the GIL be held here, to avoid shutdown
+ * races; see bugs 225673 and 1061968 (that nasty bug has a
+ * habit of coming back).
+ */
+ PyThreadState_DeleteCurrent();
+ }
+ /* Release the lock if necessary */
+ else if (oldstate == PyGILState_UNLOCKED)
+ PyEval_SaveThread();
+}
+
+#endif /* WITH_THREAD */
+
+#ifdef __cplusplus
+}
+#endif
+
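
PyThreadState_SetAsyncExc() above is meant to be driven from an extension module (or via ctypes), as its comment notes. The sketch below shows the intended call pattern; the helper name is illustrative, it assumes the GIL is already held by the caller, and it is not part of this patch.

    #include <Python.h>

    /* Ask the thread identified by thread_id to raise KeyboardInterrupt at
       its next bytecode boundary. Returns 1 if a matching thread state was
       found, 0 otherwise. Must be called with the GIL held. */
    static int
    interrupt_thread(long thread_id)
    {
        return PyThreadState_SetAsyncExc(thread_id, PyExc_KeyboardInterrupt);
    }

    /* Passing NULL instead of an exception object clears a pending, not yet
       delivered, async exception:  PyThreadState_SetAsyncExc(thread_id, NULL); */
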
+
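
The PyGILState implementation above is what threads not created by Python use to call back into the interpreter. The following sketch shows the intended Ensure/Release pattern from such a native thread; it assumes Py_Initialize() and PyEval_InitThreads() have already run, and the callback itself is illustrative rather than part of this patch.

    #include <Python.h>

    /* Illustrative callback running on a thread that Python did not create. */
    static void
    native_thread_callback(void)
    {
        /* Acquire the GIL and an auto thread state for this OS thread; one is
           created on first use, as PyGILState_Ensure() above shows. */
        PyGILState_STATE gstate = PyGILState_Ensure();

        PyRun_SimpleString("print('hello from a native thread')");

        /* Balance the Ensure(): drops the GIL and, when the counter reaches
           zero, deletes the auto thread state again. */
        PyGILState_Release(gstate);
    }
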
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
new file mode 100644
index 00000000..0dedf035
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
@@ -0,0 +1,749 @@
+/** @file
+ Time related functions
+
+ Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
+ This program and the accompanying materials are licensed and made available under
+ the terms and conditions of the BSD License that accompanies this distribution.
+ The full text of the license may be found at
+ http://opensource.org/licenses/bsd-license.
+
+ THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+**/
+
+#include "Python.h"
+#ifdef MS_WINDOWS
+#include <windows.h>
+#endif
+
+#if defined(__APPLE__)
+#include <mach/mach_time.h> /* mach_absolute_time(), mach_timebase_info() */
+#endif
+
+#define _PyTime_check_mul_overflow(a, b) \
+ (assert(b > 0), \
+ (_PyTime_t)(a) < _PyTime_MIN / (_PyTime_t)(b) \
+ || _PyTime_MAX / (_PyTime_t)(b) < (_PyTime_t)(a))
+
+/* To millisecond (10^-3) */
+#define SEC_TO_MS 1000
+
+/* To microseconds (10^-6) */
+#define MS_TO_US 1000
+#define SEC_TO_US (SEC_TO_MS * MS_TO_US)
+
+/* To nanoseconds (10^-9) */
+#define US_TO_NS 1000
+#define MS_TO_NS (MS_TO_US * US_TO_NS)
+#define SEC_TO_NS (SEC_TO_MS * MS_TO_NS)
+
+/* Conversion from nanoseconds */
+#define NS_TO_MS (1000 * 1000)
+#define NS_TO_US (1000)
+
+static void
+error_time_t_overflow(void)
+{
+ PyErr_SetString(PyExc_OverflowError,
+ "timestamp out of range for platform time_t");
+}
+
+time_t
+_PyLong_AsTime_t(PyObject *obj)
+{
+#if SIZEOF_TIME_T == SIZEOF_LONG_LONG
+ long long val;
+ val = PyLong_AsLongLong(obj);
+#else
+ long val;
+ Py_BUILD_ASSERT(sizeof(time_t) <= sizeof(long));
+ val = PyLong_AsLong(obj);
+#endif
+ if (val == -1 && PyErr_Occurred()) {
+ if (PyErr_ExceptionMatches(PyExc_OverflowError))
+ error_time_t_overflow();
+ return -1;
+ }
+ return (time_t)val;
+}
+
+PyObject *
+_PyLong_FromTime_t(time_t t)
+{
+#if SIZEOF_TIME_T == SIZEOF_LONG_LONG
+ return PyLong_FromLongLong((long long)t);
+#else
+ Py_BUILD_ASSERT(sizeof(time_t) <= sizeof(long));
+ return PyLong_FromLong((long)t);
+#endif
+}
+
+/* Round to nearest with ties going to nearest even integer
+ (_PyTime_ROUND_HALF_EVEN) */
+static double
+_PyTime_RoundHalfEven(double x)
+{
+ double rounded = round(x);
+ if (fabs(x-rounded) == 0.5)
+ /* halfway case: round to even */
+ rounded = 2.0*round(x/2.0);
+ return rounded;
+}
+
+static double
+_PyTime_Round(double x, _PyTime_round_t round)
+{
+ /* volatile avoids optimization changing how numbers are rounded */
+ volatile double d;
+
+ d = x;
+ if (round == _PyTime_ROUND_HALF_EVEN){
+ d = _PyTime_RoundHalfEven(d);
+ }
+ else if (round == _PyTime_ROUND_CEILING){
+ d = ceil(d);
+ }
+ else if (round == _PyTime_ROUND_FLOOR) {
+ d = floor(d);
+ }
+ else {
+ assert(round == _PyTime_ROUND_UP);
+ d = (d >= 0.0) ? ceil(d) : floor(d);
+ }
+ return d;
+}
+
+static int
+_PyTime_DoubleToDenominator(double d, time_t *sec, long *numerator,
+ double denominator, _PyTime_round_t round)
+{
+ double intpart;
+ /* volatile avoids optimization changing how numbers are rounded */
+ volatile double floatpart;
+
+ floatpart = modf(d, &intpart);
+
+ floatpart *= denominator;
+ floatpart = _PyTime_Round(floatpart, round);
+ if (floatpart >= denominator) {
+ floatpart -= denominator;
+ intpart += 1.0;
+ }
+ else if (floatpart < 0) {
+ floatpart += denominator;
+ intpart -= 1.0;
+ }
+ assert(0.0 <= floatpart && floatpart < denominator);
+
+ if (!_Py_InIntegralTypeRange(time_t, intpart)) {
+ error_time_t_overflow();
+ return -1;
+ }
+ *sec = (time_t)intpart;
+ *numerator = (long)floatpart;
+
+ return 0;
+}
+
+static int
+_PyTime_ObjectToDenominator(PyObject *obj, time_t *sec, long *numerator,
+ double denominator, _PyTime_round_t round)
+{
+ assert(denominator <= (double)LONG_MAX);
+
+ if (PyFloat_Check(obj)) {
+ double d = PyFloat_AsDouble(obj);
+ if (Py_IS_NAN(d)) {
+ *numerator = 0;
+ PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
+ return -1;
+ }
+ return _PyTime_DoubleToDenominator(d, sec, numerator,
+ denominator, round);
+ }
+ else {
+ *sec = _PyLong_AsTime_t(obj);
+ *numerator = 0;
+ if (*sec == (time_t)-1 && PyErr_Occurred())
+ return -1;
+ return 0;
+ }
+}
+
+int
+_PyTime_ObjectToTime_t(PyObject *obj, time_t *sec, _PyTime_round_t round)
+{
+ if (PyFloat_Check(obj)) {
+ double intpart;
+ /* volatile avoids optimization changing how numbers are rounded */
+ volatile double d;
+
+ d = PyFloat_AsDouble(obj);
+ if (Py_IS_NAN(d)) {
+ PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
+ return -1;
+ }
+
+ d = _PyTime_Round(d, round);
+ (void)modf(d, &intpart);
+
+ if (!_Py_InIntegralTypeRange(time_t, intpart)) {
+ error_time_t_overflow();
+ return -1;
+ }
+ *sec = (time_t)intpart;
+ return 0;
+ }
+ else {
+ *sec = _PyLong_AsTime_t(obj);
+ if (*sec == (time_t)-1 && PyErr_Occurred())
+ return -1;
+ return 0;
+ }
+}
+
+int
+_PyTime_ObjectToTimespec(PyObject *obj, time_t *sec, long *nsec,
+ _PyTime_round_t round)
+{
+ int res;
+ res = _PyTime_ObjectToDenominator(obj, sec, nsec, 1e9, round);
+ if (res == 0) {
+ assert(0 <= *nsec && *nsec < SEC_TO_NS);
+ }
+ return res;
+}
+
+int
+_PyTime_ObjectToTimeval(PyObject *obj, time_t *sec, long *usec,
+ _PyTime_round_t round)
+{
+ int res;
+ res = _PyTime_ObjectToDenominator(obj, sec, usec, 1e6, round);
+ if (res == 0) {
+ assert(0 <= *usec && *usec < SEC_TO_US);
+ }
+ return res;
+}
+
+static void
+_PyTime_overflow(void)
+{
+ PyErr_SetString(PyExc_OverflowError,
+ "timestamp too large to convert to C _PyTime_t");
+}
+
+_PyTime_t
+_PyTime_FromSeconds(int seconds)
+{
+ _PyTime_t t;
+ t = (_PyTime_t)seconds;
+ /* ensure that integer overflow cannot happen, int type should have 32
+ bits, whereas _PyTime_t type has at least 64 bits (SEC_TO_MS takes 30
+ bits). */
+ Py_BUILD_ASSERT(INT_MAX <= _PyTime_MAX / SEC_TO_NS);
+ Py_BUILD_ASSERT(INT_MIN >= _PyTime_MIN / SEC_TO_NS);
+ assert((t >= 0 && t <= _PyTime_MAX / SEC_TO_NS)
+ || (t < 0 && t >= _PyTime_MIN / SEC_TO_NS));
+ t *= SEC_TO_NS;
+ return t;
+}
+
+_PyTime_t
+_PyTime_FromNanoseconds(long long ns)
+{
+ _PyTime_t t;
+ Py_BUILD_ASSERT(sizeof(long long) <= sizeof(_PyTime_t));
+ t = Py_SAFE_DOWNCAST(ns, long long, _PyTime_t);
+ return t;
+}
+
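+/* Unconditional copy of _PyTime_FromTimespec(): pymonotonic() below calls it
+   even when HAVE_CLOCK_GETTIME is undefined. Note that it duplicates the
+   definition inside the #ifdef HAVE_CLOCK_GETTIME block further down, so the
+   two would collide if HAVE_CLOCK_GETTIME were ever defined for this build. */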
+static int
+_PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts, int raise)
+{
+ _PyTime_t t;
+ int res = 0;
+
+ Py_BUILD_ASSERT(sizeof(ts->tv_sec) <= sizeof(_PyTime_t));
+ t = (_PyTime_t)ts->tv_sec;
+
+ if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
+ if (raise)
+ _PyTime_overflow();
+ res = -1;
+ }
+ t = t * SEC_TO_NS;
+
+ t += ts->tv_nsec;
+
+ *tp = t;
+ return res;
+}
+
+
+#ifdef HAVE_CLOCK_GETTIME
+static int
+_PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts, int raise)
+{
+ _PyTime_t t;
+ int res = 0;
+
+ Py_BUILD_ASSERT(sizeof(ts->tv_sec) <= sizeof(_PyTime_t));
+ t = (_PyTime_t)ts->tv_sec;
+
+ if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
+ if (raise)
+ _PyTime_overflow();
+ res = -1;
+ }
+ t = t * SEC_TO_NS;
+
+ t += ts->tv_nsec;
+
+ *tp = t;
+ return res;
+}
+#elif !defined(MS_WINDOWS)
+static int
+_PyTime_FromTimeval(_PyTime_t *tp, struct timeval *tv, int raise)
+{
+ _PyTime_t t;
+ int res = 0;
+
+ Py_BUILD_ASSERT(sizeof(tv->tv_sec) <= sizeof(_PyTime_t));
+ t = (_PyTime_t)tv->tv_sec;
+
+ if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
+ if (raise)
+ _PyTime_overflow();
+ res = -1;
+ }
+ t = t * SEC_TO_NS;
+
+ t += (_PyTime_t)tv->tv_usec * US_TO_NS;
+
+ *tp = t;
+ return res;
+}
+#endif
+
+static int
+_PyTime_FromFloatObject(_PyTime_t *t, double value, _PyTime_round_t round,
+ long unit_to_ns)
+{
+ /* volatile avoids optimization changing how numbers are rounded */
+ volatile double d;
+
+ /* convert to a number of nanoseconds */
+ d = value;
+ d *= (double)unit_to_ns;
+ d = _PyTime_Round(d, round);
+
+ if (!_Py_InIntegralTypeRange(_PyTime_t, d)) {
+ _PyTime_overflow();
+ return -1;
+ }
+ *t = (_PyTime_t)d;
+ return 0;
+}
+
+static int
+_PyTime_FromObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round,
+ long unit_to_ns)
+{
+ if (PyFloat_Check(obj)) {
+ double d;
+ d = PyFloat_AsDouble(obj);
+ if (Py_IS_NAN(d)) {
+ PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
+ return -1;
+ }
+ return _PyTime_FromFloatObject(t, d, round, unit_to_ns);
+ }
+ else {
+ long long sec;
+ Py_BUILD_ASSERT(sizeof(long long) <= sizeof(_PyTime_t));
+
+ sec = PyLong_AsLongLong(obj);
+ if (sec == -1 && PyErr_Occurred()) {
+ if (PyErr_ExceptionMatches(PyExc_OverflowError))
+ _PyTime_overflow();
+ return -1;
+ }
+
+ if (_PyTime_check_mul_overflow(sec, unit_to_ns)) {
+ _PyTime_overflow();
+ return -1;
+ }
+ *t = sec * unit_to_ns;
+ return 0;
+ }
+}
+
+int
+_PyTime_FromSecondsObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round)
+{
+ return _PyTime_FromObject(t, obj, round, SEC_TO_NS);
+}
+
+int
+_PyTime_FromMillisecondsObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round)
+{
+ return _PyTime_FromObject(t, obj, round, MS_TO_NS);
+}
+
+double
+_PyTime_AsSecondsDouble(_PyTime_t t)
+{
+ /* volatile avoids optimization changing how numbers are rounded */
+ volatile double d;
+
+ if (t % SEC_TO_NS == 0) {
+ _PyTime_t secs;
+ /* Divide using integers to avoid rounding issues on the integer part.
+ 1e-9 cannot be stored exactly in IEEE 64-bit. */
+ secs = t / SEC_TO_NS;
+ d = (double)secs;
+ }
+ else {
+ d = (double)t;
+ d /= 1e9;
+ }
+ return d;
+}
+
+PyObject *
+_PyTime_AsNanosecondsObject(_PyTime_t t)
+{
+ Py_BUILD_ASSERT(sizeof(long long) >= sizeof(_PyTime_t));
+ return PyLong_FromLongLong((long long)t);
+}
+
+static _PyTime_t
+_PyTime_Divide(const _PyTime_t t, const _PyTime_t k,
+ const _PyTime_round_t round)
+{
+ assert(k > 1);
+ if (round == _PyTime_ROUND_HALF_EVEN) {
+ _PyTime_t x, r, abs_r;
+ x = t / k;
+ r = t % k;
+ abs_r = Py_ABS(r);
+ if (abs_r > k / 2 || (abs_r == k / 2 && (Py_ABS(x) & 1))) {
+ if (t >= 0)
+ x++;
+ else
+ x--;
+ }
+ return x;
+ }
+ else if (round == _PyTime_ROUND_CEILING) {
+ if (t >= 0){
+ return (t + k - 1) / k;
+ }
+ else{
+ return t / k;
+ }
+ }
+ else if (round == _PyTime_ROUND_FLOOR){
+ if (t >= 0) {
+ return t / k;
+ }
+ else{
+ return (t - (k - 1)) / k;
+ }
+ }
+ else {
+ assert(round == _PyTime_ROUND_UP);
+ if (t >= 0) {
+ return (t + k - 1) / k;
+ }
+ else {
+ return (t - (k - 1)) / k;
+ }
+ }
+}
+
+_PyTime_t
+_PyTime_AsMilliseconds(_PyTime_t t, _PyTime_round_t round)
+{
+ return _PyTime_Divide(t, NS_TO_MS, round);
+}
+
+_PyTime_t
+_PyTime_AsMicroseconds(_PyTime_t t, _PyTime_round_t round)
+{
+ return _PyTime_Divide(t, NS_TO_US, round);
+}
+
+static int
+_PyTime_AsTimeval_impl(_PyTime_t t, _PyTime_t *p_secs, int *p_us,
+ _PyTime_round_t round)
+{
+ _PyTime_t secs, ns;
+ int usec;
+ int res = 0;
+
+ secs = t / SEC_TO_NS;
+ ns = t % SEC_TO_NS;
+
+ usec = (int)_PyTime_Divide(ns, US_TO_NS, round);
+ if (usec < 0) {
+ usec += SEC_TO_US;
+ if (secs != _PyTime_MIN)
+ secs -= 1;
+ else
+ res = -1;
+ }
+ else if (usec >= SEC_TO_US) {
+ usec -= SEC_TO_US;
+ if (secs != _PyTime_MAX)
+ secs += 1;
+ else
+ res = -1;
+ }
+ assert(0 <= usec && usec < SEC_TO_US);
+
+ *p_secs = secs;
+ *p_us = usec;
+
+ return res;
+}
+
+static int
+_PyTime_AsTimevalStruct_impl(_PyTime_t t, struct timeval *tv,
+ _PyTime_round_t round, int raise)
+{
+ _PyTime_t secs, secs2;
+ int us;
+ int res;
+
+ res = _PyTime_AsTimeval_impl(t, &secs, &us, round);
+
+#ifdef MS_WINDOWS
+ tv->tv_sec = (long)secs;
+#else
+ tv->tv_sec = secs;
+#endif
+ tv->tv_usec = us;
+
+ secs2 = (_PyTime_t)tv->tv_sec;
+ if (res < 0 || secs2 != secs) {
+ if (raise)
+ error_time_t_overflow();
+ return -1;
+ }
+ return 0;
+}
+
+int
+_PyTime_AsTimeval(_PyTime_t t, struct timeval *tv, _PyTime_round_t round)
+{
+ return _PyTime_AsTimevalStruct_impl(t, tv, round, 1);
+}
+
+int
+_PyTime_AsTimeval_noraise(_PyTime_t t, struct timeval *tv, _PyTime_round_t round)
+{
+ return _PyTime_AsTimevalStruct_impl(t, tv, round, 0);
+}
+
+int
+_PyTime_AsTimevalTime_t(_PyTime_t t, time_t *p_secs, int *us,
+ _PyTime_round_t round)
+{
+ _PyTime_t secs;
+ int res;
+
+ res = _PyTime_AsTimeval_impl(t, &secs, us, round);
+
+ *p_secs = secs;
+
+ if (res < 0 || (_PyTime_t)*p_secs != secs) {
+ error_time_t_overflow();
+ return -1;
+ }
+ return 0;
+}
+
+
+#if defined(HAVE_CLOCK_GETTIME) || defined(HAVE_KQUEUE)
+int
+_PyTime_AsTimespec(_PyTime_t t, struct timespec *ts)
+{
+ _PyTime_t secs, nsec;
+
+ secs = t / SEC_TO_NS;
+ nsec = t % SEC_TO_NS;
+ if (nsec < 0) {
+ nsec += SEC_TO_NS;
+ secs -= 1;
+ }
+ ts->tv_sec = (time_t)secs;
+ assert(0 <= nsec && nsec < SEC_TO_NS);
+ ts->tv_nsec = nsec;
+
+ if ((_PyTime_t)ts->tv_sec != secs) {
+ error_time_t_overflow();
+ return -1;
+ }
+ return 0;
+}
+#endif
+
+static int
+pygettimeofday(_PyTime_t *tp, _Py_clock_info_t *info, int raise)
+{
+ int err;
+ struct timeval tv;
+
+ assert(info == NULL || raise);
+ err = gettimeofday(&tv, (struct timezone *)NULL);
+ if (err) {
+ if (raise)
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ if (_PyTime_FromTimeval(tp, &tv, raise) < 0)
+ return -1;
+
+ if (info) {
+ info->implementation = "gettimeofday()";
+ info->resolution = 1e-6;
+ info->monotonic = 0;
+ info->adjustable = 1;
+ }
+ return 0;
+}
+
+_PyTime_t
+_PyTime_GetSystemClock(void)
+{
+ _PyTime_t t;
+ if (pygettimeofday(&t, NULL, 0) < 0) {
+ /* should not happen, _PyTime_Init() checked the clock at startup */
+ assert(0);
+
+ /* use a fixed value instead of a random value from the stack */
+ t = 0;
+ }
+ return t;
+}
+
+int
+_PyTime_GetSystemClockWithInfo(_PyTime_t *t, _Py_clock_info_t *info)
+{
+ return pygettimeofday(t, info, 1);
+}
+
+static int
+pymonotonic(_PyTime_t *tp, _Py_clock_info_t *info, int raise)
+{
+    /* Nothing below fills in ts, so zero-initialize it to avoid converting
+       uninitialized stack data. */
+    struct timespec ts = {0, 0};
+
+ assert(info == NULL || raise);
+
+ if (info) {
+ info->implementation = "gettimeofday()";
+ info->resolution = 1e-6;
+ info->monotonic = 0;
+ info->adjustable = 1;
+ }
+
+ if (_PyTime_FromTimespec(tp, &ts, raise) < 0)
+ return -1;
+ return 0;
+}
+
+_PyTime_t
+_PyTime_GetMonotonicClock(void)
+{
+ _PyTime_t t;
+ if (pymonotonic(&t, NULL, 0) < 0) {
+ /* should not happen, _PyTime_Init() checked that monotonic clock at
+ startup */
+ assert(0);
+
+ /* use a fixed value instead of a random value from the stack */
+ t = 0;
+ }
+ return t;
+}
+
+int
+_PyTime_GetMonotonicClockWithInfo(_PyTime_t *tp, _Py_clock_info_t *info)
+{
+ return pymonotonic(tp, info, 1);
+}
+
+int
+_PyTime_Init(void)
+{
+ _PyTime_t t;
+
+ /* ensure that the system clock works */
+ if (_PyTime_GetSystemClockWithInfo(&t, NULL) < 0)
+ return -1;
+
+ /* ensure that the operating system provides a monotonic clock */
+ if (_PyTime_GetMonotonicClockWithInfo(&t, NULL) < 0)
+ return -1;
+
+ return 0;
+}
+
+int
+_PyTime_localtime(time_t t, struct tm *tm)
+{
+#ifdef MS_WINDOWS
+ int error;
+
+ error = localtime_s(tm, &t);
+ if (error != 0) {
+ errno = error;
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ return 0;
+#else /* !MS_WINDOWS */
+ struct tm *temp = NULL;
+ if ((temp = localtime(&t)) == NULL) {
+#ifdef EINVAL
+ if (errno == 0)
+ errno = EINVAL;
+#endif
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ *tm = *temp;
+ return 0;
+#endif /* MS_WINDOWS */
+}
+
+int
+_PyTime_gmtime(time_t t, struct tm *tm)
+{
+#ifdef MS_WINDOWS
+ int error;
+
+ error = gmtime_s(tm, &t);
+ if (error != 0) {
+ errno = error;
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ return 0;
+#else /* !MS_WINDOWS */
+ struct tm *temp = NULL;
+ if ((temp = gmtime(&t)) == NULL) {
+#ifdef EINVAL
+ if (errno == 0)
+ errno = EINVAL;
+#endif
+ PyErr_SetFromErrno(PyExc_OSError);
+ return -1;
+ }
+ *tm = *temp;
+ return 0;
+#endif /* MS_WINDOWS */
+}
diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
new file mode 100644
index 00000000..73c756a0
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
@@ -0,0 +1,636 @@
+#include "Python.h"
+#ifdef MS_WINDOWS
+# include <windows.h>
+/* All sample MSDN wincrypt programs include the header below. It is at least
+ * required with MinGW. */
+# include <wincrypt.h>
+#else
+# include <fcntl.h>
+# ifdef HAVE_SYS_STAT_H
+# include <sys/stat.h>
+# endif
+# ifdef HAVE_LINUX_RANDOM_H
+# include <linux/random.h>
+# endif
+# if defined(HAVE_SYS_RANDOM_H) && (defined(HAVE_GETRANDOM) || defined(HAVE_GETENTROPY))
+# include <sys/random.h>
+# endif
+# if !defined(HAVE_GETRANDOM) && defined(HAVE_GETRANDOM_SYSCALL)
+# include <sys/syscall.h>
+# endif
+#endif
+
+#ifdef _Py_MEMORY_SANITIZER
+# include <sanitizer/msan_interface.h>
+#endif
+
+#ifdef Py_DEBUG
+int _Py_HashSecret_Initialized = 0;
+#else
+static int _Py_HashSecret_Initialized = 0;
+#endif
+
+#ifdef MS_WINDOWS
+static HCRYPTPROV hCryptProv = 0;
+
+static int
+win32_urandom_init(int raise)
+{
+ /* Acquire context */
+ if (!CryptAcquireContext(&hCryptProv, NULL, NULL,
+ PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
+ goto error;
+
+ return 0;
+
+error:
+ if (raise) {
+ PyErr_SetFromWindowsErr(0);
+ }
+ return -1;
+}
+
+/* Fill buffer with size pseudo-random bytes generated by the Windows CryptoGen
+ API. Return 0 on success, or raise an exception and return -1 on error. */
+static int
+win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
+{
+ Py_ssize_t chunk;
+
+ if (hCryptProv == 0)
+ {
+ if (win32_urandom_init(raise) == -1) {
+ return -1;
+ }
+ }
+
+ while (size > 0)
+ {
+ chunk = size > INT_MAX ? INT_MAX : size;
+ if (!CryptGenRandom(hCryptProv, (DWORD)chunk, buffer))
+ {
+ /* CryptGenRandom() failed */
+ if (raise) {
+ PyErr_SetFromWindowsErr(0);
+ }
+ return -1;
+ }
+ buffer += chunk;
+ size -= chunk;
+ }
+ return 0;
+}
+
+#else /* !MS_WINDOWS */
+
+#if defined(HAVE_GETRANDOM) || defined(HAVE_GETRANDOM_SYSCALL)
+#define PY_GETRANDOM 1
+
+/* Call getrandom() to get random bytes:
+
+ - Return 1 on success
+ - Return 0 if getrandom() is not available (failed with ENOSYS or EPERM),
+ or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom not
+ initialized yet) and raise=0.
+ - Raise an exception (if raise is non-zero) and return -1 on error:
+ if getrandom() failed with EINTR, raise is non-zero and the Python signal
+ handler raised an exception, or if getrandom() failed with a different
+ error.
+
+ getrandom() is retried if it failed with EINTR: interrupted by a signal. */
+static int
+py_getrandom(void *buffer, Py_ssize_t size, int blocking, int raise)
+{
+ /* Is getrandom() supported by the running kernel? Set to 0 if getrandom()
+ failed with ENOSYS or EPERM. Need Linux kernel 3.17 or newer, or Solaris
+ 11.3 or newer */
+ static int getrandom_works = 1;
+ int flags;
+ char *dest;
+ long n;
+
+ if (!getrandom_works) {
+ return 0;
+ }
+
+ flags = blocking ? 0 : GRND_NONBLOCK;
+ dest = buffer;
+ while (0 < size) {
+#ifdef sun
+ /* Issue #26735: On Solaris, getrandom() is limited to returning up
+ to 1024 bytes. Call it multiple times if more bytes are
+ requested. */
+ n = Py_MIN(size, 1024);
+#else
+ n = Py_MIN(size, LONG_MAX);
+#endif
+
+ errno = 0;
+#ifdef HAVE_GETRANDOM
+ if (raise) {
+ Py_BEGIN_ALLOW_THREADS
+ n = getrandom(dest, n, flags);
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ n = getrandom(dest, n, flags);
+ }
+#else
+ /* On Linux, use the syscall() function because the GNU libc doesn't
+ expose the Linux getrandom() syscall yet. See:
+ https://sourceware.org/bugzilla/show_bug.cgi?id=17252 */
+ if (raise) {
+ Py_BEGIN_ALLOW_THREADS
+ n = syscall(SYS_getrandom, dest, n, flags);
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ n = syscall(SYS_getrandom, dest, n, flags);
+ }
+# ifdef _Py_MEMORY_SANITIZER
+ if (n > 0) {
+ __msan_unpoison(dest, n);
+ }
+# endif
+#endif
+
+ if (n < 0) {
+ /* ENOSYS: the syscall is not supported by the kernel.
+ EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
+ or something else. */
+ if (errno == ENOSYS || errno == EPERM) {
+ getrandom_works = 0;
+ return 0;
+ }
+
+ /* getrandom(GRND_NONBLOCK) fails with EAGAIN if the system urandom
+               is not initialized yet. For _PyRandom_Init(), we ignore the
+ error and fall back on reading /dev/urandom which never blocks,
+ even if the system urandom is not initialized yet:
+ see the PEP 524. */
+ if (errno == EAGAIN && !raise && !blocking) {
+ return 0;
+ }
+
+ if (errno == EINTR) {
+ if (raise) {
+ if (PyErr_CheckSignals()) {
+ return -1;
+ }
+ }
+
+ /* retry getrandom() if it was interrupted by a signal */
+ continue;
+ }
+
+ if (raise) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ }
+ return -1;
+ }
+
+ dest += n;
+ size -= n;
+ }
+ return 1;
+}
+
+#elif defined(HAVE_GETENTROPY)
+#define PY_GETENTROPY 1
+
+/* Fill buffer with size pseudo-random bytes generated by getentropy():
+
+ - Return 1 on success
+ - Return 0 if getentropy() syscall is not available (failed with ENOSYS or
+ EPERM).
+ - Raise an exception (if raise is non-zero) and return -1 on error:
+ if getentropy() failed with EINTR, raise is non-zero and the Python signal
+ handler raised an exception, or if getentropy() failed with a different
+ error.
+
+ getentropy() is retried if it failed with EINTR: interrupted by a signal. */
+static int
+py_getentropy(char *buffer, Py_ssize_t size, int raise)
+{
+ /* Is getentropy() supported by the running kernel? Set to 0 if
+ getentropy() failed with ENOSYS or EPERM. */
+ static int getentropy_works = 1;
+
+ if (!getentropy_works) {
+ return 0;
+ }
+
+ while (size > 0) {
+ /* getentropy() is limited to returning up to 256 bytes. Call it
+ multiple times if more bytes are requested. */
+ Py_ssize_t len = Py_MIN(size, 256);
+ int res;
+
+ if (raise) {
+ Py_BEGIN_ALLOW_THREADS
+ res = getentropy(buffer, len);
+ Py_END_ALLOW_THREADS
+ }
+ else {
+ res = getentropy(buffer, len);
+ }
+
+ if (res < 0) {
+ /* ENOSYS: the syscall is not supported by the running kernel.
+ EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
+ or something else. */
+ if (errno == ENOSYS || errno == EPERM) {
+ getentropy_works = 0;
+ return 0;
+ }
+
+ if (errno == EINTR) {
+ if (raise) {
+ if (PyErr_CheckSignals()) {
+ return -1;
+ }
+ }
+
+ /* retry getentropy() if it was interrupted by a signal */
+ continue;
+ }
+
+ if (raise) {
+ PyErr_SetFromErrno(PyExc_OSError);
+ }
+ return -1;
+ }
+
+ buffer += len;
+ size -= len;
+ }
+ return 1;
+}
+#endif /* defined(HAVE_GETENTROPY) */
+
+
+#if !defined(MS_WINDOWS) && !defined(__VMS)
+
+static struct {
+ int fd;
+#ifdef HAVE_STRUCT_STAT_ST_DEV
+ dev_t st_dev;
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_INO
+ ino_t st_ino;
+#endif
+} urandom_cache = { -1 };
+
+/* Read random bytes from the /dev/urandom device:
+
+ - Return 0 on success
+ - Raise an exception (if raise is non-zero) and return -1 on error
+
+ Possible causes of errors:
+
+ - open() failed with ENOENT, ENXIO, ENODEV, EACCES: the /dev/urandom device
+ was not found. For example, it was removed manually or not exposed in a
+ chroot or container.
+ - open() failed with a different error
+ - fstat() failed
+ - read() failed or returned 0
+
+ read() is retried if it failed with EINTR: interrupted by a signal.
+
+ The file descriptor of the device is kept open between calls to avoid using
+ many file descriptors when run in parallel from multiple threads:
+ see the issue #18756.
+
+ st_dev and st_ino fields of the file descriptor (from fstat()) are cached to
+ check if the file descriptor was replaced by a different file (which is
+ likely a bug in the application): see the issue #21207.
+
+ If the file descriptor was closed or replaced, open a new file descriptor
+ but don't close the old file descriptor: it probably points to something
+ important for some third-party code. */
+static int
+dev_urandom(char *buffer, Py_ssize_t size, int raise)
+{
+ int fd;
+ Py_ssize_t n;
+
+ if (raise) {
+ struct _Py_stat_struct st;
+ int fstat_result;
+
+ if (urandom_cache.fd >= 0) {
+ Py_BEGIN_ALLOW_THREADS
+ fstat_result = _Py_fstat_noraise(urandom_cache.fd, &st);
+ Py_END_ALLOW_THREADS
+
+ /* Does the fd point to the same thing as before? (issue #21207) */
+ if (fstat_result
+#ifdef HAVE_STRUCT_STAT_ST_DEV
+ || st.st_dev != urandom_cache.st_dev
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_INO
+ || st.st_ino != urandom_cache.st_ino
+#endif
+ )
+ {
+ /* Something changed: forget the cached fd (but don't close it,
+ since it probably points to something important for some
+ third-party code). */
+ urandom_cache.fd = -1;
+ }
+ }
+ if (urandom_cache.fd >= 0)
+ fd = urandom_cache.fd;
+ else {
+ fd = _Py_open("/dev/urandom", O_RDONLY);
+ if (fd < 0) {
+ if (errno == ENOENT || errno == ENXIO ||
+ errno == ENODEV || errno == EACCES) {
+ PyErr_SetString(PyExc_NotImplementedError,
+ "/dev/urandom (or equivalent) not found");
+ }
+ /* otherwise, keep the OSError exception raised by _Py_open() */
+ return -1;
+ }
+ if (urandom_cache.fd >= 0) {
+ /* urandom_fd was initialized by another thread while we were
+ not holding the GIL, keep it. */
+ close(fd);
+ fd = urandom_cache.fd;
+ }
+ else {
+ if (_Py_fstat(fd, &st)) {
+ close(fd);
+ return -1;
+ }
+ else {
+ urandom_cache.fd = fd;
+#ifdef HAVE_STRUCT_STAT_ST_DEV
+ urandom_cache.st_dev = st.st_dev;
+#endif
+#ifdef HAVE_STRUCT_STAT_ST_INO
+ urandom_cache.st_ino = st.st_ino;
+#endif
+ }
+ }
+ }
+
+ do {
+ n = _Py_read(fd, buffer, (size_t)size);
+ if (n == -1)
+ return -1;
+ if (n == 0) {
+ PyErr_Format(PyExc_RuntimeError,
+ "Failed to read %zi bytes from /dev/urandom",
+ size);
+ return -1;
+ }
+
+ buffer += n;
+ size -= n;
+ } while (0 < size);
+ }
+ else {
+ fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
+ if (fd < 0) {
+ return -1;
+ }
+
+ while (0 < size)
+ {
+ do {
+ n = read(fd, buffer, (size_t)size);
+ } while (n < 0 && errno == EINTR);
+
+ if (n <= 0) {
+ /* stop on error or if read(size) returned 0 */
+ close(fd);
+ return -1;
+ }
+
+ buffer += n;
+ size -= n;
+ }
+ close(fd);
+ }
+ return 0;
+}
+
+static void
+dev_urandom_close(void)
+{
+ if (urandom_cache.fd >= 0) {
+ close(urandom_cache.fd);
+ urandom_cache.fd = -1;
+ }
+}
+#endif /* !MS_WINDOWS && !__VMS */
+
+
+/* Fill buffer with pseudo-random bytes generated by a linear congruential
+ generator (LCG):
+
+ x(n+1) = (x(n) * 214013 + 2531011) % 2^32
+
+ Use bits 23..16 of x(n) to generate a byte. */
+static void
+lcg_urandom(unsigned int x0, unsigned char *buffer, size_t size)
+{
+ size_t index;
+ unsigned int x;
+
+ x = x0;
+ for (index=0; index < size; index++) {
+ x *= 214013;
+ x += 2531011;
+ /* modulo 2 ^ (8 * sizeof(int)) */
+ buffer[index] = (x >> 16) & 0xff;
+ }
+}
+
+/* Read random bytes:
+
+ - Return 0 on success
+ - Raise an exception (if raise is non-zero) and return -1 on error
+
+ Used sources of entropy ordered by preference, preferred source first:
+
+ - CryptGenRandom() on Windows
+ - getrandom() function (ex: Linux and Solaris): call py_getrandom()
+ - getentropy() function (ex: OpenBSD): call py_getentropy()
+ - /dev/urandom device
+
+ Read from the /dev/urandom device if getrandom() or getentropy() function
+ is not available or does not work.
+
+ Prefer getrandom() over getentropy() because getrandom() supports blocking
+ and non-blocking mode: see the PEP 524. Python requires non-blocking RNG at
+ startup to initialize its hash secret, but os.urandom() must block until the
+ system urandom is initialized (at least on Linux 3.17 and newer).
+
+ Prefer getrandom() and getentropy() over reading directly /dev/urandom
+ because these functions don't need file descriptors and so avoid ENFILE or
+ EMFILE errors (too many open files): see the issue #18756.
+
+ Only the getrandom() function supports non-blocking mode.
+
+   Only use RNGs running in the kernel. They are more secure because it is
+   harder to get the internal state of a RNG running in kernel space than of
+   one running in user space. The kernel also has direct access to hardware
+   RNGs, which are used as entropy sources.
+
+   Note: the OpenSSL RAND_pseudo_bytes() function does not automatically reseed
+   its RNG on fork(), so two child processes (with the same pid) can generate
+   the same random numbers: see issue #18747. Kernel RNGs don't have this
+   issue; they have access to good quality entropy sources.
+
+ If raise is zero:
+
+ - Don't raise an exception on error
+ - Don't call the Python signal handler (don't call PyErr_CheckSignals()) if
+ a function fails with EINTR: retry directly the interrupted function
+ - Don't release the GIL to call functions.
+*/
+static int
+pyurandom(void *buffer, Py_ssize_t size, int blocking, int raise)
+{
+#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
+ int res;
+#endif
+
+ if (size < 0) {
+ if (raise) {
+ PyErr_Format(PyExc_ValueError,
+ "negative argument not allowed");
+ }
+ return -1;
+ }
+
+ if (size == 0) {
+ return 0;
+ }
+
+#ifdef MS_WINDOWS
+ return win32_urandom((unsigned char *)buffer, size, raise);
+#else
+
+#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
+#ifdef PY_GETRANDOM
+ res = py_getrandom(buffer, size, blocking, raise);
+#else
+ res = py_getentropy(buffer, size, raise);
+#endif
+ if (res < 0) {
+ return -1;
+ }
+ if (res == 1) {
+ return 0;
+ }
+ /* getrandom() or getentropy() function is not available: failed with
+ ENOSYS or EPERM. Fall back on reading from /dev/urandom. */
+#endif
+
+ return dev_urandom(buffer, size, raise);
+#endif
+}
+
+/* Fill buffer with size pseudo-random bytes from the operating system random
+ number generator (RNG). It is suitable for most cryptographic purposes
+   except long-lived private keys for asymmetric encryption.
+
+ On Linux 3.17 and newer, the getrandom() syscall is used in blocking mode:
+ block until the system urandom entropy pool is initialized (128 bits are
+ collected by the kernel).
+
+ Return 0 on success. Raise an exception and return -1 on error. */
+int
+_PyOS_URandom(void *buffer, Py_ssize_t size)
+{
+ return pyurandom(buffer, size, 1, 1);
+}
+
+/* Fill buffer with size pseudo-random bytes from the operating system random
+ number generator (RNG). It is not suitable for cryptographic purpose.
+
+ On Linux 3.17 and newer (when getrandom() syscall is used), if the system
+ urandom is not initialized yet, the function returns "weak" entropy read
+ from /dev/urandom.
+
+ Return 0 on success. Raise an exception and return -1 on error. */
+int
+_PyOS_URandomNonblock(void *buffer, Py_ssize_t size)
+{
+ return pyurandom(buffer, size, 0, 1);
+}
+
+void
+_PyRandom_Init(void)
+{
+
+ char *env;
+ unsigned char *secret = (unsigned char *)&_Py_HashSecret.uc;
+ Py_ssize_t secret_size = sizeof(_Py_HashSecret_t);
+ Py_BUILD_ASSERT(sizeof(_Py_HashSecret_t) == sizeof(_Py_HashSecret.uc));
+
+ if (_Py_HashSecret_Initialized)
+ return;
+ _Py_HashSecret_Initialized = 1;
+
+ /*
+ Hash randomization is enabled. Generate a per-process secret,
+ using PYTHONHASHSEED if provided.
+ */
+
+ env = Py_GETENV("PYTHONHASHSEED");
+ if (env && *env != '\0' && strcmp(env, "random") != 0) {
+ char *endptr = env;
+ unsigned long seed;
+ seed = strtoul(env, &endptr, 10);
+ if (*endptr != '\0'
+ || seed > 4294967295UL
+ || (errno == ERANGE && seed == ULONG_MAX))
+ {
+ Py_FatalError("PYTHONHASHSEED must be \"random\" or an integer "
+ "in range [0; 4294967295]");
+ }
+ if (seed == 0) {
+ /* disable the randomized hash */
+ memset(secret, 0, secret_size);
+ Py_HashRandomizationFlag = 0;
+ }
+ else {
+ lcg_urandom(seed, secret, secret_size);
+ Py_HashRandomizationFlag = 1;
+ }
+ }
+ else {
+ int res;
+
+ /* _PyRandom_Init() is called very early in the Python initialization
+ and so exceptions cannot be used (use raise=0).
+
+ _PyRandom_Init() must not block Python initialization: call
+           pyurandom() in non-blocking mode (blocking=0): see the PEP 524. */
+ res = pyurandom(secret, secret_size, 0, 0);
+ if (res < 0) {
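+            /* Unlike upstream CPython, a failed entropy read is tolerated
+               here: _Py_HashSecret is a zero-initialized global, so hashing
+               still works, just without effective randomization. */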
+ //Py_FatalError("failed to get random numbers to initialize Python");
+ }
+ Py_HashRandomizationFlag = 1;
+ }
+
+}
+
+void
+_PyRandom_Fini(void)
+{
+#ifdef MS_WINDOWS
+ if (hCryptProv) {
+ CryptReleaseContext(hCryptProv, 0);
+ hCryptProv = 0;
+ }
+#else
+ dev_urandom_close();
+#endif
+}
+
+#endif /* MS_WINDOWS */
\ No newline at end of file
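
The PYTHONHASHSEED branch above feeds the seed through lcg_urandom(), i.e. the linear congruential generator x(n+1) = (x(n) * 214013 + 2531011) mod 2^32, taking bits 23..16 of each state as one output byte. The standalone restatement below (illustration only, not part of this patch) prints the bytes a given seed expands to, which is how a fixed PYTHONHASHSEED yields a reproducible hash secret.

    #include <stdio.h>

    /* Restatement of lcg_urandom() above. */
    static void
    lcg_bytes(unsigned int x0, unsigned char *buffer, size_t size)
    {
        unsigned int x = x0;
        for (size_t i = 0; i < size; i++) {
            x = x * 214013u + 2531011u;    /* wraps mod 2^(8 * sizeof(int)) */
            buffer[i] = (x >> 16) & 0xff;  /* bits 23..16 */
        }
    }

    int
    main(void)
    {
        unsigned char secret[8];
        lcg_bytes(42u, secret, sizeof secret);   /* e.g. PYTHONHASHSEED=42 */
        for (size_t i = 0; i < sizeof secret; i++)
            printf("%02x ", secret[i]);
        printf("\n");
        return 0;
    }
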
diff --git a/AppPkg/Applications/Python/Python-3.6.8/Python368.inf b/AppPkg/Applications/Python/Python-3.6.8/Python368.inf
new file mode 100644
index 00000000..d2e6e734
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/Python368.inf
@@ -0,0 +1,275 @@
+## @file
+# Python368.inf - Module information (INF) file for building Python 3.6.8 as a UEFI Shell application.
+#
+# Copyright (c) 2011-2021, Intel Corporation. All rights reserved.<BR>
+# This program and the accompanying materials
+# are licensed and made available under the terms and conditions of the BSD License
+# which accompanies this distribution. The full text of the license may be found at
+# http://opensource.org/licenses/bsd-license.
+#
+# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
+# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+#
+##
+
+[Defines]
+ INF_VERSION = 0x00010016
+ BASE_NAME = Python368
+ FILE_GUID = 9DA30E98-094C-4FF0-94CB-81C10E69F750
+ MODULE_TYPE = UEFI_APPLICATION
+ VERSION_STRING = 0.1
+ ENTRY_POINT = ShellCEntryLib
+
+ DEFINE PYTHON_VERSION = 3.6.8
+
+#
+# VALID_ARCHITECTURES = X64
+#
+
+[Packages]
+ StdLib/StdLib.dec
+ MdePkg/MdePkg.dec
+ MdeModulePkg/MdeModulePkg.dec
+
+[LibraryClasses]
+ UefiLib
+ DebugLib
+ LibC
+ LibString
+ LibStdio
+ LibMath
+ LibWchar
+ LibGen
+ LibNetUtil
+ DevMedia
+ #BsdSocketLib
+ #EfiSocketLib
+
+[FixedPcd]
+ gEfiMdePkgTokenSpaceGuid.PcdDebugPropertyMask|0x0F
+ gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel|0x80000040
+
+[Sources]
+#Parser
+ Parser/acceler.c
+ Parser/bitset.c
+ Parser/firstsets.c
+ Parser/grammar.c
+ Parser/grammar1.c
+ Parser/listnode.c
+ Parser/metagrammar.c
+ Parser/myreadline.c
+ Parser/node.c
+ Parser/parser.c
+ Parser/parsetok.c
+ Parser/tokenizer.c
+
+#Python
+ PyMod-$(PYTHON_VERSION)/Python/bltinmodule.c
+ PyMod-$(PYTHON_VERSION)/Python/getcopyright.c
+ PyMod-$(PYTHON_VERSION)/Python/marshal.c
+ PyMod-$(PYTHON_VERSION)/Python/random.c
+ PyMod-$(PYTHON_VERSION)/Python/fileutils.c
+ PyMod-$(PYTHON_VERSION)/Python/pytime.c
+ PyMod-$(PYTHON_VERSION)/Python/pylifecycle.c
+ PyMod-$(PYTHON_VERSION)/Python/pyhash.c
+ PyMod-$(PYTHON_VERSION)/Python/pystate.c
+
+ Python/_warnings.c
+ Python/asdl.c
+ Python/ast.c
+ Python/ceval.c
+ Python/codecs.c
+ Python/compile.c
+ Python/dtoa.c
+ Python/dynload_stub.c
+ Python/errors.c
+ Python/formatter_unicode.c
+ Python/frozen.c
+ Python/future.c
+ Python/getargs.c
+ Python/getcompiler.c
+ Python/getopt.c
+ Python/getplatform.c
+ Python/getversion.c
+ Python/graminit.c
+ Python/import.c
+ Python/importdl.c
+ Python/modsupport.c
+ Python/mysnprintf.c
+ Python/mystrtoul.c
+ Python/peephole.c
+ Python/pyarena.c
+ Python/pyctype.c
+ Python/pyfpe.c
+ Python/pymath.c
+ Python/pystrcmp.c
+ Python/pystrtod.c
+ Python/Python-ast.c
+ Python/pythonrun.c
+ Python/structmember.c
+ Python/symtable.c
+ Python/sysmodule.c
+ Python/traceback.c
+ Python/pystrhex.c
+
+#Objects
+ PyMod-$(PYTHON_VERSION)/Objects/dictobject.c
+ PyMod-$(PYTHON_VERSION)/Objects/memoryobject.c
+ PyMod-$(PYTHON_VERSION)/Objects/object.c
+ PyMod-$(PYTHON_VERSION)/Objects/unicodeobject.c
+
+ Objects/accu.c
+ Objects/abstract.c
+ Objects/boolobject.c
+ Objects/bytesobject.c
+ Objects/bytearrayobject.c
+ Objects/bytes_methods.c
+ Objects/capsule.c
+ Objects/cellobject.c
+ Objects/classobject.c
+ Objects/codeobject.c
+ Objects/complexobject.c
+ Objects/descrobject.c
+ Objects/enumobject.c
+ Objects/exceptions.c
+ Objects/fileobject.c
+ Objects/floatobject.c
+ Objects/frameobject.c
+ Objects/funcobject.c
+ Objects/genobject.c
+ Objects/longobject.c
+ Objects/iterobject.c
+ Objects/listobject.c
+ Objects/methodobject.c
+ Objects/moduleobject.c
+ Objects/obmalloc.c
+ Objects/odictobject.c
+ Objects/rangeobject.c
+ Objects/setobject.c
+ Objects/sliceobject.c
+ Objects/structseq.c
+ Objects/tupleobject.c
+ Objects/typeobject.c
+ Objects/unicodectype.c
+ Objects/weakrefobject.c
+ Objects/namespaceobject.c
+
+ # Mandatory Modules -- These must always be built in.
+ PyMod-$(PYTHON_VERSION)/Modules/config.c
+ PyMod-$(PYTHON_VERSION)/Modules/edk2module.c
+ PyMod-$(PYTHON_VERSION)/Modules/errnomodule.c
+ PyMod-$(PYTHON_VERSION)/Modules/getpath.c
+ PyMod-$(PYTHON_VERSION)/Modules/main.c
+ PyMod-$(PYTHON_VERSION)/Modules/selectmodule.c
+ PyMod-$(PYTHON_VERSION)/Modules/faulthandler.c
+ PyMod-$(PYTHON_VERSION)/Modules/timemodule.c
+
+ Modules/_functoolsmodule.c
+ Modules/gcmodule.c
+ Modules/getbuildinfo.c
+ Programs/python.c
+ Modules/hashtable.c
+ Modules/_stat.c
+ Modules/_opcode.c
+ Modules/_sre.c
+ Modules/_tracemalloc.c
+ Modules/_bisectmodule.c #
+ Modules/_codecsmodule.c #
+ Modules/_collectionsmodule.c #
+ Modules/_csv.c #
+ Modules/_heapqmodule.c #
+ Modules/_json.c #
+ Modules/_localemodule.c #
+ Modules/_math.c #
+ Modules/_randommodule.c #
+ Modules/_struct.c #
+ Modules/_weakref.c #
+ Modules/arraymodule.c #
+ Modules/binascii.c #
+ Modules/cmathmodule.c #
+ Modules/_datetimemodule.c #
+ Modules/itertoolsmodule.c #
+ Modules/mathmodule.c #
+ Modules/md5module.c #
+ Modules/_operator.c #
+ Modules/parsermodule.c #
+ Modules/sha256module.c #
+ Modules/sha512module.c #
+ Modules/sha1module.c #
+ Modules/_blake2/blake2module.c #
+ Modules/_blake2/blake2b_impl.c #
+ Modules/_blake2/blake2s_impl.c #
+ Modules/_sha3/sha3module.c #
+ Modules/signalmodule.c #
+ #Modules/socketmodule.c #
+ Modules/symtablemodule.c #
+ Modules/unicodedata.c #
+ Modules/xxsubtype.c #
+ Modules/zipimport.c #
+ Modules/zlibmodule.c #
+ Modules/_io/_iomodule.c #
+ Modules/_io/bufferedio.c #
+ Modules/_io/bytesio.c #
+ Modules/_io/fileio.c #
+ Modules/_io/iobase.c #
+ Modules/_io/stringio.c #
+ Modules/_io/textio.c #
+
+#Modules/cjkcodecs
+ Modules/cjkcodecs/multibytecodec.c #
+ Modules/cjkcodecs/_codecs_cn.c #
+ Modules/cjkcodecs/_codecs_hk.c #
+ Modules/cjkcodecs/_codecs_iso2022.c #
+ Modules/cjkcodecs/_codecs_jp.c #
+ Modules/cjkcodecs/_codecs_kr.c #
+ Modules/cjkcodecs/_codecs_tw.c #
+
+#Modules/expat
+ Modules/pyexpat.c #
+ Modules/expat/xmlrole.c #
+ Modules/expat/xmltok.c #
+ Modules/expat/xmlparse.c #
+
+#Modules/zlib
+ Modules/zlib/adler32.c #
+ Modules/zlib/compress.c #
+ Modules/zlib/crc32.c #
+ Modules/zlib/deflate.c #
+ Modules/zlib/gzclose.c #
+ Modules/zlib/gzlib.c #
+ Modules/zlib/gzread.c #
+ Modules/zlib/gzwrite.c #
+
+ Modules/zlib/infback.c #
+ Modules/zlib/inffast.c #
+ Modules/zlib/inflate.c #
+ Modules/zlib/inftrees.c #
+ Modules/zlib/trees.c #
+ Modules/zlib/uncompr.c #
+ Modules/zlib/zutil.c #
+
+#Modules/ctypes
+ PyMod-$(PYTHON_VERSION)/Modules/_ctypes/_ctypes.c #
+ Modules/_ctypes/stgdict.c #
+ Modules/_ctypes/libffi_msvc/prep_cif.c #
+ PyMod-$(PYTHON_VERSION)/Modules/_ctypes/malloc_closure.c #
+ PyMod-$(PYTHON_VERSION)/Modules/_ctypes/libffi_msvc/ffi.c #
+ Modules/_ctypes/cfield.c #
+ PyMod-$(PYTHON_VERSION)/Modules/_ctypes/callproc.c #
+ Modules/_ctypes/callbacks.c #
+
+[Sources.IA32]
+ Modules/_ctypes/libffi_msvc/win32.c #
+
+[Sources.X64]
+ Modules/_ctypes/libffi_msvc/win64.asm #
+
+[BuildOptions]
+ MSFT:*_*_*_CC_FLAGS = /GL- /Oi- /wd4018 /wd4054 /wd4055 /wd4101 /wd4131 /wd4152 /wd4204 /wd4210 /wd4244 /wd4267 /wd4305 /wd4310 /wd4389 /wd4701 /wd4702 /wd4706 /wd4456 /wd4312 /wd4457 /wd4459 /wd4474 /wd4476 /I$(WORKSPACE)\AppPkg\Applications\Python\Python-3.6.8\Include /DHAVE_MEMMOVE /DUSE_PYEXPAT_CAPI /DXML_STATIC -D UEFI /WX- /DXML_POOR_ENTROPY /DUEFI_C_SOURCE
+
+[BuildOptions.IA32]
+ MSFT:*_*_*_CC_FLAGS = /DUEFI_MSVC_32
+
+[BuildOptions.X64]
+ MSFT:*_*_*_CC_FLAGS = /DUEFI_MSVC_64
diff --git a/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat b/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
new file mode 100644
index 00000000..6bbdbd9e
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
@@ -0,0 +1,48 @@
+@echo off
+
+set TOOL_CHAIN_TAG=%1
+set TARGET=%2
+set OUT_FOLDER=%3
+if "%TOOL_CHAIN_TAG%"=="" goto usage
+if "%TARGET%"=="" goto usage
+if "%OUT_FOLDER%"=="" goto usage
+goto continue
+
+:usage
+echo.
+echo.
+echo.
+echo Creates Python EFI Package.
+echo.
+echo "Usage: %0 <ToolChain> <Target> <OutFolder>"
+echo.
+echo ToolChain = one of VS2013x86, VS2015x86, VS2017, VS2019
+echo Target = one of RELEASE, DEBUG
+echo OutFolder = Target folder where the package will be created
+echo.
+echo.
+echo.
+
+goto :eof
+
+:continue
+cd ..\..\..\..\
+IF NOT EXIST Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\Python368.efi goto error
+mkdir %OUT_FOLDER%\EFI\Tools
+xcopy Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\Python368.efi %OUT_FOLDER%\EFI\Tools\ /y
+mkdir %OUT_FOLDER%\EFI\StdLib\lib\python36.8
+mkdir %OUT_FOLDER%\EFI\StdLib\etc
+xcopy AppPkg\Applications\Python\Python-3.6.8\Lib\* %OUT_FOLDER%\EFI\StdLib\lib\python36.8\ /Y /S /I
+xcopy StdLib\Efi\StdLib\etc\* %OUT_FOLDER%\EFI\StdLib\etc\ /Y /S /I
+goto all_done
+
+:error
+echo Failed to create the Python 3.6.8 package, Python368.efi is not available at the build location Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\
+
+
+:all_done
+exit /b %ec%
+
+
+
+
diff --git a/AppPkg/Applications/Python/Python-3.6.8/srcprep.py b/AppPkg/Applications/Python/Python-3.6.8/srcprep.py
new file mode 100644
index 00000000..622cea01
--- /dev/null
+++ b/AppPkg/Applications/Python/Python-3.6.8/srcprep.py
@@ -0,0 +1,30 @@
+"""Python module to copy specific file to respective destination folder"""
+import os
+import shutil
+
+def copyDirTree(root_src_dir,root_dst_dir):
+ """
+ Copy directory tree. Overwrites also read only files.
+ :param root_src_dir: source directory
+ :param root_dst_dir: destination directory
+ """
+ for src_dir, dirs, files in os.walk(root_src_dir):
+ dst_dir = src_dir.replace(root_src_dir, root_dst_dir, 1)
+ if not os.path.exists(dst_dir):
+ os.makedirs(dst_dir)
+ for file_ in files:
+ src_file = os.path.join(src_dir, file_)
+ dst_file = os.path.join(dst_dir, file_)
+ if(src_file.__contains__('.h') or src_file.__contains__('.py')):
+ if os.path.exists(dst_file):
+ try:
+ os.remove(dst_file)
+ except PermissionError as exc:
+ os.chmod(dst_file, stat.S_IWUSR)
+ os.remove(dst_file)
+
+ shutil.copy(src_file, dst_dir)
+
+src = r'PyMod-3.6.8'
+dest = os.getcwd()
+copyDirTree(src,dest)
--
2.32.0.windows.1
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8
2021-09-02 17:12 [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
@ 2021-09-02 18:22 ` Michael D Kinney
2021-09-03 1:35 ` Michael D Kinney
2 siblings, 0 replies; 8+ messages in thread
From: Michael D Kinney @ 2021-09-02 18:22 UTC (permalink / raw)
To: devel@edk2.groups.io, Kinney, Michael D; +Cc: Rebecca Cran, Jayaprakash, N
A branch of edk2-libc with both commits is available here for evaluation.
https://github.com/jpshivakavi/edk2-libc/commits/py36_port_for_uefi
Mike
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Michael D Kinney
> Sent: Thursday, September 2, 2021 10:13 AM
> To: devel@edk2.groups.io
> Cc: Rebecca Cran <rebecca@nuviainc.com>; Jayaprakash, N <n.jayaprakash@intel.com>
> Subject: [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8
>
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3588
>
> This patch series contains the modifications required to
> support Python 3.6.8 in the UEFI Shell. Currently supports
> building Py3.6.8 for UEFI with IA32 and X64 architectures using
> VS2017, VS2019 with the latest edk2/master.
>
> There is an additional patch that must be applied first that
> contains the source code from the Python project that is too
> large to send as an email and does not need to be reviewed since
> it is unmodified content from the Python project
> https://github.com/python/cpython/tree/v3.6.8.
>
> https://github.com/jpshivakavi/edk2-libc/tree/py36_base_code_from_python_project
> https://github.com/jpshivakavi/edk2-libc/commit/d9f7b2e5748c382ad988a98bd3e5e4bb2d50c5c0
>
> Cc: Rebecca Cran <rebecca@nuviainc.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Signed-off-by: Jayaprakash N <n.jayaprakash@intel.com>
>
> Jayaprakash Nevara (1):
> AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
>
> AppPkg/AppPkg.dsc | 3 +
> .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
> .../PyMod-3.6.8/Include/fileutils.h | 159 +
> .../Python-3.6.8/PyMod-3.6.8/Include/osdefs.h | 51 +
> .../PyMod-3.6.8/Include/pyconfig.h | 1322 ++
> .../PyMod-3.6.8/Include/pydtrace.h | 74 +
> .../Python-3.6.8/PyMod-3.6.8/Include/pyport.h | 788 +
> .../PyMod-3.6.8/Lib/ctypes/__init__.py | 549 +
> .../PyMod-3.6.8/Lib/genericpath.py | 157 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/glob.py | 110 +
> .../PyMod-3.6.8/Lib/http/client.py | 1481 ++
> .../Lib/importlib/_bootstrap_external.py | 1443 ++
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py | 99 +
> .../PyMod-3.6.8/Lib/logging/__init__.py | 2021 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py | 568 +
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py | 792 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py | 2686 +++
> .../Python-3.6.8/PyMod-3.6.8/Lib/shutil.py | 1160 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/site.py | 529 +
> .../PyMod-3.6.8/Lib/subprocess.py | 1620 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py | 2060 ++
> .../PyMod-3.6.8/Modules/_blake2/impl/blake2.h | 161 +
> .../PyMod-3.6.8/Modules/_ctypes/_ctypes.c | 5623 ++++++
> .../PyMod-3.6.8/Modules/_ctypes/callproc.c | 1871 ++
> .../Modules/_ctypes/ctypes_dlfcn.h | 29 +
> .../Modules/_ctypes/libffi_msvc/ffi.c | 572 +
> .../Modules/_ctypes/libffi_msvc/ffi.h | 331 +
> .../Modules/_ctypes/libffi_msvc/ffi_common.h | 85 +
> .../Modules/_ctypes/malloc_closure.c | 128 +
> .../Python-3.6.8/PyMod-3.6.8/Modules/config.c | 159 +
> .../PyMod-3.6.8/Modules/edk2module.c | 4348 +++++
> .../PyMod-3.6.8/Modules/errnomodule.c | 890 +
> .../PyMod-3.6.8/Modules/faulthandler.c | 1414 ++
> .../PyMod-3.6.8/Modules/getpath.c | 1283 ++
> .../Python-3.6.8/PyMod-3.6.8/Modules/main.c | 878 +
> .../PyMod-3.6.8/Modules/selectmodule.c | 2638 +++
> .../PyMod-3.6.8/Modules/socketmodule.c | 7810 ++++++++
> .../PyMod-3.6.8/Modules/socketmodule.h | 282 +
> .../PyMod-3.6.8/Modules/sre_lib.h | 1372 ++
> .../PyMod-3.6.8/Modules/timemodule.c | 1526 ++
> .../PyMod-3.6.8/Modules/zlib/gzguts.h | 218 +
> .../PyMod-3.6.8/Objects/dictobject.c | 4472 +++++
> .../PyMod-3.6.8/Objects/memoryobject.c | 3114 +++
> .../Python-3.6.8/PyMod-3.6.8/Objects/object.c | 2082 ++
> .../Objects/stringlib/transmogrify.h | 701 +
> .../PyMod-3.6.8/Objects/unicodeobject.c | 15773 ++++++++++++++++
> .../PyMod-3.6.8/Python/bltinmodule.c | 2794 +++
> .../PyMod-3.6.8/Python/fileutils.c | 1767 ++
> .../PyMod-3.6.8/Python/getcopyright.c | 38 +
> .../PyMod-3.6.8/Python/importlib_external.h | 2431 +++
> .../Python-3.6.8/PyMod-3.6.8/Python/marshal.c | 1861 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pyhash.c | 437 +
> .../PyMod-3.6.8/Python/pylifecycle.c | 1726 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pystate.c | 969 +
> .../Python-3.6.8/PyMod-3.6.8/Python/pytime.c | 749 +
> .../Python-3.6.8/PyMod-3.6.8/Python/random.c | 636 +
> .../Python/Python-3.6.8/Python368.inf | 275 +
> .../Python-3.6.8/create_python368_pkg.bat | 48 +
> .../Python/Python-3.6.8/srcprep.py | 30 +
> 59 files changed, 89413 insertions(+)
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/srcprep.py
>
> --
> 2.32.0.windows.1
>
>
>
>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
@ 2021-09-02 18:41 ` Rebecca Cran
2021-09-02 20:46 ` Michael D Kinney
2021-09-02 20:48 ` Rebecca Cran
1 sibling, 1 reply; 8+ messages in thread
From: Rebecca Cran @ 2021-09-02 18:41 UTC (permalink / raw)
To: devel, michael.d.kinney; +Cc: Jayaprakash Nevara
On 9/2/21 11:12 AM, Michael D Kinney wrote:
> AppPkg/AppPkg.dsc | 3 +
> .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
This looks like it's formatted using Markdown, so should it be
Py368ReadMe.md?
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
The xcopy commands should probably have error checking after them.
--
Rebecca Cran
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
2021-09-02 18:41 ` [edk2-devel] " Rebecca Cran
@ 2021-09-02 20:46 ` Michael D Kinney
2021-09-02 20:48 ` Rebecca Cran
0 siblings, 1 reply; 8+ messages in thread
From: Michael D Kinney @ 2021-09-02 20:46 UTC (permalink / raw)
To: devel@edk2.groups.io, rebecca@nuviainc.com, Kinney, Michael D
Cc: Jayaprakash, N
Hi Rebecca,
Responses below.
Some of the items you are observing are due to following the exact
same pattern as the Python 2.x ports. There are many things that can
get cleaned up in the Python 3.x ports. I would prefer to see this
initial functional version go in and add new BZs for additional cleanups.
Thanks,
Mike
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Rebecca Cran
> Sent: Thursday, September 2, 2021 11:41 AM
> To: devel@edk2.groups.io; Kinney, Michael D <michael.d.kinney@intel.com>
> Cc: Jayaprakash, N <n.jayaprakash@intel.com>
> Subject: Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
>
> On 9/2/21 11:12 AM, Michael D Kinney wrote:
>
> > AppPkg/AppPkg.dsc | 3 +
> > .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
>
> This looks like it's formatted using Markdown, so should it be
> Py368ReadMe.md?
It looks like there are elements that do not follow MarkDown and the formatting
looks bad when using a MarkDown viewer. I would recommend leaving it as .txt for
now. We can enter a new issue to convert to MD.
>
> > create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
>
> The xcopy commands should probably have error checking after them.
There are several limitations to the BAT file. It is just being reused from the
Python 2.x ports. I think it would be better to port this to a Python script and
add all error checking in that version. We can enter a new issue for this Python
port.
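For illustration only (not part of this patch series): a minimal sketch of what such a
Python packaging script could look like. It mirrors the steps and paths used by
create_python368_pkg.bat above, assumes it is run from the workspace root, and the
names and error handling shown here are hypothetical.

#!/usr/bin/env python3
"""Illustrative sketch of a Python replacement for create_python368_pkg.bat."""
import shutil
import sys
from pathlib import Path

def create_package(toolchain: str, target: str, out_folder: str) -> int:
    workspace = Path.cwd()
    # Same build output location the .bat file checks for.
    efi = workspace / "Build" / "AppPkg" / f"{target}_{toolchain}" / "X64" / "Python368.efi"
    if not efi.is_file():
        print(f"error: {efi} not found; build AppPkg first", file=sys.stderr)
        return 1

    out = Path(out_folder)
    tools_dir = out / "EFI" / "Tools"
    lib_dir = out / "EFI" / "StdLib" / "lib" / "python36.8"
    etc_dir = out / "EFI" / "StdLib" / "etc"

    try:
        tools_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(efi, tools_dir / efi.name)
        # dirs_exist_ok requires Python 3.8 or later on the build host.
        shutil.copytree(workspace / "AppPkg/Applications/Python/Python-3.6.8/Lib",
                        lib_dir, dirs_exist_ok=True)
        shutil.copytree(workspace / "StdLib/Efi/StdLib/etc",
                        etc_dir, dirs_exist_ok=True)
    except OSError as err:
        print(f"error: packaging failed: {err}", file=sys.stderr)
        return 1

    print(f"Python 3.6.8 package created under {out}")
    return 0

if __name__ == "__main__":
    if len(sys.argv) != 4:
        print(f"usage: {sys.argv[0]} <ToolChain> <Target> <OutFolder>", file=sys.stderr)
        sys.exit(2)
    sys.exit(create_package(*sys.argv[1:4]))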
>
>
> --
>
> Rebecca Cran
>
>
>
>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
2021-09-02 18:41 ` [edk2-devel] " Rebecca Cran
@ 2021-09-02 20:48 ` Rebecca Cran
1 sibling, 0 replies; 8+ messages in thread
From: Rebecca Cran @ 2021-09-02 20:48 UTC (permalink / raw)
To: devel, michael.d.kinney; +Cc: Jayaprakash Nevara
Acked-by: Rebecca Cran <rebecca@nuviainc.com>
--
Rebecca Cran
On 9/2/21 11:12 AM, Michael D Kinney wrote:
> From: Jayaprakash Nevara <n.jayaprakash@intel.com>
>
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3588
>
> This commit contains several changes made to the base Python 3.6.8
> source code to compile it and run it on UEFI shell.
> Currently supports building Py3.6.8 for UEFI with IA32 and X64
> architectures using VS2017, VS2019 with the latest edk2/master.
>
> Cc: Rebecca Cran <rebecca@nuviainc.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Signed-off-by: Jayaprakash N <n.jayaprakash@intel.com>
> ---
> AppPkg/AppPkg.dsc | 3 +
> .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
> .../PyMod-3.6.8/Include/fileutils.h | 159 +
> .../Python-3.6.8/PyMod-3.6.8/Include/osdefs.h | 51 +
> .../PyMod-3.6.8/Include/pyconfig.h | 1322 ++
> .../PyMod-3.6.8/Include/pydtrace.h | 74 +
> .../Python-3.6.8/PyMod-3.6.8/Include/pyport.h | 788 +
> .../PyMod-3.6.8/Lib/ctypes/__init__.py | 549 +
> .../PyMod-3.6.8/Lib/genericpath.py | 157 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/glob.py | 110 +
> .../PyMod-3.6.8/Lib/http/client.py | 1481 ++
> .../Lib/importlib/_bootstrap_external.py | 1443 ++
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py | 99 +
> .../PyMod-3.6.8/Lib/logging/__init__.py | 2021 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py | 568 +
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py | 792 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py | 2686 +++
> .../Python-3.6.8/PyMod-3.6.8/Lib/shutil.py | 1160 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/site.py | 529 +
> .../PyMod-3.6.8/Lib/subprocess.py | 1620 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py | 2060 ++
> .../PyMod-3.6.8/Modules/_blake2/impl/blake2.h | 161 +
> .../PyMod-3.6.8/Modules/_ctypes/_ctypes.c | 5623 ++++++
> .../PyMod-3.6.8/Modules/_ctypes/callproc.c | 1871 ++
> .../Modules/_ctypes/ctypes_dlfcn.h | 29 +
> .../Modules/_ctypes/libffi_msvc/ffi.c | 572 +
> .../Modules/_ctypes/libffi_msvc/ffi.h | 331 +
> .../Modules/_ctypes/libffi_msvc/ffi_common.h | 85 +
> .../Modules/_ctypes/malloc_closure.c | 128 +
> .../Python-3.6.8/PyMod-3.6.8/Modules/config.c | 159 +
> .../PyMod-3.6.8/Modules/edk2module.c | 4348 +++++
> .../PyMod-3.6.8/Modules/errnomodule.c | 890 +
> .../PyMod-3.6.8/Modules/faulthandler.c | 1414 ++
> .../PyMod-3.6.8/Modules/getpath.c | 1283 ++
> .../Python-3.6.8/PyMod-3.6.8/Modules/main.c | 878 +
> .../PyMod-3.6.8/Modules/selectmodule.c | 2638 +++
> .../PyMod-3.6.8/Modules/socketmodule.c | 7810 ++++++++
> .../PyMod-3.6.8/Modules/socketmodule.h | 282 +
> .../PyMod-3.6.8/Modules/sre_lib.h | 1372 ++
> .../PyMod-3.6.8/Modules/timemodule.c | 1526 ++
> .../PyMod-3.6.8/Modules/zlib/gzguts.h | 218 +
> .../PyMod-3.6.8/Objects/dictobject.c | 4472 +++++
> .../PyMod-3.6.8/Objects/memoryobject.c | 3114 +++
> .../Python-3.6.8/PyMod-3.6.8/Objects/object.c | 2082 ++
> .../Objects/stringlib/transmogrify.h | 701 +
> .../PyMod-3.6.8/Objects/unicodeobject.c | 15773 ++++++++++++++++
> .../PyMod-3.6.8/Python/bltinmodule.c | 2794 +++
> .../PyMod-3.6.8/Python/fileutils.c | 1767 ++
> .../PyMod-3.6.8/Python/getcopyright.c | 38 +
> .../PyMod-3.6.8/Python/importlib_external.h | 2431 +++
> .../Python-3.6.8/PyMod-3.6.8/Python/marshal.c | 1861 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pyhash.c | 437 +
> .../PyMod-3.6.8/Python/pylifecycle.c | 1726 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pystate.c | 969 +
> .../Python-3.6.8/PyMod-3.6.8/Python/pytime.c | 749 +
> .../Python-3.6.8/PyMod-3.6.8/Python/random.c | 636 +
> .../Python/Python-3.6.8/Python368.inf | 275 +
> .../Python-3.6.8/create_python368_pkg.bat | 48 +
> .../Python/Python-3.6.8/srcprep.py | 30 +
> 59 files changed, 89413 insertions(+)
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/srcprep.py
>
> diff --git a/AppPkg/AppPkg.dsc b/AppPkg/AppPkg.dsc
> index c2305d71..5938789d 100644
> --- a/AppPkg/AppPkg.dsc
> +++ b/AppPkg/AppPkg.dsc
> @@ -126,6 +126,9 @@
> #### Un-comment the following line to build Python 2.7.10.
> # AppPkg/Applications/Python/Python-2.7.10/Python2710.inf
>
> +#### Un-comment the following line to build Python 3.6.8.
> +# AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> +
> #### Un-comment the following line to build Lua.
> # AppPkg/Applications/Lua/Lua.inf
>
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt b/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
> new file mode 100644
> index 00000000..69bb6bd1
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
> @@ -0,0 +1,220 @@
> + EDK II Python
> + ReadMe
> + Version 3.6.8
> + Release 1.00
> + 01 September 2021
> +
> +
> +1. OVERVIEW
> +===========
> +This document is devoted to general information on building and setup of the
> +Python environment for UEFI, the invocation of the interpreter, and things
> +that make working with Python easier.
> +
> +It is assumed that you already have UDK2018 or later, or a current snapshot of
> +the EDK II sources from www.tianocore.org (https://github.com/tianocore/edk2),
> +and that you can successfully build packages within that distribution.
> +
> +2. Release Notes
> +================
> + 1) All C extension modules must be statically linked (built in)
> + 2) The site and os modules must exist as discrete files in ...\lib\python36.8
> + 3) User-specific configurations are not supported.
> + 4) Environment variables are not supported.
> +
> +3. Getting and Building Python
> +======================================================
> + 3.1 Getting Python
> + ==================
> + This file describes the UEFI port of version 3.6.8 of the CPython distribution.
> + For development ease, a subset of the Python 3.6.8 distribution has been
> + included as part of the AppPkg/Applications/Python/Python-3.6.8 source tree.
> + If this is sufficient, you may skip to section 3.2, Building Python.
> +
> + If a full distribution is desired, it can be merged into the Python-3.6.8
> + source tree. Directory AppPkg/Applications/Python/Python-3.6.8 corresponds
> + to the root directory of the CPython 3.6.8 distribution. The full
> + CPython 3.6.8 source code may be downloaded from
> + http://www.python.org/ftp/python/3.6.8/.
> +
> + A. Within your EDK II development tree, extract the Python distribution into
> + AppPkg/Applications/Python/Python-3.6.8. This should merge the additional
> + files into the source tree. It will also create the following directories:
> + Demo Doc Grammar Mac Misc
> + PC PCbuild RISCOS Tools
> +
> + The greatest change will be within the Python-3.6.8/Lib directory where
> + many more packages and modules will be added. These additional components
> + may not have been ported to EDK II yet.
> +
> + 3.2 Building Python
> + ===================
> + A. From the AppPkg/Applications/Python/Python-3.6.8 directory, execute the
> + srcprep.py script to copy the header files from within the
> + PyMod-3.6.8 sub-tree into their corresponding directories within the
> + distribution. This step only needs to be performed prior to the first
> + build of Python, or if one of the header files within the PyMod tree has been
> + modified.
> +
> + B. Edit PyMod-3.6.8\Modules\config.c to enable the built-in modules you need.
> + By default, it is configured for the minimally required set of modules.
> + Mandatory Built-in Modules:
> + edk2 errno imp marshal
> +
> + Additional built-in modules which are required to use the help()
> + functionality provided by PyDoc, are:
> + _codecs _collections _functools _random
> + _sre _struct _weakref binascii
> + gc itertools math _operator
> + time
> +
> + C. Edit AppPkg/AppPkg.dsc to enable (uncomment) the Python368.inf line
> + within the [Components] section.
> +
> + D. Build AppPkg using the standard "build" command:
> + For example, to build Python for an X64 CPU architecture:
> + build -a X64 -p AppPkg\AppPkg.dsc
> +
> +4. Python-related paths and files
> +=================================
> +Python depends upon the existence of several directories and files on the
> +target system.
> +
> + \EFI Root of the UEFI system area.
> + |- \Tools Location of the Python.efi executable.
> + |- \Boot UEFI specified Boot directory.
> + |- \StdLib Root of the Standard Libraries sub-tree.
> + |- \etc Configuration files used by libraries.
> + |- \lib Root of the libraries tree.
> + |- \python36.8 Directory containing the Python library
> + | modules.
> + |- \lib-dynload Dynamically loadable Python extensions.
> + | Not supported currently.
> + |- \site-packages Site-specific packages and modules.
> +
> +
> +5. Installing Python
> +====================
> +These directories, on the target system, are populated from the development
> +system as follows:
> +
> + * \Efi\Tools receives a copy of Build/AppPkg/RELEASE_VS2017/X64/Python368.efi.
> + ^^^^^^^^^^^^^^^^
> + Modify the host path to match your build type and compiler.
> +
> + * The \Efi\StdLib\etc directory is populated from the StdLib/Efi/StdLib/etc
> + source directory.
> +
> + * Directory \Efi\StdLib\lib\python36.8 is populated with packages and modules
> + from the AppPkg/Applications/Python/Python-3.6.8/Lib directory.
> + The recommended minimum set of modules (.py, .pyc, and/or .pyo):
> + os stat ntpath warnings traceback
> + site types linecache genericpath
> +
> + * Python C Extension Modules built as dynamically loadable extensions go into
> + the \Efi\StdLib\lib\python36.8\lib-dynload directory. This functionality is not
> + yet implemented.
> +
> +  A script, create_python368_pkg.bat, is provided which facilitates the population
> +  of the target EFI package. Execute this script from within the
> +  AppPkg/Applications/Python/Python-3.6.8 directory, providing the Tool Chain, the
> +  Target Build, and the destination directory where the package should be created.
> + The appropriate contents of the AppPkg/Applications/Python/Python-3.6.8/Lib and
> + Python368.efi Application from Build/AppPkg/RELEASE_VS2017/X64/ will be
> + ^^^^^^^^^^^^^^
> + copied into the specified destination directory.
> +
> + Replace "RELEASE_VS2017", in the source path, with values appropriate for your tool chain.
> +
> +
> +6. Example: Enabling socket support
> +===================================
> + 1. enable {"_socket", init_socket}, in PyMod-3.6.8\Modules\config.c
> + 2. enable LibraryClasses BsdSocketLib and EfiSocketLib in Python368.inf.
> + 3. Build Python368
> + build -a X64 -p AppPkg\AppPkg.dsc
> +  4. copy Build\AppPkg\RELEASE_VS2017\X64\Python368.efi to \Efi\Tools on your
> + target system. Replace "RELEASE_VS2017", in the source path, with
> + values appropriate for your tool chain.
> +
> +7. Running Python
> +=================
> + Python must currently be run from an EFI FAT-32 partition, or volume, under
> + the UEFI Shell. At the Shell prompt enter the desired volume name, followed
> + by a colon ':', then press Enter. Python can then be executed by typing its
> + name, followed by any desired options and arguments.
> +
> + EXAMPLE:
> + Shell> fs0:
> + FS0:\> python368
> + Python 3.6.8 (default, Jun 24 2015, 17:38:32) [C] on uefi
> + Type "help", "copyright", "credits" or "license" for more information.
> + >>> exit()
> + FS0:\>
> +
> +
> +8. Supported C Modules
> +======================
> + Module Name C File(s)
> + =============== =============================================
> + _ast Python/Python-ast.c
> + _codecs Modules/_codecsmodule.c
> + _collections Modules/_collectionsmodule.c
> + _csv Modules/_csv.c
> + _functools Modules/_functoolsmodule.c
> + _io Modules/_io/_iomodule.c
> + _json Modules/_json.c
> + _md5 Modules/md5module.c
> + _multibytecodec Modules/cjkcodecs/multibytecodec.c
> + _random Modules/_randommodule.c
> + _sha1 Modules/sha1module.c
> + _sha256 Modules/sha256module.c
> + _sha512 Modules/sha512module.c
> + _sre Modules/_sre.c
> + _struct Modules/_struct.c
> + _symtable Modules/symtablemodule.c
> + _weakref Modules/_weakref.c
> + array Modules/arraymodule.c
> + binascii Modules/binascii.c
> + cmath Modules/cmathmodule.c
> + datetime Modules/_datetimemodule.c
> + edk2 Modules/PyMod-3.6.8/edk2module.c
> + errno Modules/errnomodule.c
> + gc Modules/gcmodule.c
> + imp Python/import.c
> + itertools Modules/itertoolsmodule.c
> + marshal Python/marshal.c
> + _operator Modules/_operator.c
> + parser Modules/parsermodule.c
> + select Modules/selectmodule.c
> + signal Modules/signalmodule.c
> + time Modules/timemodule.c
> + zlib Modules/zlibmodule.c
> +
> +
> +9. Tested Python Library Modules
> +================================
> +This is a partial list of the packages and modules of the Python Standard
> +Library that have been tested or used in some manner.
> +
> + encodings genericpath.py site.py
> + importlib getopt.py socket.py
> + json hashlib.py sre.py
> + pydoc_data heapq.py sre_compile.py
> + xml inspect.py sre_constants.py
> + abc.py io.py sre_parse.py
> + argparse.py keyword.py stat.py
> + ast.py linecache.py string.py
> + atexit.py locale.py struct.py
> + binhex.py modulefinder.py textwrap.py
> + bisect.py ntpath.py token.py
> + calendar.py numbers.py tokenize.py
> + cmd.py optparse.py traceback.py
> + codecs.py os.py types.py
> + collections.py platform.py warnings.py
> + copy.py posixpath.py weakref.py
> + csv.py pydoc.py zipfile.py
> + fileinput.py random.py
> + formatter.py re.py
> + functools.py runpy.py
> +# # #
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
> new file mode 100644
> index 00000000..5540505d
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
> @@ -0,0 +1,159 @@
> +#ifndef Py_FILEUTILS_H
> +#define Py_FILEUTILS_H
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#if !defined(Py_LIMITED_API) || Py_LIMITED_API+0 >= 0x03050000
> +PyAPI_FUNC(wchar_t *) Py_DecodeLocale(
> + const char *arg,
> + size_t *size);
> +
> +PyAPI_FUNC(char*) Py_EncodeLocale(
> + const wchar_t *text,
> + size_t *error_pos);
> +#endif
> +
> +#ifndef Py_LIMITED_API
> +
> +PyAPI_FUNC(wchar_t *) _Py_DecodeLocaleEx(
> + const char *arg,
> + size_t *size,
> + int current_locale);
> +
> +PyAPI_FUNC(char*) _Py_EncodeLocaleEx(
> + const wchar_t *text,
> + size_t *error_pos,
> + int current_locale);
> +
> +PyAPI_FUNC(PyObject *) _Py_device_encoding(int);
> +
> +#if defined(MS_WINDOWS) || defined(__APPLE__) || defined(UEFI_C_SOURCE)
> + /* On Windows, the count parameter of read() is an int (bpo-9015, bpo-9611).
> + On macOS 10.13, read() and write() with more than INT_MAX bytes
> + fail with EINVAL (bpo-24658). */
> +# define _PY_READ_MAX INT_MAX
> +# define _PY_WRITE_MAX INT_MAX
> +#else
> + /* write() should truncate the input to PY_SSIZE_T_MAX bytes,
> + but it's safer to do it ourself to have a portable behaviour */
> +# define _PY_READ_MAX PY_SSIZE_T_MAX
> +# define _PY_WRITE_MAX PY_SSIZE_T_MAX
> +#endif
> +
> +#ifdef MS_WINDOWS
> +struct _Py_stat_struct {
> + unsigned long st_dev;
> + uint64_t st_ino;
> + unsigned short st_mode;
> + int st_nlink;
> + int st_uid;
> + int st_gid;
> + unsigned long st_rdev;
> + __int64 st_size;
> + time_t st_atime;
> + int st_atime_nsec;
> + time_t st_mtime;
> + int st_mtime_nsec;
> + time_t st_ctime;
> + int st_ctime_nsec;
> + unsigned long st_file_attributes;
> +};
> +#else
> +# define _Py_stat_struct stat
> +#endif
> +
> +PyAPI_FUNC(int) _Py_fstat(
> + int fd,
> + struct _Py_stat_struct *status);
> +
> +PyAPI_FUNC(int) _Py_fstat_noraise(
> + int fd,
> + struct _Py_stat_struct *status);
> +
> +PyAPI_FUNC(int) _Py_stat(
> + PyObject *path,
> + struct stat *status);
> +
> +PyAPI_FUNC(int) _Py_open(
> + const char *pathname,
> + int flags);
> +
> +PyAPI_FUNC(int) _Py_open_noraise(
> + const char *pathname,
> + int flags);
> +
> +PyAPI_FUNC(FILE *) _Py_wfopen(
> + const wchar_t *path,
> + const wchar_t *mode);
> +
> +PyAPI_FUNC(FILE*) _Py_fopen(
> + const char *pathname,
> + const char *mode);
> +
> +PyAPI_FUNC(FILE*) _Py_fopen_obj(
> + PyObject *path,
> + const char *mode);
> +
> +PyAPI_FUNC(Py_ssize_t) _Py_read(
> + int fd,
> + void *buf,
> + size_t count);
> +
> +PyAPI_FUNC(Py_ssize_t) _Py_write(
> + int fd,
> + const void *buf,
> + size_t count);
> +
> +PyAPI_FUNC(Py_ssize_t) _Py_write_noraise(
> + int fd,
> + const void *buf,
> + size_t count);
> +
> +#ifdef HAVE_READLINK
> +PyAPI_FUNC(int) _Py_wreadlink(
> + const wchar_t *path,
> + wchar_t *buf,
> + size_t bufsiz);
> +#endif
> +
> +#ifdef HAVE_REALPATH
> +PyAPI_FUNC(wchar_t*) _Py_wrealpath(
> + const wchar_t *path,
> + wchar_t *resolved_path,
> + size_t resolved_path_size);
> +#endif
> +
> +PyAPI_FUNC(wchar_t*) _Py_wgetcwd(
> + wchar_t *buf,
> + size_t size);
> +
> +PyAPI_FUNC(int) _Py_get_inheritable(int fd);
> +
> +PyAPI_FUNC(int) _Py_set_inheritable(int fd, int inheritable,
> + int *atomic_flag_works);
> +
> +PyAPI_FUNC(int) _Py_set_inheritable_async_safe(int fd, int inheritable,
> + int *atomic_flag_works);
> +
> +PyAPI_FUNC(int) _Py_dup(int fd);
> +
> +#ifndef MS_WINDOWS
> +PyAPI_FUNC(int) _Py_get_blocking(int fd);
> +
> +PyAPI_FUNC(int) _Py_set_blocking(int fd, int blocking);
> +#endif /* !MS_WINDOWS */
> +
> +PyAPI_FUNC(int) _Py_GetLocaleconvNumeric(
> + PyObject **decimal_point,
> + PyObject **thousands_sep,
> + const char **grouping);
> +
> +#endif /* Py_LIMITED_API */
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* !Py_FILEUTILS_H */
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
> new file mode 100644
> index 00000000..98ce842c
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
> @@ -0,0 +1,51 @@
> +#ifndef Py_OSDEFS_H
> +#define Py_OSDEFS_H
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +
> +/* Operating system dependencies */
> +
> +#ifdef MS_WINDOWS
> +#define SEP L'\\'
> +#define ALTSEP L'/'
> +#define MAXPATHLEN 256
> +#define DELIM L';'
> +#endif
> +
> +/* Filename separator */
> +#ifndef SEP
> +#define SEP L'/'
> +#endif
> +
> +/* Max pathname length */
> +#ifdef __hpux
> +#include <sys/param.h>
> +#include <limits.h>
> +#ifndef PATH_MAX
> +#define PATH_MAX MAXPATHLEN
> +#endif
> +#endif
> +
> +#ifndef MAXPATHLEN
> +#if defined(PATH_MAX) && PATH_MAX > 1024
> +#define MAXPATHLEN PATH_MAX
> +#else
> +#define MAXPATHLEN 1024
> +#endif
> +#endif
> +
> +/* Search path entry delimiter */
> +#ifndef DELIM
> +#ifndef UEFI_C_SOURCE
> +#define DELIM L':'
> +#else
> +#define DELIM L';'
> +#endif
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* !Py_OSDEFS_H */
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
> new file mode 100644
> index 00000000..d4685da3
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
> @@ -0,0 +1,1322 @@
> +/** @file
> + Manually generated Python Configuration file for EDK II.
> +
> + Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +#ifndef Py_PYCONFIG_H
> +#define Py_PYCONFIG_H
> +#ifdef UEFI_C_SOURCE
> +#include <Uefi.h>
> +#define PLATFORM "uefi"
> +#define _rotl64(a, offset) (a << offset) | (a >> (64 - offset))
> +#endif
> +#define Py_BUILD_CORE
> +
> +//#define Py_LIMITED_API=0x03060000
> +
> +/* Define if building universal (internal helper macro) */
> +#undef AC_APPLE_UNIVERSAL_BUILD
> +
> +/* Define for AIX if your compiler is a genuine IBM xlC/xlC_r and you want
> + support for AIX C++ shared extension modules. */
> +#undef AIX_GENUINE_CPLUSPLUS
> +
> +/* Define this if you have AtheOS threads. */
> +#undef ATHEOS_THREADS
> +
> +/* Define this if you have BeOS threads. */
> +#undef BEOS_THREADS
> +
> +/* Define if you have the Mach cthreads package */
> +#undef C_THREADS
> +
> +/* Define if C doubles are 64-bit IEEE 754 binary format, stored in ARM
> + mixed-endian order (byte order 45670123) */
> +#undef DOUBLE_IS_ARM_MIXED_ENDIAN_IEEE754
> +
> +/* Define if C doubles are 64-bit IEEE 754 binary format, stored with the most
> + significant byte first */
> +#undef DOUBLE_IS_BIG_ENDIAN_IEEE754
> +
> +/* Define if C doubles are 64-bit IEEE 754 binary format, stored with the
> + least significant byte first */
> +#define DOUBLE_IS_LITTLE_ENDIAN_IEEE754
> +
> +/* Define if --enable-ipv6 is specified */
> +#undef ENABLE_IPV6
> +
> +/* Define if flock needs to be linked with bsd library. */
> +#undef FLOCK_NEEDS_LIBBSD
> +
> +/* Define if getpgrp() must be called as getpgrp(0). */
> +#undef GETPGRP_HAVE_ARG
> +
> +/* Define if gettimeofday() does not have second (timezone) argument This is
> + the case on Motorola V4 (R40V4.2) */
> +#undef GETTIMEOFDAY_NO_TZ
> +
> +/* Define to 1 if you have the 'acosh' function. */
> +#undef HAVE_ACOSH
> +
> +/* struct addrinfo (netdb.h) */
> +#undef HAVE_ADDRINFO
> +
> +/* Define to 1 if you have the 'alarm' function. */
> +#undef HAVE_ALARM
> +
> +/* Define to 1 if you have the <alloca.h> header file. */
> +#undef HAVE_ALLOCA_H
> +
> +/* Define this if your time.h defines altzone. */
> +#undef HAVE_ALTZONE
> +
> +/* Define to 1 if you have the 'asinh' function. */
> +#undef HAVE_ASINH
> +
> +/* Define to 1 if you have the <asm/types.h> header file. */
> +#undef HAVE_ASM_TYPES_H
> +
> +/* Define to 1 if you have the 'atanh' function. */
> +#undef HAVE_ATANH
> +
> +/* Define if GCC supports __attribute__((format(PyArg_ParseTuple, 2, 3))) */
> +#undef HAVE_ATTRIBUTE_FORMAT_PARSETUPLE
> +
> +/* Define to 1 if you have the 'bind_textdomain_codeset' function. */
> +#undef HAVE_BIND_TEXTDOMAIN_CODESET
> +
> +/* Define to 1 if you have the <bluetooth/bluetooth.h> header file. */
> +#undef HAVE_BLUETOOTH_BLUETOOTH_H
> +
> +/* Define to 1 if you have the <bluetooth.h> header file. */
> +#undef HAVE_BLUETOOTH_H
> +
> +/* Define if nice() returns success/failure instead of the new priority. */
> +#undef HAVE_BROKEN_NICE
> +
> +/* Define if the system reports an invalid PIPE_BUF value. */
> +#undef HAVE_BROKEN_PIPE_BUF
> +
> +/* Define if poll() sets errno on invalid file descriptors. */
> +#undef HAVE_BROKEN_POLL
> +
> +/* Define if the Posix semaphores do not work on your system */
> +#define HAVE_BROKEN_POSIX_SEMAPHORES 1
> +
> +/* Define if pthread_sigmask() does not work on your system. */
> +#define HAVE_BROKEN_PTHREAD_SIGMASK 1
> +
> +/* define to 1 if your sem_getvalue is broken. */
> +#define HAVE_BROKEN_SEM_GETVALUE 1
> +
> +/* Define if 'unsetenv' does not return an int. */
> +#undef HAVE_BROKEN_UNSETENV
> +
> +/* Define this if you have the type _Bool. */
> +#define HAVE_C99_BOOL 1
> +
> +/* Define to 1 if you have the 'chflags' function. */
> +#undef HAVE_CHFLAGS
> +
> +/* Define to 1 if you have the 'chown' function. */
> +#undef HAVE_CHOWN
> +
> +/* Define if you have the 'chroot' function. */
> +#undef HAVE_CHROOT
> +
> +/* Define to 1 if you have the 'clock' function. */
> +#define HAVE_CLOCK 1
> +
> +/* Define to 1 if you have the 'confstr' function. */
> +#undef HAVE_CONFSTR
> +
> +/* Define to 1 if you have the <conio.h> header file. */
> +#undef HAVE_CONIO_H
> +
> +/* Define to 1 if you have the 'copysign' function. */
> +#define HAVE_COPYSIGN 1
> +
> +/* Define to 1 if you have the 'ctermid' function. */
> +#undef HAVE_CTERMID
> +
> +/* Define if you have the 'ctermid_r' function. */
> +#undef HAVE_CTERMID_R
> +
> +/* Define to 1 if you have the <curses.h> header file. */
> +#undef HAVE_CURSES_H
> +
> +/* Define if you have the 'is_term_resized' function. */
> +#undef HAVE_CURSES_IS_TERM_RESIZED
> +
> +/* Define if you have the 'resizeterm' function. */
> +#undef HAVE_CURSES_RESIZETERM
> +
> +/* Define if you have the 'resize_term' function. */
> +#undef HAVE_CURSES_RESIZE_TERM
> +
> +/* Define to 1 if you have the declaration of 'isfinite', and to 0 if you
> + don't. */
> +#define HAVE_DECL_ISFINITE 0
> +
> +/* Define to 1 if you have the declaration of 'isinf', and to 0 if you don't.
> + */
> +#define HAVE_DECL_ISINF 1
> +
> +/* Define to 1 if you have the declaration of 'isnan', and to 0 if you don't.
> + */
> +#define HAVE_DECL_ISNAN 1
> +
> +/* Define to 1 if you have the declaration of 'tzname', and to 0 if you don't.
> + */
> +#define HAVE_DECL_TZNAME 0
> +
> +/* Define to 1 if you have the device macros. */
> +#undef HAVE_DEVICE_MACROS
> +
> +/* Define to 1 if you have the /dev/ptc device file. */
> +#undef HAVE_DEV_PTC
> +
> +/* Define to 1 if you have the /dev/ptmx device file. */
> +#undef HAVE_DEV_PTMX
> +
> +/* Define to 1 if you have the <direct.h> header file. */
> +#undef HAVE_DIRECT_H
> +
> +/* Define to 1 if you have the <dirent.h> header file, and it defines 'DIR'. */
> +#define HAVE_DIRENT_H 1
> +
> +/* Define to 1 if you have the <dlfcn.h> header file. */
> +#undef HAVE_DLFCN_H
> +
> +/* Define to 1 if you have the 'dlopen' function. */
> +#undef HAVE_DLOPEN
> +
> +/* Define to 1 if you have the 'dup2' function. */
> +#define HAVE_DUP2 1
> +
> +/* Defined when any dynamic module loading is enabled. */
> +#undef HAVE_DYNAMIC_LOADING
> +
> +/* Define if you have the 'epoll' functions. */
> +#undef HAVE_EPOLL
> +
> +/* Define to 1 if you have the 'erf' function. */
> +#undef HAVE_ERF
> +
> +/* Define to 1 if you have the 'erfc' function. */
> +#undef HAVE_ERFC
> +
> +/* Define to 1 if you have the <errno.h> header file. */
> +#define HAVE_ERRNO_H 1
> +
> +/* Define to 1 if you have the 'execv' function. */
> +#undef HAVE_EXECV
> +
> +/* Define to 1 if you have the 'expm1' function. */
> +#undef HAVE_EXPM1
> +
> +/* Define if you have the 'fchdir' function. */
> +#undef HAVE_FCHDIR
> +
> +/* Define to 1 if you have the 'fchmod' function. */
> +#undef HAVE_FCHMOD
> +
> +/* Define to 1 if you have the 'fchown' function. */
> +#undef HAVE_FCHOWN
> +
> +/* Define to 1 if you have the <fcntl.h> header file. */
> +#define HAVE_FCNTL_H 1
> +
> +/* Define if you have the 'fdatasync' function. */
> +#undef HAVE_FDATASYNC
> +
> +/* Define to 1 if you have the 'finite' function. */
> +#define HAVE_FINITE 1
> +
> +/* Define to 1 if you have the 'flock' function. */
> +#undef HAVE_FLOCK
> +
> +/* Define to 1 if you have the 'fork' function. */
> +#undef HAVE_FORK
> +
> +/* Define to 1 if you have the 'forkpty' function. */
> +#undef HAVE_FORKPTY
> +
> +/* Define to 1 if you have the 'fpathconf' function. */
> +#undef HAVE_FPATHCONF
> +
> +/* Define to 1 if you have the 'fseek64' function. */
> +#undef HAVE_FSEEK64
> +
> +/* Define to 1 if you have the 'fseeko' function. */
> +#define HAVE_FSEEKO 1
> +
> +/* Define to 1 if you have the 'fstatvfs' function. */
> +#undef HAVE_FSTATVFS
> +
> +/* Define if you have the 'fsync' function. */
> +#undef HAVE_FSYNC
> +
> +/* Define to 1 if you have the 'ftell64' function. */
> +#undef HAVE_FTELL64
> +
> +/* Define to 1 if you have the 'ftello' function. */
> +#define HAVE_FTELLO 1
> +
> +/* Define to 1 if you have the 'ftime' function. */
> +#undef HAVE_FTIME
> +
> +/* Define to 1 if you have the 'ftruncate' function. */
> +#undef HAVE_FTRUNCATE
> +
> +/* Define to 1 if you have the 'gai_strerror' function. */
> +#undef HAVE_GAI_STRERROR
> +
> +/* Define to 1 if you have the 'gamma' function. */
> +#undef HAVE_GAMMA
> +
> +/* Define if we can use gcc inline assembler to get and set x87 control word */
> +#if defined(__GNUC__)
> + #define HAVE_GCC_ASM_FOR_X87 1
> +#else
> + #undef HAVE_GCC_ASM_FOR_X87
> +#endif
> +
> +/* Define if you have the getaddrinfo function. */
> +//#undef HAVE_GETADDRINFO
> +#define HAVE_GETADDRINFO 1
> +
> +/* Define to 1 if you have the 'getcwd' function. */
> +#define HAVE_GETCWD 1
> +
> +/* Define this if you have flockfile(), getc_unlocked(), and funlockfile() */
> +#undef HAVE_GETC_UNLOCKED
> +
> +/* Define to 1 if you have the 'getentropy' function. */
> +#undef HAVE_GETENTROPY
> +
> +/* Define to 1 if you have the 'getgroups' function. */
> +#undef HAVE_GETGROUPS
> +
> +/* Define to 1 if you have the 'gethostbyname' function. */
> +//#undef HAVE_GETHOSTBYNAME
> +#define HAVE_GETHOSTBYNAME 1
> +
> +/* Define this if you have some version of gethostbyname_r() */
> +#undef HAVE_GETHOSTBYNAME_R
> +
> +/* Define this if you have the 3-arg version of gethostbyname_r(). */
> +#undef HAVE_GETHOSTBYNAME_R_3_ARG
> +
> +/* Define this if you have the 5-arg version of gethostbyname_r(). */
> +#undef HAVE_GETHOSTBYNAME_R_5_ARG
> +
> +/* Define this if you have the 6-arg version of gethostbyname_r(). */
> +#undef HAVE_GETHOSTBYNAME_R_6_ARG
> +
> +/* Define to 1 if you have the 'getitimer' function. */
> +#undef HAVE_GETITIMER
> +
> +/* Define to 1 if you have the 'getloadavg' function. */
> +#undef HAVE_GETLOADAVG
> +
> +/* Define to 1 if you have the 'getlogin' function. */
> +#undef HAVE_GETLOGIN
> +
> +/* Define to 1 if you have the 'getnameinfo' function. */
> +//#undef HAVE_GETNAMEINFO
> +#define HAVE_GETNAMEINFO 1
> +
> +/* Define if you have the 'getpagesize' function. */
> +#undef HAVE_GETPAGESIZE
> +
> +/* Define to 1 if you have the 'getpeername' function. */
> +#define HAVE_GETPEERNAME 1
> +
> +/* Define to 1 if you have the 'getpgid' function. */
> +#undef HAVE_GETPGID
> +
> +/* Define to 1 if you have the 'getpgrp' function. */
> +#undef HAVE_GETPGRP
> +
> +/* Define to 1 if you have the 'getpid' function. */
> +#undef HAVE_GETPID
> +
> +/* Define to 1 if you have the 'getpriority' function. */
> +#undef HAVE_GETPRIORITY
> +
> +/* Define to 1 if you have the 'getpwent' function. */
> +#undef HAVE_GETPWENT
> +
> +/* Define to 1 if you have the 'getresgid' function. */
> +#undef HAVE_GETRESGID
> +
> +/* Define to 1 if you have the 'getresuid' function. */
> +#undef HAVE_GETRESUID
> +
> +/* Define to 1 if you have the 'getsid' function. */
> +#undef HAVE_GETSID
> +
> +/* Define to 1 if you have the 'getspent' function. */
> +#undef HAVE_GETSPENT
> +
> +/* Define to 1 if you have the 'getspnam' function. */
> +#undef HAVE_GETSPNAM
> +
> +/* Define to 1 if you have the 'gettimeofday' function. */
> +#undef HAVE_GETTIMEOFDAY
> +
> +/* Define to 1 if you have the 'getwd' function. */
> +#undef HAVE_GETWD
> +
> +/* Define to 1 if you have the <grp.h> header file. */
> +#undef HAVE_GRP_H
> +
> +/* Define if you have the 'hstrerror' function. */
> +#undef HAVE_HSTRERROR
> +
> +/* Define to 1 if you have the 'hypot' function. */
> +#undef HAVE_HYPOT
> +
> +/* Define to 1 if you have the <ieeefp.h> header file. */
> +#undef HAVE_IEEEFP_H
> +
> +/* Define if you have the 'inet_aton' function. */
> +#define HAVE_INET_ATON 1
> +
> +/* Define if you have the 'inet_pton' function. */
> +#define HAVE_INET_PTON 1
> +
> +/* Define to 1 if you have the 'initgroups' function. */
> +#undef HAVE_INITGROUPS
> +
> +/* Define if your compiler provides int32_t. */
> +#undef HAVE_INT32_T
> +
> +/* Define if your compiler provides int64_t. */
> +#undef HAVE_INT64_T
> +
> +/* Define to 1 if you have the <inttypes.h> header file. */
> +#define HAVE_INTTYPES_H 1
> +
> +/* Define to 1 if you have the <io.h> header file. */
> +#undef HAVE_IO_H
> +
> +/* Define to 1 if you have the 'kill' function. */
> +#undef HAVE_KILL
> +
> +/* Define to 1 if you have the 'killpg' function. */
> +#undef HAVE_KILLPG
> +
> +/* Define if you have the 'kqueue' functions. */
> +#undef HAVE_KQUEUE
> +
> +/* Define to 1 if you have the <langinfo.h> header file. */
> +#undef HAVE_LANGINFO_H /* non-functional in EFI. */
> +
> +/* Defined to enable large file support when an off_t is bigger than a long
> + and long long is available and at least as big as an off_t. You may need to
> + add some flags for configuration and compilation to enable this mode. (For
> + Solaris and Linux, the necessary defines are already defined.) */
> +#undef HAVE_LARGEFILE_SUPPORT
> +
> +/* Define to 1 if you have the 'lchflags' function. */
> +#undef HAVE_LCHFLAGS
> +
> +/* Define to 1 if you have the 'lchmod' function. */
> +#undef HAVE_LCHMOD
> +
> +/* Define to 1 if you have the 'lchown' function. */
> +#undef HAVE_LCHOWN
> +
> +/* Define to 1 if you have the 'lgamma' function. */
> +#undef HAVE_LGAMMA
> +
> +/* Define to 1 if you have the 'dl' library (-ldl). */
> +#undef HAVE_LIBDL
> +
> +/* Define to 1 if you have the 'dld' library (-ldld). */
> +#undef HAVE_LIBDLD
> +
> +/* Define to 1 if you have the 'ieee' library (-lieee). */
> +#undef HAVE_LIBIEEE
> +
> +/* Define to 1 if you have the <libintl.h> header file. */
> +#undef HAVE_LIBINTL_H
> +
> +/* Define if you have the readline library (-lreadline). */
> +#undef HAVE_LIBREADLINE
> +
> +/* Define to 1 if you have the 'resolv' library (-lresolv). */
> +#undef HAVE_LIBRESOLV
> +
> +/* Define to 1 if you have the <libutil.h> header file. */
> +#undef HAVE_LIBUTIL_H
> +
> +/* Define if you have the 'link' function. */
> +#undef HAVE_LINK
> +
> +/* Define to 1 if you have the <linux/netlink.h> header file. */
> +#undef HAVE_LINUX_NETLINK_H
> +
> +/* Define to 1 if you have the <linux/tipc.h> header file. */
> +#undef HAVE_LINUX_TIPC_H
> +
> +/* Define to 1 if you have the 'log1p' function. */
> +#undef HAVE_LOG1P
> +
> +/* Define this if you have the type long double. */
> +#undef HAVE_LONG_DOUBLE
> +
> +/* Define this if you have the type long long. */
> +#define HAVE_LONG_LONG 1
> +
> +/* Define to 1 if you have the 'lstat' function. */
> +#define HAVE_LSTAT 1
> +
> +/* Define this if you have the makedev macro. */
> +#undef HAVE_MAKEDEV
> +
> +/* Define to 1 if you have the 'memmove' function. */
> +#define HAVE_MEMMOVE 1
> +
> +/* Define to 1 if you have the <memory.h> header file. */
> +#undef HAVE_MEMORY_H
> +
> +/* Define to 1 if you have the 'mkfifo' function. */
> +#undef HAVE_MKFIFO
> +
> +/* Define to 1 if you have the 'mknod' function. */
> +#undef HAVE_MKNOD
> +
> +/* Define to 1 if you have the 'mktime' function. */
> +#define HAVE_MKTIME 1
> +
> +/* Define to 1 if you have the 'mmap' function. */
> +#undef HAVE_MMAP
> +
> +/* Define to 1 if you have the 'mremap' function. */
> +#undef HAVE_MREMAP
> +
> +/* Define to 1 if you have the <ncurses.h> header file. */
> +#undef HAVE_NCURSES_H
> +
> +/* Define to 1 if you have the <ndir.h> header file, and it defines 'DIR'. */
> +#undef HAVE_NDIR_H
> +
> +/* Define to 1 if you have the <netpacket/packet.h> header file. */
> +#undef HAVE_NETPACKET_PACKET_H
> +
> +/* Define to 1 if you have the 'nice' function. */
> +#undef HAVE_NICE
> +
> +/* Define to 1 if you have the 'openpty' function. */
> +#undef HAVE_OPENPTY
> +
> +/* Define if compiling using MacOS X 10.5 SDK or later. */
> +#undef HAVE_OSX105_SDK
> +
> +/* Define to 1 if you have the 'pathconf' function. */
> +#undef HAVE_PATHCONF
> +
> +/* Define to 1 if you have the 'pause' function. */
> +#undef HAVE_PAUSE
> +
> +/* Define to 1 if you have the 'plock' function. */
> +#undef HAVE_PLOCK
> +
> +/* Define to 1 if you have the 'poll' function. */
> +#define HAVE_POLL 1
> +
> +/* Define to 1 if you have the <poll.h> header file. */
> +#undef HAVE_POLL_H
> +
> +/* Define to 1 if you have the <process.h> header file. */
> +#undef HAVE_PROCESS_H
> +
> +/* Define if your compiler supports function prototype */
> +#define HAVE_PROTOTYPES 1
> +
> +/* Define if you have GNU PTH threads. */
> +#undef HAVE_PTH
> +
> +/* Define to 1 if you have the 'pthread_atfork' function. */
> +#undef HAVE_PTHREAD_ATFORK
> +
> +/* Defined for Solaris 2.6 bug in pthread header. */
> +#undef HAVE_PTHREAD_DESTRUCTOR
> +
> +/* Define to 1 if you have the <pthread.h> header file. */
> +#undef HAVE_PTHREAD_H
> +
> +/* Define to 1 if you have the 'pthread_init' function. */
> +#undef HAVE_PTHREAD_INIT
> +
> +/* Define to 1 if you have the 'pthread_sigmask' function. */
> +#undef HAVE_PTHREAD_SIGMASK
> +
> +/* Define to 1 if you have the <pty.h> header file. */
> +#undef HAVE_PTY_H
> +
> +/* Define to 1 if you have the 'putenv' function. */
> +#undef HAVE_PUTENV
> +
> +/* Define if the libcrypto has RAND_egd */
> +#undef HAVE_RAND_EGD
> +
> +/* Define to 1 if you have the 'readlink' function. */
> +#undef HAVE_READLINK
> +
> +/* Define to 1 if you have the 'realpath' function. */
> +#define HAVE_REALPATH 1
> +
> +/* Define if you have readline 2.1 */
> +#undef HAVE_RL_CALLBACK
> +
> +/* Define if you can turn off readline's signal handling. */
> +#undef HAVE_RL_CATCH_SIGNAL
> +
> +/* Define if you have readline 2.2 */
> +#undef HAVE_RL_COMPLETION_APPEND_CHARACTER
> +
> +/* Define if you have readline 4.0 */
> +#undef HAVE_RL_COMPLETION_DISPLAY_MATCHES_HOOK
> +
> +/* Define if you have readline 4.2 */
> +#undef HAVE_RL_COMPLETION_MATCHES
> +
> +/* Define if you have rl_completion_suppress_append */
> +#undef HAVE_RL_COMPLETION_SUPPRESS_APPEND
> +
> +/* Define if you have readline 4.0 */
> +#undef HAVE_RL_PRE_INPUT_HOOK
> +
> +/* Define to 1 if you have the 'round' function. */
> +#undef HAVE_ROUND
> +
> +/* Define to 1 if you have the 'select' function. */
> +#define HAVE_SELECT 1
> +
> +/* Define to 1 if you have the 'sem_getvalue' function. */
> +#undef HAVE_SEM_GETVALUE
> +
> +/* Define to 1 if you have the 'sem_open' function. */
> +#undef HAVE_SEM_OPEN
> +
> +/* Define to 1 if you have the 'sem_timedwait' function. */
> +#undef HAVE_SEM_TIMEDWAIT
> +
> +/* Define to 1 if you have the 'sem_unlink' function. */
> +#undef HAVE_SEM_UNLINK
> +
> +/* Define to 1 if you have the 'setegid' function. */
> +#undef HAVE_SETEGID
> +
> +/* Define to 1 if you have the 'seteuid' function. */
> +#undef HAVE_SETEUID
> +
> +/* Define to 1 if you have the 'setgid' function. */
> +#undef HAVE_SETGID
> +
> +/* Define if you have the 'setgroups' function. */
> +#undef HAVE_SETGROUPS
> +
> +/* Define to 1 if you have the 'setitimer' function. */
> +#undef HAVE_SETITIMER
> +
> +/* Define to 1 if you have the 'setlocale' function. */
> +#define HAVE_SETLOCALE 1
> +
> +/* Define to 1 if you have the 'setpgid' function. */
> +#undef HAVE_SETPGID
> +
> +/* Define to 1 if you have the 'setpgrp' function. */
> +#undef HAVE_SETPGRP
> +
> +/* Define to 1 if you have the 'setregid' function. */
> +#undef HAVE_SETREGID
> +
> +/* Define to 1 if you have the 'setresgid' function. */
> +#undef HAVE_SETRESGID
> +
> +/* Define to 1 if you have the 'setresuid' function. */
> +#undef HAVE_SETRESUID
> +
> +/* Define to 1 if you have the 'setreuid' function. */
> +#undef HAVE_SETREUID
> +
> +/* Define to 1 if you have the 'setsid' function. */
> +#undef HAVE_SETSID
> +
> +/* Define to 1 if you have the 'setuid' function. */
> +#undef HAVE_SETUID
> +
> +/* Define to 1 if you have the 'setvbuf' function. */
> +#define HAVE_SETVBUF 1
> +
> +/* Define to 1 if you have the <shadow.h> header file. */
> +#undef HAVE_SHADOW_H
> +
> +/* Define to 1 if you have the 'sigaction' function. */
> +#undef HAVE_SIGACTION
> +
> +/* Define to 1 if you have the 'siginterrupt' function. */
> +#undef HAVE_SIGINTERRUPT
> +
> +/* Define to 1 if you have the <signal.h> header file. */
> +#define HAVE_SIGNAL_H 1
> +
> +/* Define to 1 if you have the 'sigrelse' function. */
> +#undef HAVE_SIGRELSE
> +
> +/* Define to 1 if you have the 'snprintf' function. */
> +#define HAVE_SNPRINTF 1
> +
> +/* Define if sockaddr has sa_len member */
> +#undef HAVE_SOCKADDR_SA_LEN
> +
> +/* struct sockaddr_storage (sys/socket.h) */
> +#undef HAVE_SOCKADDR_STORAGE
> +
> +/* Define if you have the 'socketpair' function. */
> +#undef HAVE_SOCKETPAIR
> +
> +/* Define to 1 if you have the <spawn.h> header file. */
> +#undef HAVE_SPAWN_H
> +
> +/* Define if your compiler provides ssize_t */
> +#define HAVE_SSIZE_T 1
> +
> +/* Define to 1 if you have the 'statvfs' function. */
> +#undef HAVE_STATVFS
> +
> +/* Define if you have struct stat.st_mtim.tv_nsec */
> +#undef HAVE_STAT_TV_NSEC
> +
> +/* Define if you have struct stat.st_mtimensec */
> +#undef HAVE_STAT_TV_NSEC2
> +
> +/* Define if your compiler supports variable length function prototypes (e.g.
> + void fprintf(FILE *, char *, ...);) *and* <stdarg.h> */
> +#define HAVE_STDARG_PROTOTYPES 1
> +
> +/* Define to 1 if you have the <stdint.h> header file. */
> +#define HAVE_STDINT_H 1
> +
> +/* Define to 1 if you have the <stdlib.h> header file. */
> +#define HAVE_STDLIB_H 1
> +
> +/* Define to 1 if you have the 'strdup' function. */
> +#define HAVE_STRDUP 1
> +
> +/* Define to 1 if you have the 'strftime' function. */
> +#define HAVE_STRFTIME 1
> +
> +/* Define to 1 if you have the <strings.h> header file. */
> +#undef HAVE_STRINGS_H
> +
> +/* Define to 1 if you have the <string.h> header file. */
> +#define HAVE_STRING_H 1
> +
> +/* Define to 1 if you have the <stropts.h> header file. */
> +#undef HAVE_STROPTS_H
> +
> +/* Define to 1 if 'st_birthtime' is a member of 'struct stat'. */
> +#define HAVE_STRUCT_STAT_ST_BIRTHTIME 1
> +
> +/* Define to 1 if 'st_blksize' is a member of 'struct stat'. */
> +#define HAVE_STRUCT_STAT_ST_BLKSIZE 1
> +
> +/* Define to 1 if 'st_blocks' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_BLOCKS
> +
> +/* Define to 1 if 'st_flags' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_FLAGS
> +
> +/* Define to 1 if 'st_gen' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_GEN
> +
> +/* Define to 1 if 'st_rdev' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_RDEV
> +
> +/* Define to 1 if 'st_dev' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_DEV
> +
> +/* Define to 1 if 'st_ino' is a member of 'struct stat'. */
> +#undef HAVE_STRUCT_STAT_ST_INO
> +
> +/* Define to 1 if 'tm_zone' is a member of 'struct tm'. */
> +#undef HAVE_STRUCT_TM_TM_ZONE
> +
> +/* Define to 1 if your 'struct stat' has 'st_blocks'. Deprecated, use
> + 'HAVE_STRUCT_STAT_ST_BLOCKS' instead. */
> +#undef HAVE_ST_BLOCKS
> +
> +/* Define if you have the 'symlink' function. */
> +#undef HAVE_SYMLINK
> +
> +/* Define to 1 if you have the 'sysconf' function. */
> +#undef HAVE_SYSCONF
> +
> +/* Define to 1 if you have the <sysexits.h> header file. */
> +#undef HAVE_SYSEXITS_H
> +
> +/* Define to 1 if you have the <sys/audioio.h> header file. */
> +#undef HAVE_SYS_AUDIOIO_H
> +
> +/* Define to 1 if you have the <sys/bsdtty.h> header file. */
> +#undef HAVE_SYS_BSDTTY_H
> +
> +/* Define to 1 if you have the <sys/dir.h> header file, and it defines 'DIR'.
> + */
> +#undef HAVE_SYS_DIR_H
> +
> +/* Define to 1 if you have the <sys/epoll.h> header file. */
> +#undef HAVE_SYS_EPOLL_H
> +
> +/* Define to 1 if you have the <sys/event.h> header file. */
> +#undef HAVE_SYS_EVENT_H
> +
> +/* Define to 1 if you have the <sys/file.h> header file. */
> +#undef HAVE_SYS_FILE_H
> +
> +/* Define to 1 if you have the <sys/loadavg.h> header file. */
> +#undef HAVE_SYS_LOADAVG_H
> +
> +/* Define to 1 if you have the <sys/lock.h> header file. */
> +#undef HAVE_SYS_LOCK_H
> +
> +/* Define to 1 if you have the <sys/mkdev.h> header file. */
> +#undef HAVE_SYS_MKDEV_H
> +
> +/* Define to 1 if you have the <sys/modem.h> header file. */
> +#undef HAVE_SYS_MODEM_H
> +
> +/* Define to 1 if you have the <sys/ndir.h> header file, and it defines 'DIR'.
> + */
> +#undef HAVE_SYS_NDIR_H
> +
> +/* Define to 1 if you have the <sys/param.h> header file. */
> +#define HAVE_SYS_PARAM_H 1
> +
> +/* Define to 1 if you have the <sys/poll.h> header file. */
> +#define HAVE_SYS_POLL_H 1
> +
> +/* Define to 1 if you have the <sys/resource.h> header file. */
> +#define HAVE_SYS_RESOURCE_H 1
> +
> +/* Define to 1 if you have the <sys/select.h> header file. */
> +#define HAVE_SYS_SELECT_H 1
> +
> +/* Define to 1 if you have the <sys/socket.h> header file. */
> +#define HAVE_SYS_SOCKET_H 1
> +
> +/* Define to 1 if you have the <sys/statvfs.h> header file. */
> +#undef HAVE_SYS_STATVFS_H
> +
> +/* Define to 1 if you have the <sys/stat.h> header file. */
> +#define HAVE_SYS_STAT_H 1
> +
> +/* Define to 1 if you have the <sys/termio.h> header file. */
> +#undef HAVE_SYS_TERMIO_H
> +
> +/* Define to 1 if you have the <sys/times.h> header file. */
> +#undef HAVE_SYS_TIMES_H
> +
> +/* Define to 1 if you have the <sys/time.h> header file. */
> +#define HAVE_SYS_TIME_H 1
> +
> +/* Define to 1 if you have the <sys/types.h> header file. */
> +#define HAVE_SYS_TYPES_H 1
> +
> +/* Define to 1 if you have the <sys/un.h> header file. */
> +#undef HAVE_SYS_UN_H
> +
> +/* Define to 1 if you have the <sys/utsname.h> header file. */
> +#undef HAVE_SYS_UTSNAME_H
> +
> +/* Define to 1 if you have the <sys/wait.h> header file. */
> +#undef HAVE_SYS_WAIT_H
> +
> +/* Define to 1 if you have the system() command. */
> +#define HAVE_SYSTEM 1
> +
> +/* Define to 1 if you have the 'tcgetpgrp' function. */
> +#undef HAVE_TCGETPGRP
> +
> +/* Define to 1 if you have the 'tcsetpgrp' function. */
> +#undef HAVE_TCSETPGRP
> +
> +/* Define to 1 if you have the 'tempnam' function. */
> +#define HAVE_TEMPNAM 1
> +
> +/* Define to 1 if you have the <termios.h> header file. */
> +#undef HAVE_TERMIOS_H
> +
> +/* Define to 1 if you have the <term.h> header file. */
> +#undef HAVE_TERM_H
> +
> +/* Define to 1 if you have the 'tgamma' function. */
> +#undef HAVE_TGAMMA
> +
> +/* Define to 1 if you have the <thread.h> header file. */
> +#undef HAVE_THREAD_H
> +
> +/* Define to 1 if you have the 'timegm' function. */
> +#undef HAVE_TIMEGM
> +
> +/* Define to 1 if you have the 'times' function. */
> +#undef HAVE_TIMES
> +
> +/* Define to 1 if you have the 'tmpfile' function. */
> +#define HAVE_TMPFILE 1
> +
> +/* Define to 1 if you have the 'tmpnam' function. */
> +#define HAVE_TMPNAM 1
> +
> +/* Define to 1 if you have the 'tmpnam_r' function. */
> +#undef HAVE_TMPNAM_R
> +
> +/* Define to 1 if your 'struct tm' has 'tm_zone'. Deprecated, use
> + 'HAVE_STRUCT_TM_TM_ZONE' instead. */
> +#undef HAVE_TM_ZONE
> +
> +/* Define to 1 if you have the 'truncate' function. */
> +#undef HAVE_TRUNCATE
> +
> +/* Define to 1 if you don't have 'tm_zone' but do have the external array
> + 'tzname'. */
> +#undef HAVE_TZNAME
> +
> +/* Define this if you have tcl and TCL_UTF_MAX==6 */
> +#undef HAVE_UCS4_TCL
> +
> +/* Define if your compiler provides uint32_t. */
> +#undef HAVE_UINT32_T
> +
> +/* Define if your compiler provides uint64_t. */
> +#undef HAVE_UINT64_T
> +
> +/* Define to 1 if the system has the type 'uintptr_t'. */
> +#define HAVE_UINTPTR_T 1
> +
> +/* Define to 1 if you have the 'uname' function. */
> +#undef HAVE_UNAME
> +
> +/* Define to 1 if you have the <unistd.h> header file. */
> +#define HAVE_UNISTD_H 1
> +
> +/* Define to 1 if you have the 'unsetenv' function. */
> +#undef HAVE_UNSETENV
> +
> +/* Define if you have a useable wchar_t type defined in wchar.h; useable means
> + wchar_t must be an unsigned type with at least 16 bits. (see
> + Include/unicodeobject.h). */
> +#define HAVE_USABLE_WCHAR_T 1
> +
> +/* Define to 1 if you have the <util.h> header file. */
> +#undef HAVE_UTIL_H
> +
> +/* Define to 1 if you have the 'utimes' function. */
> +#undef HAVE_UTIMES
> +
> +/* Define to 1 if you have the <utime.h> header file. */
> +#define HAVE_UTIME_H 1
> +
> +/* Define to 1 if you have the 'wait3' function. */
> +#undef HAVE_WAIT3
> +
> +/* Define to 1 if you have the 'wait4' function. */
> +#undef HAVE_WAIT4
> +
> +/* Define to 1 if you have the 'waitpid' function. */
> +#undef HAVE_WAITPID
> +
> +/* Define if the compiler provides a wchar.h header file. */
> +#define HAVE_WCHAR_H 1
> +
> +/* Define to 1 if you have the 'wcscoll' function. */
> +#define HAVE_WCSCOLL 1
> +
> +/* Define if tzset() actually switches the local timezone in a meaningful way.
> + */
> +#undef HAVE_WORKING_TZSET
> +
> +/* Define if the zlib library has inflateCopy */
> +#undef HAVE_ZLIB_COPY
> +
> +/* Define to 1 if you have the '_getpty' function. */
> +#undef HAVE__GETPTY
> +
> +/* Define if you are using Mach cthreads directly under /include */
> +#undef HURD_C_THREADS
> +
> +/* Define if you are using Mach cthreads under mach / */
> +#undef MACH_C_THREADS
> +
> +/* Define to 1 if 'major', 'minor', and 'makedev' are declared in <mkdev.h>.
> + */
> +#undef MAJOR_IN_MKDEV
> +
> +/* Define to 1 if 'major', 'minor', and 'makedev' are declared in
> + <sysmacros.h>. */
> +#undef MAJOR_IN_SYSMACROS
> +
> +/* Define if mvwdelch in curses.h is an expression. */
> +#undef MVWDELCH_IS_EXPRESSION
> +
> +/* Define to the address where bug reports for this package should be sent. */
> +#define PACKAGE_BUGREPORT "edk2-devel@lists.01.org"
> +
> +/* Define to the full name of this package. */
> +#define PACKAGE_NAME "EDK II Python 3.6.8 Package"
> +
> +/* Define to the full name and version of this package. */
> +#define PACKAGE_STRING "EDK II Python 3.6.8 Package V0.1"
> +
> +/* Define to the one symbol short name of this package. */
> +#define PACKAGE_TARNAME "EADK_Python"
> +
> +/* Define to the home page for this package. */
> +#define PACKAGE_URL "http://www.tianocore.org/"
> +
> +/* Define to the version of this package. */
> +#define PACKAGE_VERSION "V0.1"
> +
> +/* Define if POSIX semaphores aren't enabled on your system */
> +#define POSIX_SEMAPHORES_NOT_ENABLED 1
> +
> +/* Defined if PTHREAD_SCOPE_SYSTEM supported. */
> +#undef PTHREAD_SYSTEM_SCHED_SUPPORTED
> +
> +/* Define as the preferred size in bits of long digits */
> +#undef PYLONG_BITS_IN_DIGIT
> +
> +/* Define to printf format modifier for long long type */
> +#define PY_FORMAT_LONG_LONG "ll"
> +
> +/* Define to printf format modifier for Py_ssize_t */
> +#define PY_FORMAT_SIZE_T "z"
> +
> +/* Define as the integral type used for Unicode representation. */
> +#define PY_UNICODE_TYPE wchar_t
> +
> +/* Define if you want to build an interpreter with many run-time checks. */
> +#undef Py_DEBUG
> +
> +/* Defined if Python is built as a shared library. */
> +#undef Py_ENABLE_SHARED
> +
> +/* Define if you want to have a Unicode type. */
> +#define Py_USING_UNICODE
> +
> +/* assume C89 semantics that RETSIGTYPE is always void */
> +#undef RETSIGTYPE
> +
> +/* Define if setpgrp() must be called as setpgrp(0, 0). */
> +#undef SETPGRP_HAVE_ARG
> +
> +/* Define this to be extension of shared libraries (including the dot!). */
> +#undef SHLIB_EXT
> +
> +/* Define if i>>j for signed int i does not extend the sign bit when i < 0 */
> +#undef SIGNED_RIGHT_SHIFT_ZERO_FILLS
> +
> +/* The size of 'double', as computed by sizeof. */
> +#define SIZEOF_DOUBLE 8
> +
> +/* The size of 'float', as computed by sizeof. */
> +#define SIZEOF_FLOAT 4
> +
> +/* The size of 'fpos_t', as computed by sizeof. */
> +#define SIZEOF_FPOS_T 8
> +
> +/* The size of 'int', as computed by sizeof. */
> +#define SIZEOF_INT 4
> +
> +/* The size of 'long', as computed by sizeof. */
> +#if defined(_MSC_VER) /* Handle Microsoft VC++ compiler specifics. */
> +#define SIZEOF_LONG 4
> +#else
> +#define SIZEOF_LONG 8
> +#endif
> +
> +/* The size of 'long double', as computed by sizeof. */
> +#undef SIZEOF_LONG_DOUBLE
> +
> +/* The size of 'long long', as computed by sizeof. */
> +#define SIZEOF_LONG_LONG 8
> +
> +/* The size of 'off_t', as computed by sizeof. */
> +#ifdef UEFI_MSVC_64
> +#define SIZEOF_OFF_T 8
> +#else
> +#define SIZEOF_OFF_T 4
> +#endif
> +
> +
> +/* The size of 'pid_t', as computed by sizeof. */
> +#define SIZEOF_PID_T 4
> +
> +/* The size of 'pthread_t', as computed by sizeof. */
> +#undef SIZEOF_PTHREAD_T
> +
> +/* The size of 'short', as computed by sizeof. */
> +#define SIZEOF_SHORT 2
> +
> +/* The size of 'size_t', as computed by sizeof. */
> +#ifdef UEFI_MSVC_64
> +#define SIZEOF_SIZE_T 8
> +#else
> +#define SIZEOF_SIZE_T 4
> +#endif
> +
> +/* The size of 'time_t', as computed by sizeof. */
> +#define SIZEOF_TIME_T 4
> +
> +/* The size of 'uintptr_t', as computed by sizeof. */
> +#ifdef UEFI_MSVC_64
> +#define SIZEOF_UINTPTR_T 8
> +#else
> +#define SIZEOF_UINTPTR_T 4
> +#endif
> +
> +/* The size of 'void *', as computed by sizeof. */
> +#ifdef UEFI_MSVC_64
> +#define SIZEOF_VOID_P 8
> +#else
> +#define SIZEOF_VOID_P 4
> +#endif
> +
> +/* The size of 'wchar_t', as computed by sizeof. */
> +#define SIZEOF_WCHAR_T 2
> +
> +/* The size of '_Bool', as computed by sizeof. */
> +#define SIZEOF__BOOL 1
> +
> +/* Define to 1 if you have the ANSI C header files. */
> +#define STDC_HEADERS 1
> +
> +/* Define if you can safely include both <sys/select.h> and <sys/time.h>
> + (which you can't on SCO ODT 3.0). */
> +#undef SYS_SELECT_WITH_SYS_TIME
> +
> +/* Define if tanh(-0.) is -0., or if platform doesn't have signed zeros */
> +#undef TANH_PRESERVES_ZERO_SIGN
> +
> +/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
> +#undef TIME_WITH_SYS_TIME
> +
> +/* Define to 1 if your <sys/time.h> declares 'struct tm'. */
> +#undef TM_IN_SYS_TIME
> +
> +/* Enable extensions on AIX 3, Interix. */
> +#ifndef _ALL_SOURCE
> +# undef _ALL_SOURCE
> +#endif
> +/* Enable GNU extensions on systems that have them. */
> +#ifndef _GNU_SOURCE
> +# undef _GNU_SOURCE
> +#endif
> +/* Enable threading extensions on Solaris. */
> +#ifndef _POSIX_PTHREAD_SEMANTICS
> +# undef _POSIX_PTHREAD_SEMANTICS
> +#endif
> +/* Enable extensions on HP NonStop. */
> +#ifndef _TANDEM_SOURCE
> +# undef _TANDEM_SOURCE
> +#endif
> +/* Enable general extensions on Solaris. */
> +#ifndef __EXTENSIONS__
> +# undef __EXTENSIONS__
> +#endif
> +
> +
> +/* Define if you want to use MacPython modules on MacOSX in unix-Python. */
> +#undef USE_TOOLBOX_OBJECT_GLUE
> +
> +/* Define if a va_list is an array of some kind */
> +#undef VA_LIST_IS_ARRAY
> +
> +/* Define if you want SIGFPE handled (see Include/pyfpe.h). */
> +#undef WANT_SIGFPE_HANDLER
> +
> +/* Define if you want wctype.h functions to be used instead of the one
> + supplied by Python itself. (see Include/unicodectype.h). */
> +#define WANT_WCTYPE_FUNCTIONS 1
> +
> +/* Define if WINDOW in curses.h offers a field _flags. */
> +#undef WINDOW_HAS_FLAGS
> +
> +/* Define if you want documentation strings in extension modules */
> +#undef WITH_DOC_STRINGS
> +
> +/* Define if you want to use the new-style (Openstep, Rhapsody, MacOS) dynamic
> + linker (dyld) instead of the old-style (NextStep) dynamic linker (rld).
> + Dyld is necessary to support frameworks. */
> +#undef WITH_DYLD
> +
> +/* Define to 1 if libintl is needed for locale functions. */
> +#undef WITH_LIBINTL
> +
> +/* Define if you want to produce an OpenStep/Rhapsody framework (shared
> + library plus accessory files). */
> +#undef WITH_NEXT_FRAMEWORK
> +
> +/* Define if you want to compile in Python-specific mallocs */
> +#undef WITH_PYMALLOC
> +
> +/* Define if you want to compile in rudimentary thread support */
> +#undef WITH_THREAD
> +
> +/* Define to profile with the Pentium timestamp counter */
> +#undef WITH_TSC
> +
> +/* Define if you want pymalloc to be disabled when running under valgrind */
> +#undef WITH_VALGRIND
> +
> +/* Define WORDS_BIGENDIAN to 1 if your processor stores words with the most
> + significant byte first (like Motorola and SPARC, unlike Intel). */
> +#if defined AC_APPLE_UNIVERSAL_BUILD
> +# if defined __BIG_ENDIAN__
> +# define WORDS_BIGENDIAN 1
> +# endif
> +#else
> +# ifndef WORDS_BIGENDIAN
> +# undef WORDS_BIGENDIAN
> +# endif
> +#endif
> +
> +/* Define if arithmetic is subject to x87-style double rounding issue */
> +#undef X87_DOUBLE_ROUNDING
> +
> +/* Define on OpenBSD to activate all library features */
> +#undef _BSD_SOURCE
> +
> +/* Define on Irix to enable u_int */
> +#undef _BSD_TYPES
> +
> +/* Define on Darwin to activate all library features */
> +#undef _DARWIN_C_SOURCE
> +
> +/* This must be set to 64 on some systems to enable large file support. */
> +#undef _FILE_OFFSET_BITS
> +
> +/* Define on Linux to activate all library features */
> +#undef _GNU_SOURCE
> +
> +/* This must be defined on some systems to enable large file support. */
> +#undef _LARGEFILE_SOURCE
> +
> +/* This must be defined on AIX systems to enable large file support. */
> +#undef _LARGE_FILES
> +
> +/* Define to 1 if on MINIX. */
> +#undef _MINIX
> +
> +/* Define on NetBSD to activate all library features */
> +#define _NETBSD_SOURCE 1
> +
> +/* Define _OSF_SOURCE to get the makedev macro. */
> +#undef _OSF_SOURCE
> +
> +/* Define to 2 if the system does not provide POSIX.1 features except with
> + this defined. */
> +#undef _POSIX_1_SOURCE
> +
> +/* Define to activate features from IEEE Stds 1003.1-2001 */
> +#undef _POSIX_C_SOURCE
> +
> +/* Define to 1 if you need to in order for 'stat' and other things to work. */
> +#undef _POSIX_SOURCE
> +
> +/* Define if you have POSIX threads, and your system does not define that. */
> +#undef _POSIX_THREADS
> +
> +/* Define to force use of thread-safe errno, h_errno, and other functions */
> +#undef _REENTRANT
> +
> +/* Define for Solaris 2.5.1 so the uint32_t typedef from <sys/synch.h>,
> + <pthread.h>, or <semaphore.h> is not used. If the typedef were allowed, the
> + #define below would cause a syntax error. */
> +#undef _UINT32_T
> +
> +/* Define for Solaris 2.5.1 so the uint64_t typedef from <sys/synch.h>,
> + <pthread.h>, or <semaphore.h> is not used. If the typedef were allowed, the
> + #define below would cause a syntax error. */
> +#undef _UINT64_T
> +
> +/* Define to the level of X/Open that your system supports */
> +#undef _XOPEN_SOURCE
> +
> +/* Define to activate Unix95-and-earlier features */
> +#undef _XOPEN_SOURCE_EXTENDED
> +
> +/* Define on FreeBSD to activate all library features */
> +#undef __BSD_VISIBLE
> +
> +/* Define to 1 if type 'char' is unsigned and you are not using gcc. */
> +#ifndef __CHAR_UNSIGNED__
> +# undef __CHAR_UNSIGNED__
> +#endif
> +
> +/* Defined on Solaris to see additional function prototypes. */
> +#undef __EXTENSIONS__
> +
> +/* Define to 'long' if <time.h> doesn't define. */
> +//#undef clock_t
> +
> +/* Define to empty if 'const' does not conform to ANSI C. */
> +//#undef const
> +
> +/* Define to 'int' if <sys/types.h> doesn't define. */
> +//#undef gid_t
> +
> +/* Define to the type of a signed integer type of width exactly 32 bits if
> + such a type exists and the standard includes do not define it. */
> +//#undef int32_t
> +
> +/* Define to the type of a signed integer type of width exactly 64 bits if
> + such a type exists and the standard includes do not define it. */
> +//#undef int64_t
> +
> +/* Define to 'int' if <sys/types.h> does not define. */
> +//#undef mode_t
> +
> +/* Define to 'long int' if <sys/types.h> does not define. */
> +//#undef off_t
> +
> +/* Define to 'int' if <sys/types.h> does not define. */
> +//#undef pid_t
> +
> +/* Define to empty if the keyword does not work. */
> +//#undef signed
> +
> +/* Define to 'unsigned int' if <sys/types.h> does not define. */
> +//#undef size_t
> +
> +/* Define to 'int' if <sys/socket.h> does not define. */
> +//#undef socklen_t
> +
> +/* Define to 'int' if <sys/types.h> doesn't define. */
> +//#undef uid_t
> +
> +/* Define to the type of an unsigned integer type of width exactly 32 bits if
> + such a type exists and the standard includes do not define it. */
> +//#undef uint32_t
> +
> +/* Define to the type of an unsigned integer type of width exactly 64 bits if
> + such a type exists and the standard includes do not define it. */
> +//#undef uint64_t
> +
> +/* Define to empty if the keyword does not work. */
> +//#undef volatile
> +
> +#endif /*Py_PYCONFIG_H*/
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
> new file mode 100644
> index 00000000..a463004d
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
> @@ -0,0 +1,74 @@
> +/* Static DTrace probes interface */
> +
> +#ifndef Py_DTRACE_H
> +#define Py_DTRACE_H
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#ifdef WITH_DTRACE
> +
> +#include "pydtrace_probes.h"
> +
> +/* pydtrace_probes.h, on systems with DTrace, is auto-generated to include
> + `PyDTrace_{PROBE}` and `PyDTrace_{PROBE}_ENABLED()` macros for every probe
> + defined in pydtrace_provider.d.
> +
> + Calling these functions must be guarded by a `PyDTrace_{PROBE}_ENABLED()`
> + check to minimize performance impact when probing is off. For example:
> +
> + if (PyDTrace_FUNCTION_ENTRY_ENABLED())
> + PyDTrace_FUNCTION_ENTRY(f);
> +*/
> +
> +#else
> +
> +/* Without DTrace, compile to nothing. */
> +#ifndef UEFI_C_SOURCE
> +static inline void PyDTrace_LINE(const char *arg0, const char *arg1, int arg2) {}
> +static inline void PyDTrace_FUNCTION_ENTRY(const char *arg0, const char *arg1, int arg2) {}
> +static inline void PyDTrace_FUNCTION_RETURN(const char *arg0, const char *arg1, int arg2) {}
> +static inline void PyDTrace_GC_START(int arg0) {}
> +static inline void PyDTrace_GC_DONE(int arg0) {}
> +static inline void PyDTrace_INSTANCE_NEW_START(int arg0) {}
> +static inline void PyDTrace_INSTANCE_NEW_DONE(int arg0) {}
> +static inline void PyDTrace_INSTANCE_DELETE_START(int arg0) {}
> +static inline void PyDTrace_INSTANCE_DELETE_DONE(int arg0) {}
> +
> +static inline int PyDTrace_LINE_ENABLED(void) { return 0; }
> +static inline int PyDTrace_FUNCTION_ENTRY_ENABLED(void) { return 0; }
> +static inline int PyDTrace_FUNCTION_RETURN_ENABLED(void) { return 0; }
> +static inline int PyDTrace_GC_START_ENABLED(void) { return 0; }
> +static inline int PyDTrace_GC_DONE_ENABLED(void) { return 0; }
> +static inline int PyDTrace_INSTANCE_NEW_START_ENABLED(void) { return 0; }
> +static inline int PyDTrace_INSTANCE_NEW_DONE_ENABLED(void) { return 0; }
> +static inline int PyDTrace_INSTANCE_DELETE_START_ENABLED(void) { return 0; }
> +static inline int PyDTrace_INSTANCE_DELETE_DONE_ENABLED(void) { return 0; }
> +#else
> +static void PyDTrace_LINE(const char *arg0, const char *arg1, int arg2) {}
> +static void PyDTrace_FUNCTION_ENTRY(const char *arg0, const char *arg1, int arg2) {}
> +static void PyDTrace_FUNCTION_RETURN(const char *arg0, const char *arg1, int arg2) {}
> +static void PyDTrace_GC_START(int arg0) {}
> +static void PyDTrace_GC_DONE(int arg0) {}
> +static void PyDTrace_INSTANCE_NEW_START(int arg0) {}
> +static void PyDTrace_INSTANCE_NEW_DONE(int arg0) {}
> +static void PyDTrace_INSTANCE_DELETE_START(int arg0) {}
> +static void PyDTrace_INSTANCE_DELETE_DONE(int arg0) {}
> +
> +static int PyDTrace_LINE_ENABLED(void) { return 0; }
> +static int PyDTrace_FUNCTION_ENTRY_ENABLED(void) { return 0; }
> +static int PyDTrace_FUNCTION_RETURN_ENABLED(void) { return 0; }
> +static int PyDTrace_GC_START_ENABLED(void) { return 0; }
> +static int PyDTrace_GC_DONE_ENABLED(void) { return 0; }
> +static int PyDTrace_INSTANCE_NEW_START_ENABLED(void) { return 0; }
> +static int PyDTrace_INSTANCE_NEW_DONE_ENABLED(void) { return 0; }
> +static int PyDTrace_INSTANCE_DELETE_START_ENABLED(void) { return 0; }
> +static int PyDTrace_INSTANCE_DELETE_DONE_ENABLED(void) { return 0; }
> +#endif
> +
> +#endif /* !WITH_DTRACE */
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* !Py_DTRACE_H */
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
> new file mode 100644
> index 00000000..ca49c295
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
> @@ -0,0 +1,788 @@
> +#ifndef Py_PYPORT_H
> +#define Py_PYPORT_H
> +
> +#include "pyconfig.h" /* include for defines */
> +
> +/* Some versions of HP-UX & Solaris need inttypes.h for int32_t,
> + INT32_MAX, etc. */
> +#ifdef UEFI_C_SOURCE
> +#ifdef HAVE_INTTYPES_H
> +#include <inttypes.h>
> +#endif
> +
> +#ifdef HAVE_STDINT_H
> +#include <stdint.h>
> +#endif
> +#else
> +#include <inttypes.h>
> +#endif
> +
> +/**************************************************************************
> +Symbols and macros to supply platform-independent interfaces to basic
> +C language & library operations whose spellings vary across platforms.
> +
> +Please try to make documentation here as clear as possible: by definition,
> +the stuff here is trying to illuminate C's darkest corners.
> +
> +Config #defines referenced here:
> +
> +SIGNED_RIGHT_SHIFT_ZERO_FILLS
> +Meaning: To be defined iff i>>j does not extend the sign bit when i is a
> + signed integral type and i < 0.
> +Used in: Py_ARITHMETIC_RIGHT_SHIFT
> +
> +Py_DEBUG
> +Meaning: Extra checks compiled in for debug mode.
> +Used in: Py_SAFE_DOWNCAST
> +
> +**************************************************************************/
> +
> +/* typedefs for some C9X-defined synonyms for integral types.
> + *
> + * The names in Python are exactly the same as the C9X names, except with a
> + * Py_ prefix. Until C9X is universally implemented, this is the only way
> + * to ensure that Python gets reliable names that don't conflict with names
> + * in non-Python code that are playing their own tricks to define the C9X
> + * names.
> + *
> + * NOTE: don't go nuts here! Python has no use for *most* of the C9X
> + * integral synonyms. Only define the ones we actually need.
> + */
> +
> +/* long long is required. Ensure HAVE_LONG_LONG is defined for compatibility. */
> +#ifndef HAVE_LONG_LONG
> +#define HAVE_LONG_LONG 1
> +#endif
> +#ifndef PY_LONG_LONG
> +#define PY_LONG_LONG long long
> +/* If LLONG_MAX is defined in limits.h, use that. */
> +#define PY_LLONG_MIN LLONG_MIN
> +#define PY_LLONG_MAX LLONG_MAX
> +#define PY_ULLONG_MAX ULLONG_MAX
> +#endif
> +
> +#define PY_UINT32_T uint32_t
> +#define PY_UINT64_T uint64_t
> +
> +/* Signed variants of the above */
> +#define PY_INT32_T int32_t
> +#define PY_INT64_T int64_t
> +
> +/* If PYLONG_BITS_IN_DIGIT is not defined then we'll use 30-bit digits if all
> + the necessary integer types are available, and we're on a 64-bit platform
> + (as determined by SIZEOF_VOID_P); otherwise we use 15-bit digits. */
> +
> +#ifndef PYLONG_BITS_IN_DIGIT
> +#if SIZEOF_VOID_P >= 8
> +#define PYLONG_BITS_IN_DIGIT 30
> +#else
> +#define PYLONG_BITS_IN_DIGIT 15
> +#endif
> +#endif
> +
> +/* uintptr_t is the C9X name for an unsigned integral type such that a
> + * legitimate void* can be cast to uintptr_t and then back to void* again
> + * without loss of information. Similarly for intptr_t, wrt a signed
> + * integral type.
> + */
> +typedef uintptr_t Py_uintptr_t;
> +typedef intptr_t Py_intptr_t;
> +
> +/* Py_ssize_t is a signed integral type such that sizeof(Py_ssize_t) ==
> + * sizeof(size_t). C99 doesn't define such a thing directly (size_t is an
> + * unsigned integral type). See PEP 353 for details.
> + */
> +#ifdef HAVE_SSIZE_T
> +typedef ssize_t Py_ssize_t;
> +#elif SIZEOF_VOID_P == SIZEOF_SIZE_T
> +typedef Py_intptr_t Py_ssize_t;
> +#else
> +# error "Python needs a typedef for Py_ssize_t in pyport.h."
> +#endif
> +
> +/* Py_hash_t is the same size as a pointer. */
> +#define SIZEOF_PY_HASH_T SIZEOF_SIZE_T
> +typedef Py_ssize_t Py_hash_t;
> +/* Py_uhash_t is the unsigned equivalent needed to calculate numeric hash. */
> +#define SIZEOF_PY_UHASH_T SIZEOF_SIZE_T
> +typedef size_t Py_uhash_t;
> +
> +/* Only used for compatibility with code that may not be PY_SSIZE_T_CLEAN. */
> +#ifdef PY_SSIZE_T_CLEAN
> +typedef Py_ssize_t Py_ssize_clean_t;
> +#else
> +typedef int Py_ssize_clean_t;
> +#endif
> +
> +/* Largest possible value of size_t. */
> +#define PY_SIZE_MAX SIZE_MAX
> +
> +/* Largest positive value of type Py_ssize_t. */
> +#define PY_SSIZE_T_MAX ((Py_ssize_t)(((size_t)-1)>>1))
> +/* Smallest negative value of type Py_ssize_t. */
> +#define PY_SSIZE_T_MIN (-PY_SSIZE_T_MAX-1)
> +
> +/* PY_FORMAT_SIZE_T is a platform-specific modifier for use in a printf
> + * format to convert an argument with the width of a size_t or Py_ssize_t.
> + * C99 introduced "z" for this purpose, but not all platforms support that;
> + * e.g., MS compilers use "I" instead.
> + *
> + * These "high level" Python format functions interpret "z" correctly on
> + * all platforms (Python interprets the format string itself, and does whatever
> + * the platform C requires to convert a size_t/Py_ssize_t argument):
> + *
> + * PyBytes_FromFormat
> + * PyErr_Format
> + * PyBytes_FromFormatV
> + * PyUnicode_FromFormatV
> + *
> + * Lower-level uses require that you interpolate the correct format modifier
> + * yourself (e.g., calling printf, fprintf, sprintf, PyOS_snprintf); for
> + * example,
> + *
> + * Py_ssize_t index;
> + * fprintf(stderr, "index %" PY_FORMAT_SIZE_T "d sucks\n", index);
> + *
> + * That will expand to %ld, or %Id, or to something else correct for a
> + * Py_ssize_t on the platform.
> + */
> +#ifndef PY_FORMAT_SIZE_T
> +# if SIZEOF_SIZE_T == SIZEOF_INT && !defined(__APPLE__)
> +# define PY_FORMAT_SIZE_T ""
> +# elif SIZEOF_SIZE_T == SIZEOF_LONG
> +# define PY_FORMAT_SIZE_T "l"
> +# elif defined(MS_WINDOWS)
> +# define PY_FORMAT_SIZE_T "I"
> +# else
> +# error "This platform's pyconfig.h needs to define PY_FORMAT_SIZE_T"
> +# endif
> +#endif
> +
> +/* Py_LOCAL can be used instead of static to get the fastest possible calling
> + * convention for functions that are local to a given module.
> + *
> + * Py_LOCAL_INLINE does the same thing, and also explicitly requests inlining,
> + * for platforms that support that.
> + *
> + * If PY_LOCAL_AGGRESSIVE is defined before python.h is included, more
> + * "aggressive" inlining/optimization is enabled for the entire module. This
> + * may lead to code bloat, and may slow things down for those reasons. It may
> + * also lead to errors, if the code relies on pointer aliasing. Use with
> + * care.
> + *
> + * NOTE: You can only use this for functions that are entirely local to a
> + * module; functions that are exported via method tables, callbacks, etc,
> + * should keep using static.
> + */
> +
> +#if defined(_MSC_VER)
> +#if defined(PY_LOCAL_AGGRESSIVE)
> +/* enable more aggressive optimization for visual studio */
> +#ifdef UEFI_C_SOURCE
> +#pragma optimize("gt", on)
> +#else
> +#pragma optimize("agtw", on)
> +#endif
> +#endif
> +/* ignore warnings if the compiler decides not to inline a function */
> +#pragma warning(disable: 4710)
> +/* fastest possible local call under MSVC */
> +#define Py_LOCAL(type) static type __fastcall
> +#define Py_LOCAL_INLINE(type) static __inline type __fastcall
> +#elif defined(USE_INLINE)
> +#define Py_LOCAL(type) static type
> +#define Py_LOCAL_INLINE(type) static inline type
> +#else
> +#define Py_LOCAL(type) static type
> +#define Py_LOCAL_INLINE(type) static type
> +#endif
> +
> +/* Py_MEMCPY is kept for backwards compatibility,
> + * see https://bugs.python.org/issue28126 */
> +#define Py_MEMCPY memcpy
> +
> +#include <stdlib.h>
> +
> +#ifdef HAVE_IEEEFP_H
> +#include <ieeefp.h> /* needed for 'finite' declaration on some platforms */
> +#endif
> +
> +#include <math.h> /* Moved here from the math section, before extern "C" */
> +
> +/********************************************
> + * WRAPPER FOR <time.h> and/or <sys/time.h> *
> + ********************************************/
> +
> +#ifdef TIME_WITH_SYS_TIME
> +#include <sys/time.h>
> +#include <time.h>
> +#else /* !TIME_WITH_SYS_TIME */
> +#ifdef HAVE_SYS_TIME_H
> +#include <sys/time.h>
> +#else /* !HAVE_SYS_TIME_H */
> +#include <time.h>
> +#endif /* !HAVE_SYS_TIME_H */
> +#endif /* !TIME_WITH_SYS_TIME */
> +
> +
> +/******************************
> + * WRAPPER FOR <sys/select.h> *
> + ******************************/
> +
> +/* NB caller must include <sys/types.h> */
> +
> +#ifdef HAVE_SYS_SELECT_H
> +#include <sys/select.h>
> +#endif /* !HAVE_SYS_SELECT_H */
> +
> +/*******************************
> + * stat() and fstat() fiddling *
> + *******************************/
> +
> +#ifdef HAVE_SYS_STAT_H
> +#include <sys/stat.h>
> +#elif defined(HAVE_STAT_H)
> +#include <stat.h>
> +#endif
> +
> +#ifndef S_IFMT
> +/* VisualAge C/C++ Failed to Define MountType Field in sys/stat.h */
> +#define S_IFMT 0170000
> +#endif
> +
> +#ifndef S_IFLNK
> +/* Windows doesn't define S_IFLNK but posixmodule.c maps
> + * IO_REPARSE_TAG_SYMLINK to S_IFLNK */
> +# define S_IFLNK 0120000
> +#endif
> +
> +#ifndef S_ISREG
> +#define S_ISREG(x) (((x) & S_IFMT) == S_IFREG)
> +#endif
> +
> +#ifndef S_ISDIR
> +#define S_ISDIR(x) (((x) & S_IFMT) == S_IFDIR)
> +#endif
> +
> +#ifndef S_ISCHR
> +#define S_ISCHR(x) (((x) & S_IFMT) == S_IFCHR)
> +#endif
> +
> +#ifdef __cplusplus
> +/* Move this down here since some C++ #include's don't like to be included
> + inside an extern "C" */
> +extern "C" {
> +#endif
> +
> +
> +/* Py_ARITHMETIC_RIGHT_SHIFT
> + * C doesn't define whether a right-shift of a signed integer sign-extends
> + * or zero-fills. Here a macro to force sign extension:
> + * Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J)
> + * Return I >> J, forcing sign extension. Arithmetically, return the
> + * floor of I/2**J.
> + * Requirements:
> + * I should have signed integer type. In the terminology of C99, this can
> + * be either one of the five standard signed integer types (signed char,
> + * short, int, long, long long) or an extended signed integer type.
> + * J is an integer >= 0 and strictly less than the number of bits in the
> + * type of I (because C doesn't define what happens for J outside that
> + * range either).
> + * TYPE used to specify the type of I, but is now ignored. It's been left
> + * in for backwards compatibility with versions <= 2.6 or 3.0.
> + * Caution:
> + * I may be evaluated more than once.
> + */
> +#ifdef SIGNED_RIGHT_SHIFT_ZERO_FILLS
> +#define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) \
> + ((I) < 0 ? -1-((-1-(I)) >> (J)) : (I) >> (J))
> +#else
> +#define Py_ARITHMETIC_RIGHT_SHIFT(TYPE, I, J) ((I) >> (J))
> +#endif
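
As a quick illustration, a caller that needs floor division by a power of two
regardless of how the platform shifts negative values could use the macro like
this (hypothetical helper, not part of this patch):

    /* Floor-divide a signed value by 2**4; the TYPE argument is ignored
       but kept for backwards compatibility, as noted above. */
    static long
    floor_div_16(long v)
    {
        return Py_ARITHMETIC_RIGHT_SHIFT(long, v, 4);
    }
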
> +
> +/* Py_FORCE_EXPANSION(X)
> + * "Simply" returns its argument. However, macro expansions within the
> + * argument are evaluated. This unfortunate trickery is needed to get
> + * token-pasting to work as desired in some cases.
> + */
> +#define Py_FORCE_EXPANSION(X) X
> +
> +/* Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW)
> + * Cast VALUE to type NARROW from type WIDE. In Py_DEBUG mode, this
> + * assert-fails if any information is lost.
> + * Caution:
> + * VALUE may be evaluated more than once.
> + */
> +#ifdef Py_DEBUG
> +#define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) \
> + (assert((WIDE)(NARROW)(VALUE) == (VALUE)), (NARROW)(VALUE))
> +#else
> +#define Py_SAFE_DOWNCAST(VALUE, WIDE, NARROW) (NARROW)(VALUE)
> +#endif
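
A minimal usage sketch (hypothetical helper, not part of this patch; assumes
assert() is available in Py_DEBUG builds):

    /* Narrow a Py_ssize_t length to int; in Py_DEBUG builds this
       assert-fails if the value does not survive the round trip. */
    static int
    length_as_int(Py_ssize_t len)
    {
        return Py_SAFE_DOWNCAST(len, Py_ssize_t, int);
    }
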
> +
> +/* Py_SET_ERRNO_ON_MATH_ERROR(x)
> + * If a libm function did not set errno, but it looks like the result
> + * overflowed or not-a-number, set errno to ERANGE or EDOM. Set errno
> + * to 0 before calling a libm function, and invoke this macro after,
> + * passing the function result.
> + * Caution:
> + * This isn't reliable. See Py_OVERFLOWED comments.
> + * X is evaluated more than once.
> + */
> +#if defined(__FreeBSD__) || defined(__OpenBSD__) || (defined(__hpux) && defined(__ia64))
> +#define _Py_SET_EDOM_FOR_NAN(X) if (isnan(X)) errno = EDOM;
> +#else
> +#define _Py_SET_EDOM_FOR_NAN(X) ;
> +#endif
> +#define Py_SET_ERRNO_ON_MATH_ERROR(X) \
> + do { \
> + if (errno == 0) { \
> + if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
> + errno = ERANGE; \
> + else _Py_SET_EDOM_FOR_NAN(X) \
> + } \
> + } while(0)
> +
> +/* Py_SET_ERANGE_ON_OVERFLOW(x)
> + * An alias of Py_SET_ERRNO_ON_MATH_ERROR for backward-compatibility.
> + */
> +#define Py_SET_ERANGE_IF_OVERFLOW(X) Py_SET_ERRNO_ON_MATH_ERROR(X)
> +
> +/* Py_ADJUST_ERANGE1(x)
> + * Py_ADJUST_ERANGE2(x, y)
> + * Set errno to 0 before calling a libm function, and invoke one of these
> + * macros after, passing the function result(s) (Py_ADJUST_ERANGE2 is useful
> + * for functions returning complex results). This makes two kinds of
> + * adjustments to errno: (A) If it looks like the platform libm set
> + * errno=ERANGE due to underflow, clear errno. (B) If it looks like the
> + * platform libm overflowed but didn't set errno, force errno to ERANGE. In
> + * effect, we're trying to force a useful implementation of C89 errno
> + * behavior.
> + * Caution:
> + * This isn't reliable. See Py_OVERFLOWED comments.
> + * X and Y may be evaluated more than once.
> + */
> +#define Py_ADJUST_ERANGE1(X) \
> + do { \
> + if (errno == 0) { \
> + if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL) \
> + errno = ERANGE; \
> + } \
> + else if (errno == ERANGE && (X) == 0.0) \
> + errno = 0; \
> + } while(0)
> +
> +#define Py_ADJUST_ERANGE2(X, Y) \
> + do { \
> + if ((X) == Py_HUGE_VAL || (X) == -Py_HUGE_VAL || \
> + (Y) == Py_HUGE_VAL || (Y) == -Py_HUGE_VAL) { \
> + if (errno == 0) \
> + errno = ERANGE; \
> + } \
> + else if (errno == ERANGE) \
> + errno = 0; \
> + } while(0)
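
The intended calling pattern for these errno helpers, sketched with a
hypothetical wrapper around a libm call (not part of this patch; assumes
<errno.h> and <math.h> are in scope, as they are for most of the core):

    /* Clear errno, call the libm function, then let the macro force
       errno to ERANGE if the result overflowed and clear a spurious
       ERANGE reported for underflow. */
    static double
    checked_exp(double x)
    {
        double r;
        errno = 0;
        r = exp(x);
        Py_ADJUST_ERANGE1(r);
        return r;
    }
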
> +
> +/* The functions _Py_dg_strtod and _Py_dg_dtoa in Python/dtoa.c (which are
> + * required to support the short float repr introduced in Python 3.1) require
> + * that the floating-point unit that's being used for arithmetic operations
> + * on C doubles is set to use 53-bit precision. It also requires that the
> + * FPU rounding mode is round-half-to-even, but that's less often an issue.
> + *
> + * If your FPU isn't already set to 53-bit precision/round-half-to-even, and
> + * you want to make use of _Py_dg_strtod and _Py_dg_dtoa, then you should
> + *
> + * #define HAVE_PY_SET_53BIT_PRECISION 1
> + *
> + * and also give appropriate definitions for the following three macros:
> + *
> + * _PY_SET_53BIT_PRECISION_START : store original FPU settings, and
> + * set FPU to 53-bit precision/round-half-to-even
> + * _PY_SET_53BIT_PRECISION_END : restore original FPU settings
> + * _PY_SET_53BIT_PRECISION_HEADER : any variable declarations needed to
> + * use the two macros above.
> + *
> + * The macros are designed to be used within a single C function: see
> + * Python/pystrtod.c for an example of their use.
> + */
> +
> +/* get and set x87 control word for gcc/x86 */
> +#ifdef HAVE_GCC_ASM_FOR_X87
> +#define HAVE_PY_SET_53BIT_PRECISION 1
> +/* _Py_get/set_387controlword functions are defined in Python/pymath.c */
> +#define _Py_SET_53BIT_PRECISION_HEADER \
> + unsigned short old_387controlword, new_387controlword
> +#define _Py_SET_53BIT_PRECISION_START \
> + do { \
> + old_387controlword = _Py_get_387controlword(); \
> + new_387controlword = (old_387controlword & ~0x0f00) | 0x0200; \
> + if (new_387controlword != old_387controlword) \
> + _Py_set_387controlword(new_387controlword); \
> + } while (0)
> +#define _Py_SET_53BIT_PRECISION_END \
> + if (new_387controlword != old_387controlword) \
> + _Py_set_387controlword(old_387controlword)
> +#endif
> +
> +/* get and set x87 control word for VisualStudio/x86 */
> +#if defined(_MSC_VER) && !defined(_WIN64) && !defined(UEFI_C_SOURCE)/* x87 not supported in 64-bit */
> +#define HAVE_PY_SET_53BIT_PRECISION 1
> +#define _Py_SET_53BIT_PRECISION_HEADER \
> + unsigned int old_387controlword, new_387controlword, out_387controlword
> +/* We use the __control87_2 function to set only the x87 control word.
> + The SSE control word is unaffected. */
> +#define _Py_SET_53BIT_PRECISION_START \
> + do { \
> + __control87_2(0, 0, &old_387controlword, NULL); \
> + new_387controlword = \
> + (old_387controlword & ~(_MCW_PC | _MCW_RC)) | (_PC_53 | _RC_NEAR); \
> + if (new_387controlword != old_387controlword) \
> + __control87_2(new_387controlword, _MCW_PC | _MCW_RC, \
> + &out_387controlword, NULL); \
> + } while (0)
> +#define _Py_SET_53BIT_PRECISION_END \
> + do { \
> + if (new_387controlword != old_387controlword) \
> + __control87_2(old_387controlword, _MCW_PC | _MCW_RC, \
> + &out_387controlword, NULL); \
> + } while (0)
> +#endif
> +
> +#ifdef HAVE_GCC_ASM_FOR_MC68881
> +#define HAVE_PY_SET_53BIT_PRECISION 1
> +#define _Py_SET_53BIT_PRECISION_HEADER \
> + unsigned int old_fpcr, new_fpcr
> +#define _Py_SET_53BIT_PRECISION_START \
> + do { \
> + __asm__ ("fmove.l %%fpcr,%0" : "=g" (old_fpcr)); \
> + /* Set double precision / round to nearest. */ \
> + new_fpcr = (old_fpcr & ~0xf0) | 0x80; \
> + if (new_fpcr != old_fpcr) \
> + __asm__ volatile ("fmove.l %0,%%fpcr" : : "g" (new_fpcr)); \
> + } while (0)
> +#define _Py_SET_53BIT_PRECISION_END \
> + do { \
> + if (new_fpcr != old_fpcr) \
> + __asm__ volatile ("fmove.l %0,%%fpcr" : : "g" (old_fpcr)); \
> + } while (0)
> +#endif
> +
> +/* default definitions are empty */
> +#ifndef HAVE_PY_SET_53BIT_PRECISION
> +#define _Py_SET_53BIT_PRECISION_HEADER
> +#define _Py_SET_53BIT_PRECISION_START
> +#define _Py_SET_53BIT_PRECISION_END
> +#endif
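
The shape of a caller, mirroring Python/pystrtod.c as the comment above
suggests (illustrative only, not part of this patch; _Py_dg_strtod is
available only when PY_NO_SHORT_FLOAT_REPR is not defined, and the default
definitions just above make the macros no-ops when no FPU mechanism was
detected):

    /* Save the FPU state, run the conversion at 53-bit precision,
       then restore the original control word. */
    static double
    parse_double(const char *s, char **end)
    {
        double d;
        _Py_SET_53BIT_PRECISION_HEADER;

        _Py_SET_53BIT_PRECISION_START;
        d = _Py_dg_strtod(s, end);
        _Py_SET_53BIT_PRECISION_END;
        return d;
    }
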
> +
> +/* If we can't guarantee 53-bit precision, don't use the code
> + in Python/dtoa.c, but fall back to standard code. This
> + means that repr of a float will be long (17 sig digits).
> +
> + Realistically, there are two things that could go wrong:
> +
> + (1) doubles aren't IEEE 754 doubles, or
> + (2) we're on x86 with the rounding precision set to 64-bits
> + (extended precision), and we don't know how to change
> + the rounding precision.
> + */
> +
> +#if !defined(DOUBLE_IS_LITTLE_ENDIAN_IEEE754) && \
> + !defined(DOUBLE_IS_BIG_ENDIAN_IEEE754) && \
> + !defined(DOUBLE_IS_ARM_MIXED_ENDIAN_IEEE754)
> +#define PY_NO_SHORT_FLOAT_REPR
> +#endif
> +
> +/* double rounding is symptomatic of use of extended precision on x86. If
> + we're seeing double rounding, and we don't have any mechanism available for
> + changing the FPU rounding precision, then don't use Python/dtoa.c. */
> +#if defined(X87_DOUBLE_ROUNDING) && !defined(HAVE_PY_SET_53BIT_PRECISION)
> +#define PY_NO_SHORT_FLOAT_REPR
> +#endif
> +
> +
> +/* Py_DEPRECATED(version)
> + * Declare a variable, type, or function deprecated.
> + * Usage:
> + * extern int old_var Py_DEPRECATED(2.3);
> + * typedef int T1 Py_DEPRECATED(2.4);
> + * extern int x() Py_DEPRECATED(2.5);
> + */
> +#if defined(__GNUC__) && ((__GNUC__ >= 4) || \
> + (__GNUC__ == 3) && (__GNUC_MINOR__ >= 1))
> +#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
> +#else
> +#define Py_DEPRECATED(VERSION_UNUSED)
> +#endif
> +
> +/**************************************************************************
> +Prototypes that are missing from the standard include files on some systems
> +(and possibly only some versions of such systems.)
> +
> +Please be conservative with adding new ones, document them and enclose them
> +in platform-specific #ifdefs.
> +**************************************************************************/
> +
> +#ifdef SOLARIS
> +/* Unchecked */
> +extern int gethostname(char *, int);
> +#endif
> +
> +#ifdef HAVE__GETPTY
> +#include <sys/types.h> /* we need to import mode_t */
> +extern char * _getpty(int *, int, mode_t, int);
> +#endif
> +
> +/* On QNX 6, struct termio must be declared by including sys/termio.h
> + if TCGETA, TCSETA, TCSETAW, or TCSETAF are used. sys/termio.h must
> + be included before termios.h or it will generate an error. */
> +#if defined(HAVE_SYS_TERMIO_H) && !defined(__hpux)
> +#include <sys/termio.h>
> +#endif
> +
> +#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY)
> +#if !defined(HAVE_PTY_H) && !defined(HAVE_LIBUTIL_H)
> +/* BSDI does not supply a prototype for the 'openpty' and 'forkpty'
> + functions, even though they are included in libutil. */
> +#include <termios.h>
> +extern int openpty(int *, int *, char *, struct termios *, struct winsize *);
> +extern pid_t forkpty(int *, char *, struct termios *, struct winsize *);
> +#endif /* !defined(HAVE_PTY_H) && !defined(HAVE_LIBUTIL_H) */
> +#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) */
> +
> +
> +/* On 4.4BSD-descendants, the ctype functions serve the whole range of
> + * the wchar_t character set rather than single-byte code points only.
> + * This characteristic can break some string object operations,
> + * including str.upper() and str.split(), on UTF-8 locales. This
> + * workaround was provided by Tim Robbins of the FreeBSD project.
> + */
> +
> +#ifdef __FreeBSD__
> +#include <osreldate.h>
> +#if (__FreeBSD_version >= 500040 && __FreeBSD_version < 602113) || \
> + (__FreeBSD_version >= 700000 && __FreeBSD_version < 700054) || \
> + (__FreeBSD_version >= 800000 && __FreeBSD_version < 800001)
> +# define _PY_PORT_CTYPE_UTF8_ISSUE
> +#endif
> +#endif
> +
> +
> +#if defined(__APPLE__)
> +# define _PY_PORT_CTYPE_UTF8_ISSUE
> +#endif
> +
> +#ifdef _PY_PORT_CTYPE_UTF8_ISSUE
> +#ifndef __cplusplus
> + /* The workaround below is unsafe in C++ because
> + * the <locale> defines these symbols as real functions,
> + * with a slightly different signature.
> + * See issue #10910
> + */
> +#include <ctype.h>
> +#include <wctype.h>
> +#undef isalnum
> +#define isalnum(c) iswalnum(btowc(c))
> +#undef isalpha
> +#define isalpha(c) iswalpha(btowc(c))
> +#undef islower
> +#define islower(c) iswlower(btowc(c))
> +#undef isspace
> +#define isspace(c) iswspace(btowc(c))
> +#undef isupper
> +#define isupper(c) iswupper(btowc(c))
> +#undef tolower
> +#define tolower(c) towlower(btowc(c))
> +#undef toupper
> +#define toupper(c) towupper(btowc(c))
> +#endif
> +#endif
> +
> +
> +/* Declarations for symbol visibility.
> +
> + PyAPI_FUNC(type): Declares a public Python API function and return type
> + PyAPI_DATA(type): Declares public Python data and its type
> + PyMODINIT_FUNC: A Python module init function. If these functions are
> + inside the Python core, they are private to the core.
> + If in an extension module, it may be declared with
> + external linkage depending on the platform.
> +
> + As a number of platforms support/require "__declspec(dllimport/dllexport)",
> + we support a HAVE_DECLSPEC_DLL macro to save duplication.
> +*/
> +
> +/*
> + All Windows ports, except Cygwin, are handled in PC/pyconfig.h.
> +
> + Cygwin is the only other autoconf platform requiring special
> + linkage handling and it uses __declspec().
> +*/
> +#if defined(__CYGWIN__)
> +# define HAVE_DECLSPEC_DLL
> +#endif
> +
> +/* only get special linkage if built as shared or platform is Cygwin */
> +#if defined(Py_ENABLE_SHARED) || defined(__CYGWIN__)
> +# if defined(HAVE_DECLSPEC_DLL)
> +# ifdef Py_BUILD_CORE
> +# define PyAPI_FUNC(RTYPE) __declspec(dllexport) RTYPE
> +# define PyAPI_DATA(RTYPE) extern __declspec(dllexport) RTYPE
> + /* module init functions inside the core need no external linkage */
> + /* except for Cygwin to handle embedding */
> +# if defined(__CYGWIN__)
> +# define PyMODINIT_FUNC __declspec(dllexport) PyObject*
> +# else /* __CYGWIN__ */
> +# define PyMODINIT_FUNC PyObject*
> +# endif /* __CYGWIN__ */
> +# else /* Py_BUILD_CORE */
> + /* Building an extension module, or an embedded situation */
> + /* public Python functions and data are imported */
> + /* Under Cygwin, auto-import functions to prevent compilation */
> + /* failures similar to those described at the bottom of 4.1: */
> + /* http://docs.python.org/extending/windows.html#a-cookbook-approach */
> +# if !defined(__CYGWIN__)
> +# define PyAPI_FUNC(RTYPE) __declspec(dllimport) RTYPE
> +# endif /* !__CYGWIN__ */
> +# define PyAPI_DATA(RTYPE) extern __declspec(dllimport) RTYPE
> + /* module init functions outside the core must be exported */
> +# if defined(__cplusplus)
> +# define PyMODINIT_FUNC extern "C" __declspec(dllexport) PyObject*
> +# else /* __cplusplus */
> +# define PyMODINIT_FUNC __declspec(dllexport) PyObject*
> +# endif /* __cplusplus */
> +# endif /* Py_BUILD_CORE */
> +# endif /* HAVE_DECLSPEC_DLL */
> +#endif /* Py_ENABLE_SHARED */
> +
> +/* If no external linkage macros defined by now, create defaults */
> +#ifndef PyAPI_FUNC
> +# define PyAPI_FUNC(RTYPE) RTYPE
> +#endif
> +#ifndef PyAPI_DATA
> +# define PyAPI_DATA(RTYPE) extern RTYPE
> +#endif
> +#ifndef PyMODINIT_FUNC
> +# if defined(__cplusplus)
> +# define PyMODINIT_FUNC extern "C" PyObject*
> +# else /* __cplusplus */
> +# define PyMODINIT_FUNC PyObject*
> +# endif /* __cplusplus */
> +#endif
> +
> +/* limits.h constants that may be missing */
> +
> +#ifndef INT_MAX
> +#define INT_MAX 2147483647
> +#endif
> +
> +#ifndef LONG_MAX
> +#if SIZEOF_LONG == 4
> +#define LONG_MAX 0X7FFFFFFFL
> +#elif SIZEOF_LONG == 8
> +#define LONG_MAX 0X7FFFFFFFFFFFFFFFL
> +#else
> +#error "could not set LONG_MAX in pyport.h"
> +#endif
> +#endif
> +
> +#ifndef LONG_MIN
> +#define LONG_MIN (-LONG_MAX-1)
> +#endif
> +
> +#ifndef LONG_BIT
> +#define LONG_BIT (8 * SIZEOF_LONG)
> +#endif
> +
> +#if LONG_BIT != 8 * SIZEOF_LONG
> +/* 04-Oct-2000 LONG_BIT is apparently (mis)defined as 64 on some recent
> + * 32-bit platforms using gcc. We try to catch that here at compile-time
> + * rather than waiting for integer multiplication to trigger bogus
> + * overflows.
> + */
> +#error "LONG_BIT definition appears wrong for platform (bad gcc/glibc config?)."
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +/*
> + * Hide GCC attributes from compilers that don't support them.
> + */
> +#if (!defined(__GNUC__) || __GNUC__ < 2 || \
> + (__GNUC__ == 2 && __GNUC_MINOR__ < 7) )
> +#define Py_GCC_ATTRIBUTE(x)
> +#else
> +#define Py_GCC_ATTRIBUTE(x) __attribute__(x)
> +#endif
> +
> +/*
> + * Specify alignment on compilers that support it.
> + */
> +#if defined(__GNUC__) && __GNUC__ >= 3
> +#define Py_ALIGNED(x) __attribute__((aligned(x)))
> +#else
> +#define Py_ALIGNED(x)
> +#endif
> +
> +/* Eliminate end-of-loop code not reached warnings from SunPro C
> + * when using do{...}while(0) macros
> + */
> +#ifdef __SUNPRO_C
> +#pragma error_messages (off,E_END_OF_LOOP_CODE_NOT_REACHED)
> +#endif
> +
> +#ifndef Py_LL
> +#define Py_LL(x) x##LL
> +#endif
> +
> +#ifndef Py_ULL
> +#define Py_ULL(x) Py_LL(x##U)
> +#endif
> +
> +#define Py_VA_COPY va_copy
> +
> +/*
> + * Convenient macros to deal with endianness of the platform. WORDS_BIGENDIAN is
> + * detected by configure and defined in pyconfig.h. The code in pyconfig.h
> + * also takes care of Apple's universal builds.
> + */
> +
> +#ifdef WORDS_BIGENDIAN
> +#define PY_BIG_ENDIAN 1
> +#define PY_LITTLE_ENDIAN 0
> +#else
> +#define PY_BIG_ENDIAN 0
> +#define PY_LITTLE_ENDIAN 1
> +#endif
> +
> +#ifdef Py_BUILD_CORE
> +/*
> + * Macros to protect CRT calls against instant termination when passed an
> + * invalid parameter (issue23524).
> + */
> +#if defined _MSC_VER && _MSC_VER >= 1900 && !defined(UEFI_C_SOURCE)
> +
> +extern _invalid_parameter_handler _Py_silent_invalid_parameter_handler;
> +#define _Py_BEGIN_SUPPRESS_IPH { _invalid_parameter_handler _Py_old_handler = \
> + _set_thread_local_invalid_parameter_handler(_Py_silent_invalid_parameter_handler);
> +#define _Py_END_SUPPRESS_IPH _set_thread_local_invalid_parameter_handler(_Py_old_handler); }
> +
> +#else
> +
> +#define _Py_BEGIN_SUPPRESS_IPH
> +#define _Py_END_SUPPRESS_IPH
> +
> +#endif /* _MSC_VER >= 1900 */
> +#endif /* Py_BUILD_CORE */
> +
> +#ifdef UEFI_C_SOURCE
> +#define _Py_BEGIN_SUPPRESS_IPH
> +#define _Py_END_SUPPRESS_IPH
> +#endif
> +
> +#ifdef __ANDROID__
> +#include <android/api-level.h>
> +#endif
> +
> +#endif /* Py_PYPORT_H */
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
> new file mode 100644
> index 00000000..07fb8bda
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
> @@ -0,0 +1,549 @@
> +# Create and manipulate C data types in Python
> +#
> +# Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
> +# This program and the accompanying materials are licensed and made available under
> +# the terms and conditions of the BSD License that accompanies this distribution.
> +# The full text of the license may be found at
> +# http://opensource.org/licenses/bsd-license.
> +#
> +# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +
> +import os as _os, sys as _sys
> +
> +__version__ = "1.1.0"
> +
> +from _ctypes import Union, Structure, Array
> +from _ctypes import _Pointer
> +from _ctypes import CFuncPtr as _CFuncPtr
> +from _ctypes import __version__ as _ctypes_version
> +from _ctypes import RTLD_LOCAL, RTLD_GLOBAL
> +from _ctypes import ArgumentError
> +
> +from struct import calcsize as _calcsize
> +
> +if __version__ != _ctypes_version:
> + raise Exception("Version number mismatch", __version__, _ctypes_version)
> +
> +if _os.name == "nt":
> + from _ctypes import FormatError
> +
> +DEFAULT_MODE = RTLD_LOCAL
> +if _os.name == "posix" and _sys.platform == "darwin":
> + # On OS X 10.3, we use RTLD_GLOBAL as default mode
> + # because RTLD_LOCAL does not work at least on some
> + # libraries. OS X 10.3 is Darwin 7, so we check for
> + # that.
> +
> + if int(_os.uname().release.split('.')[0]) < 8:
> + DEFAULT_MODE = RTLD_GLOBAL
> +
> +from _ctypes import FUNCFLAG_CDECL as _FUNCFLAG_CDECL, \
> + FUNCFLAG_PYTHONAPI as _FUNCFLAG_PYTHONAPI, \
> + FUNCFLAG_USE_ERRNO as _FUNCFLAG_USE_ERRNO, \
> + FUNCFLAG_USE_LASTERROR as _FUNCFLAG_USE_LASTERROR
> +
> +# WINOLEAPI -> HRESULT
> +# WINOLEAPI_(type)
> +#
> +# STDMETHODCALLTYPE
> +#
> +# STDMETHOD(name)
> +# STDMETHOD_(type, name)
> +#
> +# STDAPICALLTYPE
> +
> +def create_string_buffer(init, size=None):
> + """create_string_buffer(aBytes) -> character array
> + create_string_buffer(anInteger) -> character array
> + create_string_buffer(aBytes, anInteger) -> character array
> + """
> + if isinstance(init, bytes):
> + if size is None:
> + size = len(init)+1
> + buftype = c_char * size
> + buf = buftype()
> + buf.value = init
> + return buf
> + elif isinstance(init, int):
> + buftype = c_char * init
> + buf = buftype()
> + return buf
> + raise TypeError(init)
> +
> +def c_buffer(init, size=None):
> +## "deprecated, use create_string_buffer instead"
> +## import warnings
> +## warnings.warn("c_buffer is deprecated, use create_string_buffer instead",
> +## DeprecationWarning, stacklevel=2)
> + return create_string_buffer(init, size)
> +
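A minimal usage sketch for create_string_buffer(), showing plain CPython
semantics (nothing UEFI-specific is assumed here):

    from ctypes import create_string_buffer, sizeof

    buf = create_string_buffer(b"hello")   # copies the bytes plus a NUL, 6 bytes
    print(sizeof(buf), repr(buf.value))    # 6 b'hello'

    buf = create_string_buffer(16)         # zero-filled, writable, 16 bytes
    buf.value = b"uefi"                    # .value reads back up to the first NUL
    print(sizeof(buf), repr(buf.value))    # 16 b'uefi'
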
> +_c_functype_cache = {}
> +def CFUNCTYPE(restype, *argtypes, **kw):
> + """CFUNCTYPE(restype, *argtypes,
> + use_errno=False, use_last_error=False) -> function prototype.
> +
> + restype: the result type
> + argtypes: a sequence specifying the argument types
> +
> + The function prototype can be called in different ways to create a
> + callable object:
> +
> + prototype(integer address) -> foreign function
> + prototype(callable) -> create and return a C callable function from callable
> + prototype(integer index, method name[, paramflags]) -> foreign function calling a COM method
> + prototype((ordinal number, dll object)[, paramflags]) -> foreign function exported by ordinal
> + prototype((function name, dll object)[, paramflags]) -> foreign function exported by name
> + """
> + flags = _FUNCFLAG_CDECL
> + if kw.pop("use_errno", False):
> + flags |= _FUNCFLAG_USE_ERRNO
> + if kw.pop("use_last_error", False):
> + flags |= _FUNCFLAG_USE_LASTERROR
> + if kw:
> + raise ValueError("unexpected keyword argument(s) %s" % kw.keys())
> + try:
> + return _c_functype_cache[(restype, argtypes, flags)]
> + except KeyError:
> + class CFunctionType(_CFuncPtr):
> + _argtypes_ = argtypes
> + _restype_ = restype
> + _flags_ = flags
> + _c_functype_cache[(restype, argtypes, flags)] = CFunctionType
> + return CFunctionType
> +
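To illustrate the prototype factory above, a small sketch of CFUNCTYPE()
used for a callback. Whether callbacks actually work on the UEFI build
depends on the libffi_msvc closure support elsewhere in this patch, so
treat this as plain CPython behaviour:

    from ctypes import CFUNCTYPE, c_int

    # int (*cmp)(int, int) with the cdecl calling convention
    CMPFUNC = CFUNCTYPE(c_int, c_int, c_int)

    @CMPFUNC
    def py_cmp(a, b):
        return a - b

    print(py_cmp(7, 3))   # 4; identical prototypes are cached and reused
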
> +if _os.name == "nt":
> + from _ctypes import LoadLibrary as _dlopen
> + from _ctypes import FUNCFLAG_STDCALL as _FUNCFLAG_STDCALL
> +
> + _win_functype_cache = {}
> + def WINFUNCTYPE(restype, *argtypes, **kw):
> + # docstring set later (very similar to CFUNCTYPE.__doc__)
> + flags = _FUNCFLAG_STDCALL
> + if kw.pop("use_errno", False):
> + flags |= _FUNCFLAG_USE_ERRNO
> + if kw.pop("use_last_error", False):
> + flags |= _FUNCFLAG_USE_LASTERROR
> + if kw:
> + raise ValueError("unexpected keyword argument(s) %s" % kw.keys())
> + try:
> + return _win_functype_cache[(restype, argtypes, flags)]
> + except KeyError:
> + class WinFunctionType(_CFuncPtr):
> + _argtypes_ = argtypes
> + _restype_ = restype
> + _flags_ = flags
> + _win_functype_cache[(restype, argtypes, flags)] = WinFunctionType
> + return WinFunctionType
> + if WINFUNCTYPE.__doc__:
> + WINFUNCTYPE.__doc__ = CFUNCTYPE.__doc__.replace("CFUNCTYPE", "WINFUNCTYPE")
> +
> +elif _os.name == "posix":
> + from _ctypes import dlopen as _dlopen
> +
> +from _ctypes import sizeof, byref, addressof, alignment, resize
> +from _ctypes import get_errno, set_errno
> +from _ctypes import _SimpleCData
> +
> +def _check_size(typ, typecode=None):
> + # Check sizeof(ctypes_type) against struct.calcsize. This
> + # should protect somewhat against a misconfigured libffi.
> + from struct import calcsize
> + if typecode is None:
> + # Most _type_ codes are the same as used in struct
> + typecode = typ._type_
> + actual, required = sizeof(typ), calcsize(typecode)
> + if actual != required:
> + raise SystemError("sizeof(%s) wrong: %d instead of %d" % \
> + (typ, actual, required))
> +
> +class py_object(_SimpleCData):
> + _type_ = "O"
> + def __repr__(self):
> + try:
> + return super().__repr__()
> + except ValueError:
> + return "%s(<NULL>)" % type(self).__name__
> +_check_size(py_object, "P")
> +
> +class c_short(_SimpleCData):
> + _type_ = "h"
> +_check_size(c_short)
> +
> +class c_ushort(_SimpleCData):
> + _type_ = "H"
> +_check_size(c_ushort)
> +
> +class c_long(_SimpleCData):
> + _type_ = "l"
> +_check_size(c_long)
> +
> +class c_ulong(_SimpleCData):
> + _type_ = "L"
> +_check_size(c_ulong)
> +
> +if _calcsize("i") == _calcsize("l"):
> + # if int and long have the same size, make c_int an alias for c_long
> + c_int = c_long
> + c_uint = c_ulong
> +else:
> + class c_int(_SimpleCData):
> + _type_ = "i"
> + _check_size(c_int)
> +
> + class c_uint(_SimpleCData):
> + _type_ = "I"
> + _check_size(c_uint)
> +
> +class c_float(_SimpleCData):
> + _type_ = "f"
> +_check_size(c_float)
> +
> +class c_double(_SimpleCData):
> + _type_ = "d"
> +_check_size(c_double)
> +
> +class c_longdouble(_SimpleCData):
> + _type_ = "g"
> +if sizeof(c_longdouble) == sizeof(c_double):
> + c_longdouble = c_double
> +
> +if _calcsize("l") == _calcsize("q"):
> + # if long and long long have the same size, make c_longlong an alias for c_long
> + c_longlong = c_long
> + c_ulonglong = c_ulong
> +else:
> + class c_longlong(_SimpleCData):
> + _type_ = "q"
> + _check_size(c_longlong)
> +
> + class c_ulonglong(_SimpleCData):
> + _type_ = "Q"
> + ## def from_param(cls, val):
> + ## return ('d', float(val), val)
> + ## from_param = classmethod(from_param)
> + _check_size(c_ulonglong)
> +
> +class c_ubyte(_SimpleCData):
> + _type_ = "B"
> +c_ubyte.__ctype_le__ = c_ubyte.__ctype_be__ = c_ubyte
> +# backward compatibility:
> +##c_uchar = c_ubyte
> +_check_size(c_ubyte)
> +
> +class c_byte(_SimpleCData):
> + _type_ = "b"
> +c_byte.__ctype_le__ = c_byte.__ctype_be__ = c_byte
> +_check_size(c_byte)
> +
> +class c_char(_SimpleCData):
> + _type_ = "c"
> +c_char.__ctype_le__ = c_char.__ctype_be__ = c_char
> +_check_size(c_char)
> +
> +class c_char_p(_SimpleCData):
> + _type_ = "z"
> + def __repr__(self):
> + return "%s(%s)" % (self.__class__.__name__, c_void_p.from_buffer(self).value)
> +_check_size(c_char_p, "P")
> +
> +class c_void_p(_SimpleCData):
> + _type_ = "P"
> +c_voidp = c_void_p # backwards compatibility (to a bug)
> +_check_size(c_void_p)
> +
> +class c_bool(_SimpleCData):
> + _type_ = "?"
> +
> +from _ctypes import POINTER, pointer, _pointer_type_cache
> +
> +class c_wchar_p(_SimpleCData):
> + _type_ = "Z"
> + def __repr__(self):
> + return "%s(%s)" % (self.__class__.__name__, c_void_p.from_buffer(self).value)
> +
> +class c_wchar(_SimpleCData):
> + _type_ = "u"
> +
> +def _reset_cache():
> + _pointer_type_cache.clear()
> + _c_functype_cache.clear()
> + if _os.name == "nt":
> + _win_functype_cache.clear()
> + # _SimpleCData.c_wchar_p_from_param
> + POINTER(c_wchar).from_param = c_wchar_p.from_param
> + # _SimpleCData.c_char_p_from_param
> + POINTER(c_char).from_param = c_char_p.from_param
> + _pointer_type_cache[None] = c_void_p
> + # XXX for whatever reasons, creating the first instance of a callback
> + # function is needed for the unittests on Win64 to succeed. This MAY
> + # be a compiler bug, since the problem occurs only when _ctypes is
> + # compiled with the MS SDK compiler. Or an uninitialized variable?
> + CFUNCTYPE(c_int)(lambda: None)
> +
> +def create_unicode_buffer(init, size=None):
> + """create_unicode_buffer(aString) -> character array
> + create_unicode_buffer(anInteger) -> character array
> + create_unicode_buffer(aString, anInteger) -> character array
> + """
> + if isinstance(init, str):
> + if size is None:
> + size = len(init)+1
> + buftype = c_wchar * size
> + buf = buftype()
> + buf.value = init
> + return buf
> + elif isinstance(init, int):
> + buftype = c_wchar * init
> + buf = buftype()
> + return buf
> + raise TypeError(init)
> +
> +
> +# XXX Deprecated
> +def SetPointerType(pointer, cls):
> + if _pointer_type_cache.get(cls, None) is not None:
> + raise RuntimeError("This type already exists in the cache")
> + if id(pointer) not in _pointer_type_cache:
> + raise RuntimeError("What's this???")
> + pointer.set_type(cls)
> + _pointer_type_cache[cls] = pointer
> + del _pointer_type_cache[id(pointer)]
> +
> +# XXX Deprecated
> +def ARRAY(typ, len):
> + return typ * len
> +
> +################################################################
> +
> +
> +if _os.name != "edk2":
> + class CDLL(object):
> + """An instance of this class represents a loaded dll/shared
> + library, exporting functions using the standard C calling
> + convention (named 'cdecl' on Windows).
> +
> + The exported functions can be accessed as attributes, or by
> + indexing with the function name. Examples:
> +
> + <obj>.qsort -> callable object
> + <obj>['qsort'] -> callable object
> +
> + Calling the functions releases the Python GIL during the call and
> + reacquires it afterwards.
> + """
> + _func_flags_ = _FUNCFLAG_CDECL
> + _func_restype_ = c_int
> + # default values for repr
> + _name = '<uninitialized>'
> + _handle = 0
> + _FuncPtr = None
> +
> + def __init__(self, name, mode=DEFAULT_MODE, handle=None,
> + use_errno=False,
> + use_last_error=False):
> + self._name = name
> + flags = self._func_flags_
> + if use_errno:
> + flags |= _FUNCFLAG_USE_ERRNO
> + if use_last_error:
> + flags |= _FUNCFLAG_USE_LASTERROR
> +
> + class _FuncPtr(_CFuncPtr):
> + _flags_ = flags
> + _restype_ = self._func_restype_
> + self._FuncPtr = _FuncPtr
> +
> + if handle is None:
> + self._handle = _dlopen(self._name, mode)
> + else:
> + self._handle = handle
> +
> + def __repr__(self):
> + return "<%s '%s', handle %x at %#x>" % \
> + (self.__class__.__name__, self._name,
> + (self._handle & (_sys.maxsize*2 + 1)),
> + id(self) & (_sys.maxsize*2 + 1))
> +
> + def __getattr__(self, name):
> + if name.startswith('__') and name.endswith('__'):
> + raise AttributeError(name)
> + func = self.__getitem__(name)
> + setattr(self, name, func)
> + return func
> +
> + def __getitem__(self, name_or_ordinal):
> + func = self._FuncPtr((name_or_ordinal, self))
> + if not isinstance(name_or_ordinal, int):
> + func.__name__ = name_or_ordinal
> + return func
> +
> + class PyDLL(CDLL):
> + """This class represents the Python library itself. It allows to
> + access Python API functions. The GIL is not released, and
> + Python exceptions are handled correctly.
> + """
> + _func_flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
> +
> +if _os.name == "nt":
> +
> + class WinDLL(CDLL):
> + """This class represents a dll exporting functions using the
> + Windows stdcall calling convention.
> + """
> + _func_flags_ = _FUNCFLAG_STDCALL
> +
> + # XXX Hm, what about HRESULT as normal parameter?
> + # Mustn't it derive from c_long then?
> + from _ctypes import _check_HRESULT, _SimpleCData
> + class HRESULT(_SimpleCData):
> + _type_ = "l"
> + # _check_retval_ is called with the function's result when it
> + # is used as restype. It checks for the FAILED bit, and
> + # raises an OSError if it is set.
> + #
> + # The _check_retval_ method is implemented in C, so that the
> + # method definition itself is not included in the traceback
> + # when it raises an error - that is what we want (and Python
> + # doesn't have a way to raise an exception in the caller's
> + # frame).
> + _check_retval_ = _check_HRESULT
> +
> + class OleDLL(CDLL):
> + """This class represents a dll exporting functions using the
> + Windows stdcall calling convention, and returning HRESULT.
> + HRESULT error values are automatically raised as OSError
> + exceptions.
> + """
> + _func_flags_ = _FUNCFLAG_STDCALL
> + _func_restype_ = HRESULT
> +
> +if _os.name != "edk2":
> + class LibraryLoader(object):
> + def __init__(self, dlltype):
> + self._dlltype = dlltype
> +
> + def __getattr__(self, name):
> + if name[0] == '_':
> + raise AttributeError(name)
> + dll = self._dlltype(name)
> + setattr(self, name, dll)
> + return dll
> +
> + def __getitem__(self, name):
> + return getattr(self, name)
> +
> + def LoadLibrary(self, name):
> + return self._dlltype(name)
> +
> + cdll = LibraryLoader(CDLL)
> + pydll = LibraryLoader(PyDLL)
> +
> + if _os.name == "nt":
> + pythonapi = PyDLL("python dll", None, _sys.dllhandle)
> + elif _sys.platform == "cygwin":
> + pythonapi = PyDLL("libpython%d.%d.dll" % _sys.version_info[:2])
> + else:
> + pythonapi = PyDLL(None)
> +
> +
> +if _os.name == "nt":
> + windll = LibraryLoader(WinDLL)
> + oledll = LibraryLoader(OleDLL)
> +
> + if _os.name == "nt":
> + GetLastError = windll.kernel32.GetLastError
> + else:
> + GetLastError = windll.coredll.GetLastError
> + from _ctypes import get_last_error, set_last_error
> +
> + def WinError(code=None, descr=None):
> + if code is None:
> + code = GetLastError()
> + if descr is None:
> + descr = FormatError(code).strip()
> + return OSError(None, descr, None, code)
> +
> +if sizeof(c_uint) == sizeof(c_void_p):
> + c_size_t = c_uint
> + c_ssize_t = c_int
> +elif sizeof(c_ulong) == sizeof(c_void_p):
> + c_size_t = c_ulong
> + c_ssize_t = c_long
> +elif sizeof(c_ulonglong) == sizeof(c_void_p):
> + c_size_t = c_ulonglong
> + c_ssize_t = c_longlong
> +
> +# functions
> +
> +from _ctypes import _memmove_addr, _memset_addr, _string_at_addr, _cast_addr
> +
> +## void *memmove(void *, const void *, size_t);
> +memmove = CFUNCTYPE(c_void_p, c_void_p, c_void_p, c_size_t)(_memmove_addr)
> +
> +## void *memset(void *, int, size_t)
> +memset = CFUNCTYPE(c_void_p, c_void_p, c_int, c_size_t)(_memset_addr)
> +
> +def PYFUNCTYPE(restype, *argtypes):
> + class CFunctionType(_CFuncPtr):
> + _argtypes_ = argtypes
> + _restype_ = restype
> + _flags_ = _FUNCFLAG_CDECL | _FUNCFLAG_PYTHONAPI
> + return CFunctionType
> +
> +_cast = PYFUNCTYPE(py_object, c_void_p, py_object, py_object)(_cast_addr)
> +def cast(obj, typ):
> + return _cast(obj, obj, typ)
> +
> +_string_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_string_at_addr)
> +def string_at(ptr, size=-1):
> + """string_at(addr[, size]) -> string
> +
> + Return the string at addr."""
> + return _string_at(ptr, size)
> +
> +try:
> + from _ctypes import _wstring_at_addr
> +except ImportError:
> + pass
> +else:
> + _wstring_at = PYFUNCTYPE(py_object, c_void_p, c_int)(_wstring_at_addr)
> + def wstring_at(ptr, size=-1):
> + """wstring_at(addr[, size]) -> string
> +
> + Return the string at addr."""
> + return _wstring_at(ptr, size)
> +
> +
> +if _os.name == "nt": # COM stuff
> + def DllGetClassObject(rclsid, riid, ppv):
> + try:
> + ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
> + except ImportError:
> + return -2147221231 # CLASS_E_CLASSNOTAVAILABLE
> + else:
> + return ccom.DllGetClassObject(rclsid, riid, ppv)
> +
> + def DllCanUnloadNow():
> + try:
> + ccom = __import__("comtypes.server.inprocserver", globals(), locals(), ['*'])
> + except ImportError:
> + return 0 # S_OK
> + return ccom.DllCanUnloadNow()
> +
> +from ctypes._endian import BigEndianStructure, LittleEndianStructure
> +
> +# Fill in specifically-sized types
> +c_int8 = c_byte
> +c_uint8 = c_ubyte
> +for kind in [c_short, c_int, c_long, c_longlong]:
> + if sizeof(kind) == 2: c_int16 = kind
> + elif sizeof(kind) == 4: c_int32 = kind
> + elif sizeof(kind) == 8: c_int64 = kind
> +for kind in [c_ushort, c_uint, c_ulong, c_ulonglong]:
> + if sizeof(kind) == 2: c_uint16 = kind
> + elif sizeof(kind) == 4: c_uint32 = kind
> + elif sizeof(kind) == 8: c_uint64 = kind
> +del(kind)
> +
> +_reset_cache()
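Since CDLL/LibraryLoader are compiled out for os.name == "edk2", the part
of this module most likely to be exercised in the UEFI shell is the set of
fixed-width aliases filled in above. A quick sanity check; the 4/8/8 output
is an expectation for an X64 build, not a guarantee:

    from ctypes import c_int32, c_int64, c_size_t, c_void_p, sizeof

    print(sizeof(c_int32), sizeof(c_int64), sizeof(c_size_t))   # e.g. 4 8 8
    assert sizeof(c_size_t) == sizeof(c_void_p)                 # chosen to match
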
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
> new file mode 100644
> index 00000000..46b8c921
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
> @@ -0,0 +1,157 @@
> +"""
> +Path operations common to more than one OS
> +Do not use directly. The OS specific modules import the appropriate
> +functions from this module themselves.
> +"""
> +import os
> +import stat
> +
> +# If Python is built without Unicode support, the unicode type
> +# will not exist. Fake one.
> +class _unicode(object):
> + pass
> +
> +
> +__all__ = ['commonprefix', 'exists', 'getatime', 'getctime', 'getmtime',
> + 'getsize', 'isdir', 'isfile', 'samefile', 'sameopenfile',
> + 'samestat', '_unicode']
> +
> +
> +# Does a path exist?
> +# This is false for dangling symbolic links on systems that support them.
> +def exists(path):
> + """Test whether a path exists. Returns False for broken symbolic links"""
> + try:
> + os.stat(path)
> + except OSError:
> + return False
> + return True
> +
> +
> +# This follows symbolic links, so both islink() and isdir() can be true
> +# for the same path on systems that support symlinks
> +def isfile(path):
> + """Test whether a path is a regular file"""
> + try:
> + st = os.stat(path)
> + except OSError:
> + return False
> + return stat.S_ISREG(st.st_mode)
> +
> +
> +# Is a path a directory?
> +# This follows symbolic links, so both islink() and isdir()
> +# can be true for the same path on systems that support symlinks
> +def isdir(s):
> + """Return true if the pathname refers to an existing directory."""
> + try:
> + st = os.stat(s)
> + except OSError:
> + return False
> + return stat.S_ISDIR(st.st_mode)
> +
> +
> +def getsize(filename):
> + """Return the size of a file, reported by os.stat()."""
> + return os.stat(filename).st_size
> +
> +
> +def getmtime(filename):
> + """Return the last modification time of a file, reported by os.stat()."""
> + return os.stat(filename).st_mtime
> +
> +
> +def getatime(filename):
> + """Return the last access time of a file, reported by os.stat()."""
> + return os.stat(filename).st_atime
> +
> +
> +def getctime(filename):
> + """Return the metadata change time of a file, reported by os.stat()."""
> + return os.stat(filename).st_ctime
> +
> +
> +# Return the longest prefix of all list elements.
> +def commonprefix(m):
> + "Given a list of pathnames, returns the longest common leading component"
> + if not m: return ''
> + # Some people pass in a list of pathname parts to operate in an OS-agnostic
> + # fashion; don't try to translate in that case as that's an abuse of the
> + # API and they are already doing what they need to be OS-agnostic and so
> + # they most likely won't be using an os.PathLike object in the sublists.
> + if not isinstance(m[0], (list, tuple)):
> + m = tuple(map(os.fspath, m))
> + s1 = min(m)
> + s2 = max(m)
> + for i, c in enumerate(s1):
> + if c != s2[i]:
> + return s1[:i]
> + return s1
> +
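commonprefix() works character by character via the min()/max() trick above,
so the result is not necessarily a whole path component. A short sketch of
that behaviour through the usual os.path re-export:

    import os.path

    print(os.path.commonprefix(["/usr/lib", "/usr/local/lib"]))  # '/usr/l'
    print(os.path.commonprefix([]))                              # ''
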
> +# Are two stat buffers (obtained from stat, fstat or lstat)
> +# describing the same file?
> +def samestat(s1, s2):
> + """Test whether two stat buffers reference the same file"""
> + return (s1.st_ino == s2.st_ino and
> + s1.st_dev == s2.st_dev)
> +
> +
> +# Are two filenames really pointing to the same file?
> +def samefile(f1, f2):
> + """Test whether two pathnames reference the same actual file"""
> + s1 = os.stat(f1)
> + s2 = os.stat(f2)
> + return samestat(s1, s2)
> +
> +
> +# Are two open files really referencing the same file?
> +# (Not necessarily the same file descriptor!)
> +def sameopenfile(fp1, fp2):
> + """Test whether two open file objects reference the same file"""
> + s1 = os.fstat(fp1)
> + s2 = os.fstat(fp2)
> + return samestat(s1, s2)
> +
> +
> +# Split a path in root and extension.
> +# The extension is everything starting at the last dot in the last
> +# pathname component; the root is everything before that.
> +# It is always true that root + ext == p.
> +
> +# Generic implementation of splitext, to be parametrized with
> +# the separators
> +def _splitext(p, sep, altsep, extsep):
> + """Split the extension from a pathname.
> +
> + Extension is everything from the last dot to the end, ignoring
> + leading dots. Returns "(root, ext)"; ext may be empty."""
> + # NOTE: This code must work for text and bytes strings.
> +
> + sepIndex = p.rfind(sep)
> + if altsep:
> + altsepIndex = p.rfind(altsep)
> + sepIndex = max(sepIndex, altsepIndex)
> +
> + dotIndex = p.rfind(extsep)
> + if dotIndex > sepIndex:
> + # skip all leading dots
> + filenameIndex = sepIndex + 1
> + while filenameIndex < dotIndex:
> + if p[filenameIndex:filenameIndex+1] != extsep:
> + return p[:dotIndex], p[dotIndex:]
> + filenameIndex += 1
> +
> + return p, p[:0]
> +
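The OS-specific path modules parametrize _splitext() with their separators,
and leading dots are deliberately not treated as extension separators. A
sketch of the resulting behaviour (the fs0:\ path is just a hypothetical
UEFI-style example):

    import ntpath, posixpath

    print(posixpath.splitext("archive.tar.gz"))    # ('archive.tar', '.gz')
    print(posixpath.splitext(".bashrc"))           # ('.bashrc', '')
    print(ntpath.splitext("fs0:\\efi\\boot.efi"))  # ('fs0:\\efi\\boot', '.efi')
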
> +def _check_arg_types(funcname, *args):
> + hasstr = hasbytes = False
> + for s in args:
> + if isinstance(s, str):
> + hasstr = True
> + elif isinstance(s, bytes):
> + hasbytes = True
> + else:
> + raise TypeError('%s() argument must be str or bytes, not %r' %
> + (funcname, s.__class__.__name__)) from None
> + if hasstr and hasbytes:
> + raise TypeError("Can't mix strings and bytes in path components") from None
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
> new file mode 100644
> index 00000000..d6eca248
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
> @@ -0,0 +1,110 @@
> +"""Filename globbing utility."""
> +
> +import os
> +import re
> +import fnmatch
> +
> +__all__ = ["glob", "iglob"]
> +
> +def glob(pathname):
> + """Return a list of paths matching a pathname pattern.
> +
> + The pattern may contain simple shell-style wildcards a la
> + fnmatch. However, unlike fnmatch, filenames starting with a
> + dot are special cases that are not matched by '*' and '?'
> + patterns.
> +
> + """
> + return list(iglob(pathname))
> +
> +def iglob(pathname):
> + """Return an iterator which yields the paths matching a pathname pattern.
> +
> + The pattern may contain simple shell-style wildcards a la
> + fnmatch. However, unlike fnmatch, filenames starting with a
> + dot are special cases that are not matched by '*' and '?'
> + patterns.
> +
> + """
> + dirname, basename = os.path.split(pathname)
> + if not has_magic(pathname):
> + if basename:
> + if os.path.lexists(pathname):
> + yield pathname
> + else:
> + # Patterns ending with a slash should match only directories
> + if os.path.isdir(dirname):
> + yield pathname
> + return
> + if not dirname:
> + yield from glob1(None, basename)
> + return
> + # `os.path.split()` returns the argument itself as a dirname if it is a
> + # drive or UNC path. Prevent an infinite recursion if a drive or UNC path
> + # contains magic characters (i.e. r'\\?\C:').
> + if dirname != pathname and has_magic(dirname):
> + dirs = iglob(dirname)
> + else:
> + dirs = [dirname]
> + if has_magic(basename):
> + glob_in_dir = glob1
> + else:
> + glob_in_dir = glob0
> + for dirname in dirs:
> + for name in glob_in_dir(dirname, basename):
> + yield os.path.join(dirname, name)
> +
> +# These 2 helper functions non-recursively glob inside a literal directory.
> +# They return a list of basenames. `glob1` accepts a pattern while `glob0`
> +# takes a literal basename (so it only has to check for its existence).
> +
> +def glob1(dirname, pattern):
> + if not dirname:
> + if isinstance(pattern, bytes):
> + dirname = bytes(os.curdir, 'ASCII')
> + else:
> + dirname = os.curdir
> + try:
> + names = os.listdir(dirname)
> + except OSError:
> + return []
> + if not _ishidden(pattern):
> + names = [x for x in names if not _ishidden(x)]
> + return fnmatch.filter(names, pattern)
> +
> +def glob0(dirname, basename):
> + if not basename:
> + # `os.path.split()` returns an empty basename for paths ending with a
> + # directory separator. 'q*x/' should match only directories.
> + if os.path.isdir(dirname):
> + return [basename]
> + else:
> + if os.path.lexists(os.path.join(dirname, basename)):
> + return [basename]
> + return []
> +
> +
> +magic_check = re.compile('([*?[])')
> +magic_check_bytes = re.compile(b'([*?[])')
> +
> +def has_magic(s):
> + if isinstance(s, bytes):
> + match = magic_check_bytes.search(s)
> + else:
> + match = magic_check.search(s)
> + return match is not None
> +
> +def _ishidden(path):
> + return path[0] in ('.', b'.'[0])
> +
> +def escape(pathname):
> + """Escape all special characters.
> + """
> + # Escaping is done by wrapping any of "*?[" between square brackets.
> + # Metacharacters do not work in the drive part and shouldn't be escaped.
> + drive, pathname = os.path.splitdrive(pathname)
> + if isinstance(pathname, bytes):
> + pathname = magic_check_bytes.sub(br'[\1]', pathname)
> + else:
> + pathname = magic_check.sub(r'[\1]', pathname)
> + return drive + pathname
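This is the trimmed, non-recursive glob (no '**' support, and '*'/'?' never
match a leading dot). A usage sketch; the patterns are hypothetical file
names on the UEFI file system:

    import glob

    print(glob.glob("*.efi"))              # non-recursive match in the cwd
    print(glob.glob("EFI/*/startup*.nsh")) # wildcards per path component only

    # escape() wraps metacharacters so untrusted names can be matched literally
    print(glob.escape("what?.txt"))        # 'what[?].txt'
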
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
> new file mode 100644
> index 00000000..fe7cb47c
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
> @@ -0,0 +1,1481 @@
> +r"""HTTP/1.1 client library
> +
> +<intro stuff goes here>
> +<other stuff, too>
> +
> +HTTPConnection goes through a number of "states", which define when a client
> +may legally make another request or fetch the response for a particular
> +request. This diagram details these state transitions:
> +
> + (null)
> + |
> + | HTTPConnection()
> + v
> + Idle
> + |
> + | putrequest()
> + v
> + Request-started
> + |
> + | ( putheader() )* endheaders()
> + v
> + Request-sent
> + |\_____________________________
> + | | getresponse() raises
> + | response = getresponse() | ConnectionError
> + v v
> + Unread-response Idle
> + [Response-headers-read]
> + |\____________________
> + | |
> + | response.read() | putrequest()
> + v v
> + Idle Req-started-unread-response
> + ______/|
> + / |
> + response.read() | | ( putheader() )* endheaders()
> + v v
> + Request-started Req-sent-unread-response
> + |
> + | response.read()
> + v
> + Request-sent
> +
> +This diagram presents the following rules:
> + -- a second request may not be started until {response-headers-read}
> + -- a response [object] cannot be retrieved until {request-sent}
> + -- there is no differentiation between an unread response body and a
> + partially read response body
> +
> +Note: this enforcement is applied by the HTTPConnection class. The
> + HTTPResponse class does not enforce this state machine, which
> + implies sophisticated clients may accelerate the request/response
> + pipeline. Caution should be taken, though: accelerating the states
> + beyond the above pattern may imply knowledge of the server's
> + connection-close behavior for certain requests. For example, it
> + is impossible to tell whether the server will close the connection
> + UNTIL the response headers have been read; this means that further
> + requests cannot be placed into the pipeline until it is known that
> + the server will NOT be closing the connection.
> +
> +Logical State __state __response
> +------------- ------- ----------
> +Idle _CS_IDLE None
> +Request-started _CS_REQ_STARTED None
> +Request-sent _CS_REQ_SENT None
> +Unread-response _CS_IDLE <response_class>
> +Req-started-unread-response _CS_REQ_STARTED <response_class>
> +Req-sent-unread-response _CS_REQ_SENT <response_class>
> +"""
> +
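For orientation, a minimal request/response cycle that follows the state
machine described above. The host name is a placeholder, and this assumes
the socket module from this series is built and the network stack is up
under the shell:

    import http.client

    conn = http.client.HTTPConnection("example.org", 80, timeout=10)
    conn.request("GET", "/")        # Idle -> Request-started -> Request-sent
    resp = conn.getresponse()       # Request-sent -> Unread-response
    print(resp.status, resp.reason)
    body = resp.read()              # drain the body before issuing another request
    conn.close()
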
> +import email.parser
> +import email.message
> +import http
> +import io
> +import os
> +import re
> +import socket
> +import collections
> +from urllib.parse import urlsplit
> +
> +# HTTPMessage, parse_headers(), and the HTTP status code constants are
> +# intentionally omitted for simplicity
> +__all__ = ["HTTPResponse", "HTTPConnection",
> + "HTTPException", "NotConnected", "UnknownProtocol",
> + "UnknownTransferEncoding", "UnimplementedFileMode",
> + "IncompleteRead", "InvalidURL", "ImproperConnectionState",
> + "CannotSendRequest", "CannotSendHeader", "ResponseNotReady",
> + "BadStatusLine", "LineTooLong", "RemoteDisconnected", "error",
> + "responses"]
> +
> +HTTP_PORT = 80
> +HTTPS_PORT = 443
> +
> +_UNKNOWN = 'UNKNOWN'
> +
> +# connection states
> +_CS_IDLE = 'Idle'
> +_CS_REQ_STARTED = 'Request-started'
> +_CS_REQ_SENT = 'Request-sent'
> +
> +
> +# hack to maintain backwards compatibility
> +globals().update(http.HTTPStatus.__members__)
> +
> +# another hack to maintain backwards compatibility
> +# Mapping status codes to official W3C names
> +responses = {v: v.phrase for v in http.HTTPStatus.__members__.values()}
> +
> +# maximal amount of data to read at one time in _safe_read
> +MAXAMOUNT = 1048576
> +
> +# maximal line length when calling readline().
> +_MAXLINE = 65536
> +_MAXHEADERS = 100
> +
> +# Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2)
> +#
> +# VCHAR = %x21-7E
> +# obs-text = %x80-FF
> +# header-field = field-name ":" OWS field-value OWS
> +# field-name = token
> +# field-value = *( field-content / obs-fold )
> +# field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ]
> +# field-vchar = VCHAR / obs-text
> +#
> +# obs-fold = CRLF 1*( SP / HTAB )
> +# ; obsolete line folding
> +# ; see Section 3.2.4
> +
> +# token = 1*tchar
> +#
> +# tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*"
> +# / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~"
> +# / DIGIT / ALPHA
> +# ; any VCHAR, except delimiters
> +#
> +# VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1
> +
> +# the patterns for both name and value are more lenient than RFC
> +# definitions to allow for backwards compatibility
> +_is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch
> +_is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search
> +
> +# We always set the Content-Length header for these methods because some
> +# servers will otherwise respond with a 411
> +_METHODS_EXPECTING_BODY = {'PATCH', 'POST', 'PUT'}
> +
> +
> +def _encode(data, name='data'):
> + """Call data.encode("latin-1") but show a better error message."""
> + try:
> + return data.encode("latin-1")
> + except UnicodeEncodeError as err:
> + raise UnicodeEncodeError(
> + err.encoding,
> + err.object,
> + err.start,
> + err.end,
> + "%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') "
> + "if you want to send it encoded in UTF-8." %
> + (name.title(), data[err.start:err.end], name)) from None
> +
> +
> +class HTTPMessage(email.message.Message):
> + # XXX The only usage of this method is in
> + # http.server.CGIHTTPRequestHandler. Maybe move the code there so
> + # that it doesn't need to be part of the public API. The API has
> + # never been defined so this could cause backwards compatibility
> + # issues.
> +
> + def getallmatchingheaders(self, name):
> + """Find all header lines matching a given header name.
> +
> + Look through the list of headers and find all lines matching a given
> + header name (and their continuation lines). A list of the lines is
> + returned, without interpretation. If the header does not occur, an
> + empty list is returned. If the header occurs multiple times, all
> + occurrences are returned. Case is not important in the header name.
> +
> + """
> + name = name.lower() + ':'
> + n = len(name)
> + lst = []
> + hit = 0
> + for line in self.keys():
> + if line[:n].lower() == name:
> + hit = 1
> + elif not line[:1].isspace():
> + hit = 0
> + if hit:
> + lst.append(line)
> + return lst
> +
> +def parse_headers(fp, _class=HTTPMessage):
> + """Parses only RFC2822 headers from a file pointer.
> +
> + email Parser wants to see strings rather than bytes.
> + But a TextIOWrapper around self.rfile would buffer too many bytes
> + from the stream, bytes which we later need to read as bytes.
> + So we read the correct bytes here, as bytes, for email Parser
> + to parse.
> +
> + """
> + headers = []
> + while True:
> + line = fp.readline(_MAXLINE + 1)
> + if len(line) > _MAXLINE:
> + raise LineTooLong("header line")
> + headers.append(line)
> + if len(headers) > _MAXHEADERS:
> + raise HTTPException("got more than %d headers" % _MAXHEADERS)
> + if line in (b'\r\n', b'\n', b''):
> + break
> + hstring = b''.join(headers).decode('iso-8859-1')
> + return email.parser.Parser(_class=_class).parsestr(hstring)
> +
> +
> +class HTTPResponse(io.BufferedIOBase):
> +
> + # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details.
> +
> + # The bytes from the socket object are iso-8859-1 strings.
> + # See RFC 2616 sec 2.2 which notes an exception for MIME-encoded
> + # text following RFC 2047. The basic status line parsing only
> + # accepts iso-8859-1.
> +
> + def __init__(self, sock, debuglevel=0, method=None, url=None):
> + # If the response includes a content-length header, we need to
> + # make sure that the client doesn't read more than the
> + # specified number of bytes. If it does, it will block until
> + # the server times out and closes the connection. This will
> + # happen if a self.fp.read() is done (without a size) whether
> + # self.fp is buffered or not. So, no self.fp.read() by
> + # clients unless they know what they are doing.
> + self.fp = sock.makefile("rb")
> + self.debuglevel = debuglevel
> + self._method = method
> +
> + # The HTTPResponse object is returned via urllib. The clients
> + # of http and urllib expect different attributes for the
> + # headers. headers is used here and supports urllib. msg is
> + # provided as a backwards compatibility layer for http
> + # clients.
> +
> + self.headers = self.msg = None
> +
> + # from the Status-Line of the response
> + self.version = _UNKNOWN # HTTP-Version
> + self.status = _UNKNOWN # Status-Code
> + self.reason = _UNKNOWN # Reason-Phrase
> +
> + self.chunked = _UNKNOWN # is "chunked" being used?
> + self.chunk_left = _UNKNOWN # bytes left to read in current chunk
> + self.length = _UNKNOWN # number of bytes left in response
> + self.will_close = _UNKNOWN # conn will close at end of response
> +
> + def _read_status(self):
> + line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
> + if len(line) > _MAXLINE:
> + raise LineTooLong("status line")
> + if self.debuglevel > 0:
> + print("reply:", repr(line))
> + if not line:
> + # Presumably, the server closed the connection before
> + # sending a valid response.
> + raise RemoteDisconnected("Remote end closed connection without"
> + " response")
> + try:
> + version, status, reason = line.split(None, 2)
> + except ValueError:
> + try:
> + version, status = line.split(None, 1)
> + reason = ""
> + except ValueError:
> + # empty version will cause next test to fail.
> + version = ""
> + if not version.startswith("HTTP/"):
> + self._close_conn()
> + raise BadStatusLine(line)
> +
> + # The status code is a three-digit number
> + try:
> + status = int(status)
> + if status < 100 or status > 999:
> + raise BadStatusLine(line)
> + except ValueError:
> + raise BadStatusLine(line)
> + return version, status, reason
> +
> + def begin(self):
> + if self.headers is not None:
> + # we've already started reading the response
> + return
> +
> + # read until we get a non-100 response
> + while True:
> + version, status, reason = self._read_status()
> + if status != CONTINUE:
> + break
> + # skip the header from the 100 response
> + while True:
> + skip = self.fp.readline(_MAXLINE + 1)
> + if len(skip) > _MAXLINE:
> + raise LineTooLong("header line")
> + skip = skip.strip()
> + if not skip:
> + break
> + if self.debuglevel > 0:
> + print("header:", skip)
> +
> + self.code = self.status = status
> + self.reason = reason.strip()
> + if version in ("HTTP/1.0", "HTTP/0.9"):
> + # Some servers might still return "0.9", treat it as 1.0 anyway
> + self.version = 10
> + elif version.startswith("HTTP/1."):
> + self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1
> + else:
> + raise UnknownProtocol(version)
> +
> + self.headers = self.msg = parse_headers(self.fp)
> +
> + if self.debuglevel > 0:
> + for hdr in self.headers:
> + print("header:", hdr + ":", self.headers.get(hdr))
> +
> + # are we using the chunked-style of transfer encoding?
> + tr_enc = self.headers.get("transfer-encoding")
> + if tr_enc and tr_enc.lower() == "chunked":
> + self.chunked = True
> + self.chunk_left = None
> + else:
> + self.chunked = False
> +
> + # will the connection close at the end of the response?
> + self.will_close = self._check_close()
> +
> + # do we have a Content-Length?
> + # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"
> + self.length = None
> + length = self.headers.get("content-length")
> +
> + # are we using the chunked-style of transfer encoding?
> + tr_enc = self.headers.get("transfer-encoding")
> + if length and not self.chunked:
> + try:
> + self.length = int(length)
> + except ValueError:
> + self.length = None
> + else:
> + if self.length < 0: # ignore nonsensical negative lengths
> + self.length = None
> + else:
> + self.length = None
> +
> + # does the body have a fixed length? (of zero)
> + if (status == NO_CONTENT or status == NOT_MODIFIED or
> + 100 <= status < 200 or # 1xx codes
> + self._method == "HEAD"):
> + self.length = 0
> +
> + # if the connection remains open, and we aren't using chunked, and
> + # a content-length was not provided, then assume that the connection
> + # WILL close.
> + if (not self.will_close and
> + not self.chunked and
> + self.length is None):
> + self.will_close = True
> +
> + def _check_close(self):
> + conn = self.headers.get("connection")
> + if self.version == 11:
> + # An HTTP/1.1 proxy is assumed to stay open unless
> + # explicitly closed.
> + conn = self.headers.get("connection")
> + if conn and "close" in conn.lower():
> + return True
> + return False
> +
> + # Some HTTP/1.0 implementations have support for persistent
> + # connections, using rules different than HTTP/1.1.
> +
> + # For older HTTP, Keep-Alive indicates persistent connection.
> + if self.headers.get("keep-alive"):
> + return False
> +
> + # At least Akamai returns a "Connection: Keep-Alive" header,
> + # which was supposed to be sent by the client.
> + if conn and "keep-alive" in conn.lower():
> + return False
> +
> + # Proxy-Connection is a netscape hack.
> + pconn = self.headers.get("proxy-connection")
> + if pconn and "keep-alive" in pconn.lower():
> + return False
> +
> + # otherwise, assume it will close
> + return True
> +
> + def _close_conn(self):
> + fp = self.fp
> + self.fp = None
> + fp.close()
> +
> + def close(self):
> + try:
> + super().close() # set "closed" flag
> + finally:
> + if self.fp:
> + self._close_conn()
> +
> + # These implementations are for the benefit of io.BufferedReader.
> +
> + # XXX This class should probably be revised to act more like
> + # the "raw stream" that BufferedReader expects.
> +
> + def flush(self):
> + super().flush()
> + if self.fp:
> + self.fp.flush()
> +
> + def readable(self):
> + """Always returns True"""
> + return True
> +
> + # End of "raw stream" methods
> +
> + def isclosed(self):
> + """True if the connection is closed."""
> + # NOTE: it is possible that we will not ever call self.close(). This
> + # case occurs when will_close is TRUE, length is None, and we
> + # read up to the last byte, but NOT past it.
> + #
> + # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be
> + # called, meaning self.isclosed() is meaningful.
> + return self.fp is None
> +
> + def read(self, amt=None):
> + if self.fp is None:
> + return b""
> +
> + if self._method == "HEAD":
> + self._close_conn()
> + return b""
> +
> + if amt is not None:
> + # Amount is given, implement using readinto
> + b = bytearray(amt)
> + n = self.readinto(b)
> + return memoryview(b)[:n].tobytes()
> + else:
> + # Amount is not given (unbounded read) so we must check self.length
> + # and self.chunked
> +
> + if self.chunked:
> + return self._readall_chunked()
> +
> + if self.length is None:
> + s = self.fp.read()
> + else:
> + try:
> + s = self._safe_read(self.length)
> + except IncompleteRead:
> + self._close_conn()
> + raise
> + self.length = 0
> + self._close_conn() # we read everything
> + return s
> +
> + def readinto(self, b):
> + """Read up to len(b) bytes into bytearray b and return the number
> + of bytes read.
> + """
> +
> + if self.fp is None:
> + return 0
> +
> + if self._method == "HEAD":
> + self._close_conn()
> + return 0
> +
> + if self.chunked:
> + return self._readinto_chunked(b)
> +
> + if self.length is not None:
> + if len(b) > self.length:
> + # clip the read to the "end of response"
> + b = memoryview(b)[0:self.length]
> +
> + # we do not use _safe_read() here because this may be a .will_close
> + # connection, and the user is reading more bytes than will be provided
> + # (for example, reading in 1k chunks)
> + n = self.fp.readinto(b)
> + if not n and b:
> + # Ideally, we would raise IncompleteRead if the content-length
> + # wasn't satisfied, but it might break compatibility.
> + self._close_conn()
> + elif self.length is not None:
> + self.length -= n
> + if not self.length:
> + self._close_conn()
> + return n
> +
> + def _read_next_chunk_size(self):
> + # Read the next chunk size from the file
> + line = self.fp.readline(_MAXLINE + 1)
> + if len(line) > _MAXLINE:
> + raise LineTooLong("chunk size")
> + i = line.find(b";")
> + if i >= 0:
> + line = line[:i] # strip chunk-extensions
> + try:
> + return int(line, 16)
> + except ValueError:
> + # close the connection as protocol synchronisation is
> + # probably lost
> + self._close_conn()
> + raise
> +
> + def _read_and_discard_trailer(self):
> + # read and discard trailer up to the CRLF terminator
> + ### note: we shouldn't have any trailers!
> + while True:
> + line = self.fp.readline(_MAXLINE + 1)
> + if len(line) > _MAXLINE:
> + raise LineTooLong("trailer line")
> + if not line:
> + # a vanishingly small number of sites EOF without
> + # sending the trailer
> + break
> + if line in (b'\r\n', b'\n', b''):
> + break
> +
> + def _get_chunk_left(self):
> + # return self.chunk_left, reading a new chunk if necessary.
> + # chunk_left == 0: at the end of the current chunk, need to close it
> + # chunk_left == None: No current chunk, should read next.
> + # This function returns non-zero or None if the last chunk has
> + # been read.
> + chunk_left = self.chunk_left
> + if not chunk_left: # Can be 0 or None
> + if chunk_left is not None:
> + # We are at the end of chunk, discard chunk end
> + self._safe_read(2) # toss the CRLF at the end of the chunk
> + try:
> + chunk_left = self._read_next_chunk_size()
> + except ValueError:
> + raise IncompleteRead(b'')
> + if chunk_left == 0:
> + # last chunk: 1*("0") [ chunk-extension ] CRLF
> + self._read_and_discard_trailer()
> + # we read everything; close the "file"
> + self._close_conn()
> + chunk_left = None
> + self.chunk_left = chunk_left
> + return chunk_left
> +
> + def _readall_chunked(self):
> + assert self.chunked != _UNKNOWN
> + value = []
> + try:
> + while True:
> + chunk_left = self._get_chunk_left()
> + if chunk_left is None:
> + break
> + value.append(self._safe_read(chunk_left))
> + self.chunk_left = 0
> + return b''.join(value)
> + except IncompleteRead:
> + raise IncompleteRead(b''.join(value))
> +
> + def _readinto_chunked(self, b):
> + assert self.chunked != _UNKNOWN
> + total_bytes = 0
> + mvb = memoryview(b)
> + try:
> + while True:
> + chunk_left = self._get_chunk_left()
> + if chunk_left is None:
> + return total_bytes
> +
> + if len(mvb) <= chunk_left:
> + n = self._safe_readinto(mvb)
> + self.chunk_left = chunk_left - n
> + return total_bytes + n
> +
> + temp_mvb = mvb[:chunk_left]
> + n = self._safe_readinto(temp_mvb)
> + mvb = mvb[n:]
> + total_bytes += n
> + self.chunk_left = 0
> +
> + except IncompleteRead:
> + raise IncompleteRead(bytes(b[0:total_bytes]))
> +
> + def _safe_read(self, amt):
> + """Read the number of bytes requested, compensating for partial reads.
> +
> + Normally, we have a blocking socket, but a read() can be interrupted
> + by a signal (resulting in a partial read).
> +
> + Note that we cannot distinguish between EOF and an interrupt when zero
> + bytes have been read. IncompleteRead() will be raised in this
> + situation.
> +
> + This function should be used when <amt> bytes "should" be present for
> + reading. If the bytes are truly not available (due to EOF), then the
> + IncompleteRead exception can be used to detect the problem.
> + """
> + s = []
> + while amt > 0:
> + chunk = self.fp.read(min(amt, MAXAMOUNT))
> + if not chunk:
> + raise IncompleteRead(b''.join(s), amt)
> + s.append(chunk)
> + amt -= len(chunk)
> + return b"".join(s)
> +
> + def _safe_readinto(self, b):
> + """Same as _safe_read, but for reading into a buffer."""
> + total_bytes = 0
> + mvb = memoryview(b)
> + while total_bytes < len(b):
> + if MAXAMOUNT < len(mvb):
> + temp_mvb = mvb[0:MAXAMOUNT]
> + n = self.fp.readinto(temp_mvb)
> + else:
> + n = self.fp.readinto(mvb)
> + if not n:
> + raise IncompleteRead(bytes(mvb[0:total_bytes]), len(b))
> + mvb = mvb[n:]
> + total_bytes += n
> + return total_bytes
> +
> + def read1(self, n=-1):
> + """Read with at most one underlying system call. If at least one
> + byte is buffered, return that instead.
> + """
> + if self.fp is None or self._method == "HEAD":
> + return b""
> + if self.chunked:
> + return self._read1_chunked(n)
> + if self.length is not None and (n < 0 or n > self.length):
> + n = self.length
> + try:
> + result = self.fp.read1(n)
> + except ValueError:
> + if n >= 0:
> + raise
> + # some implementations, like BufferedReader, don't support -1
> + # Read an arbitrarily selected largeish chunk.
> + result = self.fp.read1(16*1024)
> + if not result and n:
> + self._close_conn()
> + elif self.length is not None:
> + self.length -= len(result)
> + return result
> +
> + def peek(self, n=-1):
> + # Having this enables IOBase.readline() to read more than one
> + # byte at a time
> + if self.fp is None or self._method == "HEAD":
> + return b""
> + if self.chunked:
> + return self._peek_chunked(n)
> + return self.fp.peek(n)
> +
> + def readline(self, limit=-1):
> + if self.fp is None or self._method == "HEAD":
> + return b""
> + if self.chunked:
> + # Fallback to IOBase readline which uses peek() and read()
> + return super().readline(limit)
> + if self.length is not None and (limit < 0 or limit > self.length):
> + limit = self.length
> + result = self.fp.readline(limit)
> + if not result and limit:
> + self._close_conn()
> + elif self.length is not None:
> + self.length -= len(result)
> + return result
> +
> + def _read1_chunked(self, n):
> + # Strictly speaking, _get_chunk_left() may cause more than one read,
> + # but that is ok, since that is to satisfy the chunked protocol.
> + chunk_left = self._get_chunk_left()
> + if chunk_left is None or n == 0:
> + return b''
> + if not (0 <= n <= chunk_left):
> + n = chunk_left # if n is negative or larger than chunk_left
> + read = self.fp.read1(n)
> + self.chunk_left -= len(read)
> + if not read:
> + raise IncompleteRead(b"")
> + return read
> +
> + def _peek_chunked(self, n):
> + # Strictly speaking, _get_chunk_left() may cause more than one read,
> + # but that is ok, since that is to satisfy the chunked protocol.
> + try:
> + chunk_left = self._get_chunk_left()
> + except IncompleteRead:
> + return b'' # peek doesn't worry about protocol
> + if chunk_left is None:
> + return b'' # eof
> + # peek is allowed to return more than requested. Just request the
> + # entire chunk, and truncate what we get.
> + return self.fp.peek(chunk_left)[:chunk_left]
> +
> + def fileno(self):
> + return self.fp.fileno()
> +
> + def getheader(self, name, default=None):
> + '''Returns the value of the header matching *name*.
> +
> + If there are multiple matching headers, the values are
> + combined into a single string separated by commas and spaces.
> +
> + If no matching header is found, returns *default* or None if
> + the *default* is not specified.
> +
> + If the headers are unknown, raises http.client.ResponseNotReady.
> +
> + '''
> + if self.headers is None:
> + raise ResponseNotReady()
> + headers = self.headers.get_all(name) or default
> + if isinstance(headers, str) or not hasattr(headers, '__iter__'):
> + return headers
> + else:
> + return ', '.join(headers)
> +
> + def getheaders(self):
> + """Return list of (header, value) tuples."""
> + if self.headers is None:
> + raise ResponseNotReady()
> + return list(self.headers.items())
> +
> + # We override IOBase.__iter__ so that it doesn't check for closed-ness
> +
> + def __iter__(self):
> + return self
> +
> + # For compatibility with old-style urllib responses.
> +
> + def info(self):
> + '''Returns an instance of the class mimetools.Message containing
> + meta-information associated with the URL.
> +
> + When the method is HTTP, these headers are those returned by
> + the server at the head of the retrieved HTML page (including
> + Content-Length and Content-Type).
> +
> + When the method is FTP, a Content-Length header will be
> + present if (as is now usual) the server passed back a file
> + length in response to the FTP retrieval request. A
> + Content-Type header will be present if the MIME type can be
> + guessed.
> +
> + When the method is local-file, returned headers will include
> + a Date representing the file's last-modified time, a
> + Content-Length giving file size, and a Content-Type
> + containing a guess at the file's type. See also the
> + description of the mimetools module.
> +
> + '''
> + return self.headers
> +
> + def geturl(self):
> + '''Return the real URL of the page.
> +
> + In some cases, the HTTP server redirects a client to another
> + URL. The urlopen() function handles this transparently, but in
> + some cases the caller needs to know which URL the client was
> + redirected to. The geturl() method can be used to get at this
> + redirected URL.
> +
> + '''
> + return self.url
> +
> + def getcode(self):
> + '''Return the HTTP status code that was sent with the response,
> + or None if the URL is not an HTTP URL.
> +
> + '''
> + return self.status
> +
> +class HTTPConnection:
> +
> + _http_vsn = 11
> + _http_vsn_str = 'HTTP/1.1'
> +
> + response_class = HTTPResponse
> + default_port = HTTP_PORT
> + auto_open = 1
> + debuglevel = 0
> +
> + @staticmethod
> + def _is_textIO(stream):
> + """Test whether a file-like object is a text or a binary stream.
> + """
> + return isinstance(stream, io.TextIOBase)
> +
> + @staticmethod
> + def _get_content_length(body, method):
> + """Get the content-length based on the body.
> +
> + If the body is None, we set Content-Length: 0 for methods that expect
> + a body (RFC 7230, Section 3.3.2). We also set the Content-Length for
> + any method if the body is a str or bytes-like object and not a file.
> + """
> + if body is None:
> + # do an explicit check for not None here to distinguish
> + # between unset and set but empty
> + if method.upper() in _METHODS_EXPECTING_BODY:
> + return 0
> + else:
> + return None
> +
> + if hasattr(body, 'read'):
> + # file-like object.
> + return None
> +
> + try:
> + # does it implement the buffer protocol (bytes, bytearray, array)?
> + mv = memoryview(body)
> + return mv.nbytes
> + except TypeError:
> + pass
> +
> + if isinstance(body, str):
> + return len(body)
> +
> + return None
> +
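
For reference, the three cases handled above can be exercised directly on a
host CPython 3.6+ (calling the private staticmethod and the bodies/methods
below are purely illustrative, not part of the patch):

    import io
    from http.client import HTTPConnection

    # Hypothetical bodies and methods, mirroring _get_content_length()'s cases.
    print(HTTPConnection._get_content_length(None, 'PUT'))    # 0    (PUT expects a body)
    print(HTTPConnection._get_content_length(None, 'GET'))    # None (no body expected)
    print(HTTPConnection._get_content_length(b'abc', 'POST')) # 3    (buffer protocol)
    print(HTTPConnection._get_content_length('abc', 'POST'))  # 3    (str)
    print(HTTPConnection._get_content_length(io.BytesIO(b'abc'), 'POST'))  # None (file-like)
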
> + def __init__(self, host, port=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
> + source_address=None):
> + self.timeout = timeout
> + self.source_address = source_address
> + self.sock = None
> + self._buffer = []
> + self.__response = None
> + self.__state = _CS_IDLE
> + self._method = None
> + self._tunnel_host = None
> + self._tunnel_port = None
> + self._tunnel_headers = {}
> +
> + (self.host, self.port) = self._get_hostport(host, port)
> +
> + # This is stored as an instance variable to allow unit
> + # tests to replace it with a suitable mockup
> + self._create_connection = socket.create_connection
> +
> + def set_tunnel(self, host, port=None, headers=None):
> + """Set up host and port for HTTP CONNECT tunnelling.
> +
> + In a connection that uses HTTP CONNECT tunneling, the host passed to the
> + constructor is used as a proxy server that relays all communication to
> + the endpoint passed to `set_tunnel`. This done by sending an HTTP
> + CONNECT request to the proxy server when the connection is established.
> +
> + This method must be called before the HTTP connection has been
> + established.
> +
> + The headers argument should be a mapping of extra HTTP headers to send
> + with the CONNECT request.
> + """
> +
> + if self.sock:
> + raise RuntimeError("Can't set up tunnel for established connection")
> +
> + self._tunnel_host, self._tunnel_port = self._get_hostport(host, port)
> + if headers:
> + self._tunnel_headers = headers
> + else:
> + self._tunnel_headers.clear()
> +
> + def _get_hostport(self, host, port):
> + if port is None:
> + i = host.rfind(':')
> + j = host.rfind(']') # ipv6 addresses have [...]
> + if i > j:
> + try:
> + port = int(host[i+1:])
> + except ValueError:
> + if host[i+1:] == "": # http://foo.com:/ == http://foo.com/
> + port = self.default_port
> + else:
> + raise InvalidURL("nonnumeric port: '%s'" % host[i+1:])
> + host = host[:i]
> + else:
> + port = self.default_port
> + if host and host[0] == '[' and host[-1] == ']':
> + host = host[1:-1]
> +
> + return (host, port)
> +
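
As a quick sanity check of the host/port splitting above (host names are
made up; constructing HTTPConnection does not open a socket, so this is safe
to run on a host CPython):

    from http.client import HTTPConnection

    c = HTTPConnection('example.org')           # no port -> default
    print(c.host, c.port)                       # example.org 80
    print(HTTPConnection('example.org:8080').port)   # 8080
    c = HTTPConnection('[2001:db8::1]:8080')    # IPv6 literal keeps its brackets until the port is split off
    print(c.host, c.port)                       # 2001:db8::1 8080
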
> + def set_debuglevel(self, level):
> + self.debuglevel = level
> +
> + def _tunnel(self):
> + connect_str = "CONNECT %s:%d HTTP/1.0\r\n" % (self._tunnel_host,
> + self._tunnel_port)
> + connect_bytes = connect_str.encode("ascii")
> + self.send(connect_bytes)
> + for header, value in self._tunnel_headers.items():
> + header_str = "%s: %s\r\n" % (header, value)
> + header_bytes = header_str.encode("latin-1")
> + self.send(header_bytes)
> + self.send(b'\r\n')
> +
> + response = self.response_class(self.sock, method=self._method)
> + (version, code, message) = response._read_status()
> +
> + if code != http.HTTPStatus.OK:
> + self.close()
> + raise OSError("Tunnel connection failed: %d %s" % (code,
> + message.strip()))
> + while True:
> + line = response.fp.readline(_MAXLINE + 1)
> + if len(line) > _MAXLINE:
> + raise LineTooLong("header line")
> + if not line:
> + # for sites which EOF without sending a trailer
> + break
> + if line in (b'\r\n', b'\n', b''):
> + break
> +
> + if self.debuglevel > 0:
> + print('header:', line.decode())
> +
> + def connect(self):
> + """Connect to the host and port specified in __init__."""
> + self.sock = self._create_connection(
> + (self.host,self.port), self.timeout, self.source_address)
> + #self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
> +
> + if self._tunnel_host:
> + self._tunnel()
> +
> + def close(self):
> + """Close the connection to the HTTP server."""
> + self.__state = _CS_IDLE
> + try:
> + sock = self.sock
> + if sock:
> + self.sock = None
> + sock.close() # close it manually... there may be other refs
> + finally:
> + response = self.__response
> + if response:
> + self.__response = None
> + response.close()
> +
> + def send(self, data):
> + """Send `data' to the server.
> + ``data`` can be a string object, a bytes object, an array object, a
> + file-like object that supports a .read() method, or an iterable object.
> + """
> +
> + if self.sock is None:
> + if self.auto_open:
> + self.connect()
> + else:
> + raise NotConnected()
> +
> + if self.debuglevel > 0:
> + print("send:", repr(data))
> + blocksize = 8192
> + if hasattr(data, "read") :
> + if self.debuglevel > 0:
> + print("sendIng a read()able")
> + encode = self._is_textIO(data)
> + if encode and self.debuglevel > 0:
> + print("encoding file using iso-8859-1")
> + while 1:
> + datablock = data.read(blocksize)
> + if not datablock:
> + break
> + if encode:
> + datablock = datablock.encode("iso-8859-1")
> + self.sock.sendall(datablock)
> + return
> + try:
> + self.sock.sendall(data)
> + except TypeError:
> + if isinstance(data, collections.Iterable):
> + for d in data:
> + self.sock.sendall(d)
> + else:
> + raise TypeError("data should be a bytes-like object "
> + "or an iterable, got %r" % type(data))
> +
> + def _output(self, s):
> + """Add a line of output to the current request buffer.
> +
> + Assumes that the line does *not* end with \\r\\n.
> + """
> + self._buffer.append(s)
> +
> + def _read_readable(self, readable):
> + blocksize = 8192
> + if self.debuglevel > 0:
> + print("sendIng a read()able")
> + encode = self._is_textIO(readable)
> + if encode and self.debuglevel > 0:
> + print("encoding file using iso-8859-1")
> + while True:
> + datablock = readable.read(blocksize)
> + if not datablock:
> + break
> + if encode:
> + datablock = datablock.encode("iso-8859-1")
> + yield datablock
> +
> + def _send_output(self, message_body=None, encode_chunked=False):
> + """Send the currently buffered request and clear the buffer.
> +
> + Appends an extra \\r\\n to the buffer.
> + A message_body may be specified, to be appended to the request.
> + """
> + self._buffer.extend((b"", b""))
> + msg = b"\r\n".join(self._buffer)
> + del self._buffer[:]
> + self.send(msg)
> +
> + if message_body is not None:
> +
> + # create a consistent interface to message_body
> + if hasattr(message_body, 'read'):
> + # Let file-like take precedence over byte-like. This
> + # is needed to allow the current position of mmap'ed
> + # files to be taken into account.
> + chunks = self._read_readable(message_body)
> + else:
> + try:
> + # this is solely to check to see if message_body
> + # implements the buffer API. it /would/ be easier
> + # to capture if PyObject_CheckBuffer was exposed
> + # to Python.
> + memoryview(message_body)
> + except TypeError:
> + try:
> + chunks = iter(message_body)
> + except TypeError:
> + raise TypeError("message_body should be a bytes-like "
> + "object or an iterable, got %r"
> + % type(message_body))
> + else:
> + # the object implements the buffer interface and
> + # can be passed directly into socket methods
> + chunks = (message_body,)
> +
> + for chunk in chunks:
> + if not chunk:
> + if self.debuglevel > 0:
> + print('Zero length chunk ignored')
> + continue
> +
> + if encode_chunked and self._http_vsn == 11:
> + # chunked encoding
> + chunk = f'{len(chunk):X}\r\n'.encode('ascii') + chunk \
> + + b'\r\n'
> + self.send(chunk)
> +
> + if encode_chunked and self._http_vsn == 11:
> + # end chunked transfer
> + self.send(b'0\r\n\r\n')
> +
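
The chunk framing used in _send_output() can be reproduced in isolation;
'hello world' is just an arbitrary payload:

    # Each chunk is '<hex length>\r\n' + payload + '\r\n'; the body ends
    # with a zero-length chunk, matching the code above.
    chunk = b'hello world'
    framed = f'{len(chunk):X}\r\n'.encode('ascii') + chunk + b'\r\n'
    print(framed)          # b'B\r\nhello world\r\n'   (0xB == 11 bytes)
    print(b'0\r\n\r\n')    # terminating zero-length chunk
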
> + def putrequest(self, method, url, skip_host=False,
> + skip_accept_encoding=False):
> + """Send a request to the server.
> +
> + `method' specifies an HTTP request method, e.g. 'GET'.
> + `url' specifies the object being requested, e.g. '/index.html'.
> + `skip_host' if True does not automatically add a 'Host:' header
> + `skip_accept_encoding' if True does not automatically add an
> + 'Accept-Encoding:' header
> + """
> +
> + # if a prior response has been completed, then forget about it.
> + if self.__response and self.__response.isclosed():
> + self.__response = None
> +
> +
> + # in certain cases, we cannot issue another request on this connection.
> + # this occurs when:
> + # 1) we are in the process of sending a request. (_CS_REQ_STARTED)
> + # 2) a response to a previous request has signalled that it is going
> + # to close the connection upon completion.
> + # 3) the headers for the previous response have not been read, thus
> + # we cannot determine whether point (2) is true. (_CS_REQ_SENT)
> + #
> + # if there is no prior response, then we can request at will.
> + #
> + # if point (2) is true, then we will have passed the socket to the
> + # response (effectively meaning, "there is no prior response"), and
> + # will open a new one when a new request is made.
> + #
> + # Note: if a prior response exists, then we *can* start a new request.
> + # We are not allowed to begin fetching the response to this new
> + # request, however, until that prior response is complete.
> + #
> + if self.__state == _CS_IDLE:
> + self.__state = _CS_REQ_STARTED
> + else:
> + raise CannotSendRequest(self.__state)
> +
> + # Save the method we use, we need it later in the response phase
> + self._method = method
> + if not url:
> + url = '/'
> + request = '%s %s %s' % (method, url, self._http_vsn_str)
> +
> + # Non-ASCII characters should have been eliminated earlier
> + self._output(request.encode('ascii'))
> +
> + if self._http_vsn == 11:
> + # Issue some standard headers for better HTTP/1.1 compliance
> +
> + if not skip_host:
> + # this header is issued *only* for HTTP/1.1
> + # connections. more specifically, this means it is
> + # only issued when the client uses the new
> + # HTTPConnection() class. backwards-compat clients
> + # will be using HTTP/1.0 and those clients may be
> + # issuing this header themselves. we should NOT issue
> + # it twice; some web servers (such as Apache) barf
> + # when they see two Host: headers
> +
> + # If we need a non-standard port, include it in the
> + # header. If the request is going through a proxy,
> + # use the host of the actual URL, not the host of the
> + # proxy.
> +
> + netloc = ''
> + if isinstance(url,str):
> + url = bytes(url,encoding='utf-8')
> + b = url.decode('utf-8')
> + if b.startswith('http'):
> + nil, netloc, nil, nil, nil = urlsplit(url)
> +
> + if netloc:
> + try:
> + netloc_enc = netloc.encode("ascii")
> + except UnicodeEncodeError:
> + netloc_enc = netloc.encode("idna")
> + self.putheader('Host', netloc_enc)
> + else:
> + if self._tunnel_host:
> + host = self._tunnel_host
> + port = self._tunnel_port
> + else:
> + host = self.host
> + port = self.port
> +
> + try:
> + host_enc = host.encode("ascii")
> + except UnicodeEncodeError:
> + host_enc = host.encode("idna")
> +
> + # As per RFC 2732, an IPv6 address should be wrapped with []
> + # when used as Host header
> +
> + if host.find(':') >= 0:
> + host_enc = b'[' + host_enc + b']'
> +
> + if port == self.default_port:
> + self.putheader('Host', host_enc)
> + else:
> + host_enc = host_enc.decode("ascii")
> + self.putheader('Host', "%s:%s" % (host_enc, port))
> +
> + # note: we are assuming that clients will not attempt to set these
> + # headers since *this* library must deal with the
> + # consequences. this also means that when the supporting
> + # libraries are updated to recognize other forms, then this
> + # code should be changed (removed or updated).
> +
> + # we only want a Content-Encoding of "identity" since we don't
> + # support encodings such as x-gzip or x-deflate.
> + if not skip_accept_encoding:
> + self.putheader('Accept-Encoding', 'identity')
> +
> + # we can accept "chunked" Transfer-Encodings, but no others
> + # NOTE: no TE header implies *only* "chunked"
> + #self.putheader('TE', 'chunked')
> +
> + # if TE is supplied in the header, then it must appear in a
> + # Connection header.
> + #self.putheader('Connection', 'TE')
> +
> + else:
> + # For HTTP/1.0, the server will assume "not chunked"
> + pass
> +
> + def putheader(self, header, *values):
> + """Send a request header line to the server.
> +
> + For example: h.putheader('Accept', 'text/html')
> + """
> + if self.__state != _CS_REQ_STARTED:
> + raise CannotSendHeader()
> +
> + if hasattr(header, 'encode'):
> + header = header.encode('ascii')
> +
> + if not _is_legal_header_name(header):
> + raise ValueError('Invalid header name %r' % (header,))
> +
> + values = list(values)
> + for i, one_value in enumerate(values):
> + if hasattr(one_value, 'encode'):
> + values[i] = one_value.encode('latin-1')
> + elif isinstance(one_value, int):
> + values[i] = str(one_value).encode('ascii')
> +
> + if _is_illegal_header_value(values[i]):
> + raise ValueError('Invalid header value %r' % (values[i],))
> +
> + value = b'\r\n\t'.join(values)
> + header = header + b': ' + value
> + self._output(header)
> +
> + def endheaders(self, message_body=None, *, encode_chunked=False):
> + """Indicate that the last header line has been sent to the server.
> +
> + This method sends the request to the server. The optional message_body
> + argument can be used to pass a message body associated with the
> + request.
> + """
> + if self.__state == _CS_REQ_STARTED:
> + self.__state = _CS_REQ_SENT
> + else:
> + raise CannotSendHeader()
> + self._send_output(message_body, encode_chunked=encode_chunked)
> +
> + def request(self, method, url, body=None, headers={}, *,
> + encode_chunked=False):
> + """Send a complete request to the server."""
> + self._send_request(method, url, body, headers, encode_chunked)
> +
> + def _send_request(self, method, url, body, headers, encode_chunked):
> + # Honor explicitly requested Host: and Accept-Encoding: headers.
> + header_names = frozenset(k.lower() for k in headers)
> + skips = {}
> + if 'host' in header_names:
> + skips['skip_host'] = 1
> + if 'accept-encoding' in header_names:
> + skips['skip_accept_encoding'] = 1
> +
> + self.putrequest(method, url, **skips)
> +
> + # chunked encoding will happen if HTTP/1.1 is used and either
> + # the caller passes encode_chunked=True or the following
> + # conditions hold:
> + # 1. content-length has not been explicitly set
> + # 2. the body is a file or iterable, but not a str or bytes-like
> + # 3. Transfer-Encoding has NOT been explicitly set by the caller
> +
> + if 'content-length' not in header_names:
> + # only chunk body if not explicitly set for backwards
> + # compatibility, assuming the client code is already handling the
> + # chunking
> + if 'transfer-encoding' not in header_names:
> + # if content-length cannot be automatically determined, fall
> + # back to chunked encoding
> + encode_chunked = False
> + content_length = self._get_content_length(body, method)
> + if content_length is None:
> + if body is not None:
> + if self.debuglevel > 0:
> + print('Unable to determine size of %r' % body)
> + encode_chunked = True
> + self.putheader('Transfer-Encoding', 'chunked')
> + else:
> + self.putheader('Content-Length', str(content_length))
> + else:
> + encode_chunked = False
> +
> + for hdr, value in headers.items():
> + self.putheader(hdr, value)
> + if isinstance(body, str):
> + # RFC 2616 Section 3.7.1 says that text media types have a
> + # default charset of iso-8859-1.
> + body = _encode(body, 'body')
> + self.endheaders(body, encode_chunked=encode_chunked)
> +
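
For context, the usual round trip through this class looks like the sketch
below. example.org is a placeholder, and this assumes a host CPython with
working sockets; it is not a claim about the UEFI shell build:

    from http.client import HTTPConnection

    conn = HTTPConnection('example.org', 80, timeout=10)
    conn.request('GET', '/', headers={'Accept': 'text/html'})
    resp = conn.getresponse()
    print(resp.status, resp.reason)
    body = resp.read()      # the response must be consumed before reusing the connection
    conn.close()
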
> + def getresponse(self):
> + """Get the response from the server.
> +
> + If the HTTPConnection is in the correct state, returns an
> + instance of HTTPResponse or of whatever object is returned by
> + the response_class variable.
> +
> + If a request has not been sent or if a previous response has
> + not been handled, ResponseNotReady is raised. If the HTTP
> + response indicates that the connection should be closed, then
> + it will be closed before the response is returned. When the
> + connection is closed, the underlying socket is closed.
> + """
> +
> + # if a prior response has been completed, then forget about it.
> + if self.__response and self.__response.isclosed():
> + self.__response = None
> +
> + # if a prior response exists, then it must be completed (otherwise, we
> + # cannot read this response's header to determine the connection-close
> + # behavior)
> + #
> + # note: if a prior response existed, but was connection-close, then the
> + # socket and response were made independent of this HTTPConnection
> + # object since a new request requires that we open a whole new
> + # connection
> + #
> + # this means the prior response had one of two states:
> + # 1) will_close: this connection was reset and the prior socket and
> + # response operate independently
> + # 2) persistent: the response was retained and we await its
> + # isclosed() status to become true.
> + #
> + if self.__state != _CS_REQ_SENT or self.__response:
> + raise ResponseNotReady(self.__state)
> +
> + if self.debuglevel > 0:
> + response = self.response_class(self.sock, self.debuglevel,
> + method=self._method)
> + else:
> + response = self.response_class(self.sock, method=self._method)
> +
> + try:
> + try:
> + response.begin()
> + except ConnectionError:
> + self.close()
> + raise
> + assert response.will_close != _UNKNOWN
> + self.__state = _CS_IDLE
> +
> + if response.will_close:
> + # this effectively passes the connection to the response
> + self.close()
> + else:
> + # remember this, so we can tell when it is complete
> + self.__response = response
> +
> + return response
> + except:
> + response.close()
> + raise
> +
> +try:
> + import ssl
> +except ImportError:
> + pass
> +else:
> + class HTTPSConnection(HTTPConnection):
> + "This class allows communication via SSL."
> +
> + default_port = HTTPS_PORT
> +
> + # XXX Should key_file and cert_file be deprecated in favour of context?
> +
> + def __init__(self, host, port=None, key_file=None, cert_file=None,
> + timeout=socket._GLOBAL_DEFAULT_TIMEOUT,
> + source_address=None, *, context=None,
> + check_hostname=None):
> + super(HTTPSConnection, self).__init__(host, port, timeout,
> + source_address)
> + if (key_file is not None or cert_file is not None or
> + check_hostname is not None):
> + import warnings
> + warnings.warn("key_file, cert_file and check_hostname are "
> + "deprecated, use a custom context instead.",
> + DeprecationWarning, 2)
> + self.key_file = key_file
> + self.cert_file = cert_file
> + if context is None:
> + context = ssl._create_default_https_context()
> + will_verify = context.verify_mode != ssl.CERT_NONE
> + if check_hostname is None:
> + check_hostname = context.check_hostname
> + if check_hostname and not will_verify:
> + raise ValueError("check_hostname needs a SSL context with "
> + "either CERT_OPTIONAL or CERT_REQUIRED")
> + if key_file or cert_file:
> + context.load_cert_chain(cert_file, key_file)
> + self._context = context
> + self._check_hostname = check_hostname
> +
> + def connect(self):
> + "Connect to a host on a given (SSL) port."
> +
> + super().connect()
> +
> + if self._tunnel_host:
> + server_hostname = self._tunnel_host
> + else:
> + server_hostname = self.host
> +
> + self.sock = self._context.wrap_socket(self.sock,
> + server_hostname=server_hostname)
> + if not self._context.check_hostname and self._check_hostname:
> + try:
> + ssl.match_hostname(self.sock.getpeercert(), server_hostname)
> + except Exception:
> + self.sock.shutdown(socket.SHUT_RDWR)
> + self.sock.close()
> + raise
> +
> + __all__.append("HTTPSConnection")
> +
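
When the ssl module is importable (the whole class is skipped otherwise, as
the try/except above shows), the non-deprecated way to configure TLS is to
pass an explicit context; example.org is again a placeholder and this is a
host-CPython sketch only:

    import ssl
    from http.client import HTTPSConnection

    ctx = ssl.create_default_context()     # certificate verification + hostname checking on
    conn = HTTPSConnection('example.org', 443, context=ctx, timeout=10)
    conn.request('GET', '/')
    print(conn.getresponse().status)
    conn.close()
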
> +class HTTPException(Exception):
> + # Subclasses that define an __init__ must call Exception.__init__
> + # or define self.args. Otherwise, str() will fail.
> + pass
> +
> +class NotConnected(HTTPException):
> + pass
> +
> +class InvalidURL(HTTPException):
> + pass
> +
> +class UnknownProtocol(HTTPException):
> + def __init__(self, version):
> + self.args = version,
> + self.version = version
> +
> +class UnknownTransferEncoding(HTTPException):
> + pass
> +
> +class UnimplementedFileMode(HTTPException):
> + pass
> +
> +class IncompleteRead(HTTPException):
> + def __init__(self, partial, expected=None):
> + self.args = partial,
> + self.partial = partial
> + self.expected = expected
> + def __repr__(self):
> + if self.expected is not None:
> + e = ', %i more expected' % self.expected
> + else:
> + e = ''
> + return '%s(%i bytes read%s)' % (self.__class__.__name__,
> + len(self.partial), e)
> + def __str__(self):
> + return repr(self)
> +
> +class ImproperConnectionState(HTTPException):
> + pass
> +
> +class CannotSendRequest(ImproperConnectionState):
> + pass
> +
> +class CannotSendHeader(ImproperConnectionState):
> + pass
> +
> +class ResponseNotReady(ImproperConnectionState):
> + pass
> +
> +class BadStatusLine(HTTPException):
> + def __init__(self, line):
> + if not line:
> + line = repr(line)
> + self.args = line,
> + self.line = line
> +
> +class LineTooLong(HTTPException):
> + def __init__(self, line_type):
> + HTTPException.__init__(self, "got more than %d bytes when reading %s"
> + % (_MAXLINE, line_type))
> +
> +class RemoteDisconnected(ConnectionResetError, BadStatusLine):
> + def __init__(self, *pos, **kw):
> + BadStatusLine.__init__(self, "")
> + ConnectionResetError.__init__(self, *pos, **kw)
> +
> +# for backwards compatibility
> +error = HTTPException
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
> new file mode 100644
> index 00000000..dcf41018
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
> @@ -0,0 +1,1443 @@
> +"""Core implementation of path-based import.
> +
> +This module is NOT meant to be directly imported! It has been designed such
> +that it can be bootstrapped into Python as the implementation of import. As
> +such it requires the injection of specific modules and attributes in order to
> +work. One should use importlib as the public-facing version of this module.
> +
> +"""
> +#
> +# IMPORTANT: Whenever making changes to this module, be sure to run
> +# a top-level make in order to get the frozen version of the module
> + # updated. Not doing so will cause the Makefile to fail for
> +# all others who don't have a ./python around to freeze the module
> +# in the early stages of compilation.
> +#
> +
> +# See importlib._setup() for what is injected into the global namespace.
> +
> +# When editing this code be aware that code executed at import time CANNOT
> +# reference any injected objects! This includes not only global code but also
> +# anything specified at the class level.
> +
> +# Bootstrap-related code ######################################################
> +_CASE_INSENSITIVE_PLATFORMS_STR_KEY = 'win',
> +_CASE_INSENSITIVE_PLATFORMS_BYTES_KEY = 'cygwin', 'darwin'
> +_CASE_INSENSITIVE_PLATFORMS = (_CASE_INSENSITIVE_PLATFORMS_BYTES_KEY
> + + _CASE_INSENSITIVE_PLATFORMS_STR_KEY)
> +
> +
> +def _make_relax_case():
> + if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS):
> + if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS_STR_KEY):
> + key = 'PYTHONCASEOK'
> + else:
> + key = b'PYTHONCASEOK'
> +
> + def _relax_case():
> + """True if filenames must be checked case-insensitively."""
> + return key in _os.environ
> + else:
> + def _relax_case():
> + """True if filenames must be checked case-insensitively."""
> + return False
> + return _relax_case
> +
> +
> +def _w_long(x):
> + """Convert a 32-bit integer to little-endian."""
> + return (int(x) & 0xFFFFFFFF).to_bytes(4, 'little')
> +
> +
> +def _r_long(int_bytes):
> + """Convert 4 bytes in little-endian to an integer."""
> + return int.from_bytes(int_bytes, 'little')
> +
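
These two helpers are just a 4-byte little-endian round trip; a quick check
with an arbitrary value:

    value = 0x12345678
    raw = (value & 0xFFFFFFFF).to_bytes(4, 'little')   # what _w_long() produces
    print(raw)                                         # b'xV4\x12'
    print(hex(int.from_bytes(raw, 'little')))          # 0x12345678 (what _r_long() recovers)
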
> +
> +def _path_join(*path_parts):
> + """Replacement for os.path.join()."""
> + return path_sep.join([part.rstrip(path_separators)
> + for part in path_parts if part])
> +
> +
> +def _path_split(path):
> + """Replacement for os.path.split()."""
> + if len(path_separators) == 1:
> + front, _, tail = path.rpartition(path_sep)
> + return front, tail
> + for x in reversed(path):
> + if x in path_separators:
> + front, tail = path.rsplit(x, maxsplit=1)
> + return front, tail
> + return '', path
> +
> +
> +def _path_stat(path):
> + """Stat the path.
> +
> + Made a separate function to make it easier to override in experiments
> + (e.g. cache stat results).
> +
> + """
> + return _os.stat(path)
> +
> +
> +def _path_is_mode_type(path, mode):
> + """Test whether the path is the specified mode type."""
> + try:
> + stat_info = _path_stat(path)
> + except OSError:
> + return False
> + return (stat_info.st_mode & 0o170000) == mode
> +
> +
> +def _path_isfile(path):
> + """Replacement for os.path.isfile."""
> + return _path_is_mode_type(path, 0o100000)
> +
> +
> +def _path_isdir(path):
> + """Replacement for os.path.isdir."""
> + if not path:
> + path = _os.getcwd()
> + return _path_is_mode_type(path, 0o040000)
> +
> +
> +def _write_atomic(path, data, mode=0o666):
> + """Best-effort function to write data to a path atomically.
> + Be prepared to handle a FileExistsError if concurrent writing of the
> + temporary file is attempted."""
> + # id() is used to generate a pseudo-random filename.
> + path_tmp = '{}.{}'.format(path, id(path))
> + fd = _os.open(path_tmp,
> + _os.O_EXCL | _os.O_CREAT | _os.O_WRONLY, mode & 0o666)
> + try:
> + # We first write data to a temporary file, and then use os.rename() to
> + # move it into place (upstream CPython uses os.replace() here).
> + with _io.FileIO(fd, 'wb') as file:
> + file.write(data)
> + _os.rename(path_tmp, path)
> + except OSError:
> + try:
> + _os.unlink(path_tmp)
> + except OSError:
> + pass
> + raise
> +
> +
> +_code_type = type(_write_atomic.__code__)
> +
> +
> +# Finder/loader utility code ###############################################
> +
> +# Magic word to reject .pyc files generated by other Python versions.
> +# It should change for each incompatible change to the bytecode.
> +#
> +# The value of CR and LF is incorporated so if you ever read or write
> +# a .pyc file in text mode the magic number will be wrong; also, the
> +# Apple MPW compiler swaps their values, botching string constants.
> +#
> +# There were a variety of old schemes for setting the magic number.
> +# The current working scheme is to increment the previous value by
> +# 10.
> +#
> +# Starting with the adoption of PEP 3147 in Python 3.2, every bump in magic
> +# number also includes a new "magic tag", i.e. a human readable string used
> +# to represent the magic number in __pycache__ directories. When you change
> +# the magic number, you must also set a new unique magic tag. Generally this
> +# can be named after the Python major version of the magic number bump, but
> +# it can really be anything, as long as it's different than anything else
> +# that's come before. The tags are included in the following table, starting
> +# with Python 3.2a0.
> +#
> +# Known values:
> +# Python 1.5: 20121
> +# Python 1.5.1: 20121
> +# Python 1.5.2: 20121
> +# Python 1.6: 50428
> +# Python 2.0: 50823
> +# Python 2.0.1: 50823
> +# Python 2.1: 60202
> +# Python 2.1.1: 60202
> +# Python 2.1.2: 60202
> +# Python 2.2: 60717
> +# Python 2.3a0: 62011
> +# Python 2.3a0: 62021
> +# Python 2.3a0: 62011 (!)
> +# Python 2.4a0: 62041
> +# Python 2.4a3: 62051
> +# Python 2.4b1: 62061
> +# Python 2.5a0: 62071
> +# Python 2.5a0: 62081 (ast-branch)
> +# Python 2.5a0: 62091 (with)
> +# Python 2.5a0: 62092 (changed WITH_CLEANUP opcode)
> +# Python 2.5b3: 62101 (fix wrong code: for x, in ...)
> +# Python 2.5b3: 62111 (fix wrong code: x += yield)
> +# Python 2.5c1: 62121 (fix wrong lnotab with for loops and
> +# storing constants that should have been removed)
> +# Python 2.5c2: 62131 (fix wrong code: for x, in ... in listcomp/genexp)
> +# Python 2.6a0: 62151 (peephole optimizations and STORE_MAP opcode)
> +# Python 2.6a1: 62161 (WITH_CLEANUP optimization)
> +# Python 2.7a0: 62171 (optimize list comprehensions/change LIST_APPEND)
> +# Python 2.7a0: 62181 (optimize conditional branches:
> +# introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE)
> +# Python 2.7a0 62191 (introduce SETUP_WITH)
> +# Python 2.7a0 62201 (introduce BUILD_SET)
> +# Python 2.7a0 62211 (introduce MAP_ADD and SET_ADD)
> +# Python 3000: 3000
> +# 3010 (removed UNARY_CONVERT)
> +# 3020 (added BUILD_SET)
> +# 3030 (added keyword-only parameters)
> +# 3040 (added signature annotations)
> +# 3050 (print becomes a function)
> +# 3060 (PEP 3115 metaclass syntax)
> +# 3061 (string literals become unicode)
> +# 3071 (PEP 3109 raise changes)
> +# 3081 (PEP 3137 make __file__ and __name__ unicode)
> +# 3091 (kill str8 interning)
> +# 3101 (merge from 2.6a0, see 62151)
> +# 3103 (__file__ points to source file)
> +# Python 3.0a4: 3111 (WITH_CLEANUP optimization).
> +# Python 3.0a5: 3131 (lexical exception stacking, including POP_EXCEPT)
> +# Python 3.1a0: 3141 (optimize list, set and dict comprehensions:
> +# change LIST_APPEND and SET_ADD, add MAP_ADD)
> +# Python 3.1a0: 3151 (optimize conditional branches:
> +# introduce POP_JUMP_IF_FALSE and POP_JUMP_IF_TRUE)
> +# Python 3.2a0: 3160 (add SETUP_WITH)
> +# tag: cpython-32
> +# Python 3.2a1: 3170 (add DUP_TOP_TWO, remove DUP_TOPX and ROT_FOUR)
> +# tag: cpython-32
> +# Python 3.2a2 3180 (add DELETE_DEREF)
> +# Python 3.3a0 3190 __class__ super closure changed
> +# Python 3.3a0 3200 (__qualname__ added)
> +# 3210 (added size modulo 2**32 to the pyc header)
> +# Python 3.3a1 3220 (changed PEP 380 implementation)
> +# Python 3.3a4 3230 (revert changes to implicit __class__ closure)
> +# Python 3.4a1 3250 (evaluate positional default arguments before
> +# keyword-only defaults)
> +# Python 3.4a1 3260 (add LOAD_CLASSDEREF; allow locals of class to override
> +# free vars)
> +# Python 3.4a1 3270 (various tweaks to the __class__ closure)
> +# Python 3.4a1 3280 (remove implicit class argument)
> +# Python 3.4a4 3290 (changes to __qualname__ computation)
> +# Python 3.4a4 3300 (more changes to __qualname__ computation)
> +# Python 3.4rc2 3310 (alter __qualname__ computation)
> +# Python 3.5a0 3320 (matrix multiplication operator)
> +# Python 3.5b1 3330 (PEP 448: Additional Unpacking Generalizations)
> +# Python 3.5b2 3340 (fix dictionary display evaluation order #11205)
> +# Python 3.5b2 3350 (add GET_YIELD_FROM_ITER opcode #24400)
> +# Python 3.5.2 3351 (fix BUILD_MAP_UNPACK_WITH_CALL opcode #27286)
> +# Python 3.6a0 3360 (add FORMAT_VALUE opcode #25483
> +# Python 3.6a0 3361 (lineno delta of code.co_lnotab becomes signed)
> +# Python 3.6a1 3370 (16 bit wordcode)
> +# Python 3.6a1 3371 (add BUILD_CONST_KEY_MAP opcode #27140)
> +# Python 3.6a1 3372 (MAKE_FUNCTION simplification, remove MAKE_CLOSURE
> +# #27095)
> +# Python 3.6b1 3373 (add BUILD_STRING opcode #27078)
> +# Python 3.6b1 3375 (add SETUP_ANNOTATIONS and STORE_ANNOTATION opcodes
> +# #27985)
> +# Python 3.6b1 3376 (simplify CALL_FUNCTIONs & BUILD_MAP_UNPACK_WITH_CALL)
> +# Python 3.6b1 3377 (set __class__ cell from type.__new__ #23722)
> +# Python 3.6b2 3378 (add BUILD_TUPLE_UNPACK_WITH_CALL #28257)
> +# Python 3.6rc1 3379 (more thorough __class__ validation #23722)
> +#
> +# MAGIC must change whenever the bytecode emitted by the compiler may no
> +# longer be understood by older implementations of the eval loop (usually
> +# due to the addition of new opcodes).
> +#
> +# Whenever MAGIC_NUMBER is changed, the ranges in the magic_values array
> +# in PC/launcher.c must also be updated.
> +
> +MAGIC_NUMBER = (3379).to_bytes(2, 'little') + b'\r\n'
> +_RAW_MAGIC_NUMBER = int.from_bytes(MAGIC_NUMBER, 'little') # For import.c
> +
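
For 3.6.8 the resulting .pyc header looks like the sketch below, mirroring
the layout that _code_to_bytecode() further down writes out: 4 bytes of
magic, 4 bytes of source mtime, 4 bytes of source size, then the marshalled
code object (zeros used here just for illustration):

    magic = (3379).to_bytes(2, 'little') + b'\r\n'
    header = magic + (0).to_bytes(4, 'little') * 2   # zero mtime and size
    print(magic)          # b'3\r\r\n'
    print(len(header))    # 12 -- everything after byte 12 is marshalled code
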
> +_PYCACHE = '__pycache__'
> +_OPT = 'opt-'
> +
> +SOURCE_SUFFIXES = ['.py'] # _setup() adds .pyw as needed.
> +
> +BYTECODE_SUFFIXES = ['.pyc']
> +# Deprecated.
> +DEBUG_BYTECODE_SUFFIXES = OPTIMIZED_BYTECODE_SUFFIXES = BYTECODE_SUFFIXES
> +
> +def cache_from_source(path, debug_override=None, *, optimization=None):
> + """Given the path to a .py file, return the path to its .pyc file.
> +
> + The .py file does not need to exist; this simply returns the path to the
> + .pyc file calculated as if the .py file were imported.
> +
> + The 'optimization' parameter controls the presumed optimization level of
> + the bytecode file. If 'optimization' is not None, the string representation
> + of the argument is taken and verified to be alphanumeric (else ValueError
> + is raised).
> +
> + The debug_override parameter is deprecated. If debug_override is not None,
> + a True value is the same as setting 'optimization' to the empty string
> + while a False value is equivalent to setting 'optimization' to '1'.
> +
> + If sys.implementation.cache_tag is None then NotImplementedError is raised.
> +
> + """
> + if debug_override is not None:
> + _warnings.warn('the debug_override parameter is deprecated; use '
> + "'optimization' instead", DeprecationWarning)
> + if optimization is not None:
> + message = 'debug_override or optimization must be set to None'
> + raise TypeError(message)
> + optimization = '' if debug_override else 1
> + #path = _os.fspath(path) #JP hack
> + head, tail = _path_split(path)
> + base, sep, rest = tail.rpartition('.')
> + tag = sys.implementation.cache_tag
> + if tag is None:
> + raise NotImplementedError('sys.implementation.cache_tag is None')
> + almost_filename = ''.join([(base if base else rest), sep, tag])
> + if optimization is None:
> + if sys.flags.optimize == 0:
> + optimization = ''
> + else:
> + optimization = sys.flags.optimize
> + optimization = str(optimization)
> + if optimization != '':
> + if not optimization.isalnum():
> + raise ValueError('{!r} is not alphanumeric'.format(optimization))
> + almost_filename = '{}.{}{}'.format(almost_filename, _OPT, optimization)
> + return _path_join(head, _PYCACHE, almost_filename + BYTECODE_SUFFIXES[0])
> +
> +
> +def source_from_cache(path):
> + """Given the path to a .pyc. file, return the path to its .py file.
> +
> + The .pyc file does not need to exist; this simply returns the path to
> + the .py file calculated to correspond to the .pyc file. If path does
> + not conform to PEP 3147/488 format, ValueError will be raised. If
> + sys.implementation.cache_tag is None then NotImplementedError is raised.
> +
> + """
> + if sys.implementation.cache_tag is None:
> + raise NotImplementedError('sys.implementation.cache_tag is None')
> + #path = _os.fspath(path) #JP hack
> + head, pycache_filename = _path_split(path)
> + head, pycache = _path_split(head)
> + if pycache != _PYCACHE:
> + raise ValueError('{} not bottom-level directory in '
> + '{!r}'.format(_PYCACHE, path))
> + dot_count = pycache_filename.count('.')
> + if dot_count not in {2, 3}:
> + raise ValueError('expected only 2 or 3 dots in '
> + '{!r}'.format(pycache_filename))
> + elif dot_count == 3:
> + optimization = pycache_filename.rsplit('.', 2)[-2]
> + if not optimization.startswith(_OPT):
> + raise ValueError("optimization portion of filename does not start "
> + "with {!r}".format(_OPT))
> + opt_level = optimization[len(_OPT):]
> + if not opt_level.isalnum():
> + raise ValueError("optimization level {!r} is not an alphanumeric "
> + "value".format(optimization))
> + base_filename = pycache_filename.partition('.')[0]
> + return _path_join(head, base_filename + SOURCE_SUFFIXES[0])
> +
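
The mapping implemented by these two functions is exposed publicly through
importlib.util; on a POSIX-style host CPython 3.6 (cache tag 'cpython-36')
the hypothetical paths below come out as shown (the files need not exist):

    import importlib.util

    print(importlib.util.cache_from_source('pkg/mod.py'))
    #   pkg/__pycache__/mod.cpython-36.pyc
    print(importlib.util.cache_from_source('pkg/mod.py', optimization=1))
    #   pkg/__pycache__/mod.cpython-36.opt-1.pyc
    print(importlib.util.source_from_cache('pkg/__pycache__/mod.cpython-36.pyc'))
    #   pkg/mod.py
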
> +
> +def _get_sourcefile(bytecode_path):
> + """Convert a bytecode file path to a source path (if possible).
> +
> + This function exists purely for backwards-compatibility for
> + PyImport_ExecCodeModuleWithFilenames() in the C API.
> +
> + """
> + if len(bytecode_path) == 0:
> + return None
> + rest, _, extension = bytecode_path.rpartition('.')
> + if not rest or extension.lower()[-3:-1] != 'py':
> + return bytecode_path
> + try:
> + source_path = source_from_cache(bytecode_path)
> + except (NotImplementedError, ValueError):
> + source_path = bytecode_path[:-1]
> + return source_path if _path_isfile(source_path) else bytecode_path
> +
> +
> +def _get_cached(filename):
> + if filename.endswith(tuple(SOURCE_SUFFIXES)):
> + try:
> + return cache_from_source(filename)
> + except NotImplementedError:
> + pass
> + elif filename.endswith(tuple(BYTECODE_SUFFIXES)):
> + return filename
> + else:
> + return None
> +
> +
> +def _calc_mode(path):
> + """Calculate the mode permissions for a bytecode file."""
> + try:
> + mode = _path_stat(path).st_mode
> + except OSError:
> + mode = 0o666
> + # We always ensure write access so we can update cached files
> + # later even when the source files are read-only on Windows (#6074)
> + mode |= 0o200
> + return mode
> +
> +
> +def _check_name(method):
> + """Decorator to verify that the module being requested matches the one the
> + loader can handle.
> +
> + The first argument (self) must define _name which the second argument is
> + compared against. If the comparison fails then ImportError is raised.
> +
> + """
> + def _check_name_wrapper(self, name=None, *args, **kwargs):
> + if name is None:
> + name = self.name
> + elif self.name != name:
> + raise ImportError('loader for %s cannot handle %s' %
> + (self.name, name), name=name)
> + return method(self, name, *args, **kwargs)
> + try:
> + _wrap = _bootstrap._wrap
> + except NameError:
> + # XXX yuck
> + def _wrap(new, old):
> + for replace in ['__module__', '__name__', '__qualname__', '__doc__']:
> + if hasattr(old, replace):
> + setattr(new, replace, getattr(old, replace))
> + new.__dict__.update(old.__dict__)
> + _wrap(_check_name_wrapper, method)
> + return _check_name_wrapper
> +
> +
> +def _find_module_shim(self, fullname):
> + """Try to find a loader for the specified module by delegating to
> + self.find_loader().
> +
> + This method is deprecated in favor of finder.find_spec().
> +
> + """
> + # Call find_loader(). If it returns a string (indicating this
> + # is a namespace package portion), generate a warning and
> + # return None.
> + loader, portions = self.find_loader(fullname)
> + if loader is None and len(portions):
> + msg = 'Not importing directory {}: missing __init__'
> + _warnings.warn(msg.format(portions[0]), ImportWarning)
> + return loader
> +
> +
> +def _validate_bytecode_header(data, source_stats=None, name=None, path=None):
> + """Validate the header of the passed-in bytecode against source_stats (if
> + given) and return the bytecode that can be compiled by compile().
> +
> + All other arguments are used to enhance error reporting.
> +
> + ImportError is raised when the magic number is incorrect or the bytecode is
> + found to be stale. EOFError is raised when the data is found to be
> + truncated.
> +
> + """
> + exc_details = {}
> + if name is not None:
> + exc_details['name'] = name
> + else:
> + # To prevent having to make all messages have a conditional name.
> + name = '<bytecode>'
> + if path is not None:
> + exc_details['path'] = path
> + magic = data[:4]
> + raw_timestamp = data[4:8]
> + raw_size = data[8:12]
> + if magic != MAGIC_NUMBER:
> + message = 'bad magic number in {!r}: {!r}'.format(name, magic)
> + _bootstrap._verbose_message('{}', message)
> + raise ImportError(message, **exc_details)
> + elif len(raw_timestamp) != 4:
> + message = 'reached EOF while reading timestamp in {!r}'.format(name)
> + _bootstrap._verbose_message('{}', message)
> + raise EOFError(message)
> + elif len(raw_size) != 4:
> + message = 'reached EOF while reading size of source in {!r}'.format(name)
> + _bootstrap._verbose_message('{}', message)
> + raise EOFError(message)
> + if source_stats is not None:
> + try:
> + source_mtime = int(source_stats['mtime'])
> + except KeyError:
> + pass
> + else:
> + if _r_long(raw_timestamp) != source_mtime:
> + message = 'bytecode is stale for {!r}'.format(name)
> + _bootstrap._verbose_message('{}', message)
> + raise ImportError(message, **exc_details)
> + try:
> + source_size = source_stats['size'] & 0xFFFFFFFF
> + except KeyError:
> + pass
> + else:
> + if _r_long(raw_size) != source_size:
> + raise ImportError('bytecode is stale for {!r}'.format(name),
> + **exc_details)
> + return data[12:]
> +
> +
> +def _compile_bytecode(data, name=None, bytecode_path=None, source_path=None):
> + """Compile bytecode as returned by _validate_bytecode_header()."""
> + code = marshal.loads(data)
> + if isinstance(code, _code_type):
> + _bootstrap._verbose_message('code object from {!r}', bytecode_path)
> + if source_path is not None:
> + _imp._fix_co_filename(code, source_path)
> + return code
> + else:
> + raise ImportError('Non-code object in {!r}'.format(bytecode_path),
> + name=name, path=bytecode_path)
> +
> +def _code_to_bytecode(code, mtime=0, source_size=0):
> + """Compile a code object into bytecode for writing out to a byte-compiled
> + file."""
> + data = bytearray(MAGIC_NUMBER)
> + data.extend(_w_long(mtime))
> + data.extend(_w_long(source_size))
> + data.extend(marshal.dumps(code))
> + return data
> +
> +
> +def decode_source(source_bytes):
> + """Decode bytes representing source code and return the string.
> +
> + Universal newline support is used in the decoding.
> + """
> + import tokenize # To avoid bootstrap issues.
> + source_bytes_readline = _io.BytesIO(source_bytes).readline
> + encoding = tokenize.detect_encoding(source_bytes_readline)
> + newline_decoder = _io.IncrementalNewlineDecoder(None, True)
> + return newline_decoder.decode(source_bytes.decode(encoding[0]))
> +
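
decode_source() is also reachable as importlib.util.decode_source(); a small
made-up snippet shows the PEP 263 cookie handling and newline normalisation:

    import importlib.util

    src = b"# -*- coding: latin-1 -*-\r\nname = '\xe9'\r\n"
    print(repr(importlib.util.decode_source(src)))
    #   "# -*- coding: latin-1 -*-\nname = '\xe9'\n" (cookie honoured, CRLF folded to LF)
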
> +
> +# Module specifications #######################################################
> +
> +_POPULATE = object()
> +
> +
> +def spec_from_file_location(name, location=None, *, loader=None,
> + submodule_search_locations=_POPULATE):
> + """Return a module spec based on a file location.
> +
> + To indicate that the module is a package, set
> + submodule_search_locations to a list of directory paths. An
> + empty list is sufficient, though it's not otherwise useful to the
> + import system.
> +
> + The loader must take a spec as its only __init__() arg.
> +
> + """
> + if location is None:
> + # The caller may simply want a partially populated location-
> + # oriented spec. So we set the location to a bogus value and
> + # fill in as much as we can.
> + location = '<unknown>'
> + if hasattr(loader, 'get_filename'):
> + # ExecutionLoader
> + try:
> + location = loader.get_filename(name)
> + except ImportError:
> + pass
> + else:
> + #location = _os.fspath(location) #JP hack
> + location = location
> + # If the location is on the filesystem, but doesn't actually exist,
> + # we could return None here, indicating that the location is not
> + # valid. However, we don't have a good way of testing since an
> + # indirect location (e.g. a zip file or URL) will look like a
> + # non-existent file relative to the filesystem.
> +
> + spec = _bootstrap.ModuleSpec(name, loader, origin=location)
> + spec._set_fileattr = True
> +
> + # Pick a loader if one wasn't provided.
> + if loader is None:
> + for loader_class, suffixes in _get_supported_file_loaders():
> + if location.endswith(tuple(suffixes)):
> + loader = loader_class(name, location)
> + spec.loader = loader
> + break
> + else:
> + return None
> +
> + # Set submodule_search_paths appropriately.
> + if submodule_search_locations is _POPULATE:
> + # Check the loader.
> + if hasattr(loader, 'is_package'):
> + try:
> + is_package = loader.is_package(name)
> + except ImportError:
> + pass
> + else:
> + if is_package:
> + spec.submodule_search_locations = []
> + else:
> + spec.submodule_search_locations = submodule_search_locations
> + if spec.submodule_search_locations == []:
> + if location:
> + dirname = _path_split(location)[0]
> + spec.submodule_search_locations.append(dirname)
> +
> + return spec
> +
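
The usual way to consume this function is via importlib.util, e.g. to load a
module from an explicit path; the module name and '/tmp/hello.py' are
hypothetical:

    import importlib.util

    spec = importlib.util.spec_from_file_location('hello', '/tmp/hello.py')
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)   # executes the file and populates module.__dict__
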
> +
> +# Loaders #####################################################################
> +
> +class WindowsRegistryFinder:
> +
> + """Meta path finder for modules declared in the Windows registry."""
> +
> + REGISTRY_KEY = (
> + 'Software\\Python\\PythonCore\\{sys_version}'
> + '\\Modules\\{fullname}')
> + REGISTRY_KEY_DEBUG = (
> + 'Software\\Python\\PythonCore\\{sys_version}'
> + '\\Modules\\{fullname}\\Debug')
> + DEBUG_BUILD = False # Changed in _setup()
> +
> + @classmethod
> + def _open_registry(cls, key):
> + try:
> + return _winreg.OpenKey(_winreg.HKEY_CURRENT_USER, key)
> + except OSError:
> + return _winreg.OpenKey(_winreg.HKEY_LOCAL_MACHINE, key)
> +
> + @classmethod
> + def _search_registry(cls, fullname):
> + if cls.DEBUG_BUILD:
> + registry_key = cls.REGISTRY_KEY_DEBUG
> + else:
> + registry_key = cls.REGISTRY_KEY
> + key = registry_key.format(fullname=fullname,
> + sys_version='%d.%d' % sys.version_info[:2])
> + try:
> + with cls._open_registry(key) as hkey:
> + filepath = _winreg.QueryValue(hkey, '')
> + except OSError:
> + return None
> + return filepath
> +
> + @classmethod
> + def find_spec(cls, fullname, path=None, target=None):
> + filepath = cls._search_registry(fullname)
> + if filepath is None:
> + return None
> + try:
> + _path_stat(filepath)
> + except OSError:
> + return None
> + for loader, suffixes in _get_supported_file_loaders():
> + if filepath.endswith(tuple(suffixes)):
> + spec = _bootstrap.spec_from_loader(fullname,
> + loader(fullname, filepath),
> + origin=filepath)
> + return spec
> +
> + @classmethod
> + def find_module(cls, fullname, path=None):
> + """Find module named in the registry.
> +
> + This method is deprecated. Use exec_module() instead.
> +
> + """
> + spec = cls.find_spec(fullname, path)
> + if spec is not None:
> + return spec.loader
> + else:
> + return None
> +
> +
> +class _LoaderBasics:
> +
> + """Base class of common code needed by both SourceLoader and
> + SourcelessFileLoader."""
> +
> + def is_package(self, fullname):
> + """Concrete implementation of InspectLoader.is_package by checking if
> + the path returned by get_filename has a filename of '__init__.py'."""
> + filename = _path_split(self.get_filename(fullname))[1]
> + filename_base = filename.rsplit('.', 1)[0]
> + tail_name = fullname.rpartition('.')[2]
> + return filename_base == '__init__' and tail_name != '__init__'
> +
> + def create_module(self, spec):
> + """Use default semantics for module creation."""
> +
> + def exec_module(self, module):
> + """Execute the module."""
> + code = self.get_code(module.__name__)
> + if code is None:
> + raise ImportError('cannot load module {!r} when get_code() '
> + 'returns None'.format(module.__name__))
> + _bootstrap._call_with_frames_removed(exec, code, module.__dict__)
> +
> + def load_module(self, fullname):
> + """This module is deprecated."""
> + return _bootstrap._load_module_shim(self, fullname)
> +
> +
> +class SourceLoader(_LoaderBasics):
> +
> + def path_mtime(self, path):
> + """Optional method that returns the modification time (an int) for the
> + specified path, where path is a str.
> +
> + Raises IOError when the path cannot be handled.
> + """
> + raise IOError
> +
> + def path_stats(self, path):
> + """Optional method returning a metadata dict for the specified path
> + to by the path (str).
> + Possible keys:
> + - 'mtime' (mandatory) is the numeric timestamp of last source
> + code modification;
> + - 'size' (optional) is the size in bytes of the source code.
> +
> + Implementing this method allows the loader to read bytecode files.
> + Raises IOError when the path cannot be handled.
> + """
> + return {'mtime': self.path_mtime(path)}
> +
> + def _cache_bytecode(self, source_path, cache_path, data):
> + """Optional method which writes data (bytes) to a file path (a str).
> +
> + Implementing this method allows for the writing of bytecode files.
> +
> + The source path is needed in order to correctly transfer permissions
> + """
> + # For backwards compatibility, we delegate to set_data()
> + return self.set_data(cache_path, data)
> +
> + def set_data(self, path, data):
> + """Optional method which writes data (bytes) to a file path (a str).
> +
> + Implementing this method allows for the writing of bytecode files.
> + """
> +
> +
> + def get_source(self, fullname):
> + """Concrete implementation of InspectLoader.get_source."""
> + path = self.get_filename(fullname)
> + try:
> + source_bytes = self.get_data(path)
> + except OSError as exc:
> + raise ImportError('source not available through get_data()',
> + name=fullname) from exc
> + return decode_source(source_bytes)
> +
> + def source_to_code(self, data, path, *, _optimize=-1):
> + """Return the code object compiled from source.
> +
> + The 'data' argument can be any object type that compile() supports.
> + """
> + return _bootstrap._call_with_frames_removed(compile, data, path, 'exec',
> + dont_inherit=True, optimize=_optimize)
> +
> + def get_code(self, fullname):
> + """Concrete implementation of InspectLoader.get_code.
> +
> + Reading of bytecode requires path_stats to be implemented. To write
> + bytecode, set_data must also be implemented.
> +
> + """
> + source_path = self.get_filename(fullname)
> + source_mtime = None
> + try:
> + bytecode_path = cache_from_source(source_path)
> + except NotImplementedError:
> + bytecode_path = None
> + else:
> + try:
> + st = self.path_stats(source_path)
> + except IOError:
> + pass
> + else:
> + source_mtime = int(st['mtime'])
> + try:
> + data = self.get_data(bytecode_path)
> + except OSError:
> + pass
> + else:
> + try:
> + bytes_data = _validate_bytecode_header(data,
> + source_stats=st, name=fullname,
> + path=bytecode_path)
> + except (ImportError, EOFError):
> + pass
> + else:
> + _bootstrap._verbose_message('{} matches {}', bytecode_path,
> + source_path)
> + return _compile_bytecode(bytes_data, name=fullname,
> + bytecode_path=bytecode_path,
> + source_path=source_path)
> + source_bytes = self.get_data(source_path)
> + code_object = self.source_to_code(source_bytes, source_path)
> + _bootstrap._verbose_message('code object from {}', source_path)
> + if (not sys.dont_write_bytecode and bytecode_path is not None and
> + source_mtime is not None):
> + data = _code_to_bytecode(code_object, source_mtime,
> + len(source_bytes))
> + try:
> + self._cache_bytecode(source_path, bytecode_path, data)
> + _bootstrap._verbose_message('wrote {!r}', bytecode_path)
> + except NotImplementedError:
> + pass
> + return code_object
> +
> +
> +class FileLoader:
> +
> + """Base file loader class which implements the loader protocol methods that
> + require file system usage."""
> +
> + def __init__(self, fullname, path):
> + """Cache the module name and the path to the file found by the
> + finder."""
> + self.name = fullname
> + self.path = path
> +
> + def __eq__(self, other):
> + return (self.__class__ == other.__class__ and
> + self.__dict__ == other.__dict__)
> +
> + def __hash__(self):
> + return hash(self.name) ^ hash(self.path)
> +
> + @_check_name
> + def load_module(self, fullname):
> + """Load a module from a file.
> +
> + This method is deprecated. Use exec_module() instead.
> +
> + """
> + # The only reason for this method is for the name check.
> + # Issue #14857: Avoid the zero-argument form of super so the implementation
> + # of that form can be updated without breaking the frozen module
> + return super(FileLoader, self).load_module(fullname)
> +
> + @_check_name
> + def get_filename(self, fullname):
> + """Return the path to the source file as found by the finder."""
> + return self.path
> +
> + def get_data(self, path):
> + """Return the data from path as raw bytes."""
> + with _io.FileIO(path, 'r') as file:
> + return file.read()
> +
> +
> +class SourceFileLoader(FileLoader, SourceLoader):
> +
> + """Concrete implementation of SourceLoader using the file system."""
> +
> + def path_stats(self, path):
> + """Return the metadata for the path."""
> + st = _path_stat(path)
> + return {'mtime': st.st_mtime, 'size': st.st_size}
> +
> + def _cache_bytecode(self, source_path, bytecode_path, data):
> + # Adapt between the two APIs
> + mode = _calc_mode(source_path)
> + return self.set_data(bytecode_path, data, _mode=mode)
> +
> + def set_data(self, path, data, *, _mode=0o666):
> + """Write bytes data to a file."""
> + parent, filename = _path_split(path)
> + path_parts = []
> + # Figure out what directories are missing.
> + while parent and not _path_isdir(parent):
> + parent, part = _path_split(parent)
> + path_parts.append(part)
> + # Create needed directories.
> + for part in reversed(path_parts):
> + parent = _path_join(parent, part)
> + try:
> + _os.mkdir(parent)
> + except FileExistsError:
> + # Probably another Python process already created the dir.
> + continue
> + except OSError as exc:
> + # Could be a permission error, read-only filesystem: just forget
> + # about writing the data.
> + _bootstrap._verbose_message('could not create {!r}: {!r}',
> + parent, exc)
> + return
> + try:
> + _write_atomic(path, data, _mode)
> + _bootstrap._verbose_message('created {!r}', path)
> + except OSError as exc:
> + # Same as above: just don't write the bytecode.
> + _bootstrap._verbose_message('could not create {!r}: {!r}', path,
> + exc)
> +
> +
> +class SourcelessFileLoader(FileLoader, _LoaderBasics):
> +
> + """Loader which handles sourceless file imports."""
> +
> + def get_code(self, fullname):
> + path = self.get_filename(fullname)
> + data = self.get_data(path)
> + bytes_data = _validate_bytecode_header(data, name=fullname, path=path)
> + return _compile_bytecode(bytes_data, name=fullname, bytecode_path=path)
> +
> + def get_source(self, fullname):
> + """Return None as there is no source code."""
> + return None
> +
> +
> +# Filled in by _setup().
> +EXTENSION_SUFFIXES = []
> +
> +
> +class ExtensionFileLoader(FileLoader, _LoaderBasics):
> +
> + """Loader for extension modules.
> +
> + The constructor is designed to work with FileFinder.
> +
> + """
> +
> + def __init__(self, name, path):
> + self.name = name
> + self.path = path
> +
> + def __eq__(self, other):
> + return (self.__class__ == other.__class__ and
> + self.__dict__ == other.__dict__)
> +
> + def __hash__(self):
> + return hash(self.name) ^ hash(self.path)
> +
> + def create_module(self, spec):
> + """Create an unitialized extension module"""
> + module = _bootstrap._call_with_frames_removed(
> + _imp.create_dynamic, spec)
> + _bootstrap._verbose_message('extension module {!r} loaded from {!r}',
> + spec.name, self.path)
> + return module
> +
> + def exec_module(self, module):
> + """Initialize an extension module"""
> + _bootstrap._call_with_frames_removed(_imp.exec_dynamic, module)
> + _bootstrap._verbose_message('extension module {!r} executed from {!r}',
> + self.name, self.path)
> +
> + def is_package(self, fullname):
> + """Return True if the extension module is a package."""
> + file_name = _path_split(self.path)[1]
> + return any(file_name == '__init__' + suffix
> + for suffix in EXTENSION_SUFFIXES)
> +
> + def get_code(self, fullname):
> + """Return None as an extension module cannot create a code object."""
> + return None
> +
> + def get_source(self, fullname):
> + """Return None as extension modules have no source code."""
> + return None
> +
> + @_check_name
> + def get_filename(self, fullname):
> + """Return the path to the source file as found by the finder."""
> + return self.path
> +
> +
> +class _NamespacePath:
> + """Represents a namespace package's path. It uses the module name
> + to find its parent module, and from there it looks up the parent's
> + __path__. When this changes, the module's own path is recomputed,
> + using path_finder. For top-level modules, the parent module's path
> + is sys.path."""
> +
> + def __init__(self, name, path, path_finder):
> + self._name = name
> + self._path = path
> + self._last_parent_path = tuple(self._get_parent_path())
> + self._path_finder = path_finder
> +
> + def _find_parent_path_names(self):
> + """Returns a tuple of (parent-module-name, parent-path-attr-name)"""
> + parent, dot, me = self._name.rpartition('.')
> + if dot == '':
> + # This is a top-level module. sys.path contains the parent path.
> + return 'sys', 'path'
> + # Not a top-level module. parent-module.__path__ contains the
> + # parent path.
> + return parent, '__path__'
> +
> + def _get_parent_path(self):
> + parent_module_name, path_attr_name = self._find_parent_path_names()
> + return getattr(sys.modules[parent_module_name], path_attr_name)
> +
> + def _recalculate(self):
> + # If the parent's path has changed, recalculate _path
> + parent_path = tuple(self._get_parent_path()) # Make a copy
> + if parent_path != self._last_parent_path:
> + spec = self._path_finder(self._name, parent_path)
> + # Note that no changes are made if a loader is returned, but we
> + # do remember the new parent path
> + if spec is not None and spec.loader is None:
> + if spec.submodule_search_locations:
> + self._path = spec.submodule_search_locations
> + self._last_parent_path = parent_path # Save the copy
> + return self._path
> +
> + def __iter__(self):
> + return iter(self._recalculate())
> +
> + def __setitem__(self, index, path):
> + self._path[index] = path
> +
> + def __len__(self):
> + return len(self._recalculate())
> +
> + def __repr__(self):
> + return '_NamespacePath({!r})'.format(self._path)
> +
> + def __contains__(self, item):
> + return item in self._recalculate()
> +
> + def append(self, item):
> + self._path.append(item)
> +
> +
> +# We use this exclusively in module_from_spec() for backward-compatibility.
> +class _NamespaceLoader:
> + def __init__(self, name, path, path_finder):
> + self._path = _NamespacePath(name, path, path_finder)
> +
> + @classmethod
> + def module_repr(cls, module):
> + """Return repr for the module.
> +
> + The method is deprecated. The import machinery does the job itself.
> +
> + """
> + return '<module {!r} (namespace)>'.format(module.__name__)
> +
> + def is_package(self, fullname):
> + return True
> +
> + def get_source(self, fullname):
> + return ''
> +
> + def get_code(self, fullname):
> + return compile('', '<string>', 'exec', dont_inherit=True)
> +
> + def create_module(self, spec):
> + """Use default semantics for module creation."""
> +
> + def exec_module(self, module):
> + pass
> +
> + def load_module(self, fullname):
> + """Load a namespace module.
> +
> + This method is deprecated. Use exec_module() instead.
> +
> + """
> + # The import system never calls this method.
> + _bootstrap._verbose_message('namespace module loaded with path {!r}',
> + self._path)
> + return _bootstrap._load_module_shim(self, fullname)
> +
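
As background, _NamespacePath and _NamespaceLoader are the machinery behind implicit namespace packages (PEP 420): a package split across several sys.path entries with no __init__.py. A small sketch, using hypothetical directories /a and /b that each contain a pkg/ subdirectory:

    import sys, importlib

    # Hypothetical layout: /a/pkg/mod1.py and /b/pkg/mod2.py, no __init__.py.
    sys.path[:0] = ['/a', '/b']
    pkg = importlib.import_module('pkg')
    print(list(pkg.__path__))   # ['/a/pkg', '/b/pkg'], backed by _NamespacePath
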
> +
> +# Finders #####################################################################
> +
> +class PathFinder:
> +
> + """Meta path finder for sys.path and package __path__ attributes."""
> +
> + @classmethod
> + def invalidate_caches(cls):
> + """Call the invalidate_caches() method on all path entry finders
> + stored in sys.path_importer_cache (where implemented)."""
> + for finder in sys.path_importer_cache.values():
> + if hasattr(finder, 'invalidate_caches'):
> + finder.invalidate_caches()
> +
> + @classmethod
> + def _path_hooks(cls, path):
> + """Search sys.path_hooks for a finder for 'path'."""
> + if sys.path_hooks is not None and not sys.path_hooks:
> + _warnings.warn('sys.path_hooks is empty', ImportWarning)
> + for hook in sys.path_hooks:
> + try:
> + return hook(path)
> + except ImportError:
> + continue
> + else:
> + return None
> +
> + @classmethod
> + def _path_importer_cache(cls, path):
> + """Get the finder for the path entry from sys.path_importer_cache.
> +
> + If the path entry is not in the cache, find the appropriate finder
> + and cache it. If no finder is available, store None.
> +
> + """
> + if path == '':
> + try:
> + path = _os.getcwd()
> + except FileNotFoundError:
> + # Don't cache the failure as the cwd can easily change to
> + # a valid directory later on.
> + return None
> + try:
> + finder = sys.path_importer_cache[path]
> + except KeyError:
> + finder = cls._path_hooks(path)
> + sys.path_importer_cache[path] = finder
> + return finder
> +
> + @classmethod
> + def _legacy_get_spec(cls, fullname, finder):
> + # This would be a good place for a DeprecationWarning if
> + # we ended up going that route.
> + if hasattr(finder, 'find_loader'):
> + loader, portions = finder.find_loader(fullname)
> + else:
> + loader = finder.find_module(fullname)
> + portions = []
> + if loader is not None:
> + return _bootstrap.spec_from_loader(fullname, loader)
> + spec = _bootstrap.ModuleSpec(fullname, None)
> + spec.submodule_search_locations = portions
> + return spec
> +
> + @classmethod
> + def _get_spec(cls, fullname, path, target=None):
> + """Find the loader or namespace_path for this module/package name."""
> + # If this ends up being a namespace package, namespace_path is
> + # the list of paths that will become its __path__
> + namespace_path = []
> + for entry in path:
> + if not isinstance(entry, (str, bytes)):
> + continue
> + finder = cls._path_importer_cache(entry)
> + if finder is not None:
> + if hasattr(finder, 'find_spec'):
> + spec = finder.find_spec(fullname, target)
> + else:
> + spec = cls._legacy_get_spec(fullname, finder)
> + if spec is None:
> + continue
> + if spec.loader is not None:
> + return spec
> + portions = spec.submodule_search_locations
> + if portions is None:
> + raise ImportError('spec missing loader')
> + # This is possibly part of a namespace package.
> + # Remember these path entries (if any) for when we
> + # create a namespace package, and continue iterating
> + # on path.
> + namespace_path.extend(portions)
> + else:
> + spec = _bootstrap.ModuleSpec(fullname, None)
> + spec.submodule_search_locations = namespace_path
> + return spec
> +
> + @classmethod
> + def find_spec(cls, fullname, path=None, target=None):
> + """Try to find a spec for 'fullname' on sys.path or 'path'.
> +
> + The search is based on sys.path_hooks and sys.path_importer_cache.
> + """
> + if path is None:
> + path = sys.path
> + spec = cls._get_spec(fullname, path, target)
> + if spec is None:
> + return None
> + elif spec.loader is None:
> + namespace_path = spec.submodule_search_locations
> + if namespace_path:
> + # We found at least one namespace path. Return a
> + # spec which can create the namespace package.
> + spec.origin = 'namespace'
> + spec.submodule_search_locations = _NamespacePath(fullname, namespace_path, cls._get_spec)
> + return spec
> + else:
> + return None
> + else:
> + return spec
> +
> + @classmethod
> + def find_module(cls, fullname, path=None):
> + """find the module on sys.path or 'path' based on sys.path_hooks and
> + sys.path_importer_cache.
> +
> + This method is deprecated. Use find_spec() instead.
> +
> + """
> + spec = cls.find_spec(fullname, path)
> + if spec is None:
> + return None
> + return spec.loader
> +
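
PathFinder is re-exported as importlib.machinery.PathFinder, so its behaviour is easy to check from the interpreter; a quick sketch (the exact paths printed depend on the installation):

    from importlib.machinery import PathFinder

    spec = PathFinder.find_spec('json')
    print(spec.origin)                      # .../json/__init__.py
    print(spec.submodule_search_locations)  # the package's __path__ entries
    print(PathFinder.find_spec('no_such_module'))   # None
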
> +
> +class FileFinder:
> +
> + """File-based finder.
> +
> + Interactions with the file system are cached for performance, being
> + refreshed when the directory the finder is handling has been modified.
> +
> + """
> +
> + def __init__(self, path, *loader_details):
> + """Initialize with the path to search on and a variable number of
> + 2-tuples containing the loader and the file suffixes the loader
> + recognizes."""
> + loaders = []
> + for loader, suffixes in loader_details:
> + loaders.extend((suffix, loader) for suffix in suffixes)
> + self._loaders = loaders
> + # Base (directory) path
> + self.path = path or '.'
> + self._path_mtime = -1
> + self._path_cache = set()
> + self._relaxed_path_cache = set()
> +
> + def invalidate_caches(self):
> + """Invalidate the directory mtime."""
> + self._path_mtime = -1
> +
> + find_module = _find_module_shim
> +
> + def find_loader(self, fullname):
> + """Try to find a loader for the specified module, or the namespace
> + package portions. Returns (loader, list-of-portions).
> +
> + This method is deprecated. Use find_spec() instead.
> +
> + """
> + spec = self.find_spec(fullname)
> + if spec is None:
> + return None, []
> + return spec.loader, spec.submodule_search_locations or []
> +
> + def _get_spec(self, loader_class, fullname, path, smsl, target):
> + loader = loader_class(fullname, path)
> + return spec_from_file_location(fullname, path, loader=loader,
> + submodule_search_locations=smsl)
> +
> + def find_spec(self, fullname, target=None):
> + """Try to find a spec for the specified module.
> +
> + Returns the matching spec, or None if not found.
> + """
> + is_namespace = False
> + tail_module = fullname.rpartition('.')[2]
> + try:
> + mtime = _path_stat(self.path or _os.getcwd()).st_mtime
> + except OSError:
> + mtime = -1
> + if mtime != self._path_mtime:
> + self._fill_cache()
> + self._path_mtime = mtime
> + # tail_module keeps the original casing, for __file__ and friends
> + if _relax_case():
> + cache = self._relaxed_path_cache
> + cache_module = tail_module.lower()
> + else:
> + cache = self._path_cache
> + cache_module = tail_module
> + # Check if the module is the name of a directory (and thus a package).
> + if cache_module in cache:
> + base_path = _path_join(self.path, tail_module)
> + for suffix, loader_class in self._loaders:
> + init_filename = '__init__' + suffix
> + full_path = _path_join(base_path, init_filename)
> + if _path_isfile(full_path):
> + return self._get_spec(loader_class, fullname, full_path, [base_path], target)
> + else:
> + # If a namespace package, return the path if we don't
> + # find a module in the next section.
> + is_namespace = _path_isdir(base_path)
> + # Check whether a file with a proper suffix exists.
> + for suffix, loader_class in self._loaders:
> + full_path = _path_join(self.path, tail_module + suffix)
> + _bootstrap._verbose_message('trying {}', full_path, verbosity=2)
> + if cache_module + suffix in cache:
> + if _path_isfile(full_path):
> + return self._get_spec(loader_class, fullname, full_path,
> + None, target)
> + if is_namespace:
> + _bootstrap._verbose_message('possible namespace for {}', base_path)
> + spec = _bootstrap.ModuleSpec(fullname, None)
> + spec.submodule_search_locations = [base_path]
> + return spec
> + return None
> +
> + def _fill_cache(self):
> + """Fill the cache of potential modules and packages for this directory."""
> + path = self.path
> + try:
> + contents = _os.listdir(path or _os.getcwd())
> + except (FileNotFoundError, PermissionError, NotADirectoryError):
> + # Directory has either been removed, turned into a file, or made
> + # unreadable.
> + contents = []
> + # We store two cached versions, to handle runtime changes of the
> + # PYTHONCASEOK environment variable.
> + if not sys.platform.startswith('win'):
> + self._path_cache = set(contents)
> + else:
> + # Windows users can import modules with case-insensitive file
> + # suffixes (for legacy reasons). Make the suffix lowercase here
> + # so it's done once instead of for every import. This is safe as
> + # the specified suffixes to check against are always specified in a
> + # case-sensitive manner.
> + lower_suffix_contents = set()
> + for item in contents:
> + name, dot, suffix = item.partition('.')
> + if dot:
> + new_name = '{}.{}'.format(name, suffix.lower())
> + else:
> + new_name = name
> + lower_suffix_contents.add(new_name)
> + self._path_cache = lower_suffix_contents
> + if sys.platform.startswith(_CASE_INSENSITIVE_PLATFORMS):
> + self._relaxed_path_cache = {fn.lower() for fn in contents}
> +
> + @classmethod
> + def path_hook(cls, *loader_details):
> + """A class method which returns a closure to use on sys.path_hook
> + which will return an instance using the specified loaders and the path
> + called on the closure.
> +
> + If the path called on the closure is not a directory, ImportError is
> + raised.
> +
> + """
> + def path_hook_for_FileFinder(path):
> + """Path hook for importlib.machinery.FileFinder."""
> + if not _path_isdir(path):
> + raise ImportError('only directories are supported', path=path)
> + return cls(path, *loader_details)
> +
> + return path_hook_for_FileFinder
> +
> + def __repr__(self):
> + return 'FileFinder({!r})'.format(self.path)
> +
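
For reference, the (loader, suffixes) protocol used by FileFinder can be exercised directly through path_hook(); 'mymodule' below is a made-up name:

    import os
    from importlib.machinery import FileFinder, SourceFileLoader, SOURCE_SUFFIXES

    hook = FileFinder.path_hook((SourceFileLoader, SOURCE_SUFFIXES))
    finder = hook(os.getcwd())           # ImportError if the path is not a directory
    spec = finder.find_spec('mymodule')  # looks for ./mymodule.py
    print(spec)                          # None unless mymodule.py exists here
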
> +
> +# Import setup ###############################################################
> +
> +def _fix_up_module(ns, name, pathname, cpathname=None):
> + # This function is used by PyImport_ExecCodeModuleObject().
> + loader = ns.get('__loader__')
> + spec = ns.get('__spec__')
> + if not loader:
> + if spec:
> + loader = spec.loader
> + elif pathname == cpathname:
> + loader = SourcelessFileLoader(name, pathname)
> + else:
> + loader = SourceFileLoader(name, pathname)
> + if not spec:
> + spec = spec_from_file_location(name, pathname, loader=loader)
> + try:
> + ns['__spec__'] = spec
> + ns['__loader__'] = loader
> + ns['__file__'] = pathname
> + ns['__cached__'] = cpathname
> + except Exception:
> + # Not important enough to report.
> + pass
> +
> +
> +def _get_supported_file_loaders():
> + """Returns a list of file-based module loaders.
> +
> + Each item is a tuple (loader, suffixes).
> + """
> + extensions = ExtensionFileLoader, _imp.extension_suffixes()
> + source = SourceFileLoader, SOURCE_SUFFIXES
> + bytecode = SourcelessFileLoader, BYTECODE_SUFFIXES
> + return [extensions, source, bytecode]
> +
> +
> +def _setup(_bootstrap_module):
> + """Setup the path-based importers for importlib by importing needed
> + built-in modules and injecting them into the global namespace.
> +
> + Other components are extracted from the core bootstrap module.
> +
> + """
> + global sys, _imp, _bootstrap
> + _bootstrap = _bootstrap_module
> + sys = _bootstrap.sys
> + _imp = _bootstrap._imp
> +
> + # Directly load built-in modules needed during bootstrap.
> + self_module = sys.modules[__name__]
> + for builtin_name in ('_io', '_warnings', 'builtins', 'marshal'):
> + if builtin_name not in sys.modules:
> + builtin_module = _bootstrap._builtin_from_name(builtin_name)
> + else:
> + builtin_module = sys.modules[builtin_name]
> + setattr(self_module, builtin_name, builtin_module)
> +
> + # Directly load the os module (needed during bootstrap).
> + os_details = ('posix', ['/']), ('nt', ['\\', '/']), ('edk2', ['\\', '/'])
> + for builtin_os, path_separators in os_details:
> + # Assumption made in _path_join()
> + assert all(len(sep) == 1 for sep in path_separators)
> + path_sep = path_separators[0]
> + if builtin_os in sys.modules:
> + os_module = sys.modules[builtin_os]
> + break
> + else:
> + try:
> + os_module = _bootstrap._builtin_from_name(builtin_os)
> + break
> + except ImportError:
> + continue
> + else:
> + raise ImportError('importlib requires posix or nt or edk2')
> + setattr(self_module, '_os', os_module)
> + setattr(self_module, 'path_sep', path_sep)
> + setattr(self_module, 'path_separators', ''.join(path_separators))
> +
> + # Directly load the _thread module (needed during bootstrap).
> + try:
> + thread_module = _bootstrap._builtin_from_name('_thread')
> + except ImportError:
> + # Python was built without threads
> + thread_module = None
> + setattr(self_module, '_thread', thread_module)
> +
> + # Directly load the _weakref module (needed during bootstrap).
> + weakref_module = _bootstrap._builtin_from_name('_weakref')
> + setattr(self_module, '_weakref', weakref_module)
> +
> + # Directly load the winreg module (needed during bootstrap).
> + if builtin_os == 'nt':
> + winreg_module = _bootstrap._builtin_from_name('winreg')
> + setattr(self_module, '_winreg', winreg_module)
> +
> + # Constants
> + setattr(self_module, '_relax_case', _make_relax_case())
> + EXTENSION_SUFFIXES.extend(_imp.extension_suffixes())
> + if builtin_os == 'nt':
> + SOURCE_SUFFIXES.append('.pyw')
> + if '_d.pyd' in EXTENSION_SUFFIXES:
> + WindowsRegistryFinder.DEBUG_BUILD = True
> +
> +
> +def _install(_bootstrap_module):
> + """Install the path-based import components."""
> + _setup(_bootstrap_module)
> + supported_loaders = _get_supported_file_loaders()
> + sys.path_hooks.extend([FileFinder.path_hook(*supported_loaders)])
> + sys.meta_path.append(PathFinder)
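
The UEFI-specific change in this file is the ('edk2', ['\\', '/']) entry in os_details above, which lets _setup() pick the edk2 built-in module where CPython would pick posix or nt. A quick way to confirm which built-in a given interpreter build selected (this pokes at an internal module, so treat it as a debugging aid only):

    import sys
    import importlib._bootstrap_external as bext

    print([m for m in ('posix', 'nt', 'edk2') if m in sys.builtin_module_names])
    print(bext.path_sep, bext.path_separators)   # separators chosen in _setup()
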
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
> new file mode 100644
> index 00000000..1c5ffcf9
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
> @@ -0,0 +1,99 @@
> +"""The io module provides the Python interfaces to stream handling. The
> +builtin open function is defined in this module.
> +
> +At the top of the I/O hierarchy is the abstract base class IOBase. It
> +defines the basic interface to a stream. Note, however, that there is no
> +separation between reading and writing to streams; implementations are
> +allowed to raise an OSError if they do not support a given operation.
> +
> +Extending IOBase is RawIOBase which deals simply with the reading and
> +writing of raw bytes to a stream. FileIO subclasses RawIOBase to provide
> +an interface to OS files.
> +
> +BufferedIOBase deals with buffering on a raw byte stream (RawIOBase). Its
> +subclasses, BufferedWriter, BufferedReader, and BufferedRWPair, buffer
> +streams that are readable, writable, and both, respectively.
> +BufferedRandom provides a buffered interface to random access
> +streams. BytesIO is a simple stream of in-memory bytes.
> +
> +Another IOBase subclass, TextIOBase, deals with the encoding and decoding
> +of streams into text. TextIOWrapper, which extends it, is a buffered text
> +interface to a buffered raw stream (`BufferedIOBase`). Finally, StringIO
> +is an in-memory stream for text.
> +
> +Argument names are not part of the specification, and only the arguments
> +of open() are intended to be used as keyword arguments.
> +
> +data:
> +
> +DEFAULT_BUFFER_SIZE
> +
> + An int containing the default buffer size used by the module's buffered
> + I/O classes. open() uses the file's blksize (as obtained by os.stat) if
> + possible.
> +"""
> +# New I/O library conforming to PEP 3116.
> +
> +__author__ = ("Guido van Rossum <guido@python.org>, "
> + "Mike Verdone <mike.verdone@gmail.com>, "
> + "Mark Russell <mark.russell@zen.co.uk>, "
> + "Antoine Pitrou <solipsis@pitrou.net>, "
> + "Amaury Forgeot d'Arc <amauryfa@gmail.com>, "
> + "Benjamin Peterson <benjamin@python.org>")
> +
> +__all__ = ["BlockingIOError", "open", "IOBase", "RawIOBase", "FileIO",
> + "BytesIO", "StringIO", "BufferedIOBase",
> + "BufferedReader", "BufferedWriter", "BufferedRWPair",
> + "BufferedRandom", "TextIOBase", "TextIOWrapper",
> + "UnsupportedOperation", "SEEK_SET", "SEEK_CUR", "SEEK_END", "OpenWrapper"]
> +
> +
> +import _io
> +import abc
> +
> +from _io import (DEFAULT_BUFFER_SIZE, BlockingIOError, UnsupportedOperation,
> + open, FileIO, BytesIO, StringIO, BufferedReader,
> + BufferedWriter, BufferedRWPair, BufferedRandom,
> + IncrementalNewlineDecoder, TextIOWrapper)
> +
> +OpenWrapper = _io.open # for compatibility with _pyio
> +
> +# Pretend this exception was created here.
> +UnsupportedOperation.__module__ = "io"
> +
> +# for seek()
> +SEEK_SET = 0
> +SEEK_CUR = 1
> +SEEK_END = 2
> +
> +# Declaring ABCs in C is tricky so we do it here.
> +# Method descriptions and default implementations are inherited from the C
> +# version however.
> +class IOBase(_io._IOBase, metaclass=abc.ABCMeta):
> + __doc__ = _io._IOBase.__doc__
> +
> +class RawIOBase(_io._RawIOBase, IOBase):
> + __doc__ = _io._RawIOBase.__doc__
> +
> +class BufferedIOBase(_io._BufferedIOBase, IOBase):
> + __doc__ = _io._BufferedIOBase.__doc__
> +
> +class TextIOBase(_io._TextIOBase, IOBase):
> + __doc__ = _io._TextIOBase.__doc__
> +
> +RawIOBase.register(FileIO)
> +
> +for klass in (BytesIO, BufferedReader, BufferedWriter, BufferedRandom,
> + BufferedRWPair):
> + BufferedIOBase.register(klass)
> +
> +for klass in (StringIO, TextIOWrapper):
> + TextIOBase.register(klass)
> +del klass
> +
> +try:
> + from _io import _WindowsConsoleIO
> +except ImportError:
> + pass
> +else:
> + RawIOBase.register(_WindowsConsoleIO)
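
The practical effect of the register() calls at the end of this module is that the C-implemented concrete classes satisfy isinstance/issubclass checks against the ABCs declared here. A short sanity check:

    import io

    buf = io.BytesIO(b'abc')
    print(isinstance(buf, io.BufferedIOBase))     # True, via BufferedIOBase.register
    print(issubclass(io.FileIO, io.RawIOBase))    # True, via RawIOBase.register
    print(io.SEEK_SET, io.SEEK_CUR, io.SEEK_END)  # 0 1 2
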
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
> new file mode 100644
> index 00000000..c605b10c
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
> @@ -0,0 +1,2021 @@
> +# Copyright 2001-2016 by Vinay Sajip. All Rights Reserved.
> +#
> +# Permission to use, copy, modify, and distribute this software and its
> +# documentation for any purpose and without fee is hereby granted,
> +# provided that the above copyright notice appear in all copies and that
> +# both that copyright notice and this permission notice appear in
> +# supporting documentation, and that the name of Vinay Sajip
> +# not be used in advertising or publicity pertaining to distribution
> +# of the software without specific, written prior permission.
> +# VINAY SAJIP DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING
> +# ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL
> +# VINAY SAJIP BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR
> +# ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER
> +# IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
> +# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
> +
> +"""
> +Logging package for Python. Based on PEP 282 and comments thereto in
> +comp.lang.python.
> +
> +Copyright (C) 2001-2016 Vinay Sajip. All Rights Reserved.
> +
> +To use, simply 'import logging' and log away!
> +"""
> +
> +import sys, os, time, io, traceback, warnings, weakref, collections
> +
> +from string import Template
> +
> +__all__ = ['BASIC_FORMAT', 'BufferingFormatter', 'CRITICAL', 'DEBUG', 'ERROR',
> + 'FATAL', 'FileHandler', 'Filter', 'Formatter', 'Handler', 'INFO',
> + 'LogRecord', 'Logger', 'LoggerAdapter', 'NOTSET', 'NullHandler',
> + 'StreamHandler', 'WARN', 'WARNING', 'addLevelName', 'basicConfig',
> + 'captureWarnings', 'critical', 'debug', 'disable', 'error',
> + 'exception', 'fatal', 'getLevelName', 'getLogger', 'getLoggerClass',
> + 'info', 'log', 'makeLogRecord', 'setLoggerClass', 'shutdown',
> + 'warn', 'warning', 'getLogRecordFactory', 'setLogRecordFactory',
> + 'lastResort', 'raiseExceptions']
> +
> +try:
> + import threading
> +except ImportError: #pragma: no cover
> + threading = None
> +
> +__author__ = "Vinay Sajip <vinay_sajip@red-dove.com>"
> +__status__ = "production"
> +# The following module attributes are no longer updated.
> +__version__ = "0.5.1.2"
> +__date__ = "07 February 2010"
> +
> +#---------------------------------------------------------------------------
> +# Miscellaneous module data
> +#---------------------------------------------------------------------------
> +
> +#
> +#_startTime is used as the base when calculating the relative time of events
> +#
> +_startTime = time.time()
> +
> +#
> +#raiseExceptions is used to see if exceptions during handling should be
> +#propagated
> +#
> +raiseExceptions = True
> +
> +#
> +# If you don't want threading information in the log, set this to zero
> +#
> +logThreads = True
> +
> +#
> +# If you don't want multiprocessing information in the log, set this to zero
> +#
> +logMultiprocessing = True
> +
> +#
> +# If you don't want process information in the log, set this to zero
> +#
> +logProcesses = True
> +
> +#---------------------------------------------------------------------------
> +# Level related stuff
> +#---------------------------------------------------------------------------
> +#
> +# Default levels and level names, these can be replaced with any positive set
> +# of values having corresponding names. There is a pseudo-level, NOTSET, which
> +# is only really there as a lower limit for user-defined levels. Handlers and
> +# loggers are initialized with NOTSET so that they will log all messages, even
> +# at user-defined levels.
> +#
> +
> +CRITICAL = 50
> +FATAL = CRITICAL
> +ERROR = 40
> +WARNING = 30
> +WARN = WARNING
> +INFO = 20
> +DEBUG = 10
> +NOTSET = 0
> +
> +_levelToName = {
> + CRITICAL: 'CRITICAL',
> + ERROR: 'ERROR',
> + WARNING: 'WARNING',
> + INFO: 'INFO',
> + DEBUG: 'DEBUG',
> + NOTSET: 'NOTSET',
> +}
> +_nameToLevel = {
> + 'CRITICAL': CRITICAL,
> + 'FATAL': FATAL,
> + 'ERROR': ERROR,
> + 'WARN': WARNING,
> + 'WARNING': WARNING,
> + 'INFO': INFO,
> + 'DEBUG': DEBUG,
> + 'NOTSET': NOTSET,
> +}
> +
> +def getLevelName(level):
> + """
> + Return the textual representation of logging level 'level'.
> +
> + If the level is one of the predefined levels (CRITICAL, ERROR, WARNING,
> + INFO, DEBUG) then you get the corresponding string. If you have
> + associated levels with names using addLevelName then the name you have
> + associated with 'level' is returned.
> +
> + If a numeric value corresponding to one of the defined levels is passed
> + in, the corresponding string representation is returned.
> +
> + Otherwise, the string "Level %s" % level is returned.
> + """
> + # See Issues #22386, #27937 and #29220 for why it's this way
> + result = _levelToName.get(level)
> + if result is not None:
> + return result
> + result = _nameToLevel.get(level)
> + if result is not None:
> + return result
> + return "Level %s" % level
> +
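
Worth noting, since it regularly surprises users: getLevelName() maps in both directions, returning a string for a known number and a number for a known name, and falling back to "Level %s" otherwise. For example:

    import logging

    print(logging.getLevelName(logging.WARNING))   # 'WARNING'
    print(logging.getLevelName('WARNING'))         # 30
    print(logging.getLevelName(35))                # 'Level 35'
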
> +def addLevelName(level, levelName):
> + """
> + Associate 'levelName' with 'level'.
> +
> + This is used when converting levels to text during message formatting.
> + """
> + _acquireLock()
> + try: #unlikely to cause an exception, but you never know...
> + _levelToName[level] = levelName
> + _nameToLevel[levelName] = level
> + finally:
> + _releaseLock()
> +
> +if hasattr(sys, '_getframe'):
> + currentframe = lambda: sys._getframe(3)
> +else: #pragma: no cover
> + def currentframe():
> + """Return the frame object for the caller's stack frame."""
> + try:
> + raise Exception
> + except Exception:
> + return sys.exc_info()[2].tb_frame.f_back
> +
> +#
> +# _srcfile is used when walking the stack to check when we've got the first
> +# caller stack frame, by skipping frames whose filename is that of this
> +# module's source. It therefore should contain the filename of this module's
> +# source file.
> +#
> +# Ordinarily we would use __file__ for this, but frozen modules don't always
> +# have __file__ set, for some reason (see Issue #21736). Thus, we get the
> +# filename from a handy code object from a function defined in this module.
> +# (There's no particular reason for picking addLevelName.)
> +#
> +
> +_srcfile = os.path.normcase(addLevelName.__code__.co_filename)
> +
> +# _srcfile is only used in conjunction with sys._getframe().
> +# To provide compatibility with older versions of Python, set _srcfile
> +# to None if _getframe() is not available; this value will prevent
> +# findCaller() from being called. You can also do this if you want to avoid
> +# the overhead of fetching caller information, even when _getframe() is
> +# available.
> +#if not hasattr(sys, '_getframe'):
> +# _srcfile = None
> +
> +
> +def _checkLevel(level):
> + if isinstance(level, int):
> + rv = level
> + elif str(level) == level:
> + if level not in _nameToLevel:
> + raise ValueError("Unknown level: %r" % level)
> + rv = _nameToLevel[level]
> + else:
> + raise TypeError("Level not an integer or a valid string: %r" % level)
> + return rv
> +
> +#---------------------------------------------------------------------------
> +# Thread-related stuff
> +#---------------------------------------------------------------------------
> +
> +#
> +#_lock is used to serialize access to shared data structures in this module.
> +#This needs to be an RLock because fileConfig() creates and configures
> +#Handlers, and so might arbitrary user threads. Since Handler code updates the
> +#shared dictionary _handlers, it needs to acquire the lock. But if configuring,
> +#the lock would already have been acquired - so we need an RLock.
> +#The same argument applies to Loggers and Manager.loggerDict.
> +#
> +if threading:
> + _lock = threading.RLock()
> +else: #pragma: no cover
> + _lock = None
> +
> +
> +def _acquireLock():
> + """
> + Acquire the module-level lock for serializing access to shared data.
> +
> + This should be released with _releaseLock().
> + """
> + if _lock:
> + _lock.acquire()
> +
> +def _releaseLock():
> + """
> + Release the module-level lock acquired by calling _acquireLock().
> + """
> + if _lock:
> + _lock.release()
> +
> +#---------------------------------------------------------------------------
> +# The logging record
> +#---------------------------------------------------------------------------
> +
> +class LogRecord(object):
> + """
> + A LogRecord instance represents an event being logged.
> +
> + LogRecord instances are created every time something is logged. They
> + contain all the information pertinent to the event being logged. The
> + main information passed in is in msg and args, which are combined
> + using str(msg) % args to create the message field of the record. The
> + record also includes information such as when the record was created,
> + the source line where the logging call was made, and any exception
> + information to be logged.
> + """
> + def __init__(self, name, level, pathname, lineno,
> + msg, args, exc_info, func=None, sinfo=None, **kwargs):
> + """
> + Initialize a logging record with interesting information.
> + """
> + ct = time.time()
> + self.name = name
> + self.msg = msg
> + #
> + # The following statement allows passing of a dictionary as a sole
> + # argument, so that you can do something like
> + # logging.debug("a %(a)d b %(b)s", {'a':1, 'b':2})
> + # Suggested by Stefan Behnel.
> + # Note that without the test for args[0], we get a problem because
> + # during formatting, we test to see if the arg is present using
> + # 'if self.args:'. If the event being logged is e.g. 'Value is %d'
> + # and if the passed arg fails 'if self.args:' then no formatting
> + # is done. For example, logger.warning('Value is %d', 0) would log
> + # 'Value is %d' instead of 'Value is 0'.
> + # For the use case of passing a dictionary, this should not be a
> + # problem.
> + # Issue #21172: a request was made to relax the isinstance check
> + # to hasattr(args[0], '__getitem__'). However, the docs on string
> + # formatting still seem to suggest a mapping object is required.
> + # Thus, while not removing the isinstance check, it does now look
> + # for collections.Mapping rather than, as before, dict.
> + if (args and len(args) == 1 and isinstance(args[0], collections.Mapping)
> + and args[0]):
> + args = args[0]
> + self.args = args
> + self.levelname = getLevelName(level)
> + self.levelno = level
> + self.pathname = pathname
> + try:
> + self.filename = os.path.basename(pathname)
> + self.module = os.path.splitext(self.filename)[0]
> + except (TypeError, ValueError, AttributeError):
> + self.filename = pathname
> + self.module = "Unknown module"
> + self.exc_info = exc_info
> + self.exc_text = None # used to cache the traceback text
> + self.stack_info = sinfo
> + self.lineno = lineno
> + self.funcName = func
> + self.created = ct
> + self.msecs = (ct - int(ct)) * 1000
> + self.relativeCreated = (self.created - _startTime) * 1000
> + if logThreads and threading:
> + self.thread = threading.get_ident()
> + self.threadName = threading.current_thread().name
> + else: # pragma: no cover
> + self.thread = None
> + self.threadName = None
> + if not logMultiprocessing: # pragma: no cover
> + self.processName = None
> + else:
> + self.processName = 'MainProcess'
> + mp = sys.modules.get('multiprocessing')
> + if mp is not None:
> + # Errors may occur if multiprocessing has not finished loading
> + # yet - e.g. if a custom import hook causes third-party code
> + # to run when multiprocessing calls import. See issue 8200
> + # for an example
> + try:
> + self.processName = mp.current_process().name
> + except Exception: #pragma: no cover
> + pass
> + if logProcesses and hasattr(os, 'getpid'):
> + self.process = os.getpid()
> + else:
> + self.process = None
> +
> + def __str__(self):
> + return '<LogRecord: %s, %s, %s, %s, "%s">'%(self.name, self.levelno,
> + self.pathname, self.lineno, self.msg)
> +
> + __repr__ = __str__
> +
> + def getMessage(self):
> + """
> + Return the message for this LogRecord.
> +
> + Return the message for this LogRecord after merging any user-supplied
> + arguments with the message.
> + """
> + msg = str(self.msg)
> + if self.args:
> + msg = msg % self.args
> + return msg
> +
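
The dictionary-as-sole-argument behaviour described in the long comment in __init__ above shows up directly in getMessage(); a tiny illustration (the record fields are arbitrary):

    import logging

    rec = logging.LogRecord('demo', logging.DEBUG, __file__, 1,
                            'a %(a)d b %(b)s', ({'a': 1, 'b': 2},), None)
    print(rec.getMessage())   # a 1 b 2
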
> +#
> +# Determine which class to use when instantiating log records.
> +#
> +_logRecordFactory = LogRecord
> +
> +def setLogRecordFactory(factory):
> + """
> + Set the factory to be used when instantiating a log record.
> +
> + :param factory: A callable which will be called to instantiate
> + a log record.
> + """
> + global _logRecordFactory
> + _logRecordFactory = factory
> +
> +def getLogRecordFactory():
> + """
> + Return the factory to be used when instantiating a log record.
> + """
> +
> + return _logRecordFactory
> +
> +def makeLogRecord(dict):
> + """
> + Make a LogRecord whose attributes are defined by the specified dictionary,
> + This function is useful for converting a logging event received over
> + a socket connection (which is sent as a dictionary) into a LogRecord
> + instance.
> + """
> + rv = _logRecordFactory(None, None, "", 0, "", (), None, None)
> + rv.__dict__.update(dict)
> + return rv
> +
> +#---------------------------------------------------------------------------
> +# Formatter classes and functions
> +#---------------------------------------------------------------------------
> +
> +class PercentStyle(object):
> +
> + default_format = '%(message)s'
> + asctime_format = '%(asctime)s'
> + asctime_search = '%(asctime)'
> +
> + def __init__(self, fmt):
> + self._fmt = fmt or self.default_format
> +
> + def usesTime(self):
> + return self._fmt.find(self.asctime_search) >= 0
> +
> + def format(self, record):
> + return self._fmt % record.__dict__
> +
> +class StrFormatStyle(PercentStyle):
> + default_format = '{message}'
> + asctime_format = '{asctime}'
> + asctime_search = '{asctime'
> +
> + def format(self, record):
> + return self._fmt.format(**record.__dict__)
> +
> +
> +class StringTemplateStyle(PercentStyle):
> + default_format = '${message}'
> + asctime_format = '${asctime}'
> + asctime_search = '${asctime}'
> +
> + def __init__(self, fmt):
> + self._fmt = fmt or self.default_format
> + self._tpl = Template(self._fmt)
> +
> + def usesTime(self):
> + fmt = self._fmt
> + return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_format) >= 0
> +
> + def format(self, record):
> + return self._tpl.substitute(**record.__dict__)
> +
> +BASIC_FORMAT = "%(levelname)s:%(name)s:%(message)s"
> +
> +_STYLES = {
> + '%': (PercentStyle, BASIC_FORMAT),
> + '{': (StrFormatStyle, '{levelname}:{name}:{message}'),
> + '$': (StringTemplateStyle, '${levelname}:${name}:${message}'),
> +}
> +
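
The three _STYLES entries correspond to the '%', '{' and '$' values accepted by Formatter's style argument; a short comparison, using an arbitrary hand-built record:

    import logging

    rec = logging.LogRecord('demo', logging.INFO, __file__, 1,
                            'hello %s', ('world',), None)
    for style, fmt in (('%', '%(levelname)s:%(name)s:%(message)s'),
                       ('{', '{levelname}:{name}:{message}'),
                       ('$', '${levelname}:${name}:${message}')):
        print(logging.Formatter(fmt, style=style).format(rec))
    # Each line prints: INFO:demo:hello world
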
> +class Formatter(object):
> + """
> + Formatter instances are used to convert a LogRecord to text.
> +
> + Formatters need to know how a LogRecord is constructed. They are
> + responsible for converting a LogRecord to (usually) a string which can
> + be interpreted by either a human or an external system. The base Formatter
> + allows a formatting string to be specified. If none is supplied, the
> + style-dependent default value, "%(message)s", "{message}", or
> + "${message}", is used.
> +
> + The Formatter can be initialized with a format string which makes use of
> + knowledge of the LogRecord attributes - e.g. the default value mentioned
> + above makes use of the fact that the user's message and arguments are pre-
> + formatted into a LogRecord's message attribute. Currently, the useful
> + attributes in a LogRecord are described by:
> +
> + %(name)s Name of the logger (logging channel)
> + %(levelno)s Numeric logging level for the message (DEBUG, INFO,
> + WARNING, ERROR, CRITICAL)
> + %(levelname)s Text logging level for the message ("DEBUG", "INFO",
> + "WARNING", "ERROR", "CRITICAL")
> + %(pathname)s Full pathname of the source file where the logging
> + call was issued (if available)
> + %(filename)s Filename portion of pathname
> + %(module)s Module (name portion of filename)
> + %(lineno)d Source line number where the logging call was issued
> + (if available)
> + %(funcName)s Function name
> + %(created)f Time when the LogRecord was created (time.time()
> + return value)
> + %(asctime)s Textual time when the LogRecord was created
> + %(msecs)d Millisecond portion of the creation time
> + %(relativeCreated)d Time in milliseconds when the LogRecord was created,
> + relative to the time the logging module was loaded
> + (typically at application startup time)
> + %(thread)d Thread ID (if available)
> + %(threadName)s Thread name (if available)
> + %(process)d Process ID (if available)
> + %(message)s The result of record.getMessage(), computed just as
> + the record is emitted
> + """
> +
> + converter = time.localtime
> +
> + def __init__(self, fmt=None, datefmt=None, style='%'):
> + """
> + Initialize the formatter with specified format strings.
> +
> + Initialize the formatter either with the specified format string, or a
> + default as described above. Allow for specialized date formatting with
> + the optional datefmt argument. If datefmt is omitted, you get an
> + ISO8601-like (or RFC 3339-like) format.
> +
> + Use a style parameter of '%', '{' or '$' to specify that you want to
> + use one of %-formatting, :meth:`str.format` (``{}``) formatting or
> + :class:`string.Template` formatting in your format string.
> +
> + .. versionchanged:: 3.2
> + Added the ``style`` parameter.
> + """
> + if style not in _STYLES:
> + raise ValueError('Style must be one of: %s' % ','.join(
> + _STYLES.keys()))
> + self._style = _STYLES[style][0](fmt)
> + self._fmt = self._style._fmt
> + self.datefmt = datefmt
> +
> + default_time_format = '%Y-%m-%d %H:%M:%S'
> + default_msec_format = '%s,%03d'
> +
> + def formatTime(self, record, datefmt=None):
> + """
> + Return the creation time of the specified LogRecord as formatted text.
> +
> + This method should be called from format() by a formatter which
> + wants to make use of a formatted time. This method can be overridden
> + in formatters to provide for any specific requirement, but the
> + basic behaviour is as follows: if datefmt (a string) is specified,
> + it is used with time.strftime() to format the creation time of the
> + record. Otherwise, an ISO8601-like (or RFC 3339-like) format is used.
> + The resulting string is returned. This function uses a user-configurable
> + function to convert the creation time to a tuple. By default,
> + time.localtime() is used; to change this for a particular formatter
> + instance, set the 'converter' attribute to a function with the same
> + signature as time.localtime() or time.gmtime(). To change it for all
> + formatters, for example if you want all logging times to be shown in GMT,
> + set the 'converter' attribute in the Formatter class.
> + """
> + ct = self.converter(record.created)
> + if datefmt:
> + s = time.strftime(datefmt, ct)
> + else:
> + t = time.strftime(self.default_time_format, ct)
> + s = self.default_msec_format % (t, record.msecs)
> + return s
> +
> + def formatException(self, ei):
> + """
> + Format and return the specified exception information as a string.
> +
> + This default implementation just uses
> + traceback.print_exception()
> + """
> + sio = io.StringIO()
> + tb = ei[2]
> + # See issues #9427, #1553375. Commented out for now.
> + #if getattr(self, 'fullstack', False):
> + # traceback.print_stack(tb.tb_frame.f_back, file=sio)
> + traceback.print_exception(ei[0], ei[1], tb, None, sio)
> + s = sio.getvalue()
> + sio.close()
> + if s[-1:] == "\n":
> + s = s[:-1]
> + return s
> +
> + def usesTime(self):
> + """
> + Check if the format uses the creation time of the record.
> + """
> + return self._style.usesTime()
> +
> + def formatMessage(self, record):
> + return self._style.format(record)
> +
> + def formatStack(self, stack_info):
> + """
> + This method is provided as an extension point for specialized
> + formatting of stack information.
> +
> + The input data is a string as returned from a call to
> + :func:`traceback.print_stack`, but with the last trailing newline
> + removed.
> +
> + The base implementation just returns the value passed in.
> + """
> + return stack_info
> +
> + def format(self, record):
> + """
> + Format the specified record as text.
> +
> + The record's attribute dictionary is used as the operand to a
> + string formatting operation which yields the returned string.
> + Before formatting the dictionary, a couple of preparatory steps
> + are carried out. The message attribute of the record is computed
> + using LogRecord.getMessage(). If the formatting string uses the
> + time (as determined by a call to usesTime()), formatTime() is
> + called to format the event time. If there is exception information,
> + it is formatted using formatException() and appended to the message.
> + """
> + record.message = record.getMessage()
> + if self.usesTime():
> + record.asctime = self.formatTime(record, self.datefmt)
> + s = self.formatMessage(record)
> + if record.exc_info:
> + # Cache the traceback text to avoid converting it multiple times
> + # (it's constant anyway)
> + if not record.exc_text:
> + record.exc_text = self.formatException(record.exc_info)
> + if record.exc_text:
> + if s[-1:] != "\n":
> + s = s + "\n"
> + s = s + record.exc_text
> + if record.stack_info:
> + if s[-1:] != "\n":
> + s = s + "\n"
> + s = s + self.formatStack(record.stack_info)
> + return s
> +
> +#
> +# The default formatter to use when no other is specified
> +#
> +_defaultFormatter = Formatter()
> +
> +class BufferingFormatter(object):
> + """
> + A formatter suitable for formatting a number of records.
> + """
> + def __init__(self, linefmt=None):
> + """
> + Optionally specify a formatter which will be used to format each
> + individual record.
> + """
> + if linefmt:
> + self.linefmt = linefmt
> + else:
> + self.linefmt = _defaultFormatter
> +
> + def formatHeader(self, records):
> + """
> + Return the header string for the specified records.
> + """
> + return ""
> +
> + def formatFooter(self, records):
> + """
> + Return the footer string for the specified records.
> + """
> + return ""
> +
> + def format(self, records):
> + """
> + Format the specified records and return the result as a string.
> + """
> + rv = ""
> + if len(records) > 0:
> + rv = rv + self.formatHeader(records)
> + for record in records:
> + rv = rv + self.linefmt.format(record)
> + rv = rv + self.formatFooter(records)
> + return rv
> +
> +#---------------------------------------------------------------------------
> +# Filter classes and functions
> +#---------------------------------------------------------------------------
> +
> +class Filter(object):
> + """
> + Filter instances are used to perform arbitrary filtering of LogRecords.
> +
> + Loggers and Handlers can optionally use Filter instances to filter
> + records as desired. The base filter class only allows events which are
> + below a certain point in the logger hierarchy. For example, a filter
> + initialized with "A.B" will allow events logged by loggers "A.B",
> + "A.B.C", "A.B.C.D", "A.B.D" etc. but not "A.BB", "B.A.B" etc. If
> + initialized with the empty string, all events are passed.
> + """
> + def __init__(self, name=''):
> + """
> + Initialize a filter.
> +
> + Initialize with the name of the logger which, together with its
> + children, will have its events allowed through the filter. If no
> + name is specified, allow every event.
> + """
> + self.name = name
> + self.nlen = len(name)
> +
> + def filter(self, record):
> + """
> + Determine if the specified record is to be logged.
> +
> + Is the specified record to be logged? Returns 0 for no, nonzero for
> + yes. If deemed appropriate, the record may be modified in-place.
> + """
> + if self.nlen == 0:
> + return True
> + elif self.name == record.name:
> + return True
> + elif record.name.find(self.name, 0, self.nlen) != 0:
> + return False
> + return (record.name[self.nlen] == ".")
> +
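
A tiny illustration of the hierarchy rule described in the Filter docstring, using makeLogRecord() just to fabricate records with a given name:

    import logging

    f = logging.Filter('A.B')
    for name in ('A.B', 'A.B.C', 'A.BB', 'B.A.B'):
        rec = logging.makeLogRecord({'name': name})
        print(name, bool(f.filter(rec)))
    # A.B True, A.B.C True, A.BB False, B.A.B False
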
> +class Filterer(object):
> + """
> + A base class for loggers and handlers which allows them to share
> + common code.
> + """
> + def __init__(self):
> + """
> + Initialize the list of filters to be an empty list.
> + """
> + self.filters = []
> +
> + def addFilter(self, filter):
> + """
> + Add the specified filter to this handler.
> + """
> + if not (filter in self.filters):
> + self.filters.append(filter)
> +
> + def removeFilter(self, filter):
> + """
> + Remove the specified filter from this handler.
> + """
> + if filter in self.filters:
> + self.filters.remove(filter)
> +
> + def filter(self, record):
> + """
> + Determine if a record is loggable by consulting all the filters.
> +
> + The default is to allow the record to be logged; any filter can veto
> + this and the record is then dropped. Returns a zero value if a record
> + is to be dropped, else non-zero.
> +
> + .. versionchanged:: 3.2
> +
> + Allow filters to be just callables.
> + """
> + rv = True
> + for f in self.filters:
> + if hasattr(f, 'filter'):
> + result = f.filter(record)
> + else:
> + result = f(record) # assume callable - will raise if not
> + if not result:
> + rv = False
> + break
> + return rv
> +
> +#---------------------------------------------------------------------------
> +# Handler classes and functions
> +#---------------------------------------------------------------------------
> +
> +_handlers = weakref.WeakValueDictionary() #map of handler names to handlers
> +_handlerList = [] # added to allow handlers to be removed in reverse of order initialized
> +
> +def _removeHandlerRef(wr):
> + """
> + Remove a handler reference from the internal cleanup list.
> + """
> + # This function can be called during module teardown, when globals are
> + # set to None. It can also be called from another thread. So we need to
> + # pre-emptively grab the necessary globals and check if they're None,
> + # to prevent race conditions and failures during interpreter shutdown.
> + acquire, release, handlers = _acquireLock, _releaseLock, _handlerList
> + if acquire and release and handlers:
> + acquire()
> + try:
> + if wr in handlers:
> + handlers.remove(wr)
> + finally:
> + release()
> +
> +def _addHandlerRef(handler):
> + """
> + Add a handler to the internal cleanup list using a weak reference.
> + """
> + _acquireLock()
> + try:
> + _handlerList.append(weakref.ref(handler, _removeHandlerRef))
> + finally:
> + _releaseLock()
> +
> +class Handler(Filterer):
> + """
> + Handler instances dispatch logging events to specific destinations.
> +
> + The base handler class. Acts as a placeholder which defines the Handler
> + interface. Handlers can optionally use Formatter instances to format
> + records as desired. By default, no formatter is specified; in this case,
> + the 'raw' message as determined by record.message is logged.
> + """
> + def __init__(self, level=NOTSET):
> + """
> + Initializes the instance - basically setting the formatter to None
> + and the filter list to empty.
> + """
> + Filterer.__init__(self)
> + self._name = None
> + self.level = _checkLevel(level)
> + self.formatter = None
> + # Add the handler to the global _handlerList (for cleanup on shutdown)
> + _addHandlerRef(self)
> + self.createLock()
> +
> + def get_name(self):
> + return self._name
> +
> + def set_name(self, name):
> + _acquireLock()
> + try:
> + if self._name in _handlers:
> + del _handlers[self._name]
> + self._name = name
> + if name:
> + _handlers[name] = self
> + finally:
> + _releaseLock()
> +
> + name = property(get_name, set_name)
> +
> + def createLock(self):
> + """
> + Acquire a thread lock for serializing access to the underlying I/O.
> + """
> + if threading:
> + self.lock = threading.RLock()
> + else: #pragma: no cover
> + self.lock = None
> +
> + def acquire(self):
> + """
> + Acquire the I/O thread lock.
> + """
> + if self.lock:
> + self.lock.acquire()
> +
> + def release(self):
> + """
> + Release the I/O thread lock.
> + """
> + if self.lock:
> + self.lock.release()
> +
> + def setLevel(self, level):
> + """
> + Set the logging level of this handler. level must be an int or a str.
> + """
> + self.level = _checkLevel(level)
> +
> + def format(self, record):
> + """
> + Format the specified record.
> +
> + If a formatter is set, use it. Otherwise, use the default formatter
> + for the module.
> + """
> + if self.formatter:
> + fmt = self.formatter
> + else:
> + fmt = _defaultFormatter
> + return fmt.format(record)
> +
> + def emit(self, record):
> + """
> + Do whatever it takes to actually log the specified logging record.
> +
> + This version is intended to be implemented by subclasses and so
> + raises a NotImplementedError.
> + """
> + raise NotImplementedError('emit must be implemented '
> + 'by Handler subclasses')
> +
> + def handle(self, record):
> + """
> + Conditionally emit the specified logging record.
> +
> + Emission depends on filters which may have been added to the handler.
> + Wrap the actual emission of the record with acquisition/release of
> + the I/O thread lock. Returns whether the filter passed the record for
> + emission.
> + """
> + rv = self.filter(record)
> + if rv:
> + self.acquire()
> + try:
> + self.emit(record)
> + finally:
> + self.release()
> + return rv
> +
> + def setFormatter(self, fmt):
> + """
> + Set the formatter for this handler.
> + """
> + self.formatter = fmt
> +
> + def flush(self):
> + """
> + Ensure all logging output has been flushed.
> +
> + This version does nothing and is intended to be implemented by
> + subclasses.
> + """
> + pass
> +
> + def close(self):
> + """
> + Tidy up any resources used by the handler.
> +
> + This version removes the handler from an internal map of handlers,
> + _handlers, which is used for handler lookup by name. Subclasses
> + should ensure that this gets called from overridden close()
> + methods.
> + """
> + #get the module data lock, as we're updating a shared structure.
> + _acquireLock()
> + try: #unlikely to raise an exception, but you never know...
> + if self._name and self._name in _handlers:
> + del _handlers[self._name]
> + finally:
> + _releaseLock()
> +
> + def handleError(self, record):
> + """
> + Handle errors which occur during an emit() call.
> +
> + This method should be called from handlers when an exception is
> + encountered during an emit() call. If raiseExceptions is false,
> + exceptions get silently ignored. This is what is mostly wanted
> + for a logging system - most users will not care about errors in
> + the logging system, they are more interested in application errors.
> + You could, however, replace this with a custom handler if you wish.
> + The record which was being processed is passed in to this method.
> + """
> + if raiseExceptions and sys.stderr: # see issue 13807
> + t, v, tb = sys.exc_info()
> + try:
> + sys.stderr.write('--- Logging error ---\n')
> + traceback.print_exception(t, v, tb, None, sys.stderr)
> + sys.stderr.write('Call stack:\n')
> + # Walk the stack frame up until we're out of logging,
> + # so as to print the calling context.
> + frame = tb.tb_frame
> + while (frame and os.path.dirname(frame.f_code.co_filename) ==
> + __path__[0]):
> + frame = frame.f_back
> + if frame:
> + traceback.print_stack(frame, file=sys.stderr)
> + else:
> + # couldn't find the right stack frame, for some reason
> + sys.stderr.write('Logged from file %s, line %s\n' % (
> + record.filename, record.lineno))
> + # Issue 18671: output logging message and arguments
> + try:
> + sys.stderr.write('Message: %r\n'
> + 'Arguments: %s\n' % (record.msg,
> + record.args))
> + except Exception:
> + sys.stderr.write('Unable to print the message and arguments'
> + ' - possible formatting error.\nUse the'
> + ' traceback above to help find the error.\n'
> + )
> + except OSError: #pragma: no cover
> + pass # see issue 5971
> + finally:
> + del t, v, tb
> +
> + def __repr__(self):
> + level = getLevelName(self.level)
> + return '<%s (%s)>' % (self.__class__.__name__, level)
> +
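
Since emit() is the only method a subclass must provide, a custom handler is short to write; a minimal, purely illustrative handler (not part of the patch) that collects formatted records in a list:

    import logging

    class ListHandler(logging.Handler):
        """Collect formatted records in a list (illustrative only)."""
        def __init__(self, level=logging.NOTSET):
            super().__init__(level)
            self.records = []

        def emit(self, record):
            self.records.append(self.format(record))

    log = logging.getLogger('demo')
    handler = ListHandler()
    log.addHandler(handler)
    log.warning('captured %d record(s)', 1)
    print(handler.records)   # ['captured 1 record(s)']
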
> +class StreamHandler(Handler):
> + """
> + A handler class which writes logging records, appropriately formatted,
> + to a stream. Note that this class does not close the stream, as
> + sys.stdout or sys.stderr may be used.
> + """
> +
> + terminator = '\n'
> +
> + def __init__(self, stream=None):
> + """
> + Initialize the handler.
> +
> + If stream is not specified, sys.stderr is used.
> + """
> + Handler.__init__(self)
> + if stream is None:
> + stream = sys.stderr
> + self.stream = stream
> +
> + def flush(self):
> + """
> + Flushes the stream.
> + """
> + self.acquire()
> + try:
> + if self.stream and hasattr(self.stream, "flush"):
> + self.stream.flush()
> + finally:
> + self.release()
> +
> + def emit(self, record):
> + """
> + Emit a record.
> +
> + If a formatter is specified, it is used to format the record.
> + The record is then written to the stream with a trailing newline. If
> + exception information is present, it is formatted using
> + traceback.print_exception and appended to the stream. If the stream
> + has an 'encoding' attribute, it is used to determine how to do the
> + output to the stream.
> + """
> + try:
> + msg = self.format(record)
> + stream = self.stream
> + stream.write(msg)
> + stream.write(self.terminator)
> + self.flush()
> + except Exception:
> + self.handleError(record)
> +
> + def __repr__(self):
> + level = getLevelName(self.level)
> + name = getattr(self.stream, 'name', '')
> + if name:
> + name += ' '
> + return '<%s %s(%s)>' % (self.__class__.__name__, name, level)
> +
> +
> +class FileHandler(StreamHandler):
> + """
> + A handler class which writes formatted logging records to disk files.
> + """
> + def __init__(self, filename, mode='a', encoding=None, delay=False):
> + """
> + Open the specified file and use it as the stream for logging.
> + """
> + # Issue #27493: add support for Path objects to be passed in
> + # filename = os.fspath(filename)
> + #keep the absolute path, otherwise derived classes which use this
> + #may come a cropper when the current directory changes
> + self.baseFilename = os.path.abspath(filename)
> + self.mode = mode
> + self.encoding = encoding
> + self.delay = delay
> + if delay:
> + #We don't open the stream, but we still need to call the
> + #Handler constructor to set level, formatter, lock etc.
> + Handler.__init__(self)
> + self.stream = None
> + else:
> + StreamHandler.__init__(self, self._open())
> +
> + def close(self):
> + """
> + Closes the stream.
> + """
> + self.acquire()
> + try:
> + try:
> + if self.stream:
> + try:
> + self.flush()
> + finally:
> + stream = self.stream
> + self.stream = None
> + if hasattr(stream, "close"):
> + stream.close()
> + finally:
> + # Issue #19523: call unconditionally to
> + # prevent a handler leak when delay is set
> + StreamHandler.close(self)
> + finally:
> + self.release()
> +
> + def _open(self):
> + """
> + Open the current base file with the (original) mode and encoding.
> + Return the resulting stream.
> + """
> + return open(self.baseFilename, self.mode, encoding=self.encoding)
> +
> + def emit(self, record):
> + """
> + Emit a record.
> +
> + If the stream was not opened because 'delay' was specified in the
> + constructor, open it before calling the superclass's emit.
> + """
> + if self.stream is None:
> + self.stream = self._open()
> + StreamHandler.emit(self, record)
> +
> + def __repr__(self):
> + level = getLevelName(self.level)
> + return '<%s %s (%s)>' % (self.__class__.__name__, self.baseFilename, level)
> +
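
One behavioural detail of FileHandler worth knowing in a shell environment: with delay=True the target file is not opened (or created) until the first record is emitted. Sketch, with an arbitrary filename:

    import logging

    handler = logging.FileHandler('app.log', delay=True)   # nothing opened yet
    log = logging.getLogger('demo')
    log.addHandler(handler)
    log.error('first record')   # _open() runs here and app.log is created
    handler.close()
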
> +
> +class _StderrHandler(StreamHandler):
> + """
> + This class is like a StreamHandler using sys.stderr, but always uses
> + whatever sys.stderr is currently set to rather than the value of
> + sys.stderr at handler construction time.
> + """
> + def __init__(self, level=NOTSET):
> + """
> + Initialize the handler.
> + """
> + Handler.__init__(self, level)
> +
> + @property
> + def stream(self):
> + return sys.stderr
> +
> +
> +_defaultLastResort = _StderrHandler(WARNING)
> +lastResort = _defaultLastResort
> +
> +#---------------------------------------------------------------------------
> +# Manager classes and functions
> +#---------------------------------------------------------------------------
> +
> +class PlaceHolder(object):
> + """
> + PlaceHolder instances are used in the Manager logger hierarchy to take
> + the place of nodes for which no loggers have been defined. This class is
> + intended for internal use only and not as part of the public API.
> + """
> + def __init__(self, alogger):
> + """
> + Initialize with the specified logger being a child of this placeholder.
> + """
> + self.loggerMap = { alogger : None }
> +
> + def append(self, alogger):
> + """
> + Add the specified logger as a child of this placeholder.
> + """
> + if alogger not in self.loggerMap:
> + self.loggerMap[alogger] = None
> +
> +#
> +# Determine which class to use when instantiating loggers.
> +#
> +
> +def setLoggerClass(klass):
> + """
> + Set the class to be used when instantiating a logger. The class should
> + define __init__() such that only a name argument is required, and the
> + __init__() should call Logger.__init__()
> + """
> + if klass != Logger:
> + if not issubclass(klass, Logger):
> + raise TypeError("logger not derived from logging.Logger: "
> + + klass.__name__)
> + global _loggerClass
> + _loggerClass = klass
> +
> +def getLoggerClass():
> + """
> + Return the class to be used when instantiating a logger.
> + """
> + return _loggerClass
> +
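
A quick sketch of the setLoggerClass() contract described above (AppLogger and
app_field are made-up names for illustration):

    import logging

    class AppLogger(logging.Logger):
        def __init__(self, name):
            logging.Logger.__init__(self, name)   # required by setLoggerClass()
            self.app_field = "uefi-shell"         # extra per-logger state

    logging.setLoggerClass(AppLogger)
    log = logging.getLogger("myapp")              # instance of AppLogger
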
> +class Manager(object):
> + """
> + There is [under normal circumstances] just one Manager instance, which
> + holds the hierarchy of loggers.
> + """
> + def __init__(self, rootnode):
> + """
> + Initialize the manager with the root node of the logger hierarchy.
> + """
> + self.root = rootnode
> + self.disable = 0
> + self.emittedNoHandlerWarning = False
> + self.loggerDict = {}
> + self.loggerClass = None
> + self.logRecordFactory = None
> +
> + def getLogger(self, name):
> + """
> + Get a logger with the specified name (channel name), creating it
> + if it doesn't yet exist. This name is a dot-separated hierarchical
> + name, such as "a", "a.b", "a.b.c" or similar.
> +
> + If a PlaceHolder existed for the specified name [i.e. the logger
> + didn't exist but a child of it did], replace it with the created
> + logger and fix up the parent/child references which pointed to the
> + placeholder to now point to the logger.
> + """
> + rv = None
> + if not isinstance(name, str):
> + raise TypeError('A logger name must be a string')
> + _acquireLock()
> + try:
> + if name in self.loggerDict:
> + rv = self.loggerDict[name]
> + if isinstance(rv, PlaceHolder):
> + ph = rv
> + rv = (self.loggerClass or _loggerClass)(name)
> + rv.manager = self
> + self.loggerDict[name] = rv
> + self._fixupChildren(ph, rv)
> + self._fixupParents(rv)
> + else:
> + rv = (self.loggerClass or _loggerClass)(name)
> + rv.manager = self
> + self.loggerDict[name] = rv
> + self._fixupParents(rv)
> + finally:
> + _releaseLock()
> + return rv
> +
> + def setLoggerClass(self, klass):
> + """
> + Set the class to be used when instantiating a logger with this Manager.
> + """
> + if klass != Logger:
> + if not issubclass(klass, Logger):
> + raise TypeError("logger not derived from logging.Logger: "
> + + klass.__name__)
> + self.loggerClass = klass
> +
> + def setLogRecordFactory(self, factory):
> + """
> + Set the factory to be used when instantiating a log record with this
> + Manager.
> + """
> + self.logRecordFactory = factory
> +
> + def _fixupParents(self, alogger):
> + """
> + Ensure that there are either loggers or placeholders all the way
> + from the specified logger to the root of the logger hierarchy.
> + """
> + name = alogger.name
> + i = name.rfind(".")
> + rv = None
> + while (i > 0) and not rv:
> + substr = name[:i]
> + if substr not in self.loggerDict:
> + self.loggerDict[substr] = PlaceHolder(alogger)
> + else:
> + obj = self.loggerDict[substr]
> + if isinstance(obj, Logger):
> + rv = obj
> + else:
> + assert isinstance(obj, PlaceHolder)
> + obj.append(alogger)
> + i = name.rfind(".", 0, i - 1)
> + if not rv:
> + rv = self.root
> + alogger.parent = rv
> +
> + def _fixupChildren(self, ph, alogger):
> + """
> + Ensure that children of the placeholder ph are connected to the
> + specified logger.
> + """
> + name = alogger.name
> + namelen = len(name)
> + for c in ph.loggerMap.keys():
> + #The if means ... if not c.parent.name.startswith(name)
> + if c.parent.name[:namelen] != name:
> + alogger.parent = c.parent
> + c.parent = alogger
> +
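
To illustrate the PlaceHolder bookkeeping above (the logger names are arbitrary):

    import logging
    leaf = logging.getLogger("pkg.mod.leaf")   # "pkg" and "pkg.mod" become PlaceHolders
    mid = logging.getLogger("pkg.mod")         # PlaceHolder replaced by a real Logger
    assert leaf.parent is mid                  # _fixupChildren() re-parented the leaf
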
> +#---------------------------------------------------------------------------
> +# Logger classes and functions
> +#---------------------------------------------------------------------------
> +
> +class Logger(Filterer):
> + """
> + Instances of the Logger class represent a single logging channel. A
> + "logging channel" indicates an area of an application. Exactly how an
> + "area" is defined is up to the application developer. Since an
> + application can have any number of areas, logging channels are identified
> + by a unique string. Application areas can be nested (e.g. an area
> + of "input processing" might include sub-areas "read CSV files", "read
> + XLS files" and "read Gnumeric files"). To cater for this natural nesting,
> + channel names are organized into a namespace hierarchy where levels are
> + separated by periods, much like the Java or Python package namespace. So
> + in the instance given above, channel names might be "input" for the upper
> + level, and "input.csv", "input.xls" and "input.gnu" for the sub-levels.
> + There is no arbitrary limit to the depth of nesting.
> + """
> + def __init__(self, name, level=NOTSET):
> + """
> + Initialize the logger with a name and an optional level.
> + """
> + Filterer.__init__(self)
> + self.name = name
> + self.level = _checkLevel(level)
> + self.parent = None
> + self.propagate = True
> + self.handlers = []
> + self.disabled = False
> +
> + def setLevel(self, level):
> + """
> + Set the logging level of this logger. level must be an int or a str.
> + """
> + self.level = _checkLevel(level)
> +
> + def debug(self, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with severity 'DEBUG'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.debug("Houston, we have a %s", "thorny problem", exc_info=1)
> + """
> + if self.isEnabledFor(DEBUG):
> + self._log(DEBUG, msg, args, **kwargs)
> +
> + def info(self, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with severity 'INFO'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.info("Houston, we have a %s", "interesting problem", exc_info=1)
> + """
> + if self.isEnabledFor(INFO):
> + self._log(INFO, msg, args, **kwargs)
> +
> + def warning(self, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with severity 'WARNING'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.warning("Houston, we have a %s", "bit of a problem", exc_info=1)
> + """
> + if self.isEnabledFor(WARNING):
> + self._log(WARNING, msg, args, **kwargs)
> +
> + def warn(self, msg, *args, **kwargs):
> + warnings.warn("The 'warn' method is deprecated, "
> + "use 'warning' instead", DeprecationWarning, 2)
> + self.warning(msg, *args, **kwargs)
> +
> + def error(self, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with severity 'ERROR'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.error("Houston, we have a %s", "major problem", exc_info=1)
> + """
> + if self.isEnabledFor(ERROR):
> + self._log(ERROR, msg, args, **kwargs)
> +
> + def exception(self, msg, *args, exc_info=True, **kwargs):
> + """
> + Convenience method for logging an ERROR with exception information.
> + """
> + self.error(msg, *args, exc_info=exc_info, **kwargs)
> +
> + def critical(self, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with severity 'CRITICAL'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.critical("Houston, we have a %s", "major disaster", exc_info=1)
> + """
> + if self.isEnabledFor(CRITICAL):
> + self._log(CRITICAL, msg, args, **kwargs)
> +
> + fatal = critical
> +
> + def log(self, level, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with the integer severity 'level'.
> +
> + To pass exception information, use the keyword argument exc_info with
> + a true value, e.g.
> +
> + logger.log(level, "We have a %s", "mysterious problem", exc_info=1)
> + """
> + if not isinstance(level, int):
> + if raiseExceptions:
> + raise TypeError("level must be an integer")
> + else:
> + return
> + if self.isEnabledFor(level):
> + self._log(level, msg, args, **kwargs)
> +
> + def findCaller(self, stack_info=False):
> + """
> + Find the stack frame of the caller so that we can note the source
> + file name, line number and function name.
> + """
> + f = currentframe()
> + #On some versions of IronPython, currentframe() returns None if
> + #IronPython isn't run with -X:Frames.
> + if f is not None:
> + f = f.f_back
> + rv = "(unknown file)", 0, "(unknown function)", None
> + while hasattr(f, "f_code"):
> + co = f.f_code
> + filename = os.path.normcase(co.co_filename)
> + if filename == _srcfile:
> + f = f.f_back
> + continue
> + sinfo = None
> + if stack_info:
> + sio = io.StringIO()
> + sio.write('Stack (most recent call last):\n')
> + traceback.print_stack(f, file=sio)
> + sinfo = sio.getvalue()
> + if sinfo[-1] == '\n':
> + sinfo = sinfo[:-1]
> + sio.close()
> + rv = (co.co_filename, f.f_lineno, co.co_name, sinfo)
> + break
> + return rv
> +
> + def makeRecord(self, name, level, fn, lno, msg, args, exc_info,
> + func=None, extra=None, sinfo=None):
> + """
> + A factory method which can be overridden in subclasses to create
> + specialized LogRecords.
> + """
> + rv = _logRecordFactory(name, level, fn, lno, msg, args, exc_info, func,
> + sinfo)
> + if extra is not None:
> + for key in extra:
> + if (key in ["message", "asctime"]) or (key in rv.__dict__):
> + raise KeyError("Attempt to overwrite %r in LogRecord" % key)
> + rv.__dict__[key] = extra[key]
> + return rv
> +
> + def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
> + """
> + Low-level logging routine which creates a LogRecord and then calls
> + all the handlers of this logger to handle the record.
> + """
> + sinfo = None
> + if _srcfile:
> + #IronPython doesn't track Python frames, so findCaller raises an
> + #exception on some versions of IronPython. We trap it here so that
> + #IronPython can use logging.
> + try:
> + fn, lno, func, sinfo = self.findCaller(stack_info)
> + except ValueError: # pragma: no cover
> + fn, lno, func = "(unknown file)", 0, "(unknown function)"
> + else: # pragma: no cover
> + fn, lno, func = "(unknown file)", 0, "(unknown function)"
> + if exc_info:
> + if isinstance(exc_info, BaseException):
> + exc_info = (type(exc_info), exc_info, exc_info.__traceback__)
> + elif not isinstance(exc_info, tuple):
> + exc_info = sys.exc_info()
> + record = self.makeRecord(self.name, level, fn, lno, msg, args,
> + exc_info, func, extra, sinfo)
> + self.handle(record)
> +
> + def handle(self, record):
> + """
> + Call the handlers for the specified record.
> +
> + This method is used for unpickled records received from a socket, as
> + well as those created locally. Logger-level filtering is applied.
> + """
> + if (not self.disabled) and self.filter(record):
> + self.callHandlers(record)
> +
> + def addHandler(self, hdlr):
> + """
> + Add the specified handler to this logger.
> + """
> + _acquireLock()
> + try:
> + if not (hdlr in self.handlers):
> + self.handlers.append(hdlr)
> + finally:
> + _releaseLock()
> +
> + def removeHandler(self, hdlr):
> + """
> + Remove the specified handler from this logger.
> + """
> + _acquireLock()
> + try:
> + if hdlr in self.handlers:
> + self.handlers.remove(hdlr)
> + finally:
> + _releaseLock()
> +
> + def hasHandlers(self):
> + """
> + See if this logger has any handlers configured.
> +
> + Loop through all handlers for this logger and its parents in the
> + logger hierarchy. Return True if a handler was found, else False.
> + Stop searching up the hierarchy whenever a logger with the "propagate"
> + attribute set to zero is found - that will be the last logger which
> + is checked for the existence of handlers.
> + """
> + c = self
> + rv = False
> + while c:
> + if c.handlers:
> + rv = True
> + break
> + if not c.propagate:
> + break
> + else:
> + c = c.parent
> + return rv
> +
> + def callHandlers(self, record):
> + """
> + Pass a record to all relevant handlers.
> +
> + Loop through all handlers for this logger and its parents in the
> + logger hierarchy. If no handler was found, output a one-off error
> + message to sys.stderr. Stop searching up the hierarchy whenever a
> + logger with the "propagate" attribute set to zero is found - that
> + will be the last logger whose handlers are called.
> + """
> + c = self
> + found = 0
> + while c:
> + for hdlr in c.handlers:
> + found = found + 1
> + if record.levelno >= hdlr.level:
> + hdlr.handle(record)
> + if not c.propagate:
> + c = None #break out
> + else:
> + c = c.parent
> + if (found == 0):
> + if lastResort:
> + if record.levelno >= lastResort.level:
> + lastResort.handle(record)
> + elif raiseExceptions and not self.manager.emittedNoHandlerWarning:
> + sys.stderr.write("No handlers could be found for logger"
> + " \"%s\"\n" % self.name)
> + self.manager.emittedNoHandlerWarning = True
> +
> + def getEffectiveLevel(self):
> + """
> + Get the effective level for this logger.
> +
> + Loop through this logger and its parents in the logger hierarchy,
> + looking for a non-zero logging level. Return the first one found.
> + """
> + logger = self
> + while logger:
> + if logger.level:
> + return logger.level
> + logger = logger.parent
> + return NOTSET
> +
> + def isEnabledFor(self, level):
> + """
> + Is this logger enabled for level 'level'?
> + """
> + if self.manager.disable >= level:
> + return False
> + return level >= self.getEffectiveLevel()
> +
> + def getChild(self, suffix):
> + """
> + Get a logger which is a descendant to this one.
> +
> + This is a convenience method, such that
> +
> + logging.getLogger('abc').getChild('def.ghi')
> +
> + is the same as
> +
> + logging.getLogger('abc.def.ghi')
> +
> + It's useful, for example, when the parent logger is named using
> + __name__ rather than a literal string.
> + """
> + if self.root is not self:
> + suffix = '.'.join((self.name, suffix))
> + return self.manager.getLogger(suffix)
> +
> + def __repr__(self):
> + level = getLevelName(self.getEffectiveLevel())
> + return '<%s %s (%s)>' % (self.__class__.__name__, self.name, level)
> +
> +
> +class RootLogger(Logger):
> + """
> + A root logger is not that different to any other logger, except that
> + it must have a logging level and there is only one instance of it in
> + the hierarchy.
> + """
> + def __init__(self, level):
> + """
> + Initialize the logger with the name "root".
> + """
> + Logger.__init__(self, "root", level)
> +
> +_loggerClass = Logger
> +
> +class LoggerAdapter(object):
> + """
> + An adapter for loggers which makes it easier to specify contextual
> + information in logging output.
> + """
> +
> + def __init__(self, logger, extra):
> + """
> + Initialize the adapter with a logger and a dict-like object which
> + provides contextual information. This constructor signature allows
> + easy stacking of LoggerAdapters, if so desired.
> +
> + You can effectively pass keyword arguments as shown in the
> + following example:
> +
> + adapter = LoggerAdapter(someLogger, dict(p1=v1, p2="v2"))
> + """
> + self.logger = logger
> + self.extra = extra
> +
> + def process(self, msg, kwargs):
> + """
> + Process the logging message and keyword arguments passed in to
> + a logging call to insert contextual information. You can either
> + manipulate the message itself, the keyword args or both. Return
> + the message and kwargs modified (or not) to suit your needs.
> +
> + Normally, you'll only need to override this one method in a
> + LoggerAdapter subclass for your specific needs.
> + """
> + kwargs["extra"] = self.extra
> + return msg, kwargs
> +
> + #
> + # Boilerplate convenience methods
> + #
> + def debug(self, msg, *args, **kwargs):
> + """
> + Delegate a debug call to the underlying logger.
> + """
> + self.log(DEBUG, msg, *args, **kwargs)
> +
> + def info(self, msg, *args, **kwargs):
> + """
> + Delegate an info call to the underlying logger.
> + """
> + self.log(INFO, msg, *args, **kwargs)
> +
> + def warning(self, msg, *args, **kwargs):
> + """
> + Delegate a warning call to the underlying logger.
> + """
> + self.log(WARNING, msg, *args, **kwargs)
> +
> + def warn(self, msg, *args, **kwargs):
> + warnings.warn("The 'warn' method is deprecated, "
> + "use 'warning' instead", DeprecationWarning, 2)
> + self.warning(msg, *args, **kwargs)
> +
> + def error(self, msg, *args, **kwargs):
> + """
> + Delegate an error call to the underlying logger.
> + """
> + self.log(ERROR, msg, *args, **kwargs)
> +
> + def exception(self, msg, *args, exc_info=True, **kwargs):
> + """
> + Delegate an exception call to the underlying logger.
> + """
> + self.log(ERROR, msg, *args, exc_info=exc_info, **kwargs)
> +
> + def critical(self, msg, *args, **kwargs):
> + """
> + Delegate a critical call to the underlying logger.
> + """
> + self.log(CRITICAL, msg, *args, **kwargs)
> +
> + def log(self, level, msg, *args, **kwargs):
> + """
> + Delegate a log call to the underlying logger, after adding
> + contextual information from this adapter instance.
> + """
> + if self.isEnabledFor(level):
> + msg, kwargs = self.process(msg, kwargs)
> + self.logger.log(level, msg, *args, **kwargs)
> +
> + def isEnabledFor(self, level):
> + """
> + Is this logger enabled for level 'level'?
> + """
> + if self.logger.manager.disable >= level:
> + return False
> + return level >= self.getEffectiveLevel()
> +
> + def setLevel(self, level):
> + """
> + Set the specified level on the underlying logger.
> + """
> + self.logger.setLevel(level)
> +
> + def getEffectiveLevel(self):
> + """
> + Get the effective level for the underlying logger.
> + """
> + return self.logger.getEffectiveLevel()
> +
> + def hasHandlers(self):
> + """
> + See if the underlying logger has any handlers.
> + """
> + return self.logger.hasHandlers()
> +
> + def _log(self, level, msg, args, exc_info=None, extra=None, stack_info=False):
> + """
> + Low-level log implementation, proxied to allow nested logger adapters.
> + """
> + return self.logger._log(
> + level,
> + msg,
> + args,
> + exc_info=exc_info,
> + extra=extra,
> + stack_info=stack_info,
> + )
> +
> + @property
> + def manager(self):
> + return self.logger.manager
> +
> + @manager.setter
> + def manager(self, value):
> + self.logger.manager = value
> +
> + @property
> + def name(self):
> + return self.logger.name
> +
> + def __repr__(self):
> + logger = self.logger
> + level = getLevelName(logger.getEffectiveLevel())
> + return '<%s %s (%s)>' % (self.__class__.__name__, logger.name, level)
> +
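
A minimal sketch of the LoggerAdapter/process() pattern above ("client" is a
hypothetical contextual field, not something the patch defines):

    import logging
    logging.basicConfig(format="%(client)s %(message)s")
    base = logging.getLogger("net")
    adapter = logging.LoggerAdapter(base, {"client": "10.0.0.2"})
    adapter.warning("connection reset")   # process() injects extra={"client": ...}
    # Only records routed through the adapter carry the extra "client" field.
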
> +root = RootLogger(WARNING)
> +Logger.root = root
> +Logger.manager = Manager(Logger.root)
> +
> +#---------------------------------------------------------------------------
> +# Configuration classes and functions
> +#---------------------------------------------------------------------------
> +
> +def basicConfig(**kwargs):
> + """
> + Do basic configuration for the logging system.
> +
> + This function does nothing if the root logger already has handlers
> + configured. It is a convenience method intended for use by simple scripts
> + to do one-shot configuration of the logging package.
> +
> + The default behaviour is to create a StreamHandler which writes to
> + sys.stderr, set a formatter using the BASIC_FORMAT format string, and
> + add the handler to the root logger.
> +
> + A number of optional keyword arguments may be specified, which can alter
> + the default behaviour.
> +
> + filename Specifies that a FileHandler be created, using the specified
> + filename, rather than a StreamHandler.
> + filemode Specifies the mode to open the file, if filename is specified
> + (if filemode is unspecified, it defaults to 'a').
> + format Use the specified format string for the handler.
> + datefmt Use the specified date/time format.
> + style If a format string is specified, use this to specify the
> + type of format string (possible values '%', '{', '$', for
> + %-formatting, :meth:`str.format` and :class:`string.Template`
> + - defaults to '%').
> + level Set the root logger level to the specified level.
> + stream Use the specified stream to initialize the StreamHandler. Note
> + that this argument is incompatible with 'filename' - if both
> + are present, 'stream' is ignored.
> + handlers If specified, this should be an iterable of already created
> + handlers, which will be added to the root handler. Any handler
> + in the list which does not have a formatter assigned will be
> + assigned the formatter created in this function.
> +
> + Note that you could specify a stream created using open(filename, mode)
> + rather than passing the filename and mode in. However, it should be
> + remembered that StreamHandler does not close its stream (since it may be
> + using sys.stdout or sys.stderr), whereas FileHandler closes its stream
> + when the handler is closed.
> +
> + .. versionchanged:: 3.2
> + Added the ``style`` parameter.
> +
> + .. versionchanged:: 3.3
> + Added the ``handlers`` parameter. A ``ValueError`` is now thrown for
> + incompatible arguments (e.g. ``handlers`` specified together with
> + ``filename``/``filemode``, or ``filename``/``filemode`` specified
> + together with ``stream``, or ``handlers`` specified together with
> + ``stream``).
> + """
> + # Add thread safety in case someone mistakenly calls
> + # basicConfig() from multiple threads
> + _acquireLock()
> + try:
> + if len(root.handlers) == 0:
> + handlers = kwargs.pop("handlers", None)
> + if handlers is None:
> + if "stream" in kwargs and "filename" in kwargs:
> + raise ValueError("'stream' and 'filename' should not be "
> + "specified together")
> + else:
> + if "stream" in kwargs or "filename" in kwargs:
> + raise ValueError("'stream' or 'filename' should not be "
> + "specified together with 'handlers'")
> + if handlers is None:
> + filename = kwargs.pop("filename", None)
> + mode = kwargs.pop("filemode", 'a')
> + if filename:
> + h = FileHandler(filename, mode)
> + else:
> + stream = kwargs.pop("stream", None)
> + h = StreamHandler(stream)
> + handlers = [h]
> + dfs = kwargs.pop("datefmt", None)
> + style = kwargs.pop("style", '%')
> + if style not in _STYLES:
> + raise ValueError('Style must be one of: %s' % ','.join(
> + _STYLES.keys()))
> + fs = kwargs.pop("format", _STYLES[style][1])
> + fmt = Formatter(fs, dfs, style)
> + for h in handlers:
> + if h.formatter is None:
> + h.setFormatter(fmt)
> + root.addHandler(h)
> + level = kwargs.pop("level", None)
> + if level is not None:
> + root.setLevel(level)
> + if kwargs:
> + keys = ', '.join(kwargs.keys())
> + raise ValueError('Unrecognised argument(s): %s' % keys)
> + finally:
> + _releaseLock()
> +
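
Typical one-shot configuration with the keyword arguments documented above
(the log file path is hypothetical):

    import logging
    logging.basicConfig(filename="fs0:\\py368.log", level=logging.DEBUG,
                        format="%(asctime)s %(levelname)s %(message)s")
    logging.debug("goes to the FileHandler created by basicConfig()")
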
> +#---------------------------------------------------------------------------
> +# Utility functions at module level.
> +# Basically delegate everything to the root logger.
> +#---------------------------------------------------------------------------
> +
> +def getLogger(name=None):
> + """
> + Return a logger with the specified name, creating it if necessary.
> +
> + If no name is specified, return the root logger.
> + """
> + if name:
> + return Logger.manager.getLogger(name)
> + else:
> + return root
> +
> +def critical(msg, *args, **kwargs):
> + """
> + Log a message with severity 'CRITICAL' on the root logger. If the logger
> + has no handlers, call basicConfig() to add a console handler with a
> + pre-defined format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.critical(msg, *args, **kwargs)
> +
> +fatal = critical
> +
> +def error(msg, *args, **kwargs):
> + """
> + Log a message with severity 'ERROR' on the root logger. If the logger has
> + no handlers, call basicConfig() to add a console handler with a pre-defined
> + format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.error(msg, *args, **kwargs)
> +
> +def exception(msg, *args, exc_info=True, **kwargs):
> + """
> + Log a message with severity 'ERROR' on the root logger, with exception
> + information. If the logger has no handlers, basicConfig() is called to add
> + a console handler with a pre-defined format.
> + """
> + error(msg, *args, exc_info=exc_info, **kwargs)
> +
> +def warning(msg, *args, **kwargs):
> + """
> + Log a message with severity 'WARNING' on the root logger. If the logger has
> + no handlers, call basicConfig() to add a console handler with a pre-defined
> + format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.warning(msg, *args, **kwargs)
> +
> +def warn(msg, *args, **kwargs):
> + warnings.warn("The 'warn' function is deprecated, "
> + "use 'warning' instead", DeprecationWarning, 2)
> + warning(msg, *args, **kwargs)
> +
> +def info(msg, *args, **kwargs):
> + """
> + Log a message with severity 'INFO' on the root logger. If the logger has
> + no handlers, call basicConfig() to add a console handler with a pre-defined
> + format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.info(msg, *args, **kwargs)
> +
> +def debug(msg, *args, **kwargs):
> + """
> + Log a message with severity 'DEBUG' on the root logger. If the logger has
> + no handlers, call basicConfig() to add a console handler with a pre-defined
> + format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.debug(msg, *args, **kwargs)
> +
> +def log(level, msg, *args, **kwargs):
> + """
> + Log 'msg % args' with the integer severity 'level' on the root logger. If
> + the logger has no handlers, call basicConfig() to add a console handler
> + with a pre-defined format.
> + """
> + if len(root.handlers) == 0:
> + basicConfig()
> + root.log(level, msg, *args, **kwargs)
> +
> +def disable(level):
> + """
> + Disable all logging calls of severity 'level' and below.
> + """
> + root.manager.disable = level
> +
> +def shutdown(handlerList=_handlerList):
> + """
> + Perform any cleanup actions in the logging system (e.g. flushing
> + buffers).
> +
> + Should be called at application exit.
> + """
> + for wr in reversed(handlerList[:]):
> + #errors might occur, for example, if files are locked
> + #we just ignore them if raiseExceptions is not set
> + try:
> + h = wr()
> + if h:
> + try:
> + h.acquire()
> + h.flush()
> + h.close()
> + except (OSError, ValueError):
> + # Ignore errors which might be caused
> + # because handlers have been closed but
> + # references to them are still around at
> + # application exit.
> + pass
> + finally:
> + h.release()
> + except: # ignore everything, as we're shutting down
> + if raiseExceptions:
> + raise
> + #else, swallow
> +
> +#Let's try and shutdown automatically on application exit...
> +import atexit
> +atexit.register(shutdown)
> +
> +# Null handler
> +
> +class NullHandler(Handler):
> + """
> + This handler does nothing. It's intended to be used to avoid the
> + "No handlers could be found for logger XXX" one-off warning. This is
> + important for library code, which may contain code to log events. If a user
> + of the library does not configure logging, the one-off warning might be
> + produced; to avoid this, the library developer simply needs to instantiate
> + a NullHandler and add it to the top-level logger of the library module or
> + package.
> + """
> + def handle(self, record):
> + """Stub."""
> +
> + def emit(self, record):
> + """Stub."""
> +
> + def createLock(self):
> + self.lock = None
> +
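
The library-code idiom described in the NullHandler docstring, for reference
("mylib" is a placeholder package name):

    import logging
    logging.getLogger("mylib").addHandler(logging.NullHandler())
    logging.getLogger("mylib.sub").error("event")
    # Nothing is written: NullHandler absorbs the record, and because a
    # handler was found, the lastResort stderr handler is not consulted.
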
> +# Warnings integration
> +
> +_warnings_showwarning = None
> +
> +def _showwarning(message, category, filename, lineno, file=None, line=None):
> + """
> + Implementation of showwarning() which redirects to logging, which will first
> + check to see if the file parameter is None. If a file is specified, it will
> + delegate to the original warnings implementation of showwarning. Otherwise,
> + it will call warnings.formatwarning and will log the resulting string to a
> + warnings logger named "py.warnings" with level logging.WARNING.
> + """
> + if file is not None:
> + if _warnings_showwarning is not None:
> + _warnings_showwarning(message, category, filename, lineno, file, line)
> + else:
> + s = warnings.formatwarning(message, category, filename, lineno, line)
> + logger = getLogger("py.warnings")
> + if not logger.handlers:
> + logger.addHandler(NullHandler())
> + logger.warning("%s", s)
> +
> +def captureWarnings(capture):
> + """
> + If capture is true, redirect all warnings to the logging package.
> + If capture is False, ensure that warnings are not redirected to logging
> + but to their original destinations.
> + """
> + global _warnings_showwarning
> + if capture:
> + if _warnings_showwarning is None:
> + _warnings_showwarning = warnings.showwarning
> + warnings.showwarning = _showwarning
> + else:
> + if _warnings_showwarning is not None:
> + warnings.showwarning = _warnings_showwarning
> + _warnings_showwarning = None
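
And the warnings integration above in use (illustrative only):

    import logging, warnings
    logging.basicConfig()               # make the "py.warnings" records visible
    logging.captureWarnings(True)
    warnings.warn("deprecated API")     # logged via the "py.warnings" logger at WARNING
    logging.captureWarnings(False)      # restores the saved warnings.showwarning
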
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
> new file mode 100644
> index 00000000..d1ffb774
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
> @@ -0,0 +1,568 @@
> +
> +# Module 'ntpath' -- common operations on WinNT/Win95 and UEFI pathnames.
> +#
> +# Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
> +# Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
> +# This program and the accompanying materials are licensed and made available under
> +# the terms and conditions of the BSD License that accompanies this distribution.
> +# The full text of the license may be found at
> +# http://opensource.org/licenses/bsd-license.
> +#
> +# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +
> +
> +"""Common pathname manipulations, WindowsNT/95 and UEFI version.
> +
> +Instead of importing this module directly, import os and refer to this
> +module as os.path.
> +"""
> +
> +import os
> +import sys
> +import stat
> +import genericpath
> +import warnings
> +
> +from genericpath import *
> +from genericpath import _unicode
> +
> +__all__ = ["normcase","isabs","join","splitdrive","split","splitext",
> + "basename","dirname","commonprefix","getsize","getmtime",
> + "getatime","getctime", "islink","exists","lexists","isdir","isfile",
> + "ismount","walk","expanduser","expandvars","normpath","abspath",
> + "splitunc","curdir","pardir","sep","pathsep","defpath","altsep",
> + "extsep","devnull","realpath","supports_unicode_filenames","relpath"]
> +
> +# strings representing various path-related bits and pieces
> +curdir = '.'
> +pardir = '..'
> +extsep = '.'
> +sep = '\\'
> +pathsep = ';'
> +altsep = '/'
> +defpath = '.;C:\\bin'
> +if 'ce' in sys.builtin_module_names:
> + defpath = '\\Windows'
> +elif 'os2' in sys.builtin_module_names:
> + # OS/2 w/ VACPP
> + altsep = '/'
> +devnull = 'nul'
> +
> +# Normalize the case of a pathname and map slashes to backslashes.
> +# Other normalizations (such as optimizing '../' away) are not done
> +# (this is done by normpath).
> +
> +def normcase(s):
> + """Normalize case of pathname.
> +
> + Makes all characters lowercase and all slashes into backslashes."""
> + return s.replace("/", "\\").lower()
> +
> +
> +# Return whether a path is absolute.
> +# Trivial in Posix, harder on the Mac or MS-DOS.
> +# For DOS it is absolute if it starts with a slash or backslash (current
> +# volume), or if a pathname after the volume letter and colon / UNC resource
> +# starts with a slash or backslash.
> +
> +def isabs(s):
> + """Test whether a path is absolute"""
> + s = splitdrive(s)[1]
> + return s != '' and s[:1] in '/\\'
> +
> +
> +# Join two (or more) paths.
> +def join(path, *paths):
> + """Join two or more pathname components, inserting "\\" as needed."""
> + result_drive, result_path = splitdrive(path)
> + for p in paths:
> + p_drive, p_path = splitdrive(p)
> + if p_path and p_path[0] in '\\/':
> + # Second path is absolute
> + if p_drive or not result_drive:
> + result_drive = p_drive
> + result_path = p_path
> + continue
> + elif p_drive and p_drive != result_drive:
> + if p_drive.lower() != result_drive.lower():
> + # Different drives => ignore the first path entirely
> + result_drive = p_drive
> + result_path = p_path
> + continue
> + # Same drive in different case
> + result_drive = p_drive
> + # Second path is relative to the first
> + if result_path and result_path[-1] not in '\\/':
> + result_path = result_path + '\\'
> + result_path = result_path + p_path
> + ## add separator between UNC and non-absolute path
> + if (result_path and result_path[0] not in '\\/' and
> + result_drive and result_drive[-1:] != ':'):
> + return result_drive + sep + result_path
> + return result_drive + result_path
> +
> +
> +# Split a path in a drive specification (a drive letter followed by a
> +# colon) and the path specification.
> +# It is always true that drivespec + pathspec == p
> +# NOTE: for UEFI (and even Windows) you can have multiple characters to the left
> +# of the ':' for the device or drive spec. This is reflected in the modifications
> +# to splitdrive() and splitunc().
> +def splitdrive(p):
> + """Split a pathname into drive/UNC sharepoint and relative path specifiers.
> + Returns a 2-tuple (drive_or_unc, path); either part may be empty.
> +
> + If you assign
> + result = splitdrive(p)
> + It is always true that:
> + result[0] + result[1] == p
> +
> + If the path contained a drive letter, drive_or_unc will contain everything
> + up to and including the colon. e.g. splitdrive("c:/dir") returns ("c:", "/dir")
> +
> + If the path contained a UNC path, the drive_or_unc will contain the host name
> + and share up to but not including the fourth directory separator character.
> + e.g. splitdrive("//host/computer/dir") returns ("//host/computer", "/dir")
> +
> + Paths cannot contain both a drive letter and a UNC path.
> +
> + """
> + if len(p) > 1:
> + normp = p.replace(altsep, sep)
> + if (normp[0:2] == sep*2) and (normp[2:3] != sep):
> + # is a UNC path:
> + # vvvvvvvvvvvvvvvvvvvv drive letter or UNC path
> + # \\machine\mountpoint\directory\etc\...
> + # directory ^^^^^^^^^^^^^^^
> + index = normp.find(sep, 2)
> + if index == -1:
> + return '', p
> + index2 = normp.find(sep, index + 1)
> + # a UNC path can't have two slashes in a row
> + # (after the initial two)
> + if index2 == index + 1:
> + return '', p
> + if index2 == -1:
> + index2 = len(p)
> + return p[:index2], p[index2:]
> + index = p.find(':')
> + if index != -1:
> + index = index + 1
> + return p[:index], p[index:]
> + return '', p
> +
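
Given the UEFI note above, splitdrive() accepts multi-character device names
such as a shell mapping fs0: (hypothetical), e.g.:

    >>> import ntpath
    >>> ntpath.splitdrive('fs0:\\efi\\boot\\bootx64.efi')
    ('fs0:', '\\efi\\boot\\bootx64.efi')
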
> +# Parse UNC paths
> +def splitunc(p):
> + """Split a pathname into UNC mount point and relative path specifiers.
> +
> + Return a 2-tuple (unc, rest); either part may be empty.
> + If unc is not empty, it has the form '//host/mount' (or similar
> + using backslashes). unc+rest is always the input path.
> + Paths containing drive letters never have an UNC part.
> + """
> + if ':' in p:
> + return '', p # Drive letter or device name present
> + firstTwo = p[0:2]
> + if firstTwo == '//' or firstTwo == '\\\\':
> + # is a UNC path:
> + # vvvvvvvvvvvvvvvvvvvv equivalent to drive letter
> + # \\machine\mountpoint\directories...
> + # directory ^^^^^^^^^^^^^^^
> + normp = p.replace('\\', '/')
> + index = normp.find('/', 2)
> + if index <= 2:
> + return '', p
> + index2 = normp.find('/', index + 1)
> + # a UNC path can't have two slashes in a row
> + # (after the initial two)
> + if index2 == index + 1:
> + return '', p
> + if index2 == -1:
> + index2 = len(p)
> + return p[:index2], p[index2:]
> + return '', p
> +
> +
> +# Split a path in head (everything up to the last '/') and tail (the
> +# rest). After the trailing '/' is stripped, the invariant
> +# join(head, tail) == p holds.
> +# The resulting head won't end in '/' unless it is the root.
> +
> +def split(p):
> + """Split a pathname.
> +
> + Return tuple (head, tail) where tail is everything after the final slash.
> + Either part may be empty."""
> +
> + d, p = splitdrive(p)
> + # set i to index beyond p's last slash
> + i = len(p)
> + while i and p[i-1] not in '/\\':
> + i = i - 1
> + head, tail = p[:i], p[i:] # now tail has no slashes
> + # remove trailing slashes from head, unless it's all slashes
> + head2 = head
> + while head2 and head2[-1] in '/\\':
> + head2 = head2[:-1]
> + head = head2 or head
> + return d + head, tail
> +
> +
> +# Split a path in root and extension.
> +# The extension is everything starting at the last dot in the last
> +# pathname component; the root is everything before that.
> +# It is always true that root + ext == p.
> +
> +def splitext(p):
> + return genericpath._splitext(p, sep, altsep, extsep)
> +splitext.__doc__ = genericpath._splitext.__doc__
> +
> +
> +# Return the tail (basename) part of a path.
> +
> +def basename(p):
> + """Returns the final component of a pathname"""
> + return split(p)[1]
> +
> +
> +# Return the head (dirname) part of a path.
> +
> +def dirname(p):
> + """Returns the directory component of a pathname"""
> + return split(p)[0]
> +
> +# Is a path a symbolic link?
> +# This will always return false on systems where posix.lstat doesn't exist.
> +
> +def islink(path):
> + """Test for symbolic link.
> + On WindowsNT/95 and OS/2 always returns false
> + """
> + return False
> +
> +# alias exists to lexists
> +lexists = exists
> +
> +# Is a path a mount point? Either a root (with or without drive letter)
> +# or an UNC path with at most a / or \ after the mount point.
> +
> +def ismount(path):
> + """Test whether a path is a mount point (defined as root of drive)"""
> + unc, rest = splitunc(path)
> + if unc:
> + return rest in ("", "/", "\\")
> + p = splitdrive(path)[1]
> + return len(p) == 1 and p[0] in '/\\'
> +
> +
> +# Directory tree walk.
> +# For each directory under top (including top itself, but excluding
> +# '.' and '..'), func(arg, dirname, filenames) is called, where
> +# dirname is the name of the directory and filenames is the list
> +# of files (and subdirectories etc.) in the directory.
> +# The func may modify the filenames list, to implement a filter,
> +# or to impose a different order of visiting.
> +
> +def walk(top, func, arg):
> + """Directory tree walk with callback function.
> +
> + For each directory in the directory tree rooted at top (including top
> + itself, but excluding '.' and '..'), call func(arg, dirname, fnames).
> + dirname is the name of the directory, and fnames a list of the names of
> + the files and subdirectories in dirname (excluding '.' and '..'). func
> + may modify the fnames list in-place (e.g. via del or slice assignment),
> + and walk will only recurse into the subdirectories whose names remain in
> + fnames; this can be used to implement a filter, or to impose a specific
> + order of visiting. No semantics are defined for, or required of, arg,
> + beyond that arg is always passed to func. It can be used, e.g., to pass
> + a filename pattern, or a mutable object designed to accumulate
> + statistics. Passing None for arg is common."""
> + # warnings.warnpy3k() does not exist in Python 3; emit a DeprecationWarning instead
> + warnings.warn("In 3.x, os.path.walk is removed in favor of os.walk.",
> + DeprecationWarning, stacklevel=2)
> + try:
> + names = os.listdir(top)
> + except os.error:
> + return
> + func(arg, top, names)
> + for name in names:
> + name = join(top, name)
> + if isdir(name):
> + walk(name, func, arg)
> +
> +
> +# Expand paths beginning with '~' or '~user'.
> +# '~' means $HOME; '~user' means that user's home directory.
> +# If the path doesn't begin with '~', or if the user or $HOME is unknown,
> +# the path is returned unchanged (leaving error reporting to whatever
> +# function is called with the expanded path as argument).
> +# See also module 'glob' for expansion of *, ? and [...] in pathnames.
> +# (A function should also be defined to do full *sh-style environment
> +# variable expansion.)
> +
> +def expanduser(path):
> + """Expand ~ and ~user constructs.
> +
> + If user or $HOME is unknown, do nothing."""
> + if path[:1] != '~':
> + return path
> + i, n = 1, len(path)
> + while i < n and path[i] not in '/\\':
> + i = i + 1
> +
> + if 'HOME' in os.environ:
> + userhome = os.environ['HOME']
> + elif 'USERPROFILE' in os.environ:
> + userhome = os.environ['USERPROFILE']
> + elif not 'HOMEPATH' in os.environ:
> + return path
> + else:
> + try:
> + drive = os.environ['HOMEDRIVE']
> + except KeyError:
> + drive = ''
> + userhome = join(drive, os.environ['HOMEPATH'])
> +
> + if i != 1: #~user
> + userhome = join(dirname(userhome), path[1:i])
> +
> + return userhome + path[i:]
> +
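
For example, assuming HOME is set to a UEFI-style path (hypothetical value):

    >>> import os, ntpath
    >>> os.environ['HOME'] = 'fs0:\\users\\shell'
    >>> ntpath.expanduser('~\\scripts\\test.py')
    'fs0:\\users\\shell\\scripts\\test.py'
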
> +
> +# Expand paths containing shell variable substitutions.
> +# The following rules apply:
> +# - no expansion within single quotes
> +# - '$$' is translated into '$'
> +# - '%%' is translated into '%' if '%%' are not seen in %var1%%var2%
> +# - ${varname} is accepted.
> +# - $varname is accepted.
> +# - %varname% is accepted.
> +# - varnames can be made out of letters, digits and the characters '_-'
> +# (though is not verified in the ${varname} and %varname% cases)
> +# XXX With COMMAND.COM you can use any characters in a variable name,
> +# XXX except '^|<>='.
> +
> +def expandvars(path):
> + """Expand shell variables of the forms $var, ${var} and %var%.
> +
> + Unknown variables are left unchanged."""
> + if '$' not in path and '%' not in path:
> + return path
> + import string
> + varchars = string.ascii_letters + string.digits + '_-'
> + if isinstance(path, _unicode):
> + encoding = sys.getfilesystemencoding()
> + def getenv(var):
> + return os.environ[var.encode(encoding)].decode(encoding)
> + else:
> + def getenv(var):
> + return os.environ[var]
> + res = ''
> + index = 0
> + pathlen = len(path)
> + while index < pathlen:
> + c = path[index]
> + if c == '\'': # no expansion within single quotes
> + path = path[index + 1:]
> + pathlen = len(path)
> + try:
> + index = path.index('\'')
> + res = res + '\'' + path[:index + 1]
> + except ValueError:
> + res = res + c + path
> + index = pathlen - 1
> + elif c == '%': # variable or '%'
> + if path[index + 1:index + 2] == '%':
> + res = res + c
> + index = index + 1
> + else:
> + path = path[index+1:]
> + pathlen = len(path)
> + try:
> + index = path.index('%')
> + except ValueError:
> + res = res + '%' + path
> + index = pathlen - 1
> + else:
> + var = path[:index]
> + try:
> + res = res + getenv(var)
> + except KeyError:
> + res = res + '%' + var + '%'
> + elif c == '$': # variable or '$$'
> + if path[index + 1:index + 2] == '$':
> + res = res + c
> + index = index + 1
> + elif path[index + 1:index + 2] == '{':
> + path = path[index+2:]
> + pathlen = len(path)
> + try:
> + index = path.index('}')
> + var = path[:index]
> + try:
> + res = res + getenv(var)
> + except KeyError:
> + res = res + '${' + var + '}'
> + except ValueError:
> + res = res + '${' + path
> + index = pathlen - 1
> + else:
> + var = ''
> + index = index + 1
> + c = path[index:index + 1]
> + while c != '' and c in varchars:
> + var = var + c
> + index = index + 1
> + c = path[index:index + 1]
> + try:
> + res = res + getenv(var)
> + except KeyError:
> + res = res + '$' + var
> + if c != '':
> + index = index - 1
> + else:
> + res = res + c
> + index = index + 1
> + return res
> +
> +
> +# Normalize a path, e.g. A//B, A/./B and A/foo/../B all become A\B.
> +# Previously, this function also truncated pathnames to 8+3 format,
> +# but as this module is called "ntpath", that's obviously wrong!
> +
> +def normpath(path):
> + """Normalize path, eliminating double slashes, etc."""
> + # Preserve unicode (if path is unicode)
> + backslash, dot = (u'\\', u'.') if isinstance(path, _unicode) else ('\\', '.')
> + if path.startswith(('\\\\.\\', '\\\\?\\')):
> + # in the case of paths with these prefixes:
> + # \\.\ -> device names
> + # \\?\ -> literal paths
> + # do not do any normalization, but return the path unchanged
> + return path
> + path = path.replace("/", "\\")
> + prefix, path = splitdrive(path)
> + # We need to be careful here. If the prefix is empty, and the path starts
> + # with a backslash, it could either be an absolute path on the current
> + # drive (\dir1\dir2\file) or a UNC filename (\\server\mount\dir1\file). It
> + # is therefore imperative NOT to collapse multiple backslashes blindly in
> + # that case.
> + # The code below preserves multiple backslashes when there is no drive
> + # letter. This means that the invalid filename \\\a\b is preserved
> + # unchanged, where a\\\b is normalised to a\b. It's not clear that there
> + # is any better behaviour for such edge cases.
> + if prefix == '':
> + # No drive letter - preserve initial backslashes
> + while path[:1] == "\\":
> + prefix = prefix + backslash
> + path = path[1:]
> + else:
> + # We have a drive letter - collapse initial backslashes
> + if path.startswith("\\"):
> + prefix = prefix + backslash
> + path = path.lstrip("\\")
> + comps = path.split("\\")
> + i = 0
> + while i < len(comps):
> + if comps[i] in ('.', ''):
> + del comps[i]
> + elif comps[i] == '..':
> + if i > 0 and comps[i-1] != '..':
> + del comps[i-1:i+1]
> + i -= 1
> + elif i == 0 and prefix.endswith("\\"):
> + del comps[i]
> + else:
> + i += 1
> + else:
> + i += 1
> + # If the path is now empty, substitute '.'
> + if not prefix and not comps:
> + comps.append(dot)
> + return prefix + backslash.join(comps)
> +
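
An example of the normalization above with a UEFI-flavoured path (fs0: is a
hypothetical mapping):

    >>> import ntpath
    >>> ntpath.normpath('fs0:/efi/./tools/../boot/bootx64.efi')
    'fs0:\\efi\\boot\\bootx64.efi'
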
> +
> +# Return an absolute path.
> +try:
> + from nt import _getfullpathname
> +
> +except ImportError: # not running on Windows - mock up something sensible
> + def abspath(path):
> + """Return the absolute version of a path."""
> + if not isabs(path):
> + # os.getcwdu() does not exist in Python 3; bytes paths use os.getcwdb()
> + if isinstance(path, bytes):
> + cwd = os.getcwdb()
> + else:
> + cwd = os.getcwd()
> + path = join(cwd, path)
> + return normpath(path)
> +
> +else: # use native Windows method on Windows
> + def abspath(path):
> + """Return the absolute version of a path."""
> +
> + if path: # Empty path must return current working directory.
> + try:
> + path = _getfullpathname(path)
> + except WindowsError:
> + pass # Bad path - return unchanged.
> + elif isinstance(path, bytes):
> + path = os.getcwdb()
> + else:
> + path = os.getcwd()
> + return normpath(path)
> +
> +# realpath is a no-op on systems without islink support
> +realpath = abspath
> +# Win9x family and earlier have no Unicode filename support.
> +supports_unicode_filenames = (hasattr(sys, "getwindowsversion") and
> + sys.getwindowsversion()[3] >= 2)
> +
> +def _abspath_split(path):
> + abs = abspath(normpath(path))
> + prefix, rest = splitunc(abs)
> + is_unc = bool(prefix)
> + if not is_unc:
> + prefix, rest = splitdrive(abs)
> + return is_unc, prefix, [x for x in rest.split(sep) if x]
> +
> +def relpath(path, start=curdir):
> + """Return a relative version of a path"""
> +
> + if not path:
> + raise ValueError("no path specified")
> +
> + start_is_unc, start_prefix, start_list = _abspath_split(start)
> + path_is_unc, path_prefix, path_list = _abspath_split(path)
> +
> + if path_is_unc ^ start_is_unc:
> + raise ValueError("Cannot mix UNC and non-UNC paths (%s and %s)"
> + % (path, start))
> + if path_prefix.lower() != start_prefix.lower():
> + if path_is_unc:
> + raise ValueError("path is on UNC root %s, start on UNC root %s"
> + % (path_prefix, start_prefix))
> + else:
> + raise ValueError("path is on drive %s, start on drive %s"
> + % (path_prefix, start_prefix))
> + # Work out how much of the filepath is shared by start and path.
> + i = 0
> + for e1, e2 in zip(start_list, path_list):
> + if e1.lower() != e2.lower():
> + break
> + i += 1
> +
> + rel_list = [pardir] * (len(start_list)-i) + path_list[i:]
> + if not rel_list:
> + return curdir
> + return join(*rel_list)
> +
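
For instance, with the same hypothetical fs0: mapping used above:

    >>> import ntpath
    >>> ntpath.relpath('fs0:\\efi\\boot\\bootx64.efi', 'fs0:\\efi\\tools')
    '..\\boot\\bootx64.efi'
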
> +try:
> + # The genericpath.isdir implementation uses os.stat and checks the mode
> + # attribute to tell whether or not the path is a directory.
> + # This is overkill on Windows - just pass the path to GetFileAttributes
> + # and check the attribute from there.
> + from nt import _isdir as isdir
> +except ImportError:
> + # Use genericpath.isdir as imported above.
> + pass
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
> new file mode 100644
> index 00000000..b163199f
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
> @@ -0,0 +1,792 @@
> +
> +# Module 'os' -- OS routines for NT, Posix, or UEFI depending on what system we're on.
> +#
> +# Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
> +# Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
> +# This program and the accompanying materials are licensed and made available under
> +# the terms and conditions of the BSD License that accompanies this distribution.
> +# The full text of the license may be found at
> +# http://opensource.org/licenses/bsd-license.
> +#
> +# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +
> +r"""OS routines for NT, Posix, or UEFI depending on what system we're on.
> +
> +This exports:
> + - all functions from edk2, posix, nt, os2, or ce, e.g. unlink, stat, etc.
> + - os.path is one of the modules uefipath, posixpath, or ntpath
> + - os.name is 'edk2', 'posix', 'nt', 'os2', 'ce' or 'riscos'
> + - os.curdir is a string representing the current directory ('.' or ':')
> + - os.pardir is a string representing the parent directory ('..' or '::')
> + - os.sep is the (or a most common) pathname separator ('/' or ':' or '\\')
> + - os.extsep is the extension separator ('.' or '/')
> + - os.altsep is the alternate pathname separator (None or '/')
> + - os.pathsep is the component separator used in $PATH etc
> + - os.linesep is the line separator in text files ('\r' or '\n' or '\r\n')
> + - os.defpath is the default search path for executables
> + - os.devnull is the file path of the null device ('/dev/null', etc.)
> +
> +Programs that import and use 'os' stand a better chance of being
> +portable between different platforms. Of course, they must then
> +only use functions that are defined by all platforms (e.g., unlink
> +and opendir), and leave all pathname manipulation to os.path
> +(e.g., split and join).
> +"""
> +
> +#'
> +
> +import sys, errno
> +
> +_names = sys.builtin_module_names
> +
> +# Note: more names are added to __all__ later.
> +__all__ = ["altsep", "curdir", "pardir", "sep", "extsep", "pathsep", "linesep",
> + "defpath", "name", "path", "devnull",
> + "SEEK_SET", "SEEK_CUR", "SEEK_END"]
> +
> +def _get_exports_list(module):
> + try:
> + return list(module.__all__)
> + except AttributeError:
> + return [n for n in dir(module) if n[0] != '_']
> +
> +if 'posix' in _names:
> + name = 'posix'
> + linesep = '\n'
> + from posix import *
> + try:
> + from posix import _exit
> + except ImportError:
> + pass
> + import posixpath as path
> +
> + import posix
> + __all__.extend(_get_exports_list(posix))
> + del posix
> +
> +elif 'nt' in _names:
> + name = 'nt'
> + linesep = '\r\n'
> + from nt import *
> + try:
> + from nt import _exit
> + except ImportError:
> + pass
> + import ntpath as path
> +
> + import nt
> + __all__.extend(_get_exports_list(nt))
> + del nt
> +
> +elif 'os2' in _names:
> + name = 'os2'
> + linesep = '\r\n'
> + from os2 import *
> + try:
> + from os2 import _exit
> + except ImportError:
> + pass
> + if sys.version.find('EMX GCC') == -1:
> + import ntpath as path
> + else:
> + import os2emxpath as path
> + from _emx_link import link
> +
> + import os2
> + __all__.extend(_get_exports_list(os2))
> + del os2
> +
> +elif 'ce' in _names:
> + name = 'ce'
> + linesep = '\r\n'
> + from ce import *
> + try:
> + from ce import _exit
> + except ImportError:
> + pass
> + # We can use the standard Windows path.
> + import ntpath as path
> +
> + import ce
> + __all__.extend(_get_exports_list(ce))
> + del ce
> +
> +elif 'riscos' in _names:
> + name = 'riscos'
> + linesep = '\n'
> + from riscos import *
> + try:
> + from riscos import _exit
> + except ImportError:
> + pass
> + import riscospath as path
> +
> + import riscos
> + __all__.extend(_get_exports_list(riscos))
> + del riscos
> +
> +elif 'edk2' in _names:
> + name = 'edk2'
> + linesep = '\n'
> + from edk2 import *
> + try:
> + from edk2 import _exit
> + except ImportError:
> + pass
> + import ntpath as path
> + path.defpath = '.;/efi/tools/'
> +
> + import edk2
> + __all__.extend(_get_exports_list(edk2))
> + del edk2
> +
> +else:
> + raise ImportError('no os specific module found')
> +
> +sys.modules['os.path'] = path
> +from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
> + devnull)
> +
> +del _names
> +
> +# Python uses fixed values for the SEEK_ constants; they are mapped
> +# to native constants if necessary in posixmodule.c
> +SEEK_SET = 0
> +SEEK_CUR = 1
> +SEEK_END = 2
> +
> +#'
> +
> +# Super directory utilities.
> +# (Inspired by Eric Raymond; the doc strings are mostly his)
> +
> +def makedirs(name, mode=0o777):
> + """makedirs(path [, mode=0o777])
> +
> + Super-mkdir; create a leaf directory and all intermediate ones.
> + Works like mkdir, except that any intermediate path segment (not
> + just the rightmost) will be created if it does not exist. This is
> + recursive.
> +
> + """
> + head, tail = path.split(name)
> + if not tail:
> + head, tail = path.split(head)
> + if head and tail and not path.exists(head):
> + try:
> + makedirs(head, mode)
> + except OSError as e:
> + # be happy if someone already created the path
> + if e.errno != errno.EEXIST:
> + raise
> + if tail == curdir: # xxx/newdir/. exists if xxx/newdir exists
> + return
> + mkdir(name, mode)
> +
> +def removedirs(name):
> + """removedirs(path)
> +
> + Super-rmdir; remove a leaf directory and all empty intermediate
> + ones. Works like rmdir except that, if the leaf directory is
> + successfully removed, directories corresponding to rightmost path
> + segments will be pruned away until either the whole path is
> + consumed or an error occurs. Errors during this latter phase are
> + ignored -- they generally mean that a directory was not empty.
> +
> + """
> + rmdir(name)
> + head, tail = path.split(name)
> + if not tail:
> + head, tail = path.split(head)
> + while head and tail:
> + try:
> + rmdir(head)
> + except error:
> + break
> + head, tail = path.split(head)
> +
> +def renames(old, new):
> + """renames(old, new)
> +
> + Super-rename; create directories as necessary and delete any left
> + empty. Works like rename, except creation of any intermediate
> + directories needed to make the new pathname good is attempted
> + first. After the rename, directories corresponding to rightmost
> + path segments of the old name will be pruned until either the
> + whole path is consumed or a nonempty directory is found.
> +
> + Note: this function can fail with the new directory structure made
> + if you lack permissions needed to unlink the leaf directory or
> + file.
> +
> + """
> + head, tail = path.split(new)
> + if head and tail and not path.exists(head):
> + makedirs(head)
> + rename(old, new)
> + head, tail = path.split(old)
> + if head and tail:
> + try:
> + removedirs(head)
> + except error:
> + pass
> +
> +__all__.extend(["makedirs", "removedirs", "renames"])
> +
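
A short sketch of the helpers above under the UEFI shell (the directory names
are hypothetical):

    import os
    os.makedirs('fs0:\\logs\\2021\\sep')     # creates fs0:\logs and fs0:\logs\2021 first
    os.removedirs('fs0:\\logs\\2021\\sep')   # removes sep, then prunes empty parents
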
> +def walk(top, topdown=True, onerror=None, followlinks=False):
> + """Directory tree generator.
> +
> + For each directory in the directory tree rooted at top (including top
> + itself, but excluding '.' and '..'), yields a 3-tuple
> +
> + dirpath, dirnames, filenames
> +
> + dirpath is a string, the path to the directory. dirnames is a list of
> + the names of the subdirectories in dirpath (excluding '.' and '..').
> + filenames is a list of the names of the non-directory files in dirpath.
> + Note that the names in the lists are just names, with no path components.
> + To get a full path (which begins with top) to a file or directory in
> + dirpath, do os.path.join(dirpath, name).
> +
> + If optional arg 'topdown' is true or not specified, the triple for a
> + directory is generated before the triples for any of its subdirectories
> + (directories are generated top down). If topdown is false, the triple
> + for a directory is generated after the triples for all of its
> + subdirectories (directories are generated bottom up).
> +
> + When topdown is true, the caller can modify the dirnames list in-place
> + (e.g., via del or slice assignment), and walk will only recurse into the
> + subdirectories whose names remain in dirnames; this can be used to prune the
> + search, or to impose a specific order of visiting. Modifying dirnames when
> + topdown is false is ineffective, since the directories in dirnames have
> + already been generated by the time dirnames itself is generated. No matter
> + the value of topdown, the list of subdirectories is retrieved before the
> + tuples for the directory and its subdirectories are generated.
> +
> + By default errors from the os.listdir() call are ignored. If
> + optional arg 'onerror' is specified, it should be a function; it
> + will be called with one argument, an os.error instance. It can
> + report the error to continue with the walk, or raise the exception
> + to abort the walk. Note that the filename is available as the
> + filename attribute of the exception object.
> +
> + By default, os.walk does not follow symbolic links to subdirectories on
> + systems that support them. In order to get this functionality, set the
> + optional argument 'followlinks' to true.
> +
> + Caution: if you pass a relative pathname for top, don't change the
> + current working directory between resumptions of walk. walk never
> + changes the current directory, and assumes that the client doesn't
> + either.
> +
> + Example:
> +
> + import os
> + from os.path import join, getsize
> + for root, dirs, files in os.walk('python/Lib/email'):
> + print(root, "consumes", end=" ")
> + print(sum([getsize(join(root, name)) for name in files]), end=" ")
> + print("bytes in", len(files), "non-directory files")
> + if 'CVS' in dirs:
> + dirs.remove('CVS') # don't visit CVS directories
> +
> + """
> +
> + islink, join, isdir = path.islink, path.join, path.isdir
> +
> + # We may not have read permission for top, in which case we can't
> + # get a list of the files the directory contains. os.path.walk
> + # always suppressed the exception then, rather than blow up for a
> + # minor reason when (say) a thousand readable directories are still
> + # left to visit. That logic is copied here.
> + try:
> + # Note that listdir and error are globals in this module due
> + # to earlier import-*.
> + names = listdir(top)
> + except error as err:
> + if onerror is not None:
> + onerror(err)
> + return
> +
> + dirs, nondirs = [], []
> + for name in names:
> + if isdir(join(top, name)):
> + dirs.append(name)
> + else:
> + nondirs.append(name)
> +
> + if topdown:
> + yield top, dirs, nondirs
> + for name in dirs:
> + new_path = join(top, name)
> + if followlinks or not islink(new_path):
> + for x in walk(new_path, topdown, onerror, followlinks):
> + yield x
> + if not topdown:
> + yield top, dirs, nondirs
> +
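> +# Illustrative example (reviewer addition; mirrors the bottom-up example in
> +# the upstream os.walk documentation): with topdown=False the leaves are
> +# yielded first, which is the order needed to delete a tree from the bottom
> +# up ('build' is a placeholder path):
> +#
> +#   import os
> +#   for root, dirs, files in os.walk('build', topdown=False):
> +#       for name in files:
> +#           os.remove(os.path.join(root, name))
> +#       for name in dirs:
> +#           os.rmdir(os.path.join(root, name))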
> +__all__.append("walk")
> +
> +# Make sure os.environ exists, at least
> +try:
> + environ
> +except NameError:
> + environ = {}
> +
> +def execl(file, *args):
> + """execl(file, *args)
> +
> + Execute the executable file with argument list args, replacing the
> + current process. """
> + execv(file, args)
> +
> +def execle(file, *args):
> + """execle(file, *args, env)
> +
> + Execute the executable file with argument list args and
> + environment env, replacing the current process. """
> + env = args[-1]
> + execve(file, args[:-1], env)
> +
> +def execlp(file, *args):
> + """execlp(file, *args)
> +
> + Execute the executable file (which is searched for along $PATH)
> + with argument list args, replacing the current process. """
> + execvp(file, args)
> +
> +def execlpe(file, *args):
> + """execlpe(file, *args, env)
> +
> + Execute the executable file (which is searched for along $PATH)
> + with argument list args and environment env, replacing the current
> + process. """
> + env = args[-1]
> + execvpe(file, args[:-1], env)
> +
> +def execvp(file, args):
> + """execvp(file, args)
> +
> + Execute the executable file (which is searched for along $PATH)
> + with argument list args, replacing the current process.
> + args may be a list or tuple of strings. """
> + _execvpe(file, args)
> +
> +def execvpe(file, args, env):
> + """execvpe(file, args, env)
> +
> + Execute the executable file (which is searched for along $PATH)
> + with argument list args and environment env, replacing the
> + current process.
> + args may be a list or tuple of strings. """
> + _execvpe(file, args, env)
> +
> +__all__.extend(["execl","execle","execlp","execlpe","execvp","execvpe"])
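> +# Reviewer note (illustrative, not upstream code): the exec* variants follow
> +# a naming convention -- 'l' passes the arguments inline, 'v' passes a
> +# sequence, 'p' searches PATH, and 'e' takes an explicit environment.
> +# For example, the following pairs are equivalent ('/bin/echo' is just a
> +# placeholder executable):
> +#
> +#   execl('/bin/echo', 'echo', 'hello')
> +#   execv('/bin/echo', ['echo', 'hello'])
> +#
> +#   execle('/bin/echo', 'echo', 'hello', {'LANG': 'C'})
> +#   execve('/bin/echo', ['echo', 'hello'], {'LANG': 'C'})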
> +
> +def _execvpe(file, args, env=None):
> + if env is not None:
> + func = execve
> + argrest = (args, env)
> + else:
> + func = execv
> + argrest = (args,)
> + env = environ
> +
> + head, tail = path.split(file)
> + if head:
> + func(file, *argrest)
> + return
> + if 'PATH' in env:
> + envpath = env['PATH']
> + else:
> + envpath = defpath
> + PATH = envpath.split(pathsep)
> + saved_exc = None
> + saved_tb = None
> + for dir in PATH:
> + fullname = path.join(dir, file)
> + try:
> + func(fullname, *argrest)
> + except error as e:
> + tb = sys.exc_info()[2]
> + if (e.errno != errno.ENOENT and e.errno != errno.ENOTDIR
> + and saved_exc is None):
> + saved_exc = e
> + saved_tb = tb
> + if saved_exc:
> + raise error(saved_exc, saved_tb)
> + raise error(e, tb)
> +
> +# Change environ to automatically call putenv() if it exists
> +try:
> + # This will fail if there's no putenv
> + putenv
> +except NameError:
> + pass
> +else:
> + import UserDict
> +
> + # Fake unsetenv() for Windows
> + # not sure about os2 here but
> + # I'm guessing they are the same.
> +
> + if name in ('os2', 'nt'):
> + def unsetenv(key):
> + putenv(key, "")
> +
> + if name == "riscos":
> + # On RISC OS, all env access goes through getenv and putenv
> + from riscosenviron import _Environ
> + elif name in ('os2', 'nt'): # Where Env Var Names Must Be UPPERCASE
> + # But we store them as upper case
> + class _Environ(UserDict.IterableUserDict):
> + def __init__(self, environ):
> + UserDict.UserDict.__init__(self)
> + data = self.data
> + for k, v in environ.items():
> + data[k.upper()] = v
> + def __setitem__(self, key, item):
> + putenv(key, item)
> + self.data[key.upper()] = item
> + def __getitem__(self, key):
> + return self.data[key.upper()]
> + try:
> + unsetenv
> + except NameError:
> + def __delitem__(self, key):
> + del self.data[key.upper()]
> + else:
> + def __delitem__(self, key):
> + unsetenv(key)
> + del self.data[key.upper()]
> + def clear(self):
> + for key in self.data.keys():
> + unsetenv(key)
> + del self.data[key]
> + def pop(self, key, *args):
> + unsetenv(key)
> + return self.data.pop(key.upper(), *args)
> + def has_key(self, key):
> + return key.upper() in self.data
> + def __contains__(self, key):
> + return key.upper() in self.data
> + def get(self, key, failobj=None):
> + return self.data.get(key.upper(), failobj)
> + def update(self, dict=None, **kwargs):
> + if dict:
> + try:
> + keys = dict.keys()
> + except AttributeError:
> + # List of (key, value)
> + for k, v in dict:
> + self[k] = v
> + else:
> + # got keys
> + # cannot use items(), since mappings
> + # may not have them.
> + for k in keys:
> + self[k] = dict[k]
> + if kwargs:
> + self.update(kwargs)
> + def copy(self):
> + return dict(self)
> +
> + else: # Where Env Var Names Can Be Mixed Case
> + class _Environ(UserDict.IterableUserDict):
> + def __init__(self, environ):
> + UserDict.UserDict.__init__(self)
> + self.data = environ
> + def __setitem__(self, key, item):
> + putenv(key, item)
> + self.data[key] = item
> + def update(self, dict=None, **kwargs):
> + if dict:
> + try:
> + keys = dict.keys()
> + except AttributeError:
> + # List of (key, value)
> + for k, v in dict:
> + self[k] = v
> + else:
> + # got keys
> + # cannot use items(), since mappings
> + # may not have them.
> + for k in keys:
> + self[k] = dict[k]
> + if kwargs:
> + self.update(kwargs)
> + try:
> + unsetenv
> + except NameError:
> + pass
> + else:
> + def __delitem__(self, key):
> + unsetenv(key)
> + del self.data[key]
> + def clear(self):
> + for key in self.data.keys():
> + unsetenv(key)
> + del self.data[key]
> + def pop(self, key, *args):
> + unsetenv(key)
> + return self.data.pop(key, *args)
> + def copy(self):
> + return dict(self)
> +
> +
> + environ = _Environ(environ)
> +
> +def getenv(key, default=None):
> + """Get an environment variable, return None if it doesn't exist.
> + The optional second argument can specify an alternate default."""
> + return environ.get(key, default)
> +__all__.append("getenv")
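> +# Illustrative sketch (reviewer addition): with the _Environ wrapper above
> +# in place, assignments to os.environ are pushed through putenv()
> +# automatically, and getenv() is a convenience around environ.get()
> +# ('MY_FLAG' is a placeholder variable name):
> +#
> +#   import os
> +#   os.environ['MY_FLAG'] = '1'        # also calls putenv('MY_FLAG', '1')
> +#   os.getenv('MY_FLAG')               # -> '1'
> +#   os.getenv('MISSING', 'default')    # -> 'default'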
> +
> +def _exists(name):
> + return name in globals()
> +
> +# Supply spawn*() (probably only for Unix)
> +if _exists("fork") and not _exists("spawnv") and _exists("execv"):
> +
> + P_WAIT = 0
> + P_NOWAIT = P_NOWAITO = 1
> +
> + # XXX Should we support P_DETACH? I suppose it could fork()**2
> + # and close the std I/O streams. Also, P_OVERLAY is the same
> + # as execv*()?
> +
> + def _spawnvef(mode, file, args, env, func):
> + # Internal helper; func is the exec*() function to use
> + pid = fork()
> + if not pid:
> + # Child
> + try:
> + if env is None:
> + func(file, args)
> + else:
> + func(file, args, env)
> + except:
> + _exit(127)
> + else:
> + # Parent
> + if mode == P_NOWAIT:
> + return pid # Caller is responsible for waiting!
> + while 1:
> + wpid, sts = waitpid(pid, 0)
> + if WIFSTOPPED(sts):
> + continue
> + elif WIFSIGNALED(sts):
> + return -WTERMSIG(sts)
> + elif WIFEXITED(sts):
> + return WEXITSTATUS(sts)
> + else:
> + raise error("Not stopped, signaled or exited???")
> +
> + def spawnv(mode, file, args):
> + """spawnv(mode, file, args) -> integer
> +
> +Execute file with arguments from args in a subprocess.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return _spawnvef(mode, file, args, None, execv)
> +
> + def spawnve(mode, file, args, env):
> + """spawnve(mode, file, args, env) -> integer
> +
> +Execute file with arguments from args in a subprocess with the
> +specified environment.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return _spawnvef(mode, file, args, env, execve)
> +
> + # Note: spawnvp[e] isn't currently supported on Windows
> +
> + def spawnvp(mode, file, args):
> + """spawnvp(mode, file, args) -> integer
> +
> +Execute file (which is looked for along $PATH) with arguments from
> +args in a subprocess.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return _spawnvef(mode, file, args, None, execvp)
> +
> + def spawnvpe(mode, file, args, env):
> + """spawnvpe(mode, file, args, env) -> integer
> +
> +Execute file (which is looked for along $PATH) with arguments from
> +args in a subprocess with the supplied environment.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return _spawnvef(mode, file, args, env, execvpe)
> +
> +if _exists("spawnv"):
> + # These aren't supplied by the basic Windows code
> + # but can be easily implemented in Python
> +
> + def spawnl(mode, file, *args):
> + """spawnl(mode, file, *args) -> integer
> +
> +Execute file with arguments from args in a subprocess.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return spawnv(mode, file, args)
> +
> + def spawnle(mode, file, *args):
> + """spawnle(mode, file, *args, env) -> integer
> +
> +Execute file with arguments from args in a subprocess with the
> +supplied environment.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + env = args[-1]
> + return spawnve(mode, file, args[:-1], env)
> +
> +
> + __all__.extend(["spawnv", "spawnve", "spawnl", "spawnle",])
> +
> +
> +if _exists("spawnvp"):
> + # At the moment, Windows doesn't implement spawnvp[e],
> + # so it won't have spawnlp[e] either.
> + def spawnlp(mode, file, *args):
> + """spawnlp(mode, file, *args) -> integer
> +
> +Execute file (which is looked for along $PATH) with arguments from
> +args in a subprocess.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + return spawnvp(mode, file, args)
> +
> + def spawnlpe(mode, file, *args):
> + """spawnlpe(mode, file, *args, env) -> integer
> +
> +Execute file (which is looked for along $PATH) with arguments from
> +args in a subprocess with the supplied environment.
> +If mode == P_NOWAIT return the pid of the process.
> +If mode == P_WAIT return the process's exit code if it exits normally;
> +otherwise return -SIG, where SIG is the signal that killed it. """
> + env = args[-1]
> + return spawnvpe(mode, file, args[:-1], env)
> +
> +
> + __all__.extend(["spawnvp", "spawnvpe", "spawnlp", "spawnlpe",])
> +
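> +# Illustrative sketch (reviewer addition, not upstream code): the mode
> +# argument selects between waiting for the child and returning immediately
> +# ('/bin/true' is a placeholder executable):
> +#
> +#   rc = spawnv(P_WAIT, '/bin/true', ['true'])     # blocks; rc is exit code
> +#   pid = spawnv(P_NOWAIT, '/bin/true', ['true'])  # returns the child pid
> +#   _, status = waitpid(pid, 0)                    # caller must reap it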
> +
> +# Supply popen2 etc. (for Unix)
> +if _exists("fork"):
> + if not _exists("popen2"):
> + def popen2(cmd, mode="t", bufsize=-1):
> + """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
> + may be a sequence, in which case arguments will be passed directly to
> + the program without shell intervention (as with os.spawnv()). If 'cmd'
> + is a string it will be passed to the shell (as with os.system()). If
> + 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
> + file objects (child_stdin, child_stdout) are returned."""
> + import warnings
> + msg = "os.popen2 is deprecated. Use the subprocess module."
> + warnings.warn(msg, DeprecationWarning, stacklevel=2)
> +
> + import subprocess
> + PIPE = subprocess.PIPE
> + p = subprocess.Popen(cmd, shell=isinstance(cmd, str),
> + bufsize=bufsize, stdin=PIPE, stdout=PIPE,
> + close_fds=True)
> + return p.stdin, p.stdout
> + __all__.append("popen2")
> +
> + if not _exists("popen3"):
> + def popen3(cmd, mode="t", bufsize=-1):
> + """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
> + may be a sequence, in which case arguments will be passed directly to
> + the program without shell intervention (as with os.spawnv()). If 'cmd'
> + is a string it will be passed to the shell (as with os.system()). If
> + 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
> + file objects (child_stdin, child_stdout, child_stderr) are returned."""
> + import warnings
> + msg = "os.popen3 is deprecated. Use the subprocess module."
> + warnings.warn(msg, DeprecationWarning, stacklevel=2)
> +
> + import subprocess
> + PIPE = subprocess.PIPE
> + p = subprocess.Popen(cmd, shell=isinstance(cmd, str),
> + bufsize=bufsize, stdin=PIPE, stdout=PIPE,
> + stderr=PIPE, close_fds=True)
> + return p.stdin, p.stdout, p.stderr
> + __all__.append("popen3")
> +
> + if not _exists("popen4"):
> + def popen4(cmd, mode="t", bufsize=-1):
> + """Execute the shell command 'cmd' in a sub-process. On UNIX, 'cmd'
> + may be a sequence, in which case arguments will be passed directly to
> + the program without shell intervention (as with os.spawnv()). If 'cmd'
> + is a string it will be passed to the shell (as with os.system()). If
> + 'bufsize' is specified, it sets the buffer size for the I/O pipes. The
> + file objects (child_stdin, child_stdout_stderr) are returned."""
> + import warnings
> + msg = "os.popen4 is deprecated. Use the subprocess module."
> + warnings.warn(msg, DeprecationWarning, stacklevel=2)
> +
> + import subprocess
> + PIPE = subprocess.PIPE
> + p = subprocess.Popen(cmd, shell=isinstance(cmd, str),
> + bufsize=bufsize, stdin=PIPE, stdout=PIPE,
> + stderr=subprocess.STDOUT, close_fds=True)
> + return p.stdin, p.stdout
> + __all__.append("popen4")
> +
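> +# Reviewer note (illustrative, not upstream code): as the deprecation
> +# warnings above say, new code should use subprocess directly instead of
> +# popen2/3/4, e.g. ('ls -l' is a placeholder command):
> +#
> +#   import subprocess
> +#   p = subprocess.Popen(['ls', '-l'], stdin=subprocess.PIPE,
> +#                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
> +#   out, _ = p.communicate()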
> +#import copy_reg as _copy_reg
> +
> +def _make_stat_result(tup, dict):
> + return stat_result(tup, dict)
> +
> +def _pickle_stat_result(sr):
> + (type, args) = sr.__reduce__()
> + return (_make_stat_result, args)
> +
> +try:
> + _copy_reg.pickle(stat_result, _pickle_stat_result, _make_stat_result)
> +except NameError: # stat_result may not exist
> + pass
> +
> +def _make_statvfs_result(tup, dict):
> + return statvfs_result(tup, dict)
> +
> +def _pickle_statvfs_result(sr):
> + (type, args) = sr.__reduce__()
> + return (_make_statvfs_result, args)
> +
> +try:
> + _copy_reg.pickle(statvfs_result, _pickle_statvfs_result,
> + _make_statvfs_result)
> +except NameError: # statvfs_result may not exist
> + pass
> +
> +if not _exists("urandom"):
> + def urandom(n):
> + """urandom(n) -> str
> +
> + Return a string of n random bytes suitable for cryptographic use.
> +
> + """
> + if name != 'edk2':
> + try:
> + _urandomfd = open("/dev/urandom", O_RDONLY)
> + except (OSError, IOError):
> + raise NotImplementedError("/dev/urandom (or equivalent) not found")
> + try:
> + bs = b""
> + while n > len(bs):
> + bs += read(_urandomfd, n - len(bs))
> + finally:
> + close(_urandomfd)
> + else:
> + bs = b'/\xd3\x00\xa1\x11\x9b+\xef\x1dM-G\xd0\xa7\xd6v\x8f?o\xcaS\xd3aa\x03\xf8?b0b\xf2\xc3\xdek~\x19\xe0<\xbf\xe5! \xe23>\x04\x15\xa7u\x82\x0f\xf5~\xe0\xc3\xbe\x02\x17\x9a;\x90\xdaF\xa4\xb7\x9f\x05\x95}T^\x86b\x02b\xbe\xa8 ct\xbd\xd1>\tf\xe3\xf73\xeb\xae"\xdf\xea\xea\xa0I\xeb\xe2\xc6\xa5'
> + return bs[:n]
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
> new file mode 100644
> index 00000000..ec521ce7
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
> @@ -0,0 +1,2686 @@
> +#!/usr/bin/env python3
> +"""Generate Python documentation in HTML or text for interactive use.
> +
> +At the Python interactive prompt, calling help(thing) on a Python object
> +documents the object, and calling help() starts up an interactive
> +help session.
> +
> +Or, at the shell command line outside of Python:
> +
> +Run "pydoc <name>" to show documentation on something. <name> may be
> +the name of a function, module, package, or a dotted reference to a
> +class or function within a module or module in a package. If the
> +argument contains a path segment delimiter (e.g. slash on Unix,
> +backslash on Windows) it is treated as the path to a Python source file.
> +
> +Run "pydoc -k <keyword>" to search for a keyword in the synopsis lines
> +of all available modules.
> +
> +Run "pydoc -p <port>" to start an HTTP server on the given port on the
> +local machine. Port number 0 can be used to get an arbitrary unused port.
> +
> +Run "pydoc -b" to start an HTTP server on an arbitrary unused port and
> +open a Web browser to interactively browse documentation. The -p option
> +can be used with the -b option to explicitly specify the server port.
> +
> +Run "pydoc -w <name>" to write out the HTML documentation for a module
> +to a file named "<name>.html".
> +
> +Module docs for core modules are assumed to be in
> +
> + https://docs.python.org/X.Y/library/
> +
> +This can be overridden by setting the PYTHONDOCS environment variable
> +to a different URL or to a local directory containing the Library
> +Reference Manual pages.
> +"""
> +__all__ = ['help']
> +__author__ = "Ka-Ping Yee <ping@lfw.org>"
> +__date__ = "26 February 2001"
> +
> +__credits__ = """Guido van Rossum, for an excellent programming language.
> +Tommy Burnette, the original creator of manpy.
> +Paul Prescod, for all his work on onlinehelp.
> +Richard Chamberlain, for the first implementation of textdoc.
> +"""
> +
> +# Known bugs that can't be fixed here:
> +# - synopsis() cannot be prevented from clobbering existing
> +# loaded modules.
> +# - If the __file__ attribute on a module is a relative path and
> +# the current directory is changed with os.chdir(), an incorrect
> +# path will be displayed.
> +
> +import builtins
> +import importlib._bootstrap
> +import importlib._bootstrap_external
> +import importlib.machinery
> +import importlib.util
> +import inspect
> +import io
> +import os
> +import pkgutil
> +import platform
> +import re
> +import sys
> +import time
> +import tokenize
> +import urllib.parse
> +import warnings
> +from collections import deque
> +from reprlib import Repr
> +from traceback import format_exception_only
> +
> +
> +# --------------------------------------------------------- common routines
> +
> +def pathdirs():
> + """Convert sys.path into a list of absolute, existing, unique paths."""
> + dirs = []
> + normdirs = []
> + for dir in sys.path:
> + dir = os.path.abspath(dir or '.')
> + normdir = os.path.normcase(dir)
> + if normdir not in normdirs and os.path.isdir(dir):
> + dirs.append(dir)
> + normdirs.append(normdir)
> + return dirs
> +
> +def getdoc(object):
> + """Get the doc string or comments for an object."""
> + result = inspect.getdoc(object) or inspect.getcomments(object)
> + return result and re.sub('^ *\n', '', result.rstrip()) or ''
> +
> +def splitdoc(doc):
> + """Split a doc string into a synopsis line (if any) and the rest."""
> + lines = doc.strip().split('\n')
> + if len(lines) == 1:
> + return lines[0], ''
> + elif len(lines) >= 2 and not lines[1].rstrip():
> + return lines[0], '\n'.join(lines[2:])
> + return '', '\n'.join(lines)
> +
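> +# Illustrative sketch (reviewer addition):
> +#
> +#   splitdoc("One line summary.\n\nLonger description.")
> +#   # -> ('One line summary.', 'Longer description.')
> +#   splitdoc("No blank line\ncontinues here")
> +#   # -> ('', 'No blank line\ncontinues here')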
> +def classname(object, modname):
> + """Get a class name and qualify it with a module name if necessary."""
> + name = object.__name__
> + if object.__module__ != modname:
> + name = object.__module__ + '.' + name
> + return name
> +
> +def isdata(object):
> + """Check if an object is of a type that probably means it's data."""
> + return not (inspect.ismodule(object) or inspect.isclass(object) or
> + inspect.isroutine(object) or inspect.isframe(object) or
> + inspect.istraceback(object) or inspect.iscode(object))
> +
> +def replace(text, *pairs):
> + """Do a series of global replacements on a string."""
> + while pairs:
> + text = pairs[1].join(text.split(pairs[0]))
> + pairs = pairs[2:]
> + return text
> +
> +def cram(text, maxlen):
> + """Omit part of a string if needed to make it fit in a maximum length."""
> + if len(text) > maxlen:
> + pre = max(0, (maxlen-3)//2)
> + post = max(0, maxlen-3-pre)
> + return text[:pre] + '...' + text[len(text)-post:]
> + return text
> +
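> +# Illustrative sketch (reviewer addition): cram() keeps the head and tail of
> +# an over-long string and marks the elision with '...':
> +#
> +#   cram('abcdefghijklmnop', 10)   # -> 'abc...mnop'
> +#   cram('short', 10)              # -> 'short'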
> +_re_stripid = re.compile(r' at 0x[0-9a-f]{6,16}(>+)$', re.IGNORECASE)
> +def stripid(text):
> + """Remove the hexadecimal id from a Python object representation."""
> + # The behaviour of %p is implementation-dependent in terms of case.
> + return _re_stripid.sub(r'\1', text)
> +
> +def _is_some_method(obj):
> + return (inspect.isfunction(obj) or
> + inspect.ismethod(obj) or
> + inspect.isbuiltin(obj) or
> + inspect.ismethoddescriptor(obj))
> +
> +def _is_bound_method(fn):
> + """
> + Returns True if fn is a bound method, regardless of whether
> + fn was implemented in Python or in C.
> + """
> + if inspect.ismethod(fn):
> + return True
> + if inspect.isbuiltin(fn):
> + self = getattr(fn, '__self__', None)
> + return not (inspect.ismodule(self) or (self is None))
> + return False
> +
> +
> +def allmethods(cl):
> + methods = {}
> + for key, value in inspect.getmembers(cl, _is_some_method):
> + methods[key] = 1
> + for base in cl.__bases__:
> + methods.update(allmethods(base)) # all your base are belong to us
> + for key in methods.keys():
> + methods[key] = getattr(cl, key)
> + return methods
> +
> +def _split_list(s, predicate):
> + """Split sequence s via predicate, and return pair ([true], [false]).
> +
> + The return value is a 2-tuple of lists,
> + ([x for x in s if predicate(x)],
> + [x for x in s if not predicate(x)])
> + """
> +
> + yes = []
> + no = []
> + for x in s:
> + if predicate(x):
> + yes.append(x)
> + else:
> + no.append(x)
> + return yes, no
> +
> +def visiblename(name, all=None, obj=None):
> + """Decide whether to show documentation on a variable."""
> + # Certain special names are redundant or internal.
> + # XXX Remove __initializing__?
> + if name in {'__author__', '__builtins__', '__cached__', '__credits__',
> + '__date__', '__doc__', '__file__', '__spec__',
> + '__loader__', '__module__', '__name__', '__package__',
> + '__path__', '__qualname__', '__slots__', '__version__'}:
> + return 0
> + # Private names are hidden, but special names are displayed.
> + if name.startswith('__') and name.endswith('__'): return 1
> + # Namedtuples have public fields and methods with a single leading underscore
> + if name.startswith('_') and hasattr(obj, '_fields'):
> + return True
> + if all is not None:
> + # only document that which the programmer exported in __all__
> + return name in all
> + else:
> + return not name.startswith('_')
> +
> +def classify_class_attrs(object):
> + """Wrap inspect.classify_class_attrs, with fixup for data descriptors."""
> + results = []
> + for (name, kind, cls, value) in inspect.classify_class_attrs(object):
> + if inspect.isdatadescriptor(value):
> + kind = 'data descriptor'
> + results.append((name, kind, cls, value))
> + return results
> +
> +def sort_attributes(attrs, object):
> + 'Sort the attrs list in-place by _fields and then alphabetically by name'
> + # This allows data descriptors to be ordered according
> + # to a _fields attribute if present.
> + fields = getattr(object, '_fields', [])
> + try:
> + field_order = {name : i-len(fields) for (i, name) in enumerate(fields)}
> + except TypeError:
> + field_order = {}
> + keyfunc = lambda attr: (field_order.get(attr[0], 0), attr[0])
> + attrs.sort(key=keyfunc)
> +
> +# ----------------------------------------------------- module manipulation
> +
> +def ispackage(path):
> + """Guess whether a path refers to a package directory."""
> + if os.path.isdir(path):
> + for ext in ('.py', '.pyc'):
> + if os.path.isfile(os.path.join(path, '__init__' + ext)):
> + return True
> + return False
> +
> +def source_synopsis(file):
> + line = file.readline()
> + while line[:1] == '#' or not line.strip():
> + line = file.readline()
> + if not line: break
> + line = line.strip()
> + if line[:4] == 'r"""': line = line[1:]
> + if line[:3] == '"""':
> + line = line[3:]
> + if line[-1:] == '\\': line = line[:-1]
> + while not line.strip():
> + line = file.readline()
> + if not line: break
> + result = line.split('"""')[0].strip()
> + else: result = None
> + return result
> +
> +def synopsis(filename, cache={}):
> + """Get the one-line summary out of a module file."""
> + mtime = os.stat(filename).st_mtime
> + lastupdate, result = cache.get(filename, (None, None))
> + if lastupdate is None or lastupdate < mtime:
> + # Look for binary suffixes first, falling back to source.
> + if filename.endswith(tuple(importlib.machinery.BYTECODE_SUFFIXES)):
> + loader_cls = importlib.machinery.SourcelessFileLoader
> + elif filename.endswith(tuple(importlib.machinery.EXTENSION_SUFFIXES)):
> + loader_cls = importlib.machinery.ExtensionFileLoader
> + else:
> + loader_cls = None
> + # Now handle the choice.
> + if loader_cls is None:
> + # Must be a source file.
> + try:
> + file = tokenize.open(filename)
> + except OSError:
> + # module can't be opened, so skip it
> + return None
> + # text modules can be directly examined
> + with file:
> + result = source_synopsis(file)
> + else:
> + # Must be a binary module, which has to be imported.
> + loader = loader_cls('__temp__', filename)
> + # XXX We probably don't need to pass in the loader here.
> + spec = importlib.util.spec_from_file_location('__temp__', filename,
> + loader=loader)
> + try:
> + module = importlib._bootstrap._load(spec)
> + except:
> + return None
> + del sys.modules['__temp__']
> + result = module.__doc__.splitlines()[0] if module.__doc__ else None
> + # Cache the result.
> + cache[filename] = (mtime, result)
> + return result
> +
> +class ErrorDuringImport(Exception):
> + """Errors that occurred while trying to import something to document it."""
> + def __init__(self, filename, exc_info):
> + self.filename = filename
> + self.exc, self.value, self.tb = exc_info
> +
> + def __str__(self):
> + exc = self.exc.__name__
> + return 'problem in %s - %s: %s' % (self.filename, exc, self.value)
> +
> +def importfile(path):
> + """Import a Python source file or compiled file given its path."""
> + magic = importlib.util.MAGIC_NUMBER
> + with open(path, 'rb') as file:
> + is_bytecode = magic == file.read(len(magic))
> + filename = os.path.basename(path)
> + name, ext = os.path.splitext(filename)
> + if is_bytecode:
> + loader = importlib._bootstrap_external.SourcelessFileLoader(name, path)
> + else:
> + loader = importlib._bootstrap_external.SourceFileLoader(name, path)
> + # XXX We probably don't need to pass in the loader here.
> + spec = importlib.util.spec_from_file_location(name, path, loader=loader)
> + try:
> + return importlib._bootstrap._load(spec)
> + except:
> + raise ErrorDuringImport(path, sys.exc_info())
> +
> +def safeimport(path, forceload=0, cache={}):
> + """Import a module; handle errors; return None if the module isn't found.
> +
> + If the module *is* found but an exception occurs, it's wrapped in an
> + ErrorDuringImport exception and reraised. Unlike __import__, if a
> + package path is specified, the module at the end of the path is returned,
> + not the package at the beginning. If the optional 'forceload' argument
> + is 1, we reload the module from disk (unless it's a dynamic extension)."""
> + try:
> + # If forceload is 1 and the module has been previously loaded from
> + # disk, we always have to reload the module. Checking the file's
> + # mtime isn't good enough (e.g. the module could contain a class
> + # that inherits from another module that has changed).
> + if forceload and path in sys.modules:
> + if path not in sys.builtin_module_names:
> + # Remove the module from sys.modules and re-import to try
> + # and avoid problems with partially loaded modules.
> + # Also remove any submodules because they won't appear
> + # in the newly loaded module's namespace if they're already
> + # in sys.modules.
> + subs = [m for m in sys.modules if m.startswith(path + '.')]
> + for key in [path] + subs:
> + # Prevent garbage collection.
> + cache[key] = sys.modules[key]
> + del sys.modules[key]
> + module = __import__(path)
> + except:
> + # Did the error occur before or after the module was found?
> + (exc, value, tb) = info = sys.exc_info()
> + if path in sys.modules:
> + # An error occurred while executing the imported module.
> + raise ErrorDuringImport(sys.modules[path].__file__, info)
> + elif exc is SyntaxError:
> + # A SyntaxError occurred before we could execute the module.
> + raise ErrorDuringImport(value.filename, info)
> + elif issubclass(exc, ImportError) and value.name == path:
> + # No such module in the path.
> + return None
> + else:
> + # Some other error occurred during the importing process.
> + raise ErrorDuringImport(path, sys.exc_info())
> + for part in path.split('.')[1:]:
> + try: module = getattr(module, part)
> + except AttributeError: return None
> + return module
> +
> +# ---------------------------------------------------- formatter base class
> +
> +class Doc:
> +
> + PYTHONDOCS = os.environ.get("PYTHONDOCS",
> + "https://docs.python.org/%d.%d/library"
> + % sys.version_info[:2])
> +
> + def document(self, object, name=None, *args):
> + """Generate documentation for an object."""
> + args = (object, name) + args
> + # 'try' clause is to attempt to handle the possibility that inspect
> + # identifies something in a way that pydoc itself has issues handling;
> + # think 'super' and how it is a descriptor (which raises the exception
> + # by lacking a __name__ attribute) and an instance.
> + if inspect.isgetsetdescriptor(object): return self.docdata(*args)
> + if inspect.ismemberdescriptor(object): return self.docdata(*args)
> + try:
> + if inspect.ismodule(object): return self.docmodule(*args)
> + if inspect.isclass(object): return self.docclass(*args)
> + if inspect.isroutine(object): return self.docroutine(*args)
> + except AttributeError:
> + pass
> + if isinstance(object, property): return self.docproperty(*args)
> + return self.docother(*args)
> +
> + def fail(self, object, name=None, *args):
> + """Raise an exception for unimplemented types."""
> + message = "don't know how to document object%s of type %s" % (
> + name and ' ' + repr(name), type(object).__name__)
> + raise TypeError(message)
> +
> + docmodule = docclass = docroutine = docother = docproperty = docdata = fail
> +
> + def getdocloc(self, object,
> + basedir=os.path.join(sys.base_exec_prefix, "lib",
> + "python%d.%d" % sys.version_info[:2])):
> + """Return the location of module docs or None"""
> +
> + try:
> + file = inspect.getabsfile(object)
> + except TypeError:
> + file = '(built-in)'
> +
> + docloc = os.environ.get("PYTHONDOCS", self.PYTHONDOCS)
> +
> + basedir = os.path.normcase(basedir)
> + if (isinstance(object, type(os)) and
> + (object.__name__ in ('errno', 'exceptions', 'gc', 'imp',
> + 'marshal', 'posix', 'signal', 'sys',
> + '_thread', 'zipimport') or
> + (file.startswith(basedir) and
> + not file.startswith(os.path.join(basedir, 'site-packages')))) and
> + object.__name__ not in ('xml.etree', 'test.pydoc_mod')):
> + if docloc.startswith(("http://", "https://")):
> + docloc = "%s/%s" % (docloc.rstrip("/"), object.__name__.lower())
> + else:
> + docloc = os.path.join(docloc, object.__name__.lower() + ".html")
> + else:
> + docloc = None
> + return docloc
> +
> +# -------------------------------------------- HTML documentation generator
> +
> +class HTMLRepr(Repr):
> + """Class for safely making an HTML representation of a Python object."""
> + def __init__(self):
> + Repr.__init__(self)
> + self.maxlist = self.maxtuple = 20
> + self.maxdict = 10
> + self.maxstring = self.maxother = 100
> +
> + def escape(self, text):
> + return replace(text, '&', '&amp;', '<', '&lt;', '>', '&gt;')
> +
> + def repr(self, object):
> + return Repr.repr(self, object)
> +
> + def repr1(self, x, level):
> + if hasattr(type(x), '__name__'):
> + methodname = 'repr_' + '_'.join(type(x).__name__.split())
> + if hasattr(self, methodname):
> + return getattr(self, methodname)(x, level)
> + return self.escape(cram(stripid(repr(x)), self.maxother))
> +
> + def repr_string(self, x, level):
> + test = cram(x, self.maxstring)
> + testrepr = repr(test)
> + if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
> + # Backslashes are only literal in the string and are never
> + # needed to make any special characters, so show a raw string.
> + return 'r' + testrepr[0] + self.escape(test) + testrepr[0]
> + return re.sub(r'((\\[\\abfnrtv\'"]|\\[0-9]..|\\x..|\\u....)+)',
> + r'<font color="#c040c0">\1</font>',
> + self.escape(testrepr))
> +
> + repr_str = repr_string
> +
> + def repr_instance(self, x, level):
> + try:
> + return self.escape(cram(stripid(repr(x)), self.maxstring))
> + except:
> + return self.escape('<%s instance>' % x.__class__.__name__)
> +
> + repr_unicode = repr_string
> +
> +class HTMLDoc(Doc):
> + """Formatter class for HTML documentation."""
> +
> + # ------------------------------------------- HTML formatting utilities
> +
> + _repr_instance = HTMLRepr()
> + repr = _repr_instance.repr
> + escape = _repr_instance.escape
> +
> + def page(self, title, contents):
> + """Format an HTML page."""
> + return '''\
> +<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
> +<html><head><title>Python: %s</title>
> +<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
> +</head><body bgcolor="#f0f0f8">
> +%s
> +</body></html>''' % (title, contents)
> +
> + def heading(self, title, fgcol, bgcol, extras=''):
> + """Format a page heading."""
> + return '''
> +<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="heading">
> +<tr bgcolor="%s">
> +<td valign=bottom>&nbsp;<br>
> +<font color="%s" face="helvetica, arial">&nbsp;<br>%s</font></td
> +><td align=right valign=bottom
> +><font color="%s" face="helvetica, arial">%s</font></td></tr></table>
> + ''' % (bgcol, fgcol, title, fgcol, extras or '&nbsp;')
> +
> + def section(self, title, fgcol, bgcol, contents, width=6,
> + prelude='', marginalia=None, gap='&nbsp;'):
> + """Format a section with a heading."""
> + if marginalia is None:
> + marginalia = '<tt>' + '&nbsp;' * width + '</tt>'
> + result = '''<p>
> +<table width="100%%" cellspacing=0 cellpadding=2 border=0 summary="section">
> +<tr bgcolor="%s">
> +<td colspan=3 valign=bottom>&nbsp;<br>
> +<font color="%s" face="helvetica, arial">%s</font></td></tr>
> + ''' % (bgcol, fgcol, title)
> + if prelude:
> + result = result + '''
> +<tr bgcolor="%s"><td rowspan=2>%s</td>
> +<td colspan=2>%s</td></tr>
> +<tr><td>%s</td>''' % (bgcol, marginalia, prelude, gap)
> + else:
> + result = result + '''
> +<tr><td bgcolor="%s">%s</td><td>%s</td>''' % (bgcol, marginalia, gap)
> +
> + return result + '\n<td width="100%%">%s</td></tr></table>' % contents
> +
> + def bigsection(self, title, *args):
> + """Format a section with a big heading."""
> + title = '<big><strong>%s</strong></big>' % title
> + return self.section(title, *args)
> +
> + def preformat(self, text):
> + """Format literal preformatted text."""
> + text = self.escape(text.expandtabs())
> + return replace(text, '\n\n', '\n \n', '\n\n', '\n \n',
> + ' ', '&nbsp;', '\n', '<br>\n')
> +
> + def multicolumn(self, list, format, cols=4):
> + """Format a list of items into a multi-column list."""
> + result = ''
> + rows = (len(list)+cols-1)//cols
> + for col in range(cols):
> + result = result + '<td width="%d%%" valign=top>' % (100//cols)
> + for i in range(rows*col, rows*col+rows):
> + if i < len(list):
> + result = result + format(list[i]) + '<br>\n'
> + result = result + '</td>'
> + return '<table width="100%%" summary="list"><tr>%s</tr></table>' % result
> +
> + def grey(self, text): return '<font color="#909090">%s</font>' % text
> +
> + def namelink(self, name, *dicts):
> + """Make a link for an identifier, given name-to-URL mappings."""
> + for dict in dicts:
> + if name in dict:
> + return '<a href="%s">%s</a>' % (dict[name], name)
> + return name
> +
> + def classlink(self, object, modname):
> + """Make a link for a class."""
> + name, module = object.__name__, sys.modules.get(object.__module__)
> + if hasattr(module, name) and getattr(module, name) is object:
> + return '<a href="%s.html#%s">%s</a>' % (
> + module.__name__, name, classname(object, modname))
> + return classname(object, modname)
> +
> + def modulelink(self, object):
> + """Make a link for a module."""
> + return '<a href="%s.html">%s</a>' % (object.__name__, object.__name__)
> +
> + def modpkglink(self, modpkginfo):
> + """Make a link for a module or package to display in an index."""
> + name, path, ispackage, shadowed = modpkginfo
> + if shadowed:
> + return self.grey(name)
> + if path:
> + url = '%s.%s.html' % (path, name)
> + else:
> + url = '%s.html' % name
> + if ispackage:
> + text = '<strong>%s</strong> (package)' % name
> + else:
> + text = name
> + return '<a href="%s">%s</a>' % (url, text)
> +
> + def filelink(self, url, path):
> + """Make a link to source file."""
> + return '<a href="file:%s">%s</a>' % (url, path)
> +
> + def markup(self, text, escape=None, funcs={}, classes={}, methods={}):
> + """Mark up some plain text, given a context of symbols to look for.
> + Each context dictionary maps object names to anchor names."""
> + escape = escape or self.escape
> + results = []
> + here = 0
> + pattern = re.compile(r'\b((http|ftp)://\S+[\w/]|'
> + r'RFC[- ]?(\d+)|'
> + r'PEP[- ]?(\d+)|'
> + r'(self\.)?(\w+))')
> + while True:
> + match = pattern.search(text, here)
> + if not match: break
> + start, end = match.span()
> + results.append(escape(text[here:start]))
> +
> + all, scheme, rfc, pep, selfdot, name = match.groups()
> + if scheme:
> + url = escape(all).replace('"', '&quot;')
> + results.append('<a href="%s">%s</a>' % (url, url))
> + elif rfc:
> + url = 'http://www.rfc-editor.org/rfc/rfc%d.txt' % int(rfc)
> + results.append('<a href="%s">%s</a>' % (url, escape(all)))
> + elif pep:
> + url = 'http://www.python.org/dev/peps/pep-%04d/' % int(pep)
> + results.append('<a href="%s">%s</a>' % (url, escape(all)))
> + elif selfdot:
> + # Create a link for methods like 'self.method(...)'
> + # and use <strong> for attributes like 'self.attr'
> + if text[end:end+1] == '(':
> + results.append('self.' + self.namelink(name, methods))
> + else:
> + results.append('self.<strong>%s</strong>' % name)
> + elif text[end:end+1] == '(':
> + results.append(self.namelink(name, methods, funcs, classes))
> + else:
> + results.append(self.namelink(name, classes))
> + here = end
> + results.append(escape(text[here:]))
> + return ''.join(results)
> +
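> + # Illustrative sketch (reviewer addition): markup() turns bare URLs and
> + # RFC/PEP references into links and HTML-escapes everything else, e.g.:
> + #
> + #   html = HTMLDoc()
> + #   html.markup('See PEP 8 and RFC 2822 at http://example.com/')
> + #   # PEP/RFC numbers and the URL come back wrapped in <a href=...> links;
> + #   # the rest of the text is escaped.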
> + # ---------------------------------------------- type-specific routines
> +
> + def formattree(self, tree, modname, parent=None):
> + """Produce HTML for a class tree as given by inspect.getclasstree()."""
> + result = ''
> + for entry in tree:
> + if type(entry) is type(()):
> + c, bases = entry
> + result = result + '<dt><font face="helvetica, arial">'
> + result = result + self.classlink(c, modname)
> + if bases and bases != (parent,):
> + parents = []
> + for base in bases:
> + parents.append(self.classlink(base, modname))
> + result = result + '(' + ', '.join(parents) + ')'
> + result = result + '\n</font></dt>'
> + elif type(entry) is type([]):
> + result = result + '<dd>\n%s</dd>\n' % self.formattree(
> + entry, modname, c)
> + return '<dl>\n%s</dl>\n' % result
> +
> + def docmodule(self, object, name=None, mod=None, *ignored):
> + """Produce HTML documentation for a module object."""
> + name = object.__name__ # ignore the passed-in name
> + try:
> + all = object.__all__
> + except AttributeError:
> + all = None
> + parts = name.split('.')
> + links = []
> + for i in range(len(parts)-1):
> + links.append(
> + '<a href="%s.html"><font color="#ffffff">%s</font></a>' %
> + ('.'.join(parts[:i+1]), parts[i]))
> + linkedname = '.'.join(links + parts[-1:])
> + head = '<big><big><strong>%s</strong></big></big>' % linkedname
> + try:
> + path = inspect.getabsfile(object)
> + url = urllib.parse.quote(path)
> + filelink = self.filelink(url, path)
> + except TypeError:
> + filelink = '(built-in)'
> + info = []
> + if hasattr(object, '__version__'):
> + version = str(object.__version__)
> + if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
> + version = version[11:-1].strip()
> + info.append('version %s' % self.escape(version))
> + if hasattr(object, '__date__'):
> + info.append(self.escape(str(object.__date__)))
> + if info:
> + head = head + ' (%s)' % ', '.join(info)
> + docloc = self.getdocloc(object)
> + if docloc is not None:
> + docloc = '<br><a href="%(docloc)s">Module Reference</a>' % locals()
> + else:
> + docloc = ''
> + result = self.heading(
> + head, '#ffffff', '#7799ee',
> + '<a href=".">index</a><br>' + filelink + docloc)
> +
> + modules = inspect.getmembers(object, inspect.ismodule)
> +
> + classes, cdict = [], {}
> + for key, value in inspect.getmembers(object, inspect.isclass):
> + # if __all__ exists, believe it. Otherwise use old heuristic.
> + if (all is not None or
> + (inspect.getmodule(value) or object) is object):
> + if visiblename(key, all, object):
> + classes.append((key, value))
> + cdict[key] = cdict[value] = '#' + key
> + for key, value in classes:
> + for base in value.__bases__:
> + key, modname = base.__name__, base.__module__
> + module = sys.modules.get(modname)
> + if modname != name and module and hasattr(module, key):
> + if getattr(module, key) is base:
> + if not key in cdict:
> + cdict[key] = cdict[base] = modname + '.html#' + key
> + funcs, fdict = [], {}
> + for key, value in inspect.getmembers(object, inspect.isroutine):
> + # if __all__ exists, believe it. Otherwise use old heuristic.
> + if (all is not None or
> + inspect.isbuiltin(value) or inspect.getmodule(value) is object):
> + if visiblename(key, all, object):
> + funcs.append((key, value))
> + fdict[key] = '#-' + key
> + if inspect.isfunction(value): fdict[value] = fdict[key]
> + data = []
> + for key, value in inspect.getmembers(object, isdata):
> + if visiblename(key, all, object):
> + data.append((key, value))
> +
> + doc = self.markup(getdoc(object), self.preformat, fdict, cdict)
> + doc = doc and '<tt>%s</tt>' % doc
> + result = result + '<p>%s</p>\n' % doc
> +
> + if hasattr(object, '__path__'):
> + modpkgs = []
> + for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
> + modpkgs.append((modname, name, ispkg, 0))
> + modpkgs.sort()
> + contents = self.multicolumn(modpkgs, self.modpkglink)
> + result = result + self.bigsection(
> + 'Package Contents', '#ffffff', '#aa55cc', contents)
> + elif modules:
> + contents = self.multicolumn(
> + modules, lambda t: self.modulelink(t[1]))
> + result = result + self.bigsection(
> + 'Modules', '#ffffff', '#aa55cc', contents)
> +
> + if classes:
> + classlist = [value for (key, value) in classes]
> + contents = [
> + self.formattree(inspect.getclasstree(classlist, 1), name)]
> + for key, value in classes:
> + contents.append(self.document(value, key, name, fdict, cdict))
> + result = result + self.bigsection(
> + 'Classes', '#ffffff', '#ee77aa', ' '.join(contents))
> + if funcs:
> + contents = []
> + for key, value in funcs:
> + contents.append(self.document(value, key, name, fdict, cdict))
> + result = result + self.bigsection(
> + 'Functions', '#ffffff', '#eeaa77', ' '.join(contents))
> + if data:
> + contents = []
> + for key, value in data:
> + contents.append(self.document(value, key))
> + result = result + self.bigsection(
> + 'Data', '#ffffff', '#55aa55', '<br>\n'.join(contents))
> + if hasattr(object, '__author__'):
> + contents = self.markup(str(object.__author__), self.preformat)
> + result = result + self.bigsection(
> + 'Author', '#ffffff', '#7799ee', contents)
> + if hasattr(object, '__credits__'):
> + contents = self.markup(str(object.__credits__), self.preformat)
> + result = result + self.bigsection(
> + 'Credits', '#ffffff', '#7799ee', contents)
> +
> + return result
> +
> + def docclass(self, object, name=None, mod=None, funcs={}, classes={},
> + *ignored):
> + """Produce HTML documentation for a class object."""
> + realname = object.__name__
> + name = name or realname
> + bases = object.__bases__
> +
> + contents = []
> + push = contents.append
> +
> + # Cute little class to pump out a horizontal rule between sections.
> + class HorizontalRule:
> + def __init__(self):
> + self.needone = 0
> + def maybe(self):
> + if self.needone:
> + push('<hr>\n')
> + self.needone = 1
> + hr = HorizontalRule()
> +
> + # List the mro, if non-trivial.
> + mro = deque(inspect.getmro(object))
> + if len(mro) > 2:
> + hr.maybe()
> + push('<dl><dt>Method resolution order:</dt>\n')
> + for base in mro:
> + push('<dd>%s</dd>\n' % self.classlink(base,
> + object.__module__))
> + push('</dl>\n')
> +
> + def spill(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + try:
> + value = getattr(object, name)
> + except Exception:
> + # Some descriptors may meet a failure in their __get__.
> + # (bug #1785)
> + push(self._docdescriptor(name, value, mod))
> + else:
> + push(self.document(value, name, mod,
> + funcs, classes, mdict, object))
> + push('\n')
> + return attrs
> +
> + def spilldescriptors(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + push(self._docdescriptor(name, value, mod))
> + return attrs
> +
> + def spilldata(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + base = self.docother(getattr(object, name), name, mod)
> + if callable(value) or inspect.isdatadescriptor(value):
> + doc = getattr(value, "__doc__", None)
> + else:
> + doc = None
> + if doc is None:
> + push('<dl><dt>%s</dl>\n' % base)
> + else:
> + doc = self.markup(getdoc(value), self.preformat,
> + funcs, classes, mdict)
> + doc = '<dd><tt>%s</tt>' % doc
> + push('<dl><dt>%s%s</dl>\n' % (base, doc))
> + push('\n')
> + return attrs
> +
> + attrs = [(name, kind, cls, value)
> + for name, kind, cls, value in classify_class_attrs(object)
> + if visiblename(name, obj=object)]
> +
> + mdict = {}
> + for key, kind, homecls, value in attrs:
> + mdict[key] = anchor = '#' + name + '-' + key
> + try:
> + value = getattr(object, name)
> + except Exception:
> + # Some descriptors may meet a failure in their __get__.
> + # (bug #1785)
> + pass
> + try:
> + # The value may not be hashable (e.g., a data attr with
> + # a dict or list value).
> + mdict[value] = anchor
> + except TypeError:
> + pass
> +
> + while attrs:
> + if mro:
> + thisclass = mro.popleft()
> + else:
> + thisclass = attrs[0][2]
> + attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)
> +
> + if thisclass is builtins.object:
> + attrs = inherited
> + continue
> + elif thisclass is object:
> + tag = 'defined here'
> + else:
> + tag = 'inherited from %s' % self.classlink(thisclass,
> + object.__module__)
> + tag += ':<br>\n'
> +
> + sort_attributes(attrs, object)
> +
> + # Pump out the attrs, segregated by kind.
> + attrs = spill('Methods %s' % tag, attrs,
> + lambda t: t[1] == 'method')
> + attrs = spill('Class methods %s' % tag, attrs,
> + lambda t: t[1] == 'class method')
> + attrs = spill('Static methods %s' % tag, attrs,
> + lambda t: t[1] == 'static method')
> + attrs = spilldescriptors('Data descriptors %s' % tag, attrs,
> + lambda t: t[1] == 'data descriptor')
> + attrs = spilldata('Data and other attributes %s' % tag, attrs,
> + lambda t: t[1] == 'data')
> + assert attrs == []
> + attrs = inherited
> +
> + contents = ''.join(contents)
> +
> + if name == realname:
> + title = '<a name="%s">class <strong>%s</strong></a>' % (
> + name, realname)
> + else:
> + title = '<strong>%s</strong> = <a name="%s">class %s</a>' % (
> + name, name, realname)
> + if bases:
> + parents = []
> + for base in bases:
> + parents.append(self.classlink(base, object.__module__))
> + title = title + '(%s)' % ', '.join(parents)
> + doc = self.markup(getdoc(object), self.preformat, funcs, classes, mdict)
> + doc = doc and '<tt>%s<br>&nbsp;</tt>' % doc
> +
> + return self.section(title, '#000000', '#ffc8d8', contents, 3, doc)
> +
> + def formatvalue(self, object):
> + """Format an argument default value as text."""
> + return self.grey('=' + self.repr(object))
> +
> + def docroutine(self, object, name=None, mod=None,
> + funcs={}, classes={}, methods={}, cl=None):
> + """Produce HTML documentation for a function or method object."""
> + realname = object.__name__
> + name = name or realname
> + anchor = (cl and cl.__name__ or '') + '-' + name
> + note = ''
> + skipdocs = 0
> + if _is_bound_method(object):
> + imclass = object.__self__.__class__
> + if cl:
> + if imclass is not cl:
> + note = ' from ' + self.classlink(imclass, mod)
> + else:
> + if object.__self__ is not None:
> + note = ' method of %s instance' % self.classlink(
> + object.__self__.__class__, mod)
> + else:
> + note = ' unbound %s method' % self.classlink(imclass,mod)
> +
> + if name == realname:
> + title = '<a name="%s"><strong>%s</strong></a>' % (anchor, realname)
> + else:
> + if cl and inspect.getattr_static(cl, realname, []) is object:
> + reallink = '<a href="#%s">%s</a>' % (
> + cl.__name__ + '-' + realname, realname)
> + skipdocs = 1
> + else:
> + reallink = realname
> + title = '<a name="%s"><strong>%s</strong></a> = %s' % (
> + anchor, name, reallink)
> + argspec = None
> + if inspect.isroutine(object):
> + try:
> + signature = inspect.signature(object)
> + except (ValueError, TypeError):
> + signature = None
> + if signature:
> + argspec = str(signature)
> + if realname == '<lambda>':
> + title = '<strong>%s</strong> <em>lambda</em> ' % name
> + # XXX lambda's won't usually have func_annotations['return']
> + # since the syntax doesn't support it, but it is possible.
> + # So removing parentheses isn't truly safe.
> + argspec = argspec[1:-1] # remove parentheses
> + if not argspec:
> + argspec = '(...)'
> +
> + decl = title + self.escape(argspec) + (note and self.grey(
> + '<font face="helvetica, arial">%s</font>' % note))
> +
> + if skipdocs:
> + return '<dl><dt>%s</dt></dl>\n' % decl
> + else:
> + doc = self.markup(
> + getdoc(object), self.preformat, funcs, classes, methods)
> + doc = doc and '<dd><tt>%s</tt></dd>' % doc
> + return '<dl><dt>%s</dt>%s</dl>\n' % (decl, doc)
> +
> + def _docdescriptor(self, name, value, mod):
> + results = []
> + push = results.append
> +
> + if name:
> + push('<dl><dt><strong>%s</strong></dt>\n' % name)
> + if value.__doc__ is not None:
> + doc = self.markup(getdoc(value), self.preformat)
> + push('<dd><tt>%s</tt></dd>\n' % doc)
> + push('</dl>\n')
> +
> + return ''.join(results)
> +
> + def docproperty(self, object, name=None, mod=None, cl=None):
> + """Produce html documentation for a property."""
> + return self._docdescriptor(name, object, mod)
> +
> + def docother(self, object, name=None, mod=None, *ignored):
> + """Produce HTML documentation for a data object."""
> + lhs = name and '<strong>%s</strong> = ' % name or ''
> + return lhs + self.repr(object)
> +
> + def docdata(self, object, name=None, mod=None, cl=None):
> + """Produce html documentation for a data descriptor."""
> + return self._docdescriptor(name, object, mod)
> +
> + def index(self, dir, shadowed=None):
> + """Generate an HTML index for a directory of modules."""
> + modpkgs = []
> + if shadowed is None: shadowed = {}
> + for importer, name, ispkg in pkgutil.iter_modules([dir]):
> + if any((0xD800 <= ord(ch) <= 0xDFFF) for ch in name):
> + # ignore a module if its name contains a surrogate character
> + continue
> + modpkgs.append((name, '', ispkg, name in shadowed))
> + shadowed[name] = 1
> +
> + modpkgs.sort()
> + contents = self.multicolumn(modpkgs, self.modpkglink)
> + return self.bigsection(dir, '#ffffff', '#ee77aa', contents)
> +
> +# -------------------------------------------- text documentation generator
> +
> +class TextRepr(Repr):
> + """Class for safely making a text representation of a Python object."""
> + def __init__(self):
> + Repr.__init__(self)
> + self.maxlist = self.maxtuple = 20
> + self.maxdict = 10
> + self.maxstring = self.maxother = 100
> +
> + def repr1(self, x, level):
> + if hasattr(type(x), '__name__'):
> + methodname = 'repr_' + '_'.join(type(x).__name__.split())
> + if hasattr(self, methodname):
> + return getattr(self, methodname)(x, level)
> + return cram(stripid(repr(x)), self.maxother)
> +
> + def repr_string(self, x, level):
> + test = cram(x, self.maxstring)
> + testrepr = repr(test)
> + if '\\' in test and '\\' not in replace(testrepr, r'\\', ''):
> + # Backslashes are only literal in the string and are never
> + # needed to make any special characters, so show a raw string.
> + return 'r' + testrepr[0] + test + testrepr[0]
> + return testrepr
> +
> + repr_str = repr_string
> +
> + def repr_instance(self, x, level):
> + try:
> + return cram(stripid(repr(x)), self.maxstring)
> + except:
> + return '<%s instance>' % x.__class__.__name__
> +
> +class TextDoc(Doc):
> + """Formatter class for text documentation."""
> +
> + # ------------------------------------------- text formatting utilities
> +
> + _repr_instance = TextRepr()
> + repr = _repr_instance.repr
> +
> + def bold(self, text):
> + """Format a string in bold by overstriking."""
> + return ''.join(ch + '\b' + ch for ch in text)
> +
> + def indent(self, text, prefix=' '):
> + """Indent text by prepending a given prefix to each line."""
> + if not text: return ''
> + lines = [prefix + line for line in text.split('\n')]
> + if lines: lines[-1] = lines[-1].rstrip()
> + return '\n'.join(lines)
> +
> + def section(self, title, contents):
> + """Format a section with a given heading."""
> + clean_contents = self.indent(contents).rstrip()
> + return self.bold(title) + '\n' + clean_contents + '\n\n'
> +
> + # ---------------------------------------------- type-specific routines
> +
> + def formattree(self, tree, modname, parent=None, prefix=''):
> + """Render in text a class tree as returned by inspect.getclasstree()."""
> + result = ''
> + for entry in tree:
> + if type(entry) is type(()):
> + c, bases = entry
> + result = result + prefix + classname(c, modname)
> + if bases and bases != (parent,):
> + parents = (classname(c, modname) for c in bases)
> + result = result + '(%s)' % ', '.join(parents)
> + result = result + '\n'
> + elif type(entry) is type([]):
> + result = result + self.formattree(
> + entry, modname, c, prefix + ' ')
> + return result
> +
> + def docmodule(self, object, name=None, mod=None):
> + """Produce text documentation for a given module object."""
> + name = object.__name__ # ignore the passed-in name
> + synop, desc = splitdoc(getdoc(object))
> + result = self.section('NAME', name + (synop and ' - ' + synop))
> + all = getattr(object, '__all__', None)
> + docloc = self.getdocloc(object)
> + if docloc is not None:
> + result = result + self.section('MODULE REFERENCE', docloc + """
> +
> +The following documentation is automatically generated from the Python
> +source files. It may be incomplete, incorrect or include features that
> +are considered implementation detail and may vary between Python
> +implementations. When in doubt, consult the module reference at the
> +location listed above.
> +""")
> +
> + if desc:
> + result = result + self.section('DESCRIPTION', desc)
> +
> + classes = []
> + for key, value in inspect.getmembers(object, inspect.isclass):
> + # if __all__ exists, believe it. Otherwise use old heuristic.
> + if (all is not None
> + or (inspect.getmodule(value) or object) is object):
> + if visiblename(key, all, object):
> + classes.append((key, value))
> + funcs = []
> + for key, value in inspect.getmembers(object, inspect.isroutine):
> + # if __all__ exists, believe it. Otherwise use old heuristic.
> + if (all is not None or
> + inspect.isbuiltin(value) or inspect.getmodule(value) is object):
> + if visiblename(key, all, object):
> + funcs.append((key, value))
> + data = []
> + for key, value in inspect.getmembers(object, isdata):
> + if visiblename(key, all, object):
> + data.append((key, value))
> +
> + modpkgs = []
> + modpkgs_names = set()
> + if hasattr(object, '__path__'):
> + for importer, modname, ispkg in pkgutil.iter_modules(object.__path__):
> + modpkgs_names.add(modname)
> + if ispkg:
> + modpkgs.append(modname + ' (package)')
> + else:
> + modpkgs.append(modname)
> +
> + modpkgs.sort()
> + result = result + self.section(
> + 'PACKAGE CONTENTS', '\n'.join(modpkgs))
> +
> + # Detect submodules as sometimes created by C extensions
> + submodules = []
> + for key, value in inspect.getmembers(object, inspect.ismodule):
> + if value.__name__.startswith(name + '.') and key not in modpkgs_names:
> + submodules.append(key)
> + if submodules:
> + submodules.sort()
> + result = result + self.section(
> + 'SUBMODULES', '\n'.join(submodules))
> +
> + if classes:
> + classlist = [value for key, value in classes]
> + contents = [self.formattree(
> + inspect.getclasstree(classlist, 1), name)]
> + for key, value in classes:
> + contents.append(self.document(value, key, name))
> + result = result + self.section('CLASSES', '\n'.join(contents))
> +
> + if funcs:
> + contents = []
> + for key, value in funcs:
> + contents.append(self.document(value, key, name))
> + result = result + self.section('FUNCTIONS', '\n'.join(contents))
> +
> + if data:
> + contents = []
> + for key, value in data:
> + contents.append(self.docother(value, key, name, maxlen=70))
> + result = result + self.section('DATA', '\n'.join(contents))
> +
> + if hasattr(object, '__version__'):
> + version = str(object.__version__)
> + if version[:11] == '$' + 'Revision: ' and version[-1:] == '$':
> + version = version[11:-1].strip()
> + result = result + self.section('VERSION', version)
> + if hasattr(object, '__date__'):
> + result = result + self.section('DATE', str(object.__date__))
> + if hasattr(object, '__author__'):
> + result = result + self.section('AUTHOR', str(object.__author__))
> + if hasattr(object, '__credits__'):
> + result = result + self.section('CREDITS', str(object.__credits__))
> + try:
> + file = inspect.getabsfile(object)
> + except TypeError:
> + file = '(built-in)'
> + result = result + self.section('FILE', file)
> + return result
> +
> + def docclass(self, object, name=None, mod=None, *ignored):
> + """Produce text documentation for a given class object."""
> + realname = object.__name__
> + name = name or realname
> + bases = object.__bases__
> +
> + def makename(c, m=object.__module__):
> + return classname(c, m)
> +
> + if name == realname:
> + title = 'class ' + self.bold(realname)
> + else:
> + title = self.bold(name) + ' = class ' + realname
> + if bases:
> + parents = map(makename, bases)
> + title = title + '(%s)' % ', '.join(parents)
> +
> + doc = getdoc(object)
> + contents = doc and [doc + '\n'] or []
> + push = contents.append
> +
> + # List the mro, if non-trivial.
> + mro = deque(inspect.getmro(object))
> + if len(mro) > 2:
> + push("Method resolution order:")
> + for base in mro:
> + push(' ' + makename(base))
> + push('')
> +
> + # Cute little class to pump out a horizontal rule between sections.
> + class HorizontalRule:
> + def __init__(self):
> + self.needone = 0
> + def maybe(self):
> + if self.needone:
> + push('-' * 70)
> + self.needone = 1
> + hr = HorizontalRule()
> +
> + def spill(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + try:
> + value = getattr(object, name)
> + except Exception:
> + # Some descriptors may meet a failure in their __get__.
> + # (bug #1785)
> + push(self._docdescriptor(name, value, mod))
> + else:
> + push(self.document(value,
> + name, mod, object))
> + return attrs
> +
> + def spilldescriptors(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + push(self._docdescriptor(name, value, mod))
> + return attrs
> +
> + def spilldata(msg, attrs, predicate):
> + ok, attrs = _split_list(attrs, predicate)
> + if ok:
> + hr.maybe()
> + push(msg)
> + for name, kind, homecls, value in ok:
> + if callable(value) or inspect.isdatadescriptor(value):
> + doc = getdoc(value)
> + else:
> + doc = None
> + try:
> + obj = getattr(object, name)
> + except AttributeError:
> + obj = homecls.__dict__[name]
> + push(self.docother(obj, name, mod, maxlen=70, doc=doc) +
> + '\n')
> + return attrs
> +
> + attrs = [(name, kind, cls, value)
> + for name, kind, cls, value in classify_class_attrs(object)
> + if visiblename(name, obj=object)]
> +
> + while attrs:
> + if mro:
> + thisclass = mro.popleft()
> + else:
> + thisclass = attrs[0][2]
> + attrs, inherited = _split_list(attrs, lambda t: t[2] is thisclass)
> +
> + if thisclass is builtins.object:
> + attrs = inherited
> + continue
> + elif thisclass is object:
> + tag = "defined here"
> + else:
> + tag = "inherited from %s" % classname(thisclass,
> + object.__module__)
> +
> + sort_attributes(attrs, object)
> +
> + # Pump out the attrs, segregated by kind.
> + attrs = spill("Methods %s:\n" % tag, attrs,
> + lambda t: t[1] == 'method')
> + attrs = spill("Class methods %s:\n" % tag, attrs,
> + lambda t: t[1] == 'class method')
> + attrs = spill("Static methods %s:\n" % tag, attrs,
> + lambda t: t[1] == 'static method')
> + attrs = spilldescriptors("Data descriptors %s:\n" % tag, attrs,
> + lambda t: t[1] == 'data descriptor')
> + attrs = spilldata("Data and other attributes %s:\n" % tag, attrs,
> + lambda t: t[1] == 'data')
> +
> + assert attrs == []
> + attrs = inherited
> +
> + contents = '\n'.join(contents)
> + if not contents:
> + return title + '\n'
> + return title + '\n' + self.indent(contents.rstrip(), ' | ') + '\n'
> +
> + def formatvalue(self, object):
> + """Format an argument default value as text."""
> + return '=' + self.repr(object)
> +
> + def docroutine(self, object, name=None, mod=None, cl=None):
> + """Produce text documentation for a function or method object."""
> + realname = object.__name__
> + name = name or realname
> + note = ''
> + skipdocs = 0
> + if _is_bound_method(object):
> + imclass = object.__self__.__class__
> + if cl:
> + if imclass is not cl:
> + note = ' from ' + classname(imclass, mod)
> + else:
> + if object.__self__ is not None:
> + note = ' method of %s instance' % classname(
> + object.__self__.__class__, mod)
> + else:
> + note = ' unbound %s method' % classname(imclass,mod)
> +
> + if name == realname:
> + title = self.bold(realname)
> + else:
> + if cl and inspect.getattr_static(cl, realname, []) is object:
> + skipdocs = 1
> + title = self.bold(name) + ' = ' + realname
> + argspec = None
> +
> + if inspect.isroutine(object):
> + try:
> + signature = inspect.signature(object)
> + except (ValueError, TypeError):
> + signature = None
> + if signature:
> + argspec = str(signature)
> + if realname == '<lambda>':
> + title = self.bold(name) + ' lambda '
> +                    # XXX lambdas won't usually have func_annotations['return']
> +                    # since the syntax doesn't support it, but it is possible.
> + # So removing parentheses isn't truly safe.
> + argspec = argspec[1:-1] # remove parentheses
> + if not argspec:
> + argspec = '(...)'
> + decl = title + argspec + note
> +
> + if skipdocs:
> + return decl + '\n'
> + else:
> + doc = getdoc(object) or ''
> + return decl + '\n' + (doc and self.indent(doc).rstrip() + '\n')
> +
> + def _docdescriptor(self, name, value, mod):
> + results = []
> + push = results.append
> +
> + if name:
> + push(self.bold(name))
> + push('\n')
> + doc = getdoc(value) or ''
> + if doc:
> + push(self.indent(doc))
> + push('\n')
> + return ''.join(results)
> +
> + def docproperty(self, object, name=None, mod=None, cl=None):
> + """Produce text documentation for a property."""
> + return self._docdescriptor(name, object, mod)
> +
> + def docdata(self, object, name=None, mod=None, cl=None):
> + """Produce text documentation for a data descriptor."""
> + return self._docdescriptor(name, object, mod)
> +
> + def docother(self, object, name=None, mod=None, parent=None, maxlen=None, doc=None):
> + """Produce text documentation for a data object."""
> + repr = self.repr(object)
> + if maxlen:
> + line = (name and name + ' = ' or '') + repr
> + chop = maxlen - len(line)
> + if chop < 0: repr = repr[:chop] + '...'
> + line = (name and self.bold(name) + ' = ' or '') + repr
> + if doc is not None:
> + line += '\n' + self.indent(str(doc))
> + return line
> +
> +class _PlainTextDoc(TextDoc):
> + """Subclass of TextDoc which overrides string styling"""
> + def bold(self, text):
> + return text
> +
> +# --------------------------------------------------------- user interfaces
> +
> +def pager(text):
> + """The first time this is called, determine what kind of pager to use."""
> + global pager
> + pager = getpager()
> + pager(text)
> +
> +def getpager():
> + """Decide what method to use for paging through text."""
> + if not hasattr(sys.stdin, "isatty"):
> + return plainpager
> + if not hasattr(sys.stdout, "isatty"):
> + return plainpager
> + if not sys.stdin.isatty() or not sys.stdout.isatty():
> + return plainpager
> + use_pager = os.environ.get('MANPAGER') or os.environ.get('PAGER')
> + if use_pager:
> + if sys.platform == 'win32': # pipes completely broken in Windows
> + return lambda text: tempfilepager(plain(text), use_pager)
> + elif sys.platform == 'uefi':
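> +            # UEFI: like the Windows case above, hand the pager a temp file, not a pipe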
> + return lambda text: tempfilepager(plain(text), use_pager)
> + elif os.environ.get('TERM') in ('dumb', 'emacs'):
> + return lambda text: pipepager(plain(text), use_pager)
> + else:
> + return lambda text: pipepager(text, use_pager)
> + if os.environ.get('TERM') in ('dumb', 'emacs'):
> + return plainpager
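> +    # UEFI: print directly instead of probing for 'more'/'less' below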
> + if sys.platform == 'uefi':
> + return plainpager
> + if sys.platform == 'win32':
> + return lambda text: tempfilepager(plain(text), 'more <')
> + if hasattr(os, 'system') and os.system('(less) 2>/dev/null') == 0:
> + return lambda text: pipepager(text, 'less')
> +
> + import tempfile
> + (fd, filename) = tempfile.mkstemp()
> + os.close(fd)
> + try:
> + if hasattr(os, 'system') and os.system('more "%s"' % filename) == 0:
> + return lambda text: pipepager(text, 'more')
> + else:
> + return ttypager
> + finally:
> + os.unlink(filename)
> +
> +def plain(text):
> + """Remove boldface formatting from text."""
> + return re.sub('.\b', '', text)
> +
> +def pipepager(text, cmd):
> + """Page through text by feeding it to another program."""
> + import sys
> + if sys.platform != 'uefi':
> + import subprocess
> + proc = subprocess.Popen(cmd, shell=True, stdin=subprocess.PIPE)
> + try:
> + with io.TextIOWrapper(proc.stdin, errors='backslashreplace') as pipe:
> + try:
> + pipe.write(text)
> + except KeyboardInterrupt:
> + # We've hereby abandoned whatever text hasn't been written,
> + # but the pager is still in control of the terminal.
> + pass
> + except OSError:
> + pass # Ignore broken pipes caused by quitting the pager program.
> + while True:
> + try:
> + proc.wait()
> + break
> + except KeyboardInterrupt:
> + # Ignore ctl-c like the pager itself does. Otherwise the pager is
> + # left running and the terminal is in raw mode and unusable.
> + pass
> + else:
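> +        # UEFI: drive the pager through os.popen instead of subprocess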
> + pipe = os.popen(cmd, 'w')
> + try:
> + pipe.write(_encode(text))
> + pipe.close()
> + except IOError:
> + pass # Ignore broken pipes caused by quitting the pager program.
> +
> +def tempfilepager(text, cmd):
> + """Page through text by invoking a program on a temporary file."""
> + import tempfile
> + filename = tempfile.mktemp()
> + with open(filename, 'w', errors='backslashreplace') as file:
> + file.write(text)
> + try:
> + os.system(cmd + ' "' + filename + '"')
> + finally:
> + os.unlink(filename)
> +
> +def _escape_stdout(text):
> + # Escape non-encodable characters to avoid encoding errors later
> + encoding = getattr(sys.stdout, 'encoding', None) or 'utf-8'
> + return text.encode(encoding, 'backslashreplace').decode(encoding)
> +
> +def ttypager(text):
> + """Page through text on a text terminal."""
> + lines = plain(_escape_stdout(text)).split('\n')
> + try:
> + import tty
> + fd = sys.stdin.fileno()
> + old = tty.tcgetattr(fd)
> + tty.setcbreak(fd)
> + getchar = lambda: sys.stdin.read(1)
> + except (ImportError, AttributeError, io.UnsupportedOperation):
> + tty = None
> + getchar = lambda: sys.stdin.readline()[:-1][:1]
> +
> + try:
> + try:
> + h = int(os.environ.get('LINES', 0))
> + except ValueError:
> + h = 0
> + if h <= 1:
> + h = 25
> + r = inc = h - 1
> + sys.stdout.write('\n'.join(lines[:inc]) + '\n')
> + while lines[r:]:
> + sys.stdout.write('-- more --')
> + sys.stdout.flush()
> + c = getchar()
> +
> + if c in ('q', 'Q'):
> + sys.stdout.write('\r \r')
> + break
> + elif c in ('\r', '\n'):
> + sys.stdout.write('\r \r' + lines[r] + '\n')
> + r = r + 1
> + continue
> + if c in ('b', 'B', '\x1b'):
> + r = r - inc - inc
> + if r < 0: r = 0
> + sys.stdout.write('\n' + '\n'.join(lines[r:r+inc]) + '\n')
> + r = r + inc
> +
> + finally:
> + if tty:
> + tty.tcsetattr(fd, tty.TCSAFLUSH, old)
> +
> +def plainpager(text):
> + """Simply print unformatted text. This is the ultimate fallback."""
> + sys.stdout.write(plain(_escape_stdout(text)))
> +
> +def describe(thing):
> + """Produce a short description of the given thing."""
> + if inspect.ismodule(thing):
> + if thing.__name__ in sys.builtin_module_names:
> + return 'built-in module ' + thing.__name__
> + if hasattr(thing, '__path__'):
> + return 'package ' + thing.__name__
> + else:
> + return 'module ' + thing.__name__
> + if inspect.isbuiltin(thing):
> + return 'built-in function ' + thing.__name__
> + if inspect.isgetsetdescriptor(thing):
> + return 'getset descriptor %s.%s.%s' % (
> + thing.__objclass__.__module__, thing.__objclass__.__name__,
> + thing.__name__)
> + if inspect.ismemberdescriptor(thing):
> + return 'member descriptor %s.%s.%s' % (
> + thing.__objclass__.__module__, thing.__objclass__.__name__,
> + thing.__name__)
> + if inspect.isclass(thing):
> + return 'class ' + thing.__name__
> + if inspect.isfunction(thing):
> + return 'function ' + thing.__name__
> + if inspect.ismethod(thing):
> + return 'method ' + thing.__name__
> + return type(thing).__name__
> +
> +def locate(path, forceload=0):
> + """Locate an object by name or dotted path, importing as necessary."""
> + parts = [part for part in path.split('.') if part]
> + module, n = None, 0
> + while n < len(parts):
> + nextmodule = safeimport('.'.join(parts[:n+1]), forceload)
> + if nextmodule: module, n = nextmodule, n + 1
> + else: break
> + if module:
> + object = module
> + else:
> + object = builtins
> + for part in parts[n:]:
> + try:
> + object = getattr(object, part)
> + except AttributeError:
> + return None
> + return object
> +
> +# --------------------------------------- interactive interpreter interface
> +
> +text = TextDoc()
> +plaintext = _PlainTextDoc()
> +html = HTMLDoc()
> +
> +def resolve(thing, forceload=0):
> + """Given an object or a path to an object, get the object and its name."""
> + if isinstance(thing, str):
> + object = locate(thing, forceload)
> + if object is None:
> + raise ImportError('''\
> +No Python documentation found for %r.
> +Use help() to get the interactive help utility.
> +Use help(str) for help on the str class.''' % thing)
> + return object, thing
> + else:
> + name = getattr(thing, '__name__', None)
> + return thing, name if isinstance(name, str) else None
> +
> +def render_doc(thing, title='Python Library Documentation: %s', forceload=0,
> + renderer=None):
> + """Render text documentation, given an object or a path to an object."""
> + if renderer is None:
> + renderer = text
> + object, name = resolve(thing, forceload)
> + desc = describe(object)
> + module = inspect.getmodule(object)
> + if name and '.' in name:
> + desc += ' in ' + name[:name.rfind('.')]
> + elif module and module is not object:
> + desc += ' in module ' + module.__name__
> +
> + if not (inspect.ismodule(object) or
> + inspect.isclass(object) or
> + inspect.isroutine(object) or
> + inspect.isgetsetdescriptor(object) or
> + inspect.ismemberdescriptor(object) or
> + isinstance(object, property)):
> + # If the passed object is a piece of data or an instance,
> + # document its available methods instead of its value.
> + object = type(object)
> + desc += ' object'
> + return title % desc + '\n\n' + renderer.document(object, name)
> +
> +def doc(thing, title='Python Library Documentation: %s', forceload=0,
> + output=None):
> + """Display text documentation, given an object or a path to an object."""
> + try:
> + if output is None:
> + pager(render_doc(thing, title, forceload))
> + else:
> + output.write(render_doc(thing, title, forceload, plaintext))
> + except (ImportError, ErrorDuringImport) as value:
> + print(value)
> +
> +def writedoc(thing, forceload=0):
> + """Write HTML documentation to a file in the current directory."""
> + try:
> + object, name = resolve(thing, forceload)
> + page = html.page(describe(object), html.document(object, name))
> + with open(name + '.html', 'w', encoding='utf-8') as file:
> + file.write(page)
> + print('wrote', name + '.html')
> + except (ImportError, ErrorDuringImport) as value:
> + print(value)
> +
> +def writedocs(dir, pkgpath='', done=None):
> + """Write out HTML documentation for all modules in a directory tree."""
> + if done is None: done = {}
> + for importer, modname, ispkg in pkgutil.walk_packages([dir], pkgpath):
> + writedoc(modname)
> + return
> +
> +class Helper:
> +
> + # These dictionaries map a topic name to either an alias, or a tuple
> + # (label, seealso-items). The "label" is the label of the corresponding
> + # section in the .rst file under Doc/ and an index into the dictionary
> + # in pydoc_data/topics.py.
> + #
> + # CAUTION: if you change one of these dictionaries, be sure to adapt the
> + # list of needed labels in Doc/tools/pyspecific.py and
> + # regenerate the pydoc_data/topics.py file by running
> + # make pydoc-topics
> + # in Doc/ and copying the output file into the Lib/ directory.
> +
> + keywords = {
> + 'False': '',
> + 'None': '',
> + 'True': '',
> + 'and': 'BOOLEAN',
> + 'as': 'with',
> + 'assert': ('assert', ''),
> + 'break': ('break', 'while for'),
> + 'class': ('class', 'CLASSES SPECIALMETHODS'),
> + 'continue': ('continue', 'while for'),
> + 'def': ('function', ''),
> + 'del': ('del', 'BASICMETHODS'),
> + 'elif': 'if',
> + 'else': ('else', 'while for'),
> + 'except': 'try',
> + 'finally': 'try',
> + 'for': ('for', 'break continue while'),
> + 'from': 'import',
> + 'global': ('global', 'nonlocal NAMESPACES'),
> + 'if': ('if', 'TRUTHVALUE'),
> + 'import': ('import', 'MODULES'),
> + 'in': ('in', 'SEQUENCEMETHODS'),
> + 'is': 'COMPARISON',
> + 'lambda': ('lambda', 'FUNCTIONS'),
> + 'nonlocal': ('nonlocal', 'global NAMESPACES'),
> + 'not': 'BOOLEAN',
> + 'or': 'BOOLEAN',
> + 'pass': ('pass', ''),
> + 'raise': ('raise', 'EXCEPTIONS'),
> + 'return': ('return', 'FUNCTIONS'),
> + 'try': ('try', 'EXCEPTIONS'),
> + 'while': ('while', 'break continue if TRUTHVALUE'),
> + 'with': ('with', 'CONTEXTMANAGERS EXCEPTIONS yield'),
> + 'yield': ('yield', ''),
> + }
> + # Either add symbols to this dictionary or to the symbols dictionary
> + # directly: Whichever is easier. They are merged later.
> + _strprefixes = [p + q for p in ('b', 'f', 'r', 'u') for q in ("'", '"')]
> + _symbols_inverse = {
> + 'STRINGS' : ("'", "'''", '"', '"""', *_strprefixes),
> + 'OPERATORS' : ('+', '-', '*', '**', '/', '//', '%', '<<', '>>', '&',
> + '|', '^', '~', '<', '>', '<=', '>=', '==', '!=', '<>'),
> + 'COMPARISON' : ('<', '>', '<=', '>=', '==', '!=', '<>'),
> + 'UNARY' : ('-', '~'),
> + 'AUGMENTEDASSIGNMENT' : ('+=', '-=', '*=', '/=', '%=', '&=', '|=',
> + '^=', '<<=', '>>=', '**=', '//='),
> + 'BITWISE' : ('<<', '>>', '&', '|', '^', '~'),
> + 'COMPLEX' : ('j', 'J')
> + }
> + symbols = {
> + '%': 'OPERATORS FORMATTING',
> + '**': 'POWER',
> + ',': 'TUPLES LISTS FUNCTIONS',
> + '.': 'ATTRIBUTES FLOAT MODULES OBJECTS',
> + '...': 'ELLIPSIS',
> + ':': 'SLICINGS DICTIONARYLITERALS',
> + '@': 'def class',
> + '\\': 'STRINGS',
> + '_': 'PRIVATENAMES',
> + '__': 'PRIVATENAMES SPECIALMETHODS',
> + '`': 'BACKQUOTES',
> + '(': 'TUPLES FUNCTIONS CALLS',
> + ')': 'TUPLES FUNCTIONS CALLS',
> + '[': 'LISTS SUBSCRIPTS SLICINGS',
> + ']': 'LISTS SUBSCRIPTS SLICINGS'
> + }
> + for topic, symbols_ in _symbols_inverse.items():
> + for symbol in symbols_:
> + topics = symbols.get(symbol, topic)
> + if topic not in topics:
> + topics = topics + ' ' + topic
> + symbols[symbol] = topics
> +
> + topics = {
> + 'TYPES': ('types', 'STRINGS UNICODE NUMBERS SEQUENCES MAPPINGS '
> + 'FUNCTIONS CLASSES MODULES FILES inspect'),
> + 'STRINGS': ('strings', 'str UNICODE SEQUENCES STRINGMETHODS '
> + 'FORMATTING TYPES'),
> + 'STRINGMETHODS': ('string-methods', 'STRINGS FORMATTING'),
> + 'FORMATTING': ('formatstrings', 'OPERATORS'),
> + 'UNICODE': ('strings', 'encodings unicode SEQUENCES STRINGMETHODS '
> + 'FORMATTING TYPES'),
> + 'NUMBERS': ('numbers', 'INTEGER FLOAT COMPLEX TYPES'),
> + 'INTEGER': ('integers', 'int range'),
> + 'FLOAT': ('floating', 'float math'),
> + 'COMPLEX': ('imaginary', 'complex cmath'),
> + 'SEQUENCES': ('typesseq', 'STRINGMETHODS FORMATTING range LISTS'),
> + 'MAPPINGS': 'DICTIONARIES',
> + 'FUNCTIONS': ('typesfunctions', 'def TYPES'),
> + 'METHODS': ('typesmethods', 'class def CLASSES TYPES'),
> + 'CODEOBJECTS': ('bltin-code-objects', 'compile FUNCTIONS TYPES'),
> + 'TYPEOBJECTS': ('bltin-type-objects', 'types TYPES'),
> + 'FRAMEOBJECTS': 'TYPES',
> + 'TRACEBACKS': 'TYPES',
> + 'NONE': ('bltin-null-object', ''),
> + 'ELLIPSIS': ('bltin-ellipsis-object', 'SLICINGS'),
> + 'SPECIALATTRIBUTES': ('specialattrs', ''),
> + 'CLASSES': ('types', 'class SPECIALMETHODS PRIVATENAMES'),
> + 'MODULES': ('typesmodules', 'import'),
> + 'PACKAGES': 'import',
> + 'EXPRESSIONS': ('operator-summary', 'lambda or and not in is BOOLEAN '
> + 'COMPARISON BITWISE SHIFTING BINARY FORMATTING POWER '
> + 'UNARY ATTRIBUTES SUBSCRIPTS SLICINGS CALLS TUPLES '
> + 'LISTS DICTIONARIES'),
> + 'OPERATORS': 'EXPRESSIONS',
> + 'PRECEDENCE': 'EXPRESSIONS',
> + 'OBJECTS': ('objects', 'TYPES'),
> + 'SPECIALMETHODS': ('specialnames', 'BASICMETHODS ATTRIBUTEMETHODS '
> + 'CALLABLEMETHODS SEQUENCEMETHODS MAPPINGMETHODS '
> + 'NUMBERMETHODS CLASSES'),
> + 'BASICMETHODS': ('customization', 'hash repr str SPECIALMETHODS'),
> + 'ATTRIBUTEMETHODS': ('attribute-access', 'ATTRIBUTES SPECIALMETHODS'),
> + 'CALLABLEMETHODS': ('callable-types', 'CALLS SPECIALMETHODS'),
> + 'SEQUENCEMETHODS': ('sequence-types', 'SEQUENCES SEQUENCEMETHODS '
> + 'SPECIALMETHODS'),
> + 'MAPPINGMETHODS': ('sequence-types', 'MAPPINGS SPECIALMETHODS'),
> + 'NUMBERMETHODS': ('numeric-types', 'NUMBERS AUGMENTEDASSIGNMENT '
> + 'SPECIALMETHODS'),
> + 'EXECUTION': ('execmodel', 'NAMESPACES DYNAMICFEATURES EXCEPTIONS'),
> + 'NAMESPACES': ('naming', 'global nonlocal ASSIGNMENT DELETION DYNAMICFEATURES'),
> + 'DYNAMICFEATURES': ('dynamic-features', ''),
> + 'SCOPING': 'NAMESPACES',
> + 'FRAMES': 'NAMESPACES',
> + 'EXCEPTIONS': ('exceptions', 'try except finally raise'),
> + 'CONVERSIONS': ('conversions', ''),
> + 'IDENTIFIERS': ('identifiers', 'keywords SPECIALIDENTIFIERS'),
> + 'SPECIALIDENTIFIERS': ('id-classes', ''),
> + 'PRIVATENAMES': ('atom-identifiers', ''),
> + 'LITERALS': ('atom-literals', 'STRINGS NUMBERS TUPLELITERALS '
> + 'LISTLITERALS DICTIONARYLITERALS'),
> + 'TUPLES': 'SEQUENCES',
> + 'TUPLELITERALS': ('exprlists', 'TUPLES LITERALS'),
> + 'LISTS': ('typesseq-mutable', 'LISTLITERALS'),
> + 'LISTLITERALS': ('lists', 'LISTS LITERALS'),
> + 'DICTIONARIES': ('typesmapping', 'DICTIONARYLITERALS'),
> + 'DICTIONARYLITERALS': ('dict', 'DICTIONARIES LITERALS'),
> + 'ATTRIBUTES': ('attribute-references', 'getattr hasattr setattr ATTRIBUTEMETHODS'),
> + 'SUBSCRIPTS': ('subscriptions', 'SEQUENCEMETHODS'),
> + 'SLICINGS': ('slicings', 'SEQUENCEMETHODS'),
> + 'CALLS': ('calls', 'EXPRESSIONS'),
> + 'POWER': ('power', 'EXPRESSIONS'),
> + 'UNARY': ('unary', 'EXPRESSIONS'),
> + 'BINARY': ('binary', 'EXPRESSIONS'),
> + 'SHIFTING': ('shifting', 'EXPRESSIONS'),
> + 'BITWISE': ('bitwise', 'EXPRESSIONS'),
> + 'COMPARISON': ('comparisons', 'EXPRESSIONS BASICMETHODS'),
> + 'BOOLEAN': ('booleans', 'EXPRESSIONS TRUTHVALUE'),
> + 'ASSERTION': 'assert',
> + 'ASSIGNMENT': ('assignment', 'AUGMENTEDASSIGNMENT'),
> + 'AUGMENTEDASSIGNMENT': ('augassign', 'NUMBERMETHODS'),
> + 'DELETION': 'del',
> + 'RETURNING': 'return',
> + 'IMPORTING': 'import',
> + 'CONDITIONAL': 'if',
> + 'LOOPING': ('compound', 'for while break continue'),
> + 'TRUTHVALUE': ('truth', 'if while and or not BASICMETHODS'),
> + 'DEBUGGING': ('debugger', 'pdb'),
> + 'CONTEXTMANAGERS': ('context-managers', 'with'),
> + }
> +
> + def __init__(self, input=None, output=None):
> + self._input = input
> + self._output = output
> +
> + input = property(lambda self: self._input or sys.stdin)
> + output = property(lambda self: self._output or sys.stdout)
> +
> + def __repr__(self):
> + if inspect.stack()[1][3] == '?':
> + self()
> + return ''
> + return '<%s.%s instance>' % (self.__class__.__module__,
> + self.__class__.__qualname__)
> +
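> +    # Sentinel default: lets __call__ tell a bare help() apart from any real request.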
> + _GoInteractive = object()
> + def __call__(self, request=_GoInteractive):
> + if request is not self._GoInteractive:
> + self.help(request)
> + else:
> + self.intro()
> + self.interact()
> + self.output.write('''
> +You are now leaving help and returning to the Python interpreter.
> +If you want to ask for help on a particular object directly from the
> +interpreter, you can type "help(object)". Executing "help('string')"
> +has the same effect as typing a particular string at the help> prompt.
> +''')
> +
> + def interact(self):
> + self.output.write('\n')
> + while True:
> + try:
> + request = self.getline('help> ')
> + if not request: break
> + except (KeyboardInterrupt, EOFError):
> + break
> + request = request.strip()
> +
> + # Make sure significant trailing quoting marks of literals don't
> + # get deleted while cleaning input
> + if (len(request) > 2 and request[0] == request[-1] in ("'", '"')
> + and request[0] not in request[1:-1]):
> + request = request[1:-1]
> + if request.lower() in ('q', 'quit'): break
> + if request == 'help':
> + self.intro()
> + else:
> + self.help(request)
> +
> + def getline(self, prompt):
> + """Read one line, using input() when appropriate."""
> + if self.input is sys.stdin:
> + return input(prompt)
> + else:
> + self.output.write(prompt)
> + self.output.flush()
> + return self.input.readline()
> +
> + def help(self, request):
> + if type(request) is type(''):
> + request = request.strip()
> + if request == 'keywords': self.listkeywords()
> + elif request == 'symbols': self.listsymbols()
> + elif request == 'topics': self.listtopics()
> + elif request == 'modules': self.listmodules()
> + elif request[:8] == 'modules ':
> + self.listmodules(request.split()[1])
> + elif request in self.symbols: self.showsymbol(request)
> + elif request in ['True', 'False', 'None']:
> + # special case these keywords since they are objects too
> + doc(eval(request), 'Help on %s:')
> + elif request in self.keywords: self.showtopic(request)
> + elif request in self.topics: self.showtopic(request)
> + elif request: doc(request, 'Help on %s:', output=self._output)
> + else: doc(str, 'Help on %s:', output=self._output)
> + elif isinstance(request, Helper): self()
> + else: doc(request, 'Help on %s:', output=self._output)
> + self.output.write('\n')
> +
> + def intro(self):
> + self.output.write('''
> +Welcome to Python {0}'s help utility!
> +
> +If this is your first time using Python, you should definitely check out
> +the tutorial on the Internet at https://docs.python.org/{0}/tutorial/.
> +
> +Enter the name of any module, keyword, or topic to get help on writing
> +Python programs and using Python modules. To quit this help utility and
> +return to the interpreter, just type "quit".
> +
> +To get a list of available modules, keywords, symbols, or topics, type
> +"modules", "keywords", "symbols", or "topics". Each module also comes
> +with a one-line summary of what it does; to list the modules whose name
> +or summary contain a given string such as "spam", type "modules spam".
> +'''.format('%d.%d' % sys.version_info[:2]))
> +
> + def list(self, items, columns=4, width=80):
> + items = list(sorted(items))
> + colw = width // columns
> + rows = (len(items) + columns - 1) // columns
> + for row in range(rows):
> + for col in range(columns):
> + i = col * rows + row
> + if i < len(items):
> + self.output.write(items[i])
> + if col < columns - 1:
> + self.output.write(' ' + ' ' * (colw - 1 - len(items[i])))
> + self.output.write('\n')
> +
> + def listkeywords(self):
> + self.output.write('''
> +Here is a list of the Python keywords. Enter any keyword to get more help.
> +
> +''')
> + self.list(self.keywords.keys())
> +
> + def listsymbols(self):
> + self.output.write('''
> +Here is a list of the punctuation symbols which Python assigns special meaning
> +to. Enter any symbol to get more help.
> +
> +''')
> + self.list(self.symbols.keys())
> +
> + def listtopics(self):
> + self.output.write('''
> +Here is a list of available topics. Enter any topic name to get more help.
> +
> +''')
> + self.list(self.topics.keys())
> +
> + def showtopic(self, topic, more_xrefs=''):
> + try:
> + import pydoc_data.topics
> + except ImportError:
> + self.output.write('''
> +Sorry, topic and keyword documentation is not available because the
> +module "pydoc_data.topics" could not be found.
> +''')
> + return
> + target = self.topics.get(topic, self.keywords.get(topic))
> + if not target:
> + self.output.write('no documentation found for %s\n' % repr(topic))
> + return
> + if type(target) is type(''):
> + return self.showtopic(target, more_xrefs)
> +
> + label, xrefs = target
> + try:
> + doc = pydoc_data.topics.topics[label]
> + except KeyError:
> + self.output.write('no documentation found for %s\n' % repr(topic))
> + return
> + doc = doc.strip() + '\n'
> + if more_xrefs:
> + xrefs = (xrefs or '') + ' ' + more_xrefs
> + if xrefs:
> + import textwrap
> + text = 'Related help topics: ' + ', '.join(xrefs.split()) + '\n'
> + wrapped_text = textwrap.wrap(text, 72)
> + doc += '\n%s\n' % '\n'.join(wrapped_text)
> + pager(doc)
> +
> + def _gettopic(self, topic, more_xrefs=''):
> + """Return unbuffered tuple of (topic, xrefs).
> +
> + If an error occurs here, the exception is caught and displayed by
> + the url handler.
> +
> + This function duplicates the showtopic method but returns its
> + result directly so it can be formatted for display in an html page.
> + """
> + try:
> + import pydoc_data.topics
> + except ImportError:
> + return('''
> +Sorry, topic and keyword documentation is not available because the
> +module "pydoc_data.topics" could not be found.
> +''' , '')
> + target = self.topics.get(topic, self.keywords.get(topic))
> + if not target:
> + raise ValueError('could not find topic')
> + if isinstance(target, str):
> + return self._gettopic(target, more_xrefs)
> + label, xrefs = target
> + doc = pydoc_data.topics.topics[label]
> + if more_xrefs:
> + xrefs = (xrefs or '') + ' ' + more_xrefs
> + return doc, xrefs
> +
> + def showsymbol(self, symbol):
> + target = self.symbols[symbol]
> + topic, _, xrefs = target.partition(' ')
> + self.showtopic(topic, xrefs)
> +
> + def listmodules(self, key=''):
> + if key:
> + self.output.write('''
> +Here is a list of modules whose name or summary contains '{}'.
> +If there are any, enter a module name to get more help.
> +
> +'''.format(key))
> + apropos(key)
> + else:
> + self.output.write('''
> +Please wait a moment while I gather a list of all available modules...
> +
> +''')
> + modules = {}
> + def callback(path, modname, desc, modules=modules):
> + if modname and modname[-9:] == '.__init__':
> + modname = modname[:-9] + ' (package)'
> + if modname.find('.') < 0:
> + modules[modname] = 1
> + def onerror(modname):
> + callback(None, modname, None)
> + ModuleScanner().run(callback, onerror=onerror)
> + self.list(modules.keys())
> + self.output.write('''
> +Enter any module name to get more help. Or, type "modules spam" to search
> +for modules whose name or summary contain the string "spam".
> +''')
> +
> +help = Helper()
> +
> +class ModuleScanner:
> + """An interruptible scanner that searches module synopses."""
> +
> + def run(self, callback, key=None, completer=None, onerror=None):
> + if key: key = key.lower()
> + self.quit = False
> + seen = {}
> +
> + for modname in sys.builtin_module_names:
> + if modname != '__main__':
> + seen[modname] = 1
> + if key is None:
> + callback(None, modname, '')
> + else:
> + name = __import__(modname).__doc__ or ''
> + desc = name.split('\n')[0]
> + name = modname + ' - ' + desc
> + if name.lower().find(key) >= 0:
> + callback(None, modname, desc)
> +
> + for importer, modname, ispkg in pkgutil.walk_packages(onerror=onerror):
> + if self.quit:
> + break
> +
> + if key is None:
> + callback(None, modname, '')
> + else:
> + try:
> + spec = pkgutil._get_spec(importer, modname)
> + except SyntaxError:
> + # raised by tests for bad coding cookies or BOM
> + continue
> + loader = spec.loader
> + if hasattr(loader, 'get_source'):
> + try:
> + source = loader.get_source(modname)
> + except Exception:
> + if onerror:
> + onerror(modname)
> + continue
> + desc = source_synopsis(io.StringIO(source)) or ''
> + if hasattr(loader, 'get_filename'):
> + path = loader.get_filename(modname)
> + else:
> + path = None
> + else:
> + try:
> + module = importlib._bootstrap._load(spec)
> + except ImportError:
> + if onerror:
> + onerror(modname)
> + continue
> + desc = module.__doc__.splitlines()[0] if module.__doc__ else ''
> + path = getattr(module,'__file__',None)
> + name = modname + ' - ' + desc
> + if name.lower().find(key) >= 0:
> + callback(path, modname, desc)
> +
> + if completer:
> + completer()
> +
> +def apropos(key):
> + """Print all the one-line module summaries that contain a substring."""
> + def callback(path, modname, desc):
> + if modname[-9:] == '.__init__':
> + modname = modname[:-9] + ' (package)'
> + print(modname, desc and '- ' + desc)
> + def onerror(modname):
> + pass
> + with warnings.catch_warnings():
> + warnings.filterwarnings('ignore') # ignore problems during import
> + ModuleScanner().run(callback, key, onerror=onerror)
> +
> +# --------------------------------------- enhanced Web browser interface
> +
> +def _start_server(urlhandler, port):
> + """Start an HTTP server thread on a specific port.
> +
> + Start an HTML/text server thread, so HTML or text documents can be
> + browsed dynamically and interactively with a Web browser. Example use:
> +
> + >>> import time
> + >>> import pydoc
> +
> + Define a URL handler. To determine what the client is asking
> + for, check the URL and content_type.
> +
> + Then get or generate some text or HTML code and return it.
> +
> + >>> def my_url_handler(url, content_type):
> + ... text = 'the URL sent was: (%s, %s)' % (url, content_type)
> + ... return text
> +
> + Start server thread on port 0.
> + If you use port 0, the server will pick a random port number.
> + You can then use serverthread.port to get the port number.
> +
> + >>> port = 0
> + >>> serverthread = pydoc._start_server(my_url_handler, port)
> +
> + Check that the server is really started. If it is, open browser
> + and get first page. Use serverthread.url as the starting page.
> +
> + >>> if serverthread.serving:
> + ... import webbrowser
> +
> + The next two lines are commented out so a browser doesn't open if
> + doctest is run on this module.
> +
> + #... webbrowser.open(serverthread.url)
> + #True
> +
> + Let the server do its thing. We just need to monitor its status.
> + Use time.sleep so the loop doesn't hog the CPU.
> +
> + >>> starttime = time.time()
> + >>> timeout = 1 #seconds
> +
> + This is a short timeout for testing purposes.
> +
> + >>> while serverthread.serving:
> + ... time.sleep(.01)
> + ... if serverthread.serving and time.time() - starttime > timeout:
> + ... serverthread.stop()
> + ... break
> +
> + Print any errors that may have occurred.
> +
> + >>> print(serverthread.error)
> + None
> + """
> + import http.server
> + import email.message
> + import select
> + import threading
> +
> + class DocHandler(http.server.BaseHTTPRequestHandler):
> +
> + def do_GET(self):
> + """Process a request from an HTML browser.
> +
> + The URL received is in self.path.
> + Get an HTML page from self.urlhandler and send it.
> + """
> + if self.path.endswith('.css'):
> + content_type = 'text/css'
> + else:
> + content_type = 'text/html'
> + self.send_response(200)
> + self.send_header('Content-Type', '%s; charset=UTF-8' % content_type)
> + self.end_headers()
> + self.wfile.write(self.urlhandler(
> + self.path, content_type).encode('utf-8'))
> +
> + def log_message(self, *args):
> + # Don't log messages.
> + pass
> +
> + class DocServer(http.server.HTTPServer):
> +
> + def __init__(self, port, callback):
> + self.host = 'localhost'
> + self.address = (self.host, port)
> + self.callback = callback
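> +            # 'base' and 'handler' are attached to the class by ServerThread.run()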
> + self.base.__init__(self, self.address, self.handler)
> + self.quit = False
> +
> + def serve_until_quit(self):
> + while not self.quit:
> + rd, wr, ex = select.select([self.socket.fileno()], [], [], 1)
> + if rd:
> + self.handle_request()
> + self.server_close()
> +
> + def server_activate(self):
> + self.base.server_activate(self)
> + if self.callback:
> + self.callback(self)
> +
> + class ServerThread(threading.Thread):
> +
> + def __init__(self, urlhandler, port):
> + self.urlhandler = urlhandler
> + self.port = int(port)
> + threading.Thread.__init__(self)
> + self.serving = False
> + self.error = None
> +
> + def run(self):
> + """Start the server."""
> + try:
> + DocServer.base = http.server.HTTPServer
> + DocServer.handler = DocHandler
> + DocHandler.MessageClass = email.message.Message
> + DocHandler.urlhandler = staticmethod(self.urlhandler)
> + docsvr = DocServer(self.port, self.ready)
> + self.docserver = docsvr
> + docsvr.serve_until_quit()
> + except Exception as e:
> + self.error = e
> +
> + def ready(self, server):
> + self.serving = True
> + self.host = server.host
> + self.port = server.server_port
> + self.url = 'http://%s:%d/' % (self.host, self.port)
> +
> + def stop(self):
> + """Stop the server and this thread nicely"""
> + self.docserver.quit = True
> + self.join()
> + # explicitly break a reference cycle: DocServer.callback
> + # has indirectly a reference to ServerThread.
> + self.docserver = None
> + self.serving = False
> + self.url = None
> +
> + thread = ServerThread(urlhandler, port)
> + thread.start()
> + # Wait until thread.serving is True to make sure we are
> + # really up before returning.
> + while not thread.error and not thread.serving:
> + time.sleep(.01)
> + return thread
> +
> +
> +def _url_handler(url, content_type="text/html"):
> + """The pydoc url handler for use with the pydoc server.
> +
> + If the content_type is 'text/css', the _pydoc.css style
> +    sheet is read and returned if it exists.
> +
> + If the content_type is 'text/html', then the result of
> + get_html_page(url) is returned.
> + """
> + class _HTMLDoc(HTMLDoc):
> +
> + def page(self, title, contents):
> + """Format an HTML page."""
> + css_path = "pydoc_data/_pydoc.css"
> + css_link = (
> + '<link rel="stylesheet" type="text/css" href="%s">' %
> + css_path)
> + return '''\
> +<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
> +<html><head><title>Pydoc: %s</title>
> +<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
> +%s</head><body bgcolor="#f0f0f8">%s<div style="clear:both;padding-top:.5em;">%s</div>
> +</body></html>''' % (title, css_link, html_navbar(), contents)
> +
> + def filelink(self, url, path):
> + return '<a href="getfile?key=%s">%s</a>' % (url, path)
> +
> +
> + html = _HTMLDoc()
> +
> + def html_navbar():
> + version = html.escape("%s [%s, %s]" % (platform.python_version(),
> + platform.python_build()[0],
> + platform.python_compiler()))
> + return """
> + <div style='float:left'>
> + Python %s<br>%s
> + </div>
> + <div style='float:right'>
> + <div style='text-align:center'>
> + <a href="index.html">Module Index</a>
> + : <a href="topics.html">Topics</a>
> + : <a href="keywords.html">Keywords</a>
> + </div>
> + <div>
> + <form action="get" style='display:inline;'>
> + <input type=text name=key size=15>
> + <input type=submit value="Get">
> + </form>
> + <form action="search" style='display:inline;'>
> + <input type=text name=key size=15>
> + <input type=submit value="Search">
> + </form>
> + </div>
> + </div>
> + """ % (version, html.escape(platform.platform(terse=True)))
> +
> + def html_index():
> + """Module Index page."""
> +
> + def bltinlink(name):
> + return '<a href="%s.html">%s</a>' % (name, name)
> +
> + heading = html.heading(
> + '<big><big><strong>Index of Modules</strong></big></big>',
> + '#ffffff', '#7799ee')
> + names = [name for name in sys.builtin_module_names
> + if name != '__main__']
> + contents = html.multicolumn(names, bltinlink)
> + contents = [heading, '<p>' + html.bigsection(
> + 'Built-in Modules', '#ffffff', '#ee77aa', contents)]
> +
> + seen = {}
> + for dir in sys.path:
> + contents.append(html.index(dir, seen))
> +
> + contents.append(
> + '<p align=right><font color="#909090" face="helvetica,'
> + 'arial"><strong>pydoc</strong> by Ka-Ping Yee'
> +            '&lt;ping@lfw.org&gt;</font>')
> + return 'Index of Modules', ''.join(contents)
> +
> + def html_search(key):
> + """Search results page."""
> + # scan for modules
> + search_result = []
> +
> + def callback(path, modname, desc):
> + if modname[-9:] == '.__init__':
> + modname = modname[:-9] + ' (package)'
> + search_result.append((modname, desc and '- ' + desc))
> +
> + with warnings.catch_warnings():
> + warnings.filterwarnings('ignore') # ignore problems during import
> + def onerror(modname):
> + pass
> + ModuleScanner().run(callback, key, onerror=onerror)
> +
> + # format page
> + def bltinlink(name):
> + return '<a href="%s.html">%s</a>' % (name, name)
> +
> + results = []
> + heading = html.heading(
> + '<big><big><strong>Search Results</strong></big></big>',
> + '#ffffff', '#7799ee')
> + for name, desc in search_result:
> + results.append(bltinlink(name) + desc)
> + contents = heading + html.bigsection(
> + 'key = %s' % key, '#ffffff', '#ee77aa', '<br>'.join(results))
> + return 'Search Results', contents
> +
> + def html_getfile(path):
> + """Get and display a source file listing safely."""
> + path = urllib.parse.unquote(path)
> + with tokenize.open(path) as fp:
> + lines = html.escape(fp.read())
> + body = '<pre>%s</pre>' % lines
> + heading = html.heading(
> + '<big><big><strong>File Listing</strong></big></big>',
> + '#ffffff', '#7799ee')
> + contents = heading + html.bigsection(
> + 'File: %s' % path, '#ffffff', '#ee77aa', body)
> + return 'getfile %s' % path, contents
> +
> + def html_topics():
> + """Index of topic texts available."""
> +
> + def bltinlink(name):
> + return '<a href="topic?key=%s">%s</a>' % (name, name)
> +
> + heading = html.heading(
> + '<big><big><strong>INDEX</strong></big></big>',
> + '#ffffff', '#7799ee')
> + names = sorted(Helper.topics.keys())
> +
> + contents = html.multicolumn(names, bltinlink)
> + contents = heading + html.bigsection(
> + 'Topics', '#ffffff', '#ee77aa', contents)
> + return 'Topics', contents
> +
> + def html_keywords():
> + """Index of keywords."""
> + heading = html.heading(
> + '<big><big><strong>INDEX</strong></big></big>',
> + '#ffffff', '#7799ee')
> + names = sorted(Helper.keywords.keys())
> +
> + def bltinlink(name):
> + return '<a href="topic?key=%s">%s</a>' % (name, name)
> +
> + contents = html.multicolumn(names, bltinlink)
> + contents = heading + html.bigsection(
> + 'Keywords', '#ffffff', '#ee77aa', contents)
> + return 'Keywords', contents
> +
> + def html_topicpage(topic):
> + """Topic or keyword help page."""
> + buf = io.StringIO()
> + htmlhelp = Helper(buf, buf)
> + contents, xrefs = htmlhelp._gettopic(topic)
> + if topic in htmlhelp.keywords:
> + title = 'KEYWORD'
> + else:
> + title = 'TOPIC'
> + heading = html.heading(
> + '<big><big><strong>%s</strong></big></big>' % title,
> + '#ffffff', '#7799ee')
> + contents = '<pre>%s</pre>' % html.markup(contents)
> +        contents = html.bigsection(topic, '#ffffff', '#ee77aa', contents)
> + if xrefs:
> + xrefs = sorted(xrefs.split())
> +
> + def bltinlink(name):
> + return '<a href="topic?key=%s">%s</a>' % (name, name)
> +
> + xrefs = html.multicolumn(xrefs, bltinlink)
> + xrefs = html.section('Related help topics: ',
> + '#ffffff', '#ee77aa', xrefs)
> + return ('%s %s' % (title, topic),
> + ''.join((heading, contents, xrefs)))
> +
> + def html_getobj(url):
> + obj = locate(url, forceload=1)
> + if obj is None and url != 'None':
> + raise ValueError('could not find object')
> + title = describe(obj)
> + content = html.document(obj, url)
> + return title, content
> +
> + def html_error(url, exc):
> + heading = html.heading(
> + '<big><big><strong>Error</strong></big></big>',
> + '#ffffff', '#7799ee')
> + contents = '<br>'.join(html.escape(line) for line in
> + format_exception_only(type(exc), exc))
> + contents = heading + html.bigsection(url, '#ffffff', '#bb0000',
> + contents)
> + return "Error - %s" % url, contents
> +
> + def get_html_page(url):
> + """Generate an HTML page for url."""
> + complete_url = url
> + if url.endswith('.html'):
> + url = url[:-5]
> + try:
> + if url in ("", "index"):
> + title, content = html_index()
> + elif url == "topics":
> + title, content = html_topics()
> + elif url == "keywords":
> + title, content = html_keywords()
> + elif '=' in url:
> + op, _, url = url.partition('=')
> + if op == "search?key":
> + title, content = html_search(url)
> + elif op == "getfile?key":
> + title, content = html_getfile(url)
> + elif op == "topic?key":
> + # try topics first, then objects.
> + try:
> + title, content = html_topicpage(url)
> + except ValueError:
> + title, content = html_getobj(url)
> + elif op == "get?key":
> + # try objects first, then topics.
> + if url in ("", "index"):
> + title, content = html_index()
> + else:
> + try:
> + title, content = html_getobj(url)
> + except ValueError:
> + title, content = html_topicpage(url)
> + else:
> + raise ValueError('bad pydoc url')
> + else:
> + title, content = html_getobj(url)
> + except Exception as exc:
> + # Catch any errors and display them in an error page.
> + title, content = html_error(complete_url, exc)
> + return html.page(title, content)
> +
> + if url.startswith('/'):
> + url = url[1:]
> + if content_type == 'text/css':
> + path_here = os.path.dirname(os.path.realpath(__file__))
> + css_path = os.path.join(path_here, url)
> + with open(css_path) as fp:
> + return ''.join(fp.readlines())
> + elif content_type == 'text/html':
> + return get_html_page(url)
> + # Errors outside the url handler are caught by the server.
> + raise TypeError('unknown content type %r for url %s' % (content_type, url))
> +
> +
> +def browse(port=0, *, open_browser=True):
> + """Start the enhanced pydoc Web server and open a Web browser.
> +
> + Use port '0' to start the server on an arbitrary port.
> + Set open_browser to False to suppress opening a browser.
> + """
> + import webbrowser
> + serverthread = _start_server(_url_handler, port)
> + if serverthread.error:
> + print(serverthread.error)
> + return
> + if serverthread.serving:
> + server_help_msg = 'Server commands: [b]rowser, [q]uit'
> + if open_browser:
> + webbrowser.open(serverthread.url)
> + try:
> + print('Server ready at', serverthread.url)
> + print(server_help_msg)
> + while serverthread.serving:
> + cmd = input('server> ')
> + cmd = cmd.lower()
> + if cmd == 'q':
> + break
> + elif cmd == 'b':
> + webbrowser.open(serverthread.url)
> + else:
> + print(server_help_msg)
> + except (KeyboardInterrupt, EOFError):
> + print()
> + finally:
> + if serverthread.serving:
> + serverthread.stop()
> + print('Server stopped')
> +
> +
> +# -------------------------------------------------- command-line interface
> +
> +def ispath(x):
> + return isinstance(x, str) and x.find(os.sep) >= 0
> +
> +def cli():
> + """Command-line interface (looks at sys.argv to decide what to do)."""
> + import getopt
> + class BadUsage(Exception): pass
> +
> + # Scripts don't get the current directory in their path by default
> + # unless they are run with the '-m' switch
> + if '' not in sys.path:
> + scriptdir = os.path.dirname(sys.argv[0])
> + if scriptdir in sys.path:
> + sys.path.remove(scriptdir)
> + sys.path.insert(0, '.')
> +
> + try:
> + opts, args = getopt.getopt(sys.argv[1:], 'bk:p:w')
> + writing = False
> + start_server = False
> + open_browser = False
> + port = None
> + for opt, val in opts:
> + if opt == '-b':
> + start_server = True
> + open_browser = True
> + if opt == '-k':
> + apropos(val)
> + return
> + if opt == '-p':
> + start_server = True
> + port = val
> + if opt == '-w':
> + writing = True
> +
> + if start_server:
> + if port is None:
> + port = 0
> + browse(port, open_browser=open_browser)
> + return
> +
> + if not args: raise BadUsage
> + for arg in args:
> + if ispath(arg) and not os.path.exists(arg):
> + print('file %r does not exist' % arg)
> + break
> + try:
> + if ispath(arg) and os.path.isfile(arg):
> + arg = importfile(arg)
> + if writing:
> + if ispath(arg) and os.path.isdir(arg):
> + writedocs(arg)
> + else:
> + writedoc(arg)
> + else:
> + help.help(arg)
> + except ErrorDuringImport as value:
> + print(value)
> +
> + except (getopt.error, BadUsage):
> + cmd = os.path.splitext(os.path.basename(sys.argv[0]))[0]
> + print("""pydoc - the Python documentation tool
> +
> +{cmd} <name> ...
> + Show text documentation on something. <name> may be the name of a
> + Python keyword, topic, function, module, or package, or a dotted
> + reference to a class or function within a module or module in a
> + package. If <name> contains a '{sep}', it is used as the path to a
> + Python source file to document. If name is 'keywords', 'topics',
> + or 'modules', a listing of these things is displayed.
> +
> +{cmd} -k <keyword>
> + Search for a keyword in the synopsis lines of all available modules.
> +
> +{cmd} -p <port>
> + Start an HTTP server on the given port on the local machine. Port
> + number 0 can be used to get an arbitrary unused port.
> +
> +{cmd} -b
> + Start an HTTP server on an arbitrary unused port and open a Web browser
> + to interactively browse documentation. The -p option can be used with
> + the -b option to explicitly specify the server port.
> +
> +{cmd} -w <name> ...
> + Write out the HTML documentation for a module to a file in the current
> + directory. If <name> contains a '{sep}', it is treated as a filename; if
> + it names a directory, documentation is written for all the contents.
> +""".format(cmd=cmd, sep=os.sep))
> +
> +if __name__ == '__main__':
> + cli()
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
> new file mode 100644
> index 00000000..d34a9d0f
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
> @@ -0,0 +1,1160 @@
> +"""Utility functions for copying and archiving files and directory trees.
> +
> +XXX The functions here don't copy the resource fork or other metadata on Mac.
> +
> +"""
> +
> +import os
> +import sys
> +import stat
> +import fnmatch
> +import collections
> +import errno
> +
> +try:
> + import zlib
> + del zlib
> + _ZLIB_SUPPORTED = True
> +except ImportError:
> + _ZLIB_SUPPORTED = False
> +
> +try:
> + import bz2
> + del bz2
> + _BZ2_SUPPORTED = True
> +except ImportError:
> + _BZ2_SUPPORTED = False
> +
> +try:
> + import lzma
> + del lzma
> + _LZMA_SUPPORTED = True
> +except ImportError:
> + _LZMA_SUPPORTED = False
> +
> +try:
> + from pwd import getpwnam
> +except ImportError:
> + getpwnam = None
> +
> +try:
> + from grp import getgrnam
> +except ImportError:
> + getgrnam = None
> +
> +__all__ = ["copyfileobj", "copyfile", "copymode", "copystat", "copy", "copy2",
> + "copytree", "move", "rmtree", "Error", "SpecialFileError",
> + "ExecError", "make_archive", "get_archive_formats",
> + "register_archive_format", "unregister_archive_format",
> + "get_unpack_formats", "register_unpack_format",
> + "unregister_unpack_format", "unpack_archive",
> + "ignore_patterns", "chown", "which", "get_terminal_size",
> + "SameFileError"]
> + # disk_usage is added later, if available on the platform
> +
> +class Error(OSError):
> + pass
> +
> +class SameFileError(Error):
> + """Raised when source and destination are the same file."""
> +
> +class SpecialFileError(OSError):
> + """Raised when trying to do a kind of operation (e.g. copying) which is
> + not supported on a special file (e.g. a named pipe)"""
> +
> +class ExecError(OSError):
> + """Raised when a command could not be executed"""
> +
> +class ReadError(OSError):
> + """Raised when an archive cannot be read"""
> +
> +class RegistryError(Exception):
> + """Raised when a registry operation with the archiving
> + and unpacking registries fails"""
> +
> +
> +def copyfileobj(fsrc, fdst, length=16*1024):
> + """copy data from file-like object fsrc to file-like object fdst"""
> + while 1:
> + buf = fsrc.read(length)
> + if not buf:
> + break
> + fdst.write(buf)
> +
> +def _samefile(src, dst):
> + # Macintosh, Unix.
> + if hasattr(os.path, 'samefile'):
> + try:
> + return os.path.samefile(src, dst)
> + except OSError:
> + return False
> +
> + # All other platforms: check for same pathname.
> + return (os.path.normcase(os.path.abspath(src)) ==
> + os.path.normcase(os.path.abspath(dst)))
> +
> +def copyfile(src, dst, *, follow_symlinks=True):
> + """Copy data from src to dst.
> +
> + If follow_symlinks is not set and src is a symbolic link, a new
> + symlink will be created instead of copying the file it points to.
> +
> + """
> + if _samefile(src, dst):
> + raise SameFileError("{!r} and {!r} are the same file".format(src, dst))
> +
> + for fn in [src, dst]:
> + try:
> + st = os.stat(fn)
> + except OSError:
> + # File most likely does not exist
> + pass
> + else:
> + # XXX What about other special files? (sockets, devices...)
> + if stat.S_ISFIFO(st.st_mode):
> + raise SpecialFileError("`%s` is a named pipe" % fn)
> +
> + if not follow_symlinks and os.path.islink(src):
> + os.symlink(os.readlink(src), dst)
> + else:
> + with open(src, 'rb') as fsrc:
> + with open(dst, 'wb') as fdst:
> + copyfileobj(fsrc, fdst)
> + return dst
> +
> +def copymode(src, dst, *, follow_symlinks=True):
> + """Copy mode bits from src to dst.
> +
> + If follow_symlinks is not set, symlinks aren't followed if and only
> + if both `src` and `dst` are symlinks. If `lchmod` isn't available
> + (e.g. Linux) this method does nothing.
> +
> + """
> + if not follow_symlinks and os.path.islink(src) and os.path.islink(dst):
> + if hasattr(os, 'lchmod'):
> + stat_func, chmod_func = os.lstat, os.lchmod
> + else:
> + return
> + elif hasattr(os, 'chmod'):
> + stat_func, chmod_func = os.stat, os.chmod
> + else:
> + return
> +
> + st = stat_func(src)
> + chmod_func(dst, stat.S_IMODE(st.st_mode))
> +
> +if hasattr(os, 'listxattr'):
> + def _copyxattr(src, dst, *, follow_symlinks=True):
> + """Copy extended filesystem attributes from `src` to `dst`.
> +
> + Overwrite existing attributes.
> +
> + If `follow_symlinks` is false, symlinks won't be followed.
> +
> + """
> +
> + try:
> + names = os.listxattr(src, follow_symlinks=follow_symlinks)
> + except OSError as e:
> + if e.errno not in (errno.ENOTSUP, errno.ENODATA):
> + raise
> + return
> + for name in names:
> + try:
> + value = os.getxattr(src, name, follow_symlinks=follow_symlinks)
> + os.setxattr(dst, name, value, follow_symlinks=follow_symlinks)
> + except OSError as e:
> + if e.errno not in (errno.EPERM, errno.ENOTSUP, errno.ENODATA):
> + raise
> +else:
> + def _copyxattr(*args, **kwargs):
> + pass
> +
> +def copystat(src, dst, *, follow_symlinks=True):
> + """Copy file metadata
> +
> + Copy the permission bits, last access time, last modification time, and
> + flags from `src` to `dst`. On Linux, copystat() also copies the "extended
> + attributes" where possible. The file contents, owner, and group are
> + unaffected. `src` and `dst` are path names given as strings.
> +
> + If the optional flag `follow_symlinks` is not set, symlinks aren't
> + followed if and only if both `src` and `dst` are symlinks.
> + """
> + def _nop(*args, ns=None, follow_symlinks=None):
> + pass
> +
> + # follow symlinks (aka don't not follow symlinks)
> + follow = follow_symlinks or not (os.path.islink(src) and os.path.islink(dst))
> + if follow:
> + # use the real function if it exists
> + def lookup(name):
> + return getattr(os, name, _nop)
> + else:
> + # use the real function only if it exists
> + # *and* it supports follow_symlinks
> + def lookup(name):
> + fn = getattr(os, name, _nop)
> + if fn in os.supports_follow_symlinks:
> + return fn
> + return _nop
> +
> + st = lookup("stat")(src, follow_symlinks=follow)
> + mode = stat.S_IMODE(st.st_mode)
> + lookup("utime")(dst, ns=(st.st_atime_ns, st.st_mtime_ns),
> + follow_symlinks=follow)
> + try:
> + lookup("chmod")(dst, mode, follow_symlinks=follow)
> + except NotImplementedError:
> + # if we got a NotImplementedError, it's because
> + # * follow_symlinks=False,
> + # * lchown() is unavailable, and
> + # * either
> + # * fchownat() is unavailable or
> + # * fchownat() doesn't implement AT_SYMLINK_NOFOLLOW.
> + # (it returned ENOSUP.)
> + # therefore we're out of options--we simply cannot chown the
> + # symlink. give up, suppress the error.
> + # (which is what shutil always did in this circumstance.)
> + pass
> + if hasattr(st, 'st_flags'):
> + try:
> + lookup("chflags")(dst, st.st_flags, follow_symlinks=follow)
> + except OSError as why:
> + for err in 'EOPNOTSUPP', 'ENOTSUP':
> + if hasattr(errno, err) and why.errno == getattr(errno, err):
> + break
> + else:
> + raise
> + _copyxattr(src, dst, follow_symlinks=follow)
> +
> +def copy(src, dst, *, follow_symlinks=True):
> + """Copy data and mode bits ("cp src dst"). Return the file's destination.
> +
> + The destination may be a directory.
> +
> + If follow_symlinks is false, symlinks won't be followed. This
> + resembles GNU's "cp -P src dst".
> +
> + If source and destination are the same file, a SameFileError will be
> + raised.
> +
> + """
> + if os.path.isdir(dst):
> + dst = os.path.join(dst, os.path.basename(src))
> + copyfile(src, dst, follow_symlinks=follow_symlinks)
> + copymode(src, dst, follow_symlinks=follow_symlinks)
> + return dst
> +
> +def copy2(src, dst, *, follow_symlinks=True):
> + """Copy data and metadata. Return the file's destination.
> +
> + Metadata is copied with copystat(). Please see the copystat function
> + for more information.
> +
> + The destination may be a directory.
> +
> + If follow_symlinks is false, symlinks won't be followed. This
> + resembles GNU's "cp -P src dst".
> +
> + """
> + if os.path.isdir(dst):
> + dst = os.path.join(dst, os.path.basename(src))
> + copyfile(src, dst, follow_symlinks=follow_symlinks)
> + copystat(src, dst, follow_symlinks=follow_symlinks)
> + return dst
> +
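> +# Illustrative note (not part of upstream shutil): copy() carries over the
> +# file data and permission bits only, while copy2() additionally copies
> +# timestamps and other metadata via copystat(). A minimal sketch with
> +# hypothetical paths:
> +#
> +# copy("report.txt", "backup/") # data + mode bits
> +# copy2("report.txt", "backup/") # data + mode bits + timestamps
> +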
> +def ignore_patterns(*patterns):
> + """Function that can be used as copytree() ignore parameter.
> +
> + Patterns is a sequence of glob-style patterns
> + that are used to exclude files"""
> + def _ignore_patterns(path, names):
> + ignored_names = []
> + for pattern in patterns:
> + ignored_names.extend(fnmatch.filter(names, pattern))
> + return set(ignored_names)
> + return _ignore_patterns
> +
> +def copytree(src, dst, symlinks=False, ignore=None, copy_function=copy2,
> + ignore_dangling_symlinks=False):
> + """Recursively copy a directory tree.
> +
> + The destination directory must not already exist.
> + If exception(s) occur, an Error is raised with a list of reasons.
> +
> + If the optional symlinks flag is true, symbolic links in the
> + source tree result in symbolic links in the destination tree; if
> + it is false, the contents of the files pointed to by symbolic
> + links are copied. If the file pointed by the symlink doesn't
> + exist, an exception will be added in the list of errors raised in
> + an Error exception at the end of the copy process.
> +
> + You can set the optional ignore_dangling_symlinks flag to true if you
> + want to silence this exception. Notice that this has no effect on
> + platforms that don't support os.symlink.
> +
> + The optional ignore argument is a callable. If given, it
> + is called with the `src` parameter, which is the directory
> + being visited by copytree(), and `names` which is the list of
> + `src` contents, as returned by os.listdir():
> +
> + callable(src, names) -> ignored_names
> +
> + Since copytree() is called recursively, the callable will be
> + called once for each directory that is copied. It returns a
> + list of names relative to the `src` directory that should
> + not be copied.
> +
> + The optional copy_function argument is a callable that will be used
> + to copy each file. It will be called with the source path and the
> + destination path as arguments. By default, copy2() is used, but any
> + function that supports the same signature (like copy()) can be used.
> +
> + """
> + names = os.listdir(src)
> + if ignore is not None:
> + ignored_names = ignore(src, names)
> + else:
> + ignored_names = set()
> +
> + os.makedirs(dst)
> + errors = []
> + for name in names:
> + if name in ignored_names:
> + continue
> + srcname = os.path.join(src, name)
> + dstname = os.path.join(dst, name)
> + try:
> + if os.path.islink(srcname):
> + linkto = os.readlink(srcname)
> + if symlinks:
> + # We can't just leave it to `copy_function` because legacy
> + # code with a custom `copy_function` may rely on copytree
> + # doing the right thing.
> + os.symlink(linkto, dstname)
> + copystat(srcname, dstname, follow_symlinks=not symlinks)
> + else:
> + # ignore dangling symlink if the flag is on
> + if not os.path.exists(linkto) and ignore_dangling_symlinks:
> + continue
> + # otherwise let the copy occur; copy2 will raise an error
> + if os.path.isdir(srcname):
> + copytree(srcname, dstname, symlinks, ignore,
> + copy_function)
> + else:
> + copy_function(srcname, dstname)
> + elif os.path.isdir(srcname):
> + copytree(srcname, dstname, symlinks, ignore, copy_function)
> + else:
> + # Will raise a SpecialFileError for unsupported file types
> + copy_function(srcname, dstname)
> + # catch the Error from the recursive copytree so that we can
> + # continue with other files
> + except Error as err:
> + errors.extend(err.args[0])
> + except OSError as why:
> + errors.append((srcname, dstname, str(why)))
> + try:
> + copystat(src, dst)
> + except OSError as why:
> + # Copying file access times may fail on Windows
> + if getattr(why, 'winerror', None) is None:
> + errors.append((src, dst, str(why)))
> + if errors:
> + raise Error(errors)
> + return dst
> +
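> +# Illustrative sketch (not part of upstream shutil): the `ignore` callable
> +# described in the copytree() docstring receives (src, names) and returns the
> +# set of names to skip; ignore_patterns() builds such a callable from
> +# glob-style patterns. Hypothetical paths:
> +#
> +# copytree("srcdir", "dstdir", ignore=ignore_patterns("*.pyc", "tmp*"))
> +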
> +# version vulnerable to race conditions
> +def _rmtree_unsafe(path, onerror):
> + try:
> + if os.path.islink(path):
> + # symlinks to directories are forbidden, see bug #1669
> + raise OSError("Cannot call rmtree on a symbolic link")
> + except OSError:
> + onerror(os.path.islink, path, sys.exc_info())
> + # can't continue even if onerror hook returns
> + return
> + names = []
> + try:
> + names = os.listdir(path)
> + except OSError:
> + onerror(os.listdir, path, sys.exc_info())
> + for name in names:
> + fullname = os.path.join(path, name)
> + try:
> + mode = os.lstat(fullname).st_mode
> + except OSError:
> + mode = 0
> + if stat.S_ISDIR(mode):
> + _rmtree_unsafe(fullname, onerror)
> + else:
> + try:
> + os.unlink(fullname)
> + except OSError:
> + onerror(os.unlink, fullname, sys.exc_info())
> + try:
> + os.rmdir(path)
> + except OSError:
> + onerror(os.rmdir, path, sys.exc_info())
> +
> +# Version using fd-based APIs to protect against races
> +def _rmtree_safe_fd(topfd, path, onerror):
> + names = []
> + try:
> + names = os.listdir(topfd)
> + except OSError as err:
> + err.filename = path
> + onerror(os.listdir, path, sys.exc_info())
> + for name in names:
> + fullname = os.path.join(path, name)
> + try:
> + orig_st = os.stat(name, dir_fd=topfd, follow_symlinks=False)
> + mode = orig_st.st_mode
> + except OSError:
> + mode = 0
> + if stat.S_ISDIR(mode):
> + try:
> + dirfd = os.open(name, os.O_RDONLY, dir_fd=topfd)
> + except OSError:
> + onerror(os.open, fullname, sys.exc_info())
> + else:
> + try:
> + if os.path.samestat(orig_st, os.fstat(dirfd)):
> + _rmtree_safe_fd(dirfd, fullname, onerror)
> + try:
> + os.rmdir(name, dir_fd=topfd)
> + except OSError:
> + onerror(os.rmdir, fullname, sys.exc_info())
> + else:
> + try:
> + # This can only happen if someone replaces
> + # a directory with a symlink after the call to
> + # stat.S_ISDIR above.
> + raise OSError("Cannot call rmtree on a symbolic "
> + "link")
> + except OSError:
> + onerror(os.path.islink, fullname, sys.exc_info())
> + finally:
> + os.close(dirfd)
> + else:
> + try:
> + os.unlink(name, dir_fd=topfd)
> + except OSError:
> + onerror(os.unlink, fullname, sys.exc_info())
> +# The UEFI port hard-codes the fd-based implementation; upstream derives this
> +# flag from the os.supports_* capability sets instead:
> +#
> +# _use_fd_functions = ({os.open, os.stat, os.unlink, os.rmdir} <=
> +# os.supports_dir_fd and
> +# os.listdir in os.supports_fd and
> +# os.stat in os.supports_follow_symlinks)
> +_use_fd_functions = 1
> +
> +def rmtree(path, ignore_errors=False, onerror=None):
> + """Recursively delete a directory tree.
> +
> + If ignore_errors is set, errors are ignored; otherwise, if onerror
> + is set, it is called to handle the error with arguments (func,
> + path, exc_info) where func is platform and implementation dependent;
> + path is the argument to that function that caused it to fail; and
> + exc_info is a tuple returned by sys.exc_info(). If ignore_errors
> + is false and onerror is None, an exception is raised.
> +
> + """
> + if ignore_errors:
> + def onerror(*args):
> + pass
> + elif onerror is None:
> + def onerror(*args):
> + raise
> + if _use_fd_functions:
> + # While the unsafe rmtree works fine on bytes, the fd based does not.
> + if isinstance(path, bytes):
> + path = os.fsdecode(path)
> + # Note: To guard against symlink races, we use the standard
> + # lstat()/open()/fstat() trick.
> + try:
> + orig_st = os.lstat(path)
> + except Exception:
> + onerror(os.lstat, path, sys.exc_info())
> + return
> + try:
> + fd = os.open(path, os.O_RDONLY)
> + except Exception:
> + onerror(os.lstat, path, sys.exc_info())
> + return
> + try:
> + if os.path.samestat(orig_st, os.fstat(fd)):
> + _rmtree_safe_fd(fd, path, onerror)
> + try:
> + os.rmdir(path)
> + except OSError:
> + onerror(os.rmdir, path, sys.exc_info())
> + else:
> + try:
> + # symlinks to directories are forbidden, see bug #1669
> + raise OSError("Cannot call rmtree on a symbolic link")
> + except OSError:
> + onerror(os.path.islink, path, sys.exc_info())
> + finally:
> + os.close(fd)
> + else:
> + return _rmtree_unsafe(path, onerror)
> +
> +# Allow introspection of whether or not the hardening against symlink
> +# attacks is supported on the current platform
> +rmtree.avoids_symlink_attacks = _use_fd_functions
> +
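> +# Illustrative sketch (not part of upstream shutil): an onerror handler, as
> +# described in the rmtree() docstring, receives (func, path, exc_info). One
> +# common pattern is to clear a read-only bit and retry the failing call:
> +#
> +# def _force_remove(func, path, exc_info):
> +# os.chmod(path, stat.S_IWRITE)
> +# func(path)
> +#
> +# rmtree("builddir", onerror=_force_remove)
> +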
> +def _basename(path):
> + # A basename() variant which first strips the trailing slash, if present.
> + # Thus we always get the last component of the path, even for directories.
> + sep = os.path.sep + (os.path.altsep or '')
> + return os.path.basename(path.rstrip(sep))
> +
> +def move(src, dst, copy_function=copy2):
> + """Recursively move a file or directory to another location. This is
> + similar to the Unix "mv" command. Return the file or directory's
> + destination.
> +
> + If the destination is a directory or a symlink to a directory, the source
> + is moved inside the directory. The destination path must not already
> + exist.
> +
> + If the destination already exists but is not a directory, it may be
> + overwritten depending on os.rename() semantics.
> +
> + If the destination is on our current filesystem, then rename() is used.
> + Otherwise, src is copied to the destination and then removed. Symlinks are
> + recreated under the new name if os.rename() fails because of cross
> + filesystem renames.
> +
> + The optional `copy_function` argument is a callable that will be used
> + to copy the source or it will be delegated to `copytree`.
> + By default, copy2() is used, but any function that supports the same
> + signature (like copy()) can be used.
> +
> + A lot more could be done here... A look at a mv.c shows a lot of
> + the issues this implementation glosses over.
> +
> + """
> + real_dst = dst
> + if os.path.isdir(dst):
> + if _samefile(src, dst):
> + # We might be on a case insensitive filesystem,
> + # perform the rename anyway.
> + os.rename(src, dst)
> + return
> +
> + real_dst = os.path.join(dst, _basename(src))
> + if os.path.exists(real_dst):
> + raise Error("Destination path '%s' already exists" % real_dst)
> + try:
> + os.rename(src, real_dst)
> + except OSError:
> + if os.path.islink(src):
> + linkto = os.readlink(src)
> + os.symlink(linkto, real_dst)
> + os.unlink(src)
> + elif os.path.isdir(src):
> + if _destinsrc(src, dst):
> + raise Error("Cannot move a directory '%s' into itself"
> + " '%s'." % (src, dst))
> + copytree(src, real_dst, copy_function=copy_function,
> + symlinks=True)
> + rmtree(src)
> + else:
> + copy_function(src, real_dst)
> + os.unlink(src)
> + return real_dst
> +
> +def _destinsrc(src, dst):
> + src = os.path.abspath(src)
> + dst = os.path.abspath(dst)
> + if not src.endswith(os.path.sep):
> + src += os.path.sep
> + if not dst.endswith(os.path.sep):
> + dst += os.path.sep
> + return dst.startswith(src)
> +
> +def _get_gid(name):
> + """Returns a gid, given a group name."""
> + if getgrnam is None or name is None:
> + return None
> + try:
> + result = getgrnam(name)
> + except KeyError:
> + result = None
> + if result is not None:
> + return result[2]
> + return None
> +
> +def _get_uid(name):
> + """Returns an uid, given a user name."""
> + if getpwnam is None or name is None:
> + return None
> + try:
> + result = getpwnam(name)
> + except KeyError:
> + result = None
> + if result is not None:
> + return result[2]
> + return None
> +
> +def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0,
> + owner=None, group=None, logger=None):
> + """Create a (possibly compressed) tar file from all the files under
> + 'base_dir'.
> +
> + 'compress' must be "gzip" (the default), "bzip2", "xz", or None.
> +
> + 'owner' and 'group' can be used to define an owner and a group for the
> + archive that is being built. If not provided, the current owner and group
> + will be used.
> +
> + The output tar file will be named 'base_name' + ".tar", possibly plus
> + the appropriate compression extension (".gz", ".bz2", or ".xz").
> +
> + Returns the output filename.
> + """
> + if compress is None:
> + tar_compression = ''
> + elif _ZLIB_SUPPORTED and compress == 'gzip':
> + tar_compression = 'gz'
> + elif _BZ2_SUPPORTED and compress == 'bzip2':
> + tar_compression = 'bz2'
> + elif _LZMA_SUPPORTED and compress == 'xz':
> + tar_compression = 'xz'
> + else:
> + raise ValueError("bad value for 'compress', or compression format not "
> + "supported : {0}".format(compress))
> +
> + import tarfile # late import for breaking circular dependency
> +
> + compress_ext = '.' + tar_compression if compress else ''
> + archive_name = base_name + '.tar' + compress_ext
> + archive_dir = os.path.dirname(archive_name)
> +
> + if archive_dir and not os.path.exists(archive_dir):
> + if logger is not None:
> + logger.info("creating %s", archive_dir)
> + if not dry_run:
> + os.makedirs(archive_dir)
> +
> + # creating the tarball
> + if logger is not None:
> + logger.info('Creating tar archive')
> +
> + uid = _get_uid(owner)
> + gid = _get_gid(group)
> +
> + def _set_uid_gid(tarinfo):
> + if gid is not None:
> + tarinfo.gid = gid
> + tarinfo.gname = group
> + if uid is not None:
> + tarinfo.uid = uid
> + tarinfo.uname = owner
> + return tarinfo
> +
> + if not dry_run:
> + tar = tarfile.open(archive_name, 'w|%s' % tar_compression)
> + try:
> + tar.add(base_dir, filter=_set_uid_gid)
> + finally:
> + tar.close()
> +
> + return archive_name
> +
> +def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
> + """Create a zip file from all the files under 'base_dir'.
> +
> + The output zip file will be named 'base_name' + ".zip". Returns the
> + name of the output zip file.
> + """
> + import zipfile # late import for breaking circular dependency
> +
> + zip_filename = base_name + ".zip"
> + archive_dir = os.path.dirname(base_name)
> +
> + if archive_dir and not os.path.exists(archive_dir):
> + if logger is not None:
> + logger.info("creating %s", archive_dir)
> + if not dry_run:
> + os.makedirs(archive_dir)
> +
> + if logger is not None:
> + logger.info("creating '%s' and adding '%s' to it",
> + zip_filename, base_dir)
> +
> + if not dry_run:
> + with zipfile.ZipFile(zip_filename, "w",
> + compression=zipfile.ZIP_DEFLATED) as zf:
> + path = os.path.normpath(base_dir)
> + if path != os.curdir:
> + zf.write(path, path)
> + if logger is not None:
> + logger.info("adding '%s'", path)
> + for dirpath, dirnames, filenames in os.walk(base_dir):
> + for name in sorted(dirnames):
> + path = os.path.normpath(os.path.join(dirpath, name))
> + zf.write(path, path)
> + if logger is not None:
> + logger.info("adding '%s'", path)
> + for name in filenames:
> + path = os.path.normpath(os.path.join(dirpath, name))
> + if os.path.isfile(path):
> + zf.write(path, path)
> + if logger is not None:
> + logger.info("adding '%s'", path)
> +
> + return zip_filename
> +
> +_ARCHIVE_FORMATS = {
> + 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"),
> +}
> +
> +if _ZLIB_SUPPORTED:
> + _ARCHIVE_FORMATS['gztar'] = (_make_tarball, [('compress', 'gzip')],
> + "gzip'ed tar-file")
> + _ARCHIVE_FORMATS['zip'] = (_make_zipfile, [], "ZIP file")
> +
> +if _BZ2_SUPPORTED:
> + _ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')],
> + "bzip2'ed tar-file")
> +
> +if _LZMA_SUPPORTED:
> + _ARCHIVE_FORMATS['xztar'] = (_make_tarball, [('compress', 'xz')],
> + "xz'ed tar-file")
> +
> +def get_archive_formats():
> + """Returns a list of supported formats for archiving and unarchiving.
> +
> + Each element of the returned sequence is a tuple (name, description)
> + """
> + formats = [(name, registry[2]) for name, registry in
> + _ARCHIVE_FORMATS.items()]
> + formats.sort()
> + return formats
> +
> +def register_archive_format(name, function, extra_args=None, description=''):
> + """Registers an archive format.
> +
> + name is the name of the format. function is the callable that will be
> + used to create archives. If provided, extra_args is a sequence of
> + (name, value) tuples that will be passed as arguments to the callable.
> + description can be provided to describe the format, and will be returned
> + by the get_archive_formats() function.
> + """
> + if extra_args is None:
> + extra_args = []
> + if not callable(function):
> + raise TypeError('The %s object is not callable' % function)
> + if not isinstance(extra_args, (tuple, list)):
> + raise TypeError('extra_args needs to be a sequence')
> + for element in extra_args:
> + if not isinstance(element, (tuple, list)) or len(element) !=2:
> + raise TypeError('extra_args elements are : (arg_name, value)')
> +
> + _ARCHIVE_FORMATS[name] = (function, extra_args, description)
> +
> +def unregister_archive_format(name):
> + del _ARCHIVE_FORMATS[name]
> +
> +def make_archive(base_name, format, root_dir=None, base_dir=None, verbose=0,
> + dry_run=0, owner=None, group=None, logger=None):
> + """Create an archive file (eg. zip or tar).
> +
> + 'base_name' is the name of the file to create, minus any format-specific
> + extension; 'format' is the archive format: one of "zip", "tar", "gztar",
> + "bztar", or "xztar". Or any other registered format.
> +
> + 'root_dir' is a directory that will be the root directory of the
> + archive; ie. we typically chdir into 'root_dir' before creating the
> + archive. 'base_dir' is the directory where we start archiving from;
> + ie. 'base_dir' will be the common prefix of all files and
> + directories in the archive. 'root_dir' and 'base_dir' both default
> + to the current directory. Returns the name of the archive file.
> +
> + 'owner' and 'group' are used when creating a tar archive. By default,
> + uses the current owner and group.
> + """
> + save_cwd = os.getcwd()
> + if root_dir is not None:
> + if logger is not None:
> + logger.debug("changing into '%s'", root_dir)
> + base_name = os.path.abspath(base_name)
> + if not dry_run:
> + os.chdir(root_dir)
> +
> + if base_dir is None:
> + base_dir = os.curdir
> +
> + kwargs = {'dry_run': dry_run, 'logger': logger}
> +
> + try:
> + format_info = _ARCHIVE_FORMATS[format]
> + except KeyError:
> + raise ValueError("unknown archive format '%s'" % format)
> +
> + func = format_info[0]
> + for arg, val in format_info[1]:
> + kwargs[arg] = val
> +
> + if format != 'zip':
> + kwargs['owner'] = owner
> + kwargs['group'] = group
> +
> + try:
> + filename = func(base_name, base_dir, **kwargs)
> + finally:
> + if root_dir is not None:
> + if logger is not None:
> + logger.debug("changing back to '%s'", save_cwd)
> + os.chdir(save_cwd)
> +
> + return filename
> +
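> +# Illustrative sketch (not part of upstream shutil): with hypothetical names,
> +# the call below would produce "site_backup.tar.gz" (when zlib support is
> +# available) containing everything under root_dir:
> +#
> +# make_archive("site_backup", "gztar", root_dir="data")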
> +
> +def get_unpack_formats():
> + """Returns a list of supported formats for unpacking.
> +
> + Each element of the returned sequence is a tuple
> + (name, extensions, description)
> + """
> + formats = [(name, info[0], info[3]) for name, info in
> + _UNPACK_FORMATS.items()]
> + formats.sort()
> + return formats
> +
> +def _check_unpack_options(extensions, function, extra_args):
> + """Checks what gets registered as an unpacker."""
> + # first make sure no other unpacker is registered for this extension
> + existing_extensions = {}
> + for name, info in _UNPACK_FORMATS.items():
> + for ext in info[0]:
> + existing_extensions[ext] = name
> +
> + for extension in extensions:
> + if extension in existing_extensions:
> + msg = '%s is already registered for "%s"'
> + raise RegistryError(msg % (extension,
> + existing_extensions[extension]))
> +
> + if not callable(function):
> + raise TypeError('The registered function must be a callable')
> +
> +
> +def register_unpack_format(name, extensions, function, extra_args=None,
> + description=''):
> + """Registers an unpack format.
> +
> + `name` is the name of the format. `extensions` is a list of extensions
> + corresponding to the format.
> +
> + `function` is the callable that will be
> + used to unpack archives. The callable will receive archives to unpack.
> + If it's unable to handle an archive, it needs to raise a ReadError
> + exception.
> +
> + If provided, `extra_args` is a sequence of
> + (name, value) tuples that will be passed as arguments to the callable.
> + description can be provided to describe the format, and will be returned
> + by the get_unpack_formats() function.
> + """
> + if extra_args is None:
> + extra_args = []
> + _check_unpack_options(extensions, function, extra_args)
> + _UNPACK_FORMATS[name] = extensions, function, extra_args, description
> +
> +def unregister_unpack_format(name):
> + """Removes the pack format from the registry."""
> + del _UNPACK_FORMATS[name]
> +
> +def _ensure_directory(path):
> + """Ensure that the parent directory of `path` exists"""
> + dirname = os.path.dirname(path)
> + if not os.path.isdir(dirname):
> + os.makedirs(dirname)
> +
> +def _unpack_zipfile(filename, extract_dir):
> + """Unpack zip `filename` to `extract_dir`
> + """
> + import zipfile # late import for breaking circular dependency
> +
> + if not zipfile.is_zipfile(filename):
> + raise ReadError("%s is not a zip file" % filename)
> +
> + zip = zipfile.ZipFile(filename)
> + try:
> + for info in zip.infolist():
> + name = info.filename
> +
> + # don't extract absolute paths or ones with .. in them
> + if name.startswith('/') or '..' in name:
> + continue
> +
> + target = os.path.join(extract_dir, *name.split('/'))
> + if not target:
> + continue
> +
> + _ensure_directory(target)
> + if not name.endswith('/'):
> + # file
> + data = zip.read(info.filename)
> + f = open(target, 'wb')
> + try:
> + f.write(data)
> + finally:
> + f.close()
> + del data
> + finally:
> + zip.close()
> +
> +def _unpack_tarfile(filename, extract_dir):
> + """Unpack tar/tar.gz/tar.bz2/tar.xz `filename` to `extract_dir`
> + """
> + import tarfile # late import for breaking circular dependency
> + try:
> + tarobj = tarfile.open(filename)
> + except tarfile.TarError:
> + raise ReadError(
> + "%s is not a compressed or uncompressed tar file" % filename)
> + try:
> + tarobj.extractall(extract_dir)
> + finally:
> + tarobj.close()
> +
> +_UNPACK_FORMATS = {
> + 'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"),
> + 'zip': (['.zip'], _unpack_zipfile, [], "ZIP file"),
> +}
> +
> +if _ZLIB_SUPPORTED:
> + _UNPACK_FORMATS['gztar'] = (['.tar.gz', '.tgz'], _unpack_tarfile, [],
> + "gzip'ed tar-file")
> +
> +if _BZ2_SUPPORTED:
> + _UNPACK_FORMATS['bztar'] = (['.tar.bz2', '.tbz2'], _unpack_tarfile, [],
> + "bzip2'ed tar-file")
> +
> +if _LZMA_SUPPORTED:
> + _UNPACK_FORMATS['xztar'] = (['.tar.xz', '.txz'], _unpack_tarfile, [],
> + "xz'ed tar-file")
> +
> +def _find_unpack_format(filename):
> + for name, info in _UNPACK_FORMATS.items():
> + for extension in info[0]:
> + if filename.endswith(extension):
> + return name
> + return None
> +
> +def unpack_archive(filename, extract_dir=None, format=None):
> + """Unpack an archive.
> +
> + `filename` is the name of the archive.
> +
> + `extract_dir` is the name of the target directory, where the archive
> + is unpacked. If not provided, the current working directory is used.
> +
> + `format` is the archive format: one of "zip", "tar", "gztar", "bztar",
> + or "xztar". Or any other registered format. If not provided,
> + unpack_archive will use the filename extension and see if an unpacker
> + was registered for that extension.
> +
> + In case none is found, a ValueError is raised.
> + """
> + if extract_dir is None:
> + extract_dir = os.getcwd()
> +
> + if format is not None:
> + try:
> + format_info = _UNPACK_FORMATS[format]
> + except KeyError:
> + raise ValueError("Unknown unpack format '{0}'".format(format))
> +
> + func = format_info[1]
> + func(filename, extract_dir, **dict(format_info[2]))
> + else:
> + # we need to look at the registered unpackers supported extensions
> + format = _find_unpack_format(filename)
> + if format is None:
> + raise ReadError("Unknown archive format '{0}'".format(filename))
> +
> + func = _UNPACK_FORMATS[format][1]
> + kwargs = dict(_UNPACK_FORMATS[format][2])
> + func(filename, extract_dir, **kwargs)
> +
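> +# Illustrative sketch (not part of upstream shutil): unpack_archive() selects
> +# an unpacker from the explicit `format` argument or, failing that, from the
> +# filename extension via the registry above. Hypothetical filenames:
> +#
> +# unpack_archive("site_backup.tar.gz", extract_dir="restore")
> +# unpack_archive("payload.dat", extract_dir="restore", format="zip")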
> +
> +if hasattr(os, 'statvfs'):
> +
> + __all__.append('disk_usage')
> + _ntuple_diskusage = collections.namedtuple('usage', 'total used free')
> + _ntuple_diskusage.total.__doc__ = 'Total space in bytes'
> + _ntuple_diskusage.used.__doc__ = 'Used space in bytes'
> + _ntuple_diskusage.free.__doc__ = 'Free space in bytes'
> +
> + def disk_usage(path):
> + """Return disk usage statistics about the given path.
> +
> + Returned value is a named tuple with attributes 'total', 'used' and
> + 'free', which are the amount of total, used and free space, in bytes.
> + """
> + st = os.statvfs(path)
> + free = st.f_bavail * st.f_frsize
> + total = st.f_blocks * st.f_frsize
> + used = (st.f_blocks - st.f_bfree) * st.f_frsize
> + return _ntuple_diskusage(total, used, free)
> +
> +elif os.name == 'nt':
> +
> + import nt
> + __all__.append('disk_usage')
> + _ntuple_diskusage = collections.namedtuple('usage', 'total used free')
> +
> + def disk_usage(path):
> + """Return disk usage statistics about the given path.
> +
> + Returned value is a named tuple with attributes 'total', 'used' and
> + 'free', which are the amount of total, used and free space, in bytes.
> + """
> + total, free = nt._getdiskusage(path)
> + used = total - free
> + return _ntuple_diskusage(total, used, free)
> +
> +
> +def chown(path, user=None, group=None):
> + """Change owner user and group of the given path.
> +
> + user and group can be the uid/gid or the user/group names, and in that case,
> + they are converted to their respective uid/gid.
> + """
> +
> + if user is None and group is None:
> + raise ValueError("user and/or group must be set")
> +
> + _user = user
> + _group = group
> +
> + # -1 means don't change it
> + if user is None:
> + _user = -1
> + # user can either be an int (the uid) or a string (the system username)
> + elif isinstance(user, str):
> + _user = _get_uid(user)
> + if _user is None:
> + raise LookupError("no such user: {!r}".format(user))
> +
> + if group is None:
> + _group = -1
> + elif not isinstance(group, int):
> + _group = _get_gid(group)
> + if _group is None:
> + raise LookupError("no such group: {!r}".format(group))
> +
> + os.chown(path, _user, _group)
> +
> +def get_terminal_size(fallback=(80, 24)):
> + """Get the size of the terminal window.
> +
> + For each of the two dimensions, the environment variable, COLUMNS
> + and LINES respectively, is checked. If the variable is defined and
> + the value is a positive integer, it is used.
> +
> + When COLUMNS or LINES is not defined, which is the common case,
> + the terminal connected to sys.__stdout__ is queried
> + by invoking os.get_terminal_size.
> +
> + If the terminal size cannot be successfully queried, either because
> + the system doesn't support querying, or because we are not
> + connected to a terminal, the value given in fallback parameter
> + is used. Fallback defaults to (80, 24) which is the default
> + size used by many terminal emulators.
> +
> + The value returned is a named tuple of type os.terminal_size.
> + """
> + # columns, lines are the working values
> + try:
> + columns = int(os.environ['COLUMNS'])
> + except (KeyError, ValueError):
> + columns = 0
> +
> + try:
> + lines = int(os.environ['LINES'])
> + except (KeyError, ValueError):
> + lines = 0
> +
> + # only query if necessary
> + if columns <= 0 or lines <= 0:
> + try:
> + size = os.get_terminal_size(sys.__stdout__.fileno())
> + except (AttributeError, ValueError, OSError):
> + # stdout is None, closed, detached, or not a terminal, or
> + # os.get_terminal_size() is unsupported
> + size = os.terminal_size(fallback)
> + if columns <= 0:
> + columns = size.columns
> + if lines <= 0:
> + lines = size.lines
> +
> + return os.terminal_size((columns, lines))
> +
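> +# Illustrative sketch (not part of upstream shutil): the return value is an
> +# os.terminal_size named tuple, so both attribute and tuple access work:
> +#
> +# columns, lines = get_terminal_size(fallback=(80, 24))
> +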
> +def which(cmd, mode=os.F_OK | os.X_OK, path=None):
> + """Given a command, mode, and a PATH string, return the path which
> + conforms to the given mode on the PATH, or None if there is no such
> + file.
> +
> + `mode` defaults to os.F_OK | os.X_OK. `path` defaults to the result
> + of os.environ.get("PATH"), or can be overridden with a custom search
> + path.
> +
> + """
> + # Check that a given file can be accessed with the correct mode.
> + # Additionally check that `file` is not a directory, as on Windows
> + # directories pass the os.access check.
> + def _access_check(fn, mode):
> + return (os.path.exists(fn) and os.access(fn, mode)
> + and not os.path.isdir(fn))
> +
> + # If we're given a path with a directory part, look it up directly rather
> + # than referring to PATH directories. This includes checking relative to the
> + # current directory, e.g. ./script
> + if os.path.dirname(cmd):
> + if _access_check(cmd, mode):
> + return cmd
> + return None
> +
> + if path is None:
> + path = os.environ.get("PATH", os.defpath)
> + if not path:
> + return None
> + path = path.split(os.pathsep)
> +
> + if sys.platform == "win32":
> + # The current directory takes precedence on Windows.
> + if not os.curdir in path:
> + path.insert(0, os.curdir)
> +
> + # PATHEXT is necessary to check on Windows.
> + pathext = os.environ.get("PATHEXT", "").split(os.pathsep)
> + # See if the given file matches any of the expected path extensions.
> + # This will allow us to short circuit when given "python.exe".
> + # If it does match, only test that one, otherwise we have to try
> + # others.
> + if any(cmd.lower().endswith(ext.lower()) for ext in pathext):
> + files = [cmd]
> + else:
> + files = [cmd + ext for ext in pathext]
> + else:
> + # On other platforms you don't have things like PATHEXT to tell you
> + # what file suffixes are executable, so just pass on cmd as-is.
> + files = [cmd]
> +
> + seen = set()
> + for dir in path:
> + normdir = os.path.normcase(dir)
> + if not normdir in seen:
> + seen.add(normdir)
> + for thefile in files:
> + name = os.path.join(dir, thefile)
> + if _access_check(name, mode):
> + return name
> + return None
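> +
> +# Illustrative sketch (not part of upstream shutil): which() resolves a command
> +# name against PATH with the requested access mode; the name below is a
> +# hypothetical example only:
> +#
> +# interpreter = which("python")
> +# if interpreter is None:
> +# print("python not found on PATH")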
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
> new file mode 100644
> index 00000000..16eb0eff
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
> @@ -0,0 +1,529 @@
> +"""Append module search paths for third-party packages to sys.path.
> +
> +****************************************************************
> +* This module is automatically imported during initialization. *
> +****************************************************************
> +
> +This is a UEFI-specific version of site.py.
> +
> +In earlier versions of Python (up to 1.5a3), scripts or modules that
> +needed to use site-specific modules would place ``import site''
> +somewhere near the top of their code. Because of the automatic
> +import, this is no longer necessary (but code that does it still
> +works).
> +
> +This will append site-specific paths to the module search path. It
> +starts with sys.prefix and sys.exec_prefix (if different) and appends
> +lib/python<version>/site-packages as well as lib/site-python. The
> +resulting directories, if they exist, are appended to sys.path, and
> +also inspected for path configuration files.
> +
> +A path configuration file is a file whose name has the form
> +<package>.pth; its contents are additional directories (one per line)
> +to be added to sys.path. Non-existing directories (or
> +non-directories) are never added to sys.path; no directory is added to
> +sys.path more than once. Blank lines and lines beginning with
> +'#' are skipped. Lines starting with 'import' are executed.
> +
> +For example, suppose sys.prefix and sys.exec_prefix are set to
> +/Efi/StdLib and there is a directory /Efi/StdLib/lib/python36.8/site-packages
> +with three subdirectories, foo, bar and spam, and two path
> +configuration files, foo.pth and bar.pth. Assume foo.pth contains the
> +following:
> +
> + # foo package configuration
> + foo
> + bar
> + bletch
> +
> +and bar.pth contains:
> +
> + # bar package configuration
> + bar
> +
> +Then the following directories are added to sys.path, in this order:
> +
> + /Efi/StdLib/lib/python36.8/site-packages/bar
> + /Efi/StdLib/lib/python36.8/site-packages/foo
> +
> +Note that bletch is omitted because it doesn't exist; bar precedes foo
> +because bar.pth comes alphabetically before foo.pth; and spam is
> +omitted because it is not mentioned in either path configuration file.
> +
> +After these path manipulations, an attempt is made to import a module
> +named sitecustomize, which can perform arbitrary additional
> +site-specific customizations. If this import fails with an
> +ImportError exception, it is silently ignored.
> +
> +Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
> +
> +This program and the accompanying materials are licensed and made available under
> +the terms and conditions of the BSD License that accompanies this distribution.
> +The full text of the license may be found at
> +http://opensource.org/licenses/bsd-license.
> +
> +THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +"""
> +
> +import sys
> +import os
> +import builtins
> +import traceback
> +from io import open
> +# Prefixes for site-packages; add additional prefixes like /usr/local here
> +PREFIXES = [sys.prefix, sys.exec_prefix]
> +# Enable per user site-packages directory
> +# set it to False to disable the feature or True to force the feature
> +ENABLE_USER_SITE = False
> +
> +# for distutils.commands.install
> +# These values are initialized by the getuserbase() and getusersitepackages()
> +# functions, through the main() function when Python starts.
> +USER_SITE = None
> +USER_BASE = None
> +
> +
> +def makepath(*paths):
> + dir = os.path.join(*paths)
> + try:
> + dir = os.path.abspath(dir)
> + except OSError:
> + pass
> + return dir, os.path.normcase(dir)
> +
> +
> +def abs__file__():
> + """Set all module' __file__ attribute to an absolute path"""
> + for m in list(sys.modules.values()):
> + if hasattr(m, '__loader__'):
> + continue # don't mess with a PEP 302-supplied __file__
> + try:
> + m.__file__ = os.path.abspath(m.__file__)
> + except (AttributeError, OSError):
> + pass
> +
> +
> +def removeduppaths():
> + """ Remove duplicate entries from sys.path along with making them
> + absolute"""
> + # This ensures that the initial path provided by the interpreter contains
> + # only absolute pathnames, even if we're running from the build directory.
> + L = []
> + known_paths = set()
> + for dir in sys.path:
> + # Filter out duplicate paths (on case-insensitive file systems also
> + # if they only differ in case); turn relative paths into absolute
> + # paths.
> + dir, dircase = makepath(dir)
> + if not dircase in known_paths:
> + L.append(dir)
> + known_paths.add(dircase)
> + sys.path[:] = L
> + return known_paths
> +
> +
> +def _init_pathinfo():
> + """Return a set containing all existing directory entries from sys.path"""
> + d = set()
> + for dir in sys.path:
> + try:
> + if os.path.isdir(dir):
> + dir, dircase = makepath(dir)
> + d.add(dircase)
> + except TypeError:
> + continue
> + return d
> +
> +
> +def addpackage(sitedir, name, known_paths):
> + """Process a .pth file within the site-packages directory:
> + For each line in the file, either combine it with sitedir to a path
> + and add that to known_paths, or execute it if it starts with 'import '.
> + """
> + if known_paths is None:
> + _init_pathinfo()
> + reset = 1
> + else:
> + reset = 0
> + fullname = os.path.join(sitedir, name)
> + try:
> + f = open(fullname, "r")
> + except IOError:
> + return
> + with f:
> + for n, line in enumerate(f):
> + if line.startswith("#"):
> + continue
> + try:
> + if line.startswith(("import ", "import\t")):
> + exec(line)
> + continue
> + line = line.rstrip()
> + dir, dircase = makepath(sitedir, line)
> + if not dircase in known_paths and os.path.exists(dir):
> + sys.path.append(dir)
> + known_paths.add(dircase)
> + except Exception as err:
> + print("Error processing line {:d} of {}:\n".format(
> + n+1, fullname), file=sys.stderr)
> + for record in traceback.format_exception(*sys.exc_info()):
> + for line in record.splitlines():
> + print(' '+line, file=sys.stderr)
> + print("\nRemainder of file ignored", file=sys.stderr)
> + break
> + if reset:
> + known_paths = None
> + return known_paths
> +
> +
> +def addsitedir(sitedir, known_paths=None):
> + """Add 'sitedir' argument to sys.path if missing and handle .pth files in
> + 'sitedir'"""
> + if known_paths is None:
> + known_paths = _init_pathinfo()
> + reset = 1
> + else:
> + reset = 0
> + sitedir, sitedircase = makepath(sitedir)
> + if not sitedircase in known_paths:
> + sys.path.append(sitedir) # Add path component
> + try:
> + names = os.listdir(sitedir)
> + except os.error:
> + return
> + dotpth = os.extsep + "pth"
> + names = [name for name in names if name.endswith(dotpth)]
> + for name in sorted(names):
> + addpackage(sitedir, name, known_paths)
> + if reset:
> + known_paths = None
> + return known_paths
> +
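> +# Illustrative sketch (not part of this port's startup logic): addsitedir()
> +# appends the directory to sys.path and then processes any *.pth files found
> +# in it, as addpackage() above describes. The path is a hypothetical example:
> +#
> +# addsitedir("/Efi/StdLib/lib/python36.8/site-packages")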
> +
> +def check_enableusersite():
> + """Check if user site directory is safe for inclusion
> +
> + The function tests for the command line flag (including environment var),
> + process uid/gid equal to effective uid/gid.
> +
> + None: Disabled for security reasons
> + False: Disabled by user (command line option)
> + True: Safe and enabled
> + """
> + if sys.flags.no_user_site:
> + return False
> +
> + if hasattr(os, "getuid") and hasattr(os, "geteuid"):
> + # check process uid == effective uid
> + if os.geteuid() != os.getuid():
> + return None
> + if hasattr(os, "getgid") and hasattr(os, "getegid"):
> + # check process gid == effective gid
> + if os.getegid() != os.getgid():
> + return None
> +
> + return True
> +
> +def getuserbase():
> + """Returns the `user base` directory path.
> +
> + The `user base` directory can be used to store data. If the global
> + variable ``USER_BASE`` is not initialized yet, this function will also set
> + it.
> + """
> + global USER_BASE
> + if USER_BASE is not None:
> + return USER_BASE
> + from sysconfig import get_config_var
> + USER_BASE = get_config_var('userbase')
> + return USER_BASE
> +
> +def getusersitepackages():
> + """Returns the user-specific site-packages directory path.
> +
> + If the global variable ``USER_SITE`` is not initialized yet, this
> + function will also set it.
> + """
> + global USER_SITE
> + user_base = getuserbase() # this will also set USER_BASE
> +
> + if USER_SITE is not None:
> + return USER_SITE
> +
> + from sysconfig import get_path
> + import os
> +
> + USER_SITE = get_path('purelib', '%s_user' % os.name)
> + return USER_SITE
> +
> +def addusersitepackages(known_paths):
> + """Add a per user site-package to sys.path
> +
> + Each user has its own python directory with site-packages in the
> + home directory.
> + """
> + if ENABLE_USER_SITE:
> + # get the per user site-package path
> + # this call will also make sure USER_BASE and USER_SITE are set
> + user_site = getusersitepackages()
> + if os.path.isdir(user_site):
> + addsitedir(user_site, known_paths)
> + return known_paths
> +
> +def getsitepackages():
> + """Returns a list containing all global site-packages directories
> + (and possibly site-python).
> +
> + For each directory present in the global ``PREFIXES``, this function
> + will find its `site-packages` subdirectory depending on the system
> + environment, and will return a list of full paths.
> + """
> + sitepackages = []
> + seen = set()
> +
> + for prefix in PREFIXES:
> + if not prefix or prefix in seen:
> + continue
> + seen.add(prefix)
> +
> + ix = sys.version.find(' ')
> + if ix != -1:
> + micro = sys.version[4:ix]
> + else:
> + micro = '0'
> +
> + sitepackages.append(os.path.join(prefix, "lib",
> + "python" + sys.version[0] + sys.version[2] + '.' + micro,
> + "site-packages"))
> + sitepackages.append(os.path.join(prefix, "lib", "site-python"))
> + return sitepackages
> +
> +def addsitepackages(known_paths):
> + """Add site-packages (and possibly site-python) to sys.path"""
> + for sitedir in getsitepackages():
> + if os.path.isdir(sitedir):
> + addsitedir(sitedir, known_paths)
> +
> + return known_paths
> +
> +def setBEGINLIBPATH():
> + """The UEFI port has optional extension modules that do double duty
> + as DLLs (even though they have .efi file extensions) for other extensions.
> + The library search path needs to be amended so these will be found
> + during module import. Use BEGINLIBPATH so that these are at the start
> + of the library search path.
> +
> + """
> + dllpath = os.path.join(sys.prefix, "Lib", "lib-dynload")
> + libpath = os.environ['BEGINLIBPATH'].split(os.path.pathsep)
> + if libpath[-1]:
> + libpath.append(dllpath)
> + else:
> + libpath[-1] = dllpath
> + os.environ['BEGINLIBPATH'] = os.path.pathsep.join(libpath)
> +
> +
> +def setquit():
> + """Define new builtins 'quit' and 'exit'.
> +
> + These are objects which make the interpreter exit when called.
> + The repr of each object contains a hint at how it works.
> +
> + """
> + eof = 'Ctrl-D (i.e. EOF)'
> +
> + class Quitter(object):
> + def __init__(self, name):
> + self.name = name
> + def __repr__(self):
> + return 'Use %s() or %s to exit' % (self.name, eof)
> + def __call__(self, code=None):
> + # Shells like IDLE catch the SystemExit, but listen when their
> + # stdin wrapper is closed.
> + try:
> + sys.stdin.close()
> + except:
> + pass
> + raise SystemExit(code)
> + builtins.quit = Quitter('quit')
> + builtins.exit = Quitter('exit')
> +
> +
> +class _Printer(object):
> + """interactive prompt objects for printing the license text, a list of
> + contributors and the copyright notice."""
> +
> + MAXLINES = 23
> +
> + def __init__(self, name, data, files=(), dirs=()):
> + self.__name = name
> + self.__data = data
> + self.__files = files
> + self.__dirs = dirs
> + self.__lines = None
> +
> + def __setup(self):
> + if self.__lines:
> + return
> + data = None
> + for dir in self.__dirs:
> + for filename in self.__files:
> + filename = os.path.join(dir, filename)
> + try:
> + fp = open(filename, "r")
> + data = fp.read()
> + fp.close()
> + break
> + except IOError:
> + pass
> + if data:
> + break
> + if not data:
> + data = self.__data
> + self.__lines = data.split('\n')
> + self.__linecnt = len(self.__lines)
> +
> + def __repr__(self):
> + self.__setup()
> + if len(self.__lines) <= self.MAXLINES:
> + return "\n".join(self.__lines)
> + else:
> + return "Type %s() to see the full %s text" % ((self.__name,)*2)
> +
> + def __call__(self):
> + self.__setup()
> + prompt = 'Hit Return for more, or q (and Return) to quit: '
> + lineno = 0
> + while 1:
> + try:
> + for i in range(lineno, lineno + self.MAXLINES):
> + print((self.__lines[i]))
> + except IndexError:
> + break
> + else:
> + lineno += self.MAXLINES
> + key = None
> + while key is None:
> + key = input(prompt)
> + if key not in ('', 'q'):
> + key = None
> + if key == 'q':
> + break
> +
> +def setcopyright():
> + """Set 'copyright' and 'credits' in __builtin__"""
> + builtins.copyright = _Printer("copyright", sys.copyright)
> + builtins.credits = _Printer("credits", """\
> + Thanks to CWI, CNRI, BeOpen.com, Zope Corporation and a cast of thousands
> + for supporting Python development. See www.python.org for more information.""")
> + here = os.path.dirname(os.__file__)
> + builtins.license = _Printer(
> + "license", "See https://www.python.org/psf/license/",
> + ["LICENSE.txt", "LICENSE"],
> + [os.path.join(here, os.pardir), here, os.curdir])
> +
> +
> +class _Helper(object):
> + """Define the builtin 'help'.
> + This is a wrapper around pydoc.help (with a twist).
> +
> + """
> +
> + def __repr__(self):
> + return "Type help() for interactive help, " \
> + "or help(object) for help about object."
> + def __call__(self, *args, **kwds):
> + import pydoc
> + return pydoc.help(*args, **kwds)
> +
> +def sethelper():
> + builtins.help = _Helper()
> +
> +def setencoding():
> + """Set the string encoding used by the Unicode implementation. The
> + default is 'ascii', but if you're willing to experiment, you can
> + change this."""
> + encoding = "ascii" # Default value set by _PyUnicode_Init()
> + if 0:
> + # Enable to support locale aware default string encodings.
> + import locale
> + loc = locale.getdefaultlocale()
> + if loc[1]:
> + encoding = loc[1]
> + if 0:
> + # Enable to switch off string to Unicode coercion and implicit
> + # Unicode to string conversion.
> + encoding = "undefined"
> + if encoding != "ascii":
> + # On Non-Unicode builds this will raise an AttributeError...
> + sys.setdefaultencoding(encoding) # Needs Python Unicode build !
> +
> +
> +def execsitecustomize():
> + """Run custom site specific code, if available."""
> + try:
> + import sitecustomize
> + except ImportError:
> + pass
> + except Exception:
> + if sys.flags.verbose:
> + sys.excepthook(*sys.exc_info())
> + else:
> + print("'import sitecustomize' failed; use -v for traceback", file=sys.stderr)
> +
> +
> +def execusercustomize():
> + """Run custom user specific code, if available."""
> + try:
> + import usercustomize
> + except ImportError:
> + pass
> + except Exception:
> + if sys.flags.verbose:
> + sys.excepthook(*sys.exc_info())
> + else:
> + print("'import usercustomize' failed; use -v for traceback", file=sys.stderr)
> +
> +
> +def main():
> + global ENABLE_USER_SITE
> +
> + abs__file__()
> + known_paths = removeduppaths()
> + if ENABLE_USER_SITE is None:
> + ENABLE_USER_SITE = check_enableusersite()
> + known_paths = addusersitepackages(known_paths)
> + known_paths = addsitepackages(known_paths)
> + setquit()
> + setcopyright()
> + sethelper()
> + setencoding()
> + execsitecustomize()
> + # Remove sys.setdefaultencoding() so that users cannot change the
> + # encoding after initialization. The test for presence is needed when
> + # this module is run as a script, because this code is executed twice.
> + if hasattr(sys, "setdefaultencoding"):
> + del sys.setdefaultencoding
> +
> +main()
> +
> +def _script():
> + help = """\
> + %s
> +
> + Path elements are normally separated by '%s'.
> + """
> +
> + print("sys.path = [")
> + for dir in sys.path:
> + print(" %r," % (dir,))
> + print("]")
> +
> + import textwrap
> + print(textwrap.dedent(help % (sys.argv[0], os.pathsep)))
> + sys.exit(0)
> +
> +if __name__ == '__main__':
> + _script()
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
> new file mode 100644
> index 00000000..24ea86c0
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
> @@ -0,0 +1,1620 @@
> +# subprocess - Subprocesses with accessible I/O streams
> +#
> +# For more information about this module, see PEP 324.
> +#
> +# Copyright (c) 2003-2005 by Peter Astrand <astrand@lysator.liu.se>
> +#
> +# Licensed to PSF under a Contributor Agreement.
> +# See http://www.python.org/2.4/license for licensing details.
> +
> +r"""Subprocesses with accessible I/O streams
> +
> +This module allows you to spawn processes, connect to their
> +input/output/error pipes, and obtain their return codes.
> +
> +For a complete description of this module see the Python documentation.
> +
> +Main API
> +========
> +run(...): Runs a command, waits for it to complete, then returns a
> + CompletedProcess instance.
> +Popen(...): A class for flexibly executing a command in a new process
> +
> +Constants
> +---------
> +DEVNULL: Special value that indicates that os.devnull should be used
> +PIPE: Special value that indicates a pipe should be created
> +STDOUT: Special value that indicates that stderr should go to stdout
> +
> +
> +Older API
> +=========
> +call(...): Runs a command, waits for it to complete, then returns
> + the return code.
> +check_call(...): Same as call() but raises CalledProcessError()
> + if return code is not 0
> +check_output(...): Same as check_call() but returns the contents of
> + stdout instead of a return code
> +getoutput(...): Runs a command in the shell, waits for it to complete,
> + then returns the output
> +getstatusoutput(...): Runs a command in the shell, waits for it to complete,
> + then returns a (exitcode, output) tuple
> +"""
> +
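> +# Illustrative usage of the API summarized above; note that process creation
> +# is not functional in this UEFI port (see the _uefi guard below), so this is
> +# for reference only:
> +#
> +# result = run(["ls", "-l"], stdout=PIPE)
> +# print(result.returncode, result.stdout)
> +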
> +import sys
> +_mswindows = (sys.platform == "win32")
> +_uefi = (sys.platform == "uefi")
> +import io
> +import os
> +import time
> +import signal
> +import builtins
> +import warnings
> +import errno
> +from time import monotonic as _time
> +import edk2  # edk2 extension module provided by this UEFI port (Modules/edk2module.c)
> +
> +# Exception classes used by this module.
> +class SubprocessError(Exception): pass
> +
> +
> +class CalledProcessError(SubprocessError):
> + """Raised when run() is called with check=True and the process
> + returns a non-zero exit status.
> +
> + Attributes:
> + cmd, returncode, stdout, stderr, output
> + """
> + def __init__(self, returncode, cmd, output=None, stderr=None):
> + self.returncode = returncode
> + self.cmd = cmd
> + self.output = output
> + self.stderr = stderr
> +
> + def __str__(self):
> + if self.returncode and self.returncode < 0:
> + try:
> + return "Command '%s' died with %r." % (
> + self.cmd, signal.Signals(-self.returncode))
> + except ValueError:
> + return "Command '%s' died with unknown signal %d." % (
> + self.cmd, -self.returncode)
> + else:
> + return "Command '%s' returned non-zero exit status %d." % (
> + self.cmd, self.returncode)
> +
> + @property
> + def stdout(self):
> + """Alias for output attribute, to match stderr"""
> + return self.output
> +
> + @stdout.setter
> + def stdout(self, value):
> + # There's no obvious reason to set this, but allow it anyway so
> + # .stdout is a transparent alias for .output
> + self.output = value
> +
> +
> +class TimeoutExpired(SubprocessError):
> + """This exception is raised when the timeout expires while waiting for a
> + child process.
> +
> + Attributes:
> + cmd, output, stdout, stderr, timeout
> + """
> + def __init__(self, cmd, timeout, output=None, stderr=None):
> + self.cmd = cmd
> + self.timeout = timeout
> + self.output = output
> + self.stderr = stderr
> +
> + def __str__(self):
> + return ("Command '%s' timed out after %s seconds" %
> + (self.cmd, self.timeout))
> +
> + @property
> + def stdout(self):
> + return self.output
> +
> + @stdout.setter
> + def stdout(self, value):
> + # There's no obvious reason to set this, but allow it anyway so
> + # .stdout is a transparent alias for .output
> + self.output = value
> +
> +
> +if _mswindows:
> + import threading
> + import msvcrt
> + import _winapi
> + class STARTUPINFO:
> + dwFlags = 0
> + hStdInput = None
> + hStdOutput = None
> + hStdError = None
> + wShowWindow = 0
> +else:
> + if not _uefi: # _posixsubprocess is unavailable under the UEFI shell; subprocess cannot spawn processes there
> + import _posixsubprocess
> +
> + import select
> + import selectors
> + try:
> + import threading
> + except ImportError:
> + import dummy_threading as threading
> +
> + # When select or poll has indicated that the file is writable,
> + # we can write up to _PIPE_BUF bytes without risk of blocking.
> + # POSIX defines PIPE_BUF as >= 512.
> + _PIPE_BUF = getattr(select, 'PIPE_BUF', 512)
> +
> + # poll/select have the advantage of not requiring any extra file
> + # descriptor, contrarily to epoll/kqueue (also, they require a single
> + # syscall).
> + if hasattr(selectors, 'PollSelector'):
> + _PopenSelector = selectors.PollSelector
> + else:
> + _PopenSelector = selectors.SelectSelector
> +
> +
> +__all__ = ["Popen", "PIPE", "STDOUT", "call", "check_call", "getstatusoutput",
> + "getoutput", "check_output", "run", "CalledProcessError", "DEVNULL",
> + "SubprocessError", "TimeoutExpired", "CompletedProcess"]
> + # NOTE: We intentionally exclude list2cmdline as it is
> + # considered an internal implementation detail. issue10838.
> +
> +if _mswindows:
> + from _winapi import (CREATE_NEW_CONSOLE, CREATE_NEW_PROCESS_GROUP,
> + STD_INPUT_HANDLE, STD_OUTPUT_HANDLE,
> + STD_ERROR_HANDLE, SW_HIDE,
> + STARTF_USESTDHANDLES, STARTF_USESHOWWINDOW)
> +
> + __all__.extend(["CREATE_NEW_CONSOLE", "CREATE_NEW_PROCESS_GROUP",
> + "STD_INPUT_HANDLE", "STD_OUTPUT_HANDLE",
> + "STD_ERROR_HANDLE", "SW_HIDE",
> + "STARTF_USESTDHANDLES", "STARTF_USESHOWWINDOW",
> + "STARTUPINFO"])
> +
> + class Handle(int):
> + closed = False
> +
> + def Close(self, CloseHandle=_winapi.CloseHandle):
> + if not self.closed:
> + self.closed = True
> + CloseHandle(self)
> +
> + def Detach(self):
> + if not self.closed:
> + self.closed = True
> + return int(self)
> + raise ValueError("already closed")
> +
> + def __repr__(self):
> + return "%s(%d)" % (self.__class__.__name__, int(self))
> +
> + __del__ = Close
> + __str__ = __repr__
> +
> +
> +# This list holds Popen instances for which the underlying process had not
> +# exited at the time its __del__ method got called: those processes are wait()ed
> +# for synchronously from _cleanup() when a new Popen object is created, to avoid
> +# zombie processes.
> +_active = []
> +
> +def _cleanup():
> + for inst in _active[:]:
> + res = inst._internal_poll(_deadstate=sys.maxsize)
> + if res is not None:
> + try:
> + _active.remove(inst)
> + except ValueError:
> + # This can happen if two threads create a new Popen instance.
> + # It's harmless that it was already removed, so ignore.
> + pass
> +
> +PIPE = -1
> +STDOUT = -2
> +DEVNULL = -3
> +
> +
> +# XXX This function is only used by multiprocessing and the test suite,
> +# but it's here so that it can be imported when Python is compiled without
> +# threads.
> +
> +def _optim_args_from_interpreter_flags():
> + """Return a list of command-line arguments reproducing the current
> + optimization settings in sys.flags."""
> + args = []
> + value = sys.flags.optimize
> + if value > 0:
> + args.append('-' + 'O' * value)
> + return args
> +
> +
> +def _args_from_interpreter_flags():
> + """Return a list of command-line arguments reproducing the current
> + settings in sys.flags, sys.warnoptions and sys._xoptions."""
> + flag_opt_map = {
> + 'debug': 'd',
> + # 'inspect': 'i',
> + # 'interactive': 'i',
> + 'dont_write_bytecode': 'B',
> + 'no_site': 'S',
> + 'verbose': 'v',
> + 'bytes_warning': 'b',
> + 'quiet': 'q',
> + # -O is handled in _optim_args_from_interpreter_flags()
> + }
> + args = _optim_args_from_interpreter_flags()
> + for flag, opt in flag_opt_map.items():
> + v = getattr(sys.flags, flag)
> + if v > 0:
> + args.append('-' + opt * v)
> +
> + if sys.flags.isolated:
> + args.append('-I')
> + else:
> + if sys.flags.ignore_environment:
> + args.append('-E')
> + if sys.flags.no_user_site:
> + args.append('-s')
> +
> + for opt in sys.warnoptions:
> + args.append('-W' + opt)
> +
> + # -X options
> + xoptions = getattr(sys, '_xoptions', {})
> + for opt in ('faulthandler', 'tracemalloc',
> + 'showalloccount', 'showrefcount', 'utf8'):
> + if opt in xoptions:
> + value = xoptions[opt]
> + if value is True:
> + arg = opt
> + else:
> + arg = '%s=%s' % (opt, value)
> + args.extend(('-X', arg))
> +
> + return args
> +
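(Not part of the patch: a quick illustration of what the two helpers above reconstruct,
assuming a host CPython rather than the UEFI shell. They rebuild the parent interpreter's
flags so a child interpreter can be started with the same settings.)

    # When the parent interpreter was started as:  python -O -B -W error <script>
    import subprocess

    print(subprocess._optim_args_from_interpreter_flags())   # roughly: ['-O']
    print(subprocess._args_from_interpreter_flags())          # roughly: ['-O', '-B', '-Werror']
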
> +
> +def call(*popenargs, timeout=None, **kwargs):
> + """Run command with arguments. Wait for command to complete or
> + timeout, then return the returncode attribute.
> +
> + The arguments are the same as for the Popen constructor. Example:
> +
> + retcode = call(["ls", "-l"])
> + """
> + with Popen(*popenargs, **kwargs) as p:
> + try:
> + return p.wait(timeout=timeout)
> + except:
> + p.kill()
> + p.wait()
> + raise
> +
> +
> +def check_call(*popenargs, **kwargs):
> + """Run command with arguments. Wait for command to complete. If
> + the exit code was zero then return, otherwise raise
> + CalledProcessError. The CalledProcessError object will have the
> + return code in the returncode attribute.
> +
> + The arguments are the same as for the call function. Example:
> +
> + check_call(["ls", "-l"])
> + """
> + retcode = call(*popenargs, **kwargs)
> + if retcode:
> + cmd = kwargs.get("args")
> + if cmd is None:
> + cmd = popenargs[0]
> + raise CalledProcessError(retcode, cmd)
> + return 0
> +
> +
> +def check_output(*popenargs, timeout=None, **kwargs):
> + r"""Run command with arguments and return its output.
> +
> + If the exit code was non-zero it raises a CalledProcessError. The
> + CalledProcessError object will have the return code in the returncode
> + attribute and output in the output attribute.
> +
> + The arguments are the same as for the Popen constructor. Example:
> +
> + >>> check_output(["ls", "-l", "/dev/null"])
> + b'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
> +
> + The stdout argument is not allowed as it is used internally.
> + To capture standard error in the result, use stderr=STDOUT.
> +
> + >>> check_output(["/bin/sh", "-c",
> + ... "ls -l non_existent_file ; exit 0"],
> + ... stderr=STDOUT)
> + b'ls: non_existent_file: No such file or directory\n'
> +
> + There is an additional optional argument, "input", allowing you to
> + pass a string to the subprocess's stdin. If you use this argument
> + you may not also use the Popen constructor's "stdin" argument, as
> + it too will be used internally. Example:
> +
> + >>> check_output(["sed", "-e", "s/foo/bar/"],
> + ... input=b"when in the course of fooman events\n")
> + b'when in the course of barman events\n'
> +
> + If universal_newlines=True is passed, the "input" argument must be a
> + string and the return value will be a string rather than bytes.
> + """
> + if 'stdout' in kwargs:
> + raise ValueError('stdout argument not allowed, it will be overridden.')
> +
> + if 'input' in kwargs and kwargs['input'] is None:
> + # Explicitly passing input=None was previously equivalent to passing an
> + # empty string. That is maintained here for backwards compatibility.
> + kwargs['input'] = '' if kwargs.get('universal_newlines', False) else b''
> +
> + return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
> + **kwargs).stdout
> +
> +
> +class CompletedProcess(object):
> + """A process that has finished running.
> +
> + This is returned by run().
> +
> + Attributes:
> + args: The list or str args passed to run().
> + returncode: The exit code of the process, negative for signals.
> + stdout: The standard output (None if not captured).
> + stderr: The standard error (None if not captured).
> + """
> + def __init__(self, args, returncode, stdout=None, stderr=None):
> + self.args = args
> + self.returncode = returncode
> + self.stdout = stdout
> + self.stderr = stderr
> +
> + def __repr__(self):
> + args = ['args={!r}'.format(self.args),
> + 'returncode={!r}'.format(self.returncode)]
> + if self.stdout is not None:
> + args.append('stdout={!r}'.format(self.stdout))
> + if self.stderr is not None:
> + args.append('stderr={!r}'.format(self.stderr))
> + return "{}({})".format(type(self).__name__, ', '.join(args))
> +
> + def check_returncode(self):
> + """Raise CalledProcessError if the exit code is non-zero."""
> + if self.returncode:
> + raise CalledProcessError(self.returncode, self.args, self.stdout,
> + self.stderr)
> +
> +
> +def run(*popenargs, input=None, timeout=None, check=False, **kwargs):
> + """Run command with arguments and return a CompletedProcess instance.
> +
> + The returned instance will have attributes args, returncode, stdout and
> + stderr. By default, stdout and stderr are not captured, and those attributes
> + will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.
> +
> + If check is True and the exit code was non-zero, it raises a
> + CalledProcessError. The CalledProcessError object will have the return code
> + in the returncode attribute, and output & stderr attributes if those streams
> + were captured.
> +
> + If timeout is given, and the process takes too long, a TimeoutExpired
> + exception will be raised.
> +
> + There is an optional argument "input", allowing you to
> + pass a string to the subprocess's stdin. If you use this argument
> + you may not also use the Popen constructor's "stdin" argument, as
> + it will be used internally.
> +
> + The other arguments are the same as for the Popen constructor.
> +
> + If universal_newlines=True is passed, the "input" argument must be a
> + string and stdout/stderr in the returned object will be strings rather than
> + bytes.
> + """
> + if input is not None:
> + if 'stdin' in kwargs:
> + raise ValueError('stdin and input arguments may not both be used.')
> + kwargs['stdin'] = PIPE
> +
> + with Popen(*popenargs, **kwargs) as process:
> + try:
> + stdout, stderr = process.communicate(input, timeout=timeout)
> + except TimeoutExpired:
> + process.kill()
> + stdout, stderr = process.communicate()
> + raise TimeoutExpired(process.args, timeout, output=stdout,
> + stderr=stderr)
> + except:
> + process.kill()
> + process.wait()
> + raise
> + retcode = process.poll()
> + if check and retcode:
> + raise CalledProcessError(retcode, process.args,
> + output=stdout, stderr=stderr)
> + return CompletedProcess(process.args, retcode, stdout, stderr)
> +
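(Reviewer note, not part of the patch: a minimal sketch of the high-level run() entry
point defined above, assuming a POSIX host where "echo" is available. On Python 3.6,
output is captured with stdout=PIPE; the capture_output keyword only arrived later.)

    import subprocess

    # check=True turns a non-zero exit code into CalledProcessError.
    result = subprocess.run(["echo", "hello"], stdout=subprocess.PIPE, check=True)
    print(result.returncode)   # 0
    print(result.stdout)       # b'hello\n'
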
> +
> +def list2cmdline(seq):
> + """
> + Translate a sequence of arguments into a command line
> + string, using the same rules as the MS C runtime:
> +
> + 1) Arguments are delimited by white space, which is either a
> + space or a tab.
> +
> + 2) A string surrounded by double quotation marks is
> + interpreted as a single argument, regardless of white space
> + contained within. A quoted string can be embedded in an
> + argument.
> +
> + 3) A double quotation mark preceded by a backslash is
> + interpreted as a literal double quotation mark.
> +
> + 4) Backslashes are interpreted literally, unless they
> + immediately precede a double quotation mark.
> +
> + 5) If backslashes immediately precede a double quotation mark,
> + every pair of backslashes is interpreted as a literal
> + backslash. If the number of backslashes is odd, the last
> + backslash escapes the next double quotation mark as
> + described in rule 3.
> + """
> +
> + # See
> + # http://msdn.microsoft.com/en-us/library/17w5ykft.aspx
> + # or search http://msdn.microsoft.com for
> + # "Parsing C++ Command-Line Arguments"
> + result = []
> + needquote = False
> + for arg in seq:
> + bs_buf = []
> +
> + # Add a space to separate this argument from the others
> + if result:
> + result.append(' ')
> +
> + needquote = (" " in arg) or ("\t" in arg) or not arg
> + if needquote:
> + result.append('"')
> +
> + for c in arg:
> + if c == '\\':
> + # Don't know if we need to double yet.
> + bs_buf.append(c)
> + elif c == '"':
> + # Double backslashes.
> + result.append('\\' * len(bs_buf)*2)
> + bs_buf = []
> + result.append('\\"')
> + else:
> + # Normal char
> + if bs_buf:
> + result.extend(bs_buf)
> + bs_buf = []
> + result.append(c)
> +
> + # Add remaining backslashes, if any.
> + if bs_buf:
> + result.extend(bs_buf)
> +
> + if needquote:
> + result.extend(bs_buf)
> + result.append('"')
> +
> + return ''.join(result)
> +
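(Illustrative only: a worked example of the MS C runtime quoting rules that the
list2cmdline docstring above describes. list2cmdline is an internal helper, so this is
just to make the rules concrete.)

    from subprocess import list2cmdline

    # Rule 2: the argument containing a space is wrapped in double quotes;
    # rule 4: the backslash is not followed by a quote, so it stays literal.
    print(list2cmdline(["cmd.exe", "/c", "dir C:\\Program Files"]))
    # cmd.exe /c "dir C:\Program Files"
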
> +
> +# Various tools for executing commands and looking at their output and status.
> +#
> +
> +def getstatusoutput(cmd):
> + """Return (exitcode, output) of executing cmd in a shell.
> +
> + Execute the string 'cmd' in a shell with 'check_output' and
> + return a 2-tuple (status, output). The locale encoding is used
> + to decode the output and process newlines.
> +
> + A trailing newline is stripped from the output.
> + The exit status for the command can be interpreted
> + according to the rules for the function 'wait'. Example:
> +
> + >>> import subprocess
> + >>> subprocess.getstatusoutput('ls /bin/ls')
> + (0, '/bin/ls')
> + >>> subprocess.getstatusoutput('cat /bin/junk')
> + (1, 'cat: /bin/junk: No such file or directory')
> + >>> subprocess.getstatusoutput('/bin/junk')
> + (127, 'sh: /bin/junk: not found')
> + >>> subprocess.getstatusoutput('/bin/kill $$')
> + (-15, '')
> + """
> + try:
> + data = check_output(cmd, shell=True, universal_newlines=True, stderr=STDOUT)
> + exitcode = 0
> + except CalledProcessError as ex:
> + data = ex.output
> + exitcode = ex.returncode
> + if data[-1:] == '\n':
> + data = data[:-1]
> + return exitcode, data
> +
> +def getoutput(cmd):
> + """Return output (stdout or stderr) of executing cmd in a shell.
> +
> + Like getstatusoutput(), except the exit status is ignored and the return
> + value is a string containing the command's output. Example:
> +
> + >>> import subprocess
> + >>> subprocess.getoutput('ls /bin/ls')
> + '/bin/ls'
> + """
> + return getstatusoutput(cmd)[1]
> +
> +
> +_PLATFORM_DEFAULT_CLOSE_FDS = object()
> +
> +
> +class Popen(object):
> + """ Execute a child program in a new process.
> +
> + For a complete description of the arguments see the Python documentation.
> +
> + Arguments:
> + args: A string, or a sequence of program arguments.
> +
> + bufsize: supplied as the buffering argument to the open() function when
> + creating the stdin/stdout/stderr pipe file objects
> +
> + executable: A replacement program to execute.
> +
> + stdin, stdout and stderr: These specify the executed programs' standard
> + input, standard output and standard error file handles, respectively.
> +
> + preexec_fn: (POSIX only) An object to be called in the child process
> + just before the child is executed.
> +
> + close_fds: Controls closing or inheriting of file descriptors.
> +
> + shell: If true, the command will be executed through the shell.
> +
> + cwd: Sets the current directory before the child is executed.
> +
> + env: Defines the environment variables for the new process.
> +
> + universal_newlines: If true, use universal line endings for file
> + objects stdin, stdout and stderr.
> +
> + startupinfo and creationflags (Windows only)
> +
> + restore_signals (POSIX only)
> +
> + start_new_session (POSIX only)
> +
> + pass_fds (POSIX only)
> +
> + encoding and errors: Text mode encoding and error handling to use for
> + file objects stdin, stdout and stderr.
> +
> + Attributes:
> + stdin, stdout, stderr, pid, returncode
> + """
> + _child_created = False # Set here since __del__ checks it
> +
> + def __init__(self, args, bufsize=-1, executable=None,
> + stdin=None, stdout=None, stderr=None,
> + preexec_fn=None, close_fds=_PLATFORM_DEFAULT_CLOSE_FDS,
> + shell=False, cwd=None, env=None, universal_newlines=False,
> + startupinfo=None, creationflags=0,
> + restore_signals=True, start_new_session=False,
> + pass_fds=(), *, encoding=None, errors=None):
> + """Create new Popen instance."""
> + _cleanup()
> + # Held while anything is calling waitpid before returncode has been
> + # updated to prevent clobbering returncode if wait() or poll() are
> + # called from multiple threads at once. After acquiring the lock,
> + # code must re-check self.returncode to see if another thread just
> + # finished a waitpid() call.
> + self._waitpid_lock = threading.Lock()
> +
> + self._input = None
> + self._communication_started = False
> + if bufsize is None:
> + bufsize = -1 # Restore default
> + if not isinstance(bufsize, int):
> + raise TypeError("bufsize must be an integer")
> +
> + if _mswindows:
> + if preexec_fn is not None:
> + raise ValueError("preexec_fn is not supported on Windows "
> + "platforms")
> + any_stdio_set = (stdin is not None or stdout is not None or
> + stderr is not None)
> + if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS:
> + if any_stdio_set:
> + close_fds = False
> + else:
> + close_fds = True
> + elif close_fds and any_stdio_set:
> + raise ValueError(
> + "close_fds is not supported on Windows platforms"
> + " if you redirect stdin/stdout/stderr")
> + else:
> + # POSIX
> + if close_fds is _PLATFORM_DEFAULT_CLOSE_FDS:
> + close_fds = True
> + if pass_fds and not close_fds:
> + warnings.warn("pass_fds overriding close_fds.", RuntimeWarning)
> + close_fds = True
> + if startupinfo is not None:
> + raise ValueError("startupinfo is only supported on Windows "
> + "platforms")
> + if creationflags != 0:
> + raise ValueError("creationflags is only supported on Windows "
> + "platforms")
> +
> + self.args = args
> + self.stdin = None
> + self.stdout = None
> + self.stderr = None
> + self.pid = None
> + self.returncode = None
> + self.universal_newlines = universal_newlines
> + self.encoding = encoding
> + self.errors = errors
> +
> + # Input and output objects. The general principle is like
> + # this:
> + #
> + # Parent Child
> + # ------ -----
> + # p2cwrite ---stdin---> p2cread
> + # c2pread <--stdout--- c2pwrite
> + # errread <--stderr--- errwrite
> + #
> + # On POSIX, the child objects are file descriptors. On
> + # Windows, these are Windows file handles. The parent objects
> + # are file descriptors on both platforms. The parent objects
> + # are -1 when not using PIPEs. The child objects are -1
> + # when not redirecting.
> +
> + (p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite) = self._get_handles(stdin, stdout, stderr)
> +
> + # We wrap OS handles *before* launching the child, otherwise a
> + # quickly terminating child could make our fds unwrappable
> + # (see #8458).
> +
> + if _mswindows:
> + if p2cwrite != -1:
> + p2cwrite = msvcrt.open_osfhandle(p2cwrite.Detach(), 0)
> + if c2pread != -1:
> + c2pread = msvcrt.open_osfhandle(c2pread.Detach(), 0)
> + if errread != -1:
> + errread = msvcrt.open_osfhandle(errread.Detach(), 0)
> +
> + text_mode = encoding or errors or universal_newlines
> +
> + self._closed_child_pipe_fds = False
> +
> + try:
> + if p2cwrite != -1:
> + self.stdin = io.open(p2cwrite, 'wb', bufsize)
> + if text_mode:
> + self.stdin = io.TextIOWrapper(self.stdin, write_through=True,
> + line_buffering=(bufsize == 1),
> + encoding=encoding, errors=errors)
> + if c2pread != -1:
> + self.stdout = io.open(c2pread, 'rb', bufsize)
> + if text_mode:
> + self.stdout = io.TextIOWrapper(self.stdout,
> + encoding=encoding, errors=errors)
> + if errread != -1:
> + self.stderr = io.open(errread, 'rb', bufsize)
> + if text_mode:
> + self.stderr = io.TextIOWrapper(self.stderr,
> + encoding=encoding, errors=errors)
> +
> + self._execute_child(args, executable, preexec_fn, close_fds,
> + pass_fds, cwd, env,
> + startupinfo, creationflags, shell,
> + p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite,
> + restore_signals, start_new_session)
> + except:
> + # Cleanup if the child failed starting.
> + for f in filter(None, (self.stdin, self.stdout, self.stderr)):
> + try:
> + f.close()
> + except OSError:
> + pass # Ignore EBADF or other errors.
> +
> + if not self._closed_child_pipe_fds:
> + to_close = []
> + if stdin == PIPE:
> + to_close.append(p2cread)
> + if stdout == PIPE:
> + to_close.append(c2pwrite)
> + if stderr == PIPE:
> + to_close.append(errwrite)
> + if hasattr(self, '_devnull'):
> + to_close.append(self._devnull)
> + for fd in to_close:
> + try:
> + if _mswindows and isinstance(fd, Handle):
> + fd.Close()
> + else:
> + os.close(fd)
> + except OSError:
> + pass
> +
> + raise
> +
> + def _translate_newlines(self, data, encoding, errors):
> + data = data.decode(encoding, errors)
> + return data.replace("\r\n", "\n").replace("\r", "\n")
> +
> + def __enter__(self):
> + return self
> +
> + def __exit__(self, type, value, traceback):
> + if self.stdout:
> + self.stdout.close()
> + if self.stderr:
> + self.stderr.close()
> + try: # Flushing a BufferedWriter may raise an error
> + if self.stdin:
> + self.stdin.close()
> + finally:
> + # Wait for the process to terminate, to avoid zombies.
> + self.wait()
> +
> + def __del__(self, _maxsize=sys.maxsize, _warn=warnings.warn):
> + if not self._child_created:
> + # We didn't get to successfully create a child process.
> + return
> + if self.returncode is None:
> +            # Not reading subprocess exit status creates a zombie process which
> + # is only destroyed at the parent python process exit
> + _warn("subprocess %s is still running" % self.pid,
> + ResourceWarning, source=self)
> + # In case the child hasn't been waited on, check if it's done.
> + self._internal_poll(_deadstate=_maxsize)
> + if self.returncode is None and _active is not None:
> + # Child is still running, keep us alive until we can wait on it.
> + _active.append(self)
> +
> + def _get_devnull(self):
> + if not hasattr(self, '_devnull'):
> + self._devnull = os.open(os.devnull, os.O_RDWR)
> + return self._devnull
> +
> + def _stdin_write(self, input):
> + if input:
> + try:
> + self.stdin.write(input)
> + except BrokenPipeError:
> + pass # communicate() must ignore broken pipe errors.
> + except OSError as exc:
> + if exc.errno == errno.EINVAL:
> + # bpo-19612, bpo-30418: On Windows, stdin.write() fails
> + # with EINVAL if the child process exited or if the child
> + # process is still running but closed the pipe.
> + pass
> + else:
> + raise
> +
> + try:
> + self.stdin.close()
> + except BrokenPipeError:
> + pass # communicate() must ignore broken pipe errors.
> + except OSError as exc:
> + if exc.errno == errno.EINVAL:
> + pass
> + else:
> + raise
> +
> + def communicate(self, input=None, timeout=None):
> + """Interact with process: Send data to stdin. Read data from
> + stdout and stderr, until end-of-file is reached. Wait for
> + process to terminate.
> +
> + The optional "input" argument should be data to be sent to the
> + child process (if self.universal_newlines is True, this should
> + be a string; if it is False, "input" should be bytes), or
> + None, if no data should be sent to the child.
> +
> + communicate() returns a tuple (stdout, stderr). These will be
> + bytes or, if self.universal_newlines was True, a string.
> + """
> +
> + if self._communication_started and input:
> + raise ValueError("Cannot send input after starting communication")
> +
> + # Optimization: If we are not worried about timeouts, we haven't
> + # started communicating, and we have one or zero pipes, using select()
> + # or threads is unnecessary.
> + if (timeout is None and not self._communication_started and
> + [self.stdin, self.stdout, self.stderr].count(None) >= 2):
> + stdout = None
> + stderr = None
> + if self.stdin:
> + self._stdin_write(input)
> + elif self.stdout:
> + stdout = self.stdout.read()
> + self.stdout.close()
> + elif self.stderr:
> + stderr = self.stderr.read()
> + self.stderr.close()
> + self.wait()
> + else:
> + if timeout is not None:
> + endtime = _time() + timeout
> + else:
> + endtime = None
> +
> + try:
> + stdout, stderr = self._communicate(input, endtime, timeout)
> + finally:
> + self._communication_started = True
> +
> + sts = self.wait(timeout=self._remaining_time(endtime))
> +
> + return (stdout, stderr)
> +
> +
> + def poll(self):
> + """Check if child process has terminated. Set and return returncode
> + attribute."""
> + return self._internal_poll()
> +
> +
> + def _remaining_time(self, endtime):
> + """Convenience for _communicate when computing timeouts."""
> + if endtime is None:
> + return None
> + else:
> + return endtime - _time()
> +
> +
> + def _check_timeout(self, endtime, orig_timeout):
> + """Convenience for checking if a timeout has expired."""
> + if endtime is None:
> + return
> + if _time() > endtime:
> + raise TimeoutExpired(self.args, orig_timeout)
> +
> +
> + if _mswindows:
> + #
> + # Windows methods
> + #
> + def _get_handles(self, stdin, stdout, stderr):
> + """Construct and return tuple with IO objects:
> + p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
> + """
> + if stdin is None and stdout is None and stderr is None:
> + return (-1, -1, -1, -1, -1, -1)
> +
> + p2cread, p2cwrite = -1, -1
> + c2pread, c2pwrite = -1, -1
> + errread, errwrite = -1, -1
> +
> + if stdin is None:
> + p2cread = _winapi.GetStdHandle(_winapi.STD_INPUT_HANDLE)
> + if p2cread is None:
> + p2cread, _ = _winapi.CreatePipe(None, 0)
> + p2cread = Handle(p2cread)
> + _winapi.CloseHandle(_)
> + elif stdin == PIPE:
> + p2cread, p2cwrite = _winapi.CreatePipe(None, 0)
> + p2cread, p2cwrite = Handle(p2cread), Handle(p2cwrite)
> + elif stdin == DEVNULL:
> + p2cread = msvcrt.get_osfhandle(self._get_devnull())
> + elif isinstance(stdin, int):
> + p2cread = msvcrt.get_osfhandle(stdin)
> + else:
> + # Assuming file-like object
> + p2cread = msvcrt.get_osfhandle(stdin.fileno())
> + p2cread = self._make_inheritable(p2cread)
> +
> + if stdout is None:
> + c2pwrite = _winapi.GetStdHandle(_winapi.STD_OUTPUT_HANDLE)
> + if c2pwrite is None:
> + _, c2pwrite = _winapi.CreatePipe(None, 0)
> + c2pwrite = Handle(c2pwrite)
> + _winapi.CloseHandle(_)
> + elif stdout == PIPE:
> + c2pread, c2pwrite = _winapi.CreatePipe(None, 0)
> + c2pread, c2pwrite = Handle(c2pread), Handle(c2pwrite)
> + elif stdout == DEVNULL:
> + c2pwrite = msvcrt.get_osfhandle(self._get_devnull())
> + elif isinstance(stdout, int):
> + c2pwrite = msvcrt.get_osfhandle(stdout)
> + else:
> + # Assuming file-like object
> + c2pwrite = msvcrt.get_osfhandle(stdout.fileno())
> + c2pwrite = self._make_inheritable(c2pwrite)
> +
> + if stderr is None:
> + errwrite = _winapi.GetStdHandle(_winapi.STD_ERROR_HANDLE)
> + if errwrite is None:
> + _, errwrite = _winapi.CreatePipe(None, 0)
> + errwrite = Handle(errwrite)
> + _winapi.CloseHandle(_)
> + elif stderr == PIPE:
> + errread, errwrite = _winapi.CreatePipe(None, 0)
> + errread, errwrite = Handle(errread), Handle(errwrite)
> + elif stderr == STDOUT:
> + errwrite = c2pwrite
> + elif stderr == DEVNULL:
> + errwrite = msvcrt.get_osfhandle(self._get_devnull())
> + elif isinstance(stderr, int):
> + errwrite = msvcrt.get_osfhandle(stderr)
> + else:
> + # Assuming file-like object
> + errwrite = msvcrt.get_osfhandle(stderr.fileno())
> + errwrite = self._make_inheritable(errwrite)
> +
> + return (p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite)
> +
> +
> + def _make_inheritable(self, handle):
> + """Return a duplicate of handle, which is inheritable"""
> + h = _winapi.DuplicateHandle(
> + _winapi.GetCurrentProcess(), handle,
> + _winapi.GetCurrentProcess(), 0, 1,
> + _winapi.DUPLICATE_SAME_ACCESS)
> + return Handle(h)
> +
> +
> + def _execute_child(self, args, executable, preexec_fn, close_fds,
> + pass_fds, cwd, env,
> + startupinfo, creationflags, shell,
> + p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite,
> + unused_restore_signals, unused_start_new_session):
> + """Execute program (MS Windows version)"""
> +
> + assert not pass_fds, "pass_fds not supported on Windows."
> +
> + if not isinstance(args, str):
> + args = list2cmdline(args)
> +
> + # Process startup details
> + if startupinfo is None:
> + startupinfo = STARTUPINFO()
> + if -1 not in (p2cread, c2pwrite, errwrite):
> + startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES
> + startupinfo.hStdInput = p2cread
> + startupinfo.hStdOutput = c2pwrite
> + startupinfo.hStdError = errwrite
> +
> + if shell:
> + startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW
> + startupinfo.wShowWindow = _winapi.SW_HIDE
> + comspec = os.environ.get("COMSPEC", "cmd.exe")
> + args = '{} /c "{}"'.format (comspec, args)
> +
> + # Start the process
> + try:
> + hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
> + # no special security
> + None, None,
> + int(not close_fds),
> + creationflags,
> + env,
> + os.fspath(cwd) if cwd is not None else None,
> + startupinfo)
> + finally:
> + # Child is launched. Close the parent's copy of those pipe
> + # handles that only the child should have open. You need
> + # to make sure that no handles to the write end of the
> + # output pipe are maintained in this process or else the
> + # pipe will not close when the child process exits and the
> + # ReadFile will hang.
> + if p2cread != -1:
> + p2cread.Close()
> + if c2pwrite != -1:
> + c2pwrite.Close()
> + if errwrite != -1:
> + errwrite.Close()
> + if hasattr(self, '_devnull'):
> + os.close(self._devnull)
> + # Prevent a double close of these handles/fds from __init__
> + # on error.
> + self._closed_child_pipe_fds = True
> +
> + # Retain the process handle, but close the thread handle
> + self._child_created = True
> + self._handle = Handle(hp)
> + self.pid = pid
> + _winapi.CloseHandle(ht)
> +
> + def _internal_poll(self, _deadstate=None,
> + _WaitForSingleObject=_winapi.WaitForSingleObject,
> + _WAIT_OBJECT_0=_winapi.WAIT_OBJECT_0,
> + _GetExitCodeProcess=_winapi.GetExitCodeProcess):
> + """Check if child process has terminated. Returns returncode
> + attribute.
> +
> + This method is called by __del__, so it can only refer to objects
> + in its local scope.
> +
> + """
> + if self.returncode is None:
> + if _WaitForSingleObject(self._handle, 0) == _WAIT_OBJECT_0:
> + self.returncode = _GetExitCodeProcess(self._handle)
> + return self.returncode
> +
> +
> + def wait(self, timeout=None, endtime=None):
> + """Wait for child process to terminate. Returns returncode
> + attribute."""
> + if endtime is not None:
> + warnings.warn(
> + "'endtime' argument is deprecated; use 'timeout'.",
> + DeprecationWarning,
> + stacklevel=2)
> + timeout = self._remaining_time(endtime)
> + if timeout is None:
> + timeout_millis = _winapi.INFINITE
> + else:
> + timeout_millis = int(timeout * 1000)
> + if self.returncode is None:
> + result = _winapi.WaitForSingleObject(self._handle,
> + timeout_millis)
> + if result == _winapi.WAIT_TIMEOUT:
> + raise TimeoutExpired(self.args, timeout)
> + self.returncode = _winapi.GetExitCodeProcess(self._handle)
> + return self.returncode
> +
> +
> + def _readerthread(self, fh, buffer):
> + buffer.append(fh.read())
> + fh.close()
> +
> +
> + def _communicate(self, input, endtime, orig_timeout):
> + # Start reader threads feeding into a list hanging off of this
> + # object, unless they've already been started.
> + if self.stdout and not hasattr(self, "_stdout_buff"):
> + self._stdout_buff = []
> + self.stdout_thread = \
> + threading.Thread(target=self._readerthread,
> + args=(self.stdout, self._stdout_buff))
> + self.stdout_thread.daemon = True
> + self.stdout_thread.start()
> + if self.stderr and not hasattr(self, "_stderr_buff"):
> + self._stderr_buff = []
> + self.stderr_thread = \
> + threading.Thread(target=self._readerthread,
> + args=(self.stderr, self._stderr_buff))
> + self.stderr_thread.daemon = True
> + self.stderr_thread.start()
> +
> + if self.stdin:
> + self._stdin_write(input)
> +
> + # Wait for the reader threads, or time out. If we time out, the
> + # threads remain reading and the fds left open in case the user
> + # calls communicate again.
> + if self.stdout is not None:
> + self.stdout_thread.join(self._remaining_time(endtime))
> + if self.stdout_thread.is_alive():
> + raise TimeoutExpired(self.args, orig_timeout)
> + if self.stderr is not None:
> + self.stderr_thread.join(self._remaining_time(endtime))
> + if self.stderr_thread.is_alive():
> + raise TimeoutExpired(self.args, orig_timeout)
> +
> + # Collect the output from and close both pipes, now that we know
> + # both have been read successfully.
> + stdout = None
> + stderr = None
> + if self.stdout:
> + stdout = self._stdout_buff
> + self.stdout.close()
> + if self.stderr:
> + stderr = self._stderr_buff
> + self.stderr.close()
> +
> + # All data exchanged. Translate lists into strings.
> + if stdout is not None:
> + stdout = stdout[0]
> + if stderr is not None:
> + stderr = stderr[0]
> +
> + return (stdout, stderr)
> +
> + def send_signal(self, sig):
> + """Send a signal to the process."""
> + # Don't signal a process that we know has already died.
> + if self.returncode is not None:
> + return
> + if sig == signal.SIGTERM:
> + self.terminate()
> + elif sig == signal.CTRL_C_EVENT:
> + os.kill(self.pid, signal.CTRL_C_EVENT)
> + elif sig == signal.CTRL_BREAK_EVENT:
> + os.kill(self.pid, signal.CTRL_BREAK_EVENT)
> + else:
> + raise ValueError("Unsupported signal: {}".format(sig))
> +
> + def terminate(self):
> + """Terminates the process."""
> + # Don't terminate a process that we know has already died.
> + if self.returncode is not None:
> + return
> + try:
> + _winapi.TerminateProcess(self._handle, 1)
> + except PermissionError:
> + # ERROR_ACCESS_DENIED (winerror 5) is received when the
> + # process already died.
> + rc = _winapi.GetExitCodeProcess(self._handle)
> + if rc == _winapi.STILL_ACTIVE:
> + raise
> + self.returncode = rc
> +
> + kill = terminate
> +
> + else:
> + #
> + # POSIX methods
> + #
> + def _get_handles(self, stdin, stdout, stderr):
> + """Construct and return tuple with IO objects:
> + p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite
> + """
> + p2cread, p2cwrite = -1, -1
> + c2pread, c2pwrite = -1, -1
> + errread, errwrite = -1, -1
> +
> + if stdin is None:
> + pass
> + elif stdin == PIPE:
> + p2cread, p2cwrite = os.pipe()
> + elif stdin == DEVNULL:
> + p2cread = self._get_devnull()
> + elif isinstance(stdin, int):
> + p2cread = stdin
> + else:
> + # Assuming file-like object
> + p2cread = stdin.fileno()
> +
> + if stdout is None:
> + pass
> + elif stdout == PIPE:
> + c2pread, c2pwrite = os.pipe()
> + elif stdout == DEVNULL:
> + c2pwrite = self._get_devnull()
> + elif isinstance(stdout, int):
> + c2pwrite = stdout
> + else:
> + # Assuming file-like object
> + c2pwrite = stdout.fileno()
> +
> + if stderr is None:
> + pass
> + elif stderr == PIPE:
> + errread, errwrite = os.pipe()
> + elif stderr == STDOUT:
> + if c2pwrite != -1:
> + errwrite = c2pwrite
> + else: # child's stdout is not set, use parent's stdout
> + errwrite = sys.__stdout__.fileno()
> + elif stderr == DEVNULL:
> + errwrite = self._get_devnull()
> + elif isinstance(stderr, int):
> + errwrite = stderr
> + else:
> + # Assuming file-like object
> + errwrite = stderr.fileno()
> +
> + return (p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite)
> +
> +
> + def _execute_child(self, args, executable, preexec_fn, close_fds,
> + pass_fds, cwd, env,
> + startupinfo, creationflags, shell,
> + p2cread, p2cwrite,
> + c2pread, c2pwrite,
> + errread, errwrite,
> + restore_signals, start_new_session):
> + """Execute program (POSIX version)"""
> +
> + if isinstance(args, (str, bytes)):
> + args = [args]
> + else:
> + args = list(args)
> +
> + if shell:
> + args = ["/bin/sh", "-c"] + args
> + if executable:
> + args[0] = executable
> +
> + if executable is None:
> + executable = args[0]
> + orig_executable = executable
> +
> + # For transferring possible exec failure from child to parent.
> + # Data format: "exception name:hex errno:description"
> + # Pickle is not used; it is complex and involves memory allocation.
> + errpipe_read, errpipe_write = os.pipe()
> + # errpipe_write must not be in the standard io 0, 1, or 2 fd range.
> + low_fds_to_close = []
> + while errpipe_write < 3:
> + low_fds_to_close.append(errpipe_write)
> + errpipe_write = os.dup(errpipe_write)
> + for low_fd in low_fds_to_close:
> + os.close(low_fd)
> + try:
> + try:
> + # We must avoid complex work that could involve
> + # malloc or free in the child process to avoid
> +                    # potential deadlocks, thus we do all this here and
> +                    # pass it to fork_exec().
> +
> + if env is not None:
> + env_list = []
> + for k, v in env.items():
> + k = os.fsencode(k)
> + if b'=' in k:
> + raise ValueError("illegal environment variable name")
> + env_list.append(k + b'=' + os.fsencode(v))
> + else:
> + env_list = None # Use execv instead of execve.
> + executable = os.fsencode(executable)
> + if os.path.dirname(executable):
> + executable_list = (executable,)
> + else:
> + # This matches the behavior of os._execvpe().
> + executable_list = tuple(
> + os.path.join(os.fsencode(dir), executable)
> + for dir in os.get_exec_path(env))
> + fds_to_keep = set(pass_fds)
> + fds_to_keep.add(errpipe_write)
> + self.pid = _posixsubprocess.fork_exec(
> + args, executable_list,
> + close_fds, tuple(sorted(map(int, fds_to_keep))),
> + cwd, env_list,
> + p2cread, p2cwrite, c2pread, c2pwrite,
> + errread, errwrite,
> + errpipe_read, errpipe_write,
> + restore_signals, start_new_session, preexec_fn)
> + self._child_created = True
> + finally:
> + # be sure the FD is closed no matter what
> + os.close(errpipe_write)
> +
> + # self._devnull is not always defined.
> + devnull_fd = getattr(self, '_devnull', None)
> + if p2cread != -1 and p2cwrite != -1 and p2cread != devnull_fd:
> + os.close(p2cread)
> + if c2pwrite != -1 and c2pread != -1 and c2pwrite != devnull_fd:
> + os.close(c2pwrite)
> + if errwrite != -1 and errread != -1 and errwrite != devnull_fd:
> + os.close(errwrite)
> + if devnull_fd is not None:
> + os.close(devnull_fd)
> + # Prevent a double close of these fds from __init__ on error.
> + self._closed_child_pipe_fds = True
> +
> + # Wait for exec to fail or succeed; possibly raising an
> + # exception (limited in size)
> + errpipe_data = bytearray()
> + while True:
> + part = os.read(errpipe_read, 50000)
> + errpipe_data += part
> + if not part or len(errpipe_data) > 50000:
> + break
> + finally:
> + # be sure the FD is closed no matter what
> + os.close(errpipe_read)
> +
> + if errpipe_data:
> + try:
> + pid, sts = os.waitpid(self.pid, 0)
> + if pid == self.pid:
> + self._handle_exitstatus(sts)
> + else:
> + self.returncode = sys.maxsize
> + except ChildProcessError:
> + pass
> +
> + try:
> + exception_name, hex_errno, err_msg = (
> + errpipe_data.split(b':', 2))
> + # The encoding here should match the encoding
> + # written in by the subprocess implementations
> + # like _posixsubprocess
> + err_msg = err_msg.decode()
> + except ValueError:
> + exception_name = b'SubprocessError'
> + hex_errno = b'0'
> + err_msg = 'Bad exception data from child: {!r}'.format(
> + bytes(errpipe_data))
> + child_exception_type = getattr(
> + builtins, exception_name.decode('ascii'),
> + SubprocessError)
> + if issubclass(child_exception_type, OSError) and hex_errno:
> + errno_num = int(hex_errno, 16)
> + child_exec_never_called = (err_msg == "noexec")
> + if child_exec_never_called:
> + err_msg = ""
> + # The error must be from chdir(cwd).
> + err_filename = cwd
> + else:
> + err_filename = orig_executable
> + if errno_num != 0:
> + err_msg = os.strerror(errno_num)
> + if errno_num == errno.ENOENT:
> + err_msg += ': ' + repr(err_filename)
> + raise child_exception_type(errno_num, err_msg, err_filename)
> + raise child_exception_type(err_msg)
> +
> +#JP Hack
> + def _handle_exitstatus(self, sts, _WIFSIGNALED=None,
> + _WTERMSIG=None, _WIFEXITED=None,
> + _WEXITSTATUS=None, _WIFSTOPPED=None,
> + _WSTOPSIG=None):
> + pass
> + '''
> + def _handle_exitstatus(self, sts, _WIFSIGNALED=edk2.WIFSIGNALED,
> + _WTERMSIG=edk2.WTERMSIG, _WIFEXITED=edk2.WIFEXITED,
> + _WEXITSTATUS=edk2.WEXITSTATUS, _WIFSTOPPED=edk2.WIFSTOPPED,
> + _WSTOPSIG=edk2.WSTOPSIG):
> + """All callers to this function MUST hold self._waitpid_lock."""
> + # This method is called (indirectly) by __del__, so it cannot
> + # refer to anything outside of its local scope.
> + if _WIFSIGNALED(sts):
> + self.returncode = -_WTERMSIG(sts)
> + elif _WIFEXITED(sts):
> + self.returncode = _WEXITSTATUS(sts)
> + elif _WIFSTOPPED(sts):
> + self.returncode = -_WSTOPSIG(sts)
> + else:
> + # Should never happen
> + raise SubprocessError("Unknown child exit status!")
> +
> + def _internal_poll(self, _deadstate=None, _waitpid=os.waitpid,
> + _WNOHANG=os.WNOHANG, _ECHILD=errno.ECHILD):
> + """Check if child process has terminated. Returns returncode
> + attribute.
> +
> + This method is called by __del__, so it cannot reference anything
> + outside of the local scope (nor can any methods it calls).
> +
> + """
> + if self.returncode is None:
> + if not self._waitpid_lock.acquire(False):
> + # Something else is busy calling waitpid. Don't allow two
> + # at once. We know nothing yet.
> + return None
> + try:
> + if self.returncode is not None:
> + return self.returncode # Another thread waited.
> + pid, sts = _waitpid(self.pid, _WNOHANG)
> + if pid == self.pid:
> + self._handle_exitstatus(sts)
> + except OSError as e:
> + if _deadstate is not None:
> + self.returncode = _deadstate
> + elif e.errno == _ECHILD:
> + # This happens if SIGCLD is set to be ignored or
> + # waiting for child processes has otherwise been
> + # disabled for our process. This child is dead, we
> + # can't get the status.
> + # http://bugs.python.org/issue15756
> + self.returncode = 0
> + finally:
> + self._waitpid_lock.release()
> + return self.returncode
> + '''
> + def _internal_poll(self, _deadstate=None, _waitpid=None,
> + _WNOHANG=None, _ECHILD=None):
> + pass #JP Hack
> +
> + def _try_wait(self, wait_flags):
> + """All callers to this function MUST hold self._waitpid_lock."""
> + try:
> + (pid, sts) = os.waitpid(self.pid, wait_flags)
> + except ChildProcessError:
> + # This happens if SIGCLD is set to be ignored or waiting
> + # for child processes has otherwise been disabled for our
> + # process. This child is dead, we can't get the status.
> + pid = self.pid
> + sts = 0
> + return (pid, sts)
> +
> +
> + def wait(self, timeout=None, endtime=None):
> + """Wait for child process to terminate. Returns returncode
> + attribute."""
> + if self.returncode is not None:
> + return self.returncode
> +
> + if endtime is not None:
> + warnings.warn(
> + "'endtime' argument is deprecated; use 'timeout'.",
> + DeprecationWarning,
> + stacklevel=2)
> + if endtime is not None or timeout is not None:
> + if endtime is None:
> + endtime = _time() + timeout
> + elif timeout is None:
> + timeout = self._remaining_time(endtime)
> +
> + if endtime is not None:
> + # Enter a busy loop if we have a timeout. This busy loop was
> + # cribbed from Lib/threading.py in Thread.wait() at r71065.
> + delay = 0.0005 # 500 us -> initial delay of 1 ms
> + while True:
> + if self._waitpid_lock.acquire(False):
> + try:
> + if self.returncode is not None:
> + break # Another thread waited.
> + (pid, sts) = self._try_wait(os.WNOHANG)
> + assert pid == self.pid or pid == 0
> + if pid == self.pid:
> + self._handle_exitstatus(sts)
> + break
> + finally:
> + self._waitpid_lock.release()
> + remaining = self._remaining_time(endtime)
> + if remaining <= 0:
> + raise TimeoutExpired(self.args, timeout)
> + delay = min(delay * 2, remaining, .05)
> + time.sleep(delay)
> + else:
> + while self.returncode is None:
> + with self._waitpid_lock:
> + if self.returncode is not None:
> + break # Another thread waited.
> + (pid, sts) = self._try_wait(0)
> + # Check the pid and loop as waitpid has been known to
> + # return 0 even without WNOHANG in odd situations.
> + # http://bugs.python.org/issue14396.
> + if pid == self.pid:
> + self._handle_exitstatus(sts)
> + return self.returncode
> +
> +
> + def _communicate(self, input, endtime, orig_timeout):
> + if self.stdin and not self._communication_started:
> + # Flush stdio buffer. This might block, if the user has
> + # been writing to .stdin in an uncontrolled fashion.
> + try:
> + self.stdin.flush()
> + except BrokenPipeError:
> + pass # communicate() must ignore BrokenPipeError.
> + if not input:
> + try:
> + self.stdin.close()
> + except BrokenPipeError:
> + pass # communicate() must ignore BrokenPipeError.
> +
> + stdout = None
> + stderr = None
> +
> + # Only create this mapping if we haven't already.
> + if not self._communication_started:
> + self._fileobj2output = {}
> + if self.stdout:
> + self._fileobj2output[self.stdout] = []
> + if self.stderr:
> + self._fileobj2output[self.stderr] = []
> +
> + if self.stdout:
> + stdout = self._fileobj2output[self.stdout]
> + if self.stderr:
> + stderr = self._fileobj2output[self.stderr]
> +
> + self._save_input(input)
> +
> + if self._input:
> + input_view = memoryview(self._input)
> +
> + with _PopenSelector() as selector:
> + if self.stdin and input:
> + selector.register(self.stdin, selectors.EVENT_WRITE)
> + if self.stdout:
> + selector.register(self.stdout, selectors.EVENT_READ)
> + if self.stderr:
> + selector.register(self.stderr, selectors.EVENT_READ)
> +
> + while selector.get_map():
> + timeout = self._remaining_time(endtime)
> + if timeout is not None and timeout < 0:
> + raise TimeoutExpired(self.args, orig_timeout)
> +
> + ready = selector.select(timeout)
> + self._check_timeout(endtime, orig_timeout)
> +
> + # XXX Rewrite these to use non-blocking I/O on the file
> + # objects; they are no longer using C stdio!
> +
> + for key, events in ready:
> + if key.fileobj is self.stdin:
> + chunk = input_view[self._input_offset :
> + self._input_offset + _PIPE_BUF]
> + try:
> + self._input_offset += os.write(key.fd, chunk)
> + except BrokenPipeError:
> + selector.unregister(key.fileobj)
> + key.fileobj.close()
> + else:
> + if self._input_offset >= len(self._input):
> + selector.unregister(key.fileobj)
> + key.fileobj.close()
> + elif key.fileobj in (self.stdout, self.stderr):
> + data = os.read(key.fd, 32768)
> + if not data:
> + selector.unregister(key.fileobj)
> + key.fileobj.close()
> + self._fileobj2output[key.fileobj].append(data)
> +
> + self.wait(timeout=self._remaining_time(endtime))
> +
> + # All data exchanged. Translate lists into strings.
> + if stdout is not None:
> + stdout = b''.join(stdout)
> + if stderr is not None:
> + stderr = b''.join(stderr)
> +
> + # Translate newlines, if requested.
> + # This also turns bytes into strings.
> + if self.encoding or self.errors or self.universal_newlines:
> + if stdout is not None:
> + stdout = self._translate_newlines(stdout,
> + self.stdout.encoding,
> + self.stdout.errors)
> + if stderr is not None:
> + stderr = self._translate_newlines(stderr,
> + self.stderr.encoding,
> + self.stderr.errors)
> +
> + return (stdout, stderr)
> +
> +
> + def _save_input(self, input):
> + # This method is called from the _communicate_with_*() methods
> + # so that if we time out while communicating, we can continue
> + # sending input if we retry.
> + if self.stdin and self._input is None:
> + self._input_offset = 0
> + self._input = input
> + if input is not None and (
> + self.encoding or self.errors or self.universal_newlines):
> + self._input = self._input.encode(self.stdin.encoding,
> + self.stdin.errors)
> +
> +
> + def send_signal(self, sig):
> + """Send a signal to the process."""
> + # Skip signalling a process that we know has already died.
> + if self.returncode is None:
> + os.kill(self.pid, sig)
> +
> + def terminate(self):
> + """Terminate the process with SIGTERM
> + """
> + self.send_signal(signal.SIGTERM)
> +
> + def kill(self):
> + """Kill the process with SIGKILL
> + """
> + self.send_signal(signal.SIGKILL)
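
(Reviewer note on the UEFI behaviour, based only on the hunks above and not verified on
hardware: _posixsubprocess is imported only when _uefi is false, and _handle_exitstatus
and _internal_poll are stubbed out by the "#JP Hack" blocks, so under the EFI shell
importing subprocess should succeed while actually spawning a process should not.
Roughly:)

    import subprocess            # expected to import cleanly under the UEFI port

    try:
        subprocess.run(["ls"])   # process creation is not expected to work here
    except Exception as exc:
        print("subprocess is not functional on the EFI shell:", exc)
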
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
> new file mode 100644
> index 00000000..77ed6666
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
> @@ -0,0 +1,2060 @@
> +"""
> +Read and write ZIP files.
> +
> +XXX references to utf-8 need further investigation.
> +"""
> +import io
> +import os
> +import re
> +import importlib.util
> +import sys
> +import time
> +import stat
> +import shutil
> +import struct
> +import binascii
> +
> +try:
> + import threading
> +except ImportError:
> + import dummy_threading as threading
> +
> +try:
> + import zlib # We may need its compression method
> + crc32 = zlib.crc32
> +except ImportError:
> + zlib = None
> + crc32 = binascii.crc32
> +
> +try:
> + import bz2 # We may need its compression method
> +except ImportError:
> + bz2 = None
> +
> +try:
> + import lzma # We may need its compression method
> +except ImportError:
> + lzma = None
> +
> +__all__ = ["BadZipFile", "BadZipfile", "error",
> + "ZIP_STORED", "ZIP_DEFLATED", "ZIP_BZIP2", "ZIP_LZMA",
> + "is_zipfile", "ZipInfo", "ZipFile", "PyZipFile", "LargeZipFile"]
> +
> +class BadZipFile(Exception):
> + pass
> +
> +
> +class LargeZipFile(Exception):
> + """
> +    Raised when writing a zipfile that requires ZIP64 extensions while
> +    those extensions are disabled.
> + """
> +
> +error = BadZipfile = BadZipFile # Pre-3.2 compatibility names
> +
> +
> +ZIP64_LIMIT = (1 << 31) - 1
> +ZIP_FILECOUNT_LIMIT = (1 << 16) - 1
> +ZIP_MAX_COMMENT = (1 << 16) - 1
> +
> +# constants for Zip file compression methods
> +ZIP_STORED = 0
> +ZIP_DEFLATED = 8
> +ZIP_BZIP2 = 12
> +ZIP_LZMA = 14
> +# Other ZIP compression methods not supported
> +
> +DEFAULT_VERSION = 20
> +ZIP64_VERSION = 45
> +BZIP2_VERSION = 46
> +LZMA_VERSION = 63
> +# we recognize (but not necessarily support) all features up to that version
> +MAX_EXTRACT_VERSION = 63
> +
> +# Below are some formats and associated data for reading/writing headers using
> +# the struct module. The names and structures of headers/records are those used
> +# in the PKWARE description of the ZIP file format:
> +# http://www.pkware.com/documents/casestudies/APPNOTE.TXT
> +# (URL valid as of January 2008)
> +
> +# The "end of central directory" structure, magic number, size, and indices
> +# (section V.I in the format document)
> +structEndArchive = b"<4s4H2LH"
> +stringEndArchive = b"PK\005\006"
> +sizeEndCentDir = struct.calcsize(structEndArchive)
> +
> +_ECD_SIGNATURE = 0
> +_ECD_DISK_NUMBER = 1
> +_ECD_DISK_START = 2
> +_ECD_ENTRIES_THIS_DISK = 3
> +_ECD_ENTRIES_TOTAL = 4
> +_ECD_SIZE = 5
> +_ECD_OFFSET = 6
> +_ECD_COMMENT_SIZE = 7
> +# These last two indices are not part of the structure as defined in the
> +# spec, but they are used internally by this module as a convenience
> +_ECD_COMMENT = 8
> +_ECD_LOCATION = 9
> +
> +# The "central directory" structure, magic number, size, and indices
> +# of entries in the structure (section V.F in the format document)
> +structCentralDir = "<4s4B4HL2L5H2L"
> +stringCentralDir = b"PK\001\002"
> +sizeCentralDir = struct.calcsize(structCentralDir)
> +
> +# indexes of entries in the central directory structure
> +_CD_SIGNATURE = 0
> +_CD_CREATE_VERSION = 1
> +_CD_CREATE_SYSTEM = 2
> +_CD_EXTRACT_VERSION = 3
> +_CD_EXTRACT_SYSTEM = 4
> +_CD_FLAG_BITS = 5
> +_CD_COMPRESS_TYPE = 6
> +_CD_TIME = 7
> +_CD_DATE = 8
> +_CD_CRC = 9
> +_CD_COMPRESSED_SIZE = 10
> +_CD_UNCOMPRESSED_SIZE = 11
> +_CD_FILENAME_LENGTH = 12
> +_CD_EXTRA_FIELD_LENGTH = 13
> +_CD_COMMENT_LENGTH = 14
> +_CD_DISK_NUMBER_START = 15
> +_CD_INTERNAL_FILE_ATTRIBUTES = 16
> +_CD_EXTERNAL_FILE_ATTRIBUTES = 17
> +_CD_LOCAL_HEADER_OFFSET = 18
> +
> +# The "local file header" structure, magic number, size, and indices
> +# (section V.A in the format document)
> +structFileHeader = "<4s2B4HL2L2H"
> +stringFileHeader = b"PK\003\004"
> +sizeFileHeader = struct.calcsize(structFileHeader)
> +
> +_FH_SIGNATURE = 0
> +_FH_EXTRACT_VERSION = 1
> +_FH_EXTRACT_SYSTEM = 2
> +_FH_GENERAL_PURPOSE_FLAG_BITS = 3
> +_FH_COMPRESSION_METHOD = 4
> +_FH_LAST_MOD_TIME = 5
> +_FH_LAST_MOD_DATE = 6
> +_FH_CRC = 7
> +_FH_COMPRESSED_SIZE = 8
> +_FH_UNCOMPRESSED_SIZE = 9
> +_FH_FILENAME_LENGTH = 10
> +_FH_EXTRA_FIELD_LENGTH = 11
> +
> +# The "Zip64 end of central directory locator" structure, magic number, and size
> +structEndArchive64Locator = "<4sLQL"
> +stringEndArchive64Locator = b"PK\x06\x07"
> +sizeEndCentDir64Locator = struct.calcsize(structEndArchive64Locator)
> +
> +# The "Zip64 end of central directory" record, magic number, size, and indices
> +# (section V.G in the format document)
> +structEndArchive64 = "<4sQ2H2L4Q"
> +stringEndArchive64 = b"PK\x06\x06"
> +sizeEndCentDir64 = struct.calcsize(structEndArchive64)
> +
> +_CD64_SIGNATURE = 0
> +_CD64_DIRECTORY_RECSIZE = 1
> +_CD64_CREATE_VERSION = 2
> +_CD64_EXTRACT_VERSION = 3
> +_CD64_DISK_NUMBER = 4
> +_CD64_DISK_NUMBER_START = 5
> +_CD64_NUMBER_ENTRIES_THIS_DISK = 6
> +_CD64_NUMBER_ENTRIES_TOTAL = 7
> +_CD64_DIRECTORY_SIZE = 8
> +_CD64_OFFSET_START_CENTDIR = 9
> +
> +_DD_SIGNATURE = 0x08074b50
> +
> +_EXTRA_FIELD_STRUCT = struct.Struct('<HH')
> +
> +def _strip_extra(extra, xids):
> + # Remove Extra Fields with specified IDs.
> + unpack = _EXTRA_FIELD_STRUCT.unpack
> + modified = False
> + buffer = []
> + start = i = 0
> + while i + 4 <= len(extra):
> + xid, xlen = unpack(extra[i : i + 4])
> + j = i + 4 + xlen
> + if xid in xids:
> + if i != start:
> + buffer.append(extra[start : i])
> + start = j
> + modified = True
> + i = j
> + if not modified:
> + return extra
> + return b''.join(buffer)
> +
> +def _check_zipfile(fp):
> + try:
> + if _EndRecData(fp):
> + return True # file has correct magic number
> + except OSError:
> + pass
> + return False
> +
> +def is_zipfile(filename):
> + """Quickly see if a file is a ZIP file by checking the magic number.
> +
> + The filename argument may be a file or file-like object too.
> + """
> + result = False
> + try:
> + if hasattr(filename, "read"):
> + result = _check_zipfile(fp=filename)
> + else:
> + with open(filename, "rb") as fp:
> + result = _check_zipfile(fp)
> + except OSError:
> + pass
> + return result
> +
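(Illustrative only, assuming a writable temporary directory on the host: the "end of
central directory" record described by structEndArchive is 22 bytes, and an empty
archive consists of nothing but that record, which is what is_zipfile() looks for.)

    import os, struct, tempfile, zipfile

    assert struct.calcsize(zipfile.structEndArchive) == zipfile.sizeEndCentDir == 22

    # A minimal (empty) ZIP file: the EOCD signature followed by zeroed disk
    # numbers, entry counts, directory size/offset and comment length.
    path = os.path.join(tempfile.mkdtemp(), "empty.zip")
    with open(path, "wb") as f:
        f.write(zipfile.stringEndArchive + b"\x00" * 18)
    print(zipfile.is_zipfile(path))   # True
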
> +def _EndRecData64(fpin, offset, endrec):
> + """
> + Read the ZIP64 end-of-archive records and use that to update endrec
> + """
> + try:
> + fpin.seek(offset - sizeEndCentDir64Locator, 2)
> + except OSError:
> + # If the seek fails, the file is not large enough to contain a ZIP64
> + # end-of-archive record, so just return the end record we were given.
> + return endrec
> +
> + data = fpin.read(sizeEndCentDir64Locator)
> + if len(data) != sizeEndCentDir64Locator:
> + return endrec
> + sig, diskno, reloff, disks = struct.unpack(structEndArchive64Locator, data)
> + if sig != stringEndArchive64Locator:
> + return endrec
> +
> + if diskno != 0 or disks != 1:
> + raise BadZipFile("zipfiles that span multiple disks are not supported")
> +
> + # Assume no 'zip64 extensible data'
> + fpin.seek(offset - sizeEndCentDir64Locator - sizeEndCentDir64, 2)
> + data = fpin.read(sizeEndCentDir64)
> + if len(data) != sizeEndCentDir64:
> + return endrec
> + sig, sz, create_version, read_version, disk_num, disk_dir, \
> + dircount, dircount2, dirsize, diroffset = \
> + struct.unpack(structEndArchive64, data)
> + if sig != stringEndArchive64:
> + return endrec
> +
> + # Update the original endrec using data from the ZIP64 record
> + endrec[_ECD_SIGNATURE] = sig
> + endrec[_ECD_DISK_NUMBER] = disk_num
> + endrec[_ECD_DISK_START] = disk_dir
> + endrec[_ECD_ENTRIES_THIS_DISK] = dircount
> + endrec[_ECD_ENTRIES_TOTAL] = dircount2
> + endrec[_ECD_SIZE] = dirsize
> + endrec[_ECD_OFFSET] = diroffset
> + return endrec
> +
> +
> +def _EndRecData(fpin):
> + """Return data from the "End of Central Directory" record, or None.
> +
> + The data is a list of the nine items in the ZIP "End of central dir"
> + record followed by a tenth item, the file seek offset of this record."""
> +
> + # Determine file size
> + fpin.seek(0, 2)
> + filesize = fpin.tell()
> +
> +    # Check to see if this is a ZIP file with no archive comment (the
> + # "end of central directory" structure should be the last item in the
> + # file if this is the case).
> + try:
> + fpin.seek(-sizeEndCentDir, 2)
> + except OSError:
> + return None
> + data = fpin.read()
> + if (len(data) == sizeEndCentDir and
> + data[0:4] == stringEndArchive and
> + data[-2:] == b"\000\000"):
> + # the signature is correct and there's no comment, unpack structure
> + endrec = struct.unpack(structEndArchive, data)
> + endrec=list(endrec)
> +
> + # Append a blank comment and record start offset
> + endrec.append(b"")
> + endrec.append(filesize - sizeEndCentDir)
> +
> + # Try to read the "Zip64 end of central directory" structure
> + return _EndRecData64(fpin, -sizeEndCentDir, endrec)
> +
> + # Either this is not a ZIP file, or it is a ZIP file with an archive
> + # comment. Search the end of the file for the "end of central directory"
> + # record signature. The comment is the last item in the ZIP file and may be
> + # up to 64K long. It is assumed that the "end of central directory" magic
> + # number does not appear in the comment.
> + maxCommentStart = max(filesize - (1 << 16) - sizeEndCentDir, 0)
> + fpin.seek(maxCommentStart, 0)
> + data = fpin.read()
> + start = data.rfind(stringEndArchive)
> + if start >= 0:
> + # found the magic number; attempt to unpack and interpret
> + recData = data[start:start+sizeEndCentDir]
> + if len(recData) != sizeEndCentDir:
> + # Zip file is corrupted.
> + return None
> + endrec = list(struct.unpack(structEndArchive, recData))
> + commentSize = endrec[_ECD_COMMENT_SIZE] #as claimed by the zip file
> + comment = data[start+sizeEndCentDir:start+sizeEndCentDir+commentSize]
> + endrec.append(comment)
> + endrec.append(maxCommentStart + start)
> +
> + # Try to read the "Zip64 end of central directory" structure
> + return _EndRecData64(fpin, maxCommentStart + start - filesize,
> + endrec)
> +
> + # Unable to find a valid end of central directory structure
> + return None
> +
> +
> +class ZipInfo (object):
> + """Class with attributes describing each file in the ZIP archive."""
> +
> + __slots__ = (
> + 'orig_filename',
> + 'filename',
> + 'date_time',
> + 'compress_type',
> + 'comment',
> + 'extra',
> + 'create_system',
> + 'create_version',
> + 'extract_version',
> + 'reserved',
> + 'flag_bits',
> + 'volume',
> + 'internal_attr',
> + 'external_attr',
> + 'header_offset',
> + 'CRC',
> + 'compress_size',
> + 'file_size',
> + '_raw_time',
> + )
> +
> + def __init__(self, filename="NoName", date_time=(1980,1,1,0,0,0)):
> + self.orig_filename = filename # Original file name in archive
> +
> + # Terminate the file name at the first null byte. Null bytes in file
> + # names are used as tricks by viruses in archives.
> + null_byte = filename.find(chr(0))
> + if null_byte >= 0:
> + filename = filename[0:null_byte]
> + # This is used to ensure paths in generated ZIP files always use
> + # forward slashes as the directory separator, as required by the
> + # ZIP format specification.
> + if os.sep != "/" and os.sep in filename:
> + filename = filename.replace(os.sep, "/")
> +
> + self.filename = filename # Normalized file name
> + self.date_time = date_time # year, month, day, hour, min, sec
> +
> + if date_time[0] < 1980:
> + raise ValueError('ZIP does not support timestamps before 1980')
> +
> + # Standard values:
> + self.compress_type = ZIP_STORED # Type of compression for the file
> + self.comment = b"" # Comment for each file
> + self.extra = b"" # ZIP extra data
> + if sys.platform == 'win32':
> + self.create_system = 0 # System which created ZIP archive
> + else:
> + # Assume everything else is unix-y
> + self.create_system = 3 # System which created ZIP archive
> + self.create_version = DEFAULT_VERSION # Version which created ZIP archive
> + self.extract_version = DEFAULT_VERSION # Version needed to extract archive
> + self.reserved = 0 # Must be zero
> + self.flag_bits = 0 # ZIP flag bits
> + self.volume = 0 # Volume number of file header
> + self.internal_attr = 0 # Internal attributes
> + self.external_attr = 0 # External file attributes
> + # Other attributes are set by class ZipFile:
> + # header_offset Byte offset to the file header
> + # CRC CRC-32 of the uncompressed file
> + # compress_size Size of the compressed file
> + # file_size Size of the uncompressed file
> +
> + def __repr__(self):
> + result = ['<%s filename=%r' % (self.__class__.__name__, self.filename)]
> + if self.compress_type != ZIP_STORED:
> + result.append(' compress_type=%s' %
> + compressor_names.get(self.compress_type,
> + self.compress_type))
> + hi = self.external_attr >> 16
> + lo = self.external_attr & 0xFFFF
> + if hi:
> + result.append(' filemode=%r' % stat.filemode(hi))
> + if lo:
> + result.append(' external_attr=%#x' % lo)
> + isdir = self.is_dir()
> + if not isdir or self.file_size:
> + result.append(' file_size=%r' % self.file_size)
> + if ((not isdir or self.compress_size) and
> + (self.compress_type != ZIP_STORED or
> + self.file_size != self.compress_size)):
> + result.append(' compress_size=%r' % self.compress_size)
> + result.append('>')
> + return ''.join(result)
> +
> + def FileHeader(self, zip64=None):
> + """Return the per-file header as a bytes object."""
> + dt = self.date_time
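> + # MS-DOS timestamp layout: date = (year - 1980) << 9 | month << 5 | day,
> + # time = hour << 11 | minute << 5 | second // 2 (2-second resolution).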
> + dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
> + dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
> + if self.flag_bits & 0x08:
> + # Set these to zero because we write them after the file data
> + CRC = compress_size = file_size = 0
> + else:
> + CRC = self.CRC
> + compress_size = self.compress_size
> + file_size = self.file_size
> +
> + extra = self.extra
> +
> + min_version = 0
> + if zip64 is None:
> + zip64 = file_size > ZIP64_LIMIT or compress_size > ZIP64_LIMIT
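> + # Header ID 0x0001 marks the ZIP64 extended information extra field; it
> + # carries the real 64-bit sizes when the 32-bit header fields overflow.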
> + if zip64:
> + fmt = '<HHQQ'
> + extra = extra + struct.pack(fmt,
> + 1, struct.calcsize(fmt)-4, file_size, compress_size)
> + if file_size > ZIP64_LIMIT or compress_size > ZIP64_LIMIT:
> + if not zip64:
> + raise LargeZipFile("Filesize would require ZIP64 extensions")
> + # File is larger than what fits into a 4 byte integer,
> + # fall back to the ZIP64 extension
> + file_size = 0xffffffff
> + compress_size = 0xffffffff
> + min_version = ZIP64_VERSION
> +
> + if self.compress_type == ZIP_BZIP2:
> + min_version = max(BZIP2_VERSION, min_version)
> + elif self.compress_type == ZIP_LZMA:
> + min_version = max(LZMA_VERSION, min_version)
> +
> + self.extract_version = max(min_version, self.extract_version)
> + self.create_version = max(min_version, self.create_version)
> + filename, flag_bits = self._encodeFilenameFlags()
> + header = struct.pack(structFileHeader, stringFileHeader,
> + self.extract_version, self.reserved, flag_bits,
> + self.compress_type, dostime, dosdate, CRC,
> + compress_size, file_size,
> + len(filename), len(extra))
> + return header + filename + extra
> +
> + def _encodeFilenameFlags(self):
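> + # General purpose flag bit 11 (0x800) signals a UTF-8 encoded file name;
> + # plain ASCII names are stored without it.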
> + try:
> + return self.filename.encode('ascii'), self.flag_bits
> + except UnicodeEncodeError:
> + return self.filename.encode('utf-8'), self.flag_bits | 0x800
> +
> + def _decodeExtra(self):
> + # Try to decode the extra field.
> + extra = self.extra
> + unpack = struct.unpack
> + while len(extra) >= 4:
> + tp, ln = unpack('<HH', extra[:4])
> + if tp == 1:
> + if ln >= 24:
> + counts = unpack('<QQQ', extra[4:28])
> + elif ln == 16:
> + counts = unpack('<QQ', extra[4:20])
> + elif ln == 8:
> + counts = unpack('<Q', extra[4:12])
> + elif ln == 0:
> + counts = ()
> + else:
> + raise BadZipFile("Corrupt extra field %04x (size=%d)" % (tp, ln))
> +
> + idx = 0
> +
> + # ZIP64 extension (large files and/or large archives)
> + if self.file_size in (0xffffffffffffffff, 0xffffffff):
> + self.file_size = counts[idx]
> + idx += 1
> +
> + if self.compress_size == 0xFFFFFFFF:
> + self.compress_size = counts[idx]
> + idx += 1
> +
> + if self.header_offset == 0xffffffff:
> + old = self.header_offset
> + self.header_offset = counts[idx]
> + idx += 1
> +
> + extra = extra[ln+4:]
> +
> + @classmethod
> + def from_file(cls, filename, arcname=None):
> + """Construct an appropriate ZipInfo for a file on the filesystem.
> +
> + filename should be the path to a file or directory on the filesystem.
> +
> + arcname is the name which it will have within the archive (by default,
> + this will be the same as filename, but without a drive letter and with
> + leading path separators removed).
> + """
> + if isinstance(filename, os.PathLike):
> + filename = os.fspath(filename)
> + st = os.stat(filename)
> + isdir = stat.S_ISDIR(st.st_mode)
> + mtime = time.localtime(st.st_mtime)
> + date_time = mtime[0:6]
> + # Create ZipInfo instance to store file information
> + if arcname is None:
> + arcname = filename
> + arcname = os.path.normpath(os.path.splitdrive(arcname)[1])
> + while arcname[0] in (os.sep, os.altsep):
> + arcname = arcname[1:]
> + if isdir:
> + arcname += '/'
> + zinfo = cls(arcname, date_time)
> + zinfo.external_attr = (st.st_mode & 0xFFFF) << 16 # Unix attributes
> + if isdir:
> + zinfo.file_size = 0
> + zinfo.external_attr |= 0x10 # MS-DOS directory flag
> + else:
> + zinfo.file_size = st.st_size
> +
> + return zinfo
> +
> + def is_dir(self):
> + """Return True if this archive member is a directory."""
> + return self.filename[-1] == '/'
> +
> +
> +class _ZipDecrypter:
> + """Class to handle decryption of files stored within a ZIP archive.
> +
> + ZIP supports a password-based form of encryption. Even though known
> + plaintext attacks have been found against it, it is still useful
> + to be able to get data out of such a file.
> +
> + Usage:
> + zd = _ZipDecrypter(mypwd)
> + plain_char = zd(cypher_char)
> + plain_text = map(zd, cypher_text)
> + """
> +
> + def _GenerateCRCTable():
> + """Generate a CRC-32 table.
> +
> + ZIP encryption uses the CRC32 one-byte primitive for scrambling some
> + internal keys. We noticed that a direct implementation is faster than
> + relying on binascii.crc32().
> + """
> + poly = 0xedb88320
> + table = [0] * 256
> + for i in range(256):
> + crc = i
> + for j in range(8):
> + if crc & 1:
> + crc = ((crc >> 1) & 0x7FFFFFFF) ^ poly
> + else:
> + crc = ((crc >> 1) & 0x7FFFFFFF)
> + table[i] = crc
> + return table
> + crctable = None
> +
> + def _crc32(self, ch, crc):
> + """Compute the CRC32 primitive on one byte."""
> + return ((crc >> 8) & 0xffffff) ^ self.crctable[(crc ^ ch) & 0xff]
> +
> + def __init__(self, pwd):
> + if _ZipDecrypter.crctable is None:
> + _ZipDecrypter.crctable = _ZipDecrypter._GenerateCRCTable()
> + self.key0 = 305419896
> + self.key1 = 591751049
> + self.key2 = 878082192
> + for p in pwd:
> + self._UpdateKeys(p)
> +
> + def _UpdateKeys(self, c):
> + self.key0 = self._crc32(c, self.key0)
> + self.key1 = (self.key1 + (self.key0 & 255)) & 4294967295
> + self.key1 = (self.key1 * 134775813 + 1) & 4294967295
> + self.key2 = self._crc32((self.key1 >> 24) & 255, self.key2)
> +
> + def __call__(self, c):
> + """Decrypt a single character."""
> + assert isinstance(c, int)
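> + # Traditional PKWARE decryption: derive a keystream byte from key2,
> + # XOR it with the cipher byte, then feed the recovered plaintext byte
> + # back into the key state.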
> + k = self.key2 | 2
> + c = c ^ (((k * (k^1)) >> 8) & 255)
> + self._UpdateKeys(c)
> + return c
> +
> +
> +class LZMACompressor:
> +
> + def __init__(self):
> + self._comp = None
> +
> + def _init(self):
> + props = lzma._encode_filter_properties({'id': lzma.FILTER_LZMA1})
> + self._comp = lzma.LZMACompressor(lzma.FORMAT_RAW, filters=[
> + lzma._decode_filter_properties(lzma.FILTER_LZMA1, props)
> + ])
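> + # The compressed stream starts with a small header: the LZMA SDK version
> + # bytes (9, 4) and the 2-byte length of the raw filter properties.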
> + return struct.pack('<BBH', 9, 4, len(props)) + props
> +
> + def compress(self, data):
> + if self._comp is None:
> + return self._init() + self._comp.compress(data)
> + return self._comp.compress(data)
> +
> + def flush(self):
> + if self._comp is None:
> + return self._init() + self._comp.flush()
> + return self._comp.flush()
> +
> +
> +class LZMADecompressor:
> +
> + def __init__(self):
> + self._decomp = None
> + self._unconsumed = b''
> + self.eof = False
> +
> + def decompress(self, data):
> + if self._decomp is None:
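> + # Buffer input until the 4-byte header and the filter properties that
> + # follow it have arrived, then create the raw LZMA1 decompressor.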
> + self._unconsumed += data
> + if len(self._unconsumed) <= 4:
> + return b''
> + psize, = struct.unpack('<H', self._unconsumed[2:4])
> + if len(self._unconsumed) <= 4 + psize:
> + return b''
> +
> + self._decomp = lzma.LZMADecompressor(lzma.FORMAT_RAW, filters=[
> + lzma._decode_filter_properties(lzma.FILTER_LZMA1,
> + self._unconsumed[4:4 + psize])
> + ])
> + data = self._unconsumed[4 + psize:]
> + del self._unconsumed
> +
> + result = self._decomp.decompress(data)
> + self.eof = self._decomp.eof
> + return result
> +
> +
> +compressor_names = {
> + 0: 'store',
> + 1: 'shrink',
> + 2: 'reduce',
> + 3: 'reduce',
> + 4: 'reduce',
> + 5: 'reduce',
> + 6: 'implode',
> + 7: 'tokenize',
> + 8: 'deflate',
> + 9: 'deflate64',
> + 10: 'implode',
> + 12: 'bzip2',
> + 14: 'lzma',
> + 18: 'terse',
> + 19: 'lz77',
> + 97: 'wavpack',
> + 98: 'ppmd',
> +}
> +
> +def _check_compression(compression):
> + if compression == ZIP_STORED:
> + pass
> + elif compression == ZIP_DEFLATED:
> + if not zlib:
> + raise RuntimeError(
> + "Compression requires the (missing) zlib module")
> + elif compression == ZIP_BZIP2:
> + if not bz2:
> + raise RuntimeError(
> + "Compression requires the (missing) bz2 module")
> + elif compression == ZIP_LZMA:
> + if not lzma:
> + raise RuntimeError(
> + "Compression requires the (missing) lzma module")
> + else:
> + raise NotImplementedError("That compression method is not supported")
> +
> +
> +def _get_compressor(compress_type):
> + if compress_type == ZIP_DEFLATED:
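> + # wbits=-15 selects a raw deflate stream with no zlib header or
> + # checksum, which is how ZIP stores deflated data.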
> + return zlib.compressobj(zlib.Z_DEFAULT_COMPRESSION,
> + zlib.DEFLATED, -15)
> + elif compress_type == ZIP_BZIP2:
> + return bz2.BZ2Compressor()
> + elif compress_type == ZIP_LZMA:
> + return LZMACompressor()
> + else:
> + return None
> +
> +
> +def _get_decompressor(compress_type):
> + if compress_type == ZIP_STORED:
> + return None
> + elif compress_type == ZIP_DEFLATED:
> + return zlib.decompressobj(-15)
> + elif compress_type == ZIP_BZIP2:
> + return bz2.BZ2Decompressor()
> + elif compress_type == ZIP_LZMA:
> + return LZMADecompressor()
> + else:
> + descr = compressor_names.get(compress_type)
> + if descr:
> + raise NotImplementedError("compression type %d (%s)" % (compress_type, descr))
> + else:
> + raise NotImplementedError("compression type %d" % (compress_type,))
> +
> +
> +class _SharedFile:
> + def __init__(self, file, pos, close, lock, writing):
> + self._file = file
> + self._pos = pos
> + self._close = close
> + self._lock = lock
> + self._writing = writing
> +
> + def read(self, n=-1):
> + with self._lock:
> + if self._writing():
> + raise ValueError("Can't read from the ZIP file while there "
> + "is an open writing handle on it. "
> + "Close the writing handle before trying to read.")
> + self._file.seek(self._pos)
> + data = self._file.read(n)
> + self._pos = self._file.tell()
> + return data
> +
> + def close(self):
> + if self._file is not None:
> + fileobj = self._file
> + self._file = None
> + self._close(fileobj)
> +
> +# Provide the tell method for unseekable streams
> +class _Tellable:
> + def __init__(self, fp):
> + self.fp = fp
> + self.offset = 0
> +
> + def write(self, data):
> + n = self.fp.write(data)
> + self.offset += n
> + return n
> +
> + def tell(self):
> + return self.offset
> +
> + def flush(self):
> + self.fp.flush()
> +
> + def close(self):
> + self.fp.close()
> +
> +
> +class ZipExtFile(io.BufferedIOBase):
> + """File-like object for reading an archive member.
> + Is returned by ZipFile.open().
> + """
> +
> + # Max size supported by decompressor.
> + MAX_N = 1 << 31 - 1
> +
> + # Read from compressed files in 4k blocks.
> + MIN_READ_SIZE = 4096
> +
> + def __init__(self, fileobj, mode, zipinfo, decrypter=None,
> + close_fileobj=False):
> + self._fileobj = fileobj
> + self._decrypter = decrypter
> + self._close_fileobj = close_fileobj
> +
> + self._compress_type = zipinfo.compress_type
> + self._compress_left = zipinfo.compress_size
> + self._left = zipinfo.file_size
> +
> + self._decompressor = _get_decompressor(self._compress_type)
> +
> + self._eof = False
> + self._readbuffer = b''
> + self._offset = 0
> +
> + self.newlines = None
> +
> + # Adjust read size for encrypted files since the first 12 bytes
> + # are for the encryption/password information.
> + if self._decrypter is not None:
> + self._compress_left -= 12
> +
> + self.mode = mode
> + self.name = zipinfo.filename
> +
> + if hasattr(zipinfo, 'CRC'):
> + self._expected_crc = zipinfo.CRC
> + self._running_crc = crc32(b'')
> + else:
> + self._expected_crc = None
> +
> + def __repr__(self):
> + result = ['<%s.%s' % (self.__class__.__module__,
> + self.__class__.__qualname__)]
> + if not self.closed:
> + result.append(' name=%r mode=%r' % (self.name, self.mode))
> + if self._compress_type != ZIP_STORED:
> + result.append(' compress_type=%s' %
> + compressor_names.get(self._compress_type,
> + self._compress_type))
> + else:
> + result.append(' [closed]')
> + result.append('>')
> + return ''.join(result)
> +
> + def readline(self, limit=-1):
> + """Read and return a line from the stream.
> +
> + If limit is specified, at most limit bytes will be read.
> + """
> +
> + if limit < 0:
> + # Shortcut common case - newline found in buffer.
> + i = self._readbuffer.find(b'\n', self._offset) + 1
> + if i > 0:
> + line = self._readbuffer[self._offset: i]
> + self._offset = i
> + return line
> +
> + return io.BufferedIOBase.readline(self, limit)
> +
> + def peek(self, n=1):
> + """Returns buffered bytes without advancing the position."""
> + if n > len(self._readbuffer) - self._offset:
> + chunk = self.read(n)
> + if len(chunk) > self._offset:
> + self._readbuffer = chunk + self._readbuffer[self._offset:]
> + self._offset = 0
> + else:
> + self._offset -= len(chunk)
> +
> + # Return up to 512 bytes to reduce allocation overhead for tight loops.
> + return self._readbuffer[self._offset: self._offset + 512]
> +
> + def readable(self):
> + return True
> +
> + def read(self, n=-1):
> + """Read and return up to n bytes.
> + If the argument is omitted, None, or negative, data is read and returned until EOF is reached.
> + """
> + if n is None or n < 0:
> + buf = self._readbuffer[self._offset:]
> + self._readbuffer = b''
> + self._offset = 0
> + while not self._eof:
> + buf += self._read1(self.MAX_N)
> + return buf
> +
> + end = n + self._offset
> + if end < len(self._readbuffer):
> + buf = self._readbuffer[self._offset:end]
> + self._offset = end
> + return buf
> +
> + n = end - len(self._readbuffer)
> + buf = self._readbuffer[self._offset:]
> + self._readbuffer = b''
> + self._offset = 0
> + while n > 0 and not self._eof:
> + data = self._read1(n)
> + if n < len(data):
> + self._readbuffer = data
> + self._offset = n
> + buf += data[:n]
> + break
> + buf += data
> + n -= len(data)
> + return buf
> +
> + def _update_crc(self, newdata):
> + # Update the CRC using the given data.
> + if self._expected_crc is None:
> + # No need to compute the CRC if we don't have a reference value
> + return
> + self._running_crc = crc32(newdata, self._running_crc)
> + # Check the CRC if we're at the end of the file
> + if self._eof and self._running_crc != self._expected_crc:
> + raise BadZipFile("Bad CRC-32 for file %r" % self.name)
> +
> + def read1(self, n):
> + """Read up to n bytes with at most one read() system call."""
> +
> + if n is None or n < 0:
> + buf = self._readbuffer[self._offset:]
> + self._readbuffer = b''
> + self._offset = 0
> + while not self._eof:
> + data = self._read1(self.MAX_N)
> + if data:
> + buf += data
> + break
> + return buf
> +
> + end = n + self._offset
> + if end < len(self._readbuffer):
> + buf = self._readbuffer[self._offset:end]
> + self._offset = end
> + return buf
> +
> + n = end - len(self._readbuffer)
> + buf = self._readbuffer[self._offset:]
> + self._readbuffer = b''
> + self._offset = 0
> + if n > 0:
> + while not self._eof:
> + data = self._read1(n)
> + if n < len(data):
> + self._readbuffer = data
> + self._offset = n
> + buf += data[:n]
> + break
> + if data:
> + buf += data
> + break
> + return buf
> +
> + def _read1(self, n):
> + # Read up to n compressed bytes with at most one read() system call,
> + # decrypt and decompress them.
> + if self._eof or n <= 0:
> + return b''
> +
> + # Read from file.
> + if self._compress_type == ZIP_DEFLATED:
> + ## Handle unconsumed data.
> + data = self._decompressor.unconsumed_tail
> + if n > len(data):
> + data += self._read2(n - len(data))
> + else:
> + data = self._read2(n)
> +
> + if self._compress_type == ZIP_STORED:
> + self._eof = self._compress_left <= 0
> + elif self._compress_type == ZIP_DEFLATED:
> + n = max(n, self.MIN_READ_SIZE)
> + data = self._decompressor.decompress(data, n)
> + self._eof = (self._decompressor.eof or
> + self._compress_left <= 0 and
> + not self._decompressor.unconsumed_tail)
> + if self._eof:
> + data += self._decompressor.flush()
> + else:
> + data = self._decompressor.decompress(data)
> + self._eof = self._decompressor.eof or self._compress_left <= 0
> +
> + data = data[:self._left]
> + self._left -= len(data)
> + if self._left <= 0:
> + self._eof = True
> + self._update_crc(data)
> + return data
> +
> + def _read2(self, n):
> + if self._compress_left <= 0:
> + return b''
> +
> + n = max(n, self.MIN_READ_SIZE)
> + n = min(n, self._compress_left)
> +
> + data = self._fileobj.read(n)
> + self._compress_left -= len(data)
> + if not data:
> + raise EOFError
> +
> + if self._decrypter is not None:
> + data = bytes(map(self._decrypter, data))
> + return data
> +
> + def close(self):
> + try:
> + if self._close_fileobj:
> + self._fileobj.close()
> + finally:
> + super().close()
> +
> +
> +class _ZipWriteFile(io.BufferedIOBase):
> + def __init__(self, zf, zinfo, zip64):
> + self._zinfo = zinfo
> + self._zip64 = zip64
> + self._zipfile = zf
> + self._compressor = _get_compressor(zinfo.compress_type)
> + self._file_size = 0
> + self._compress_size = 0
> + self._crc = 0
> +
> + @property
> + def _fileobj(self):
> + return self._zipfile.fp
> +
> + def writable(self):
> + return True
> +
> + def write(self, data):
> + if self.closed:
> + raise ValueError('I/O operation on closed file.')
> + nbytes = len(data)
> + self._file_size += nbytes
> + self._crc = crc32(data, self._crc)
> + if self._compressor:
> + data = self._compressor.compress(data)
> + self._compress_size += len(data)
> + self._fileobj.write(data)
> + return nbytes
> +
> + def close(self):
> + if self.closed:
> + return
> + super().close()
> + # Flush any data from the compressor, and update header info
> + if self._compressor:
> + buf = self._compressor.flush()
> + self._compress_size += len(buf)
> + self._fileobj.write(buf)
> + self._zinfo.compress_size = self._compress_size
> + else:
> + self._zinfo.compress_size = self._file_size
> + self._zinfo.CRC = self._crc
> + self._zinfo.file_size = self._file_size
> +
> + # Write updated header info
> + if self._zinfo.flag_bits & 0x08:
> + # Write CRC and file sizes after the file data
> + fmt = '<LLQQ' if self._zip64 else '<LLLL'
> + self._fileobj.write(struct.pack(fmt, _DD_SIGNATURE, self._zinfo.CRC,
> + self._zinfo.compress_size, self._zinfo.file_size))
> + self._zipfile.start_dir = self._fileobj.tell()
> + else:
> + if not self._zip64:
> + if self._file_size > ZIP64_LIMIT:
> + raise RuntimeError('File size unexpectedly exceeded ZIP64 '
> + 'limit')
> + if self._compress_size > ZIP64_LIMIT:
> + raise RuntimeError('Compressed size unexpectedly exceeded '
> + 'ZIP64 limit')
> + # Seek backwards and write file header (which will now include
> + # correct CRC and file sizes)
> +
> + # Preserve current position in file
> + self._zipfile.start_dir = self._fileobj.tell()
> + self._fileobj.seek(self._zinfo.header_offset)
> + self._fileobj.write(self._zinfo.FileHeader(self._zip64))
> + self._fileobj.seek(self._zipfile.start_dir)
> +
> + self._zipfile._writing = False
> +
> + # Successfully written: Add file to our caches
> + self._zipfile.filelist.append(self._zinfo)
> + self._zipfile.NameToInfo[self._zinfo.filename] = self._zinfo
> +
> +class ZipFile:
> + """ Class with methods to open, read, write, close, list zip files.
> +
> + z = ZipFile(file, mode="r", compression=ZIP_STORED, allowZip64=True)
> +
> + file: Either the path to the file, or a file-like object.
> + If it is a path, the file will be opened and closed by ZipFile.
> + mode: The mode can be either read 'r', write 'w', exclusive create 'x',
> + or append 'a'.
> + compression: ZIP_STORED (no compression), ZIP_DEFLATED (requires zlib),
> + ZIP_BZIP2 (requires bz2) or ZIP_LZMA (requires lzma).
> + allowZip64: if True ZipFile will create files with ZIP64 extensions when
> + needed, otherwise it will raise an exception when this would
> + be necessary.
> +
> + """
> +
> + fp = None # Set here since __del__ checks it
> + _windows_illegal_name_trans_table = None
> +
> + def __init__(self, file, mode="r", compression=ZIP_STORED, allowZip64=True):
> + """Open the ZIP file with mode read 'r', write 'w', exclusive create 'x',
> + or append 'a'."""
> + if mode not in ('r', 'w', 'x', 'a'):
> + raise ValueError("ZipFile requires mode 'r', 'w', 'x', or 'a'")
> +
> + _check_compression(compression)
> +
> + self._allowZip64 = allowZip64
> + self._didModify = False
> + self.debug = 0 # Level of printing: 0 through 3
> + self.NameToInfo = {} # Find file info given name
> + self.filelist = [] # List of ZipInfo instances for archive
> + self.compression = compression # Method of compression
> + self.mode = mode
> + self.pwd = None
> + self._comment = b''
> +
> + # Check if we were passed a file-like object
> +# if isinstance(file, os.PathLike):
> +# file = os.fspath(file)
> + if isinstance(file, str):
> + # No, it's a filename
> + self._filePassed = 0
> + self.filename = file
> + modeDict = {'r' : 'rb', 'w': 'w+b', 'x': 'x+b', 'a' : 'r+b',
> + 'r+b': 'w+b', 'w+b': 'wb', 'x+b': 'xb'}
> + filemode = modeDict[mode]
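> + # If the open fails (e.g. appending to a file that does not exist),
> + # fall back through modeDict until a usable mode is found.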
> + while True:
> + try:
> + self.fp = io.open(file, filemode)
> + except OSError:
> + if filemode in modeDict:
> + filemode = modeDict[filemode]
> + continue
> + raise
> + break
> + else:
> + self._filePassed = 1
> + self.fp = file
> + self.filename = getattr(file, 'name', None)
> + self._fileRefCnt = 1
> + self._lock = threading.RLock()
> + self._seekable = True
> + self._writing = False
> +
> + try:
> + if mode == 'r':
> + self._RealGetContents()
> + elif mode in ('w', 'x'):
> + # set the modified flag so central directory gets written
> + # even if no files are added to the archive
> + self._didModify = True
> + try:
> + self.start_dir = self.fp.tell()
> + except (AttributeError, OSError):
> + self.fp = _Tellable(self.fp)
> + self.start_dir = 0
> + self._seekable = False
> + else:
> + # Some file-like objects can provide tell() but not seek()
> + try:
> + self.fp.seek(self.start_dir)
> + except (AttributeError, OSError):
> + self._seekable = False
> + elif mode == 'a':
> + try:
> + # See if file is a zip file
> + self._RealGetContents()
> + # seek to start of directory and overwrite
> + self.fp.seek(self.start_dir)
> + except BadZipFile:
> + # file is not a zip file, just append
> + self.fp.seek(0, 2)
> +
> + # set the modified flag so central directory gets written
> + # even if no files are added to the archive
> + self._didModify = True
> + self.start_dir = self.fp.tell()
> + else:
> + raise ValueError("Mode must be 'r', 'w', 'x', or 'a'")
> + except:
> + fp = self.fp
> + self.fp = None
> + self._fpclose(fp)
> + raise
> +
> + def __enter__(self):
> + return self
> +
> + def __exit__(self, type, value, traceback):
> + self.close()
> +
> + def __repr__(self):
> + result = ['<%s.%s' % (self.__class__.__module__,
> + self.__class__.__qualname__)]
> + if self.fp is not None:
> + if self._filePassed:
> + result.append(' file=%r' % self.fp)
> + elif self.filename is not None:
> + result.append(' filename=%r' % self.filename)
> + result.append(' mode=%r' % self.mode)
> + else:
> + result.append(' [closed]')
> + result.append('>')
> + return ''.join(result)
> +
> + def _RealGetContents(self):
> + """Read in the table of contents for the ZIP file."""
> + fp = self.fp
> + try:
> + endrec = _EndRecData(fp)
> + except OSError:
> + raise BadZipFile("File is not a zip file")
> + if not endrec:
> + raise BadZipFile("File is not a zip file")
> + if self.debug > 1:
> + print(endrec)
> + size_cd = endrec[_ECD_SIZE] # bytes in central directory
> + offset_cd = endrec[_ECD_OFFSET] # offset of central directory
> + self._comment = endrec[_ECD_COMMENT] # archive comment
> +
> + # "concat" is zero, unless zip was concatenated to another file
> + concat = endrec[_ECD_LOCATION] - size_cd - offset_cd
> + if endrec[_ECD_SIGNATURE] == stringEndArchive64:
> + # If Zip64 extension structures are present, account for them
> + concat -= (sizeEndCentDir64 + sizeEndCentDir64Locator)
> +
> + if self.debug > 2:
> + inferred = concat + offset_cd
> + print("given, inferred, offset", offset_cd, inferred, concat)
> + # self.start_dir: Position of start of central directory
> + self.start_dir = offset_cd + concat
> + fp.seek(self.start_dir, 0)
> + data = fp.read(size_cd)
> + fp = io.BytesIO(data)
> + total = 0
> + while total < size_cd:
> + centdir = fp.read(sizeCentralDir)
> + if len(centdir) != sizeCentralDir:
> + raise BadZipFile("Truncated central directory")
> + centdir = struct.unpack(structCentralDir, centdir)
> + if centdir[_CD_SIGNATURE] != stringCentralDir:
> + raise BadZipFile("Bad magic number for central directory")
> + if self.debug > 2:
> + print(centdir)
> + filename = fp.read(centdir[_CD_FILENAME_LENGTH])
> + flags = centdir[5]
> + if flags & 0x800:
> + # UTF-8 file names extension
> + filename = filename.decode('utf-8')
> + else:
> + # Historical ZIP filename encoding
> + filename = filename.decode('cp437')
> + # Create ZipInfo instance to store file information
> + x = ZipInfo(filename)
> + x.extra = fp.read(centdir[_CD_EXTRA_FIELD_LENGTH])
> + x.comment = fp.read(centdir[_CD_COMMENT_LENGTH])
> + x.header_offset = centdir[_CD_LOCAL_HEADER_OFFSET]
> + (x.create_version, x.create_system, x.extract_version, x.reserved,
> + x.flag_bits, x.compress_type, t, d,
> + x.CRC, x.compress_size, x.file_size) = centdir[1:12]
> + if x.extract_version > MAX_EXTRACT_VERSION:
> + raise NotImplementedError("zip file version %.1f" %
> + (x.extract_version / 10))
> + x.volume, x.internal_attr, x.external_attr = centdir[15:18]
> + # Convert date/time code to (year, month, day, hour, min, sec)
> + x._raw_time = t
> + x.date_time = ( (d>>9)+1980, (d>>5)&0xF, d&0x1F,
> + t>>11, (t>>5)&0x3F, (t&0x1F) * 2 )
> +
> + x._decodeExtra()
> + x.header_offset = x.header_offset + concat
> + self.filelist.append(x)
> + self.NameToInfo[x.filename] = x
> +
> + # update total bytes read from central directory
> + total = (total + sizeCentralDir + centdir[_CD_FILENAME_LENGTH]
> + + centdir[_CD_EXTRA_FIELD_LENGTH]
> + + centdir[_CD_COMMENT_LENGTH])
> +
> + if self.debug > 2:
> + print("total", total)
> +
> +
> + def namelist(self):
> + """Return a list of file names in the archive."""
> + return [data.filename for data in self.filelist]
> +
> + def infolist(self):
> + """Return a list of class ZipInfo instances for files in the
> + archive."""
> + return self.filelist
> +
> + def printdir(self, file=None):
> + """Print a table of contents for the zip file."""
> + print("%-46s %19s %12s" % ("File Name", "Modified ", "Size"),
> + file=file)
> + for zinfo in self.filelist:
> + date = "%d-%02d-%02d %02d:%02d:%02d" % zinfo.date_time[:6]
> + print("%-46s %s %12d" % (zinfo.filename, date, zinfo.file_size),
> + file=file)
> +
> + def testzip(self):
> + """Read all the files and check the CRC."""
> + chunk_size = 2 ** 20
> + for zinfo in self.filelist:
> + try:
> + # Read by chunks, to avoid an OverflowError or a
> + # MemoryError with very large embedded files.
> + with self.open(zinfo.filename, "r") as f:
> + while f.read(chunk_size): # Check CRC-32
> + pass
> + except BadZipFile:
> + return zinfo.filename
> +
> + def getinfo(self, name):
> + """Return the instance of ZipInfo given 'name'."""
> + info = self.NameToInfo.get(name)
> + if info is None:
> + raise KeyError(
> + 'There is no item named %r in the archive' % name)
> +
> + return info
> +
> + def setpassword(self, pwd):
> + """Set default password for encrypted files."""
> + if pwd and not isinstance(pwd, bytes):
> + raise TypeError("pwd: expected bytes, got %s" % type(pwd).__name__)
> + if pwd:
> + self.pwd = pwd
> + else:
> + self.pwd = None
> +
> + @property
> + def comment(self):
> + """The comment text associated with the ZIP file."""
> + return self._comment
> +
> + @comment.setter
> + def comment(self, comment):
> + if not isinstance(comment, bytes):
> + raise TypeError("comment: expected bytes, got %s" % type(comment).__name__)
> + # check for valid comment length
> + if len(comment) > ZIP_MAX_COMMENT:
> + import warnings
> + warnings.warn('Archive comment is too long; truncating to %d bytes'
> + % ZIP_MAX_COMMENT, stacklevel=2)
> + comment = comment[:ZIP_MAX_COMMENT]
> + self._comment = comment
> + self._didModify = True
> +
> + def read(self, name, pwd=None):
> + """Return file bytes for name."""
> + with self.open(name, "r", pwd) as fp:
> + return fp.read()
> +
> + def open(self, name, mode="r", pwd=None, *, force_zip64=False):
> + """Return file-like object for 'name'.
> +
> + name is a string for the file name within the ZIP file, or a ZipInfo
> + object.
> +
> + mode should be 'r' to read a file already in the ZIP file, or 'w' to
> + write to a file newly added to the archive.
> +
> + pwd is the password to decrypt files (only used for reading).
> +
> + When writing, if the file size is not known in advance but may exceed
> + 2 GiB, pass force_zip64 to use the ZIP64 format, which can handle large
> + files. If the size is known in advance, it is best to pass a ZipInfo
> + instance for name, with zinfo.file_size set.
> + """
> + if mode not in {"r", "w"}:
> + raise ValueError('open() requires mode "r" or "w"')
> + if pwd and not isinstance(pwd, bytes):
> + raise TypeError("pwd: expected bytes, got %s" % type(pwd).__name__)
> + if pwd and (mode == "w"):
> + raise ValueError("pwd is only supported for reading files")
> + if not self.fp:
> + raise ValueError(
> + "Attempt to use ZIP archive that was already closed")
> +
> + # Make sure we have an info object
> + if isinstance(name, ZipInfo):
> + # 'name' is already an info object
> + zinfo = name
> + elif mode == 'w':
> + zinfo = ZipInfo(name)
> + zinfo.compress_type = self.compression
> + else:
> + # Get info object for name
> + zinfo = self.getinfo(name)
> +
> + if mode == 'w':
> + return self._open_to_write(zinfo, force_zip64=force_zip64)
> +
> + if self._writing:
> + raise ValueError("Can't read from the ZIP file while there "
> + "is an open writing handle on it. "
> + "Close the writing handle before trying to read.")
> +
> + # Open for reading:
> + self._fileRefCnt += 1
> + zef_file = _SharedFile(self.fp, zinfo.header_offset,
> + self._fpclose, self._lock, lambda: self._writing)
> + try:
> + # Skip the file header:
> + fheader = zef_file.read(sizeFileHeader)
> + if len(fheader) != sizeFileHeader:
> + raise BadZipFile("Truncated file header")
> + fheader = struct.unpack(structFileHeader, fheader)
> + if fheader[_FH_SIGNATURE] != stringFileHeader:
> + raise BadZipFile("Bad magic number for file header")
> +
> + fname = zef_file.read(fheader[_FH_FILENAME_LENGTH])
> + if fheader[_FH_EXTRA_FIELD_LENGTH]:
> + zef_file.read(fheader[_FH_EXTRA_FIELD_LENGTH])
> +
> + if zinfo.flag_bits & 0x20:
> + # Zip 2.7: compressed patched data
> + raise NotImplementedError("compressed patched data (flag bit 5)")
> +
> + if zinfo.flag_bits & 0x40:
> + # strong encryption
> + raise NotImplementedError("strong encryption (flag bit 6)")
> +
> + if zinfo.flag_bits & 0x800:
> + # UTF-8 filename
> + fname_str = fname.decode("utf-8")
> + else:
> + fname_str = fname.decode("cp437")
> +
> + if fname_str != zinfo.orig_filename:
> + raise BadZipFile(
> + 'File name in directory %r and header %r differ.'
> + % (zinfo.orig_filename, fname))
> +
> + # check for encrypted flag & handle password
> + is_encrypted = zinfo.flag_bits & 0x1
> + zd = None
> + if is_encrypted:
> + if not pwd:
> + pwd = self.pwd
> + if not pwd:
> + raise RuntimeError("File %r is encrypted, password "
> + "required for extraction" % name)
> +
> + zd = _ZipDecrypter(pwd)
> + # The first 12 bytes in the cypher stream is an encryption header
> + # used to strengthen the algorithm. The first 11 bytes are
> + # completely random, while the 12th contains the MSB of the CRC,
> + # or the MSB of the file time depending on the header type
> + # and is used to check the correctness of the password.
> + header = zef_file.read(12)
> + h = list(map(zd, header[0:12]))
> + if zinfo.flag_bits & 0x8:
> + # compare against the file type from extended local headers
> + check_byte = (zinfo._raw_time >> 8) & 0xff
> + else:
> + # compare against the CRC otherwise
> + check_byte = (zinfo.CRC >> 24) & 0xff
> + if h[11] != check_byte:
> + raise RuntimeError("Bad password for file %r" % name)
> +
> + return ZipExtFile(zef_file, mode, zinfo, zd, True)
> + except:
> + zef_file.close()
> + raise
> +
> + def _open_to_write(self, zinfo, force_zip64=False):
> + if force_zip64 and not self._allowZip64:
> + raise ValueError(
> + "force_zip64 is True, but allowZip64 was False when opening "
> + "the ZIP file."
> + )
> + if self._writing:
> + raise ValueError("Can't write to the ZIP file while there is "
> + "another write handle open on it. "
> + "Close the first handle before opening another.")
> +
> + # Sizes and CRC are overwritten with correct data after processing the file
> + if not hasattr(zinfo, 'file_size'):
> + zinfo.file_size = 0
> + zinfo.compress_size = 0
> + zinfo.CRC = 0
> +
> + zinfo.flag_bits = 0x00
> + if zinfo.compress_type == ZIP_LZMA:
> + # Compressed data includes an end-of-stream (EOS) marker
> + zinfo.flag_bits |= 0x02
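> + # Flag bit 3 (0x08): CRC and sizes are written in a data descriptor
> + # after the file data, because an unseekable output stream cannot be
> + # rewound to patch the local header.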
> + if not self._seekable:
> + zinfo.flag_bits |= 0x08
> +
> + if not zinfo.external_attr:
> + zinfo.external_attr = 0o600 << 16 # permissions: ?rw-------
> +
> + # Compressed size can be larger than uncompressed size
> + zip64 = self._allowZip64 and \
> + (force_zip64 or zinfo.file_size * 1.05 > ZIP64_LIMIT)
> +
> + if self._seekable:
> + self.fp.seek(self.start_dir)
> + zinfo.header_offset = self.fp.tell()
> +
> + self._writecheck(zinfo)
> + self._didModify = True
> +
> + self.fp.write(zinfo.FileHeader(zip64))
> +
> + self._writing = True
> + return _ZipWriteFile(self, zinfo, zip64)
> +
> + def extract(self, member, path=None, pwd=None):
> + """Extract a member from the archive to the current working directory,
> + using its full name. Its file information is extracted as accurately
> + as possible. `member' may be a filename or a ZipInfo object. You can
> + specify a different directory using `path'.
> + """
> + if path is None:
> + path = os.getcwd()
> + else:
> + path = os.fspath(path)
> +
> + return self._extract_member(member, path, pwd)
> +
> + def extractall(self, path=None, members=None, pwd=None):
> + """Extract all members from the archive to the current working
> + directory. `path' specifies a different directory to extract to.
> + `members' is optional and must be a subset of the list returned
> + by namelist().
> + """
> + if members is None:
> + members = self.namelist()
> +
> + if path is None:
> + path = os.getcwd()
> +# else:
> +# path = os.fspath(path)
> +
> + for zipinfo in members:
> + self._extract_member(zipinfo, path, pwd)
> +
> + @classmethod
> + def _sanitize_windows_name(cls, arcname, pathsep):
> + """Replace bad characters and remove trailing dots from parts."""
> + table = cls._windows_illegal_name_trans_table
> + if not table:
> + illegal = ':<>|"?*'
> + table = str.maketrans(illegal, '_' * len(illegal))
> + cls._windows_illegal_name_trans_table = table
> + arcname = arcname.translate(table)
> + # remove trailing dots
> + arcname = (x.rstrip('.') for x in arcname.split(pathsep))
> + # rejoin, removing empty parts.
> + arcname = pathsep.join(x for x in arcname if x)
> + return arcname
> +
> + def _extract_member(self, member, targetpath, pwd):
> + """Extract the ZipInfo object 'member' to a physical
> + file on the path targetpath.
> + """
> + if not isinstance(member, ZipInfo):
> + member = self.getinfo(member)
> +
> + # build the destination pathname, replacing
> + # forward slashes to platform specific separators.
> + arcname = member.filename.replace('/', os.path.sep)
> +
> + if os.path.altsep:
> + arcname = arcname.replace(os.path.altsep, os.path.sep)
> + # interpret absolute pathname as relative, remove drive letter or
> + # UNC path, redundant separators, "." and ".." components.
> + arcname = os.path.splitdrive(arcname)[1]
> + invalid_path_parts = ('', os.path.curdir, os.path.pardir)
> + arcname = os.path.sep.join(x for x in arcname.split(os.path.sep)
> + if x not in invalid_path_parts)
> + if os.path.sep == '\\':
> + # filter illegal characters on Windows
> + arcname = self._sanitize_windows_name(arcname, os.path.sep)
> +
> + targetpath = os.path.join(targetpath, arcname)
> + targetpath = os.path.normpath(targetpath)
> +
> + # Create all upper directories if necessary.
> + upperdirs = os.path.dirname(targetpath)
> + if upperdirs and not os.path.exists(upperdirs):
> + os.makedirs(upperdirs)
> +
> + if member.is_dir():
> + if not os.path.isdir(targetpath):
> + os.mkdir(targetpath)
> + return targetpath
> +
> + with self.open(member, pwd=pwd) as source, \
> + open(targetpath, "wb") as target:
> + shutil.copyfileobj(source, target)
> +
> + return targetpath
> +
> + def _writecheck(self, zinfo):
> + """Check for errors before writing a file to the archive."""
> + if zinfo.filename in self.NameToInfo:
> + import warnings
> + warnings.warn('Duplicate name: %r' % zinfo.filename, stacklevel=3)
> + if self.mode not in ('w', 'x', 'a'):
> + raise ValueError("write() requires mode 'w', 'x', or 'a'")
> + if not self.fp:
> + raise ValueError(
> + "Attempt to write ZIP archive that was already closed")
> + _check_compression(zinfo.compress_type)
> + if not self._allowZip64:
> + requires_zip64 = None
> + if len(self.filelist) >= ZIP_FILECOUNT_LIMIT:
> + requires_zip64 = "Files count"
> + elif zinfo.file_size > ZIP64_LIMIT:
> + requires_zip64 = "Filesize"
> + elif zinfo.header_offset > ZIP64_LIMIT:
> + requires_zip64 = "Zipfile size"
> + if requires_zip64:
> + raise LargeZipFile(requires_zip64 +
> + " would require ZIP64 extensions")
> +
> + def write(self, filename, arcname=None, compress_type=None):
> + """Put the bytes from filename into the archive under the name
> + arcname."""
> + if not self.fp:
> + raise ValueError(
> + "Attempt to write to ZIP archive that was already closed")
> + if self._writing:
> + raise ValueError(
> + "Can't write to ZIP archive while an open writing handle exists"
> + )
> +
> + zinfo = ZipInfo.from_file(filename, arcname)
> +
> + if zinfo.is_dir():
> + zinfo.compress_size = 0
> + zinfo.CRC = 0
> + else:
> + if compress_type is not None:
> + zinfo.compress_type = compress_type
> + else:
> + zinfo.compress_type = self.compression
> +
> + if zinfo.is_dir():
> + with self._lock:
> + if self._seekable:
> + self.fp.seek(self.start_dir)
> + zinfo.header_offset = self.fp.tell() # Start of header bytes
> + if zinfo.compress_type == ZIP_LZMA:
> + # Compressed data includes an end-of-stream (EOS) marker
> + zinfo.flag_bits |= 0x02
> +
> + self._writecheck(zinfo)
> + self._didModify = True
> +
> + self.filelist.append(zinfo)
> + self.NameToInfo[zinfo.filename] = zinfo
> + self.fp.write(zinfo.FileHeader(False))
> + self.start_dir = self.fp.tell()
> + else:
> + with open(filename, "rb") as src, self.open(zinfo, 'w') as dest:
> + shutil.copyfileobj(src, dest, 1024*8)
> +
> + def writestr(self, zinfo_or_arcname, data, compress_type=None):
> + """Write a file into the archive. The contents is 'data', which
> + may be either a 'str' or a 'bytes' instance; if it is a 'str',
> + it is encoded as UTF-8 first.
> + 'zinfo_or_arcname' is either a ZipInfo instance or
> + the name of the file in the archive."""
> + if isinstance(data, str):
> + data = data.encode("utf-8")
> + if not isinstance(zinfo_or_arcname, ZipInfo):
> + zinfo = ZipInfo(filename=zinfo_or_arcname,
> + date_time=time.localtime(time.time())[:6])
> + zinfo.compress_type = self.compression
> + if zinfo.filename[-1] == '/':
> + zinfo.external_attr = 0o40775 << 16 # drwxrwxr-x
> + zinfo.external_attr |= 0x10 # MS-DOS directory flag
> + else:
> + zinfo.external_attr = 0o600 << 16 # ?rw-------
> + else:
> + zinfo = zinfo_or_arcname
> +
> + if not self.fp:
> + raise ValueError(
> + "Attempt to write to ZIP archive that was already closed")
> + if self._writing:
> + raise ValueError(
> + "Can't write to ZIP archive while an open writing handle exists."
> + )
> +
> + if compress_type is not None:
> + zinfo.compress_type = compress_type
> +
> + zinfo.file_size = len(data) # Uncompressed size
> + with self._lock:
> + with self.open(zinfo, mode='w') as dest:
> + dest.write(data)
> +
> + def __del__(self):
> + """Call the "close()" method in case the user forgot."""
> + self.close()
> +
> + def close(self):
> + """Close the file, and for mode 'w', 'x' and 'a' write the ending
> + records."""
> + if self.fp is None:
> + return
> +
> + if self._writing:
> + raise ValueError("Can't close the ZIP file while there is "
> + "an open writing handle on it. "
> + "Close the writing handle before closing the zip.")
> +
> + try:
> + if self.mode in ('w', 'x', 'a') and self._didModify: # write ending records
> + with self._lock:
> + if self._seekable:
> + self.fp.seek(self.start_dir)
> + self._write_end_record()
> + finally:
> + fp = self.fp
> + self.fp = None
> + self._fpclose(fp)
> +
> + def _write_end_record(self):
> + for zinfo in self.filelist: # write central directory
> + dt = zinfo.date_time
> + dosdate = (dt[0] - 1980) << 9 | dt[1] << 5 | dt[2]
> + dostime = dt[3] << 11 | dt[4] << 5 | (dt[5] // 2)
> + extra = []
> + if zinfo.file_size > ZIP64_LIMIT \
> + or zinfo.compress_size > ZIP64_LIMIT:
> + extra.append(zinfo.file_size)
> + extra.append(zinfo.compress_size)
> + file_size = 0xffffffff
> + compress_size = 0xffffffff
> + else:
> + file_size = zinfo.file_size
> + compress_size = zinfo.compress_size
> +
> + if zinfo.header_offset > ZIP64_LIMIT:
> + extra.append(zinfo.header_offset)
> + header_offset = 0xffffffff
> + else:
> + header_offset = zinfo.header_offset
> +
> + extra_data = zinfo.extra
> + min_version = 0
> + if extra:
> + # Append a ZIP64 field to the extra's
> + extra_data = _strip_extra(extra_data, (1,))
> + extra_data = struct.pack(
> + '<HH' + 'Q'*len(extra),
> + 1, 8*len(extra), *extra) + extra_data
> +
> + min_version = ZIP64_VERSION
> +
> + if zinfo.compress_type == ZIP_BZIP2:
> + min_version = max(BZIP2_VERSION, min_version)
> + elif zinfo.compress_type == ZIP_LZMA:
> + min_version = max(LZMA_VERSION, min_version)
> +
> + extract_version = max(min_version, zinfo.extract_version)
> + create_version = max(min_version, zinfo.create_version)
> + try:
> + filename, flag_bits = zinfo._encodeFilenameFlags()
> + centdir = struct.pack(structCentralDir,
> + stringCentralDir, create_version,
> + zinfo.create_system, extract_version, zinfo.reserved,
> + flag_bits, zinfo.compress_type, dostime, dosdate,
> + zinfo.CRC, compress_size, file_size,
> + len(filename), len(extra_data), len(zinfo.comment),
> + 0, zinfo.internal_attr, zinfo.external_attr,
> + header_offset)
> + except DeprecationWarning:
> + print((structCentralDir, stringCentralDir, create_version,
> + zinfo.create_system, extract_version, zinfo.reserved,
> + zinfo.flag_bits, zinfo.compress_type, dostime, dosdate,
> + zinfo.CRC, compress_size, file_size,
> + len(zinfo.filename), len(extra_data), len(zinfo.comment),
> + 0, zinfo.internal_attr, zinfo.external_attr,
> + header_offset), file=sys.stderr)
> + raise
> + self.fp.write(centdir)
> + self.fp.write(filename)
> + self.fp.write(extra_data)
> + self.fp.write(zinfo.comment)
> +
> + pos2 = self.fp.tell()
> + # Write end-of-zip-archive record
> + centDirCount = len(self.filelist)
> + centDirSize = pos2 - self.start_dir
> + centDirOffset = self.start_dir
> + requires_zip64 = None
> + if centDirCount > ZIP_FILECOUNT_LIMIT:
> + requires_zip64 = "Files count"
> + elif centDirOffset > ZIP64_LIMIT:
> + requires_zip64 = "Central directory offset"
> + elif centDirSize > ZIP64_LIMIT:
> + requires_zip64 = "Central directory size"
> + if requires_zip64:
> + # Need to write the ZIP64 end-of-archive records
> + if not self._allowZip64:
> + raise LargeZipFile(requires_zip64 +
> + " would require ZIP64 extensions")
> + zip64endrec = struct.pack(
> + structEndArchive64, stringEndArchive64,
> + 44, 45, 45, 0, 0, centDirCount, centDirCount,
> + centDirSize, centDirOffset)
> + self.fp.write(zip64endrec)
> +
> + zip64locrec = struct.pack(
> + structEndArchive64Locator,
> + stringEndArchive64Locator, 0, pos2, 1)
> + self.fp.write(zip64locrec)
> + centDirCount = min(centDirCount, 0xFFFF)
> + centDirSize = min(centDirSize, 0xFFFFFFFF)
> + centDirOffset = min(centDirOffset, 0xFFFFFFFF)
> +
> + endrec = struct.pack(structEndArchive, stringEndArchive,
> + 0, 0, centDirCount, centDirCount,
> + centDirSize, centDirOffset, len(self._comment))
> + self.fp.write(endrec)
> + self.fp.write(self._comment)
> + self.fp.flush()
> +
> + def _fpclose(self, fp):
> + assert self._fileRefCnt > 0
> + self._fileRefCnt -= 1
> + if not self._fileRefCnt and not self._filePassed:
> + fp.close()
> +
> +
> +class PyZipFile(ZipFile):
> + """Class to create ZIP archives with Python library files and packages."""
> +
> + def __init__(self, file, mode="r", compression=ZIP_STORED,
> + allowZip64=True, optimize=-1):
> + ZipFile.__init__(self, file, mode=mode, compression=compression,
> + allowZip64=allowZip64)
> + self._optimize = optimize
> +
> + def writepy(self, pathname, basename="", filterfunc=None):
> + """Add all files from "pathname" to the ZIP archive.
> +
> + If pathname is a package directory, search the directory and
> + all package subdirectories recursively for all *.py and enter
> + the modules into the archive. If pathname is a plain
> + directory, listdir *.py and enter all modules. Else, pathname
> + must be a Python *.py file and the module will be put into the
> + archive. Added modules are always module.pyc.
> + This method will compile the module.py into module.pyc if
> + necessary.
> + If filterfunc is given, it is called with every pathname; when it
> + returns a false value, the file or directory is skipped.
> + """
> + pathname = os.fspath(pathname)
> + if filterfunc and not filterfunc(pathname):
> + if self.debug:
> + label = 'path' if os.path.isdir(pathname) else 'file'
> + print('%s %r skipped by filterfunc' % (label, pathname))
> + return
> + dir, name = os.path.split(pathname)
> + if os.path.isdir(pathname):
> + initname = os.path.join(pathname, "__init__.py")
> + if os.path.isfile(initname):
> + # This is a package directory, add it
> + if basename:
> + basename = "%s/%s" % (basename, name)
> + else:
> + basename = name
> + if self.debug:
> + print("Adding package in", pathname, "as", basename)
> + fname, arcname = self._get_codename(initname[0:-3], basename)
> + if self.debug:
> + print("Adding", arcname)
> + self.write(fname, arcname)
> + dirlist = os.listdir(pathname)
> + dirlist.remove("__init__.py")
> + # Add all *.py files and package subdirectories
> + for filename in dirlist:
> + path = os.path.join(pathname, filename)
> + root, ext = os.path.splitext(filename)
> + if os.path.isdir(path):
> + if os.path.isfile(os.path.join(path, "__init__.py")):
> + # This is a package directory, add it
> + self.writepy(path, basename,
> + filterfunc=filterfunc) # Recursive call
> + elif ext == ".py":
> + if filterfunc and not filterfunc(path):
> + if self.debug:
> + print('file %r skipped by filterfunc' % path)
> + continue
> + fname, arcname = self._get_codename(path[0:-3],
> + basename)
> + if self.debug:
> + print("Adding", arcname)
> + self.write(fname, arcname)
> + else:
> + # This is NOT a package directory, add its files at top level
> + if self.debug:
> + print("Adding files from directory", pathname)
> + for filename in os.listdir(pathname):
> + path = os.path.join(pathname, filename)
> + root, ext = os.path.splitext(filename)
> + if ext == ".py":
> + if filterfunc and not filterfunc(path):
> + if self.debug:
> + print('file %r skipped by filterfunc' % path)
> + continue
> + fname, arcname = self._get_codename(path[0:-3],
> + basename)
> + if self.debug:
> + print("Adding", arcname)
> + self.write(fname, arcname)
> + else:
> + if pathname[-3:] != ".py":
> + raise RuntimeError(
> + 'Files added with writepy() must end with ".py"')
> + fname, arcname = self._get_codename(pathname[0:-3], basename)
> + if self.debug:
> + print("Adding file", arcname)
> + self.write(fname, arcname)
> +
> + def _get_codename(self, pathname, basename):
> + """Return (filename, archivename) for the path.
> +
> + Given a module name path, return the correct file path and
> + archive name, compiling if necessary. For example, given
> + /python/lib/string, return (/python/lib/string.pyc, string).
> + """
> + def _compile(file, optimize=-1):
> + import py_compile
> + if self.debug:
> + print("Compiling", file)
> + try:
> + py_compile.compile(file, doraise=True, optimize=optimize)
> + except py_compile.PyCompileError as err:
> + print(err.msg)
> + return False
> + return True
> +
> + file_py = pathname + ".py"
> + file_pyc = pathname + ".pyc"
> + pycache_opt0 = importlib.util.cache_from_source(file_py, optimization='')
> + pycache_opt1 = importlib.util.cache_from_source(file_py, optimization=1)
> + pycache_opt2 = importlib.util.cache_from_source(file_py, optimization=2)
> + if self._optimize == -1:
> + # legacy mode: use whatever file is present
> + if (os.path.isfile(file_pyc) and
> + os.stat(file_pyc).st_mtime >= os.stat(file_py).st_mtime):
> + # Use .pyc file.
> + arcname = fname = file_pyc
> + elif (os.path.isfile(pycache_opt0) and
> + os.stat(pycache_opt0).st_mtime >= os.stat(file_py).st_mtime):
> + # Use the __pycache__/*.pyc file, but write it to the legacy pyc
> + # file name in the archive.
> + fname = pycache_opt0
> + arcname = file_pyc
> + elif (os.path.isfile(pycache_opt1) and
> + os.stat(pycache_opt1).st_mtime >= os.stat(file_py).st_mtime):
> + # Use the __pycache__/*.pyc file, but write it to the legacy pyc
> + # file name in the archive.
> + fname = pycache_opt1
> + arcname = file_pyc
> + elif (os.path.isfile(pycache_opt2) and
> + os.stat(pycache_opt2).st_mtime >= os.stat(file_py).st_mtime):
> + # Use the __pycache__/*.pyc file, but write it to the legacy pyc
> + # file name in the archive.
> + fname = pycache_opt2
> + arcname = file_pyc
> + else:
> + # Compile py into PEP 3147 pyc file.
> + if _compile(file_py):
> + if sys.flags.optimize == 0:
> + fname = pycache_opt0
> + elif sys.flags.optimize == 1:
> + fname = pycache_opt1
> + else:
> + fname = pycache_opt2
> + arcname = file_pyc
> + else:
> + fname = arcname = file_py
> + else:
> + # new mode: use given optimization level
> + if self._optimize == 0:
> + fname = pycache_opt0
> + arcname = file_pyc
> + else:
> + arcname = file_pyc
> + if self._optimize == 1:
> + fname = pycache_opt1
> + elif self._optimize == 2:
> + fname = pycache_opt2
> + else:
> + msg = "invalid value for 'optimize': {!r}".format(self._optimize)
> + raise ValueError(msg)
> + if not (os.path.isfile(fname) and
> + os.stat(fname).st_mtime >= os.stat(file_py).st_mtime):
> + if not _compile(file_py, optimize=self._optimize):
> + fname = arcname = file_py
> + archivename = os.path.split(arcname)[1]
> + if basename:
> + archivename = "%s/%s" % (basename, archivename)
> + return (fname, archivename)
> +
> +
> +def main(args = None):
> + import textwrap
> + USAGE=textwrap.dedent("""\
> + Usage:
> + zipfile.py -l zipfile.zip # Show listing of a zipfile
> + zipfile.py -t zipfile.zip # Test if a zipfile is valid
> + zipfile.py -e zipfile.zip target # Extract zipfile into target dir
> + zipfile.py -c zipfile.zip src ... # Create zipfile from sources
> + """)
> + if args is None:
> + args = sys.argv[1:]
> +
> + if not args or args[0] not in ('-l', '-c', '-e', '-t'):
> + print(USAGE)
> + sys.exit(1)
> +
> + if args[0] == '-l':
> + if len(args) != 2:
> + print(USAGE)
> + sys.exit(1)
> + with ZipFile(args[1], 'r') as zf:
> + zf.printdir()
> +
> + elif args[0] == '-t':
> + if len(args) != 2:
> + print(USAGE)
> + sys.exit(1)
> + with ZipFile(args[1], 'r') as zf:
> + badfile = zf.testzip()
> + if badfile:
> + print("The following enclosed file is corrupted: {!r}".format(badfile))
> + print("Done testing")
> +
> + elif args[0] == '-e':
> + if len(args) != 3:
> + print(USAGE)
> + sys.exit(1)
> +
> + with ZipFile(args[1], 'r') as zf:
> + zf.extractall(args[2])
> +
> + elif args[0] == '-c':
> + if len(args) < 3:
> + print(USAGE)
> + sys.exit(1)
> +
> + def addToZip(zf, path, zippath):
> + if os.path.isfile(path):
> + zf.write(path, zippath, ZIP_DEFLATED)
> + elif os.path.isdir(path):
> + if zippath:
> + zf.write(path, zippath)
> + for nm in os.listdir(path):
> + addToZip(zf,
> + os.path.join(path, nm), os.path.join(zippath, nm))
> + # else: ignore
> +
> + with ZipFile(args[1], 'w') as zf:
> + for path in args[2:]:
> + zippath = os.path.basename(path)
> + if not zippath:
> + zippath = os.path.basename(os.path.dirname(path))
> + if zippath in ('', os.curdir, os.pardir):
> + zippath = ''
> + addToZip(zf, path, zippath)
> +
> +if __name__ == "__main__":
> + main()
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
> new file mode 100644
> index 00000000..cd6b26db
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
> @@ -0,0 +1,161 @@
> +/*
> + BLAKE2 reference source code package - reference C implementations
> +
> + Copyright 2012, Samuel Neves <sneves@dei.uc.pt>. You may use this under the
> + terms of the CC0, the OpenSSL Licence, or the Apache Public License 2.0, at
> + your option. The terms of these licenses can be found at:
> +
> + - CC0 1.0 Universal : http://creativecommons.org/publicdomain/zero/1.0
> + - OpenSSL license : https://www.openssl.org/source/license.html
> + - Apache 2.0 : http://www.apache.org/licenses/LICENSE-2.0
> +
> + More information about the BLAKE2 hash function can be found at
> + https://blake2.net.
> +*/
> +#pragma once
> +#ifndef __BLAKE2_H__
> +#define __BLAKE2_H__
> +
> +#include <stddef.h>
> +#include <stdint.h>
> +
> +#ifdef BLAKE2_NO_INLINE
> +#define BLAKE2_LOCAL_INLINE(type) static type
> +#endif
> +
> +#ifndef BLAKE2_LOCAL_INLINE
> +#define BLAKE2_LOCAL_INLINE(type) static inline type
> +#endif
> +
> +#if defined(__cplusplus)
> +extern "C" {
> +#endif
> +
> + enum blake2s_constant
> + {
> + BLAKE2S_BLOCKBYTES = 64,
> + BLAKE2S_OUTBYTES = 32,
> + BLAKE2S_KEYBYTES = 32,
> + BLAKE2S_SALTBYTES = 8,
> + BLAKE2S_PERSONALBYTES = 8
> + };
> +
> + enum blake2b_constant
> + {
> + BLAKE2B_BLOCKBYTES = 128,
> + BLAKE2B_OUTBYTES = 64,
> + BLAKE2B_KEYBYTES = 64,
> + BLAKE2B_SALTBYTES = 16,
> + BLAKE2B_PERSONALBYTES = 16
> + };
> +
> + typedef struct __blake2s_state
> + {
> + uint32_t h[8];
> + uint32_t t[2];
> + uint32_t f[2];
> + uint8_t buf[2 * BLAKE2S_BLOCKBYTES];
> + size_t buflen;
> + uint8_t last_node;
> + } blake2s_state;
> +
> + typedef struct __blake2b_state
> + {
> + uint64_t h[8];
> + uint64_t t[2];
> + uint64_t f[2];
> + uint8_t buf[2 * BLAKE2B_BLOCKBYTES];
> + size_t buflen;
> + uint8_t last_node;
> + } blake2b_state;
> +
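> + /* Parallel (tree) hashing states: BLAKE2sp hashes over 8 leaf lanes and
> + BLAKE2bp over 4; S holds the leaf states and R the root state. */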
> + typedef struct __blake2sp_state
> + {
> + blake2s_state S[8][1];
> + blake2s_state R[1];
> + uint8_t buf[8 * BLAKE2S_BLOCKBYTES];
> + size_t buflen;
> + } blake2sp_state;
> +
> + typedef struct __blake2bp_state
> + {
> + blake2b_state S[4][1];
> + blake2b_state R[1];
> + uint8_t buf[4 * BLAKE2B_BLOCKBYTES];
> + size_t buflen;
> + } blake2bp_state;
> +
> +
> +#pragma pack(push, 1)
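> + /* Parameter blocks are byte-packed; the trailing numbers give each
> + field's cumulative byte offset within the block. */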
> + typedef struct __blake2s_param
> + {
> + uint8_t digest_length; /* 1 */
> + uint8_t key_length; /* 2 */
> + uint8_t fanout; /* 3 */
> + uint8_t depth; /* 4 */
> + uint32_t leaf_length; /* 8 */
> + uint8_t node_offset[6]; /* 14 */
> + uint8_t node_depth; /* 15 */
> + uint8_t inner_length; /* 16 */
> + /* uint8_t reserved[0]; */
> + uint8_t salt[BLAKE2S_SALTBYTES]; /* 24 */
> + uint8_t personal[BLAKE2S_PERSONALBYTES]; /* 32 */
> + } blake2s_param;
> +
> + typedef struct __blake2b_param
> + {
> + uint8_t digest_length; /* 1 */
> + uint8_t key_length; /* 2 */
> + uint8_t fanout; /* 3 */
> + uint8_t depth; /* 4 */
> + uint32_t leaf_length; /* 8 */
> + uint64_t node_offset; /* 16 */
> + uint8_t node_depth; /* 17 */
> + uint8_t inner_length; /* 18 */
> + uint8_t reserved[14]; /* 32 */
> + uint8_t salt[BLAKE2B_SALTBYTES]; /* 48 */
> + uint8_t personal[BLAKE2B_PERSONALBYTES]; /* 64 */
> + } blake2b_param;
> +#pragma pack(pop)
> +
> + /* Streaming API */
> + int blake2s_init( blake2s_state *S, const uint8_t outlen );
> + int blake2s_init_key( blake2s_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
> + int blake2s_init_param( blake2s_state *S, const blake2s_param *P );
> + int blake2s_update( blake2s_state *S, const uint8_t *in, uint64_t inlen );
> + int blake2s_final( blake2s_state *S, uint8_t *out, uint8_t outlen );
> +
> + int blake2b_init( blake2b_state *S, const uint8_t outlen );
> + int blake2b_init_key( blake2b_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
> + int blake2b_init_param( blake2b_state *S, const blake2b_param *P );
> + int blake2b_update( blake2b_state *S, const uint8_t *in, uint64_t inlen );
> + int blake2b_final( blake2b_state *S, uint8_t *out, uint8_t outlen );
> +
> + int blake2sp_init( blake2sp_state *S, const uint8_t outlen );
> + int blake2sp_init_key( blake2sp_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
> + int blake2sp_update( blake2sp_state *S, const uint8_t *in, uint64_t inlen );
> + int blake2sp_final( blake2sp_state *S, uint8_t *out, uint8_t outlen );
> +
> + int blake2bp_init( blake2bp_state *S, const uint8_t outlen );
> + int blake2bp_init_key( blake2bp_state *S, const uint8_t outlen, const void *key, const uint8_t keylen );
> + int blake2bp_update( blake2bp_state *S, const uint8_t *in, uint64_t inlen );
> + int blake2bp_final( blake2bp_state *S, uint8_t *out, uint8_t outlen );
> +
> + /* Simple API */
> + int blake2s( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
> + int blake2b( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
> +
> + int blake2sp( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
> + int blake2bp( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen );
> +
> + static int blake2( uint8_t *out, const void *in, const void *key, const uint8_t outlen, const uint64_t inlen, uint8_t keylen )
> + {
> + return blake2b( out, in, key, outlen, inlen, keylen );
> + }
> +
> +#if defined(__cplusplus)
> +}
> +#endif
> +
> +#endif
> +
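
For context, these declarations back CPython's _blake2 extension, which hashlib
exposes as blake2b/blake2s. A minimal host-side sketch of the equivalent
Python-level usage (assuming a standard hashlib build; the UEFI port is expected
to expose the same interface, but that is not verified here):

    import hashlib

    # Streaming use mirrors blake2b_init / blake2b_update / blake2b_final
    h = hashlib.blake2b(digest_size=32)      # digest_size <= BLAKE2B_OUTBYTES (64)
    h.update(b"hello ")
    h.update(b"world")
    print(h.hexdigest())

    # Keyed hashing corresponds to blake2b_init_key (key <= BLAKE2B_KEYBYTES bytes)
    mac = hashlib.blake2b(b"message", key=b"secret", digest_size=16).hexdigest()
    print(mac)
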
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
> new file mode 100644
> index 00000000..ea6b3811
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
> @@ -0,0 +1,5623 @@
> +/*
> + ToDo:
> +
> + Get rid of the checker (and also the converters) field in PyCFuncPtrObject and
> + StgDictObject, and replace them by slot functions in StgDictObject.
> +
> + think about a buffer-like object (memory? bytes?)
> +
> + Should POINTER(c_char) and POINTER(c_wchar) have a .value property?
> + What about c_char and c_wchar arrays then?
> +
> + Add from_mmap, from_file, from_string metaclass methods.
> +
> + Maybe we can get away with from_file (calls read) and with a from_buffer
> + method?
> +
> + And what about the to_mmap, to_file, to_str(?) methods? They would clobber
> + the namespace, probably. So, functions instead? And we already have memmove...
> +*/
> +
> +/*
> +
> +Name methods, members, getsets
> +==============================================================================
> +
> +PyCStructType_Type __new__(), from_address(), __mul__(), from_param()
> +UnionType_Type __new__(), from_address(), __mul__(), from_param()
> +PyCPointerType_Type __new__(), from_address(), __mul__(), from_param(), set_type()
> +PyCArrayType_Type __new__(), from_address(), __mul__(), from_param()
> +PyCSimpleType_Type __new__(), from_address(), __mul__(), from_param()
> +
> +PyCData_Type
> + Struct_Type __new__(), __init__()
> + PyCPointer_Type __new__(), __init__(), _as_parameter_, contents
> + PyCArray_Type __new__(), __init__(), _as_parameter_, __get/setitem__(), __len__()
> + Simple_Type __new__(), __init__(), _as_parameter_
> +
> +PyCField_Type
> +PyCStgDict_Type
> +
> +==============================================================================
> +
> +class methods
> +-------------
> +
> +It has some similarity to the byref() construct compared to pointer()
> +from_address(addr)
> + - construct an instance from a given memory block (sharing this memory block)
> +
> +from_param(obj)
> + - typecheck and convert a Python object into a C function call parameter
> + The result may be an instance of the type, or an integer or tuple
> + (typecode, value[, obj])
> +
> +instance methods/properties
> +---------------------------
> +
> +_as_parameter_
> + - convert self into a C function call parameter
> + This is either an integer, or a 3-tuple (typecode, value, obj)
> +
> +functions
> +---------
> +
> +sizeof(cdata)
> + - return the number of bytes the buffer contains
> +
> +sizeof(ctype)
> + - return the number of bytes the buffer of an instance would contain
> +
> +byref(cdata)
> +
> +addressof(cdata)
> +
> +pointer(cdata)
> +
> +POINTER(ctype)
> +
> +bytes(cdata)
> + - return the buffer contents as a sequence of bytes (which is currently a string)
> +
> +*/
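
The helpers listed in this comment map one-to-one onto the Python-level ctypes
API; a short sketch of how they fit together (host-side, assuming a standard
ctypes build):

    from ctypes import Structure, c_int, sizeof, addressof, byref, pointer, POINTER

    class POINT(Structure):
        _fields_ = [("x", c_int), ("y", c_int)]

    p = POINT(1, 2)
    print(sizeof(POINT), sizeof(p))     # size of the type / of an instance buffer
    print(addressof(p))                 # integer address of the underlying buffer

    alias = POINT.from_address(addressof(p))   # shares the memory block, no copy
    alias.x = 99
    print(p.x)                          # 99 - both objects use the same buffer

    pp = pointer(p)                     # instance of POINTER(POINT)
    print(pp.contents.y)                # 2
    ref = byref(p)                      # lightweight reference for call parameters
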
> +
> +/*
> + * PyCStgDict_Type
> + * PyCStructType_Type
> + * UnionType_Type
> + * PyCPointerType_Type
> + * PyCArrayType_Type
> + * PyCSimpleType_Type
> + *
> + * PyCData_Type
> + * Struct_Type
> + * Union_Type
> + * PyCArray_Type
> + * Simple_Type
> + * PyCPointer_Type
> + * PyCField_Type
> + *
> + */
> +
> +#define PY_SSIZE_T_CLEAN
> +
> +#include "Python.h"
> +#include "structmember.h"
> +
> +#include <ffi.h>
> +#ifdef MS_WIN32
> +#include <windows.h>
> +#include <malloc.h>
> +#ifndef IS_INTRESOURCE
> +#define IS_INTRESOURCE(x) (((size_t)(x) >> 16) == 0)
> +#endif
> +#else
> +#include "ctypes_dlfcn.h"
> +#endif
> +#include "ctypes.h"
> +
> +PyObject *PyExc_ArgError;
> +
> +/* This dict maps ctypes types to POINTER types */
> +PyObject *_ctypes_ptrtype_cache;
> +
> +static PyTypeObject Simple_Type;
> +
> +/* a callable object used for unpickling */
> +static PyObject *_unpickle;
> +
> +
> +
> +/****************************************************************/
> +
> +typedef struct {
> + PyObject_HEAD
> + PyObject *key;
> + PyObject *dict;
> +} DictRemoverObject;
> +
> +static void
> +_DictRemover_dealloc(PyObject *myself)
> +{
> + DictRemoverObject *self = (DictRemoverObject *)myself;
> + Py_XDECREF(self->key);
> + Py_XDECREF(self->dict);
> + Py_TYPE(self)->tp_free(myself);
> +}
> +
> +static PyObject *
> +_DictRemover_call(PyObject *myself, PyObject *args, PyObject *kw)
> +{
> + DictRemoverObject *self = (DictRemoverObject *)myself;
> + if (self->key && self->dict) {
> + if (-1 == PyDict_DelItem(self->dict, self->key))
> + /* XXX Error context */
> + PyErr_WriteUnraisable(Py_None);
> + Py_CLEAR(self->key);
> + Py_CLEAR(self->dict);
> + }
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +static PyTypeObject DictRemover_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.DictRemover", /* tp_name */
> + sizeof(DictRemoverObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + _DictRemover_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + _DictRemover_call, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> +/* XXX should participate in GC? */
> + Py_TPFLAGS_DEFAULT, /* tp_flags */
> + "deletes a key from a dictionary", /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + 0, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +int
> +PyDict_SetItemProxy(PyObject *dict, PyObject *key, PyObject *item)
> +{
> + PyObject *obj;
> + DictRemoverObject *remover;
> + PyObject *proxy;
> + int result;
> +
> + obj = PyObject_CallObject((PyObject *)&DictRemover_Type, NULL);
> + if (obj == NULL)
> + return -1;
> +
> + remover = (DictRemoverObject *)obj;
> + assert(remover->key == NULL);
> + assert(remover->dict == NULL);
> + Py_INCREF(key);
> + remover->key = key;
> + Py_INCREF(dict);
> + remover->dict = dict;
> +
> + proxy = PyWeakref_NewProxy(item, obj);
> + Py_DECREF(obj);
> + if (proxy == NULL)
> + return -1;
> +
> + result = PyDict_SetItem(dict, key, proxy);
> + Py_DECREF(proxy);
> + return result;
> +}
> +
> +PyObject *
> +PyDict_GetItemProxy(PyObject *dict, PyObject *key)
> +{
> + PyObject *result;
> + PyObject *item = PyDict_GetItem(dict, key);
> +
> + if (item == NULL)
> + return NULL;
> + if (!PyWeakref_CheckProxy(item))
> + return item;
> + result = PyWeakref_GET_OBJECT(item);
> + if (result == Py_None)
> + return NULL;
> + return result;
> +}
> +
> +/******************************************************************/
> +
> +/*
> + Allocate a memory block for a pep3118 format string, filled with
> + a suitable PEP 3118 type code corresponding to the given ctypes
> + type. Returns NULL on failure, with the error indicator set.
> +
> + This produces type codes in the standard size mode (cf. struct module),
> + since the endianness may need to be swapped to a non-native one
> + later on.
> + */
> +static char *
> +_ctypes_alloc_format_string_for_type(char code, int big_endian)
> +{
> + char *result;
> + char pep_code = '\0';
> +
> + switch (code) {
> +#if SIZEOF_INT == 2
> + case 'i': pep_code = 'h'; break;
> + case 'I': pep_code = 'H'; break;
> +#elif SIZEOF_INT == 4
> + case 'i': pep_code = 'i'; break;
> + case 'I': pep_code = 'I'; break;
> +#elif SIZEOF_INT == 8
> + case 'i': pep_code = 'q'; break;
> + case 'I': pep_code = 'Q'; break;
> +#else
> +# error SIZEOF_INT has an unexpected value
> +#endif /* SIZEOF_INT */
> +#if SIZEOF_LONG == 4
> + case 'l': pep_code = 'l'; break;
> + case 'L': pep_code = 'L'; break;
> +#elif SIZEOF_LONG == 8
> + case 'l': pep_code = 'q'; break;
> + case 'L': pep_code = 'Q'; break;
> +#else
> +# error SIZEOF_LONG has an unexpected value
> +#endif /* SIZEOF_LONG */
> +#if SIZEOF__BOOL == 1
> + case '?': pep_code = '?'; break;
> +#elif SIZEOF__BOOL == 2
> + case '?': pep_code = 'H'; break;
> +#elif SIZEOF__BOOL == 4
> + case '?': pep_code = 'L'; break;
> +#elif SIZEOF__BOOL == 8
> + case '?': pep_code = 'Q'; break;
> +#else
> +# error SIZEOF__BOOL has an unexpected value
> +#endif /* SIZEOF__BOOL */
> + default:
> + /* The standard-size code is the same as the ctypes one */
> + pep_code = code;
> + break;
> + }
> +
> + result = PyMem_Malloc(3);
> + if (result == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + result[0] = big_endian ? '>' : '<';
> + result[1] = pep_code;
> + result[2] = '\0';
> + return result;
> +}
> +
> +/*
> + Allocate a memory block for a pep3118 format string, copy prefix (if
> + non-null) and suffix into it. Returns NULL on failure, with the error
> + indicator set. If called with a suffix of NULL the error indicator must
> + already be set.
> + */
> +char *
> +_ctypes_alloc_format_string(const char *prefix, const char *suffix)
> +{
> + size_t len;
> + char *result;
> +
> + if (suffix == NULL) {
> + assert(PyErr_Occurred());
> + return NULL;
> + }
> + len = strlen(suffix);
> + if (prefix)
> + len += strlen(prefix);
> + result = PyMem_Malloc(len + 1);
> + if (result == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + if (prefix)
> + strcpy(result, prefix);
> + else
> + result[0] = '\0';
> + strcat(result, suffix);
> + return result;
> +}
> +
> +/*
> + Allocate a memory block for a pep3118 format string, adding
> + the given prefix (if non-null), an additional shape prefix, and a suffix.
> + Returns NULL on failure, with the error indicator set. If called with
> + a suffix of NULL the error indicator must already be set.
> + */
> +char *
> +_ctypes_alloc_format_string_with_shape(int ndim, const Py_ssize_t *shape,
> + const char *prefix, const char *suffix)
> +{
> + char *new_prefix;
> + char *result;
> + char buf[32];
> + Py_ssize_t prefix_len;
> + int k;
> +
> + prefix_len = 32 * ndim + 3;
> + if (prefix)
> + prefix_len += strlen(prefix);
> + new_prefix = PyMem_Malloc(prefix_len);
> + if (new_prefix == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + new_prefix[0] = '\0';
> + if (prefix)
> + strcpy(new_prefix, prefix);
> + if (ndim > 0) {
> + /* Add the prefix "(shape[0],shape[1],...,shape[ndim-1])" */
> + strcat(new_prefix, "(");
> + for (k = 0; k < ndim; ++k) {
> + if (k < ndim-1) {
> + sprintf(buf, "%"PY_FORMAT_SIZE_T"d,", shape[k]);
> + } else {
> + sprintf(buf, "%"PY_FORMAT_SIZE_T"d)", shape[k]);
> + }
> + strcat(new_prefix, buf);
> + }
> + }
> + result = _ctypes_alloc_format_string(new_prefix, suffix);
> + PyMem_Free(new_prefix);
> + return result;
> +}
> +
> +/*
> + PyCStructType_Type - a meta type/class. Creating a new class using this one as
> + __metaclass__ will call the constructor StructUnionType_new. It replaces the
> + tp_dict member with a new instance of StgDict, and initializes the C
> + accessible fields somehow.
> +*/
> +
> +static PyCArgObject *
> +StructUnionType_paramfunc(CDataObject *self)
> +{
> + PyCArgObject *parg;
> + StgDictObject *stgdict;
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = 'V';
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for structure/union instances */
> + parg->pffi_type = &stgdict->ffi_type_pointer;
> + /* For structure parameters (by value), parg->value doesn't contain the structure
> + data itself, instead parg->value.p *points* to the structure's data
> + See also _ctypes.c, function _call_function_pointer().
> + */
> + parg->value.p = self->b_ptr;
> + parg->size = self->b_size;
> + Py_INCREF(self);
> + parg->obj = (PyObject *)self;
> + return parg;
> +}
> +
> +static PyObject *
> +StructUnionType_new(PyTypeObject *type, PyObject *args, PyObject *kwds, int isStruct)
> +{
> + PyTypeObject *result;
> + PyObject *fields;
> + StgDictObject *dict;
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
> + if (!result)
> + return NULL;
> +
> + /* keep this for bw compatibility */
> + if (PyDict_GetItemString(result->tp_dict, "_abstract_"))
> + return (PyObject *)result;
> +
> + dict = (StgDictObject *)PyObject_CallObject((PyObject *)&PyCStgDict_Type, NULL);
> + if (!dict) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + /* replace the class dict by our updated stgdict, which holds info
> + about storage requirements of the instances */
> + if (-1 == PyDict_Update((PyObject *)dict, result->tp_dict)) {
> + Py_DECREF(result);
> + Py_DECREF((PyObject *)dict);
> + return NULL;
> + }
> + Py_SETREF(result->tp_dict, (PyObject *)dict);
> + dict->format = _ctypes_alloc_format_string(NULL, "B");
> + if (dict->format == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + dict->paramfunc = StructUnionType_paramfunc;
> +
> + fields = PyDict_GetItemString((PyObject *)dict, "_fields_");
> + if (!fields) {
> + StgDictObject *basedict = PyType_stgdict((PyObject *)result->tp_base);
> +
> + if (basedict == NULL)
> + return (PyObject *)result;
> + /* copy base dict */
> + if (-1 == PyCStgDict_clone(dict, basedict)) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + dict->flags &= ~DICTFLAG_FINAL; /* clear the 'final' flag in the subclass dict */
> + basedict->flags |= DICTFLAG_FINAL; /* set the 'final' flag in the baseclass dict */
> + return (PyObject *)result;
> + }
> +
> + if (-1 == PyObject_SetAttrString((PyObject *)result, "_fields_", fields)) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + return (PyObject *)result;
> +}
> +
> +static PyObject *
> +PyCStructType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + return StructUnionType_new(type, args, kwds, 1);
> +}
> +
> +static PyObject *
> +UnionType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + return StructUnionType_new(type, args, kwds, 0);
> +}
> +
> +static const char from_address_doc[] =
> +"C.from_address(integer) -> C instance\naccess a C instance at the specified address";
> +
> +static PyObject *
> +CDataType_from_address(PyObject *type, PyObject *value)
> +{
> + void *buf;
> + if (!PyLong_Check(value)) {
> + PyErr_SetString(PyExc_TypeError,
> + "integer expected");
> + return NULL;
> + }
> + buf = (void *)PyLong_AsVoidPtr(value);
> + if (PyErr_Occurred())
> + return NULL;
> + return PyCData_AtAddress(type, buf);
> +}
> +
> +static const char from_buffer_doc[] =
> +"C.from_buffer(object, offset=0) -> C instance\ncreate a C instance from a writeable buffer";
> +
> +static int
> +KeepRef(CDataObject *target, Py_ssize_t index, PyObject *keep);
> +
> +static PyObject *
> +CDataType_from_buffer(PyObject *type, PyObject *args)
> +{
> + PyObject *obj;
> + PyObject *mv;
> + PyObject *result;
> + Py_buffer *buffer;
> + Py_ssize_t offset = 0;
> +
> + StgDictObject *dict = PyType_stgdict(type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError, "abstract class");
> + return NULL;
> + }
> +
> + if (!PyArg_ParseTuple(args, "O|n:from_buffer", &obj, &offset))
> + return NULL;
> +
> + mv = PyMemoryView_FromObject(obj);
> + if (mv == NULL)
> + return NULL;
> +
> + buffer = PyMemoryView_GET_BUFFER(mv);
> +
> + if (buffer->readonly) {
> + PyErr_SetString(PyExc_TypeError,
> + "underlying buffer is not writable");
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + if (!PyBuffer_IsContiguous(buffer, 'C')) {
> + PyErr_SetString(PyExc_TypeError,
> + "underlying buffer is not C contiguous");
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + if (offset < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "offset cannot be negative");
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + if (dict->size > buffer->len - offset) {
> + PyErr_Format(PyExc_ValueError,
> + "Buffer size too small "
> + "(%zd instead of at least %zd bytes)",
> + buffer->len, dict->size + offset);
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + result = PyCData_AtAddress(type, (char *)buffer->buf + offset);
> + if (result == NULL) {
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + if (-1 == KeepRef((CDataObject *)result, -1, mv)) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + return result;
> +}
> +
> +static const char from_buffer_copy_doc[] =
> +"C.from_buffer_copy(object, offset=0) -> C instance\ncreate a C instance from a readable buffer";
> +
> +static PyObject *
> +GenericPyCData_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
> +
> +static PyObject *
> +CDataType_from_buffer_copy(PyObject *type, PyObject *args)
> +{
> + Py_buffer buffer;
> + Py_ssize_t offset = 0;
> + PyObject *result;
> + StgDictObject *dict = PyType_stgdict(type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError, "abstract class");
> + return NULL;
> + }
> +
> + if (!PyArg_ParseTuple(args, "y*|n:from_buffer_copy", &buffer, &offset))
> + return NULL;
> +
> + if (offset < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "offset cannot be negative");
> + PyBuffer_Release(&buffer);
> + return NULL;
> + }
> +
> + if (dict->size > buffer.len - offset) {
> + PyErr_Format(PyExc_ValueError,
> + "Buffer size too small (%zd instead of at least %zd bytes)",
> + buffer.len, dict->size + offset);
> + PyBuffer_Release(&buffer);
> + return NULL;
> + }
> +
> + result = GenericPyCData_new((PyTypeObject *)type, NULL, NULL);
> + if (result != NULL) {
> + memcpy(((CDataObject *)result)->b_ptr,
> + (char *)buffer.buf + offset, dict->size);
> + }
> + PyBuffer_Release(&buffer);
> + return result;
> +}
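
As the docstrings above say, from_buffer() shares a writable buffer while
from_buffer_copy() copies from any readable one; a small host-side sketch:

    import ctypes

    Buf4 = ctypes.c_char * 4

    ba = bytearray(b"abcd")
    shared = Buf4.from_buffer(ba)       # requires a writable, C-contiguous buffer
    ba[0] = ord(b"X")
    print(shared.raw)                   # b'Xbcd' - same memory as the bytearray

    copied = Buf4.from_buffer_copy(b"wxyz")   # read-only sources are fine here
    print(copied.raw)                   # b'wxyz'

    # Buf4.from_buffer(b"abcd") raises TypeError (buffer not writable), and a
    # source shorter than sizeof(Buf4) raises ValueError, as implemented above.
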
> +#ifndef UEFI_C_SOURCE
> +static const char in_dll_doc[] =
> +"C.in_dll(dll, name) -> C instance\naccess a C instance in a dll";
> +
> +static PyObject *
> +CDataType_in_dll(PyObject *type, PyObject *args)
> +{
> + PyObject *dll;
> + char *name;
> + PyObject *obj;
> + void *handle;
> + void *address;
> +
> + if (!PyArg_ParseTuple(args, "Os:in_dll", &dll, &name))
> + return NULL;
> +
> + obj = PyObject_GetAttrString(dll, "_handle");
> + if (!obj)
> + return NULL;
> + if (!PyLong_Check(obj)) {
> + PyErr_SetString(PyExc_TypeError,
> + "the _handle attribute of the second argument must be an integer");
> + Py_DECREF(obj);
> + return NULL;
> + }
> + handle = (void *)PyLong_AsVoidPtr(obj);
> + Py_DECREF(obj);
> + if (PyErr_Occurred()) {
> + PyErr_SetString(PyExc_ValueError,
> + "could not convert the _handle attribute to a pointer");
> + return NULL;
> + }
> +
> +#ifdef MS_WIN32
> + address = (void *)GetProcAddress(handle, name);
> + if (!address) {
> + PyErr_Format(PyExc_ValueError,
> + "symbol '%s' not found",
> + name);
> + return NULL;
> + }
> +#else
> + address = (void *)ctypes_dlsym(handle, name);
> + if (!address) {
> +#ifdef __CYGWIN__
> +/* dlerror() isn't very helpful on cygwin */
> + PyErr_Format(PyExc_ValueError,
> + "symbol '%s' not found",
> + name);
> +#else
> + PyErr_SetString(PyExc_ValueError, ctypes_dlerror());
> +#endif
> + return NULL;
> + }
> +#endif
> + return PyCData_AtAddress(type, address);
> +}
> +#endif
> +static const char from_param_doc[] =
> +"Convert a Python object into a function call parameter.";
> +
> +static PyObject *
> +CDataType_from_param(PyObject *type, PyObject *value)
> +{
> + PyObject *as_parameter;
> + int res = PyObject_IsInstance(value, type);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + Py_INCREF(value);
> + return value;
> + }
> + if (PyCArg_CheckExact(value)) {
> + PyCArgObject *p = (PyCArgObject *)value;
> + PyObject *ob = p->obj;
> + const char *ob_name;
> + StgDictObject *dict;
> + dict = PyType_stgdict(type);
> +
> + /* If we got a PyCArgObject, we must check if the object packed in it
> + is an instance of the type's dict->proto */
> + if(dict && ob) {
> + res = PyObject_IsInstance(ob, dict->proto);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> + ob_name = (ob) ? Py_TYPE(ob)->tp_name : "???";
> + PyErr_Format(PyExc_TypeError,
> + "expected %s instance instead of pointer to %s",
> + ((PyTypeObject *)type)->tp_name, ob_name);
> + return NULL;
> + }
> +
> + as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
> + if (as_parameter) {
> + value = CDataType_from_param(type, as_parameter);
> + Py_DECREF(as_parameter);
> + return value;
> + }
> + PyErr_Format(PyExc_TypeError,
> + "expected %s instance instead of %s",
> + ((PyTypeObject *)type)->tp_name,
> + Py_TYPE(value)->tp_name);
> + return NULL;
> +}
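
CDataType_from_param passes instances of the type straight through and falls
back to an object's _as_parameter_ attribute before raising TypeError;
host-side sketch:

    from ctypes import Structure, c_int

    class POINT(Structure):
        _fields_ = [("x", c_int), ("y", c_int)]

    p = POINT(1, 2)
    print(POINT.from_param(p) is p)      # an instance of the type is returned as-is

    class Holder:
        def __init__(self, pt):
            self._as_parameter_ = pt     # delegate conversion via _as_parameter_

    print(POINT.from_param(Holder(p)) is p)
    # POINT.from_param(42) raises TypeError: expected POINT instance instead of int
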
> +
> +static PyMethodDef CDataType_methods[] = {
> + { "from_param", CDataType_from_param, METH_O, from_param_doc },
> + { "from_address", CDataType_from_address, METH_O, from_address_doc },
> + { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
> + { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
> +#ifndef UEFI_C_SOURCE
> + { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc },
> +#endif
> + { NULL, NULL },
> +};
> +
> +static PyObject *
> +CDataType_repeat(PyObject *self, Py_ssize_t length)
> +{
> + if (length < 0)
> + return PyErr_Format(PyExc_ValueError,
> + "Array length must be >= 0, not %zd",
> + length);
> + return PyCArrayType_from_ctype(self, length);
> +}
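
CDataType_repeat is what implements the familiar "type * length" array-creation
syntax on ctypes type objects; for example (host-side sketch):

    from ctypes import c_int, sizeof

    IntArray5 = c_int * 5               # routed through CDataType_repeat above
    arr = IntArray5(1, 2, 3, 4, 5)
    print(sizeof(arr))                  # 5 * sizeof(c_int)
    print(list(arr))                    # [1, 2, 3, 4, 5]

    # c_int * -1 raises ValueError: Array length must be >= 0, not -1
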
> +
> +static PySequenceMethods CDataType_as_sequence = {
> + 0, /* inquiry sq_length; */
> + 0, /* binaryfunc sq_concat; */
> + CDataType_repeat, /* intargfunc sq_repeat; */
> + 0, /* intargfunc sq_item; */
> + 0, /* intintargfunc sq_slice; */
> + 0, /* intobjargproc sq_ass_item; */
> + 0, /* intintobjargproc sq_ass_slice; */
> + 0, /* objobjproc sq_contains; */
> +
> + 0, /* binaryfunc sq_inplace_concat; */
> + 0, /* intargfunc sq_inplace_repeat; */
> +};
> +
> +static int
> +CDataType_clear(PyTypeObject *self)
> +{
> + StgDictObject *dict = PyType_stgdict((PyObject *)self);
> + if (dict)
> + Py_CLEAR(dict->proto);
> + return PyType_Type.tp_clear((PyObject *)self);
> +}
> +
> +static int
> +CDataType_traverse(PyTypeObject *self, visitproc visit, void *arg)
> +{
> + StgDictObject *dict = PyType_stgdict((PyObject *)self);
> + if (dict)
> + Py_VISIT(dict->proto);
> + return PyType_Type.tp_traverse((PyObject *)self, visit, arg);
> +}
> +
> +static int
> +PyCStructType_setattro(PyObject *self, PyObject *key, PyObject *value)
> +{
> + /* XXX Should we disallow deleting _fields_? */
> + if (-1 == PyType_Type.tp_setattro(self, key, value))
> + return -1;
> +
> + if (value && PyUnicode_Check(key) &&
> + _PyUnicode_EqualToASCIIString(key, "_fields_"))
> + return PyCStructUnionType_update_stgdict(self, value, 1);
> + return 0;
> +}
> +
> +
> +static int
> +UnionType_setattro(PyObject *self, PyObject *key, PyObject *value)
> +{
> + /* XXX Should we disallow deleting _fields_? */
> + if (-1 == PyObject_GenericSetAttr(self, key, value))
> + return -1;
> +
> + if (PyUnicode_Check(key) &&
> + _PyUnicode_EqualToASCIIString(key, "_fields_"))
> + return PyCStructUnionType_update_stgdict(self, value, 0);
> + return 0;
> +}
> +
> +
> +PyTypeObject PyCStructType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCStructType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + PyCStructType_setattro, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + "metatype for the CData Objects", /* tp_doc */
> + (traverseproc)CDataType_traverse, /* tp_traverse */
> + (inquiry)CDataType_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + CDataType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCStructType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +static PyTypeObject UnionType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.UnionType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + UnionType_setattro, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + "metatype for the CData Objects", /* tp_doc */
> + (traverseproc)CDataType_traverse, /* tp_traverse */
> + (inquiry)CDataType_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + CDataType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + UnionType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/******************************************************************/
> +
> +/*
> +
> +The PyCPointerType_Type metaclass must ensure that the subclass of Pointer can be
> +created. It must check for a _type_ attribute in the class. Since there are no
> +runtime-created properties, a CField is probably *not* needed?
> +
> +class IntPointer(Pointer):
> + _type_ = "i"
> +
> +The PyCPointer_Type provides the functionality: a contents method/property, a
> +size property/method, and the sequence protocol.
> +
> +*/
> +
> +static int
> +PyCPointerType_SetProto(StgDictObject *stgdict, PyObject *proto)
> +{
> + if (!proto || !PyType_Check(proto)) {
> + PyErr_SetString(PyExc_TypeError,
> + "_type_ must be a type");
> + return -1;
> + }
> + if (!PyType_stgdict(proto)) {
> + PyErr_SetString(PyExc_TypeError,
> + "_type_ must have storage info");
> + return -1;
> + }
> + Py_INCREF(proto);
> + Py_XSETREF(stgdict->proto, proto);
> + return 0;
> +}
> +
> +static PyCArgObject *
> +PyCPointerType_paramfunc(CDataObject *self)
> +{
> + PyCArgObject *parg;
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = 'P';
> + parg->pffi_type = &ffi_type_pointer;
> + Py_INCREF(self);
> + parg->obj = (PyObject *)self;
> + parg->value.p = *(void **)self->b_ptr;
> + return parg;
> +}
> +
> +static PyObject *
> +PyCPointerType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyTypeObject *result;
> + StgDictObject *stgdict;
> + PyObject *proto;
> + PyObject *typedict;
> +
> + typedict = PyTuple_GetItem(args, 2);
> + if (!typedict)
> + return NULL;
> +/*
> + stgdict items size, align, length contain info about the pointer itself,
> + stgdict->proto has info about the pointed-to type!
> +*/
> + stgdict = (StgDictObject *)PyObject_CallObject(
> + (PyObject *)&PyCStgDict_Type, NULL);
> + if (!stgdict)
> + return NULL;
> + stgdict->size = sizeof(void *);
> + stgdict->align = _ctypes_get_fielddesc("P")->pffi_type->alignment;
> + stgdict->length = 1;
> + stgdict->ffi_type_pointer = ffi_type_pointer;
> + stgdict->paramfunc = PyCPointerType_paramfunc;
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> +
> + proto = PyDict_GetItemString(typedict, "_type_"); /* Borrowed ref */
> + if (proto && -1 == PyCPointerType_SetProto(stgdict, proto)) {
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> +
> + if (proto) {
> + StgDictObject *itemdict = PyType_stgdict(proto);
> + const char *current_format;
> + /* PyCPointerType_SetProto has verified proto has a stgdict. */
> + assert(itemdict);
> + /* If itemdict->format is NULL, then this is a pointer to an
> + incomplete type. We create a generic format string
> + 'pointer to bytes' in this case. XXX Better would be to
> + fix the format string later...
> + */
> + current_format = itemdict->format ? itemdict->format : "B";
> + if (itemdict->shape != NULL) {
> + /* pointer to an array: the shape needs to be prefixed */
> + stgdict->format = _ctypes_alloc_format_string_with_shape(
> + itemdict->ndim, itemdict->shape, "&", current_format);
> + } else {
> + stgdict->format = _ctypes_alloc_format_string("&", current_format);
> + }
> + if (stgdict->format == NULL) {
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + }
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
> + if (result == NULL) {
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> +
> + /* replace the class dict by our updated spam dict */
> + if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
> + Py_DECREF(result);
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + Py_SETREF(result->tp_dict, (PyObject *)stgdict);
> +
> + return (PyObject *)result;
> +}
> +
> +
> +static PyObject *
> +PyCPointerType_set_type(PyTypeObject *self, PyObject *type)
> +{
> + StgDictObject *dict;
> +
> + dict = PyType_stgdict((PyObject *)self);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> +
> + if (-1 == PyCPointerType_SetProto(dict, type))
> + return NULL;
> +
> + if (-1 == PyDict_SetItemString((PyObject *)dict, "_type_", type))
> + return NULL;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +static PyObject *_byref(PyObject *);
> +
> +static PyObject *
> +PyCPointerType_from_param(PyObject *type, PyObject *value)
> +{
> + StgDictObject *typedict;
> +
> + if (value == Py_None) {
> + /* ConvParam will convert to a NULL pointer later */
> + Py_INCREF(value);
> + return value;
> + }
> +
> + typedict = PyType_stgdict(type);
> + if (!typedict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> +
> + /* If we expect POINTER(<type>), but receive a <type> instance, accept
> + it by calling byref(<type>).
> + */
> + switch (PyObject_IsInstance(value, typedict->proto)) {
> + case 1:
> + Py_INCREF(value); /* _byref steals a refcount */
> + return _byref(value);
> + case -1:
> + return NULL;
> + default:
> + break;
> + }
> +
> + if (PointerObject_Check(value) || ArrayObject_Check(value)) {
> + /* Array instances are also pointers when
> + the item types are the same.
> + */
> + StgDictObject *v = PyObject_stgdict(value);
> + assert(v); /* Cannot be NULL for pointer or array objects */
> + if (PyObject_IsSubclass(v->proto, typedict->proto)) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> + return CDataType_from_param(type, value);
> +}
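
PyCPointerType_from_param accepts None (as a NULL pointer), an instance of the
pointed-to type (implicitly wrapped with byref), an explicit byref(), or a
compatible pointer/array; host-side sketch:

    from ctypes import POINTER, c_int, c_double, byref

    PInt = POINTER(c_int)

    print(PInt.from_param(None))        # None -> converted to NULL later
    n = c_int(7)
    print(PInt.from_param(n))           # a c_int instance is accepted as byref(n)
    print(PInt.from_param(byref(n)))    # explicit byref() passes through

    arr = (c_int * 3)(1, 2, 3)
    print(PInt.from_param(arr))         # arrays with the same item type also pass

    # PInt.from_param(c_double(1.0)) raises TypeError (incompatible item type)
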
> +
> +static PyMethodDef PyCPointerType_methods[] = {
> + { "from_address", CDataType_from_address, METH_O, from_address_doc },
> + { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
> + { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
> +#ifndef UEFI_C_SOURCE
> + { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc},
> +#endif
> + { "from_param", (PyCFunction)PyCPointerType_from_param, METH_O, from_param_doc},
> + { "set_type", (PyCFunction)PyCPointerType_set_type, METH_O },
> + { NULL, NULL },
> +};
> +
> +PyTypeObject PyCPointerType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCPointerType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + "metatype for the Pointer Objects", /* tp_doc */
> + (traverseproc)CDataType_traverse, /* tp_traverse */
> + (inquiry)CDataType_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + PyCPointerType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCPointerType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/******************************************************************/
> +/*
> + PyCArrayType_Type
> +*/
> +/*
> + PyCArrayType_new ensures that the new Array subclass created has a _length_
> + attribute, and a _type_ attribute.
> +*/
> +
> +static int
> +CharArray_set_raw(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
> +{
> + char *ptr;
> + Py_ssize_t size;
> + Py_buffer view;
> +
> + if (PyObject_GetBuffer(value, &view, PyBUF_SIMPLE) < 0)
> + return -1;
> + size = view.len;
> + ptr = view.buf;
> + if (size > self->b_size) {
> + PyErr_SetString(PyExc_ValueError,
> + "byte string too long");
> + goto fail;
> + }
> +
> + memcpy(self->b_ptr, ptr, size);
> +
> + PyBuffer_Release(&view);
> + return 0;
> + fail:
> + PyBuffer_Release(&view);
> + return -1;
> +}
> +
> +static PyObject *
> +CharArray_get_raw(CDataObject *self, void *Py_UNUSED(ignored))
> +{
> + return PyBytes_FromStringAndSize(self->b_ptr, self->b_size);
> +}
> +
> +static PyObject *
> +CharArray_get_value(CDataObject *self, void *Py_UNUSED(ignored))
> +{
> + Py_ssize_t i;
> + char *ptr = self->b_ptr;
> + for (i = 0; i < self->b_size; ++i)
> + if (*ptr++ == '\0')
> + break;
> + return PyBytes_FromStringAndSize(self->b_ptr, i);
> +}
> +
> +static int
> +CharArray_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
> +{
> + char *ptr;
> + Py_ssize_t size;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "can't delete attribute");
> + return -1;
> + }
> +
> + if (!PyBytes_Check(value)) {
> + PyErr_Format(PyExc_TypeError,
> + "bytes expected instead of %s instance",
> + Py_TYPE(value)->tp_name);
> + return -1;
> + } else
> + Py_INCREF(value);
> + size = PyBytes_GET_SIZE(value);
> + if (size > self->b_size) {
> + PyErr_SetString(PyExc_ValueError,
> + "byte string too long");
> + Py_DECREF(value);
> + return -1;
> + }
> +
> + ptr = PyBytes_AS_STRING(value);
> + memcpy(self->b_ptr, ptr, size);
> + if (size < self->b_size)
> + self->b_ptr[size] = '\0';
> + Py_DECREF(value);
> +
> + return 0;
> +}
> +
> +static PyGetSetDef CharArray_getsets[] = {
> + { "raw", (getter)CharArray_get_raw, (setter)CharArray_set_raw,
> + "value", NULL },
> + { "value", (getter)CharArray_get_value, (setter)CharArray_set_value,
> + "string value"},
> + { NULL, NULL }
> +};
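
These getsets give c_char arrays their string-like behaviour: value stops at the
first NUL and NUL-terminates on assignment, while raw exposes the whole buffer;
host-side sketch:

    from ctypes import create_string_buffer

    buf = create_string_buffer(8)       # a zero-filled (c_char * 8) instance
    buf.value = b"abc"                  # CharArray_set_value copies + NUL-terminates
    print(buf.value)                    # b'abc'   (stops at the first NUL)
    print(buf.raw)                      # b'abc\x00\x00\x00\x00\x00'  (all 8 bytes)

    buf.raw = b"12345678"               # CharArray_set_raw fills the buffer exactly
    print(buf.value)                    # b'12345678' (no NUL left in the buffer)

    # buf.value = b"123456789" raises ValueError: byte string too long
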
> +
> +#ifdef CTYPES_UNICODE
> +static PyObject *
> +WCharArray_get_value(CDataObject *self, void *Py_UNUSED(ignored))
> +{
> + Py_ssize_t i;
> + wchar_t *ptr = (wchar_t *)self->b_ptr;
> + for (i = 0; i < self->b_size/(Py_ssize_t)sizeof(wchar_t); ++i)
> + if (*ptr++ == (wchar_t)0)
> + break;
> + return PyUnicode_FromWideChar((wchar_t *)self->b_ptr, i);
> +}
> +
> +static int
> +WCharArray_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
> +{
> + Py_ssize_t result = 0;
> + Py_UNICODE *wstr;
> + Py_ssize_t len;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "can't delete attribute");
> + return -1;
> + }
> + if (!PyUnicode_Check(value)) {
> + PyErr_Format(PyExc_TypeError,
> + "unicode string expected instead of %s instance",
> + Py_TYPE(value)->tp_name);
> + return -1;
> + } else
> + Py_INCREF(value);
> +
> + wstr = PyUnicode_AsUnicodeAndSize(value, &len);
> + if (wstr == NULL)
> + return -1;
> + if ((size_t)len > self->b_size/sizeof(wchar_t)) {
> + PyErr_SetString(PyExc_ValueError,
> + "string too long");
> + result = -1;
> + goto done;
> + }
> + result = PyUnicode_AsWideChar(value,
> + (wchar_t *)self->b_ptr,
> + self->b_size/sizeof(wchar_t));
> + if (result >= 0 && (size_t)result < self->b_size/sizeof(wchar_t))
> + ((wchar_t *)self->b_ptr)[result] = (wchar_t)0;
> + done:
> + Py_DECREF(value);
> +
> + return result >= 0 ? 0 : -1;
> +}
> +
> +static PyGetSetDef WCharArray_getsets[] = {
> + { "value", (getter)WCharArray_get_value, (setter)WCharArray_set_value,
> + "string value"},
> + { NULL, NULL }
> +};
> +#endif
> +
> +/*
> + The next three functions copied from Python's typeobject.c.
> +
> + They are used to attach methods, members, or getsets to a type *after* it
> + has been created: Arrays of characters have additional getsets to treat them
> + as strings.
> + */
> +/*
> +static int
> +add_methods(PyTypeObject *type, PyMethodDef *meth)
> +{
> + PyObject *dict = type->tp_dict;
> + for (; meth->ml_name != NULL; meth++) {
> + PyObject *descr;
> + descr = PyDescr_NewMethod(type, meth);
> + if (descr == NULL)
> + return -1;
> + if (PyDict_SetItemString(dict, meth->ml_name, descr) < 0) {
> + Py_DECREF(descr);
> + return -1;
> + }
> + Py_DECREF(descr);
> + }
> + return 0;
> +}
> +
> +static int
> +add_members(PyTypeObject *type, PyMemberDef *memb)
> +{
> + PyObject *dict = type->tp_dict;
> + for (; memb->name != NULL; memb++) {
> + PyObject *descr;
> + descr = PyDescr_NewMember(type, memb);
> + if (descr == NULL)
> + return -1;
> + if (PyDict_SetItemString(dict, memb->name, descr) < 0) {
> + Py_DECREF(descr);
> + return -1;
> + }
> + Py_DECREF(descr);
> + }
> + return 0;
> +}
> +*/
> +
> +static int
> +add_getset(PyTypeObject *type, PyGetSetDef *gsp)
> +{
> + PyObject *dict = type->tp_dict;
> + for (; gsp->name != NULL; gsp++) {
> + PyObject *descr;
> + descr = PyDescr_NewGetSet(type, gsp);
> + if (descr == NULL)
> + return -1;
> + if (PyDict_SetItemString(dict, gsp->name, descr) < 0) {
> + Py_DECREF(descr);
> + return -1;
> + }
> + Py_DECREF(descr);
> + }
> + return 0;
> +}
> +
> +static PyCArgObject *
> +PyCArrayType_paramfunc(CDataObject *self)
> +{
> + PyCArgObject *p = PyCArgObject_new();
> + if (p == NULL)
> + return NULL;
> + p->tag = 'P';
> + p->pffi_type = &ffi_type_pointer;
> + p->value.p = (char *)self->b_ptr;
> + Py_INCREF(self);
> + p->obj = (PyObject *)self;
> + return p;
> +}
> +
> +static PyObject *
> +PyCArrayType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyTypeObject *result;
> + StgDictObject *stgdict;
> + StgDictObject *itemdict;
> + PyObject *length_attr, *type_attr;
> + Py_ssize_t length;
> + Py_ssize_t itemsize, itemalign;
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
> + if (result == NULL)
> + return NULL;
> +
> + /* Initialize these variables to NULL so that we can simplify error
> + handling by using Py_XDECREF. */
> + stgdict = NULL;
> + type_attr = NULL;
> +
> + length_attr = PyObject_GetAttrString((PyObject *)result, "_length_");
> + if (!length_attr || !PyLong_Check(length_attr)) {
> + PyErr_SetString(PyExc_AttributeError,
> + "class must define a '_length_' attribute, "
> + "which must be a positive integer");
> + Py_XDECREF(length_attr);
> + goto error;
> + }
> + length = PyLong_AsSsize_t(length_attr);
> + Py_DECREF(length_attr);
> + if (length == -1 && PyErr_Occurred()) {
> + if (PyErr_ExceptionMatches(PyExc_OverflowError)) {
> + PyErr_SetString(PyExc_OverflowError,
> + "The '_length_' attribute is too large");
> + }
> + goto error;
> + }
> +
> + type_attr = PyObject_GetAttrString((PyObject *)result, "_type_");
> + if (!type_attr) {
> + PyErr_SetString(PyExc_AttributeError,
> + "class must define a '_type_' attribute");
> + goto error;
> + }
> +
> + stgdict = (StgDictObject *)PyObject_CallObject(
> + (PyObject *)&PyCStgDict_Type, NULL);
> + if (!stgdict)
> + goto error;
> +
> + itemdict = PyType_stgdict(type_attr);
> + if (!itemdict) {
> + PyErr_SetString(PyExc_TypeError,
> + "_type_ must have storage info");
> + goto error;
> + }
> +
> + assert(itemdict->format);
> + stgdict->format = _ctypes_alloc_format_string(NULL, itemdict->format);
> + if (stgdict->format == NULL)
> + goto error;
> + stgdict->ndim = itemdict->ndim + 1;
> + stgdict->shape = PyMem_Malloc(sizeof(Py_ssize_t) * stgdict->ndim);
> + if (stgdict->shape == NULL) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + stgdict->shape[0] = length;
> + if (stgdict->ndim > 1) {
> + memmove(&stgdict->shape[1], itemdict->shape,
> + sizeof(Py_ssize_t) * (stgdict->ndim - 1));
> + }
> +
> + itemsize = itemdict->size;
> + if (length * itemsize < 0) {
> + PyErr_SetString(PyExc_OverflowError,
> + "array too large");
> + goto error;
> + }
> +
> + itemalign = itemdict->align;
> +
> + if (itemdict->flags & (TYPEFLAG_ISPOINTER | TYPEFLAG_HASPOINTER))
> + stgdict->flags |= TYPEFLAG_HASPOINTER;
> +
> + stgdict->size = itemsize * length;
> + stgdict->align = itemalign;
> + stgdict->length = length;
> + stgdict->proto = type_attr;
> +
> + stgdict->paramfunc = &PyCArrayType_paramfunc;
> +
> + /* Arrays are passed as pointers to function calls. */
> + stgdict->ffi_type_pointer = ffi_type_pointer;
> +
> + /* replace the class dict by our updated spam dict */
> + if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict))
> + goto error;
> + Py_SETREF(result->tp_dict, (PyObject *)stgdict); /* steal the reference */
> + stgdict = NULL;
> +
> + /* Special case for character arrays.
> + A permanent annoyance: char arrays are also strings!
> + */
> + if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
> + if (-1 == add_getset(result, CharArray_getsets))
> + goto error;
> +#ifdef CTYPES_UNICODE
> + } else if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
> + if (-1 == add_getset(result, WCharArray_getsets))
> + goto error;
> +#endif
> + }
> +
> + return (PyObject *)result;
> +error:
> + Py_XDECREF((PyObject*)stgdict);
> + Py_XDECREF(type_attr);
> + Py_DECREF(result);
> + return NULL;
> +}
> +
> +PyTypeObject PyCArrayType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCArrayType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "metatype for the Array Objects", /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + CDataType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCArrayType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/******************************************************************/
> +/*
> + PyCSimpleType_Type
> +*/
> +/*
> +
> +PyCSimpleType_new ensures that the new Simple_Type subclass created has a valid
> +_type_ attribute.
> +
> +*/
> +
> +static const char SIMPLE_TYPE_CHARS[] = "cbBhHiIlLdfuzZqQPXOv?g";
> +
> +static PyObject *
> +c_wchar_p_from_param(PyObject *type, PyObject *value)
> +{
> + PyObject *as_parameter;
> + int res;
> + if (value == Py_None) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> + if (PyUnicode_Check(value)) {
> + PyCArgObject *parg;
> + struct fielddesc *fd = _ctypes_get_fielddesc("Z");
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'Z';
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj == NULL) {
> + Py_DECREF(parg);
> + return NULL;
> + }
> + return (PyObject *)parg;
> + }
> + res = PyObject_IsInstance(value, type);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + Py_INCREF(value);
> + return value;
> + }
> + if (ArrayObject_Check(value) || PointerObject_Check(value)) {
> + /* c_wchar array instance or pointer(c_wchar(...)) */
> + StgDictObject *dt = PyObject_stgdict(value);
> + StgDictObject *dict;
> + assert(dt); /* Cannot be NULL for pointer or array objects */
> + dict = dt && dt->proto ? PyType_stgdict(dt->proto) : NULL;
> + if (dict && (dict->setfunc == _ctypes_get_fielddesc("u")->setfunc)) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> + if (PyCArg_CheckExact(value)) {
> + /* byref(c_char(...)) */
> + PyCArgObject *a = (PyCArgObject *)value;
> + StgDictObject *dict = PyObject_stgdict(a->obj);
> + if (dict && (dict->setfunc == _ctypes_get_fielddesc("u")->setfunc)) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> +
> + as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
> + if (as_parameter) {
> + value = c_wchar_p_from_param(type, as_parameter);
> + Py_DECREF(as_parameter);
> + return value;
> + }
> + /* XXX better message */
> + PyErr_SetString(PyExc_TypeError,
> + "wrong type");
> + return NULL;
> +}
> +
> +static PyObject *
> +c_char_p_from_param(PyObject *type, PyObject *value)
> +{
> + PyObject *as_parameter;
> + int res;
> + if (value == Py_None) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> + if (PyBytes_Check(value)) {
> + PyCArgObject *parg;
> + struct fielddesc *fd = _ctypes_get_fielddesc("z");
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'z';
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj == NULL) {
> + Py_DECREF(parg);
> + return NULL;
> + }
> + return (PyObject *)parg;
> + }
> + res = PyObject_IsInstance(value, type);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + Py_INCREF(value);
> + return value;
> + }
> + if (ArrayObject_Check(value) || PointerObject_Check(value)) {
> + /* c_char array instance or pointer(c_char(...)) */
> + StgDictObject *dt = PyObject_stgdict(value);
> + StgDictObject *dict;
> + assert(dt); /* Cannot be NULL for pointer or array objects */
> + dict = dt && dt->proto ? PyType_stgdict(dt->proto) : NULL;
> + if (dict && (dict->setfunc == _ctypes_get_fielddesc("c")->setfunc)) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> + if (PyCArg_CheckExact(value)) {
> + /* byref(c_char(...)) */
> + PyCArgObject *a = (PyCArgObject *)value;
> + StgDictObject *dict = PyObject_stgdict(a->obj);
> + if (dict && (dict->setfunc == _ctypes_get_fielddesc("c")->setfunc)) {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> +
> + as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
> + if (as_parameter) {
> + value = c_char_p_from_param(type, as_parameter);
> + Py_DECREF(as_parameter);
> + return value;
> + }
> + /* XXX better message */
> + PyErr_SetString(PyExc_TypeError,
> + "wrong type");
> + return NULL;
> +}
> +
> +static PyObject *
> +c_void_p_from_param(PyObject *type, PyObject *value)
> +{
> + StgDictObject *stgd;
> + PyObject *as_parameter;
> + int res;
> +
> +/* None */
> + if (value == Py_None) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> + /* Should probably allow buffer interface as well */
> +/* int, long */
> + if (PyLong_Check(value)) {
> + PyCArgObject *parg;
> + struct fielddesc *fd = _ctypes_get_fielddesc("P");
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'P';
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj == NULL) {
> + Py_DECREF(parg);
> + return NULL;
> + }
> + return (PyObject *)parg;
> + }
> + /* XXX struni: remove later */
> +/* bytes */
> + if (PyBytes_Check(value)) {
> + PyCArgObject *parg;
> + struct fielddesc *fd = _ctypes_get_fielddesc("z");
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'z';
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj == NULL) {
> + Py_DECREF(parg);
> + return NULL;
> + }
> + return (PyObject *)parg;
> + }
> +/* unicode */
> + if (PyUnicode_Check(value)) {
> + PyCArgObject *parg;
> + struct fielddesc *fd = _ctypes_get_fielddesc("Z");
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'Z';
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj == NULL) {
> + Py_DECREF(parg);
> + return NULL;
> + }
> + return (PyObject *)parg;
> + }
> +/* c_void_p instance (or subclass) */
> + res = PyObject_IsInstance(value, type);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + /* c_void_p instances */
> + Py_INCREF(value);
> + return value;
> + }
> +/* ctypes array or pointer instance */
> + if (ArrayObject_Check(value) || PointerObject_Check(value)) {
> + /* Any array or pointer is accepted */
> + Py_INCREF(value);
> + return value;
> + }
> +/* byref(...) */
> + if (PyCArg_CheckExact(value)) {
> + /* byref(c_xxx()) */
> + PyCArgObject *a = (PyCArgObject *)value;
> + if (a->tag == 'P') {
> + Py_INCREF(value);
> + return value;
> + }
> + }
> +/* function pointer */
> + if (PyCFuncPtrObject_Check(value)) {
> + PyCArgObject *parg;
> + PyCFuncPtrObject *func;
> + func = (PyCFuncPtrObject *)value;
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'P';
> + Py_INCREF(value);
> + parg->value.p = *(void **)func->b_ptr;
> + parg->obj = value;
> + return (PyObject *)parg;
> + }
> +/* c_char_p, c_wchar_p */
> + stgd = PyObject_stgdict(value);
> + if (stgd && CDataObject_Check(value) && stgd->proto && PyUnicode_Check(stgd->proto)) {
> + PyCArgObject *parg;
> +
> + switch (PyUnicode_AsUTF8(stgd->proto)[0]) {
> + case 'z': /* c_char_p */
> + case 'Z': /* c_wchar_p */
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> + parg->pffi_type = &ffi_type_pointer;
> + parg->tag = 'Z';
> + Py_INCREF(value);
> + parg->obj = value;
> + /* Remember: b_ptr points to where the pointer is stored! */
> + parg->value.p = *(void **)(((CDataObject *)value)->b_ptr);
> + return (PyObject *)parg;
> + }
> + }
> +
> + as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
> + if (as_parameter) {
> + value = c_void_p_from_param(type, as_parameter);
> + Py_DECREF(as_parameter);
> + return value;
> + }
> + /* XXX better message */
> + PyErr_SetString(PyExc_TypeError,
> + "wrong type");
> + return NULL;
> +}
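
c_void_p.from_param is the most permissive converter: it accepts None, integers,
bytes, str, any array or pointer instance, byref() arguments, function pointers,
and c_char_p/c_wchar_p instances; host-side sketch:

    from ctypes import c_void_p, c_int, c_char_p, byref

    print(c_void_p.from_param(None))            # becomes a NULL pointer
    print(c_void_p.from_param(0x1000))          # integers are used as raw addresses
    print(c_void_p.from_param(b"bytes"))        # bytes  -> 'z' (char *) parameter
    print(c_void_p.from_param("text"))          # str    -> 'Z' (wchar_t *) parameter

    arr = (c_int * 2)(1, 2)
    print(c_void_p.from_param(arr))             # any array or pointer is accepted
    print(c_void_p.from_param(byref(c_int(3)))) # byref(...) carries tag 'P'
    print(c_void_p.from_param(c_char_p(b"x")))  # c_char_p / c_wchar_p as well
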
> +
> +static PyMethodDef c_void_p_method = { "from_param", c_void_p_from_param, METH_O };
> +static PyMethodDef c_char_p_method = { "from_param", c_char_p_from_param, METH_O };
> +static PyMethodDef c_wchar_p_method = { "from_param", c_wchar_p_from_param, METH_O };
> +
> +static PyObject *CreateSwappedType(PyTypeObject *type, PyObject *args, PyObject *kwds,
> + PyObject *proto, struct fielddesc *fmt)
> +{
> + PyTypeObject *result;
> + StgDictObject *stgdict;
> + PyObject *name = PyTuple_GET_ITEM(args, 0);
> + PyObject *newname;
> + PyObject *swapped_args;
> + static PyObject *suffix;
> + Py_ssize_t i;
> +
> + swapped_args = PyTuple_New(PyTuple_GET_SIZE(args));
> + if (!swapped_args)
> + return NULL;
> +
> + if (suffix == NULL)
> +#ifdef WORDS_BIGENDIAN
> + suffix = PyUnicode_InternFromString("_le");
> +#else
> + suffix = PyUnicode_InternFromString("_be");
> +#endif
> + if (suffix == NULL) {
> + Py_DECREF(swapped_args);
> + return NULL;
> + }
> +
> + newname = PyUnicode_Concat(name, suffix);
> + if (newname == NULL) {
> + Py_DECREF(swapped_args);
> + return NULL;
> + }
> +
> + PyTuple_SET_ITEM(swapped_args, 0, newname);
> + for (i=1; i<PyTuple_GET_SIZE(args); ++i) {
> + PyObject *v = PyTuple_GET_ITEM(args, i);
> + Py_INCREF(v);
> + PyTuple_SET_ITEM(swapped_args, i, v);
> + }
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, swapped_args, kwds);
> + Py_DECREF(swapped_args);
> + if (result == NULL)
> + return NULL;
> +
> + stgdict = (StgDictObject *)PyObject_CallObject(
> + (PyObject *)&PyCStgDict_Type, NULL);
> + if (!stgdict) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + stgdict->ffi_type_pointer = *fmt->pffi_type;
> + stgdict->align = fmt->pffi_type->alignment;
> + stgdict->length = 0;
> + stgdict->size = fmt->pffi_type->size;
> + stgdict->setfunc = fmt->setfunc_swapped;
> + stgdict->getfunc = fmt->getfunc_swapped;
> +
> + Py_INCREF(proto);
> + stgdict->proto = proto;
> +
> + /* replace the class dict by our updated spam dict */
> + if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
> + Py_DECREF(result);
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + Py_SETREF(result->tp_dict, (PyObject *)stgdict);
> +
> + return (PyObject *)result;
> +}
> +
> +static PyCArgObject *
> +PyCSimpleType_paramfunc(CDataObject *self)
> +{
> + StgDictObject *dict;
> + char *fmt;
> + PyCArgObject *parg;
> + struct fielddesc *fd;
> +
> + dict = PyObject_stgdict((PyObject *)self);
> + assert(dict); /* Cannot be NULL for CDataObject instances */
> + fmt = PyUnicode_AsUTF8(dict->proto);
> + assert(fmt);
> +
> + fd = _ctypes_get_fielddesc(fmt);
> + assert(fd);
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = fmt[0];
> + parg->pffi_type = fd->pffi_type;
> + Py_INCREF(self);
> + parg->obj = (PyObject *)self;
> + memcpy(&parg->value, self->b_ptr, self->b_size);
> + return parg;
> +}
> +
> +static PyObject *
> +PyCSimpleType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyTypeObject *result;
> + StgDictObject *stgdict;
> + PyObject *proto;
> + const char *proto_str;
> + Py_ssize_t proto_len;
> + PyMethodDef *ml;
> + struct fielddesc *fmt;
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
> + if (result == NULL)
> + return NULL;
> +
> + proto = PyObject_GetAttrString((PyObject *)result, "_type_"); /* new ref */
> + if (!proto) {
> + PyErr_SetString(PyExc_AttributeError,
> + "class must define a '_type_' attribute");
> + error:
> + Py_XDECREF(proto);
> + Py_XDECREF(result);
> + return NULL;
> + }
> + if (PyUnicode_Check(proto)) {
> + proto_str = PyUnicode_AsUTF8AndSize(proto, &proto_len);
> + if (!proto_str)
> + goto error;
> + } else {
> + PyErr_SetString(PyExc_TypeError,
> + "class must define a '_type_' string attribute");
> + goto error;
> + }
> + if (proto_len != 1) {
> + PyErr_SetString(PyExc_ValueError,
> + "class must define a '_type_' attribute "
> + "which must be a string of length 1");
> + goto error;
> + }
> + if (!strchr(SIMPLE_TYPE_CHARS, *proto_str)) {
> + PyErr_Format(PyExc_AttributeError,
> + "class must define a '_type_' attribute which must be\n"
> + "a single character string containing one of '%s'.",
> + SIMPLE_TYPE_CHARS);
> + goto error;
> + }
> + fmt = _ctypes_get_fielddesc(proto_str);
> + if (fmt == NULL) {
> + PyErr_Format(PyExc_ValueError,
> + "_type_ '%s' not supported", proto_str);
> + goto error;
> + }
> +
> + stgdict = (StgDictObject *)PyObject_CallObject(
> + (PyObject *)&PyCStgDict_Type, NULL);
> + if (!stgdict)
> + goto error;
> +
> + stgdict->ffi_type_pointer = *fmt->pffi_type;
> + stgdict->align = fmt->pffi_type->alignment;
> + stgdict->length = 0;
> + stgdict->size = fmt->pffi_type->size;
> + stgdict->setfunc = fmt->setfunc;
> + stgdict->getfunc = fmt->getfunc;
> +#ifdef WORDS_BIGENDIAN
> + stgdict->format = _ctypes_alloc_format_string_for_type(proto_str[0], 1);
> +#else
> + stgdict->format = _ctypes_alloc_format_string_for_type(proto_str[0], 0);
> +#endif
> + if (stgdict->format == NULL) {
> + Py_DECREF(result);
> + Py_DECREF(proto);
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> +
> + stgdict->paramfunc = PyCSimpleType_paramfunc;
> +/*
> + if (result->tp_base != &Simple_Type) {
> + stgdict->setfunc = NULL;
> + stgdict->getfunc = NULL;
> + }
> +*/
> +
> + /* This consumes the refcount on proto which we have */
> + stgdict->proto = proto;
> +
> + /* replace the class dict by our updated spam dict */
> + if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
> + Py_DECREF(result);
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + Py_SETREF(result->tp_dict, (PyObject *)stgdict);
> +
> + /* Install from_param class methods in ctypes base classes.
> + Overrides the PyCSimpleType_from_param generic method.
> + */
> + if (result->tp_base == &Simple_Type) {
> + switch (*proto_str) {
> + case 'z': /* c_char_p */
> + ml = &c_char_p_method;
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> + break;
> + case 'Z': /* c_wchar_p */
> + ml = &c_wchar_p_method;
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> + break;
> + case 'P': /* c_void_p */
> + ml = &c_void_p_method;
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> + break;
> + case 's':
> + case 'X':
> + case 'O':
> + ml = NULL;
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> + break;
> + default:
> + ml = NULL;
> + break;
> + }
> +
> + if (ml) {
> + PyObject *meth;
> + int x;
> + meth = PyDescr_NewClassMethod(result, ml);
> + if (!meth) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + x = PyDict_SetItemString(result->tp_dict,
> + ml->ml_name,
> + meth);
> + Py_DECREF(meth);
> + if (x == -1) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + }
> + }
> +
> + if (type == &PyCSimpleType_Type && fmt->setfunc_swapped && fmt->getfunc_swapped) {
> + PyObject *swapped = CreateSwappedType(type, args, kwds,
> + proto, fmt);
> + StgDictObject *sw_dict;
> + if (swapped == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + sw_dict = PyType_stgdict(swapped);
> +#ifdef WORDS_BIGENDIAN
> + PyObject_SetAttrString((PyObject *)result, "__ctype_le__", swapped);
> + PyObject_SetAttrString((PyObject *)result, "__ctype_be__", (PyObject *)result);
> + PyObject_SetAttrString(swapped, "__ctype_be__", (PyObject *)result);
> + PyObject_SetAttrString(swapped, "__ctype_le__", swapped);
> + /* We are creating the type for the OTHER endian */
> + sw_dict->format = _ctypes_alloc_format_string("<", stgdict->format+1);
> +#else
> + PyObject_SetAttrString((PyObject *)result, "__ctype_be__", swapped);
> + PyObject_SetAttrString((PyObject *)result, "__ctype_le__", (PyObject *)result);
> + PyObject_SetAttrString(swapped, "__ctype_le__", (PyObject *)result);
> + PyObject_SetAttrString(swapped, "__ctype_be__", swapped);
> + /* We are creating the type for the OTHER endian */
> + sw_dict->format = _ctypes_alloc_format_string(">", stgdict->format+1);
> +#endif
> + Py_DECREF(swapped);
> + if (PyErr_Occurred()) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + };
> +
> + return (PyObject *)result;
> +}
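> +
> +/* Illustrative sketch (class name here is hypothetical): at the Python
> + level a simple type is declared by giving this metatype a one-character
> + '_type_' code drawn from SIMPLE_TYPE_CHARS, e.g.
> +
> + class c_int_example(_SimpleCData):
> + _type_ = "i"
> +
> + For codes that have swapped setfunc/getfunc pairs, the code above also
> + publishes '__ctype_le__' and '__ctype_be__' attributes linking the
> + native and byte-swapped variants of the type.
> +*/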
> +
> +/*
> + * This is a *class method*.
> + * Convert a parameter into something that ConvParam can handle.
> + */
> +static PyObject *
> +PyCSimpleType_from_param(PyObject *type, PyObject *value)
> +{
> + StgDictObject *dict;
> + char *fmt;
> + PyCArgObject *parg;
> + struct fielddesc *fd;
> + PyObject *as_parameter;
> + int res;
> +
> + /* If the value is already an instance of the requested type,
> + we can use it as is */
> + res = PyObject_IsInstance(value, type);
> + if (res == -1)
> + return NULL;
> + if (res) {
> + Py_INCREF(value);
> + return value;
> + }
> +
> + dict = PyType_stgdict(type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> +
> + /* I think we can rely on this being a one-character string */
> + fmt = PyUnicode_AsUTF8(dict->proto);
> + assert(fmt);
> +
> + fd = _ctypes_get_fielddesc(fmt);
> + assert(fd);
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = fmt[0];
> + parg->pffi_type = fd->pffi_type;
> + parg->obj = fd->setfunc(&parg->value, value, 0);
> + if (parg->obj)
> + return (PyObject *)parg;
> + PyErr_Clear();
> + Py_DECREF(parg);
> +
> + as_parameter = PyObject_GetAttrString(value, "_as_parameter_");
> + if (as_parameter) {
> + if (Py_EnterRecursiveCall("while processing _as_parameter_")) {
> + Py_DECREF(as_parameter);
> + return NULL;
> + }
> + value = PyCSimpleType_from_param(type, as_parameter);
> + Py_LeaveRecursiveCall();
> + Py_DECREF(as_parameter);
> + return value;
> + }
> + PyErr_SetString(PyExc_TypeError,
> + "wrong type");
> + return NULL;
> +}
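> +
> +/* Illustrative sketch (names are hypothetical): besides instances of the
> + requested type, from_param accepts any object exposing an
> + '_as_parameter_' attribute, which the fallback above converts
> + recursively, e.g.
> +
> + class Wrapped:
> + def __init__(self, n):
> + self._as_parameter_ = n # handled as if n had been passed
> +*/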
> +
> +static PyMethodDef PyCSimpleType_methods[] = {
> + { "from_param", PyCSimpleType_from_param, METH_O, from_param_doc },
> + { "from_address", CDataType_from_address, METH_O, from_address_doc },
> + { "from_buffer", CDataType_from_buffer, METH_VARARGS, from_buffer_doc, },
> + { "from_buffer_copy", CDataType_from_buffer_copy, METH_VARARGS, from_buffer_copy_doc, },
> +#ifndef UEFI_C_SOURCE
> + { "in_dll", CDataType_in_dll, METH_VARARGS, in_dll_doc},
> +#endif
> + { NULL, NULL },
> +};
> +
> +PyTypeObject PyCSimpleType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCSimpleType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "metatype for the PyCSimpleType Objects", /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + PyCSimpleType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCSimpleType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +/******************************************************************/
> +/*
> + PyCFuncPtrType_Type
> + */
> +
> +static PyObject *
> +converters_from_argtypes(PyObject *ob)
> +{
> + PyObject *converters;
> + Py_ssize_t i;
> + Py_ssize_t nArgs;
> +
> + ob = PySequence_Tuple(ob); /* new reference */
> + if (!ob) {
> + PyErr_SetString(PyExc_TypeError,
> + "_argtypes_ must be a sequence of types");
> + return NULL;
> + }
> +
> + nArgs = PyTuple_GET_SIZE(ob);
> + converters = PyTuple_New(nArgs);
> + if (!converters) {
> + Py_DECREF(ob);
> + return NULL;
> + }
> +
> + /* I have to check if this is correct: a c_char, which has a size
> + of 1, will be assumed to be pushed as only one byte!
> + Aren't these promoted to integers by the C compiler and pushed as 4 bytes?
> + */
> +
> + for (i = 0; i < nArgs; ++i) {
> + PyObject *tp = PyTuple_GET_ITEM(ob, i);
> + PyObject *cnv = PyObject_GetAttrString(tp, "from_param");
> + if (!cnv)
> + goto argtypes_error_1;
> + PyTuple_SET_ITEM(converters, i, cnv);
> + }
> + Py_DECREF(ob);
> + return converters;
> +
> + argtypes_error_1:
> + Py_XDECREF(converters);
> + Py_DECREF(ob);
> + PyErr_Format(PyExc_TypeError,
> + "item %zd in _argtypes_ has no from_param method",
> + i+1);
> + return NULL;
> +}
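> +
> +/* Illustrative sketch (prototype is hypothetical): for a function type
> + created with CFUNCTYPE(c_int, c_char_p, c_int), '_argtypes_' is
> + (c_char_p, c_int) and the converters tuple built above holds the bound
> + from_param callables of those two types, in the same order.
> +*/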
> +
> +static int
> +make_funcptrtype_dict(StgDictObject *stgdict)
> +{
> + PyObject *ob;
> + PyObject *converters = NULL;
> +
> + stgdict->align = _ctypes_get_fielddesc("P")->pffi_type->alignment;
> + stgdict->length = 1;
> + stgdict->size = sizeof(void *);
> + stgdict->setfunc = NULL;
> + stgdict->getfunc = NULL;
> + stgdict->ffi_type_pointer = ffi_type_pointer;
> +
> + ob = PyDict_GetItemString((PyObject *)stgdict, "_flags_");
> + if (!ob || !PyLong_Check(ob)) {
> + PyErr_SetString(PyExc_TypeError,
> + "class must define _flags_ which must be an integer");
> + return -1;
> + }
> + stgdict->flags = PyLong_AS_LONG(ob) | TYPEFLAG_ISPOINTER;
> +
> + /* _argtypes_ is optional... */
> + ob = PyDict_GetItemString((PyObject *)stgdict, "_argtypes_");
> + if (ob) {
> + converters = converters_from_argtypes(ob);
> + if (!converters)
> + goto error;
> + Py_INCREF(ob);
> + stgdict->argtypes = ob;
> + stgdict->converters = converters;
> + }
> +
> + ob = PyDict_GetItemString((PyObject *)stgdict, "_restype_");
> + if (ob) {
> + if (ob != Py_None && !PyType_stgdict(ob) && !PyCallable_Check(ob)) {
> + PyErr_SetString(PyExc_TypeError,
> + "_restype_ must be a type, a callable, or None");
> + return -1;
> + }
> + Py_INCREF(ob);
> + stgdict->restype = ob;
> + stgdict->checker = PyObject_GetAttrString(ob, "_check_retval_");
> + if (stgdict->checker == NULL)
> + PyErr_Clear();
> + }
> +/* XXX later, maybe.
> + ob = PyDict_GetItemString((PyObject *)stgdict, "_errcheck_");
> + if (ob) {
> + if (!PyCallable_Check(ob)) {
> + PyErr_SetString(PyExc_TypeError,
> + "_errcheck_ must be callable");
> + return -1;
> + }
> + Py_INCREF(ob);
> + stgdict->errcheck = ob;
> + }
> +*/
> + return 0;
> +
> + error:
> + Py_XDECREF(converters);
> + return -1;
> +
> +}
> +
> +static PyCArgObject *
> +PyCFuncPtrType_paramfunc(CDataObject *self)
> +{
> + PyCArgObject *parg;
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = 'P';
> + parg->pffi_type = &ffi_type_pointer;
> + Py_INCREF(self);
> + parg->obj = (PyObject *)self;
> + parg->value.p = *(void **)self->b_ptr;
> + return parg;
> +}
> +
> +static PyObject *
> +PyCFuncPtrType_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyTypeObject *result;
> + StgDictObject *stgdict;
> +
> + stgdict = (StgDictObject *)PyObject_CallObject(
> + (PyObject *)&PyCStgDict_Type, NULL);
> + if (!stgdict)
> + return NULL;
> +
> + stgdict->paramfunc = PyCFuncPtrType_paramfunc;
> + /* We do NOT expose the function signature in the format string. It
> + is impossible, generally, because the only requirement for the
> + argtypes items is that they have a .from_param method - we do not
> + know the types of the arguments (although, in practice, most
> + argtypes would be a ctypes type).
> + */
> + stgdict->format = _ctypes_alloc_format_string(NULL, "X{}");
> + if (stgdict->format == NULL) {
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + stgdict->flags |= TYPEFLAG_ISPOINTER;
> +
> + /* create the new instance (which is a class,
> + since we are a metatype!) */
> + result = (PyTypeObject *)PyType_Type.tp_new(type, args, kwds);
> + if (result == NULL) {
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> +
> + /* replace the class dict by our updated storage dict */
> + if (-1 == PyDict_Update((PyObject *)stgdict, result->tp_dict)) {
> + Py_DECREF(result);
> + Py_DECREF((PyObject *)stgdict);
> + return NULL;
> + }
> + Py_SETREF(result->tp_dict, (PyObject *)stgdict);
> +
> + if (-1 == make_funcptrtype_dict(stgdict)) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + return (PyObject *)result;
> +}
> +
> +PyTypeObject PyCFuncPtrType_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCFuncPtrType", /* tp_name */
> + 0, /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &CDataType_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + "metatype for C function pointers", /* tp_doc */
> + (traverseproc)CDataType_traverse, /* tp_traverse */
> + (inquiry)CDataType_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + CDataType_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCFuncPtrType_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/*****************************************************************
> + * Code to keep needed objects alive
> + */
> +
> +static CDataObject *
> +PyCData_GetContainer(CDataObject *self)
> +{
> + while (self->b_base)
> + self = self->b_base;
> + if (self->b_objects == NULL) {
> + if (self->b_length) {
> + self->b_objects = PyDict_New();
> + if (self->b_objects == NULL)
> + return NULL;
> + } else {
> + Py_INCREF(Py_None);
> + self->b_objects = Py_None;
> + }
> + }
> + return self;
> +}
> +
> +static PyObject *
> +GetKeepedObjects(CDataObject *target)
> +{
> + CDataObject *container;
> + container = PyCData_GetContainer(target);
> + if (container == NULL)
> + return NULL;
> + return container->b_objects;
> +}
> +
> +static PyObject *
> +unique_key(CDataObject *target, Py_ssize_t index)
> +{
> + char string[256];
> + char *cp = string;
> + size_t bytes_left;
> +
> + Py_BUILD_ASSERT(sizeof(string) - 1 > sizeof(Py_ssize_t) * 2);
> + cp += sprintf(cp, "%x", Py_SAFE_DOWNCAST(index, Py_ssize_t, int));
> + while (target->b_base) {
> + bytes_left = sizeof(string) - (cp - string) - 1;
> + /* Hex format needs 2 characters per byte */
> + if (bytes_left < sizeof(Py_ssize_t) * 2) {
> + PyErr_SetString(PyExc_ValueError,
> + "ctypes object structure too deep");
> + return NULL;
> + }
> + cp += sprintf(cp, ":%x", Py_SAFE_DOWNCAST(target->b_index, Py_ssize_t, int));
> + target = target->b_base;
> + }
> + return PyUnicode_FromStringAndSize(string, cp-string);
> +}
> +
> +/*
> + * Keep a reference to 'keep' in the 'target', at index 'index'.
> + *
> + * If 'keep' is None, do nothing.
> + *
> + * Otherwise create a dictionary (if it does not yet exist) in the root
> + * object's 'b_objects' item, which will store the 'keep' object under a unique
> + * key.
> + *
> + * The unique_key helper travels the target's b_base pointer down to the root,
> + * building a string containing hex-formatted indexes found during traversal,
> + * separated by colons.
> + *
> + * The resulting string is used as the key into the root object's b_objects dict.
> + *
> + * Note: This function steals a refcount of the third argument, even if it
> + * fails!
> + */
> +static int
> +KeepRef(CDataObject *target, Py_ssize_t index, PyObject *keep)
> +{
> + int result;
> + CDataObject *ob;
> + PyObject *key;
> +
> +/* Optimization: no need to store None */
> + if (keep == Py_None) {
> + Py_DECREF(Py_None);
> + return 0;
> + }
> + ob = PyCData_GetContainer(target);
> + if (ob == NULL) {
> + Py_DECREF(keep);
> + return -1;
> + }
> + if (ob->b_objects == NULL || !PyDict_CheckExact(ob->b_objects)) {
> + Py_XSETREF(ob->b_objects, keep); /* refcount consumed */
> + return 0;
> + }
> + key = unique_key(target, index);
> + if (key == NULL) {
> + Py_DECREF(keep);
> + return -1;
> + }
> + result = PyDict_SetItem(ob->b_objects, key, keep);
> + Py_DECREF(key);
> + Py_DECREF(keep);
> + return result;
> +}
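> +
> +/* Illustrative sketch (values are made up): unique_key() joins
> + hex-formatted indexes with colons while walking b_base up to the root,
> + so keeping an object at index 2 of a sub-object that sits at b_index 5
> + of the root stores it under the key "2:5" in the root's b_objects dict.
> +*/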
> +
> +/******************************************************************/
> +/*
> + PyCData_Type
> + */
> +static int
> +PyCData_traverse(CDataObject *self, visitproc visit, void *arg)
> +{
> + Py_VISIT(self->b_objects);
> + Py_VISIT((PyObject *)self->b_base);
> + return 0;
> +}
> +
> +static int
> +PyCData_clear(CDataObject *self)
> +{
> + Py_CLEAR(self->b_objects);
> + if ((self->b_needsfree)
> + && _CDataObject_HasExternalBuffer(self))
> + PyMem_Free(self->b_ptr);
> + self->b_ptr = NULL;
> + Py_CLEAR(self->b_base);
> + return 0;
> +}
> +
> +static void
> +PyCData_dealloc(PyObject *self)
> +{
> + PyCData_clear((CDataObject *)self);
> + Py_TYPE(self)->tp_free(self);
> +}
> +
> +static PyMemberDef PyCData_members[] = {
> + { "_b_base_", T_OBJECT,
> + offsetof(CDataObject, b_base), READONLY,
> + "the base object" },
> + { "_b_needsfree_", T_INT,
> + offsetof(CDataObject, b_needsfree), READONLY,
> + "whether the object owns the memory or not" },
> + { "_objects", T_OBJECT,
> + offsetof(CDataObject, b_objects), READONLY,
> + "internal objects tree (NEVER CHANGE THIS OBJECT!)"},
> + { NULL },
> +};
> +
> +static int PyCData_NewGetBuffer(PyObject *myself, Py_buffer *view, int flags)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + StgDictObject *dict = PyObject_stgdict(myself);
> + Py_ssize_t i;
> +
> + if (view == NULL) return 0;
> +
> + view->buf = self->b_ptr;
> + view->obj = myself;
> + Py_INCREF(myself);
> + view->len = self->b_size;
> + view->readonly = 0;
> + /* use default format character if not set */
> + view->format = dict->format ? dict->format : "B";
> + view->ndim = dict->ndim;
> + view->shape = dict->shape;
> + view->itemsize = self->b_size;
> + if (view->itemsize) {
> + for (i = 0; i < view->ndim; ++i) {
> + view->itemsize /= dict->shape[i];
> + }
> + }
> + view->strides = NULL;
> + view->suboffsets = NULL;
> + view->internal = NULL;
> + return 0;
> +}
> +
> +static PyBufferProcs PyCData_as_buffer = {
> + PyCData_NewGetBuffer,
> + NULL,
> +};
> +
> +/*
> + * CData objects are mutable, so they cannot be hashable!
> + */
> +static Py_hash_t
> +PyCData_nohash(PyObject *self)
> +{
> + PyErr_SetString(PyExc_TypeError, "unhashable type");
> + return -1;
> +}
> +
> +static PyObject *
> +PyCData_reduce(PyObject *myself, PyObject *args)
> +{
> + CDataObject *self = (CDataObject *)myself;
> +
> + if (PyObject_stgdict(myself)->flags & (TYPEFLAG_ISPOINTER|TYPEFLAG_HASPOINTER)) {
> + PyErr_SetString(PyExc_ValueError,
> + "ctypes objects containing pointers cannot be pickled");
> + return NULL;
> + }
> + return Py_BuildValue("O(O(NN))",
> + _unpickle,
> + Py_TYPE(myself),
> + PyObject_GetAttrString(myself, "__dict__"),
> + PyBytes_FromStringAndSize(self->b_ptr, self->b_size));
> +}
> +
> +static PyObject *
> +PyCData_setstate(PyObject *myself, PyObject *args)
> +{
> + void *data;
> + Py_ssize_t len;
> + int res;
> + PyObject *dict, *mydict;
> + CDataObject *self = (CDataObject *)myself;
> + if (!PyArg_ParseTuple(args, "Os#", &dict, &data, &len))
> + return NULL;
> + if (len > self->b_size)
> + len = self->b_size;
> + memmove(self->b_ptr, data, len);
> + mydict = PyObject_GetAttrString(myself, "__dict__");
> + if (mydict == NULL) {
> + return NULL;
> + }
> + if (!PyDict_Check(mydict)) {
> + PyErr_Format(PyExc_TypeError,
> + "%.200s.__dict__ must be a dictionary, not %.200s",
> + Py_TYPE(myself)->tp_name, Py_TYPE(mydict)->tp_name);
> + Py_DECREF(mydict);
> + return NULL;
> + }
> + res = PyDict_Update(mydict, dict);
> + Py_DECREF(mydict);
> + if (res == -1)
> + return NULL;
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +/*
> + * default __ctypes_from_outparam__ method returns self.
> + */
> +static PyObject *
> +PyCData_from_outparam(PyObject *self, PyObject *args)
> +{
> + Py_INCREF(self);
> + return self;
> +}
> +
> +static PyMethodDef PyCData_methods[] = {
> + { "__ctypes_from_outparam__", PyCData_from_outparam, METH_NOARGS, },
> + { "__reduce__", PyCData_reduce, METH_NOARGS, },
> + { "__setstate__", PyCData_setstate, METH_VARARGS, },
> + { NULL, NULL },
> +};
> +
> +PyTypeObject PyCData_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes._CData",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + PyCData_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + PyCData_nohash, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "XXX to be provided", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + PyCData_methods, /* tp_methods */
> + PyCData_members, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + 0, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +static int PyCData_MallocBuffer(CDataObject *obj, StgDictObject *dict)
> +{
> + if ((size_t)dict->size <= sizeof(obj->b_value)) {
> + /* No need to call malloc, can use the default buffer */
> + obj->b_ptr = (char *)&obj->b_value;
> + /* The b_needsfree flag does not mean that we actually did
> + call PyMem_Malloc to allocate the memory block; instead it
> + means we are the *owner* of the memory and are responsible
> + for freeing resources associated with the memory. This is
> + also the reason that b_needsfree is exposed to Python.
> + */
> + obj->b_needsfree = 1;
> + } else {
> + /* In Python 2.4 and ctypes 0.9.6, the malloc call took about
> + 33% of the creation time for c_int().
> + */
> + obj->b_ptr = (char *)PyMem_Malloc(dict->size);
> + if (obj->b_ptr == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + obj->b_needsfree = 1;
> + memset(obj->b_ptr, 0, dict->size);
> + }
> + obj->b_size = dict->size;
> + return 0;
> +}
> +
> +PyObject *
> +PyCData_FromBaseObj(PyObject *type, PyObject *base, Py_ssize_t index, char *adr)
> +{
> + CDataObject *cmem;
> + StgDictObject *dict;
> +
> + assert(PyType_Check(type));
> + dict = PyType_stgdict(type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> + dict->flags |= DICTFLAG_FINAL;
> + cmem = (CDataObject *)((PyTypeObject *)type)->tp_alloc((PyTypeObject *)type, 0);
> + if (cmem == NULL)
> + return NULL;
> + assert(CDataObject_Check(cmem));
> +
> + cmem->b_length = dict->length;
> + cmem->b_size = dict->size;
> + if (base) { /* use base's buffer */
> + assert(CDataObject_Check(base));
> + cmem->b_ptr = adr;
> + cmem->b_needsfree = 0;
> + Py_INCREF(base);
> + cmem->b_base = (CDataObject *)base;
> + cmem->b_index = index;
> + } else { /* copy contents of adr */
> + if (-1 == PyCData_MallocBuffer(cmem, dict)) {
> + Py_DECREF(cmem);
> + return NULL;
> + }
> + memcpy(cmem->b_ptr, adr, dict->size);
> + cmem->b_index = index;
> + }
> + return (PyObject *)cmem;
> +}
> +
> +/*
> + Box a memory block into a CData instance.
> +*/
> +PyObject *
> +PyCData_AtAddress(PyObject *type, void *buf)
> +{
> + CDataObject *pd;
> + StgDictObject *dict;
> +
> + assert(PyType_Check(type));
> + dict = PyType_stgdict(type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> + dict->flags |= DICTFLAG_FINAL;
> +
> + pd = (CDataObject *)((PyTypeObject *)type)->tp_alloc((PyTypeObject *)type, 0);
> + if (!pd)
> + return NULL;
> + assert(CDataObject_Check(pd));
> + pd->b_ptr = (char *)buf;
> + pd->b_length = dict->length;
> + pd->b_size = dict->size;
> + return (PyObject *)pd;
> +}
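> +
> +/* Illustrative sketch: this is the worker behind the Python-level
> + from_address class method, e.g. c_int.from_address(addr) wraps the
> + memory at addr in place; the instance neither copies nor owns the
> + buffer (b_needsfree stays 0).
> +*/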
> +
> +/*
> + This function returns TRUE for c_int, c_void_p, and these kinds of
> + classes; FALSE otherwise. FALSE also for subclasses of c_int and
> + such.
> +*/
> +int _ctypes_simple_instance(PyObject *obj)
> +{
> + PyTypeObject *type = (PyTypeObject *)obj;
> +
> + if (PyCSimpleTypeObject_Check(type))
> + return type->tp_base != &Simple_Type;
> + return 0;
> +}
> +
> +PyObject *
> +PyCData_get(PyObject *type, GETFUNC getfunc, PyObject *src,
> + Py_ssize_t index, Py_ssize_t size, char *adr)
> +{
> + StgDictObject *dict;
> + if (getfunc)
> + return getfunc(adr, size);
> + assert(type);
> + dict = PyType_stgdict(type);
> + if (dict && dict->getfunc && !_ctypes_simple_instance(type))
> + return dict->getfunc(adr, size);
> + return PyCData_FromBaseObj(type, src, index, adr);
> +}
> +
> +/*
> + Helper function for PyCData_set below.
> +*/
> +static PyObject *
> +_PyCData_set(CDataObject *dst, PyObject *type, SETFUNC setfunc, PyObject *value,
> + Py_ssize_t size, char *ptr)
> +{
> + CDataObject *src;
> + int err;
> +
> + if (setfunc)
> + return setfunc(ptr, value, size);
> +
> + if (!CDataObject_Check(value)) {
> + StgDictObject *dict = PyType_stgdict(type);
> + if (dict && dict->setfunc)
> + return dict->setfunc(ptr, value, size);
> + /*
> + If value is a tuple, we try to call the type with the tuple
> + and use the result!
> + */
> + assert(PyType_Check(type));
> + if (PyTuple_Check(value)) {
> + PyObject *ob;
> + PyObject *result;
> + ob = PyObject_CallObject(type, value);
> + if (ob == NULL) {
> + _ctypes_extend_error(PyExc_RuntimeError, "(%s) ",
> + ((PyTypeObject *)type)->tp_name);
> + return NULL;
> + }
> + result = _PyCData_set(dst, type, setfunc, ob,
> + size, ptr);
> + Py_DECREF(ob);
> + return result;
> + } else if (value == Py_None && PyCPointerTypeObject_Check(type)) {
> + *(void **)ptr = NULL;
> + Py_INCREF(Py_None);
> + return Py_None;
> + } else {
> + PyErr_Format(PyExc_TypeError,
> + "expected %s instance, got %s",
> + ((PyTypeObject *)type)->tp_name,
> + Py_TYPE(value)->tp_name);
> + return NULL;
> + }
> + }
> + src = (CDataObject *)value;
> +
> + err = PyObject_IsInstance(value, type);
> + if (err == -1)
> + return NULL;
> + if (err) {
> + memcpy(ptr,
> + src->b_ptr,
> + size);
> +
> + if (PyCPointerTypeObject_Check(type)) {
> + /* XXX */
> + }
> +
> + value = GetKeepedObjects(src);
> + if (value == NULL)
> + return NULL;
> +
> + Py_INCREF(value);
> + return value;
> + }
> +
> + if (PyCPointerTypeObject_Check(type)
> + && ArrayObject_Check(value)) {
> + StgDictObject *p1, *p2;
> + PyObject *keep;
> + p1 = PyObject_stgdict(value);
> + assert(p1); /* Cannot be NULL for array instances */
> + p2 = PyType_stgdict(type);
> + assert(p2); /* Cannot be NULL for pointer types */
> +
> + if (p1->proto != p2->proto) {
> + PyErr_Format(PyExc_TypeError,
> + "incompatible types, %s instance instead of %s instance",
> + Py_TYPE(value)->tp_name,
> + ((PyTypeObject *)type)->tp_name);
> + return NULL;
> + }
> + *(void **)ptr = src->b_ptr;
> +
> + keep = GetKeepedObjects(src);
> + if (keep == NULL)
> + return NULL;
> +
> + /*
> + We are assigning an array object to a field which represents
> + a pointer. This has the same effect as converting an array
> + into a pointer. So, again, we have to keep the whole object
> + pointed to (which is the array in this case) alive, and not
> + only its object list. So we create a tuple containing the
> + b_objects list PLUS the array itself, and return that!
> + */
> + return PyTuple_Pack(2, keep, value);
> + }
> + PyErr_Format(PyExc_TypeError,
> + "incompatible types, %s instance instead of %s instance",
> + Py_TYPE(value)->tp_name,
> + ((PyTypeObject *)type)->tp_name);
> + return NULL;
> +}
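> +
> +/* Illustrative sketch (structure and field names are hypothetical): the
> + conversion rules above mean that assigning a tuple to a structured
> + field calls the field type with it, e.g. rect.corner = (1, 2) behaves
> + like rect.corner = POINT(1, 2); assigning None to a pointer-typed
> + field stores a NULL pointer; and assigning an Array to a compatible
> + pointer field stores the array's buffer address and keeps the array
> + alive via the returned tuple.
> +*/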
> +
> +/*
> + * Set a slice in object 'dst', which has the type 'type',
> + * to the value 'value'.
> + */
> +int
> +PyCData_set(PyObject *dst, PyObject *type, SETFUNC setfunc, PyObject *value,
> + Py_ssize_t index, Py_ssize_t size, char *ptr)
> +{
> + CDataObject *mem = (CDataObject *)dst;
> + PyObject *result;
> +
> + if (!CDataObject_Check(dst)) {
> + PyErr_SetString(PyExc_TypeError,
> + "not a ctype instance");
> + return -1;
> + }
> +
> + result = _PyCData_set(mem, type, setfunc, value,
> + size, ptr);
> + if (result == NULL)
> + return -1;
> +
> + /* KeepRef steals a refcount from its last argument */
> + /* If KeepRef fails, we are stumped. The dst memory block has already
> + been changed */
> + return KeepRef(mem, index, result);
> +}
> +
> +
> +/******************************************************************/
> +static PyObject *
> +GenericPyCData_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + CDataObject *obj;
> + StgDictObject *dict;
> +
> + dict = PyType_stgdict((PyObject *)type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return NULL;
> + }
> + dict->flags |= DICTFLAG_FINAL;
> +
> + obj = (CDataObject *)type->tp_alloc(type, 0);
> + if (!obj)
> + return NULL;
> +
> + obj->b_base = NULL;
> + obj->b_index = 0;
> + obj->b_objects = NULL;
> + obj->b_length = dict->length;
> +
> + if (-1 == PyCData_MallocBuffer(obj, dict)) {
> + Py_DECREF(obj);
> + return NULL;
> + }
> + return (PyObject *)obj;
> +}
> +/*****************************************************************/
> +/*
> + PyCFuncPtr_Type
> +*/
> +
> +static int
> +PyCFuncPtr_set_errcheck(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
> +{
> + if (ob && !PyCallable_Check(ob)) {
> + PyErr_SetString(PyExc_TypeError,
> + "the errcheck attribute must be callable");
> + return -1;
> + }
> + Py_XINCREF(ob);
> + Py_XSETREF(self->errcheck, ob);
> + return 0;
> +}
> +
> +static PyObject *
> +PyCFuncPtr_get_errcheck(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
> +{
> + if (self->errcheck) {
> + Py_INCREF(self->errcheck);
> + return self->errcheck;
> + }
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +static int
> +PyCFuncPtr_set_restype(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
> +{
> + if (ob == NULL) {
> + Py_CLEAR(self->restype);
> + Py_CLEAR(self->checker);
> + return 0;
> + }
> + if (ob != Py_None && !PyType_stgdict(ob) && !PyCallable_Check(ob)) {
> + PyErr_SetString(PyExc_TypeError,
> + "restype must be a type, a callable, or None");
> + return -1;
> + }
> + Py_INCREF(ob);
> + Py_XSETREF(self->restype, ob);
> + Py_XSETREF(self->checker, PyObject_GetAttrString(ob, "_check_retval_"));
> + if (self->checker == NULL)
> + PyErr_Clear();
> + return 0;
> +}
> +
> +static PyObject *
> +PyCFuncPtr_get_restype(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
> +{
> + StgDictObject *dict;
> + if (self->restype) {
> + Py_INCREF(self->restype);
> + return self->restype;
> + }
> + dict = PyObject_stgdict((PyObject *)self);
> + assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
> + if (dict->restype) {
> + Py_INCREF(dict->restype);
> + return dict->restype;
> + } else {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> +}
> +
> +static int
> +PyCFuncPtr_set_argtypes(PyCFuncPtrObject *self, PyObject *ob, void *Py_UNUSED(ignored))
> +{
> + PyObject *converters;
> +
> + if (ob == NULL || ob == Py_None) {
> + Py_CLEAR(self->converters);
> + Py_CLEAR(self->argtypes);
> + } else {
> + converters = converters_from_argtypes(ob);
> + if (!converters)
> + return -1;
> + Py_XSETREF(self->converters, converters);
> + Py_INCREF(ob);
> + Py_XSETREF(self->argtypes, ob);
> + }
> + return 0;
> +}
> +
> +static PyObject *
> +PyCFuncPtr_get_argtypes(PyCFuncPtrObject *self, void *Py_UNUSED(ignored))
> +{
> + StgDictObject *dict;
> + if (self->argtypes) {
> + Py_INCREF(self->argtypes);
> + return self->argtypes;
> + }
> + dict = PyObject_stgdict((PyObject *)self);
> + assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
> + if (dict->argtypes) {
> + Py_INCREF(dict->argtypes);
> + return dict->argtypes;
> + } else {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> +}
> +
> +static PyGetSetDef PyCFuncPtr_getsets[] = {
> + { "errcheck", (getter)PyCFuncPtr_get_errcheck, (setter)PyCFuncPtr_set_errcheck,
> + "a function to check for errors", NULL },
> + { "restype", (getter)PyCFuncPtr_get_restype, (setter)PyCFuncPtr_set_restype,
> + "specify the result type", NULL },
> + { "argtypes", (getter)PyCFuncPtr_get_argtypes,
> + (setter)PyCFuncPtr_set_argtypes,
> + "specify the argument types", NULL },
> + { NULL, NULL }
> +};
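> +
> +/* Illustrative sketch (the function object 'func' is hypothetical):
> + these descriptors back the usual per-instance configuration, e.g.
> +
> + func.restype = c_int
> + func.argtypes = [c_char_p, c_int]
> + func.errcheck = my_check # any callable, enforced by the setter above
> +*/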
> +
> +#ifdef MS_WIN32
> +static PPROC FindAddress(void *handle, const char *name, PyObject *type)
> +{
> +#ifdef MS_WIN64
> + /* Win64 has no stdcall calling convention, so it should
> + also not have the stdcall name mangling.
> + */
> + return (PPROC)GetProcAddress(handle, name);
> +#else
> + PPROC address;
> + char *mangled_name;
> + int i;
> + StgDictObject *dict;
> +
> + address = (PPROC)GetProcAddress(handle, name);
> + if (address)
> + return address;
> + if (((size_t)name & ~0xFFFF) == 0) {
> + return NULL;
> + }
> +
> + dict = PyType_stgdict((PyObject *)type);
> + /* It should not happen that dict is NULL, but better be safe */
> + if (dict==NULL || dict->flags & FUNCFLAG_CDECL)
> + return address;
> +
> + /* for stdcall, try mangled names:
> + funcname -> _funcname@<n>
> + where n is 0, 4, 8, 12, ..., 128
> + */
> + mangled_name = alloca(strlen(name) + 1 + 1 + 1 + 3); /* \0 _ @ %d */
> + if (!mangled_name)
> + return NULL;
> + for (i = 0; i < 32; ++i) {
> + sprintf(mangled_name, "_%s@%d", name, i*4);
> + address = (PPROC)GetProcAddress(handle, mangled_name);
> + if (address)
> + return address;
> + }
> + return NULL;
> +#endif
> +}
> +#endif
> +
> +/* Return 1 if usable, 0 otherwise with an exception set. */
> +static int
> +_check_outarg_type(PyObject *arg, Py_ssize_t index)
> +{
> + StgDictObject *dict;
> +
> + if (PyCPointerTypeObject_Check(arg))
> + return 1;
> +
> + if (PyCArrayTypeObject_Check(arg))
> + return 1;
> +
> + dict = PyType_stgdict(arg);
> + if (dict
> + /* simple pointer types, c_void_p, c_wchar_p, BSTR, ... */
> + && PyUnicode_Check(dict->proto)
> +/* We only allow c_void_p, c_char_p and c_wchar_p as a simple output parameter type */
> + && (strchr("PzZ", PyUnicode_AsUTF8(dict->proto)[0]))) {
> + return 1;
> + }
> +
> + PyErr_Format(PyExc_TypeError,
> + "'out' parameter %d must be a pointer type, not %s",
> + Py_SAFE_DOWNCAST(index, Py_ssize_t, int),
> + PyType_Check(arg) ?
> + ((PyTypeObject *)arg)->tp_name :
> + Py_TYPE(arg)->tp_name);
> + return 0;
> +}
> +
> +/* Returns 1 on success, 0 on error */
> +static int
> +_validate_paramflags(PyTypeObject *type, PyObject *paramflags)
> +{
> + Py_ssize_t i, len;
> + StgDictObject *dict;
> + PyObject *argtypes;
> +
> + dict = PyType_stgdict((PyObject *)type);
> + if (!dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "abstract class");
> + return 0;
> + }
> + argtypes = dict->argtypes;
> +
> + if (paramflags == NULL || dict->argtypes == NULL)
> + return 1;
> +
> + if (!PyTuple_Check(paramflags)) {
> + PyErr_SetString(PyExc_TypeError,
> + "paramflags must be a tuple or None");
> + return 0;
> + }
> +
> + len = PyTuple_GET_SIZE(paramflags);
> + if (len != PyTuple_GET_SIZE(dict->argtypes)) {
> + PyErr_SetString(PyExc_ValueError,
> + "paramflags must have the same length as argtypes");
> + return 0;
> + }
> +
> + for (i = 0; i < len; ++i) {
> + PyObject *item = PyTuple_GET_ITEM(paramflags, i);
> + int flag;
> + char *name;
> + PyObject *defval;
> + PyObject *typ;
> + if (!PyArg_ParseTuple(item, "i|ZO", &flag, &name, &defval)) {
> + PyErr_SetString(PyExc_TypeError,
> + "paramflags must be a sequence of (int [,string [,value]]) tuples");
> + return 0;
> + }
> + typ = PyTuple_GET_ITEM(argtypes, i);
> + switch (flag & (PARAMFLAG_FIN | PARAMFLAG_FOUT | PARAMFLAG_FLCID)) {
> + case 0:
> + case PARAMFLAG_FIN:
> + case PARAMFLAG_FIN | PARAMFLAG_FLCID:
> + case PARAMFLAG_FIN | PARAMFLAG_FOUT:
> + break;
> + case PARAMFLAG_FOUT:
> + if (!_check_outarg_type(typ, i+1))
> + return 0;
> + break;
> + default:
> + PyErr_Format(PyExc_TypeError,
> + "paramflag value %d not supported",
> + flag);
> + return 0;
> + }
> + }
> + return 1;
> +}
> +
> +static int
> +_get_name(PyObject *obj, const char **pname)
> +{
> +#ifdef MS_WIN32
> + if (PyLong_Check(obj)) {
> + /* We have to use MAKEINTRESOURCEA for Windows CE.
> + Works on Windows as well, of course.
> + */
> + *pname = MAKEINTRESOURCEA(PyLong_AsUnsignedLongMask(obj) & 0xFFFF);
> + return 1;
> + }
> +#endif
> + if (PyBytes_Check(obj)) {
> + *pname = PyBytes_AS_STRING(obj);
> + return *pname ? 1 : 0;
> + }
> + if (PyUnicode_Check(obj)) {
> + *pname = PyUnicode_AsUTF8(obj);
> + return *pname ? 1 : 0;
> + }
> + PyErr_SetString(PyExc_TypeError,
> + "function name must be string, bytes object or integer");
> + return 0;
> +}
> +
> +#ifndef UEFI_C_SOURCE
> +static PyObject *
> +PyCFuncPtr_FromDll(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + const char *name;
> + int (* address)(void);
> + PyObject *ftuple;
> + PyObject *dll;
> + PyObject *obj;
> + PyCFuncPtrObject *self;
> + void *handle;
> + PyObject *paramflags = NULL;
> +
> + if (!PyArg_ParseTuple(args, "O|O", &ftuple, ¶mflags))
> + return NULL;
> + if (paramflags == Py_None)
> + paramflags = NULL;
> +
> + ftuple = PySequence_Tuple(ftuple);
> + if (!ftuple)
> + /* Here ftuple is a borrowed reference */
> + return NULL;
> +
> + if (!PyArg_ParseTuple(ftuple, "O&O", _get_name, &name, &dll)) {
> + Py_DECREF(ftuple);
> + return NULL;
> + }
> +
> + obj = PyObject_GetAttrString(dll, "_handle");
> + if (!obj) {
> + Py_DECREF(ftuple);
> + return NULL;
> + }
> + if (!PyLong_Check(obj)) {
> + PyErr_SetString(PyExc_TypeError,
> + "the _handle attribute of the second argument must be an integer");
> + Py_DECREF(ftuple);
> + Py_DECREF(obj);
> + return NULL;
> + }
> + handle = (void *)PyLong_AsVoidPtr(obj);
> + Py_DECREF(obj);
> + if (PyErr_Occurred()) {
> + PyErr_SetString(PyExc_ValueError,
> + "could not convert the _handle attribute to a pointer");
> + Py_DECREF(ftuple);
> + return NULL;
> + }
> +
> +#ifdef MS_WIN32
> + address = FindAddress(handle, name, (PyObject *)type);
> + if (!address) {
> + if (!IS_INTRESOURCE(name))
> + PyErr_Format(PyExc_AttributeError,
> + "function '%s' not found",
> + name);
> + else
> + PyErr_Format(PyExc_AttributeError,
> + "function ordinal %d not found",
> + (WORD)(size_t)name);
> + Py_DECREF(ftuple);
> + return NULL;
> + }
> +#else
> + address = (PPROC)ctypes_dlsym(handle, name);
> + if (!address) {
> +#ifdef __CYGWIN__
> +/* dlerror() isn't very helpful on cygwin */
> + PyErr_Format(PyExc_AttributeError,
> + "function '%s' not found",
> + name);
> +#else
> + PyErr_SetString(PyExc_AttributeError, ctypes_dlerror());
> +#endif
> + Py_DECREF(ftuple);
> + return NULL;
> + }
> +#endif
> + Py_INCREF(dll); /* for KeepRef */
> + Py_DECREF(ftuple);
> + if (!_validate_paramflags(type, paramflags))
> + return NULL;
> +
> + self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
> + if (!self)
> + return NULL;
> +
> + Py_XINCREF(paramflags);
> + self->paramflags = paramflags;
> +
> + *(void **)self->b_ptr = address;
> +
> + if (-1 == KeepRef((CDataObject *)self, 0, dll)) {
> + Py_DECREF((PyObject *)self);
> + return NULL;
> + }
> +
> + Py_INCREF(self);
> + self->callable = (PyObject *)self;
> + return (PyObject *)self;
> +}
> +#endif // UEFI_C_SOURCE
> +
> +#ifdef MS_WIN32
> +static PyObject *
> +PyCFuncPtr_FromVtblIndex(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyCFuncPtrObject *self;
> + int index;
> + char *name = NULL;
> + PyObject *paramflags = NULL;
> + GUID *iid = NULL;
> + Py_ssize_t iid_len = 0;
> +
> + if (!PyArg_ParseTuple(args, "is|Oz#", &index, &name, ¶mflags, &iid, &iid_len))
> + return NULL;
> + if (paramflags == Py_None)
> + paramflags = NULL;
> +
> + if (!_validate_paramflags(type, paramflags))
> + return NULL;
> +
> + self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
> + self->index = index + 0x1000;
> + Py_XINCREF(paramflags);
> + self->paramflags = paramflags;
> + if (iid_len == sizeof(GUID))
> + self->iid = iid;
> + return (PyObject *)self;
> +}
> +#endif
> +
> +/*
> + PyCFuncPtr_new accepts different argument lists in addition to the standard
> + _basespec_ keyword arg:
> +
> + one argument form
> + "i" - function address
> + "O" - must be a callable, creates a C callable function
> +
> + two or more argument forms (the third argument is a paramflags tuple)
> + "(sO)|..." - (function name, dll object (with an integer handle)), paramflags
> + "(iO)|..." - (function ordinal, dll object (with an integer handle)), paramflags
> + "is|..." - vtable index, method name, creates callable calling COM vtbl
> +*/
> +static PyObject *
> +PyCFuncPtr_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyCFuncPtrObject *self;
> + PyObject *callable;
> + StgDictObject *dict;
> + CThunkObject *thunk;
> +
> + if (PyTuple_GET_SIZE(args) == 0)
> + return GenericPyCData_new(type, args, kwds);
> +#ifndef UEFI_C_SOURCE
> + if (1 <= PyTuple_GET_SIZE(args) && PyTuple_Check(PyTuple_GET_ITEM(args, 0)))
> + return PyCFuncPtr_FromDll(type, args, kwds);
> +#endif // UEFI_C_SOURCE
> +#ifdef MS_WIN32
> + if (2 <= PyTuple_GET_SIZE(args) && PyLong_Check(PyTuple_GET_ITEM(args, 0)))
> + return PyCFuncPtr_FromVtblIndex(type, args, kwds);
> +#endif
> +
> + if (1 == PyTuple_GET_SIZE(args)
> + && (PyLong_Check(PyTuple_GET_ITEM(args, 0)))) {
> + CDataObject *ob;
> + void *ptr = PyLong_AsVoidPtr(PyTuple_GET_ITEM(args, 0));
> + if (ptr == NULL && PyErr_Occurred())
> + return NULL;
> + ob = (CDataObject *)GenericPyCData_new(type, args, kwds);
> + if (ob == NULL)
> + return NULL;
> + *(void **)ob->b_ptr = ptr;
> + return (PyObject *)ob;
> + }
> +
> + if (!PyArg_ParseTuple(args, "O", &callable))
> + return NULL;
> + if (!PyCallable_Check(callable)) {
> + PyErr_SetString(PyExc_TypeError,
> + "argument must be callable or integer function address");
> + return NULL;
> + }
> +
> + /* XXX XXX This would allow passing additional options. For COM
> + method *implementations*, we would probably want different
> + behaviour than in 'normal' callback functions: return a HRESULT if
> + an exception occurs in the callback, and print the traceback not
> + only on the console, but also to OutputDebugString() or something
> + like that.
> + */
> +/*
> + if (kwds && PyDict_GetItemString(kwds, "options")) {
> + ...
> + }
> +*/
> +
> + dict = PyType_stgdict((PyObject *)type);
> + /* XXXX Fails if we do: 'PyCFuncPtr(lambda x: x)' */
> + if (!dict || !dict->argtypes) {
> + PyErr_SetString(PyExc_TypeError,
> + "cannot construct instance of this class:"
> + " no argtypes");
> + return NULL;
> + }
> +
> + thunk = _ctypes_alloc_callback(callable,
> + dict->argtypes,
> + dict->restype,
> + dict->flags);
> + if (!thunk)
> + return NULL;
> +
> + self = (PyCFuncPtrObject *)GenericPyCData_new(type, args, kwds);
> + if (self == NULL) {
> + Py_DECREF(thunk);
> + return NULL;
> + }
> +
> + Py_INCREF(callable);
> + self->callable = callable;
> +
> + self->thunk = thunk;
> + *(void **)self->b_ptr = (void *)thunk->pcl_exec;
> +
> + Py_INCREF((PyObject *)thunk); /* for KeepRef */
> + if (-1 == KeepRef((CDataObject *)self, 0, (PyObject *)thunk)) {
> + Py_DECREF((PyObject *)self);
> + return NULL;
> + }
> + return (PyObject *)self;
> +}
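> +
> +/* Illustrative sketch (names are hypothetical): with a prototype such as
> + proto = CFUNCTYPE(c_int, c_int), the forms accepted above include
> + proto(0x1234) to wrap a raw integer function address and
> + proto(python_callable) to build a C-callable thunk around a Python
> + callable; the DLL-tuple form and the COM vtbl-index form are only
> + compiled in without UEFI_C_SOURCE and with MS_WIN32, respectively.
> +*/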
> +
> +
> +/*
> + _byref consumes a refcount to its argument
> +*/
> +static PyObject *
> +_byref(PyObject *obj)
> +{
> + PyCArgObject *parg;
> + if (!CDataObject_Check(obj)) {
> + PyErr_SetString(PyExc_TypeError,
> + "expected CData instance");
> + return NULL;
> + }
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL) {
> + Py_DECREF(obj);
> + return NULL;
> + }
> +
> + parg->tag = 'P';
> + parg->pffi_type = &ffi_type_pointer;
> + parg->obj = obj;
> + parg->value.p = ((CDataObject *)obj)->b_ptr;
> + return (PyObject *)parg;
> +}
> +
> +static PyObject *
> +_get_arg(int *pindex, PyObject *name, PyObject *defval, PyObject *inargs, PyObject *kwds)
> +{
> + PyObject *v;
> +
> + if (*pindex < PyTuple_GET_SIZE(inargs)) {
> + v = PyTuple_GET_ITEM(inargs, *pindex);
> + ++*pindex;
> + Py_INCREF(v);
> + return v;
> + }
> + if (kwds && name && (v = PyDict_GetItem(kwds, name))) {
> + ++*pindex;
> + Py_INCREF(v);
> + return v;
> + }
> + if (defval) {
> + Py_INCREF(defval);
> + return defval;
> + }
> + /* we can't currently emit a better error message */
> + if (name)
> + PyErr_Format(PyExc_TypeError,
> + "required argument '%S' missing", name);
> + else
> + PyErr_Format(PyExc_TypeError,
> + "not enough arguments");
> + return NULL;
> +}
> +
> +/*
> + This function implements higher level functionality plus the ability to call
> + functions with keyword arguments by looking at parameter flags. Parameter
> + flags is a tuple of 1-, 2- or 3-tuples. The first entry in each is an integer
> + specifying the direction of the data transfer for this parameter - 'in',
> + 'out' or 'inout' (zero means the same as 'in'). The second entry is the
> + parameter name, and the third is the default value if the parameter is
> + missing in the function call.
> +
> + This function builds and returns a new tuple 'callargs' which contains the
> + parameters to use in the call. Items on this tuple are copied from the
> + 'inargs' tuple for 'in' and 'in, out' parameters, and constructed from the
> + 'argtypes' tuple for 'out' parameters. It also calculates numretvals which
> + is the number of return values for the function, outmask/inoutmask are
> + bitmasks containing indexes into the callargs tuple specifying which
> + parameters have to be returned. _build_result builds the return value of the
> + function.
> +*/
> +static PyObject *
> +_build_callargs(PyCFuncPtrObject *self, PyObject *argtypes,
> + PyObject *inargs, PyObject *kwds,
> + int *poutmask, int *pinoutmask, unsigned int *pnumretvals)
> +{
> + PyObject *paramflags = self->paramflags;
> + PyObject *callargs;
> + StgDictObject *dict;
> + Py_ssize_t i, len;
> + int inargs_index = 0;
> + /* It's a little bit difficult to determine how many arguments the
> + function call requires/accepts. For simplicity, we count the consumed
> + args and compare this to the number of supplied args. */
> + Py_ssize_t actual_args;
> +
> + *poutmask = 0;
> + *pinoutmask = 0;
> + *pnumretvals = 0;
> +
> + /* Trivial cases, where we either return inargs itself, or a slice of it. */
> + if (argtypes == NULL || paramflags == NULL || PyTuple_GET_SIZE(argtypes) == 0) {
> +#ifdef MS_WIN32
> + if (self->index)
> + return PyTuple_GetSlice(inargs, 1, PyTuple_GET_SIZE(inargs));
> +#endif
> + Py_INCREF(inargs);
> + return inargs;
> + }
> +
> + len = PyTuple_GET_SIZE(argtypes);
> + callargs = PyTuple_New(len); /* the argument tuple we build */
> + if (callargs == NULL)
> + return NULL;
> +
> +#ifdef MS_WIN32
> + /* For a COM method, skip the first arg */
> + if (self->index) {
> + inargs_index = 1;
> + }
> +#endif
> + for (i = 0; i < len; ++i) {
> + PyObject *item = PyTuple_GET_ITEM(paramflags, i);
> + PyObject *ob;
> + int flag;
> + PyObject *name = NULL;
> + PyObject *defval = NULL;
> +
> + /* This way seems to be ~2 us faster than the PyArg_ParseTuple
> + calls below. */
> + /* We HAVE already checked that the tuple can be parsed with "i|ZO", so... */
> + Py_ssize_t tsize = PyTuple_GET_SIZE(item);
> + flag = PyLong_AS_LONG(PyTuple_GET_ITEM(item, 0));
> + name = tsize > 1 ? PyTuple_GET_ITEM(item, 1) : NULL;
> + defval = tsize > 2 ? PyTuple_GET_ITEM(item, 2) : NULL;
> +
> + switch (flag & (PARAMFLAG_FIN | PARAMFLAG_FOUT | PARAMFLAG_FLCID)) {
> + case PARAMFLAG_FIN | PARAMFLAG_FLCID:
> + /* ['in', 'lcid'] parameter. Always taken from defval,
> + if given, else the integer 0. */
> + if (defval == NULL) {
> + defval = PyLong_FromLong(0);
> + if (defval == NULL)
> + goto error;
> + } else
> + Py_INCREF(defval);
> + PyTuple_SET_ITEM(callargs, i, defval);
> + break;
> + case (PARAMFLAG_FIN | PARAMFLAG_FOUT):
> + *pinoutmask |= (1 << i); /* mark as inout arg */
> + (*pnumretvals)++;
> + /* fall through */
> + case 0:
> + case PARAMFLAG_FIN:
> + /* 'in' parameter. Copy it from inargs. */
> + ob =_get_arg(&inargs_index, name, defval, inargs, kwds);
> + if (ob == NULL)
> + goto error;
> + PyTuple_SET_ITEM(callargs, i, ob);
> + break;
> + case PARAMFLAG_FOUT:
> + /* XXX Refactor this code into a separate function. */
> + /* 'out' parameter.
> + argtypes[i] must be a POINTER to a c type.
> +
> + Cannot be supplied in inargs, but a defval will be used
> + if available. XXX Should we support getting it from kwds?
> + */
> + if (defval) {
> + /* XXX Using mutable objects as defval will
> + make the function non-threadsafe, unless we
> + copy the object in each invocation */
> + Py_INCREF(defval);
> + PyTuple_SET_ITEM(callargs, i, defval);
> + *poutmask |= (1 << i); /* mark as out arg */
> + (*pnumretvals)++;
> + break;
> + }
> + ob = PyTuple_GET_ITEM(argtypes, i);
> + dict = PyType_stgdict(ob);
> + if (dict == NULL) {
> + /* Cannot happen: _validate_paramflags()
> + would not accept such an object */
> + PyErr_Format(PyExc_RuntimeError,
> + "NULL stgdict unexpected");
> + goto error;
> + }
> + if (PyUnicode_Check(dict->proto)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "%s 'out' parameter must be passed as default value",
> + ((PyTypeObject *)ob)->tp_name);
> + goto error;
> + }
> + if (PyCArrayTypeObject_Check(ob))
> + ob = PyObject_CallObject(ob, NULL);
> + else
> + /* Create an instance of the pointed-to type */
> + ob = PyObject_CallObject(dict->proto, NULL);
> + /*
> + XXX Is the following correct any longer?
> + We must not pass a byref() to the array then but
> + the array instance itself. Then, we cannot retrieve
> + the result from the PyCArgObject.
> + */
> + if (ob == NULL)
> + goto error;
> + /* The .from_param call that will occur later will pass this
> + as a byref parameter. */
> + PyTuple_SET_ITEM(callargs, i, ob);
> + *poutmask |= (1 << i); /* mark as out arg */
> + (*pnumretvals)++;
> + break;
> + default:
> + PyErr_Format(PyExc_ValueError,
> + "paramflag %d not yet implemented", flag);
> + goto error;
> + break;
> + }
> + }
> +
> + /* We have counted the arguments we have consumed in 'inargs_index'. This
> + must be the same as len(inargs) + len(kwds), otherwise we have
> + either too much or not enough arguments. */
> +
> + actual_args = PyTuple_GET_SIZE(inargs) + (kwds ? PyDict_Size(kwds) : 0);
> + if (actual_args != inargs_index) {
> + /* When we have default values or named parameters, this error
> + message is misleading. See unittests/test_paramflags.py
> + */
> + PyErr_Format(PyExc_TypeError,
> + "call takes exactly %d arguments (%zd given)",
> + inargs_index, actual_args);
> + goto error;
> + }
> +
> + /* outmask is a bitmask containing indexes into callargs. Items at
> + these indexes contain values to return.
> + */
> + return callargs;
> + error:
> + Py_DECREF(callargs);
> + return NULL;
> +}
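> +
> +/* Illustrative sketch (paramflags value is hypothetical): with
> + paramflags = ((1, "x"), (2, "result")) the first parameter is an 'in'
> + argument taken from inargs/kwds (PARAMFLAG_FIN == 1) and the second is
> + an 'out' argument (PARAMFLAG_FOUT == 2) that is constructed here from
> + its argtype and later handed back by _build_result instead of being
> + supplied by the caller.
> +*/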
> +
> +/* See also:
> + http://msdn.microsoft.com/library/en-us/com/html/769127a1-1a14-4ed4-9d38-7cf3e571b661.asp
> +*/
> +/*
> + Build return value of a function.
> +
> + Consumes the refcount on result and callargs.
> +*/
> +static PyObject *
> +_build_result(PyObject *result, PyObject *callargs,
> + int outmask, int inoutmask, unsigned int numretvals)
> +{
> + unsigned int i, index;
> + int bit;
> + PyObject *tup = NULL;
> +
> + if (callargs == NULL)
> + return result;
> + if (result == NULL || numretvals == 0) {
> + Py_DECREF(callargs);
> + return result;
> + }
> + Py_DECREF(result);
> +
> + /* tup will not be allocated if numretvals == 1 */
> + /* allocate tuple to hold the result */
> + if (numretvals > 1) {
> + tup = PyTuple_New(numretvals);
> + if (tup == NULL) {
> + Py_DECREF(callargs);
> + return NULL;
> + }
> + }
> +
> + index = 0;
> + for (bit = 1, i = 0; i < 32; ++i, bit <<= 1) {
> + PyObject *v;
> + if (bit & inoutmask) {
> + v = PyTuple_GET_ITEM(callargs, i);
> + Py_INCREF(v);
> + if (numretvals == 1) {
> + Py_DECREF(callargs);
> + return v;
> + }
> + PyTuple_SET_ITEM(tup, index, v);
> + index++;
> + } else if (bit & outmask) {
> + _Py_IDENTIFIER(__ctypes_from_outparam__);
> +
> + v = PyTuple_GET_ITEM(callargs, i);
> + v = _PyObject_CallMethodId(v, &PyId___ctypes_from_outparam__, NULL);
> + if (v == NULL || numretvals == 1) {
> + Py_DECREF(callargs);
> + return v;
> + }
> + PyTuple_SET_ITEM(tup, index, v);
> + index++;
> + }
> + if (index == numretvals)
> + break;
> + }
> +
> + Py_DECREF(callargs);
> + return tup;
> +}
> +
> +static PyObject *
> +PyCFuncPtr_call(PyCFuncPtrObject *self, PyObject *inargs, PyObject *kwds)
> +{
> + PyObject *restype;
> + PyObject *converters;
> + PyObject *checker;
> + PyObject *argtypes;
> + StgDictObject *dict = PyObject_stgdict((PyObject *)self);
> + PyObject *result;
> + PyObject *callargs;
> + PyObject *errcheck;
> +#ifdef MS_WIN32
> + IUnknown *piunk = NULL;
> +#endif
> + void *pProc = NULL;
> +
> + int inoutmask;
> + int outmask;
> + unsigned int numretvals;
> +
> + assert(dict); /* Cannot be NULL for PyCFuncPtrObject instances */
> + restype = self->restype ? self->restype : dict->restype;
> + converters = self->converters ? self->converters : dict->converters;
> + checker = self->checker ? self->checker : dict->checker;
> + argtypes = self->argtypes ? self->argtypes : dict->argtypes;
> +/* later, we probably want to have an errcheck field in stgdict */
> + errcheck = self->errcheck /* ? self->errcheck : dict->errcheck */;
> +
> +
> + pProc = *(void **)self->b_ptr;
> +#ifdef MS_WIN32
> + if (self->index) {
> + /* It's a COM method */
> + CDataObject *this;
> + this = (CDataObject *)PyTuple_GetItem(inargs, 0); /* borrowed ref! */
> + if (!this) {
> + PyErr_SetString(PyExc_ValueError,
> + "native com method call without 'this' parameter");
> + return NULL;
> + }
> + if (!CDataObject_Check(this)) {
> + PyErr_SetString(PyExc_TypeError,
> + "Expected a COM this pointer as first argument");
> + return NULL;
> + }
> + /* there should be more checks? No, in Python */
> + /* First arg is a pointer to an interface instance */
> + if (!this->b_ptr || *(void **)this->b_ptr == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "NULL COM pointer access");
> + return NULL;
> + }
> + piunk = *(IUnknown **)this->b_ptr;
> + if (NULL == piunk->lpVtbl) {
> + PyErr_SetString(PyExc_ValueError,
> + "COM method call without VTable");
> + return NULL;
> + }
> + pProc = ((void **)piunk->lpVtbl)[self->index - 0x1000];
> + }
> +#endif
> + callargs = _build_callargs(self, argtypes,
> + inargs, kwds,
> + &outmask, &inoutmask, &numretvals);
> + if (callargs == NULL)
> + return NULL;
> +
> + if (converters) {
> + int required = Py_SAFE_DOWNCAST(PyTuple_GET_SIZE(converters),
> + Py_ssize_t, int);
> + int actual = Py_SAFE_DOWNCAST(PyTuple_GET_SIZE(callargs),
> + Py_ssize_t, int);
> +
> + if ((dict->flags & FUNCFLAG_CDECL) == FUNCFLAG_CDECL) {
> + /* For cdecl functions, we allow more actual arguments
> + than the length of the argtypes tuple.
> + */
> + if (required > actual) {
> + Py_DECREF(callargs);
> + PyErr_Format(PyExc_TypeError,
> + "this function takes at least %d argument%s (%d given)",
> + required,
> + required == 1 ? "" : "s",
> + actual);
> + return NULL;
> + }
> + } else if (required != actual) {
> + Py_DECREF(callargs);
> + PyErr_Format(PyExc_TypeError,
> + "this function takes %d argument%s (%d given)",
> + required,
> + required == 1 ? "" : "s",
> + actual);
> + return NULL;
> + }
> + }
> +
> + result = _ctypes_callproc(pProc,
> + callargs,
> +#ifdef MS_WIN32
> + piunk,
> + self->iid,
> +#endif
> + dict->flags,
> + converters,
> + restype,
> + checker);
> +/* The 'errcheck' protocol */
> + if (result != NULL && errcheck) {
> + PyObject *v = PyObject_CallFunctionObjArgs(errcheck,
> + result,
> + self,
> + callargs,
> + NULL);
> + /* If the errcheck function failed, return NULL.
> + If the errcheck function returned callargs unchanged,
> + continue normal processing.
> + If the errcheck function returned something else,
> + use that as result.
> + */
> + if (v == NULL || v != callargs) {
> + Py_DECREF(result);
> + Py_DECREF(callargs);
> + return v;
> + }
> + Py_DECREF(v);
> + }
> +
> + return _build_result(result, callargs,
> + outmask, inoutmask, numretvals);
> +}
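> +
> +/* Illustrative sketch (the checker is hypothetical): the errcheck hook
> + above is called as errcheck(result, func, callargs); returning
> + callargs unchanged keeps normal processing, returning anything else
> + replaces the result, and raising propagates the error, e.g.
> +
> + def my_check(result, func, args):
> + if result == 0: # made-up failure convention
> + raise OSError("call failed")
> + return args
> +*/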
> +
> +static int
> +PyCFuncPtr_traverse(PyCFuncPtrObject *self, visitproc visit, void *arg)
> +{
> + Py_VISIT(self->callable);
> + Py_VISIT(self->restype);
> + Py_VISIT(self->checker);
> + Py_VISIT(self->errcheck);
> + Py_VISIT(self->argtypes);
> + Py_VISIT(self->converters);
> + Py_VISIT(self->paramflags);
> + Py_VISIT(self->thunk);
> + return PyCData_traverse((CDataObject *)self, visit, arg);
> +}
> +
> +static int
> +PyCFuncPtr_clear(PyCFuncPtrObject *self)
> +{
> + Py_CLEAR(self->callable);
> + Py_CLEAR(self->restype);
> + Py_CLEAR(self->checker);
> + Py_CLEAR(self->errcheck);
> + Py_CLEAR(self->argtypes);
> + Py_CLEAR(self->converters);
> + Py_CLEAR(self->paramflags);
> + Py_CLEAR(self->thunk);
> + return PyCData_clear((CDataObject *)self);
> +}
> +
> +static void
> +PyCFuncPtr_dealloc(PyCFuncPtrObject *self)
> +{
> + PyCFuncPtr_clear(self);
> + Py_TYPE(self)->tp_free((PyObject *)self);
> +}
> +
> +static PyObject *
> +PyCFuncPtr_repr(PyCFuncPtrObject *self)
> +{
> +#ifdef MS_WIN32
> + if (self->index)
> + return PyUnicode_FromFormat("<COM method offset %d: %s at %p>",
> + self->index - 0x1000,
> + Py_TYPE(self)->tp_name,
> + self);
> +#endif
> + return PyUnicode_FromFormat("<%s object at %p>",
> + Py_TYPE(self)->tp_name,
> + self);
> +}
> +
> +static int
> +PyCFuncPtr_bool(PyCFuncPtrObject *self)
> +{
> + return ((*(void **)self->b_ptr != NULL)
> +#ifdef MS_WIN32
> + || (self->index != 0)
> +#endif
> + );
> +}
> +
> +static PyNumberMethods PyCFuncPtr_as_number = {
> + 0, /* nb_add */
> + 0, /* nb_subtract */
> + 0, /* nb_multiply */
> + 0, /* nb_remainder */
> + 0, /* nb_divmod */
> + 0, /* nb_power */
> + 0, /* nb_negative */
> + 0, /* nb_positive */
> + 0, /* nb_absolute */
> + (inquiry)PyCFuncPtr_bool, /* nb_bool */
> +};
> +
> +PyTypeObject PyCFuncPtr_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.PyCFuncPtr",
> + sizeof(PyCFuncPtrObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + (destructor)PyCFuncPtr_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)PyCFuncPtr_repr, /* tp_repr */
> + &PyCFuncPtr_as_number, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + (ternaryfunc)PyCFuncPtr_call, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "Function Pointer", /* tp_doc */
> + (traverseproc)PyCFuncPtr_traverse, /* tp_traverse */
> + (inquiry)PyCFuncPtr_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + PyCFuncPtr_getsets, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + PyCFuncPtr_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +/*****************************************************************/
> +/*
> + Struct_Type
> +*/
> +/*
> + This function is called to initialize a Structure or Union with positional
> + arguments. It calls itself recursively for all Structure or Union base
> + classes, then retrieves the _fields_ member to associate the argument
> + position with the correct field name.
> +
> + Returns -1 on error, or the index of the next argument on success.
> + */
> +static Py_ssize_t
> +_init_pos_args(PyObject *self, PyTypeObject *type,
> + PyObject *args, PyObject *kwds,
> + Py_ssize_t index)
> +{
> + StgDictObject *dict;
> + PyObject *fields;
> + Py_ssize_t i;
> +
> + if (PyType_stgdict((PyObject *)type->tp_base)) {
> + index = _init_pos_args(self, type->tp_base,
> + args, kwds,
> + index);
> + if (index == -1)
> + return -1;
> + }
> +
> + dict = PyType_stgdict((PyObject *)type);
> + fields = PyDict_GetItemString((PyObject *)dict, "_fields_");
> + if (fields == NULL)
> + return index;
> +
> + for (i = 0;
> + i < dict->length && (i+index) < PyTuple_GET_SIZE(args);
> + ++i) {
> + PyObject *pair = PySequence_GetItem(fields, i);
> + PyObject *name, *val;
> + int res;
> + if (!pair)
> + return -1;
> + name = PySequence_GetItem(pair, 0);
> + if (!name) {
> + Py_DECREF(pair);
> + return -1;
> + }
> + val = PyTuple_GET_ITEM(args, i + index);
> + if (kwds && PyDict_GetItem(kwds, name)) {
> + PyErr_Format(PyExc_TypeError,
> + "duplicate values for field %R",
> + name);
> + Py_DECREF(pair);
> + Py_DECREF(name);
> + return -1;
> + }
> +
> + res = PyObject_SetAttr(self, name, val);
> + Py_DECREF(pair);
> + Py_DECREF(name);
> + if (res == -1)
> + return -1;
> + }
> + return index + dict->length;
> +}
> +
> +static int
> +Struct_init(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> +/* Optimization possible: Store the attribute names _fields_[x][0]
> + * in C accessible fields somewhere ?
> + */
> + if (!PyTuple_Check(args)) {
> + PyErr_SetString(PyExc_TypeError,
> + "args not a tuple?");
> + return -1;
> + }
> + if (PyTuple_GET_SIZE(args)) {
> + Py_ssize_t res = _init_pos_args(self, Py_TYPE(self),
> + args, kwds, 0);
> + if (res == -1)
> + return -1;
> + if (res < PyTuple_GET_SIZE(args)) {
> + PyErr_SetString(PyExc_TypeError,
> + "too many initializers");
> + return -1;
> + }
> + }
> +
> + if (kwds) {
> + PyObject *key, *value;
> + Py_ssize_t pos = 0;
> + while(PyDict_Next(kwds, &pos, &key, &value)) {
> + if (-1 == PyObject_SetAttr(self, key, value))
> + return -1;
> + }
> + }
> + return 0;
> +}
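
Struct_init and _init_pos_args above provide the positional/keyword construction semantics of Structure and Union. A small sketch with the standard ctypes API (class and field names are illustrative):

    import ctypes

    class Point(ctypes.Structure):
        _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

    p = Point(1, 2)     # positional args mapped onto _fields_ in order
    q = Point(y=5)      # keywords go through PyObject_SetAttr; x stays 0
    print(p.x, p.y, q.x, q.y)        # 1 2 0 5
    # Point(1, 2, 3)  -> TypeError: too many initializers
    # Point(1, x=9)   -> TypeError: duplicate values for field 'x'
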
> +
> +static PyTypeObject Struct_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.Structure",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "Structure base class", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + Struct_init, /* tp_init */
> + 0, /* tp_alloc */
> + GenericPyCData_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +static PyTypeObject Union_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.Union",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "Union base class", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + Struct_init, /* tp_init */
> + 0, /* tp_alloc */
> + GenericPyCData_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/******************************************************************/
> +/*
> + PyCArray_Type
> +*/
> +static int
> +Array_init(CDataObject *self, PyObject *args, PyObject *kw)
> +{
> + Py_ssize_t i;
> + Py_ssize_t n;
> +
> + if (!PyTuple_Check(args)) {
> + PyErr_SetString(PyExc_TypeError,
> + "args not a tuple?");
> + return -1;
> + }
> + n = PyTuple_GET_SIZE(args);
> + for (i = 0; i < n; ++i) {
> + PyObject *v;
> + v = PyTuple_GET_ITEM(args, i);
> + if (-1 == PySequence_SetItem((PyObject *)self, i, v))
> + return -1;
> + }
> + return 0;
> +}
> +
> +static PyObject *
> +Array_item(PyObject *myself, Py_ssize_t index)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + Py_ssize_t offset, size;
> + StgDictObject *stgdict;
> +
> +
> + if (index < 0 || index >= self->b_length) {
> + PyErr_SetString(PyExc_IndexError,
> + "invalid index");
> + return NULL;
> + }
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for array instances */
> + /* Would it be clearer if we got the item size from
> + stgdict->proto's stgdict?
> + */
> + size = stgdict->size / stgdict->length;
> + offset = index * size;
> +
> + return PyCData_get(stgdict->proto, stgdict->getfunc, (PyObject *)self,
> + index, size, self->b_ptr + offset);
> +}
> +
> +static PyObject *
> +Array_subscript(PyObject *myself, PyObject *item)
> +{
> + CDataObject *self = (CDataObject *)myself;
> +
> + if (PyIndex_Check(item)) {
> + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
> +
> + if (i == -1 && PyErr_Occurred())
> + return NULL;
> + if (i < 0)
> + i += self->b_length;
> + return Array_item(myself, i);
> + }
> + else if (PySlice_Check(item)) {
> + StgDictObject *stgdict, *itemdict;
> + PyObject *proto;
> + PyObject *np;
> + Py_ssize_t start, stop, step, slicelen, cur, i;
> +
> + if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
> + return NULL;
> + }
> + slicelen = PySlice_AdjustIndices(self->b_length, &start, &stop, step);
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for array object instances */
> + proto = stgdict->proto;
> + itemdict = PyType_stgdict(proto);
> + assert(itemdict); /* proto is the item type of the array, a
> + ctypes type, so this cannot be NULL */
> +
> + if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
> + char *ptr = (char *)self->b_ptr;
> + char *dest;
> +
> + if (slicelen <= 0)
> + return PyBytes_FromStringAndSize("", 0);
> + if (step == 1) {
> + return PyBytes_FromStringAndSize(ptr + start,
> + slicelen);
> + }
> + dest = (char *)PyMem_Malloc(slicelen);
> +
> + if (dest == NULL)
> + return PyErr_NoMemory();
> +
> + for (cur = start, i = 0; i < slicelen;
> + cur += step, i++) {
> + dest[i] = ptr[cur];
> + }
> +
> + np = PyBytes_FromStringAndSize(dest, slicelen);
> + PyMem_Free(dest);
> + return np;
> + }
> +#ifdef CTYPES_UNICODE
> + if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
> + wchar_t *ptr = (wchar_t *)self->b_ptr;
> + wchar_t *dest;
> +
> + if (slicelen <= 0)
> + return PyUnicode_New(0, 0);
> + if (step == 1) {
> + return PyUnicode_FromWideChar(ptr + start,
> + slicelen);
> + }
> +
> + dest = PyMem_New(wchar_t, slicelen);
> + if (dest == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + for (cur = start, i = 0; i < slicelen;
> + cur += step, i++) {
> + dest[i] = ptr[cur];
> + }
> +
> + np = PyUnicode_FromWideChar(dest, slicelen);
> + PyMem_Free(dest);
> + return np;
> + }
> +#endif
> +
> + np = PyList_New(slicelen);
> + if (np == NULL)
> + return NULL;
> +
> + for (cur = start, i = 0; i < slicelen;
> + cur += step, i++) {
> + PyObject *v = Array_item(myself, cur);
> + if (v == NULL) {
> + Py_DECREF(np);
> + return NULL;
> + }
> + PyList_SET_ITEM(np, i, v);
> + }
> + return np;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "indices must be integers");
> + return NULL;
> + }
> +
> +}
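
Array_item and Array_subscript above are what make array indexing and slicing work; the special-cased 'c' and 'u' getfuncs are why character arrays slice to bytes/str rather than to lists. A quick host-Python illustration:

    import ctypes

    buf = (ctypes.c_char * 5)(b"h", b"e", b"l", b"l", b"o")
    print(buf[1])       # b'e'
    print(buf[1:4])     # b'ell'   (c_char arrays slice to bytes)
    print(buf[::2])     # b'hlo'   (stepped slices are copied element by element)

    nums = (ctypes.c_int * 4)(10, 20, 30, 40)
    print(nums[1:3])    # [20, 30] (other item types slice to a Python list)
    print(nums[-1])     # 40       (negative indexes are adjusted by b_length)
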
> +
> +static int
> +Array_ass_item(PyObject *myself, Py_ssize_t index, PyObject *value)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + Py_ssize_t size, offset;
> + StgDictObject *stgdict;
> + char *ptr;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "Array does not support item deletion");
> + return -1;
> + }
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for array object instances */
> + if (index < 0 || index >= stgdict->length) {
> + PyErr_SetString(PyExc_IndexError,
> + "invalid index");
> + return -1;
> + }
> + size = stgdict->size / stgdict->length;
> + offset = index * size;
> + ptr = self->b_ptr + offset;
> +
> + return PyCData_set((PyObject *)self, stgdict->proto, stgdict->setfunc, value,
> + index, size, ptr);
> +}
> +
> +static int
> +Array_ass_subscript(PyObject *myself, PyObject *item, PyObject *value)
> +{
> + CDataObject *self = (CDataObject *)myself;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "Array does not support item deletion");
> + return -1;
> + }
> +
> + if (PyIndex_Check(item)) {
> + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
> +
> + if (i == -1 && PyErr_Occurred())
> + return -1;
> + if (i < 0)
> + i += self->b_length;
> + return Array_ass_item(myself, i, value);
> + }
> + else if (PySlice_Check(item)) {
> + Py_ssize_t start, stop, step, slicelen, otherlen, i, cur;
> +
> + if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
> + return -1;
> + }
> + slicelen = PySlice_AdjustIndices(self->b_length, &start, &stop, step);
> + if ((step < 0 && start < stop) ||
> + (step > 0 && start > stop))
> + stop = start;
> +
> + otherlen = PySequence_Length(value);
> + if (otherlen != slicelen) {
> + PyErr_SetString(PyExc_ValueError,
> + "Can only assign sequence of same size");
> + return -1;
> + }
> + for (cur = start, i = 0; i < otherlen; cur += step, i++) {
> + PyObject *item = PySequence_GetItem(value, i);
> + int result;
> + if (item == NULL)
> + return -1;
> + result = Array_ass_item(myself, cur, item);
> + Py_DECREF(item);
> + if (result == -1)
> + return -1;
> + }
> + return 0;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "indices must be integer");
> + return -1;
> + }
> +}
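
And the corresponding assignment path, where the slice-length check above shows up at the Python level:

    import ctypes

    nums = (ctypes.c_int * 5)(0, 1, 2, 3, 4)
    nums[1:4] = [10, 20, 30]     # must match the slice length exactly
    print(list(nums))            # [0, 10, 20, 30, 4]
    # nums[1:4] = [10, 20]  -> ValueError: Can only assign sequence of same size
    # del nums[0]           -> TypeError: Array does not support item deletion
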
> +
> +static Py_ssize_t
> +Array_length(PyObject *myself)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + return self->b_length;
> +}
> +
> +static PySequenceMethods Array_as_sequence = {
> + Array_length, /* sq_length; */
> + 0, /* sq_concat; */
> + 0, /* sq_repeat; */
> + Array_item, /* sq_item; */
> + 0, /* sq_slice; */
> + Array_ass_item, /* sq_ass_item; */
> + 0, /* sq_ass_slice; */
> + 0, /* sq_contains; */
> +
> + 0, /* sq_inplace_concat; */
> + 0, /* sq_inplace_repeat; */
> +};
> +
> +static PyMappingMethods Array_as_mapping = {
> + Array_length,
> + Array_subscript,
> + Array_ass_subscript,
> +};
> +
> +PyTypeObject PyCArray_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.Array",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + &Array_as_sequence, /* tp_as_sequence */
> + &Array_as_mapping, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "XXX to be provided", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + (initproc)Array_init, /* tp_init */
> + 0, /* tp_alloc */
> + GenericPyCData_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +PyObject *
> +PyCArrayType_from_ctype(PyObject *itemtype, Py_ssize_t length)
> +{
> + static PyObject *cache;
> + PyObject *key;
> + PyObject *result;
> + char name[256];
> + PyObject *len;
> +
> + if (cache == NULL) {
> + cache = PyDict_New();
> + if (cache == NULL)
> + return NULL;
> + }
> + len = PyLong_FromSsize_t(length);
> + if (len == NULL)
> + return NULL;
> + key = PyTuple_Pack(2, itemtype, len);
> + Py_DECREF(len);
> + if (!key)
> + return NULL;
> + result = PyDict_GetItemProxy(cache, key);
> + if (result) {
> + Py_INCREF(result);
> + Py_DECREF(key);
> + return result;
> + }
> +
> + if (!PyType_Check(itemtype)) {
> + PyErr_SetString(PyExc_TypeError,
> + "Expected a type object");
> + Py_DECREF(key);
> + return NULL;
> + }
> +#ifdef MS_WIN64
> + sprintf(name, "%.200s_Array_%Id",
> + ((PyTypeObject *)itemtype)->tp_name, length);
> +#else
> + sprintf(name, "%.200s_Array_%ld",
> + ((PyTypeObject *)itemtype)->tp_name, (long)length);
> +#endif
> +
> + result = PyObject_CallFunction((PyObject *)&PyCArrayType_Type,
> + "s(O){s:n,s:O}",
> + name,
> + &PyCArray_Type,
> + "_length_",
> + length,
> + "_type_",
> + itemtype
> + );
> + if (result == NULL) {
> + Py_DECREF(key);
> + return NULL;
> + }
> + if (-1 == PyDict_SetItemProxy(cache, key, result)) {
> + Py_DECREF(key);
> + Py_DECREF(result);
> + return NULL;
> + }
> + Py_DECREF(key);
> + return result;
> +}
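
PyCArrayType_from_ctype backs the 'ctype * length' expression; the (itemtype, length) cache means repeated products return the same type object, and the sprintf above is where the generated class name comes from:

    import ctypes

    IntArray4 = ctypes.c_int * 4
    print(IntArray4.__name__)               # c_int_Array_4
    print(IntArray4 is (ctypes.c_int * 4))  # True: served from the cache
    print(IntArray4._length_, IntArray4._type_)   # 4 <class 'ctypes.c_int'>
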
> +
> +
> +/******************************************************************/
> +/*
> + Simple_Type
> +*/
> +
> +static int
> +Simple_set_value(CDataObject *self, PyObject *value, void *Py_UNUSED(ignored))
> +{
> + PyObject *result;
> + StgDictObject *dict = PyObject_stgdict((PyObject *)self);
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "can't delete attribute");
> + return -1;
> + }
> + assert(dict); /* Cannot be NULL for CDataObject instances */
> + assert(dict->setfunc);
> + result = dict->setfunc(self->b_ptr, value, dict->size);
> + if (!result)
> + return -1;
> +
> + /* consumes the refcount the setfunc returns */
> + return KeepRef(self, 0, result);
> +}
> +
> +static int
> +Simple_init(CDataObject *self, PyObject *args, PyObject *kw)
> +{
> + PyObject *value = NULL;
> + if (!PyArg_UnpackTuple(args, "__init__", 0, 1, &value))
> + return -1;
> + if (value)
> + return Simple_set_value(self, value, NULL);
> + return 0;
> +}
> +
> +static PyObject *
> +Simple_get_value(CDataObject *self, void *Py_UNUSED(ignored))
> +{
> + StgDictObject *dict;
> + dict = PyObject_stgdict((PyObject *)self);
> + assert(dict); /* Cannot be NULL for CDataObject instances */
> + assert(dict->getfunc);
> + return dict->getfunc(self->b_ptr, self->b_size);
> +}
> +
> +static PyGetSetDef Simple_getsets[] = {
> + { "value", (getter)Simple_get_value, (setter)Simple_set_value,
> + "current value", NULL },
> + { NULL, NULL }
> +};
> +
> +static PyObject *
> +Simple_from_outparm(PyObject *self, PyObject *args)
> +{
> + if (_ctypes_simple_instance((PyObject *)Py_TYPE(self))) {
> + Py_INCREF(self);
> + return self;
> + }
> + /* call stgdict->getfunc */
> + return Simple_get_value((CDataObject *)self, NULL);
> +}
> +
> +static PyMethodDef Simple_methods[] = {
> + { "__ctypes_from_outparam__", Simple_from_outparm, METH_NOARGS, },
> + { NULL, NULL },
> +};
> +
> +static int Simple_bool(CDataObject *self)
> +{
> + return memcmp(self->b_ptr, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0", self->b_size);
> +}
> +
> +static PyNumberMethods Simple_as_number = {
> + 0, /* nb_add */
> + 0, /* nb_subtract */
> + 0, /* nb_multiply */
> + 0, /* nb_remainder */
> + 0, /* nb_divmod */
> + 0, /* nb_power */
> + 0, /* nb_negative */
> + 0, /* nb_positive */
> + 0, /* nb_absolute */
> + (inquiry)Simple_bool, /* nb_bool */
> +};
> +
> +/* "%s(%s)" % (self.__class__.__name__, self.value) */
> +static PyObject *
> +Simple_repr(CDataObject *self)
> +{
> + PyObject *val, *result;
> +
> + if (Py_TYPE(self)->tp_base != &Simple_Type) {
> + return PyUnicode_FromFormat("<%s object at %p>",
> + Py_TYPE(self)->tp_name, self);
> + }
> +
> + val = Simple_get_value(self, NULL);
> + if (val == NULL)
> + return NULL;
> +
> + result = PyUnicode_FromFormat("%s(%R)",
> + Py_TYPE(self)->tp_name, val);
> + Py_DECREF(val);
> + return result;
> +}
> +
> +static PyTypeObject Simple_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes._SimpleCData",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)&Simple_repr, /* tp_repr */
> + &Simple_as_number, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "XXX to be provided", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + Simple_methods, /* tp_methods */
> + 0, /* tp_members */
> + Simple_getsets, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + (initproc)Simple_init, /* tp_init */
> + 0, /* tp_alloc */
> + GenericPyCData_new, /* tp_new */
> + 0, /* tp_free */
> +};
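
The Simple_* functions above are essentially the whole behaviour of _SimpleCData instances: the .value descriptor, the repr, and truth testing. For reference:

    import ctypes

    n = ctypes.c_int(42)
    print(n)          # c_int(42)  -- Simple_repr: "%s(%R)"
    print(n.value)    # 42         -- Simple_get_value via the stgdict getfunc
    n.value = -7      # Simple_set_value; 'del n.value' raises TypeError
    print(bool(ctypes.c_int(0)), bool(n))   # False True  -- Simple_bool
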
> +
> +/******************************************************************/
> +/*
> + PyCPointer_Type
> +*/
> +static PyObject *
> +Pointer_item(PyObject *myself, Py_ssize_t index)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + Py_ssize_t size;
> + Py_ssize_t offset;
> + StgDictObject *stgdict, *itemdict;
> + PyObject *proto;
> +
> + if (*(void **)self->b_ptr == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "NULL pointer access");
> + return NULL;
> + }
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for pointer object instances */
> +
> + proto = stgdict->proto;
> + assert(proto);
> + itemdict = PyType_stgdict(proto);
> + assert(itemdict); /* proto is the item type of the pointer, a ctypes
> + type, so this cannot be NULL */
> +
> + size = itemdict->size;
> + offset = index * itemdict->size;
> +
> + return PyCData_get(proto, stgdict->getfunc, (PyObject *)self,
> + index, size, (*(char **)self->b_ptr) + offset);
> +}
> +
> +static int
> +Pointer_ass_item(PyObject *myself, Py_ssize_t index, PyObject *value)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + Py_ssize_t size;
> + Py_ssize_t offset;
> + StgDictObject *stgdict, *itemdict;
> + PyObject *proto;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "Pointer does not support item deletion");
> + return -1;
> + }
> +
> + if (*(void **)self->b_ptr == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "NULL pointer access");
> + return -1;
> + }
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for pointer instances */
> +
> + proto = stgdict->proto;
> + assert(proto);
> +
> + itemdict = PyType_stgdict(proto);
> + assert(itemdict); /* Cannot be NULL because the itemtype of a pointer
> + is always a ctypes type */
> +
> + size = itemdict->size;
> + offset = index * itemdict->size;
> +
> + return PyCData_set((PyObject *)self, proto, stgdict->setfunc, value,
> + index, size, (*(char **)self->b_ptr) + offset);
> +}
> +
> +static PyObject *
> +Pointer_get_contents(CDataObject *self, void *closure)
> +{
> + StgDictObject *stgdict;
> +
> + if (*(void **)self->b_ptr == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "NULL pointer access");
> + return NULL;
> + }
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for pointer instances */
> + return PyCData_FromBaseObj(stgdict->proto,
> + (PyObject *)self, 0,
> + *(void **)self->b_ptr);
> +}
> +
> +static int
> +Pointer_set_contents(CDataObject *self, PyObject *value, void *closure)
> +{
> + StgDictObject *stgdict;
> + CDataObject *dst;
> + PyObject *keep;
> +
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "Pointer does not support item deletion");
> + return -1;
> + }
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for pointer instances */
> + assert(stgdict->proto);
> + if (!CDataObject_Check(value)) {
> + int res = PyObject_IsInstance(value, stgdict->proto);
> + if (res == -1)
> + return -1;
> + if (!res) {
> + PyErr_Format(PyExc_TypeError,
> + "expected %s instead of %s",
> + ((PyTypeObject *)(stgdict->proto))->tp_name,
> + Py_TYPE(value)->tp_name);
> + return -1;
> + }
> + }
> +
> + dst = (CDataObject *)value;
> + *(void **)self->b_ptr = dst->b_ptr;
> +
> + /*
> + A Pointer instance must keep the value it points to alive. So, a
> + pointer instance has b_length set to 2 instead of 1, and we additionally
> + set 'value' itself as the second item of the b_objects list.
> + */
> + Py_INCREF(value);
> + if (-1 == KeepRef(self, 1, value))
> + return -1;
> +
> + keep = GetKeepedObjects(dst);
> + if (keep == NULL)
> + return -1;
> +
> + Py_INCREF(keep);
> + return KeepRef(self, 0, keep);
> +}
> +
> +static PyGetSetDef Pointer_getsets[] = {
> + { "contents", (getter)Pointer_get_contents,
> + (setter)Pointer_set_contents,
> + "the object this pointer points to (read-write)", NULL },
> + { NULL, NULL }
> +};
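
Pointer_get_contents/Pointer_set_contents implement the .contents attribute; the KeepRef calls above are what keep the pointed-to object alive. A short sketch:

    import ctypes

    value = ctypes.c_int(99)
    p = ctypes.pointer(value)        # instance of POINTER(c_int)
    print(p.contents.value)          # 99
    p.contents = ctypes.c_int(123)   # kept alive via b_objects/KeepRef
    print(p[0])                      # 123
    # POINTER(ctypes.c_int)() holds a NULL pointer; reading its .contents
    # raises ValueError: NULL pointer access
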
> +
> +static int
> +Pointer_init(CDataObject *self, PyObject *args, PyObject *kw)
> +{
> + PyObject *value = NULL;
> +
> + if (!PyArg_UnpackTuple(args, "POINTER", 0, 1, &value))
> + return -1;
> + if (value == NULL)
> + return 0;
> + return Pointer_set_contents(self, value, NULL);
> +}
> +
> +static PyObject *
> +Pointer_new(PyTypeObject *type, PyObject *args, PyObject *kw)
> +{
> + StgDictObject *dict = PyType_stgdict((PyObject *)type);
> + if (!dict || !dict->proto) {
> + PyErr_SetString(PyExc_TypeError,
> + "Cannot create instance: has no _type_");
> + return NULL;
> + }
> + return GenericPyCData_new(type, args, kw);
> +}
> +
> +static PyObject *
> +Pointer_subscript(PyObject *myself, PyObject *item)
> +{
> + CDataObject *self = (CDataObject *)myself;
> + if (PyIndex_Check(item)) {
> + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
> + if (i == -1 && PyErr_Occurred())
> + return NULL;
> + return Pointer_item(myself, i);
> + }
> + else if (PySlice_Check(item)) {
> + PySliceObject *slice = (PySliceObject *)item;
> + Py_ssize_t start, stop, step;
> + PyObject *np;
> + StgDictObject *stgdict, *itemdict;
> + PyObject *proto;
> + Py_ssize_t i, len, cur;
> +
> + /* Since pointers have no length, and we want to apply
> + different semantics to negative indices than normal
> + slicing, we have to dissect the slice object ourselves.*/
> + if (slice->step == Py_None) {
> + step = 1;
> + }
> + else {
> + step = PyNumber_AsSsize_t(slice->step,
> + PyExc_ValueError);
> + if (step == -1 && PyErr_Occurred())
> + return NULL;
> + if (step == 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "slice step cannot be zero");
> + return NULL;
> + }
> + }
> + if (slice->start == Py_None) {
> + if (step < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "slice start is required "
> + "for step < 0");
> + return NULL;
> + }
> + start = 0;
> + }
> + else {
> + start = PyNumber_AsSsize_t(slice->start,
> + PyExc_ValueError);
> + if (start == -1 && PyErr_Occurred())
> + return NULL;
> + }
> + if (slice->stop == Py_None) {
> + PyErr_SetString(PyExc_ValueError,
> + "slice stop is required");
> + return NULL;
> + }
> + stop = PyNumber_AsSsize_t(slice->stop,
> + PyExc_ValueError);
> + if (stop == -1 && PyErr_Occurred())
> + return NULL;
> + if ((step > 0 && start > stop) ||
> + (step < 0 && start < stop))
> + len = 0;
> + else if (step > 0)
> + len = (stop - start - 1) / step + 1;
> + else
> + len = (stop - start + 1) / step + 1;
> +
> + stgdict = PyObject_stgdict((PyObject *)self);
> + assert(stgdict); /* Cannot be NULL for pointer instances */
> + proto = stgdict->proto;
> + assert(proto);
> + itemdict = PyType_stgdict(proto);
> + assert(itemdict);
> + if (itemdict->getfunc == _ctypes_get_fielddesc("c")->getfunc) {
> + char *ptr = *(char **)self->b_ptr;
> + char *dest;
> +
> + if (len <= 0)
> + return PyBytes_FromStringAndSize("", 0);
> + if (step == 1) {
> + return PyBytes_FromStringAndSize(ptr + start,
> + len);
> + }
> + dest = (char *)PyMem_Malloc(len);
> + if (dest == NULL)
> + return PyErr_NoMemory();
> + for (cur = start, i = 0; i < len; cur += step, i++) {
> + dest[i] = ptr[cur];
> + }
> + np = PyBytes_FromStringAndSize(dest, len);
> + PyMem_Free(dest);
> + return np;
> + }
> +#ifdef CTYPES_UNICODE
> + if (itemdict->getfunc == _ctypes_get_fielddesc("u")->getfunc) {
> + wchar_t *ptr = *(wchar_t **)self->b_ptr;
> + wchar_t *dest;
> +
> + if (len <= 0)
> + return PyUnicode_New(0, 0);
> + if (step == 1) {
> + return PyUnicode_FromWideChar(ptr + start,
> + len);
> + }
> + dest = PyMem_New(wchar_t, len);
> + if (dest == NULL)
> + return PyErr_NoMemory();
> + for (cur = start, i = 0; i < len; cur += step, i++) {
> + dest[i] = ptr[cur];
> + }
> + np = PyUnicode_FromWideChar(dest, len);
> + PyMem_Free(dest);
> + return np;
> + }
> +#endif
> +
> + np = PyList_New(len);
> + if (np == NULL)
> + return NULL;
> +
> + for (cur = start, i = 0; i < len; cur += step, i++) {
> + PyObject *v = Pointer_item(myself, cur);
> + PyList_SET_ITEM(np, i, v);
> + }
> + return np;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "Pointer indices must be integer");
> + return NULL;
> + }
> +}
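
Because a pointer has no length, Pointer_subscript demands an explicit slice stop (and a start when stepping backwards), which is visible from Python:

    import ctypes

    arr = (ctypes.c_int * 4)(1, 2, 3, 4)
    p = ctypes.cast(arr, ctypes.POINTER(ctypes.c_int))
    print(p[0], p[3])    # 1 4  (no bounds checking on pointer items)
    print(p[0:3])        # [1, 2, 3]
    # p[0:]  -> ValueError: slice stop is required
    # p[:2] is fine (start defaults to 0 for a positive step)
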
> +
> +static PySequenceMethods Pointer_as_sequence = {
> + 0, /* inquiry sq_length; */
> + 0, /* binaryfunc sq_concat; */
> + 0, /* intargfunc sq_repeat; */
> + Pointer_item, /* intargfunc sq_item; */
> + 0, /* intintargfunc sq_slice; */
> + Pointer_ass_item, /* intobjargproc sq_ass_item; */
> + 0, /* intintobjargproc sq_ass_slice; */
> + 0, /* objobjproc sq_contains; */
> + /* Added in release 2.0 */
> + 0, /* binaryfunc sq_inplace_concat; */
> + 0, /* intargfunc sq_inplace_repeat; */
> +};
> +
> +static PyMappingMethods Pointer_as_mapping = {
> + 0,
> + Pointer_subscript,
> +};
> +
> +static int
> +Pointer_bool(CDataObject *self)
> +{
> + return (*(void **)self->b_ptr != NULL);
> +}
> +
> +static PyNumberMethods Pointer_as_number = {
> + 0, /* nb_add */
> + 0, /* nb_subtract */
> + 0, /* nb_multiply */
> + 0, /* nb_remainder */
> + 0, /* nb_divmod */
> + 0, /* nb_power */
> + 0, /* nb_negative */
> + 0, /* nb_positive */
> + 0, /* nb_absolute */
> + (inquiry)Pointer_bool, /* nb_bool */
> +};
> +
> +PyTypeObject PyCPointer_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes._Pointer",
> + sizeof(CDataObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + &Pointer_as_number, /* tp_as_number */
> + &Pointer_as_sequence, /* tp_as_sequence */
> + &Pointer_as_mapping, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + &PyCData_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + "XXX to be provided", /* tp_doc */
> + (traverseproc)PyCData_traverse, /* tp_traverse */
> + (inquiry)PyCData_clear, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + Pointer_getsets, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + (initproc)Pointer_init, /* tp_init */
> + 0, /* tp_alloc */
> + Pointer_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +
> +/******************************************************************/
> +/*
> + * Module initialization.
> + */
> +
> +static const char module_docs[] =
> +"Create and manipulate C compatible data types in Python.";
> +
> +#ifdef MS_WIN32
> +
> +static const char comerror_doc[] = "Raised when a COM method call failed.";
> +
> +int
> +comerror_init(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *hresult, *text, *details;
> + PyObject *a;
> + int status;
> +
> + if (!_PyArg_NoKeywords(Py_TYPE(self)->tp_name, kwds))
> + return -1;
> +
> + if (!PyArg_ParseTuple(args, "OOO:COMError", &hresult, &text, &details))
> + return -1;
> +
> + a = PySequence_GetSlice(args, 1, PySequence_Size(args));
> + if (!a)
> + return -1;
> + status = PyObject_SetAttrString(self, "args", a);
> + Py_DECREF(a);
> + if (status < 0)
> + return -1;
> +
> + if (PyObject_SetAttrString(self, "hresult", hresult) < 0)
> + return -1;
> +
> + if (PyObject_SetAttrString(self, "text", text) < 0)
> + return -1;
> +
> + if (PyObject_SetAttrString(self, "details", details) < 0)
> + return -1;
> +
> + Py_INCREF(args);
> + Py_SETREF(((PyBaseExceptionObject *)self)->args, args);
> +
> + return 0;
> +}
> +
> +static PyTypeObject PyComError_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "_ctypes.COMError", /* tp_name */
> + sizeof(PyBaseExceptionObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /* tp_flags */
> + PyDoc_STR(comerror_doc), /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + (initproc)comerror_init, /* tp_init */
> + 0, /* tp_alloc */
> + 0, /* tp_new */
> +};
> +
> +
> +static int
> +create_comerror(void)
> +{
> + PyComError_Type.tp_base = (PyTypeObject*)PyExc_Exception;
> + if (PyType_Ready(&PyComError_Type) < 0)
> + return -1;
> + Py_INCREF(&PyComError_Type);
> + ComError = (PyObject*)&PyComError_Type;
> + return 0;
> +}
> +
> +#endif
> +
> +static PyObject *
> +string_at(const char *ptr, int size)
> +{
> + if (size == -1)
> + return PyBytes_FromStringAndSize(ptr, strlen(ptr));
> + return PyBytes_FromStringAndSize(ptr, size);
> +}
> +
> +static int
> +cast_check_pointertype(PyObject *arg)
> +{
> + StgDictObject *dict;
> +
> + if (PyCPointerTypeObject_Check(arg))
> + return 1;
> + if (PyCFuncPtrTypeObject_Check(arg))
> + return 1;
> + dict = PyType_stgdict(arg);
> + if (dict != NULL && dict->proto != NULL) {
> + if (PyUnicode_Check(dict->proto)
> + && (strchr("sPzUZXO", PyUnicode_AsUTF8(dict->proto)[0]))) {
> + /* simple pointer types, c_void_p, c_wchar_p, BSTR, ... */
> + return 1;
> + }
> + }
> + PyErr_Format(PyExc_TypeError,
> + "cast() argument 2 must be a pointer type, not %s",
> + PyType_Check(arg)
> + ? ((PyTypeObject *)arg)->tp_name
> + : Py_TYPE(arg)->tp_name);
> + return 0;
> +}
> +
> +static PyObject *
> +cast(void *ptr, PyObject *src, PyObject *ctype)
> +{
> + CDataObject *result;
> + if (0 == cast_check_pointertype(ctype))
> + return NULL;
> + result = (CDataObject *)PyObject_CallFunctionObjArgs(ctype, NULL);
> + if (result == NULL)
> + return NULL;
> +
> + /*
> + The cast object's '_objects' member:
> +
> + It must contain the source object's '_objects'.
> + It must also contain the source object itself.
> + */
> + if (CDataObject_Check(src)) {
> + CDataObject *obj = (CDataObject *)src;
> + CDataObject *container;
> +
> + /* PyCData_GetContainer will initialize src.b_objects, we need
> + this so it can be shared */
> + container = PyCData_GetContainer(obj);
> + if (container == NULL)
> + goto failed;
> +
> + /* But we need a dictionary! */
> + if (obj->b_objects == Py_None) {
> + Py_DECREF(Py_None);
> + obj->b_objects = PyDict_New();
> + if (obj->b_objects == NULL)
> + goto failed;
> + }
> + Py_XINCREF(obj->b_objects);
> + result->b_objects = obj->b_objects;
> + if (result->b_objects && PyDict_CheckExact(result->b_objects)) {
> + PyObject *index;
> + int rc;
> + index = PyLong_FromVoidPtr((void *)src);
> + if (index == NULL)
> + goto failed;
> + rc = PyDict_SetItem(result->b_objects, index, src);
> + Py_DECREF(index);
> + if (rc == -1)
> + goto failed;
> + }
> + }
> + /* Should we assert that result is a pointer type? */
> + memcpy(result->b_ptr, &ptr, sizeof(void *));
> + return (PyObject *)result;
> +
> + failed:
> + Py_DECREF(result);
> + return NULL;
> +}
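
cast() above only adjusts bookkeeping (the shared b_objects dict plus a reference to the source object) and then copies the pointer value; nothing is converted. For example (the value shown assumes a little-endian machine, which covers the IA32/X64 builds here):

    import ctypes

    raw = (ctypes.c_ubyte * 4)(0x44, 0x33, 0x22, 0x11)
    ip = ctypes.cast(raw, ctypes.POINTER(ctypes.c_uint32))
    print(hex(ip[0]))    # 0x11223344 on little-endian
    # ctypes.cast(raw, ctypes.c_int) -> TypeError:
    #   cast() argument 2 must be a pointer type, not c_int
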
> +
> +#ifdef CTYPES_UNICODE
> +static PyObject *
> +wstring_at(const wchar_t *ptr, int size)
> +{
> + Py_ssize_t ssize = size;
> + if (ssize == -1)
> + ssize = wcslen(ptr);
> + return PyUnicode_FromWideChar(ptr, ssize);
> +}
> +#endif
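
string_at/wstring_at are exported by address (_string_at_addr/_wstring_at_addr below) and wrapped by ctypes.string_at()/ctypes.wstring_at(); a size of -1 means "stop at the terminating NUL":

    import ctypes

    buf = ctypes.create_string_buffer(b"hello\x00world")
    print(ctypes.string_at(buf, 11))   # b'hello\x00world'  (explicit size)
    print(ctypes.string_at(buf))       # b'hello'           (strlen semantics)

    wbuf = ctypes.create_unicode_buffer("hi")
    print(ctypes.wstring_at(wbuf))     # 'hi'
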
> +
> +
> +static struct PyModuleDef _ctypesmodule = {
> + PyModuleDef_HEAD_INIT,
> + "_ctypes",
> + module_docs,
> + -1,
> + _ctypes_module_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit__ctypes(void)
> +{
> + PyObject *m;
> +
> +/* Note:
> + ob_type is the metatype (the 'type'), defaults to PyType_Type,
> + tp_base is the base type, defaults to 'object' aka PyBaseObject_Type.
> +*/
> +#ifdef WITH_THREAD
> + PyEval_InitThreads();
> +#endif
> + m = PyModule_Create(&_ctypesmodule);
> + if (!m)
> + return NULL;
> +
> + _ctypes_ptrtype_cache = PyDict_New();
> + if (_ctypes_ptrtype_cache == NULL)
> + return NULL;
> +
> + PyModule_AddObject(m, "_pointer_type_cache", (PyObject *)_ctypes_ptrtype_cache);
> +
> + _unpickle = PyObject_GetAttrString(m, "_unpickle");
> + if (_unpickle == NULL)
> + return NULL;
> +
> + if (PyType_Ready(&PyCArg_Type) < 0)
> + return NULL;
> +
> + if (PyType_Ready(&PyCThunk_Type) < 0)
> + return NULL;
> +
> + /* StgDict is derived from PyDict_Type */
> + PyCStgDict_Type.tp_base = &PyDict_Type;
> + if (PyType_Ready(&PyCStgDict_Type) < 0)
> + return NULL;
> +
> + /*************************************************
> + *
> + * Metaclasses
> + */
> +
> + PyCStructType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&PyCStructType_Type) < 0)
> + return NULL;
> +
> + UnionType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&UnionType_Type) < 0)
> + return NULL;
> +
> + PyCPointerType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&PyCPointerType_Type) < 0)
> + return NULL;
> +
> + PyCArrayType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&PyCArrayType_Type) < 0)
> + return NULL;
> +
> + PyCSimpleType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&PyCSimpleType_Type) < 0)
> + return NULL;
> +
> + PyCFuncPtrType_Type.tp_base = &PyType_Type;
> + if (PyType_Ready(&PyCFuncPtrType_Type) < 0)
> + return NULL;
> +
> + /*************************************************
> + *
> + * Classes using a custom metaclass
> + */
> +
> + if (PyType_Ready(&PyCData_Type) < 0)
> + return NULL;
> +
> + Py_TYPE(&Struct_Type) = &PyCStructType_Type;
> + Struct_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&Struct_Type) < 0)
> + return NULL;
> + Py_INCREF(&Struct_Type);
> + PyModule_AddObject(m, "Structure", (PyObject *)&Struct_Type);
> +
> + Py_TYPE(&Union_Type) = &UnionType_Type;
> + Union_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&Union_Type) < 0)
> + return NULL;
> + Py_INCREF(&Union_Type);
> + PyModule_AddObject(m, "Union", (PyObject *)&Union_Type);
> +
> + Py_TYPE(&PyCPointer_Type) = &PyCPointerType_Type;
> + PyCPointer_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&PyCPointer_Type) < 0)
> + return NULL;
> + Py_INCREF(&PyCPointer_Type);
> + PyModule_AddObject(m, "_Pointer", (PyObject *)&PyCPointer_Type);
> +
> + Py_TYPE(&PyCArray_Type) = &PyCArrayType_Type;
> + PyCArray_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&PyCArray_Type) < 0)
> + return NULL;
> + Py_INCREF(&PyCArray_Type);
> + PyModule_AddObject(m, "Array", (PyObject *)&PyCArray_Type);
> +
> + Py_TYPE(&Simple_Type) = &PyCSimpleType_Type;
> + Simple_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&Simple_Type) < 0)
> + return NULL;
> + Py_INCREF(&Simple_Type);
> + PyModule_AddObject(m, "_SimpleCData", (PyObject *)&Simple_Type);
> +
> + Py_TYPE(&PyCFuncPtr_Type) = &PyCFuncPtrType_Type;
> + PyCFuncPtr_Type.tp_base = &PyCData_Type;
> + if (PyType_Ready(&PyCFuncPtr_Type) < 0)
> + return NULL;
> + Py_INCREF(&PyCFuncPtr_Type);
> + PyModule_AddObject(m, "CFuncPtr", (PyObject *)&PyCFuncPtr_Type);
> +
> + /*************************************************
> + *
> + * Simple classes
> + */
> +
> + /* PyCField_Type is derived from PyBaseObject_Type */
> + if (PyType_Ready(&PyCField_Type) < 0)
> + return NULL;
> +
> + /*************************************************
> + *
> + * Other stuff
> + */
> +
> + DictRemover_Type.tp_new = PyType_GenericNew;
> + if (PyType_Ready(&DictRemover_Type) < 0)
> + return NULL;
> +
> +#ifdef MS_WIN32
> + if (create_comerror() < 0)
> + return NULL;
> + PyModule_AddObject(m, "COMError", ComError);
> +
> + PyModule_AddObject(m, "FUNCFLAG_HRESULT", PyLong_FromLong(FUNCFLAG_HRESULT));
> + PyModule_AddObject(m, "FUNCFLAG_STDCALL", PyLong_FromLong(FUNCFLAG_STDCALL));
> +#endif
> + PyModule_AddObject(m, "FUNCFLAG_CDECL", PyLong_FromLong(FUNCFLAG_CDECL));
> + PyModule_AddObject(m, "FUNCFLAG_USE_ERRNO", PyLong_FromLong(FUNCFLAG_USE_ERRNO));
> + PyModule_AddObject(m, "FUNCFLAG_USE_LASTERROR", PyLong_FromLong(FUNCFLAG_USE_LASTERROR));
> + PyModule_AddObject(m, "FUNCFLAG_PYTHONAPI", PyLong_FromLong(FUNCFLAG_PYTHONAPI));
> + PyModule_AddStringConstant(m, "__version__", "1.1.0");
> +
> + PyModule_AddObject(m, "_memmove_addr", PyLong_FromVoidPtr(memmove));
> + PyModule_AddObject(m, "_memset_addr", PyLong_FromVoidPtr(memset));
> + PyModule_AddObject(m, "_string_at_addr", PyLong_FromVoidPtr(string_at));
> + PyModule_AddObject(m, "_cast_addr", PyLong_FromVoidPtr(cast));
> +#ifdef CTYPES_UNICODE
> + PyModule_AddObject(m, "_wstring_at_addr", PyLong_FromVoidPtr(wstring_at));
> +#endif
> +
> +/* If RTLD_LOCAL is not defined (Windows!), set it to zero. */
> +#if !HAVE_DECL_RTLD_LOCAL
> +#define RTLD_LOCAL 0
> +#endif
> +
> +/* If RTLD_GLOBAL is not defined (cygwin), set it to the same value as
> + RTLD_LOCAL.
> +*/
> +#if !HAVE_DECL_RTLD_GLOBAL
> +#define RTLD_GLOBAL RTLD_LOCAL
> +#endif
> +
> + PyModule_AddObject(m, "RTLD_LOCAL", PyLong_FromLong(RTLD_LOCAL));
> + PyModule_AddObject(m, "RTLD_GLOBAL", PyLong_FromLong(RTLD_GLOBAL));
> +
> + PyExc_ArgError = PyErr_NewException("ctypes.ArgumentError", NULL, NULL);
> + if (PyExc_ArgError) {
> + Py_INCREF(PyExc_ArgError);
> + PyModule_AddObject(m, "ArgumentError", PyExc_ArgError);
> + }
> + return m;
> +}
> +
> +/*
> + Local Variables:
> + compile-command: "cd .. && python setup.py -q build -g && python setup.py -q build install --home ~"
> + End:
> +*/
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
> new file mode 100644
> index 00000000..1d041da2
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
> @@ -0,0 +1,1871 @@
> +/*
> + * History: First version dated from 3/97, derived from my SCMLIB version
> + * for win16.
> + */
> +/*
> + * Related Work:
> + * - calldll http://www.nightmare.com/software.html
> + * - libffi http://sourceware.cygnus.com/libffi/
> + * - ffcall http://clisp.cons.org/~haible/packages-ffcall.html
> + * and, of course, Don Beaudry's MESS package, but this is more ctypes
> + * related.
> + */
> +
> +
> +/*
> + How are functions called, and how are parameters converted to C ?
> +
> + 1. _ctypes.c::PyCFuncPtr_call receives an argument tuple 'inargs' and a
> + keyword dictionary 'kwds'.
> +
> + 2. After several checks, _build_callargs() is called which returns another
> + tuple 'callargs'. This may be the same tuple as 'inargs', a slice of
> + 'inargs', or a completely fresh tuple, depending on several things (is it a
> + COM method?, are 'paramflags' available?).
> +
> + 3. _build_callargs also calculates bitarrays containing indexes into
> + the callargs tuple, specifying how to build the return value(s) of
> + the function.
> +
> + 4. _ctypes_callproc is then called with the 'callargs' tuple. _ctypes_callproc first
> + allocates two arrays. The first is an array of 'struct argument' items, the
> + second array has 'void *' entries.
> +
> + 5. If 'converters' are present (converters is a sequence of argtypes'
> + from_param methods), the converter is called for each item in 'callargs' and the
> + result is passed to ConvParam. If 'converters' are not present, each argument
> + is passed directly to ConvParam.
> +
> + 6. For each arg, ConvParam stores the contained C data (or a pointer to it,
> + for structures) into the 'struct argument' array.
> +
> + 7. Finally, a loop fills the 'void *' array so that each item points to the
> + data contained in or pointed to by the 'struct argument' array.
> +
> + 8. The 'void *' argument array is what _call_function_pointer
> + expects. _call_function_pointer then has very little to do - only some
> + libffi specific stuff, then it calls ffi_call.
> +
> + So, there are 4 data structures holding processed arguments:
> + - the inargs tuple (in PyCFuncPtr_call)
> + - the callargs tuple (in PyCFuncPtr_call)
> + - the 'struct arguments' array
> + - the 'void *' array
> +
> + */
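
For reviewers following the eight steps above from the Python side, a minimal sketch of the inargs -> converters -> ffi_call path (the callable-backed prototype is only there to keep the example self-contained; no shared library is needed):

    import ctypes

    proto = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_char_p)   # int f(char *)
    f = proto(lambda s: len(s))

    print(f(b"abc"))   # 3: b"abc" goes through c_char_p.from_param (a
                       # 'converter'), ConvParam, and finally ffi_call
    # f(123) raises ctypes.ArgumentError (from_param rejects the int)
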
> +
> +#include "Python.h"
> +#include "structmember.h"
> +
> +#ifdef MS_WIN32
> +#include <windows.h>
> +#include <tchar.h>
> +#else
> +#include "ctypes_dlfcn.h"
> +#endif
> +
> +#ifdef MS_WIN32
> +#include <malloc.h>
> +#endif
> +
> +#include <ffi.h>
> +#include "ctypes.h"
> +#ifdef HAVE_ALLOCA_H
> +/* AIX needs alloca.h for alloca() */
> +#include <alloca.h>
> +#endif
> +
> +#ifdef _Py_MEMORY_SANITIZER
> +#include <sanitizer/msan_interface.h>
> +#endif
> +
> +#if defined(_DEBUG) || defined(__MINGW32__)
> +/* Don't use structured exception handling on Windows if this is defined.
> + MingW, AFAIK, doesn't support it.
> +*/
> +#define DONT_USE_SEH
> +#endif
> +
> +#define CTYPES_CAPSULE_NAME_PYMEM "_ctypes pymem"
> +
> +static void pymem_destructor(PyObject *ptr)
> +{
> + void *p = PyCapsule_GetPointer(ptr, CTYPES_CAPSULE_NAME_PYMEM);
> + if (p) {
> + PyMem_Free(p);
> + }
> +}
> +
> +/*
> + ctypes maintains thread-local storage that has space for two error numbers:
> + private copies of the system 'errno' value and, on Windows, the system error code
> + accessed by the GetLastError() and SetLastError() api functions.
> +
> + Foreign functions created with CDLL(..., use_errno=True), when called, swap
> + the system 'errno' value with the private copy just before the actual
> + function call, and swapped again immediately afterwards. The 'use_errno'
> + parameter defaults to False, in this case 'ctypes_errno' is not touched.
> +
> + On Windows, foreign functions created with CDLL(..., use_last_error=True) or
> + WinDLL(..., use_last_error=True) swap the system LastError value with the
> + ctypes private copy.
> +
> + The values are also swapped immediately before and after ctypes callback
> + functions are called, if the callbacks are constructed using the new
> + optional use_errno parameter set to True: CFUNCTYPE(..., use_errno=True) or
> + WINFUNCTYPE(..., use_errno=True).
> +
> + New ctypes functions are provided to access the ctypes private copies from
> + Python:
> +
> + - ctypes.set_errno(value) and ctypes.set_last_error(value) store 'value' in
> + the private copy and return the previous value.
> +
> + - ctypes.get_errno() and ctypes.get_last_error() return the current value
> + of the ctypes private copy.
> +*/
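
The Python-visible half of this, for reference (these touch only the private copy described above; the real C errno is swapped in and out only around calls made with use_errno=True):

    import ctypes

    old = ctypes.set_errno(0)     # returns the previous private value
    ctypes.set_errno(42)
    print(ctypes.get_errno())     # 42
    ctypes.set_errno(old)         # restore
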
> +
> +/*
> + This function creates and returns a thread-local Python object that has
> + space to store two integer error numbers; once created the Python object is
> + kept alive in the thread state dictionary as long as the thread itself.
> +*/
> +PyObject *
> +_ctypes_get_errobj(int **pspace)
> +{
> + PyObject *dict = PyThreadState_GetDict();
> + PyObject *errobj;
> + static PyObject *error_object_name;
> + if (dict == 0) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "cannot get thread state");
> + return NULL;
> + }
> + if (error_object_name == NULL) {
> + error_object_name = PyUnicode_InternFromString("ctypes.error_object");
> + if (error_object_name == NULL)
> + return NULL;
> + }
> + errobj = PyDict_GetItem(dict, error_object_name);
> + if (errobj) {
> + if (!PyCapsule_IsValid(errobj, CTYPES_CAPSULE_NAME_PYMEM)) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "ctypes.error_object is an invalid capsule");
> + return NULL;
> + }
> + Py_INCREF(errobj);
> + }
> + else {
> + void *space = PyMem_Malloc(sizeof(int) * 2);
> + if (space == NULL)
> + return NULL;
> + memset(space, 0, sizeof(int) * 2);
> + errobj = PyCapsule_New(space, CTYPES_CAPSULE_NAME_PYMEM, pymem_destructor);
> + if (errobj == NULL) {
> + PyMem_Free(space);
> + return NULL;
> + }
> + if (-1 == PyDict_SetItem(dict, error_object_name,
> + errobj)) {
> + Py_DECREF(errobj);
> + return NULL;
> + }
> + }
> + *pspace = (int *)PyCapsule_GetPointer(errobj, CTYPES_CAPSULE_NAME_PYMEM);
> + return errobj;
> +}
> +
> +static PyObject *
> +get_error_internal(PyObject *self, PyObject *args, int index)
> +{
> + int *space;
> + PyObject *errobj = _ctypes_get_errobj(&space);
> + PyObject *result;
> +
> + if (errobj == NULL)
> + return NULL;
> + result = PyLong_FromLong(space[index]);
> + Py_DECREF(errobj);
> + return result;
> +}
> +
> +static PyObject *
> +set_error_internal(PyObject *self, PyObject *args, int index)
> +{
> + int new_errno, old_errno;
> + PyObject *errobj;
> + int *space;
> +
> + if (!PyArg_ParseTuple(args, "i", &new_errno))
> + return NULL;
> + errobj = _ctypes_get_errobj(&space);
> + if (errobj == NULL)
> + return NULL;
> + old_errno = space[index];
> + space[index] = new_errno;
> + Py_DECREF(errobj);
> + return PyLong_FromLong(old_errno);
> +}
> +
> +static PyObject *
> +get_errno(PyObject *self, PyObject *args)
> +{
> + return get_error_internal(self, args, 0);
> +}
> +
> +static PyObject *
> +set_errno(PyObject *self, PyObject *args)
> +{
> + return set_error_internal(self, args, 0);
> +}
> +
> +#ifdef MS_WIN32
> +
> +static PyObject *
> +get_last_error(PyObject *self, PyObject *args)
> +{
> + return get_error_internal(self, args, 1);
> +}
> +
> +static PyObject *
> +set_last_error(PyObject *self, PyObject *args)
> +{
> + return set_error_internal(self, args, 1);
> +}
> +
> +PyObject *ComError;
> +
> +static WCHAR *FormatError(DWORD code)
> +{
> + WCHAR *lpMsgBuf;
> + DWORD n;
> + n = FormatMessageW(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM,
> + NULL,
> + code,
> + MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), /* Default language */
> + (LPWSTR) &lpMsgBuf,
> + 0,
> + NULL);
> + if (n) {
> + while (iswspace(lpMsgBuf[n-1]))
> + --n;
> + lpMsgBuf[n] = L'\0'; /* rstrip() */
> + }
> + return lpMsgBuf;
> +}
> +
> +#ifndef DONT_USE_SEH
> +static void SetException(DWORD code, EXCEPTION_RECORD *pr)
> +{
> + /* The 'code' is a normal win32 error code so it could be handled by
> + PyErr_SetFromWindowsErr(). However, for some errors, we have additional
> + information not included in the error code. We handle those here and
> + delegate all others to the generic function. */
> + switch (code) {
> + case EXCEPTION_ACCESS_VIOLATION:
> + /* The thread attempted to read from or write
> + to a virtual address for which it does not
> + have the appropriate access. */
> + if (pr->ExceptionInformation[0] == 0)
> + PyErr_Format(PyExc_OSError,
> + "exception: access violation reading %p",
> + pr->ExceptionInformation[1]);
> + else
> + PyErr_Format(PyExc_OSError,
> + "exception: access violation writing %p",
> + pr->ExceptionInformation[1]);
> + break;
> +
> + case EXCEPTION_BREAKPOINT:
> + /* A breakpoint was encountered. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: breakpoint encountered");
> + break;
> +
> + case EXCEPTION_DATATYPE_MISALIGNMENT:
> + /* The thread attempted to read or write data that is
> + misaligned on hardware that does not provide
> + alignment. For example, 16-bit values must be
> + aligned on 2-byte boundaries, 32-bit values on
> + 4-byte boundaries, and so on. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: datatype misalignment");
> + break;
> +
> + case EXCEPTION_SINGLE_STEP:
> + /* A trace trap or other single-instruction mechanism
> + signaled that one instruction has been executed. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: single step");
> + break;
> +
> + case EXCEPTION_ARRAY_BOUNDS_EXCEEDED:
> + /* The thread attempted to access an array element
> + that is out of bounds, and the underlying hardware
> + supports bounds checking. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: array bounds exceeded");
> + break;
> +
> + case EXCEPTION_FLT_DENORMAL_OPERAND:
> + /* One of the operands in a floating-point operation
> + is denormal. A denormal value is one that is too
> + small to represent as a standard floating-point
> + value. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: floating-point operand denormal");
> + break;
> +
> + case EXCEPTION_FLT_DIVIDE_BY_ZERO:
> + /* The thread attempted to divide a floating-point
> + value by a floating-point divisor of zero. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: float divide by zero");
> + break;
> +
> + case EXCEPTION_FLT_INEXACT_RESULT:
> + /* The result of a floating-point operation cannot be
> + represented exactly as a decimal fraction. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: float inexact");
> + break;
> +
> + case EXCEPTION_FLT_INVALID_OPERATION:
> + /* This exception represents any floating-point
> + exception not included in this list. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: float invalid operation");
> + break;
> +
> + case EXCEPTION_FLT_OVERFLOW:
> + /* The exponent of a floating-point operation is
> + greater than the magnitude allowed by the
> + corresponding type. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: float overflow");
> + break;
> +
> + case EXCEPTION_FLT_STACK_CHECK:
> + /* The stack overflowed or underflowed as the result
> + of a floating-point operation. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: stack over/underflow");
> + break;
> +
> + case EXCEPTION_STACK_OVERFLOW:
> + /* The stack overflowed or underflowed as the result
> + of a floating-point operation. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: stack overflow");
> + break;
> +
> + case EXCEPTION_FLT_UNDERFLOW:
> + /* The exponent of a floating-point operation is less
> + than the magnitude allowed by the corresponding
> + type. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: float underflow");
> + break;
> +
> + case EXCEPTION_INT_DIVIDE_BY_ZERO:
> + /* The thread attempted to divide an integer value by
> + an integer divisor of zero. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: integer divide by zero");
> + break;
> +
> + case EXCEPTION_INT_OVERFLOW:
> + /* The result of an integer operation caused a carry
> + out of the most significant bit of the result. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: integer overflow");
> + break;
> +
> + case EXCEPTION_PRIV_INSTRUCTION:
> + /* The thread attempted to execute an instruction
> + whose operation is not allowed in the current
> + machine mode. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: privileged instruction");
> + break;
> +
> + case EXCEPTION_NONCONTINUABLE_EXCEPTION:
> + /* The thread attempted to continue execution after a
> + noncontinuable exception occurred. */
> + PyErr_SetString(PyExc_OSError,
> + "exception: nocontinuable");
> + break;
> +
> + default:
> + PyErr_SetFromWindowsErr(code);
> + break;
> + }
> +}
> +
> +static DWORD HandleException(EXCEPTION_POINTERS *ptrs,
> + DWORD *pdw, EXCEPTION_RECORD *record)
> +{
> + *pdw = ptrs->ExceptionRecord->ExceptionCode;
> + *record = *ptrs->ExceptionRecord;
> + /* We don't want to catch breakpoint exceptions, they are used to attach
> + * a debugger to the process.
> + */
> + if (*pdw == EXCEPTION_BREAKPOINT)
> + return EXCEPTION_CONTINUE_SEARCH;
> + return EXCEPTION_EXECUTE_HANDLER;
> +}
> +#endif
> +
> +static PyObject *
> +check_hresult(PyObject *self, PyObject *args)
> +{
> + HRESULT hr;
> + if (!PyArg_ParseTuple(args, "i", &hr))
> + return NULL;
> + if (FAILED(hr))
> + return PyErr_SetFromWindowsErr(hr);
> + return PyLong_FromLong(hr);
> +}
> +
> +#endif
> +
> +/**************************************************************/
> +
> +PyCArgObject *
> +PyCArgObject_new(void)
> +{
> + PyCArgObject *p;
> + p = PyObject_New(PyCArgObject, &PyCArg_Type);
> + if (p == NULL)
> + return NULL;
> + p->pffi_type = NULL;
> + p->tag = '\0';
> + p->obj = NULL;
> + memset(&p->value, 0, sizeof(p->value));
> + return p;
> +}
> +
> +static void
> +PyCArg_dealloc(PyCArgObject *self)
> +{
> + Py_XDECREF(self->obj);
> + PyObject_Del(self);
> +}
> +
> +static int
> +is_literal_char(unsigned char c)
> +{
> + return c < 128 && _PyUnicode_IsPrintable(c) && c != '\\' && c != '\'';
> +}
> +
> +static PyObject *
> +PyCArg_repr(PyCArgObject *self)
> +{
> + char buffer[256];
> + switch(self->tag) {
> + case 'b':
> + case 'B':
> + sprintf(buffer, "<cparam '%c' (%d)>",
> + self->tag, self->value.b);
> + break;
> + case 'h':
> + case 'H':
> + sprintf(buffer, "<cparam '%c' (%d)>",
> + self->tag, self->value.h);
> + break;
> + case 'i':
> + case 'I':
> + sprintf(buffer, "<cparam '%c' (%d)>",
> + self->tag, self->value.i);
> + break;
> + case 'l':
> + case 'L':
> + sprintf(buffer, "<cparam '%c' (%ld)>",
> + self->tag, self->value.l);
> + break;
> +
> + case 'q':
> + case 'Q':
> + sprintf(buffer,
> +#ifdef MS_WIN32
> + "<cparam '%c' (%I64d)>",
> +#else
> + "<cparam '%c' (%lld)>",
> +#endif
> + self->tag, self->value.q);
> + break;
> + case 'd':
> + sprintf(buffer, "<cparam '%c' (%f)>",
> + self->tag, self->value.d);
> + break;
> + case 'f':
> + sprintf(buffer, "<cparam '%c' (%f)>",
> + self->tag, self->value.f);
> + break;
> +
> + case 'c':
> + if (is_literal_char((unsigned char)self->value.c)) {
> + sprintf(buffer, "<cparam '%c' ('%c')>",
> + self->tag, self->value.c);
> + }
> + else {
> + sprintf(buffer, "<cparam '%c' ('\\x%02x')>",
> + self->tag, (unsigned char)self->value.c);
> + }
> + break;
> +
> +/* Hm, are these 'z' and 'Z' codes useful at all?
> + Shouldn't they be replaced by the functionality of c_string
> + and c_wstring ?
> +*/
> + case 'z':
> + case 'Z':
> + case 'P':
> + sprintf(buffer, "<cparam '%c' (%p)>",
> + self->tag, self->value.p);
> + break;
> +
> + default:
> + if (is_literal_char((unsigned char)self->tag)) {
> + sprintf(buffer, "<cparam '%c' at %p>",
> + (unsigned char)self->tag, self);
> + }
> + else {
> + sprintf(buffer, "<cparam 0x%02x at %p>",
> + (unsigned char)self->tag, self);
> + }
> + break;
> + }
> + return PyUnicode_FromString(buffer);
> +}
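
The repr built above is what shows up when inspecting the pointer-lookalike
objects returned by byref() further down in this file. A quick sketch on a
stock CPython with ctypes:

    import ctypes

    n = ctypes.c_int(5)
    ref = ctypes.byref(n)     # a PyCArgObject with tag 'P'
    print(repr(ref))          # e.g. <cparam 'P' (0x7f...)>, produced by PyCArg_repr
    print(ref._obj is n)      # True: the read-only _obj member keeps n alive
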
> +
> +static PyMemberDef PyCArgType_members[] = {
> + { "_obj", T_OBJECT,
> + offsetof(PyCArgObject, obj), READONLY,
> + "the wrapped object" },
> + { NULL },
> +};
> +
> +PyTypeObject PyCArg_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "CArgObject",
> + sizeof(PyCArgObject),
> + 0,
> + (destructor)PyCArg_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)PyCArg_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /* tp_flags */
> + 0, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + PyCArgType_members, /* tp_members */
> +};
> +
> +/****************************************************************/
> +/*
> + * Convert a PyObject * into a parameter suitable to pass to a
> + * C function call.
> + *
> + * 1. Python integers are converted to C int and passed by value.
> + * Py_None is converted to a C NULL pointer.
> + *
> + * 2. 3-tuples are expected to have a format character in the first
> + * item, which must be 'i', 'f', 'd', 'q', or 'P'.
> + * The second item must be an integer, float, double, long long, or an
> + * integer denoting an address (void *); it will be converted to the
> + * corresponding C data type and passed by value.
> + *
> + * 3. Other Python objects are tested for an '_as_parameter_' attribute.
> + * The value of this attribute must be an integer which will be passed
> + * by value, or a 2-tuple or 3-tuple which will be used according
> + * to point 2 above. The third item (if any) is ignored; it is normally
> + * used to keep alive the object that this parameter refers to.
> + * XXX This convention is dangerous - you can construct arbitrary tuples
> + * in Python and pass them. Would it be safer to use a custom container
> + * datatype instead of a tuple?
> + *
> + * 4. Other Python objects cannot be passed as parameters - an exception is raised.
> + *
> + * 5. ConvParam will store the converted result in a struct containing format
> + * and value.
> + */
> +
> +union result {
> + char c;
> + char b;
> + short h;
> + int i;
> + long l;
> + long long q;
> + long double D;
> + double d;
> + float f;
> + void *p;
> +};
> +
> +struct argument {
> + ffi_type *ffi_type;
> + PyObject *keep;
> + union result value;
> +};
> +
> +/*
> + * Convert a single Python object into a C call parameter, filling in *pa.
> + */
> +static int ConvParam(PyObject *obj, Py_ssize_t index, struct argument *pa)
> +{
> + StgDictObject *dict;
> + pa->keep = NULL; /* so we cannot forget it later */
> +
> + dict = PyObject_stgdict(obj);
> + if (dict) {
> + PyCArgObject *carg;
> + assert(dict->paramfunc);
> + /* If it has an stgdict, it is a CDataObject */
> + carg = dict->paramfunc((CDataObject *)obj);
> + if (carg == NULL)
> + return -1;
> + pa->ffi_type = carg->pffi_type;
> + memcpy(&pa->value, &carg->value, sizeof(pa->value));
> + pa->keep = (PyObject *)carg;
> + return 0;
> + }
> +
> + if (PyCArg_CheckExact(obj)) {
> + PyCArgObject *carg = (PyCArgObject *)obj;
> + pa->ffi_type = carg->pffi_type;
> + Py_INCREF(obj);
> + pa->keep = obj;
> + memcpy(&pa->value, &carg->value, sizeof(pa->value));
> + return 0;
> + }
> +
> + /* check for None, integer, string or unicode and use directly if successful */
> + if (obj == Py_None) {
> + pa->ffi_type = &ffi_type_pointer;
> + pa->value.p = NULL;
> + return 0;
> + }
> +
> + if (PyLong_Check(obj)) {
> + pa->ffi_type = &ffi_type_sint;
> + pa->value.i = (long)PyLong_AsUnsignedLong(obj);
> + if (pa->value.i == -1 && PyErr_Occurred()) {
> + PyErr_Clear();
> + pa->value.i = PyLong_AsLong(obj);
> + if (pa->value.i == -1 && PyErr_Occurred()) {
> + PyErr_SetString(PyExc_OverflowError,
> + "int too long to convert");
> + return -1;
> + }
> + }
> + return 0;
> + }
> +
> + if (PyBytes_Check(obj)) {
> + pa->ffi_type = &ffi_type_pointer;
> + pa->value.p = PyBytes_AsString(obj);
> + Py_INCREF(obj);
> + pa->keep = obj;
> + return 0;
> + }
> +
> +#ifdef CTYPES_UNICODE
> + if (PyUnicode_Check(obj)) {
> + pa->ffi_type = &ffi_type_pointer;
> + pa->value.p = _PyUnicode_AsWideCharString(obj);
> + if (pa->value.p == NULL)
> + return -1;
> + pa->keep = PyCapsule_New(pa->value.p, CTYPES_CAPSULE_NAME_PYMEM, pymem_destructor);
> + if (!pa->keep) {
> + PyMem_Free(pa->value.p);
> + return -1;
> + }
> + return 0;
> + }
> +#endif
> +
> + {
> + PyObject *arg;
> + arg = PyObject_GetAttrString(obj, "_as_parameter_");
> + /* Which types should we exactly allow here?
> + integers are required for using Python classes
> + as parameters (they have to expose the '_as_parameter_'
> + attribute)
> + */
> + if (arg) {
> + int result;
> + result = ConvParam(arg, index, pa);
> + Py_DECREF(arg);
> + return result;
> + }
> + PyErr_Format(PyExc_TypeError,
> + "Don't know how to convert parameter %d",
> + Py_SAFE_DOWNCAST(index, Py_ssize_t, int));
> + return -1;
> + }
> +}
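
The rules in the comment block above can be exercised without loading any
external library by handing a ctypes callback's address to the
call_cdeclfunction() debugging helper defined later in this file; since no
argtypes are involved, each argument goes straight through ConvParam. A rough
sketch (the MyInt class is made up for illustration):

    import ctypes
    import _ctypes

    ADD = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)
    cb = ADD(lambda a, b: a + b)                   # C-callable wrapper for a lambda
    addr = ctypes.cast(cb, ctypes.c_void_p).value  # raw function pointer value

    class MyInt:
        def __init__(self, v):
            self._as_parameter_ = v                # rule 3: unwrapped by ConvParam

    print(_ctypes.call_cdeclfunction(addr, (2, 3)))         # 5, plain ints (rule 1)
    print(_ctypes.call_cdeclfunction(addr, (MyInt(2), 3)))  # 5, via _as_parameter_
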
> +
> +
> +ffi_type *_ctypes_get_ffi_type(PyObject *obj)
> +{
> + StgDictObject *dict;
> + if (obj == NULL)
> + return &ffi_type_sint;
> + dict = PyType_stgdict(obj);
> + if (dict == NULL)
> + return &ffi_type_sint;
> +#if defined(MS_WIN32) && !defined(_WIN32_WCE)
> + /* This little trick works correctly with MSVC.
> + It returns small structures in registers
> + */
> + if (dict->ffi_type_pointer.type == FFI_TYPE_STRUCT) {
> + if (can_return_struct_as_int(dict->ffi_type_pointer.size))
> + return &ffi_type_sint32;
> + else if (can_return_struct_as_sint64 (dict->ffi_type_pointer.size))
> + return &ffi_type_sint64;
> + }
> +#endif
> + return &dict->ffi_type_pointer;
> +}
> +
> +
> +/*
> + * libffi uses:
> + *
> + * ffi_status ffi_prep_cif(ffi_cif *cif, ffi_abi abi,
> + * unsigned int nargs,
> + * ffi_type *rtype,
> + * ffi_type **atypes);
> + *
> + * and then
> + *
> + * void ffi_call(ffi_cif *cif, void *fn, void *rvalue, void **avalues);
> + */
> +static int _call_function_pointer(int flags,
> + PPROC pProc,
> + void **avalues,
> + ffi_type **atypes,
> + ffi_type *restype,
> + void *resmem,
> + int argcount)
> +{
> +#ifdef WITH_THREAD
> + PyThreadState *_save = NULL; /* For Py_BLOCK_THREADS and Py_UNBLOCK_THREADS */
> +#endif
> + PyObject *error_object = NULL;
> + int *space;
> + ffi_cif cif;
> + int cc;
> +#ifdef MS_WIN32
> + int delta;
> +#ifndef DONT_USE_SEH
> + DWORD dwExceptionCode = 0;
> + EXCEPTION_RECORD record;
> +#endif
> +#endif
> + /* XXX check before here */
> + if (restype == NULL) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "No ffi_type for result");
> + return -1;
> + }
> +
> + cc = FFI_DEFAULT_ABI;
> +#if defined(MS_WIN32) && !defined(MS_WIN64) && !defined(_WIN32_WCE)
> + if ((flags & FUNCFLAG_CDECL) == 0)
> + cc = FFI_STDCALL;
> +#endif
> + if (FFI_OK != ffi_prep_cif(&cif,
> + cc,
> + argcount,
> + restype,
> + atypes)) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "ffi_prep_cif failed");
> + return -1;
> + }
> +
> + if (flags & (FUNCFLAG_USE_ERRNO | FUNCFLAG_USE_LASTERROR)) {
> + error_object = _ctypes_get_errobj(&space);
> + if (error_object == NULL)
> + return -1;
> + }
> +#ifdef WITH_THREAD
> + if ((flags & FUNCFLAG_PYTHONAPI) == 0)
> + Py_UNBLOCK_THREADS
> +#endif
> + if (flags & FUNCFLAG_USE_ERRNO) {
> + int temp = space[0];
> + space[0] = errno;
> + errno = temp;
> + }
> +#ifdef MS_WIN32
> + if (flags & FUNCFLAG_USE_LASTERROR) {
> + int temp = space[1];
> + space[1] = GetLastError();
> + SetLastError(temp);
> + }
> +#ifndef DONT_USE_SEH
> + __try {
> +#endif
> + delta =
> +#endif
> + ffi_call(&cif, (void *)pProc, resmem, avalues);
> +#ifdef MS_WIN32
> +#ifndef DONT_USE_SEH
> + }
> + __except (HandleException(GetExceptionInformation(),
> + &dwExceptionCode, &record)) {
> + ;
> + }
> +#endif
> + if (flags & FUNCFLAG_USE_LASTERROR) {
> + int temp = space[1];
> + space[1] = GetLastError();
> + SetLastError(temp);
> + }
> +#endif
> + if (flags & FUNCFLAG_USE_ERRNO) {
> + int temp = space[0];
> + space[0] = errno;
> + errno = temp;
> + }
> +#ifdef WITH_THREAD
> + if ((flags & FUNCFLAG_PYTHONAPI) == 0)
> + Py_BLOCK_THREADS
> +#endif
> + Py_XDECREF(error_object);
> +#ifdef MS_WIN32
> +#ifndef DONT_USE_SEH
> + if (dwExceptionCode) {
> + SetException(dwExceptionCode, &record);
> + return -1;
> + }
> +#endif
> +#ifdef MS_WIN64
> + if (delta != 0) {
> + PyErr_Format(PyExc_RuntimeError,
> + "ffi_call failed with code %d",
> + delta);
> + return -1;
> + }
> +#else
> + if (delta < 0) {
> + if (flags & FUNCFLAG_CDECL)
> + PyErr_Format(PyExc_ValueError,
> + "Procedure called with not enough "
> + "arguments (%d bytes missing) "
> + "or wrong calling convention",
> + -delta);
> + else
> + PyErr_Format(PyExc_ValueError,
> + "Procedure probably called with not enough "
> + "arguments (%d bytes missing)",
> + -delta);
> + return -1;
> + } else if (delta > 0) {
> + PyErr_Format(PyExc_ValueError,
> + "Procedure probably called with too many "
> + "arguments (%d bytes in excess)",
> + delta);
> + return -1;
> + }
> +#endif
> +#endif
> + if ((flags & FUNCFLAG_PYTHONAPI) && PyErr_Occurred())
> + return -1;
> + return 0;
> +}
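
The errno/GetLastError swapping above is what backs ctypes.get_errno() and
ctypes.set_errno() (registered in the method table at the end of this file).
A sketch assuming a POSIX host where the C runtime can be loaded with
CDLL(None), which is not the UEFI environment this patch targets:

    import ctypes
    import errno

    libc = ctypes.CDLL(None, use_errno=True)   # sets FUNCFLAG_USE_ERRNO for its functions
    libc.open.argtypes = [ctypes.c_char_p, ctypes.c_int]
    libc.open.restype = ctypes.c_int

    ctypes.set_errno(0)
    fd = libc.open(b"/no/such/file", 0)        # 0 == O_RDONLY on Linux
    print(fd)                                  # -1
    print(ctypes.get_errno() == errno.ENOENT)  # True: captured by the swap above
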
> +
> +/*
> + * Convert the C value in result into a Python object, depending on restype.
> + *
> + * - If restype is NULL, return a Python integer.
> + * - If restype is None, return None.
> + * - If restype is a simple ctypes type (c_int, c_void_p), call the type's getfunc,
> + * pass the result to checker and return the result.
> + * - If restype is another ctypes type, return an instance of that.
> + * - Otherwise, call restype and return the result.
> + */
> +static PyObject *GetResult(PyObject *restype, void *result, PyObject *checker)
> +{
> + StgDictObject *dict;
> + PyObject *retval, *v;
> +
> + if (restype == NULL)
> + return PyLong_FromLong(*(int *)result);
> +
> + if (restype == Py_None) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> +
> + dict = PyType_stgdict(restype);
> + if (dict == NULL)
> + return PyObject_CallFunction(restype, "i", *(int *)result);
> +
> + if (dict->getfunc && !_ctypes_simple_instance(restype)) {
> + retval = dict->getfunc(result, dict->size);
> + /* If restype is py_object (detected by comparing getfunc with
> + O_get), we have to call Py_DECREF because O_get has already
> + called Py_INCREF.
> + */
> + if (dict->getfunc == _ctypes_get_fielddesc("O")->getfunc) {
> + Py_DECREF(retval);
> + }
> + } else
> + retval = PyCData_FromBaseObj(restype, NULL, 0, result);
> +
> + if (!checker || !retval)
> + return retval;
> +
> + v = PyObject_CallFunctionObjArgs(checker, retval, NULL);
> + if (v == NULL)
> + _PyTraceback_Add("GetResult", "_ctypes/callproc.c", __LINE__-2);
> + Py_DECREF(retval);
> + return v;
> +}
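
The restype dispatch described in the comment above maps directly onto what a
ctypes caller writes. Another sketch that assumes a loadable C runtime rather
than the UEFI build:

    import ctypes

    libc = ctypes.CDLL(None)                   # assumption: POSIX host
    strlen = libc.strlen
    strlen.argtypes = [ctypes.c_char_p]

    strlen.restype = ctypes.c_size_t           # simple type: its getfunc converts the buffer
    print(strlen(b"hello"))                    # 5

    strlen.restype = hex                       # plain callable: called with the raw int result
    print(strlen(b"hello"))                    # '0x5'

    strlen.restype = None                      # None: the result is dropped
    print(strlen(b"hello"))                    # None
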
> +
> +/*
> + * Raise a new exception 'exc_class', adding additional text to the original
> + * exception string.
> + */
> +void _ctypes_extend_error(PyObject *exc_class, const char *fmt, ...)
> +{
> + va_list vargs;
> + PyObject *tp, *v, *tb, *s, *cls_str, *msg_str;
> +
> + va_start(vargs, fmt);
> + s = PyUnicode_FromFormatV(fmt, vargs);
> + va_end(vargs);
> + if (!s)
> + return;
> +
> + PyErr_Fetch(&tp, &v, &tb);
> + PyErr_NormalizeException(&tp, &v, &tb);
> + cls_str = PyObject_Str(tp);
> + if (cls_str) {
> + PyUnicode_AppendAndDel(&s, cls_str);
> + PyUnicode_AppendAndDel(&s, PyUnicode_FromString(": "));
> + if (s == NULL)
> + goto error;
> + } else
> + PyErr_Clear();
> + msg_str = PyObject_Str(v);
> + if (msg_str)
> + PyUnicode_AppendAndDel(&s, msg_str);
> + else {
> + PyErr_Clear();
> + PyUnicode_AppendAndDel(&s, PyUnicode_FromString("???"));
> + }
> + if (s == NULL)
> + goto error;
> + PyErr_SetObject(exc_class, s);
> +error:
> + Py_XDECREF(tp);
> + Py_XDECREF(v);
> + Py_XDECREF(tb);
> + Py_XDECREF(s);
> +}
> +
> +
> +#ifdef MS_WIN32
> +
> +static PyObject *
> +GetComError(HRESULT errcode, GUID *riid, IUnknown *pIunk)
> +{
> + HRESULT hr;
> + ISupportErrorInfo *psei = NULL;
> + IErrorInfo *pei = NULL;
> + BSTR descr=NULL, helpfile=NULL, source=NULL;
> + GUID guid;
> + DWORD helpcontext=0;
> + LPOLESTR progid;
> + PyObject *obj;
> + LPOLESTR text;
> +
> + /* We absolutely have to release the GIL during COM method calls,
> + otherwise we may get a deadlock!
> + */
> +#ifdef WITH_THREAD
> + Py_BEGIN_ALLOW_THREADS
> +#endif
> +
> + hr = pIunk->lpVtbl->QueryInterface(pIunk, &IID_ISupportErrorInfo, (void **)&psei);
> + if (FAILED(hr))
> + goto failed;
> +
> + hr = psei->lpVtbl->InterfaceSupportsErrorInfo(psei, riid);
> + psei->lpVtbl->Release(psei);
> + if (FAILED(hr))
> + goto failed;
> +
> + hr = GetErrorInfo(0, &pei);
> + if (hr != S_OK)
> + goto failed;
> +
> + pei->lpVtbl->GetDescription(pei, &descr);
> + pei->lpVtbl->GetGUID(pei, &guid);
> + pei->lpVtbl->GetHelpContext(pei, &helpcontext);
> + pei->lpVtbl->GetHelpFile(pei, &helpfile);
> + pei->lpVtbl->GetSource(pei, &source);
> +
> + pei->lpVtbl->Release(pei);
> +
> + failed:
> +#ifdef WITH_THREAD
> + Py_END_ALLOW_THREADS
> +#endif
> +
> + progid = NULL;
> + ProgIDFromCLSID(&guid, &progid);
> +
> + text = FormatError(errcode);
> + obj = Py_BuildValue(
> + "iu(uuuiu)",
> + errcode,
> + text,
> + descr, source, helpfile, helpcontext,
> + progid);
> + if (obj) {
> + PyErr_SetObject(ComError, obj);
> + Py_DECREF(obj);
> + }
> + LocalFree(text);
> +
> + if (descr)
> + SysFreeString(descr);
> + if (helpfile)
> + SysFreeString(helpfile);
> + if (source)
> + SysFreeString(source);
> +
> + return NULL;
> +}
> +#endif
> +
> +#if (defined(__x86_64__) && (defined(__MINGW64__) || defined(__CYGWIN__))) || \
> + defined(__aarch64__)
> +#define CTYPES_PASS_BY_REF_HACK
> +#define POW2(x) (((x & ~(x - 1)) == x) ? x : 0)
> +#define IS_PASS_BY_REF(x) (x > 8 || !POW2(x))
> +#endif
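
For reference, POW2() evaluates to x when x is a power of two and to 0
otherwise, so IS_PASS_BY_REF() flags any argument that is larger than 8 bytes
or whose size is not a power of two, which is the Win64/AArch64
pass-by-reference rule. The same test spelled out in Python:

    def is_pass_by_ref(size):
        # POW2(x): x if x is a power of two, else 0
        pow2 = size if (size & ~(size - 1)) == size else 0
        return size > 8 or not pow2

    for s in (1, 2, 3, 4, 8, 12, 16):
        print(s, is_pass_by_ref(s))   # 3, 12 and 16 are passed by reference
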
> +
> +/*
> + * Requirements, must be ensured by the caller:
> + * - argtuple is a tuple of arguments
> + * - argtypes is either NULL, or a tuple of the same size as argtuple
> + *
> + * - XXX various requirements for restype, not yet collected
> + */
> +PyObject *_ctypes_callproc(PPROC pProc,
> + PyObject *argtuple,
> +#ifdef MS_WIN32
> + IUnknown *pIunk,
> + GUID *iid,
> +#endif
> + int flags,
> + PyObject *argtypes, /* misleading name: This is a tuple of
> + methods, not types: the .from_param
> + class methods of the types */
> + PyObject *restype,
> + PyObject *checker)
> +{
> + Py_ssize_t i, n, argcount, argtype_count;
> + void *resbuf;
> + struct argument *args, *pa;
> + ffi_type **atypes;
> + ffi_type *rtype;
> + void **avalues;
> + PyObject *retval = NULL;
> +
> + n = argcount = PyTuple_GET_SIZE(argtuple);
> +#ifdef MS_WIN32
> + /* an optional COM object this pointer */
> + if (pIunk)
> + ++argcount;
> +#endif
> +
> +#ifdef UEFI_C_SOURCE
> + args = (struct argument *)malloc(sizeof(struct argument) * argcount);
> +#else
> + args = (struct argument *)alloca(sizeof(struct argument) * argcount);
> +#endif // UEFI_C_SOURCE
> + if (!args) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + memset(args, 0, sizeof(struct argument) * argcount);
> + argtype_count = argtypes ? PyTuple_GET_SIZE(argtypes) : 0;
> +#ifdef MS_WIN32
> + if (pIunk) {
> + args[0].ffi_type = &ffi_type_pointer;
> + args[0].value.p = pIunk;
> + pa = &args[1];
> + } else
> +#endif
> + pa = &args[0];
> +
> + /* Convert the arguments */
> + for (i = 0; i < n; ++i, ++pa) {
> + PyObject *converter;
> + PyObject *arg;
> + int err;
> +
> + arg = PyTuple_GET_ITEM(argtuple, i); /* borrowed ref */
> + /* For cdecl functions, we allow more actual arguments
> + than the length of the argtypes tuple.
> + This is checked in _ctypes::PyCFuncPtr_Call
> + */
> + if (argtypes && argtype_count > i) {
> + PyObject *v;
> + converter = PyTuple_GET_ITEM(argtypes, i);
> + v = PyObject_CallFunctionObjArgs(converter, arg, NULL);
> + if (v == NULL) {
> + _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
> + goto cleanup;
> + }
> +
> + err = ConvParam(v, i+1, pa);
> + Py_DECREF(v);
> + if (-1 == err) {
> + _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
> + goto cleanup;
> + }
> + } else {
> + err = ConvParam(arg, i+1, pa);
> + if (-1 == err) {
> + _ctypes_extend_error(PyExc_ArgError, "argument %d: ", i+1);
> + goto cleanup; /* leaking ? */
> + }
> + }
> + }
> +
> + rtype = _ctypes_get_ffi_type(restype);
> +
> +
> +#ifdef UEFI_C_SOURCE
> + resbuf = malloc(max(rtype->size, sizeof(ffi_arg)));
> +#ifdef _Py_MEMORY_SANITIZER
> + /* ffi_call actually initializes resbuf, but from asm, which
> + * MemorySanitizer can't detect. Avoid false positives from MSan. */
> + if (resbuf != NULL) {
> + __msan_unpoison(resbuf, max(rtype->size, sizeof(ffi_arg)));
> + }
> +#endif
> +
> + avalues = (void **)malloc(sizeof(void *)* argcount);
> + atypes = (ffi_type **)malloc(sizeof(ffi_type *)* argcount);
> +#else
> + resbuf = alloca(max(rtype->size, sizeof(ffi_arg)));
> +#ifdef _Py_MEMORY_SANITIZER
> + /* ffi_call actually initializes resbuf, but from asm, which
> + * MemorySanitizer can't detect. Avoid false positives from MSan. */
> + if (resbuf != NULL) {
> + __msan_unpoison(resbuf, max(rtype->size, sizeof(ffi_arg)));
> + }
> +#endif
> + avalues = (void **)alloca(sizeof(void *) * argcount);
> + atypes = (ffi_type **)alloca(sizeof(ffi_type *) * argcount);
> +#endif //UEFI_C_SOURCE
> + if (!resbuf || !avalues || !atypes) {
> + PyErr_NoMemory();
> + goto cleanup;
> + }
> + for (i = 0; i < argcount; ++i) {
> + atypes[i] = args[i].ffi_type;
> +#ifdef CTYPES_PASS_BY_REF_HACK
> + size_t size = atypes[i]->size;
> + if (IS_PASS_BY_REF(size)) {
> + void *tmp = alloca(size);
> + if (atypes[i]->type == FFI_TYPE_STRUCT)
> + memcpy(tmp, args[i].value.p, size);
> + else
> + memcpy(tmp, (void*)&args[i].value, size);
> +
> + avalues[i] = tmp;
> + }
> + else
> +#endif
> + if (atypes[i]->type == FFI_TYPE_STRUCT)
> + avalues[i] = (void *)args[i].value.p;
> + else
> + avalues[i] = (void *)&args[i].value;
> + }
> +
> + if (-1 == _call_function_pointer(flags, pProc, avalues, atypes,
> + rtype, resbuf,
> + Py_SAFE_DOWNCAST(argcount,
> + Py_ssize_t,
> + int)))
> + goto cleanup;
> +
> +#ifdef WORDS_BIGENDIAN
> + /* libffi returns the result in a buffer with sizeof(ffi_arg). This
> + causes problems on big endian machines, since the result buffer
> + address cannot simply be used as result pointer, instead we must
> + adjust the pointer value:
> + */
> + /*
> + XXX I should find out and clarify why this is needed at all,
> + especially why adjusting for ffi_type_float must be avoided on
> + 64-bit platforms.
> + */
> + if (rtype->type != FFI_TYPE_FLOAT
> + && rtype->type != FFI_TYPE_STRUCT
> + && rtype->size < sizeof(ffi_arg))
> + resbuf = (char *)resbuf + sizeof(ffi_arg) - rtype->size;
> +#endif
> +
> +#ifdef MS_WIN32
> + if (iid && pIunk) {
> + if (*(int *)resbuf & 0x80000000)
> + retval = GetComError(*(HRESULT *)resbuf, iid, pIunk);
> + else
> + retval = PyLong_FromLong(*(int *)resbuf);
> + } else if (flags & FUNCFLAG_HRESULT) {
> + if (*(int *)resbuf & 0x80000000)
> + retval = PyErr_SetFromWindowsErr(*(int *)resbuf);
> + else
> + retval = PyLong_FromLong(*(int *)resbuf);
> + } else
> +#endif
> + retval = GetResult(restype, resbuf, checker);
> + cleanup:
> + for (i = 0; i < argcount; ++i)
> + Py_XDECREF(args[i].keep);
> + return retval;
> +}
> +
> +static int
> +_parse_voidp(PyObject *obj, void **address)
> +{
> + *address = PyLong_AsVoidPtr(obj);
> + if (*address == NULL)
> + return 0;
> + return 1;
> +}
> +
> +#ifdef MS_WIN32
> +
> +static const char format_error_doc[] =
> +"FormatError([integer]) -> string\n\
> +\n\
> +Convert a win32 error code into a string. If the error code is not\n\
> +given, the return value of a call to GetLastError() is used.\n";
> +static PyObject *format_error(PyObject *self, PyObject *args)
> +{
> + PyObject *result;
> + wchar_t *lpMsgBuf;
> + DWORD code = 0;
> + if (!PyArg_ParseTuple(args, "|i:FormatError", &code))
> + return NULL;
> + if (code == 0)
> + code = GetLastError();
> + lpMsgBuf = FormatError(code);
> + if (lpMsgBuf) {
> + result = PyUnicode_FromWideChar(lpMsgBuf, wcslen(lpMsgBuf));
> + LocalFree(lpMsgBuf);
> + } else {
> + result = PyUnicode_FromString("<no description>");
> + }
> + return result;
> +}
> +
> +static const char load_library_doc[] =
> +"LoadLibrary(name) -> handle\n\
> +\n\
> +Load an executable (usually a DLL), and return a handle to it.\n\
> +The handle may be used to locate exported functions in this\n\
> +module.\n";
> +static PyObject *load_library(PyObject *self, PyObject *args)
> +{
> + const WCHAR *name;
> + PyObject *nameobj;
> + PyObject *ignored;
> + HMODULE hMod;
> +
> + if (!PyArg_ParseTuple(args, "U|O:LoadLibrary", &nameobj, &ignored))
> + return NULL;
> +
> + name = _PyUnicode_AsUnicode(nameobj);
> + if (!name)
> + return NULL;
> +
> + hMod = LoadLibraryW(name);
> + if (!hMod)
> + return PyErr_SetFromWindowsErr(GetLastError());
> +#ifdef _WIN64
> + return PyLong_FromVoidPtr(hMod);
> +#else
> + return Py_BuildValue("i", hMod);
> +#endif
> +}
> +
> +static const char free_library_doc[] =
> +"FreeLibrary(handle) -> void\n\
> +\n\
> +Free the handle of an executable previously loaded by LoadLibrary.\n";
> +static PyObject *free_library(PyObject *self, PyObject *args)
> +{
> + void *hMod;
> + if (!PyArg_ParseTuple(args, "O&:FreeLibrary", &_parse_voidp, &hMod))
> + return NULL;
> + if (!FreeLibrary((HMODULE)hMod))
> + return PyErr_SetFromWindowsErr(GetLastError());
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
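
FormatError/LoadLibrary/FreeLibrary above are Windows-only helpers (the UEFI
method table at the end of this file compiles them out). On a Windows host
they can be driven directly from the private _ctypes module, roughly:

    import _ctypes                                 # Windows-only sketch, not UEFI

    handle = _ctypes.LoadLibrary("kernel32.dll")   # HMODULE as a Python int
    print(_ctypes.FormatError(5))                  # "Access is denied." (localized)
    _ctypes.FreeLibrary(handle)                    # returns None
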
> +
> +static const char copy_com_pointer_doc[] =
> +"CopyComPointer(src, dst) -> HRESULT value\n";
> +
> +static PyObject *
> +copy_com_pointer(PyObject *self, PyObject *args)
> +{
> + PyObject *p1, *p2, *r = NULL;
> + struct argument a, b;
> + IUnknown *src, **pdst;
> + if (!PyArg_ParseTuple(args, "OO:CopyComPointer", &p1, &p2))
> + return NULL;
> + a.keep = b.keep = NULL;
> +
> + if (-1 == ConvParam(p1, 0, &a) || -1 == ConvParam(p2, 1, &b))
> + goto done;
> + src = (IUnknown *)a.value.p;
> + pdst = (IUnknown **)b.value.p;
> +
> + if (pdst == NULL)
> + r = PyLong_FromLong(E_POINTER);
> + else {
> + if (src)
> + src->lpVtbl->AddRef(src);
> + *pdst = src;
> + r = PyLong_FromLong(S_OK);
> + }
> + done:
> + Py_XDECREF(a.keep);
> + Py_XDECREF(b.keep);
> + return r;
> +}
> +#else
> +
> +#ifndef UEFI_C_SOURCE
> +static PyObject *py_dl_open(PyObject *self, PyObject *args)
> +{
> + PyObject *name, *name2;
> + char *name_str;
> + void * handle;
> +#if HAVE_DECL_RTLD_LOCAL
> + int mode = RTLD_NOW | RTLD_LOCAL;
> +#else
> + /* cygwin doesn't define RTLD_LOCAL */
> + int mode = RTLD_NOW;
> +#endif
> + if (!PyArg_ParseTuple(args, "O|i:dlopen", &name, &mode))
> + return NULL;
> + mode |= RTLD_NOW;
> + if (name != Py_None) {
> + if (PyUnicode_FSConverter(name, &name2) == 0)
> + return NULL;
> + if (PyBytes_Check(name2))
> + name_str = PyBytes_AS_STRING(name2);
> + else
> + name_str = PyByteArray_AS_STRING(name2);
> + } else {
> + name_str = NULL;
> + name2 = NULL;
> + }
> + handle = ctypes_dlopen(name_str, mode);
> + Py_XDECREF(name2);
> + if (!handle) {
> + char *errmsg = ctypes_dlerror();
> + if (!errmsg)
> + errmsg = "dlopen() error";
> + PyErr_SetString(PyExc_OSError,
> + errmsg);
> + return NULL;
> + }
> + return PyLong_FromVoidPtr(handle);
> +}
> +
> +static PyObject *py_dl_close(PyObject *self, PyObject *args)
> +{
> + void *handle;
> +
> + if (!PyArg_ParseTuple(args, "O&:dlclose", &_parse_voidp, &handle))
> + return NULL;
> + if (dlclose(handle)) {
> + PyErr_SetString(PyExc_OSError,
> + ctypes_dlerror());
> + return NULL;
> + }
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +static PyObject *py_dl_sym(PyObject *self, PyObject *args)
> +{
> + char *name;
> + void *handle;
> + void *ptr;
> +
> + if (!PyArg_ParseTuple(args, "O&s:dlsym",
> + &_parse_voidp, &handle, &name))
> + return NULL;
> + ptr = ctypes_dlsym((void*)handle, name);
> + if (!ptr) {
> + PyErr_SetString(PyExc_OSError,
> + ctypes_dlerror());
> + return NULL;
> + }
> + return PyLong_FromVoidPtr(ptr);
> +}
> +#endif // UEFI_C_SOURCE
> +#endif
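
The dlopen/dlclose/dlsym wrappers above back ctypes.CDLL on POSIX builds;
under UEFI_C_SOURCE they are compiled out, so the following sketch only
applies to a conventional POSIX host:

    import ctypes
    import ctypes.util

    name = ctypes.util.find_library("m")               # e.g. 'libm.so.6'
    libm = ctypes.CDLL(name, mode=ctypes.RTLD_LOCAL)   # goes through py_dl_open()
    libm.sqrt.restype = ctypes.c_double
    libm.sqrt.argtypes = [ctypes.c_double]
    print(libm.sqrt(2.0))                              # 1.414...
    print(hex(libm._handle))                           # the handle dlopen() returned
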
> +
> +/*
> + * Only for debugging so far: So that we can call CFunction instances
> + *
> + * XXX Needs to accept more arguments: flags, argtypes, restype
> + */
> +static PyObject *
> +call_function(PyObject *self, PyObject *args)
> +{
> + void *func;
> + PyObject *arguments;
> + PyObject *result;
> +
> + if (!PyArg_ParseTuple(args,
> + "O&O!",
> + &_parse_voidp, &func,
> + &PyTuple_Type, &arguments))
> + return NULL;
> +
> + result = _ctypes_callproc((PPROC)func,
> + arguments,
> +#ifdef MS_WIN32
> + NULL,
> + NULL,
> +#endif
> + 0, /* flags */
> + NULL, /* self->argtypes */
> + NULL, /* self->restype */
> + NULL); /* checker */
> + return result;
> +}
> +
> +/*
> + * Only for debugging so far: So that we can call CFunction instances
> + *
> + * XXX Needs to accept more arguments: flags, argtypes, restype
> + */
> +static PyObject *
> +call_cdeclfunction(PyObject *self, PyObject *args)
> +{
> + void *func;
> + PyObject *arguments;
> + PyObject *result;
> +
> + if (!PyArg_ParseTuple(args,
> + "O&O!",
> + &_parse_voidp, &func,
> + &PyTuple_Type, &arguments))
> + return NULL;
> +
> + result = _ctypes_callproc((PPROC)func,
> + arguments,
> +#ifdef MS_WIN32
> + NULL,
> + NULL,
> +#endif
> + FUNCFLAG_CDECL, /* flags */
> + NULL, /* self->argtypes */
> + NULL, /* self->restype */
> + NULL); /* checker */
> + return result;
> +}
> +
> +/*****************************************************************
> + * functions
> + */
> +static const char sizeof_doc[] =
> +"sizeof(C type) -> integer\n"
> +"sizeof(C instance) -> integer\n"
> +"Return the size in bytes of a C instance";
> +
> +static PyObject *
> +sizeof_func(PyObject *self, PyObject *obj)
> +{
> + StgDictObject *dict;
> +
> + dict = PyType_stgdict(obj);
> + if (dict)
> + return PyLong_FromSsize_t(dict->size);
> +
> + if (CDataObject_Check(obj))
> + return PyLong_FromSsize_t(((CDataObject *)obj)->b_size);
> + PyErr_SetString(PyExc_TypeError,
> + "this type has no size");
> + return NULL;
> +}
> +
> +static const char alignment_doc[] =
> +"alignment(C type) -> integer\n"
> +"alignment(C instance) -> integer\n"
> +"Return the alignment requirements of a C instance";
> +
> +static PyObject *
> +align_func(PyObject *self, PyObject *obj)
> +{
> + StgDictObject *dict;
> +
> + dict = PyType_stgdict(obj);
> + if (dict)
> + return PyLong_FromSsize_t(dict->align);
> +
> + dict = PyObject_stgdict(obj);
> + if (dict)
> + return PyLong_FromSsize_t(dict->align);
> +
> + PyErr_SetString(PyExc_TypeError,
> + "no alignment info");
> + return NULL;
> +}
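
As the two stgdict lookups above show, sizeof() and alignment() accept either
a ctypes type or an instance. For example:

    import ctypes

    class Point(ctypes.Structure):
        _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

    p = Point(1, 2)
    print(ctypes.sizeof(Point), ctypes.sizeof(p))        # 8 8 (type or instance)
    print(ctypes.alignment(Point), ctypes.alignment(p))  # 4 4 (alignment of c_int)
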
> +
> +static const char byref_doc[] =
> +"byref(C instance[, offset=0]) -> byref-object\n"
> +"Return a pointer lookalike to a C instance, only usable\n"
> +"as function argument";
> +
> +/*
> + * We must return something which can be converted to a parameter,
> + * but still has a reference to self.
> + */
> +static PyObject *
> +byref(PyObject *self, PyObject *args)
> +{
> + PyCArgObject *parg;
> + PyObject *obj;
> + PyObject *pyoffset = NULL;
> + Py_ssize_t offset = 0;
> +
> + if (!PyArg_UnpackTuple(args, "byref", 1, 2,
> + &obj, &pyoffset))
> + return NULL;
> + if (pyoffset) {
> + offset = PyNumber_AsSsize_t(pyoffset, NULL);
> + if (offset == -1 && PyErr_Occurred())
> + return NULL;
> + }
> + if (!CDataObject_Check(obj)) {
> + PyErr_Format(PyExc_TypeError,
> + "byref() argument must be a ctypes instance, not '%s'",
> + Py_TYPE(obj)->tp_name);
> + return NULL;
> + }
> +
> + parg = PyCArgObject_new();
> + if (parg == NULL)
> + return NULL;
> +
> + parg->tag = 'P';
> + parg->pffi_type = &ffi_type_pointer;
> + Py_INCREF(obj);
> + parg->obj = obj;
> + parg->value.p = (char *)((CDataObject *)obj)->b_ptr + offset;
> + return (PyObject *)parg;
> +}
> +
> +static const char addressof_doc[] =
> +"addressof(C instance) -> integer\n"
> +"Return the address of the C instance internal buffer";
> +
> +static PyObject *
> +addressof(PyObject *self, PyObject *obj)
> +{
> + if (CDataObject_Check(obj))
> + return PyLong_FromVoidPtr(((CDataObject *)obj)->b_ptr);
> + PyErr_SetString(PyExc_TypeError,
> + "invalid type");
> + return NULL;
> +}
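
byref() above builds a PyCArgObject that points offset bytes into the
instance buffer and is only usable as a foreign-function argument, while
addressof() exposes the same buffer address as a plain integer. Sketch:

    import ctypes

    arr = (ctypes.c_int * 4)(10, 20, 30, 40)
    base = ctypes.addressof(arr)
    step = ctypes.sizeof(ctypes.c_int)
    ref = ctypes.byref(arr, 2 * step)   # pointer-lookalike aimed at arr[2]
    # the same offset applied to the raw address reaches the same element
    print(ctypes.c_int.from_address(base + 2 * step).value)   # 30
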
> +
> +static int
> +converter(PyObject *obj, void **address)
> +{
> + *address = PyLong_AsVoidPtr(obj);
> + return *address != NULL;
> +}
> +
> +static PyObject *
> +My_PyObj_FromPtr(PyObject *self, PyObject *args)
> +{
> + PyObject *ob;
> + if (!PyArg_ParseTuple(args, "O&:PyObj_FromPtr", converter, &ob))
> + return NULL;
> + Py_INCREF(ob);
> + return ob;
> +}
> +
> +static PyObject *
> +My_Py_INCREF(PyObject *self, PyObject *arg)
> +{
> + Py_INCREF(arg); /* that's what this function is for */
> + Py_INCREF(arg); /* that's for returning it */
> + return arg;
> +}
> +
> +static PyObject *
> +My_Py_DECREF(PyObject *self, PyObject *arg)
> +{
> + Py_DECREF(arg); /* that's what this function is for */
> + Py_INCREF(arg); /* that's for returning it */
> + return arg;
> +}
> +
> +static PyObject *
> +resize(PyObject *self, PyObject *args)
> +{
> + CDataObject *obj;
> + StgDictObject *dict;
> + Py_ssize_t size;
> +
> + if (!PyArg_ParseTuple(args,
> + "On:resize",
> + &obj, &size))
> + return NULL;
> +
> + dict = PyObject_stgdict((PyObject *)obj);
> + if (dict == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "expected ctypes instance");
> + return NULL;
> + }
> + if (size < dict->size) {
> + PyErr_Format(PyExc_ValueError,
> + "minimum size is %zd",
> + dict->size);
> + return NULL;
> + }
> + if (obj->b_needsfree == 0) {
> + PyErr_Format(PyExc_ValueError,
> + "Memory cannot be resized because this object doesn't own it");
> + return NULL;
> + }
> + if ((size_t)size <= sizeof(obj->b_value)) {
> + /* internal default buffer is large enough */
> + obj->b_size = size;
> + goto done;
> + }
> + if (!_CDataObject_HasExternalBuffer(obj)) {
> + /* We are currently using the object's default buffer, but it
> + isn't large enough any more. */
> + void *ptr = PyMem_Malloc(size);
> + if (ptr == NULL)
> + return PyErr_NoMemory();
> + memset(ptr, 0, size);
> + memmove(ptr, obj->b_ptr, obj->b_size);
> + obj->b_ptr = ptr;
> + obj->b_size = size;
> + } else {
> + void * ptr = PyMem_Realloc(obj->b_ptr, size);
> + if (ptr == NULL)
> + return PyErr_NoMemory();
> + obj->b_ptr = ptr;
> + obj->b_size = size;
> + }
> + done:
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
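
resize() above grows an instance's buffer (never below the type's own size),
and sizeof() of that instance then reports the updated b_size. A quick check:

    import ctypes

    buf = (ctypes.c_char * 4)()
    print(ctypes.sizeof(buf))      # 4
    ctypes.resize(buf, 16)         # allowed: >= sizeof(type) and buf owns its memory
    print(ctypes.sizeof(buf))      # 16
    try:
        ctypes.resize(buf, 2)      # smaller than the type
    except ValueError as e:
        print(e)                   # minimum size is 4
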
> +
> +static PyObject *
> +unpickle(PyObject *self, PyObject *args)
> +{
> + PyObject *typ;
> + PyObject *state;
> + PyObject *result;
> + PyObject *tmp;
> + _Py_IDENTIFIER(__new__);
> + _Py_IDENTIFIER(__setstate__);
> +
> + if (!PyArg_ParseTuple(args, "OO", &typ, &state))
> + return NULL;
> + result = _PyObject_CallMethodId(typ, &PyId___new__, "O", typ);
> + if (result == NULL)
> + return NULL;
> + tmp = _PyObject_CallMethodId(result, &PyId___setstate__, "O", state);
> + if (tmp == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +static PyObject *
> +POINTER(PyObject *self, PyObject *cls)
> +{
> + PyObject *result;
> + PyTypeObject *typ;
> + PyObject *key;
> + char *buf;
> +
> + result = PyDict_GetItem(_ctypes_ptrtype_cache, cls);
> + if (result) {
> + Py_INCREF(result);
> + return result;
> + }
> + if (PyUnicode_CheckExact(cls)) {
> + const char *name = PyUnicode_AsUTF8(cls);
> + if (name == NULL)
> + return NULL;
> + buf = PyMem_Malloc(strlen(name) + 3 + 1);
> + if (buf == NULL)
> + return PyErr_NoMemory();
> + sprintf(buf, "LP_%s", name);
> + result = PyObject_CallFunction((PyObject *)Py_TYPE(&PyCPointer_Type),
> + "s(O){}",
> + buf,
> + &PyCPointer_Type);
> + PyMem_Free(buf);
> + if (result == NULL)
> + return result;
> + key = PyLong_FromVoidPtr(result);
> + if (key == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + } else if (PyType_Check(cls)) {
> + typ = (PyTypeObject *)cls;
> + buf = PyMem_Malloc(strlen(typ->tp_name) + 3 + 1);
> + if (buf == NULL)
> + return PyErr_NoMemory();
> + sprintf(buf, "LP_%s", typ->tp_name);
> + result = PyObject_CallFunction((PyObject *)Py_TYPE(&PyCPointer_Type),
> + "s(O){sO}",
> + buf,
> + &PyCPointer_Type,
> + "_type_", cls);
> + PyMem_Free(buf);
> + if (result == NULL)
> + return result;
> + Py_INCREF(cls);
> + key = cls;
> + } else {
> + PyErr_SetString(PyExc_TypeError, "must be a ctypes type");
> + return NULL;
> + }
> + if (-1 == PyDict_SetItem(_ctypes_ptrtype_cache, key, result)) {
> + Py_DECREF(result);
> + Py_DECREF(key);
> + return NULL;
> + }
> + Py_DECREF(key);
> + return result;
> +}
> +
> +static PyObject *
> +pointer(PyObject *self, PyObject *arg)
> +{
> + PyObject *result;
> + PyObject *typ;
> +
> + typ = PyDict_GetItem(_ctypes_ptrtype_cache, (PyObject *)Py_TYPE(arg));
> + if (typ)
> + return PyObject_CallFunctionObjArgs(typ, arg, NULL);
> + typ = POINTER(NULL, (PyObject *)Py_TYPE(arg));
> + if (typ == NULL)
> + return NULL;
> + result = PyObject_CallFunctionObjArgs(typ, arg, NULL);
> + Py_DECREF(typ);
> + return result;
> +}
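
POINTER() caches each pointer type it creates in _ctypes_ptrtype_cache, and
pointer() simply instantiates the cached type, as the lookups above show:

    import ctypes

    LP_c_int = ctypes.POINTER(ctypes.c_int)        # created once, then cached
    assert ctypes.POINTER(ctypes.c_int) is LP_c_int

    n = ctypes.c_int(7)
    p = ctypes.pointer(n)                          # instance of the cached LP_c_int
    print(type(p) is LP_c_int, p.contents.value)   # True 7

    Incomplete = ctypes.POINTER("FOO")             # string form: forward declaration
    print(Incomplete.__name__)                     # LP_FOO
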
> +
> +static PyObject *
> +buffer_info(PyObject *self, PyObject *arg)
> +{
> + StgDictObject *dict = PyType_stgdict(arg);
> + PyObject *shape;
> + Py_ssize_t i;
> +
> + if (dict == NULL)
> + dict = PyObject_stgdict(arg);
> + if (dict == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "not a ctypes type or object");
> + return NULL;
> + }
> + shape = PyTuple_New(dict->ndim);
> + if (shape == NULL)
> + return NULL;
> + for (i = 0; i < (int)dict->ndim; ++i)
> + PyTuple_SET_ITEM(shape, i, PyLong_FromSsize_t(dict->shape[i]));
> +
> + if (PyErr_Occurred()) {
> + Py_DECREF(shape);
> + return NULL;
> + }
> + return Py_BuildValue("siN", dict->format, dict->ndim, shape);
> +}
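
buffer_info() is a small introspection helper (exposed as _ctypes.buffer_info)
that reports the PEP 3118 format string, the number of dimensions and the
shape stored in the stgdict. Roughly:

    import ctypes
    import _ctypes

    print(_ctypes.buffer_info(ctypes.c_int))      # e.g. ('<i', 0, ()) on little-endian
    print(_ctypes.buffer_info(ctypes.c_int * 3))  # element format, ndim 1, shape (3,)
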
> +
> +PyMethodDef _ctypes_module_methods[] = {
> + {"get_errno", get_errno, METH_NOARGS},
> + {"set_errno", set_errno, METH_VARARGS},
> + {"POINTER", POINTER, METH_O },
> + {"pointer", pointer, METH_O },
> + {"_unpickle", unpickle, METH_VARARGS },
> + {"buffer_info", buffer_info, METH_O, "Return buffer interface information"},
> + {"resize", resize, METH_VARARGS, "Resize the memory buffer of a ctypes instance"},
> +#ifndef UEFI_C_SOURCE
> +#ifdef MS_WIN32
> + {"get_last_error", get_last_error, METH_NOARGS},
> + {"set_last_error", set_last_error, METH_VARARGS},
> + {"CopyComPointer", copy_com_pointer, METH_VARARGS, copy_com_pointer_doc},
> + {"FormatError", format_error, METH_VARARGS, format_error_doc},
> + {"LoadLibrary", load_library, METH_VARARGS, load_library_doc},
> + {"FreeLibrary", free_library, METH_VARARGS, free_library_doc},
> + {"_check_HRESULT", check_hresult, METH_VARARGS},
> +#else
> + {"dlopen", py_dl_open, METH_VARARGS,
> + "dlopen(name, flag={RTLD_GLOBAL|RTLD_LOCAL}) open a shared library"},
> + {"dlclose", py_dl_close, METH_VARARGS, "dlclose a library"},
> + {"dlsym", py_dl_sym, METH_VARARGS, "find symbol in shared library"},
> +#endif
> +#endif // UEFI_C_SOURCE
> + {"alignment", align_func, METH_O, alignment_doc},
> + {"sizeof", sizeof_func, METH_O, sizeof_doc},
> + {"byref", byref, METH_VARARGS, byref_doc},
> + {"addressof", addressof, METH_O, addressof_doc},
> + {"call_function", call_function, METH_VARARGS },
> + {"call_cdeclfunction", call_cdeclfunction, METH_VARARGS },
> + {"PyObj_FromPtr", My_PyObj_FromPtr, METH_VARARGS },
> + {"Py_INCREF", My_Py_INCREF, METH_O },
> + {"Py_DECREF", My_Py_DECREF, METH_O },
> + {NULL, NULL} /* Sentinel */
> +};
> +
> +/*
> + Local Variables:
> + compile-command: "cd .. && python setup.py -q build -g && python setup.py -q build install --home ~"
> + End:
> +*/
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
> new file mode 100644
> index 00000000..8b452ec0
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
> @@ -0,0 +1,29 @@
> +#ifndef _CTYPES_DLFCN_H_
> +#define _CTYPES_DLFCN_H_
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif /* __cplusplus */
> +
> +#ifndef UEFI_C_SOURCE
> +#ifndef MS_WIN32
> +
> +#include <dlfcn.h>
> +
> +#ifndef CTYPES_DARWIN_DLFCN
> +
> +#define ctypes_dlsym dlsym
> +#define ctypes_dlerror dlerror
> +#define ctypes_dlopen dlopen
> +#define ctypes_dlclose dlclose
> +#define ctypes_dladdr dladdr
> +
> +#endif /* !CTYPES_DARWIN_DLFCN */
> +
> +#endif /* !MS_WIN32 */
> +#endif // UEFI_C_SOURCE
> +
> +#ifdef __cplusplus
> +}
> +#endif /* __cplusplus */
> +#endif /* _CTYPES_DLFCN_H_ */
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
> new file mode 100644
> index 00000000..12e01284
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
> @@ -0,0 +1,572 @@
> +/* -----------------------------------------------------------------------
> +
> + Copyright (c) 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +
> + ffi.c - Copyright (c) 1996, 1998, 1999, 2001 Red Hat, Inc.
> + Copyright (c) 2002 Ranjit Mathew
> + Copyright (c) 2002 Bo Thorsen
> + Copyright (c) 2002 Roger Sayle
> +
> + x86 Foreign Function Interface
> +
> + Permission is hereby granted, free of charge, to any person obtaining
> + a copy of this software and associated documentation files (the
> + ``Software''), to deal in the Software without restriction, including
> + without limitation the rights to use, copy, modify, merge, publish,
> + distribute, sublicense, and/or sell copies of the Software, and to
> + permit persons to whom the Software is furnished to do so, subject to
> + the following conditions:
> +
> + The above copyright notice and this permission notice shall be included
> + in all copies or substantial portions of the Software.
> +
> + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
> + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + OTHER DEALINGS IN THE SOFTWARE.
> + ----------------------------------------------------------------------- */
> +
> +#include <ffi.h>
> +#include <ffi_common.h>
> +
> +#include <stdlib.h>
> +#include <math.h>
> +#include <sys/types.h>
> +
> +/* ffi_prep_args is called by the assembly routine once stack space
> + has been allocated for the function's arguments */
> +
> +extern void Py_FatalError(const char *msg);
> +
> +/*@-exportheader@*/
> +void ffi_prep_args(char *stack, extended_cif *ecif)
> +/*@=exportheader@*/
> +{
> + register unsigned int i;
> + register void **p_argv;
> + register char *argp;
> + register ffi_type **p_arg;
> +
> + argp = stack;
> + if (ecif->cif->rtype->type == FFI_TYPE_STRUCT)
> + {
> + *(void **) argp = ecif->rvalue;
> + argp += sizeof(void *);
> + }
> +
> + p_argv = ecif->avalue;
> +
> + for (i = ecif->cif->nargs, p_arg = ecif->cif->arg_types;
> + i != 0;
> + i--, p_arg++)
> + {
> + size_t z;
> +
> + /* Align if necessary */
> + if ((sizeof(void *) - 1) & (size_t) argp)
> + argp = (char *) ALIGN(argp, sizeof(void *));
> +
> + z = (*p_arg)->size;
> + if (z < sizeof(intptr_t))
> + {
> + z = sizeof(intptr_t);
> + switch ((*p_arg)->type)
> + {
> + case FFI_TYPE_SINT8:
> + *(intptr_t *) argp = (intptr_t)*(SINT8 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_UINT8:
> + *(uintptr_t *) argp = (uintptr_t)*(UINT8 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_SINT16:
> + *(intptr_t *) argp = (intptr_t)*(SINT16 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_UINT16:
> + *(uintptr_t *) argp = (uintptr_t)*(UINT16 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_SINT32:
> + *(intptr_t *) argp = (intptr_t)*(SINT32 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_UINT32:
> + *(uintptr_t *) argp = (uintptr_t)*(UINT32 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_FLOAT:
> + *(uintptr_t *) argp = 0;
> + *(float *) argp = *(float *)(* p_argv);
> + break;
> +
> + // 64-bit value cases should never be used for x86 and AMD64 builds
> + case FFI_TYPE_SINT64:
> + *(intptr_t *) argp = (intptr_t)*(SINT64 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_UINT64:
> + *(uintptr_t *) argp = (uintptr_t)*(UINT64 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_STRUCT:
> + *(uintptr_t *) argp = (uintptr_t)*(UINT32 *)(* p_argv);
> + break;
> +
> + case FFI_TYPE_DOUBLE:
> + *(uintptr_t *) argp = 0;
> + *(double *) argp = *(double *)(* p_argv);
> + break;
> +
> + default:
> + FFI_ASSERT(0);
> + }
> + }
> +#ifdef _WIN64
> + else if (z > 8)
> + {
> + /* On Win64, if a single argument takes more than 8 bytes,
> + then it is always passed by reference. */
> + *(void **)argp = *p_argv;
> + z = 8;
> + }
> +#endif
> + else
> + {
> + memcpy(argp, *p_argv, z);
> + }
> + p_argv++;
> + argp += z;
> + }
> +
> + if (argp >= stack && (unsigned)(argp - stack) > ecif->cif->bytes)
> + {
> + Py_FatalError("FFI BUG: not enough stack space for arguments");
> + }
> + return;
> +}
> +
> +/*
> +Per: https://msdn.microsoft.com/en-us/library/7572ztz4.aspx
> +To be returned by value in RAX, user-defined types must have a length
> +of 1, 2, 4, 8, 16, 32, or 64 bits
> +*/
> +int can_return_struct_as_int(size_t s)
> +{
> + return s == 1 || s == 2 || s == 4;
> +}
> +
> +int can_return_struct_as_sint64(size_t s)
> +{
> + return s == 8;
> +}
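
In other words, with this ABI a user-defined struct of 1, 2 or 4 bytes is
handed back as a plain int and an 8-byte struct as a 64-bit int, while
anything else comes back through a hidden pointer (the FFI_TYPE_STRUCT case
in ffi_prep_cif_machdep below). The classification, restated in Python:

    def struct_return_class(size):
        # mirrors can_return_struct_as_int / can_return_struct_as_sint64 above
        if size in (1, 2, 4):
            return "returned as a 32-bit int"
        if size == 8:
            return "returned as a 64-bit int"
        return "returned through a hidden pointer"

    for s in (2, 4, 8, 12, 16):
        print(s, struct_return_class(s))
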
> +
> +/* Perform machine dependent cif processing */
> +ffi_status ffi_prep_cif_machdep(ffi_cif *cif)
> +{
> + /* Set the return type flag */
> + switch (cif->rtype->type)
> + {
> + case FFI_TYPE_VOID:
> + case FFI_TYPE_SINT64:
> + case FFI_TYPE_FLOAT:
> + case FFI_TYPE_DOUBLE:
> + case FFI_TYPE_LONGDOUBLE:
> + cif->flags = (unsigned) cif->rtype->type;
> + break;
> +
> + case FFI_TYPE_STRUCT:
> + /* MSVC returns small structures in registers. Put in cif->flags
> + the value FFI_TYPE_STRUCT only if the structure is big enough;
> + otherwise, put the 4- or 8-bytes integer type. */
> + if (can_return_struct_as_int(cif->rtype->size))
> + cif->flags = FFI_TYPE_INT;
> + else if (can_return_struct_as_sint64(cif->rtype->size))
> + cif->flags = FFI_TYPE_SINT64;
> + else
> + cif->flags = FFI_TYPE_STRUCT;
> + break;
> +
> + case FFI_TYPE_UINT64:
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> + case FFI_TYPE_POINTER:
> +#endif
> + cif->flags = FFI_TYPE_SINT64;
> + break;
> +
> + default:
> + cif->flags = FFI_TYPE_INT;
> + break;
> + }
> +
> + return FFI_OK;
> +}
> +
> +#if defined(_WIN32) || defined(UEFI_MSVC_32)
> +extern int
> +ffi_call_x86(void (*)(char *, extended_cif *),
> + /*@out@*/ extended_cif *,
> + unsigned, unsigned,
> + /*@out@*/ unsigned *,
> + void (*fn)());
> +#endif
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> +extern int
> +ffi_call_AMD64(void (*)(char *, extended_cif *),
> + /*@out@*/ extended_cif *,
> + unsigned, unsigned,
> + /*@out@*/ unsigned *,
> + void (*fn)());
> +#endif
> +
> +int
> +ffi_call(/*@dependent@*/ ffi_cif *cif,
> + void (*fn)(),
> + /*@out@*/ void *rvalue,
> + /*@dependent@*/ void **avalue)
> +{
> + extended_cif ecif;
> + int ret;
> +#ifdef UEFI_C_SOURCE
> + int malloc_flag = 0;
> +#endif // UEFI_C_SOURCE
> +
> + ecif.cif = cif;
> + ecif.avalue = avalue;
> +
> + /* If the return value is a struct and we don't have a return */
> + /* value address then we need to make one */
> +
> + if ((rvalue == NULL) &&
> + (cif->rtype->type == FFI_TYPE_STRUCT))
> + {
> + /*@-sysunrecog@*/
> +#ifdef UEFI_C_SOURCE
> + ecif.rvalue = malloc(cif->rtype->size);
> + malloc_flag = 1;
> +#else
> + ecif.rvalue = alloca(cif->rtype->size);
> +#endif // UEFI_C_SOURCE
> + /*@=sysunrecog@*/
> + }
> + else
> + ecif.rvalue = rvalue;
> +
> +
> + switch (cif->abi)
> + {
> +#if !defined(_WIN64) && !defined(UEFI_MSVC_64)
> + case FFI_SYSV:
> + case FFI_STDCALL:
> + ret = ffi_call_x86(ffi_prep_args, &ecif, cif->bytes,
> + cif->flags, ecif.rvalue, fn);
> + break;
> +#else
> + case FFI_SYSV:
> + /* If a single argument takes more than 8 bytes,
> + then a copy is passed by reference. */
> + for (unsigned i = 0; i < cif->nargs; i++) {
> + size_t z = cif->arg_types[i]->size;
> + if (z > 8) {
> + void *temp = alloca(z);
> + memcpy(temp, avalue[i], z);
> + avalue[i] = temp;
> + }
> + }
> + /*@-usedef@*/
> + ret = ffi_call_AMD64(ffi_prep_args, &ecif, cif->bytes,
> + cif->flags, ecif.rvalue, fn);
> + /*@=usedef@*/
> + break;
> +#endif
> +
> + default:
> + FFI_ASSERT(0);
> + ret = -1; /* theller: Hrm. */
> + break;
> + }
> +
> +#ifdef UEFI_C_SOURCE
> + /* Release the scratch struct-return buffer allocated above when the
> + caller did not supply rvalue. */
> + if (malloc_flag) {
> + free (ecif.rvalue);
> + }
> +#endif // UEFI_C_SOURCE
> +
> + return ret;
> +}
> +
> +
> +/** private members **/
> +
> +static void ffi_prep_incoming_args_SYSV (char *stack, void **ret,
> + void** args, ffi_cif* cif);
> +/* This function is jumped to by the trampoline */
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> +void *
> +#else
> +static void __fastcall
> +#endif
> +ffi_closure_SYSV (ffi_closure *closure, char *argp)
> +{
> + // this is our return value storage
> + long double res;
> +
> + // our various things...
> + ffi_cif *cif;
> + void **arg_area;
> + unsigned short rtype;
> + void *resp = (void*)&res;
> + void *args = argp + sizeof(void*);
> +
> + cif = closure->cif;
> +#ifdef UEFI_C_SOURCE
> + arg_area = (void**) malloc (cif->nargs * sizeof (void*));
> +#else
> + arg_area = (void**) alloca (cif->nargs * sizeof (void*));
> +#endif // UEFI_C_SOURCE
> +
> + /* this call will initialize ARG_AREA, such that each
> + * element in that array points to the corresponding
> + * value on the stack; and if the function returns
> + * a structure, it will re-set RESP to point to the
> + * structure return address. */
> +
> + ffi_prep_incoming_args_SYSV(args, (void**)&resp, arg_area, cif);
> +
> + (closure->fun) (cif, resp, arg_area, closure->user_data);
> +
> + rtype = cif->flags;
> +
> +#if (defined(_WIN32) && !defined(_WIN64)) || (defined(UEFI_MSVC_32) && !defined(UEFI_MSVC_64))
> +#ifdef _MSC_VER
> + /* now, do a generic return based on the value of rtype */
> + if (rtype == FFI_TYPE_INT)
> + {
> + _asm mov eax, resp ;
> + _asm mov eax, [eax] ;
> + }
> + else if (rtype == FFI_TYPE_FLOAT)
> + {
> + _asm mov eax, resp ;
> + _asm fld DWORD PTR [eax] ;
> +// asm ("flds (%0)" : : "r" (resp) : "st" );
> + }
> + else if (rtype == FFI_TYPE_DOUBLE)
> + {
> + _asm mov eax, resp ;
> + _asm fld QWORD PTR [eax] ;
> +// asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" );
> + }
> + else if (rtype == FFI_TYPE_LONGDOUBLE)
> + {
> +// asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" );
> + }
> + else if (rtype == FFI_TYPE_SINT64)
> + {
> + _asm mov edx, resp ;
> + _asm mov eax, [edx] ;
> + _asm mov edx, [edx + 4] ;
> +// asm ("movl 0(%0),%%eax;"
> +// "movl 4(%0),%%edx"
> +// : : "r"(resp)
> +// : "eax", "edx");
> + }
> +#else
> + /* now, do a generic return based on the value of rtype */
> + if (rtype == FFI_TYPE_INT)
> + {
> + asm ("movl (%0),%%eax" : : "r" (resp) : "eax");
> + }
> + else if (rtype == FFI_TYPE_FLOAT)
> + {
> + asm ("flds (%0)" : : "r" (resp) : "st" );
> + }
> + else if (rtype == FFI_TYPE_DOUBLE)
> + {
> + asm ("fldl (%0)" : : "r" (resp) : "st", "st(1)" );
> + }
> + else if (rtype == FFI_TYPE_LONGDOUBLE)
> + {
> + asm ("fldt (%0)" : : "r" (resp) : "st", "st(1)" );
> + }
> + else if (rtype == FFI_TYPE_SINT64)
> + {
> + asm ("movl 0(%0),%%eax;"
> + "movl 4(%0),%%edx"
> + : : "r"(resp)
> + : "eax", "edx");
> + }
> +#endif
> +#endif
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> + /* The result is returned in rax. This does the right thing for
> + result types except for floats; we have to 'mov xmm0, rax' in the
> + caller to correct this.
> + */
> +
> + free (arg_area);
> +
> + return *(void **)resp;
> +#endif
> +}
> +
> +/*@-exportheader@*/
> +static void
> +ffi_prep_incoming_args_SYSV(char *stack, void **rvalue,
> + void **avalue, ffi_cif *cif)
> +/*@=exportheader@*/
> +{
> + register unsigned int i;
> + register void **p_argv;
> + register char *argp;
> + register ffi_type **p_arg;
> +
> + argp = stack;
> +
> + if ( cif->rtype->type == FFI_TYPE_STRUCT ) {
> + *rvalue = *(void **) argp;
> + argp += sizeof(void *);
> + }
> +
> + p_argv = avalue;
> +
> + for (i = cif->nargs, p_arg = cif->arg_types; (i != 0); i--, p_arg++)
> + {
> + size_t z;
> +
> + /* Align if necessary */
> + if ((sizeof(char *) - 1) & (size_t) argp) {
> + argp = (char *) ALIGN(argp, sizeof(char*));
> + }
> +
> + z = (*p_arg)->size;
> +
> + /* because we're little endian, this is what it turns into. */
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> + if (z > 8) {
> + /* On Win64, if a single argument takes more than 8 bytes,
> + * then it is always passed by reference.
> + */
> + *p_argv = *((void**) argp);
> + z = 8;
> + }
> + else
> +#endif
> + *p_argv = (void*) argp;
> +
> + p_argv++;
> + argp += z;
> + }
> +
> + return;
> +}
> +
> +/* the cif must already be prep'ed */
> +extern void ffi_closure_OUTER();
> +
> +ffi_status
> +ffi_prep_closure_loc (ffi_closure* closure,
> + ffi_cif* cif,
> + void (*fun)(ffi_cif*,void*,void**,void*),
> + void *user_data,
> + void *codeloc)
> +{
> + short bytes;
> + char *tramp;
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> + int mask = 0;
> +#endif
> + FFI_ASSERT (cif->abi == FFI_SYSV);
> +
> + if (cif->abi == FFI_SYSV)
> + bytes = 0;
> +#if !defined(_WIN64) && !defined(UEFI_MSVC_64)
> + else if (cif->abi == FFI_STDCALL)
> + bytes = cif->bytes;
> +#endif
> + else
> + return FFI_BAD_ABI;
> +
> + tramp = &closure->tramp[0];
> +
> +#define BYTES(text) memcpy(tramp, text, sizeof(text)), tramp += sizeof(text)-1
> +#define POINTER(x) *(void**)tramp = (void*)(x), tramp += sizeof(void*)
> +#define SHORT(x) *(short*)tramp = x, tramp += sizeof(short)
> +#define INT(x) *(int*)tramp = x, tramp += sizeof(int)
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> + if (cif->nargs >= 1 &&
> + (cif->arg_types[0]->type == FFI_TYPE_FLOAT
> + || cif->arg_types[0]->type == FFI_TYPE_DOUBLE))
> + mask |= 1;
> + if (cif->nargs >= 2 &&
> + (cif->arg_types[1]->type == FFI_TYPE_FLOAT
> + || cif->arg_types[1]->type == FFI_TYPE_DOUBLE))
> + mask |= 2;
> + if (cif->nargs >= 3 &&
> + (cif->arg_types[2]->type == FFI_TYPE_FLOAT
> + || cif->arg_types[2]->type == FFI_TYPE_DOUBLE))
> + mask |= 4;
> + if (cif->nargs >= 4 &&
> + (cif->arg_types[3]->type == FFI_TYPE_FLOAT
> + || cif->arg_types[3]->type == FFI_TYPE_DOUBLE))
> + mask |= 8;
> +
> + /* 41 BB ---- mov r11d,mask */
> + BYTES("\x41\xBB"); INT(mask);
> +
> + /* 48 B8 -------- mov rax, closure */
> + BYTES("\x48\xB8"); POINTER(closure);
> +
> + /* 49 BA -------- mov r10, ffi_closure_OUTER */
> + BYTES("\x49\xBA"); POINTER(ffi_closure_OUTER);
> +
> + /* 41 FF E2 jmp r10 */
> + BYTES("\x41\xFF\xE2");
> +
> +#else
> +
> + /* mov ecx, closure */
> + BYTES("\xb9"); POINTER(closure);
> +
> + /* mov edx, esp */
> + BYTES("\x8b\xd4");
> +
> + /* call ffi_closure_SYSV */
> + BYTES("\xe8"); POINTER((char*)&ffi_closure_SYSV - (tramp + 4));
> +
> + /* ret bytes */
> + BYTES("\xc2");
> + SHORT(bytes);
> +
> +#endif
> +
> + if (tramp - &closure->tramp[0] > FFI_TRAMPOLINE_SIZE)
> + Py_FatalError("FFI_TRAMPOLINE_SIZE too small in " __FILE__);
> +
> + closure->cif = cif;
> + closure->user_data = user_data;
> + closure->fun = fun;
> + return FFI_OK;
> +}
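
The trampoline assembled above is what a ctypes callback jumps through:
ffi_prep_closure_loc() writes the bytes, and a call through the resulting
function pointer lands in ffi_closure_SYSV(), which unpacks the arguments and
invokes the Python callable. A self-contained round trip that needs no
external library (on this port these are the routines involved; a stock
CPython uses its own libffi for the same thing):

    import ctypes

    ADD = ctypes.CFUNCTYPE(ctypes.c_int, ctypes.c_int, ctypes.c_int)
    cb = ADD(lambda a, b: a + b)   # the closure/trampoline is prepared here
    print(cb(2, 3))                # 5: call -> trampoline -> ffi_closure_SYSV -> lambda
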
> +
> +/**
> +Stub of the MSVC stack-probe helper so that the Python 3.6.8 UEFI build links.
> +**/
> +VOID
> +__chkstk()
> +{
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
> new file mode 100644
> index 00000000..7ab8d0f9
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
> @@ -0,0 +1,331 @@
> +/* -----------------------------------------------------------------*-C-*-
> + libffi 2.00-beta - Copyright (c) 1996-2003 Red Hat, Inc.
> +
> + Permission is hereby granted, free of charge, to any person obtaining
> + a copy of this software and associated documentation files (the
> + ``Software''), to deal in the Software without restriction, including
> + without limitation the rights to use, copy, modify, merge, publish,
> + distribute, sublicense, and/or sell copies of the Software, and to
> + permit persons to whom the Software is furnished to do so, subject to
> + the following conditions:
> +
> + The above copyright notice and this permission notice shall be included
> + in all copies or substantial portions of the Software.
> +
> + THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND, EXPRESS
> + OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
> + MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
> + IN NO EVENT SHALL CYGNUS SOLUTIONS BE LIABLE FOR ANY CLAIM, DAMAGES OR
> + OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
> + ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
> + OTHER DEALINGS IN THE SOFTWARE.
> +
> + ----------------------------------------------------------------------- */
> +
> +/* -------------------------------------------------------------------
> + The basic API is described in the README file.
> +
> + The raw API is designed to bypass some of the argument packing
> + and unpacking on architectures for which it can be avoided.
> +
> + The closure API allows interpreted functions to be packaged up
> + inside a C function pointer, so that they can be called as C functions,
> + with no understanding on the client side that they are interpreted.
> + It can also be used in other cases in which it is necessary to package
> + up a user specified parameter and a function pointer as a single
> + function pointer.
> +
> + The closure API must be implemented in order to get its functionality,
> + e.g. for use by gij. Routines are provided to emulate the raw API
> + if the underlying platform doesn't allow faster implementation.
> +
> + More details on the raw and closure API can be found in:
> +
> + http://gcc.gnu.org/ml/java/1999-q3/msg00138.html
> +
> + and
> +
> + http://gcc.gnu.org/ml/java/1999-q3/msg00174.html
> + -------------------------------------------------------------------- */
> +
> +#ifndef LIBFFI_H
> +#define LIBFFI_H
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Specify which architecture libffi is configured for. */
> +//XXX #define X86
> +
> +/* ---- System configuration information --------------------------------- */
> +
> +#include <ffitarget.h>
> +
> +#ifndef LIBFFI_ASM
> +
> +#include <stddef.h>
> +#include <limits.h>
> +
> +/* LONG_LONG_MAX is not always defined (not if STRICT_ANSI, for example).
> + But we can find it either under the correct ANSI name, or under GNU
> + C's internal name. */
> +#ifdef LONG_LONG_MAX
> +# define FFI_LONG_LONG_MAX LONG_LONG_MAX
> +#else
> +# ifdef LLONG_MAX
> +# define FFI_LONG_LONG_MAX LLONG_MAX
> +# else
> +# ifdef __GNUC__
> +# define FFI_LONG_LONG_MAX __LONG_LONG_MAX__
> +# endif
> +# ifdef _MSC_VER
> +# define FFI_LONG_LONG_MAX _I64_MAX
> +# endif
> +# endif
> +#endif
> +
> +#if SCHAR_MAX == 127
> +# define ffi_type_uchar ffi_type_uint8
> +# define ffi_type_schar ffi_type_sint8
> +#else
> + #error "char size not supported"
> +#endif
> +
> +#if SHRT_MAX == 32767
> +# define ffi_type_ushort ffi_type_uint16
> +# define ffi_type_sshort ffi_type_sint16
> +#elif SHRT_MAX == 2147483647
> +# define ffi_type_ushort ffi_type_uint32
> +# define ffi_type_sshort ffi_type_sint32
> +#else
> + #error "short size not supported"
> +#endif
> +
> +#if INT_MAX == 32767
> +# define ffi_type_uint ffi_type_uint16
> +# define ffi_type_sint ffi_type_sint16
> +#elif INT_MAX == 2147483647
> +# define ffi_type_uint ffi_type_uint32
> +# define ffi_type_sint ffi_type_sint32
> +#elif INT_MAX == 9223372036854775807
> +# define ffi_type_uint ffi_type_uint64
> +# define ffi_type_sint ffi_type_sint64
> +#else
> + #error "int size not supported"
> +#endif
> +
> +#define ffi_type_ulong ffi_type_uint64
> +#define ffi_type_slong ffi_type_sint64
> +#if LONG_MAX == 2147483647
> +# if FFI_LONG_LONG_MAX != 9223372036854775807
> + #error "no 64-bit data type supported"
> +# endif
> +#elif LONG_MAX != 9223372036854775807
> + #error "long size not supported"
> +#endif
> +
> +/* The closure code assumes that this works on pointers, i.e. a size_t */
> +/* can hold a pointer. */
> +
> +typedef struct _ffi_type
> +{
> + size_t size;
> + unsigned short alignment;
> + unsigned short type;
> + /*@null@*/ struct _ffi_type **elements;
> +} ffi_type;
> +
> +int can_return_struct_as_int(size_t);
> +int can_return_struct_as_sint64(size_t);
> +
> +/* These are defined in types.c */
> +extern ffi_type ffi_type_void;
> +extern ffi_type ffi_type_uint8;
> +extern ffi_type ffi_type_sint8;
> +extern ffi_type ffi_type_uint16;
> +extern ffi_type ffi_type_sint16;
> +extern ffi_type ffi_type_uint32;
> +extern ffi_type ffi_type_sint32;
> +extern ffi_type ffi_type_uint64;
> +extern ffi_type ffi_type_sint64;
> +extern ffi_type ffi_type_float;
> +extern ffi_type ffi_type_double;
> +extern ffi_type ffi_type_longdouble;
> +extern ffi_type ffi_type_pointer;
> +
> +
> +typedef enum {
> + FFI_OK = 0,
> + FFI_BAD_TYPEDEF,
> + FFI_BAD_ABI
> +} ffi_status;
> +
> +typedef unsigned FFI_TYPE;
> +
> +typedef struct {
> + ffi_abi abi;
> + unsigned nargs;
> + /*@dependent@*/ ffi_type **arg_types;
> + /*@dependent@*/ ffi_type *rtype;
> + unsigned bytes;
> + unsigned flags;
> +#ifdef FFI_EXTRA_CIF_FIELDS
> + FFI_EXTRA_CIF_FIELDS;
> +#endif
> +} ffi_cif;
> +
> +/* ---- Definitions for the raw API -------------------------------------- */
> +
> +#if defined(_WIN64) || defined(UEFI_MSVC_64)
> +#define FFI_SIZEOF_ARG 8
> +#else
> +#define FFI_SIZEOF_ARG 4
> +#endif
> +
> +typedef union {
> + ffi_sarg sint;
> + ffi_arg uint;
> + float flt;
> + char data[FFI_SIZEOF_ARG];
> + void* ptr;
> +} ffi_raw;
> +
> +void ffi_raw_call (/*@dependent@*/ ffi_cif *cif,
> + void (*fn)(),
> + /*@out@*/ void *rvalue,
> + /*@dependent@*/ ffi_raw *avalue);
> +
> +void ffi_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw);
> +void ffi_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args);
> +size_t ffi_raw_size (ffi_cif *cif);
> +
> +/* This is analogous to the raw API, except it uses Java parameter */
> +/* packing, even on 64-bit machines. I.e. on 64-bit machines */
> +/* longs and doubles are followed by an empty 64-bit word. */
> +
> +void ffi_java_raw_call (/*@dependent@*/ ffi_cif *cif,
> + void (*fn)(),
> + /*@out@*/ void *rvalue,
> + /*@dependent@*/ ffi_raw *avalue);
> +
> +void ffi_java_ptrarray_to_raw (ffi_cif *cif, void **args, ffi_raw *raw);
> +void ffi_java_raw_to_ptrarray (ffi_cif *cif, ffi_raw *raw, void **args);
> +size_t ffi_java_raw_size (ffi_cif *cif);
> +
> +/* ---- Definitions for closures ----------------------------------------- */
> +
> +#if FFI_CLOSURES
> +
> +typedef struct {
> + char tramp[FFI_TRAMPOLINE_SIZE];
> + ffi_cif *cif;
> + void (*fun)(ffi_cif*,void*,void**,void*);
> + void *user_data;
> +} ffi_closure;
> +
> +void ffi_closure_free(void *);
> +void *ffi_closure_alloc (size_t size, void **code);
> +
> +ffi_status
> +ffi_prep_closure_loc (ffi_closure*,
> + ffi_cif *,
> + void (*fun)(ffi_cif*,void*,void**,void*),
> + void *user_data,
> + void *codeloc);
> +
> +typedef struct {
> + char tramp[FFI_TRAMPOLINE_SIZE];
> +
> + ffi_cif *cif;
> +
> +#if !FFI_NATIVE_RAW_API
> +
> + /* if this is enabled, then a raw closure has the same layout
> + as a regular closure. We use this to install an intermediate
> + handler to do the translation, void** -> ffi_raw*. */
> +
> + void (*translate_args)(ffi_cif*,void*,void**,void*);
> + void *this_closure;
> +
> +#endif
> +
> + void (*fun)(ffi_cif*,void*,ffi_raw*,void*);
> + void *user_data;
> +
> +} ffi_raw_closure;
> +
> +ffi_status
> +ffi_prep_raw_closure (ffi_raw_closure*,
> + ffi_cif *cif,
> + void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
> + void *user_data);
> +
> +ffi_status
> +ffi_prep_java_raw_closure (ffi_raw_closure*,
> + ffi_cif *cif,
> + void (*fun)(ffi_cif*,void*,ffi_raw*,void*),
> + void *user_data);
> +
> +#endif /* FFI_CLOSURES */
> +
> +/* ---- Public interface definition -------------------------------------- */
> +
> +ffi_status ffi_prep_cif(/*@out@*/ /*@partial@*/ ffi_cif *cif,
> + ffi_abi abi,
> + unsigned int nargs,
> + /*@dependent@*/ /*@out@*/ /*@partial@*/ ffi_type *rtype,
> + /*@dependent@*/ ffi_type **atypes);
> +
> +int
> +ffi_call(/*@dependent@*/ ffi_cif *cif,
> + void (*fn)(),
> + /*@out@*/ void *rvalue,
> + /*@dependent@*/ void **avalue);
> +
> +/* Useful for eliminating compiler warnings */
> +#define FFI_FN(f) ((void (*)())f)
> +
> +/* ---- Definitions shared with assembly code ---------------------------- */
> +
> +#endif
> +
> +/* If these change, update src/mips/ffitarget.h. */
> +#define FFI_TYPE_VOID 0
> +#define FFI_TYPE_INT 1
> +#define FFI_TYPE_FLOAT 2
> +#define FFI_TYPE_DOUBLE 3
> +#if 1
> +#define FFI_TYPE_LONGDOUBLE 4
> +#else
> +#define FFI_TYPE_LONGDOUBLE FFI_TYPE_DOUBLE
> +#endif
> +#define FFI_TYPE_UINT8 5
> +#define FFI_TYPE_SINT8 6
> +#define FFI_TYPE_UINT16 7
> +#define FFI_TYPE_SINT16 8
> +#define FFI_TYPE_UINT32 9
> +#define FFI_TYPE_SINT32 10
> +#define FFI_TYPE_UINT64 11
> +#define FFI_TYPE_SINT64 12
> +#define FFI_TYPE_STRUCT 13
> +#define FFI_TYPE_POINTER 14
> +
> +/* This should always refer to the last type code (for sanity checks) */
> +#define FFI_TYPE_LAST FFI_TYPE_POINTER
> +
> +#ifdef UEFI_C_SOURCE
> +#ifndef intptr_t
> +typedef long long intptr_t;
> +#endif
> +#ifndef uintptr_t
> +typedef unsigned long long uintptr_t;
> +#endif
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
> +
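
For readers following the raw/closure commentary and the public interface above, here is a minimal sketch of how ffi_prep_cif() and ffi_call() are typically driven. It is illustrative only and assumes FFI_DEFAULT_ABI and ffi_arg are supplied by the accompanying ffitarget.h; neither is declared in this hunk.

    #include <stdio.h>
    #include <ffi.h>

    static int add_ints(int a, int b) { return a + b; }

    int main(void)
    {
        ffi_cif   cif;
        ffi_type *arg_types[2]  = { &ffi_type_sint, &ffi_type_sint };
        int       a = 2, b = 40;
        void     *arg_values[2] = { &a, &b };
        ffi_arg   result = 0;    /* small integer returns are widened to ffi_arg */

        /* Describe the call: default ABI, two signed ints in, signed int out. */
        if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, arg_types) != FFI_OK)
            return 1;

        /* Invoke add_ints() through the generic call interface. */
        ffi_call(&cif, FFI_FN(add_ints), &result, arg_values);

        printf("add_ints returned %d\n", (int)result);
        return 0;
    }

The same cif can be reused for any number of calls with that signature; only the argument value array changes per call.
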
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
> new file mode 100644
> index 00000000..2f39d2d5
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
> @@ -0,0 +1,85 @@
> +/* -----------------------------------------------------------------------
> + ffi_common.h - Copyright (c) 1996 Red Hat, Inc.
> +
> + Common internal definitions and macros. Only necessary for building
> + libffi.
> + ----------------------------------------------------------------------- */
> +
> +#ifndef FFI_COMMON_H
> +#define FFI_COMMON_H
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#include <fficonfig.h>
> +#ifndef UEFI_C_SOURCE
> +#include <malloc.h>
> +#endif
> +
> +/* Check for the existence of memcpy. */
> +#if STDC_HEADERS
> +# include <string.h>
> +#else
> +# ifndef HAVE_MEMCPY
> +# define memcpy(d, s, n) bcopy ((s), (d), (n))
> +# endif
> +#endif
> +
> +#if defined(FFI_DEBUG)
> +#include <stdio.h>
> +#endif
> +
> +#ifdef FFI_DEBUG
> +/*@exits@*/ void ffi_assert(/*@temp@*/ char *expr, /*@temp@*/ char *file, int line);
> +void ffi_stop_here(void);
> +void ffi_type_test(/*@temp@*/ /*@out@*/ ffi_type *a, /*@temp@*/ char *file, int line);
> +
> +#define FFI_ASSERT(x) ((x) ? (void)0 : ffi_assert(#x, __FILE__,__LINE__))
> +#define FFI_ASSERT_AT(x, f, l) ((x) ? 0 : ffi_assert(#x, (f), (l)))
> +#define FFI_ASSERT_VALID_TYPE(x) ffi_type_test (x, __FILE__, __LINE__)
> +#else
> +#define FFI_ASSERT(x)
> +#define FFI_ASSERT_AT(x, f, l)
> +#define FFI_ASSERT_VALID_TYPE(x)
> +#endif
> +
> +#define ALIGN(v, a) (((((size_t) (v))-1) | ((a)-1))+1)
> +
> +/* Perform machine dependent cif processing */
> +ffi_status ffi_prep_cif_machdep(ffi_cif *cif);
> +
> +/* Extended cif, used in callback from assembly routine */
> +typedef struct
> +{
> + /*@dependent@*/ ffi_cif *cif;
> + /*@dependent@*/ void *rvalue;
> + /*@dependent@*/ void **avalue;
> +} extended_cif;
> +
> +/* Terse sized type definitions. */
> +#ifndef UEFI_C_SOURCE
> +typedef unsigned int UINT8 __attribute__((__mode__(__QI__)));
> +typedef signed int SINT8 __attribute__((__mode__(__QI__)));
> +typedef unsigned int UINT16 __attribute__((__mode__(__HI__)));
> +typedef signed int SINT16 __attribute__((__mode__(__HI__)));
> +typedef unsigned int UINT32 __attribute__((__mode__(__SI__)));
> +typedef signed int SINT32 __attribute__((__mode__(__SI__)));
> +typedef unsigned int UINT64 __attribute__((__mode__(__DI__)));
> +typedef signed int SINT64 __attribute__((__mode__(__DI__)));
> +#else
> +typedef signed int SINT8 __attribute__((__mode__(__QI__)));
> +typedef signed int SINT16 __attribute__((__mode__(__HI__)));
> +typedef signed int SINT32 __attribute__((__mode__(__SI__)));
> +typedef signed int SINT64 __attribute__((__mode__(__DI__)));
> +#endif
> +typedef float FLOAT32;
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif
> +
> +
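
The ALIGN(v, a) macro above rounds v up to the next multiple of a power-of-two alignment a using the ((v - 1) | (a - 1)) + 1 trick. A small worked example, not part of the patch:

    #include <assert.h>
    #include <stddef.h>

    /* Same definition as in ffi_common.h above. */
    #define ALIGN(v, a) (((((size_t) (v))-1) | ((a)-1))+1)

    int main(void)
    {
        assert(ALIGN(13, 8) == 16);  /* 13 rounded up to the next multiple of 8 */
        assert(ALIGN(16, 8) == 16);  /* already-aligned values are unchanged    */
        assert(ALIGN(1, 4)  == 4);
        return 0;
    }
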
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
> new file mode 100644
> index 00000000..624e3a8c
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
> @@ -0,0 +1,128 @@
> +#include <Python.h>
> +#include <ffi.h>
> +#ifdef MS_WIN32
> +#include <windows.h>
> +#else
> +#ifndef UEFI_C_SOURCE
> +#include <sys/mman.h>
> +#endif // UEFI_C_SOURCE
> +#include <unistd.h>
> +# if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
> +# define MAP_ANONYMOUS MAP_ANON
> +# endif
> +#endif
> +#include "ctypes.h"
> +
> +/* BLOCKSIZE can be adjusted. A larger blocksize incurs more memory
> + overhead, but allocates fewer blocks from the system. Some systems may
> + limit how many mmap'd blocks can be open.
> +*/
> +
> +#define BLOCKSIZE _pagesize
> +
> +/* #define MALLOC_CLOSURE_DEBUG */ /* enable for some debugging output */
> +
> +/******************************************************************/
> +
> +typedef union _tagITEM {
> + ffi_closure closure;
> + union _tagITEM *next;
> +} ITEM;
> +
> +static ITEM *free_list;
> +static int _pagesize;
> +
> +static void more_core(void)
> +{
> + ITEM *item;
> + int count, i;
> +
> +#ifndef UEFI_C_SOURCE
> +/* determine the pagesize */
> +#ifdef MS_WIN32
> + if (!_pagesize) {
> + SYSTEM_INFO systeminfo;
> + GetSystemInfo(&systeminfo);
> + _pagesize = systeminfo.dwPageSize;
> + }
> +#else
> + if (!_pagesize) {
> +#ifdef _SC_PAGESIZE
> + _pagesize = sysconf(_SC_PAGESIZE);
> +#else
> + _pagesize = getpagesize();
> +#endif
> + }
> +#endif
> +
> + /* calculate the number of nodes to allocate */
> + count = BLOCKSIZE / sizeof(ITEM);
> +
> + /* allocate a memory block */
> +#ifdef MS_WIN32
> + item = (ITEM *)VirtualAlloc(NULL,
> + count * sizeof(ITEM),
> + MEM_COMMIT,
> + PAGE_EXECUTE_READWRITE);
> + if (item == NULL)
> + return;
> +#else
> + item = (ITEM *)mmap(NULL,
> + count * sizeof(ITEM),
> + PROT_READ | PROT_WRITE | PROT_EXEC,
> + MAP_PRIVATE | MAP_ANONYMOUS,
> + -1,
> + 0);
> + if (item == (void *)MAP_FAILED)
> + return;
> +#endif
> +
> +#ifdef MALLOC_CLOSURE_DEBUG
> + printf("block at %p allocated (%d bytes), %d ITEMs\n",
> + item, count * sizeof(ITEM), count);
> +#endif
> +
> +#else //EfiPy
> +
> +#define PAGE_SHIFT 14 /* 16K pages by default. */
> +#define PAGE_SIZE (1 << PAGE_SHIFT)
> +
> + count = PAGE_SIZE / sizeof(ITEM);
> +
> + item = (ITEM *)malloc(count * sizeof(ITEM));
> + if (item == NULL)
> + return;
> +
> +#endif // EfiPy
> +
> + /* put them into the free list */
> + for (i = 0; i < count; ++i) {
> + item->next = free_list;
> + free_list = item;
> + ++item;
> + }
> +}
> +
> +/******************************************************************/
> +
> +/* put the item back into the free list */
> +void ffi_closure_free(void *p)
> +{
> + ITEM *item = (ITEM *)p;
> + item->next = free_list;
> + free_list = item;
> +}
> +
> +/* return one item from the free list, allocating more if needed */
> +void *ffi_closure_alloc(size_t ignored, void** codeloc)
> +{
> + ITEM *item;
> + if (!free_list)
> + more_core();
> + if (!free_list)
> + return NULL;
> + item = free_list;
> + free_list = item->next;
> + *codeloc = (void *)item;
> + return (void *)item;
> +}
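
ffi_closure_alloc()/ffi_closure_free() above hand out ffi_closure slots from a simple free list; combined with ffi_prep_closure_loc() from ffi.h they let an arbitrary handler be reached through a plain C function pointer, which is what _ctypes needs for callbacks. A minimal sketch, again assuming FFI_DEFAULT_ABI comes from ffitarget.h:

    #include <ffi.h>

    /* Handler run whenever the generated code pointer is called. */
    static void sum_handler(ffi_cif *cif, void *ret, void **args, void *user_data)
    {
        (void)cif; (void)user_data;
        *(ffi_arg *)ret = *(int *)args[0] + *(int *)args[1];
    }

    int call_through_closure(void)
    {
        ffi_cif      cif;
        ffi_type    *arg_types[2] = { &ffi_type_sint, &ffi_type_sint };
        void        *code = NULL;
        ffi_closure *closure = ffi_closure_alloc(sizeof(ffi_closure), &code);
        int          result = -1;

        if (closure != NULL &&
            ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 2, &ffi_type_sint, arg_types) == FFI_OK &&
            ffi_prep_closure_loc(closure, &cif, sum_handler, NULL, code) == FFI_OK) {
            int (*fn)(int, int) = (int (*)(int, int))code;
            result = fn(19, 23);     /* expected: 42 */
        }

        if (closure != NULL)
            ffi_closure_free(closure);
        return result;
    }
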
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
> new file mode 100644
> index 00000000..4b1eb0fb
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
> @@ -0,0 +1,159 @@
> +/** @file
> + Python Module configuration.
> +
> + Copyright (c) 2011-2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +/* This file contains the table of built-in modules.
> + See init_builtin() in import.c. */
> +
> +#include "Python.h"
> +
> +extern PyObject* PyInit_array(void);
> +extern PyObject* PyInit__ast(void);
> +extern PyObject* PyInit_binascii(void);
> +extern PyObject* init_bisect(void);
> +extern PyObject* PyInit_cmath(void);
> +extern PyObject* PyInit__codecs(void);
> +extern PyObject* PyInit__collections(void);
> +extern PyObject* PyInit__pickle(void);
> +extern PyObject* PyInit__csv(void);
> +extern PyObject* init_ctypes(void);
> +extern PyObject* PyInit__datetime(void);
> +extern PyObject* PyEdk2__Init(void);
> +extern PyObject* PyInit_errno(void);
> +extern PyObject* PyInit__functools(void);
> +extern PyObject* initfuture_builtins(void);
> +extern PyObject* PyInit_gc(void);
> +extern PyObject* init_heapq(void);
> +extern PyObject* init_hotshot(void);
> +extern PyObject* PyInit_imp(void);
> +extern PyObject* PyInit__io(void);
> +extern PyObject* PyInit_itertools(void);
> +extern PyObject* PyInit__json(void);
> +extern PyObject* init_lsprof(void);
> +extern PyObject* PyInit_math(void);
> +extern PyObject* PyInit__md5(void);
> +extern PyObject* initmmap(void);
> +extern PyObject* PyInit__operator(void);
> +extern PyObject* PyInit_parser(void);
> +extern PyObject* PyInit_pyexpat(void);
> +extern PyObject* PyInit__random(void);
> +extern PyObject* PyInit_select(void);
> +extern PyObject* PyInit__sha1(void);
> +extern PyObject* PyInit__sha256(void);
> +extern PyObject* PyInit__sha512(void);
> +extern PyObject* PyInit__sha3(void);
> +extern PyObject* PyInit__blake2(void);
> +extern PyObject* PyInit__signal(void);
> +extern PyObject* PyInit__socket(void);
> +extern PyObject* PyInit__sre(void);
> +extern PyObject* PyInit__struct(void);
> +extern PyObject* init_subprocess(void);
> +extern PyObject* PyInit__symtable(void);
> +extern PyObject* initthread(void);
> +extern PyObject* PyInit_time(void);
> +extern PyObject* PyInit_unicodedata(void);
> +extern PyObject* PyInit__weakref(void);
> +extern PyObject* init_winreg(void);
> +extern PyObject* PyInit_zlib(void);
> +extern PyObject* initbz2(void);
> +
> +extern PyObject* PyMarshal_Init(void);
> +extern PyObject* _PyWarnings_Init(void);
> +
> +extern PyObject* PyInit__multibytecodec(void);
> +extern PyObject* PyInit__codecs_cn(void);
> +extern PyObject* PyInit__codecs_hk(void);
> +extern PyObject* PyInit__codecs_iso2022(void);
> +extern PyObject* PyInit__codecs_jp(void);
> +extern PyObject* PyInit__codecs_kr(void);
> +extern PyObject* PyInit__codecs_tw(void);
> +
> +extern PyObject* PyInit__string(void);
> +extern PyObject* PyInit__stat(void);
> +extern PyObject* PyInit__opcode(void);
> +extern PyObject* PyInit_faulthandler(void);
> +// _ctypes
> +extern PyObject* PyInit__ctypes(void);
> +extern PyObject* init_sqlite3(void);
> +
> +// EfiPy
> +extern PyObject* init_EfiPy(void);
> +
> +// ssl
> +extern PyObject* PyInit__ssl(void);
> +
> +struct _inittab _PyImport_Inittab[] = {
> + {"_ast", PyInit__ast},
> + {"_csv", PyInit__csv},
> + {"_io", PyInit__io},
> + {"_json", PyInit__json},
> + {"_md5", PyInit__md5},
> + {"_sha1", PyInit__sha1},
> + {"_sha256", PyInit__sha256},
> + {"_sha512", PyInit__sha512},
> + { "_sha3", PyInit__sha3 },
> + { "_blake2", PyInit__blake2 },
> +// {"_socket", PyInit__socket},
> + {"_symtable", PyInit__symtable},
> + {"binascii", PyInit_binascii},
> + {"cmath", PyInit_cmath},
> + {"errno", PyInit_errno},
> + {"faulthandler", PyInit_faulthandler},
> + {"gc", PyInit_gc},
> + {"math", PyInit_math},
> + {"array", PyInit_array},
> + {"_datetime", PyInit__datetime},
> + {"parser", PyInit_parser},
> + {"pyexpat", PyInit_pyexpat},
> + {"select", PyInit_select},
> + {"_signal", PyInit__signal},
> + {"unicodedata", PyInit_unicodedata},
> + { "zlib", PyInit_zlib },
> +
> + /* CJK codecs */
> + {"_multibytecodec", PyInit__multibytecodec},
> +
> +#ifdef WITH_THREAD
> + {"thread", initthread},
> +#endif
> +
> + /* These modules are required for the full built-in help() facility provided by pydoc. */
> + {"_codecs", PyInit__codecs},
> + {"_collections", PyInit__collections},
> + {"_functools", PyInit__functools},
> + {"_random", PyInit__random},
> + {"_sre", PyInit__sre},
> + {"_struct", PyInit__struct},
> + {"_weakref", PyInit__weakref},
> + {"itertools", PyInit_itertools},
> + {"_operator", PyInit__operator},
> + {"time", PyInit_time},
> +
> + /* These modules should always be built in. */
> + {"edk2", PyEdk2__Init},
> + {"_imp", PyInit_imp},
> + {"marshal", PyMarshal_Init},
> +
> + /* These entries are here for sys.builtin_module_names */
> + {"__main__", NULL},
> + {"__builtin__", NULL},
> + {"builtins", NULL},
> + {"sys", NULL},
> + {"exceptions", NULL},
> + {"_warnings", _PyWarnings_Init},
> + {"_string", PyInit__string},
> + {"_stat", PyInit__stat},
> + {"_opcode", PyInit__opcode},
> + { "_ctypes", PyInit__ctypes },
> + /* Sentinel */
> + {0, 0}
> +};
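
The _PyImport_Inittab table above is how statically linked modules become importable in the UEFI build: each entry pairs a module name with its PyInit_* function, and NULL entries exist only so the name appears in sys.builtin_module_names. Purely as an illustration, a hypothetical minimal module that could be declared extern above and listed as {"hello", PyInit_hello}:

    #include "Python.h"

    /* Hypothetical built-in module; not part of this patch. */
    static PyObject *hello_greet(PyObject *self, PyObject *args)
    {
        (void)self; (void)args;
        return PyUnicode_FromString("hello from the UEFI shell");
    }

    static PyMethodDef hello_methods[] = {
        {"greet", hello_greet, METH_NOARGS, "Return a greeting string."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef hello_module = {
        PyModuleDef_HEAD_INIT, "hello", NULL, -1, hello_methods,
        NULL, NULL, NULL, NULL
    };

    PyObject *PyInit_hello(void)
    {
        return PyModule_Create(&hello_module);
    }

With such an entry in place, import hello would resolve from the built-in table without any filesystem lookup.
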
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
> new file mode 100644
> index 00000000..0501a2be
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
> @@ -0,0 +1,4348 @@
> +/** @file
> + OS-specific module implementation for EDK II and UEFI.
> + Derived from posixmodule.c in Python 2.7.2.
> +
> + Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
> + Copyright (c) 2011 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +#define PY_SSIZE_T_CLEAN
> +
> +#include "Python.h"
> +#include "structseq.h"
> +
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <wchar.h>
> +#include <sys/syslimits.h>
> +#include <Uefi.h>
> +#include <Library/UefiLib.h>
> +#include <Library/UefiRuntimeServicesTableLib.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +PyDoc_STRVAR(edk2__doc__,
> + "This module provides access to UEFI firmware functionality that is\n\
> + standardized by the C Standard and the POSIX standard (a thinly\n\
> + disguised Unix interface). Refer to the library manual and\n\
> + corresponding UEFI Specification entries for more information on calls.");
> +
> +#ifndef Py_USING_UNICODE
> + /* This is used in signatures of functions. */
> + #define Py_UNICODE void
> +#endif
> +
> +#ifdef HAVE_SYS_TYPES_H
> + #include <sys/types.h>
> +#endif /* HAVE_SYS_TYPES_H */
> +
> +#ifdef HAVE_SYS_STAT_H
> + #include <sys/stat.h>
> +#endif /* HAVE_SYS_STAT_H */
> +
> +#ifdef HAVE_SYS_WAIT_H
> + #include <sys/wait.h> /* For WNOHANG */
> +#endif
> +
> +#ifdef HAVE_SIGNAL_H
> + #include <signal.h>
> +#endif
> +
> +#ifdef HAVE_FCNTL_H
> + #include <fcntl.h>
> +#endif /* HAVE_FCNTL_H */
> +
> +#ifdef HAVE_GRP_H
> + #include <grp.h>
> +#endif
> +
> +#ifdef HAVE_SYSEXITS_H
> + #include <sysexits.h>
> +#endif /* HAVE_SYSEXITS_H */
> +
> +#ifdef HAVE_SYS_LOADAVG_H
> + #include <sys/loadavg.h>
> +#endif
> +
> +#ifdef HAVE_UTIME_H
> + #include <utime.h>
> +#endif /* HAVE_UTIME_H */
> +
> +#ifdef HAVE_SYS_UTIME_H
> + #include <sys/utime.h>
> + #define HAVE_UTIME_H /* pretend we do for the rest of this file */
> +#endif /* HAVE_SYS_UTIME_H */
> +
> +#ifdef HAVE_SYS_TIMES_H
> + #include <sys/times.h>
> +#endif /* HAVE_SYS_TIMES_H */
> +
> +#ifdef HAVE_SYS_PARAM_H
> + #include <sys/param.h>
> +#endif /* HAVE_SYS_PARAM_H */
> +
> +#ifdef HAVE_SYS_UTSNAME_H
> + #include <sys/utsname.h>
> +#endif /* HAVE_SYS_UTSNAME_H */
> +
> +#ifdef HAVE_DIRENT_H
> + #include <dirent.h>
> + #define NAMLEN(dirent) wcslen((dirent)->FileName)
> +#else
> + #define dirent direct
> + #define NAMLEN(dirent) (dirent)->d_namlen
> + #ifdef HAVE_SYS_NDIR_H
> + #include <sys/ndir.h>
> + #endif
> + #ifdef HAVE_SYS_DIR_H
> + #include <sys/dir.h>
> + #endif
> + #ifdef HAVE_NDIR_H
> + #include <ndir.h>
> + #endif
> +#endif
> +
> +#ifndef MAXPATHLEN
> + #if defined(PATH_MAX) && PATH_MAX > 1024
> + #define MAXPATHLEN PATH_MAX
> + #else
> + #define MAXPATHLEN 1024
> + #endif
> +#endif /* MAXPATHLEN */
> +
> +#define WAIT_TYPE int
> +#define WAIT_STATUS_INT(s) (s)
> +
> +/* Issue #1983: pid_t can be longer than a C long on some systems */
> +#if !defined(SIZEOF_PID_T) || SIZEOF_PID_T == SIZEOF_INT
> + #define PARSE_PID "i"
> + #define PyLong_FromPid PyLong_FromLong
> + #define PyLong_AsPid PyLong_AsLong
> +#elif SIZEOF_PID_T == SIZEOF_LONG
> + #define PARSE_PID "l"
> + #define PyLong_FromPid PyLong_FromLong
> + #define PyLong_AsPid PyLong_AsLong
> +#elif defined(SIZEOF_LONG_LONG) && SIZEOF_PID_T == SIZEOF_LONG_LONG
> + #define PARSE_PID "L"
> + #define PyLong_FromPid PyLong_FromLongLong
> + #define PyLong_AsPid PyLong_AsLongLong
> +#else
> + #error "sizeof(pid_t) is neither sizeof(int), sizeof(long) or sizeof(long long)"
> +#endif /* SIZEOF_PID_T */
> +
> +/* Don't use the "_r" form if we don't need it (also, won't have a
> + prototype for it, at least on Solaris -- maybe others as well?). */
> +#if defined(HAVE_CTERMID_R) && defined(WITH_THREAD)
> + #define USE_CTERMID_R
> +#endif
> +
> +#if defined(HAVE_TMPNAM_R) && defined(WITH_THREAD)
> + #define USE_TMPNAM_R
> +#endif
> +
> +/* choose the appropriate stat and fstat functions and return structs */
> +#undef STAT
> +#undef FSTAT
> +#undef STRUCT_STAT
> +#define STAT stat
> +#define FSTAT fstat
> +#define STRUCT_STAT struct stat
> +
> +#define _PyVerify_fd(A) (1) /* dummy */
> +
> +/* dummy version. _PyVerify_fd() is already defined in fileobject.h */
> +#define _PyVerify_fd_dup2(A, B) (1)
> +
> +#ifndef UEFI_C_SOURCE
> +/* Return a dictionary corresponding to the POSIX environment table */
> +extern char **environ;
> +
> +static PyObject *
> +convertenviron(void)
> +{
> + PyObject *d;
> + char **e;
> + d = PyDict_New();
> + if (d == NULL)
> + return NULL;
> + if (environ == NULL)
> + return d;
> + /* This part ignores errors */
> + for (e = environ; *e != NULL; e++) {
> + PyObject *k;
> + PyObject *v;
> + char *p = strchr(*e, '=');
> + if (p == NULL)
> + continue;
> + k = PyUnicode_FromStringAndSize(*e, (int)(p-*e));
> + if (k == NULL) {
> + PyErr_Clear();
> + continue;
> + }
> + v = PyUnicode_FromString(p+1);
> + if (v == NULL) {
> + PyErr_Clear();
> + Py_DECREF(k);
> + continue;
> + }
> + if (PyDict_GetItem(d, k) == NULL) {
> + if (PyDict_SetItem(d, k, v) != 0)
> + PyErr_Clear();
> + }
> + Py_DECREF(k);
> + Py_DECREF(v);
> + }
> + return d;
> +}
> +#endif /* UEFI_C_SOURCE */
> +
> +/* Set a POSIX-specific error from errno, and return NULL */
> +
> +static PyObject *
> +edk2_error(void)
> +{
> + return PyErr_SetFromErrno(PyExc_OSError);
> +}
> +static PyObject *
> +edk2_error_with_filename(char* name)
> +{
> + return PyErr_SetFromErrnoWithFilename(PyExc_OSError, name);
> +}
> +
> +
> +static PyObject *
> +edk2_error_with_allocated_filename(char* name)
> +{
> + PyObject *rc = PyErr_SetFromErrnoWithFilename(PyExc_OSError, name);
> + PyMem_Free(name);
> + return rc;
> +}
> +
> +/* POSIX generic methods */
> +
> +#ifndef UEFI_C_SOURCE
> + static PyObject *
> + edk2_fildes(PyObject *fdobj, int (*func)(int))
> + {
> + int fd;
> + int res;
> + fd = PyObject_AsFileDescriptor(fdobj);
> + if (fd < 0)
> + return NULL;
> + if (!_PyVerify_fd(fd))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + res = (*func)(fd);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> +#endif /* UEFI_C_SOURCE */
> +
> +static PyObject *
> +edk2_1str(PyObject *args, char *format, int (*func)(const char*))
> +{
> + char *path1 = NULL;
> + int res;
> + if (!PyArg_ParseTuple(args, format,
> + Py_FileSystemDefaultEncoding, &path1))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = (*func)(path1);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path1);
> + PyMem_Free(path1);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +static PyObject *
> +edk2_2str(PyObject *args,
> + char *format,
> + int (*func)(const char *, const char *))
> +{
> + char *path1 = NULL, *path2 = NULL;
> + int res;
> + if (!PyArg_ParseTuple(args, format,
> + Py_FileSystemDefaultEncoding, &path1,
> + Py_FileSystemDefaultEncoding, &path2))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = (*func)(path1, path2);
> + Py_END_ALLOW_THREADS
> + PyMem_Free(path1);
> + PyMem_Free(path2);
> + if (res != 0)
> + /* XXX how to report both path1 and path2??? */
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(stat_result__doc__,
> +"stat_result: Result from stat or lstat.\n\n\
> +This object may be accessed either as a tuple of\n\
> + (mode, ino, dev, nlink, uid, gid, size, atime, mtime, ctime)\n\
> +or via the attributes st_mode, st_ino, st_dev, st_nlink, st_uid, and so on.\n\
> +\n\
> +Posix/windows: If your platform supports st_blksize, st_blocks, st_rdev,\n\
> +or st_flags, they are available as attributes only.\n\
> +\n\
> +See os.stat for more information.");
> +
> +static PyStructSequence_Field stat_result_fields[] = {
> + {"st_mode", "protection bits"},
> + //{"st_ino", "inode"},
> + //{"st_dev", "device"},
> + //{"st_nlink", "number of hard links"},
> + //{"st_uid", "user ID of owner"},
> + //{"st_gid", "group ID of owner"},
> + {"st_size", "total size, in bytes"},
> + /* The NULL is replaced with PyStructSequence_UnnamedField later. */
> + {NULL, "integer time of last access"},
> + {NULL, "integer time of last modification"},
> + {NULL, "integer time of last change"},
> + {"st_atime", "time of last access"},
> + {"st_mtime", "time of last modification"},
> + {"st_ctime", "time of last change"},
> +#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
> + {"st_blksize", "blocksize for filesystem I/O"},
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
> + {"st_blocks", "number of blocks allocated"},
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_RDEV
> + {"st_rdev", "device type (if inode device)"},
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_FLAGS
> + {"st_flags", "user defined flags for file"},
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_GEN
> + {"st_gen", "generation number"},
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
> + {"st_birthtime", "time of creation"},
> +#endif
> + {0}
> +};
> +
> +#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
> +#define ST_BLKSIZE_IDX 8
> +#else
> +#define ST_BLKSIZE_IDX 12
> +#endif
> +
> +#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
> +#define ST_BLOCKS_IDX (ST_BLKSIZE_IDX+1)
> +#else
> +#define ST_BLOCKS_IDX ST_BLKSIZE_IDX
> +#endif
> +
> +#ifdef HAVE_STRUCT_STAT_ST_RDEV
> +#define ST_RDEV_IDX (ST_BLOCKS_IDX+1)
> +#else
> +#define ST_RDEV_IDX ST_BLOCKS_IDX
> +#endif
> +
> +#ifdef HAVE_STRUCT_STAT_ST_FLAGS
> +#define ST_FLAGS_IDX (ST_RDEV_IDX+1)
> +#else
> +#define ST_FLAGS_IDX ST_RDEV_IDX
> +#endif
> +
> +#ifdef HAVE_STRUCT_STAT_ST_GEN
> +#define ST_GEN_IDX (ST_FLAGS_IDX+1)
> +#else
> +#define ST_GEN_IDX ST_FLAGS_IDX
> +#endif
> +
> +#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
> +#define ST_BIRTHTIME_IDX (ST_GEN_IDX+1)
> +#else
> +#define ST_BIRTHTIME_IDX ST_GEN_IDX
> +#endif
> +
> +static PyStructSequence_Desc stat_result_desc = {
> + "stat_result", /* name */
> + stat_result__doc__, /* doc */
> + stat_result_fields,
> + 10
> +};
> +
> +#ifndef UEFI_C_SOURCE /* Not in UEFI */
> +PyDoc_STRVAR(statvfs_result__doc__,
> +"statvfs_result: Result from statvfs or fstatvfs.\n\n\
> +This object may be accessed either as a tuple of\n\
> + (bsize, frsize, blocks, bfree, bavail, files, ffree, favail, flag, namemax),\n\
> +or via the attributes f_bsize, f_frsize, f_blocks, f_bfree, and so on.\n\
> +\n\
> +See os.statvfs for more information.");
> +
> +static PyStructSequence_Field statvfs_result_fields[] = {
> + {"f_bsize", },
> + {"f_frsize", },
> + {"f_blocks", },
> + {"f_bfree", },
> + {"f_bavail", },
> + {"f_files", },
> + {"f_ffree", },
> + {"f_favail", },
> + {"f_flag", },
> + {"f_namemax",},
> + {0}
> +};
> +
> +static PyStructSequence_Desc statvfs_result_desc = {
> + "statvfs_result", /* name */
> + statvfs_result__doc__, /* doc */
> + statvfs_result_fields,
> + 10
> +};
> +
> +static PyTypeObject StatVFSResultType;
> +#endif
> +
> +static int initialized;
> +static PyTypeObject StatResultType;
> +static newfunc structseq_new;
> +
> +static PyObject *
> +statresult_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyStructSequence *result;
> + int i;
> +
> + result = (PyStructSequence*)structseq_new(type, args, kwds);
> + if (!result)
> + return NULL;
> + /* If we have been initialized from a tuple,
> + st_?time might be set to None. Initialize it
> + from the int slots. */
> + for (i = 7; i <= 9; i++) {
> + if (result->ob_item[i+3] == Py_None) {
> + Py_DECREF(Py_None);
> + Py_INCREF(result->ob_item[i]);
> + result->ob_item[i+3] = result->ob_item[i];
> + }
> + }
> + return (PyObject*)result;
> +}
> +
> +
> +
> +/* If true, st_?time is float. */
> +#if defined(UEFI_C_SOURCE)
> + static int _stat_float_times = 0;
> +#else
> + static int _stat_float_times = 1;
> +
> +PyDoc_STRVAR(stat_float_times__doc__,
> +"stat_float_times([newval]) -> oldval\n\n\
> +Determine whether os.[lf]stat represents time stamps as float objects.\n\
> +If newval is True, future calls to stat() return floats, if it is False,\n\
> +future calls return ints. \n\
> +If newval is omitted, return the current setting.\n");
> +
> +static PyObject*
> +stat_float_times(PyObject* self, PyObject *args)
> +{
> + int newval = -1;
> +
> + if (!PyArg_ParseTuple(args, "|i:stat_float_times", &newval))
> + return NULL;
> + if (newval == -1)
> + /* Return old value */
> + return PyBool_FromLong(_stat_float_times);
> + _stat_float_times = newval;
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* UEFI_C_SOURCE */
> +
> +static void
> +fill_time(PyObject *v, int index, time_t sec, unsigned long nsec)
> +{
> + PyObject *fval,*ival;
> +#if SIZEOF_TIME_T > SIZEOF_LONG
> + ival = PyLong_FromLongLong((PY_LONG_LONG)sec);
> +#else
> + ival = PyLong_FromLong((long)sec);
> +#endif
> + if (!ival)
> + return;
> + if (_stat_float_times) {
> + fval = PyFloat_FromDouble(sec + 1e-9*nsec);
> + } else {
> + fval = ival;
> + Py_INCREF(fval);
> + }
> + PyStructSequence_SET_ITEM(v, index, ival);
> + PyStructSequence_SET_ITEM(v, index+3, fval);
> +}
> +
> +/* pack a system stat C structure into the Python stat tuple
> + (used by edk2_stat() and edk2_fstat()) */
> +static PyObject*
> +_pystat_fromstructstat(STRUCT_STAT *st)
> +{
> + unsigned long ansec, mnsec, cnsec;
> + PyObject *v = PyStructSequence_New(&StatResultType);
> + if (v == NULL)
> + return NULL;
> +
> + PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long)st->st_mode));
> + PyStructSequence_SET_ITEM(v, 1,
> + PyLong_FromLongLong((PY_LONG_LONG)st->st_size));
> +
> + ansec = mnsec = cnsec = 0;
> + /* The index used by fill_time is the index of the integer time.
> + fill_time will add 3 to the index to get the floating time index.
> + */
> + fill_time(v, 2, st->st_atime, ansec);
> + fill_time(v, 3, st->st_mtime, mnsec);
> + fill_time(v, 4, st->st_mtime, cnsec);
> +
> +#ifdef HAVE_STRUCT_STAT_ST_BLKSIZE
> + PyStructSequence_SET_ITEM(v, ST_BLKSIZE_IDX,
> + PyLong_FromLong((long)st->st_blksize));
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_BLOCKS
> + PyStructSequence_SET_ITEM(v, ST_BLOCKS_IDX,
> + PyLong_FromLong((long)st->st_blocks));
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_RDEV
> + PyStructSequence_SET_ITEM(v, ST_RDEV_IDX,
> + PyLong_FromLong((long)st->st_rdev));
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_GEN
> + PyStructSequence_SET_ITEM(v, ST_GEN_IDX,
> + PyLong_FromLong((long)st->st_gen));
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_BIRTHTIME
> + {
> + PyObject *val;
> + unsigned long bsec,bnsec;
> + bsec = (long)st->st_birthtime;
> +#ifdef HAVE_STAT_TV_NSEC2
> + bnsec = st->st_birthtimespec.tv_nsec;
> +#else
> + bnsec = 0;
> +#endif
> + if (_stat_float_times) {
> + val = PyFloat_FromDouble(bsec + 1e-9*bnsec);
> + } else {
> + val = PyLong_FromLong((long)bsec);
> + }
> + PyStructSequence_SET_ITEM(v, ST_BIRTHTIME_IDX,
> + val);
> + }
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_FLAGS
> + PyStructSequence_SET_ITEM(v, ST_FLAGS_IDX,
> + PyLong_FromLong((long)st->st_flags));
> +#endif
> +
> + if (PyErr_Occurred()) {
> + Py_DECREF(v);
> + return NULL;
> + }
> +
> + return v;
> +}
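
Since this port trims stat_result down to mode, size and the three times, the index arithmetic in fill_time() works out as follows: slots 2-4 receive the integer atime/mtime/ctime and slots 5-7 (index + 3) receive the st_atime/st_mtime/st_ctime entries from the field table. A hypothetical enum, only to make the layout above concrete; these names are not part of the patch:

    /* Index map implied by stat_result_fields[] and the fill_time() calls above. */
    enum edk2_stat_index {
        EDK2_ST_MODE      = 0,
        EDK2_ST_SIZE      = 1,
        EDK2_ST_ATIME_INT = 2,   /* fill_time(v, 2, ...) stores the integer time here  */
        EDK2_ST_MTIME_INT = 3,
        EDK2_ST_CTIME_INT = 4,
        EDK2_ST_ATIME     = 5,   /* ... and the float (or duplicated int) at index + 3 */
        EDK2_ST_MTIME     = 6,
        EDK2_ST_CTIME     = 7,
        EDK2_ST_BLKSIZE   = 8    /* ST_BLKSIZE_IDX, only when that field is present    */
    };
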
> +
> +static PyObject *
> +edk2_do_stat(PyObject *self, PyObject *args,
> + char *format,
> + int (*statfunc)(const char *, STRUCT_STAT *),
> + char *wformat,
> + int (*wstatfunc)(const Py_UNICODE *, STRUCT_STAT *))
> +{
> + STRUCT_STAT st;
> + char *path = NULL; /* pass this to stat; do not free() it */
> + char *pathfree = NULL; /* this memory must be free'd */
> + int res;
> + PyObject *result;
> +
> + if (!PyArg_ParseTuple(args, format,
> + Py_FileSystemDefaultEncoding, &path))
> + return NULL;
> + pathfree = path;
> +
> + Py_BEGIN_ALLOW_THREADS
> + res = (*statfunc)(path, &st);
> + Py_END_ALLOW_THREADS
> +
> + if (res != 0) {
> + result = edk2_error_with_filename(pathfree);
> + }
> + else
> + result = _pystat_fromstructstat(&st);
> +
> + PyMem_Free(pathfree);
> + return result;
> +}
> +
> +/* POSIX methods */
> +
> +PyDoc_STRVAR(edk2_access__doc__,
> +"access(path, mode) -> True if granted, False otherwise\n\n\
> +Use the real uid/gid to test for access to a path. Note that most\n\
> +operations will use the effective uid/gid, therefore this routine can\n\
> +be used in a suid/sgid environment to test if the invoking user has the\n\
> +specified access to the path. The mode argument can be F_OK to test\n\
> +existence, or the inclusive-OR of R_OK, W_OK, and X_OK.");
> +
> +static PyObject *
> +edk2_access(PyObject *self, PyObject *args)
> +{
> + char *path;
> + int mode;
> +
> + int res;
> + if (!PyArg_ParseTuple(args, "eti:access",
> + Py_FileSystemDefaultEncoding, &path, &mode))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = access(path, mode);
> + Py_END_ALLOW_THREADS
> + PyMem_Free(path);
> + return PyBool_FromLong(res == 0);
> +}
> +
> +#ifndef F_OK
> + #define F_OK 0
> +#endif
> +#ifndef R_OK
> + #define R_OK 4
> +#endif
> +#ifndef W_OK
> + #define W_OK 2
> +#endif
> +#ifndef X_OK
> + #define X_OK 1
> +#endif
> +
> +PyDoc_STRVAR(edk2_chdir__doc__,
> +"chdir(path)\n\n\
> +Change the current working directory to the specified path.");
> +
> +static PyObject *
> +edk2_chdir(PyObject *self, PyObject *args)
> +{
> + return edk2_1str(args, "et:chdir", chdir);
> +}
> +
> +PyDoc_STRVAR(edk2_chmod__doc__,
> +"chmod(path, mode)\n\n\
> +Change the access permissions of a file.");
> +
> +static PyObject *
> +edk2_chmod(PyObject *self, PyObject *args)
> +{
> + char *path = NULL;
> + int i;
> + int res;
> + if (!PyArg_ParseTuple(args, "eti:chmod", Py_FileSystemDefaultEncoding,
> + &path, &i))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = chmod(path, i);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +#ifdef HAVE_FCHMOD
> +PyDoc_STRVAR(edk2_fchmod__doc__,
> +"fchmod(fd, mode)\n\n\
> +Change the access permissions of the file given by file\n\
> +descriptor fd.");
> +
> +static PyObject *
> +edk2_fchmod(PyObject *self, PyObject *args)
> +{
> + int fd, mode, res;
> + if (!PyArg_ParseTuple(args, "ii:fchmod", &fd, &mode))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = fchmod(fd, mode);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_RETURN_NONE;
> +}
> +#endif /* HAVE_FCHMOD */
> +
> +#ifdef HAVE_LCHMOD
> +PyDoc_STRVAR(edk2_lchmod__doc__,
> +"lchmod(path, mode)\n\n\
> +Change the access permissions of a file. If path is a symlink, this\n\
> +affects the link itself rather than the target.");
> +
> +static PyObject *
> +edk2_lchmod(PyObject *self, PyObject *args)
> +{
> + char *path = NULL;
> + int i;
> + int res;
> + if (!PyArg_ParseTuple(args, "eti:lchmod", Py_FileSystemDefaultEncoding,
> + &path, &i))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = lchmod(path, i);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_RETURN_NONE;
> +}
> +#endif /* HAVE_LCHMOD */
> +
> +
> +#ifdef HAVE_CHFLAGS
> +PyDoc_STRVAR(edk2_chflags__doc__,
> +"chflags(path, flags)\n\n\
> +Set file flags.");
> +
> +static PyObject *
> +edk2_chflags(PyObject *self, PyObject *args)
> +{
> + char *path;
> + unsigned long flags;
> + int res;
> + if (!PyArg_ParseTuple(args, "etk:chflags",
> + Py_FileSystemDefaultEncoding, &path, &flags))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = chflags(path, flags);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_CHFLAGS */
> +
> +#ifdef HAVE_LCHFLAGS
> +PyDoc_STRVAR(edk2_lchflags__doc__,
> +"lchflags(path, flags)\n\n\
> +Set file flags.\n\
> +This function will not follow symbolic links.");
> +
> +static PyObject *
> +edk2_lchflags(PyObject *self, PyObject *args)
> +{
> + char *path;
> + unsigned long flags;
> + int res;
> + if (!PyArg_ParseTuple(args, "etk:lchflags",
> + Py_FileSystemDefaultEncoding, &path, &flags))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = lchflags(path, flags);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_LCHFLAGS */
> +
> +#ifdef HAVE_CHROOT
> +PyDoc_STRVAR(edk2_chroot__doc__,
> +"chroot(path)\n\n\
> +Change root directory to path.");
> +
> +static PyObject *
> +edk2_chroot(PyObject *self, PyObject *args)
> +{
> + return edk2_1str(args, "et:chroot", chroot);
> +}
> +#endif
> +
> +#ifdef HAVE_FSYNC
> +PyDoc_STRVAR(edk2_fsync__doc__,
> +"fsync(fildes)\n\n\
> +force write of file with filedescriptor to disk.");
> +
> +static PyObject *
> +edk2_fsync(PyObject *self, PyObject *fdobj)
> +{
> + return edk2_fildes(fdobj, fsync);
> +}
> +#endif /* HAVE_FSYNC */
> +
> +#ifdef HAVE_FDATASYNC
> +
> +#ifdef __hpux
> +extern int fdatasync(int); /* On HP-UX, in libc but not in unistd.h */
> +#endif
> +
> +PyDoc_STRVAR(edk2_fdatasync__doc__,
> +"fdatasync(fildes)\n\n\
> +force write of file with filedescriptor to disk.\n\
> + does not force update of metadata.");
> +
> +static PyObject *
> +edk2_fdatasync(PyObject *self, PyObject *fdobj)
> +{
> + return edk2_fildes(fdobj, fdatasync);
> +}
> +#endif /* HAVE_FDATASYNC */
> +
> +
> +#ifdef HAVE_CHOWN
> +PyDoc_STRVAR(edk2_chown__doc__,
> +"chown(path, uid, gid)\n\n\
> +Change the owner and group id of path to the numeric uid and gid.");
> +
> +static PyObject *
> +edk2_chown(PyObject *self, PyObject *args)
> +{
> + char *path = NULL;
> + long uid, gid;
> + int res;
> + if (!PyArg_ParseTuple(args, "etll:chown",
> + Py_FileSystemDefaultEncoding, &path,
> + &uid, &gid))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = chown(path, (uid_t) uid, (gid_t) gid);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_CHOWN */
> +
> +#ifdef HAVE_FCHOWN
> +PyDoc_STRVAR(edk2_fchown__doc__,
> +"fchown(fd, uid, gid)\n\n\
> +Change the owner and group id of the file given by file descriptor\n\
> +fd to the numeric uid and gid.");
> +
> +static PyObject *
> +edk2_fchown(PyObject *self, PyObject *args)
> +{
> + int fd;
> + long uid, gid;
> + int res;
> + if (!PyArg_ParseTuple(args, "ill:chown", &fd, &uid, &gid))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = fchown(fd, (uid_t) uid, (gid_t) gid);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_RETURN_NONE;
> +}
> +#endif /* HAVE_FCHOWN */
> +
> +#ifdef HAVE_LCHOWN
> +PyDoc_STRVAR(edk2_lchown__doc__,
> +"lchown(path, uid, gid)\n\n\
> +Change the owner and group id of path to the numeric uid and gid.\n\
> +This function will not follow symbolic links.");
> +
> +static PyObject *
> +edk2_lchown(PyObject *self, PyObject *args)
> +{
> + char *path = NULL;
> + long uid, gid;
> + int res;
> + if (!PyArg_ParseTuple(args, "etll:lchown",
> + Py_FileSystemDefaultEncoding, &path,
> + &uid, &gid))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = lchown(path, (uid_t) uid, (gid_t) gid);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_LCHOWN */
> +
> +
> +#ifdef HAVE_GETCWD
> +PyDoc_STRVAR(edk2_getcwd__doc__,
> +"getcwd() -> path\n\n\
> +Return a string representing the current working directory.");
> +
> +static PyObject *
> +edk2_getcwd(PyObject *self, PyObject *noargs)
> +{
> + int bufsize_incr = 1024;
> + int bufsize = 0;
> + char *tmpbuf = NULL;
> + char *res = NULL;
> + PyObject *dynamic_return;
> +
> + Py_BEGIN_ALLOW_THREADS
> + do {
> + bufsize = bufsize + bufsize_incr;
> + tmpbuf = malloc(bufsize);
> + if (tmpbuf == NULL) {
> + break;
> + }
> + res = getcwd(tmpbuf, bufsize);
> + if (res == NULL) {
> + free(tmpbuf);
> + }
> + } while ((res == NULL) && (errno == ERANGE));
> + Py_END_ALLOW_THREADS
> +
> + if (res == NULL)
> + return edk2_error();
> +
> + dynamic_return = PyUnicode_FromString(tmpbuf);
> + free(tmpbuf);
> +
> + return dynamic_return;
> +}
> +
> +#ifdef Py_USING_UNICODE
> +PyDoc_STRVAR(edk2_getcwdu__doc__,
> +"getcwdu() -> path\n\n\
> +Return a unicode string representing the current working directory.");
> +
> +static PyObject *
> +edk2_getcwdu(PyObject *self, PyObject *noargs)
> +{
> + char buf[1026];
> + char *res;
> +
> + Py_BEGIN_ALLOW_THREADS
> + res = getcwd(buf, sizeof buf);
> + Py_END_ALLOW_THREADS
> + if (res == NULL)
> + return edk2_error();
> + return PyUnicode_Decode(buf, strlen(buf), Py_FileSystemDefaultEncoding,"strict");
> +}
> +#endif /* Py_USING_UNICODE */
> +#endif /* HAVE_GETCWD */
> +
> +
> +PyDoc_STRVAR(edk2_listdir__doc__,
> +"listdir(path) -> list_of_strings\n\n\
> +Return a list containing the names of the entries in the directory.\n\
> +\n\
> + path: path of directory to list\n\
> +\n\
> +The list is in arbitrary order. It does not include the special\n\
> +entries '.' and '..' even if they are present in the directory.");
> +
> +static PyObject *
> +edk2_listdir(PyObject *self, PyObject *args)
> +{
> + /* XXX Should redo this putting the (now four) versions of opendir
> + in separate files instead of having them all here... */
> +
> + char *name = NULL;
> + char *MBname;
> + PyObject *d, *v;
> + DIR *dirp;
> + struct dirent *ep;
> + int arg_is_unicode = 1;
> +
> + errno = 0;
> + if (!PyArg_ParseTuple(args, "U:listdir", &v)) {
> + arg_is_unicode = 0;
> + PyErr_Clear();
> + }
> + if (!PyArg_ParseTuple(args, "et:listdir", Py_FileSystemDefaultEncoding, &name))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + dirp = opendir(name);
> + Py_END_ALLOW_THREADS
> + if (dirp == NULL) {
> + return edk2_error_with_allocated_filename(name);
> + }
> + if ((d = PyList_New(0)) == NULL) {
> + Py_BEGIN_ALLOW_THREADS
> + closedir(dirp);
> + Py_END_ALLOW_THREADS
> + PyMem_Free(name);
> + return NULL;
> + }
> + if((MBname = malloc(NAME_MAX)) == NULL) {
> + Py_BEGIN_ALLOW_THREADS
> + closedir(dirp);
> + Py_END_ALLOW_THREADS
> + Py_DECREF(d);
> + PyMem_Free(name);
> + return NULL;
> + }
> + for (;;) {
> + errno = 0;
> + Py_BEGIN_ALLOW_THREADS
> + ep = readdir(dirp);
> + Py_END_ALLOW_THREADS
> + if (ep == NULL) {
> + if ((errno == 0) || (errno == EISDIR)) {
> + break;
> + } else {
> + Py_BEGIN_ALLOW_THREADS
> + closedir(dirp);
> + Py_END_ALLOW_THREADS
> + Py_DECREF(d);
> + return edk2_error_with_allocated_filename(name);
> + }
> + }
> + if (ep->FileName[0] == L'.' &&
> + (NAMLEN(ep) == 1 ||
> + (ep->FileName[1] == L'.' && NAMLEN(ep) == 2)))
> + continue;
> + if(wcstombs(MBname, ep->FileName, NAME_MAX) == -1) {
> + free(MBname);
> + Py_BEGIN_ALLOW_THREADS
> + closedir(dirp);
> + Py_END_ALLOW_THREADS
> + Py_DECREF(d);
> + PyMem_Free(name);
> + return NULL;
> + }
> + v = PyUnicode_FromStringAndSize(MBname, strlen(MBname));
> + if (v == NULL) {
> + Py_DECREF(d);
> + d = NULL;
> + break;
> + }
> +#ifdef Py_USING_UNICODE
> + if (arg_is_unicode) {
> + PyObject *w;
> +
> + w = PyUnicode_FromEncodedObject(v,
> + Py_FileSystemDefaultEncoding,
> + "strict");
> + if (w != NULL) {
> + Py_DECREF(v);
> + v = w;
> + }
> + else {
> + /* fall back to the original byte string, as
> + discussed in patch #683592 */
> + PyErr_Clear();
> + }
> + }
> +#endif
> + if (PyList_Append(d, v) != 0) {
> + Py_DECREF(v);
> + Py_DECREF(d);
> + d = NULL;
> + break;
> + }
> + Py_DECREF(v);
> + }
> + Py_BEGIN_ALLOW_THREADS
> + closedir(dirp);
> + Py_END_ALLOW_THREADS
> + PyMem_Free(name);
> + if(MBname != NULL) {
> + free(MBname);
> + }
> +
> + return d;
> +
> +} /* end of edk2_listdir */
> +
> +PyDoc_STRVAR(edk2_mkdir__doc__,
> +"mkdir(path [, mode=0777])\n\n\
> +Create a directory.");
> +
> +static PyObject *
> +edk2_mkdir(PyObject *self, PyObject *args)
> +{
> + int res;
> + char *path = NULL;
> + int mode = 0777;
> +
> + if (!PyArg_ParseTuple(args, "et|i:mkdir",
> + Py_FileSystemDefaultEncoding, &path, &mode))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = mkdir(path, mode);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error_with_allocated_filename(path);
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +
> +/* sys/resource.h is needed for at least: wait3(), wait4(), broken nice. */
> +#if defined(HAVE_SYS_RESOURCE_H)
> +#include <sys/resource.h>
> +#endif
> +
> +
> +#ifdef HAVE_NICE
> +PyDoc_STRVAR(edk2_nice__doc__,
> +"nice(inc) -> new_priority\n\n\
> +Decrease the priority of process by inc and return the new priority.");
> +
> +static PyObject *
> +edk2_nice(PyObject *self, PyObject *args)
> +{
> + int increment, value;
> +
> + if (!PyArg_ParseTuple(args, "i:nice", &increment))
> + return NULL;
> +
> + /* There are two flavours of 'nice': one that returns the new
> + priority (as required by almost all standards out there) and the
> + Linux/FreeBSD/BSDI one, which returns '0' on success and advises
> + the use of getpriority() to get the new priority.
> +
> + If we are of the nice family that returns the new priority, we
> + need to clear errno before the call, and check if errno is filled
> + before calling edk2_error() on a return value of -1, because the
> + -1 may be the actual new priority! */
> +
> + errno = 0;
> + value = nice(increment);
> +#if defined(HAVE_BROKEN_NICE) && defined(HAVE_GETPRIORITY)
> + if (value == 0)
> + value = getpriority(PRIO_PROCESS, 0);
> +#endif
> + if (value == -1 && errno != 0)
> + /* either nice() or getpriority() returned an error */
> + return edk2_error();
> + return PyLong_FromLong((long) value);
> +}
> +#endif /* HAVE_NICE */
> +
> +PyDoc_STRVAR(edk2_rename__doc__,
> +"rename(old, new)\n\n\
> +Rename a file or directory.");
> +
> +static PyObject *
> +edk2_rename(PyObject *self, PyObject *args)
> +{
> + return edk2_2str(args, "etet:rename", rename);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_rmdir__doc__,
> +"rmdir(path)\n\n\
> +Remove a directory.");
> +
> +static PyObject *
> +edk2_rmdir(PyObject *self, PyObject *args)
> +{
> + return edk2_1str(args, "et:rmdir", rmdir);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_stat__doc__,
> +"stat(path) -> stat result\n\n\
> +Perform a stat system call on the given path.");
> +
> +static PyObject *
> +edk2_stat(PyObject *self, PyObject *args)
> +{
> + return edk2_do_stat(self, args, "et:stat", STAT, NULL, NULL);
> +}
> +
> +
> +#ifdef HAVE_SYSTEM
> +PyDoc_STRVAR(edk2_system__doc__,
> +"system(command) -> exit_status\n\n\
> +Execute the command (a string) in a subshell.");
> +
> +static PyObject *
> +edk2_system(PyObject *self, PyObject *args)
> +{
> + char *command;
> + long sts;
> + if (!PyArg_ParseTuple(args, "s:system", &command))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + sts = system(command);
> + Py_END_ALLOW_THREADS
> + return PyLong_FromLong(sts);
> +}
> +#endif
> +
> +
> +PyDoc_STRVAR(edk2_umask__doc__,
> +"umask(new_mask) -> old_mask\n\n\
> +Set the current numeric umask and return the previous umask.");
> +
> +static PyObject *
> +edk2_umask(PyObject *self, PyObject *args)
> +{
> + int i;
> + if (!PyArg_ParseTuple(args, "i:umask", &i))
> + return NULL;
> + i = (int)umask(i);
> + if (i < 0)
> + return edk2_error();
> + return PyLong_FromLong((long)i);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_unlink__doc__,
> +"unlink(path)\n\n\
> +Remove a file (same as remove(path)).");
> +
> +PyDoc_STRVAR(edk2_remove__doc__,
> +"remove(path)\n\n\
> +Remove a file (same as unlink(path)).");
> +
> +static PyObject *
> +edk2_unlink(PyObject *self, PyObject *args)
> +{
> + return edk2_1str(args, "et:remove", unlink);
> +}
> +
> +
> +static int
> +extract_time(PyObject *t, time_t* sec, long* usec)
> +{
> + time_t intval;
> + if (PyFloat_Check(t)) {
> + double tval = PyFloat_AsDouble(t);
> + PyObject *intobj = PyNumber_Long(t);
> + if (!intobj)
> + return -1;
> +#if SIZEOF_TIME_T > SIZEOF_LONG
> + intval = PyLong_AsUnsignedLongLongMask(intobj);
> +#else
> + intval = PyLong_AsLong(intobj);
> +#endif
> + Py_DECREF(intobj);
> + if (intval == -1 && PyErr_Occurred())
> + return -1;
> + *sec = intval;
> + *usec = (long)((tval - intval) * 1e6); /* can't exceed 1000000 */
> + if (*usec < 0)
> + /* If rounding gave us a negative number,
> + truncate. */
> + *usec = 0;
> + return 0;
> + }
> +#if SIZEOF_TIME_T > SIZEOF_LONG
> + intval = PyLong_AsUnsignedLongLongMask(t);
> +#else
> + intval = PyLong_AsLong(t);
> +#endif
> + if (intval == -1 && PyErr_Occurred())
> + return -1;
> + *sec = intval;
> + *usec = 0;
> + return 0;
> +}
> +
> +PyDoc_STRVAR(edk2_utime__doc__,
> +"utime(path, (atime, mtime))\n\
> +utime(path, None)\n\n\
> +Set the access and modified time of the file to the given values. If the\n\
> +second form is used, set the access and modified times to the current time.");
> +
> +static PyObject *
> +edk2_utime(PyObject *self, PyObject *args)
> +{
> + char *path = NULL;
> + time_t atime, mtime;
> + long ausec, musec;
> + int res;
> + PyObject* arg;
> +
> +#if defined(HAVE_UTIMES)
> + struct timeval buf[2];
> +#define ATIME buf[0].tv_sec
> +#define MTIME buf[1].tv_sec
> +#elif defined(HAVE_UTIME_H)
> +/* XXX should define struct utimbuf instead, above */
> + struct utimbuf buf;
> +#define ATIME buf.actime
> +#define MTIME buf.modtime
> +#define UTIME_ARG &buf
> +#else /* HAVE_UTIMES */
> + time_t buf[2];
> +#define ATIME buf[0]
> +#define MTIME buf[1]
> +#define UTIME_ARG buf
> +#endif /* HAVE_UTIMES */
> +
> +
> + if (!PyArg_ParseTuple(args, "etO:utime",
> + Py_FileSystemDefaultEncoding, &path, &arg))
> + return NULL;
> + if (arg == Py_None) {
> + /* optional time values not given */
> + Py_BEGIN_ALLOW_THREADS
> + res = utime(path, NULL);
> + Py_END_ALLOW_THREADS
> + }
> + else if (!PyTuple_Check(arg) || PyTuple_Size(arg) != 2) {
> + PyErr_SetString(PyExc_TypeError,
> + "utime() arg 2 must be a tuple (atime, mtime)");
> + PyMem_Free(path);
> + return NULL;
> + }
> + else {
> + if (extract_time(PyTuple_GET_ITEM(arg, 0),
> + &atime, &ausec) == -1) {
> + PyMem_Free(path);
> + return NULL;
> + }
> + if (extract_time(PyTuple_GET_ITEM(arg, 1),
> + &mtime, &musec) == -1) {
> + PyMem_Free(path);
> + return NULL;
> + }
> + ATIME = atime;
> + MTIME = mtime;
> +#ifdef HAVE_UTIMES
> + buf[0].tv_usec = ausec;
> + buf[1].tv_usec = musec;
> + Py_BEGIN_ALLOW_THREADS
> + res = utimes(path, buf);
> + Py_END_ALLOW_THREADS
> +#else
> + Py_BEGIN_ALLOW_THREADS
> + res = utime(path, UTIME_ARG);
> + Py_END_ALLOW_THREADS
> +#endif /* HAVE_UTIMES */
> + }
> + if (res < 0) {
> + return edk2_error_with_allocated_filename(path);
> + }
> + PyMem_Free(path);
> + Py_INCREF(Py_None);
> + return Py_None;
> +#undef UTIME_ARG
> +#undef ATIME
> +#undef MTIME
> +}
> +
> +
> +/* Process operations */
> +
> +PyDoc_STRVAR(edk2__exit__doc__,
> +"_exit(status)\n\n\
> +Exit to the system with specified status, without normal exit processing.");
> +
> +static PyObject *
> +edk2__exit(PyObject *self, PyObject *args)
> +{
> + int sts;
> + if (!PyArg_ParseTuple(args, "i:_exit", &sts))
> + return NULL;
> + _Exit(sts);
> + return NULL; /* Make gcc -Wall happy */
> +}
> +
> +#if defined(HAVE_EXECV) || defined(HAVE_SPAWNV)
> +static void
> +free_string_array(char **array, Py_ssize_t count)
> +{
> + Py_ssize_t i;
> + for (i = 0; i < count; i++)
> + PyMem_Free(array[i]);
> + PyMem_DEL(array);
> +}
> +#endif
> +
> +
> +#ifdef HAVE_EXECV
> +PyDoc_STRVAR(edk2_execv__doc__,
> +"execv(path, args)\n\n\
> +Execute an executable path with arguments, replacing current process.\n\
> +\n\
> + path: path of executable file\n\
> + args: tuple or list of strings");
> +
> +static PyObject *
> +edk2_execv(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv;
> + char **argvlist;
> + Py_ssize_t i, argc;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> +
> + /* execv has two arguments: (path, argv), where
> + argv is a list or tuple of strings. */
> +
> + if (!PyArg_ParseTuple(args, "etO:execv",
> + Py_FileSystemDefaultEncoding,
> + &path, &argv))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError, "execv() arg 2 must be a tuple or list");
> + PyMem_Free(path);
> + return NULL;
> + }
> + if (argc < 1) {
> + PyErr_SetString(PyExc_ValueError, "execv() arg 2 must not be empty");
> + PyMem_Free(path);
> + return NULL;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyMem_Free(path);
> + return PyErr_NoMemory();
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i), "et",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i])) {
> + free_string_array(argvlist, i);
> + PyErr_SetString(PyExc_TypeError,
> + "execv() arg 2 must contain only strings");
> + PyMem_Free(path);
> + return NULL;
> +
> + }
> + }
> + argvlist[argc] = NULL;
> +
> + execv(path, argvlist);
> +
> + /* If we get here it's definitely an error */
> +
> + free_string_array(argvlist, argc);
> + PyMem_Free(path);
> + return edk2_error();
> +}
> +
> +
> +PyDoc_STRVAR(edk2_execve__doc__,
> +"execve(path, args, env)\n\n\
> +Execute a path with arguments and environment, replacing current process.\n\
> +\n\
> + path: path of executable file\n\
> + args: tuple or list of arguments\n\
> + env: dictionary of strings mapping to strings");
> +
> +static PyObject *
> +edk2_execve(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv, *env;
> + char **argvlist;
> + char **envlist;
> + PyObject *key, *val, *keys=NULL, *vals=NULL;
> + Py_ssize_t i, pos, argc, envc;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> + Py_ssize_t lastarg = 0;
> +
> + /* execve has three arguments: (path, argv, env), where
> + argv is a list or tuple of strings and env is a dictionary
> + like posix.environ. */
> +
> + if (!PyArg_ParseTuple(args, "etOO:execve",
> + Py_FileSystemDefaultEncoding,
> + &path, &argv, &env))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "execve() arg 2 must be a tuple or list");
> + goto fail_0;
> + }
> + if (!PyMapping_Check(env)) {
> + PyErr_SetString(PyExc_TypeError,
> + "execve() arg 3 must be a mapping object");
> + goto fail_0;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_0;
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i),
> + "et;execve() arg 2 must contain only strings",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i]))
> + {
> + lastarg = i;
> + goto fail_1;
> + }
> + }
> + lastarg = argc;
> + argvlist[argc] = NULL;
> +
> + i = PyMapping_Size(env);
> + if (i < 0)
> + goto fail_1;
> + envlist = PyMem_NEW(char *, i + 1);
> + if (envlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_1;
> + }
> + envc = 0;
> + keys = PyMapping_Keys(env);
> + vals = PyMapping_Values(env);
> + if (!keys || !vals)
> + goto fail_2;
> + if (!PyList_Check(keys) || !PyList_Check(vals)) {
> + PyErr_SetString(PyExc_TypeError,
> + "execve(): env.keys() or env.values() is not a list");
> + goto fail_2;
> + }
> +
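> + /* Flatten the environment mapping into "NAME=value" C strings for execve(). */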
> + for (pos = 0; pos < i; pos++) {
> + char *p, *k, *v;
> + size_t len;
> +
> + key = PyList_GetItem(keys, pos);
> + val = PyList_GetItem(vals, pos);
> + if (!key || !val)
> + goto fail_2;
> +
> + if (!PyArg_Parse(
> + key,
> + "s;execve() arg 3 contains a non-string key",
> + &k) ||
> + !PyArg_Parse(
> + val,
> + "s;execve() arg 3 contains a non-string value",
> + &v))
> + {
> + goto fail_2;
> + }
> +
> +#if defined(PYOS_OS2)
> + /* Omit Pseudo-Env Vars that Would Confuse Programs if Passed On */
> + if (stricmp(k, "BEGINLIBPATH") != 0 && stricmp(k, "ENDLIBPATH") != 0) {
> +#endif
> + len = strlen(k) + strlen(v) + 2;
> + p = PyMem_NEW(char, len);
> + if (p == NULL) {
> + PyErr_NoMemory();
> + goto fail_2;
> + }
> + PyOS_snprintf(p, len, "%s=%s", k, v);
> + envlist[envc++] = p;
> +#if defined(PYOS_OS2)
> + }
> +#endif
> + }
> + envlist[envc] = 0;
> +
> + execve(path, argvlist, envlist);
> +
> + /* If we get here it's definitely an error */
> +
> + (void) edk2_error();
> +
> + fail_2:
> + while (--envc >= 0)
> + PyMem_DEL(envlist[envc]);
> + PyMem_DEL(envlist);
> + fail_1:
> + free_string_array(argvlist, lastarg);
> + Py_XDECREF(vals);
> + Py_XDECREF(keys);
> + fail_0:
> + PyMem_Free(path);
> + return NULL;
> +}
> +#endif /* HAVE_EXECV */
> +
> +
> +#ifdef HAVE_SPAWNV
> +PyDoc_STRVAR(edk2_spawnv__doc__,
> +"spawnv(mode, path, args)\n\n\
> +Execute the program 'path' in a new process.\n\
> +\n\
> + mode: mode of process creation\n\
> + path: path of executable file\n\
> + args: tuple or list of strings");
> +
> +static PyObject *
> +edk2_spawnv(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv;
> + char **argvlist;
> + int mode, i;
> + Py_ssize_t argc;
> + Py_intptr_t spawnval;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> +
> + /* spawnv has three arguments: (mode, path, argv), where
> + argv is a list or tuple of strings. */
> +
> + if (!PyArg_ParseTuple(args, "ietO:spawnv", &mode,
> + Py_FileSystemDefaultEncoding,
> + &path, &argv))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnv() arg 2 must be a tuple or list");
> + PyMem_Free(path);
> + return NULL;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyMem_Free(path);
> + return PyErr_NoMemory();
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i), "et",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i])) {
> + free_string_array(argvlist, i);
> + PyErr_SetString(
> + PyExc_TypeError,
> + "spawnv() arg 2 must contain only strings");
> + PyMem_Free(path);
> + return NULL;
> + }
> + }
> + argvlist[argc] = NULL;
> +
> +#if defined(PYOS_OS2) && defined(PYCC_GCC)
> + Py_BEGIN_ALLOW_THREADS
> + spawnval = spawnv(mode, path, argvlist);
> + Py_END_ALLOW_THREADS
> +#else
> + if (mode == _OLD_P_OVERLAY)
> + mode = _P_OVERLAY;
> +
> + Py_BEGIN_ALLOW_THREADS
> + spawnval = _spawnv(mode, path, argvlist);
> + Py_END_ALLOW_THREADS
> +#endif
> +
> + free_string_array(argvlist, argc);
> + PyMem_Free(path);
> +
> + if (spawnval == -1)
> + return edk2_error();
> + else
> +#if SIZEOF_LONG == SIZEOF_VOID_P
> + return Py_BuildValue("l", (long) spawnval);
> +#else
> + return Py_BuildValue("L", (PY_LONG_LONG) spawnval);
> +#endif
> +}
> +
> +
> +PyDoc_STRVAR(edk2_spawnve__doc__,
> +"spawnve(mode, path, args, env)\n\n\
> +Execute the program 'path' in a new process.\n\
> +\n\
> + mode: mode of process creation\n\
> + path: path of executable file\n\
> + args: tuple or list of arguments\n\
> + env: dictionary of strings mapping to strings");
> +
> +static PyObject *
> +edk2_spawnve(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv, *env;
> + char **argvlist;
> + char **envlist;
> + PyObject *key, *val, *keys=NULL, *vals=NULL, *res=NULL;
> + int mode, pos, envc;
> + Py_ssize_t argc, i;
> + Py_intptr_t spawnval;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> + Py_ssize_t lastarg = 0;
> +
> + /* spawnve has four arguments: (mode, path, argv, env), where
> + argv is a list or tuple of strings and env is a dictionary
> + like posix.environ. */
> +
> + if (!PyArg_ParseTuple(args, "ietOO:spawnve", &mode,
> + Py_FileSystemDefaultEncoding,
> + &path, &argv, &env))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnve() arg 2 must be a tuple or list");
> + goto fail_0;
> + }
> + if (!PyMapping_Check(env)) {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnve() arg 3 must be a mapping object");
> + goto fail_0;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_0;
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i),
> + "et;spawnve() arg 2 must contain only strings",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i]))
> + {
> + lastarg = i;
> + goto fail_1;
> + }
> + }
> + lastarg = argc;
> + argvlist[argc] = NULL;
> +
> + i = PyMapping_Size(env);
> + if (i < 0)
> + goto fail_1;
> + envlist = PyMem_NEW(char *, i + 1);
> + if (envlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_1;
> + }
> + envc = 0;
> + keys = PyMapping_Keys(env);
> + vals = PyMapping_Values(env);
> + if (!keys || !vals)
> + goto fail_2;
> + if (!PyList_Check(keys) || !PyList_Check(vals)) {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnve(): env.keys() or env.values() is not a list");
> + goto fail_2;
> + }
> +
> + for (pos = 0; pos < i; pos++) {
> + char *p, *k, *v;
> + size_t len;
> +
> + key = PyList_GetItem(keys, pos);
> + val = PyList_GetItem(vals, pos);
> + if (!key || !val)
> + goto fail_2;
> +
> + if (!PyArg_Parse(
> + key,
> + "s;spawnve() arg 3 contains a non-string key",
> + &k) ||
> + !PyArg_Parse(
> + val,
> + "s;spawnve() arg 3 contains a non-string value",
> + &v))
> + {
> + goto fail_2;
> + }
> + len = strlen(k) + strlen(v) + 2;
> + p = PyMem_NEW(char, len);
> + if (p == NULL) {
> + PyErr_NoMemory();
> + goto fail_2;
> + }
> + PyOS_snprintf(p, len, "%s=%s", k, v);
> + envlist[envc++] = p;
> + }
> + envlist[envc] = 0;
> +
> +#if defined(PYOS_OS2) && defined(PYCC_GCC)
> + Py_BEGIN_ALLOW_THREADS
> + spawnval = spawnve(mode, path, argvlist, envlist);
> + Py_END_ALLOW_THREADS
> +#else
> + if (mode == _OLD_P_OVERLAY)
> + mode = _P_OVERLAY;
> +
> + Py_BEGIN_ALLOW_THREADS
> + spawnval = _spawnve(mode, path, argvlist, envlist);
> + Py_END_ALLOW_THREADS
> +#endif
> +
> + if (spawnval == -1)
> + (void) edk2_error();
> + else
> +#if SIZEOF_LONG == SIZEOF_VOID_P
> + res = Py_BuildValue("l", (long) spawnval);
> +#else
> + res = Py_BuildValue("L", (PY_LONG_LONG) spawnval);
> +#endif
> +
> + fail_2:
> + while (--envc >= 0)
> + PyMem_DEL(envlist[envc]);
> + PyMem_DEL(envlist);
> + fail_1:
> + free_string_array(argvlist, lastarg);
> + Py_XDECREF(vals);
> + Py_XDECREF(keys);
> + fail_0:
> + PyMem_Free(path);
> + return res;
> +}
> +
> +/* OS/2 supports spawnvp & spawnvpe natively */
> +#if defined(PYOS_OS2)
> +PyDoc_STRVAR(edk2_spawnvp__doc__,
> +"spawnvp(mode, file, args)\n\n\
> +Execute the program 'file' in a new process, using the environment\n\
> +search path to find the file.\n\
> +\n\
> + mode: mode of process creation\n\
> + file: executable file name\n\
> + args: tuple or list of strings");
> +
> +static PyObject *
> +edk2_spawnvp(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv;
> + char **argvlist;
> + int mode, i, argc;
> + Py_intptr_t spawnval;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> +
> + /* spawnvp has three arguments: (mode, path, argv), where
> + argv is a list or tuple of strings. */
> +
> + if (!PyArg_ParseTuple(args, "ietO:spawnvp", &mode,
> + Py_FileSystemDefaultEncoding,
> + &path, &argv))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnvp() arg 2 must be a tuple or list");
> + PyMem_Free(path);
> + return NULL;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyMem_Free(path);
> + return PyErr_NoMemory();
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i), "et",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i])) {
> + free_string_array(argvlist, i);
> + PyErr_SetString(
> + PyExc_TypeError,
> + "spawnvp() arg 2 must contain only strings");
> + PyMem_Free(path);
> + return NULL;
> + }
> + }
> + argvlist[argc] = NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> +#if defined(PYCC_GCC)
> + spawnval = spawnvp(mode, path, argvlist);
> +#else
> + spawnval = _spawnvp(mode, path, argvlist);
> +#endif
> + Py_END_ALLOW_THREADS
> +
> + free_string_array(argvlist, argc);
> + PyMem_Free(path);
> +
> + if (spawnval == -1)
> + return edk2_error();
> + else
> + return Py_BuildValue("l", (long) spawnval);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_spawnvpe__doc__,
> +"spawnvpe(mode, file, args, env)\n\n\
> +Execute the program 'file' in a new process, using the environment\n\
> +search path to find the file.\n\
> +\n\
> + mode: mode of process creation\n\
> + file: executable file name\n\
> + args: tuple or list of arguments\n\
> + env: dictionary of strings mapping to strings");
> +
> +static PyObject *
> +edk2_spawnvpe(PyObject *self, PyObject *args)
> +{
> + char *path;
> + PyObject *argv, *env;
> + char **argvlist;
> + char **envlist;
> + PyObject *key, *val, *keys=NULL, *vals=NULL, *res=NULL;
> + int mode, i, pos, argc, envc;
> + Py_intptr_t spawnval;
> + PyObject *(*getitem)(PyObject *, Py_ssize_t);
> + int lastarg = 0;
> +
> + /* spawnvpe has four arguments: (mode, path, argv, env), where
> + argv is a list or tuple of strings and env is a dictionary
> + like posix.environ. */
> +
> + if (!PyArg_ParseTuple(args, "ietOO:spawnvpe", &mode,
> + Py_FileSystemDefaultEncoding,
> + &path, &argv, &env))
> + return NULL;
> + if (PyList_Check(argv)) {
> + argc = PyList_Size(argv);
> + getitem = PyList_GetItem;
> + }
> + else if (PyTuple_Check(argv)) {
> + argc = PyTuple_Size(argv);
> + getitem = PyTuple_GetItem;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnvpe() arg 2 must be a tuple or list");
> + goto fail_0;
> + }
> + if (!PyMapping_Check(env)) {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnvpe() arg 3 must be a mapping object");
> + goto fail_0;
> + }
> +
> + argvlist = PyMem_NEW(char *, argc+1);
> + if (argvlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_0;
> + }
> + for (i = 0; i < argc; i++) {
> + if (!PyArg_Parse((*getitem)(argv, i),
> + "et;spawnvpe() arg 2 must contain only strings",
> + Py_FileSystemDefaultEncoding,
> + &argvlist[i]))
> + {
> + lastarg = i;
> + goto fail_1;
> + }
> + }
> + lastarg = argc;
> + argvlist[argc] = NULL;
> +
> + i = PyMapping_Size(env);
> + if (i < 0)
> + goto fail_1;
> + envlist = PyMem_NEW(char *, i + 1);
> + if (envlist == NULL) {
> + PyErr_NoMemory();
> + goto fail_1;
> + }
> + envc = 0;
> + keys = PyMapping_Keys(env);
> + vals = PyMapping_Values(env);
> + if (!keys || !vals)
> + goto fail_2;
> + if (!PyList_Check(keys) || !PyList_Check(vals)) {
> + PyErr_SetString(PyExc_TypeError,
> + "spawnvpe(): env.keys() or env.values() is not a list");
> + goto fail_2;
> + }
> +
> + for (pos = 0; pos < i; pos++) {
> + char *p, *k, *v;
> + size_t len;
> +
> + key = PyList_GetItem(keys, pos);
> + val = PyList_GetItem(vals, pos);
> + if (!key || !val)
> + goto fail_2;
> +
> + if (!PyArg_Parse(
> + key,
> + "s;spawnvpe() arg 3 contains a non-string key",
> + &k) ||
> + !PyArg_Parse(
> + val,
> + "s;spawnvpe() arg 3 contains a non-string value",
> + &v))
> + {
> + goto fail_2;
> + }
> + len = strlen(k) + strlen(v) + 2;
> + p = PyMem_NEW(char, len);
> + if (p == NULL) {
> + PyErr_NoMemory();
> + goto fail_2;
> + }
> + PyOS_snprintf(p, len, "%s=%s", k, v);
> + envlist[envc++] = p;
> + }
> + envlist[envc] = 0;
> +
> + Py_BEGIN_ALLOW_THREADS
> +#if defined(PYCC_GCC)
> + spawnval = spawnvpe(mode, path, argvlist, envlist);
> +#else
> + spawnval = _spawnvpe(mode, path, argvlist, envlist);
> +#endif
> + Py_END_ALLOW_THREADS
> +
> + if (spawnval == -1)
> + (void) edk2_error();
> + else
> + res = Py_BuildValue("l", (long) spawnval);
> +
> + fail_2:
> + while (--envc >= 0)
> + PyMem_DEL(envlist[envc]);
> + PyMem_DEL(envlist);
> + fail_1:
> + free_string_array(argvlist, lastarg);
> + Py_XDECREF(vals);
> + Py_XDECREF(keys);
> + fail_0:
> + PyMem_Free(path);
> + return res;
> +}
> +#endif /* PYOS_OS2 */
> +#endif /* HAVE_SPAWNV */
> +
> +
> +#ifdef HAVE_FORK1
> +PyDoc_STRVAR(edk2_fork1__doc__,
> +"fork1() -> pid\n\n\
> +Fork a child process with a single multiplexed (i.e., not bound) thread.\n\
> +\n\
> +Return 0 to child process and PID of child to parent process.");
> +
> +static PyObject *
> +edk2_fork1(PyObject *self, PyObject *noargs)
> +{
> + pid_t pid;
> + int result = 0;
> + _PyImport_AcquireLock();
> + pid = fork1();
> + if (pid == 0) {
> + /* child: this clobbers and resets the import lock. */
> + PyOS_AfterFork();
> + } else {
> + /* parent: release the import lock. */
> + result = _PyImport_ReleaseLock();
> + }
> + if (pid == -1)
> + return edk2_error();
> + if (result < 0) {
> + /* Don't clobber the OSError if the fork failed. */
> + PyErr_SetString(PyExc_RuntimeError,
> + "not holding the import lock");
> + return NULL;
> + }
> + return PyLong_FromPid(pid);
> +}
> +#endif
> +
> +
> +#ifdef HAVE_FORK
> +PyDoc_STRVAR(edk2_fork__doc__,
> +"fork() -> pid\n\n\
> +Fork a child process.\n\
> +Return 0 to child process and PID of child to parent process.");
> +
> +static PyObject *
> +edk2_fork(PyObject *self, PyObject *noargs)
> +{
> + pid_t pid;
> + int result = 0;
> + _PyImport_AcquireLock();
> + pid = fork();
> + if (pid == 0) {
> + /* child: this clobbers and resets the import lock. */
> + PyOS_AfterFork();
> + } else {
> + /* parent: release the import lock. */
> + result = _PyImport_ReleaseLock();
> + }
> + if (pid == -1)
> + return edk2_error();
> + if (result < 0) {
> + /* Don't clobber the OSError if the fork failed. */
> + PyErr_SetString(PyExc_RuntimeError,
> + "not holding the import lock");
> + return NULL;
> + }
> + return PyLong_FromPid(pid);
> +}
> +#endif
> +
> +/* AIX uses /dev/ptc but is otherwise the same as /dev/ptmx */
> +/* IRIX has both /dev/ptc and /dev/ptmx, use ptmx */
> +#if defined(HAVE_DEV_PTC) && !defined(HAVE_DEV_PTMX)
> +#define DEV_PTY_FILE "/dev/ptc"
> +#define HAVE_DEV_PTMX
> +#else
> +#define DEV_PTY_FILE "/dev/ptmx"
> +#endif
> +
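> +/* openpty()/forkpty() are declared in different headers depending on the
> + * platform; <stropts.h> supplies the STREAMS ioctls used on the /dev/ptmx
> + * path below. */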
> +#if defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_DEV_PTMX)
> +#ifdef HAVE_PTY_H
> +#include <pty.h>
> +#else
> +#ifdef HAVE_LIBUTIL_H
> +#include <libutil.h>
> +#else
> +#ifdef HAVE_UTIL_H
> +#include <util.h>
> +#endif /* HAVE_UTIL_H */
> +#endif /* HAVE_LIBUTIL_H */
> +#endif /* HAVE_PTY_H */
> +#ifdef HAVE_STROPTS_H
> +#include <stropts.h>
> +#endif
> +#endif /* defined(HAVE_OPENPTY) || defined(HAVE_FORKPTY) || defined(HAVE_DEV_PTMX) */
> +
> +#if defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX)
> +PyDoc_STRVAR(edk2_openpty__doc__,
> +"openpty() -> (master_fd, slave_fd)\n\n\
> +Open a pseudo-terminal, returning open fd's for both master and slave end.\n");
> +
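> +/* Three implementation strategies, chosen at compile time: openpty(),
> + * _getpty(), or opening DEV_PTY_FILE and using grantpt()/unlockpt()/
> + * ptsname() to obtain the slave side. */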
> +static PyObject *
> +edk2_openpty(PyObject *self, PyObject *noargs)
> +{
> + int master_fd, slave_fd;
> +#ifndef HAVE_OPENPTY
> + char * slave_name;
> +#endif
> +#if defined(HAVE_DEV_PTMX) && !defined(HAVE_OPENPTY) && !defined(HAVE__GETPTY)
> + PyOS_sighandler_t sig_saved;
> +#ifdef sun
> + extern char *ptsname(int fildes);
> +#endif
> +#endif
> +
> +#ifdef HAVE_OPENPTY
> + if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
> + return edk2_error();
> +#elif defined(HAVE__GETPTY)
> + slave_name = _getpty(&master_fd, O_RDWR, 0666, 0);
> + if (slave_name == NULL)
> + return edk2_error();
> +
> + slave_fd = open(slave_name, O_RDWR);
> + if (slave_fd < 0)
> + return edk2_error();
> +#else
> + master_fd = open(DEV_PTY_FILE, O_RDWR | O_NOCTTY); /* open master */
> + if (master_fd < 0)
> + return edk2_error();
> + sig_saved = PyOS_setsig(SIGCHLD, SIG_DFL);
> + /* change permission of slave */
> + if (grantpt(master_fd) < 0) {
> + PyOS_setsig(SIGCHLD, sig_saved);
> + return edk2_error();
> + }
> + /* unlock slave */
> + if (unlockpt(master_fd) < 0) {
> + PyOS_setsig(SIGCHLD, sig_saved);
> + return edk2_error();
> + }
> + PyOS_setsig(SIGCHLD, sig_saved);
> + slave_name = ptsname(master_fd); /* get name of slave */
> + if (slave_name == NULL)
> + return edk2_error();
> + slave_fd = open(slave_name, O_RDWR | O_NOCTTY); /* open slave */
> + if (slave_fd < 0)
> + return edk2_error();
> +#if !defined(__CYGWIN__) && !defined(HAVE_DEV_PTC)
> + ioctl(slave_fd, I_PUSH, "ptem"); /* push ptem */
> + ioctl(slave_fd, I_PUSH, "ldterm"); /* push ldterm */
> +#ifndef __hpux
> + ioctl(slave_fd, I_PUSH, "ttcompat"); /* push ttcompat */
> +#endif /* __hpux */
> +#endif /* !__CYGWIN__ && !HAVE_DEV_PTC */
> +#endif /* HAVE_OPENPTY */
> +
> + return Py_BuildValue("(ii)", master_fd, slave_fd);
> +
> +}
> +#endif /* defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX) */
> +
> +#ifdef HAVE_FORKPTY
> +PyDoc_STRVAR(edk2_forkpty__doc__,
> +"forkpty() -> (pid, master_fd)\n\n\
> +Fork a new process with a new pseudo-terminal as controlling tty.\n\n\
> +Like fork(), return 0 as pid to child process, and PID of child to parent.\n\
> +To both, return fd of newly opened pseudo-terminal.\n");
> +
> +static PyObject *
> +edk2_forkpty(PyObject *self, PyObject *noargs)
> +{
> + int master_fd = -1, result = 0;
> + pid_t pid;
> +
> + _PyImport_AcquireLock();
> + pid = forkpty(&master_fd, NULL, NULL, NULL);
> + if (pid == 0) {
> + /* child: this clobbers and resets the import lock. */
> + PyOS_AfterFork();
> + } else {
> + /* parent: release the import lock. */
> + result = _PyImport_ReleaseLock();
> + }
> + if (pid == -1)
> + return edk2_error();
> + if (result < 0) {
> + /* Don't clobber the OSError if the fork failed. */
> + PyErr_SetString(PyExc_RuntimeError,
> + "not holding the import lock");
> + return NULL;
> + }
> + return Py_BuildValue("(Ni)", PyLong_FromPid(pid), master_fd);
> +}
> +#endif
> +
> +PyDoc_STRVAR(edk2_getpid__doc__,
> +"getpid() -> pid\n\n\
> +Return the current process id");
> +
> +static PyObject *
> +edk2_getpid(PyObject *self, PyObject *noargs)
> +{
> + return PyLong_FromPid(getpid());
> +}
> +
> +
> +#ifdef HAVE_GETLOGIN
> +PyDoc_STRVAR(edk2_getlogin__doc__,
> +"getlogin() -> string\n\n\
> +Return the actual login name.");
> +
> +static PyObject *
> +edk2_getlogin(PyObject *self, PyObject *noargs)
> +{
> + PyObject *result = NULL;
> + char *name;
> + int old_errno = errno;
> +
> + errno = 0;
> + name = getlogin();
> + if (name == NULL) {
> + if (errno)
> + edk2_error();
> + else
> + PyErr_SetString(PyExc_OSError,
> + "unable to determine login name");
> + }
> + else
> + result = PyUnicode_FromString(name);
> + errno = old_errno;
> +
> + return result;
> +}
> +#endif
> +
> +#ifdef HAVE_KILL
> +PyDoc_STRVAR(edk2_kill__doc__,
> +"kill(pid, sig)\n\n\
> +Kill a process with a signal.");
> +
> +static PyObject *
> +edk2_kill(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int sig;
> + if (!PyArg_ParseTuple(args, PARSE_PID "i:kill", &pid, &sig))
> + return NULL;
> +#if defined(PYOS_OS2) && !defined(PYCC_GCC)
> + if (sig == XCPT_SIGNAL_INTR || sig == XCPT_SIGNAL_BREAK) {
> + APIRET rc;
> + if ((rc = DosSendSignalException(pid, sig)) != NO_ERROR)
> + return os2_error(rc);
> +
> + } else if (sig == XCPT_SIGNAL_KILLPROC) {
> + APIRET rc;
> + if ((rc = DosKillProcess(DKP_PROCESS, pid)) != NO_ERROR)
> + return os2_error(rc);
> +
> + } else
> + return NULL; /* Unrecognized Signal Requested */
> +#else
> + if (kill(pid, sig) == -1)
> + return edk2_error();
> +#endif
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif
> +
> +#ifdef HAVE_PLOCK
> +
> +#ifdef HAVE_SYS_LOCK_H
> +#include <sys/lock.h>
> +#endif
> +
> +PyDoc_STRVAR(edk2_plock__doc__,
> +"plock(op)\n\n\
> +Lock program segments into memory.");
> +
> +static PyObject *
> +edk2_plock(PyObject *self, PyObject *args)
> +{
> + int op;
> + if (!PyArg_ParseTuple(args, "i:plock", &op))
> + return NULL;
> + if (plock(op) == -1)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif
> +
> +
> +#ifdef HAVE_POPEN
> +PyDoc_STRVAR(edk2_popen__doc__,
> +"popen(command [, mode='r' [, bufsize]]) -> pipe\n\n\
> +Open a pipe to/from a command returning a file object.");
> +
> +static PyObject *
> +edk2_popen(PyObject *self, PyObject *args)
> +{
> + char *name;
> + char *mode = "r";
> + int bufsize = -1;
> + FILE *fp;
> + PyObject *f = NULL;
> + if (!PyArg_ParseTuple(args, "s|si:popen", &name, &mode, &bufsize))
> + return NULL;
> + /* Strip mode of binary or text modifiers */
> + if (strcmp(mode, "rb") == 0 || strcmp(mode, "rt") == 0)
> + mode = "r";
> + else if (strcmp(mode, "wb") == 0 || strcmp(mode, "wt") == 0)
> + mode = "w";
> + Py_BEGIN_ALLOW_THREADS
> + fp = popen(name, mode);
> + Py_END_ALLOW_THREADS
> + if (fp == NULL)
> + return edk2_error();
> +// TODO: Commented out for UEFI because PyFile_FromFile does not compile
> +// here; its omission has no impact on the edk2 module functionality.
> +// f = PyFile_FromFile(fp, name, mode, pclose);
> +// if (f != NULL)
> +// PyFile_SetBufSize(f, bufsize);
> + return f;
> +}
> +
> +#endif /* HAVE_POPEN */
> +
> +
> +#if defined(HAVE_WAIT3) || defined(HAVE_WAIT4)
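> +/* Convert the (pid, status, rusage) results of wait3()/wait4() into a
> + * Python (pid, status, struct_rusage) tuple. */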
> +static PyObject *
> +wait_helper(pid_t pid, int status, struct rusage *ru)
> +{
> + PyObject *result;
> + static PyObject *struct_rusage;
> +
> + if (pid == -1)
> + return edk2_error();
> +
> + if (struct_rusage == NULL) {
> + PyObject *m = PyImport_ImportModuleNoBlock("resource");
> + if (m == NULL)
> + return NULL;
> + struct_rusage = PyObject_GetAttrString(m, "struct_rusage");
> + Py_DECREF(m);
> + if (struct_rusage == NULL)
> + return NULL;
> + }
> +
> + /* XXX(nnorwitz): Copied (w/mods) from resource.c, there should be only one. */
> + result = PyStructSequence_New((PyTypeObject*) struct_rusage);
> + if (!result)
> + return NULL;
> +
> +#ifndef doubletime
> +#define doubletime(TV) ((double)(TV).tv_sec + (TV).tv_usec * 0.000001)
> +#endif
> +
> + PyStructSequence_SET_ITEM(result, 0,
> + PyFloat_FromDouble(doubletime(ru->ru_utime)));
> + PyStructSequence_SET_ITEM(result, 1,
> + PyFloat_FromDouble(doubletime(ru->ru_stime)));
> +#define SET_INT(result, index, value)\
> + PyStructSequence_SET_ITEM(result, index, PyLong_FromLong(value))
> + SET_INT(result, 2, ru->ru_maxrss);
> + SET_INT(result, 3, ru->ru_ixrss);
> + SET_INT(result, 4, ru->ru_idrss);
> + SET_INT(result, 5, ru->ru_isrss);
> + SET_INT(result, 6, ru->ru_minflt);
> + SET_INT(result, 7, ru->ru_majflt);
> + SET_INT(result, 8, ru->ru_nswap);
> + SET_INT(result, 9, ru->ru_inblock);
> + SET_INT(result, 10, ru->ru_oublock);
> + SET_INT(result, 11, ru->ru_msgsnd);
> + SET_INT(result, 12, ru->ru_msgrcv);
> + SET_INT(result, 13, ru->ru_nsignals);
> + SET_INT(result, 14, ru->ru_nvcsw);
> + SET_INT(result, 15, ru->ru_nivcsw);
> +#undef SET_INT
> +
> + if (PyErr_Occurred()) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + return Py_BuildValue("NiN", PyLong_FromPid(pid), status, result);
> +}
> +#endif /* HAVE_WAIT3 || HAVE_WAIT4 */
> +
> +#ifdef HAVE_WAIT3
> +PyDoc_STRVAR(edk2_wait3__doc__,
> +"wait3(options) -> (pid, status, rusage)\n\n\
> +Wait for completion of a child process.");
> +
> +static PyObject *
> +edk2_wait3(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int options;
> + struct rusage ru;
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:wait3", &options))
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + pid = wait3(&status, options, &ru);
> + Py_END_ALLOW_THREADS
> +
> + return wait_helper(pid, WAIT_STATUS_INT(status), &ru);
> +}
> +#endif /* HAVE_WAIT3 */
> +
> +#ifdef HAVE_WAIT4
> +PyDoc_STRVAR(edk2_wait4__doc__,
> +"wait4(pid, options) -> (pid, status, rusage)\n\n\
> +Wait for completion of a given child process.");
> +
> +static PyObject *
> +edk2_wait4(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int options;
> + struct rusage ru;
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, PARSE_PID "i:wait4", &pid, &options))
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + pid = wait4(pid, &status, options, &ru);
> + Py_END_ALLOW_THREADS
> +
> + return wait_helper(pid, WAIT_STATUS_INT(status), &ru);
> +}
> +#endif /* HAVE_WAIT4 */
> +
> +#ifdef HAVE_WAITPID
> +PyDoc_STRVAR(edk2_waitpid__doc__,
> +"waitpid(pid, options) -> (pid, status)\n\n\
> +Wait for completion of a given child process.");
> +
> +static PyObject *
> +edk2_waitpid(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int options;
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, PARSE_PID "i:waitpid", &pid, &options))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + pid = waitpid(pid, &status, options);
> + Py_END_ALLOW_THREADS
> + if (pid == -1)
> + return edk2_error();
> +
> + return Py_BuildValue("Ni", PyLong_FromPid(pid), WAIT_STATUS_INT(status));
> +}
> +
> +#elif defined(HAVE_CWAIT)
> +
> +/* MS C has a variant of waitpid() that's usable for most purposes. */
> +PyDoc_STRVAR(edk2_waitpid__doc__,
> +"waitpid(pid, options) -> (pid, status << 8)\n\n"
> +"Wait for completion of a given process. options is ignored on Windows.");
> +
> +static PyObject *
> +edk2_waitpid(PyObject *self, PyObject *args)
> +{
> + Py_intptr_t pid;
> + int status, options;
> +
> + if (!PyArg_ParseTuple(args, PARSE_PID "i:waitpid", &pid, &options))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + pid = _cwait(&status, pid, options);
> + Py_END_ALLOW_THREADS
> + if (pid == -1)
> + return edk2_error();
> +
> + /* shift the status left a byte so this is more like the POSIX waitpid */
> + return Py_BuildValue("Ni", PyLong_FromPid(pid), status << 8);
> +}
> +#endif /* HAVE_WAITPID || HAVE_CWAIT */
> +
> +#ifdef HAVE_WAIT
> +PyDoc_STRVAR(edk2_wait__doc__,
> +"wait() -> (pid, status)\n\n\
> +Wait for completion of a child process.");
> +
> +static PyObject *
> +edk2_wait(PyObject *self, PyObject *noargs)
> +{
> + pid_t pid;
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + Py_BEGIN_ALLOW_THREADS
> + pid = wait(&status);
> + Py_END_ALLOW_THREADS
> + if (pid == -1)
> + return edk2_error();
> +
> + return Py_BuildValue("Ni", PyLong_FromPid(pid), WAIT_STATUS_INT(status));
> +}
> +#endif
> +
> +
> +PyDoc_STRVAR(edk2_lstat__doc__,
> +"lstat(path) -> stat result\n\n\
> +Like stat(path), but do not follow symbolic links.");
> +
> +static PyObject *
> +edk2_lstat(PyObject *self, PyObject *args)
> +{
> +#ifdef HAVE_LSTAT
> + return edk2_do_stat(self, args, "et:lstat", lstat, NULL, NULL);
> +#else /* !HAVE_LSTAT */
> + return edk2_do_stat(self, args, "et:lstat", STAT, NULL, NULL);
> +#endif /* !HAVE_LSTAT */
> +}
> +
> +
> +#ifdef HAVE_READLINK
> +PyDoc_STRVAR(edk2_readlink__doc__,
> +"readlink(path) -> path\n\n\
> +Return a string representing the path to which the symbolic link points.");
> +
> +static PyObject *
> +edk2_readlink(PyObject *self, PyObject *args)
> +{
> + PyObject* v;
> + char buf[MAXPATHLEN];
> + char *path;
> + int n;
> +#ifdef Py_USING_UNICODE
> + int arg_is_unicode = 0;
> +#endif
> +
> + if (!PyArg_ParseTuple(args, "et:readlink",
> + Py_FileSystemDefaultEncoding, &path))
> + return NULL;
> +#ifdef Py_USING_UNICODE
> + v = PySequence_GetItem(args, 0);
> + if (v == NULL) {
> + PyMem_Free(path);
> + return NULL;
> + }
> +
> + if (PyUnicode_Check(v)) {
> + arg_is_unicode = 1;
> + }
> + Py_DECREF(v);
> +#endif
> +
> + Py_BEGIN_ALLOW_THREADS
> + n = readlink(path, buf, (int) sizeof buf);
> + Py_END_ALLOW_THREADS
> + if (n < 0)
> + return edk2_error_with_allocated_filename(path);
> +
> + PyMem_Free(path);
> + v = PyUnicode_FromStringAndSize(buf, n);
> +#ifdef Py_USING_UNICODE
> + if (arg_is_unicode) {
> + PyObject *w;
> +
> + w = PyUnicode_FromEncodedObject(v,
> + Py_FileSystemDefaultEncoding,
> + "strict");
> + if (w != NULL) {
> + Py_DECREF(v);
> + v = w;
> + }
> + else {
> + /* fall back to the original byte string, as
> + discussed in patch #683592 */
> + PyErr_Clear();
> + }
> + }
> +#endif
> + return v;
> +}
> +#endif /* HAVE_READLINK */
> +
> +
> +#ifdef HAVE_SYMLINK
> +PyDoc_STRVAR(edk2_symlink__doc__,
> +"symlink(src, dst)\n\n\
> +Create a symbolic link pointing to src named dst.");
> +
> +static PyObject *
> +edk2_symlink(PyObject *self, PyObject *args)
> +{
> + return edk2_2str(args, "etet:symlink", symlink);
> +}
> +#endif /* HAVE_SYMLINK */
> +
> +
> +#ifdef HAVE_TIMES
> +#define NEED_TICKS_PER_SECOND
> +static long ticks_per_second = -1;
> +static PyObject *
> +edk2_times(PyObject *self, PyObject *noargs)
> +{
> + struct tms t;
> + clock_t c;
> + errno = 0;
> + c = times(&t);
> + if (c == (clock_t) -1)
> + return edk2_error();
> + return Py_BuildValue("ddddd",
> + (double)t.tms_utime / ticks_per_second,
> + (double)t.tms_stime / ticks_per_second,
> + (double)t.tms_cutime / ticks_per_second,
> + (double)t.tms_cstime / ticks_per_second,
> + (double)c / ticks_per_second);
> +}
> +#endif /* HAVE_TIMES */
> +
> +
> +#ifdef HAVE_TIMES
> +PyDoc_STRVAR(edk2_times__doc__,
> +"times() -> (utime, stime, cutime, cstime, elapsed_time)\n\n\
> +Return a tuple of floating point numbers indicating process times.");
> +#endif
> +
> +
> +#ifdef HAVE_GETSID
> +PyDoc_STRVAR(edk2_getsid__doc__,
> +"getsid(pid) -> sid\n\n\
> +Call the system call getsid().");
> +
> +static PyObject *
> +edk2_getsid(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int sid;
> + if (!PyArg_ParseTuple(args, PARSE_PID ":getsid", &pid))
> + return NULL;
> + sid = getsid(pid);
> + if (sid < 0)
> + return edk2_error();
> + return PyLong_FromLong((long)sid);
> +}
> +#endif /* HAVE_GETSID */
> +
> +
> +#ifdef HAVE_SETSID
> +PyDoc_STRVAR(edk2_setsid__doc__,
> +"setsid()\n\n\
> +Call the system call setsid().");
> +
> +static PyObject *
> +edk2_setsid(PyObject *self, PyObject *noargs)
> +{
> + if (setsid() < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_SETSID */
> +
> +#ifdef HAVE_SETPGID
> +PyDoc_STRVAR(edk2_setpgid__doc__,
> +"setpgid(pid, pgrp)\n\n\
> +Call the system call setpgid().");
> +
> +static PyObject *
> +edk2_setpgid(PyObject *self, PyObject *args)
> +{
> + pid_t pid;
> + int pgrp;
> + if (!PyArg_ParseTuple(args, PARSE_PID "i:setpgid", &pid, &pgrp))
> + return NULL;
> + if (setpgid(pid, pgrp) < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_SETPGID */
> +
> +
> +#ifdef HAVE_TCGETPGRP
> +PyDoc_STRVAR(edk2_tcgetpgrp__doc__,
> +"tcgetpgrp(fd) -> pgid\n\n\
> +Return the process group associated with the terminal given by a fd.");
> +
> +static PyObject *
> +edk2_tcgetpgrp(PyObject *self, PyObject *args)
> +{
> + int fd;
> + pid_t pgid;
> + if (!PyArg_ParseTuple(args, "i:tcgetpgrp", &fd))
> + return NULL;
> + pgid = tcgetpgrp(fd);
> + if (pgid < 0)
> + return edk2_error();
> + return PyLong_FromPid(pgid);
> +}
> +#endif /* HAVE_TCGETPGRP */
> +
> +
> +#ifdef HAVE_TCSETPGRP
> +PyDoc_STRVAR(edk2_tcsetpgrp__doc__,
> +"tcsetpgrp(fd, pgid)\n\n\
> +Set the process group associated with the terminal given by a fd.");
> +
> +static PyObject *
> +edk2_tcsetpgrp(PyObject *self, PyObject *args)
> +{
> + int fd;
> + pid_t pgid;
> + if (!PyArg_ParseTuple(args, "i" PARSE_PID ":tcsetpgrp", &fd, &pgid))
> + return NULL;
> + if (tcsetpgrp(fd, pgid) < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* HAVE_TCSETPGRP */
> +
> +/* Functions acting on file descriptors */
> +
> +PyDoc_STRVAR(edk2_open__doc__,
> +"open(filename, flag [, mode=0777]) -> fd\n\n\
> +Open a file (for low level IO).");
> +
> +static PyObject *
> +edk2_open(PyObject *self, PyObject *args)
> +{
> + char *file = NULL;
> + int flag;
> + int mode = 0777;
> + int fd;
> +
> + if (!PyArg_ParseTuple(args, "eti|i",
> + Py_FileSystemDefaultEncoding, &file,
> + &flag, &mode))
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + fd = open(file, flag, mode);
> + Py_END_ALLOW_THREADS
> + if (fd < 0)
> + return edk2_error_with_allocated_filename(file);
> + PyMem_Free(file);
> + return PyLong_FromLong((long)fd);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_close__doc__,
> +"close(fd)\n\n\
> +Close a file descriptor (for low level IO).");
> +
> +static PyObject *
> +edk2_close(PyObject *self, PyObject *args)
> +{
> + int fd, res;
> + if (!PyArg_ParseTuple(args, "i:close", &fd))
> + return NULL;
> + if (!_PyVerify_fd(fd))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + res = close(fd);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +
> +PyDoc_STRVAR(edk2_closerange__doc__,
> +"closerange(fd_low, fd_high)\n\n\
> +Closes all file descriptors in [fd_low, fd_high), ignoring errors.");
> +
> +static PyObject *
> +edk2_closerange(PyObject *self, PyObject *args)
> +{
> + int fd_from, fd_to, i;
> + if (!PyArg_ParseTuple(args, "ii:closerange", &fd_from, &fd_to))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + for (i = fd_from; i < fd_to; i++)
> + if (_PyVerify_fd(i))
> + close(i);
> + Py_END_ALLOW_THREADS
> + Py_RETURN_NONE;
> +}
> +
> +
> +PyDoc_STRVAR(edk2_dup__doc__,
> +"dup(fd) -> fd2\n\n\
> +Return a duplicate of a file descriptor.");
> +
> +static PyObject *
> +edk2_dup(PyObject *self, PyObject *args)
> +{
> + int fd;
> + if (!PyArg_ParseTuple(args, "i:dup", &fd))
> + return NULL;
> + if (!_PyVerify_fd(fd))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + fd = dup(fd);
> + Py_END_ALLOW_THREADS
> + if (fd < 0)
> + return edk2_error();
> + return PyLong_FromLong((long)fd);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_dup2__doc__,
> +"dup2(old_fd, new_fd)\n\n\
> +Duplicate file descriptor.");
> +
> +static PyObject *
> +edk2_dup2(PyObject *self, PyObject *args)
> +{
> + int fd, fd2, res;
> + if (!PyArg_ParseTuple(args, "ii:dup2", &fd, &fd2))
> + return NULL;
> + if (!_PyVerify_fd_dup2(fd, fd2))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + res = dup2(fd, fd2);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +
> +PyDoc_STRVAR(edk2_lseek__doc__,
> +"lseek(fd, pos, how) -> newpos\n\n\
> +Set the current position of a file descriptor.");
> +
> +static PyObject *
> +edk2_lseek(PyObject *self, PyObject *args)
> +{
> + int fd, how;
> + off_t pos, res;
> + PyObject *posobj;
> + if (!PyArg_ParseTuple(args, "iOi:lseek", &fd, &posobj, &how))
> + return NULL;
> +#ifdef SEEK_SET
> + /* Turn 0, 1, 2 into SEEK_{SET,CUR,END} */
> + switch (how) {
> + case 0: how = SEEK_SET; break;
> + case 1: how = SEEK_CUR; break;
> + case 2: how = SEEK_END; break;
> + }
> +#endif /* SEEK_SET */
> +
> +#if !defined(HAVE_LARGEFILE_SUPPORT)
> + pos = PyLong_AsLong(posobj);
> +#else
> + pos = PyLong_Check(posobj) ?
> + PyLong_AsLongLong(posobj) : PyLong_AsLong(posobj);
> +#endif
> + if (PyErr_Occurred())
> + return NULL;
> +
> + if (!_PyVerify_fd(fd))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + res = lseek(fd, pos, how);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> +
> +#if !defined(HAVE_LARGEFILE_SUPPORT)
> + return PyLong_FromLong(res);
> +#else
> + return PyLong_FromLongLong(res);
> +#endif
> +}
> +
> +
> +PyDoc_STRVAR(edk2_read__doc__,
> +"read(fd, buffersize) -> string\n\n\
> +Read a file descriptor.");
> +
> +static PyObject *
> +edk2_read(PyObject *self, PyObject *args)
> +{
> + int fd, size, n;
> + PyObject *buffer;
> + if (!PyArg_ParseTuple(args, "ii:read", &fd, &size))
> + return NULL;
> + if (size < 0) {
> + errno = EINVAL;
> + return edk2_error();
> + }
> + buffer = PyBytes_FromStringAndSize((char *)NULL, size);
> + if (buffer == NULL)
> + return NULL;
> + if (!_PyVerify_fd(fd)) {
> + Py_DECREF(buffer);
> + return edk2_error();
> + }
> + Py_BEGIN_ALLOW_THREADS
> + n = read(fd, PyBytes_AS_STRING(buffer), size);
> + Py_END_ALLOW_THREADS
> + if (n < 0) {
> + Py_DECREF(buffer);
> + return edk2_error();
> + }
> + if (n != size)
> + _PyBytes_Resize(&buffer, n);
> + return buffer;
> +}
> +
> +
> +PyDoc_STRVAR(edk2_write__doc__,
> +"write(fd, string) -> byteswritten\n\n\
> +Write a string to a file descriptor.");
> +
> +static PyObject *
> +edk2_write(PyObject *self, PyObject *args)
> +{
> + Py_buffer pbuf;
> + int fd;
> + Py_ssize_t size;
> +
> + if (!PyArg_ParseTuple(args, "is*:write", &fd, &pbuf))
> + return NULL;
> + if (!_PyVerify_fd(fd)) {
> + PyBuffer_Release(&pbuf);
> + return edk2_error();
> + }
> + Py_BEGIN_ALLOW_THREADS
> + size = write(fd, pbuf.buf, (size_t)pbuf.len);
> + Py_END_ALLOW_THREADS
> + PyBuffer_Release(&pbuf);
> + if (size < 0)
> + return edk2_error();
> + return PyLong_FromSsize_t(size);
> +}
> +
> +
> +PyDoc_STRVAR(edk2_fstat__doc__,
> +"fstat(fd) -> stat result\n\n\
> +Like stat(), but for an open file descriptor.");
> +
> +static PyObject *
> +edk2_fstat(PyObject *self, PyObject *args)
> +{
> + int fd;
> + STRUCT_STAT st;
> + int res;
> + if (!PyArg_ParseTuple(args, "i:fstat", &fd))
> + return NULL;
> + if (!_PyVerify_fd(fd))
> + return edk2_error();
> + Py_BEGIN_ALLOW_THREADS
> + res = FSTAT(fd, &st);
> + Py_END_ALLOW_THREADS
> + if (res != 0) {
> + return edk2_error();
> + }
> +
> + return _pystat_fromstructstat(&st);
> +}
> +
> +/* check for known incorrect mode strings - problem is, platforms are
> + free to accept any mode characters they like and are supposed to
> + ignore stuff they don't understand... write or append mode with
> + universal newline support is expressly forbidden by PEP 278.
> + Additionally, remove the 'U' from the mode string as platforms
> + won't know what it is. Non-zero return signals an exception */
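> +/* For example: "U" and "rU" are rewritten to "rb", while "wU" and "aU"
> + * fail with ValueError because universal newlines require read mode. */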
> +int
> +_PyFile_SanitizeMode(char *mode)
> +{
> + char *upos;
> + size_t len = strlen(mode);
> +
> + if (!len) {
> + PyErr_SetString(PyExc_ValueError, "empty mode string");
> + return -1;
> + }
> +
> + upos = strchr(mode, 'U');
> + if (upos) {
> + memmove(upos, upos+1, len-(upos-mode)); /* incl null char */
> +
> + if (mode[0] == 'w' || mode[0] == 'a') {
> + PyErr_Format(PyExc_ValueError, "universal newline "
> + "mode can only be used with modes "
> + "starting with 'r'");
> + return -1;
> + }
> +
> + if (mode[0] != 'r') {
> + memmove(mode+1, mode, strlen(mode)+1);
> + mode[0] = 'r';
> + }
> +
> + if (!strchr(mode, 'b')) {
> + memmove(mode+2, mode+1, strlen(mode));
> + mode[1] = 'b';
> + }
> + } else if (mode[0] != 'r' && mode[0] != 'w' && mode[0] != 'a') {
> + PyErr_Format(PyExc_ValueError, "mode string must begin with "
> + "one of 'r', 'w', 'a' or 'U', not '%.200s'", mode);
> + return -1;
> + }
> +#ifdef Py_VERIFY_WINNT
> + /* additional checks on NT with visual studio 2005 and higher */
> + if (!_PyVerify_Mode_WINNT(mode)) {
> + PyErr_Format(PyExc_ValueError, "Invalid mode ('%.50s')", mode);
> + return -1;
> + }
> +#endif
> + return 0;
> +}
> +
> +
> +PyDoc_STRVAR(edk2_fdopen__doc__,
> +"fdopen(fd [, mode='r' [, bufsize]]) -> file_object\n\n\
> +Return an open file object connected to a file descriptor.");
> +
> +static PyObject *
> +edk2_fdopen(PyObject *self, PyObject *args)
> +{
> + int fd;
> + char *orgmode = "r";
> + int bufsize = -1;
> + FILE *fp;
> + PyObject *f = NULL;
> + char *mode;
> + if (!PyArg_ParseTuple(args, "i|si", &fd, &orgmode, &bufsize))
> + return NULL;
> +
> + /* Sanitize mode. See fileobject.c */
> + mode = PyMem_MALLOC(strlen(orgmode)+3);
> + if (!mode) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + strcpy(mode, orgmode);
> + if (_PyFile_SanitizeMode(mode)) {
> + PyMem_FREE(mode);
> + return NULL;
> + }
> + if (!_PyVerify_fd(fd)) {
> + PyMem_FREE(mode);
> + return edk2_error();
> + }
> + Py_BEGIN_ALLOW_THREADS
> +#if defined(HAVE_FCNTL_H)
> + if (mode[0] == 'a') {
> + /* try to make sure the O_APPEND flag is set */
> + int flags;
> + flags = fcntl(fd, F_GETFL);
> + if (flags != -1)
> + fcntl(fd, F_SETFL, flags | O_APPEND);
> + fp = fdopen(fd, mode);
> + if (fp == NULL && flags != -1)
> + /* restore old mode if fdopen failed */
> + fcntl(fd, F_SETFL, flags);
> + } else {
> + fp = fdopen(fd, mode);
> + }
> +#else
> + fp = fdopen(fd, mode);
> +#endif
> + Py_END_ALLOW_THREADS
> + PyMem_FREE(mode);
> + if (fp == NULL)
> + return edk2_error();
> +// TODO: Commented out for UEFI because PyFile_FromFile does not compile
> +// here; its omission has no impact on the edk2 module functionality.
> +// f = PyFile_FromFile(fp, "<fdopen>", orgmode, fclose);
> +// if (f != NULL)
> +// PyFile_SetBufSize(f, bufsize);
> + return f;
> +}
> +
> +PyDoc_STRVAR(edk2_isatty__doc__,
> +"isatty(fd) -> bool\n\n\
> +Return True if the file descriptor 'fd' is an open file descriptor\n\
> +connected to the slave end of a terminal.");
> +
> +static PyObject *
> +edk2_isatty(PyObject *self, PyObject *args)
> +{
> + int fd;
> + if (!PyArg_ParseTuple(args, "i:isatty", &fd))
> + return NULL;
> + if (!_PyVerify_fd(fd))
> + return PyBool_FromLong(0);
> + return PyBool_FromLong(isatty(fd));
> +}
> +
> +#ifdef HAVE_PIPE
> +PyDoc_STRVAR(edk2_pipe__doc__,
> +"pipe() -> (read_end, write_end)\n\n\
> +Create a pipe.");
> +
> +static PyObject *
> +edk2_pipe(PyObject *self, PyObject *noargs)
> +{
> + int fds[2];
> + int res;
> + Py_BEGIN_ALLOW_THREADS
> + res = pipe(fds);
> + Py_END_ALLOW_THREADS
> + if (res != 0)
> + return edk2_error();
> + return Py_BuildValue("(ii)", fds[0], fds[1]);
> +}
> +#endif /* HAVE_PIPE */
> +
> +
> +#ifdef HAVE_MKFIFO
> +PyDoc_STRVAR(edk2_mkfifo__doc__,
> +"mkfifo(filename [, mode=0666])\n\n\
> +Create a FIFO (a POSIX named pipe).");
> +
> +static PyObject *
> +edk2_mkfifo(PyObject *self, PyObject *args)
> +{
> + char *filename;
> + int mode = 0666;
> + int res;
> + if (!PyArg_ParseTuple(args, "s|i:mkfifo", &filename, &mode))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = mkfifo(filename, mode);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif
> +
> +
> +#if defined(HAVE_MKNOD) && defined(HAVE_MAKEDEV)
> +PyDoc_STRVAR(edk2_mknod__doc__,
> +"mknod(filename [, mode=0600, device])\n\n\
> +Create a filesystem node (file, device special file or named pipe)\n\
> +named filename. mode specifies both the permissions to use and the\n\
> +type of node to be created, being combined (bitwise OR) with one of\n\
> +S_IFREG, S_IFCHR, S_IFBLK, and S_IFIFO. For S_IFCHR and S_IFBLK,\n\
> +device defines the newly created device special file (probably using\n\
> +os.makedev()), otherwise it is ignored.");
> +
> +
> +static PyObject *
> +edk2_mknod(PyObject *self, PyObject *args)
> +{
> + char *filename;
> + int mode = 0600;
> + int device = 0;
> + int res;
> + if (!PyArg_ParseTuple(args, "s|ii:mknod", &filename, &mode, &device))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = mknod(filename, mode, device);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif
> +
> +#ifdef HAVE_DEVICE_MACROS
> +PyDoc_STRVAR(edk2_major__doc__,
> +"major(device) -> major number\n\
> +Extracts a device major number from a raw device number.");
> +
> +static PyObject *
> +edk2_major(PyObject *self, PyObject *args)
> +{
> + int device;
> + if (!PyArg_ParseTuple(args, "i:major", &device))
> + return NULL;
> + return PyLong_FromLong((long)major(device));
> +}
> +
> +PyDoc_STRVAR(edk2_minor__doc__,
> +"minor(device) -> minor number\n\
> +Extracts a device minor number from a raw device number.");
> +
> +static PyObject *
> +edk2_minor(PyObject *self, PyObject *args)
> +{
> + int device;
> + if (!PyArg_ParseTuple(args, "i:minor", &device))
> + return NULL;
> + return PyLong_FromLong((long)minor(device));
> +}
> +
> +PyDoc_STRVAR(edk2_makedev__doc__,
> +"makedev(major, minor) -> device number\n\
> +Composes a raw device number from the major and minor device numbers.");
> +
> +static PyObject *
> +edk2_makedev(PyObject *self, PyObject *args)
> +{
> + int major, minor;
> + if (!PyArg_ParseTuple(args, "ii:makedev", &major, &minor))
> + return NULL;
> + return PyLong_FromLong((long)makedev(major, minor));
> +}
> +#endif /* device macros */
> +
> +
> +#ifdef HAVE_FTRUNCATE
> +PyDoc_STRVAR(edk2_ftruncate__doc__,
> +"ftruncate(fd, length)\n\n\
> +Truncate a file to a specified length.");
> +
> +static PyObject *
> +edk2_ftruncate(PyObject *self, PyObject *args)
> +{
> + int fd;
> + off_t length;
> + int res;
> + PyObject *lenobj;
> +
> + if (!PyArg_ParseTuple(args, "iO:ftruncate", &fd, &lenobj))
> + return NULL;
> +
> +#if !defined(HAVE_LARGEFILE_SUPPORT)
> + length = PyLong_AsLong(lenobj);
> +#else
> + length = PyLong_Check(lenobj) ?
> + PyLong_AsLongLong(lenobj) : PyLong_AsLong(lenobj);
> +#endif
> + if (PyErr_Occurred())
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + res = ftruncate(fd, length);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return edk2_error();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif
> +
> +#ifdef HAVE_PUTENV
> +PyDoc_STRVAR(edk2_putenv__doc__,
> +"putenv(key, value)\n\n\
> +Change or add an environment variable.");
> +
> +/* Save putenv() parameters as values here, so we can collect them when they
> + * get re-set with another call for the same key. */
> +static PyObject *edk2_putenv_garbage;
> +
> +static PyObject *
> +edk2_putenv(PyObject *self, PyObject *args)
> +{
> + char *s1, *s2;
> + char *newenv;
> + PyObject *newstr;
> + size_t len;
> +
> + if (!PyArg_ParseTuple(args, "ss:putenv", &s1, &s2))
> + return NULL;
> +
> + /* XXX This can leak memory -- not easy to fix :-( */
> + len = strlen(s1) + strlen(s2) + 2;
> + /* len includes space for a trailing \0; the size arg to
> + PyBytes_FromStringAndSize does not count that */
> + newstr = PyBytes_FromStringAndSize(NULL, (int)len - 1);
> + if (newstr == NULL)
> + return PyErr_NoMemory();
> + newenv = PyBytes_AS_STRING(newstr);
> + PyOS_snprintf(newenv, len, "%s=%s", s1, s2);
> + if (putenv(newenv)) {
> + Py_DECREF(newstr);
> + edk2_error();
> + return NULL;
> + }
> + /* Install the first arg and newstr in edk2_putenv_garbage;
> + * this will cause previous value to be collected. This has to
> + * happen after the real putenv() call because the old value
> + * was still accessible until then. */
> + if (PyDict_SetItem(edk2_putenv_garbage,
> + PyTuple_GET_ITEM(args, 0), newstr)) {
> + /* really not much we can do; just leak */
> + PyErr_Clear();
> + }
> + else {
> + Py_DECREF(newstr);
> + }
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* putenv */
> +
> +#ifdef HAVE_UNSETENV
> +PyDoc_STRVAR(edk2_unsetenv__doc__,
> +"unsetenv(key)\n\n\
> +Delete an environment variable.");
> +
> +static PyObject *
> +edk2_unsetenv(PyObject *self, PyObject *args)
> +{
> + char *s1;
> +
> + if (!PyArg_ParseTuple(args, "s:unsetenv", &s1))
> + return NULL;
> +
> + unsetenv(s1);
> +
> + /* Remove the key from edk2_putenv_garbage;
> + * this will cause it to be collected. This has to
> + * happen after the real unsetenv() call because the
> + * old value was still accessible until then.
> + */
> + if (PyDict_DelItem(edk2_putenv_garbage,
> + PyTuple_GET_ITEM(args, 0))) {
> + /* really not much we can do; just leak */
> + PyErr_Clear();
> + }
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +#endif /* unsetenv */
> +
> +PyDoc_STRVAR(edk2_strerror__doc__,
> +"strerror(code) -> string\n\n\
> +Translate an error code to a message string.");
> +
> +static PyObject *
> +edk2_strerror(PyObject *self, PyObject *args)
> +{
> + int code;
> + char *message;
> + if (!PyArg_ParseTuple(args, "i:strerror", &code))
> + return NULL;
> + message = strerror(code);
> + if (message == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "strerror() argument out of range");
> + return NULL;
> + }
> + return PyUnicode_FromString(message);
> +}
> +
> +
> +#ifdef HAVE_SYS_WAIT_H
> +
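> +/* Thin wrappers around the <sys/wait.h> status-inspection macros. */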
> +#ifdef WCOREDUMP
> +PyDoc_STRVAR(edk2_WCOREDUMP__doc__,
> +"WCOREDUMP(status) -> bool\n\n\
> +Return True if the process returning 'status' was dumped to a core file.");
> +
> +static PyObject *
> +edk2_WCOREDUMP(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WCOREDUMP", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return PyBool_FromLong(WCOREDUMP(status));
> +}
> +#endif /* WCOREDUMP */
> +
> +#ifdef WIFCONTINUED
> +PyDoc_STRVAR(edk2_WIFCONTINUED__doc__,
> +"WIFCONTINUED(status) -> bool\n\n\
> +Return True if the process returning 'status' was continued from a\n\
> +job control stop.");
> +
> +static PyObject *
> +edk2_WIFCONTINUED(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WCONTINUED", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return PyBool_FromLong(WIFCONTINUED(status));
> +}
> +#endif /* WIFCONTINUED */
> +
> +#ifdef WIFSTOPPED
> +PyDoc_STRVAR(edk2_WIFSTOPPED__doc__,
> +"WIFSTOPPED(status) -> bool\n\n\
> +Return True if the process returning 'status' was stopped.");
> +
> +static PyObject *
> +edk2_WIFSTOPPED(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WIFSTOPPED", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return PyBool_FromLong(WIFSTOPPED(status));
> +}
> +#endif /* WIFSTOPPED */
> +
> +#ifdef WIFSIGNALED
> +PyDoc_STRVAR(edk2_WIFSIGNALED__doc__,
> +"WIFSIGNALED(status) -> bool\n\n\
> +Return True if the process returning 'status' was terminated by a signal.");
> +
> +static PyObject *
> +edk2_WIFSIGNALED(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WIFSIGNALED", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return PyBool_FromLong(WIFSIGNALED(status));
> +}
> +#endif /* WIFSIGNALED */
> +
> +#ifdef WIFEXITED
> +PyDoc_STRVAR(edk2_WIFEXITED__doc__,
> +"WIFEXITED(status) -> bool\n\n\
> +Return true if the process returning 'status' exited using the exit()\n\
> +system call.");
> +
> +static PyObject *
> +edk2_WIFEXITED(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WIFEXITED", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return PyBool_FromLong(WIFEXITED(status));
> +}
> +#endif /* WIFEXITED */
> +
> +#ifdef WEXITSTATUS
> +PyDoc_STRVAR(edk2_WEXITSTATUS__doc__,
> +"WEXITSTATUS(status) -> integer\n\n\
> +Return the process return code from 'status'.");
> +
> +static PyObject *
> +edk2_WEXITSTATUS(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WEXITSTATUS", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return Py_BuildValue("i", WEXITSTATUS(status));
> +}
> +#endif /* WEXITSTATUS */
> +
> +#ifdef WTERMSIG
> +PyDoc_STRVAR(edk2_WTERMSIG__doc__,
> +"WTERMSIG(status) -> integer\n\n\
> +Return the signal that terminated the process that provided the 'status'\n\
> +value.");
> +
> +static PyObject *
> +edk2_WTERMSIG(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WTERMSIG", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return Py_BuildValue("i", WTERMSIG(status));
> +}
> +#endif /* WTERMSIG */
> +
> +#ifdef WSTOPSIG
> +PyDoc_STRVAR(edk2_WSTOPSIG__doc__,
> +"WSTOPSIG(status) -> integer\n\n\
> +Return the signal that stopped the process that provided\n\
> +the 'status' value.");
> +
> +static PyObject *
> +edk2_WSTOPSIG(PyObject *self, PyObject *args)
> +{
> + WAIT_TYPE status;
> + WAIT_STATUS_INT(status) = 0;
> +
> + if (!PyArg_ParseTuple(args, "i:WSTOPSIG", &WAIT_STATUS_INT(status)))
> + return NULL;
> +
> + return Py_BuildValue("i", WSTOPSIG(status));
> +}
> +#endif /* WSTOPSIG */
> +
> +#endif /* HAVE_SYS_WAIT_H */
> +
> +
> +#if defined(HAVE_FSTATVFS) && defined(HAVE_SYS_STATVFS_H)
> +#include <sys/statvfs.h>
> +
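> +/* Convert a struct statvfs into a statvfs_result structure sequence. */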
> +static PyObject*
> +_pystatvfs_fromstructstatvfs(struct statvfs st) {
> + PyObject *v = PyStructSequence_New(&StatVFSResultType);
> + if (v == NULL)
> + return NULL;
> +
> +#if !defined(HAVE_LARGEFILE_SUPPORT)
> + PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long) st.f_bsize));
> + PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong((long) st.f_frsize));
> + PyStructSequence_SET_ITEM(v, 2, PyLong_FromLong((long) st.f_blocks));
> + PyStructSequence_SET_ITEM(v, 3, PyLong_FromLong((long) st.f_bfree));
> + PyStructSequence_SET_ITEM(v, 4, PyLong_FromLong((long) st.f_bavail));
> + PyStructSequence_SET_ITEM(v, 5, PyLong_FromLong((long) st.f_files));
> + PyStructSequence_SET_ITEM(v, 6, PyLong_FromLong((long) st.f_ffree));
> + PyStructSequence_SET_ITEM(v, 7, PyLong_FromLong((long) st.f_favail));
> + PyStructSequence_SET_ITEM(v, 8, PyLong_FromLong((long) st.f_flag));
> + PyStructSequence_SET_ITEM(v, 9, PyLong_FromLong((long) st.f_namemax));
> +#else
> + PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong((long) st.f_bsize));
> + PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong((long) st.f_frsize));
> + PyStructSequence_SET_ITEM(v, 2,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_blocks));
> + PyStructSequence_SET_ITEM(v, 3,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_bfree));
> + PyStructSequence_SET_ITEM(v, 4,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_bavail));
> + PyStructSequence_SET_ITEM(v, 5,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_files));
> + PyStructSequence_SET_ITEM(v, 6,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_ffree));
> + PyStructSequence_SET_ITEM(v, 7,
> + PyLong_FromLongLong((PY_LONG_LONG) st.f_favail));
> + PyStructSequence_SET_ITEM(v, 8, PyLong_FromLong((long) st.f_flag));
> + PyStructSequence_SET_ITEM(v, 9, PyLong_FromLong((long) st.f_namemax));
> +#endif
> +
> + return v;
> +}
> +
> +PyDoc_STRVAR(edk2_fstatvfs__doc__,
> +"fstatvfs(fd) -> statvfs result\n\n\
> +Perform an fstatvfs system call on the given fd.");
> +
> +static PyObject *
> +edk2_fstatvfs(PyObject *self, PyObject *args)
> +{
> + int fd, res;
> + struct statvfs st;
> +
> + if (!PyArg_ParseTuple(args, "i:fstatvfs", &fd))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = fstatvfs(fd, &st);
> + Py_END_ALLOW_THREADS
> + if (res != 0)
> + return edk2_error();
> +
> + return _pystatvfs_fromstructstatvfs(st);
> +}
> +#endif /* HAVE_FSTATVFS && HAVE_SYS_STATVFS_H */
> +
> +
> +#if defined(HAVE_STATVFS) && defined(HAVE_SYS_STATVFS_H)
> +#include <sys/statvfs.h>
> +
> +PyDoc_STRVAR(edk2_statvfs__doc__,
> +"statvfs(path) -> statvfs result\n\n\
> +Perform a statvfs system call on the given path.");
> +
> +static PyObject *
> +edk2_statvfs(PyObject *self, PyObject *args)
> +{
> + char *path;
> + int res;
> + struct statvfs st;
> + if (!PyArg_ParseTuple(args, "s:statvfs", &path))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = statvfs(path, &st);
> + Py_END_ALLOW_THREADS
> + if (res != 0)
> + return edk2_error_with_filename(path);
> +
> + return _pystatvfs_fromstructstatvfs(st);
> +}
> +#endif /* HAVE_STATVFS */
> +
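> +/* Return the file system representation of path: str and bytes objects are
> + * returned unchanged (new reference); for other objects the result of their
> + * __fspath__() method is returned, or TypeError is raised. */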
> +PyObject *
> +PyOS_FSPath(PyObject *path)
> +{
> + /* For error message reasons, this function is manually inlined in
> + path_converter(). */
> + _Py_IDENTIFIER(__fspath__);
> + PyObject *func = NULL;
> + PyObject *path_repr = NULL;
> +
> + if (PyUnicode_Check(path) || PyBytes_Check(path)) {
> + Py_INCREF(path);
> + return path;
> + }
> +
> + func = _PyObject_LookupSpecial(path, &PyId___fspath__);
> + if (NULL == func) {
> + return PyErr_Format(PyExc_TypeError,
> + "expected str, bytes or os.PathLike object, "
> + "not %.200s",
> + Py_TYPE(path)->tp_name);
> + }
> +
> + path_repr = PyObject_CallFunctionObjArgs(func, NULL);
> + Py_DECREF(func);
> + if (NULL == path_repr) {
> + return NULL;
> + }
> +
> + if (!(PyUnicode_Check(path_repr) || PyBytes_Check(path_repr))) {
> + PyErr_Format(PyExc_TypeError,
> + "expected %.200s.__fspath__() to return str or bytes, "
> + "not %.200s", Py_TYPE(path)->tp_name,
> + Py_TYPE(path_repr)->tp_name);
> + Py_DECREF(path_repr);
> + return NULL;
> + }
> +
> + return path_repr;
> +}
> +
> +#if !defined(UEFI_C_SOURCE) // not supported in 3.x
> +#ifdef HAVE_TEMPNAM
> +PyDoc_STRVAR(edk2_tempnam__doc__,
> +"tempnam([dir[, prefix]]) -> string\n\n\
> +Return a unique name for a temporary file.\n\
> +The directory and a prefix may be specified as strings; they may be omitted\n\
> +or None if not needed.");
> +
> +static PyObject *
> +edk2_tempnam(PyObject *self, PyObject *args)
> +{
> + PyObject *result = NULL;
> + char *dir = NULL;
> + char *pfx = NULL;
> + char *name;
> +
> + if (!PyArg_ParseTuple(args, "|zz:tempnam", &dir, &pfx))
> + return NULL;
> +
> + if (PyErr_Warn(PyExc_RuntimeWarning,
> + "tempnam is a potential security risk to your program") < 0)
> + return NULL;
> +
> + if (PyErr_WarnPy3k("tempnam has been removed in 3.x; "
> + "use the tempfile module", 1) < 0)
> + return NULL;
> +
> + name = tempnam(dir, pfx);
> + if (name == NULL)
> + return PyErr_NoMemory();
> + result = PyUnicode_FromString(name);
> + free(name);
> + return result;
> +}
> +#endif
> +
> +
> +#ifdef HAVE_TMPFILE
> +PyDoc_STRVAR(edk2_tmpfile__doc__,
> +"tmpfile() -> file object\n\n\
> +Create a temporary file with no directory entries.");
> +
> +static PyObject *
> +edk2_tmpfile(PyObject *self, PyObject *noargs)
> +{
> + FILE *fp;
> +
> + if (PyErr_WarnPy3k("tmpfile has been removed in 3.x; "
> + "use the tempfile module", 1) < 0)
> + return NULL;
> +
> + fp = tmpfile();
> + if (fp == NULL)
> + return edk2_error();
> + return PyFile_FromFile(fp, "<tmpfile>", "w+b", fclose);
> +}
> +#endif
> +
> +
> +#ifdef HAVE_TMPNAM
> +PyDoc_STRVAR(edk2_tmpnam__doc__,
> +"tmpnam() -> string\n\n\
> +Return a unique name for a temporary file.");
> +
> +static PyObject *
> +edk2_tmpnam(PyObject *self, PyObject *noargs)
> +{
> + char buffer[L_tmpnam];
> + char *name;
> +
> + if (PyErr_Warn(PyExc_RuntimeWarning,
> + "tmpnam is a potential security risk to your program") < 0)
> + return NULL;
> +
> + if (PyErr_WarnPy3k("tmpnam has been removed in 3.x; "
> + "use the tempfile module", 1) < 0)
> + return NULL;
> +
> +#ifdef USE_TMPNAM_R
> + name = tmpnam_r(buffer);
> +#else
> + name = tmpnam(buffer);
> +#endif
> + if (name == NULL) {
> + PyObject *err = Py_BuildValue("is", 0,
> +#ifdef USE_TMPNAM_R
> + "unexpected NULL from tmpnam_r"
> +#else
> + "unexpected NULL from tmpnam"
> +#endif
> + );
> + PyErr_SetObject(PyExc_OSError, err);
> + Py_XDECREF(err);
> + return NULL;
> + }
> + return PyUnicode_FromString(buffer);
> +}
> +#endif
> +#endif
> +
> +PyDoc_STRVAR(edk2_abort__doc__,
> +"abort() -> does not return!\n\n\
> +Abort the interpreter immediately. This 'dumps core' or otherwise fails\n\
> +in the hardest way possible on the hosting operating system.");
> +
> +static PyObject *
> +edk2_abort(PyObject *self, PyObject *noargs)
> +{
> + abort();
> + /*NOTREACHED*/
> + Py_FatalError("abort() called from Python code didn't abort!");
> + return NULL;
> +}
> +
> +static PyMethodDef edk2_methods[] = {
> + {"access", edk2_access, METH_VARARGS, edk2_access__doc__},
> +#ifdef HAVE_TTYNAME
> + {"ttyname", edk2_ttyname, METH_VARARGS, edk2_ttyname__doc__},
> +#endif
> + {"chdir", edk2_chdir, METH_VARARGS, edk2_chdir__doc__},
> +#ifdef HAVE_CHFLAGS
> + {"chflags", edk2_chflags, METH_VARARGS, edk2_chflags__doc__},
> +#endif /* HAVE_CHFLAGS */
> + {"chmod", edk2_chmod, METH_VARARGS, edk2_chmod__doc__},
> +#ifdef HAVE_FCHMOD
> + {"fchmod", edk2_fchmod, METH_VARARGS, edk2_fchmod__doc__},
> +#endif /* HAVE_FCHMOD */
> +#ifdef HAVE_CHOWN
> + {"chown", edk2_chown, METH_VARARGS, edk2_chown__doc__},
> +#endif /* HAVE_CHOWN */
> +#ifdef HAVE_LCHMOD
> + {"lchmod", edk2_lchmod, METH_VARARGS, edk2_lchmod__doc__},
> +#endif /* HAVE_LCHMOD */
> +#ifdef HAVE_FCHOWN
> + {"fchown", edk2_fchown, METH_VARARGS, edk2_fchown__doc__},
> +#endif /* HAVE_FCHOWN */
> +#ifdef HAVE_LCHFLAGS
> + {"lchflags", edk2_lchflags, METH_VARARGS, edk2_lchflags__doc__},
> +#endif /* HAVE_LCHFLAGS */
> +#ifdef HAVE_LCHOWN
> + {"lchown", edk2_lchown, METH_VARARGS, edk2_lchown__doc__},
> +#endif /* HAVE_LCHOWN */
> +#ifdef HAVE_CHROOT
> + {"chroot", edk2_chroot, METH_VARARGS, edk2_chroot__doc__},
> +#endif
> +#ifdef HAVE_CTERMID
> + {"ctermid", edk2_ctermid, METH_NOARGS, edk2_ctermid__doc__},
> +#endif
> +#ifdef HAVE_GETCWD
> + {"getcwd", edk2_getcwd, METH_NOARGS, edk2_getcwd__doc__},
> +#ifdef Py_USING_UNICODE
> + {"getcwdu", edk2_getcwdu, METH_NOARGS, edk2_getcwdu__doc__},
> +#endif
> +#endif
> +#ifdef HAVE_LINK
> + {"link", edk2_link, METH_VARARGS, edk2_link__doc__},
> +#endif /* HAVE_LINK */
> + {"listdir", edk2_listdir, METH_VARARGS, edk2_listdir__doc__},
> + {"lstat", edk2_lstat, METH_VARARGS, edk2_lstat__doc__},
> + {"mkdir", edk2_mkdir, METH_VARARGS, edk2_mkdir__doc__},
> +#ifdef HAVE_NICE
> + {"nice", edk2_nice, METH_VARARGS, edk2_nice__doc__},
> +#endif /* HAVE_NICE */
> +#ifdef HAVE_READLINK
> + {"readlink", edk2_readlink, METH_VARARGS, edk2_readlink__doc__},
> +#endif /* HAVE_READLINK */
> + {"rename", edk2_rename, METH_VARARGS, edk2_rename__doc__},
> + {"rmdir", edk2_rmdir, METH_VARARGS, edk2_rmdir__doc__},
> + {"stat", edk2_stat, METH_VARARGS, edk2_stat__doc__},
> + //{"stat_float_times", stat_float_times, METH_VARARGS, stat_float_times__doc__},
> +#ifdef HAVE_SYMLINK
> + {"symlink", edk2_symlink, METH_VARARGS, edk2_symlink__doc__},
> +#endif /* HAVE_SYMLINK */
> +#ifdef HAVE_SYSTEM
> + {"system", edk2_system, METH_VARARGS, edk2_system__doc__},
> +#endif
> + {"umask", edk2_umask, METH_VARARGS, edk2_umask__doc__},
> +#ifdef HAVE_UNAME
> + {"uname", edk2_uname, METH_NOARGS, edk2_uname__doc__},
> +#endif /* HAVE_UNAME */
> + {"unlink", edk2_unlink, METH_VARARGS, edk2_unlink__doc__},
> + {"remove", edk2_unlink, METH_VARARGS, edk2_remove__doc__},
> + {"utime", edk2_utime, METH_VARARGS, edk2_utime__doc__},
> +#ifdef HAVE_TIMES
> + {"times", edk2_times, METH_NOARGS, edk2_times__doc__},
> +#endif /* HAVE_TIMES */
> + {"_exit", edk2__exit, METH_VARARGS, edk2__exit__doc__},
> +#ifdef HAVE_EXECV
> + {"execv", edk2_execv, METH_VARARGS, edk2_execv__doc__},
> + {"execve", edk2_execve, METH_VARARGS, edk2_execve__doc__},
> +#endif /* HAVE_EXECV */
> +#ifdef HAVE_SPAWNV
> + {"spawnv", edk2_spawnv, METH_VARARGS, edk2_spawnv__doc__},
> + {"spawnve", edk2_spawnve, METH_VARARGS, edk2_spawnve__doc__},
> +#if defined(PYOS_OS2)
> + {"spawnvp", edk2_spawnvp, METH_VARARGS, edk2_spawnvp__doc__},
> + {"spawnvpe", edk2_spawnvpe, METH_VARARGS, edk2_spawnvpe__doc__},
> +#endif /* PYOS_OS2 */
> +#endif /* HAVE_SPAWNV */
> +#ifdef HAVE_FORK1
> + {"fork1", edk2_fork1, METH_NOARGS, edk2_fork1__doc__},
> +#endif /* HAVE_FORK1 */
> +#ifdef HAVE_FORK
> + {"fork", edk2_fork, METH_NOARGS, edk2_fork__doc__},
> +#endif /* HAVE_FORK */
> +#if defined(HAVE_OPENPTY) || defined(HAVE__GETPTY) || defined(HAVE_DEV_PTMX)
> + {"openpty", edk2_openpty, METH_NOARGS, edk2_openpty__doc__},
> +#endif /* HAVE_OPENPTY || HAVE__GETPTY || HAVE_DEV_PTMX */
> +#ifdef HAVE_FORKPTY
> + {"forkpty", edk2_forkpty, METH_NOARGS, edk2_forkpty__doc__},
> +#endif /* HAVE_FORKPTY */
> + {"getpid", edk2_getpid, METH_NOARGS, edk2_getpid__doc__},
> +#ifdef HAVE_GETPGRP
> + {"getpgrp", edk2_getpgrp, METH_NOARGS, edk2_getpgrp__doc__},
> +#endif /* HAVE_GETPGRP */
> +#ifdef HAVE_GETPPID
> + {"getppid", edk2_getppid, METH_NOARGS, edk2_getppid__doc__},
> +#endif /* HAVE_GETPPID */
> +#ifdef HAVE_GETLOGIN
> + {"getlogin", edk2_getlogin, METH_NOARGS, edk2_getlogin__doc__},
> +#endif
> +#ifdef HAVE_KILL
> + {"kill", edk2_kill, METH_VARARGS, edk2_kill__doc__},
> +#endif /* HAVE_KILL */
> +#ifdef HAVE_KILLPG
> + {"killpg", edk2_killpg, METH_VARARGS, edk2_killpg__doc__},
> +#endif /* HAVE_KILLPG */
> +#ifdef HAVE_PLOCK
> + {"plock", edk2_plock, METH_VARARGS, edk2_plock__doc__},
> +#endif /* HAVE_PLOCK */
> +#ifdef HAVE_POPEN
> + {"popen", edk2_popen, METH_VARARGS, edk2_popen__doc__},
> +#endif /* HAVE_POPEN */
> +#ifdef HAVE_SETGROUPS
> + {"setgroups", edk2_setgroups, METH_O, edk2_setgroups__doc__},
> +#endif /* HAVE_SETGROUPS */
> +#ifdef HAVE_INITGROUPS
> + {"initgroups", edk2_initgroups, METH_VARARGS, edk2_initgroups__doc__},
> +#endif /* HAVE_INITGROUPS */
> +#ifdef HAVE_GETPGID
> + {"getpgid", edk2_getpgid, METH_VARARGS, edk2_getpgid__doc__},
> +#endif /* HAVE_GETPGID */
> +#ifdef HAVE_SETPGRP
> + {"setpgrp", edk2_setpgrp, METH_NOARGS, edk2_setpgrp__doc__},
> +#endif /* HAVE_SETPGRP */
> +#ifdef HAVE_WAIT
> + {"wait", edk2_wait, METH_NOARGS, edk2_wait__doc__},
> +#endif /* HAVE_WAIT */
> +#ifdef HAVE_WAIT3
> + {"wait3", edk2_wait3, METH_VARARGS, edk2_wait3__doc__},
> +#endif /* HAVE_WAIT3 */
> +#ifdef HAVE_WAIT4
> + {"wait4", edk2_wait4, METH_VARARGS, edk2_wait4__doc__},
> +#endif /* HAVE_WAIT4 */
> +#if defined(HAVE_WAITPID) || defined(HAVE_CWAIT)
> + {"waitpid", edk2_waitpid, METH_VARARGS, edk2_waitpid__doc__},
> +#endif /* HAVE_WAITPID */
> +#ifdef HAVE_GETSID
> + {"getsid", edk2_getsid, METH_VARARGS, edk2_getsid__doc__},
> +#endif /* HAVE_GETSID */
> +#ifdef HAVE_SETSID
> + {"setsid", edk2_setsid, METH_NOARGS, edk2_setsid__doc__},
> +#endif /* HAVE_SETSID */
> +#ifdef HAVE_SETPGID
> + {"setpgid", edk2_setpgid, METH_VARARGS, edk2_setpgid__doc__},
> +#endif /* HAVE_SETPGID */
> +#ifdef HAVE_TCGETPGRP
> + {"tcgetpgrp", edk2_tcgetpgrp, METH_VARARGS, edk2_tcgetpgrp__doc__},
> +#endif /* HAVE_TCGETPGRP */
> +#ifdef HAVE_TCSETPGRP
> + {"tcsetpgrp", edk2_tcsetpgrp, METH_VARARGS, edk2_tcsetpgrp__doc__},
> +#endif /* HAVE_TCSETPGRP */
> + {"open", edk2_open, METH_VARARGS, edk2_open__doc__},
> + {"close", edk2_close, METH_VARARGS, edk2_close__doc__},
> + {"closerange", edk2_closerange, METH_VARARGS, edk2_closerange__doc__},
> + {"dup", edk2_dup, METH_VARARGS, edk2_dup__doc__},
> + {"dup2", edk2_dup2, METH_VARARGS, edk2_dup2__doc__},
> + {"lseek", edk2_lseek, METH_VARARGS, edk2_lseek__doc__},
> + {"read", edk2_read, METH_VARARGS, edk2_read__doc__},
> + {"write", edk2_write, METH_VARARGS, edk2_write__doc__},
> + {"fstat", edk2_fstat, METH_VARARGS, edk2_fstat__doc__},
> + {"fdopen", edk2_fdopen, METH_VARARGS, edk2_fdopen__doc__},
> + {"isatty", edk2_isatty, METH_VARARGS, edk2_isatty__doc__},
> +#ifdef HAVE_PIPE
> + {"pipe", edk2_pipe, METH_NOARGS, edk2_pipe__doc__},
> +#endif
> +#ifdef HAVE_MKFIFO
> + {"mkfifo", edk2_mkfifo, METH_VARARGS, edk2_mkfifo__doc__},
> +#endif
> +#if defined(HAVE_MKNOD) && defined(HAVE_MAKEDEV)
> + {"mknod", edk2_mknod, METH_VARARGS, edk2_mknod__doc__},
> +#endif
> +#ifdef HAVE_DEVICE_MACROS
> + {"major", edk2_major, METH_VARARGS, edk2_major__doc__},
> + {"minor", edk2_minor, METH_VARARGS, edk2_minor__doc__},
> + {"makedev", edk2_makedev, METH_VARARGS, edk2_makedev__doc__},
> +#endif
> +#ifdef HAVE_FTRUNCATE
> + {"ftruncate", edk2_ftruncate, METH_VARARGS, edk2_ftruncate__doc__},
> +#endif
> +#ifdef HAVE_PUTENV
> + {"putenv", edk2_putenv, METH_VARARGS, edk2_putenv__doc__},
> +#endif
> +#ifdef HAVE_UNSETENV
> + {"unsetenv", edk2_unsetenv, METH_VARARGS, edk2_unsetenv__doc__},
> +#endif
> + {"strerror", edk2_strerror, METH_VARARGS, edk2_strerror__doc__},
> +#ifdef HAVE_FCHDIR
> + {"fchdir", edk2_fchdir, METH_O, edk2_fchdir__doc__},
> +#endif
> +#ifdef HAVE_FSYNC
> + {"fsync", edk2_fsync, METH_O, edk2_fsync__doc__},
> +#endif
> +#ifdef HAVE_FDATASYNC
> + {"fdatasync", edk2_fdatasync, METH_O, edk2_fdatasync__doc__},
> +#endif
> +#ifdef HAVE_SYS_WAIT_H
> +#ifdef WCOREDUMP
> + {"WCOREDUMP", edk2_WCOREDUMP, METH_VARARGS, edk2_WCOREDUMP__doc__},
> +#endif /* WCOREDUMP */
> +#ifdef WIFCONTINUED
> + {"WIFCONTINUED",edk2_WIFCONTINUED, METH_VARARGS, edk2_WIFCONTINUED__doc__},
> +#endif /* WIFCONTINUED */
> +#ifdef WIFSTOPPED
> + {"WIFSTOPPED", edk2_WIFSTOPPED, METH_VARARGS, edk2_WIFSTOPPED__doc__},
> +#endif /* WIFSTOPPED */
> +#ifdef WIFSIGNALED
> + {"WIFSIGNALED", edk2_WIFSIGNALED, METH_VARARGS, edk2_WIFSIGNALED__doc__},
> +#endif /* WIFSIGNALED */
> +#ifdef WIFEXITED
> + {"WIFEXITED", edk2_WIFEXITED, METH_VARARGS, edk2_WIFEXITED__doc__},
> +#endif /* WIFEXITED */
> +#ifdef WEXITSTATUS
> + {"WEXITSTATUS", edk2_WEXITSTATUS, METH_VARARGS, edk2_WEXITSTATUS__doc__},
> +#endif /* WEXITSTATUS */
> +#ifdef WTERMSIG
> + {"WTERMSIG", edk2_WTERMSIG, METH_VARARGS, edk2_WTERMSIG__doc__},
> +#endif /* WTERMSIG */
> +#ifdef WSTOPSIG
> + {"WSTOPSIG", edk2_WSTOPSIG, METH_VARARGS, edk2_WSTOPSIG__doc__},
> +#endif /* WSTOPSIG */
> +#endif /* HAVE_SYS_WAIT_H */
> +#if defined(HAVE_FSTATVFS) && defined(HAVE_SYS_STATVFS_H)
> + {"fstatvfs", edk2_fstatvfs, METH_VARARGS, edk2_fstatvfs__doc__},
> +#endif
> +#if defined(HAVE_STATVFS) && defined(HAVE_SYS_STATVFS_H)
> + {"statvfs", edk2_statvfs, METH_VARARGS, edk2_statvfs__doc__},
> +#endif
> +#if !defined(UEFI_C_SOURCE)
> +#ifdef HAVE_TMPFILE
> + {"tmpfile", edk2_tmpfile, METH_NOARGS, edk2_tmpfile__doc__},
> +#endif
> +#ifdef HAVE_TEMPNAM
> + {"tempnam", edk2_tempnam, METH_VARARGS, edk2_tempnam__doc__},
> +#endif
> +#ifdef HAVE_TMPNAM
> + {"tmpnam", edk2_tmpnam, METH_NOARGS, edk2_tmpnam__doc__},
> +#endif
> +#endif
> +#ifdef HAVE_CONFSTR
> + {"confstr", edk2_confstr, METH_VARARGS, edk2_confstr__doc__},
> +#endif
> +#ifdef HAVE_SYSCONF
> + {"sysconf", edk2_sysconf, METH_VARARGS, edk2_sysconf__doc__},
> +#endif
> +#ifdef HAVE_FPATHCONF
> + {"fpathconf", edk2_fpathconf, METH_VARARGS, edk2_fpathconf__doc__},
> +#endif
> +#ifdef HAVE_PATHCONF
> + {"pathconf", edk2_pathconf, METH_VARARGS, edk2_pathconf__doc__},
> +#endif
> + {"abort", edk2_abort, METH_NOARGS, edk2_abort__doc__},
> + {NULL, NULL} /* Sentinel */
> +};
> +
> +
> +static int
> +ins(PyObject *module, char *symbol, long value)
> +{
> + return PyModule_AddIntConstant(module, symbol, value);
> +}
> +
> +static int
> +all_ins(PyObject *d)
> +{
> +#ifdef F_OK
> + if (ins(d, "F_OK", (long)F_OK)) return -1;
> +#endif
> +#ifdef R_OK
> + if (ins(d, "R_OK", (long)R_OK)) return -1;
> +#endif
> +#ifdef W_OK
> + if (ins(d, "W_OK", (long)W_OK)) return -1;
> +#endif
> +#ifdef X_OK
> + if (ins(d, "X_OK", (long)X_OK)) return -1;
> +#endif
> +#ifdef NGROUPS_MAX
> + if (ins(d, "NGROUPS_MAX", (long)NGROUPS_MAX)) return -1;
> +#endif
> +#ifdef TMP_MAX
> + if (ins(d, "TMP_MAX", (long)TMP_MAX)) return -1;
> +#endif
> +#ifdef WCONTINUED
> + if (ins(d, "WCONTINUED", (long)WCONTINUED)) return -1;
> +#endif
> +#ifdef WNOHANG
> + if (ins(d, "WNOHANG", (long)WNOHANG)) return -1;
> +#endif
> +#ifdef WUNTRACED
> + if (ins(d, "WUNTRACED", (long)WUNTRACED)) return -1;
> +#endif
> +#ifdef O_RDONLY
> + if (ins(d, "O_RDONLY", (long)O_RDONLY)) return -1;
> +#endif
> +#ifdef O_WRONLY
> + if (ins(d, "O_WRONLY", (long)O_WRONLY)) return -1;
> +#endif
> +#ifdef O_RDWR
> + if (ins(d, "O_RDWR", (long)O_RDWR)) return -1;
> +#endif
> +#ifdef O_NDELAY
> + if (ins(d, "O_NDELAY", (long)O_NDELAY)) return -1;
> +#endif
> +#ifdef O_NONBLOCK
> + if (ins(d, "O_NONBLOCK", (long)O_NONBLOCK)) return -1;
> +#endif
> +#ifdef O_APPEND
> + if (ins(d, "O_APPEND", (long)O_APPEND)) return -1;
> +#endif
> +#ifdef O_DSYNC
> + if (ins(d, "O_DSYNC", (long)O_DSYNC)) return -1;
> +#endif
> +#ifdef O_RSYNC
> + if (ins(d, "O_RSYNC", (long)O_RSYNC)) return -1;
> +#endif
> +#ifdef O_SYNC
> + if (ins(d, "O_SYNC", (long)O_SYNC)) return -1;
> +#endif
> +#ifdef O_NOCTTY
> + if (ins(d, "O_NOCTTY", (long)O_NOCTTY)) return -1;
> +#endif
> +#ifdef O_CREAT
> + if (ins(d, "O_CREAT", (long)O_CREAT)) return -1;
> +#endif
> +#ifdef O_EXCL
> + if (ins(d, "O_EXCL", (long)O_EXCL)) return -1;
> +#endif
> +#ifdef O_TRUNC
> + if (ins(d, "O_TRUNC", (long)O_TRUNC)) return -1;
> +#endif
> +#ifdef O_BINARY
> + if (ins(d, "O_BINARY", (long)O_BINARY)) return -1;
> +#endif
> +#ifdef O_TEXT
> + if (ins(d, "O_TEXT", (long)O_TEXT)) return -1;
> +#endif
> +#ifdef O_LARGEFILE
> + if (ins(d, "O_LARGEFILE", (long)O_LARGEFILE)) return -1;
> +#endif
> +#ifdef O_SHLOCK
> + if (ins(d, "O_SHLOCK", (long)O_SHLOCK)) return -1;
> +#endif
> +#ifdef O_EXLOCK
> + if (ins(d, "O_EXLOCK", (long)O_EXLOCK)) return -1;
> +#endif
> +
> +/* MS Windows */
> +#ifdef O_NOINHERIT
> + /* Don't inherit in child processes. */
> + if (ins(d, "O_NOINHERIT", (long)O_NOINHERIT)) return -1;
> +#endif
> +#ifdef _O_SHORT_LIVED
> + /* Optimize for short life (keep in memory). */
> + /* MS forgot to define this one with a non-underscore form too. */
> + if (ins(d, "O_SHORT_LIVED", (long)_O_SHORT_LIVED)) return -1;
> +#endif
> +#ifdef O_TEMPORARY
> + /* Automatically delete when last handle is closed. */
> + if (ins(d, "O_TEMPORARY", (long)O_TEMPORARY)) return -1;
> +#endif
> +#ifdef O_RANDOM
> + /* Optimize for random access. */
> + if (ins(d, "O_RANDOM", (long)O_RANDOM)) return -1;
> +#endif
> +#ifdef O_SEQUENTIAL
> + /* Optimize for sequential access. */
> + if (ins(d, "O_SEQUENTIAL", (long)O_SEQUENTIAL)) return -1;
> +#endif
> +
> +/* GNU extensions. */
> +#ifdef O_ASYNC
> + /* Send a SIGIO signal whenever input or output
> + becomes available on file descriptor */
> + if (ins(d, "O_ASYNC", (long)O_ASYNC)) return -1;
> +#endif
> +#ifdef O_DIRECT
> + /* Direct disk access. */
> + if (ins(d, "O_DIRECT", (long)O_DIRECT)) return -1;
> +#endif
> +#ifdef O_DIRECTORY
> + /* Must be a directory. */
> + if (ins(d, "O_DIRECTORY", (long)O_DIRECTORY)) return -1;
> +#endif
> +#ifdef O_NOFOLLOW
> + /* Do not follow links. */
> + if (ins(d, "O_NOFOLLOW", (long)O_NOFOLLOW)) return -1;
> +#endif
> +#ifdef O_NOATIME
> + /* Do not update the access time. */
> + if (ins(d, "O_NOATIME", (long)O_NOATIME)) return -1;
> +#endif
> +
> + /* These come from sysexits.h */
> +#ifdef EX_OK
> + if (ins(d, "EX_OK", (long)EX_OK)) return -1;
> +#endif /* EX_OK */
> +#ifdef EX_USAGE
> + if (ins(d, "EX_USAGE", (long)EX_USAGE)) return -1;
> +#endif /* EX_USAGE */
> +#ifdef EX_DATAERR
> + if (ins(d, "EX_DATAERR", (long)EX_DATAERR)) return -1;
> +#endif /* EX_DATAERR */
> +#ifdef EX_NOINPUT
> + if (ins(d, "EX_NOINPUT", (long)EX_NOINPUT)) return -1;
> +#endif /* EX_NOINPUT */
> +#ifdef EX_NOUSER
> + if (ins(d, "EX_NOUSER", (long)EX_NOUSER)) return -1;
> +#endif /* EX_NOUSER */
> +#ifdef EX_NOHOST
> + if (ins(d, "EX_NOHOST", (long)EX_NOHOST)) return -1;
> +#endif /* EX_NOHOST */
> +#ifdef EX_UNAVAILABLE
> + if (ins(d, "EX_UNAVAILABLE", (long)EX_UNAVAILABLE)) return -1;
> +#endif /* EX_UNAVAILABLE */
> +#ifdef EX_SOFTWARE
> + if (ins(d, "EX_SOFTWARE", (long)EX_SOFTWARE)) return -1;
> +#endif /* EX_SOFTWARE */
> +#ifdef EX_OSERR
> + if (ins(d, "EX_OSERR", (long)EX_OSERR)) return -1;
> +#endif /* EX_OSERR */
> +#ifdef EX_OSFILE
> + if (ins(d, "EX_OSFILE", (long)EX_OSFILE)) return -1;
> +#endif /* EX_OSFILE */
> +#ifdef EX_CANTCREAT
> + if (ins(d, "EX_CANTCREAT", (long)EX_CANTCREAT)) return -1;
> +#endif /* EX_CANTCREAT */
> +#ifdef EX_IOERR
> + if (ins(d, "EX_IOERR", (long)EX_IOERR)) return -1;
> +#endif /* EX_IOERR */
> +#ifdef EX_TEMPFAIL
> + if (ins(d, "EX_TEMPFAIL", (long)EX_TEMPFAIL)) return -1;
> +#endif /* EX_TEMPFAIL */
> +#ifdef EX_PROTOCOL
> + if (ins(d, "EX_PROTOCOL", (long)EX_PROTOCOL)) return -1;
> +#endif /* EX_PROTOCOL */
> +#ifdef EX_NOPERM
> + if (ins(d, "EX_NOPERM", (long)EX_NOPERM)) return -1;
> +#endif /* EX_NOPERM */
> +#ifdef EX_CONFIG
> + if (ins(d, "EX_CONFIG", (long)EX_CONFIG)) return -1;
> +#endif /* EX_CONFIG */
> +#ifdef EX_NOTFOUND
> + if (ins(d, "EX_NOTFOUND", (long)EX_NOTFOUND)) return -1;
> +#endif /* EX_NOTFOUND */
> +
> +#ifdef HAVE_SPAWNV
> + if (ins(d, "P_WAIT", (long)_P_WAIT)) return -1;
> + if (ins(d, "P_NOWAIT", (long)_P_NOWAIT)) return -1;
> + if (ins(d, "P_OVERLAY", (long)_OLD_P_OVERLAY)) return -1;
> + if (ins(d, "P_NOWAITO", (long)_P_NOWAITO)) return -1;
> + if (ins(d, "P_DETACH", (long)_P_DETACH)) return -1;
> +#endif
> + return 0;
> +}
> +
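all_ins() only publishes the constants that the C environment actually defines, so portable Python code should feature-test optional flags instead of assuming a fixed set. A sketch, assuming the module imports as edk2 and using an illustrative file name:

    import edk2

    flags = edk2.O_RDONLY
    if hasattr(edk2, "O_BINARY"):       # present only when the build defines it
        flags |= edk2.O_BINARY
    fd = edk2.open("startup.nsh", flags)    # illustrative file name
    try:
        print(edk2.read(fd, 64))
    finally:
        edk2.close(fd)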
> +static struct PyModuleDef edk2module = {
> + PyModuleDef_HEAD_INIT,
> + "edk2",
> + edk2__doc__,
> + -1,
> + edk2_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +#define INITFUNC initedk2
> +#define MODNAME "edk2"
> +
> +PyMODINIT_FUNC
> +PyEdk2__Init(void)
> +{
> + PyObject *m;
> +
> +#ifndef UEFI_C_SOURCE
> + PyObject *v;
> +#endif
> +
> + m = PyModule_Create(&edk2module);
> + if (m == NULL)
> + return m;
> +
> +#ifndef UEFI_C_SOURCE
> + /* Initialize environ dictionary */
> + v = convertenviron();
> + Py_XINCREF(v);
> + if (v == NULL || PyModule_AddObject(m, "environ", v) != 0)
> + return NULL;
> + Py_DECREF(v);
> +#endif /* UEFI_C_SOURCE */
> +
> + if (all_ins(m))
> + return NULL;
> +
> + Py_INCREF(PyExc_OSError);
> + PyModule_AddObject(m, "error", PyExc_OSError);
> +
> +#ifdef HAVE_PUTENV
> + if (edk2_putenv_garbage == NULL)
> + edk2_putenv_garbage = PyDict_New();
> +#endif
> +
> + if (!initialized) {
> + stat_result_desc.name = MODNAME ".stat_result";
> + stat_result_desc.fields[2].name = PyStructSequence_UnnamedField;
> + stat_result_desc.fields[3].name = PyStructSequence_UnnamedField;
> + stat_result_desc.fields[4].name = PyStructSequence_UnnamedField;
> + PyStructSequence_InitType(&StatResultType, &stat_result_desc);
> + structseq_new = StatResultType.tp_new;
> + StatResultType.tp_new = statresult_new;
> +
> + //statvfs_result_desc.name = MODNAME ".statvfs_result";
> + //PyStructSequence_InitType(&StatVFSResultType, &statvfs_result_desc);
> +#ifdef NEED_TICKS_PER_SECOND
> +# if defined(HAVE_SYSCONF) && defined(_SC_CLK_TCK)
> + ticks_per_second = sysconf(_SC_CLK_TCK);
> +# elif defined(HZ)
> + ticks_per_second = HZ;
> +# else
> + ticks_per_second = 60; /* magic fallback value; may be bogus */
> +# endif
> +#endif
> + }
> + Py_INCREF((PyObject*) &StatResultType);
> + PyModule_AddObject(m, "stat_result", (PyObject*) &StatResultType);
> + //Py_INCREF((PyObject*) &StatVFSResultType);
> + //PyModule_AddObject(m, "statvfs_result",
> + // (PyObject*) &StatVFSResultType);
> + initialized = 1;
> + return m;
> +
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +
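PyEdk2__Init() above ties the module together: it creates "edk2", publishes the constants via all_ins(), aliases edk2.error to OSError, and registers the stat_result type. The module can also be exercised directly from the UEFI shell interpreter; a hedged sketch (getcwd is only present when HAVE_GETCWD is defined, and the file names are illustrative):

    import edk2

    print(edk2.getcwd())                # only if HAVE_GETCWD was defined
    for name in edk2.listdir("."):
        print(name, edk2.stat(name).st_size)
    try:
        edk2.unlink("no-such-file")     # illustrative name
    except edk2.error as exc:           # edk2.error is OSError
        print("expected:", exc)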
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
> new file mode 100644
> index 00000000..b8e96c48
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
> @@ -0,0 +1,890 @@
> +/* Errno module
> +
> + Copyright (c) 2011 - 2012, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +*/
> +
> +#include "Python.h"
> +
> +/* Windows socket errors (WSA*) */
> +#ifdef MS_WINDOWS
> +#include <windows.h>
> +#endif
> +
> +/*
> + * Pull in the system error definitions
> + */
> +
> +static PyMethodDef errno_methods[] = {
> + {NULL, NULL}
> +};
> +
> +/* Helper function doing the dictionary inserting */
> +
> +static void
> +_inscode(PyObject *d, PyObject *de, const char *name, int code)
> +{
> + PyObject *u = PyUnicode_FromString(name);
> + PyObject *v = PyLong_FromLong((long) code);
> +
> + /* Don't bother checking for errors; they'll be caught at the end
> + * of the module initialization function by the caller of
> + * initerrno().
> + */
> + if (u && v) {
> + /* insert in modules dict */
> + PyDict_SetItem(d, u, v);
> + /* insert in errorcode dict */
> + PyDict_SetItem(de, v, u);
> + }
> + Py_XDECREF(u);
> + Py_XDECREF(v);
> +}
> +
> +PyDoc_STRVAR(errno__doc__,
> +"This module makes available standard errno system symbols.\n\
> +\n\
> +The value of each symbol is the corresponding integer value,\n\
> +e.g., on most systems, errno.ENOENT equals the integer 2.\n\
> +\n\
> +The dictionary errno.errorcode maps numeric codes to symbol names,\n\
> +e.g., errno.errorcode[2] could be the string 'ENOENT'.\n\
> +\n\
> +Symbols that are not relevant to the underlying system are not defined.\n\
> +\n\
> +To map error codes to error messages, use the function os.strerror(),\n\
> +e.g. os.strerror(2) could return 'No such file or directory'.");
> +
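The docstring above describes both directions of the mapping that the initialization below fills in through _inscode(): symbolic names become integer attributes, and errno.errorcode maps the integers back to their names. A short sketch of typical use:

    import errno
    import os

    print(errno.ENOENT)                      # the platform's integer value
    print(errno.errorcode[errno.ENOENT])     # 'ENOENT'
    print(os.strerror(errno.ENOENT))         # human-readable message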
> +static struct PyModuleDef errnomodule = {
> + PyModuleDef_HEAD_INIT,
> + "errno",
> + errno__doc__,
> + -1,
> + errno_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit_errno(void)
> +{
> + PyObject *m, *d, *de;
> + m = PyModule_Create(&errnomodule);
> + if (m == NULL)
> + return NULL;
> + d = PyModule_GetDict(m);
> + de = PyDict_New();
> + if (!d || !de || PyDict_SetItemString(d, "errorcode", de) < 0)
> + return NULL;
> +
> +/* Macro so I don't have to edit each and every line below... */
> +#define inscode(d, ds, de, name, code, comment) _inscode(d, de, name, code)
> +
> + /*
> + * The names and comments are borrowed from linux/include/errno.h,
> + * which should be pretty all-inclusive. However, the Solaris specific
> + * names and comments are borrowed from sys/errno.h in Solaris.
> + * MacOSX specific names and comments are borrowed from sys/errno.h in
> + * MacOSX.
> + */
> +
> +#ifdef ENODEV
> + inscode(d, ds, de, "ENODEV", ENODEV, "No such device");
> +#endif
> +#ifdef ENOCSI
> + inscode(d, ds, de, "ENOCSI", ENOCSI, "No CSI structure available");
> +#endif
> +#ifdef EHOSTUNREACH
> + inscode(d, ds, de, "EHOSTUNREACH", EHOSTUNREACH, "No route to host");
> +#else
> +#ifdef WSAEHOSTUNREACH
> + inscode(d, ds, de, "EHOSTUNREACH", WSAEHOSTUNREACH, "No route to host");
> +#endif
> +#endif
> +#ifdef ENOMSG
> + inscode(d, ds, de, "ENOMSG", ENOMSG, "No message of desired type");
> +#endif
> +#ifdef EUCLEAN
> + inscode(d, ds, de, "EUCLEAN", EUCLEAN, "Structure needs cleaning");
> +#endif
> +#ifdef EL2NSYNC
> + inscode(d, ds, de, "EL2NSYNC", EL2NSYNC, "Level 2 not synchronized");
> +#endif
> +#ifdef EL2HLT
> + inscode(d, ds, de, "EL2HLT", EL2HLT, "Level 2 halted");
> +#endif
> +#ifdef ENODATA
> + inscode(d, ds, de, "ENODATA", ENODATA, "No data available");
> +#endif
> +#ifdef ENOTBLK
> + inscode(d, ds, de, "ENOTBLK", ENOTBLK, "Block device required");
> +#endif
> +#ifdef ENOSYS
> + inscode(d, ds, de, "ENOSYS", ENOSYS, "Function not implemented");
> +#endif
> +#ifdef EPIPE
> + inscode(d, ds, de, "EPIPE", EPIPE, "Broken pipe");
> +#endif
> +#ifdef EINVAL
> + inscode(d, ds, de, "EINVAL", EINVAL, "Invalid argument");
> +#else
> +#ifdef WSAEINVAL
> + inscode(d, ds, de, "EINVAL", WSAEINVAL, "Invalid argument");
> +#endif
> +#endif
> +#ifdef EOVERFLOW
> + inscode(d, ds, de, "EOVERFLOW", EOVERFLOW, "Value too large for defined data type");
> +#endif
> +#ifdef EADV
> + inscode(d, ds, de, "EADV", EADV, "Advertise error");
> +#endif
> +#ifdef EINTR
> + inscode(d, ds, de, "EINTR", EINTR, "Interrupted system call");
> +#else
> +#ifdef WSAEINTR
> + inscode(d, ds, de, "EINTR", WSAEINTR, "Interrupted system call");
> +#endif
> +#endif
> +#ifdef EUSERS
> + inscode(d, ds, de, "EUSERS", EUSERS, "Too many users");
> +#else
> +#ifdef WSAEUSERS
> + inscode(d, ds, de, "EUSERS", WSAEUSERS, "Too many users");
> +#endif
> +#endif
> +#ifdef ENOTEMPTY
> + inscode(d, ds, de, "ENOTEMPTY", ENOTEMPTY, "Directory not empty");
> +#else
> +#ifdef WSAENOTEMPTY
> + inscode(d, ds, de, "ENOTEMPTY", WSAENOTEMPTY, "Directory not empty");
> +#endif
> +#endif
> +#ifdef ENOBUFS
> + inscode(d, ds, de, "ENOBUFS", ENOBUFS, "No buffer space available");
> +#else
> +#ifdef WSAENOBUFS
> + inscode(d, ds, de, "ENOBUFS", WSAENOBUFS, "No buffer space available");
> +#endif
> +#endif
> +#ifdef EPROTO
> + inscode(d, ds, de, "EPROTO", EPROTO, "Protocol error");
> +#endif
> +#ifdef EREMOTE
> + inscode(d, ds, de, "EREMOTE", EREMOTE, "Object is remote");
> +#else
> +#ifdef WSAEREMOTE
> + inscode(d, ds, de, "EREMOTE", WSAEREMOTE, "Object is remote");
> +#endif
> +#endif
> +#ifdef ENAVAIL
> + inscode(d, ds, de, "ENAVAIL", ENAVAIL, "No XENIX semaphores available");
> +#endif
> +#ifdef ECHILD
> + inscode(d, ds, de, "ECHILD", ECHILD, "No child processes");
> +#endif
> +#ifdef ELOOP
> + inscode(d, ds, de, "ELOOP", ELOOP, "Too many symbolic links encountered");
> +#else
> +#ifdef WSAELOOP
> + inscode(d, ds, de, "ELOOP", WSAELOOP, "Too many symbolic links encountered");
> +#endif
> +#endif
> +#ifdef EXDEV
> + inscode(d, ds, de, "EXDEV", EXDEV, "Cross-device link");
> +#endif
> +#ifdef E2BIG
> + inscode(d, ds, de, "E2BIG", E2BIG, "Arg list too long");
> +#endif
> +#ifdef ESRCH
> + inscode(d, ds, de, "ESRCH", ESRCH, "No such process");
> +#endif
> +#ifdef EMSGSIZE
> + inscode(d, ds, de, "EMSGSIZE", EMSGSIZE, "Message too long");
> +#else
> +#ifdef WSAEMSGSIZE
> + inscode(d, ds, de, "EMSGSIZE", WSAEMSGSIZE, "Message too long");
> +#endif
> +#endif
> +#ifdef EAFNOSUPPORT
> + inscode(d, ds, de, "EAFNOSUPPORT", EAFNOSUPPORT, "Address family not supported by protocol");
> +#else
> +#ifdef WSAEAFNOSUPPORT
> + inscode(d, ds, de, "EAFNOSUPPORT", WSAEAFNOSUPPORT, "Address family not supported by protocol");
> +#endif
> +#endif
> +#ifdef EBADR
> + inscode(d, ds, de, "EBADR", EBADR, "Invalid request descriptor");
> +#endif
> +#ifdef EHOSTDOWN
> + inscode(d, ds, de, "EHOSTDOWN", EHOSTDOWN, "Host is down");
> +#else
> +#ifdef WSAEHOSTDOWN
> + inscode(d, ds, de, "EHOSTDOWN", WSAEHOSTDOWN, "Host is down");
> +#endif
> +#endif
> +#ifdef EPFNOSUPPORT
> + inscode(d, ds, de, "EPFNOSUPPORT", EPFNOSUPPORT, "Protocol family not supported");
> +#else
> +#ifdef WSAEPFNOSUPPORT
> + inscode(d, ds, de, "EPFNOSUPPORT", WSAEPFNOSUPPORT, "Protocol family not supported");
> +#endif
> +#endif
> +#ifdef ENOPROTOOPT
> + inscode(d, ds, de, "ENOPROTOOPT", ENOPROTOOPT, "Protocol not available");
> +#else
> +#ifdef WSAENOPROTOOPT
> + inscode(d, ds, de, "ENOPROTOOPT", WSAENOPROTOOPT, "Protocol not available");
> +#endif
> +#endif
> +#ifdef EBUSY
> + inscode(d, ds, de, "EBUSY", EBUSY, "Device or resource busy");
> +#endif
> +#ifdef EWOULDBLOCK
> + inscode(d, ds, de, "EWOULDBLOCK", EWOULDBLOCK, "Operation would block");
> +#else
> +#ifdef WSAEWOULDBLOCK
> + inscode(d, ds, de, "EWOULDBLOCK", WSAEWOULDBLOCK, "Operation would block");
> +#endif
> +#endif
> +#ifdef EBADFD
> + inscode(d, ds, de, "EBADFD", EBADFD, "File descriptor in bad state");
> +#endif
> +#ifdef EDOTDOT
> + inscode(d, ds, de, "EDOTDOT", EDOTDOT, "RFS specific error");
> +#endif
> +#ifdef EISCONN
> + inscode(d, ds, de, "EISCONN", EISCONN, "Transport endpoint is already connected");
> +#else
> +#ifdef WSAEISCONN
> + inscode(d, ds, de, "EISCONN", WSAEISCONN, "Transport endpoint is already connected");
> +#endif
> +#endif
> +#ifdef ENOANO
> + inscode(d, ds, de, "ENOANO", ENOANO, "No anode");
> +#endif
> +#ifdef ESHUTDOWN
> + inscode(d, ds, de, "ESHUTDOWN", ESHUTDOWN, "Cannot send after transport endpoint shutdown");
> +#else
> +#ifdef WSAESHUTDOWN
> + inscode(d, ds, de, "ESHUTDOWN", WSAESHUTDOWN, "Cannot send after transport endpoint shutdown");
> +#endif
> +#endif
> +#ifdef ECHRNG
> + inscode(d, ds, de, "ECHRNG", ECHRNG, "Channel number out of range");
> +#endif
> +#ifdef ELIBBAD
> + inscode(d, ds, de, "ELIBBAD", ELIBBAD, "Accessing a corrupted shared library");
> +#endif
> +#ifdef ENONET
> + inscode(d, ds, de, "ENONET", ENONET, "Machine is not on the network");
> +#endif
> +#ifdef EBADE
> + inscode(d, ds, de, "EBADE", EBADE, "Invalid exchange");
> +#endif
> +#ifdef EBADF
> + inscode(d, ds, de, "EBADF", EBADF, "Bad file number");
> +#else
> +#ifdef WSAEBADF
> + inscode(d, ds, de, "EBADF", WSAEBADF, "Bad file number");
> +#endif
> +#endif
> +#ifdef EMULTIHOP
> + inscode(d, ds, de, "EMULTIHOP", EMULTIHOP, "Multihop attempted");
> +#endif
> +#ifdef EIO
> + inscode(d, ds, de, "EIO", EIO, "I/O error");
> +#endif
> +#ifdef EUNATCH
> + inscode(d, ds, de, "EUNATCH", EUNATCH, "Protocol driver not attached");
> +#endif
> +#ifdef EPROTOTYPE
> + inscode(d, ds, de, "EPROTOTYPE", EPROTOTYPE, "Protocol wrong type for socket");
> +#else
> +#ifdef WSAEPROTOTYPE
> + inscode(d, ds, de, "EPROTOTYPE", WSAEPROTOTYPE, "Protocol wrong type for socket");
> +#endif
> +#endif
> +#ifdef ENOSPC
> + inscode(d, ds, de, "ENOSPC", ENOSPC, "No space left on device");
> +#endif
> +#ifdef ENOEXEC
> + inscode(d, ds, de, "ENOEXEC", ENOEXEC, "Exec format error");
> +#endif
> +#ifdef EALREADY
> + inscode(d, ds, de, "EALREADY", EALREADY, "Operation already in progress");
> +#else
> +#ifdef WSAEALREADY
> + inscode(d, ds, de, "EALREADY", WSAEALREADY, "Operation already in progress");
> +#endif
> +#endif
> +#ifdef ENETDOWN
> + inscode(d, ds, de, "ENETDOWN", ENETDOWN, "Network is down");
> +#else
> +#ifdef WSAENETDOWN
> + inscode(d, ds, de, "ENETDOWN", WSAENETDOWN, "Network is down");
> +#endif
> +#endif
> +#ifdef ENOTNAM
> + inscode(d, ds, de, "ENOTNAM", ENOTNAM, "Not a XENIX named type file");
> +#endif
> +#ifdef EACCES
> + inscode(d, ds, de, "EACCES", EACCES, "Permission denied");
> +#else
> +#ifdef WSAEACCES
> + inscode(d, ds, de, "EACCES", WSAEACCES, "Permission denied");
> +#endif
> +#endif
> +#ifdef ELNRNG
> + inscode(d, ds, de, "ELNRNG", ELNRNG, "Link number out of range");
> +#endif
> +#ifdef EILSEQ
> + inscode(d, ds, de, "EILSEQ", EILSEQ, "Illegal byte sequence");
> +#endif
> +#ifdef ENOTDIR
> + inscode(d, ds, de, "ENOTDIR", ENOTDIR, "Not a directory");
> +#endif
> +#ifdef ENOTUNIQ
> + inscode(d, ds, de, "ENOTUNIQ", ENOTUNIQ, "Name not unique on network");
> +#endif
> +#ifdef EPERM
> + inscode(d, ds, de, "EPERM", EPERM, "Operation not permitted");
> +#endif
> +#ifdef EDOM
> + inscode(d, ds, de, "EDOM", EDOM, "Math argument out of domain of func");
> +#endif
> +#ifdef EXFULL
> + inscode(d, ds, de, "EXFULL", EXFULL, "Exchange full");
> +#endif
> +#ifdef ECONNREFUSED
> + inscode(d, ds, de, "ECONNREFUSED", ECONNREFUSED, "Connection refused");
> +#else
> +#ifdef WSAECONNREFUSED
> + inscode(d, ds, de, "ECONNREFUSED", WSAECONNREFUSED, "Connection refused");
> +#endif
> +#endif
> +#ifdef EISDIR
> + inscode(d, ds, de, "EISDIR", EISDIR, "Is a directory");
> +#endif
> +#ifdef EPROTONOSUPPORT
> + inscode(d, ds, de, "EPROTONOSUPPORT", EPROTONOSUPPORT, "Protocol not supported");
> +#else
> +#ifdef WSAEPROTONOSUPPORT
> + inscode(d, ds, de, "EPROTONOSUPPORT", WSAEPROTONOSUPPORT, "Protocol not supported");
> +#endif
> +#endif
> +#ifdef EROFS
> + inscode(d, ds, de, "EROFS", EROFS, "Read-only file system");
> +#endif
> +#ifdef EADDRNOTAVAIL
> + inscode(d, ds, de, "EADDRNOTAVAIL", EADDRNOTAVAIL, "Cannot assign requested address");
> +#else
> +#ifdef WSAEADDRNOTAVAIL
> + inscode(d, ds, de, "EADDRNOTAVAIL", WSAEADDRNOTAVAIL, "Cannot assign requested address");
> +#endif
> +#endif
> +#ifdef EIDRM
> + inscode(d, ds, de, "EIDRM", EIDRM, "Identifier removed");
> +#endif
> +#ifdef ECOMM
> + inscode(d, ds, de, "ECOMM", ECOMM, "Communication error on send");
> +#endif
> +#ifdef ESRMNT
> + inscode(d, ds, de, "ESRMNT", ESRMNT, "Srmount error");
> +#endif
> +#ifdef EREMOTEIO
> + inscode(d, ds, de, "EREMOTEIO", EREMOTEIO, "Remote I/O error");
> +#endif
> +#ifdef EL3RST
> + inscode(d, ds, de, "EL3RST", EL3RST, "Level 3 reset");
> +#endif
> +#ifdef EBADMSG
> + inscode(d, ds, de, "EBADMSG", EBADMSG, "Not a data message");
> +#endif
> +#ifdef ENFILE
> + inscode(d, ds, de, "ENFILE", ENFILE, "File table overflow");
> +#endif
> +#ifdef ELIBMAX
> + inscode(d, ds, de, "ELIBMAX", ELIBMAX, "Attempting to link in too many shared libraries");
> +#endif
> +#ifdef ESPIPE
> + inscode(d, ds, de, "ESPIPE", ESPIPE, "Illegal seek");
> +#endif
> +#ifdef ENOLINK
> + inscode(d, ds, de, "ENOLINK", ENOLINK, "Link has been severed");
> +#endif
> +#ifdef ENETRESET
> + inscode(d, ds, de, "ENETRESET", ENETRESET, "Network dropped connection because of reset");
> +#else
> +#ifdef WSAENETRESET
> + inscode(d, ds, de, "ENETRESET", WSAENETRESET, "Network dropped connection because of reset");
> +#endif
> +#endif
> +#ifdef ETIMEDOUT
> + inscode(d, ds, de, "ETIMEDOUT", ETIMEDOUT, "Connection timed out");
> +#else
> +#ifdef WSAETIMEDOUT
> + inscode(d, ds, de, "ETIMEDOUT", WSAETIMEDOUT, "Connection timed out");
> +#endif
> +#endif
> +#ifdef ENOENT
> + inscode(d, ds, de, "ENOENT", ENOENT, "No such file or directory");
> +#endif
> +#ifdef EEXIST
> + inscode(d, ds, de, "EEXIST", EEXIST, "File exists");
> +#endif
> +#ifdef EDQUOT
> + inscode(d, ds, de, "EDQUOT", EDQUOT, "Quota exceeded");
> +#else
> +#ifdef WSAEDQUOT
> + inscode(d, ds, de, "EDQUOT", WSAEDQUOT, "Quota exceeded");
> +#endif
> +#endif
> +#ifdef ENOSTR
> + inscode(d, ds, de, "ENOSTR", ENOSTR, "Device not a stream");
> +#endif
> +#ifdef EBADSLT
> + inscode(d, ds, de, "EBADSLT", EBADSLT, "Invalid slot");
> +#endif
> +#ifdef EBADRQC
> + inscode(d, ds, de, "EBADRQC", EBADRQC, "Invalid request code");
> +#endif
> +#ifdef ELIBACC
> + inscode(d, ds, de, "ELIBACC", ELIBACC, "Can not access a needed shared library");
> +#endif
> +#ifdef EFAULT
> + inscode(d, ds, de, "EFAULT", EFAULT, "Bad address");
> +#else
> +#ifdef WSAEFAULT
> + inscode(d, ds, de, "EFAULT", WSAEFAULT, "Bad address");
> +#endif
> +#endif
> +#ifdef EFBIG
> + inscode(d, ds, de, "EFBIG", EFBIG, "File too large");
> +#endif
> +#ifdef EDEADLK
> + inscode(d, ds, de, "EDEADLK", EDEADLK, "Resource deadlock would occur");
> +#endif
> +#ifdef ENOTCONN
> + inscode(d, ds, de, "ENOTCONN", ENOTCONN, "Transport endpoint is not connected");
> +#else
> +#ifdef WSAENOTCONN
> + inscode(d, ds, de, "ENOTCONN", WSAENOTCONN, "Transport endpoint is not connected");
> +#endif
> +#endif
> +#ifdef EDESTADDRREQ
> + inscode(d, ds, de, "EDESTADDRREQ", EDESTADDRREQ, "Destination address required");
> +#else
> +#ifdef WSAEDESTADDRREQ
> + inscode(d, ds, de, "EDESTADDRREQ", WSAEDESTADDRREQ, "Destination address required");
> +#endif
> +#endif
> +#ifdef ELIBSCN
> + inscode(d, ds, de, "ELIBSCN", ELIBSCN, ".lib section in a.out corrupted");
> +#endif
> +#ifdef ENOLCK
> + inscode(d, ds, de, "ENOLCK", ENOLCK, "No record locks available");
> +#endif
> +#ifdef EISNAM
> + inscode(d, ds, de, "EISNAM", EISNAM, "Is a named type file");
> +#endif
> +#ifdef ECONNABORTED
> + inscode(d, ds, de, "ECONNABORTED", ECONNABORTED, "Software caused connection abort");
> +#else
> +#ifdef WSAECONNABORTED
> + inscode(d, ds, de, "ECONNABORTED", WSAECONNABORTED, "Software caused connection abort");
> +#endif
> +#endif
> +#ifdef ENETUNREACH
> + inscode(d, ds, de, "ENETUNREACH", ENETUNREACH, "Network is unreachable");
> +#else
> +#ifdef WSAENETUNREACH
> + inscode(d, ds, de, "ENETUNREACH", WSAENETUNREACH, "Network is unreachable");
> +#endif
> +#endif
> +#ifdef ESTALE
> + inscode(d, ds, de, "ESTALE", ESTALE, "Stale NFS file handle");
> +#else
> +#ifdef WSAESTALE
> + inscode(d, ds, de, "ESTALE", WSAESTALE, "Stale NFS file handle");
> +#endif
> +#endif
> +#ifdef ENOSR
> + inscode(d, ds, de, "ENOSR", ENOSR, "Out of streams resources");
> +#endif
> +#ifdef ENOMEM
> + inscode(d, ds, de, "ENOMEM", ENOMEM, "Out of memory");
> +#endif
> +#ifdef ENOTSOCK
> + inscode(d, ds, de, "ENOTSOCK", ENOTSOCK, "Socket operation on non-socket");
> +#else
> +#ifdef WSAENOTSOCK
> + inscode(d, ds, de, "ENOTSOCK", WSAENOTSOCK, "Socket operation on non-socket");
> +#endif
> +#endif
> +#ifdef ESTRPIPE
> + inscode(d, ds, de, "ESTRPIPE", ESTRPIPE, "Streams pipe error");
> +#endif
> +#ifdef EMLINK
> + inscode(d, ds, de, "EMLINK", EMLINK, "Too many links");
> +#endif
> +#ifdef ERANGE
> + inscode(d, ds, de, "ERANGE", ERANGE, "Math result not representable");
> +#endif
> +#ifdef ELIBEXEC
> + inscode(d, ds, de, "ELIBEXEC", ELIBEXEC, "Cannot exec a shared library directly");
> +#endif
> +#ifdef EL3HLT
> + inscode(d, ds, de, "EL3HLT", EL3HLT, "Level 3 halted");
> +#endif
> +#ifdef ECONNRESET
> + inscode(d, ds, de, "ECONNRESET", ECONNRESET, "Connection reset by peer");
> +#else
> +#ifdef WSAECONNRESET
> + inscode(d, ds, de, "ECONNRESET", WSAECONNRESET, "Connection reset by peer");
> +#endif
> +#endif
> +#ifdef EADDRINUSE
> + inscode(d, ds, de, "EADDRINUSE", EADDRINUSE, "Address already in use");
> +#else
> +#ifdef WSAEADDRINUSE
> + inscode(d, ds, de, "EADDRINUSE", WSAEADDRINUSE, "Address already in use");
> +#endif
> +#endif
> +#ifdef EOPNOTSUPP
> + inscode(d, ds, de, "EOPNOTSUPP", EOPNOTSUPP, "Operation not supported on transport endpoint");
> +#else
> +#ifdef WSAEOPNOTSUPP
> + inscode(d, ds, de, "EOPNOTSUPP", WSAEOPNOTSUPP, "Operation not supported on transport endpoint");
> +#endif
> +#endif
> +#ifdef EREMCHG
> + inscode(d, ds, de, "EREMCHG", EREMCHG, "Remote address changed");
> +#endif
> +#ifdef EAGAIN
> + inscode(d, ds, de, "EAGAIN", EAGAIN, "Try again");
> +#endif
> +#ifdef ENAMETOOLONG
> + inscode(d, ds, de, "ENAMETOOLONG", ENAMETOOLONG, "File name too long");
> +#else
> +#ifdef WSAENAMETOOLONG
> + inscode(d, ds, de, "ENAMETOOLONG", WSAENAMETOOLONG, "File name too long");
> +#endif
> +#endif
> +#ifdef ENOTTY
> + inscode(d, ds, de, "ENOTTY", ENOTTY, "Not a typewriter");
> +#endif
> +#ifdef ERESTART
> + inscode(d, ds, de, "ERESTART", ERESTART, "Interrupted system call should be restarted");
> +#endif
> +#ifdef ESOCKTNOSUPPORT
> + inscode(d, ds, de, "ESOCKTNOSUPPORT", ESOCKTNOSUPPORT, "Socket type not supported");
> +#else
> +#ifdef WSAESOCKTNOSUPPORT
> + inscode(d, ds, de, "ESOCKTNOSUPPORT", WSAESOCKTNOSUPPORT, "Socket type not supported");
> +#endif
> +#endif
> +#ifdef ETIME
> + inscode(d, ds, de, "ETIME", ETIME, "Timer expired");
> +#endif
> +#ifdef EBFONT
> + inscode(d, ds, de, "EBFONT", EBFONT, "Bad font file format");
> +#endif
> +#ifdef EDEADLOCK
> + inscode(d, ds, de, "EDEADLOCK", EDEADLOCK, "Error EDEADLOCK");
> +#endif
> +#ifdef ETOOMANYREFS
> + inscode(d, ds, de, "ETOOMANYREFS", ETOOMANYREFS, "Too many references: cannot splice");
> +#else
> +#ifdef WSAETOOMANYREFS
> + inscode(d, ds, de, "ETOOMANYREFS", WSAETOOMANYREFS, "Too many references: cannot splice");
> +#endif
> +#endif
> +#ifdef EMFILE
> + inscode(d, ds, de, "EMFILE", EMFILE, "Too many open files");
> +#else
> +#ifdef WSAEMFILE
> + inscode(d, ds, de, "EMFILE", WSAEMFILE, "Too many open files");
> +#endif
> +#endif
> +#ifdef ETXTBSY
> + inscode(d, ds, de, "ETXTBSY", ETXTBSY, "Text file busy");
> +#endif
> +#ifdef EINPROGRESS
> + inscode(d, ds, de, "EINPROGRESS", EINPROGRESS, "Operation now in progress");
> +#else
> +#ifdef WSAEINPROGRESS
> + inscode(d, ds, de, "EINPROGRESS", WSAEINPROGRESS, "Operation now in progress");
> +#endif
> +#endif
> +#ifdef ENXIO
> + inscode(d, ds, de, "ENXIO", ENXIO, "No such device or address");
> +#endif
> +#ifdef ENOPKG
> + inscode(d, ds, de, "ENOPKG", ENOPKG, "Package not installed");
> +#endif
> +#ifdef WSASY
> + inscode(d, ds, de, "WSASY", WSASY, "Error WSASY");
> +#endif
> +#ifdef WSAEHOSTDOWN
> + inscode(d, ds, de, "WSAEHOSTDOWN", WSAEHOSTDOWN, "Host is down");
> +#endif
> +#ifdef WSAENETDOWN
> + inscode(d, ds, de, "WSAENETDOWN", WSAENETDOWN, "Network is down");
> +#endif
> +#ifdef WSAENOTSOCK
> + inscode(d, ds, de, "WSAENOTSOCK", WSAENOTSOCK, "Socket operation on non-socket");
> +#endif
> +#ifdef WSAEHOSTUNREACH
> + inscode(d, ds, de, "WSAEHOSTUNREACH", WSAEHOSTUNREACH, "No route to host");
> +#endif
> +#ifdef WSAELOOP
> + inscode(d, ds, de, "WSAELOOP", WSAELOOP, "Too many symbolic links encountered");
> +#endif
> +#ifdef WSAEMFILE
> + inscode(d, ds, de, "WSAEMFILE", WSAEMFILE, "Too many open files");
> +#endif
> +#ifdef WSAESTALE
> + inscode(d, ds, de, "WSAESTALE", WSAESTALE, "Stale NFS file handle");
> +#endif
> +#ifdef WSAVERNOTSUPPORTED
> + inscode(d, ds, de, "WSAVERNOTSUPPORTED", WSAVERNOTSUPPORTED, "Error WSAVERNOTSUPPORTED");
> +#endif
> +#ifdef WSAENETUNREACH
> + inscode(d, ds, de, "WSAENETUNREACH", WSAENETUNREACH, "Network is unreachable");
> +#endif
> +#ifdef WSAEPROCLIM
> + inscode(d, ds, de, "WSAEPROCLIM", WSAEPROCLIM, "Error WSAEPROCLIM");
> +#endif
> +#ifdef WSAEFAULT
> + inscode(d, ds, de, "WSAEFAULT", WSAEFAULT, "Bad address");
> +#endif
> +#ifdef WSANOTINITIALISED
> + inscode(d, ds, de, "WSANOTINITIALISED", WSANOTINITIALISED, "Error WSANOTINITIALISED");
> +#endif
> +#ifdef WSAEUSERS
> + inscode(d, ds, de, "WSAEUSERS", WSAEUSERS, "Too many users");
> +#endif
> +#ifdef WSAMAKEASYNCREPL
> + inscode(d, ds, de, "WSAMAKEASYNCREPL", WSAMAKEASYNCREPL, "Error WSAMAKEASYNCREPL");
> +#endif
> +#ifdef WSAENOPROTOOPT
> + inscode(d, ds, de, "WSAENOPROTOOPT", WSAENOPROTOOPT, "Protocol not available");
> +#endif
> +#ifdef WSAECONNABORTED
> + inscode(d, ds, de, "WSAECONNABORTED", WSAECONNABORTED, "Software caused connection abort");
> +#endif
> +#ifdef WSAENAMETOOLONG
> + inscode(d, ds, de, "WSAENAMETOOLONG", WSAENAMETOOLONG, "File name too long");
> +#endif
> +#ifdef WSAENOTEMPTY
> + inscode(d, ds, de, "WSAENOTEMPTY", WSAENOTEMPTY, "Directory not empty");
> +#endif
> +#ifdef WSAESHUTDOWN
> + inscode(d, ds, de, "WSAESHUTDOWN", WSAESHUTDOWN, "Cannot send after transport endpoint shutdown");
> +#endif
> +#ifdef WSAEAFNOSUPPORT
> + inscode(d, ds, de, "WSAEAFNOSUPPORT", WSAEAFNOSUPPORT, "Address family not supported by protocol");
> +#endif
> +#ifdef WSAETOOMANYREFS
> + inscode(d, ds, de, "WSAETOOMANYREFS", WSAETOOMANYREFS, "Too many references: cannot splice");
> +#endif
> +#ifdef WSAEACCES
> + inscode(d, ds, de, "WSAEACCES", WSAEACCES, "Permission denied");
> +#endif
> +#ifdef WSATR
> + inscode(d, ds, de, "WSATR", WSATR, "Error WSATR");
> +#endif
> +#ifdef WSABASEERR
> + inscode(d, ds, de, "WSABASEERR", WSABASEERR, "Error WSABASEERR");
> +#endif
> +#ifdef WSADESCRIPTIO
> + inscode(d, ds, de, "WSADESCRIPTIO", WSADESCRIPTIO, "Error WSADESCRIPTIO");
> +#endif
> +#ifdef WSAEMSGSIZE
> + inscode(d, ds, de, "WSAEMSGSIZE", WSAEMSGSIZE, "Message too long");
> +#endif
> +#ifdef WSAEBADF
> + inscode(d, ds, de, "WSAEBADF", WSAEBADF, "Bad file number");
> +#endif
> +#ifdef WSAECONNRESET
> + inscode(d, ds, de, "WSAECONNRESET", WSAECONNRESET, "Connection reset by peer");
> +#endif
> +#ifdef WSAGETSELECTERRO
> + inscode(d, ds, de, "WSAGETSELECTERRO", WSAGETSELECTERRO, "Error WSAGETSELECTERRO");
> +#endif
> +#ifdef WSAETIMEDOUT
> + inscode(d, ds, de, "WSAETIMEDOUT", WSAETIMEDOUT, "Connection timed out");
> +#endif
> +#ifdef WSAENOBUFS
> + inscode(d, ds, de, "WSAENOBUFS", WSAENOBUFS, "No buffer space available");
> +#endif
> +#ifdef WSAEDISCON
> + inscode(d, ds, de, "WSAEDISCON", WSAEDISCON, "Error WSAEDISCON");
> +#endif
> +#ifdef WSAEINTR
> + inscode(d, ds, de, "WSAEINTR", WSAEINTR, "Interrupted system call");
> +#endif
> +#ifdef WSAEPROTOTYPE
> + inscode(d, ds, de, "WSAEPROTOTYPE", WSAEPROTOTYPE, "Protocol wrong type for socket");
> +#endif
> +#ifdef WSAHOS
> + inscode(d, ds, de, "WSAHOS", WSAHOS, "Error WSAHOS");
> +#endif
> +#ifdef WSAEADDRINUSE
> + inscode(d, ds, de, "WSAEADDRINUSE", WSAEADDRINUSE, "Address already in use");
> +#endif
> +#ifdef WSAEADDRNOTAVAIL
> + inscode(d, ds, de, "WSAEADDRNOTAVAIL", WSAEADDRNOTAVAIL, "Cannot assign requested address");
> +#endif
> +#ifdef WSAEALREADY
> + inscode(d, ds, de, "WSAEALREADY", WSAEALREADY, "Operation already in progress");
> +#endif
> +#ifdef WSAEPROTONOSUPPORT
> + inscode(d, ds, de, "WSAEPROTONOSUPPORT", WSAEPROTONOSUPPORT, "Protocol not supported");
> +#endif
> +#ifdef WSASYSNOTREADY
> + inscode(d, ds, de, "WSASYSNOTREADY", WSASYSNOTREADY, "Error WSASYSNOTREADY");
> +#endif
> +#ifdef WSAEWOULDBLOCK
> + inscode(d, ds, de, "WSAEWOULDBLOCK", WSAEWOULDBLOCK, "Operation would block");
> +#endif
> +#ifdef WSAEPFNOSUPPORT
> + inscode(d, ds, de, "WSAEPFNOSUPPORT", WSAEPFNOSUPPORT, "Protocol family not supported");
> +#endif
> +#ifdef WSAEOPNOTSUPP
> + inscode(d, ds, de, "WSAEOPNOTSUPP", WSAEOPNOTSUPP, "Operation not supported on transport endpoint");
> +#endif
> +#ifdef WSAEISCONN
> + inscode(d, ds, de, "WSAEISCONN", WSAEISCONN, "Transport endpoint is already connected");
> +#endif
> +#ifdef WSAEDQUOT
> + inscode(d, ds, de, "WSAEDQUOT", WSAEDQUOT, "Quota exceeded");
> +#endif
> +#ifdef WSAENOTCONN
> + inscode(d, ds, de, "WSAENOTCONN", WSAENOTCONN, "Transport endpoint is not connected");
> +#endif
> +#ifdef WSAEREMOTE
> + inscode(d, ds, de, "WSAEREMOTE", WSAEREMOTE, "Object is remote");
> +#endif
> +#ifdef WSAEINVAL
> + inscode(d, ds, de, "WSAEINVAL", WSAEINVAL, "Invalid argument");
> +#endif
> +#ifdef WSAEINPROGRESS
> + inscode(d, ds, de, "WSAEINPROGRESS", WSAEINPROGRESS, "Operation now in progress");
> +#endif
> +#ifdef WSAGETSELECTEVEN
> + inscode(d, ds, de, "WSAGETSELECTEVEN", WSAGETSELECTEVEN, "Error WSAGETSELECTEVEN");
> +#endif
> +#ifdef WSAESOCKTNOSUPPORT
> + inscode(d, ds, de, "WSAESOCKTNOSUPPORT", WSAESOCKTNOSUPPORT, "Socket type not supported");
> +#endif
> +#ifdef WSAGETASYNCERRO
> + inscode(d, ds, de, "WSAGETASYNCERRO", WSAGETASYNCERRO, "Error WSAGETASYNCERRO");
> +#endif
> +#ifdef WSAMAKESELECTREPL
> + inscode(d, ds, de, "WSAMAKESELECTREPL", WSAMAKESELECTREPL, "Error WSAMAKESELECTREPL");
> +#endif
> +#ifdef WSAGETASYNCBUFLE
> + inscode(d, ds, de, "WSAGETASYNCBUFLE", WSAGETASYNCBUFLE, "Error WSAGETASYNCBUFLE");
> +#endif
> +#ifdef WSAEDESTADDRREQ
> + inscode(d, ds, de, "WSAEDESTADDRREQ", WSAEDESTADDRREQ, "Destination address required");
> +#endif
> +#ifdef WSAECONNREFUSED
> + inscode(d, ds, de, "WSAECONNREFUSED", WSAECONNREFUSED, "Connection refused");
> +#endif
> +#ifdef WSAENETRESET
> + inscode(d, ds, de, "WSAENETRESET", WSAENETRESET, "Network dropped connection because of reset");
> +#endif
> +#ifdef WSAN
> + inscode(d, ds, de, "WSAN", WSAN, "Error WSAN");
> +#endif
> +#ifdef ENOMEDIUM
> + inscode(d, ds, de, "ENOMEDIUM", ENOMEDIUM, "No medium found");
> +#endif
> +#ifdef EMEDIUMTYPE
> + inscode(d, ds, de, "EMEDIUMTYPE", EMEDIUMTYPE, "Wrong medium type");
> +#endif
> +#ifdef ECANCELED
> + inscode(d, ds, de, "ECANCELED", ECANCELED, "Operation Canceled");
> +#endif
> +#ifdef ENOKEY
> + inscode(d, ds, de, "ENOKEY", ENOKEY, "Required key not available");
> +#endif
> +#ifdef EKEYEXPIRED
> + inscode(d, ds, de, "EKEYEXPIRED", EKEYEXPIRED, "Key has expired");
> +#endif
> +#ifdef EKEYREVOKED
> + inscode(d, ds, de, "EKEYREVOKED", EKEYREVOKED, "Key has been revoked");
> +#endif
> +#ifdef EKEYREJECTED
> + inscode(d, ds, de, "EKEYREJECTED", EKEYREJECTED, "Key was rejected by service");
> +#endif
> +#ifdef EOWNERDEAD
> + inscode(d, ds, de, "EOWNERDEAD", EOWNERDEAD, "Owner died");
> +#endif
> +#ifdef ENOTRECOVERABLE
> + inscode(d, ds, de, "ENOTRECOVERABLE", ENOTRECOVERABLE, "State not recoverable");
> +#endif
> +#ifdef ERFKILL
> + inscode(d, ds, de, "ERFKILL", ERFKILL, "Operation not possible due to RF-kill");
> +#endif
> +
> +/* These symbols are added for EDK II support. */
> +#ifdef EMINERRORVAL
> + inscode(d, ds, de, "EMINERRORVAL", EMINERRORVAL, "Lowest valid error value");
> +#endif
> +#ifdef ENOTSUP
> + inscode(d, ds, de, "ENOTSUP", ENOTSUP, "Operation not supported");
> +#endif
> +#ifdef EBADRPC
> + inscode(d, ds, de, "EBADRPC", EBADRPC, "RPC struct is bad");
> +#endif
> +#ifdef ERPCMISMATCH
> + inscode(d, ds, de, "ERPCMISMATCH", ERPCMISMATCH, "RPC version wrong");
> +#endif
> +#ifdef EPROGUNAVAIL
> + inscode(d, ds, de, "EPROGUNAVAIL", EPROGUNAVAIL, "RPC prog. not avail");
> +#endif
> +#ifdef EPROGMISMATCH
> + inscode(d, ds, de, "EPROGMISMATCH", EPROGMISMATCH, "Program version wrong");
> +#endif
> +#ifdef EPROCUNAVAIL
> + inscode(d, ds, de, "EPROCUNAVAIL", EPROCUNAVAIL, "Bad procedure for program");
> +#endif
> +#ifdef EFTYPE
> + inscode(d, ds, de, "EFTYPE", EFTYPE, "Inappropriate file type or format");
> +#endif
> +#ifdef EAUTH
> + inscode(d, ds, de, "EAUTH", EAUTH, "Authentication error");
> +#endif
> +#ifdef ENEEDAUTH
> + inscode(d, ds, de, "ENEEDAUTH", ENEEDAUTH, "Need authenticator");
> +#endif
> +#ifdef ECANCELED
> + inscode(d, ds, de, "ECANCELED", ECANCELED, "Operation canceled");
> +#endif
> +#ifdef ENOATTR
> + inscode(d, ds, de, "ENOATTR", ENOATTR, "Attribute not found");
> +#endif
> +#ifdef EDOOFUS
> + inscode(d, ds, de, "EDOOFUS", EDOOFUS, "Programming Error");
> +#endif
> +#ifdef EBUFSIZE
> + inscode(d, ds, de, "EBUFSIZE", EBUFSIZE, "Buffer too small to hold result");
> +#endif
> +#ifdef EMAXERRORVAL
> + inscode(d, ds, de, "EMAXERRORVAL", EMAXERRORVAL, "One more than the highest defined error value");
> +#endif
> +
> + Py_DECREF(de);
> + return m;
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
> new file mode 100644
> index 00000000..5b81995f
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
> @@ -0,0 +1,1414 @@
> +#include "Python.h"
> +#include "pythread.h"
> +#include <signal.h>
> +#include <object.h>
> +#include <frameobject.h>
> +#include <signal.h>
> +#if defined(HAVE_PTHREAD_SIGMASK) && !defined(HAVE_BROKEN_PTHREAD_SIGMASK)
> +# include <pthread.h>
> +#endif
> +#ifdef MS_WINDOWS
> +# include <windows.h>
> +#endif
> +#ifdef HAVE_SYS_RESOURCE_H
> +# include <sys/resource.h>
> +#endif
> +
> +/* Allocate at most 100 MB of stack when deliberately raising a stack overflow */
> +#define STACK_OVERFLOW_MAX_SIZE (100*1024*1024)
> +
> +#ifdef WITH_THREAD
> +# define FAULTHANDLER_LATER
> +#endif
> +
> +#ifndef MS_WINDOWS
> + /* register() is useless on Windows, because only SIGSEGV, SIGABRT and
> + SIGILL can be handled by the process, and these signals can only be used
> + with enable(), not using register() */
> +# define FAULTHANDLER_USER
> +#endif
> +
> +#define PUTS(fd, str) _Py_write_noraise(fd, str, strlen(str))
> +
> +_Py_IDENTIFIER(enable);
> +_Py_IDENTIFIER(fileno);
> +_Py_IDENTIFIER(flush);
> +_Py_IDENTIFIER(stderr);
> +
> +#ifdef HAVE_SIGACTION
> +typedef struct sigaction _Py_sighandler_t;
> +#else
> +typedef PyOS_sighandler_t _Py_sighandler_t;
> +#endif
> +
> +typedef struct {
> + int signum;
> + int enabled;
> + const char* name;
> + _Py_sighandler_t previous;
> + int all_threads;
> +} fault_handler_t;
> +
> +static struct {
> + int enabled;
> + PyObject *file;
> + int fd;
> + int all_threads;
> + PyInterpreterState *interp;
> +#ifdef MS_WINDOWS
> + void *exc_handler;
> +#endif
> +} fatal_error = {0, NULL, -1, 0};
> +
> +#ifdef FAULTHANDLER_LATER
> +static struct {
> + PyObject *file;
> + int fd;
> + PY_TIMEOUT_T timeout_us; /* timeout in microseconds */
> + int repeat;
> + PyInterpreterState *interp;
> + int exit;
> + char *header;
> + size_t header_len;
> + /* The main thread always holds this lock. It is only released when
> + faulthandler_thread() is interrupted before this thread exits, or at
> + Python exit. */
> + PyThread_type_lock cancel_event;
> + /* released by child thread when joined */
> + PyThread_type_lock running;
> +} thread;
> +#endif
> +
> +#ifdef FAULTHANDLER_USER
> +typedef struct {
> + int enabled;
> + PyObject *file;
> + int fd;
> + int all_threads;
> + int chain;
> + _Py_sighandler_t previous;
> + PyInterpreterState *interp;
> +} user_signal_t;
> +
> +static user_signal_t *user_signals;
> +
> +/* the following macros come from Python: Modules/signalmodule.c */
> +#ifndef NSIG
> +# if defined(_NSIG)
> +# define NSIG _NSIG /* For BSD/SysV */
> +# elif defined(_SIGMAX)
> +# define NSIG (_SIGMAX + 1) /* For QNX */
> +# elif defined(SIGMAX)
> +# define NSIG (SIGMAX + 1) /* For djgpp */
> +# else
> +# define NSIG 64 /* Use a reasonable default value */
> +# endif
> +#endif
> +
> +static void faulthandler_user(int signum);
> +#endif /* FAULTHANDLER_USER */
> +
> +
> +static fault_handler_t faulthandler_handlers[] = {
> +#ifdef SIGBUS
> + {SIGBUS, 0, "Bus error", },
> +#endif
> +#ifdef SIGILL
> + {SIGILL, 0, "Illegal instruction", },
> +#endif
> + {SIGFPE, 0, "Floating point exception", },
> + {SIGABRT, 0, "Aborted", },
> + /* define SIGSEGV at the end to make it the default choice if searching the
> + handler fails in faulthandler_fatal_error() */
> + {SIGSEGV, 0, "Segmentation fault", }
> +};
> +static const size_t faulthandler_nsignals = \
> + Py_ARRAY_LENGTH(faulthandler_handlers);
> +
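The table above enumerates the synchronous fatal signals the module can hook, with SIGSEGV placed last so it becomes the fallback when the lookup in faulthandler_fatal_error() finds no match. From Python this machinery is driven through the usual faulthandler API; a minimal sketch, assuming the target C library provides enough signal support for the handlers to be installed:

    import faulthandler
    import sys

    faulthandler.enable(file=sys.stderr, all_threads=True)
    print(faulthandler.is_enabled())    # True once handlers are installed
    faulthandler.dump_traceback()       # dump current tracebacks to stderr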
> +#ifdef HAVE_SIGALTSTACK
> +static stack_t stack;
> +static stack_t old_stack;
> +#endif
> +
> +
> +/* Get the file descriptor of a file by calling its fileno() method and then
> + call its flush() method.
> +
> + If file is NULL or Py_None, use sys.stderr as the new file.
> + If file is an integer, it will be treated as a file descriptor.
> +
> + On success, return the file descriptor and write the new file into *file_ptr.
> + On error, return -1. */
> +
> +static int
> +faulthandler_get_fileno(PyObject **file_ptr)
> +{
> + PyObject *result;
> + long fd_long;
> + int fd;
> + PyObject *file = *file_ptr;
> +
> + if (file == NULL || file == Py_None) {
> + file = _PySys_GetObjectId(&PyId_stderr);
> + if (file == NULL) {
> + PyErr_SetString(PyExc_RuntimeError, "unable to get sys.stderr");
> + return -1;
> + }
> + if (file == Py_None) {
> + PyErr_SetString(PyExc_RuntimeError, "sys.stderr is None");
> + return -1;
> + }
> + }
> + else if (PyLong_Check(file)) {
> + fd = _PyLong_AsInt(file);
> + if (fd == -1 && PyErr_Occurred())
> + return -1;
> + if (fd < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "file is not a valid file descripter");
> + return -1;
> + }
> + *file_ptr = NULL;
> + return fd;
> + }
> +
> + result = _PyObject_CallMethodId(file, &PyId_fileno, NULL);
> + if (result == NULL)
> + return -1;
> +
> + fd = -1;
> + if (PyLong_Check(result)) {
> + fd_long = PyLong_AsLong(result);
> + if (0 <= fd_long && fd_long < INT_MAX)
> + fd = (int)fd_long;
> + }
> + Py_DECREF(result);
> +
> + if (fd == -1) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "file.fileno() is not a valid file descriptor");
> + return -1;
> + }
> +
> + result = _PyObject_CallMethodId(file, &PyId_flush, NULL);
> + if (result != NULL)
> + Py_DECREF(result);
> + else {
> + /* ignore flush() error */
> + PyErr_Clear();
> + }
> + *file_ptr = file;
> + return fd;
> +}
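> +
> +#if 0
> +/* Illustrative sketch, not compiled: the three forms of "file" accepted by
> +   faulthandler_get_fileno() above.  The variable names are examples only,
> +   not part of this module. */
> +static void
> +example_get_fileno_forms(void)
> +{
> +    PyObject *file;
> +    PyObject *as_int;
> +    int fd;
> +
> +    /* NULL (or Py_None) selects sys.stderr; its fd is returned and *file_ptr
> +       is set to the sys.stderr object */
> +    file = NULL;
> +    fd = faulthandler_get_fileno(&file);
> +
> +    /* an integer is used directly as the file descriptor; *file_ptr is reset
> +       to NULL because there is no file object to keep alive */
> +    as_int = PyLong_FromLong(2);
> +    file = as_int;
> +    fd = faulthandler_get_fileno(&file);    /* returns 2 */
> +    Py_DECREF(as_int);
> +
> +    /* any other object must provide fileno(); its flush() is also called */
> +    (void)fd;
> +}
> +#endif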
> +
> +/* Get the state of the current thread: only call this function if the current
> + thread holds the GIL. Raise an exception on error. */
> +static PyThreadState*
> +get_thread_state(void)
> +{
> + PyThreadState *tstate = _PyThreadState_UncheckedGet();
> + if (tstate == NULL) {
> + /* just in case but very unlikely... */
> + PyErr_SetString(PyExc_RuntimeError,
> + "unable to get the current thread state");
> + return NULL;
> + }
> + return tstate;
> +}
> +
> +static void
> +faulthandler_dump_traceback(int fd, int all_threads,
> + PyInterpreterState *interp)
> +{
> + static volatile int reentrant = 0;
> + PyThreadState *tstate;
> +
> + if (reentrant)
> + return;
> +
> + reentrant = 1;
> +
> +#ifdef WITH_THREAD
> + /* SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL are synchronous signals and
> + are thus delivered to the thread that caused the fault. Get the Python
> + thread state of the current thread.
> +
> + PyThreadState_Get() doesn't give the state of the thread that caused the
> + fault if the thread released the GIL, and so this function cannot be
> + used. Read the thread local storage (TLS) instead: call
> + PyGILState_GetThisThreadState(). */
> + tstate = PyGILState_GetThisThreadState();
> +#else
> + tstate = _PyThreadState_UncheckedGet();
> +#endif
> +
> + if (all_threads) {
> + (void)_Py_DumpTracebackThreads(fd, NULL, tstate);
> + }
> + else {
> + if (tstate != NULL)
> + _Py_DumpTraceback(fd, tstate);
> + }
> +
> + reentrant = 0;
> +}
> +
> +static PyObject*
> +faulthandler_dump_traceback_py(PyObject *self,
> + PyObject *args, PyObject *kwargs)
> +{
> + static char *kwlist[] = {"file", "all_threads", NULL};
> + PyObject *file = NULL;
> + int all_threads = 1;
> + PyThreadState *tstate;
> + const char *errmsg;
> + int fd;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs,
> + "|Oi:dump_traceback", kwlist,
> + &file, &all_threads))
> + return NULL;
> +
> + fd = faulthandler_get_fileno(&file);
> + if (fd < 0)
> + return NULL;
> +
> + tstate = get_thread_state();
> + if (tstate == NULL)
> + return NULL;
> +
> + if (all_threads) {
> + errmsg = _Py_DumpTracebackThreads(fd, NULL, tstate);
> + if (errmsg != NULL) {
> + PyErr_SetString(PyExc_RuntimeError, errmsg);
> + return NULL;
> + }
> + }
> + else {
> + _Py_DumpTraceback(fd, tstate);
> + }
> +
> + if (PyErr_CheckSignals())
> + return NULL;
> +
> + Py_RETURN_NONE;
> +}
> +
> +static void
> +faulthandler_disable_fatal_handler(fault_handler_t *handler)
> +{
> + if (!handler->enabled)
> + return;
> + handler->enabled = 0;
> +#ifdef HAVE_SIGACTION
> + (void)sigaction(handler->signum, &handler->previous, NULL);
> +#else
> + (void)signal(handler->signum, handler->previous);
> +#endif
> +}
> +
> +
> +/* Handler for SIGSEGV, SIGFPE, SIGABRT, SIGBUS and SIGILL signals.
> +
> + Display the current Python traceback, restore the previous handler and call
> + the previous handler.
> +
> + On Windows, don't explicitly call the previous handler, because the Windows
> + signal handler would not be called (for an unknown reason). The execution of
> + the program continues at faulthandler_fatal_error() exit, but the same
> + instruction will raise the same fault (signal), and so the previous handler
> + will be called.
> +
> + This function is signal-safe and should only call signal-safe functions. */
> +
> +static void
> +faulthandler_fatal_error(int signum)
> +{
> + const int fd = fatal_error.fd;
> + size_t i;
> + fault_handler_t *handler = NULL;
> + int save_errno = errno;
> +
> + if (!fatal_error.enabled)
> + return;
> +
> + for (i=0; i < faulthandler_nsignals; i++) {
> + handler = &faulthandler_handlers[i];
> + if (handler->signum == signum)
> + break;
> + }
> + if (handler == NULL) {
> + /* faulthandler_nsignals == 0 (unlikely) */
> + return;
> + }
> +
> + /* restore the previous handler */
> + faulthandler_disable_fatal_handler(handler);
> +
> + PUTS(fd, "Fatal Python error: ");
> + PUTS(fd, handler->name);
> + PUTS(fd, "\n\n");
> +
> + faulthandler_dump_traceback(fd, fatal_error.all_threads,
> + fatal_error.interp);
> +
> + errno = save_errno;
> +#ifdef MS_WINDOWS
> + if (signum == SIGSEGV) {
> + /* don't explicitly call the previous handler for SIGSEGV in this signal
> + handler, because the Windows signal handler would not be called */
> + return;
> + }
> +#endif
> + /* call the previous signal handler: it is called immediately if we use
> + sigaction() thanks to SA_NODEFER flag, otherwise it is deferred */
> + raise(signum);
> +}
> +
> +#ifdef MS_WINDOWS
> +static int
> +faulthandler_ignore_exception(DWORD code)
> +{
> + /* bpo-30557: ignore exceptions which are not errors */
> + if (!(code & 0x80000000)) {
> + return 1;
> + }
> + /* bpo-31701: ignore MSC and COM exceptions
> + E0000000 + code */
> + if (code == 0xE06D7363 /* MSC exception ("Emsc") */
> + || code == 0xE0434352 /* COM Callable Runtime exception ("ECCR") */) {
> + return 1;
> + }
> + /* Interesting exception: log it with the Python traceback */
> + return 0;
> +}
> +
> +static LONG WINAPI
> +faulthandler_exc_handler(struct _EXCEPTION_POINTERS *exc_info)
> +{
> + const int fd = fatal_error.fd;
> + DWORD code = exc_info->ExceptionRecord->ExceptionCode;
> + DWORD flags = exc_info->ExceptionRecord->ExceptionFlags;
> +
> + if (faulthandler_ignore_exception(code)) {
> + /* ignore the exception: call the next exception handler */
> + return EXCEPTION_CONTINUE_SEARCH;
> + }
> +
> + PUTS(fd, "Windows fatal exception: ");
> + switch (code)
> + {
> + /* only format most common errors */
> + case EXCEPTION_ACCESS_VIOLATION: PUTS(fd, "access violation"); break;
> + case EXCEPTION_FLT_DIVIDE_BY_ZERO: PUTS(fd, "float divide by zero"); break;
> + case EXCEPTION_FLT_OVERFLOW: PUTS(fd, "float overflow"); break;
> + case EXCEPTION_INT_DIVIDE_BY_ZERO: PUTS(fd, "int divide by zero"); break;
> + case EXCEPTION_INT_OVERFLOW: PUTS(fd, "integer overflow"); break;
> + case EXCEPTION_IN_PAGE_ERROR: PUTS(fd, "page error"); break;
> + case EXCEPTION_STACK_OVERFLOW: PUTS(fd, "stack overflow"); break;
> + default:
> + PUTS(fd, "code 0x");
> + _Py_DumpHexadecimal(fd, code, 8);
> + }
> + PUTS(fd, "\n\n");
> +
> + if (code == EXCEPTION_ACCESS_VIOLATION) {
> + /* disable signal handler for SIGSEGV */
> + size_t i;
> + for (i=0; i < faulthandler_nsignals; i++) {
> + fault_handler_t *handler = &faulthandler_handlers[i];
> + if (handler->signum == SIGSEGV) {
> + faulthandler_disable_fatal_handler(handler);
> + break;
> + }
> + }
> + }
> +
> + faulthandler_dump_traceback(fd, fatal_error.all_threads,
> + fatal_error.interp);
> +
> + /* call the next exception handler */
> + return EXCEPTION_CONTINUE_SEARCH;
> +}
> +#endif
> +
> +/* Install the handler for fatal signals, faulthandler_fatal_error(). */
> +
> +static int
> +faulthandler_enable(void)
> +{
> + size_t i;
> +
> + if (fatal_error.enabled) {
> + return 0;
> + }
> + fatal_error.enabled = 1;
> +
> + for (i=0; i < faulthandler_nsignals; i++) {
> + fault_handler_t *handler;
> +#ifdef HAVE_SIGACTION
> + struct sigaction action;
> +#endif
> + int err;
> +
> + handler = &faulthandler_handlers[i];
> + assert(!handler->enabled);
> +#ifdef HAVE_SIGACTION
> + action.sa_handler = faulthandler_fatal_error;
> + sigemptyset(&action.sa_mask);
> + /* Do not prevent the signal from being received from within
> + its own signal handler */
> + action.sa_flags = SA_NODEFER;
> +#ifdef HAVE_SIGALTSTACK
> + if (stack.ss_sp != NULL) {
> + /* Call the signal handler on an alternate signal stack
> + provided by sigaltstack() */
> + action.sa_flags |= SA_ONSTACK;
> + }
> +#endif
> + err = sigaction(handler->signum, &action, &handler->previous);
> +#else
> + handler->previous = signal(handler->signum,
> + faulthandler_fatal_error);
> + err = (handler->previous == SIG_ERR);
> +#endif
> + if (err) {
> + PyErr_SetFromErrno(PyExc_RuntimeError);
> + return -1;
> + }
> +
> + handler->enabled = 1;
> + }
> +
> +#ifdef MS_WINDOWS
> + assert(fatal_error.exc_handler == NULL);
> + fatal_error.exc_handler = AddVectoredExceptionHandler(1, faulthandler_exc_handler);
> +#endif
> + return 0;
> +}
> +
> +static PyObject*
> +faulthandler_py_enable(PyObject *self, PyObject *args, PyObject *kwargs)
> +{
> + static char *kwlist[] = {"file", "all_threads", NULL};
> + PyObject *file = NULL;
> + int all_threads = 1;
> + int fd;
> + PyThreadState *tstate;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs,
> + "|Oi:enable", kwlist, &file, &all_threads))
> + return NULL;
> +
> + fd = faulthandler_get_fileno(&file);
> + if (fd < 0)
> + return NULL;
> +
> + tstate = get_thread_state();
> + if (tstate == NULL)
> + return NULL;
> +
> + Py_XINCREF(file);
> + Py_XSETREF(fatal_error.file, file);
> + fatal_error.fd = fd;
> + fatal_error.all_threads = all_threads;
> + fatal_error.interp = tstate->interp;
> +
> + if (faulthandler_enable() < 0) {
> + return NULL;
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +static void
> +faulthandler_disable(void)
> +{
> + unsigned int i;
> + fault_handler_t *handler;
> +
> + if (fatal_error.enabled) {
> + fatal_error.enabled = 0;
> + for (i=0; i < faulthandler_nsignals; i++) {
> + handler = &faulthandler_handlers[i];
> + faulthandler_disable_fatal_handler(handler);
> + }
> + }
> +#ifdef MS_WINDOWS
> + if (fatal_error.exc_handler != NULL) {
> + RemoveVectoredExceptionHandler(fatal_error.exc_handler);
> + fatal_error.exc_handler = NULL;
> + }
> +#endif
> + Py_CLEAR(fatal_error.file);
> +}
> +
> +static PyObject*
> +faulthandler_disable_py(PyObject *self)
> +{
> + if (!fatal_error.enabled) {
> + Py_INCREF(Py_False);
> + return Py_False;
> + }
> + faulthandler_disable();
> + Py_INCREF(Py_True);
> + return Py_True;
> +}
> +
> +static PyObject*
> +faulthandler_is_enabled(PyObject *self)
> +{
> + return PyBool_FromLong(fatal_error.enabled);
> +}
> +
> +#ifdef FAULTHANDLER_LATER
> +
> +static void
> +faulthandler_thread(void *unused)
> +{
> + PyLockStatus st;
> + const char* errmsg;
> + int ok;
> +#if defined(HAVE_PTHREAD_SIGMASK) && !defined(HAVE_BROKEN_PTHREAD_SIGMASK)
> + sigset_t set;
> +
> + /* we don't want to receive any signal */
> + sigfillset(&set);
> + pthread_sigmask(SIG_SETMASK, &set, NULL);
> +#endif
> +
> + do {
> + st = PyThread_acquire_lock_timed(thread.cancel_event,
> + thread.timeout_us, 0);
> + if (st == PY_LOCK_ACQUIRED) {
> + PyThread_release_lock(thread.cancel_event);
> + break;
> + }
> + /* Timeout => dump traceback */
> + assert(st == PY_LOCK_FAILURE);
> +
> + _Py_write_noraise(thread.fd, thread.header, (int)thread.header_len);
> +
> + errmsg = _Py_DumpTracebackThreads(thread.fd, thread.interp, NULL);
> + ok = (errmsg == NULL);
> +
> + if (thread.exit)
> + _exit(1);
> + } while (ok && thread.repeat);
> +
> + /* The only way out */
> + PyThread_release_lock(thread.running);
> +}
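> +
> +#if 0
> +/* Illustrative sketch, not compiled: the "lock as a cancellable timer"
> +   pattern used by faulthandler_thread() above.  The watchdog sleeps by
> +   trying to acquire cancel_event with a timeout; the main thread cancels
> +   the wait simply by releasing that lock. */
> +static void
> +example_cancellable_wait(PyThread_type_lock cancel_event,
> +                         PY_TIMEOUT_T timeout_us)
> +{
> +    PyLockStatus st = PyThread_acquire_lock_timed(cancel_event, timeout_us, 0);
> +    if (st == PY_LOCK_ACQUIRED) {
> +        /* cancelled: hand the lock back and stop */
> +        PyThread_release_lock(cancel_event);
> +    }
> +    else {
> +        /* PY_LOCK_FAILURE: the timeout elapsed, do the periodic work
> +           (here: dump the tracebacks) */
> +    }
> +}
> +#endif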
> +
> +static void
> +cancel_dump_traceback_later(void)
> +{
> + /* Notify cancellation */
> + PyThread_release_lock(thread.cancel_event);
> +
> + /* Wait for thread to join */
> + PyThread_acquire_lock(thread.running, 1);
> + PyThread_release_lock(thread.running);
> +
> + /* The main thread should always hold the cancel_event lock */
> + PyThread_acquire_lock(thread.cancel_event, 1);
> +
> + Py_CLEAR(thread.file);
> + if (thread.header) {
> + PyMem_Free(thread.header);
> + thread.header = NULL;
> + }
> +}
> +
> +static char*
> +format_timeout(double timeout)
> +{
> + unsigned long us, sec, min, hour;
> + double intpart, fracpart;
> + char buffer[100];
> +
> + fracpart = modf(timeout, &intpart);
> + sec = (unsigned long)intpart;
> + us = (unsigned long)(fracpart * 1e6);
> + min = sec / 60;
> + sec %= 60;
> + hour = min / 60;
> + min %= 60;
> +
> + if (us != 0)
> + PyOS_snprintf(buffer, sizeof(buffer),
> + "Timeout (%lu:%02lu:%02lu.%06lu)!\n",
> + hour, min, sec, us);
> + else
> + PyOS_snprintf(buffer, sizeof(buffer),
> + "Timeout (%lu:%02lu:%02lu)!\n",
> + hour, min, sec);
> +
> + return _PyMem_Strdup(buffer);
> +}
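> +
> +#if 0
> +/* Illustrative sketch, not compiled: what format_timeout() produces for two
> +   representative timeout values. */
> +static void
> +example_format_timeout(void)
> +{
> +    char *s;
> +
> +    s = format_timeout(3661.5);   /* "Timeout (1:01:01.500000)!\n" */
> +    PyMem_Free(s);
> +
> +    s = format_timeout(5.0);      /* "Timeout (0:00:05)!\n" -- no microseconds */
> +    PyMem_Free(s);
> +}
> +#endif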
> +
> +static PyObject*
> +faulthandler_dump_traceback_later(PyObject *self,
> + PyObject *args, PyObject *kwargs)
> +{
> + static char *kwlist[] = {"timeout", "repeat", "file", "exit", NULL};
> + double timeout;
> + PY_TIMEOUT_T timeout_us;
> + int repeat = 0;
> + PyObject *file = NULL;
> + int fd;
> + int exit = 0;
> + PyThreadState *tstate;
> + char *header;
> + size_t header_len;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs,
> + "d|iOi:dump_traceback_later", kwlist,
> + &timeout, &repeat, &file, &exit))
> + return NULL;
> + if ((timeout * 1e6) >= (double) PY_TIMEOUT_MAX) {
> + PyErr_SetString(PyExc_OverflowError, "timeout value is too large");
> + return NULL;
> + }
> + timeout_us = (PY_TIMEOUT_T)(timeout * 1e6);
> + if (timeout_us <= 0) {
> + PyErr_SetString(PyExc_ValueError, "timeout must be greater than 0");
> + return NULL;
> + }
> +
> + tstate = get_thread_state();
> + if (tstate == NULL)
> + return NULL;
> +
> + fd = faulthandler_get_fileno(&file);
> + if (fd < 0)
> + return NULL;
> +
> + /* format the timeout */
> + header = format_timeout(timeout);
> + if (header == NULL)
> + return PyErr_NoMemory();
> + header_len = strlen(header);
> +
> + /* Cancel previous thread, if running */
> + cancel_dump_traceback_later();
> +
> + Py_XINCREF(file);
> + Py_XSETREF(thread.file, file);
> + thread.fd = fd;
> + thread.timeout_us = timeout_us;
> + thread.repeat = repeat;
> + thread.interp = tstate->interp;
> + thread.exit = exit;
> + thread.header = header;
> + thread.header_len = header_len;
> +
> + /* Arm these locks to serve as events when released */
> + PyThread_acquire_lock(thread.running, 1);
> +
> + if (PyThread_start_new_thread(faulthandler_thread, NULL) == -1) {
> + PyThread_release_lock(thread.running);
> + Py_CLEAR(thread.file);
> + PyMem_Free(header);
> + thread.header = NULL;
> + PyErr_SetString(PyExc_RuntimeError,
> + "unable to start watchdog thread");
> + return NULL;
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +static PyObject*
> +faulthandler_cancel_dump_traceback_later_py(PyObject *self)
> +{
> + cancel_dump_traceback_later();
> + Py_RETURN_NONE;
> +}
> +#endif /* FAULTHANDLER_LATER */
> +
> +#ifdef FAULTHANDLER_USER
> +static int
> +faulthandler_register(int signum, int chain, _Py_sighandler_t *p_previous)
> +{
> +#ifdef HAVE_SIGACTION
> + struct sigaction action;
> + action.sa_handler = faulthandler_user;
> + sigemptyset(&action.sa_mask);
> +    /* if the signal is received while the kernel is executing a system
> +       call, try to restart the system call instead of interrupting it and
> +       returning EINTR. */
> + action.sa_flags = SA_RESTART;
> + if (chain) {
> + /* do not prevent the signal from being received from within its
> + own signal handler */
> + action.sa_flags = SA_NODEFER;
> + }
> +#ifdef HAVE_SIGALTSTACK
> + if (stack.ss_sp != NULL) {
> + /* Call the signal handler on an alternate signal stack
> + provided by sigaltstack() */
> + action.sa_flags |= SA_ONSTACK;
> + }
> +#endif
> + return sigaction(signum, &action, p_previous);
> +#else
> + _Py_sighandler_t previous;
> + previous = signal(signum, faulthandler_user);
> + if (p_previous != NULL)
> + *p_previous = previous;
> + return (previous == SIG_ERR);
> +#endif
> +}
> +
> +/* Handler of user signals (e.g. SIGUSR1).
> +
> + Dump the traceback of the current thread, or of all threads if
> + thread.all_threads is true.
> +
> + This function is signal safe and should only call signal safe functions. */
> +
> +static void
> +faulthandler_user(int signum)
> +{
> + user_signal_t *user;
> + int save_errno = errno;
> +
> + user = &user_signals[signum];
> + if (!user->enabled)
> + return;
> +
> + faulthandler_dump_traceback(user->fd, user->all_threads, user->interp);
> +
> +#ifdef HAVE_SIGACTION
> + if (user->chain) {
> + (void)sigaction(signum, &user->previous, NULL);
> + errno = save_errno;
> +
> + /* call the previous signal handler */
> + raise(signum);
> +
> + save_errno = errno;
> + (void)faulthandler_register(signum, user->chain, NULL);
> + errno = save_errno;
> + }
> +#else
> + if (user->chain) {
> + errno = save_errno;
> + /* call the previous signal handler */
> + user->previous(signum);
> + }
> +#endif
> +}
> +
> +static int
> +check_signum(int signum)
> +{
> + unsigned int i;
> +
> + for (i=0; i < faulthandler_nsignals; i++) {
> + if (faulthandler_handlers[i].signum == signum) {
> + PyErr_Format(PyExc_RuntimeError,
> + "signal %i cannot be registered, "
> + "use enable() instead",
> + signum);
> + return 0;
> + }
> + }
> + if (signum < 1 || NSIG <= signum) {
> + PyErr_SetString(PyExc_ValueError, "signal number out of range");
> + return 0;
> + }
> + return 1;
> +}
> +
> +static PyObject*
> +faulthandler_register_py(PyObject *self,
> + PyObject *args, PyObject *kwargs)
> +{
> + static char *kwlist[] = {"signum", "file", "all_threads", "chain", NULL};
> + int signum;
> + PyObject *file = NULL;
> + int all_threads = 1;
> + int chain = 0;
> + int fd;
> + user_signal_t *user;
> + _Py_sighandler_t previous;
> + PyThreadState *tstate;
> + int err;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs,
> + "i|Oii:register", kwlist,
> + &signum, &file, &all_threads, &chain))
> + return NULL;
> +
> + if (!check_signum(signum))
> + return NULL;
> +
> + tstate = get_thread_state();
> + if (tstate == NULL)
> + return NULL;
> +
> + fd = faulthandler_get_fileno(&file);
> + if (fd < 0)
> + return NULL;
> +
> + if (user_signals == NULL) {
> + user_signals = PyMem_Malloc(NSIG * sizeof(user_signal_t));
> + if (user_signals == NULL)
> + return PyErr_NoMemory();
> + memset(user_signals, 0, NSIG * sizeof(user_signal_t));
> + }
> + user = &user_signals[signum];
> +
> + if (!user->enabled) {
> + err = faulthandler_register(signum, chain, &previous);
> + if (err) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + user->previous = previous;
> + }
> +
> + Py_XINCREF(file);
> + Py_XSETREF(user->file, file);
> + user->fd = fd;
> + user->all_threads = all_threads;
> + user->chain = chain;
> + user->interp = tstate->interp;
> + user->enabled = 1;
> +
> + Py_RETURN_NONE;
> +}
> +
> +static int
> +faulthandler_unregister(user_signal_t *user, int signum)
> +{
> + if (!user->enabled)
> + return 0;
> + user->enabled = 0;
> +#ifdef HAVE_SIGACTION
> + (void)sigaction(signum, &user->previous, NULL);
> +#else
> + (void)signal(signum, user->previous);
> +#endif
> + Py_CLEAR(user->file);
> + user->fd = -1;
> + return 1;
> +}
> +
> +static PyObject*
> +faulthandler_unregister_py(PyObject *self, PyObject *args)
> +{
> + int signum;
> + user_signal_t *user;
> + int change;
> +
> + if (!PyArg_ParseTuple(args, "i:unregister", &signum))
> + return NULL;
> +
> + if (!check_signum(signum))
> + return NULL;
> +
> + if (user_signals == NULL)
> + Py_RETURN_FALSE;
> +
> + user = &user_signals[signum];
> + change = faulthandler_unregister(user, signum);
> + return PyBool_FromLong(change);
> +}
> +#endif /* FAULTHANDLER_USER */
> +
> +
> +static void
> +faulthandler_suppress_crash_report(void)
> +{
> +#ifdef MS_WINDOWS
> + UINT mode;
> +
> + /* Configure Windows to not display the Windows Error Reporting dialog */
> + mode = SetErrorMode(SEM_NOGPFAULTERRORBOX);
> + SetErrorMode(mode | SEM_NOGPFAULTERRORBOX);
> +#endif
> +
> +#ifdef HAVE_SYS_RESOURCE_H
> + struct rlimit rl;
> +#ifndef UEFI_C_SOURCE
> + /* Disable creation of core dump */
> + if (getrlimit(RLIMIT_CORE, &rl) == 0) {
> + rl.rlim_cur = 0;
> + setrlimit(RLIMIT_CORE, &rl);
> + }
> +#endif
> +#endif
> +
> +#if defined(_MSC_VER) && !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
> + /* Visual Studio: configure abort() to not display an error message nor
> + open a popup asking to report the fault. */
> + _set_abort_behavior(0, _WRITE_ABORT_MSG | _CALL_REPORTFAULT);
> +#endif
> +}
> +
> +static PyObject *
> +faulthandler_read_null(PyObject *self, PyObject *args)
> +{
> + volatile int *x;
> + volatile int y;
> +
> + faulthandler_suppress_crash_report();
> + x = NULL;
> + y = *x;
> + return PyLong_FromLong(y);
> +
> +}
> +
> +static void
> +faulthandler_raise_sigsegv(void)
> +{
> + faulthandler_suppress_crash_report();
> +#if defined(MS_WINDOWS)
> + /* For SIGSEGV, faulthandler_fatal_error() restores the previous signal
> + handler and then gives back the execution flow to the program (without
> + explicitly calling the previous error handler). In a normal case, the
> + SIGSEGV was raised by the kernel because of a fault, and so if the
> + program retries to execute the same instruction, the fault will be
> + raised again.
> +
> + Here the fault is simulated by a fake SIGSEGV signal raised by the
> +       application. We have to raise SIGSEGV at least twice: once for
> + faulthandler_fatal_error(), and one more time for the previous signal
> + handler. */
> + while(1)
> + raise(SIGSEGV);
> +#else
> + raise(SIGSEGV);
> +#endif
> +}
> +
> +static PyObject *
> +faulthandler_sigsegv(PyObject *self, PyObject *args)
> +{
> + int release_gil = 0;
> + if (!PyArg_ParseTuple(args, "|i:_sigsegv", &release_gil))
> + return NULL;
> +
> + if (release_gil) {
> + Py_BEGIN_ALLOW_THREADS
> + faulthandler_raise_sigsegv();
> + Py_END_ALLOW_THREADS
> + } else {
> + faulthandler_raise_sigsegv();
> + }
> + Py_RETURN_NONE;
> +}
> +
> +#ifdef WITH_THREAD
> +static void
> +faulthandler_fatal_error_thread(void *plock)
> +{
> + PyThread_type_lock *lock = (PyThread_type_lock *)plock;
> +
> + Py_FatalError("in new thread");
> +
> + /* notify the caller that we are done */
> + PyThread_release_lock(lock);
> +}
> +
> +static PyObject *
> +faulthandler_fatal_error_c_thread(PyObject *self, PyObject *args)
> +{
> + long thread;
> + PyThread_type_lock lock;
> +
> + faulthandler_suppress_crash_report();
> +
> + lock = PyThread_allocate_lock();
> + if (lock == NULL)
> + return PyErr_NoMemory();
> +
> + PyThread_acquire_lock(lock, WAIT_LOCK);
> +
> + thread = PyThread_start_new_thread(faulthandler_fatal_error_thread, lock);
> + if (thread == -1) {
> + PyThread_free_lock(lock);
> + PyErr_SetString(PyExc_RuntimeError, "unable to start the thread");
> + return NULL;
> + }
> +
> +    /* wait until the thread completes: this never happens, since Py_FatalError()
> +       exits the process immediately. */
> + PyThread_acquire_lock(lock, WAIT_LOCK);
> + PyThread_release_lock(lock);
> + PyThread_free_lock(lock);
> +
> + Py_RETURN_NONE;
> +}
> +#endif
> +
> +static PyObject *
> +faulthandler_sigfpe(PyObject *self, PyObject *args)
> +{
> + /* Do an integer division by zero: raise a SIGFPE on Intel CPU, but not on
> + PowerPC. Use volatile to disable compile-time optimizations. */
> + volatile int x = 1, y = 0, z;
> + faulthandler_suppress_crash_report();
> + z = x / y;
> + /* If the division by zero didn't raise a SIGFPE (e.g. on PowerPC),
> + raise it manually. */
> + raise(SIGFPE);
> + /* This line is never reached, but we pretend to make something with z
> + to silence a compiler warning. */
> + return PyLong_FromLong(z);
> +}
> +
> +static PyObject *
> +faulthandler_sigabrt(PyObject *self, PyObject *args)
> +{
> + faulthandler_suppress_crash_report();
> + abort();
> + Py_RETURN_NONE;
> +}
> +
> +static PyObject *
> +faulthandler_fatal_error_py(PyObject *self, PyObject *args)
> +{
> + char *message;
> + int release_gil = 0;
> + if (!PyArg_ParseTuple(args, "y|i:fatal_error", &message, &release_gil))
> + return NULL;
> + faulthandler_suppress_crash_report();
> + if (release_gil) {
> + Py_BEGIN_ALLOW_THREADS
> + Py_FatalError(message);
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + Py_FatalError(message);
> + }
> + Py_RETURN_NONE;
> +}
> +
> +#if defined(HAVE_SIGALTSTACK) && defined(HAVE_SIGACTION)
> +#define FAULTHANDLER_STACK_OVERFLOW
> +
> +#ifdef __INTEL_COMPILER
> + /* Issue #23654: Turn off ICC's tail call optimization for the
> + * stack_overflow generator. ICC turns the recursive tail call into
> + * a loop. */
> +# pragma intel optimization_level 0
> +#endif
> +static
> +uintptr_t
> +stack_overflow(uintptr_t min_sp, uintptr_t max_sp, size_t *depth)
> +{
> + /* allocate 4096 bytes on the stack at each call */
> + unsigned char buffer[4096];
> + uintptr_t sp = (uintptr_t)&buffer;
> + *depth += 1;
> + if (sp < min_sp || max_sp < sp)
> + return sp;
> + buffer[0] = 1;
> + buffer[4095] = 0;
> + return stack_overflow(min_sp, max_sp, depth);
> +}
> +
> +static PyObject *
> +faulthandler_stack_overflow(PyObject *self)
> +{
> + size_t depth, size;
> + uintptr_t sp = (uintptr_t)&depth;
> + uintptr_t stop;
> +
> + faulthandler_suppress_crash_report();
> + depth = 0;
> + stop = stack_overflow(sp - STACK_OVERFLOW_MAX_SIZE,
> + sp + STACK_OVERFLOW_MAX_SIZE,
> + &depth);
> + if (sp < stop)
> + size = stop - sp;
> + else
> + size = sp - stop;
> + PyErr_Format(PyExc_RuntimeError,
> + "unable to raise a stack overflow (allocated %zu bytes "
> + "on the stack, %zu recursive calls)",
> + size, depth);
> + return NULL;
> +}
> +#endif /* defined(HAVE_SIGALTSTACK) && defined(HAVE_SIGACTION) */
> +
> +
> +static int
> +faulthandler_traverse(PyObject *module, visitproc visit, void *arg)
> +{
> +#ifdef FAULTHANDLER_USER
> + unsigned int signum;
> +#endif
> +
> +#ifdef FAULTHANDLER_LATER
> + Py_VISIT(thread.file);
> +#endif
> +#ifdef FAULTHANDLER_USER
> + if (user_signals != NULL) {
> + for (signum=0; signum < NSIG; signum++)
> + Py_VISIT(user_signals[signum].file);
> + }
> +#endif
> + Py_VISIT(fatal_error.file);
> + return 0;
> +}
> +
> +#ifdef MS_WINDOWS
> +static PyObject *
> +faulthandler_raise_exception(PyObject *self, PyObject *args)
> +{
> + unsigned int code, flags = 0;
> + if (!PyArg_ParseTuple(args, "I|I:_raise_exception", &code, &flags))
> + return NULL;
> + faulthandler_suppress_crash_report();
> + RaiseException(code, flags, 0, NULL);
> + Py_RETURN_NONE;
> +}
> +#endif
> +
> +PyDoc_STRVAR(module_doc,
> +"faulthandler module.");
> +
> +static PyMethodDef module_methods[] = {
> + {"enable",
> + (PyCFunction)faulthandler_py_enable, METH_VARARGS|METH_KEYWORDS,
> + PyDoc_STR("enable(file=sys.stderr, all_threads=True): "
> + "enable the fault handler")},
> + {"disable", (PyCFunction)faulthandler_disable_py, METH_NOARGS,
> + PyDoc_STR("disable(): disable the fault handler")},
> + {"is_enabled", (PyCFunction)faulthandler_is_enabled, METH_NOARGS,
> + PyDoc_STR("is_enabled()->bool: check if the handler is enabled")},
> + {"dump_traceback",
> + (PyCFunction)faulthandler_dump_traceback_py, METH_VARARGS|METH_KEYWORDS,
> + PyDoc_STR("dump_traceback(file=sys.stderr, all_threads=True): "
> + "dump the traceback of the current thread, or of all threads "
> + "if all_threads is True, into file")},
> +#ifdef FAULTHANDLER_LATER
> + {"dump_traceback_later",
> + (PyCFunction)faulthandler_dump_traceback_later, METH_VARARGS|METH_KEYWORDS,
> + PyDoc_STR("dump_traceback_later(timeout, repeat=False, file=sys.stderrn, exit=False):\n"
> + "dump the traceback of all threads in timeout seconds,\n"
> + "or each timeout seconds if repeat is True. If exit is True, "
> + "call _exit(1) which is not safe.")},
> + {"cancel_dump_traceback_later",
> + (PyCFunction)faulthandler_cancel_dump_traceback_later_py, METH_NOARGS,
> + PyDoc_STR("cancel_dump_traceback_later():\ncancel the previous call "
> + "to dump_traceback_later().")},
> +#endif
> +
> +#ifdef FAULTHANDLER_USER
> + {"register",
> + (PyCFunction)faulthandler_register_py, METH_VARARGS|METH_KEYWORDS,
> + PyDoc_STR("register(signum, file=sys.stderr, all_threads=True, chain=False): "
> + "register a handler for the signal 'signum': dump the "
> + "traceback of the current thread, or of all threads if "
> + "all_threads is True, into file")},
> + {"unregister",
> + faulthandler_unregister_py, METH_VARARGS|METH_KEYWORDS,
> + PyDoc_STR("unregister(signum): unregister the handler of the signal "
> + "'signum' registered by register()")},
> +#endif
> +
> + {"_read_null", faulthandler_read_null, METH_NOARGS,
> + PyDoc_STR("_read_null(): read from NULL, raise "
> + "a SIGSEGV or SIGBUS signal depending on the platform")},
> + {"_sigsegv", faulthandler_sigsegv, METH_VARARGS,
> + PyDoc_STR("_sigsegv(release_gil=False): raise a SIGSEGV signal")},
> +#ifdef WITH_THREAD
> + {"_fatal_error_c_thread", faulthandler_fatal_error_c_thread, METH_NOARGS,
> + PyDoc_STR("fatal_error_c_thread(): "
> + "call Py_FatalError() in a new C thread.")},
> +#endif
> + {"_sigabrt", faulthandler_sigabrt, METH_NOARGS,
> + PyDoc_STR("_sigabrt(): raise a SIGABRT signal")},
> + {"_sigfpe", (PyCFunction)faulthandler_sigfpe, METH_NOARGS,
> + PyDoc_STR("_sigfpe(): raise a SIGFPE signal")},
> + {"_fatal_error", faulthandler_fatal_error_py, METH_VARARGS,
> + PyDoc_STR("_fatal_error(message): call Py_FatalError(message)")},
> +#ifdef FAULTHANDLER_STACK_OVERFLOW
> + {"_stack_overflow", (PyCFunction)faulthandler_stack_overflow, METH_NOARGS,
> + PyDoc_STR("_stack_overflow(): recursive call to raise a stack overflow")},
> +#endif
> +#ifdef MS_WINDOWS
> + {"_raise_exception", faulthandler_raise_exception, METH_VARARGS,
> + PyDoc_STR("raise_exception(code, flags=0): Call RaiseException(code, flags).")},
> +#endif
> + {NULL, NULL} /* sentinel */
> +};
> +
> +static struct PyModuleDef module_def = {
> + PyModuleDef_HEAD_INIT,
> + "faulthandler",
> + module_doc,
> + 0, /* non-negative size to be able to unload the module */
> + module_methods,
> + NULL,
> + faulthandler_traverse,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit_faulthandler(void)
> +{
> + PyObject *m = PyModule_Create(&module_def);
> + if (m == NULL)
> + return NULL;
> +
> + /* Add constants for unit tests */
> +#ifdef MS_WINDOWS
> + /* RaiseException() codes (prefixed by an underscore) */
> + if (PyModule_AddIntConstant(m, "_EXCEPTION_ACCESS_VIOLATION",
> + EXCEPTION_ACCESS_VIOLATION))
> + return NULL;
> + if (PyModule_AddIntConstant(m, "_EXCEPTION_INT_DIVIDE_BY_ZERO",
> + EXCEPTION_INT_DIVIDE_BY_ZERO))
> + return NULL;
> + if (PyModule_AddIntConstant(m, "_EXCEPTION_STACK_OVERFLOW",
> + EXCEPTION_STACK_OVERFLOW))
> + return NULL;
> +
> + /* RaiseException() flags (prefixed by an underscore) */
> + if (PyModule_AddIntConstant(m, "_EXCEPTION_NONCONTINUABLE",
> + EXCEPTION_NONCONTINUABLE))
> + return NULL;
> + if (PyModule_AddIntConstant(m, "_EXCEPTION_NONCONTINUABLE_EXCEPTION",
> + EXCEPTION_NONCONTINUABLE_EXCEPTION))
> + return NULL;
> +#endif
> +
> + return m;
> +}
> +
> +/* Call faulthandler.enable() if the PYTHONFAULTHANDLER environment variable
> + is defined, or if sys._xoptions has a 'faulthandler' key. */
> +
> +static int
> +faulthandler_env_options(void)
> +{
> + PyObject *xoptions, *key, *module, *res;
> + char *p;
> +
> + if (!((p = Py_GETENV("PYTHONFAULTHANDLER")) && *p != '\0')) {
> + /* PYTHONFAULTHANDLER environment variable is missing
> + or an empty string */
> + int has_key;
> +
> + xoptions = PySys_GetXOptions();
> + if (xoptions == NULL)
> + return -1;
> +
> + key = PyUnicode_FromString("faulthandler");
> + if (key == NULL)
> + return -1;
> +
> + has_key = PyDict_Contains(xoptions, key);
> + Py_DECREF(key);
> + if (has_key <= 0)
> + return has_key;
> + }
> +
> + module = PyImport_ImportModule("faulthandler");
> + if (module == NULL) {
> + return -1;
> + }
> + res = _PyObject_CallMethodId(module, &PyId_enable, NULL);
> + Py_DECREF(module);
> + if (res == NULL)
> + return -1;
> + Py_DECREF(res);
> + return 0;
> +}
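> +
> +#if 0
> +/* Illustrative sketch, not compiled: the two ways this hook is triggered,
> +   shown as example command lines (the script name and shell usage are
> +   assumptions for the example).  Either set the environment variable before
> +   starting the interpreter, or pass -X faulthandler on the command line. */
> +static const char example_faulthandler_cmdlines[] =
> +    "set PYTHONFAULTHANDLER 1\n"        /* UEFI Shell environment variable */
> +    "Python.efi script.py\n"
> +    "Python.efi -X faulthandler script.py\n";
> +#endif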
> +
> +int _PyFaulthandler_Init(void)
> +{
> +#ifdef HAVE_SIGALTSTACK
> + int err;
> +
> + /* Try to allocate an alternate stack for faulthandler() signal handler to
> + * be able to allocate memory on the stack, even on a stack overflow. If it
> + * fails, ignore the error. */
> + stack.ss_flags = 0;
> + stack.ss_size = SIGSTKSZ;
> + stack.ss_sp = PyMem_Malloc(stack.ss_size);
> + if (stack.ss_sp != NULL) {
> + err = sigaltstack(&stack, &old_stack);
> + if (err) {
> + PyMem_Free(stack.ss_sp);
> + stack.ss_sp = NULL;
> + }
> + }
> +#endif
> +#ifdef FAULTHANDLER_LATER
> + thread.file = NULL;
> + thread.cancel_event = PyThread_allocate_lock();
> + thread.running = PyThread_allocate_lock();
> + if (!thread.cancel_event || !thread.running) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "could not allocate locks for faulthandler");
> + return -1;
> + }
> + PyThread_acquire_lock(thread.cancel_event, 1);
> +#endif
> +
> + return faulthandler_env_options();
> +}
> +
> +void _PyFaulthandler_Fini(void)
> +{
> +#ifdef FAULTHANDLER_USER
> + unsigned int signum;
> +#endif
> +
> +#ifdef FAULTHANDLER_LATER
> + /* later */
> + if (thread.cancel_event) {
> + cancel_dump_traceback_later();
> + PyThread_release_lock(thread.cancel_event);
> + PyThread_free_lock(thread.cancel_event);
> + thread.cancel_event = NULL;
> + }
> + if (thread.running) {
> + PyThread_free_lock(thread.running);
> + thread.running = NULL;
> + }
> +#endif
> +
> +#ifdef FAULTHANDLER_USER
> + /* user */
> + if (user_signals != NULL) {
> + for (signum=0; signum < NSIG; signum++)
> + faulthandler_unregister(&user_signals[signum], signum);
> + PyMem_Free(user_signals);
> + user_signals = NULL;
> + }
> +#endif
> +
> + /* fatal */
> + faulthandler_disable();
> +#ifdef HAVE_SIGALTSTACK
> + if (stack.ss_sp != NULL) {
> + /* Fetch the current alt stack */
> +        stack_t current_stack = { 0 };
> +        if (sigaltstack(NULL, &current_stack) == 0) {
> + if (current_stack.ss_sp == stack.ss_sp) {
> + /* The current alt stack is the one that we installed.
> + It is safe to restore the old stack that we found when
> + we installed ours */
> + sigaltstack(&old_stack, NULL);
> + } else {
> + /* Someone switched to a different alt stack and didn't
> + restore ours when they were done (if they're done).
> + There's not much we can do in this unlikely case */
> + }
> + }
> + PyMem_Free(stack.ss_sp);
> + stack.ss_sp = NULL;
> + }
> +#endif
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
> new file mode 100644
> index 00000000..ad10784d
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
> @@ -0,0 +1,1283 @@
> +/* Return the initial module search path. */
> +
> +#include "Python.h"
> +#include <osdefs.h>
> +#include <ctype.h>
> +
> +//#include <sys/types.h>
> +//#include <string.h>
> +
> +#ifdef __APPLE__
> +#include <mach-o/dyld.h>
> +#endif
> +
> +/* Search in some common locations for the associated Python libraries.
> + *
> + * Two directories must be found, the platform independent directory
> + * (prefix), containing the common .py and .pyc files, and the platform
> + * dependent directory (exec_prefix), containing the shared library
> + * modules. Note that prefix and exec_prefix can be the same directory,
> + * but for some installations, they are different.
> + *
> + * Py_GetPath() carries out separate searches for prefix and exec_prefix.
> + * Each search tries a number of different locations until a ``landmark''
> + * file or directory is found. If no prefix or exec_prefix is found, a
> + * warning message is issued and the preprocessor defined PREFIX and
> + * EXEC_PREFIX are used (even though they will not work); python carries on
> + * as best as is possible, but most imports will fail.
> + *
> + * Before any searches are done, the location of the executable is
> + * determined. If argv[0] has one or more slashes in it, it is used
> + * unchanged. Otherwise, it must have been invoked from the shell's path,
> + * so we search $PATH for the named executable and use that. If the
> + * executable was not found on $PATH (or there was no $PATH environment
> + * variable), the original argv[0] string is used.
> + *
> + * Next, the executable location is examined to see if it is a symbolic
> + * link. If so, the link is chased (correctly interpreting a relative
> + * pathname if one is found) and the directory of the link target is used.
> + *
> + * Finally, argv0_path is set to the directory containing the executable
> + * (i.e. the last component is stripped).
> + *
> + * With argv0_path in hand, we perform a number of steps. The same steps
> + * are performed for prefix and for exec_prefix, but with a different
> + * landmark.
> + *
> + * Step 1. Are we running python out of the build directory? This is
> + * checked by looking for a different kind of landmark relative to
> + * argv0_path. For prefix, the landmark's path is derived from the VPATH
> + * preprocessor variable (taking into account that its value is almost, but
> + * not quite, what we need). For exec_prefix, the landmark is
> + * pybuilddir.txt. If the landmark is found, we're done.
> + *
> + * For the remaining steps, the prefix landmark will always be
> + * lib/python$VERSION/os.py and the exec_prefix will always be
> + * lib/python$VERSION/lib-dynload, where $VERSION is Python's version
> + * number as supplied by the Makefile. Note that this means that no more
> + * build directory checking is performed; if the first step did not find
> + * the landmarks, the assumption is that python is running from an
> + * installed setup.
> + *
> + * Step 2. See if the $PYTHONHOME environment variable points to the
> + * installed location of the Python libraries. If $PYTHONHOME is set, then
> + * it points to prefix and exec_prefix. $PYTHONHOME can be a single
> + * directory, which is used for both, or the prefix and exec_prefix
> + * directories separated by a colon.
> + *
> + * Step 3. Try to find prefix and exec_prefix relative to argv0_path,
> + * backtracking up the path until it is exhausted. This is the most common
> + * step to succeed. Note that if prefix and exec_prefix are different,
> + * exec_prefix is more likely to be found; however if exec_prefix is a
> + * subdirectory of prefix, both will be found.
> + *
> + * Step 4. Search the directories pointed to by the preprocessor variables
> + * PREFIX and EXEC_PREFIX. These are supplied by the Makefile but can be
> + * passed in as options to the configure script.
> + *
> + * That's it!
> + *
> + * Well, almost. Once we have determined prefix and exec_prefix, the
> + * preprocessor variable PYTHONPATH is used to construct a path. Each
> + * relative path on PYTHONPATH is prefixed with prefix. Then the directory
> + * containing the shared library modules is appended. The environment
> + * variable $PYTHONPATH is inserted in front of it all. Finally, the
> + * prefix and exec_prefix globals are tweaked so they reflect the values
> + * expected by other code, by stripping the "lib/python$VERSION/..." stuff
> + * off. If either points to the build directory, the globals are reset to
> + * the corresponding preprocessor variables (so sys.prefix will reflect the
> + * installation location, even though sys.path points into the build
> + * directory). This seems to make more sense given that currently the only
> + * known use of sys.prefix and sys.exec_prefix is for the ILU installation
> + * process to find the installed Python tree.
> + *
> + * An embedding application can use Py_SetPath() to override all of
> + * these automatic path computations.
> + *
> + * NOTE: Windows MSVC builds use PC/getpathp.c instead!
> + */
> +
> +#ifdef __cplusplus
> + extern "C" {
> +#endif
> +
> +/* Filename separator */
> +#ifndef SEP
> +#define SEP L'/'
> +#define ALTSEP L'\\'
> +#endif
> +
> +#ifndef ALTSEP
> +#define ALTSEP L'\\'
> +#endif
> +
> +
> +#define SIFY_I( x ) L#x
> +#define SIFY( y ) SIFY_I( y )
> +
> +#ifndef PREFIX
> + #define PREFIX L"/Efi/StdLib"
> +#endif
> +
> +#ifndef EXEC_PREFIX
> + #define EXEC_PREFIX PREFIX
> +#endif
> +
> +#ifndef LIBPYTHON
> + #define LIBPYTHON L"lib/python" VERSION L"." SIFY(PY_MICRO_VERSION)
> +#endif
> +
> +#ifndef PYTHONPATH
> + #define PYTHONPATH LIBPYTHON
> +#endif
> +
> +#ifndef LANDMARK
> + #define LANDMARK L"os.py"
> +#endif
> +
> +#ifndef VERSION
> + #define VERSION SIFY(PY_MAJOR_VERSION) SIFY(PY_MINOR_VERSION)
> +#endif
> +
> +#ifndef VPATH
> + #define VPATH L"."
> +#endif
> +
> +/* Search path entry delimiter */
> +//# define DELIM ';'
> +# define DELIM_STR ";"
> +
> +#ifdef DELIM
> + #define sDELIM L";"
> +#endif
> +
> +
> +#if !defined(PREFIX) || !defined(EXEC_PREFIX) || !defined(VERSION) || !defined(VPATH)
> +#error "PREFIX, EXEC_PREFIX, VERSION, and VPATH must be constant defined"
> +#endif
> +
> +#ifndef LANDMARK
> +#define LANDMARK L"os.py"
> +#endif
> +
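> +#if 0
> +/* Illustrative sketch, not compiled: what the defaults above expand to.
> +   With PREFIX "/Efi/StdLib", LIBPYTHON "lib/python36.8" and LANDMARK
> +   "os.py", and assuming the interpreter lives on the "fs0:" volume (the
> +   volume name is an assumption for the example), the prefix landmark that
> +   the search code below looks for is: */
> +static const wchar_t example_prefix_landmark[] =
> +    L"fs0:" PREFIX L"/" LIBPYTHON L"/" LANDMARK;
> +    /* == L"fs0:/Efi/StdLib/lib/python36.8/os.py" */
> +#endif
> +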
> +static wchar_t prefix[MAXPATHLEN+1] = {0};
> +static wchar_t exec_prefix[MAXPATHLEN+1] = {0};
> +static wchar_t progpath[MAXPATHLEN+1] = {0};
> +static wchar_t *module_search_path = NULL;
> +static wchar_t lib_python[] = LIBPYTHON;
> +static wchar_t volume_name[32] = { 0 };
> +
> +
> +/* Get file status. Encode the path to the locale encoding. */
> +
> +static int
> +_Py_wstat(const wchar_t* path, struct stat *buf)
> +{
> + int err;
> + char *fname;
> + fname = Py_EncodeLocale(path, NULL);
> + if (fname == NULL) {
> + errno = EINVAL;
> + return -1;
> + }
> + err = stat(fname, buf);
> + PyMem_Free(fname);
> + return err;
> +}
> +
> +/* Return the last component of a path. The path is encoded to the locale
> +   encoding for basename() and the result is decoded back to a wide string. */
> +
> +static wchar_t *
> +_Py_basename(const wchar_t* path)
> +{
> + int err;
> + size_t len = 0;
> + char *fname, *buf;
> + wchar_t *bname;
> + fname = Py_EncodeLocale(path, NULL);
> + if (fname == NULL) {
> + errno = EINVAL;
> + return NULL;
> + }
> +    buf = basename(fname);
> +    len = strlen(buf);
> +    bname = Py_DecodeLocale(buf, &len);
> +    /* basename() may return a pointer into fname, so only free fname after
> +       the result has been decoded */
> +    PyMem_Free(fname);
> +    return bname;
> +}
> +
> +/** Determine if "ch" is a separator character.
> +
> + @param[in] ch The character to test.
> +
> + @retval TRUE ch is a separator character.
> + @retval FALSE ch is NOT a separator character.
> +**/
> +static int
> +is_sep(wchar_t ch)
> +{
> + return ch == SEP || ch == ALTSEP;
> +}
> +
> +/** Determine if a path is absolute, or not.
> + An absolute path consists of a volume name, "VOL:", followed by a rooted path,
> + "/path/elements". If both of these components are present, the path is absolute.
> +
> + Let P be a pointer to the path to test.
> + Let A be a pointer to the first ':' in P.
> + Let B be a pointer to the first '/' or '\\' in P.
> +
> + If A and B are not NULL
> + If (A-P+1) == (B-P) then the path is absolute.
> + Otherwise, the path is NOT absolute.
> +
> + @param[in] path The path to test.
> +
> + @retval -1 Path is absolute but lacking volume name.
> + @retval 0 Path is NOT absolute.
> + @retval 1 Path is absolute.
> +*/
> +static int
> +is_absolute(wchar_t *path)
> +{
> + wchar_t *A;
> + wchar_t *B;
> +
> + A = wcschr(path, L':');
> + B = wcspbrk(path, L"/\\");
> +
> + if(B != NULL) {
> + if(A == NULL) {
> + if(B == path) {
> + return -1;
> + }
> + }
> + else {
> + if(((A - path) + 1) == (B - path)) {
> + return 1;
> + }
> + }
> + }
> + return 0;
> +}
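> +
> +#if 0
> +/* Illustrative sketch, not compiled: representative return values of
> +   is_absolute() for UEFI-style paths ("fs0:" is an assumed volume name). */
> +static void
> +example_is_absolute(void)
> +{
> +    int r;
> +    r = is_absolute(L"fs0:/Efi/StdLib");    /*  1: volume name + rooted path */
> +    r = is_absolute(L"\\Efi\\StdLib");      /* -1: rooted, but no volume name */
> +    r = is_absolute(L"Scripts/hello.py");   /*  0: relative path */
> +    (void)r;
> +}
> +#endif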
> +
> +static void
> +reduce(wchar_t *dir)
> +{
> + size_t i = wcslen(dir);
> + while (i > 0 && !is_sep(dir[i]))
> + --i;
> + dir[i] = '\0';
> +}
> +
> +static int
> +isfile(wchar_t *filename) /* Is file, not directory */
> +{
> + struct stat buf;
> + if (_Py_wstat(filename, &buf) != 0)
> + return 0;
> + if (!S_ISREG(buf.st_mode))
> + return 0;
> + return 1;
> +}
> +
> +
> +static int
> +ismodule(wchar_t *filename) /* Is module -- check for .pyc too */
> +{
> + if (isfile(filename))
> + return 1;
> +
> + /* Check for the compiled version of prefix. */
> + if (wcslen(filename) < MAXPATHLEN) {
> + wcscat(filename, L"c");
> + if (isfile(filename))
> + return 1;
> + }
> + return 0;
> +}
> +
> +static int
> +isdir(wchar_t *filename) /* Is directory */
> +{
> + struct stat buf;
> + if (_Py_wstat(filename, &buf) != 0)
> + return 0;
> + if (!S_ISDIR(buf.st_mode))
> + return 0;
> + return 1;
> +}
> +
> +/* Add a path component, by appending stuff to buffer.
> + buffer must have at least MAXPATHLEN + 1 bytes allocated, and contain a
> + NUL-terminated string with no more than MAXPATHLEN characters (not counting
> + the trailing NUL). It's a fatal error if it contains a string longer than
> + that (callers must be careful!). If these requirements are met, it's
> + guaranteed that buffer will still be a NUL-terminated string with no more
> + than MAXPATHLEN characters at exit. If stuff is too long, only as much of
> + stuff as fits will be appended.
> +*/
> +static void
> +joinpath(wchar_t *buffer, wchar_t *stuff)
> +{
> + size_t n, k;
> + k = 0;
> + if (is_absolute(stuff) == 1){
> + n = 0;
> + }
> + else {
> + n = wcslen(buffer);
> + if (n == 0) {
> + wcsncpy(buffer, volume_name, MAXPATHLEN);
> + n = wcslen(buffer);
> + }
> + if (n > 0 && n < MAXPATHLEN){
> + if(!is_sep(buffer[n-1])) {
> + buffer[n++] = SEP;
> + }
> + if(is_sep(stuff[0])) ++stuff;
> + }
> + }
> + if (n > MAXPATHLEN)
> + Py_FatalError("buffer overflow in getpath.c's joinpath()");
> + k = wcslen(stuff);
> + if (n + k > MAXPATHLEN)
> + k = MAXPATHLEN - n;
> + wcsncpy(buffer+n, stuff, k);
> + buffer[n+k] = '\0';
> +}
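> +
> +#if 0
> +/* Illustrative sketch, not compiled: joinpath() behaviour, assuming the
> +   global volume_name has already been set to "fs0:" by set_volume(). */
> +static void
> +example_joinpath(void)
> +{
> +    wchar_t buffer[MAXPATHLEN+1] = L"fs0:/Efi/StdLib";
> +
> +    joinpath(buffer, L"lib/python36.8");
> +    /* buffer is now "fs0:/Efi/StdLib/lib/python36.8" */
> +
> +    buffer[0] = L'\0';
> +    joinpath(buffer, L"Tools");
> +    /* an empty buffer is first seeded with volume_name: "fs0:/Tools" */
> +}
> +#endif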
> +
> +static int
> +isxfile(wchar_t *filename)
> +{
> + struct stat buf;
> + wchar_t *bn;
> + wchar_t *newbn;
> + int bnlen;
> + char *filename_str;
> +
> + bn = _Py_basename(filename); // Separate off the file name component
> + reduce(filename); // and isolate the path component
> + bnlen = wcslen(bn);
> + newbn = wcsrchr(bn, L'.'); // Does basename contain a period?
> + if(newbn == NULL) { // Does NOT contain a period.
> + newbn = &bn[bnlen];
> + wcsncpy(newbn, L".efi", MAXPATHLEN - bnlen); // append ".efi" to basename
> + bnlen += 4;
> + }
> + else if(wcscmp(newbn, L".efi") != 0) {
> + return 0; // File can not be executable.
> + }
> + joinpath(filename, bn); // Stitch path and file name back together
> +
> + return isdir(filename);
> +}
> +
> +/* copy_absolute requires that path be allocated at least
> + MAXPATHLEN + 1 bytes and that p be no more than MAXPATHLEN bytes. */
> +static void
> +copy_absolute(wchar_t *path, wchar_t *p, size_t pathlen)
> +{
> + if (is_absolute(p) == 1)
> + wcscpy(path, p);
> + else {
> + if (!_Py_wgetcwd(path, pathlen)) {
> + /* unable to get the current directory */
> + if(volume_name[0] != 0) {
> + wcscpy(path, volume_name);
> + joinpath(path, p);
> + }
> + else
> + wcscpy(path, p);
> + return;
> + }
> + if (p[0] == '.' && p[1] == SEP)
> + p += 2;
> + joinpath(path, p);
> + }
> +}
> +
> +/* absolutize() requires that path be allocated at least MAXPATHLEN+1 bytes. */
> +static void
> +absolutize(wchar_t *path)
> +{
> + wchar_t buffer[MAXPATHLEN+1];
> +
> + if (is_absolute(path) == 1)
> + return;
> + copy_absolute(buffer, path, MAXPATHLEN+1);
> + wcscpy(path, buffer);
> +}
> +
> +/** Extract the volume name from a path.
> +
> + @param[out] Dest Pointer to location in which to store the extracted volume name.
> + @param[in] path Pointer to the path to extract the volume name from.
> +**/
> +static void
> +set_volume(wchar_t *Dest, wchar_t *path)
> +{
> + size_t VolLen;
> +
> + if(is_absolute(path)) {
> + VolLen = wcscspn(path, L"/\\:");
> + if((VolLen != 0) && (path[VolLen] == L':')) {
> + (void) wcsncpy(Dest, path, VolLen + 1);
> + }
> + }
> +}
> +
> +
> +/* search for a prefix value in an environment file. If found, copy it
> + to the provided buffer, which is expected to be no more than MAXPATHLEN
> + bytes long.
> +*/
> +
> +static int
> +find_env_config_value(FILE * env_file, const wchar_t * key, wchar_t * value)
> +{
> + int result = 0; /* meaning not found */
> + char buffer[MAXPATHLEN*2+1]; /* allow extra for key, '=', etc. */
> +
> + fseek(env_file, 0, SEEK_SET);
> + while (!feof(env_file)) {
> + char * p = fgets(buffer, MAXPATHLEN*2, env_file);
> + wchar_t tmpbuffer[MAXPATHLEN*2+1];
> + PyObject * decoded;
> + int n;
> +
> + if (p == NULL)
> + break;
> + n = strlen(p);
> + if (p[n - 1] != '\n') {
> + /* line has overflowed - bail */
> + break;
> + }
> + if (p[0] == '#') /* Comment - skip */
> + continue;
> + decoded = PyUnicode_DecodeUTF8(buffer, n, "surrogateescape");
> + if (decoded != NULL) {
> + Py_ssize_t k;
> + wchar_t * state;
> + k = PyUnicode_AsWideChar(decoded,
> + tmpbuffer, MAXPATHLEN * 2);
> + Py_DECREF(decoded);
> + if (k >= 0) {
> + wchar_t * tok = wcstok(tmpbuffer, L" \t\r\n", &state);
> + if ((tok != NULL) && !wcscmp(tok, key)) {
> + tok = wcstok(NULL, L" \t", &state);
> + if ((tok != NULL) && !wcscmp(tok, L"=")) {
> + tok = wcstok(NULL, L"\r\n", &state);
> + if (tok != NULL) {
> + wcsncpy(value, tok, MAXPATHLEN);
> + result = 1;
> + break;
> + }
> + }
> + }
> + }
> + }
> + }
> + return result;
> +}
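> +
> +#if 0
> +/* Illustrative sketch, not compiled: the line format accepted by
> +   find_env_config_value() above -- "key = value" with '#' comment lines.
> +   The file contents and the "home" key are hypothetical examples. */
> +static const char example_env_file[] =
> +    "# comment lines are skipped\n"
> +    "home = fs0:/Efi/StdLib\n";
> +/* find_env_config_value(f, L"home", value) on such a file copies
> +   "fs0:/Efi/StdLib" into value and returns 1. */
> +#endif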
> +
> +/* search_for_prefix requires that argv0_path be no more than MAXPATHLEN
> + bytes long.
> +*/
> +static int
> +search_for_prefix(wchar_t *argv0_path, wchar_t *home, wchar_t *_prefix,
> + wchar_t *lib_python)
> +{
> + size_t n;
> + wchar_t *vpath;
> +
> + /* If PYTHONHOME is set, we believe it unconditionally */
> + if (home) {
> + wchar_t *delim;
> + wcsncpy(prefix, home, MAXPATHLEN);
> + prefix[MAXPATHLEN] = L'\0';
> + delim = wcschr(prefix, DELIM);
> + if (delim)
> + *delim = L'\0';
> + joinpath(prefix, lib_python);
> + joinpath(prefix, LANDMARK);
> + return 1;
> + }
> +
> + /* Check to see if argv[0] is in the build directory */
> + wcsncpy(prefix, argv0_path, MAXPATHLEN);
> + prefix[MAXPATHLEN] = L'\0';
> + joinpath(prefix, L"Modules/Setup");
> + if (isfile(prefix)) {
> + /* Check VPATH to see if argv0_path is in the build directory. */
> + vpath = Py_DecodeLocale(VPATH, NULL);
> + if (vpath != NULL) {
> + wcsncpy(prefix, argv0_path, MAXPATHLEN);
> + prefix[MAXPATHLEN] = L'\0';
> + joinpath(prefix, vpath);
> + PyMem_RawFree(vpath);
> + joinpath(prefix, L"Lib");
> + joinpath(prefix, LANDMARK);
> + if (ismodule(prefix))
> + return -1;
> + }
> + }
> +
> + /* Search from argv0_path, until root is found */
> + copy_absolute(prefix, argv0_path, MAXPATHLEN+1);
> + do {
> + n = wcslen(prefix);
> + joinpath(prefix, lib_python);
> + joinpath(prefix, LANDMARK);
> + if (ismodule(prefix))
> + return 1;
> + prefix[n] = L'\0';
> + reduce(prefix);
> + } while (prefix[0]);
> +
> + /* Look at configure's PREFIX */
> + wcsncpy(prefix, _prefix, MAXPATHLEN);
> + prefix[MAXPATHLEN] = L'\0';
> + joinpath(prefix, lib_python);
> + joinpath(prefix, LANDMARK);
> + if (ismodule(prefix))
> + return 1;
> +
> + /* Fail */
> + return 0;
> +}
> +
> +
> +/* search_for_exec_prefix requires that argv0_path be no more than
> + MAXPATHLEN bytes long.
> +*/
> +static int
> +search_for_exec_prefix(wchar_t *argv0_path, wchar_t *home,
> + wchar_t *_exec_prefix, wchar_t *lib_python)
> +{
> + size_t n;
> +
> + /* If PYTHONHOME is set, we believe it unconditionally */
> + if (home) {
> + wchar_t *delim;
> + delim = wcschr(home, DELIM);
> + if (delim)
> + wcsncpy(exec_prefix, delim+1, MAXPATHLEN);
> + else
> + wcsncpy(exec_prefix, home, MAXPATHLEN);
> + exec_prefix[MAXPATHLEN] = L'\0';
> + joinpath(exec_prefix, lib_python);
> + joinpath(exec_prefix, L"lib-dynload");
> + return 1;
> + }
> +
> + /* Check to see if argv[0] is in the build directory. "pybuilddir.txt"
> + is written by setup.py and contains the relative path to the location
> + of shared library modules. */
> + wcsncpy(exec_prefix, argv0_path, MAXPATHLEN);
> + exec_prefix[MAXPATHLEN] = L'\0';
> + joinpath(exec_prefix, L"pybuilddir.txt");
> + if (isfile(exec_prefix)) {
> + FILE *f = _Py_wfopen(exec_prefix, L"rb");
> + if (f == NULL)
> + errno = 0;
> + else {
> + char buf[MAXPATHLEN+1];
> + PyObject *decoded;
> + wchar_t rel_builddir_path[MAXPATHLEN+1];
> + n = fread(buf, 1, MAXPATHLEN, f);
> + buf[n] = '\0';
> + fclose(f);
> + decoded = PyUnicode_DecodeUTF8(buf, n, "surrogateescape");
> + if (decoded != NULL) {
> + Py_ssize_t k;
> + k = PyUnicode_AsWideChar(decoded,
> + rel_builddir_path, MAXPATHLEN);
> + Py_DECREF(decoded);
> + if (k >= 0) {
> + rel_builddir_path[k] = L'\0';
> + wcsncpy(exec_prefix, argv0_path, MAXPATHLEN);
> + exec_prefix[MAXPATHLEN] = L'\0';
> + joinpath(exec_prefix, rel_builddir_path);
> + return -1;
> + }
> + }
> + }
> + }
> +
> + /* Search from argv0_path, until root is found */
> + copy_absolute(exec_prefix, argv0_path, MAXPATHLEN+1);
> + do {
> + n = wcslen(exec_prefix);
> + joinpath(exec_prefix, lib_python);
> + joinpath(exec_prefix, L"lib-dynload");
> + if (isdir(exec_prefix))
> + return 1;
> + exec_prefix[n] = L'\0';
> + reduce(exec_prefix);
> + } while (exec_prefix[0]);
> +
> + /* Look at configure's EXEC_PREFIX */
> + wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
> + exec_prefix[MAXPATHLEN] = L'\0';
> + joinpath(exec_prefix, lib_python);
> + joinpath(exec_prefix, L"lib-dynload");
> + if (isdir(exec_prefix))
> + return 1;
> +
> + /* Fail */
> + return 0;
> +}
> +
> +static void
> +calculate_path(void)
> +{
> + extern wchar_t *Py_GetProgramName(void);
> + wchar_t *pythonpath = PYTHONPATH;
> + static const wchar_t delimiter[2] = {DELIM, '\0'};
> + static const wchar_t separator[2] = {SEP, '\0'};
> + //char *rtpypath = Py_GETENV("PYTHONPATH"); /* XXX use wide version on Windows */
> + wchar_t *rtpypath = NULL;
> + char *_path = getenv("path");
> + wchar_t *path_buffer = NULL;
> + wchar_t *path = NULL;
> + wchar_t *prog = Py_GetProgramName();
> + wchar_t argv0_path[MAXPATHLEN+1];
> + wchar_t zip_path[MAXPATHLEN+1];
> + wchar_t *buf;
> + size_t bufsz;
> + size_t prefixsz;
> + wchar_t *defpath;
> +
> + if (_path) {
> + path_buffer = Py_DecodeLocale(_path, NULL);
> + path = path_buffer;
> + }
> +/* ###########################################################################
> + Determine path to the Python.efi binary.
> + Produces progpath, argv0_path, and volume_name.
> +########################################################################### */
> +
> + /* If there is no slash in the argv0 path, then we have to
> + * assume python is on the user's $PATH, since there's no
> + * other way to find a directory to start the search from. If
> + * $PATH isn't exported, you lose.
> + */
> + if (wcspbrk(prog, L"/\\"))
> + {
> + wcsncpy(progpath, prog, MAXPATHLEN);
> + }
> + else if (path) {
> + while (1) {
> + wchar_t *delim = wcschr(path, DELIM);
> +
> + if (delim) {
> + size_t len = delim - path;
> + if (len > MAXPATHLEN)
> + len = MAXPATHLEN;
> + wcsncpy(progpath, path, len);
> + *(progpath + len) = L'\0';
> + }
> + else
> + wcsncpy(progpath, path, MAXPATHLEN);
> +
> + joinpath(progpath, prog);
> + if (isxfile(progpath))
> + break;
> +
> + if (!delim) {
> + progpath[0] = L'\0';
> + break;
> + }
> + path = delim + 1;
> + }
> + }
> + else
> + progpath[0] = L'\0';
> +
> + if ( (!is_absolute(progpath)) && (progpath[0] != '\0') )
> + absolutize(progpath);
> +
> + wcsncpy(argv0_path, progpath, MAXPATHLEN);
> + argv0_path[MAXPATHLEN] = L'\0';
> + set_volume(volume_name, argv0_path);
> +
> + reduce(argv0_path);
> + /* At this point, argv0_path is guaranteed to be less than
> + MAXPATHLEN bytes long.
> + */
> +/* ###########################################################################
> + Build the FULL prefix string, including volume name.
> + This is the full path to the platform independent libraries.
> +########################################################################### */
> +
> + wcsncpy(prefix, volume_name, MAXPATHLEN);
> + joinpath(prefix, PREFIX);
> + joinpath(prefix, lib_python);
> +
> +/* ###########################################################################
> + Build the FULL path to the zipped-up Python library.
> +########################################################################### */
> +
> + wcsncpy(zip_path, prefix, MAXPATHLEN);
> + zip_path[MAXPATHLEN] = L'\0';
> + reduce(zip_path);
> + joinpath(zip_path, L"python00.zip");
> + bufsz = wcslen(zip_path); /* Replace "00" with version */
> + zip_path[bufsz - 6] = VERSION[0];
> + zip_path[bufsz - 5] = VERSION[1];
> +/* ###########################################################################
> + Build the FULL path to dynamically loadable libraries.
> +########################################################################### */
> +
> + wcsncpy(exec_prefix, volume_name, MAXPATHLEN); // "fs0:"
> + joinpath(exec_prefix, EXEC_PREFIX); // "fs0:/Efi/StdLib"
> +    joinpath(exec_prefix, lib_python);        // "fs0:/Efi/StdLib/lib/python36.8"
> +    joinpath(exec_prefix, L"lib-dynload");    // "fs0:/Efi/StdLib/lib/python36.8/lib-dynload"
> +/* ###########################################################################
> + Build the module search path.
> +########################################################################### */
> +
> + /* Reduce prefix and exec_prefix to their essence,
> + * e.g. /usr/local/lib/python1.5 is reduced to /usr/local.
> + * If we're loading relative to the build directory,
> + * return the compiled-in defaults instead.
> + */
> + reduce(prefix);
> + reduce(prefix);
> + /* The prefix is the root directory, but reduce() chopped
> + * off the "/". */
> + if (!prefix[0]) {
> + wcscpy(prefix, volume_name);
> + }
> + bufsz = wcslen(prefix);
> + if(prefix[bufsz-1] == L':') { // if prefix consists solely of a volume_name
> + prefix[bufsz] = SEP; // then append SEP indicating the root directory
> + prefix[bufsz+1] = 0; // and ensure the new string is terminated
> + }
> +
> + /* Calculate size of return buffer.
> + */
> + defpath = pythonpath;
> + bufsz = 0;
> +
> + if (rtpypath)
> + bufsz += wcslen(rtpypath) + 1;
> +
> + prefixsz = wcslen(prefix) + 1;
> + while (1) {
> + wchar_t *delim = wcschr(defpath, DELIM);
> +
> + if (is_absolute(defpath) == 0)
> + /* Paths are relative to prefix */
> + bufsz += prefixsz;
> +
> + if (delim)
> + bufsz += delim - defpath + 1;
> + else {
> + bufsz += wcslen(defpath) + 1;
> + break;
> + }
> + defpath = delim + 1;
> + }
> +
> + bufsz += wcslen(zip_path) + 1;
> + bufsz += wcslen(exec_prefix) + 1;
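> + /* bufsz now counts every piece of the final search path: the run-time
> + PYTHONPATH, each compile-time default entry (plus room for the prefix on
> + relative entries), the zip path, exec_prefix, and the separators/terminator. */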
> +
> + /* This is the only malloc call in this file */
> + buf = (wchar_t *)PyMem_RawMalloc(bufsz * sizeof(wchar_t)); /* bufsz counts wchar_t elements; the raw allocator matches the PyMem_RawFree() in Py_SetPath() */
> +
> + if (buf == NULL) {
> + /* We can't exit, so print a warning and limp along */
> + fprintf(stderr, "Not enough memory for dynamic PYTHONPATH.\n");
> + fprintf(stderr, "Using default static PYTHONPATH.\n");
> + module_search_path = PYTHONPATH;
> + }
> + else {
> + /* Run-time value of $PYTHONPATH goes first */
> + if (rtpypath) {
> + wcscpy(buf, rtpypath);
> + wcscat(buf, delimiter);
> + }
> + else
> + buf[0] = L'\0';
> +
> + /* Next is the default zip path */
> + wcscat(buf, zip_path);
> + wcscat(buf, delimiter);
> + /* Next goes merge of compile-time $PYTHONPATH with
> + * dynamically located prefix.
> + */
> + defpath = pythonpath;
> + while (1) {
> + wchar_t *delim = wcschr(defpath, DELIM);
> +
> + if (is_absolute(defpath) != 1) {
> + wcscat(buf, prefix);
> + wcscat(buf, separator);
> + }
> +
> + if (delim) {
> + size_t len = delim - defpath + 1;
> + size_t end = wcslen(buf) + len;
> + wcsncat(buf, defpath, len);
> + *(buf + end) = L'\0';
> + }
> + else {
> + wcscat(buf, defpath);
> + break;
> + }
> + defpath = delim + 1;
> + }
> + wcscat(buf, delimiter);
> + /* Finally, on goes the directory for dynamic-load modules */
> + wcscat(buf, exec_prefix);
> + /* And publish the results */
> + module_search_path = buf;
> + }
> + /* At this point, exec_prefix is set to VOL:/Efi/StdLib/lib/python<version>/lib-dynload.
> + We want to get back to the root value, so we have to remove the final three
> + segments to get VOL:/Efi/StdLib. Because we don't know what VOL is, and
> + EXEC_PREFIX is also indeterminate, we simply remove the final three segments.
> + */
> + reduce(exec_prefix);
> + reduce(exec_prefix);
> + reduce(exec_prefix);
> + if (!exec_prefix[0]) {
> + wcscpy(exec_prefix, volume_name);
> + }
> + bufsz = wcslen(exec_prefix);
> + if(exec_prefix[bufsz-1] == L':') {
> + exec_prefix[bufsz] = SEP;
> + exec_prefix[bufsz+1] = 0;
> + }
> +
> +#if 1
> + if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: module_search_path = \"%ls\"\n", __func__, __LINE__, module_search_path);
> + if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: prefix = \"%ls\"\n", __func__, __LINE__, prefix);
> + if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: exec_prefix = \"%ls\"\n", __func__, __LINE__, exec_prefix);
> + if (Py_VerboseFlag) PySys_WriteStderr("%s[%d]: progpath = \"%ls\"\n", __func__, __LINE__, progpath);
> +#endif
> +
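> + /* The disabled block below preserves what appears to be the original CPython 3.6.8
> + calculate_path() logic (PYTHONHOME, framework and readlink handling) for reference;
> + the UEFI port uses the simplified volume-based logic above. */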
> +#if 0
> +
> + extern wchar_t *Py_GetProgramName(void);
> +
> + static const wchar_t delimiter[2] = {DELIM, '\0'};
> + static const wchar_t separator[2] = {SEP, '\0'};
> + char *_rtpypath = Py_GETENV("PYTHONPATH"); /* XXX use wide version on Windows */
> + wchar_t *rtpypath = NULL;
> + //wchar_t *home = Py_GetPythonHome();
> + char *_path = getenv("PATH");
> + wchar_t *path_buffer = NULL;
> + wchar_t *path = NULL;
> + wchar_t *prog = Py_GetProgramName();
> + wchar_t argv0_path[MAXPATHLEN+1];
> + wchar_t zip_path[MAXPATHLEN+1];
> + wchar_t *buf;
> + size_t bufsz;
> + size_t prefixsz;
> + wchar_t *defpath;
> +#ifdef WITH_NEXT_FRAMEWORK
> + NSModule pythonModule;
> + const char* modPath;
> +#endif
> +#ifdef __APPLE__
> +#if MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_4
> + uint32_t nsexeclength = MAXPATHLEN;
> +#else
> + unsigned long nsexeclength = MAXPATHLEN;
> +#endif
> + char execpath[MAXPATHLEN+1];
> +#endif
> + wchar_t *_pythonpath, *_prefix, *_exec_prefix;
> + wchar_t *lib_python;
> +
> + _pythonpath = Py_DecodeLocale(PYTHONPATH, NULL);
> + _prefix = Py_DecodeLocale(PREFIX, NULL);
> + _exec_prefix = Py_DecodeLocale(EXEC_PREFIX, NULL);
> + lib_python = Py_DecodeLocale("lib/python" VERSION, NULL);
> +
> + if (!_pythonpath || !_prefix || !_exec_prefix || !lib_python) {
> + Py_FatalError(
> + "Unable to decode path variables in getpath.c: "
> + "memory error");
> + }
> +
> + if (_path) {
> + path_buffer = Py_DecodeLocale(_path, NULL);
> + path = path_buffer;
> + }
> +
> + /* If there is no slash in the argv0 path, then we have to
> + * assume python is on the user's $PATH, since there's no
> + * other way to find a directory to start the search from. If
> + * $PATH isn't exported, you lose.
> + */
> + if (wcschr(prog, SEP))
> + wcsncpy(progpath, prog, MAXPATHLEN);
> +#ifdef __APPLE__
> + /* On Mac OS X, if a script uses an interpreter of the form
> + * "#!/opt/python2.3/bin/python", the kernel only passes "python"
> + * as argv[0], which falls through to the $PATH search below.
> + * If /opt/python2.3/bin isn't in your path, or is near the end,
> + * this algorithm may incorrectly find /usr/bin/python. To work
> + * around this, we can use _NSGetExecutablePath to get a better
> + * hint of what the intended interpreter was, although this
> + * will fail if a relative path was used. but in that case,
> + * absolutize() should help us out below
> + */
> + else if(0 == _NSGetExecutablePath(execpath, &nsexeclength) && execpath[0] == SEP) {
> + size_t r = mbstowcs(progpath, execpath, MAXPATHLEN+1);
> + if (r == (size_t)-1 || r > MAXPATHLEN) {
> + /* Could not convert execpath, or it's too long. */
> + progpath[0] = L'\0';
> + }
> + }
> +#endif /* __APPLE__ */
> + else if (path) {
> + while (1) {
> + wchar_t *delim = wcschr(path, DELIM);
> +
> + if (delim) {
> + size_t len = delim - path;
> + if (len > MAXPATHLEN)
> + len = MAXPATHLEN;
> + wcsncpy(progpath, path, len);
> + *(progpath + len) = L'\0';
> + }
> + else
> + wcsncpy(progpath, path, MAXPATHLEN);
> +
> + joinpath(progpath, prog);
> + if (isxfile(progpath))
> + break;
> +
> + if (!delim) {
> + progpath[0] = L'\0';
> + break;
> + }
> + path = delim + 1;
> + }
> + }
> + else
> + progpath[0] = L'\0';
> + PyMem_RawFree(path_buffer);
> + if (progpath[0] != SEP && progpath[0] != L'\0')
> + absolutize(progpath);
> + wcsncpy(argv0_path, progpath, MAXPATHLEN);
> + argv0_path[MAXPATHLEN] = L'\0';
> +
> +#ifdef WITH_NEXT_FRAMEWORK
> + /* On Mac OS X we have a special case if we're running from a framework.
> + ** This is because the python home should be set relative to the library,
> + ** which is in the framework, not relative to the executable, which may
> + ** be outside of the framework. Except when we're in the build directory...
> + */
> + pythonModule = NSModuleForSymbol(NSLookupAndBindSymbol("_Py_Initialize"));
> + /* Use dylib functions to find out where the framework was loaded from */
> + modPath = NSLibraryNameForModule(pythonModule);
> + if (modPath != NULL) {
> + /* We're in a framework. */
> + /* See if we might be in the build directory. The framework in the
> + ** build directory is incomplete, it only has the .dylib and a few
> + ** needed symlinks, it doesn't have the Lib directories and such.
> + ** If we're running with the framework from the build directory we must
> + ** be running the interpreter in the build directory, so we use the
> + ** build-directory-specific logic to find Lib and such.
> + */
> + wchar_t* wbuf = Py_DecodeLocale(modPath, NULL);
> + if (wbuf == NULL) {
> + Py_FatalError("Cannot decode framework location");
> + }
> +
> + wcsncpy(argv0_path, wbuf, MAXPATHLEN);
> + reduce(argv0_path);
> + joinpath(argv0_path, lib_python);
> + joinpath(argv0_path, LANDMARK);
> + if (!ismodule(argv0_path)) {
> + /* We are in the build directory so use the name of the
> + executable - we know that the absolute path is passed */
> + wcsncpy(argv0_path, progpath, MAXPATHLEN);
> + }
> + else {
> + /* Use the location of the library as the progpath */
> + wcsncpy(argv0_path, wbuf, MAXPATHLEN);
> + }
> + PyMem_RawFree(wbuf);
> + }
> +#endif
> +
> +#if HAVE_READLINK
> + {
> + wchar_t tmpbuffer[MAXPATHLEN+1];
> + int linklen = _Py_wreadlink(progpath, tmpbuffer, MAXPATHLEN);
> + while (linklen != -1) {
> + if (tmpbuffer[0] == SEP)
> + /* tmpbuffer should never be longer than MAXPATHLEN,
> + but extra check does not hurt */
> + wcsncpy(argv0_path, tmpbuffer, MAXPATHLEN);
> + else {
> + /* Interpret relative to progpath */
> + reduce(argv0_path);
> + joinpath(argv0_path, tmpbuffer);
> + }
> + linklen = _Py_wreadlink(argv0_path, tmpbuffer, MAXPATHLEN);
> + }
> + }
> +#endif /* HAVE_READLINK */
> +
> + reduce(argv0_path);
> + /* At this point, argv0_path is guaranteed to be less than
> + MAXPATHLEN bytes long.
> + */
> +
> + /* Search for an environment configuration file, first in the
> + executable's directory and then in the parent directory.
> + If found, open it for use when searching for prefixes.
> + */
> +
> + {
> + wchar_t tmpbuffer[MAXPATHLEN+1];
> + wchar_t *env_cfg = L"pyvenv.cfg";
> + FILE * env_file = NULL;
> +
> + wcscpy(tmpbuffer, argv0_path);
> +
> + joinpath(tmpbuffer, env_cfg);
> + env_file = _Py_wfopen(tmpbuffer, L"r");
> + if (env_file == NULL) {
> + errno = 0;
> + reduce(tmpbuffer);
> + reduce(tmpbuffer);
> + joinpath(tmpbuffer, env_cfg);
> + env_file = _Py_wfopen(tmpbuffer, L"r");
> + if (env_file == NULL) {
> + errno = 0;
> + }
> + }
> + if (env_file != NULL) {
> + /* Look for a 'home' variable and set argv0_path to it, if found */
> + if (find_env_config_value(env_file, L"home", tmpbuffer)) {
> + wcscpy(argv0_path, tmpbuffer);
> + }
> + fclose(env_file);
> + env_file = NULL;
> + }
> + }
> + printf("argv0_path = %s, home = %s, _prefix = %s, lib_python=%s",argv0_path, home, _prefix, lib_python);
> +
> + pfound = search_for_prefix(argv0_path, home, _prefix, lib_python);
> + if (!pfound) {
> + if (!Py_FrozenFlag)
> + fprintf(stderr,
> + "Could not find platform independent libraries <prefix>\n");
> + wcsncpy(prefix, _prefix, MAXPATHLEN);
> + joinpath(prefix, lib_python);
> + }
> + else
> + reduce(prefix);
> +
> + wcsncpy(zip_path, prefix, MAXPATHLEN);
> + zip_path[MAXPATHLEN] = L'\0';
> + if (pfound > 0) { /* Use the reduced prefix returned by Py_GetPrefix() */
> + reduce(zip_path);
> + reduce(zip_path);
> + }
> + else
> + wcsncpy(zip_path, _prefix, MAXPATHLEN);
> + joinpath(zip_path, L"lib/python36.zip");
> + bufsz = wcslen(zip_path); /* Replace "00" with version */
> + zip_path[bufsz - 6] = VERSION[0];
> + zip_path[bufsz - 5] = VERSION[2];
> +
> + efound = search_for_exec_prefix(argv0_path, home,
> + _exec_prefix, lib_python);
> + if (!efound) {
> + if (!Py_FrozenFlag)
> + fprintf(stderr,
> + "Could not find platform dependent libraries <exec_prefix>\n");
> + wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
> + joinpath(exec_prefix, L"lib/lib-dynload");
> + }
> + /* If we found EXEC_PREFIX do *not* reduce it! (Yet.) */
> +
> + if ((!pfound || !efound) && !Py_FrozenFlag)
> + fprintf(stderr,
> + "Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]\n");
> +
> + /* Calculate size of return buffer.
> + */
> + bufsz = 0;
> +
> + if (_rtpypath && _rtpypath[0] != '\0') {
> + size_t rtpypath_len;
> + rtpypath = Py_DecodeLocale(_rtpypath, &rtpypath_len);
> + if (rtpypath != NULL)
> + bufsz += rtpypath_len + 1;
> + }
> +
> + defpath = _pythonpath;
> + prefixsz = wcslen(prefix) + 1;
> + while (1) {
> + wchar_t *delim = wcschr(defpath, DELIM);
> +
> + if (defpath[0] != SEP)
> + /* Paths are relative to prefix */
> + bufsz += prefixsz;
> +
> + if (delim)
> + bufsz += delim - defpath + 1;
> + else {
> + bufsz += wcslen(defpath) + 1;
> + break;
> + }
> + defpath = delim + 1;
> + }
> +
> + bufsz += wcslen(zip_path) + 1;
> + bufsz += wcslen(exec_prefix) + 1;
> +
> + buf = PyMem_RawMalloc(bufsz * sizeof(wchar_t));
> + if (buf == NULL) {
> + Py_FatalError(
> + "Not enough memory for dynamic PYTHONPATH");
> + }
> +
> + /* Run-time value of $PYTHONPATH goes first */
> + if (rtpypath) {
> + wcscpy(buf, rtpypath);
> + wcscat(buf, delimiter);
> + }
> + else
> + buf[0] = '\0';
> +
> + /* Next is the default zip path */
> + wcscat(buf, zip_path);
> + wcscat(buf, delimiter);
> +
> + /* Next goes merge of compile-time $PYTHONPATH with
> + * dynamically located prefix.
> + */
> + defpath = _pythonpath;
> + while (1) {
> + wchar_t *delim = wcschr(defpath, DELIM);
> +
> + if (defpath[0] != SEP) {
> + wcscat(buf, prefix);
> + if (prefixsz >= 2 && prefix[prefixsz - 2] != SEP &&
> + defpath[0] != (delim ? DELIM : L'\0')) { /* not empty */
> + wcscat(buf, separator);
> + }
> + }
> +
> + if (delim) {
> + size_t len = delim - defpath + 1;
> + size_t end = wcslen(buf) + len;
> + wcsncat(buf, defpath, len);
> + *(buf + end) = '\0';
> + }
> + else {
> + wcscat(buf, defpath);
> + break;
> + }
> + defpath = delim + 1;
> + }
> + wcscat(buf, delimiter);
> +
> + /* Finally, on goes the directory for dynamic-load modules */
> + wcscat(buf, exec_prefix);
> +
> + /* And publish the results */
> + module_search_path = buf;
> +
> + /* Reduce prefix and exec_prefix to their essence,
> + * e.g. /usr/local/lib/python1.5 is reduced to /usr/local.
> + * If we're loading relative to the build directory,
> + * return the compiled-in defaults instead.
> + */
> + if (pfound > 0) {
> + reduce(prefix);
> + reduce(prefix);
> + /* The prefix is the root directory, but reduce() chopped
> + * off the "/". */
> + if (!prefix[0])
> + wcscpy(prefix, separator);
> + }
> + else
> + wcsncpy(prefix, _prefix, MAXPATHLEN);
> +
> + if (efound > 0) {
> + reduce(exec_prefix);
> + reduce(exec_prefix);
> + reduce(exec_prefix);
> + if (!exec_prefix[0])
> + wcscpy(exec_prefix, separator);
> + }
> + else
> + wcsncpy(exec_prefix, _exec_prefix, MAXPATHLEN);
> +
> + PyMem_RawFree(_pythonpath);
> + PyMem_RawFree(_prefix);
> + PyMem_RawFree(_exec_prefix);
> + PyMem_RawFree(lib_python);
> + PyMem_RawFree(rtpypath);
> +#endif
> +}
> +
> +
> +/* External interface */
> +void
> +Py_SetPath(const wchar_t *path)
> +{
> + if (module_search_path != NULL) {
> + PyMem_RawFree(module_search_path);
> + module_search_path = NULL;
> + }
> + if (path != NULL) {
> + extern wchar_t *Py_GetProgramName(void);
> + wchar_t *prog = Py_GetProgramName();
> + wcsncpy(progpath, prog, MAXPATHLEN);
> + exec_prefix[0] = prefix[0] = L'\0';
> + module_search_path = PyMem_RawMalloc((wcslen(path) + 1) * sizeof(wchar_t));
> + if (module_search_path != NULL)
> + wcscpy(module_search_path, path);
> + }
> +}
> +
> +wchar_t *
> +Py_GetPath(void)
> +{
> + if (!module_search_path)
> + calculate_path();
> + return module_search_path;
> +}
> +
> +wchar_t *
> +Py_GetPrefix(void)
> +{
> + if (!module_search_path)
> + calculate_path();
> + return prefix;
> +}
> +
> +wchar_t *
> +Py_GetExecPrefix(void)
> +{
> + if (!module_search_path)
> + calculate_path();
> + return exec_prefix;
> +}
> +
> +wchar_t *
> +Py_GetProgramFullPath(void)
> +{
> + if (!module_search_path)
> + calculate_path();
> + return progpath;
> +}
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
> new file mode 100644
> index 00000000..c46c81ca
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
> @@ -0,0 +1,878 @@
> +/* Python interpreter main program */
> +
> +#include "Python.h"
> +#include "osdefs.h"
> +
> +#include <locale.h>
> +
> +#if defined(MS_WINDOWS) || defined(__CYGWIN__)
> +#include <windows.h>
> +#ifdef HAVE_IO_H
> +#include <io.h>
> +#endif
> +#ifdef HAVE_FCNTL_H
> +#include <fcntl.h>
> +#endif
> +#endif
> +
> +#if !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
> +#ifdef _MSC_VER
> +#include <crtdbg.h>
> +#endif
> +#endif
> +
> +#if defined(MS_WINDOWS)
> +#define PYTHONHOMEHELP "<prefix>\\python{major}{minor}"
> +#else
> +#define PYTHONHOMEHELP "<prefix>/lib/pythonX.X"
> +#endif
> +
> +#include "pygetopt.h"
> +
> +#define COPYRIGHT \
> + "Type \"help\", \"copyright\", \"credits\" or \"license\" " \
> + "for more information."
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* For Py_GetArgcArgv(); set by main() */
> +static wchar_t **orig_argv;
> +static int orig_argc;
> +
> +/* command line options */
> +#define BASE_OPTS L"$bBc:dEhiIJm:OqRsStuvVW:xX:?"
> +
> +#define PROGRAM_OPTS BASE_OPTS
> +
> +/* Short usage message (with %s for argv0) */
> +static const char usage_line[] =
> +"usage: %ls [option] ... [-c cmd | -m mod | file | -] [arg] ...\n";
> +
> +/* Long usage message, split into parts < 512 bytes */
> +static const char usage_1[] = "\
> +Options and arguments (and corresponding environment variables):\n\
> +-b : issue warnings about str(bytes_instance), str(bytearray_instance)\n\
> + and comparing bytes/bytearray with str. (-bb: issue errors)\n\
> +-B : don't write .pyc files on import; also PYTHONDONTWRITEBYTECODE=x\n\
> +-c cmd : program passed in as string (terminates option list)\n\
> +-d : debug output from parser; also PYTHONDEBUG=x\n\
> +-E : ignore PYTHON* environment variables (such as PYTHONPATH)\n\
> +-h : print this help message and exit (also --help)\n\
> +";
> +static const char usage_2[] = "\
> +-i : inspect interactively after running script; forces a prompt even\n\
> + if stdin does not appear to be a terminal; also PYTHONINSPECT=x\n\
> +-I : isolate Python from the user's environment (implies -E and -s)\n\
> +-m mod : run library module as a script (terminates option list)\n\
> +-O : remove assert and __debug__-dependent statements; add .opt-1 before\n\
> + .pyc extension; also PYTHONOPTIMIZE=x\n\
> +-OO : do -O changes and also discard docstrings; add .opt-2 before\n\
> + .pyc extension\n\
> +-q : don't print version and copyright messages on interactive startup\n\
> +-s : don't add user site directory to sys.path; also PYTHONNOUSERSITE\n\
> +-S : don't imply 'import site' on initialization\n\
> +";
> +static const char usage_3[] = "\
> +-u : force the binary I/O layers of stdout and stderr to be unbuffered;\n\
> + stdin is always buffered; text I/O layer will be line-buffered;\n\
> + also PYTHONUNBUFFERED=x\n\
> +-v : verbose (trace import statements); also PYTHONVERBOSE=x\n\
> + can be supplied multiple times to increase verbosity\n\
> +-V : print the Python version number and exit (also --version)\n\
> + when given twice, print more information about the build\n\
> +-W arg : warning control; arg is action:message:category:module:lineno\n\
> + also PYTHONWARNINGS=arg\n\
> +-x : skip first line of source, allowing use of non-Unix forms of #!cmd\n\
> +-X opt : set implementation-specific option\n\
> +";
> +static const char usage_4[] = "\
> +file : program read from script file\n\
> +- : program read from stdin (default; interactive mode if a tty)\n\
> +arg ...: arguments passed to program in sys.argv[1:]\n\n\
> +Other environment variables:\n\
> +PYTHONSTARTUP: file executed on interactive startup (no default)\n\
> +PYTHONPATH : '%lc'-separated list of directories prefixed to the\n\
> + default module search path. The result is sys.path.\n\
> +";
> +static const char usage_5[] =
> +"PYTHONHOME : alternate <prefix> directory (or <prefix>%lc<exec_prefix>).\n"
> +" The default module search path uses %s.\n"
> +"PYTHONCASEOK : ignore case in 'import' statements (UEFI Default).\n"
> +"PYTHONIOENCODING: Encoding[:errors] used for stdin/stdout/stderr.\n"
> +"PYTHONFAULTHANDLER: dump the Python traceback on fatal errors.\n";
> +static const char usage_6[] =
> +"PYTHONMALLOC: set the Python memory allocators and/or install debug hooks\n"
> +" on Python memory allocators. Use PYTHONMALLOC=debug to install debug\n"
> +" hooks.\n";
> +
> +static int
> +usage(int exitcode, const wchar_t* program)
> +{
> + FILE *f = exitcode ? stderr : stdout;
> +
> + fprintf(f, usage_line, program);
> + if (exitcode)
> + fprintf(f, "Try `python -h' for more information.\n");
> + else {
> + fputs(usage_1, f);
> + fputs(usage_2, f);
> + fputs(usage_3, f);
> + fprintf(f, usage_4, (wint_t)DELIM);
> + fprintf(f, usage_5, (wint_t)DELIM, PYTHONHOMEHELP);
> + //fputs(usage_6, f);
> + }
> + return exitcode;
> +}
> +
> +static void RunStartupFile(PyCompilerFlags *cf)
> +{
> + char *startup = Py_GETENV("PYTHONSTARTUP");
> + if (startup != NULL && startup[0] != '\0') {
> + FILE *fp = _Py_fopen(startup, "r");
> + if (fp != NULL) {
> + (void) PyRun_SimpleFileExFlags(fp, startup, 0, cf);
> + PyErr_Clear();
> + fclose(fp);
> + } else {
> + int save_errno;
> +
> + save_errno = errno;
> + PySys_WriteStderr("Could not open PYTHONSTARTUP\n");
> + errno = save_errno;
> + PyErr_SetFromErrnoWithFilename(PyExc_IOError,
> + startup);
> + PyErr_Print();
> + PyErr_Clear();
> + }
> + }
> +}
> +
> +static void RunInteractiveHook(void)
> +{
> + PyObject *sys, *hook, *result;
> + sys = PyImport_ImportModule("sys");
> + if (sys == NULL)
> + goto error;
> + hook = PyObject_GetAttrString(sys, "__interactivehook__");
> + Py_DECREF(sys);
> + if (hook == NULL)
> + PyErr_Clear();
> + else {
> + result = PyObject_CallObject(hook, NULL);
> + Py_DECREF(hook);
> + if (result == NULL)
> + goto error;
> + else
> + Py_DECREF(result);
> + }
> + return;
> +
> +error:
> + PySys_WriteStderr("Failed calling sys.__interactivehook__\n");
> + PyErr_Print();
> + PyErr_Clear();
> +}
> +
> +
> +static int RunModule(wchar_t *modname, int set_argv0)
> +{
> + PyObject *module, *runpy, *runmodule, *runargs, *result;
> + runpy = PyImport_ImportModule("runpy");
> + if (runpy == NULL) {
> + fprintf(stderr, "Could not import runpy module\n");
> + PyErr_Print();
> + return -1;
> + }
> + runmodule = PyObject_GetAttrString(runpy, "_run_module_as_main");
> + if (runmodule == NULL) {
> + fprintf(stderr, "Could not access runpy._run_module_as_main\n");
> + PyErr_Print();
> + Py_DECREF(runpy);
> + return -1;
> + }
> + module = PyUnicode_FromWideChar(modname, wcslen(modname));
> + if (module == NULL) {
> + fprintf(stderr, "Could not convert module name to unicode\n");
> + PyErr_Print();
> + Py_DECREF(runpy);
> + Py_DECREF(runmodule);
> + return -1;
> + }
> + runargs = Py_BuildValue("(Oi)", module, set_argv0);
> + if (runargs == NULL) {
> + fprintf(stderr,
> + "Could not create arguments for runpy._run_module_as_main\n");
> + PyErr_Print();
> + Py_DECREF(runpy);
> + Py_DECREF(runmodule);
> + Py_DECREF(module);
> + return -1;
> + }
> + result = PyObject_Call(runmodule, runargs, NULL);
> + if (result == NULL) {
> + PyErr_Print();
> + }
> + Py_DECREF(runpy);
> + Py_DECREF(runmodule);
> + Py_DECREF(module);
> + Py_DECREF(runargs);
> + if (result == NULL) {
> + return -1;
> + }
> + Py_DECREF(result);
> + return 0;
> +}
> +
> +static PyObject *
> +AsImportPathEntry(wchar_t *filename)
> +{
> + PyObject *sys_path0 = NULL, *importer;
> +
> + sys_path0 = PyUnicode_FromWideChar(filename, wcslen(filename));
> + if (sys_path0 == NULL)
> + goto error;
> +
> + importer = PyImport_GetImporter(sys_path0);
> + if (importer == NULL)
> + goto error;
> +
> + if (importer == Py_None) {
> + Py_DECREF(sys_path0);
> + Py_DECREF(importer);
> + return NULL;
> + }
> + Py_DECREF(importer);
> + return sys_path0;
> +
> +error:
> + Py_XDECREF(sys_path0);
> + PySys_WriteStderr("Failed checking if argv[0] is an import path entry\n");
> + PyErr_Print();
> + PyErr_Clear();
> + return NULL;
> +}
> +
> +
> +static int
> +RunMainFromImporter(PyObject *sys_path0)
> +{
> + PyObject *sys_path;
> + int sts;
> +
> + /* Assume sys_path0 has already been checked by AsImportPathEntry,
> + * so put it in sys.path[0] and import __main__ */
> + sys_path = PySys_GetObject("path");
> + if (sys_path == NULL) {
> + PyErr_SetString(PyExc_RuntimeError, "unable to get sys.path");
> + goto error;
> + }
> + sts = PyList_Insert(sys_path, 0, sys_path0);
> + if (sts) {
> + sys_path0 = NULL;
> + goto error;
> + }
> +
> + sts = RunModule(L"__main__", 0);
> + return sts != 0;
> +
> +error:
> + Py_XDECREF(sys_path0);
> + PyErr_Print();
> + return 1;
> +}
> +
> +static int
> +run_command(wchar_t *command, PyCompilerFlags *cf)
> +{
> + PyObject *unicode, *bytes;
> + int ret;
> +
> + unicode = PyUnicode_FromWideChar(command, -1);
> + if (unicode == NULL)
> + goto error;
> + bytes = PyUnicode_AsUTF8String(unicode);
> + Py_DECREF(unicode);
> + if (bytes == NULL)
> + goto error;
> + ret = PyRun_SimpleStringFlags(PyBytes_AsString(bytes), cf);
> + Py_DECREF(bytes);
> + return ret != 0;
> +
> +error:
> + PySys_WriteStderr("Unable to decode the command from the command line:\n");
> + PyErr_Print();
> + return 1;
> +}
> +
> +static int
> +run_file(FILE *fp, const wchar_t *filename, PyCompilerFlags *p_cf)
> +{
> + PyObject *unicode, *bytes = NULL;
> + char *filename_str;
> + int run;
> + /* call pending calls like signal handlers (SIGINT) */
> + if (Py_MakePendingCalls() == -1) {
> + PyErr_Print();
> + return 1;
> + }
> +
> + if (filename) {
> + unicode = PyUnicode_FromWideChar(filename, wcslen(filename));
> + if (unicode != NULL) {
> + bytes = PyUnicode_EncodeFSDefault(unicode);
> + Py_DECREF(unicode);
> + }
> + if (bytes != NULL)
> + filename_str = PyBytes_AsString(bytes);
> + else {
> + PyErr_Clear();
> + filename_str = "<encoding error>";
> + }
> + }
> + else
> + filename_str = "<stdin>";
> +
> + run = PyRun_AnyFileExFlags(fp, filename_str, filename != NULL, p_cf);
> + Py_XDECREF(bytes);
> + return run != 0;
> +}
> +
> +
> +/* Main program */
> +
> +int
> +Py_Main(int argc, wchar_t **argv)
> +{
> + int c;
> + int sts;
> + wchar_t *command = NULL;
> + wchar_t *filename = NULL;
> + wchar_t *module = NULL;
> + FILE *fp = stdin;
> + char *p;
> +#ifdef MS_WINDOWS
> + wchar_t *wp;
> +#endif
> + int skipfirstline = 0;
> + int stdin_is_interactive = 0;
> + int help = 0;
> + int version = 0;
> + int saw_unbuffered_flag = 0;
> + int saw_pound_flag = 0;
> + char *opt;
> + PyCompilerFlags cf;
> + PyObject *main_importer_path = NULL;
> + PyObject *warning_option = NULL;
> + PyObject *warning_options = NULL;
> +
> + cf.cf_flags = 0;
> +
> + orig_argc = argc; /* For Py_GetArgcArgv() */
> + orig_argv = argv;
> +
> + /* Hash randomization needed early for all string operations
> + (including -W and -X options). */
> + _PyOS_opterr = 0; /* prevent printing the error in 1st pass */
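> + /* First, lightweight pass over the options: it only needs to notice -E (and an
> + early -c/-m terminator) so that the Py_GETENV() calls below already honour -E;
> + the full option parse happens after _PyOS_ResetGetOpt(). */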
> + while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
> + if (c == 'm' || c == 'c') {
> + /* -c / -m is the last option: following arguments are
> + not interpreter options. */
> + break;
> + }
> + if (c == 'E') {
> + Py_IgnoreEnvironmentFlag++;
> + break;
> + }
> + }
> +
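> + /* UEFI-specific: re-open stderr onto the "stdout:" console device so error output
> + shares the Shell's standard output, presumably because the UEFI console typically
> + provides no separate stderr stream. */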
> + if (saw_pound_flag == 0) {
> + if (freopen("stdout:", "w", stderr) == NULL) {
> + puts("ERROR: Unable to reopen stderr as an alias to stdout!");
> + }
> + saw_pound_flag = 0xFF;
> + }
> +
> +#if 0
> + opt = Py_GETENV("PYTHONMALLOC");
> + if (_PyMem_SetupAllocators(opt) < 0) {
> + fprintf(stderr,
> + "Error in PYTHONMALLOC: unknown allocator \"%s\"!\n", opt);
> + exit(1);
> + }
> +#endif
> + _PyRandom_Init();
> +
> + PySys_ResetWarnOptions();
> + _PyOS_ResetGetOpt();
> +
> +
> + while ((c = _PyOS_GetOpt(argc, argv, PROGRAM_OPTS)) != EOF) {
> + if (c == 'c') {
> + size_t len;
> + /* -c is the last option; following arguments
> + that look like options are left for the
> + command to interpret. */
> +
> + len = wcslen(_PyOS_optarg) + 1 + 1;
> + command = (wchar_t *)PyMem_RawMalloc(sizeof(wchar_t) * len);
> + if (command == NULL)
> + Py_FatalError(
> + "not enough memory to copy -c argument");
> + wcscpy(command, _PyOS_optarg);
> + command[len - 2] = '\n';
> + command[len - 1] = 0;
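> + /* The trailing newline makes the -c string a complete statement for the parser. */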
> + break;
> + }
> +
> + if (c == 'm') {
> + /* -m is the last option; following arguments
> + that look like options are left for the
> + module to interpret. */
> + module = _PyOS_optarg;
> + break;
> + }
> +
> + switch (c) {
> + case 'b':
> + Py_BytesWarningFlag++;
> + break;
> +
> + case 'd':
> + Py_DebugFlag++;
> + break;
> +
> + case 'i':
> + Py_InspectFlag++;
> + Py_InteractiveFlag++;
> + break;
> +
> + case 'I':
> + Py_IsolatedFlag++;
> + Py_NoUserSiteDirectory++;
> + Py_IgnoreEnvironmentFlag++;
> + break;
> +
> + /* case 'J': reserved for Jython */
> +
> + case 'O':
> + Py_OptimizeFlag++;
> + break;
> +
> + case 'B':
> + Py_DontWriteBytecodeFlag++;
> + break;
> +
> + case 's':
> + Py_NoUserSiteDirectory++;
> + break;
> +
> + case 'S':
> + Py_NoSiteFlag++;
> + break;
> +
> + case 'E':
> + /* Already handled above */
> + break;
> +
> + case 't':
> + /* ignored for backwards compatibility */
> + break;
> +
> + case 'u':
> + Py_UnbufferedStdioFlag = 1;
> + saw_unbuffered_flag = 1;
> + break;
> +
> + case 'v':
> + Py_VerboseFlag++;
> + break;
> +
> + case 'x':
> + skipfirstline = 1;
> + break;
> +
> + case 'h':
> + case '?':
> + help++;
> + break;
> +
> + case 'V':
> + version++;
> + break;
> +
> + case 'W':
> + if (warning_options == NULL)
> + warning_options = PyList_New(0);
> + if (warning_options == NULL)
> + Py_FatalError("failure in handling of -W argument");
> + warning_option = PyUnicode_FromWideChar(_PyOS_optarg, -1);
> + if (warning_option == NULL)
> + Py_FatalError("failure in handling of -W argument");
> + if (PyList_Append(warning_options, warning_option) == -1)
> + Py_FatalError("failure in handling of -W argument");
> + Py_DECREF(warning_option);
> + break;
> +
> + case 'X':
> + PySys_AddXOption(_PyOS_optarg);
> + break;
> +
> + case 'q':
> + Py_QuietFlag++;
> + break;
> +
> + case '$':
> + /* Ignored */
> + break;
> +
> + case 'R':
> + /* Ignored */
> + break;
> +
> + /* This space reserved for other options */
> +
> + default:
> + return usage(2, argv[0]);
> + /*NOTREACHED*/
> +
> + }
> + }
> +
> + if (help)
> + return usage(0, argv[0]);
> +
> + if (version) {
> + printf("Python %s\n", version >= 2 ? Py_GetVersion() : PY_VERSION);
> + return 0;
> + }
> +
> + if (!Py_InspectFlag &&
> + (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
> + Py_InspectFlag = 1;
> + if (!saw_unbuffered_flag &&
> + (p = Py_GETENV("PYTHONUNBUFFERED")) && *p != '\0')
> + Py_UnbufferedStdioFlag = 1;
> +
> + if (!Py_NoUserSiteDirectory &&
> + (p = Py_GETENV("PYTHONNOUSERSITE")) && *p != '\0')
> + Py_NoUserSiteDirectory = 1;
> +
> +#ifdef MS_WINDOWS
> + if (!Py_IgnoreEnvironmentFlag && (wp = _wgetenv(L"PYTHONWARNINGS")) &&
> + *wp != L'\0') {
> + wchar_t *buf, *warning, *context = NULL;
> +
> + buf = (wchar_t *)PyMem_RawMalloc((wcslen(wp) + 1) * sizeof(wchar_t));
> + if (buf == NULL)
> + Py_FatalError(
> + "not enough memory to copy PYTHONWARNINGS");
> + wcscpy(buf, wp);
> + for (warning = wcstok_s(buf, L",", &context);
> + warning != NULL;
> + warning = wcstok_s(NULL, L",", &context)) {
> + PySys_AddWarnOption(warning);
> + }
> + PyMem_RawFree(buf);
> + }
> +#else
> + if ((p = Py_GETENV("PYTHONWARNINGS")) && *p != '\0') {
> + char *buf, *oldloc;
> + PyObject *unicode;
> +
> + /* settle for strtok here as there's no one standard
> + C89 wcstok */
> + buf = (char *)PyMem_RawMalloc(strlen(p) + 1);
> + if (buf == NULL)
> + Py_FatalError(
> + "not enough memory to copy PYTHONWARNINGS");
> + strcpy(buf, p);
> + oldloc = _PyMem_RawStrdup(setlocale(LC_ALL, NULL));
> + setlocale(LC_ALL, "");
> + for (p = strtok(buf, ","); p != NULL; p = strtok(NULL, ",")) {
> +#ifdef __APPLE__
> + /* Use utf-8 on Mac OS X */
> + unicode = PyUnicode_FromString(p);
> +#else
> + unicode = PyUnicode_DecodeLocale(p, "surrogateescape");
> +#endif
> + if (unicode == NULL) {
> + /* ignore errors */
> + PyErr_Clear();
> + continue;
> + }
> + PySys_AddWarnOptionUnicode(unicode);
> + Py_DECREF(unicode);
> + }
> + setlocale(LC_ALL, oldloc);
> + PyMem_RawFree(oldloc);
> + PyMem_RawFree(buf);
> + }
> +#endif
> + if (warning_options != NULL) {
> + Py_ssize_t i;
> + for (i = 0; i < PyList_GET_SIZE(warning_options); i++) {
> + PySys_AddWarnOptionUnicode(PyList_GET_ITEM(warning_options, i));
> + }
> + }
> +
> + if (command == NULL && module == NULL && _PyOS_optind < argc &&
> + wcscmp(argv[_PyOS_optind], L"-") != 0)
> + {
> + filename = argv[_PyOS_optind];
> + }
> +
> + stdin_is_interactive = Py_FdIsInteractive(stdin, (char *)0);
> +#if defined(MS_WINDOWS) || defined(__CYGWIN__)
> + /* don't translate newlines (\r\n <=> \n) */
> + _setmode(fileno(stdin), O_BINARY);
> + _setmode(fileno(stdout), O_BINARY);
> + _setmode(fileno(stderr), O_BINARY);
> +#endif
> +
> + if (Py_UnbufferedStdioFlag) {
> +#ifdef HAVE_SETVBUF
> + setvbuf(stdin, (char *)NULL, _IONBF, BUFSIZ);
> + setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
> + setvbuf(stderr, (char *)NULL, _IONBF, BUFSIZ);
> +#else /* !HAVE_SETVBUF */
> + setbuf(stdin, (char *)NULL);
> + setbuf(stdout, (char *)NULL);
> + setbuf(stderr, (char *)NULL);
> +#endif /* !HAVE_SETVBUF */
> + }
> + else if (Py_InteractiveFlag) {
> +#ifdef MS_WINDOWS
> + /* Doesn't have to have line-buffered -- use unbuffered */
> + /* Any set[v]buf(stdin, ...) screws up Tkinter :-( */
> + setvbuf(stdout, (char *)NULL, _IONBF, BUFSIZ);
> +#else /* !MS_WINDOWS */
> +#ifdef HAVE_SETVBUF
> + setvbuf(stdin, (char *)NULL, _IOLBF, BUFSIZ);
> + setvbuf(stdout, (char *)NULL, _IOLBF, BUFSIZ);
> +#endif /* HAVE_SETVBUF */
> +#endif /* !MS_WINDOWS */
> + /* Leave stderr alone - it should be unbuffered anyway. */
> + }
> +
> +#ifdef __APPLE__
> + /* On MacOS X, when the Python interpreter is embedded in an
> + application bundle, it gets executed by a bootstrapping script
> + that does os.execve() with an argv[0] that's different from the
> + actual Python executable. This is needed to keep the Finder happy,
> + or rather, to work around Apple's overly strict requirements of
> + the process name. However, we still need a usable sys.executable,
> + so the actual executable path is passed in an environment variable.
> + See Lib/plat-mac/bundlebuiler.py for details about the bootstrap
> + script. */
> + if ((p = Py_GETENV("PYTHONEXECUTABLE")) && *p != '\0') {
> + wchar_t* buffer;
> + size_t len = strlen(p) + 1;
> +
> + buffer = PyMem_RawMalloc(len * sizeof(wchar_t));
> + if (buffer == NULL) {
> + Py_FatalError(
> + "not enough memory to copy PYTHONEXECUTABLE");
> + }
> +
> + mbstowcs(buffer, p, len);
> + Py_SetProgramName(buffer);
> + /* buffer is now handed off - do not free */
> + } else {
> +#ifdef WITH_NEXT_FRAMEWORK
> + char* pyvenv_launcher = getenv("__PYVENV_LAUNCHER__");
> +
> + if (pyvenv_launcher && *pyvenv_launcher) {
> + /* Used by Mac/Tools/pythonw.c to forward
> + * the argv0 of the stub executable
> + */
> + wchar_t* wbuf = Py_DecodeLocale(pyvenv_launcher, NULL);
> +
> + if (wbuf == NULL) {
> + Py_FatalError("Cannot decode __PYVENV_LAUNCHER__");
> + }
> + Py_SetProgramName(wbuf);
> +
> + /* Don't free wbuf, the argument to Py_SetProgramName
> + * must remain valid until Py_FinalizeEx is called.
> + */
> + } else {
> + Py_SetProgramName(argv[0]);
> + }
> +#else
> + Py_SetProgramName(argv[0]);
> +#endif
> + }
> +#else
> + Py_SetProgramName(argv[0]);
> +#endif
> + Py_Initialize();
> + Py_XDECREF(warning_options);
> +
> + if (!Py_QuietFlag && (Py_VerboseFlag ||
> + (command == NULL && filename == NULL &&
> + module == NULL && stdin_is_interactive))) {
> + fprintf(stderr, "Python %s on %s\n",
> + Py_GetVersion(), Py_GetPlatform());
> + if (!Py_NoSiteFlag)
> + fprintf(stderr, "%s\n", COPYRIGHT);
> + }
> +
> + if (command != NULL) {
> + /* Backup _PyOS_optind and force sys.argv[0] = '-c' */
> + _PyOS_optind--;
> + argv[_PyOS_optind] = L"-c";
> + }
> +
> + if (module != NULL) {
> + /* Backup _PyOS_optind and force sys.argv[0] = '-m'*/
> + _PyOS_optind--;
> + argv[_PyOS_optind] = L"-m";
> + }
> +
> + if (filename != NULL) {
> + main_importer_path = AsImportPathEntry(filename);
> + }
> +
> + if (main_importer_path != NULL) {
> + /* Let RunMainFromImporter adjust sys.path[0] later */
> + PySys_SetArgvEx(argc-_PyOS_optind, argv+_PyOS_optind, 0);
> + } else {
> + /* Use config settings to decide whether or not to update sys.path[0] */
> + PySys_SetArgv(argc-_PyOS_optind, argv+_PyOS_optind);
> + }
> +
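> + /* For interactive use, try to pre-import readline so the prompt gets line editing
> + and history; failure to import it is silently ignored. */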
> + if ((Py_InspectFlag || (command == NULL && filename == NULL && module == NULL)) &&
> + isatty(fileno(stdin)) &&
> + !Py_IsolatedFlag) {
> + PyObject *v;
> + v = PyImport_ImportModule("readline");
> + if (v == NULL)
> + PyErr_Clear();
> + else
> + Py_DECREF(v);
> + }
> +
> + if (command) {
> + sts = run_command(command, &cf);
> + PyMem_RawFree(command);
> + } else if (module) {
> + sts = (RunModule(module, 1) != 0);
> + }
> + else {
> +
> + if (filename == NULL && stdin_is_interactive) {
> + Py_InspectFlag = 0; /* do exit on SystemExit */
> + RunStartupFile(&cf);
> + RunInteractiveHook();
> + }
> + /* XXX */
> +
> + sts = -1; /* keep track of whether we've already run __main__ */
> +
> + if (main_importer_path != NULL) {
> + sts = RunMainFromImporter(main_importer_path);
> + }
> +
> + if (sts==-1 && filename != NULL) {
> + fp = _Py_wfopen(filename, L"r");
> + if (fp == NULL) {
> + char *cfilename_buffer;
> + const char *cfilename;
> + int err = errno;
> + cfilename_buffer = Py_EncodeLocale(filename, NULL);
> + if (cfilename_buffer != NULL)
> + cfilename = cfilename_buffer;
> + else
> + cfilename = "<unprintable file name>";
> + fprintf(stderr, "%ls: can't open file '%s': [Errno %d] %s\n",
> + argv[0], cfilename, err, strerror(err));
> + if (cfilename_buffer)
> + PyMem_Free(cfilename_buffer);
> + return 2;
> + }
> + else if (skipfirstline) {
> + int ch;
> + /* Push back first newline so line numbers
> + remain the same */
> + while ((ch = getc(fp)) != EOF) {
> + if (ch == '\n') {
> + (void)ungetc(ch, fp);
> + break;
> + }
> + }
> + }
> + {
> + struct _Py_stat_struct sb;
> + if (_Py_fstat_noraise(fileno(fp), &sb) == 0 &&
> + S_ISDIR(sb.st_mode)) {
> + fprintf(stderr,
> + "%ls: '%ls' is a directory, cannot continue\n",
> + argv[0], filename);
> + fclose(fp);
> + return 1;
> + }
> + }
> + }
> +
> + if (sts == -1)
> + sts = run_file(fp, filename, &cf);
> + }
> +
> + /* Check this environment variable at the end, to give programs the
> + * opportunity to set it from Python.
> + */
> + if (!Py_InspectFlag &&
> + (p = Py_GETENV("PYTHONINSPECT")) && *p != '\0')
> + {
> + Py_InspectFlag = 1;
> + }
> +
> + if (Py_InspectFlag && stdin_is_interactive &&
> + (filename != NULL || command != NULL || module != NULL)) {
> + Py_InspectFlag = 0;
> + RunInteractiveHook();
> + /* XXX */
> + sts = PyRun_AnyFileFlags(stdin, "<stdin>", &cf) != 0;
> + }
> +
> + if (Py_FinalizeEx() < 0) {
> + /* Value unlikely to be confused with a non-error exit status or
> + other special meaning */
> + sts = 120;
> + }
> +
> +#ifdef __INSURE__
> + /* Insure++ is a memory analysis tool that aids in discovering
> + * memory leaks and other memory problems. On Python exit, the
> + * interned string dictionaries are flagged as being in use at exit
> + * (which it is). Under normal circumstances, this is fine because
> + * the memory will be automatically reclaimed by the system. Under
> + * memory debugging, it's a huge source of useless noise, so we
> + * trade off slower shutdown for less distraction in the memory
> + * reports. -baw
> + */
> + _Py_ReleaseInternedUnicodeStrings();
> +#endif /* __INSURE__ */
> +
> + return sts;
> +}
> +
> +/* this is gonna seem *real weird*, but if you put some other code between
> + Py_Main() and Py_GetArgcArgv() you will need to adjust the test in the
> + while statement in Misc/gdbinit:ppystack */
> +
> +/* Make the *original* argc/argv available to other modules.
> + This is rare, but it is needed by the secureware extension. */
> +
> +void
> +Py_GetArgcArgv(int *argc, wchar_t ***argv)
> +{
> + *argc = orig_argc;
> + *argv = orig_argv;
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
> new file mode 100644
> index 00000000..7072f5ee
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
> @@ -0,0 +1,2638 @@
> +/* select - Module containing unix select(2) call.
> + Under Unix, the file descriptors are small integers.
> + Under Win32, select only exists for sockets, and sockets may
> + have any value except INVALID_SOCKET.
> +*/
> +
> +#if defined(HAVE_POLL_H) && !defined(_GNU_SOURCE)
> +#define _GNU_SOURCE
> +#endif
> +
> +#include "Python.h"
> +#include <structmember.h>
> +
> +#ifdef HAVE_SYS_DEVPOLL_H
> +#include <sys/resource.h>
> +#include <sys/devpoll.h>
> +#include <sys/types.h>
> +#include <sys/stat.h>
> +#include <fcntl.h>
> +#endif
> +
> +#ifdef __APPLE__
> + /* Perform runtime testing for a broken poll on OSX to make it easier
> + * to use the same binary on multiple releases of the OS.
> + */
> +#undef HAVE_BROKEN_POLL
> +#endif
> +
> +/* Windows #defines FD_SETSIZE to 64 if FD_SETSIZE isn't already defined.
> + 64 is too small (too many people have bumped into that limit).
> + Here we boost it.
> + Users who want even more than the boosted limit should #define
> + FD_SETSIZE higher before this; e.g., via compiler /D switch.
> +*/
> +#if defined(MS_WINDOWS) && !defined(FD_SETSIZE)
> +#define FD_SETSIZE 512
> +#endif
> +
> +#if defined(HAVE_POLL_H)
> +#include <poll.h>
> +#elif defined(HAVE_SYS_POLL_H)
> +#include <sys/poll.h>
> +#endif
> +
> +#ifdef __sgi
> +/* This is missing from unistd.h */
> +extern void bzero(void *, int);
> +#endif
> +
> +#ifdef HAVE_SYS_TYPES_H
> +#include <sys/types.h>
> +#endif
> +
> +#ifdef MS_WINDOWS
> +# define WIN32_LEAN_AND_MEAN
> +# include <winsock.h>
> +#else
> +# define SOCKET int
> +#endif
> +
> +/* list of Python objects and their file descriptor */
> +typedef struct {
> + PyObject *obj; /* owned reference */
> + SOCKET fd;
> + int sentinel; /* -1 == sentinel */
> +} pylist;
> +
> +static void
> +reap_obj(pylist fd2obj[FD_SETSIZE + 1])
> +{
> + unsigned int i;
> + for (i = 0; i < (unsigned int)FD_SETSIZE + 1 && fd2obj[i].sentinel >= 0; i++) {
> + Py_CLEAR(fd2obj[i].obj);
> + }
> + fd2obj[0].sentinel = -1;
> +}
> +
> +
> +/* returns -1 and sets the Python exception if an error occurred, otherwise
> + returns a number >= 0
> +*/
> +static int
> +seq2set(PyObject *seq, fd_set *set, pylist fd2obj[FD_SETSIZE + 1])
> +{
> + int max = -1;
> + unsigned int index = 0;
> + Py_ssize_t i;
> + PyObject* fast_seq = NULL;
> + PyObject* o = NULL;
> +
> + fd2obj[0].obj = (PyObject*)0; /* set list to zero size */
> + FD_ZERO(set);
> +
> + fast_seq = PySequence_Fast(seq, "arguments 1-3 must be sequences");
> + if (!fast_seq)
> + return -1;
> +
> + for (i = 0; i < PySequence_Fast_GET_SIZE(fast_seq); i++) {
> + SOCKET v;
> +
> + /* any intervening fileno() calls could decr this refcnt */
> + if (!(o = PySequence_Fast_GET_ITEM(fast_seq, i)))
> + goto finally;
> +
> + Py_INCREF(o);
> + v = PyObject_AsFileDescriptor( o );
> + if (v == -1) goto finally;
> +
> +#if defined(_MSC_VER) && !defined(UEFI_C_SOURCE)
> + max = 0; /* not used for Win32 */
> +#else /* !_MSC_VER */
> + if (!_PyIsSelectable_fd(v)) {
> + PyErr_SetString(PyExc_ValueError,
> + "filedescriptor out of range in select()");
> + goto finally;
> + }
> + if (v > max)
> + max = v;
> +#endif /* _MSC_VER */
> + FD_SET(v, set);
> +
> + /* add object and its file descriptor to the list */
> + if (index >= (unsigned int)FD_SETSIZE) {
> + PyErr_SetString(PyExc_ValueError,
> + "too many file descriptors in select()");
> + goto finally;
> + }
> + fd2obj[index].obj = o;
> + fd2obj[index].fd = v;
> + fd2obj[index].sentinel = 0;
> + fd2obj[++index].sentinel = -1;
> + }
> + Py_DECREF(fast_seq);
> + return max+1;
> +
> + finally:
> + Py_XDECREF(o);
> + Py_DECREF(fast_seq);
> + return -1;
> +}
> +
> +/* returns NULL and sets the Python exception if an error occurred */
> +static PyObject *
> +set2list(fd_set *set, pylist fd2obj[FD_SETSIZE + 1])
> +{
> + int i, j, count=0;
> + PyObject *list, *o;
> + SOCKET fd;
> +
> + for (j = 0; fd2obj[j].sentinel >= 0; j++) {
> + if (FD_ISSET(fd2obj[j].fd, set))
> + count++;
> + }
> + list = PyList_New(count);
> + if (!list)
> + return NULL;
> +
> + i = 0;
> + for (j = 0; fd2obj[j].sentinel >= 0; j++) {
> + fd = fd2obj[j].fd;
> + if (FD_ISSET(fd, set)) {
> + o = fd2obj[j].obj;
> + fd2obj[j].obj = NULL;
> + /* transfer ownership */
> + if (PyList_SetItem(list, i, o) < 0)
> + goto finally;
> +
> + i++;
> + }
> + }
> + return list;
> + finally:
> + Py_DECREF(list);
> + return NULL;
> +}
> +
> +#undef SELECT_USES_HEAP
> +#if FD_SETSIZE > 1024
> +#define SELECT_USES_HEAP
> +#endif /* FD_SETSIZE > 1024 */
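> +/* With FD_SETSIZE > 1024 the three fd-to-object arrays are too large to keep on the
> + stack, so select_select() allocates them on the heap instead. */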
> +
> +static PyObject *
> +select_select(PyObject *self, PyObject *args)
> +{
> +#ifdef SELECT_USES_HEAP
> + pylist *rfd2obj, *wfd2obj, *efd2obj;
> +#else /* !SELECT_USES_HEAP */
> + /* XXX: All this should probably be implemented as follows:
> + * - find the highest descriptor we're interested in
> + * - add one
> + * - that's the size
> + * See: Stevens, APitUE, $12.5.1
> + */
> + pylist rfd2obj[FD_SETSIZE + 1];
> + pylist wfd2obj[FD_SETSIZE + 1];
> + pylist efd2obj[FD_SETSIZE + 1];
> +#endif /* SELECT_USES_HEAP */
> + PyObject *ifdlist, *ofdlist, *efdlist;
> + PyObject *ret = NULL;
> + PyObject *timeout_obj = Py_None;
> + fd_set ifdset, ofdset, efdset;
> + struct timeval tv, *tvp;
> + int imax, omax, emax, max;
> + int n;
> + _PyTime_t timeout, deadline = 0;
> +
> + /* convert arguments */
> + if (!PyArg_UnpackTuple(args, "select", 3, 4,
> + &ifdlist, &ofdlist, &efdlist, &timeout_obj))
> + return NULL;
> +
> + if (timeout_obj == Py_None)
> + tvp = (struct timeval *)NULL;
> + else {
> + if (_PyTime_FromSecondsObject(&timeout, timeout_obj,
> + _PyTime_ROUND_TIMEOUT) < 0) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError)) {
> + PyErr_SetString(PyExc_TypeError,
> + "timeout must be a float or None");
> + }
> + return NULL;
> + }
> +
> + if (_PyTime_AsTimeval(timeout, &tv, _PyTime_ROUND_TIMEOUT) == -1)
> + return NULL;
> + if (tv.tv_sec < 0) {
> + PyErr_SetString(PyExc_ValueError, "timeout must be non-negative");
> + return NULL;
> + }
> + tvp = &tv;
> + }
> +
> +#ifdef SELECT_USES_HEAP
> + /* Allocate memory for the lists */
> + rfd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
> + wfd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
> + efd2obj = PyMem_NEW(pylist, FD_SETSIZE + 1);
> + if (rfd2obj == NULL || wfd2obj == NULL || efd2obj == NULL) {
> + if (rfd2obj) PyMem_DEL(rfd2obj);
> + if (wfd2obj) PyMem_DEL(wfd2obj);
> + if (efd2obj) PyMem_DEL(efd2obj);
> + return PyErr_NoMemory();
> + }
> +#endif /* SELECT_USES_HEAP */
> +
> + /* Convert sequences to fd_sets, and get maximum fd number
> + * propagates the Python exception set in seq2set()
> + */
> + rfd2obj[0].sentinel = -1;
> + wfd2obj[0].sentinel = -1;
> + efd2obj[0].sentinel = -1;
> + if ((imax=seq2set(ifdlist, &ifdset, rfd2obj)) < 0)
> + goto finally;
> + if ((omax=seq2set(ofdlist, &ofdset, wfd2obj)) < 0)
> + goto finally;
> + if ((emax=seq2set(efdlist, &efdset, efd2obj)) < 0)
> + goto finally;
> +
> + max = imax;
> + if (omax > max) max = omax;
> + if (emax > max) max = emax;
> +
> + if (tvp)
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> +
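> + /* Retry loop: if select() is interrupted by a signal (EINTR), re-check for
> + Python-level signal handlers and recompute the remaining timeout from the
> + saved deadline (PEP 475 behaviour). */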
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> + n = select(max, &ifdset, &ofdset, &efdset, tvp);
> + Py_END_ALLOW_THREADS
> +
> + if (errno != EINTR)
> + break;
> +
> + /* select() was interrupted by a signal */
> + if (PyErr_CheckSignals())
> + goto finally;
> +
> + if (tvp) {
> + timeout = deadline - _PyTime_GetMonotonicClock();
> + if (timeout < 0) {
> + /* bpo-35310: lists were unmodified -- clear them explicitly */
> + FD_ZERO(&ifdset);
> + FD_ZERO(&ofdset);
> + FD_ZERO(&efdset);
> + n = 0;
> + break;
> + }
> + _PyTime_AsTimeval_noraise(timeout, &tv, _PyTime_ROUND_CEILING);
> + /* retry select() with the recomputed timeout */
> + }
> + } while (1);
> +
> +#ifdef MS_WINDOWS
> + if (n == SOCKET_ERROR) {
> + PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
> + }
> +#else
> + if (n < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + }
> +#endif
> + else {
> + /* any of these three calls can raise an exception. it's more
> + convenient to test for this after all three calls... but
> + is that acceptable?
> + */
> + ifdlist = set2list(&ifdset, rfd2obj);
> + ofdlist = set2list(&ofdset, wfd2obj);
> + efdlist = set2list(&efdset, efd2obj);
> + if (PyErr_Occurred())
> + ret = NULL;
> + else
> + ret = PyTuple_Pack(3, ifdlist, ofdlist, efdlist);
> +
> + Py_XDECREF(ifdlist);
> + Py_XDECREF(ofdlist);
> + Py_XDECREF(efdlist);
> + }
> +
> + finally:
> + reap_obj(rfd2obj);
> + reap_obj(wfd2obj);
> + reap_obj(efd2obj);
> +#ifdef SELECT_USES_HEAP
> + PyMem_DEL(rfd2obj);
> + PyMem_DEL(wfd2obj);
> + PyMem_DEL(efd2obj);
> +#endif /* SELECT_USES_HEAP */
> + return ret;
> +}
> +
> +#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
> +/*
> + * poll() support
> + */
> +
> +typedef struct {
> + PyObject_HEAD
> + PyObject *dict;
> + int ufd_uptodate;
> + int ufd_len;
> + struct pollfd *ufds;
> + int poll_running;
> +} pollObject;
> +
> +static PyTypeObject poll_Type;
> +
> +/* Update the malloc'ed array of pollfds to match the dictionary
> + contained within a pollObject. Return 1 on success, 0 on an error.
> +*/
> +
> +static int
> +update_ufd_array(pollObject *self)
> +{
> + Py_ssize_t i, pos;
> + PyObject *key, *value;
> + struct pollfd *old_ufds = self->ufds;
> +
> + self->ufd_len = PyDict_Size(self->dict);
> + PyMem_RESIZE(self->ufds, struct pollfd, self->ufd_len);
> + if (self->ufds == NULL) {
> + self->ufds = old_ufds;
> + PyErr_NoMemory();
> + return 0;
> + }
> +
> + i = pos = 0;
> + while (PyDict_Next(self->dict, &pos, &key, &value)) {
> + assert(i < self->ufd_len);
> + /* Never overflow */
> + self->ufds[i].fd = (int)PyLong_AsLong(key);
> + self->ufds[i].events = (short)(unsigned short)PyLong_AsLong(value);
> + i++;
> + }
> + assert(i == self->ufd_len);
> + self->ufd_uptodate = 1;
> + return 1;
> +}
> +
> +static int
> +ushort_converter(PyObject *obj, void *ptr)
> +{
> + unsigned long uval;
> +
> + uval = PyLong_AsUnsignedLong(obj);
> + if (uval == (unsigned long)-1 && PyErr_Occurred())
> + return 0;
> + if (uval > USHRT_MAX) {
> + PyErr_SetString(PyExc_OverflowError,
> + "Python int too large for C unsigned short");
> + return 0;
> + }
> +
> + *(unsigned short *)ptr = Py_SAFE_DOWNCAST(uval, unsigned long, unsigned short);
> + return 1;
> +}
> +
> +PyDoc_STRVAR(poll_register_doc,
> +"register(fd [, eventmask] ) -> None\n\n\
> +Register a file descriptor with the polling object.\n\
> +fd -- either an integer, or an object with a fileno() method returning an\n\
> + int.\n\
> +events -- an optional bitmask describing the type of events to check for");
> +
> +static PyObject *
> +poll_register(pollObject *self, PyObject *args)
> +{
> + PyObject *o, *key, *value;
> + int fd;
> + unsigned short events = POLLIN | POLLPRI | POLLOUT;
> + int err;
> +
> + if (!PyArg_ParseTuple(args, "O|O&:register", &o, ushort_converter, &events))
> + return NULL;
> +
> + fd = PyObject_AsFileDescriptor(o);
> + if (fd == -1) return NULL;
> +
> + /* Add entry to the internal dictionary: the key is the
> + file descriptor, and the value is the event mask. */
> + key = PyLong_FromLong(fd);
> + if (key == NULL)
> + return NULL;
> + value = PyLong_FromLong(events);
> + if (value == NULL) {
> + Py_DECREF(key);
> + return NULL;
> + }
> + err = PyDict_SetItem(self->dict, key, value);
> + Py_DECREF(key);
> + Py_DECREF(value);
> + if (err < 0)
> + return NULL;
> +
> + self->ufd_uptodate = 0;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(poll_modify_doc,
> +"modify(fd, eventmask) -> None\n\n\
> +Modify an already registered file descriptor.\n\
> +fd -- either an integer, or an object with a fileno() method returning an\n\
> + int.\n\
> +events -- an optional bitmask describing the type of events to check for");
> +
> +static PyObject *
> +poll_modify(pollObject *self, PyObject *args)
> +{
> + PyObject *o, *key, *value;
> + int fd;
> + unsigned short events;
> + int err;
> +
> + if (!PyArg_ParseTuple(args, "OO&:modify", &o, ushort_converter, &events))
> + return NULL;
> +
> + fd = PyObject_AsFileDescriptor(o);
> + if (fd == -1) return NULL;
> +
> + /* Modify registered fd */
> + key = PyLong_FromLong(fd);
> + if (key == NULL)
> + return NULL;
> + if (PyDict_GetItem(self->dict, key) == NULL) {
> + errno = ENOENT;
> + PyErr_SetFromErrno(PyExc_OSError);
> + Py_DECREF(key);
> + return NULL;
> + }
> + value = PyLong_FromLong(events);
> + if (value == NULL) {
> + Py_DECREF(key);
> + return NULL;
> + }
> + err = PyDict_SetItem(self->dict, key, value);
> + Py_DECREF(key);
> + Py_DECREF(value);
> + if (err < 0)
> + return NULL;
> +
> + self->ufd_uptodate = 0;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +
> +PyDoc_STRVAR(poll_unregister_doc,
> +"unregister(fd) -> None\n\n\
> +Remove a file descriptor being tracked by the polling object.");
> +
> +static PyObject *
> +poll_unregister(pollObject *self, PyObject *o)
> +{
> + PyObject *key;
> + int fd;
> +
> + fd = PyObject_AsFileDescriptor( o );
> + if (fd == -1)
> + return NULL;
> +
> + /* Check whether the fd is already in the array */
> + key = PyLong_FromLong(fd);
> + if (key == NULL)
> + return NULL;
> +
> + if (PyDict_DelItem(self->dict, key) == -1) {
> + Py_DECREF(key);
> + /* This will simply raise the KeyError set by PyDict_DelItem
> + if the file descriptor isn't registered. */
> + return NULL;
> + }
> +
> + Py_DECREF(key);
> + self->ufd_uptodate = 0;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(poll_poll_doc,
> +"poll( [timeout] ) -> list of (fd, event) 2-tuples\n\n\
> +Polls the set of registered file descriptors, returning a list containing \n\
> +any descriptors that have events or errors to report.");
> +
> +static PyObject *
> +poll_poll(pollObject *self, PyObject *args)
> +{
> + PyObject *result_list = NULL, *timeout_obj = NULL;
> + int poll_result, i, j;
> + PyObject *value = NULL, *num = NULL;
> + _PyTime_t timeout = -1, ms = -1, deadline = 0;
> + int async_err = 0;
> +
> + if (!PyArg_ParseTuple(args, "|O:poll", &timeout_obj)) {
> + return NULL;
> + }
> +
> + if (timeout_obj != NULL && timeout_obj != Py_None) {
> + if (_PyTime_FromMillisecondsObject(&timeout, timeout_obj,
> + _PyTime_ROUND_TIMEOUT) < 0) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError)) {
> + PyErr_SetString(PyExc_TypeError,
> + "timeout must be an integer or None");
> + }
> + return NULL;
> + }
> +
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_TIMEOUT);
> + if (ms < INT_MIN || ms > INT_MAX) {
> + PyErr_SetString(PyExc_OverflowError, "timeout is too large");
> + return NULL;
> + }
> +
> + if (timeout >= 0) {
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> + }
> + }
> +
> + /* On some OSes, typically BSD-based ones, the timeout parameter of the
> + poll() syscall, when negative, must be exactly INFTIM, where defined,
> + or -1. See issue 31334. */
> + if (ms < 0) {
> +#ifdef INFTIM
> + ms = INFTIM;
> +#else
> + ms = -1;
> +#endif
> + }
> +
> + /* Avoid concurrent poll() invocation, issue 8865 */
> + if (self->poll_running) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "concurrent poll() invocation");
> + return NULL;
> + }
> +
> + /* Ensure the ufd array is up to date */
> + if (!self->ufd_uptodate)
> + if (update_ufd_array(self) == 0)
> + return NULL;
> +
> + self->poll_running = 1;
> +
> + /* call poll() */
> + async_err = 0;
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> + poll_result = poll(self->ufds, self->ufd_len, (int)ms);
> + Py_END_ALLOW_THREADS
> +
> + if (errno != EINTR)
> + break;
> +
> + /* poll() was interrupted by a signal */
> + if (PyErr_CheckSignals()) {
> + async_err = 1;
> + break;
> + }
> +
> + if (timeout >= 0) {
> + timeout = deadline - _PyTime_GetMonotonicClock();
> + if (timeout < 0) {
> + poll_result = 0;
> + break;
> + }
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
> + /* retry poll() with the recomputed timeout */
> + }
> + } while (1);
> +
> + self->poll_running = 0;
> +
> + if (poll_result < 0) {
> + if (!async_err)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + /* build the result list */
> +
> + result_list = PyList_New(poll_result);
> + if (!result_list)
> + return NULL;
> +
> + for (i = 0, j = 0; j < poll_result; j++) {
> + /* skip to the next fired descriptor */
> + while (!self->ufds[i].revents) {
> + i++;
> + }
> + /* if we hit a NULL return, set value to NULL
> + and break out of loop; code at end will
> + clean up result_list */
> + value = PyTuple_New(2);
> + if (value == NULL)
> + goto error;
> + num = PyLong_FromLong(self->ufds[i].fd);
> + if (num == NULL) {
> + Py_DECREF(value);
> + goto error;
> + }
> + PyTuple_SET_ITEM(value, 0, num);
> +
> + /* The &0xffff is a workaround for AIX. 'revents'
> + is a 16-bit short, and IBM assigned POLLNVAL
> + to be 0x8000, so the conversion to int results
> + in a negative number. See SF bug #923315. */
> + num = PyLong_FromLong(self->ufds[i].revents & 0xffff);
> + if (num == NULL) {
> + Py_DECREF(value);
> + goto error;
> + }
> + PyTuple_SET_ITEM(value, 1, num);
> + PyList_SET_ITEM(result_list, j, value);
> + i++;
> + }
> + return result_list;
> +
> + error:
> + Py_DECREF(result_list);
> + return NULL;
> +}
> +
> +static PyMethodDef poll_methods[] = {
> + {"register", (PyCFunction)poll_register,
> + METH_VARARGS, poll_register_doc},
> + {"modify", (PyCFunction)poll_modify,
> + METH_VARARGS, poll_modify_doc},
> + {"unregister", (PyCFunction)poll_unregister,
> + METH_O, poll_unregister_doc},
> + {"poll", (PyCFunction)poll_poll,
> + METH_VARARGS, poll_poll_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +static pollObject *
> +newPollObject(void)
> +{
> + pollObject *self;
> + self = PyObject_New(pollObject, &poll_Type);
> + if (self == NULL)
> + return NULL;
> + /* ufd_uptodate is a Boolean, denoting whether the
> + array pointed to by ufds matches the contents of the dictionary. */
> + self->ufd_uptodate = 0;
> + self->ufds = NULL;
> + self->poll_running = 0;
> + self->dict = PyDict_New();
> + if (self->dict == NULL) {
> + Py_DECREF(self);
> + return NULL;
> + }
> + return self;
> +}
> +
> +static void
> +poll_dealloc(pollObject *self)
> +{
> + if (self->ufds != NULL)
> + PyMem_DEL(self->ufds);
> + Py_XDECREF(self->dict);
> + PyObject_Del(self);
> +}
> +
> +static PyTypeObject poll_Type = {
> + /* The ob_type field must be initialized in the module init function
> + * to be portable to Windows without using C++. */
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "select.poll", /*tp_name*/
> + sizeof(pollObject), /*tp_basicsize*/
> + 0, /*tp_itemsize*/
> + /* methods */
> + (destructor)poll_dealloc, /*tp_dealloc*/
> + 0, /*tp_print*/
> + 0, /*tp_getattr*/
> + 0, /*tp_setattr*/
> + 0, /*tp_reserved*/
> + 0, /*tp_repr*/
> + 0, /*tp_as_number*/
> + 0, /*tp_as_sequence*/
> + 0, /*tp_as_mapping*/
> + 0, /*tp_hash*/
> + 0, /*tp_call*/
> + 0, /*tp_str*/
> + 0, /*tp_getattro*/
> + 0, /*tp_setattro*/
> + 0, /*tp_as_buffer*/
> + Py_TPFLAGS_DEFAULT, /*tp_flags*/
> + 0, /*tp_doc*/
> + 0, /*tp_traverse*/
> + 0, /*tp_clear*/
> + 0, /*tp_richcompare*/
> + 0, /*tp_weaklistoffset*/
> + 0, /*tp_iter*/
> + 0, /*tp_iternext*/
> + poll_methods, /*tp_methods*/
> +};
> +
> +#ifdef HAVE_SYS_DEVPOLL_H
> +typedef struct {
> + PyObject_HEAD
> + int fd_devpoll;
> + int max_n_fds;
> + int n_fds;
> + struct pollfd *fds;
> +} devpollObject;
> +
> +static PyTypeObject devpoll_Type;
> +
> +static PyObject *
> +devpoll_err_closed(void)
> +{
> + PyErr_SetString(PyExc_ValueError, "I/O operation on closed devpoll object");
> + return NULL;
> +}
> +
> +static int devpoll_flush(devpollObject *self)
> +{
> + int size, n;
> +
> + if (!self->n_fds) return 0;
> +
> + size = sizeof(struct pollfd)*self->n_fds;
> + self->n_fds = 0;
> +
> + n = _Py_write(self->fd_devpoll, self->fds, size);
> + if (n == -1)
> + return -1;
> +
> + if (n < size) {
> +        /*
> +        ** Data written to /dev/poll is a binary data structure. It is not
> +        ** clear what to do if a partial write occurs. For now, raise
> +        ** an exception and see whether we actually hit this problem in
> +        ** the wild.
> +        ** See http://bugs.python.org/issue6397.
> +        */
> + PyErr_Format(PyExc_IOError, "failed to write all pollfds. "
> + "Please, report at http://bugs.python.org/. "
> + "Data to report: Size tried: %d, actual size written: %d.",
> + size, n);
> + return -1;
> + }
> + return 0;
> +}
> +
> +static PyObject *
> +internal_devpoll_register(devpollObject *self, PyObject *args, int remove)
> +{
> + PyObject *o;
> + int fd;
> + unsigned short events = POLLIN | POLLPRI | POLLOUT;
> +
> + if (self->fd_devpoll < 0)
> + return devpoll_err_closed();
> +
> + if (!PyArg_ParseTuple(args, "O|O&:register", &o, ushort_converter, &events))
> + return NULL;
> +
> + fd = PyObject_AsFileDescriptor(o);
> + if (fd == -1) return NULL;
> +
> + if (remove) {
> + self->fds[self->n_fds].fd = fd;
> + self->fds[self->n_fds].events = POLLREMOVE;
> +
> + if (++self->n_fds == self->max_n_fds) {
> + if (devpoll_flush(self))
> + return NULL;
> + }
> + }
> +
> + self->fds[self->n_fds].fd = fd;
> + self->fds[self->n_fds].events = (signed short)events;
> +
> + if (++self->n_fds == self->max_n_fds) {
> + if (devpoll_flush(self))
> + return NULL;
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(devpoll_register_doc,
> +"register(fd [, eventmask] ) -> None\n\n\
> +Register a file descriptor with the polling object.\n\
> +fd -- either an integer, or an object with a fileno() method returning an\n\
> + int.\n\
> +events -- an optional bitmask describing the type of events to check for");
> +
> +static PyObject *
> +devpoll_register(devpollObject *self, PyObject *args)
> +{
> + return internal_devpoll_register(self, args, 0);
> +}
> +
> +PyDoc_STRVAR(devpoll_modify_doc,
> +"modify(fd[, eventmask]) -> None\n\n\
> +Modify a possibly already registered file descriptor.\n\
> +fd -- either an integer, or an object with a fileno() method returning an\n\
> + int.\n\
> +events -- an optional bitmask describing the type of events to check for");
> +
> +static PyObject *
> +devpoll_modify(devpollObject *self, PyObject *args)
> +{
> + return internal_devpoll_register(self, args, 1);
> +}
> +
> +
> +PyDoc_STRVAR(devpoll_unregister_doc,
> +"unregister(fd) -> None\n\n\
> +Remove a file descriptor being tracked by the polling object.");
> +
> +static PyObject *
> +devpoll_unregister(devpollObject *self, PyObject *o)
> +{
> + int fd;
> +
> + if (self->fd_devpoll < 0)
> + return devpoll_err_closed();
> +
> + fd = PyObject_AsFileDescriptor( o );
> + if (fd == -1)
> + return NULL;
> +
> + self->fds[self->n_fds].fd = fd;
> + self->fds[self->n_fds].events = POLLREMOVE;
> +
> + if (++self->n_fds == self->max_n_fds) {
> + if (devpoll_flush(self))
> + return NULL;
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(devpoll_poll_doc,
> +"poll( [timeout] ) -> list of (fd, event) 2-tuples\n\n\
> +Polls the set of registered file descriptors, returning a list containing \n\
> +any descriptors that have events or errors to report.");
> +
> +static PyObject *
> +devpoll_poll(devpollObject *self, PyObject *args)
> +{
> + struct dvpoll dvp;
> + PyObject *result_list = NULL, *timeout_obj = NULL;
> + int poll_result, i;
> + PyObject *value, *num1, *num2;
> + _PyTime_t timeout, ms, deadline = 0;
> +
> + if (self->fd_devpoll < 0)
> + return devpoll_err_closed();
> +
> + if (!PyArg_ParseTuple(args, "|O:poll", &timeout_obj)) {
> + return NULL;
> + }
> +
> + /* Check values for timeout */
> + if (timeout_obj == NULL || timeout_obj == Py_None) {
> + timeout = -1;
> + ms = -1;
> + }
> + else {
> + if (_PyTime_FromMillisecondsObject(&timeout, timeout_obj,
> + _PyTime_ROUND_TIMEOUT) < 0) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError)) {
> + PyErr_SetString(PyExc_TypeError,
> + "timeout must be an integer or None");
> + }
> + return NULL;
> + }
> +
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_TIMEOUT);
> + if (ms < -1 || ms > INT_MAX) {
> + PyErr_SetString(PyExc_OverflowError, "timeout is too large");
> + return NULL;
> + }
> + }
> +
> + if (devpoll_flush(self))
> + return NULL;
> +
> + dvp.dp_fds = self->fds;
> + dvp.dp_nfds = self->max_n_fds;
> + dvp.dp_timeout = (int)ms;
> +
> + if (timeout >= 0)
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> +
> + do {
> + /* call devpoll() */
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> + poll_result = ioctl(self->fd_devpoll, DP_POLL, &dvp);
> + Py_END_ALLOW_THREADS
> +
> + if (errno != EINTR)
> + break;
> +
> + /* devpoll() was interrupted by a signal */
> + if (PyErr_CheckSignals())
> + return NULL;
> +
> + if (timeout >= 0) {
> + timeout = deadline - _PyTime_GetMonotonicClock();
> + if (timeout < 0) {
> + poll_result = 0;
> + break;
> + }
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
> + dvp.dp_timeout = (int)ms;
> + /* retry devpoll() with the recomputed timeout */
> + }
> + } while (1);
> +
> + if (poll_result < 0) {
> + PyErr_SetFromErrno(PyExc_IOError);
> + return NULL;
> + }
> +
> + /* build the result list */
> + result_list = PyList_New(poll_result);
> + if (!result_list)
> + return NULL;
> +
> + for (i = 0; i < poll_result; i++) {
> + num1 = PyLong_FromLong(self->fds[i].fd);
> + num2 = PyLong_FromLong(self->fds[i].revents);
> + if ((num1 == NULL) || (num2 == NULL)) {
> + Py_XDECREF(num1);
> + Py_XDECREF(num2);
> + goto error;
> + }
> + value = PyTuple_Pack(2, num1, num2);
> + Py_DECREF(num1);
> + Py_DECREF(num2);
> + if (value == NULL)
> + goto error;
> + PyList_SET_ITEM(result_list, i, value);
> + }
> +
> + return result_list;
> +
> + error:
> + Py_DECREF(result_list);
> + return NULL;
> +}
> +
> +static int
> +devpoll_internal_close(devpollObject *self)
> +{
> + int save_errno = 0;
> + if (self->fd_devpoll >= 0) {
> + int fd = self->fd_devpoll;
> + self->fd_devpoll = -1;
> + Py_BEGIN_ALLOW_THREADS
> + if (close(fd) < 0)
> + save_errno = errno;
> + Py_END_ALLOW_THREADS
> + }
> + return save_errno;
> +}
> +
> +static PyObject*
> +devpoll_close(devpollObject *self)
> +{
> + errno = devpoll_internal_close(self);
> + if (errno < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(devpoll_close_doc,
> +"close() -> None\n\
> +\n\
> +Close the devpoll file descriptor. Further operations on the devpoll\n\
> +object will raise an exception.");
> +
> +static PyObject*
> +devpoll_get_closed(devpollObject *self, void *Py_UNUSED(ignored))
> +{
> + if (self->fd_devpoll < 0)
> + Py_RETURN_TRUE;
> + else
> + Py_RETURN_FALSE;
> +}
> +
> +static PyObject*
> +devpoll_fileno(devpollObject *self)
> +{
> + if (self->fd_devpoll < 0)
> + return devpoll_err_closed();
> + return PyLong_FromLong(self->fd_devpoll);
> +}
> +
> +PyDoc_STRVAR(devpoll_fileno_doc,
> +"fileno() -> int\n\
> +\n\
> +Return the file descriptor.");
> +
> +static PyMethodDef devpoll_methods[] = {
> + {"register", (PyCFunction)devpoll_register,
> + METH_VARARGS, devpoll_register_doc},
> + {"modify", (PyCFunction)devpoll_modify,
> + METH_VARARGS, devpoll_modify_doc},
> + {"unregister", (PyCFunction)devpoll_unregister,
> + METH_O, devpoll_unregister_doc},
> + {"poll", (PyCFunction)devpoll_poll,
> + METH_VARARGS, devpoll_poll_doc},
> + {"close", (PyCFunction)devpoll_close, METH_NOARGS,
> + devpoll_close_doc},
> + {"fileno", (PyCFunction)devpoll_fileno, METH_NOARGS,
> + devpoll_fileno_doc},
> + {NULL, NULL} /* sentinel */
> +};
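
The devpoll object mirrors the poll API above but is backed by /dev/poll, so
it is only compiled in where HAVE_SYS_DEVPOLL_H is defined (Solaris and
derivatives) and is not expected in a UEFI build. For comparison, a hedged
sketch of its use:

    import select

    if hasattr(select, "devpoll"):          # Solaris-like systems only
        dp = select.devpoll()
        dp.register(0, select.POLLIN)       # watch stdin (fd 0)
        for fd, event in dp.poll(500):      # timeout in milliseconds
            print(fd, event)
        dp.close()                          # releases the /dev/poll fd
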
> +
> +static PyGetSetDef devpoll_getsetlist[] = {
> + {"closed", (getter)devpoll_get_closed, NULL,
> + "True if the devpoll object is closed"},
> + {0},
> +};
> +
> +static devpollObject *
> +newDevPollObject(void)
> +{
> + devpollObject *self;
> + int fd_devpoll, limit_result;
> + struct pollfd *fds;
> + struct rlimit limit;
> +
> +    /*
> +    ** If we try to process more than getrlimit()
> +    ** fds, the kernel will give an error, so
> +    ** we set the limit here. It is a dynamic
> +    ** value, because the rlimit can be changed at any time.
> +    */
> + limit_result = getrlimit(RLIMIT_NOFILE, &limit);
> + if (limit_result == -1) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + fd_devpoll = _Py_open("/dev/poll", O_RDWR);
> + if (fd_devpoll == -1)
> + return NULL;
> +
> + fds = PyMem_NEW(struct pollfd, limit.rlim_cur);
> + if (fds == NULL) {
> + close(fd_devpoll);
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + self = PyObject_New(devpollObject, &devpoll_Type);
> + if (self == NULL) {
> + close(fd_devpoll);
> + PyMem_DEL(fds);
> + return NULL;
> + }
> + self->fd_devpoll = fd_devpoll;
> + self->max_n_fds = limit.rlim_cur;
> + self->n_fds = 0;
> + self->fds = fds;
> +
> + return self;
> +}
> +
> +static void
> +devpoll_dealloc(devpollObject *self)
> +{
> + (void)devpoll_internal_close(self);
> + PyMem_DEL(self->fds);
> + PyObject_Del(self);
> +}
> +
> +static PyTypeObject devpoll_Type = {
> + /* The ob_type field must be initialized in the module init function
> + * to be portable to Windows without using C++. */
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "select.devpoll", /*tp_name*/
> + sizeof(devpollObject), /*tp_basicsize*/
> + 0, /*tp_itemsize*/
> + /* methods */
> + (destructor)devpoll_dealloc, /*tp_dealloc*/
> + 0, /*tp_print*/
> + 0, /*tp_getattr*/
> + 0, /*tp_setattr*/
> + 0, /*tp_reserved*/
> + 0, /*tp_repr*/
> + 0, /*tp_as_number*/
> + 0, /*tp_as_sequence*/
> + 0, /*tp_as_mapping*/
> + 0, /*tp_hash*/
> + 0, /*tp_call*/
> + 0, /*tp_str*/
> + 0, /*tp_getattro*/
> + 0, /*tp_setattro*/
> + 0, /*tp_as_buffer*/
> + Py_TPFLAGS_DEFAULT, /*tp_flags*/
> + 0, /*tp_doc*/
> + 0, /*tp_traverse*/
> + 0, /*tp_clear*/
> + 0, /*tp_richcompare*/
> + 0, /*tp_weaklistoffset*/
> + 0, /*tp_iter*/
> + 0, /*tp_iternext*/
> + devpoll_methods, /*tp_methods*/
> + 0, /* tp_members */
> + devpoll_getsetlist, /* tp_getset */
> +};
> +#endif /* HAVE_SYS_DEVPOLL_H */
> +
> +
> +
> +PyDoc_STRVAR(poll_doc,
> +"Returns a polling object, which supports registering and\n\
> +unregistering file descriptors, and then polling them for I/O events.");
> +
> +static PyObject *
> +select_poll(PyObject *self, PyObject *unused)
> +{
> + return (PyObject *)newPollObject();
> +}
> +
> +#ifdef HAVE_SYS_DEVPOLL_H
> +PyDoc_STRVAR(devpoll_doc,
> +"Returns a polling object, which supports registering and\n\
> +unregistering file descriptors, and then polling them for I/O events.");
> +
> +static PyObject *
> +select_devpoll(PyObject *self, PyObject *unused)
> +{
> + return (PyObject *)newDevPollObject();
> +}
> +#endif
> +
> +
> +#ifdef __APPLE__
> +/*
> + * On some systems poll() sets errno on invalid file descriptors. We test
> + * for this at runtime because this bug may be fixed or introduced between
> + * OS releases.
> + */
> +static int select_have_broken_poll(void)
> +{
> + int poll_test;
> + int filedes[2];
> +
> + struct pollfd poll_struct = { 0, POLLIN|POLLPRI|POLLOUT, 0 };
> +
> + /* Create a file descriptor to make invalid */
> + if (pipe(filedes) < 0) {
> + return 1;
> + }
> + poll_struct.fd = filedes[0];
> + close(filedes[0]);
> + close(filedes[1]);
> + poll_test = poll(&poll_struct, 1, 0);
> + if (poll_test < 0) {
> + return 1;
> + } else if (poll_test == 0 && poll_struct.revents != POLLNVAL) {
> + return 1;
> + }
> + return 0;
> +}
> +#endif /* __APPLE__ */
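
Because PyInit_select (further below) deletes the module's poll attribute when
this runtime check fails, portable callers test for availability rather than
for the platform. A small sketch of that fallback pattern:

    import select

    def wait_readable(fd, timeout_s):
        """Return True if fd becomes readable within timeout_s seconds."""
        if hasattr(select, "poll"):
            p = select.poll()
            p.register(fd, select.POLLIN)
            return bool(p.poll(int(timeout_s * 1000)))  # poll() wants ms
        # select() is always exported by this module, so fall back to it.
        rlist, _, _ = select.select([fd], [], [], timeout_s)
        return bool(rlist)
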
> +
> +#endif /* HAVE_POLL */
> +
> +#ifdef HAVE_EPOLL
> +/* **************************************************************************
> + * epoll interface for Linux 2.6
> + *
> + * Written by Christian Heimes
> + * Inspired by Twisted's _epoll.pyx and select.poll()
> + */
> +
> +#ifdef HAVE_SYS_EPOLL_H
> +#include <sys/epoll.h>
> +#endif
> +
> +typedef struct {
> + PyObject_HEAD
> + SOCKET epfd; /* epoll control file descriptor */
> +} pyEpoll_Object;
> +
> +static PyTypeObject pyEpoll_Type;
> +#define pyepoll_CHECK(op) (PyObject_TypeCheck((op), &pyEpoll_Type))
> +
> +static PyObject *
> +pyepoll_err_closed(void)
> +{
> + PyErr_SetString(PyExc_ValueError, "I/O operation on closed epoll object");
> + return NULL;
> +}
> +
> +static int
> +pyepoll_internal_close(pyEpoll_Object *self)
> +{
> + int save_errno = 0;
> + if (self->epfd >= 0) {
> + int epfd = self->epfd;
> + self->epfd = -1;
> + Py_BEGIN_ALLOW_THREADS
> + if (close(epfd) < 0)
> + save_errno = errno;
> + Py_END_ALLOW_THREADS
> + }
> + return save_errno;
> +}
> +
> +static PyObject *
> +newPyEpoll_Object(PyTypeObject *type, int sizehint, int flags, SOCKET fd)
> +{
> + pyEpoll_Object *self;
> +
> + assert(type != NULL && type->tp_alloc != NULL);
> + self = (pyEpoll_Object *) type->tp_alloc(type, 0);
> + if (self == NULL)
> + return NULL;
> +
> + if (fd == -1) {
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef HAVE_EPOLL_CREATE1
> + flags |= EPOLL_CLOEXEC;
> + if (flags)
> + self->epfd = epoll_create1(flags);
> + else
> +#endif
> + self->epfd = epoll_create(sizehint);
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + self->epfd = fd;
> + }
> + if (self->epfd < 0) {
> + Py_DECREF(self);
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> +#ifndef HAVE_EPOLL_CREATE1
> + if (fd == -1 && _Py_set_inheritable(self->epfd, 0, NULL) < 0) {
> + Py_DECREF(self);
> + return NULL;
> + }
> +#endif
> +
> + return (PyObject *)self;
> +}
> +
> +
> +static PyObject *
> +pyepoll_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + int flags = 0, sizehint = -1;
> + static char *kwlist[] = {"sizehint", "flags", NULL};
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|ii:epoll", kwlist,
> + &sizehint, &flags))
> + return NULL;
> + if (sizehint == -1) {
> + sizehint = FD_SETSIZE - 1;
> + }
> + else if (sizehint <= 0) {
> + PyErr_SetString(PyExc_ValueError, "sizehint must be positive or -1");
> + return NULL;
> + }
> +
> + return newPyEpoll_Object(type, sizehint, flags, -1);
> +}
> +
> +
> +static void
> +pyepoll_dealloc(pyEpoll_Object *self)
> +{
> + (void)pyepoll_internal_close(self);
> + Py_TYPE(self)->tp_free(self);
> +}
> +
> +static PyObject*
> +pyepoll_close(pyEpoll_Object *self)
> +{
> + errno = pyepoll_internal_close(self);
> + if (errno < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(pyepoll_close_doc,
> +"close() -> None\n\
> +\n\
> +Close the epoll control file descriptor. Further operations on the epoll\n\
> +object will raise an exception.");
> +
> +static PyObject*
> +pyepoll_get_closed(pyEpoll_Object *self, void *Py_UNUSED(ignored))
> +{
> + if (self->epfd < 0)
> + Py_RETURN_TRUE;
> + else
> + Py_RETURN_FALSE;
> +}
> +
> +static PyObject*
> +pyepoll_fileno(pyEpoll_Object *self)
> +{
> + if (self->epfd < 0)
> + return pyepoll_err_closed();
> + return PyLong_FromLong(self->epfd);
> +}
> +
> +PyDoc_STRVAR(pyepoll_fileno_doc,
> +"fileno() -> int\n\
> +\n\
> +Return the epoll control file descriptor.");
> +
> +static PyObject*
> +pyepoll_fromfd(PyObject *cls, PyObject *args)
> +{
> + SOCKET fd;
> +
> + if (!PyArg_ParseTuple(args, "i:fromfd", &fd))
> + return NULL;
> +
> + return newPyEpoll_Object((PyTypeObject*)cls, FD_SETSIZE - 1, 0, fd);
> +}
> +
> +PyDoc_STRVAR(pyepoll_fromfd_doc,
> +"fromfd(fd) -> epoll\n\
> +\n\
> +Create an epoll object from a given control fd.");
> +
> +static PyObject *
> +pyepoll_internal_ctl(int epfd, int op, PyObject *pfd, unsigned int events)
> +{
> + struct epoll_event ev;
> + int result;
> + int fd;
> +
> + if (epfd < 0)
> + return pyepoll_err_closed();
> +
> + fd = PyObject_AsFileDescriptor(pfd);
> + if (fd == -1) {
> + return NULL;
> + }
> +
> + switch (op) {
> + case EPOLL_CTL_ADD:
> + case EPOLL_CTL_MOD:
> + ev.events = events;
> + ev.data.fd = fd;
> + Py_BEGIN_ALLOW_THREADS
> + result = epoll_ctl(epfd, op, fd, &ev);
> + Py_END_ALLOW_THREADS
> + break;
> + case EPOLL_CTL_DEL:
> + /* In kernel versions before 2.6.9, the EPOLL_CTL_DEL
> + * operation required a non-NULL pointer in event, even
> + * though this argument is ignored. */
> + Py_BEGIN_ALLOW_THREADS
> + result = epoll_ctl(epfd, op, fd, &ev);
> + if (errno == EBADF) {
> + /* fd already closed */
> + result = 0;
> + errno = 0;
> + }
> + Py_END_ALLOW_THREADS
> + break;
> + default:
> + result = -1;
> + errno = EINVAL;
> + }
> +
> + if (result < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +static PyObject *
> +pyepoll_register(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *pfd;
> + unsigned int events = EPOLLIN | EPOLLOUT | EPOLLPRI;
> + static char *kwlist[] = {"fd", "eventmask", NULL};
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|I:register", kwlist,
> + &pfd, &events)) {
> + return NULL;
> + }
> +
> + return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_ADD, pfd, events);
> +}
> +
> +PyDoc_STRVAR(pyepoll_register_doc,
> +"register(fd[, eventmask]) -> None\n\
> +\n\
> +Registers a new fd or raises an OSError if the fd is already registered.\n\
> +fd is the target file descriptor of the operation.\n\
> +events is a bit set composed of the various EPOLL constants; the default\n\
> +is EPOLLIN | EPOLLOUT | EPOLLPRI.\n\
> +\n\
> +The epoll interface supports all file descriptors that support poll.");
> +
> +static PyObject *
> +pyepoll_modify(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *pfd;
> + unsigned int events;
> + static char *kwlist[] = {"fd", "eventmask", NULL};
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "OI:modify", kwlist,
> + &pfd, &events)) {
> + return NULL;
> + }
> +
> + return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_MOD, pfd, events);
> +}
> +
> +PyDoc_STRVAR(pyepoll_modify_doc,
> +"modify(fd, eventmask) -> None\n\
> +\n\
> +fd is the target file descriptor of the operation\n\
> +events is a bit set composed of the various EPOLL constants");
> +
> +static PyObject *
> +pyepoll_unregister(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *pfd;
> + static char *kwlist[] = {"fd", NULL};
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:unregister", kwlist,
> + &pfd)) {
> + return NULL;
> + }
> +
> + return pyepoll_internal_ctl(self->epfd, EPOLL_CTL_DEL, pfd, 0);
> +}
> +
> +PyDoc_STRVAR(pyepoll_unregister_doc,
> +"unregister(fd) -> None\n\
> +\n\
> +fd is the target file descriptor of the operation.");
> +
> +static PyObject *
> +pyepoll_poll(pyEpoll_Object *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"timeout", "maxevents", NULL};
> + PyObject *timeout_obj = NULL;
> + int maxevents = -1;
> + int nfds, i;
> + PyObject *elist = NULL, *etuple = NULL;
> + struct epoll_event *evs = NULL;
> + _PyTime_t timeout, ms, deadline;
> +
> + if (self->epfd < 0)
> + return pyepoll_err_closed();
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|Oi:poll", kwlist,
> + &timeout_obj, &maxevents)) {
> + return NULL;
> + }
> +
> + if (timeout_obj == NULL || timeout_obj == Py_None) {
> + timeout = -1;
> + ms = -1;
> + deadline = 0; /* initialize to prevent gcc warning */
> + }
> + else {
> + /* epoll_wait() has a resolution of 1 millisecond, round towards
> + infinity to wait at least timeout seconds. */
> + if (_PyTime_FromSecondsObject(&timeout, timeout_obj,
> + _PyTime_ROUND_TIMEOUT) < 0) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError)) {
> + PyErr_SetString(PyExc_TypeError,
> + "timeout must be an integer or None");
> + }
> + return NULL;
> + }
> +
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
> + if (ms < INT_MIN || ms > INT_MAX) {
> + PyErr_SetString(PyExc_OverflowError, "timeout is too large");
> + return NULL;
> + }
> +
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> + }
> +
> + if (maxevents == -1) {
> + maxevents = FD_SETSIZE-1;
> + }
> + else if (maxevents < 1) {
> + PyErr_Format(PyExc_ValueError,
> + "maxevents must be greater than 0, got %d",
> + maxevents);
> + return NULL;
> + }
> +
> + evs = PyMem_New(struct epoll_event, maxevents);
> + if (evs == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> + nfds = epoll_wait(self->epfd, evs, maxevents, (int)ms);
> + Py_END_ALLOW_THREADS
> +
> + if (errno != EINTR)
> + break;
> +
> + /* poll() was interrupted by a signal */
> + if (PyErr_CheckSignals())
> + goto error;
> +
> + if (timeout >= 0) {
> + timeout = deadline - _PyTime_GetMonotonicClock();
> + if (timeout < 0) {
> + nfds = 0;
> + break;
> + }
> + ms = _PyTime_AsMilliseconds(timeout, _PyTime_ROUND_CEILING);
> + /* retry epoll_wait() with the recomputed timeout */
> + }
> + } while(1);
> +
> + if (nfds < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + goto error;
> + }
> +
> + elist = PyList_New(nfds);
> + if (elist == NULL) {
> + goto error;
> + }
> +
> + for (i = 0; i < nfds; i++) {
> + etuple = Py_BuildValue("iI", evs[i].data.fd, evs[i].events);
> + if (etuple == NULL) {
> + Py_CLEAR(elist);
> + goto error;
> + }
> + PyList_SET_ITEM(elist, i, etuple);
> + }
> +
> + error:
> + PyMem_Free(evs);
> + return elist;
> +}
> +
> +PyDoc_STRVAR(pyepoll_poll_doc,
> +"poll([timeout=-1[, maxevents=-1]]) -> [(fd, events), (...)]\n\
> +\n\
> +Wait for events on the epoll file descriptor for a maximum time of timeout\n\
> +in seconds (as float). -1 makes poll wait indefinitely.\n\
> +Up to maxevents are returned to the caller.");
> +
> +static PyObject *
> +pyepoll_enter(pyEpoll_Object *self, PyObject *args)
> +{
> + if (self->epfd < 0)
> + return pyepoll_err_closed();
> +
> + Py_INCREF(self);
> + return (PyObject *)self;
> +}
> +
> +static PyObject *
> +pyepoll_exit(PyObject *self, PyObject *args)
> +{
> + _Py_IDENTIFIER(close);
> +
> + return _PyObject_CallMethodId(self, &PyId_close, NULL);
> +}
> +
> +static PyMethodDef pyepoll_methods[] = {
> + {"fromfd", (PyCFunction)pyepoll_fromfd,
> + METH_VARARGS | METH_CLASS, pyepoll_fromfd_doc},
> + {"close", (PyCFunction)pyepoll_close, METH_NOARGS,
> + pyepoll_close_doc},
> + {"fileno", (PyCFunction)pyepoll_fileno, METH_NOARGS,
> + pyepoll_fileno_doc},
> + {"modify", (PyCFunction)pyepoll_modify,
> + METH_VARARGS | METH_KEYWORDS, pyepoll_modify_doc},
> + {"register", (PyCFunction)pyepoll_register,
> + METH_VARARGS | METH_KEYWORDS, pyepoll_register_doc},
> + {"unregister", (PyCFunction)pyepoll_unregister,
> + METH_VARARGS | METH_KEYWORDS, pyepoll_unregister_doc},
> + {"poll", (PyCFunction)pyepoll_poll,
> + METH_VARARGS | METH_KEYWORDS, pyepoll_poll_doc},
> + {"__enter__", (PyCFunction)pyepoll_enter, METH_NOARGS,
> + NULL},
> + {"__exit__", (PyCFunction)pyepoll_exit, METH_VARARGS,
> + NULL},
> + {NULL, NULL},
> +};
> +
> +static PyGetSetDef pyepoll_getsetlist[] = {
> + {"closed", (getter)pyepoll_get_closed, NULL,
> + "True if the epoll handler is closed"},
> + {0},
> +};
> +
> +PyDoc_STRVAR(pyepoll_doc,
> +"select.epoll(sizehint=-1, flags=0)\n\
> +\n\
> +Returns an epoll object\n\
> +\n\
> +sizehint must be a positive integer or -1 for the default size. The\n\
> +sizehint is used to optimize internal data structures. It doesn't limit\n\
> +the maximum number of monitored events.");
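
Tying the epoll methods together, a minimal level-triggered sketch follows.
epoll is Linux-only, so HAVE_EPOLL will not normally be set for a UEFI
target; the example is included purely for reference:

    import select
    import socket

    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(5)

    ep = select.epoll()
    ep.register(server.fileno(), select.EPOLLIN)

    # Unlike poll objects, epoll.poll() takes its timeout in seconds
    # (float); -1 or None waits forever.
    for fd, events in ep.poll(timeout=1.0, maxevents=16):
        if events & select.EPOLLIN:
            conn, addr = server.accept()
            conn.close()

    ep.unregister(server.fileno())
    ep.close()
    server.close()
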
> +
> +static PyTypeObject pyEpoll_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "select.epoll", /* tp_name */
> + sizeof(pyEpoll_Object), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + (destructor)pyepoll_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /* tp_flags */
> + pyepoll_doc, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + pyepoll_methods, /* tp_methods */
> + 0, /* tp_members */
> + pyepoll_getsetlist, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + pyepoll_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +#endif /* HAVE_EPOLL */
> +
> +#ifdef HAVE_KQUEUE
> +/* **************************************************************************
> + * kqueue interface for BSD
> + *
> + * Copyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes
> + * All rights reserved.
> + *
> + * Redistribution and use in source and binary forms, with or without
> + * modification, are permitted provided that the following conditions
> + * are met:
> + * 1. Redistributions of source code must retain the above copyright
> + * notice, this list of conditions and the following disclaimer.
> + * 2. Redistributions in binary form must reproduce the above copyright
> + * notice, this list of conditions and the following disclaimer in the
> + * documentation and/or other materials provided with the distribution.
> + *
> + * THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
> + * ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
> + * SUCH DAMAGE.
> + */
> +
> +#ifdef HAVE_SYS_EVENT_H
> +#include <sys/event.h>
> +#endif
> +
> +PyDoc_STRVAR(kqueue_event_doc,
> +"kevent(ident, filter=KQ_FILTER_READ, flags=KQ_EV_ADD, fflags=0, data=0, udata=0)\n\
> +\n\
> +This object is the equivalent of the struct kevent for the C API.\n\
> +\n\
> +See the kqueue manpage for more detailed information about the meaning\n\
> +of the arguments.\n\
> +\n\
> +One minor note: while you might hope that udata could store a\n\
> +reference to a python object, it cannot, because it is impossible to\n\
> +keep a proper reference count of the object once it's passed into the\n\
> +kernel. Therefore, I have restricted it to only storing an integer. I\n\
> +recommend ignoring it and simply using the 'ident' field to key off\n\
> +of. You could also set up a dictionary on the python side to store a\n\
> +udata->object mapping.");
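
As the docstring notes, kevent mirrors struct kevent. A construction sketch
(kqueue/kevent exist only on BSD-derived systems, so this is for reference
rather than for the UEFI build):

    import select
    import socket

    sock = socket.socket()
    # ident is usually a file descriptor; filter and flags default to
    # KQ_FILTER_READ and KQ_EV_ADD, matching the EV_SET() defaults below.
    ev = select.kevent(sock.fileno(),
                       filter=select.KQ_FILTER_READ,
                       flags=select.KQ_EV_ADD | select.KQ_EV_ENABLE)
    print(ev.ident, ev.filter, ev.flags)
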
> +
> +typedef struct {
> + PyObject_HEAD
> + struct kevent e;
> +} kqueue_event_Object;
> +
> +static PyTypeObject kqueue_event_Type;
> +
> +#define kqueue_event_Check(op) (PyObject_TypeCheck((op), &kqueue_event_Type))
> +
> +typedef struct {
> + PyObject_HEAD
> + SOCKET kqfd; /* kqueue control fd */
> +} kqueue_queue_Object;
> +
> +static PyTypeObject kqueue_queue_Type;
> +
> +#define kqueue_queue_Check(op) (PyObject_TypeCheck((op), &kqueue_queue_Type))
> +
> +#if (SIZEOF_UINTPTR_T != SIZEOF_VOID_P)
> +# error uintptr_t does not match void *!
> +#elif (SIZEOF_UINTPTR_T == SIZEOF_LONG_LONG)
> +# define T_UINTPTRT T_ULONGLONG
> +# define T_INTPTRT T_LONGLONG
> +# define UINTPTRT_FMT_UNIT "K"
> +# define INTPTRT_FMT_UNIT "L"
> +#elif (SIZEOF_UINTPTR_T == SIZEOF_LONG)
> +# define T_UINTPTRT T_ULONG
> +# define T_INTPTRT T_LONG
> +# define UINTPTRT_FMT_UNIT "k"
> +# define INTPTRT_FMT_UNIT "l"
> +#elif (SIZEOF_UINTPTR_T == SIZEOF_INT)
> +# define T_UINTPTRT T_UINT
> +# define T_INTPTRT T_INT
> +# define UINTPTRT_FMT_UNIT "I"
> +# define INTPTRT_FMT_UNIT "i"
> +#else
> +# error uintptr_t does not match int, long, or long long!
> +#endif
> +
> +#if SIZEOF_LONG_LONG == 8
> +# define T_INT64 T_LONGLONG
> +# define INT64_FMT_UNIT "L"
> +#elif SIZEOF_LONG == 8
> +# define T_INT64 T_LONG
> +# define INT64_FMT_UNIT "l"
> +#elif SIZEOF_INT == 8
> +# define T_INT64 T_INT
> +# define INT64_FMT_UNIT "i"
> +#else
> +# define INT64_FMT_UNIT "_"
> +#endif
> +
> +#if SIZEOF_LONG_LONG == 4
> +# define T_UINT32 T_ULONGLONG
> +# define UINT32_FMT_UNIT "K"
> +#elif SIZEOF_LONG == 4
> +# define T_UINT32 T_ULONG
> +# define UINT32_FMT_UNIT "k"
> +#elif SIZEOF_INT == 4
> +# define T_UINT32 T_UINT
> +# define UINT32_FMT_UNIT "I"
> +#else
> +# define UINT32_FMT_UNIT "_"
> +#endif
> +
> +/*
> + * kevent is not standard and its members vary across BSDs.
> + */
> +#ifdef __NetBSD__
> +# define FILTER_TYPE T_UINT32
> +# define FILTER_FMT_UNIT UINT32_FMT_UNIT
> +# define FLAGS_TYPE T_UINT32
> +# define FLAGS_FMT_UNIT UINT32_FMT_UNIT
> +# define FFLAGS_TYPE T_UINT32
> +# define FFLAGS_FMT_UNIT UINT32_FMT_UNIT
> +#else
> +# define FILTER_TYPE T_SHORT
> +# define FILTER_FMT_UNIT "h"
> +# define FLAGS_TYPE T_USHORT
> +# define FLAGS_FMT_UNIT "H"
> +# define FFLAGS_TYPE T_UINT
> +# define FFLAGS_FMT_UNIT "I"
> +#endif
> +
> +#if defined(__NetBSD__) || defined(__OpenBSD__)
> +# define DATA_TYPE T_INT64
> +# define DATA_FMT_UNIT INT64_FMT_UNIT
> +#else
> +# define DATA_TYPE T_INTPTRT
> +# define DATA_FMT_UNIT INTPTRT_FMT_UNIT
> +#endif
> +
> +/* Unfortunately, we can't store python objects in udata, because
> + * kevents in the kernel can be removed without warning, which would
> + * forever lose the refcount on the object stored with it.
> + */
> +
> +#define KQ_OFF(x) offsetof(kqueue_event_Object, x)
> +static struct PyMemberDef kqueue_event_members[] = {
> + {"ident", T_UINTPTRT, KQ_OFF(e.ident)},
> + {"filter", FILTER_TYPE, KQ_OFF(e.filter)},
> + {"flags", FLAGS_TYPE, KQ_OFF(e.flags)},
> + {"fflags", T_UINT, KQ_OFF(e.fflags)},
> + {"data", DATA_TYPE, KQ_OFF(e.data)},
> + {"udata", T_UINTPTRT, KQ_OFF(e.udata)},
> + {NULL} /* Sentinel */
> +};
> +#undef KQ_OFF
> +
> +static PyObject *
> +kqueue_event_repr(kqueue_event_Object *s)
> +{
> + char buf[1024];
> + PyOS_snprintf(
> + buf, sizeof(buf),
> + "<select.kevent ident=%zu filter=%d flags=0x%x fflags=0x%x "
> + "data=0x%llx udata=%p>",
> + (size_t)(s->e.ident), (int)s->e.filter, (unsigned int)s->e.flags,
> + (unsigned int)s->e.fflags, (long long)(s->e.data), (void *)s->e.udata);
> + return PyUnicode_FromString(buf);
> +}
> +
> +static int
> +kqueue_event_init(kqueue_event_Object *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *pfd;
> + static char *kwlist[] = {"ident", "filter", "flags", "fflags",
> + "data", "udata", NULL};
> + static const char fmt[] = "O|"
> + FILTER_FMT_UNIT FLAGS_FMT_UNIT FFLAGS_FMT_UNIT DATA_FMT_UNIT
> + UINTPTRT_FMT_UNIT ":kevent";
> +
> + EV_SET(&(self->e), 0, EVFILT_READ, EV_ADD, 0, 0, 0); /* defaults */
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, fmt, kwlist,
> + &pfd, &(self->e.filter), &(self->e.flags),
> + &(self->e.fflags), &(self->e.data), &(self->e.udata))) {
> + return -1;
> + }
> +
> + if (PyLong_Check(pfd)) {
> + self->e.ident = PyLong_AsSize_t(pfd);
> + }
> + else {
> + self->e.ident = PyObject_AsFileDescriptor(pfd);
> + }
> + if (PyErr_Occurred()) {
> + return -1;
> + }
> + return 0;
> +}
> +
> +static PyObject *
> +kqueue_event_richcompare(kqueue_event_Object *s, kqueue_event_Object *o,
> + int op)
> +{
> + int result;
> +
> + if (!kqueue_event_Check(o)) {
> + Py_RETURN_NOTIMPLEMENTED;
> + }
> +
> +#define CMP(a, b) ((a) != (b)) ? ((a) < (b) ? -1 : 1)
> + result = CMP(s->e.ident, o->e.ident)
> + : CMP(s->e.filter, o->e.filter)
> + : CMP(s->e.flags, o->e.flags)
> + : CMP(s->e.fflags, o->e.fflags)
> + : CMP(s->e.data, o->e.data)
> + : CMP((intptr_t)s->e.udata, (intptr_t)o->e.udata)
> + : 0;
> +#undef CMP
> +
> + switch (op) {
> + case Py_EQ:
> + result = (result == 0);
> + break;
> + case Py_NE:
> + result = (result != 0);
> + break;
> + case Py_LE:
> + result = (result <= 0);
> + break;
> + case Py_GE:
> + result = (result >= 0);
> + break;
> + case Py_LT:
> + result = (result < 0);
> + break;
> + case Py_GT:
> + result = (result > 0);
> + break;
> + }
> + return PyBool_FromLong((long)result);
> +}
> +
> +static PyTypeObject kqueue_event_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "select.kevent", /* tp_name */
> + sizeof(kqueue_event_Object), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + 0, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)kqueue_event_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /* tp_flags */
> + kqueue_event_doc, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + (richcmpfunc)kqueue_event_richcompare, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + 0, /* tp_methods */
> + kqueue_event_members, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + (initproc)kqueue_event_init, /* tp_init */
> + 0, /* tp_alloc */
> + 0, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +static PyObject *
> +kqueue_queue_err_closed(void)
> +{
> + PyErr_SetString(PyExc_ValueError, "I/O operation on closed kqueue object");
> + return NULL;
> +}
> +
> +static int
> +kqueue_queue_internal_close(kqueue_queue_Object *self)
> +{
> + int save_errno = 0;
> + if (self->kqfd >= 0) {
> + int kqfd = self->kqfd;
> + self->kqfd = -1;
> + Py_BEGIN_ALLOW_THREADS
> + if (close(kqfd) < 0)
> + save_errno = errno;
> + Py_END_ALLOW_THREADS
> + }
> + return save_errno;
> +}
> +
> +static PyObject *
> +newKqueue_Object(PyTypeObject *type, SOCKET fd)
> +{
> + kqueue_queue_Object *self;
> + assert(type != NULL && type->tp_alloc != NULL);
> + self = (kqueue_queue_Object *) type->tp_alloc(type, 0);
> + if (self == NULL) {
> + return NULL;
> + }
> +
> + if (fd == -1) {
> + Py_BEGIN_ALLOW_THREADS
> + self->kqfd = kqueue();
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + self->kqfd = fd;
> + }
> + if (self->kqfd < 0) {
> + Py_DECREF(self);
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + if (fd == -1) {
> + if (_Py_set_inheritable(self->kqfd, 0, NULL) < 0) {
> + Py_DECREF(self);
> + return NULL;
> + }
> + }
> + return (PyObject *)self;
> +}
> +
> +static PyObject *
> +kqueue_queue_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + if ((args != NULL && PyObject_Size(args)) ||
> + (kwds != NULL && PyObject_Size(kwds))) {
> + PyErr_SetString(PyExc_ValueError,
> + "select.kqueue doesn't accept arguments");
> + return NULL;
> + }
> +
> + return newKqueue_Object(type, -1);
> +}
> +
> +static void
> +kqueue_queue_dealloc(kqueue_queue_Object *self)
> +{
> + kqueue_queue_internal_close(self);
> + Py_TYPE(self)->tp_free(self);
> +}
> +
> +static PyObject*
> +kqueue_queue_close(kqueue_queue_Object *self)
> +{
> + errno = kqueue_queue_internal_close(self);
> + if (errno < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(kqueue_queue_close_doc,
> +"close() -> None\n\
> +\n\
> +Close the kqueue control file descriptor. Further operations on the kqueue\n\
> +object will raise an exception.");
> +
> +static PyObject*
> +kqueue_queue_get_closed(kqueue_queue_Object *self, void *Py_UNUSED(ignored))
> +{
> + if (self->kqfd < 0)
> + Py_RETURN_TRUE;
> + else
> + Py_RETURN_FALSE;
> +}
> +
> +static PyObject*
> +kqueue_queue_fileno(kqueue_queue_Object *self)
> +{
> + if (self->kqfd < 0)
> + return kqueue_queue_err_closed();
> + return PyLong_FromLong(self->kqfd);
> +}
> +
> +PyDoc_STRVAR(kqueue_queue_fileno_doc,
> +"fileno() -> int\n\
> +\n\
> +Return the kqueue control file descriptor.");
> +
> +static PyObject*
> +kqueue_queue_fromfd(PyObject *cls, PyObject *args)
> +{
> + SOCKET fd;
> +
> + if (!PyArg_ParseTuple(args, "i:fromfd", &fd))
> + return NULL;
> +
> + return newKqueue_Object((PyTypeObject*)cls, fd);
> +}
> +
> +PyDoc_STRVAR(kqueue_queue_fromfd_doc,
> +"fromfd(fd) -> kqueue\n\
> +\n\
> +Create a kqueue object from a given control fd.");
> +
> +static PyObject *
> +kqueue_queue_control(kqueue_queue_Object *self, PyObject *args)
> +{
> + int nevents = 0;
> + int gotevents = 0;
> + int nchanges = 0;
> + int i = 0;
> + PyObject *otimeout = NULL;
> + PyObject *ch = NULL;
> + PyObject *seq = NULL, *ei = NULL;
> + PyObject *result = NULL;
> + struct kevent *evl = NULL;
> + struct kevent *chl = NULL;
> + struct timespec timeoutspec;
> + struct timespec *ptimeoutspec;
> + _PyTime_t timeout, deadline = 0;
> +
> + if (self->kqfd < 0)
> + return kqueue_queue_err_closed();
> +
> + if (!PyArg_ParseTuple(args, "Oi|O:control", &ch, &nevents, &otimeout))
> + return NULL;
> +
> + if (nevents < 0) {
> + PyErr_Format(PyExc_ValueError,
> + "Length of eventlist must be 0 or positive, got %d",
> + nevents);
> + return NULL;
> + }
> +
> + if (otimeout == Py_None || otimeout == NULL) {
> + ptimeoutspec = NULL;
> + }
> + else {
> + if (_PyTime_FromSecondsObject(&timeout,
> + otimeout, _PyTime_ROUND_TIMEOUT) < 0) {
> + PyErr_Format(PyExc_TypeError,
> + "timeout argument must be a number "
> + "or None, got %.200s",
> + Py_TYPE(otimeout)->tp_name);
> + return NULL;
> + }
> +
> + if (_PyTime_AsTimespec(timeout, &timeoutspec) == -1)
> + return NULL;
> +
> + if (timeoutspec.tv_sec < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "timeout must be positive or None");
> + return NULL;
> + }
> + ptimeoutspec = &timeoutspec;
> + }
> +
> + if (ch != NULL && ch != Py_None) {
> + seq = PySequence_Fast(ch, "changelist is not iterable");
> + if (seq == NULL) {
> + return NULL;
> + }
> + if (PySequence_Fast_GET_SIZE(seq) > INT_MAX) {
> + PyErr_SetString(PyExc_OverflowError,
> + "changelist is too long");
> + goto error;
> + }
> + nchanges = (int)PySequence_Fast_GET_SIZE(seq);
> +
> + chl = PyMem_New(struct kevent, nchanges);
> + if (chl == NULL) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + for (i = 0; i < nchanges; ++i) {
> + ei = PySequence_Fast_GET_ITEM(seq, i);
> + if (!kqueue_event_Check(ei)) {
> + PyErr_SetString(PyExc_TypeError,
> + "changelist must be an iterable of "
> + "select.kevent objects");
> + goto error;
> + }
> + chl[i] = ((kqueue_event_Object *)ei)->e;
> + }
> + Py_CLEAR(seq);
> + }
> +
> + /* event list */
> + if (nevents) {
> + evl = PyMem_New(struct kevent, nevents);
> + if (evl == NULL) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + }
> +
> + if (ptimeoutspec)
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> +
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> + gotevents = kevent(self->kqfd, chl, nchanges,
> + evl, nevents, ptimeoutspec);
> + Py_END_ALLOW_THREADS
> +
> + if (errno != EINTR)
> + break;
> +
> + /* kevent() was interrupted by a signal */
> + if (PyErr_CheckSignals())
> + goto error;
> +
> + if (ptimeoutspec) {
> + timeout = deadline - _PyTime_GetMonotonicClock();
> + if (timeout < 0) {
> + gotevents = 0;
> + break;
> + }
> + if (_PyTime_AsTimespec(timeout, &timeoutspec) == -1)
> + goto error;
> + /* retry kevent() with the recomputed timeout */
> + }
> + } while (1);
> +
> + if (gotevents == -1) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + goto error;
> + }
> +
> + result = PyList_New(gotevents);
> + if (result == NULL) {
> + goto error;
> + }
> +
> + for (i = 0; i < gotevents; i++) {
> + kqueue_event_Object *ch;
> +
> + ch = PyObject_New(kqueue_event_Object, &kqueue_event_Type);
> + if (ch == NULL) {
> + goto error;
> + }
> + ch->e = evl[i];
> + PyList_SET_ITEM(result, i, (PyObject *)ch);
> + }
> + PyMem_Free(chl);
> + PyMem_Free(evl);
> + return result;
> +
> + error:
> + PyMem_Free(chl);
> + PyMem_Free(evl);
> + Py_XDECREF(result);
> + Py_XDECREF(seq);
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(kqueue_queue_control_doc,
> +"control(changelist, max_events[, timeout=None]) -> eventlist\n\
> +\n\
> +Calls the kernel kevent function.\n\
> +- changelist must be None or an iterable of kevent objects describing\n\
> +  the changes to be made to the kernel's watch list.\n\
> +- max_events lets you specify the maximum number of events that the\n\
> + kernel will return.\n\
> +- timeout is the maximum time to wait in seconds, or else None,\n\
> + to wait forever. timeout accepts floats for smaller timeouts, too.");
> +
> +
> +static PyMethodDef kqueue_queue_methods[] = {
> + {"fromfd", (PyCFunction)kqueue_queue_fromfd,
> + METH_VARARGS | METH_CLASS, kqueue_queue_fromfd_doc},
> + {"close", (PyCFunction)kqueue_queue_close, METH_NOARGS,
> + kqueue_queue_close_doc},
> + {"fileno", (PyCFunction)kqueue_queue_fileno, METH_NOARGS,
> + kqueue_queue_fileno_doc},
> + {"control", (PyCFunction)kqueue_queue_control,
> + METH_VARARGS , kqueue_queue_control_doc},
> + {NULL, NULL},
> +};
> +
> +static PyGetSetDef kqueue_queue_getsetlist[] = {
> + {"closed", (getter)kqueue_queue_get_closed, NULL,
> + "True if the kqueue handler is closed"},
> + {0},
> +};
> +
> +PyDoc_STRVAR(kqueue_queue_doc,
> +"Kqueue syscall wrapper.\n\
> +\n\
> +For example, to start watching a socket for output:\n\
> +>>> kq = kqueue()\n\
> +>>> sock = socket()\n\
> +>>> sock.connect((host, port))\n\
> +>>> kq.control([kevent(sock, KQ_FILTER_WRITE, KQ_EV_ADD)], 0)\n\
> +\n\
> +To wait one second for it to become writeable:\n\
> +>>> kq.control(None, 1, 1)\n\
> +\n\
> +To stop listening:\n\
> +>>> kq.control([kevent(sock, KQ_FILTER_WRITE, KQ_EV_DELETE)], 0)");
> +
> +static PyTypeObject kqueue_queue_Type = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "select.kqueue", /* tp_name */
> + sizeof(kqueue_queue_Object), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + (destructor)kqueue_queue_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + 0, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /* tp_flags */
> + kqueue_queue_doc, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + kqueue_queue_methods, /* tp_methods */
> + 0, /* tp_members */
> + kqueue_queue_getsetlist, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + kqueue_queue_new, /* tp_new */
> + 0, /* tp_free */
> +};
> +
> +#endif /* HAVE_KQUEUE */
> +
> +
> +
> +
> +
> +/* ************************************************************************ */
> +
> +PyDoc_STRVAR(select_doc,
> +"select(rlist, wlist, xlist[, timeout]) -> (rlist, wlist, xlist)\n\
> +\n\
> +Wait until one or more file descriptors are ready for some kind of I/O.\n\
> +The first three arguments are sequences of file descriptors to be waited for:\n\
> +rlist -- wait until ready for reading\n\
> +wlist -- wait until ready for writing\n\
> +xlist -- wait for an ``exceptional condition''\n\
> +If only one kind of condition is required, pass [] for the other lists.\n\
> +A file descriptor is either a socket or file object, or a small integer\n\
> +gotten from a fileno() method call on one of those.\n\
> +\n\
> +The optional 4th argument specifies a timeout in seconds; it may be\n\
> +a floating point number to specify fractions of seconds. If it is absent\n\
> +or None, the call will never time out.\n\
> +\n\
> +The return value is a tuple of three lists corresponding to the first three\n\
> +arguments; each contains the subset of the corresponding file descriptors\n\
> +that are ready.\n\
> +\n\
> +*** IMPORTANT NOTICE ***\n\
> +On Windows, only sockets are supported; on Unix, all file\n\
> +descriptors can be used.");
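
select() is the one entry point this module always exports, so it is the
baseline interface for the UEFI port as well. A short sketch of the
three-list interface described above:

    import select
    import socket

    server = socket.socket()
    server.bind(("127.0.0.1", 0))
    server.listen(1)

    # Wait up to 2.5 seconds for the listening socket to become readable
    # (i.e. for a connection attempt); empty lists mean "not interested".
    rlist, wlist, xlist = select.select([server], [], [], 2.5)
    if server in rlist:
        conn, addr = server.accept()
        conn.close()
    server.close()
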
> +
> +static PyMethodDef select_methods[] = {
> + {"select", select_select, METH_VARARGS, select_doc},
> +#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
> + {"poll", select_poll, METH_NOARGS, poll_doc},
> +#endif /* HAVE_POLL */
> +#ifdef HAVE_SYS_DEVPOLL_H
> + {"devpoll", select_devpoll, METH_NOARGS, devpoll_doc},
> +#endif
> + {0, 0}, /* sentinel */
> +};
> +
> +PyDoc_STRVAR(module_doc,
> +"This module supports asynchronous I/O on multiple file descriptors.\n\
> +\n\
> +*** IMPORTANT NOTICE ***\n\
> +On Windows, only sockets are supported; on Unix, all file descriptors.");
> +
> +
> +static struct PyModuleDef selectmodule = {
> + PyModuleDef_HEAD_INIT,
> + "select",
> + module_doc,
> + -1,
> + select_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +
> +
> +
> +PyMODINIT_FUNC
> +PyInit_select(void)
> +{
> + PyObject *m;
> + m = PyModule_Create(&selectmodule);
> + if (m == NULL)
> + return NULL;
> +
> + Py_INCREF(PyExc_OSError);
> + PyModule_AddObject(m, "error", PyExc_OSError);
> +
> +#ifdef PIPE_BUF
> +#ifdef HAVE_BROKEN_PIPE_BUF
> +#undef PIPE_BUF
> +#define PIPE_BUF 512
> +#endif
> + PyModule_AddIntMacro(m, PIPE_BUF);
> +#endif
> +
> +#if defined(HAVE_POLL) && !defined(HAVE_BROKEN_POLL)
> +#ifdef __APPLE__
> + if (select_have_broken_poll()) {
> + if (PyObject_DelAttrString(m, "poll") == -1) {
> + PyErr_Clear();
> + }
> + } else {
> +#else
> + {
> +#endif
> + if (PyType_Ready(&poll_Type) < 0)
> + return NULL;
> + PyModule_AddIntMacro(m, POLLIN);
> + PyModule_AddIntMacro(m, POLLPRI);
> + PyModule_AddIntMacro(m, POLLOUT);
> + PyModule_AddIntMacro(m, POLLERR);
> + PyModule_AddIntMacro(m, POLLHUP);
> + PyModule_AddIntMacro(m, POLLNVAL);
> +
> +#ifdef POLLRDNORM
> + PyModule_AddIntMacro(m, POLLRDNORM);
> +#endif
> +#ifdef POLLRDBAND
> + PyModule_AddIntMacro(m, POLLRDBAND);
> +#endif
> +#ifdef POLLWRNORM
> + PyModule_AddIntMacro(m, POLLWRNORM);
> +#endif
> +#ifdef POLLWRBAND
> + PyModule_AddIntMacro(m, POLLWRBAND);
> +#endif
> +#ifdef POLLMSG
> + PyModule_AddIntMacro(m, POLLMSG);
> +#endif
> +#ifdef POLLRDHUP
> + /* Kernel 2.6.17+ */
> + PyModule_AddIntMacro(m, POLLRDHUP);
> +#endif
> + }
> +#endif /* HAVE_POLL */
> +
> +#ifdef HAVE_SYS_DEVPOLL_H
> + if (PyType_Ready(&devpoll_Type) < 0)
> + return NULL;
> +#endif
> +
> +#ifdef HAVE_EPOLL
> + Py_TYPE(&pyEpoll_Type) = &PyType_Type;
> + if (PyType_Ready(&pyEpoll_Type) < 0)
> + return NULL;
> +
> + Py_INCREF(&pyEpoll_Type);
> + PyModule_AddObject(m, "epoll", (PyObject *) &pyEpoll_Type);
> +
> + PyModule_AddIntMacro(m, EPOLLIN);
> + PyModule_AddIntMacro(m, EPOLLOUT);
> + PyModule_AddIntMacro(m, EPOLLPRI);
> + PyModule_AddIntMacro(m, EPOLLERR);
> + PyModule_AddIntMacro(m, EPOLLHUP);
> +#ifdef EPOLLRDHUP
> + /* Kernel 2.6.17 */
> + PyModule_AddIntMacro(m, EPOLLRDHUP);
> +#endif
> + PyModule_AddIntMacro(m, EPOLLET);
> +#ifdef EPOLLONESHOT
> + /* Kernel 2.6.2+ */
> + PyModule_AddIntMacro(m, EPOLLONESHOT);
> +#endif
> +#ifdef EPOLLEXCLUSIVE
> + PyModule_AddIntMacro(m, EPOLLEXCLUSIVE);
> +#endif
> +
> +#ifdef EPOLLRDNORM
> + PyModule_AddIntMacro(m, EPOLLRDNORM);
> +#endif
> +#ifdef EPOLLRDBAND
> + PyModule_AddIntMacro(m, EPOLLRDBAND);
> +#endif
> +#ifdef EPOLLWRNORM
> + PyModule_AddIntMacro(m, EPOLLWRNORM);
> +#endif
> +#ifdef EPOLLWRBAND
> + PyModule_AddIntMacro(m, EPOLLWRBAND);
> +#endif
> +#ifdef EPOLLMSG
> + PyModule_AddIntMacro(m, EPOLLMSG);
> +#endif
> +
> +#ifdef EPOLL_CLOEXEC
> + PyModule_AddIntMacro(m, EPOLL_CLOEXEC);
> +#endif
> +#endif /* HAVE_EPOLL */
> +
> +#ifdef HAVE_KQUEUE
> + kqueue_event_Type.tp_new = PyType_GenericNew;
> + Py_TYPE(&kqueue_event_Type) = &PyType_Type;
> + if(PyType_Ready(&kqueue_event_Type) < 0)
> + return NULL;
> +
> + Py_INCREF(&kqueue_event_Type);
> + PyModule_AddObject(m, "kevent", (PyObject *)&kqueue_event_Type);
> +
> + Py_TYPE(&kqueue_queue_Type) = &PyType_Type;
> + if(PyType_Ready(&kqueue_queue_Type) < 0)
> + return NULL;
> + Py_INCREF(&kqueue_queue_Type);
> + PyModule_AddObject(m, "kqueue", (PyObject *)&kqueue_queue_Type);
> +
> + /* event filters */
> + PyModule_AddIntConstant(m, "KQ_FILTER_READ", EVFILT_READ);
> + PyModule_AddIntConstant(m, "KQ_FILTER_WRITE", EVFILT_WRITE);
> +#ifdef EVFILT_AIO
> + PyModule_AddIntConstant(m, "KQ_FILTER_AIO", EVFILT_AIO);
> +#endif
> +#ifdef EVFILT_VNODE
> + PyModule_AddIntConstant(m, "KQ_FILTER_VNODE", EVFILT_VNODE);
> +#endif
> +#ifdef EVFILT_PROC
> + PyModule_AddIntConstant(m, "KQ_FILTER_PROC", EVFILT_PROC);
> +#endif
> +#ifdef EVFILT_NETDEV
> + PyModule_AddIntConstant(m, "KQ_FILTER_NETDEV", EVFILT_NETDEV);
> +#endif
> +#ifdef EVFILT_SIGNAL
> + PyModule_AddIntConstant(m, "KQ_FILTER_SIGNAL", EVFILT_SIGNAL);
> +#endif
> + PyModule_AddIntConstant(m, "KQ_FILTER_TIMER", EVFILT_TIMER);
> +
> + /* event flags */
> + PyModule_AddIntConstant(m, "KQ_EV_ADD", EV_ADD);
> + PyModule_AddIntConstant(m, "KQ_EV_DELETE", EV_DELETE);
> + PyModule_AddIntConstant(m, "KQ_EV_ENABLE", EV_ENABLE);
> + PyModule_AddIntConstant(m, "KQ_EV_DISABLE", EV_DISABLE);
> + PyModule_AddIntConstant(m, "KQ_EV_ONESHOT", EV_ONESHOT);
> + PyModule_AddIntConstant(m, "KQ_EV_CLEAR", EV_CLEAR);
> +
> +#ifdef EV_SYSFLAGS
> + PyModule_AddIntConstant(m, "KQ_EV_SYSFLAGS", EV_SYSFLAGS);
> +#endif
> +#ifdef EV_FLAG1
> + PyModule_AddIntConstant(m, "KQ_EV_FLAG1", EV_FLAG1);
> +#endif
> +
> + PyModule_AddIntConstant(m, "KQ_EV_EOF", EV_EOF);
> + PyModule_AddIntConstant(m, "KQ_EV_ERROR", EV_ERROR);
> +
> + /* READ WRITE filter flag */
> +#ifdef NOTE_LOWAT
> + PyModule_AddIntConstant(m, "KQ_NOTE_LOWAT", NOTE_LOWAT);
> +#endif
> +
> + /* VNODE filter flags */
> +#ifdef EVFILT_VNODE
> + PyModule_AddIntConstant(m, "KQ_NOTE_DELETE", NOTE_DELETE);
> + PyModule_AddIntConstant(m, "KQ_NOTE_WRITE", NOTE_WRITE);
> + PyModule_AddIntConstant(m, "KQ_NOTE_EXTEND", NOTE_EXTEND);
> + PyModule_AddIntConstant(m, "KQ_NOTE_ATTRIB", NOTE_ATTRIB);
> + PyModule_AddIntConstant(m, "KQ_NOTE_LINK", NOTE_LINK);
> + PyModule_AddIntConstant(m, "KQ_NOTE_RENAME", NOTE_RENAME);
> + PyModule_AddIntConstant(m, "KQ_NOTE_REVOKE", NOTE_REVOKE);
> +#endif
> +
> + /* PROC filter flags */
> +#ifdef EVFILT_PROC
> + PyModule_AddIntConstant(m, "KQ_NOTE_EXIT", NOTE_EXIT);
> + PyModule_AddIntConstant(m, "KQ_NOTE_FORK", NOTE_FORK);
> + PyModule_AddIntConstant(m, "KQ_NOTE_EXEC", NOTE_EXEC);
> + PyModule_AddIntConstant(m, "KQ_NOTE_PCTRLMASK", NOTE_PCTRLMASK);
> + PyModule_AddIntConstant(m, "KQ_NOTE_PDATAMASK", NOTE_PDATAMASK);
> +
> + PyModule_AddIntConstant(m, "KQ_NOTE_TRACK", NOTE_TRACK);
> + PyModule_AddIntConstant(m, "KQ_NOTE_CHILD", NOTE_CHILD);
> + PyModule_AddIntConstant(m, "KQ_NOTE_TRACKERR", NOTE_TRACKERR);
> +#endif
> +
> + /* NETDEV filter flags */
> +#ifdef EVFILT_NETDEV
> + PyModule_AddIntConstant(m, "KQ_NOTE_LINKUP", NOTE_LINKUP);
> + PyModule_AddIntConstant(m, "KQ_NOTE_LINKDOWN", NOTE_LINKDOWN);
> + PyModule_AddIntConstant(m, "KQ_NOTE_LINKINV", NOTE_LINKINV);
> +#endif
> +
> +#endif /* HAVE_KQUEUE */
> + return m;
> +}
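
Since every constant above is only registered when the corresponding macro
exists on the build target, Python code should probe for optional names
instead of assuming them; a small feature-detection sketch:

    import select

    # These attributes exist only if the C headers defined them at build
    # time, so fall back gracefully when they are absent.
    POLLRDHUP = getattr(select, "POLLRDHUP", 0)   # Linux 2.6.17+, else 0
    print("poll   available:", hasattr(select, "poll"))
    print("epoll  available:", hasattr(select, "epoll"))
    print("kqueue available:", hasattr(select, "kqueue"))
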
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
> new file mode 100644
> index 00000000..d5bb9f59
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
> @@ -0,0 +1,7810 @@
> +/* Socket module */
> +
> +/*
> +
> +This module provides an interface to Berkeley socket IPC.
> +
> +Limitations:
> +
> +- Only AF_INET, AF_INET6 and AF_UNIX address families are supported in a
> + portable manner, though AF_PACKET, AF_NETLINK and AF_TIPC are supported
> + under Linux.
> +- No read/write operations (use sendall/recv or makefile instead).
> +- Additional restrictions apply on some non-Unix platforms (compensated
> + for by socket.py).
> +
> +Module interface:
> +
> +- socket.error: exception raised for socket specific errors, alias for OSError
> +- socket.gaierror: exception raised for getaddrinfo/getnameinfo errors,
> + a subclass of socket.error
> +- socket.herror: exception raised for gethostby* errors,
> + a subclass of socket.error
> +- socket.gethostbyname(hostname) --> host IP address (string: 'dd.dd.dd.dd')
> +- socket.gethostbyaddr(IP address) --> (hostname, [alias, ...], [IP addr, ...])
> +- socket.gethostname() --> host name (string: 'spam' or 'spam.domain.com')
> +- socket.getprotobyname(protocolname) --> protocol number
> +- socket.getservbyname(servicename[, protocolname]) --> port number
> +- socket.getservbyport(portnumber[, protocolname]) --> service name
> +- socket.socket([family[, type [, proto, fileno]]]) --> new socket object
> + (fileno specifies a pre-existing socket file descriptor)
> +- socket.socketpair([family[, type [, proto]]]) --> (socket, socket)
> +- socket.ntohs(16 bit value) --> new int object
> +- socket.ntohl(32 bit value) --> new int object
> +- socket.htons(16 bit value) --> new int object
> +- socket.htonl(32 bit value) --> new int object
> +- socket.getaddrinfo(host, port [, family, type, proto, flags])
> + --> List of (family, type, proto, canonname, sockaddr)
> +- socket.getnameinfo(sockaddr, flags) --> (host, port)
> +- socket.AF_INET, socket.SOCK_STREAM, etc.: constants from <socket.h>
> +- socket.has_ipv6: boolean value indicating if IPv6 is supported
> +- socket.inet_aton(IP address) -> 32-bit packed IP representation
> +- socket.inet_ntoa(packed IP) -> IP address string
> +- socket.getdefaulttimeout() -> None | float
> +- socket.setdefaulttimeout(None | float)
> +- socket.if_nameindex() -> list of tuples (if_index, if_name)
> +- socket.if_nametoindex(name) -> corresponding interface index
> +- socket.if_indextoname(index) -> corresponding interface name
> +- an Internet socket address is a pair (hostname, port)
> + where hostname can be anything recognized by gethostbyname()
> + (including the dd.dd.dd.dd notation) and port is in host byte order
> +- where a hostname is returned, the dd.dd.dd.dd notation is used
> +- a UNIX domain socket address is a string specifying the pathname
> +- an AF_PACKET socket address is a tuple containing a string
> + specifying the ethernet interface and an integer specifying
> + the Ethernet protocol number to be received. For example:
> + ("eth0",0x1234). Optional 3rd,4th,5th elements in the tuple
> + specify packet-type and ha-type/addr.
> +- an AF_TIPC socket address is expressed as
> + (addr_type, v1, v2, v3 [, scope]); where addr_type can be one of:
> + TIPC_ADDR_NAMESEQ, TIPC_ADDR_NAME, and TIPC_ADDR_ID;
> + and scope can be one of:
> + TIPC_ZONE_SCOPE, TIPC_CLUSTER_SCOPE, and TIPC_NODE_SCOPE.
> + The meaning of v1, v2 and v3 depends on the value of addr_type:
> + if addr_type is TIPC_ADDR_NAME:
> + v1 is the server type
> + v2 is the port identifier
> + v3 is ignored
> + if addr_type is TIPC_ADDR_NAMESEQ:
> + v1 is the server type
> + v2 is the lower port number
> + v3 is the upper port number
> + if addr_type is TIPC_ADDR_ID:
> + v1 is the node
> + v2 is the ref
> + v3 is ignored
> +
> +
> +Local naming conventions:
> +
> +- names starting with sock_ are socket object methods
> +- names starting with socket_ are module-level functions
> +- names starting with PySocket are exported through socketmodule.h
> +
> +*/
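
To make the module-level interface listed above concrete, here is a minimal
Python-level client sketch (the host name and port are placeholders, not
values taken from this patch):

    import socket

    # getaddrinfo() returns (family, type, proto, canonname, sockaddr) tuples.
    infos = socket.getaddrinfo("example.org", 80, socket.AF_UNSPEC,
                               socket.SOCK_STREAM)
    family, socktype, proto, _, sockaddr = infos[0]

    s = socket.socket(family, socktype, proto)
    s.connect(sockaddr)
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    reply = s.recv(4096)
    s.close()
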
> +
> +#ifdef __APPLE__
> +#include <AvailabilityMacros.h>
> +/* for getaddrinfo thread safety test on old versions of OS X */
> +#ifndef MAC_OS_X_VERSION_10_5
> +#define MAC_OS_X_VERSION_10_5 1050
> +#endif
> + /*
> + * inet_aton is not available on OSX 10.3, yet we want to use a binary
> + * that was built on 10.4 or later to work on that release; weak linking
> + * comes to the rescue.
> + */
> +# pragma weak inet_aton
> +#endif
> +
> +#include "Python.h"
> +#include "structmember.h"
> +
> +/* Socket object documentation */
> +PyDoc_STRVAR(sock_doc,
> +"socket(family=AF_INET, type=SOCK_STREAM, proto=0, fileno=None) -> socket object\n\
> +\n\
> +Open a socket of the given type. The family argument specifies the\n\
> +address family; it defaults to AF_INET. The type argument specifies\n\
> +whether this is a stream (SOCK_STREAM, this is the default)\n\
> +or datagram (SOCK_DGRAM) socket. The protocol argument defaults to 0,\n\
> +specifying the default protocol. Keyword arguments are accepted.\n\
> +The socket is created as non-inheritable.\n\
> +\n\
> +A socket object represents one endpoint of a network connection.\n\
> +\n\
> +Methods of socket objects (keyword arguments not allowed):\n\
> +\n\
> +_accept() -- accept connection, returning new socket fd and client address\n\
> +bind(addr) -- bind the socket to a local address\n\
> +close() -- close the socket\n\
> +connect(addr) -- connect the socket to a remote address\n\
> +connect_ex(addr) -- connect, return an error code instead of an exception\n\
> +dup() -- return a new socket fd duplicated from fileno()\n\
> +fileno() -- return underlying file descriptor\n\
> +getpeername() -- return remote address [*]\n\
> +getsockname() -- return local address\n\
> +getsockopt(level, optname[, buflen]) -- get socket options\n\
> +gettimeout() -- return timeout or None\n\
> +listen([n]) -- start listening for incoming connections\n\
> +recv(buflen[, flags]) -- receive data\n\
> +recv_into(buffer[, nbytes[, flags]]) -- receive data (into a buffer)\n\
> +recvfrom(buflen[, flags]) -- receive data and sender\'s address\n\
> +recvfrom_into(buffer[, nbytes[, flags]])\n\
> + -- receive data and sender\'s address (into a buffer)\n\
> +sendall(data[, flags]) -- send all data\n\
> +send(data[, flags]) -- send data, may not send all of it\n\
> +sendto(data[, flags], addr) -- send data to a given address\n\
> +setblocking(0 | 1) -- set or clear the blocking I/O flag\n\
> +setsockopt(level, optname, value[, optlen]) -- set socket options\n\
> +settimeout(None | float) -- set or clear the timeout\n\
> +shutdown(how) -- shut down traffic in one or both directions\n\
> +if_nameindex() -- return all network interface indices and names\n\
> +if_nametoindex(name) -- return the corresponding interface index\n\
> +if_indextoname(index) -- return the corresponding interface name\n\
> +\n\
> + [*] not available on all platforms!");
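
The method list in sock_doc maps directly onto the familiar Python-level
socket object; a small server-side sketch using those methods (the port
number is illustrative only):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))   # bind(addr)
    srv.listen(5)                   # listen([n])
    conn, peer = srv.accept()       # Python-level wrapper around _accept()
    data = conn.recv(1024)          # recv(buflen[, flags])
    conn.sendall(data)              # sendall(data[, flags])
    conn.close()
    srv.close()
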
> +
> +/* XXX This is a terrible mess of platform-dependent preprocessor hacks.
> + I hope some day someone can clean this up please... */
> +
> +/* Hacks for gethostbyname_r(). On some non-Linux platforms, the configure
> + script doesn't get this right, so we hardcode some platform checks below.
> + On the other hand, not all Linux versions agree, so there the settings
> + computed by the configure script are needed! */
> +
> +#ifndef __linux__
> +# undef HAVE_GETHOSTBYNAME_R_3_ARG
> +# undef HAVE_GETHOSTBYNAME_R_5_ARG
> +# undef HAVE_GETHOSTBYNAME_R_6_ARG
> +#endif
> +
> +#if defined(__OpenBSD__)
> +# include <sys/uio.h>
> +#endif
> +
> +#if !defined(WITH_THREAD)
> +# undef HAVE_GETHOSTBYNAME_R
> +#endif
> +
> +#if defined(__ANDROID__) && __ANDROID_API__ < 23
> +# undef HAVE_GETHOSTBYNAME_R
> +#endif
> +
> +#ifdef HAVE_GETHOSTBYNAME_R
> +# if defined(_AIX) && !defined(_LINUX_SOURCE_COMPAT)
> +# define HAVE_GETHOSTBYNAME_R_3_ARG
> +# elif defined(__sun) || defined(__sgi)
> +# define HAVE_GETHOSTBYNAME_R_5_ARG
> +# elif defined(__linux__)
> +/* Rely on the configure script */
> +# elif defined(_LINUX_SOURCE_COMPAT) /* Linux compatibility on AIX */
> +# define HAVE_GETHOSTBYNAME_R_6_ARG
> +# else
> +# undef HAVE_GETHOSTBYNAME_R
> +# endif
> +#endif
> +
> +#if !defined(HAVE_GETHOSTBYNAME_R) && defined(WITH_THREAD) && \
> + !defined(MS_WINDOWS)
> +# define USE_GETHOSTBYNAME_LOCK
> +#endif
> +
> +/* To use __FreeBSD_version, __OpenBSD__, and __NetBSD_Version__ */
> +#ifdef HAVE_SYS_PARAM_H
> +#include <sys/param.h>
> +#endif
> +/* On systems on which getaddrinfo() is believed to not be thread-safe
> + (this includes the getaddrinfo emulation), protect access with a lock.
> +
> + getaddrinfo is thread-safe on Mac OS X 10.5 and later. Originally it was
> + a mix of code including an unsafe implementation from an old BSD's
> + libresolv. In 10.5 Apple reimplemented it as a safe IPC call to the
> + mDNSResponder process. 10.5 is the first to be UNIX '03 certified, which
> + includes the requirement that getaddrinfo be thread-safe. See issue #25924.
> +
> + It's thread-safe in OpenBSD starting with 5.4, released Nov 2013:
> + http://www.openbsd.org/plus54.html
> +
> + It's thread-safe in NetBSD starting with 4.0, released Dec 2007:
> +
> +http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/net/getaddrinfo.c.diff?r1=1.82&r2=1.83
> + */
> +#if defined(WITH_THREAD) && ( \
> + (defined(__APPLE__) && \
> + MAC_OS_X_VERSION_MIN_REQUIRED < MAC_OS_X_VERSION_10_5) || \
> + (defined(__FreeBSD__) && __FreeBSD_version+0 < 503000) || \
> + (defined(__OpenBSD__) && OpenBSD+0 < 201311) || \
> + (defined(__NetBSD__) && __NetBSD_Version__+0 < 400000000) || \
> + !defined(HAVE_GETADDRINFO))
> +#define USE_GETADDRINFO_LOCK
> +#endif
> +
> +#ifdef USE_GETADDRINFO_LOCK
> +#define ACQUIRE_GETADDRINFO_LOCK PyThread_acquire_lock(netdb_lock, 1);
> +#define RELEASE_GETADDRINFO_LOCK PyThread_release_lock(netdb_lock);
> +#else
> +#define ACQUIRE_GETADDRINFO_LOCK
> +#define RELEASE_GETADDRINFO_LOCK
> +#endif
> +
> +#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
> +# include "pythread.h"
> +#endif
> +
> +#if defined(PYCC_VACPP)
> +# include <types.h>
> +# include <io.h>
> +# include <sys/ioctl.h>
> +# include <utils.h>
> +# include <ctype.h>
> +#endif
> +
> +#if defined(__APPLE__) || defined(__CYGWIN__) || defined(__NetBSD__)
> +# include <sys/ioctl.h>
> +#endif
> +
> +
> +#if defined(__sgi) && _COMPILER_VERSION>700 && !_SGIAPI
> +/* make sure that the reentrant (gethostbyaddr_r etc)
> + functions are declared correctly if compiling with
> + MIPSPro 7.x in ANSI C mode (default) */
> +
> +/* XXX Using _SGIAPI is the wrong thing,
> + but I don't know what the right thing is. */
> +#undef _SGIAPI /* to avoid warning */
> +#define _SGIAPI 1
> +
> +#undef _XOPEN_SOURCE
> +#include <sys/socket.h>
> +#include <sys/types.h>
> +#include <netinet/in.h>
> +#ifdef _SS_ALIGNSIZE
> +#define HAVE_GETADDRINFO 1
> +#define HAVE_GETNAMEINFO 1
> +#endif
> +
> +#define HAVE_INET_PTON
> +#include <netdb.h>
> +#endif
> +
> +/* Irix 6.5 fails to define this variable at all. This is needed
> + for both GCC and SGI's compiler. I'd say that the SGI headers
> + are just busted. Same thing for Solaris. */
> +#if (defined(__sgi) || defined(sun)) && !defined(INET_ADDRSTRLEN)
> +#define INET_ADDRSTRLEN 16
> +#endif
> +
> +/* Generic includes */
> +#ifdef HAVE_SYS_TYPES_H
> +#include <sys/types.h>
> +#endif
> +
> +#ifdef HAVE_SYS_SOCKET_H
> +#include <sys/socket.h>
> +#endif
> +
> +#ifdef HAVE_NET_IF_H
> +#include <net/if.h>
> +#endif
> +
> +/* Generic socket object definitions and includes */
> +#define PySocket_BUILDING_SOCKET
> +#include "socketmodule.h"
> +
> +/* Addressing includes */
> +
> +#ifndef MS_WINDOWS
> +
> +/* Non-MS WINDOWS includes */
> +# include <netdb.h>
> +# include <unistd.h>
> +
> +/* Headers needed for inet_ntoa() and inet_addr() */
> +# include <arpa/inet.h>
> +
> +# include <fcntl.h>
> +
> +#else
> +
> +/* MS_WINDOWS includes */
> +# ifdef HAVE_FCNTL_H
> +# include <fcntl.h>
> +# endif
> +
> +/* Provides the IsWindows7SP1OrGreater() function */
> +#include <VersionHelpers.h>
> +
> +/* remove some flags on older versions of Windows at run-time.
> + https://msdn.microsoft.com/en-us/library/windows/desktop/ms738596.aspx */
> +typedef struct {
> + DWORD build_number; /* available starting with this Win10 BuildNumber */
> + const char flag_name[20];
> +} FlagRuntimeInfo;
> +
> +/* IMPORTANT: make sure the list is ordered by descending build_number */
> +static FlagRuntimeInfo win_runtime_flags[] = {
> + /* available starting with Windows 10 1703 */
> + {15063, "TCP_KEEPCNT"},
> + /* available starting with Windows 10 1607 */
> + {14393, "TCP_FASTOPEN"}
> +};
> +
> +static void
> +remove_unusable_flags(PyObject *m)
> +{
> + PyObject *dict;
> + OSVERSIONINFOEX info;
> + DWORDLONG dwlConditionMask;
> +
> + dict = PyModule_GetDict(m);
> + if (dict == NULL) {
> + return;
> + }
> +
> + /* set to Windows 10, except BuildNumber. */
> + memset(&info, 0, sizeof(info));
> + info.dwOSVersionInfoSize = sizeof(info);
> + info.dwMajorVersion = 10;
> + info.dwMinorVersion = 0;
> +
> + /* set Condition Mask */
> + dwlConditionMask = 0;
> + VER_SET_CONDITION(dwlConditionMask, VER_MAJORVERSION, VER_GREATER_EQUAL);
> + VER_SET_CONDITION(dwlConditionMask, VER_MINORVERSION, VER_GREATER_EQUAL);
> + VER_SET_CONDITION(dwlConditionMask, VER_BUILDNUMBER, VER_GREATER_EQUAL);
> +
> + for (int i=0; i<sizeof(win_runtime_flags)/sizeof(FlagRuntimeInfo); i++) {
> + info.dwBuildNumber = win_runtime_flags[i].build_number;
> + /* greater than or equal to the specified version?
> + Compatibility Mode will not cheat VerifyVersionInfo(...) */
> + if (VerifyVersionInfo(
> + &info,
> + VER_MAJORVERSION|VER_MINORVERSION|VER_BUILDNUMBER,
> + dwlConditionMask)) {
> + break;
> + }
> + else {
> + if (PyDict_GetItemString(
> + dict,
> + win_runtime_flags[i].flag_name) != NULL)
> + {
> + if (PyDict_DelItemString(
> + dict,
> + win_runtime_flags[i].flag_name))
> + {
> + PyErr_Clear();
> + }
> + }
> + }
> + }
> +}
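
Since remove_unusable_flags() above deletes constants such as TCP_FASTOPEN
or TCP_KEEPCNT from the module dict when the running Windows build is too
old, portable Python code should feature-test rather than assume they exist;
a short sketch:

    import socket

    if hasattr(socket, "TCP_FASTOPEN"):
        # The constant survived the runtime check above; safe to use.
        fastopen = socket.TCP_FASTOPEN
    else:
        fastopen = None  # fall back to an ordinary connect() path
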
> +
> +#endif
> +
> +#include <stddef.h>
> +
> +#ifndef O_NONBLOCK
> +# define O_NONBLOCK O_NDELAY
> +#endif
> +
> +/* include Python's addrinfo.h unless it causes trouble */
> +#if defined(__sgi) && _COMPILER_VERSION>700 && defined(_SS_ALIGNSIZE)
> + /* Do not include addrinfo.h on some newer IRIX versions.
> + * _SS_ALIGNSIZE is defined in sys/socket.h by 6.5.21,
> + * for example, but not by 6.5.10.
> + */
> +#elif defined(_MSC_VER) && _MSC_VER>1201
> + /* Do not include addrinfo.h for MSVC7 or greater. 'addrinfo' and
> + * EAI_* constants are defined in (the already included) ws2tcpip.h.
> + */
> +#else
> +# include "addrinfo.h"
> +#endif
> +
> +#ifndef HAVE_INET_PTON
> +#if !defined(NTDDI_VERSION) || (NTDDI_VERSION < NTDDI_LONGHORN)
> +int inet_pton(int af, const char *src, void *dst);
> +const char *inet_ntop(int af, const void *src, char *dst, socklen_t size);
> +#endif
> +#endif
> +
> +#ifdef __APPLE__
> +/* On OS X, getaddrinfo returns no error indication of lookup
> + failure, so we must use the emulation instead of the libinfo
> + implementation. Unfortunately, performing an autoconf test
> + for this bug would require DNS access for the machine performing
> + the configuration, which is not acceptable. Therefore, we
> + determine the bug just by checking for __APPLE__. If this bug
> + ever gets fixed, perhaps checking for sys/version.h would be
> + appropriate, which is 10/0 on the system with the bug. */
> +#ifndef HAVE_GETNAMEINFO
> +/* This bug seems to be fixed in Jaguar. The easiest way I could
> + find to check for Jaguar is that it has getnameinfo(), which
> + older releases don't have */
> +#undef HAVE_GETADDRINFO
> +#endif
> +
> +#ifdef HAVE_INET_ATON
> +#define USE_INET_ATON_WEAKLINK
> +#endif
> +
> +#endif
> +
> +/* I know this is a bad practice, but it is the easiest... */
> +#if !defined(HAVE_GETADDRINFO)
> +/* avoid clashes with the C library definition of the symbol. */
> +#define getaddrinfo fake_getaddrinfo
> +#define gai_strerror fake_gai_strerror
> +#define freeaddrinfo fake_freeaddrinfo
> +#include "getaddrinfo.c"
> +#endif
> +#if !defined(HAVE_GETNAMEINFO)
> +#define getnameinfo fake_getnameinfo
> +#include "getnameinfo.c"
> +#endif
> +
> +#ifdef MS_WINDOWS
> +#define SOCKETCLOSE closesocket
> +#endif
> +
> +#ifdef MS_WIN32
> +#undef EAFNOSUPPORT
> +#define EAFNOSUPPORT WSAEAFNOSUPPORT
> +#define snprintf _snprintf
> +#endif
> +
> +#ifndef SOCKETCLOSE
> +#define SOCKETCLOSE close
> +#endif
> +
> +#if (defined(HAVE_BLUETOOTH_H) || defined(HAVE_BLUETOOTH_BLUETOOTH_H)) && !defined(__NetBSD__) && !defined(__DragonFly__)
> +#define USE_BLUETOOTH 1
> +#if defined(__FreeBSD__)
> +#define BTPROTO_L2CAP BLUETOOTH_PROTO_L2CAP
> +#define BTPROTO_RFCOMM BLUETOOTH_PROTO_RFCOMM
> +#define BTPROTO_HCI BLUETOOTH_PROTO_HCI
> +#define SOL_HCI SOL_HCI_RAW
> +#define HCI_FILTER SO_HCI_RAW_FILTER
> +#define sockaddr_l2 sockaddr_l2cap
> +#define sockaddr_rc sockaddr_rfcomm
> +#define hci_dev hci_node
> +#define _BT_L2_MEMB(sa, memb) ((sa)->l2cap_##memb)
> +#define _BT_RC_MEMB(sa, memb) ((sa)->rfcomm_##memb)
> +#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
> +#elif defined(__NetBSD__) || defined(__DragonFly__)
> +#define sockaddr_l2 sockaddr_bt
> +#define sockaddr_rc sockaddr_bt
> +#define sockaddr_hci sockaddr_bt
> +#define sockaddr_sco sockaddr_bt
> +#define SOL_HCI BTPROTO_HCI
> +#define HCI_DATA_DIR SO_HCI_DIRECTION
> +#define _BT_L2_MEMB(sa, memb) ((sa)->bt_##memb)
> +#define _BT_RC_MEMB(sa, memb) ((sa)->bt_##memb)
> +#define _BT_HCI_MEMB(sa, memb) ((sa)->bt_##memb)
> +#define _BT_SCO_MEMB(sa, memb) ((sa)->bt_##memb)
> +#else
> +#define _BT_L2_MEMB(sa, memb) ((sa)->l2_##memb)
> +#define _BT_RC_MEMB(sa, memb) ((sa)->rc_##memb)
> +#define _BT_HCI_MEMB(sa, memb) ((sa)->hci_##memb)
> +#define _BT_SCO_MEMB(sa, memb) ((sa)->sco_##memb)
> +#endif
> +#endif
> +
> +/* Convert "sock_addr_t *" to "struct sockaddr *". */
> +#define SAS2SA(x) (&((x)->sa))
> +
> +/*
> + * Constants for getnameinfo()
> + */
> +#if !defined(NI_MAXHOST)
> +#define NI_MAXHOST 1025
> +#endif
> +#if !defined(NI_MAXSERV)
> +#define NI_MAXSERV 32
> +#endif
> +
> +#ifndef INVALID_SOCKET /* MS defines this */
> +#define INVALID_SOCKET (-1)
> +#endif
> +
> +#ifndef INADDR_NONE
> +#define INADDR_NONE (-1)
> +#endif
> +
> +/* XXX There's a problem here: *static* functions are not supposed to have
> + a Py prefix (or use CapitalizedWords). Later... */
> +
> +/* Global variables holding the exception types for errors detected
> + by this module (but not argument type or memory errors, etc.). */
> +static PyObject *socket_herror;
> +static PyObject *socket_gaierror;
> +static PyObject *socket_timeout;
> +
> +/* A forward reference to the socket type object.
> + The sock_type variable contains pointers to various functions,
> + some of which call new_sockobject(), which uses sock_type, so
> + there has to be a circular reference. */
> +static PyTypeObject sock_type;
> +
> +#if defined(HAVE_POLL_H)
> +#include <poll.h>
> +#elif defined(HAVE_SYS_POLL_H)
> +#include <sys/poll.h>
> +#endif
> +
> +/* Largest value to try to store in a socklen_t (used when handling
> + ancillary data). POSIX requires socklen_t to hold at least
> + (2**31)-1 and recommends against storing larger values, but
> + socklen_t was originally int in the BSD interface, so to be on the
> + safe side we use the smaller of (2**31)-1 and INT_MAX. */
> +#if INT_MAX > 0x7fffffff
> +#define SOCKLEN_T_LIMIT 0x7fffffff
> +#else
> +#define SOCKLEN_T_LIMIT INT_MAX
> +#endif
> +
> +#ifdef HAVE_POLL
> +/* Instead of select(), we'll use poll() since poll() works on any fd. */
> +#define IS_SELECTABLE(s) 1
> +/* Can we call select() with this socket without a buffer overrun? */
> +#else
> +/* If there's no timeout left, we don't have to call select, so it's a safe,
> + * little white lie. */
> +#define IS_SELECTABLE(s) (_PyIsSelectable_fd((s)->sock_fd) || (s)->sock_timeout <= 0)
> +#endif
> +
> +static PyObject*
> +select_error(void)
> +{
> + PyErr_SetString(PyExc_OSError, "unable to select on socket");
> + return NULL;
> +}
> +
> +#ifdef MS_WINDOWS
> +#ifndef WSAEAGAIN
> +#define WSAEAGAIN WSAEWOULDBLOCK
> +#endif
> +#define CHECK_ERRNO(expected) \
> + (WSAGetLastError() == WSA ## expected)
> +#else
> +#define CHECK_ERRNO(expected) \
> + (errno == expected)
> +#endif
> +
> +#ifdef MS_WINDOWS
> +# define GET_SOCK_ERROR WSAGetLastError()
> +# define SET_SOCK_ERROR(err) WSASetLastError(err)
> +# define SOCK_TIMEOUT_ERR WSAEWOULDBLOCK
> +# define SOCK_INPROGRESS_ERR WSAEWOULDBLOCK
> +#else
> +# define GET_SOCK_ERROR errno
> +# define SET_SOCK_ERROR(err) do { errno = err; } while (0)
> +# define SOCK_TIMEOUT_ERR EWOULDBLOCK
> +# define SOCK_INPROGRESS_ERR EINPROGRESS
> +#endif
> +
> +
> +#ifdef MS_WINDOWS
> +/* Does WSASocket() support the WSA_FLAG_NO_HANDLE_INHERIT flag? */
> +static int support_wsa_no_inherit = -1;
> +#endif
> +
> +/* Convenience function to raise an error according to errno
> + and return a NULL pointer from a function. */
> +
> +static PyObject *
> +set_error(void)
> +{
> +#ifdef MS_WINDOWS
> + int err_no = WSAGetLastError();
> + /* PyErr_SetExcFromWindowsErr() invokes FormatMessage() which
> + recognizes the error codes used by both GetLastError() and
> + WSAGetLastError() */
> + if (err_no)
> + return PyErr_SetExcFromWindowsErr(PyExc_OSError, err_no);
> +#endif
> +
> + return PyErr_SetFromErrno(PyExc_OSError);
> +}
> +
> +
> +static PyObject *
> +set_herror(int h_error)
> +{
> + PyObject *v;
> +
> +#ifdef HAVE_HSTRERROR
> + v = Py_BuildValue("(is)", h_error, (char *)hstrerror(h_error));
> +#else
> + v = Py_BuildValue("(is)", h_error, "host not found");
> +#endif
> + if (v != NULL) {
> + PyErr_SetObject(socket_herror, v);
> + Py_DECREF(v);
> + }
> +
> + return NULL;
> +}
> +
> +
> +static PyObject *
> +set_gaierror(int error)
> +{
> + PyObject *v;
> +
> +#ifdef EAI_SYSTEM
> + /* EAI_SYSTEM is not available on Windows XP. */
> + if (error == EAI_SYSTEM)
> + return set_error();
> +#endif
> +
> +#ifdef HAVE_GAI_STRERROR
> + v = Py_BuildValue("(is)", error, gai_strerror(error));
> +#else
> + v = Py_BuildValue("(is)", error, "getaddrinfo failed");
> +#endif
> + if (v != NULL) {
> + PyErr_SetObject(socket_gaierror, v);
> + Py_DECREF(v);
> + }
> +
> + return NULL;
> +}
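
set_herror() and set_gaierror() surface in Python as socket.herror and
socket.gaierror; a brief sketch of catching a name-resolution failure (the
host name is deliberately invalid):

    import socket

    try:
        socket.getaddrinfo("no-such-host.invalid", 80)
    except socket.gaierror as exc:
        # exc.args is (error_code, message), as built by set_gaierror()
        print("resolution failed:", exc)
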
> +
> +/* Function to perform the setting of socket blocking mode
> + internally. block = (1 | 0). */
> +static int
> +internal_setblocking(PySocketSockObject *s, int block)
> +{
> + int result = -1;
> +#ifdef MS_WINDOWS
> + u_long arg;
> +#endif
> +#if !defined(MS_WINDOWS) \
> + && !((defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO)))
> + int delay_flag, new_delay_flag;
> +#endif
> +#ifdef SOCK_NONBLOCK
> + if (block)
> + s->sock_type &= (~SOCK_NONBLOCK);
> + else
> + s->sock_type |= SOCK_NONBLOCK;
> +#endif
> +
> + Py_BEGIN_ALLOW_THREADS
> +#ifndef MS_WINDOWS
> +#if (defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO))
> + block = !block;
> + if (ioctl(s->sock_fd, FIONBIO, (unsigned int *)&block) == -1)
> + goto done;
> +#else
> + delay_flag = fcntl(s->sock_fd, F_GETFL, 0);
> + if (delay_flag == -1)
> + goto done;
> + if (block)
> + new_delay_flag = delay_flag & (~O_NONBLOCK);
> + else
> + new_delay_flag = delay_flag | O_NONBLOCK;
> + if (new_delay_flag != delay_flag)
> + if (fcntl(s->sock_fd, F_SETFL, new_delay_flag) == -1)
> + goto done;
> +#endif
> +#else /* MS_WINDOWS */
> + arg = !block;
> + if (ioctlsocket(s->sock_fd, FIONBIO, &arg) != 0)
> + goto done;
> +#endif /* MS_WINDOWS */
> +
> + result = 0;
> +
> + done:
> + ; /* necessary for --without-threads flag */
> + Py_END_ALLOW_THREADS
> +
> + if (result) {
> +#ifndef MS_WINDOWS
> + PyErr_SetFromErrno(PyExc_OSError);
> +#else
> + PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
> +#endif
> + }
> +
> + return result;
> +}
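
internal_setblocking() is what backs setblocking() and settimeout(); at the
Python level the interaction looks roughly like this (a sketch, not taken
from the patch):

    import socket

    s = socket.socket()
    s.setblocking(False)   # same as settimeout(0.0): non-blocking
    s.settimeout(2.5)      # non-blocking internally, with a 2.5 s deadline
    s.setblocking(True)    # same as settimeout(None): fully blocking
    s.close()
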
> +
> +static int
> +internal_select(PySocketSockObject *s, int writing, _PyTime_t interval,
> + int connect)
> +{
> + int n;
> +#ifdef HAVE_POLL
> + struct pollfd pollfd;
> + _PyTime_t ms;
> +#else
> + fd_set fds, efds;
> + struct timeval tv, *tvp;
> +#endif
> +
> +#ifdef WITH_THREAD
> + /* must be called with the GIL held */
> + assert(PyGILState_Check());
> +#endif
> +
> + /* Error condition is for output only */
> + assert(!(connect && !writing));
> +
> + /* Guard against closed socket */
> + if (s->sock_fd == INVALID_SOCKET)
> + return 0;
> +
> + /* Prefer poll, if available, since you can poll() any fd
> + * which can't be done with select(). */
> +#ifdef HAVE_POLL
> + pollfd.fd = s->sock_fd;
> + pollfd.events = writing ? POLLOUT : POLLIN;
> + if (connect) {
> + /* On Windows, the socket becomes writable on connection success,
> + but a connection failure is notified as an error. On POSIX, the
> + socket becomes writable on connection success or on connection
> + failure. */
> + pollfd.events |= POLLERR;
> + }
> +
> + /* s->sock_timeout is in seconds, timeout in ms */
> + ms = _PyTime_AsMilliseconds(interval, _PyTime_ROUND_CEILING);
> + assert(ms <= INT_MAX);
> +
> + Py_BEGIN_ALLOW_THREADS;
> + n = poll(&pollfd, 1, (int)ms);
> + Py_END_ALLOW_THREADS;
> +#else
> + if (interval >= 0) {
> + _PyTime_AsTimeval_noraise(interval, &tv, _PyTime_ROUND_CEILING);
> + tvp = &tv;
> + }
> + else
> + tvp = NULL;
> +
> + FD_ZERO(&fds);
> + FD_SET(s->sock_fd, &fds);
> + FD_ZERO(&efds);
> + if (connect) {
> + /* On Windows, the socket becomes writable on connection success,
> + but a connection failure is notified as an error. On POSIX, the
> + socket becomes writable on connection success or on connection
> + failure. */
> + FD_SET(s->sock_fd, &efds);
> + }
> +
> + /* See if the socket is ready */
> + Py_BEGIN_ALLOW_THREADS;
> + if (writing)
> + n = select(Py_SAFE_DOWNCAST(s->sock_fd+1, SOCKET_T, int),
> + NULL, &fds, &efds, tvp);
> + else
> + n = select(Py_SAFE_DOWNCAST(s->sock_fd+1, SOCKET_T, int),
> + &fds, NULL, &efds, tvp);
> + Py_END_ALLOW_THREADS;
> +#endif
> +
> + if (n < 0)
> + return -1;
> + if (n == 0)
> + return 1;
> + return 0;
> +}
> +
> +/* Call a socket function.
> +
> + On error, raise an exception and return -1 if err is NULL, or fill *err and
> + return -1 otherwise. If a signal was received and the signal handler raised
> + an exception, return -1, and set *err to -1 if err is non-NULL.
> +
> + On success, return 0, and set *err to 0 if err is non-NULL.
> +
> + If the socket has a timeout, wait until the socket is ready before calling
> + the function: wait until the socket is writable if writing is nonzero, wait
> + until the socket has received data otherwise.
> +
> + If the socket function is interrupted by a signal (failed with EINTR): retry
> + the function, except if the signal handler raised an exception (PEP 475).
> +
> + When the function is retried, recompute the timeout using a monotonic clock.
> +
> + sock_call_ex() must be called with the GIL held. The socket function is
> + called with the GIL released. */
> +static int
> +sock_call_ex(PySocketSockObject *s,
> + int writing,
> + int (*sock_func) (PySocketSockObject *s, void *data),
> + void *data,
> + int connect,
> + int *err,
> + _PyTime_t timeout)
> +{
> + int has_timeout = (timeout > 0);
> + _PyTime_t deadline = 0;
> + int deadline_initialized = 0;
> + int res;
> +
> +#ifdef WITH_THREAD
> + /* sock_call() must be called with the GIL held. */
> + assert(PyGILState_Check());
> +#endif
> +
> + /* outer loop to retry select() when select() is interrupted by a signal
> + or to retry select()+sock_func() on false positive (see above) */
> + while (1) {
> + /* For connect(), poll even for blocking socket. The connection
> + runs asynchronously. */
> + if (has_timeout || connect) {
> + if (has_timeout) {
> + _PyTime_t interval;
> +
> + if (deadline_initialized) {
> + /* recompute the timeout */
> + interval = deadline - _PyTime_GetMonotonicClock();
> + }
> + else {
> + deadline_initialized = 1;
> + deadline = _PyTime_GetMonotonicClock() + timeout;
> + interval = timeout;
> + }
> +
> + if (interval >= 0)
> + res = internal_select(s, writing, interval, connect);
> + else
> + res = 1;
> + }
> + else {
> + res = internal_select(s, writing, timeout, connect);
> + }
> +
> + if (res == -1) {
> + if (err)
> + *err = GET_SOCK_ERROR;
> +
> + if (CHECK_ERRNO(EINTR)) {
> + /* select() was interrupted by a signal */
> + if (PyErr_CheckSignals()) {
> + if (err)
> + *err = -1;
> + return -1;
> + }
> +
> + /* retry select() */
> + continue;
> + }
> +
> + /* select() failed */
> + s->errorhandler();
> + return -1;
> + }
> +
> + if (res == 1) {
> + if (err)
> + *err = SOCK_TIMEOUT_ERR;
> + else
> + PyErr_SetString(socket_timeout, "timed out");
> + return -1;
> + }
> +
> + /* the socket is ready */
> + }
> +
> + /* inner loop to retry sock_func() when sock_func() is interrupted
> + by a signal */
> + while (1) {
> + Py_BEGIN_ALLOW_THREADS
> + res = sock_func(s, data);
> + Py_END_ALLOW_THREADS
> +
> + if (res) {
> + /* sock_func() succeeded */
> + if (err)
> + *err = 0;
> + return 0;
> + }
> +
> + if (err)
> + *err = GET_SOCK_ERROR;
> +
> + if (!CHECK_ERRNO(EINTR))
> + break;
> +
> + /* sock_func() was interrupted by a signal */
> + if (PyErr_CheckSignals()) {
> + if (err)
> + *err = -1;
> + return -1;
> + }
> +
> + /* retry sock_func() */
> + }
> +
> + if (s->sock_timeout > 0
> + && (CHECK_ERRNO(EWOULDBLOCK) || CHECK_ERRNO(EAGAIN))) {
> + /* False positive: sock_func() failed with EWOULDBLOCK or EAGAIN.
> +
> + For example, select() could indicate a socket is ready for
> + reading, but the data was then discarded by the OS because of a
> + wrong checksum.
> +
> + Loop on select() to recheck for socket readiness. */
> + continue;
> + }
> +
> + /* sock_func() failed */
> + if (!err)
> + s->errorhandler();
> + /* else: err was already set before */
> + return -1;
> + }
> +}
> +
> +static int
> +sock_call(PySocketSockObject *s,
> + int writing,
> + int (*func) (PySocketSockObject *s, void *data),
> + void *data)
> +{
> + return sock_call_ex(s, writing, func, data, 0, NULL, s->sock_timeout);
> +}
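
The retry-with-deadline behaviour described in the comment above (recompute
the remaining interval from a monotonic clock, retry after EINTR per PEP 475,
re-select on a spurious wake-up) can be sketched in Python roughly as
follows; wait_ready and do_io are hypothetical stand-ins for
internal_select() and sock_func():

    import time

    def call_with_timeout(do_io, wait_ready, timeout):
        deadline = time.monotonic() + timeout
        while True:
            interval = deadline - time.monotonic()
            if interval < 0 or not wait_ready(interval):
                raise TimeoutError("timed out")
            try:
                return do_io()
            except InterruptedError:
                continue          # EINTR: retry, recomputing the timeout
            except BlockingIOError:
                continue          # false positive: go back and re-select
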
> +
> +
> +/* Initialize a new socket object. */
> +
> +/* Default timeout for new sockets */
> +static _PyTime_t defaulttimeout = _PYTIME_FROMSECONDS(-1);
> +
> +static int
> +init_sockobject(PySocketSockObject *s,
> + SOCKET_T fd, int family, int type, int proto)
> +{
> + s->sock_fd = fd;
> + s->sock_family = family;
> + s->sock_type = type;
> + s->sock_proto = proto;
> +
> + s->errorhandler = &set_error;
> +#ifdef SOCK_NONBLOCK
> + if (type & SOCK_NONBLOCK)
> + s->sock_timeout = 0;
> + else
> +#endif
> + {
> + s->sock_timeout = defaulttimeout;
> + if (defaulttimeout >= 0) {
> + if (internal_setblocking(s, 0) == -1) {
> + return -1;
> + }
> + }
> + }
> + return 0;
> +}
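
defaulttimeout together with init_sockobject() is what makes
socket.setdefaulttimeout() apply to subsequently created sockets; a brief
sketch:

    import socket

    socket.setdefaulttimeout(5.0)   # new sockets start with a 5 s timeout
    s = socket.socket()
    assert s.gettimeout() == 5.0
    socket.setdefaulttimeout(None)  # restore the blocking default
    s.close()
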
> +
> +
> +/* Create a new socket object.
> + This just creates the object and initializes it.
> + If the creation fails, return NULL and set an exception (implicit
> + in NEWOBJ()). */
> +
> +static PySocketSockObject *
> +new_sockobject(SOCKET_T fd, int family, int type, int proto)
> +{
> + PySocketSockObject *s;
> + s = (PySocketSockObject *)
> + PyType_GenericNew(&sock_type, NULL, NULL);
> + if (s == NULL)
> + return NULL;
> + if (init_sockobject(s, fd, family, type, proto) == -1) {
> + Py_DECREF(s);
> + return NULL;
> + }
> + return s;
> +}
> +
> +
> +/* Lock to allow the Python interpreter to continue, but only allow one
> + thread to be in gethostbyname or getaddrinfo */
> +#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
> +static PyThread_type_lock netdb_lock;
> +#endif
> +
> +
> +/* Convert a string specifying a host name or one of a few symbolic
> + names to a numeric IP address. This usually calls gethostbyname()
> + to do the work; the names "" and "<broadcast>" are special.
> + Return the length (4 for an IPv4 address), or a negative value if
> + an error occurred, in which case an exception is set. */
> +
> +static int
> +setipaddr(const char *name, struct sockaddr *addr_ret, size_t addr_ret_size, int af)
> +{
> + struct addrinfo hints, *res;
> + int error;
> +
> + memset((void *) addr_ret, '\0', sizeof(*addr_ret));
> + if (name[0] == '\0') {
> + int siz;
> + memset(&hints, 0, sizeof(hints));
> + hints.ai_family = af;
> + hints.ai_socktype = SOCK_DGRAM; /*dummy*/
> + hints.ai_flags = AI_PASSIVE;
> + Py_BEGIN_ALLOW_THREADS
> + ACQUIRE_GETADDRINFO_LOCK
> + error = getaddrinfo(NULL, "0", &hints, &res);
> + Py_END_ALLOW_THREADS
> + /* We assume that those thread-unsafe getaddrinfo() versions
> + *are* safe regarding their return value, i.e. that a
> + subsequent call to getaddrinfo() does not destroy the
> + outcome of the first call. */
> + RELEASE_GETADDRINFO_LOCK
> + if (error) {
> + set_gaierror(error);
> + return -1;
> + }
> + switch (res->ai_family) {
> + case AF_INET:
> + siz = 4;
> + break;
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + siz = 16;
> + break;
> +#endif
> + default:
> + freeaddrinfo(res);
> + PyErr_SetString(PyExc_OSError,
> + "unsupported address family");
> + return -1;
> + }
> + if (res->ai_next) {
> + freeaddrinfo(res);
> + PyErr_SetString(PyExc_OSError,
> + "wildcard resolved to multiple address");
> + return -1;
> + }
> + if (res->ai_addrlen < addr_ret_size)
> + addr_ret_size = res->ai_addrlen;
> + memcpy(addr_ret, res->ai_addr, addr_ret_size);
> + freeaddrinfo(res);
> + return siz;
> + }
> + /* special-case broadcast - inet_addr() below can return INADDR_NONE for
> + * this */
> + if (strcmp(name, "255.255.255.255") == 0 ||
> + strcmp(name, "<broadcast>") == 0) {
> + struct sockaddr_in *sin;
> + if (af != AF_INET && af != AF_UNSPEC) {
> + PyErr_SetString(PyExc_OSError,
> + "address family mismatched");
> + return -1;
> + }
> + sin = (struct sockaddr_in *)addr_ret;
> + memset((void *) sin, '\0', sizeof(*sin));
> + sin->sin_family = AF_INET;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin->sin_len = sizeof(*sin);
> +#endif
> + sin->sin_addr.s_addr = INADDR_BROADCAST;
> + return sizeof(sin->sin_addr);
> + }
> +
> + /* avoid a name resolution in case of numeric address */
> +#ifdef HAVE_INET_PTON
> + /* check for an IPv4 address */
> + if (af == AF_UNSPEC || af == AF_INET) {
> + struct sockaddr_in *sin = (struct sockaddr_in *)addr_ret;
> + memset(sin, 0, sizeof(*sin));
> + if (inet_pton(AF_INET, name, &sin->sin_addr) > 0) {
> + sin->sin_family = AF_INET;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin->sin_len = sizeof(*sin);
> +#endif
> + return 4;
> + }
> + }
> +#ifdef ENABLE_IPV6
> + /* check for an IPv6 address - if the address contains a scope ID, we
> + * fall back to getaddrinfo(), which can handle translation from interface
> + * name to interface index */
> + if ((af == AF_UNSPEC || af == AF_INET6) && !strchr(name, '%')) {
> + struct sockaddr_in6 *sin = (struct sockaddr_in6 *)addr_ret;
> + memset(sin, 0, sizeof(*sin));
> + if (inet_pton(AF_INET6, name, &sin->sin6_addr) > 0) {
> + sin->sin6_family = AF_INET6;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin->sin6_len = sizeof(*sin);
> +#endif
> + return 16;
> + }
> + }
> +#endif /* ENABLE_IPV6 */
> +#else /* HAVE_INET_PTON */
> + /* check for an IPv4 address */
> + if (af == AF_INET || af == AF_UNSPEC) {
> + struct sockaddr_in *sin = (struct sockaddr_in *)addr_ret;
> + memset(sin, 0, sizeof(*sin));
> + if ((sin->sin_addr.s_addr = inet_addr(name)) != INADDR_NONE) {
> + sin->sin_family = AF_INET;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin->sin_len = sizeof(*sin);
> +#endif
> + return 4;
> + }
> + }
> +#endif /* HAVE_INET_PTON */
> +
> + /* perform a name resolution */
> + memset(&hints, 0, sizeof(hints));
> + hints.ai_family = af;
> + Py_BEGIN_ALLOW_THREADS
> + ACQUIRE_GETADDRINFO_LOCK
> + error = getaddrinfo(name, NULL, &hints, &res);
> +#if defined(__digital__) && defined(__unix__)
> + if (error == EAI_NONAME && af == AF_UNSPEC) {
> + /* On Tru64 V5.1, numeric-to-addr conversion fails
> + if no address family is given. Assume IPv4 for now.*/
> + hints.ai_family = AF_INET;
> + error = getaddrinfo(name, NULL, &hints, &res);
> + }
> +#endif
> + Py_END_ALLOW_THREADS
> + RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
> + if (error) {
> + set_gaierror(error);
> + return -1;
> + }
> + if (res->ai_addrlen < addr_ret_size)
> + addr_ret_size = res->ai_addrlen;
> + memcpy((char *) addr_ret, res->ai_addr, addr_ret_size);
> + freeaddrinfo(res);
> + switch (addr_ret->sa_family) {
> + case AF_INET:
> + return 4;
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + return 16;
> +#endif
> + default:
> + PyErr_SetString(PyExc_OSError, "unknown address family");
> + return -1;
> + }
> +}
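
setipaddr() skips the resolver whenever inet_pton() accepts the string;
callers can request the same behaviour explicitly with AI_NUMERICHOST (a
sketch; inet_pton() is only exposed where the platform provides it):

    import socket

    # Numeric literals never hit DNS.
    packed = socket.inet_pton(socket.AF_INET, "192.0.2.1")
    socket.getaddrinfo("2001:db8::1", 443,
                       family=socket.AF_INET6,
                       flags=socket.AI_NUMERICHOST)
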
> +
> +
> +/* Create a string object representing an IP address.
> + This is always a string of the form 'dd.dd.dd.dd' (with variable
> + size numbers). */
> +
> +static PyObject *
> +makeipaddr(struct sockaddr *addr, int addrlen)
> +{
> + char buf[NI_MAXHOST];
> + int error;
> +
> + error = getnameinfo(addr, addrlen, buf, sizeof(buf), NULL, 0,
> + NI_NUMERICHOST);
> + if (error) {
> + set_gaierror(error);
> + return NULL;
> + }
> + return PyUnicode_FromString(buf);
> +}
> +
> +
> +#ifdef USE_BLUETOOTH
> +/* Convert a string representation of a Bluetooth address into a numeric
> + address. Returns the length (6), or raises an exception and returns -1 if
> + an error occurred. */
> +
> +static int
> +setbdaddr(const char *name, bdaddr_t *bdaddr)
> +{
> + unsigned int b0, b1, b2, b3, b4, b5;
> + char ch;
> + int n;
> +
> + n = sscanf(name, "%X:%X:%X:%X:%X:%X%c",
> + &b5, &b4, &b3, &b2, &b1, &b0, &ch);
> + if (n == 6 && (b0 | b1 | b2 | b3 | b4 | b5) < 256) {
> + bdaddr->b[0] = b0;
> + bdaddr->b[1] = b1;
> + bdaddr->b[2] = b2;
> + bdaddr->b[3] = b3;
> + bdaddr->b[4] = b4;
> + bdaddr->b[5] = b5;
> + return 6;
> + } else {
> + PyErr_SetString(PyExc_OSError, "bad bluetooth address");
> + return -1;
> + }
> +}
> +
> +/* Create a string representation of the Bluetooth address. This is always a
> + string of the form 'XX:XX:XX:XX:XX:XX' where XX is a two digit hexadecimal
> + value (zero padded if necessary). */
> +
> +static PyObject *
> +makebdaddr(bdaddr_t *bdaddr)
> +{
> + char buf[(6 * 2) + 5 + 1];
> +
> + sprintf(buf, "%02X:%02X:%02X:%02X:%02X:%02X",
> + bdaddr->b[5], bdaddr->b[4], bdaddr->b[3],
> + bdaddr->b[2], bdaddr->b[1], bdaddr->b[0]);
> + return PyUnicode_FromString(buf);
> +}
> +#endif
> +
> +
> +/* Create an object representing the given socket address,
> + suitable for passing it back to bind(), connect() etc.
> + The family field of the sockaddr structure is inspected
> + to determine what kind of address it really is. */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +makesockaddr(SOCKET_T sockfd, struct sockaddr *addr, size_t addrlen, int proto)
> +{
> + if (addrlen == 0) {
> + /* No address -- may be recvfrom() from known socket */
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> +
> + switch (addr->sa_family) {
> +
> + case AF_INET:
> + {
> + struct sockaddr_in *a;
> + PyObject *addrobj = makeipaddr(addr, sizeof(*a));
> + PyObject *ret = NULL;
> + if (addrobj) {
> + a = (struct sockaddr_in *)addr;
> + ret = Py_BuildValue("Oi", addrobj, ntohs(a->sin_port));
> + Py_DECREF(addrobj);
> + }
> + return ret;
> + }
> +
> +#if defined(AF_UNIX)
> + case AF_UNIX:
> + {
> + struct sockaddr_un *a = (struct sockaddr_un *) addr;
> +#ifdef __linux__
> + size_t linuxaddrlen = addrlen - offsetof(struct sockaddr_un, sun_path);
> + if (linuxaddrlen > 0 && a->sun_path[0] == 0) { /* Linux abstract namespace */
> + return PyBytes_FromStringAndSize(a->sun_path, linuxaddrlen);
> + }
> + else
> +#endif /* linux */
> + {
> + /* regular NULL-terminated string */
> + return PyUnicode_DecodeFSDefault(a->sun_path);
> + }
> + }
> +#endif /* AF_UNIX */
> +
> +#if defined(AF_NETLINK)
> + case AF_NETLINK:
> + {
> + struct sockaddr_nl *a = (struct sockaddr_nl *) addr;
> + return Py_BuildValue("II", a->nl_pid, a->nl_groups);
> + }
> +#endif /* AF_NETLINK */
> +
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + {
> + struct sockaddr_in6 *a;
> + PyObject *addrobj = makeipaddr(addr, sizeof(*a));
> + PyObject *ret = NULL;
> + if (addrobj) {
> + a = (struct sockaddr_in6 *)addr;
> + ret = Py_BuildValue("OiII",
> + addrobj,
> + ntohs(a->sin6_port),
> + ntohl(a->sin6_flowinfo),
> + a->sin6_scope_id);
> + Py_DECREF(addrobj);
> + }
> + return ret;
> + }
> +#endif /* ENABLE_IPV6 */
> +
> +#ifdef USE_BLUETOOTH
> + case AF_BLUETOOTH:
> + switch (proto) {
> +
> + case BTPROTO_L2CAP:
> + {
> + struct sockaddr_l2 *a = (struct sockaddr_l2 *) addr;
> + PyObject *addrobj = makebdaddr(&_BT_L2_MEMB(a, bdaddr));
> + PyObject *ret = NULL;
> + if (addrobj) {
> + ret = Py_BuildValue("Oi",
> + addrobj,
> + _BT_L2_MEMB(a, psm));
> + Py_DECREF(addrobj);
> + }
> + return ret;
> + }
> +
> + case BTPROTO_RFCOMM:
> + {
> + struct sockaddr_rc *a = (struct sockaddr_rc *) addr;
> + PyObject *addrobj = makebdaddr(&_BT_RC_MEMB(a, bdaddr));
> + PyObject *ret = NULL;
> + if (addrobj) {
> + ret = Py_BuildValue("Oi",
> + addrobj,
> + _BT_RC_MEMB(a, channel));
> + Py_DECREF(addrobj);
> + }
> + return ret;
> + }
> +
> + case BTPROTO_HCI:
> + {
> + struct sockaddr_hci *a = (struct sockaddr_hci *) addr;
> +#if defined(__NetBSD__) || defined(__DragonFly__)
> + return makebdaddr(&_BT_HCI_MEMB(a, bdaddr));
> +#else /* __NetBSD__ || __DragonFly__ */
> + PyObject *ret = NULL;
> + ret = Py_BuildValue("i", _BT_HCI_MEMB(a, dev));
> + return ret;
> +#endif /* !(__NetBSD__ || __DragonFly__) */
> + }
> +
> +#if !defined(__FreeBSD__)
> + case BTPROTO_SCO:
> + {
> + struct sockaddr_sco *a = (struct sockaddr_sco *) addr;
> + return makebdaddr(&_BT_SCO_MEMB(a, bdaddr));
> + }
> +#endif /* !__FreeBSD__ */
> +
> + default:
> + PyErr_SetString(PyExc_ValueError,
> + "Unknown Bluetooth protocol");
> + return NULL;
> + }
> +#endif /* USE_BLUETOOTH */
> +
> +#if defined(HAVE_NETPACKET_PACKET_H) && defined(SIOCGIFNAME)
> + case AF_PACKET:
> + {
> + struct sockaddr_ll *a = (struct sockaddr_ll *)addr;
> + const char *ifname = "";
> + struct ifreq ifr;
> + /* need to look up interface name given index */
> + if (a->sll_ifindex) {
> + ifr.ifr_ifindex = a->sll_ifindex;
> + if (ioctl(sockfd, SIOCGIFNAME, &ifr) == 0)
> + ifname = ifr.ifr_name;
> + }
> + return Py_BuildValue("shbhy#",
> + ifname,
> + ntohs(a->sll_protocol),
> + a->sll_pkttype,
> + a->sll_hatype,
> + a->sll_addr,
> + a->sll_halen);
> + }
> +#endif /* HAVE_NETPACKET_PACKET_H && SIOCGIFNAME */
> +
> +#ifdef HAVE_LINUX_TIPC_H
> + case AF_TIPC:
> + {
> + struct sockaddr_tipc *a = (struct sockaddr_tipc *) addr;
> + if (a->addrtype == TIPC_ADDR_NAMESEQ) {
> + return Py_BuildValue("IIIII",
> + a->addrtype,
> + a->addr.nameseq.type,
> + a->addr.nameseq.lower,
> + a->addr.nameseq.upper,
> + a->scope);
> + } else if (a->addrtype == TIPC_ADDR_NAME) {
> + return Py_BuildValue("IIIII",
> + a->addrtype,
> + a->addr.name.name.type,
> + a->addr.name.name.instance,
> + a->addr.name.name.instance,
> + a->scope);
> + } else if (a->addrtype == TIPC_ADDR_ID) {
> + return Py_BuildValue("IIIII",
> + a->addrtype,
> + a->addr.id.node,
> + a->addr.id.ref,
> + 0,
> + a->scope);
> + } else {
> + PyErr_SetString(PyExc_ValueError,
> + "Invalid address type");
> + return NULL;
> + }
> + }
> +#endif /* HAVE_LINUX_TIPC_H */
> +
> +#if defined(AF_CAN) && defined(SIOCGIFNAME)
> + case AF_CAN:
> + {
> + struct sockaddr_can *a = (struct sockaddr_can *)addr;
> + const char *ifname = "";
> + struct ifreq ifr;
> + /* need to look up interface name given index */
> + if (a->can_ifindex) {
> + ifr.ifr_ifindex = a->can_ifindex;
> + if (ioctl(sockfd, SIOCGIFNAME, &ifr) == 0)
> + ifname = ifr.ifr_name;
> + }
> +
> + return Py_BuildValue("O&h", PyUnicode_DecodeFSDefault,
> + ifname,
> + a->can_family);
> + }
> +#endif /* AF_CAN && SIOCGIFNAME */
> +
> +#ifdef PF_SYSTEM
> + case PF_SYSTEM:
> + switch(proto) {
> +#ifdef SYSPROTO_CONTROL
> + case SYSPROTO_CONTROL:
> + {
> + struct sockaddr_ctl *a = (struct sockaddr_ctl *)addr;
> + return Py_BuildValue("(II)", a->sc_id, a->sc_unit);
> + }
> +#endif /* SYSPROTO_CONTROL */
> + default:
> + PyErr_SetString(PyExc_ValueError,
> + "Invalid address type");
> + return 0;
> + }
> +#endif /* PF_SYSTEM */
> +
> +#ifdef HAVE_SOCKADDR_ALG
> + case AF_ALG:
> + {
> + struct sockaddr_alg *a = (struct sockaddr_alg *)addr;
> + return Py_BuildValue("s#s#HH",
> + a->salg_type,
> + strnlen((const char*)a->salg_type,
> + sizeof(a->salg_type)),
> + a->salg_name,
> + strnlen((const char*)a->salg_name,
> + sizeof(a->salg_name)),
> + a->salg_feat,
> + a->salg_mask);
> + }
> +#endif /* HAVE_SOCKADDR_ALG */
> +
> + /* More cases here... */
> +
> + default:
> + /* If we don't know the address family, don't raise an
> + exception -- return it as an (int, bytes) tuple. */
> + return Py_BuildValue("iy#",
> + addr->sa_family,
> + addr->sa_data,
> + sizeof(addr->sa_data));
> +
> + }
> +}
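
makesockaddr() is what shapes the address tuples Python code gets back from
accept(), getsockname(), getpeername() and recvfrom(); for the two most
common families the shapes are (a sketch using ephemeral ports):

    import socket

    s4 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s4.bind(("127.0.0.1", 0))
    s4.getsockname()   # -> ('127.0.0.1', <port>)                2-tuple

    s6 = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s6.bind(("::1", 0))
    s6.getsockname()   # -> ('::1', <port>, flowinfo, scope_id)  4-tuple

    s4.close()
    s6.close()
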
> +
> +/* Helper for getsockaddrarg: bypass IDNA for ASCII-only host names
> + (in particular, numeric IP addresses). */
> +struct maybe_idna {
> + PyObject *obj;
> + char *buf;
> +};
> +
> +static void
> +idna_cleanup(struct maybe_idna *data)
> +{
> + Py_CLEAR(data->obj);
> +}
> +
> +static int
> +idna_converter(PyObject *obj, struct maybe_idna *data)
> +{
> + size_t len;
> + PyObject *obj2;
> + if (obj == NULL) {
> + idna_cleanup(data);
> + return 1;
> + }
> + data->obj = NULL;
> + len = -1;
> + if (PyBytes_Check(obj)) {
> + data->buf = PyBytes_AsString(obj);
> + len = PyBytes_Size(obj);
> + }
> + else if (PyByteArray_Check(obj)) {
> + data->buf = PyByteArray_AsString(obj);
> + len = PyByteArray_Size(obj);
> + }
> + else if (PyUnicode_Check(obj)) {
> + if (PyUnicode_READY(obj) == -1) {
> + return 0;
> + }
> + if (PyUnicode_IS_COMPACT_ASCII(obj)) {
> + data->buf = PyUnicode_DATA(obj);
> + len = PyUnicode_GET_LENGTH(obj);
> + }
> + else {
> + obj2 = PyUnicode_AsEncodedString(obj, "idna", NULL);
> + if (!obj2) {
> + PyErr_SetString(PyExc_TypeError, "encoding of hostname failed");
> + return 0;
> + }
> + assert(PyBytes_Check(obj2));
> + data->obj = obj2;
> + data->buf = PyBytes_AS_STRING(obj2);
> + len = PyBytes_GET_SIZE(obj2);
> + }
> + }
> + else {
> + PyErr_Format(PyExc_TypeError, "str, bytes or bytearray expected, not %s",
> + obj->ob_type->tp_name);
> + return 0;
> + }
> + if (strlen(data->buf) != len) {
> + Py_CLEAR(data->obj);
> + PyErr_SetString(PyExc_TypeError, "host name must not contain null character");
> + return 0;
> + }
> + return Py_CLEANUP_SUPPORTED;
> +}
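
idna_converter() only falls back to the IDNA codec for host names that are
not plain ASCII; the effect is visible directly from Python (the host names
are just examples):

    # ASCII names pass through unchanged; non-ASCII names are encoded
    # with the "idna" codec before resolution.
    "example.org".encode("idna")           # b'example.org'
    "b\u00fccher.example".encode("idna")   # b'xn--bcher-kva.example'
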
> +
> +/* Parse a socket address argument according to the socket object's
> + address family. Return 1 if the address was in the proper format,
> + 0 if not. The address is returned through addr_ret, its length
> + through len_ret. */
> +
> +static int
> +getsockaddrarg(PySocketSockObject *s, PyObject *args,
> + struct sockaddr *addr_ret, int *len_ret)
> +{
> + switch (s->sock_family) {
> +
> +#if defined(AF_UNIX)
> + case AF_UNIX:
> + {
> + struct sockaddr_un* addr;
> + Py_buffer path;
> + int retval = 0;
> +
> + /* PEP 383. Not using PyUnicode_FSConverter since we need to
> + allow embedded nulls on Linux. */
> + if (PyUnicode_Check(args)) {
> + if ((args = PyUnicode_EncodeFSDefault(args)) == NULL)
> + return 0;
> + }
> + else
> + Py_INCREF(args);
> + if (!PyArg_Parse(args, "y*", &path)) {
> + Py_DECREF(args);
> + return retval;
> + }
> + assert(path.len >= 0);
> +
> + addr = (struct sockaddr_un*)addr_ret;
> +#ifdef __linux__
> + if (path.len > 0 && *(const char *)path.buf == 0) {
> + /* Linux abstract namespace extension */
> + if ((size_t)path.len > sizeof addr->sun_path) {
> + PyErr_SetString(PyExc_OSError,
> + "AF_UNIX path too long");
> + goto unix_out;
> + }
> + }
> + else
> +#endif /* linux */
> + {
> + /* regular NULL-terminated string */
> + if ((size_t)path.len >= sizeof addr->sun_path) {
> + PyErr_SetString(PyExc_OSError,
> + "AF_UNIX path too long");
> + goto unix_out;
> + }
> + addr->sun_path[path.len] = 0;
> + }
> + addr->sun_family = s->sock_family;
> + memcpy(addr->sun_path, path.buf, path.len);
> + *len_ret = path.len + offsetof(struct sockaddr_un, sun_path);
> + retval = 1;
> + unix_out:
> + PyBuffer_Release(&path);
> + Py_DECREF(args);
> + return retval;
> + }
> +#endif /* AF_UNIX */
> +
> +#if defined(AF_NETLINK)
> + case AF_NETLINK:
> + {
> + struct sockaddr_nl* addr;
> + int pid, groups;
> + addr = (struct sockaddr_nl *)addr_ret;
> + if (!PyTuple_Check(args)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "getsockaddrarg: "
> + "AF_NETLINK address must be tuple, not %.500s",
> + Py_TYPE(args)->tp_name);
> + return 0;
> + }
> + if (!PyArg_ParseTuple(args, "II:getsockaddrarg", &pid, &groups))
> + return 0;
> + addr->nl_family = AF_NETLINK;
> + addr->nl_pid = pid;
> + addr->nl_groups = groups;
> + *len_ret = sizeof(*addr);
> + return 1;
> + }
> +#endif /* AF_NETLINK */
> +
> +#ifdef AF_RDS
> + case AF_RDS:
> + /* RDS sockets use sockaddr_in: fall-through */
> +#endif /* AF_RDS */
> +
> + case AF_INET:
> + {
> + struct sockaddr_in* addr;
> + struct maybe_idna host = {NULL, NULL};
> + int port, result;
> + if (!PyTuple_Check(args)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "getsockaddrarg: "
> + "AF_INET address must be tuple, not %.500s",
> + Py_TYPE(args)->tp_name);
> + return 0;
> + }
> + if (!PyArg_ParseTuple(args, "O&i:getsockaddrarg",
> + idna_converter, &host, &port))
> + return 0;
> + addr=(struct sockaddr_in*)addr_ret;
> + result = setipaddr(host.buf, (struct sockaddr *)addr,
> + sizeof(*addr), AF_INET);
> + idna_cleanup(&host);
> + if (result < 0)
> + return 0;
> + if (port < 0 || port > 0xffff) {
> + PyErr_SetString(
> + PyExc_OverflowError,
> + "getsockaddrarg: port must be 0-65535.");
> + return 0;
> + }
> + addr->sin_family = AF_INET;
> + addr->sin_port = htons((short)port);
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> +
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + {
> + struct sockaddr_in6* addr;
> + struct maybe_idna host = {NULL, NULL};
> + int port, result;
> + unsigned int flowinfo, scope_id;
> + flowinfo = scope_id = 0;
> + if (!PyTuple_Check(args)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "getsockaddrarg: "
> + "AF_INET6 address must be tuple, not %.500s",
> + Py_TYPE(args)->tp_name);
> + return 0;
> + }
> + if (!PyArg_ParseTuple(args, "O&i|II",
> + idna_converter, &host, &port, &flowinfo,
> + &scope_id)) {
> + return 0;
> + }
> + addr = (struct sockaddr_in6*)addr_ret;
> + result = setipaddr(host.buf, (struct sockaddr *)addr,
> + sizeof(*addr), AF_INET6);
> + idna_cleanup(&host);
> + if (result < 0)
> + return 0;
> + if (port < 0 || port > 0xffff) {
> + PyErr_SetString(
> + PyExc_OverflowError,
> + "getsockaddrarg: port must be 0-65535.");
> + return 0;
> + }
> + if (flowinfo > 0xfffff) {
> + PyErr_SetString(
> + PyExc_OverflowError,
> + "getsockaddrarg: flowinfo must be 0-1048575.");
> + return 0;
> + }
> + addr->sin6_family = s->sock_family;
> + addr->sin6_port = htons((short)port);
> + addr->sin6_flowinfo = htonl(flowinfo);
> + addr->sin6_scope_id = scope_id;
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> +#endif /* ENABLE_IPV6 */
> +
> +#ifdef USE_BLUETOOTH
> + case AF_BLUETOOTH:
> + {
> + switch (s->sock_proto) {
> + case BTPROTO_L2CAP:
> + {
> + struct sockaddr_l2 *addr;
> + const char *straddr;
> +
> + addr = (struct sockaddr_l2 *)addr_ret;
> + memset(addr, 0, sizeof(struct sockaddr_l2));
> + _BT_L2_MEMB(addr, family) = AF_BLUETOOTH;
> + if (!PyArg_ParseTuple(args, "si", &straddr,
> + &_BT_L2_MEMB(addr, psm))) {
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
> + "wrong format");
> + return 0;
> + }
> + if (setbdaddr(straddr, &_BT_L2_MEMB(addr, bdaddr)) < 0)
> + return 0;
> +
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> + case BTPROTO_RFCOMM:
> + {
> + struct sockaddr_rc *addr;
> + const char *straddr;
> +
> + addr = (struct sockaddr_rc *)addr_ret;
> + _BT_RC_MEMB(addr, family) = AF_BLUETOOTH;
> + if (!PyArg_ParseTuple(args, "si", &straddr,
> + &_BT_RC_MEMB(addr, channel))) {
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
> + "wrong format");
> + return 0;
> + }
> + if (setbdaddr(straddr, &_BT_RC_MEMB(addr, bdaddr)) < 0)
> + return 0;
> +
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> + case BTPROTO_HCI:
> + {
> + struct sockaddr_hci *addr = (struct sockaddr_hci *)addr_ret;
> +#if defined(__NetBSD__) || defined(__DragonFly__)
> + const char *straddr;
> + _BT_HCI_MEMB(addr, family) = AF_BLUETOOTH;
> + if (!PyBytes_Check(args)) {
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
> + "wrong format");
> + return 0;
> + }
> + straddr = PyBytes_AS_STRING(args);
> + if (setbdaddr(straddr, &_BT_HCI_MEMB(addr, bdaddr)) < 0)
> + return 0;
> +#else /* __NetBSD__ || __DragonFly__ */
> + _BT_HCI_MEMB(addr, family) = AF_BLUETOOTH;
> + if (!PyArg_ParseTuple(args, "i", &_BT_HCI_MEMB(addr, dev))) {
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
> + "wrong format");
> + return 0;
> + }
> +#endif /* !(__NetBSD__ || __DragonFly__) */
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> +#if !defined(__FreeBSD__)
> + case BTPROTO_SCO:
> + {
> + struct sockaddr_sco *addr;
> + const char *straddr;
> +
> + addr = (struct sockaddr_sco *)addr_ret;
> + _BT_SCO_MEMB(addr, family) = AF_BLUETOOTH;
> + if (!PyBytes_Check(args)) {
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: "
> + "wrong format");
> + return 0;
> + }
> + straddr = PyBytes_AS_STRING(args);
> + if (setbdaddr(straddr, &_BT_SCO_MEMB(addr, bdaddr)) < 0)
> + return 0;
> +
> + *len_ret = sizeof *addr;
> + return 1;
> + }
> +#endif /* !__FreeBSD__ */
> + default:
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: unknown Bluetooth protocol");
> + return 0;
> + }
> + }
> +#endif /* USE_BLUETOOTH */
> +
> +#if defined(HAVE_NETPACKET_PACKET_H) && defined(SIOCGIFINDEX)
> + case AF_PACKET:
> + {
> + struct sockaddr_ll* addr;
> + struct ifreq ifr;
> + const char *interfaceName;
> + int protoNumber;
> + int hatype = 0;
> + int pkttype = PACKET_HOST;
> + Py_buffer haddr = {NULL, NULL};
> +
> + if (!PyTuple_Check(args)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "getsockaddrarg: "
> + "AF_PACKET address must be tuple, not %.500s",
> + Py_TYPE(args)->tp_name);
> + return 0;
> + }
> + if (!PyArg_ParseTuple(args, "si|iiy*", &interfaceName,
> + &protoNumber, &pkttype, &hatype,
> + &haddr))
> + return 0;
> + strncpy(ifr.ifr_name, interfaceName, sizeof(ifr.ifr_name));
> + ifr.ifr_name[(sizeof(ifr.ifr_name))-1] = '\0';
> + if (ioctl(s->sock_fd, SIOCGIFINDEX, &ifr) < 0) {
> + s->errorhandler();
> + PyBuffer_Release(&haddr);
> + return 0;
> + }
> + if (haddr.buf && haddr.len > 8) {
> + PyErr_SetString(PyExc_ValueError,
> + "Hardware address must be 8 bytes or less");
> + PyBuffer_Release(&haddr);
> + return 0;
> + }
> + if (protoNumber < 0 || protoNumber > 0xffff) {
> + PyErr_SetString(
> + PyExc_OverflowError,
> + "getsockaddrarg: proto must be 0-65535.");
> + PyBuffer_Release(&haddr);
> + return 0;
> + }
> + addr = (struct sockaddr_ll*)addr_ret;
> + addr->sll_family = AF_PACKET;
> + addr->sll_protocol = htons((short)protoNumber);
> + addr->sll_ifindex = ifr.ifr_ifindex;
> + addr->sll_pkttype = pkttype;
> + addr->sll_hatype = hatype;
> + if (haddr.buf) {
> + memcpy(&addr->sll_addr, haddr.buf, haddr.len);
> + addr->sll_halen = haddr.len;
> + }
> + else
> + addr->sll_halen = 0;
> + *len_ret = sizeof *addr;
> + PyBuffer_Release(&haddr);
> + return 1;
> + }
> +#endif /* HAVE_NETPACKET_PACKET_H && SIOCGIFINDEX */
> +
> +#ifdef HAVE_LINUX_TIPC_H
> + case AF_TIPC:
> + {
> + unsigned int atype, v1, v2, v3;
> + unsigned int scope = TIPC_CLUSTER_SCOPE;
> + struct sockaddr_tipc *addr;
> +
> + if (!PyTuple_Check(args)) {
> + PyErr_Format(
> + PyExc_TypeError,
> + "getsockaddrarg: "
> + "AF_TIPC address must be tuple, not %.500s",
> + Py_TYPE(args)->tp_name);
> + return 0;
> + }
> +
> + if (!PyArg_ParseTuple(args,
> + "IIII|I;Invalid TIPC address format",
> + &atype, &v1, &v2, &v3, &scope))
> + return 0;
> +
> + addr = (struct sockaddr_tipc *) addr_ret;
> + memset(addr, 0, sizeof(struct sockaddr_tipc));
> +
> + addr->family = AF_TIPC;
> + addr->scope = scope;
> + addr->addrtype = atype;
> +
> + if (atype == TIPC_ADDR_NAMESEQ) {
> + addr->addr.nameseq.type = v1;
> + addr->addr.nameseq.lower = v2;
> + addr->addr.nameseq.upper = v3;
> + } else if (atype == TIPC_ADDR_NAME) {
> + addr->addr.name.name.type = v1;
> + addr->addr.name.name.instance = v2;
> + } else if (atype == TIPC_ADDR_ID) {
> + addr->addr.id.node = v1;
> + addr->addr.id.ref = v2;
> + } else {
> + /* Shouldn't happen */
> + PyErr_SetString(PyExc_TypeError, "Invalid address type");
> + return 0;
> + }
> +
> + *len_ret = sizeof(*addr);
> +
> + return 1;
> + }
> +#endif /* HAVE_LINUX_TIPC_H */
> +
> +#if defined(AF_CAN) && defined(CAN_RAW) && defined(CAN_BCM) && defined(SIOCGIFINDEX)
> + case AF_CAN:
> + switch (s->sock_proto) {
> + case CAN_RAW:
> + /* fall-through */
> + case CAN_BCM:
> + {
> + struct sockaddr_can *addr;
> + PyObject *interfaceName;
> + struct ifreq ifr;
> + Py_ssize_t len;
> +
> + addr = (struct sockaddr_can *)addr_ret;
> +
> + if (!PyArg_ParseTuple(args, "O&", PyUnicode_FSConverter,
> + &interfaceName))
> + return 0;
> +
> + len = PyBytes_GET_SIZE(interfaceName);
> +
> + if (len == 0) {
> + ifr.ifr_ifindex = 0;
> + } else if ((size_t)len < sizeof(ifr.ifr_name)) {
> + strncpy(ifr.ifr_name, PyBytes_AS_STRING(interfaceName), sizeof(ifr.ifr_name));
> + ifr.ifr_name[(sizeof(ifr.ifr_name))-1] = '\0';
> + if (ioctl(s->sock_fd, SIOCGIFINDEX, &ifr) < 0) {
> + s->errorhandler();
> + Py_DECREF(interfaceName);
> + return 0;
> + }
> + } else {
> + PyErr_SetString(PyExc_OSError,
> + "AF_CAN interface name too long");
> + Py_DECREF(interfaceName);
> + return 0;
> + }
> +
> + addr->can_family = AF_CAN;
> + addr->can_ifindex = ifr.ifr_ifindex;
> +
> + *len_ret = sizeof(*addr);
> + Py_DECREF(interfaceName);
> + return 1;
> + }
> + default:
> + PyErr_SetString(PyExc_OSError,
> + "getsockaddrarg: unsupported CAN protocol");
> + return 0;
> + }
> +#endif /* AF_CAN && CAN_RAW && CAN_BCM && SIOCGIFINDEX */
> +
> +#ifdef PF_SYSTEM
> + case PF_SYSTEM:
> + switch (s->sock_proto) {
> +#ifdef SYSPROTO_CONTROL
> + case SYSPROTO_CONTROL:
> + {
> + struct sockaddr_ctl *addr;
> +
> + addr = (struct sockaddr_ctl *)addr_ret;
> + addr->sc_family = AF_SYSTEM;
> + addr->ss_sysaddr = AF_SYS_CONTROL;
> +
> + if (PyUnicode_Check(args)) {
> + struct ctl_info info;
> + PyObject *ctl_name;
> +
> + if (!PyArg_Parse(args, "O&",
> + PyUnicode_FSConverter, &ctl_name)) {
> + return 0;
> + }
> +
> + if (PyBytes_GET_SIZE(ctl_name) > (Py_ssize_t)sizeof(info.ctl_name)) {
> + PyErr_SetString(PyExc_ValueError,
> + "provided string is too long");
> + Py_DECREF(ctl_name);
> + return 0;
> + }
> + strncpy(info.ctl_name, PyBytes_AS_STRING(ctl_name),
> + sizeof(info.ctl_name));
> + Py_DECREF(ctl_name);
> +
> + if (ioctl(s->sock_fd, CTLIOCGINFO, &info)) {
> + PyErr_SetString(PyExc_OSError,
> + "cannot find kernel control with provided name");
> + return 0;
> + }
> +
> + addr->sc_id = info.ctl_id;
> + addr->sc_unit = 0;
> + } else if (!PyArg_ParseTuple(args, "II",
> + &(addr->sc_id), &(addr->sc_unit))) {
> + PyErr_SetString(PyExc_TypeError, "getsockaddrarg: "
> + "expected str or tuple of two ints");
> +
> + return 0;
> + }
> +
> + *len_ret = sizeof(*addr);
> + return 1;
> + }
> +#endif /* SYSPROTO_CONTROL */
> + default:
> + PyErr_SetString(PyExc_OSError,
> + "getsockaddrarg: unsupported PF_SYSTEM protocol");
> + return 0;
> + }
> +#endif /* PF_SYSTEM */
> +#ifdef HAVE_SOCKADDR_ALG
> + case AF_ALG:
> + {
> + struct sockaddr_alg *sa;
> + const char *type;
> + const char *name;
> + sa = (struct sockaddr_alg *)addr_ret;
> +
> + memset(sa, 0, sizeof(*sa));
> + sa->salg_family = AF_ALG;
> +
> + if (!PyArg_ParseTuple(args, "ss|HH:getsockaddrarg",
> + &type, &name, &sa->salg_feat, &sa->salg_mask))
> + {
> + return 0;
> + }
> + /* sockaddr_alg has fixed-size char arrays for type and name;
> + * both must be NULL terminated.
> + */
> + if (strlen(type) >= sizeof(sa->salg_type)) {
> + PyErr_SetString(PyExc_ValueError, "AF_ALG type too long.");
> + return 0;
> + }
> + strncpy((char *)sa->salg_type, type, sizeof(sa->salg_type));
> + if (strlen(name) >= sizeof(sa->salg_name)) {
> + PyErr_SetString(PyExc_ValueError, "AF_ALG name too long.");
> + return 0;
> + }
> + strncpy((char *)sa->salg_name, name, sizeof(sa->salg_name));
> +
> + *len_ret = sizeof(*sa);
> + return 1;
> + }
> +#endif /* HAVE_SOCKADDR_ALG */
> +
> + /* More cases here... */
> +
> + default:
> + PyErr_SetString(PyExc_OSError, "getsockaddrarg: bad family");
> + return 0;
> +
> + }
> +}
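
getsockaddrarg() works in the opposite direction: it parses the address
arguments passed to bind(), connect() and sendto(). The per-family formats
it accepts look like this from Python (addresses and the AF_UNIX path are
illustrative):

    import socket

    # AF_INET: (host, port)
    socket.socket(socket.AF_INET).bind(("127.0.0.1", 0))

    # AF_INET6: (host, port[, flowinfo[, scope_id]])
    socket.socket(socket.AF_INET6).bind(("::1", 0, 0, 0))

    # AF_UNIX: a filesystem path (on Linux, b"\0name" selects the
    # abstract namespace)
    if hasattr(socket, "AF_UNIX"):
        socket.socket(socket.AF_UNIX, socket.SOCK_STREAM).bind("/tmp/demo.sock")
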
> +
> +
> +/* Get the address length according to the socket object's address family.
> + Return 1 if the family is known, 0 otherwise. The length is returned
> + through len_ret. */
> +
> +static int
> +getsockaddrlen(PySocketSockObject *s, socklen_t *len_ret)
> +{
> + switch (s->sock_family) {
> +
> +#if defined(AF_UNIX)
> + case AF_UNIX:
> + {
> + *len_ret = sizeof (struct sockaddr_un);
> + return 1;
> + }
> +#endif /* AF_UNIX */
> +
> +#if defined(AF_NETLINK)
> + case AF_NETLINK:
> + {
> + *len_ret = sizeof (struct sockaddr_nl);
> + return 1;
> + }
> +#endif /* AF_NETLINK */
> +
> +#ifdef AF_RDS
> + case AF_RDS:
> + /* RDS sockets use sockaddr_in: fall-through */
> +#endif /* AF_RDS */
> +
> + case AF_INET:
> + {
> + *len_ret = sizeof (struct sockaddr_in);
> + return 1;
> + }
> +
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + {
> + *len_ret = sizeof (struct sockaddr_in6);
> + return 1;
> + }
> +#endif /* ENABLE_IPV6 */
> +
> +#ifdef USE_BLUETOOTH
> + case AF_BLUETOOTH:
> + {
> + switch(s->sock_proto)
> + {
> +
> + case BTPROTO_L2CAP:
> + *len_ret = sizeof (struct sockaddr_l2);
> + return 1;
> + case BTPROTO_RFCOMM:
> + *len_ret = sizeof (struct sockaddr_rc);
> + return 1;
> + case BTPROTO_HCI:
> + *len_ret = sizeof (struct sockaddr_hci);
> + return 1;
> +#if !defined(__FreeBSD__)
> + case BTPROTO_SCO:
> + *len_ret = sizeof (struct sockaddr_sco);
> + return 1;
> +#endif /* !__FreeBSD__ */
> + default:
> + PyErr_SetString(PyExc_OSError, "getsockaddrlen: "
> + "unknown BT protocol");
> + return 0;
> +
> + }
> + }
> +#endif /* USE_BLUETOOTH */
> +
> +#ifdef HAVE_NETPACKET_PACKET_H
> + case AF_PACKET:
> + {
> + *len_ret = sizeof (struct sockaddr_ll);
> + return 1;
> + }
> +#endif /* HAVE_NETPACKET_PACKET_H */
> +
> +#ifdef HAVE_LINUX_TIPC_H
> + case AF_TIPC:
> + {
> + *len_ret = sizeof (struct sockaddr_tipc);
> + return 1;
> + }
> +#endif /* HAVE_LINUX_TIPC_H */
> +
> +#ifdef AF_CAN
> + case AF_CAN:
> + {
> + *len_ret = sizeof (struct sockaddr_can);
> + return 1;
> + }
> +#endif /* AF_CAN */
> +
> +#ifdef PF_SYSTEM
> + case PF_SYSTEM:
> + switch(s->sock_proto) {
> +#ifdef SYSPROTO_CONTROL
> + case SYSPROTO_CONTROL:
> + *len_ret = sizeof (struct sockaddr_ctl);
> + return 1;
> +#endif /* SYSPROTO_CONTROL */
> + default:
> + PyErr_SetString(PyExc_OSError, "getsockaddrlen: "
> + "unknown PF_SYSTEM protocol");
> + return 0;
> + }
> +#endif /* PF_SYSTEM */
> +#ifdef HAVE_SOCKADDR_ALG
> + case AF_ALG:
> + {
> + *len_ret = sizeof (struct sockaddr_alg);
> + return 1;
> + }
> +#endif /* HAVE_SOCKADDR_ALG */
> +
> + /* More cases here... */
> +
> + default:
> + PyErr_SetString(PyExc_OSError, "getsockaddrlen: bad family");
> + return 0;
> +
> + }
> +}
> +
> +
> +/* Support functions for the sendmsg() and recvmsg[_into]() methods.
> + Currently, these methods are only compiled if the RFC 2292/3542
> + CMSG_LEN() macro is available. Older systems seem to have used
> + sizeof(struct cmsghdr) + (length) where CMSG_LEN() is used now, so
> + it may be possible to define CMSG_LEN() that way if it's not
> + provided. Some architectures might need extra padding after the
> + cmsghdr, however, and CMSG_LEN() would have to take account of
> + this. */
> +#ifdef CMSG_LEN
> +/* If length is in range, set *result to CMSG_LEN(length) and return
> + true; otherwise, return false. */
> +static int
> +get_CMSG_LEN(size_t length, size_t *result)
> +{
> + size_t tmp;
> +
> + if (length > (SOCKLEN_T_LIMIT - CMSG_LEN(0)))
> + return 0;
> + tmp = CMSG_LEN(length);
> + if (tmp > SOCKLEN_T_LIMIT || tmp < length)
> + return 0;
> + *result = tmp;
> + return 1;
> +}
> +
> +#ifdef CMSG_SPACE
> +/* If length is in range, set *result to CMSG_SPACE(length) and return
> + true; otherwise, return false. */
> +static int
> +get_CMSG_SPACE(size_t length, size_t *result)
> +{
> + size_t tmp;
> +
> + /* Use CMSG_SPACE(1) here in order to take account of the padding
> + necessary before *and* after the data. */
> + if (length > (SOCKLEN_T_LIMIT - CMSG_SPACE(1)))
> + return 0;
> + tmp = CMSG_SPACE(length);
> + if (tmp > SOCKLEN_T_LIMIT || tmp < length)
> + return 0;
> + *result = tmp;
> + return 1;
> +}
> +#endif
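
These two helpers mirror the Python-level socket.CMSG_LEN() and socket.CMSG_SPACE()
functions used to size ancillary-data buffers. A minimal sketch of sizing a buffer for
one SCM_RIGHTS file descriptor, assuming the target build exposes these functions (they
only exist where CMSG_LEN/CMSG_SPACE are defined):

    import socket
    import struct

    # Room for exactly one C int (one file descriptor) of ancillary data.
    one_fd = struct.calcsize("i")
    print(socket.CMSG_LEN(one_fd))    # header + data, no trailing padding
    print(socket.CMSG_SPACE(one_fd))  # header + data + alignment padding
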
> +
> +/* Return true iff msg->msg_controllen is valid, cmsgh is a valid
> + pointer in msg->msg_control with at least "space" bytes after it,
> + and its cmsg_len member is inside the buffer. */
> +static int
> +cmsg_min_space(struct msghdr *msg, struct cmsghdr *cmsgh, size_t space)
> +{
> + size_t cmsg_offset;
> + static const size_t cmsg_len_end = (offsetof(struct cmsghdr, cmsg_len) +
> + sizeof(cmsgh->cmsg_len));
> +
> + /* Note that POSIX allows msg_controllen to be of signed type. */
> + if (cmsgh == NULL || msg->msg_control == NULL)
> + return 0;
> + /* Note that POSIX allows msg_controllen to be of a signed type. This is
> + annoying under OS X as it's unsigned there and so it triggers a
> + tautological comparison warning under Clang when compared against 0.
> + Since the check is valid on other platforms, silence the warning under
> + Clang. */
> + #ifdef __clang__
> + #pragma clang diagnostic push
> + #pragma clang diagnostic ignored "-Wtautological-compare"
> + #endif
> + #if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
> + #pragma GCC diagnostic push
> + #pragma GCC diagnostic ignored "-Wtype-limits"
> + #endif
> + if (msg->msg_controllen < 0)
> + return 0;
> + #if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
> + #pragma GCC diagnostic pop
> + #endif
> + #ifdef __clang__
> + #pragma clang diagnostic pop
> + #endif
> + if (space < cmsg_len_end)
> + space = cmsg_len_end;
> + cmsg_offset = (char *)cmsgh - (char *)msg->msg_control;
> + return (cmsg_offset <= (size_t)-1 - space &&
> + cmsg_offset + space <= msg->msg_controllen);
> +}
> +
> +/* If pointer CMSG_DATA(cmsgh) is in buffer msg->msg_control, set
> + *space to number of bytes following it in the buffer and return
> + true; otherwise, return false. Assumes cmsgh, msg->msg_control and
> + msg->msg_controllen are valid. */
> +static int
> +get_cmsg_data_space(struct msghdr *msg, struct cmsghdr *cmsgh, size_t *space)
> +{
> + size_t data_offset;
> + char *data_ptr;
> +
> + if ((data_ptr = (char *)CMSG_DATA(cmsgh)) == NULL)
> + return 0;
> + data_offset = data_ptr - (char *)msg->msg_control;
> + if (data_offset > msg->msg_controllen)
> + return 0;
> + *space = msg->msg_controllen - data_offset;
> + return 1;
> +}
> +
> +/* If cmsgh is invalid or not contained in the buffer pointed to by
> + msg->msg_control, return -1. If cmsgh is valid and its associated
> + data is entirely contained in the buffer, set *data_len to the
> + length of the associated data and return 0. If only part of the
> + associated data is contained in the buffer but cmsgh is otherwise
> + valid, set *data_len to the length contained in the buffer and
> + return 1. */
> +static int
> +get_cmsg_data_len(struct msghdr *msg, struct cmsghdr *cmsgh, size_t *data_len)
> +{
> + size_t space, cmsg_data_len;
> +
> + if (!cmsg_min_space(msg, cmsgh, CMSG_LEN(0)) ||
> + cmsgh->cmsg_len < CMSG_LEN(0))
> + return -1;
> + cmsg_data_len = cmsgh->cmsg_len - CMSG_LEN(0);
> + if (!get_cmsg_data_space(msg, cmsgh, &space))
> + return -1;
> + if (space >= cmsg_data_len) {
> + *data_len = cmsg_data_len;
> + return 0;
> + }
> + *data_len = space;
> + return 1;
> +}
> +#endif /* CMSG_LEN */
> +
> +
> +struct sock_accept {
> + socklen_t *addrlen;
> + sock_addr_t *addrbuf;
> + SOCKET_T result;
> +};
> +
> +#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
> +/* accept4() is available on Linux 2.6.28+ and glibc 2.10 */
> +static int accept4_works = -1;
> +#endif
> +
> +static int
> +sock_accept_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_accept *ctx = data;
> + struct sockaddr *addr = SAS2SA(ctx->addrbuf);
> + socklen_t *paddrlen = ctx->addrlen;
> +#ifdef HAVE_SOCKADDR_ALG
> + /* AF_ALG does not support accept() with addr and raises
> + * ECONNABORTED instead. */
> + if (s->sock_family == AF_ALG) {
> + addr = NULL;
> + paddrlen = NULL;
> + *ctx->addrlen = 0;
> + }
> +#endif
> +
> +#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
> + if (accept4_works != 0) {
> + ctx->result = accept4(s->sock_fd, addr, paddrlen,
> + SOCK_CLOEXEC);
> + if (ctx->result == INVALID_SOCKET && accept4_works == -1) {
> + /* On Linux older than 2.6.28, accept4() fails with ENOSYS */
> + accept4_works = (errno != ENOSYS);
> + }
> + }
> + if (accept4_works == 0)
> + ctx->result = accept(s->sock_fd, addr, paddrlen);
> +#else
> + ctx->result = accept(s->sock_fd, addr, paddrlen);
> +#endif
> +
> +#ifdef MS_WINDOWS
> + return (ctx->result != INVALID_SOCKET);
> +#else
> + return (ctx->result >= 0);
> +#endif
> +}
> +
> +/* s._accept() -> (fd, address) */
> +
> +static PyObject *
> +sock_accept(PySocketSockObject *s)
> +{
> + sock_addr_t addrbuf;
> + SOCKET_T newfd;
> + socklen_t addrlen;
> + PyObject *sock = NULL;
> + PyObject *addr = NULL;
> + PyObject *res = NULL;
> + struct sock_accept ctx;
> +
> + if (!getsockaddrlen(s, &addrlen))
> + return NULL;
> + memset(&addrbuf, 0, addrlen);
> +
> + if (!IS_SELECTABLE(s))
> + return select_error();
> +
> + ctx.addrlen = &addrlen;
> + ctx.addrbuf = &addrbuf;
> + if (sock_call(s, 0, sock_accept_impl, &ctx) < 0)
> + return NULL;
> + newfd = ctx.result;
> +
> +#ifdef MS_WINDOWS
> + if (!SetHandleInformation((HANDLE)newfd, HANDLE_FLAG_INHERIT, 0)) {
> + PyErr_SetFromWindowsErr(0);
> + SOCKETCLOSE(newfd);
> + goto finally;
> + }
> +#else
> +
> +#if defined(HAVE_ACCEPT4) && defined(SOCK_CLOEXEC)
> + if (!accept4_works)
> +#endif
> + {
> + if (_Py_set_inheritable(newfd, 0, NULL) < 0) {
> + SOCKETCLOSE(newfd);
> + goto finally;
> + }
> + }
> +#endif
> +
> + sock = PyLong_FromSocket_t(newfd);
> + if (sock == NULL) {
> + SOCKETCLOSE(newfd);
> + goto finally;
> + }
> +
> + addr = makesockaddr(s->sock_fd, SAS2SA(&addrbuf),
> + addrlen, s->sock_proto);
> + if (addr == NULL)
> + goto finally;
> +
> + res = PyTuple_Pack(2, sock, addr);
> +
> +finally:
> + Py_XDECREF(sock);
> + Py_XDECREF(addr);
> + return res;
> +}
> +
> +PyDoc_STRVAR(accept_doc,
> +"_accept() -> (integer, address info)\n\
> +\n\
> +Wait for an incoming connection. Return a new socket file descriptor\n\
> +representing the connection, and the address of the client.\n\
> +For IP sockets, the address info is a pair (hostaddr, port).");
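
_accept() is the low-level primitive that socket.accept() in Lib/socket.py wraps. A
minimal sketch of the Python-level usage, assuming the build supports AF_INET stream
sockets (the address and port below are arbitrary examples):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # port 0: let the stack pick a free port
    srv.listen()
    conn, peer = srv.accept()    # blocks until a client connects
    conn.close()
    srv.close()
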
> +
> +/* s.setblocking(flag) method. Argument:
> + False -- non-blocking mode; same as settimeout(0)
> + True -- blocking mode; same as settimeout(None)
> +*/
> +
> +static PyObject *
> +sock_setblocking(PySocketSockObject *s, PyObject *arg)
> +{
> + long block;
> +
> + block = PyLong_AsLong(arg);
> + if (block == -1 && PyErr_Occurred())
> + return NULL;
> +
> + s->sock_timeout = _PyTime_FromSeconds(block ? -1 : 0);
> + if (internal_setblocking(s, block) == -1) {
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(setblocking_doc,
> +"setblocking(flag)\n\
> +\n\
> +Set the socket to blocking (flag is true) or non-blocking (false).\n\
> +setblocking(True) is equivalent to settimeout(None);\n\
> +setblocking(False) is equivalent to settimeout(0.0).");
> +
> +static int
> +socket_parse_timeout(_PyTime_t *timeout, PyObject *timeout_obj)
> +{
> +#ifdef MS_WINDOWS
> + struct timeval tv;
> +#endif
> +#ifndef HAVE_POLL
> + _PyTime_t ms;
> +#endif
> + int overflow = 0;
> +
> + if (timeout_obj == Py_None) {
> + *timeout = _PyTime_FromSeconds(-1);
> + return 0;
> + }
> +
> + if (_PyTime_FromSecondsObject(timeout,
> + timeout_obj, _PyTime_ROUND_TIMEOUT) < 0)
> + return -1;
> +
> + if (*timeout < 0) {
> + PyErr_SetString(PyExc_ValueError, "Timeout value out of range");
> + return -1;
> + }
> +
> +#ifdef MS_WINDOWS
> + overflow |= (_PyTime_AsTimeval(*timeout, &tv, _PyTime_ROUND_TIMEOUT) < 0);
> +#endif
> +#ifndef HAVE_POLL
> + ms = _PyTime_AsMilliseconds(*timeout, _PyTime_ROUND_TIMEOUT);
> + overflow |= (ms > INT_MAX);
> +#endif
> + if (overflow) {
> + PyErr_SetString(PyExc_OverflowError,
> + "timeout doesn't fit into C timeval");
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +/* s.settimeout(timeout) method. Argument:
> + None -- no timeout, blocking mode; same as setblocking(True)
> + 0.0 -- non-blocking mode; same as setblocking(False)
> + > 0 -- timeout mode; operations time out after timeout seconds
> + < 0 -- illegal; raises an exception
> +*/
> +static PyObject *
> +sock_settimeout(PySocketSockObject *s, PyObject *arg)
> +{
> + _PyTime_t timeout;
> +
> + if (socket_parse_timeout(&timeout, arg) < 0)
> + return NULL;
> +
> + s->sock_timeout = timeout;
> + if (internal_setblocking(s, timeout < 0) == -1) {
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(settimeout_doc,
> +"settimeout(timeout)\n\
> +\n\
> +Set a timeout on socket operations. 'timeout' can be a float,\n\
> +given in seconds, or None. Setting a timeout of None disables\n\
> +the timeout feature and is equivalent to setblocking(1).\n\
> +Setting a timeout of zero is the same as setblocking(0).");
> +
> +/* s.gettimeout() method.
> + Returns the timeout associated with a socket. */
> +static PyObject *
> +sock_gettimeout(PySocketSockObject *s)
> +{
> + if (s->sock_timeout < 0) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> + else {
> + double seconds = _PyTime_AsSecondsDouble(s->sock_timeout);
> + return PyFloat_FromDouble(seconds);
> + }
> +}
> +
> +PyDoc_STRVAR(gettimeout_doc,
> +"gettimeout() -> timeout\n\
> +\n\
> +Returns the timeout in seconds (float) associated with socket \n\
> +operations. A timeout of None indicates that timeouts on socket \n\
> +operations are disabled.");
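
The three timeout-related methods above map onto a single sock_timeout field:
setblocking(True) and settimeout(None) both store -1 (blocking), setblocking(False) and
settimeout(0.0) store 0 (non-blocking), and any positive value enables timeout mode.
A short sketch with hypothetical values:

    import socket

    s = socket.socket()
    s.settimeout(2.5)        # timeout mode
    print(s.gettimeout())    # 2.5
    s.setblocking(False)     # same as s.settimeout(0.0)
    print(s.gettimeout())    # 0.0
    s.setblocking(True)      # same as s.settimeout(None)
    print(s.gettimeout())    # None
    s.close()
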
> +
> +/* s.setsockopt() method.
> + With an integer third argument, sets an integer optval with optlen=4.
> + With None as third argument and an integer fourth argument, set
> + optval=NULL with unsigned int as optlen.
> + With a string third argument, sets an option from a buffer;
> + use optional built-in module 'struct' to encode the string.
> +*/
> +
> +static PyObject *
> +sock_setsockopt(PySocketSockObject *s, PyObject *args)
> +{
> + int level;
> + int optname;
> + int res;
> + Py_buffer optval;
> + int flag;
> + unsigned int optlen;
> + PyObject *none;
> +
> + /* setsockopt(level, opt, flag) */
> + if (PyArg_ParseTuple(args, "iii:setsockopt",
> + &level, &optname, &flag)) {
> + res = setsockopt(s->sock_fd, level, optname,
> + (char*)&flag, sizeof flag);
> + goto done;
> + }
> +
> + PyErr_Clear();
> + /* setsockopt(level, opt, None, flag) */
> + if (PyArg_ParseTuple(args, "iiO!I:setsockopt",
> + &level, &optname, Py_TYPE(Py_None), &none, &optlen)) {
> + assert(sizeof(socklen_t) >= sizeof(unsigned int));
> + res = setsockopt(s->sock_fd, level, optname,
> + NULL, (socklen_t)optlen);
> + goto done;
> + }
> +
> + PyErr_Clear();
> + /* setsockopt(level, opt, buffer) */
> + if (!PyArg_ParseTuple(args, "iiy*:setsockopt",
> + &level, &optname, &optval))
> + return NULL;
> +
> +#ifdef MS_WINDOWS
> + if (optval.len > INT_MAX) {
> + PyBuffer_Release(&optval);
> + PyErr_Format(PyExc_OverflowError,
> + "socket option is larger than %i bytes",
> + INT_MAX);
> + return NULL;
> + }
> + res = setsockopt(s->sock_fd, level, optname,
> + optval.buf, (int)optval.len);
> +#else
> + res = setsockopt(s->sock_fd, level, optname, optval.buf, optval.len);
> +#endif
> + PyBuffer_Release(&optval);
> +
> +done:
> + if (res < 0) {
> + return s->errorhandler();
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(setsockopt_doc,
> +"setsockopt(level, option, value: int)\n\
> +setsockopt(level, option, value: buffer)\n\
> +setsockopt(level, option, None, optlen: int)\n\
> +\n\
> +Set a socket option. See the Unix manual for level and option.\n\
> +The value argument can either be an integer, a string buffer, or \n\
> +None, optlen.");
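
The argument forms parsed above correspond to the following Python-level calls; a sketch
assuming SO_REUSEADDR and SO_LINGER are available on the target platform:

    import socket
    import struct

    s = socket.socket()
    # Integer form: optlen is sizeof(int).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    # Buffer form: the caller packs the option struct itself (here: struct linger,
    # assuming it is two C ints on this platform).
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s.close()
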
> +
> +
> +/* s.getsockopt() method.
> + With two arguments, retrieves an integer option.
> + With a third integer argument, retrieves a string buffer of that size;
> + use optional built-in module 'struct' to decode the string. */
> +
> +static PyObject *
> +sock_getsockopt(PySocketSockObject *s, PyObject *args)
> +{
> + int level;
> + int optname;
> + int res;
> + PyObject *buf;
> + socklen_t buflen = 0;
> +
> + if (!PyArg_ParseTuple(args, "ii|i:getsockopt",
> + &level, &optname, &buflen))
> + return NULL;
> +
> + if (buflen == 0) {
> + int flag = 0;
> + socklen_t flagsize = sizeof flag;
> + res = getsockopt(s->sock_fd, level, optname,
> + (void *)&flag, &flagsize);
> + if (res < 0)
> + return s->errorhandler();
> + return PyLong_FromLong(flag);
> + }
> + if (buflen <= 0 || buflen > 1024) {
> + PyErr_SetString(PyExc_OSError,
> + "getsockopt buflen out of range");
> + return NULL;
> + }
> + buf = PyBytes_FromStringAndSize((char *)NULL, buflen);
> + if (buf == NULL)
> + return NULL;
> + res = getsockopt(s->sock_fd, level, optname,
> + (void *)PyBytes_AS_STRING(buf), &buflen);
> + if (res < 0) {
> + Py_DECREF(buf);
> + return s->errorhandler();
> + }
> + _PyBytes_Resize(&buf, buflen);
> + return buf;
> +}
> +
> +PyDoc_STRVAR(getsockopt_doc,
> +"getsockopt(level, option[, buffersize]) -> value\n\
> +\n\
> +Get a socket option. See the Unix manual for level and option.\n\
> +If a nonzero buffersize argument is given, the return value is a\n\
> +string of that length; otherwise it is an integer.");
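
And the matching read side: with no buffersize the C code returns the option as an int,
with a buffersize it returns raw bytes for the caller to decode (sketch, same
availability assumptions as above):

    import socket

    s = socket.socket()
    # No buffersize: the option is returned as a Python int.
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR))
    # With a buffersize: up to that many raw bytes are returned for the caller
    # to decode (e.g. with the struct module).
    raw = s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 16)
    s.close()
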
> +
> +
> +/* s.bind(sockaddr) method */
> +
> +static PyObject *
> +sock_bind(PySocketSockObject *s, PyObject *addro)
> +{
> + sock_addr_t addrbuf;
> + int addrlen;
> + int res;
> +
> + if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = bind(s->sock_fd, SAS2SA(&addrbuf), addrlen);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return s->errorhandler();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(bind_doc,
> +"bind(address)\n\
> +\n\
> +Bind the socket to a local address. For IP sockets, the address is a\n\
> +pair (host, port); the host must refer to the local host. For raw packet\n\
> +sockets the address is a tuple (ifname, proto [,pkttype [,hatype [,addr]]])");
> +
> +
> +/* s.close() method.
> + Set the file descriptor to -1 so operations tried subsequently
> + will surely fail. */
> +
> +static PyObject *
> +sock_close(PySocketSockObject *s)
> +{
> + SOCKET_T fd;
> + int res;
> +
> + fd = s->sock_fd;
> + if (fd != INVALID_SOCKET) {
> + s->sock_fd = INVALID_SOCKET;
> +
> + /* We do not want to retry upon EINTR: see
> + http://lwn.net/Articles/576478/ and
> + http://linux.derkeiler.com/Mailing-Lists/Kernel/2005-09/3000.html
> + for more details. */
> + Py_BEGIN_ALLOW_THREADS
> + res = SOCKETCLOSE(fd);
> + Py_END_ALLOW_THREADS
> + /* bpo-30319: The peer may already have closed the connection.
> + Python ignores ECONNRESET on close(). */
> + if (res < 0 && errno != ECONNRESET) {
> + return s->errorhandler();
> + }
> + }
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(close_doc,
> +"close()\n\
> +\n\
> +Close the socket. It cannot be used after this call.");
> +
> +static PyObject *
> +sock_detach(PySocketSockObject *s)
> +{
> + SOCKET_T fd = s->sock_fd;
> + s->sock_fd = INVALID_SOCKET;
> + return PyLong_FromSocket_t(fd);
> +}
> +
> +PyDoc_STRVAR(detach_doc,
> +"detach()\n\
> +\n\
> +Close the socket object without closing the underlying file descriptor.\n\
> +The object cannot be used after this call, but the file descriptor\n\
> +can be reused for other purposes. The file descriptor is returned.");
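
detach() only forgets the descriptor; nothing is closed. A sketch of the usual hand-off
pattern, where the raw fd is re-wrapped in a new socket object (the family/type passed
to the new object should match the original socket):

    import socket

    a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    fd = a.detach()                    # 'a' is now unusable, fd stays open
    b = socket.socket(socket.AF_INET, socket.SOCK_STREAM, fileno=fd)
    b.close()                          # this is what finally closes the fd
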
> +
> +static int
> +sock_connect_impl(PySocketSockObject *s, void* Py_UNUSED(data))
> +{
> + int err;
> + socklen_t size = sizeof err;
> +
> + if (getsockopt(s->sock_fd, SOL_SOCKET, SO_ERROR, (void *)&err, &size)) {
> + /* getsockopt() failed */
> + return 0;
> + }
> +
> + if (err == EISCONN)
> + return 1;
> + if (err != 0) {
> + /* sock_call_ex() uses GET_SOCK_ERROR() to get the error code */
> + SET_SOCK_ERROR(err);
> + return 0;
> + }
> + return 1;
> +}
> +
> +static int
> +internal_connect(PySocketSockObject *s, struct sockaddr *addr, int addrlen,
> + int raise)
> +{
> + int res, err, wait_connect;
> +
> + Py_BEGIN_ALLOW_THREADS
> + res = connect(s->sock_fd, addr, addrlen);
> + Py_END_ALLOW_THREADS
> +
> + if (!res) {
> + /* connect() succeeded, the socket is connected */
> + return 0;
> + }
> +
> + /* connect() failed */
> +
> + /* save error, PyErr_CheckSignals() can replace it */
> + err = GET_SOCK_ERROR;
> + if (CHECK_ERRNO(EINTR)) {
> + if (PyErr_CheckSignals())
> + return -1;
> +
> + /* Issue #23618: when connect() fails with EINTR, the connection is
> + running asynchronously.
> +
> + If the socket is blocking or has a timeout, wait until the
> + connection completes, fails, or times out using select(), and then
> + get the connection status using getsockopt(SO_ERROR).
> +
> + If the socket is non-blocking, raise InterruptedError. The caller is
> + responsible for waiting until the connection completes, fails, or times
> + out (this is the case in asyncio, for example). */
> + wait_connect = (s->sock_timeout != 0 && IS_SELECTABLE(s));
> + }
> + else {
> + wait_connect = (s->sock_timeout > 0 && err == SOCK_INPROGRESS_ERR
> + && IS_SELECTABLE(s));
> + }
> +
> + if (!wait_connect) {
> + if (raise) {
> + /* restore error, maybe replaced by PyErr_CheckSignals() */
> + SET_SOCK_ERROR(err);
> + s->errorhandler();
> + return -1;
> + }
> + else
> + return err;
> + }
> +
> + if (raise) {
> + /* socket.connect() raises an exception on error */
> + if (sock_call_ex(s, 1, sock_connect_impl, NULL,
> + 1, NULL, s->sock_timeout) < 0)
> + return -1;
> + }
> + else {
> + /* socket.connect_ex() returns the error code on error */
> + if (sock_call_ex(s, 1, sock_connect_impl, NULL,
> + 1, &err, s->sock_timeout) < 0)
> + return err;
> + }
> + return 0;
> +}
> +
> +/* s.connect(sockaddr) method */
> +
> +static PyObject *
> +sock_connect(PySocketSockObject *s, PyObject *addro)
> +{
> + sock_addr_t addrbuf;
> + int addrlen;
> + int res;
> +
> + if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
> + return NULL;
> +
> + res = internal_connect(s, SAS2SA(&addrbuf), addrlen, 1);
> + if (res < 0)
> + return NULL;
> +
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(connect_doc,
> +"connect(address)\n\
> +\n\
> +Connect the socket to a remote address. For IP sockets, the address\n\
> +is a pair (host, port).");
> +
> +
> +/* s.connect_ex(sockaddr) method */
> +
> +static PyObject *
> +sock_connect_ex(PySocketSockObject *s, PyObject *addro)
> +{
> + sock_addr_t addrbuf;
> + int addrlen;
> + int res;
> +
> + if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen))
> + return NULL;
> +
> + res = internal_connect(s, SAS2SA(&addrbuf), addrlen, 0);
> + if (res < 0)
> + return NULL;
> +
> + return PyLong_FromLong((long) res);
> +}
> +
> +PyDoc_STRVAR(connect_ex_doc,
> +"connect_ex(address) -> errno\n\
> +\n\
> +This is like connect(address), but returns an error code (the errno value)\n\
> +instead of raising an exception when an error occurs.");
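
internal_connect() is shared by both paths; the raise flag selects between raising
(connect) and returning the errno value (connect_ex). Sketch, with a deliberately
unreachable example address:

    import errno
    import socket

    s = socket.socket()
    s.settimeout(1.0)
    rc = s.connect_ex(("127.0.0.1", 9))   # port 9: usually nothing listening
    if rc == 0:
        print("connected")
    elif rc == errno.ECONNREFUSED:
        print("connection refused")
    else:
        print("connect failed with errno", rc)
    s.close()
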
> +
> +
> +/* s.fileno() method */
> +
> +static PyObject *
> +sock_fileno(PySocketSockObject *s)
> +{
> + return PyLong_FromSocket_t(s->sock_fd);
> +}
> +
> +PyDoc_STRVAR(fileno_doc,
> +"fileno() -> integer\n\
> +\n\
> +Return the integer file descriptor of the socket.");
> +
> +
> +/* s.getsockname() method */
> +
> +static PyObject *
> +sock_getsockname(PySocketSockObject *s)
> +{
> + sock_addr_t addrbuf;
> + int res;
> + socklen_t addrlen;
> +
> + if (!getsockaddrlen(s, &addrlen))
> + return NULL;
> + memset(&addrbuf, 0, addrlen);
> + Py_BEGIN_ALLOW_THREADS
> + res = getsockname(s->sock_fd, SAS2SA(&addrbuf), &addrlen);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return s->errorhandler();
> + return makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
> + s->sock_proto);
> +}
> +
> +PyDoc_STRVAR(getsockname_doc,
> +"getsockname() -> address info\n\
> +\n\
> +Return the address of the local endpoint. For IP sockets, the address\n\
> +info is a pair (hostaddr, port).");
> +
> +
> +#ifdef HAVE_GETPEERNAME /* Cray APP doesn't have this :-( */
> +/* s.getpeername() method */
> +
> +static PyObject *
> +sock_getpeername(PySocketSockObject *s)
> +{
> + sock_addr_t addrbuf;
> + int res;
> + socklen_t addrlen;
> +
> + if (!getsockaddrlen(s, &addrlen))
> + return NULL;
> + memset(&addrbuf, 0, addrlen);
> + Py_BEGIN_ALLOW_THREADS
> + res = getpeername(s->sock_fd, SAS2SA(&addrbuf), &addrlen);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return s->errorhandler();
> + return makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
> + s->sock_proto);
> +}
> +
> +PyDoc_STRVAR(getpeername_doc,
> +"getpeername() -> address info\n\
> +\n\
> +Return the address of the remote endpoint. For IP sockets, the address\n\
> +info is a pair (hostaddr, port).");
> +
> +#endif /* HAVE_GETPEERNAME */
> +
> +
> +/* s.listen(n) method */
> +
> +static PyObject *
> +sock_listen(PySocketSockObject *s, PyObject *args)
> +{
> + /* We try to choose a default backlog high enough to avoid connection drops
> + * for common workloads, yet low enough to limit resource usage. */
> + int backlog = Py_MIN(SOMAXCONN, 128);
> + int res;
> +
> + if (!PyArg_ParseTuple(args, "|i:listen", &backlog))
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + /* To avoid problems on systems that don't allow a negative backlog
> + * (which doesn't make sense anyway) we force a minimum value of 0. */
> + if (backlog < 0)
> + backlog = 0;
> + res = listen(s->sock_fd, backlog);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return s->errorhandler();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(listen_doc,
> +"listen([backlog])\n\
> +\n\
> +Enable a server to accept connections. If backlog is specified, it must be\n\
> +at least 0 (if it is lower, it is set to 0); it specifies the number of\n\
> +unaccepted connections that the system will allow before refusing new\n\
> +connections. If not specified, a default reasonable value is chosen.");
> +
> +struct sock_recv {
> + char *cbuf;
> + Py_ssize_t len;
> + int flags;
> + Py_ssize_t result;
> +};
> +
> +static int
> +sock_recv_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_recv *ctx = data;
> +
> +#ifdef MS_WINDOWS
> + if (ctx->len > INT_MAX)
> + ctx->len = INT_MAX;
> + ctx->result = recv(s->sock_fd, ctx->cbuf, (int)ctx->len, ctx->flags);
> +#else
> + ctx->result = recv(s->sock_fd, ctx->cbuf, ctx->len, ctx->flags);
> +#endif
> + return (ctx->result >= 0);
> +}
> +
> +
> +/*
> + * This is the guts of the recv() and recv_into() methods, which reads into a
> + * char buffer. If you have any inc/dec ref to do to the objects that contain
> + * the buffer, do it in the caller. This function returns the number of bytes
> + * successfully read. If there was an error, it returns -1. Note that it is
> + * also possible that we return a number of bytes smaller than the requested
> + * number of bytes.
> + */
> +
> +static Py_ssize_t
> +sock_recv_guts(PySocketSockObject *s, char* cbuf, Py_ssize_t len, int flags)
> +{
> + struct sock_recv ctx;
> +
> + if (!IS_SELECTABLE(s)) {
> + select_error();
> + return -1;
> + }
> + if (len == 0) {
> + /* If 0 bytes were requested, do nothing. */
> + return 0;
> + }
> +
> + ctx.cbuf = cbuf;
> + ctx.len = len;
> + ctx.flags = flags;
> + if (sock_call(s, 0, sock_recv_impl, &ctx) < 0)
> + return -1;
> +
> + return ctx.result;
> +}
> +
> +
> +/* s.recv(nbytes [,flags]) method */
> +
> +static PyObject *
> +sock_recv(PySocketSockObject *s, PyObject *args)
> +{
> + Py_ssize_t recvlen, outlen;
> + int flags = 0;
> + PyObject *buf;
> +
> + if (!PyArg_ParseTuple(args, "n|i:recv", &recvlen, &flags))
> + return NULL;
> +
> + if (recvlen < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "negative buffersize in recv");
> + return NULL;
> + }
> +
> + /* Allocate a new string. */
> + buf = PyBytes_FromStringAndSize((char *) 0, recvlen);
> + if (buf == NULL)
> + return NULL;
> +
> + /* Call the guts */
> + outlen = sock_recv_guts(s, PyBytes_AS_STRING(buf), recvlen, flags);
> + if (outlen < 0) {
> + /* An error occurred, release the string and return an
> + error. */
> + Py_DECREF(buf);
> + return NULL;
> + }
> + if (outlen != recvlen) {
> + /* We did not read as many bytes as we anticipated, resize the
> + string if possible and be successful. */
> + _PyBytes_Resize(&buf, outlen);
> + }
> +
> + return buf;
> +}
> +
> +PyDoc_STRVAR(recv_doc,
> +"recv(buffersize[, flags]) -> data\n\
> +\n\
> +Receive up to buffersize bytes from the socket. For the optional flags\n\
> +argument, see the Unix manual. When no data is available, block until\n\
> +at least one byte is available or until the remote end is closed. When\n\
> +the remote end is closed and all data is read, return the empty string.");
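
Because recv() may legitimately return fewer bytes than requested, callers that need a
whole stream loop until the empty-bytes EOF marker. Sketch, assuming `conn` is a
connected stream socket such as the one returned by accept() in the earlier example:

    chunks = []
    while True:
        data = conn.recv(4096)      # up to 4096 bytes per call
        if not data:                # b"" means the peer closed the connection
            break
        chunks.append(data)
    payload = b"".join(chunks)
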
> +
> +
> +/* s.recv_into(buffer, [nbytes [,flags]]) method */
> +
> +static PyObject*
> +sock_recv_into(PySocketSockObject *s, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"buffer", "nbytes", "flags", 0};
> +
> + int flags = 0;
> + Py_buffer pbuf;
> + char *buf;
> + Py_ssize_t buflen, readlen, recvlen = 0;
> +
> + /* Get the buffer's memory */
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "w*|ni:recv_into", kwlist,
> + &pbuf, &recvlen, &flags))
> + return NULL;
> + buf = pbuf.buf;
> + buflen = pbuf.len;
> +
> + if (recvlen < 0) {
> + PyBuffer_Release(&pbuf);
> + PyErr_SetString(PyExc_ValueError,
> + "negative buffersize in recv_into");
> + return NULL;
> + }
> + if (recvlen == 0) {
> + /* If nbytes was not specified, use the buffer's length */
> + recvlen = buflen;
> + }
> +
> + /* Check if the buffer is large enough */
> + if (buflen < recvlen) {
> + PyBuffer_Release(&pbuf);
> + PyErr_SetString(PyExc_ValueError,
> + "buffer too small for requested bytes");
> + return NULL;
> + }
> +
> + /* Call the guts */
> + readlen = sock_recv_guts(s, buf, recvlen, flags);
> + if (readlen < 0) {
> + /* Return an error. */
> + PyBuffer_Release(&pbuf);
> + return NULL;
> + }
> +
> + PyBuffer_Release(&pbuf);
> + /* Return the number of bytes read. Note that we do not do anything
> + special here in the case that readlen < recvlen. */
> + return PyLong_FromSsize_t(readlen);
> +}
> +
> +PyDoc_STRVAR(recv_into_doc,
> +"recv_into(buffer, [nbytes[, flags]]) -> nbytes_read\n\
> +\n\
> +A version of recv() that stores its data into a buffer rather than creating \n\
> +a new string. Receive up to nbytes bytes from the socket. If nbytes \n\
> +is not specified (or 0), receive up to the size available in the given buffer.\n\
> +\n\
> +See recv() for documentation about the flags.");
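
recv_into() writes into a caller-supplied writable buffer instead of allocating a new
bytes object on every call, which matters for large transfers. Sketch, again assuming
`conn` is a connected stream socket:

    buf = bytearray(4096)
    view = memoryview(buf)
    n = conn.recv_into(view)        # returns the number of bytes written
    received = bytes(view[:n])
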
> +
> +struct sock_recvfrom {
> + char* cbuf;
> + Py_ssize_t len;
> + int flags;
> + socklen_t *addrlen;
> + sock_addr_t *addrbuf;
> + Py_ssize_t result;
> +};
> +
> +static int
> +sock_recvfrom_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_recvfrom *ctx = data;
> +
> + memset(ctx->addrbuf, 0, *ctx->addrlen);
> +
> +#ifdef MS_WINDOWS
> + if (ctx->len > INT_MAX)
> + ctx->len = INT_MAX;
> + ctx->result = recvfrom(s->sock_fd, ctx->cbuf, (int)ctx->len, ctx->flags,
> + SAS2SA(ctx->addrbuf), ctx->addrlen);
> +#else
> + ctx->result = recvfrom(s->sock_fd, ctx->cbuf, ctx->len, ctx->flags,
> + SAS2SA(ctx->addrbuf), ctx->addrlen);
> +#endif
> + return (ctx->result >= 0);
> +}
> +
> +
> +/*
> + * This is the guts of the recvfrom() and recvfrom_into() methods, which reads
> + * into a char buffer. If you have any inc/dec ref to do to the objects that
> + * contain the buffer, do it in the caller. This function returns the number
> + * of bytes successfully read. If there was an error, it returns -1. Note
> + * that it is also possible that we return a number of bytes smaller than the
> + * requested number of bytes.
> + *
> + * 'addr' is a return value for the address object. Note that you must decref
> + * it yourself.
> + */
> +static Py_ssize_t
> +sock_recvfrom_guts(PySocketSockObject *s, char* cbuf, Py_ssize_t len, int flags,
> + PyObject** addr)
> +{
> + sock_addr_t addrbuf;
> + socklen_t addrlen;
> + struct sock_recvfrom ctx;
> +
> + *addr = NULL;
> +
> + if (!getsockaddrlen(s, &addrlen))
> + return -1;
> +
> + if (!IS_SELECTABLE(s)) {
> + select_error();
> + return -1;
> + }
> +
> + ctx.cbuf = cbuf;
> + ctx.len = len;
> + ctx.flags = flags;
> + ctx.addrbuf = &addrbuf;
> + ctx.addrlen = &addrlen;
> + if (sock_call(s, 0, sock_recvfrom_impl, &ctx) < 0)
> + return -1;
> +
> + *addr = makesockaddr(s->sock_fd, SAS2SA(&addrbuf), addrlen,
> + s->sock_proto);
> + if (*addr == NULL)
> + return -1;
> +
> + return ctx.result;
> +}
> +
> +/* s.recvfrom(nbytes [,flags]) method */
> +
> +static PyObject *
> +sock_recvfrom(PySocketSockObject *s, PyObject *args)
> +{
> + PyObject *buf = NULL;
> + PyObject *addr = NULL;
> + PyObject *ret = NULL;
> + int flags = 0;
> + Py_ssize_t recvlen, outlen;
> +
> + if (!PyArg_ParseTuple(args, "n|i:recvfrom", &recvlen, &flags))
> + return NULL;
> +
> + if (recvlen < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "negative buffersize in recvfrom");
> + return NULL;
> + }
> +
> + buf = PyBytes_FromStringAndSize((char *) 0, recvlen);
> + if (buf == NULL)
> + return NULL;
> +
> + outlen = sock_recvfrom_guts(s, PyBytes_AS_STRING(buf),
> + recvlen, flags, &addr);
> + if (outlen < 0) {
> + goto finally;
> + }
> +
> + if (outlen != recvlen) {
> + /* We did not read as many bytes as we anticipated, resize the
> + string if possible and be successful. */
> + if (_PyBytes_Resize(&buf, outlen) < 0)
> + /* Oopsy, not so successful after all. */
> + goto finally;
> + }
> +
> + ret = PyTuple_Pack(2, buf, addr);
> +
> +finally:
> + Py_XDECREF(buf);
> + Py_XDECREF(addr);
> + return ret;
> +}
> +
> +PyDoc_STRVAR(recvfrom_doc,
> +"recvfrom(buffersize[, flags]) -> (data, address info)\n\
> +\n\
> +Like recv(buffersize, flags) but also return the sender's address info.");
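
recvfrom() is the datagram-oriented counterpart; each call returns one datagram plus the
sender address built by makesockaddr(). Sketch of a minimal UDP receiver, assuming
AF_INET/SOCK_DGRAM is available (the port number is an arbitrary example):

    import socket

    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.bind(("0.0.0.0", 5005))
    data, sender = u.recvfrom(1024)   # sender is a (hostaddr, port) pair
    u.close()
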
> +
> +
> +/* s.recvfrom_into(buffer[, nbytes [,flags]]) method */
> +
> +static PyObject *
> +sock_recvfrom_into(PySocketSockObject *s, PyObject *args, PyObject* kwds)
> +{
> + static char *kwlist[] = {"buffer", "nbytes", "flags", 0};
> +
> + int flags = 0;
> + Py_buffer pbuf;
> + char *buf;
> + Py_ssize_t readlen, buflen, recvlen = 0;
> +
> + PyObject *addr = NULL;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "w*|ni:recvfrom_into",
> + kwlist, &pbuf,
> + &recvlen, &flags))
> + return NULL;
> + buf = pbuf.buf;
> + buflen = pbuf.len;
> +
> + if (recvlen < 0) {
> + PyBuffer_Release(&pbuf);
> + PyErr_SetString(PyExc_ValueError,
> + "negative buffersize in recvfrom_into");
> + return NULL;
> + }
> + if (recvlen == 0) {
> + /* If nbytes was not specified, use the buffer's length */
> + recvlen = buflen;
> + } else if (recvlen > buflen) {
> + PyBuffer_Release(&pbuf);
> + PyErr_SetString(PyExc_ValueError,
> + "nbytes is greater than the length of the buffer");
> + return NULL;
> + }
> +
> + readlen = sock_recvfrom_guts(s, buf, recvlen, flags, &addr);
> + if (readlen < 0) {
> + PyBuffer_Release(&pbuf);
> + /* Return an error */
> + Py_XDECREF(addr);
> + return NULL;
> + }
> +
> + PyBuffer_Release(&pbuf);
> + /* Return the number of bytes read and the address. Note that we do
> + not do anything special here in the case that readlen < recvlen. */
> + return Py_BuildValue("nN", readlen, addr);
> +}
> +
> +PyDoc_STRVAR(recvfrom_into_doc,
> +"recvfrom_into(buffer[, nbytes[, flags]]) -> (nbytes, address info)\n\
> +\n\
> +Like recv_into(buffer[, nbytes[, flags]]) but also return the sender's address info.");
> +
> +/* The sendmsg() and recvmsg[_into]() methods require a working
> + CMSG_LEN(). See the comment near get_CMSG_LEN(). */
> +#ifdef CMSG_LEN
> +struct sock_recvmsg {
> + struct msghdr *msg;
> + int flags;
> + ssize_t result;
> +};
> +
> +static int
> +sock_recvmsg_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_recvmsg *ctx = data;
> +
> + ctx->result = recvmsg(s->sock_fd, ctx->msg, ctx->flags);
> + return (ctx->result >= 0);
> +}
> +
> +/*
> + * Call recvmsg() with the supplied iovec structures, flags, and
> + * ancillary data buffer size (controllen). Returns the tuple return
> + * value for recvmsg() or recvmsg_into(), with the first item provided
> + * by the supplied makeval() function. makeval() will be called with
> + * the length read and makeval_data as arguments, and must return a
> + * new reference (which will be decrefed if there is a subsequent
> + * error). On error, closes any file descriptors received via
> + * SCM_RIGHTS.
> + */
> +static PyObject *
> +sock_recvmsg_guts(PySocketSockObject *s, struct iovec *iov, int iovlen,
> + int flags, Py_ssize_t controllen,
> + PyObject *(*makeval)(ssize_t, void *), void *makeval_data)
> +{
> + sock_addr_t addrbuf;
> + socklen_t addrbuflen;
> + struct msghdr msg = {0};
> + PyObject *cmsg_list = NULL, *retval = NULL;
> + void *controlbuf = NULL;
> + struct cmsghdr *cmsgh;
> + size_t cmsgdatalen = 0;
> + int cmsg_status;
> + struct sock_recvmsg ctx;
> +
> + /* XXX: POSIX says that msg_name and msg_namelen "shall be
> + ignored" when the socket is connected (Linux fills them in
> + anyway for AF_UNIX sockets at least). Normally msg_namelen
> + seems to be set to 0 if there's no address, but try to
> + initialize msg_name to something that won't be mistaken for a
> + real address if that doesn't happen. */
> + if (!getsockaddrlen(s, &addrbuflen))
> + return NULL;
> + memset(&addrbuf, 0, addrbuflen);
> + SAS2SA(&addrbuf)->sa_family = AF_UNSPEC;
> +
> + if (controllen < 0 || controllen > SOCKLEN_T_LIMIT) {
> + PyErr_SetString(PyExc_ValueError,
> + "invalid ancillary data buffer length");
> + return NULL;
> + }
> + if (controllen > 0 && (controlbuf = PyMem_Malloc(controllen)) == NULL)
> + return PyErr_NoMemory();
> +
> + /* Make the system call. */
> + if (!IS_SELECTABLE(s)) {
> + select_error();
> + goto finally;
> + }
> +
> + msg.msg_name = SAS2SA(&addrbuf);
> + msg.msg_namelen = addrbuflen;
> + msg.msg_iov = iov;
> + msg.msg_iovlen = iovlen;
> + msg.msg_control = controlbuf;
> + msg.msg_controllen = controllen;
> +
> + ctx.msg = &msg;
> + ctx.flags = flags;
> + if (sock_call(s, 0, sock_recvmsg_impl, &ctx) < 0)
> + goto finally;
> +
> + /* Make list of (level, type, data) tuples from control messages. */
> + if ((cmsg_list = PyList_New(0)) == NULL)
> + goto err_closefds;
> + /* Check for empty ancillary data as old CMSG_FIRSTHDR()
> + implementations didn't do so. */
> + for (cmsgh = ((msg.msg_controllen > 0) ? CMSG_FIRSTHDR(&msg) : NULL);
> + cmsgh != NULL; cmsgh = CMSG_NXTHDR(&msg, cmsgh)) {
> + PyObject *bytes, *tuple;
> + int tmp;
> +
> + cmsg_status = get_cmsg_data_len(&msg, cmsgh, &cmsgdatalen);
> + if (cmsg_status != 0) {
> + if (PyErr_WarnEx(PyExc_RuntimeWarning,
> + "received malformed or improperly-truncated "
> + "ancillary data", 1) == -1)
> + goto err_closefds;
> + }
> + if (cmsg_status < 0)
> + break;
> + if (cmsgdatalen > PY_SSIZE_T_MAX) {
> + PyErr_SetString(PyExc_OSError, "control message too long");
> + goto err_closefds;
> + }
> +
> + bytes = PyBytes_FromStringAndSize((char *)CMSG_DATA(cmsgh),
> + cmsgdatalen);
> + tuple = Py_BuildValue("iiN", (int)cmsgh->cmsg_level,
> + (int)cmsgh->cmsg_type, bytes);
> + if (tuple == NULL)
> + goto err_closefds;
> + tmp = PyList_Append(cmsg_list, tuple);
> + Py_DECREF(tuple);
> + if (tmp != 0)
> + goto err_closefds;
> +
> + if (cmsg_status != 0)
> + break;
> + }
> +
> + retval = Py_BuildValue("NOiN",
> + (*makeval)(ctx.result, makeval_data),
> + cmsg_list,
> + (int)msg.msg_flags,
> + makesockaddr(s->sock_fd, SAS2SA(&addrbuf),
> + ((msg.msg_namelen > addrbuflen) ?
> + addrbuflen : msg.msg_namelen),
> + s->sock_proto));
> + if (retval == NULL)
> + goto err_closefds;
> +
> +finally:
> + Py_XDECREF(cmsg_list);
> + PyMem_Free(controlbuf);
> + return retval;
> +
> +err_closefds:
> +#ifdef SCM_RIGHTS
> + /* Close all descriptors coming from SCM_RIGHTS, so they don't leak. */
> + for (cmsgh = ((msg.msg_controllen > 0) ? CMSG_FIRSTHDR(&msg) : NULL);
> + cmsgh != NULL; cmsgh = CMSG_NXTHDR(&msg, cmsgh)) {
> + cmsg_status = get_cmsg_data_len(&msg, cmsgh, &cmsgdatalen);
> + if (cmsg_status < 0)
> + break;
> + if (cmsgh->cmsg_level == SOL_SOCKET &&
> + cmsgh->cmsg_type == SCM_RIGHTS) {
> + size_t numfds;
> + int *fdp;
> +
> + numfds = cmsgdatalen / sizeof(int);
> + fdp = (int *)CMSG_DATA(cmsgh);
> + while (numfds-- > 0)
> + close(*fdp++);
> + }
> + if (cmsg_status != 0)
> + break;
> + }
> +#endif /* SCM_RIGHTS */
> + goto finally;
> +}
> +
> +
> +static PyObject *
> +makeval_recvmsg(ssize_t received, void *data)
> +{
> + PyObject **buf = data;
> +
> + if (received < PyBytes_GET_SIZE(*buf))
> + _PyBytes_Resize(buf, received);
> + Py_XINCREF(*buf);
> + return *buf;
> +}
> +
> +/* s.recvmsg(bufsize[, ancbufsize[, flags]]) method */
> +
> +static PyObject *
> +sock_recvmsg(PySocketSockObject *s, PyObject *args)
> +{
> + Py_ssize_t bufsize, ancbufsize = 0;
> + int flags = 0;
> + struct iovec iov;
> + PyObject *buf = NULL, *retval = NULL;
> +
> + if (!PyArg_ParseTuple(args, "n|ni:recvmsg", &bufsize, &ancbufsize, &flags))
> + return NULL;
> +
> + if (bufsize < 0) {
> + PyErr_SetString(PyExc_ValueError, "negative buffer size in recvmsg()");
> + return NULL;
> + }
> + if ((buf = PyBytes_FromStringAndSize(NULL, bufsize)) == NULL)
> + return NULL;
> + iov.iov_base = PyBytes_AS_STRING(buf);
> + iov.iov_len = bufsize;
> +
> + /* Note that we're passing a pointer to *our pointer* to the bytes
> + object here (&buf); makeval_recvmsg() may incref the object, or
> + deallocate it and set our pointer to NULL. */
> + retval = sock_recvmsg_guts(s, &iov, 1, flags, ancbufsize,
> + &makeval_recvmsg, &buf);
> + Py_XDECREF(buf);
> + return retval;
> +}
> +
> +PyDoc_STRVAR(recvmsg_doc,
> +"recvmsg(bufsize[, ancbufsize[, flags]]) -> (data, ancdata, msg_flags, address)\n\
> +\n\
> +Receive normal data (up to bufsize bytes) and ancillary data from the\n\
> +socket. The ancbufsize argument sets the size in bytes of the\n\
> +internal buffer used to receive the ancillary data; it defaults to 0,\n\
> +meaning that no ancillary data will be received. Appropriate buffer\n\
> +sizes for ancillary data can be calculated using CMSG_SPACE() or\n\
> +CMSG_LEN(), and items which do not fit into the buffer might be\n\
> +truncated or discarded. The flags argument defaults to 0 and has the\n\
> +same meaning as for recv().\n\
> +\n\
> +The return value is a 4-tuple: (data, ancdata, msg_flags, address).\n\
> +The data item is a bytes object holding the non-ancillary data\n\
> +received. The ancdata item is a list of zero or more tuples\n\
> +(cmsg_level, cmsg_type, cmsg_data) representing the ancillary data\n\
> +(control messages) received: cmsg_level and cmsg_type are integers\n\
> +specifying the protocol level and protocol-specific type respectively,\n\
> +and cmsg_data is a bytes object holding the associated data. The\n\
> +msg_flags item is the bitwise OR of various flags indicating\n\
> +conditions on the received message; see your system documentation for\n\
> +details. If the receiving socket is unconnected, address is the\n\
> +address of the sending socket, if available; otherwise, its value is\n\
> +unspecified.\n\
> +\n\
> +If recvmsg() raises an exception after the system call returns, it\n\
> +will first attempt to close any file descriptors received via the\n\
> +SCM_RIGHTS mechanism.");
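
A sketch of consuming the 4-tuple described above, including the SCM_RIGHTS case that
the error path is careful to clean up; this assumes a Unix-like build where
SOL_SOCKET/SCM_RIGHTS and CMSG_LEN are available and `sock` is an AF_UNIX socket
receiving descriptors:

    import array
    import socket

    fds = array.array("i")
    data, ancdata, msg_flags, addr = sock.recvmsg(
        4096, socket.CMSG_LEN(4 * fds.itemsize))
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
            # Keep only whole ints; a truncated item may end mid-descriptor.
            usable = len(cmsg_data) - (len(cmsg_data) % fds.itemsize)
            fds.frombytes(cmsg_data[:usable])
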
> +
> +
> +static PyObject *
> +makeval_recvmsg_into(ssize_t received, void *data)
> +{
> + return PyLong_FromSsize_t(received);
> +}
> +
> +/* s.recvmsg_into(buffers[, ancbufsize[, flags]]) method */
> +
> +static PyObject *
> +sock_recvmsg_into(PySocketSockObject *s, PyObject *args)
> +{
> + Py_ssize_t ancbufsize = 0;
> + int flags = 0;
> + struct iovec *iovs = NULL;
> + Py_ssize_t i, nitems, nbufs = 0;
> + Py_buffer *bufs = NULL;
> + PyObject *buffers_arg, *fast, *retval = NULL;
> +
> + if (!PyArg_ParseTuple(args, "O|ni:recvmsg_into",
> + &buffers_arg, &ancbufsize, &flags))
> + return NULL;
> +
> + if ((fast = PySequence_Fast(buffers_arg,
> + "recvmsg_into() argument 1 must be an "
> + "iterable")) == NULL)
> + return NULL;
> + nitems = PySequence_Fast_GET_SIZE(fast);
> + if (nitems > INT_MAX) {
> + PyErr_SetString(PyExc_OSError, "recvmsg_into() argument 1 is too long");
> + goto finally;
> + }
> +
> + /* Fill in an iovec for each item, and save the Py_buffer
> + structs to release afterwards. */
> + if (nitems > 0 && ((iovs = PyMem_New(struct iovec, nitems)) == NULL ||
> + (bufs = PyMem_New(Py_buffer, nitems)) == NULL)) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + for (; nbufs < nitems; nbufs++) {
> + if (!PyArg_Parse(PySequence_Fast_GET_ITEM(fast, nbufs),
> + "w*;recvmsg_into() argument 1 must be an iterable "
> + "of single-segment read-write buffers",
> + &bufs[nbufs]))
> + goto finally;
> + iovs[nbufs].iov_base = bufs[nbufs].buf;
> + iovs[nbufs].iov_len = bufs[nbufs].len;
> + }
> +
> + retval = sock_recvmsg_guts(s, iovs, nitems, flags, ancbufsize,
> + &makeval_recvmsg_into, NULL);
> +finally:
> + for (i = 0; i < nbufs; i++)
> + PyBuffer_Release(&bufs[i]);
> + PyMem_Free(bufs);
> + PyMem_Free(iovs);
> + Py_DECREF(fast);
> + return retval;
> +}
> +
> +PyDoc_STRVAR(recvmsg_into_doc,
> +"recvmsg_into(buffers[, ancbufsize[, flags]]) -> (nbytes, ancdata, msg_flags, address)\n\
> +\n\
> +Receive normal data and ancillary data from the socket, scattering the\n\
> +non-ancillary data into a series of buffers. The buffers argument\n\
> +must be an iterable of objects that export writable buffers\n\
> +(e.g. bytearray objects); these will be filled with successive chunks\n\
> +of the non-ancillary data until it has all been written or there are\n\
> +no more buffers. The ancbufsize argument sets the size in bytes of\n\
> +the internal buffer used to receive the ancillary data; it defaults to\n\
> +0, meaning that no ancillary data will be received. Appropriate\n\
> +buffer sizes for ancillary data can be calculated using CMSG_SPACE()\n\
> +or CMSG_LEN(), and items which do not fit into the buffer might be\n\
> +truncated or discarded. The flags argument defaults to 0 and has the\n\
> +same meaning as for recv().\n\
> +\n\
> +The return value is a 4-tuple: (nbytes, ancdata, msg_flags, address).\n\
> +The nbytes item is the total number of bytes of non-ancillary data\n\
> +written into the buffers. The ancdata item is a list of zero or more\n\
> +tuples (cmsg_level, cmsg_type, cmsg_data) representing the ancillary\n\
> +data (control messages) received: cmsg_level and cmsg_type are\n\
> +integers specifying the protocol level and protocol-specific type\n\
> +respectively, and cmsg_data is a bytes object holding the associated\n\
> +data. The msg_flags item is the bitwise OR of various flags\n\
> +indicating conditions on the received message; see your system\n\
> +documentation for details. If the receiving socket is unconnected,\n\
> +address is the address of the sending socket, if available; otherwise,\n\
> +its value is unspecified.\n\
> +\n\
> +If recvmsg_into() raises an exception after the system call returns,\n\
> +it will first attempt to close any file descriptors received via the\n\
> +SCM_RIGHTS mechanism.");
> +#endif /* CMSG_LEN */
> +
> +
> +struct sock_send {
> + char *buf;
> + Py_ssize_t len;
> + int flags;
> + Py_ssize_t result;
> +};
> +
> +static int
> +sock_send_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_send *ctx = data;
> +
> +#ifdef MS_WINDOWS
> + if (ctx->len > INT_MAX)
> + ctx->len = INT_MAX;
> + ctx->result = send(s->sock_fd, ctx->buf, (int)ctx->len, ctx->flags);
> +#else
> + ctx->result = send(s->sock_fd, ctx->buf, ctx->len, ctx->flags);
> +#endif
> + return (ctx->result >= 0);
> +}
> +
> +/* s.send(data [,flags]) method */
> +
> +static PyObject *
> +sock_send(PySocketSockObject *s, PyObject *args)
> +{
> + int flags = 0;
> + Py_buffer pbuf;
> + struct sock_send ctx;
> +
> + if (!PyArg_ParseTuple(args, "y*|i:send", &pbuf, &flags))
> + return NULL;
> +
> + if (!IS_SELECTABLE(s)) {
> + PyBuffer_Release(&pbuf);
> + return select_error();
> + }
> + ctx.buf = pbuf.buf;
> + ctx.len = pbuf.len;
> + ctx.flags = flags;
> + if (sock_call(s, 1, sock_send_impl, &ctx) < 0) {
> + PyBuffer_Release(&pbuf);
> + return NULL;
> + }
> + PyBuffer_Release(&pbuf);
> +
> + return PyLong_FromSsize_t(ctx.result);
> +}
> +
> +PyDoc_STRVAR(send_doc,
> +"send(data[, flags]) -> count\n\
> +\n\
> +Send a data string to the socket. For the optional flags\n\
> +argument, see the Unix manual. Return the number of bytes\n\
> +sent; this may be less than len(data) if the network is busy.");
> +
> +
> +/* s.sendall(data [,flags]) method */
> +
> +static PyObject *
> +sock_sendall(PySocketSockObject *s, PyObject *args)
> +{
> + char *buf;
> + Py_ssize_t len, n;
> + int flags = 0;
> + Py_buffer pbuf;
> + struct sock_send ctx;
> + int has_timeout = (s->sock_timeout > 0);
> + _PyTime_t interval = s->sock_timeout;
> + _PyTime_t deadline = 0;
> + int deadline_initialized = 0;
> + PyObject *res = NULL;
> +
> + if (!PyArg_ParseTuple(args, "y*|i:sendall", &pbuf, &flags))
> + return NULL;
> + buf = pbuf.buf;
> + len = pbuf.len;
> +
> + if (!IS_SELECTABLE(s)) {
> + PyBuffer_Release(&pbuf);
> + return select_error();
> + }
> +
> + do {
> + if (has_timeout) {
> + if (deadline_initialized) {
> + /* recompute the timeout */
> + interval = deadline - _PyTime_GetMonotonicClock();
> + }
> + else {
> + deadline_initialized = 1;
> + deadline = _PyTime_GetMonotonicClock() + s->sock_timeout;
> + }
> +
> + if (interval <= 0) {
> + PyErr_SetString(socket_timeout, "timed out");
> + goto done;
> + }
> + }
> +
> + ctx.buf = buf;
> + ctx.len = len;
> + ctx.flags = flags;
> + if (sock_call_ex(s, 1, sock_send_impl, &ctx, 0, NULL, interval) < 0)
> + goto done;
> + n = ctx.result;
> + assert(n >= 0);
> +
> + buf += n;
> + len -= n;
> +
> + /* We must run our signal handlers before looping again.
> + send() can return a successful partial write when it is
> + interrupted, so we can't restrict ourselves to EINTR. */
> + if (PyErr_CheckSignals())
> + goto done;
> + } while (len > 0);
> + PyBuffer_Release(&pbuf);
> +
> + Py_INCREF(Py_None);
> + res = Py_None;
> +
> +done:
> + PyBuffer_Release(&pbuf);
> + return res;
> +}
> +
> +PyDoc_STRVAR(sendall_doc,
> +"sendall(data[, flags])\n\
> +\n\
> +Send a data string to the socket. For the optional flags\n\
> +argument, see the Unix manual. This calls send() repeatedly\n\
> +until all data is sent. If an error occurs, it's impossible\n\
> +to tell how much data has been sent.");
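
send() can perform a partial write, which is why the sendall() loop above keeps calling
sock_send_impl() until len reaches 0, recomputing the timeout from a deadline on each
iteration. At the Python level the difference looks like this (sketch, `conn` again a
connected stream socket):

    payload = b"x" * 1_000_000
    nsent = conn.send(payload)   # may send only part of the buffer
    conn.sendall(payload)        # returns None only after everything is sent
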
> +
> +
> +struct sock_sendto {
> + char *buf;
> + Py_ssize_t len;
> + int flags;
> + int addrlen;
> + sock_addr_t *addrbuf;
> + Py_ssize_t result;
> +};
> +
> +static int
> +sock_sendto_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_sendto *ctx = data;
> +
> +#ifdef MS_WINDOWS
> + if (ctx->len > INT_MAX)
> + ctx->len = INT_MAX;
> + ctx->result = sendto(s->sock_fd, ctx->buf, (int)ctx->len, ctx->flags,
> + SAS2SA(ctx->addrbuf), ctx->addrlen);
> +#else
> + ctx->result = sendto(s->sock_fd, ctx->buf, ctx->len, ctx->flags,
> + SAS2SA(ctx->addrbuf), ctx->addrlen);
> +#endif
> + return (ctx->result >= 0);
> +}
> +
> +/* s.sendto(data, [flags,] sockaddr) method */
> +
> +static PyObject *
> +sock_sendto(PySocketSockObject *s, PyObject *args)
> +{
> + Py_buffer pbuf;
> + PyObject *addro;
> + Py_ssize_t arglen;
> + sock_addr_t addrbuf;
> + int addrlen, flags;
> + struct sock_sendto ctx;
> +
> + flags = 0;
> + arglen = PyTuple_Size(args);
> + switch (arglen) {
> + case 2:
> + PyArg_ParseTuple(args, "y*O:sendto", &pbuf, &addro);
> + break;
> + case 3:
> + PyArg_ParseTuple(args, "y*iO:sendto",
> + &pbuf, &flags, &addro);
> + break;
> + default:
> + PyErr_Format(PyExc_TypeError,
> + "sendto() takes 2 or 3 arguments (%d given)",
> + arglen);
> + return NULL;
> + }
> + if (PyErr_Occurred())
> + return NULL;
> +
> + if (!IS_SELECTABLE(s)) {
> + PyBuffer_Release(&pbuf);
> + return select_error();
> + }
> +
> + if (!getsockaddrarg(s, addro, SAS2SA(&addrbuf), &addrlen)) {
> + PyBuffer_Release(&pbuf);
> + return NULL;
> + }
> +
> + ctx.buf = pbuf.buf;
> + ctx.len = pbuf.len;
> + ctx.flags = flags;
> + ctx.addrlen = addrlen;
> + ctx.addrbuf = &addrbuf;
> + if (sock_call(s, 1, sock_sendto_impl, &ctx) < 0) {
> + PyBuffer_Release(&pbuf);
> + return NULL;
> + }
> + PyBuffer_Release(&pbuf);
> +
> + return PyLong_FromSsize_t(ctx.result);
> +}
> +
> +PyDoc_STRVAR(sendto_doc,
> +"sendto(data[, flags], address) -> count\n\
> +\n\
> +Like send(data, flags) but allows specifying the destination address.\n\
> +For IP sockets, the address is a pair (hostaddr, port).");
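
Sketch of the datagram send path that pairs with the recvfrom() example earlier (same
availability assumptions; the address is arbitrary):

    import socket

    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.sendto(b"ping", ("127.0.0.1", 5005))
    u.close()
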
> +
> +
> +/* The sendmsg() and recvmsg[_into]() methods require a working
> + CMSG_LEN(). See the comment near get_CMSG_LEN(). */
> +#ifdef CMSG_LEN
> +struct sock_sendmsg {
> + struct msghdr *msg;
> + int flags;
> + ssize_t result;
> +};
> +
> +static int
> +sock_sendmsg_iovec(PySocketSockObject *s, PyObject *data_arg,
> + struct msghdr *msg,
> + Py_buffer **databufsout, Py_ssize_t *ndatabufsout) {
> + Py_ssize_t ndataparts, ndatabufs = 0;
> + int result = -1;
> + struct iovec *iovs = NULL;
> + PyObject *data_fast = NULL;
> + Py_buffer *databufs = NULL;
> +
> + /* Fill in an iovec for each message part, and save the Py_buffer
> + structs to release afterwards. */
> + data_fast = PySequence_Fast(data_arg,
> + "sendmsg() argument 1 must be an "
> + "iterable");
> + if (data_fast == NULL) {
> + goto finally;
> + }
> +
> + ndataparts = PySequence_Fast_GET_SIZE(data_fast);
> + if (ndataparts > INT_MAX) {
> + PyErr_SetString(PyExc_OSError, "sendmsg() argument 1 is too long");
> + goto finally;
> + }
> +
> + msg->msg_iovlen = ndataparts;
> + if (ndataparts > 0) {
> + iovs = PyMem_New(struct iovec, ndataparts);
> + if (iovs == NULL) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + msg->msg_iov = iovs;
> +
> + databufs = PyMem_New(Py_buffer, ndataparts);
> + if (databufs == NULL) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + }
> + for (; ndatabufs < ndataparts; ndatabufs++) {
> + if (!PyArg_Parse(PySequence_Fast_GET_ITEM(data_fast, ndatabufs),
> + "y*;sendmsg() argument 1 must be an iterable of "
> + "bytes-like objects",
> + &databufs[ndatabufs]))
> + goto finally;
> + iovs[ndatabufs].iov_base = databufs[ndatabufs].buf;
> + iovs[ndatabufs].iov_len = databufs[ndatabufs].len;
> + }
> + result = 0;
> + finally:
> + *databufsout = databufs;
> + *ndatabufsout = ndatabufs;
> + Py_XDECREF(data_fast);
> + return result;
> +}
> +
> +static int
> +sock_sendmsg_impl(PySocketSockObject *s, void *data)
> +{
> + struct sock_sendmsg *ctx = data;
> +
> + ctx->result = sendmsg(s->sock_fd, ctx->msg, ctx->flags);
> + return (ctx->result >= 0);
> +}
> +
> +/* s.sendmsg(buffers[, ancdata[, flags[, address]]]) method */
> +
> +static PyObject *
> +sock_sendmsg(PySocketSockObject *s, PyObject *args)
> +{
> + Py_ssize_t i, ndatabufs = 0, ncmsgs, ncmsgbufs = 0;
> + Py_buffer *databufs = NULL;
> + sock_addr_t addrbuf;
> + struct msghdr msg;
> + struct cmsginfo {
> + int level;
> + int type;
> + Py_buffer data;
> + } *cmsgs = NULL;
> + void *controlbuf = NULL;
> + size_t controllen, controllen_last;
> + int addrlen, flags = 0;
> + PyObject *data_arg, *cmsg_arg = NULL, *addr_arg = NULL,
> + *cmsg_fast = NULL, *retval = NULL;
> + struct sock_sendmsg ctx;
> +
> + if (!PyArg_ParseTuple(args, "O|OiO:sendmsg",
> + &data_arg, &cmsg_arg, &flags, &addr_arg)) {
> + return NULL;
> + }
> +
> + memset(&msg, 0, sizeof(msg));
> +
> + /* Parse destination address. */
> + if (addr_arg != NULL && addr_arg != Py_None) {
> + if (!getsockaddrarg(s, addr_arg, SAS2SA(&addrbuf), &addrlen))
> + goto finally;
> + msg.msg_name = &addrbuf;
> + msg.msg_namelen = addrlen;
> + }
> +
> + /* Fill in an iovec for each message part, and save the Py_buffer
> + structs to release afterwards. */
> + if (sock_sendmsg_iovec(s, data_arg, &msg, &databufs, &ndatabufs) == -1) {
> + goto finally;
> + }
> +
> + if (cmsg_arg == NULL)
> + ncmsgs = 0;
> + else {
> + if ((cmsg_fast = PySequence_Fast(cmsg_arg,
> + "sendmsg() argument 2 must be an "
> + "iterable")) == NULL)
> + goto finally;
> + ncmsgs = PySequence_Fast_GET_SIZE(cmsg_fast);
> + }
> +
> +#ifndef CMSG_SPACE
> + if (ncmsgs > 1) {
> + PyErr_SetString(PyExc_OSError,
> + "sending multiple control messages is not supported "
> + "on this system");
> + goto finally;
> + }
> +#endif
> + /* Save level, type and Py_buffer for each control message,
> + and calculate total size. */
> + if (ncmsgs > 0 && (cmsgs = PyMem_New(struct cmsginfo, ncmsgs)) == NULL) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + controllen = controllen_last = 0;
> + while (ncmsgbufs < ncmsgs) {
> + size_t bufsize, space;
> +
> + if (!PyArg_Parse(PySequence_Fast_GET_ITEM(cmsg_fast, ncmsgbufs),
> + "(iiy*):[sendmsg() ancillary data items]",
> + &cmsgs[ncmsgbufs].level,
> + &cmsgs[ncmsgbufs].type,
> + &cmsgs[ncmsgbufs].data))
> + goto finally;
> + bufsize = cmsgs[ncmsgbufs++].data.len;
> +
> +#ifdef CMSG_SPACE
> + if (!get_CMSG_SPACE(bufsize, &space)) {
> +#else
> + if (!get_CMSG_LEN(bufsize, &space)) {
> +#endif
> + PyErr_SetString(PyExc_OSError, "ancillary data item too large");
> + goto finally;
> + }
> + controllen += space;
> + if (controllen > SOCKLEN_T_LIMIT || controllen < controllen_last) {
> + PyErr_SetString(PyExc_OSError, "too much ancillary data");
> + goto finally;
> + }
> + controllen_last = controllen;
> + }
> +
> + /* Construct ancillary data block from control message info. */
> + if (ncmsgbufs > 0) {
> + struct cmsghdr *cmsgh = NULL;
> +
> + controlbuf = PyMem_Malloc(controllen);
> + if (controlbuf == NULL) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + msg.msg_control = controlbuf;
> +
> + msg.msg_controllen = controllen;
> +
> + /* Need to zero out the buffer as a workaround for glibc's
> + CMSG_NXTHDR() implementation. After getting the pointer to
> + the next header, it checks its (uninitialized) cmsg_len
> + member to see if the "message" fits in the buffer, and
> + returns NULL if it doesn't. Zero-filling the buffer
> + ensures that this doesn't happen. */
> + memset(controlbuf, 0, controllen);
> +
> + for (i = 0; i < ncmsgbufs; i++) {
> + size_t msg_len, data_len = cmsgs[i].data.len;
> + int enough_space = 0;
> +
> + cmsgh = (i == 0) ? CMSG_FIRSTHDR(&msg) : CMSG_NXTHDR(&msg, cmsgh);
> + if (cmsgh == NULL) {
> + PyErr_Format(PyExc_RuntimeError,
> + "unexpected NULL result from %s()",
> + (i == 0) ? "CMSG_FIRSTHDR" : "CMSG_NXTHDR");
> + goto finally;
> + }
> + if (!get_CMSG_LEN(data_len, &msg_len)) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "item size out of range for CMSG_LEN()");
> + goto finally;
> + }
> + if (cmsg_min_space(&msg, cmsgh, msg_len)) {
> + size_t space;
> +
> + cmsgh->cmsg_len = msg_len;
> + if (get_cmsg_data_space(&msg, cmsgh, &space))
> + enough_space = (space >= data_len);
> + }
> + if (!enough_space) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "ancillary data does not fit in calculated "
> + "space");
> + goto finally;
> + }
> + cmsgh->cmsg_level = cmsgs[i].level;
> + cmsgh->cmsg_type = cmsgs[i].type;
> + memcpy(CMSG_DATA(cmsgh), cmsgs[i].data.buf, data_len);
> + }
> + }
> +
> + /* Make the system call. */
> + if (!IS_SELECTABLE(s)) {
> + select_error();
> + goto finally;
> + }
> +
> + ctx.msg = &msg;
> + ctx.flags = flags;
> + if (sock_call(s, 1, sock_sendmsg_impl, &ctx) < 0)
> + goto finally;
> +
> + retval = PyLong_FromSsize_t(ctx.result);
> +
> +finally:
> + PyMem_Free(controlbuf);
> + for (i = 0; i < ncmsgbufs; i++)
> + PyBuffer_Release(&cmsgs[i].data);
> + PyMem_Free(cmsgs);
> + Py_XDECREF(cmsg_fast);
> + PyMem_Free(msg.msg_iov);
> + for (i = 0; i < ndatabufs; i++) {
> + PyBuffer_Release(&databufs[i]);
> + }
> + PyMem_Free(databufs);
> + return retval;
> +}
> +
> +PyDoc_STRVAR(sendmsg_doc,
> +"sendmsg(buffers[, ancdata[, flags[, address]]]) -> count\n\
> +\n\
> +Send normal and ancillary data to the socket, gathering the\n\
> +non-ancillary data from a series of buffers and concatenating it into\n\
> +a single message. The buffers argument specifies the non-ancillary\n\
> +data as an iterable of bytes-like objects (e.g. bytes objects).\n\
> +The ancdata argument specifies the ancillary data (control messages)\n\
> +as an iterable of zero or more tuples (cmsg_level, cmsg_type,\n\
> +cmsg_data), where cmsg_level and cmsg_type are integers specifying the\n\
> +protocol level and protocol-specific type respectively, and cmsg_data\n\
> +is a bytes-like object holding the associated data. The flags\n\
> +argument defaults to 0 and has the same meaning as for send(). If\n\
> +address is supplied and not None, it sets a destination address for\n\
> +the message. The return value is the number of bytes of non-ancillary\n\
> +data sent.");
> +#endif /* CMSG_LEN */
> +
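
For context, this hunk is the C backend of the Python-level socket.sendmsg() described in the docstring above. A minimal sketch of the call it implements, assuming a hosted (non-UEFI) build — the method table further down wraps sendmsg/recvmsg in #ifndef UEFI_C_SOURCE — and using a loopback UDP socket as its own peer:

    import socket

    u = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    u.bind(("127.0.0.1", 0))
    dest = u.getsockname()
    # gather-write two buffers into one datagram, no ancillary data
    u.sendmsg([b"hello ", b"world"], [], 0, dest)
    print(u.recvfrom(64)[0])   # b'hello world'
    u.close()
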
> +#ifdef HAVE_SOCKADDR_ALG
> +static PyObject*
> +sock_sendmsg_afalg(PySocketSockObject *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *retval = NULL;
> +
> + Py_ssize_t i, ndatabufs = 0;
> + Py_buffer *databufs = NULL;
> + PyObject *data_arg = NULL;
> +
> + Py_buffer iv = {NULL, NULL};
> +
> + PyObject *opobj = NULL;
> + int op = -1;
> +
> + PyObject *assoclenobj = NULL;
> + int assoclen = -1;
> +
> + unsigned int *uiptr;
> + int flags = 0;
> +
> + struct msghdr msg;
> + struct cmsghdr *header = NULL;
> + struct af_alg_iv *alg_iv = NULL;
> + struct sock_sendmsg ctx;
> + Py_ssize_t controllen;
> + void *controlbuf = NULL;
> + static char *keywords[] = {"msg", "op", "iv", "assoclen", "flags", 0};
> +
> + if (self->sock_family != AF_ALG) {
> + PyErr_SetString(PyExc_OSError,
> + "algset is only supported for AF_ALG");
> + return NULL;
> + }
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds,
> + "|O$O!y*O!i:sendmsg_afalg", keywords,
> + &data_arg,
> + &PyLong_Type, &opobj, &iv,
> + &PyLong_Type, &assoclenobj, &flags)) {
> + return NULL;
> + }
> +
> + memset(&msg, 0, sizeof(msg));
> +
> + /* op is a required, keyword-only argument >= 0 */
> + if (opobj != NULL) {
> + op = _PyLong_AsInt(opobj);
> + }
> + if (op < 0) {
> + /* override exception from _PyLong_AsInt() */
> + PyErr_SetString(PyExc_TypeError,
> + "Invalid or missing argument 'op'");
> + goto finally;
> + }
> + /* assoclen is optional but must be >= 0 */
> + if (assoclenobj != NULL) {
> + assoclen = _PyLong_AsInt(assoclenobj);
> + if (assoclen == -1 && PyErr_Occurred()) {
> + goto finally;
> + }
> + if (assoclen < 0) {
> + PyErr_SetString(PyExc_TypeError,
> + "assoclen must be positive");
> + goto finally;
> + }
> + }
> +
> + controllen = CMSG_SPACE(4);
> + if (iv.buf != NULL) {
> + controllen += CMSG_SPACE(sizeof(*alg_iv) + iv.len);
> + }
> + if (assoclen >= 0) {
> + controllen += CMSG_SPACE(4);
> + }
> +
> + controlbuf = PyMem_Malloc(controllen);
> + if (controlbuf == NULL) {
> + PyErr_NoMemory();
> + goto finally;
> + }
> + memset(controlbuf, 0, controllen);
> +
> + msg.msg_controllen = controllen;
> + msg.msg_control = controlbuf;
> +
> + /* Fill in an iovec for each message part, and save the Py_buffer
> + structs to release afterwards. */
> + if (data_arg != NULL) {
> + if (sock_sendmsg_iovec(self, data_arg, &msg, &databufs, &ndatabufs) == -1) {
> + goto finally;
> + }
> + }
> +
> + /* set operation to encrypt or decrypt */
> + header = CMSG_FIRSTHDR(&msg);
> + if (header == NULL) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "unexpected NULL result from CMSG_FIRSTHDR");
> + goto finally;
> + }
> + header->cmsg_level = SOL_ALG;
> + header->cmsg_type = ALG_SET_OP;
> + header->cmsg_len = CMSG_LEN(4);
> + uiptr = (void*)CMSG_DATA(header);
> + *uiptr = (unsigned int)op;
> +
> + /* set initialization vector */
> + if (iv.buf != NULL) {
> + header = CMSG_NXTHDR(&msg, header);
> + if (header == NULL) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "unexpected NULL result from CMSG_NXTHDR(iv)");
> + goto finally;
> + }
> + header->cmsg_level = SOL_ALG;
> + header->cmsg_type = ALG_SET_IV;
> + header->cmsg_len = CMSG_SPACE(sizeof(*alg_iv) + iv.len);
> + alg_iv = (void*)CMSG_DATA(header);
> + alg_iv->ivlen = iv.len;
> + memcpy(alg_iv->iv, iv.buf, iv.len);
> + }
> +
> + /* set length of associated data for AEAD */
> + if (assoclen >= 0) {
> + header = CMSG_NXTHDR(&msg, header);
> + if (header == NULL) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "unexpected NULL result from CMSG_NXTHDR(assoc)");
> + goto finally;
> + }
> + header->cmsg_level = SOL_ALG;
> + header->cmsg_type = ALG_SET_AEAD_ASSOCLEN;
> + header->cmsg_len = CMSG_LEN(4);
> + uiptr = (void*)CMSG_DATA(header);
> + *uiptr = (unsigned int)assoclen;
> + }
> +
> + ctx.msg = &msg;
> + ctx.flags = flags;
> + if (sock_call(self, 1, sock_sendmsg_impl, &ctx) < 0) {
> + goto finally;
> + }
> +
> + retval = PyLong_FromSsize_t(ctx.result);
> +
> + finally:
> + PyMem_Free(controlbuf);
> + if (iv.buf != NULL) {
> + PyBuffer_Release(&iv);
> + }
> + PyMem_Free(msg.msg_iov);
> + for (i = 0; i < ndatabufs; i++) {
> + PyBuffer_Release(&databufs[i]);
> + }
> + PyMem_Free(databufs);
> + return retval;
> +}
> +
> +PyDoc_STRVAR(sendmsg_afalg_doc,
> +"sendmsg_afalg([msg], *, op[, iv[, assoclen[, flags=MSG_MORE]]])\n\
> +\n\
> +Set operation mode, IV and length of associated data for an AF_ALG\n\
> +operation socket.");
> +#endif
> +
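
sendmsg_afalg() only exists when HAVE_SOCKADDR_ALG is defined (Linux AF_ALG), so it is not part of the UEFI build; purely as a reference for the control-message layout built above, a hedged Linux-only sketch (one-block AES-CBC through the kernel crypto API, all-zero key/IV/plaintext as placeholders):

    import socket

    cfg = socket.socket(socket.AF_ALG, socket.SOCK_SEQPACKET)
    cfg.bind(("skcipher", "cbc(aes)"))
    cfg.setsockopt(socket.SOL_ALG, socket.ALG_SET_KEY, bytes(16))
    op, _ = cfg.accept()
    op.sendmsg_afalg([bytes(16)], op=socket.ALG_OP_ENCRYPT, iv=bytes(16))
    ciphertext = op.recv(16)
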
> +/* s.shutdown(how) method */
> +
> +static PyObject *
> +sock_shutdown(PySocketSockObject *s, PyObject *arg)
> +{
> + int how;
> + int res;
> +
> + how = _PyLong_AsInt(arg);
> + if (how == -1 && PyErr_Occurred())
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + res = shutdown(s->sock_fd, how);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return s->errorhandler();
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(shutdown_doc,
> +"shutdown(flag)\n\
> +\n\
> +Shut down the reading side of the socket (flag == SHUT_RD), the writing side\n\
> +of the socket (flag == SHUT_WR), or both ends (flag == SHUT_RDWR).");
> +
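
As a usage note for the shutdown() wrapper: half-closing the write side is the usual way to signal EOF to the peer while still reading its reply. A small sketch, assuming a working network stack and with example.org as a placeholder server:

    import socket

    s = socket.create_connection(("example.org", 80))
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.org\r\n\r\n")
    s.shutdown(socket.SHUT_WR)      # no more writes; peer sees EOF
    reply = s.recv(4096)
    s.close()
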
> +#if defined(MS_WINDOWS) && defined(SIO_RCVALL)
> +static PyObject*
> +sock_ioctl(PySocketSockObject *s, PyObject *arg)
> +{
> + unsigned long cmd = SIO_RCVALL;
> + PyObject *argO;
> + DWORD recv;
> +
> + if (!PyArg_ParseTuple(arg, "kO:ioctl", &cmd, &argO))
> + return NULL;
> +
> + switch (cmd) {
> + case SIO_RCVALL: {
> + unsigned int option = RCVALL_ON;
> + if (!PyArg_ParseTuple(arg, "kI:ioctl", &cmd, &option))
> + return NULL;
> + if (WSAIoctl(s->sock_fd, cmd, &option, sizeof(option),
> + NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
> + return set_error();
> + }
> + return PyLong_FromUnsignedLong(recv); }
> + case SIO_KEEPALIVE_VALS: {
> + struct tcp_keepalive ka;
> + if (!PyArg_ParseTuple(arg, "k(kkk):ioctl", &cmd,
> + &ka.onoff, &ka.keepalivetime, &ka.keepaliveinterval))
> + return NULL;
> + if (WSAIoctl(s->sock_fd, cmd, &ka, sizeof(ka),
> + NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
> + return set_error();
> + }
> + return PyLong_FromUnsignedLong(recv); }
> +#if defined(SIO_LOOPBACK_FAST_PATH)
> + case SIO_LOOPBACK_FAST_PATH: {
> + unsigned int option;
> + if (!PyArg_ParseTuple(arg, "kI:ioctl", &cmd, &option))
> + return NULL;
> + if (WSAIoctl(s->sock_fd, cmd, &option, sizeof(option),
> + NULL, 0, &recv, NULL, NULL) == SOCKET_ERROR) {
> + return set_error();
> + }
> + return PyLong_FromUnsignedLong(recv); }
> +#endif
> + default:
> + PyErr_Format(PyExc_ValueError, "invalid ioctl command %d", cmd);
> + return NULL;
> + }
> +}
> +PyDoc_STRVAR(sock_ioctl_doc,
> +"ioctl(cmd, option) -> long\n\
> +\n\
> +Control the socket with WSAIoctl syscall. Currently supported 'cmd' values are\n\
> +SIO_RCVALL: 'option' must be one of the socket.RCVALL_* constants.\n\
> +SIO_KEEPALIVE_VALS: 'option' is a tuple of (onoff, timeout, interval).\n\
> +SIO_LOOPBACK_FAST_PATH: 'option' is a boolean value, and is disabled by default");
> +#endif
> +
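
The WSAIoctl wrapper above is Windows-only (MS_WINDOWS && SIO_RCVALL) and is not compiled for the UEFI shell; for reference, the documented call shape is roughly:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # (onoff, keep-alive time ms, interval ms), per SIO_KEEPALIVE_VALS above
    s.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 10000, 3000))
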
> +#if defined(MS_WINDOWS)
> +static PyObject*
> +sock_share(PySocketSockObject *s, PyObject *arg)
> +{
> + WSAPROTOCOL_INFO info;
> + DWORD processId;
> + int result;
> +
> + if (!PyArg_ParseTuple(arg, "I", &processId))
> + return NULL;
> +
> + Py_BEGIN_ALLOW_THREADS
> + result = WSADuplicateSocket(s->sock_fd, processId, &info);
> + Py_END_ALLOW_THREADS
> + if (result == SOCKET_ERROR)
> + return set_error();
> + return PyBytes_FromStringAndSize((const char*)&info, sizeof(info));
> +}
> +PyDoc_STRVAR(sock_share_doc,
> +"share(process_id) -> bytes\n\
> +\n\
> +Share the socket with another process. The target process id\n\
> +must be provided and the resulting bytes object passed to the target\n\
> +process. There the shared socket can be instantiated by calling\n\
> +socket.fromshare().");
> +
> +
> +#endif
> +
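
share()/fromshare() are likewise Windows-only. A sketch of the round trip, where target_pid is a placeholder for the receiving process id:

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    blob = s.share(target_pid)        # bytes blob to hand to the target process
    # ... in the target process, recreate the socket from that blob:
    s2 = socket.fromshare(blob)
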
> +/* List of methods for socket objects */
> +
> +static PyMethodDef sock_methods[] = {
> + {"_accept", (PyCFunction)sock_accept, METH_NOARGS,
> + accept_doc},
> + {"bind", (PyCFunction)sock_bind, METH_O,
> + bind_doc},
> + {"close", (PyCFunction)sock_close, METH_NOARGS,
> + close_doc},
> + {"connect", (PyCFunction)sock_connect, METH_O,
> + connect_doc},
> + {"connect_ex", (PyCFunction)sock_connect_ex, METH_O,
> + connect_ex_doc},
> + {"detach", (PyCFunction)sock_detach, METH_NOARGS,
> + detach_doc},
> + {"fileno", (PyCFunction)sock_fileno, METH_NOARGS,
> + fileno_doc},
> +#ifdef HAVE_GETPEERNAME
> + {"getpeername", (PyCFunction)sock_getpeername,
> + METH_NOARGS, getpeername_doc},
> +#endif
> + {"getsockname", (PyCFunction)sock_getsockname,
> + METH_NOARGS, getsockname_doc},
> + {"getsockopt", (PyCFunction)sock_getsockopt, METH_VARARGS,
> + getsockopt_doc},
> +#if defined(MS_WINDOWS) && defined(SIO_RCVALL)
> + {"ioctl", (PyCFunction)sock_ioctl, METH_VARARGS,
> + sock_ioctl_doc},
> +#endif
> +#if defined(MS_WINDOWS)
> + {"share", (PyCFunction)sock_share, METH_VARARGS,
> + sock_share_doc},
> +#endif
> + {"listen", (PyCFunction)sock_listen, METH_VARARGS,
> + listen_doc},
> + {"recv", (PyCFunction)sock_recv, METH_VARARGS,
> + recv_doc},
> + {"recv_into", (PyCFunction)sock_recv_into, METH_VARARGS | METH_KEYWORDS,
> + recv_into_doc},
> + {"recvfrom", (PyCFunction)sock_recvfrom, METH_VARARGS,
> + recvfrom_doc},
> + {"recvfrom_into", (PyCFunction)sock_recvfrom_into, METH_VARARGS | METH_KEYWORDS,
> + recvfrom_into_doc},
> + {"send", (PyCFunction)sock_send, METH_VARARGS,
> + send_doc},
> + {"sendall", (PyCFunction)sock_sendall, METH_VARARGS,
> + sendall_doc},
> + {"sendto", (PyCFunction)sock_sendto, METH_VARARGS,
> + sendto_doc},
> + {"setblocking", (PyCFunction)sock_setblocking, METH_O,
> + setblocking_doc},
> + {"settimeout", (PyCFunction)sock_settimeout, METH_O,
> + settimeout_doc},
> + {"gettimeout", (PyCFunction)sock_gettimeout, METH_NOARGS,
> + gettimeout_doc},
> + {"setsockopt", (PyCFunction)sock_setsockopt, METH_VARARGS,
> + setsockopt_doc},
> + {"shutdown", (PyCFunction)sock_shutdown, METH_O,
> + shutdown_doc},
> +#ifndef UEFI_C_SOURCE
> +#ifdef CMSG_LEN
> + {"recvmsg", (PyCFunction)sock_recvmsg, METH_VARARGS,
> + recvmsg_doc},
> + {"recvmsg_into", (PyCFunction)sock_recvmsg_into, METH_VARARGS,
> + recvmsg_into_doc,},
> + {"sendmsg", (PyCFunction)sock_sendmsg, METH_VARARGS,
> + sendmsg_doc},
> +#endif
> +#ifdef HAVE_SOCKADDR_ALG
> + {"sendmsg_afalg", (PyCFunction)sock_sendmsg_afalg, METH_VARARGS | METH_KEYWORDS,
> + sendmsg_afalg_doc},
> +#endif
> +#endif
> + {NULL, NULL} /* sentinel */
> +};
> +
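
Because the table keeps recvmsg/recvmsg_into/sendmsg/sendmsg_afalg inside #ifndef UEFI_C_SOURCE, scripts meant to run both on a host Python and under the UEFI shell may want to probe for these methods rather than assume they exist; a small sketch (127.0.0.1:9 is just a placeholder destination):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    if hasattr(s, "sendmsg"):
        s.sendmsg([b"data"], [], 0, ("127.0.0.1", 9))
    else:
        s.sendto(b"data", ("127.0.0.1", 9))   # portable fallback
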
> +/* SockObject members */
> +static PyMemberDef sock_memberlist[] = {
> + {"family", T_INT, offsetof(PySocketSockObject, sock_family), READONLY, "the socket family"},
> + {"type", T_INT, offsetof(PySocketSockObject, sock_type), READONLY, "the socket type"},
> + {"proto", T_INT, offsetof(PySocketSockObject, sock_proto), READONLY, "the socket protocol"},
> + {0},
> +};
> +
> +static PyGetSetDef sock_getsetlist[] = {
> + {"timeout", (getter)sock_gettimeout, NULL, PyDoc_STR("the socket timeout")},
> + {NULL} /* sentinel */
> +};
> +
> +/* Deallocate a socket object in response to the last Py_DECREF().
> +   First close the file descriptor. */
> +
> +static void
> +sock_finalize(PySocketSockObject *s)
> +{
> + SOCKET_T fd;
> + PyObject *error_type, *error_value, *error_traceback;
> +
> + /* Save the current exception, if any. */
> + PyErr_Fetch(&error_type, &error_value, &error_traceback);
> +
> + if (s->sock_fd != INVALID_SOCKET) {
> + if (PyErr_ResourceWarning((PyObject *)s, 1, "unclosed %R", s)) {
> + /* Spurious errors can appear at shutdown */
> + if (PyErr_ExceptionMatches(PyExc_Warning)) {
> + PyErr_WriteUnraisable((PyObject *)s);
> + }
> + }
> +
> +        /* Only close the socket *after* logging the ResourceWarning, so
> +           that the logger can still call socket methods like
> +           socket.getsockname(). If the socket were closed first, those
> +           methods would fail with EBADF. */
> + fd = s->sock_fd;
> + s->sock_fd = INVALID_SOCKET;
> +
> + /* We do not want to retry upon EINTR: see sock_close() */
> + Py_BEGIN_ALLOW_THREADS
> + (void) SOCKETCLOSE(fd);
> + Py_END_ALLOW_THREADS
> + }
> +
> + /* Restore the saved exception. */
> + PyErr_Restore(error_type, error_value, error_traceback);
> +}
> +
> +static void
> +sock_dealloc(PySocketSockObject *s)
> +{
> + if (PyObject_CallFinalizerFromDealloc((PyObject *)s) < 0)
> + return;
> +
> + Py_TYPE(s)->tp_free((PyObject *)s);
> +}
> +
> +
> +static PyObject *
> +sock_repr(PySocketSockObject *s)
> +{
> + long sock_fd;
> + /* On Windows, this test is needed because SOCKET_T is unsigned */
> + if (s->sock_fd == INVALID_SOCKET) {
> + sock_fd = -1;
> + }
> +#if SIZEOF_SOCKET_T > SIZEOF_LONG
> + else if (s->sock_fd > LONG_MAX) {
> +        /* This can occur on Win64. Printing a pointer-sized integer in
> +           decimal would need a special, ugly printf formatter, so only
> +           bother if it ever becomes necessary. */
> + PyErr_SetString(PyExc_OverflowError,
> + "no printf formatter to display "
> + "the socket descriptor in decimal");
> + return NULL;
> + }
> +#endif
> + else
> + sock_fd = (long)s->sock_fd;
> + return PyUnicode_FromFormat(
> + "<socket object, fd=%ld, family=%d, type=%d, proto=%d>",
> + sock_fd, s->sock_family,
> + s->sock_type,
> + s->sock_proto);
> +}
> +
> +
> +/* Create a new, uninitialized socket object. */
> +
> +static PyObject *
> +sock_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *new;
> +
> + new = type->tp_alloc(type, 0);
> + if (new != NULL) {
> + ((PySocketSockObject *)new)->sock_fd = INVALID_SOCKET;
> + ((PySocketSockObject *)new)->sock_timeout = _PyTime_FromSeconds(-1);
> + ((PySocketSockObject *)new)->errorhandler = &set_error;
> + }
> + return new;
> +}
> +
> +
> +/* Initialize a new socket object. */
> +
> +#ifdef SOCK_CLOEXEC
> +/* socket() and socketpair() fail with EINVAL on Linux kernel older
> + * than 2.6.27 if SOCK_CLOEXEC flag is set in the socket type. */
> +static int sock_cloexec_works = -1;
> +#endif
> +
> +/*ARGSUSED*/
> +static int
> +sock_initobj(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + PySocketSockObject *s = (PySocketSockObject *)self;
> + PyObject *fdobj = NULL;
> + SOCKET_T fd = INVALID_SOCKET;
> + int family = AF_INET, type = SOCK_STREAM, proto = 0;
> + static char *keywords[] = {"family", "type", "proto", "fileno", 0};
> +#ifndef MS_WINDOWS
> +#ifdef SOCK_CLOEXEC
> + int *atomic_flag_works = &sock_cloexec_works;
> +#else
> + int *atomic_flag_works = NULL;
> +#endif
> +#endif
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds,
> + "|iiiO:socket", keywords,
> + &family, &type, &proto, &fdobj))
> + return -1;
> +
> + if (fdobj != NULL && fdobj != Py_None) {
> +#ifdef MS_WINDOWS
> + /* recreate a socket that was duplicated */
> + if (PyBytes_Check(fdobj)) {
> + WSAPROTOCOL_INFO info;
> + if (PyBytes_GET_SIZE(fdobj) != sizeof(info)) {
> + PyErr_Format(PyExc_ValueError,
> + "socket descriptor string has wrong size, "
> + "should be %zu bytes.", sizeof(info));
> + return -1;
> + }
> + memcpy(&info, PyBytes_AS_STRING(fdobj), sizeof(info));
> + Py_BEGIN_ALLOW_THREADS
> + fd = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
> + FROM_PROTOCOL_INFO, &info, 0, WSA_FLAG_OVERLAPPED);
> + Py_END_ALLOW_THREADS
> + if (fd == INVALID_SOCKET) {
> + set_error();
> + return -1;
> + }
> + family = info.iAddressFamily;
> + type = info.iSocketType;
> + proto = info.iProtocol;
> + }
> + else
> +#endif
> + {
> + fd = PyLong_AsSocket_t(fdobj);
> + if (fd == (SOCKET_T)(-1) && PyErr_Occurred())
> + return -1;
> + if (fd == INVALID_SOCKET) {
> + PyErr_SetString(PyExc_ValueError,
> + "can't use invalid socket value");
> + return -1;
> + }
> + }
> + }
> + else {
> +#ifdef MS_WINDOWS
> + /* Windows implementation */
> +#ifndef WSA_FLAG_NO_HANDLE_INHERIT
> +#define WSA_FLAG_NO_HANDLE_INHERIT 0x80
> +#endif
> +
> + Py_BEGIN_ALLOW_THREADS
> + if (support_wsa_no_inherit) {
> + fd = WSASocket(family, type, proto,
> + NULL, 0,
> + WSA_FLAG_OVERLAPPED | WSA_FLAG_NO_HANDLE_INHERIT);
> + if (fd == INVALID_SOCKET) {
> + /* Windows 7 or Windows 2008 R2 without SP1 or the hotfix */
> + support_wsa_no_inherit = 0;
> + fd = socket(family, type, proto);
> + }
> + }
> + else {
> + fd = socket(family, type, proto);
> + }
> + Py_END_ALLOW_THREADS
> +
> + if (fd == INVALID_SOCKET) {
> + set_error();
> + return -1;
> + }
> +
> + if (!support_wsa_no_inherit) {
> + if (!SetHandleInformation((HANDLE)fd, HANDLE_FLAG_INHERIT, 0)) {
> + closesocket(fd);
> + PyErr_SetFromWindowsErr(0);
> + return -1;
> + }
> + }
> +#else
> + /* UNIX */
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef SOCK_CLOEXEC
> + if (sock_cloexec_works != 0) {
> + fd = socket(family, type | SOCK_CLOEXEC, proto);
> + if (sock_cloexec_works == -1) {
> + if (fd >= 0) {
> + sock_cloexec_works = 1;
> + }
> + else if (errno == EINVAL) {
> + /* Linux older than 2.6.27 does not support SOCK_CLOEXEC */
> + sock_cloexec_works = 0;
> + fd = socket(family, type, proto);
> + }
> + }
> + }
> + else
> +#endif
> + {
> + fd = socket(family, type, proto);
> + }
> + Py_END_ALLOW_THREADS
> +
> + if (fd == INVALID_SOCKET) {
> + set_error();
> + return -1;
> + }
> +
> + if (_Py_set_inheritable(fd, 0, atomic_flag_works) < 0) {
> + SOCKETCLOSE(fd);
> + return -1;
> + }
> +#endif
> + }
> + if (init_sockobject(s, fd, family, type, proto) == -1) {
> + SOCKETCLOSE(fd);
> + return -1;
> + }
> +
> + return 0;
> +
> +}
> +
> +
> +/* Type object for socket objects. */
> +
> +static PyTypeObject sock_type = {
> + PyVarObject_HEAD_INIT(0, 0) /* Must fill in type value later */
> + "_socket.socket", /* tp_name */
> + sizeof(PySocketSockObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + (destructor)sock_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)sock_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE
> + | Py_TPFLAGS_HAVE_FINALIZE, /* tp_flags */
> + sock_doc, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + sock_methods, /* tp_methods */
> + sock_memberlist, /* tp_members */
> + sock_getsetlist, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + sock_initobj, /* tp_init */
> + PyType_GenericAlloc, /* tp_alloc */
> + sock_new, /* tp_new */
> + PyObject_Del, /* tp_free */
> + 0, /* tp_is_gc */
> + 0, /* tp_bases */
> + 0, /* tp_mro */
> + 0, /* tp_cache */
> + 0, /* tp_subclasses */
> + 0, /* tp_weaklist */
> + 0, /* tp_del */
> + 0, /* tp_version_tag */
> + (destructor)sock_finalize, /* tp_finalize */
> +};
> +
> +
> +/* Python interface to gethostname(). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_gethostname(PyObject *self, PyObject *unused)
> +{
> +#ifdef MS_WINDOWS
> + /* Don't use winsock's gethostname, as this returns the ANSI
> + version of the hostname, whereas we need a Unicode string.
> + Otherwise, gethostname apparently also returns the DNS name. */
> + wchar_t buf[MAX_COMPUTERNAME_LENGTH + 1];
> + DWORD size = Py_ARRAY_LENGTH(buf);
> + wchar_t *name;
> + PyObject *result;
> +
> + if (GetComputerNameExW(ComputerNamePhysicalDnsHostname, buf, &size))
> + return PyUnicode_FromWideChar(buf, size);
> +
> + if (GetLastError() != ERROR_MORE_DATA)
> + return PyErr_SetFromWindowsErr(0);
> +
> + if (size == 0)
> + return PyUnicode_New(0, 0);
> +
> + /* MSDN says ERROR_MORE_DATA may occur because DNS allows longer
> + names */
> + name = PyMem_New(wchar_t, size);
> + if (!name) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + if (!GetComputerNameExW(ComputerNamePhysicalDnsHostname,
> + name,
> + &size))
> + {
> + PyMem_Free(name);
> + return PyErr_SetFromWindowsErr(0);
> + }
> +
> + result = PyUnicode_FromWideChar(name, size);
> + PyMem_Free(name);
> + return result;
> +#else
> + char buf[1024];
> + int res;
> + Py_BEGIN_ALLOW_THREADS
> + res = gethostname(buf, (int) sizeof buf - 1);
> + Py_END_ALLOW_THREADS
> + if (res < 0)
> + return set_error();
> + buf[sizeof buf - 1] = '\0';
> + return PyUnicode_DecodeFSDefault(buf);
> +#endif
> +}
> +
> +PyDoc_STRVAR(gethostname_doc,
> +"gethostname() -> string\n\
> +\n\
> +Return the current host name.");
> +
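
Quick sanity check for this binding (what the call reports under the UEFI shell depends on what the underlying C library exposes as the host name):

    import socket
    print(socket.gethostname())
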
> +#ifdef HAVE_SETHOSTNAME
> +PyDoc_STRVAR(sethostname_doc,
> +"sethostname(name)\n\n\
> +Sets the hostname to name.");
> +
> +static PyObject *
> +socket_sethostname(PyObject *self, PyObject *args)
> +{
> + PyObject *hnobj;
> + Py_buffer buf;
> + int res, flag = 0;
> +
> +#ifdef _AIX
> +/* issue #18259, not declared in any useful header file */
> +extern int sethostname(const char *, size_t);
> +#endif
> +
> + if (!PyArg_ParseTuple(args, "S:sethostname", &hnobj)) {
> + PyErr_Clear();
> + if (!PyArg_ParseTuple(args, "O&:sethostname",
> + PyUnicode_FSConverter, &hnobj))
> + return NULL;
> + flag = 1;
> + }
> + res = PyObject_GetBuffer(hnobj, &buf, PyBUF_SIMPLE);
> + if (!res) {
> + res = sethostname(buf.buf, buf.len);
> + PyBuffer_Release(&buf);
> + }
> + if (flag)
> + Py_DECREF(hnobj);
> + if (res)
> + return set_error();
> + Py_RETURN_NONE;
> +}
> +#endif
> +
> +/* Python interface to gethostbyname(name). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_gethostbyname(PyObject *self, PyObject *args)
> +{
> + char *name;
> + sock_addr_t addrbuf;
> + PyObject *ret = NULL;
> +
> + if (!PyArg_ParseTuple(args, "et:gethostbyname", "idna", &name))
> + return NULL;
> + if (setipaddr(name, SAS2SA(&addrbuf), sizeof(addrbuf), AF_INET) < 0)
> + goto finally;
> + ret = makeipaddr(SAS2SA(&addrbuf), sizeof(struct sockaddr_in));
> +finally:
> + PyMem_Free(name);
> + return ret;
> +}
> +
> +PyDoc_STRVAR(gethostbyname_doc,
> +"gethostbyname(host) -> address\n\
> +\n\
> +Return the IP address (a string of the form '255.255.255.255') for a host.");
> +
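
Usage matches the docstring above; a sketch assuming working name resolution in the environment (www.python.org is just a placeholder name):

    import socket
    addr = socket.gethostbyname("www.python.org")   # dotted-quad string
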
> +
> +static PyObject*
> +sock_decode_hostname(const char *name)
> +{
> +#ifdef MS_WINDOWS
> + /* Issue #26227: gethostbyaddr() returns a string encoded
> + * to the ANSI code page */
> + return PyUnicode_DecodeFSDefault(name);
> +#else
> + /* Decode from UTF-8 */
> + return PyUnicode_FromString(name);
> +#endif
> +}
> +
> +/* Convenience function common to gethostbyname_ex and gethostbyaddr */
> +
> +static PyObject *
> +gethost_common(struct hostent *h, struct sockaddr *addr, size_t alen, int af)
> +{
> + char **pch;
> + PyObject *rtn_tuple = (PyObject *)NULL;
> + PyObject *name_list = (PyObject *)NULL;
> + PyObject *addr_list = (PyObject *)NULL;
> + PyObject *tmp;
> + PyObject *name;
> +
> + if (h == NULL) {
> +        /* Let's get a real error message to return */
> + set_herror(h_errno);
> + return NULL;
> + }
> +
> + if (h->h_addrtype != af) {
> +        /* Let's get a real error message to return */
> + errno = EAFNOSUPPORT;
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + switch (af) {
> +
> + case AF_INET:
> + if (alen < sizeof(struct sockaddr_in))
> + return NULL;
> + break;
> +
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + if (alen < sizeof(struct sockaddr_in6))
> + return NULL;
> + break;
> +#endif
> +
> + }
> +
> + if ((name_list = PyList_New(0)) == NULL)
> + goto err;
> +
> + if ((addr_list = PyList_New(0)) == NULL)
> + goto err;
> +
> + /* SF #1511317: h_aliases can be NULL */
> + if (h->h_aliases) {
> + for (pch = h->h_aliases; *pch != NULL; pch++) {
> + int status;
> + tmp = PyUnicode_FromString(*pch);
> + if (tmp == NULL)
> + goto err;
> +
> + status = PyList_Append(name_list, tmp);
> + Py_DECREF(tmp);
> +
> + if (status)
> + goto err;
> + }
> + }
> +
> + for (pch = h->h_addr_list; *pch != NULL; pch++) {
> + int status;
> +
> + switch (af) {
> +
> + case AF_INET:
> + {
> + struct sockaddr_in sin;
> + memset(&sin, 0, sizeof(sin));
> + sin.sin_family = af;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin.sin_len = sizeof(sin);
> +#endif
> + memcpy(&sin.sin_addr, *pch, sizeof(sin.sin_addr));
> + tmp = makeipaddr((struct sockaddr *)&sin, sizeof(sin));
> +
> + if (pch == h->h_addr_list && alen >= sizeof(sin))
> + memcpy((char *) addr, &sin, sizeof(sin));
> + break;
> + }
> +
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + {
> + struct sockaddr_in6 sin6;
> + memset(&sin6, 0, sizeof(sin6));
> + sin6.sin6_family = af;
> +#ifdef HAVE_SOCKADDR_SA_LEN
> + sin6.sin6_len = sizeof(sin6);
> +#endif
> + memcpy(&sin6.sin6_addr, *pch, sizeof(sin6.sin6_addr));
> + tmp = makeipaddr((struct sockaddr *)&sin6,
> + sizeof(sin6));
> +
> + if (pch == h->h_addr_list && alen >= sizeof(sin6))
> + memcpy((char *) addr, &sin6, sizeof(sin6));
> + break;
> + }
> +#endif
> +
> + default: /* can't happen */
> + PyErr_SetString(PyExc_OSError,
> + "unsupported address family");
> + return NULL;
> + }
> +
> + if (tmp == NULL)
> + goto err;
> +
> + status = PyList_Append(addr_list, tmp);
> + Py_DECREF(tmp);
> +
> + if (status)
> + goto err;
> + }
> +
> + name = sock_decode_hostname(h->h_name);
> + if (name == NULL)
> + goto err;
> + rtn_tuple = Py_BuildValue("NOO", name, name_list, addr_list);
> +
> + err:
> + Py_XDECREF(name_list);
> + Py_XDECREF(addr_list);
> + return rtn_tuple;
> +}
> +
> +
> +/* Python interface to gethostbyname_ex(name). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_gethostbyname_ex(PyObject *self, PyObject *args)
> +{
> + char *name;
> + struct hostent *h;
> + sock_addr_t addr;
> + struct sockaddr *sa;
> + PyObject *ret = NULL;
> +#ifdef HAVE_GETHOSTBYNAME_R
> + struct hostent hp_allocated;
> +#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
> + struct hostent_data data;
> +#else
> + char buf[16384];
> + int buf_len = (sizeof buf) - 1;
> + int errnop;
> +#endif
> +#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
> + int result;
> +#endif
> +#endif /* HAVE_GETHOSTBYNAME_R */
> +
> + if (!PyArg_ParseTuple(args, "et:gethostbyname_ex", "idna", &name))
> + return NULL;
> + if (setipaddr(name, SAS2SA(&addr), sizeof(addr), AF_INET) < 0)
> + goto finally;
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef HAVE_GETHOSTBYNAME_R
> +#if defined(HAVE_GETHOSTBYNAME_R_6_ARG)
> + gethostbyname_r(name, &hp_allocated, buf, buf_len,
> + &h, &errnop);
> +#elif defined(HAVE_GETHOSTBYNAME_R_5_ARG)
> + h = gethostbyname_r(name, &hp_allocated, buf, buf_len, &errnop);
> +#else /* HAVE_GETHOSTBYNAME_R_3_ARG */
> + memset((void *) &data, '\0', sizeof(data));
> + result = gethostbyname_r(name, &hp_allocated, &data);
> + h = (result != 0) ? NULL : &hp_allocated;
> +#endif
> +#else /* not HAVE_GETHOSTBYNAME_R */
> +#ifdef USE_GETHOSTBYNAME_LOCK
> + PyThread_acquire_lock(netdb_lock, 1);
> +#endif
> + h = gethostbyname(name);
> +#endif /* HAVE_GETHOSTBYNAME_R */
> + Py_END_ALLOW_THREADS
> + /* Some C libraries would require addr.__ss_family instead of
> + addr.ss_family.
> + Therefore, we cast the sockaddr_storage into sockaddr to
> + access sa_family. */
> + sa = SAS2SA(&addr);
> + ret = gethost_common(h, SAS2SA(&addr), sizeof(addr),
> + sa->sa_family);
> +#ifdef USE_GETHOSTBYNAME_LOCK
> + PyThread_release_lock(netdb_lock);
> +#endif
> +finally:
> + PyMem_Free(name);
> + return ret;
> +}
> +
> +PyDoc_STRVAR(ghbn_ex_doc,
> +"gethostbyname_ex(host) -> (name, aliaslist, addresslist)\n\
> +\n\
> +Return the true host name, a list of aliases, and a list of IP addresses,\n\
> +for a host. The host argument is a string giving a host name or IP number.");
> +
> +
> +/* Python interface to gethostbyaddr(IP). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_gethostbyaddr(PyObject *self, PyObject *args)
> +{
> + sock_addr_t addr;
> + struct sockaddr *sa = SAS2SA(&addr);
> + char *ip_num;
> + struct hostent *h;
> + PyObject *ret = NULL;
> +#ifdef HAVE_GETHOSTBYNAME_R
> + struct hostent hp_allocated;
> +#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
> + struct hostent_data data;
> +#else
> + /* glibcs up to 2.10 assume that the buf argument to
> + gethostbyaddr_r is 8-byte aligned, which at least llvm-gcc
> + does not ensure. The attribute below instructs the compiler
> + to maintain this alignment. */
> + char buf[16384] Py_ALIGNED(8);
> + int buf_len = (sizeof buf) - 1;
> + int errnop;
> +#endif
> +#ifdef HAVE_GETHOSTBYNAME_R_3_ARG
> + int result;
> +#endif
> +#endif /* HAVE_GETHOSTBYNAME_R */
> + const char *ap;
> + int al;
> + int af;
> +
> + if (!PyArg_ParseTuple(args, "et:gethostbyaddr", "idna", &ip_num))
> + return NULL;
> + af = AF_UNSPEC;
> + if (setipaddr(ip_num, sa, sizeof(addr), af) < 0)
> + goto finally;
> + af = sa->sa_family;
> + ap = NULL;
> + /* al = 0; */
> + switch (af) {
> + case AF_INET:
> + ap = (char *)&((struct sockaddr_in *)sa)->sin_addr;
> + al = sizeof(((struct sockaddr_in *)sa)->sin_addr);
> + break;
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + ap = (char *)&((struct sockaddr_in6 *)sa)->sin6_addr;
> + al = sizeof(((struct sockaddr_in6 *)sa)->sin6_addr);
> + break;
> +#endif
> + default:
> + PyErr_SetString(PyExc_OSError, "unsupported address family");
> + goto finally;
> + }
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef HAVE_GETHOSTBYNAME_R
> +#if defined(HAVE_GETHOSTBYNAME_R_6_ARG)
> + gethostbyaddr_r(ap, al, af,
> + &hp_allocated, buf, buf_len,
> + &h, &errnop);
> +#elif defined(HAVE_GETHOSTBYNAME_R_5_ARG)
> + h = gethostbyaddr_r(ap, al, af,
> + &hp_allocated, buf, buf_len, &errnop);
> +#else /* HAVE_GETHOSTBYNAME_R_3_ARG */
> + memset((void *) &data, '\0', sizeof(data));
> + result = gethostbyaddr_r(ap, al, af, &hp_allocated, &data);
> + h = (result != 0) ? NULL : &hp_allocated;
> +#endif
> +#else /* not HAVE_GETHOSTBYNAME_R */
> +#ifdef USE_GETHOSTBYNAME_LOCK
> + PyThread_acquire_lock(netdb_lock, 1);
> +#endif
> + h = gethostbyaddr(ap, al, af);
> +#endif /* HAVE_GETHOSTBYNAME_R */
> + Py_END_ALLOW_THREADS
> + ret = gethost_common(h, SAS2SA(&addr), sizeof(addr), af);
> +#ifdef USE_GETHOSTBYNAME_LOCK
> + PyThread_release_lock(netdb_lock);
> +#endif
> +finally:
> + PyMem_Free(ip_num);
> + return ret;
> +}
> +
> +PyDoc_STRVAR(gethostbyaddr_doc,
> +"gethostbyaddr(host) -> (name, aliaslist, addresslist)\n\
> +\n\
> +Return the true host name, a list of aliases, and a list of IP addresses,\n\
> +for a host. The host argument is a string giving a host name or IP number.");
> +
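
gethostbyaddr() returns the same (name, aliaslist, addresslist) shape as gethostbyname_ex(); a loopback sketch, assuming the resolver can do reverse lookups:

    import socket
    name, aliases, addrs = socket.gethostbyaddr("127.0.0.1")
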
> +
> +/* Python interface to getservbyname(name).
> + This only returns the port number, since the other info is already
> + known or not useful (like the list of aliases). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_getservbyname(PyObject *self, PyObject *args)
> +{
> + const char *name, *proto=NULL;
> + struct servent *sp;
> + if (!PyArg_ParseTuple(args, "s|s:getservbyname", &name, &proto))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + sp = getservbyname(name, proto);
> + Py_END_ALLOW_THREADS
> + if (sp == NULL) {
> + PyErr_SetString(PyExc_OSError, "service/proto not found");
> + return NULL;
> + }
> + return PyLong_FromLong((long) ntohs(sp->s_port));
> +}
> +
> +PyDoc_STRVAR(getservbyname_doc,
> +"getservbyname(servicename[, protocolname]) -> integer\n\
> +\n\
> +Return a port number from a service name and protocol name.\n\
> +The optional protocol name, if given, should be 'tcp' or 'udp',\n\
> +otherwise any protocol will match.");
> +
> +
> +/* Python interface to getservbyport(port).
> + This only returns the service name, since the other info is already
> + known or not useful (like the list of aliases). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_getservbyport(PyObject *self, PyObject *args)
> +{
> + int port;
> + const char *proto=NULL;
> + struct servent *sp;
> + if (!PyArg_ParseTuple(args, "i|s:getservbyport", &port, &proto))
> + return NULL;
> + if (port < 0 || port > 0xffff) {
> + PyErr_SetString(
> + PyExc_OverflowError,
> + "getservbyport: port must be 0-65535.");
> + return NULL;
> + }
> + Py_BEGIN_ALLOW_THREADS
> + sp = getservbyport(htons((short)port), proto);
> + Py_END_ALLOW_THREADS
> + if (sp == NULL) {
> + PyErr_SetString(PyExc_OSError, "port/proto not found");
> + return NULL;
> + }
> + return PyUnicode_FromString(sp->s_name);
> +}
> +
> +PyDoc_STRVAR(getservbyport_doc,
> +"getservbyport(port[, protocolname]) -> string\n\
> +\n\
> +Return the service name from a port number and protocol name.\n\
> +The optional protocol name, if given, should be 'tcp' or 'udp',\n\
> +otherwise any protocol will match.");
> +
> +/* Python interface to getprotobyname(name).
> + This only returns the protocol number, since the other info is
> + already known or not useful (like the list of aliases). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_getprotobyname(PyObject *self, PyObject *args)
> +{
> + const char *name;
> + struct protoent *sp;
> + if (!PyArg_ParseTuple(args, "s:getprotobyname", &name))
> + return NULL;
> + Py_BEGIN_ALLOW_THREADS
> + sp = getprotobyname(name);
> + Py_END_ALLOW_THREADS
> + if (sp == NULL) {
> + PyErr_SetString(PyExc_OSError, "protocol not found");
> + return NULL;
> + }
> + return PyLong_FromLong((long) sp->p_proto);
> +}
> +
> +PyDoc_STRVAR(getprotobyname_doc,
> +"getprotobyname(name) -> integer\n\
> +\n\
> +Return the protocol number for the named protocol. (Rarely used.)");
> +
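
The three lookups above are thin wrappers over getservbyname()/getservbyport()/getprotobyname(); the expected results below assume a host with the usual services/protocols database (whether the UEFI C library provides one is a separate question):

    import socket
    socket.getservbyname("http", "tcp")   # 80
    socket.getservbyport(80, "tcp")       # 'http'
    socket.getprotobyname("tcp")          # 6
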
> +
> +#ifndef NO_DUP
> +/* dup() function for socket fds */
> +
> +static PyObject *
> +socket_dup(PyObject *self, PyObject *fdobj)
> +{
> + SOCKET_T fd, newfd;
> + PyObject *newfdobj;
> +#ifdef MS_WINDOWS
> + WSAPROTOCOL_INFO info;
> +#endif
> +
> + fd = PyLong_AsSocket_t(fdobj);
> + if (fd == (SOCKET_T)(-1) && PyErr_Occurred())
> + return NULL;
> +
> +#ifdef MS_WINDOWS
> + if (WSADuplicateSocket(fd, GetCurrentProcessId(), &info))
> + return set_error();
> +
> + newfd = WSASocket(FROM_PROTOCOL_INFO, FROM_PROTOCOL_INFO,
> + FROM_PROTOCOL_INFO,
> + &info, 0, WSA_FLAG_OVERLAPPED);
> + if (newfd == INVALID_SOCKET)
> + return set_error();
> +
> + if (!SetHandleInformation((HANDLE)newfd, HANDLE_FLAG_INHERIT, 0)) {
> + closesocket(newfd);
> + PyErr_SetFromWindowsErr(0);
> + return NULL;
> + }
> +#else
> + /* On UNIX, dup can be used to duplicate the file descriptor of a socket */
> + newfd = _Py_dup(fd);
> + if (newfd == INVALID_SOCKET)
> + return NULL;
> +#endif
> +
> + newfdobj = PyLong_FromSocket_t(newfd);
> + if (newfdobj == NULL)
> + SOCKETCLOSE(newfd);
> + return newfdobj;
> +}
> +
> +PyDoc_STRVAR(dup_doc,
> +"dup(integer) -> integer\n\
> +\n\
> +Duplicate an integer socket file descriptor. This is like os.dup(), but for\n\
> +sockets; on some platforms os.dup() won't work for socket file descriptors.");
> +#endif
> +
> +
> +#ifdef HAVE_SOCKETPAIR
> +/* Create a pair of sockets using the socketpair() function.
> + Arguments as for socket() except the default family is AF_UNIX if
> + defined on the platform; otherwise, the default is AF_INET. */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_socketpair(PyObject *self, PyObject *args)
> +{
> + PySocketSockObject *s0 = NULL, *s1 = NULL;
> + SOCKET_T sv[2];
> + int family, type = SOCK_STREAM, proto = 0;
> + PyObject *res = NULL;
> +#ifdef SOCK_CLOEXEC
> + int *atomic_flag_works = &sock_cloexec_works;
> +#else
> + int *atomic_flag_works = NULL;
> +#endif
> + int ret;
> +
> +#if defined(AF_UNIX)
> + family = AF_UNIX;
> +#else
> + family = AF_INET;
> +#endif
> + if (!PyArg_ParseTuple(args, "|iii:socketpair",
> + &family, &type, &proto))
> + return NULL;
> +
> + /* Create a pair of socket fds */
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef SOCK_CLOEXEC
> + if (sock_cloexec_works != 0) {
> + ret = socketpair(family, type | SOCK_CLOEXEC, proto, sv);
> + if (sock_cloexec_works == -1) {
> + if (ret >= 0) {
> + sock_cloexec_works = 1;
> + }
> + else if (errno == EINVAL) {
> + /* Linux older than 2.6.27 does not support SOCK_CLOEXEC */
> + sock_cloexec_works = 0;
> + ret = socketpair(family, type, proto, sv);
> + }
> + }
> + }
> + else
> +#endif
> + {
> + ret = socketpair(family, type, proto, sv);
> + }
> + Py_END_ALLOW_THREADS
> +
> + if (ret < 0)
> + return set_error();
> +
> + if (_Py_set_inheritable(sv[0], 0, atomic_flag_works) < 0)
> + goto finally;
> + if (_Py_set_inheritable(sv[1], 0, atomic_flag_works) < 0)
> + goto finally;
> +
> + s0 = new_sockobject(sv[0], family, type, proto);
> + if (s0 == NULL)
> + goto finally;
> + s1 = new_sockobject(sv[1], family, type, proto);
> + if (s1 == NULL)
> + goto finally;
> + res = PyTuple_Pack(2, s0, s1);
> +
> +finally:
> + if (res == NULL) {
> + if (s0 == NULL)
> + SOCKETCLOSE(sv[0]);
> + if (s1 == NULL)
> + SOCKETCLOSE(sv[1]);
> + }
> + Py_XDECREF(s0);
> + Py_XDECREF(s1);
> + return res;
> +}
> +
> +PyDoc_STRVAR(socketpair_doc,
> +"socketpair([family[, type [, proto]]]) -> (socket object, socket object)\n\
> +\n\
> +Create a pair of socket objects from the sockets returned by the platform\n\
> +socketpair() function.\n\
> +The arguments are the same as for socket() except the default family is\n\
> +AF_UNIX if defined on the platform; otherwise, the default is AF_INET.");
> +
> +#endif /* HAVE_SOCKETPAIR */
> +
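
socketpair() is only built when HAVE_SOCKETPAIR is defined; where it exists, it yields two already-connected sockets, which is handy for quick self-tests:

    import socket
    a, b = socket.socketpair()
    a.sendall(b"ping")
    print(b.recv(4))    # b'ping'
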
> +
> +static PyObject *
> +socket_ntohs(PyObject *self, PyObject *args)
> +{
> + int x1, x2;
> +
> + if (!PyArg_ParseTuple(args, "i:ntohs", &x1)) {
> + return NULL;
> + }
> + if (x1 < 0) {
> + PyErr_SetString(PyExc_OverflowError,
> + "can't convert negative number to unsigned long");
> + return NULL;
> + }
> + x2 = (unsigned int)ntohs((unsigned short)x1);
> + return PyLong_FromLong(x2);
> +}
> +
> +PyDoc_STRVAR(ntohs_doc,
> +"ntohs(integer) -> integer\n\
> +\n\
> +Convert a 16-bit integer from network to host byte order.");
> +
> +
> +static PyObject *
> +socket_ntohl(PyObject *self, PyObject *arg)
> +{
> + unsigned long x;
> +
> + if (PyLong_Check(arg)) {
> + x = PyLong_AsUnsignedLong(arg);
> + if (x == (unsigned long) -1 && PyErr_Occurred())
> + return NULL;
> +#if SIZEOF_LONG > 4
> + {
> + unsigned long y;
> + /* only want the trailing 32 bits */
> + y = x & 0xFFFFFFFFUL;
> + if (y ^ x)
> + return PyErr_Format(PyExc_OverflowError,
> + "int larger than 32 bits");
> + x = y;
> + }
> +#endif
> + }
> + else
> + return PyErr_Format(PyExc_TypeError,
> + "expected int, %s found",
> + Py_TYPE(arg)->tp_name);
> + return PyLong_FromUnsignedLong(ntohl(x));
> +}
> +
> +PyDoc_STRVAR(ntohl_doc,
> +"ntohl(integer) -> integer\n\
> +\n\
> +Convert a 32-bit integer from network to host byte order.");
> +
> +
> +static PyObject *
> +socket_htons(PyObject *self, PyObject *args)
> +{
> + int x1, x2;
> +
> + if (!PyArg_ParseTuple(args, "i:htons", &x1)) {
> + return NULL;
> + }
> + if (x1 < 0) {
> + PyErr_SetString(PyExc_OverflowError,
> + "can't convert negative number to unsigned long");
> + return NULL;
> + }
> + x2 = (unsigned int)htons((unsigned short)x1);
> + return PyLong_FromLong(x2);
> +}
> +
> +PyDoc_STRVAR(htons_doc,
> +"htons(integer) -> integer\n\
> +\n\
> +Convert a 16-bit integer from host to network byte order.");
> +
> +
> +static PyObject *
> +socket_htonl(PyObject *self, PyObject *arg)
> +{
> + unsigned long x;
> +
> + if (PyLong_Check(arg)) {
> + x = PyLong_AsUnsignedLong(arg);
> + if (x == (unsigned long) -1 && PyErr_Occurred())
> + return NULL;
> +#if SIZEOF_LONG > 4
> + {
> + unsigned long y;
> + /* only want the trailing 32 bits */
> + y = x & 0xFFFFFFFFUL;
> + if (y ^ x)
> + return PyErr_Format(PyExc_OverflowError,
> + "int larger than 32 bits");
> + x = y;
> + }
> +#endif
> + }
> + else
> + return PyErr_Format(PyExc_TypeError,
> + "expected int, %s found",
> + Py_TYPE(arg)->tp_name);
> + return PyLong_FromUnsignedLong(htonl((unsigned long)x));
> +}
> +
> +PyDoc_STRVAR(htonl_doc,
> +"htonl(integer) -> integer\n\
> +\n\
> +Convert a 32-bit integer from host to network byte order.");
> +
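
The four byte-order helpers are pure conversions; the sample results below assume a little-endian host:

    import socket
    socket.htons(0x1234)                 # 0x3412 on little-endian hosts
    socket.htonl(1)                      # 16777216 (0x01000000) on little-endian hosts
    socket.ntohs(socket.htons(0x1234))   # 0x1234 on any host
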
> +/* socket.inet_aton() and socket.inet_ntoa() functions. */
> +
> +PyDoc_STRVAR(inet_aton_doc,
> +"inet_aton(string) -> bytes giving packed 32-bit IP representation\n\
> +\n\
> +Convert an IP address in string format (123.45.67.89) to the 32-bit packed\n\
> +binary format used in low-level network functions.");
> +
> +static PyObject*
> +socket_inet_aton(PyObject *self, PyObject *args)
> +{
> +#ifdef HAVE_INET_ATON
> + struct in_addr buf;
> +#endif
> +
> +#if !defined(HAVE_INET_ATON) || defined(USE_INET_ATON_WEAKLINK)
> +#if (SIZEOF_INT != 4)
> +#error "Not sure if in_addr_t exists and int is not 32-bits."
> +#endif
> + /* Have to use inet_addr() instead */
> + unsigned int packed_addr;
> +#endif
> + const char *ip_addr;
> +
> + if (!PyArg_ParseTuple(args, "s:inet_aton", &ip_addr))
> + return NULL;
> +
> +
> +#ifdef HAVE_INET_ATON
> +
> +#ifdef USE_INET_ATON_WEAKLINK
> + if (inet_aton != NULL) {
> +#endif
> + if (inet_aton(ip_addr, &buf))
> + return PyBytes_FromStringAndSize((char *)(&buf),
> + sizeof(buf));
> +
> + PyErr_SetString(PyExc_OSError,
> + "illegal IP address string passed to inet_aton");
> + return NULL;
> +
> +#ifdef USE_INET_ATON_WEAKLINK
> + } else {
> +#endif
> +
> +#endif
> +
> +#if !defined(HAVE_INET_ATON) || defined(USE_INET_ATON_WEAKLINK)
> +
> + /* special-case this address as inet_addr might return INADDR_NONE
> + * for this */
> + if (strcmp(ip_addr, "255.255.255.255") == 0) {
> + packed_addr = INADDR_BROADCAST;
> + } else {
> +
> + packed_addr = inet_addr(ip_addr);
> +
> + if (packed_addr == INADDR_NONE) { /* invalid address */
> + PyErr_SetString(PyExc_OSError,
> + "illegal IP address string passed to inet_aton");
> + return NULL;
> + }
> + }
> + return PyBytes_FromStringAndSize((char *) &packed_addr,
> + sizeof(packed_addr));
> +
> +#ifdef USE_INET_ATON_WEAKLINK
> + }
> +#endif
> +
> +#endif
> +}
> +
> +PyDoc_STRVAR(inet_ntoa_doc,
> +"inet_ntoa(packed_ip) -> ip_address_string\n\
> +\n\
> +Convert an IP address from 32-bit packed binary format to string format");
> +
> +static PyObject*
> +socket_inet_ntoa(PyObject *self, PyObject *args)
> +{
> + Py_buffer packed_ip;
> + struct in_addr packed_addr;
> +
> + if (!PyArg_ParseTuple(args, "y*:inet_ntoa", &packed_ip)) {
> + return NULL;
> + }
> +
> + if (packed_ip.len != sizeof(packed_addr)) {
> + PyErr_SetString(PyExc_OSError,
> + "packed IP wrong length for inet_ntoa");
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> +
> + memcpy(&packed_addr, packed_ip.buf, packed_ip.len);
> + PyBuffer_Release(&packed_ip);
> +
> + return PyUnicode_FromString(inet_ntoa(packed_addr));
> +}
> +
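
Round trip through the two converters, using the documentation address 192.0.2.1:

    import socket
    packed = socket.inet_aton("192.0.2.1")   # b'\xc0\x00\x02\x01'
    socket.inet_ntoa(packed)                 # '192.0.2.1'
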
> +#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
> +
> +PyDoc_STRVAR(inet_pton_doc,
> +"inet_pton(af, ip) -> packed IP address string\n\
> +\n\
> +Convert an IP address from string format to a packed string suitable\n\
> +for use with low-level network functions.");
> +
> +#endif
> +
> +#ifdef HAVE_INET_PTON
> +
> +static PyObject *
> +socket_inet_pton(PyObject *self, PyObject *args)
> +{
> + int af;
> + const char* ip;
> + int retval;
> +#ifdef ENABLE_IPV6
> + char packed[Py_MAX(sizeof(struct in_addr), sizeof(struct in6_addr))];
> +#else
> + char packed[sizeof(struct in_addr)];
> +#endif
> + if (!PyArg_ParseTuple(args, "is:inet_pton", &af, &ip)) {
> + return NULL;
> + }
> +
> +#if !defined(ENABLE_IPV6) && defined(AF_INET6)
> + if(af == AF_INET6) {
> + PyErr_SetString(PyExc_OSError,
> + "can't use AF_INET6, IPv6 is disabled");
> + return NULL;
> + }
> +#endif
> +
> + retval = inet_pton(af, ip, packed);
> + if (retval < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + } else if (retval == 0) {
> + PyErr_SetString(PyExc_OSError,
> + "illegal IP address string passed to inet_pton");
> + return NULL;
> + } else if (af == AF_INET) {
> + return PyBytes_FromStringAndSize(packed,
> + sizeof(struct in_addr));
> +#ifdef ENABLE_IPV6
> + } else if (af == AF_INET6) {
> + return PyBytes_FromStringAndSize(packed,
> + sizeof(struct in6_addr));
> +#endif
> + } else {
> + PyErr_SetString(PyExc_OSError, "unknown address family");
> + return NULL;
> + }
> +}
> +#elif defined(MS_WINDOWS)
> +
> +static PyObject *
> +socket_inet_pton(PyObject *self, PyObject *args)
> +{
> + int af;
> + char* ip;
> + struct sockaddr_in6 addr;
> + INT ret, size;
> +
> + if (!PyArg_ParseTuple(args, "is:inet_pton", &af, &ip)) {
> + return NULL;
> + }
> +
> + size = sizeof(addr);
> + ret = WSAStringToAddressA(ip, af, NULL, (LPSOCKADDR)&addr, &size);
> +
> + if (ret) {
> + PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
> + return NULL;
> + } else if(af == AF_INET) {
> + struct sockaddr_in *addr4 = (struct sockaddr_in*)&addr;
> + return PyBytes_FromStringAndSize((const char *)&(addr4->sin_addr),
> + sizeof(addr4->sin_addr));
> + } else if (af == AF_INET6) {
> + return PyBytes_FromStringAndSize((const char *)&(addr.sin6_addr),
> + sizeof(addr.sin6_addr));
> + } else {
> + PyErr_SetString(PyExc_OSError, "unknown address family");
> + return NULL;
> + }
> +}
> +
> +#endif
> +
> +#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
> +
> +PyDoc_STRVAR(inet_ntop_doc,
> +"inet_ntop(af, packed_ip) -> string formatted IP address\n\
> +\n\
> +Convert a packed IP address of the given family to string format.");
> +
> +#endif
> +
> +
> +#ifdef HAVE_INET_PTON
> +static PyObject *
> +socket_inet_ntop(PyObject *self, PyObject *args)
> +{
> + int af;
> + Py_buffer packed_ip;
> + const char* retval;
> +#ifdef ENABLE_IPV6
> + char ip[Py_MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN) + 1];
> +#else
> + char ip[INET_ADDRSTRLEN + 1];
> +#endif
> +
> + /* Guarantee NUL-termination for PyUnicode_FromString() below */
> + memset((void *) &ip[0], '\0', sizeof(ip));
> +
> + if (!PyArg_ParseTuple(args, "iy*:inet_ntop", &af, &packed_ip)) {
> + return NULL;
> + }
> +
> + if (af == AF_INET) {
> + if (packed_ip.len != sizeof(struct in_addr)) {
> + PyErr_SetString(PyExc_ValueError,
> + "invalid length of packed IP address string");
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> +#ifdef ENABLE_IPV6
> + } else if (af == AF_INET6) {
> + if (packed_ip.len != sizeof(struct in6_addr)) {
> + PyErr_SetString(PyExc_ValueError,
> + "invalid length of packed IP address string");
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> +#endif
> + } else {
> + PyErr_Format(PyExc_ValueError,
> + "unknown address family %d", af);
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> +
> + retval = inet_ntop(af, packed_ip.buf, ip, sizeof(ip));
> + PyBuffer_Release(&packed_ip);
> + if (!retval) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + } else {
> + return PyUnicode_FromString(retval);
> + }
> +}
> +
> +#elif defined(MS_WINDOWS)
> +
> +static PyObject *
> +socket_inet_ntop(PyObject *self, PyObject *args)
> +{
> + int af;
> + Py_buffer packed_ip;
> + struct sockaddr_in6 addr;
> + DWORD addrlen, ret, retlen;
> +#ifdef ENABLE_IPV6
> + char ip[Py_MAX(INET_ADDRSTRLEN, INET6_ADDRSTRLEN) + 1];
> +#else
> + char ip[INET_ADDRSTRLEN + 1];
> +#endif
> +
> + /* Guarantee NUL-termination for PyUnicode_FromString() below */
> + memset((void *) &ip[0], '\0', sizeof(ip));
> +
> + if (!PyArg_ParseTuple(args, "iy*:inet_ntop", &af, &packed_ip)) {
> + return NULL;
> + }
> +
> + if (af == AF_INET) {
> + struct sockaddr_in * addr4 = (struct sockaddr_in *)&addr;
> +
> + if (packed_ip.len != sizeof(struct in_addr)) {
> + PyErr_SetString(PyExc_ValueError,
> + "invalid length of packed IP address string");
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> + memset(addr4, 0, sizeof(struct sockaddr_in));
> + addr4->sin_family = AF_INET;
> + memcpy(&(addr4->sin_addr), packed_ip.buf, sizeof(addr4->sin_addr));
> + addrlen = sizeof(struct sockaddr_in);
> + } else if (af == AF_INET6) {
> + if (packed_ip.len != sizeof(struct in6_addr)) {
> + PyErr_SetString(PyExc_ValueError,
> + "invalid length of packed IP address string");
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> +
> + memset(&addr, 0, sizeof(addr));
> + addr.sin6_family = AF_INET6;
> + memcpy(&(addr.sin6_addr), packed_ip.buf, sizeof(addr.sin6_addr));
> + addrlen = sizeof(addr);
> + } else {
> + PyErr_Format(PyExc_ValueError,
> + "unknown address family %d", af);
> + PyBuffer_Release(&packed_ip);
> + return NULL;
> + }
> + PyBuffer_Release(&packed_ip);
> +
> + retlen = sizeof(ip);
> + ret = WSAAddressToStringA((struct sockaddr*)&addr, addrlen, NULL,
> + ip, &retlen);
> +
> + if (ret) {
> + PyErr_SetExcFromWindowsErr(PyExc_OSError, WSAGetLastError());
> + return NULL;
> + } else {
> + return PyUnicode_FromString(ip);
> + }
> +}
> +
> +#endif /* HAVE_INET_PTON */
> +
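
The family-aware converters behave the same way; the IPv6 case assumes the build defines ENABLE_IPV6:

    import socket
    p4 = socket.inet_pton(socket.AF_INET, "192.0.2.1")
    socket.inet_ntop(socket.AF_INET, p4)                   # '192.0.2.1'
    p6 = socket.inet_pton(socket.AF_INET6, "2001:db8::1")
    socket.inet_ntop(socket.AF_INET6, p6)                  # '2001:db8::1'
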
> +/* Python interface to getaddrinfo(host, port). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_getaddrinfo(PyObject *self, PyObject *args, PyObject* kwargs)
> +{
> + static char* kwnames[] = {"host", "port", "family", "type", "proto",
> + "flags", 0};
> + struct addrinfo hints, *res;
> + struct addrinfo *res0 = NULL;
> + PyObject *hobj = NULL;
> + PyObject *pobj = (PyObject *)NULL;
> + char pbuf[30];
> + char *hptr, *pptr;
> + int family, socktype, protocol, flags;
> + int error;
> + PyObject *all = (PyObject *)NULL;
> + PyObject *idna = NULL;
> +
> + socktype = protocol = flags = 0;
> + family = AF_UNSPEC;
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "OO|iiii:getaddrinfo",
> + kwnames, &hobj, &pobj, &family, &socktype,
> + &protocol, &flags)) {
> + return NULL;
> + }
> + if (hobj == Py_None) {
> + hptr = NULL;
> + } else if (PyUnicode_Check(hobj)) {
> + idna = PyUnicode_AsEncodedString(hobj, "idna", NULL);
> + if (!idna)
> + return NULL;
> + assert(PyBytes_Check(idna));
> + hptr = PyBytes_AS_STRING(idna);
> + } else if (PyBytes_Check(hobj)) {
> + hptr = PyBytes_AsString(hobj);
> + } else {
> + PyErr_SetString(PyExc_TypeError,
> + "getaddrinfo() argument 1 must be string or None");
> + return NULL;
> + }
> + if (PyLong_CheckExact(pobj)) {
> + long value = PyLong_AsLong(pobj);
> + if (value == -1 && PyErr_Occurred())
> + goto err;
> + PyOS_snprintf(pbuf, sizeof(pbuf), "%ld", value);
> + pptr = pbuf;
> + } else if (PyUnicode_Check(pobj)) {
> + pptr = PyUnicode_AsUTF8(pobj);
> + if (pptr == NULL)
> + goto err;
> + } else if (PyBytes_Check(pobj)) {
> + pptr = PyBytes_AS_STRING(pobj);
> + } else if (pobj == Py_None) {
> + pptr = (char *)NULL;
> + } else {
> + PyErr_SetString(PyExc_OSError, "Int or String expected");
> + goto err;
> + }
> +#if defined(__APPLE__) && defined(AI_NUMERICSERV)
> + if ((flags & AI_NUMERICSERV) && (pptr == NULL || (pptr[0] == '0' && pptr[1] == 0))) {
> + /* On OSX up to at least OSX 10.8 getaddrinfo crashes
> + * if AI_NUMERICSERV is set and the servname is NULL or "0".
> + * This workaround avoids a segfault in libsystem.
> + */
> + pptr = "00";
> + }
> +#endif
> + memset(&hints, 0, sizeof(hints));
> + hints.ai_family = family;
> + hints.ai_socktype = socktype;
> + hints.ai_protocol = protocol;
> + hints.ai_flags = flags;
> + Py_BEGIN_ALLOW_THREADS
> + ACQUIRE_GETADDRINFO_LOCK
> + error = getaddrinfo(hptr, pptr, &hints, &res0);
> + Py_END_ALLOW_THREADS
> + RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
> + if (error) {
> + set_gaierror(error);
> + goto err;
> + }
> +
> + all = PyList_New(0);
> + if (all == NULL)
> + goto err;
> + for (res = res0; res; res = res->ai_next) {
> + PyObject *single;
> + PyObject *addr =
> + makesockaddr(-1, res->ai_addr, res->ai_addrlen, protocol);
> + if (addr == NULL)
> + goto err;
> + single = Py_BuildValue("iiisO", res->ai_family,
> + res->ai_socktype, res->ai_protocol,
> + res->ai_canonname ? res->ai_canonname : "",
> + addr);
> + Py_DECREF(addr);
> + if (single == NULL)
> + goto err;
> +
> + if (PyList_Append(all, single)) {
> + Py_DECREF(single);
> + goto err;
> + }
> + Py_DECREF(single);
> + }
> + Py_XDECREF(idna);
> + if (res0)
> + freeaddrinfo(res0);
> + return all;
> + err:
> + Py_XDECREF(all);
> + Py_XDECREF(idna);
> + if (res0)
> + freeaddrinfo(res0);
> + return (PyObject *)NULL;
> +}
> +
> +PyDoc_STRVAR(getaddrinfo_doc,
> +"getaddrinfo(host, port [, family, type, proto, flags])\n\
> + -> list of (family, type, proto, canonname, sockaddr)\n\
> +\n\
> +Resolve host and port into addrinfo struct.");
> +
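
Typical Python-level use of the resolver wrapper above; "localhost" keeps the sketch independent of external DNS:

    import socket
    for family, type_, proto, canonname, sockaddr in socket.getaddrinfo(
            "localhost", 80, socket.AF_INET, socket.SOCK_STREAM):
        print(sockaddr)          # e.g. ('127.0.0.1', 80)
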
> +/* Python interface to getnameinfo(sa, flags). */
> +
> +/*ARGSUSED*/
> +static PyObject *
> +socket_getnameinfo(PyObject *self, PyObject *args)
> +{
> + PyObject *sa = (PyObject *)NULL;
> + int flags;
> + const char *hostp;
> + int port;
> + unsigned int flowinfo, scope_id;
> + char hbuf[NI_MAXHOST], pbuf[NI_MAXSERV];
> + struct addrinfo hints, *res = NULL;
> + int error;
> + PyObject *ret = (PyObject *)NULL;
> + PyObject *name;
> +
> + flags = flowinfo = scope_id = 0;
> + if (!PyArg_ParseTuple(args, "Oi:getnameinfo", &sa, &flags))
> + return NULL;
> + if (!PyTuple_Check(sa)) {
> + PyErr_SetString(PyExc_TypeError,
> + "getnameinfo() argument 1 must be a tuple");
> + return NULL;
> + }
> + if (!PyArg_ParseTuple(sa, "si|II",
> + &hostp, &port, &flowinfo, &scope_id))
> + return NULL;
> + if (flowinfo > 0xfffff) {
> + PyErr_SetString(PyExc_OverflowError,
> + "getsockaddrarg: flowinfo must be 0-1048575.");
> + return NULL;
> + }
> + PyOS_snprintf(pbuf, sizeof(pbuf), "%d", port);
> + memset(&hints, 0, sizeof(hints));
> + hints.ai_family = AF_UNSPEC;
> + hints.ai_socktype = SOCK_DGRAM; /* make numeric port happy */
> + hints.ai_flags = AI_NUMERICHOST; /* don't do any name resolution */
> + Py_BEGIN_ALLOW_THREADS
> + ACQUIRE_GETADDRINFO_LOCK
> + error = getaddrinfo(hostp, pbuf, &hints, &res);
> + Py_END_ALLOW_THREADS
> + RELEASE_GETADDRINFO_LOCK /* see comment in setipaddr() */
> + if (error) {
> + set_gaierror(error);
> + goto fail;
> + }
> + if (res->ai_next) {
> + PyErr_SetString(PyExc_OSError,
> + "sockaddr resolved to multiple addresses");
> + goto fail;
> + }
> + switch (res->ai_family) {
> + case AF_INET:
> + {
> + if (PyTuple_GET_SIZE(sa) != 2) {
> + PyErr_SetString(PyExc_OSError,
> + "IPv4 sockaddr must be 2 tuple");
> + goto fail;
> + }
> + break;
> + }
> +#ifdef ENABLE_IPV6
> + case AF_INET6:
> + {
> + struct sockaddr_in6 *sin6;
> + sin6 = (struct sockaddr_in6 *)res->ai_addr;
> + sin6->sin6_flowinfo = htonl(flowinfo);
> + sin6->sin6_scope_id = scope_id;
> + break;
> + }
> +#endif
> + }
> + error = getnameinfo(res->ai_addr, (socklen_t) res->ai_addrlen,
> + hbuf, sizeof(hbuf), pbuf, sizeof(pbuf), flags);
> + if (error) {
> + set_gaierror(error);
> + goto fail;
> + }
> +
> + name = sock_decode_hostname(hbuf);
> + if (name == NULL)
> + goto fail;
> + ret = Py_BuildValue("Ns", name, pbuf);
> +
> +fail:
> + if (res)
> + freeaddrinfo(res);
> + return ret;
> +}
> +
> +PyDoc_STRVAR(getnameinfo_doc,
> +"getnameinfo(sockaddr, flags) --> (host, port)\n\
> +\n\
> +Get host and port for a sockaddr.");
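
socket_getnameinfo() performs the reverse mapping: it first re-parses the given numeric address with getaddrinfo(AI_NUMERICHOST) and then calls getnameinfo() on the single result. A minimal sketch of that second step in plain C (host POSIX libc assumed; buffer sizes are illustrative):

    /* Sketch of a getnameinfo() reverse lookup on an IPv4 sockaddr.
     * Host POSIX libc assumed; buffer sizes are illustrative. */
    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct sockaddr_in sa;
        char host[1025], serv[32];
        int err;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(80);
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        err = getnameinfo((struct sockaddr *)&sa, sizeof(sa),
                          host, sizeof(host), serv, sizeof(serv),
                          NI_NUMERICSERV);
        if (err) {
            fprintf(stderr, "getnameinfo: %s\n", gai_strerror(err));
            return 1;
        }
        printf("host=%s serv=%s\n", host, serv);
        return 0;
    }
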
> +
> +
> +/* Python API to getting and setting the default timeout value. */
> +
> +static PyObject *
> +socket_getdefaulttimeout(PyObject *self)
> +{
> + if (defaulttimeout < 0) {
> + Py_INCREF(Py_None);
> + return Py_None;
> + }
> + else {
> + double seconds = _PyTime_AsSecondsDouble(defaulttimeout);
> + return PyFloat_FromDouble(seconds);
> + }
> +}
> +
> +PyDoc_STRVAR(getdefaulttimeout_doc,
> +"getdefaulttimeout() -> timeout\n\
> +\n\
> +Returns the default timeout in seconds (float) for new socket objects.\n\
> +A value of None indicates that new socket objects have no timeout.\n\
> +When the socket module is first imported, the default is None.");
> +
> +static PyObject *
> +socket_setdefaulttimeout(PyObject *self, PyObject *arg)
> +{
> + _PyTime_t timeout;
> +
> + if (socket_parse_timeout(&timeout, arg) < 0)
> + return NULL;
> +
> + defaulttimeout = timeout;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(setdefaulttimeout_doc,
> +"setdefaulttimeout(timeout)\n\
> +\n\
> +Set the default timeout in seconds (float) for new socket objects.\n\
> +A value of None indicates that new socket objects have no timeout.\n\
> +When the socket module is first imported, the default is None.");
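
These two helpers only read and write the module-level default that newly created sockets pick up; the docstrings spell out the semantics (None means blocking, a float is seconds). A hedged sketch of exercising them from an embedding program, assuming a host CPython build rather than the UEFI shell:

    /* Hedged sketch: drive get/setdefaulttimeout() from an embedding program.
     * Assumes a host CPython build (compile/link via python3-config --embed). */
    #include <Python.h>

    int main(void)
    {
        Py_Initialize();
        PyRun_SimpleString(
            "import socket\n"
            "print(socket.getdefaulttimeout())  # None on first import\n"
            "socket.setdefaulttimeout(5.0)      # new sockets time out after 5s\n"
            "print(socket.getdefaulttimeout())\n"
            "socket.setdefaulttimeout(None)     # back to blocking sockets\n");
        return Py_FinalizeEx() < 0 ? 1 : 0;
    }
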
> +
> +#ifdef HAVE_IF_NAMEINDEX
> +/* Python API for getting interface indices and names */
> +
> +static PyObject *
> +socket_if_nameindex(PyObject *self, PyObject *arg)
> +{
> + PyObject *list;
> + int i;
> + struct if_nameindex *ni;
> +
> + ni = if_nameindex();
> + if (ni == NULL) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + list = PyList_New(0);
> + if (list == NULL) {
> + if_freenameindex(ni);
> + return NULL;
> + }
> +
> + for (i = 0; ni[i].if_index != 0 && i < INT_MAX; i++) {
> + PyObject *ni_tuple = Py_BuildValue("IO&",
> + ni[i].if_index, PyUnicode_DecodeFSDefault, ni[i].if_name);
> +
> + if (ni_tuple == NULL || PyList_Append(list, ni_tuple) == -1) {
> + Py_XDECREF(ni_tuple);
> + Py_DECREF(list);
> + if_freenameindex(ni);
> + return NULL;
> + }
> + Py_DECREF(ni_tuple);
> + }
> +
> + if_freenameindex(ni);
> + return list;
> +}
> +
> +PyDoc_STRVAR(if_nameindex_doc,
> +"if_nameindex()\n\
> +\n\
> +Returns a list of network interface information (index, name) tuples.");
> +
> +static PyObject *
> +socket_if_nametoindex(PyObject *self, PyObject *args)
> +{
> + PyObject *oname;
> + unsigned long index;
> +
> + if (!PyArg_ParseTuple(args, "O&:if_nametoindex",
> + PyUnicode_FSConverter, &oname))
> + return NULL;
> +
> + index = if_nametoindex(PyBytes_AS_STRING(oname));
> + Py_DECREF(oname);
> + if (index == 0) {
> + /* if_nametoindex() doesn't set errno */
> + PyErr_SetString(PyExc_OSError, "no interface with this name");
> + return NULL;
> + }
> +
> + return PyLong_FromUnsignedLong(index);
> +}
> +
> +PyDoc_STRVAR(if_nametoindex_doc,
> +"if_nametoindex(if_name)\n\
> +\n\
> +Returns the interface index corresponding to the interface name if_name.");
> +
> +static PyObject *
> +socket_if_indextoname(PyObject *self, PyObject *arg)
> +{
> + unsigned long index;
> + char name[IF_NAMESIZE + 1];
> +
> + index = PyLong_AsUnsignedLong(arg);
> + if (index == (unsigned long) -1)
> + return NULL;
> +
> + if (if_indextoname(index, name) == NULL) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + return PyUnicode_DecodeFSDefault(name);
> +}
> +
> +PyDoc_STRVAR(if_indextoname_doc,
> +"if_indextoname(if_index)\n\
> +\n\
> +Returns the interface name corresponding to the interface index if_index.");
> +
> +#endif /* HAVE_IF_NAMEINDEX */
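
The three wrappers above map directly onto the if_nameindex()/if_nametoindex()/if_indextoname() trio from <net/if.h>, which is why the whole block sits behind HAVE_IF_NAMEINDEX. A small stand-alone sketch of that C interface (POSIX host assumed; the "lo" name is only a common Linux convention):

    /* Sketch of the <net/if.h> interface wrapped above (POSIX host assumed). */
    #include <stdio.h>
    #include <net/if.h>

    int main(void)
    {
        struct if_nameindex *ni, *p;
        char name[IF_NAMESIZE];

        ni = if_nameindex();
        if (ni == NULL) {
            perror("if_nameindex");
            return 1;
        }
        for (p = ni; p->if_index != 0; p++)         /* zero index terminates */
            printf("%u\t%s\n", p->if_index, p->if_name);
        if_freenameindex(ni);

        if (if_indextoname(1, name) != NULL)        /* index 1 is often loopback */
            printf("index 1 -> %s\n", name);
        printf("lo -> %u\n", if_nametoindex("lo")); /* 0 means: no such interface */
        return 0;
    }
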
> +
> +
> +#ifdef CMSG_LEN
> +/* Python interface to CMSG_LEN(length). */
> +
> +static PyObject *
> +socket_CMSG_LEN(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t length;
> + size_t result;
> +
> + if (!PyArg_ParseTuple(args, "n:CMSG_LEN", &length))
> + return NULL;
> + if (length < 0 || !get_CMSG_LEN(length, &result)) {
> + PyErr_Format(PyExc_OverflowError, "CMSG_LEN() argument out of range");
> + return NULL;
> + }
> + return PyLong_FromSize_t(result);
> +}
> +
> +PyDoc_STRVAR(CMSG_LEN_doc,
> +"CMSG_LEN(length) -> control message length\n\
> +\n\
> +Return the total length, without trailing padding, of an ancillary\n\
> +data item with associated data of the given length. This value can\n\
> +often be used as the buffer size for recvmsg() to receive a single\n\
> +item of ancillary data, but RFC 3542 requires portable applications to\n\
> +use CMSG_SPACE() and thus include space for padding, even when the\n\
> +item will be the last in the buffer. Raises OverflowError if length\n\
> +is outside the permissible range of values.");
> +
> +
> +#ifdef CMSG_SPACE
> +/* Python interface to CMSG_SPACE(length). */
> +
> +static PyObject *
> +socket_CMSG_SPACE(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t length;
> + size_t result;
> +
> + if (!PyArg_ParseTuple(args, "n:CMSG_SPACE", &length))
> + return NULL;
> + if (length < 0 || !get_CMSG_SPACE(length, &result)) {
> + PyErr_SetString(PyExc_OverflowError,
> + "CMSG_SPACE() argument out of range");
> + return NULL;
> + }
> + return PyLong_FromSize_t(result);
> +}
> +
> +PyDoc_STRVAR(CMSG_SPACE_doc,
> +"CMSG_SPACE(length) -> buffer size\n\
> +\n\
> +Return the buffer size needed for recvmsg() to receive an ancillary\n\
> +data item with associated data of the given length, along with any\n\
> +trailing padding. The buffer space needed to receive multiple items\n\
> +is the sum of the CMSG_SPACE() values for their associated data\n\
> +lengths. Raises OverflowError if length is outside the permissible\n\
> +range of values.");
> +#endif /* CMSG_SPACE */
> +#endif /* CMSG_LEN */
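
CMSG_LEN() and CMSG_SPACE() differ only in whether trailing alignment padding is counted, which is why the docstrings steer portable code toward CMSG_SPACE() for buffer sizing. A short sketch of the usual recvmsg() buffer setup (POSIX host assumed; the UEFI build undefines CMSG_LEN in socketmodule.h, so this whole block is compiled out there):

    /* Sketch: sizing an ancillary-data buffer for one file descriptor
     * (SCM_RIGHTS).  POSIX host assumed. */
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void)
    {
        union {
            char buf[CMSG_SPACE(sizeof(int))];  /* header + payload + padding */
            struct cmsghdr align;               /* forces correct alignment */
        } control;
        struct msghdr msg = {0};

        msg.msg_control = control.buf;
        msg.msg_controllen = sizeof(control.buf);

        printf("CMSG_LEN   = %zu\n", (size_t)CMSG_LEN(sizeof(int)));
        printf("CMSG_SPACE = %zu\n", (size_t)CMSG_SPACE(sizeof(int)));
        printf("controllen = %zu\n", (size_t)msg.msg_controllen);
        return 0;
    }
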
> +
> +
> +/* List of functions exported by this module. */
> +
> +static PyMethodDef socket_methods[] = {
> + {"gethostbyname", socket_gethostbyname,
> + METH_VARARGS, gethostbyname_doc},
> + {"gethostbyname_ex", socket_gethostbyname_ex,
> + METH_VARARGS, ghbn_ex_doc},
> + {"gethostbyaddr", socket_gethostbyaddr,
> + METH_VARARGS, gethostbyaddr_doc},
> + {"gethostname", socket_gethostname,
> + METH_NOARGS, gethostname_doc},
> +#ifdef HAVE_SETHOSTNAME
> + {"sethostname", socket_sethostname,
> + METH_VARARGS, sethostname_doc},
> +#endif
> + {"getservbyname", socket_getservbyname,
> + METH_VARARGS, getservbyname_doc},
> + {"getservbyport", socket_getservbyport,
> + METH_VARARGS, getservbyport_doc},
> + {"getprotobyname", socket_getprotobyname,
> + METH_VARARGS, getprotobyname_doc},
> +#ifndef NO_DUP
> + {"dup", socket_dup,
> + METH_O, dup_doc},
> +#endif
> +#ifdef HAVE_SOCKETPAIR
> + {"socketpair", socket_socketpair,
> + METH_VARARGS, socketpair_doc},
> +#endif
> + {"ntohs", socket_ntohs,
> + METH_VARARGS, ntohs_doc},
> + {"ntohl", socket_ntohl,
> + METH_O, ntohl_doc},
> + {"htons", socket_htons,
> + METH_VARARGS, htons_doc},
> + {"htonl", socket_htonl,
> + METH_O, htonl_doc},
> + {"inet_aton", socket_inet_aton,
> + METH_VARARGS, inet_aton_doc},
> + {"inet_ntoa", socket_inet_ntoa,
> + METH_VARARGS, inet_ntoa_doc},
> +#if defined(HAVE_INET_PTON) || defined(MS_WINDOWS)
> + {"inet_pton", socket_inet_pton,
> + METH_VARARGS, inet_pton_doc},
> + {"inet_ntop", socket_inet_ntop,
> + METH_VARARGS, inet_ntop_doc},
> +#endif
> + {"getaddrinfo", (PyCFunction)socket_getaddrinfo,
> + METH_VARARGS | METH_KEYWORDS, getaddrinfo_doc},
> + {"getnameinfo", socket_getnameinfo,
> + METH_VARARGS, getnameinfo_doc},
> + {"getdefaulttimeout", (PyCFunction)socket_getdefaulttimeout,
> + METH_NOARGS, getdefaulttimeout_doc},
> + {"setdefaulttimeout", socket_setdefaulttimeout,
> + METH_O, setdefaulttimeout_doc},
> +#ifdef HAVE_IF_NAMEINDEX
> + {"if_nameindex", socket_if_nameindex,
> + METH_NOARGS, if_nameindex_doc},
> + {"if_nametoindex", socket_if_nametoindex,
> + METH_VARARGS, if_nametoindex_doc},
> + {"if_indextoname", socket_if_indextoname,
> + METH_O, if_indextoname_doc},
> +#endif
> +#ifdef CMSG_LEN
> + {"CMSG_LEN", socket_CMSG_LEN,
> + METH_VARARGS, CMSG_LEN_doc},
> +#ifdef CMSG_SPACE
> + {"CMSG_SPACE", socket_CMSG_SPACE,
> + METH_VARARGS, CMSG_SPACE_doc},
> +#endif
> +#endif
> + {NULL, NULL} /* Sentinel */
> +};
> +
> +
> +#ifdef MS_WINDOWS
> +#define OS_INIT_DEFINED
> +
> +/* Additional initialization and cleanup for Windows */
> +
> +static void
> +os_cleanup(void)
> +{
> + WSACleanup();
> +}
> +
> +static int
> +os_init(void)
> +{
> + WSADATA WSAData;
> + int ret;
> + ret = WSAStartup(0x0101, &WSAData);
> + switch (ret) {
> + case 0: /* No error */
> + Py_AtExit(os_cleanup);
> + return 1; /* Success */
> + case WSASYSNOTREADY:
> + PyErr_SetString(PyExc_ImportError,
> + "WSAStartup failed: network not ready");
> + break;
> + case WSAVERNOTSUPPORTED:
> + case WSAEINVAL:
> + PyErr_SetString(
> + PyExc_ImportError,
> + "WSAStartup failed: requested version not supported");
> + break;
> + default:
> + PyErr_Format(PyExc_ImportError, "WSAStartup failed: error code %d", ret);
> + break;
> + }
> + return 0; /* Failure */
> +}
> +
> +#endif /* MS_WINDOWS */
> +
> +
> +
> +#ifndef OS_INIT_DEFINED
> +static int
> +os_init(void)
> +{
> + return 1; /* Success */
> +}
> +#endif
> +
> +
> +/* C API table - always add new things to the end for binary
> + compatibility. */
> +static
> +PySocketModule_APIObject PySocketModuleAPI =
> +{
> + &sock_type,
> + NULL,
> + NULL
> +};
> +
> +
> +/* Initialize the _socket module.
> +
> + This module is actually called "_socket", and there's a wrapper
> + "socket.py" which implements some additional functionality.
> + The import of "_socket" may fail with an ImportError exception if
> + os-specific initialization fails. On Windows, this does WINSOCK
> + initialization. When WINSOCK is initialized successfully, a call to
> + WSACleanup() is scheduled to be made at exit time.
> +*/
> +
> +PyDoc_STRVAR(socket_doc,
> +"Implementation module for socket operations.\n\
> +\n\
> +See the socket module for documentation.");
> +
> +static struct PyModuleDef socketmodule = {
> + PyModuleDef_HEAD_INIT,
> + PySocket_MODULE_NAME,
> + socket_doc,
> + -1,
> + socket_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit__socket(void)
> +{
> + PyObject *m, *has_ipv6;
> +
> + if (!os_init())
> + return NULL;
> +
> +#ifdef MS_WINDOWS
> + if (support_wsa_no_inherit == -1) {
> + support_wsa_no_inherit = IsWindows7SP1OrGreater();
> + }
> +#endif
> +
> + Py_TYPE(&sock_type) = &PyType_Type;
> + m = PyModule_Create(&socketmodule);
> + if (m == NULL)
> + return NULL;
> +
> + Py_INCREF(PyExc_OSError);
> + PySocketModuleAPI.error = PyExc_OSError;
> + Py_INCREF(PyExc_OSError);
> + PyModule_AddObject(m, "error", PyExc_OSError);
> + socket_herror = PyErr_NewException("socket.herror",
> + PyExc_OSError, NULL);
> + if (socket_herror == NULL)
> + return NULL;
> + Py_INCREF(socket_herror);
> + PyModule_AddObject(m, "herror", socket_herror);
> + socket_gaierror = PyErr_NewException("socket.gaierror", PyExc_OSError,
> + NULL);
> + if (socket_gaierror == NULL)
> + return NULL;
> + Py_INCREF(socket_gaierror);
> + PyModule_AddObject(m, "gaierror", socket_gaierror);
> + socket_timeout = PyErr_NewException("socket.timeout",
> + PyExc_OSError, NULL);
> + if (socket_timeout == NULL)
> + return NULL;
> + PySocketModuleAPI.timeout_error = socket_timeout;
> + Py_INCREF(socket_timeout);
> + PyModule_AddObject(m, "timeout", socket_timeout);
> + Py_INCREF((PyObject *)&sock_type);
> + if (PyModule_AddObject(m, "SocketType",
> + (PyObject *)&sock_type) != 0)
> + return NULL;
> + Py_INCREF((PyObject *)&sock_type);
> + if (PyModule_AddObject(m, "socket",
> + (PyObject *)&sock_type) != 0)
> + return NULL;
> +
> +#ifdef ENABLE_IPV6
> + has_ipv6 = Py_True;
> +#else
> + has_ipv6 = Py_False;
> +#endif
> + Py_INCREF(has_ipv6);
> + PyModule_AddObject(m, "has_ipv6", has_ipv6);
> +
> + /* Export C API */
> + if (PyModule_AddObject(m, PySocket_CAPI_NAME,
> + PyCapsule_New(&PySocketModuleAPI, PySocket_CAPSULE_NAME, NULL)
> + ) != 0)
> + return NULL;
> +
> + /* Address families (we only support AF_INET and AF_UNIX) */
> +#ifdef AF_UNSPEC
> + PyModule_AddIntMacro(m, AF_UNSPEC);
> +#endif
> + PyModule_AddIntMacro(m, AF_INET);
> +#if defined(AF_UNIX)
> + PyModule_AddIntMacro(m, AF_UNIX);
> +#endif /* AF_UNIX */
> +#ifdef AF_AX25
> + /* Amateur Radio AX.25 */
> + PyModule_AddIntMacro(m, AF_AX25);
> +#endif
> +#ifdef AF_IPX
> + PyModule_AddIntMacro(m, AF_IPX); /* Novell IPX */
> +#endif
> +#ifdef AF_APPLETALK
> + /* Appletalk DDP */
> + PyModule_AddIntMacro(m, AF_APPLETALK);
> +#endif
> +#ifdef AF_NETROM
> + /* Amateur radio NetROM */
> + PyModule_AddIntMacro(m, AF_NETROM);
> +#endif
> +#ifdef AF_BRIDGE
> + /* Multiprotocol bridge */
> + PyModule_AddIntMacro(m, AF_BRIDGE);
> +#endif
> +#ifdef AF_ATMPVC
> + /* ATM PVCs */
> + PyModule_AddIntMacro(m, AF_ATMPVC);
> +#endif
> +#ifdef AF_AAL5
> + /* Reserved for Werner's ATM */
> + PyModule_AddIntMacro(m, AF_AAL5);
> +#endif
> +#ifdef HAVE_SOCKADDR_ALG
> + PyModule_AddIntMacro(m, AF_ALG); /* Linux crypto */
> +#endif
> +#ifdef AF_X25
> + /* Reserved for X.25 project */
> + PyModule_AddIntMacro(m, AF_X25);
> +#endif
> +#ifdef AF_INET6
> + PyModule_AddIntMacro(m, AF_INET6); /* IP version 6 */
> +#endif
> +#ifdef AF_ROSE
> + /* Amateur Radio X.25 PLP */
> + PyModule_AddIntMacro(m, AF_ROSE);
> +#endif
> +#ifdef AF_DECnet
> + /* Reserved for DECnet project */
> + PyModule_AddIntMacro(m, AF_DECnet);
> +#endif
> +#ifdef AF_NETBEUI
> + /* Reserved for 802.2LLC project */
> + PyModule_AddIntMacro(m, AF_NETBEUI);
> +#endif
> +#ifdef AF_SECURITY
> + /* Security callback pseudo AF */
> + PyModule_AddIntMacro(m, AF_SECURITY);
> +#endif
> +#ifdef AF_KEY
> + /* PF_KEY key management API */
> + PyModule_AddIntMacro(m, AF_KEY);
> +#endif
> +#ifdef AF_NETLINK
> + /* */
> + PyModule_AddIntMacro(m, AF_NETLINK);
> + PyModule_AddIntMacro(m, NETLINK_ROUTE);
> +#ifdef NETLINK_SKIP
> + PyModule_AddIntMacro(m, NETLINK_SKIP);
> +#endif
> +#ifdef NETLINK_W1
> + PyModule_AddIntMacro(m, NETLINK_W1);
> +#endif
> + PyModule_AddIntMacro(m, NETLINK_USERSOCK);
> + PyModule_AddIntMacro(m, NETLINK_FIREWALL);
> +#ifdef NETLINK_TCPDIAG
> + PyModule_AddIntMacro(m, NETLINK_TCPDIAG);
> +#endif
> +#ifdef NETLINK_NFLOG
> + PyModule_AddIntMacro(m, NETLINK_NFLOG);
> +#endif
> +#ifdef NETLINK_XFRM
> + PyModule_AddIntMacro(m, NETLINK_XFRM);
> +#endif
> +#ifdef NETLINK_ARPD
> + PyModule_AddIntMacro(m, NETLINK_ARPD);
> +#endif
> +#ifdef NETLINK_ROUTE6
> + PyModule_AddIntMacro(m, NETLINK_ROUTE6);
> +#endif
> + PyModule_AddIntMacro(m, NETLINK_IP6_FW);
> +#ifdef NETLINK_DNRTMSG
> + PyModule_AddIntMacro(m, NETLINK_DNRTMSG);
> +#endif
> +#ifdef NETLINK_TAPBASE
> + PyModule_AddIntMacro(m, NETLINK_TAPBASE);
> +#endif
> +#ifdef NETLINK_CRYPTO
> + PyModule_AddIntMacro(m, NETLINK_CRYPTO);
> +#endif
> +#endif /* AF_NETLINK */
> +#ifdef AF_ROUTE
> + /* Alias to emulate 4.4BSD */
> + PyModule_AddIntMacro(m, AF_ROUTE);
> +#endif
> +#ifdef AF_LINK
> + PyModule_AddIntMacro(m, AF_LINK);
> +#endif
> +#ifdef AF_ASH
> + /* Ash */
> + PyModule_AddIntMacro(m, AF_ASH);
> +#endif
> +#ifdef AF_ECONET
> + /* Acorn Econet */
> + PyModule_AddIntMacro(m, AF_ECONET);
> +#endif
> +#ifdef AF_ATMSVC
> + /* ATM SVCs */
> + PyModule_AddIntMacro(m, AF_ATMSVC);
> +#endif
> +#ifdef AF_SNA
> + /* Linux SNA Project (nutters!) */
> + PyModule_AddIntMacro(m, AF_SNA);
> +#endif
> +#ifdef AF_IRDA
> + /* IRDA sockets */
> + PyModule_AddIntMacro(m, AF_IRDA);
> +#endif
> +#ifdef AF_PPPOX
> + /* PPPoX sockets */
> + PyModule_AddIntMacro(m, AF_PPPOX);
> +#endif
> +#ifdef AF_WANPIPE
> + /* Wanpipe API Sockets */
> + PyModule_AddIntMacro(m, AF_WANPIPE);
> +#endif
> +#ifdef AF_LLC
> + /* Linux LLC */
> + PyModule_AddIntMacro(m, AF_LLC);
> +#endif
> +
> +#ifdef USE_BLUETOOTH
> + PyModule_AddIntMacro(m, AF_BLUETOOTH);
> + PyModule_AddIntMacro(m, BTPROTO_L2CAP);
> + PyModule_AddIntMacro(m, BTPROTO_HCI);
> + PyModule_AddIntMacro(m, SOL_HCI);
> +#if !defined(__NetBSD__) && !defined(__DragonFly__)
> + PyModule_AddIntMacro(m, HCI_FILTER);
> +#endif
> +#if !defined(__FreeBSD__)
> +#if !defined(__NetBSD__) && !defined(__DragonFly__)
> + PyModule_AddIntMacro(m, HCI_TIME_STAMP);
> +#endif
> + PyModule_AddIntMacro(m, HCI_DATA_DIR);
> + PyModule_AddIntMacro(m, BTPROTO_SCO);
> +#endif
> + PyModule_AddIntMacro(m, BTPROTO_RFCOMM);
> + PyModule_AddStringConstant(m, "BDADDR_ANY", "00:00:00:00:00:00");
> + PyModule_AddStringConstant(m, "BDADDR_LOCAL", "00:00:00:FF:FF:FF");
> +#endif
> +
> +#ifdef AF_CAN
> + /* Controller Area Network */
> + PyModule_AddIntMacro(m, AF_CAN);
> +#endif
> +#ifdef PF_CAN
> + /* Controller Area Network */
> + PyModule_AddIntMacro(m, PF_CAN);
> +#endif
> +
> +/* Reliable Datagram Sockets */
> +#ifdef AF_RDS
> + PyModule_AddIntMacro(m, AF_RDS);
> +#endif
> +#ifdef PF_RDS
> + PyModule_AddIntMacro(m, PF_RDS);
> +#endif
> +
> +/* Kernel event messages */
> +#ifdef PF_SYSTEM
> + PyModule_AddIntMacro(m, PF_SYSTEM);
> +#endif
> +#ifdef AF_SYSTEM
> + PyModule_AddIntMacro(m, AF_SYSTEM);
> +#endif
> +
> +#ifdef AF_PACKET
> + PyModule_AddIntMacro(m, AF_PACKET);
> +#endif
> +#ifdef PF_PACKET
> + PyModule_AddIntMacro(m, PF_PACKET);
> +#endif
> +#ifdef PACKET_HOST
> + PyModule_AddIntMacro(m, PACKET_HOST);
> +#endif
> +#ifdef PACKET_BROADCAST
> + PyModule_AddIntMacro(m, PACKET_BROADCAST);
> +#endif
> +#ifdef PACKET_MULTICAST
> + PyModule_AddIntMacro(m, PACKET_MULTICAST);
> +#endif
> +#ifdef PACKET_OTHERHOST
> + PyModule_AddIntMacro(m, PACKET_OTHERHOST);
> +#endif
> +#ifdef PACKET_OUTGOING
> + PyModule_AddIntMacro(m, PACKET_OUTGOING);
> +#endif
> +#ifdef PACKET_LOOPBACK
> + PyModule_AddIntMacro(m, PACKET_LOOPBACK);
> +#endif
> +#ifdef PACKET_FASTROUTE
> + PyModule_AddIntMacro(m, PACKET_FASTROUTE);
> +#endif
> +
> +#ifdef HAVE_LINUX_TIPC_H
> + PyModule_AddIntMacro(m, AF_TIPC);
> +
> + /* for addresses */
> + PyModule_AddIntMacro(m, TIPC_ADDR_NAMESEQ);
> + PyModule_AddIntMacro(m, TIPC_ADDR_NAME);
> + PyModule_AddIntMacro(m, TIPC_ADDR_ID);
> +
> + PyModule_AddIntMacro(m, TIPC_ZONE_SCOPE);
> + PyModule_AddIntMacro(m, TIPC_CLUSTER_SCOPE);
> + PyModule_AddIntMacro(m, TIPC_NODE_SCOPE);
> +
> + /* for setsockopt() */
> + PyModule_AddIntMacro(m, SOL_TIPC);
> + PyModule_AddIntMacro(m, TIPC_IMPORTANCE);
> + PyModule_AddIntMacro(m, TIPC_SRC_DROPPABLE);
> + PyModule_AddIntMacro(m, TIPC_DEST_DROPPABLE);
> + PyModule_AddIntMacro(m, TIPC_CONN_TIMEOUT);
> +
> + PyModule_AddIntMacro(m, TIPC_LOW_IMPORTANCE);
> + PyModule_AddIntMacro(m, TIPC_MEDIUM_IMPORTANCE);
> + PyModule_AddIntMacro(m, TIPC_HIGH_IMPORTANCE);
> + PyModule_AddIntMacro(m, TIPC_CRITICAL_IMPORTANCE);
> +
> + /* for subscriptions */
> + PyModule_AddIntMacro(m, TIPC_SUB_PORTS);
> + PyModule_AddIntMacro(m, TIPC_SUB_SERVICE);
> +#ifdef TIPC_SUB_CANCEL
> + /* doesn't seem to be available everywhere */
> + PyModule_AddIntMacro(m, TIPC_SUB_CANCEL);
> +#endif
> + PyModule_AddIntMacro(m, TIPC_WAIT_FOREVER);
> + PyModule_AddIntMacro(m, TIPC_PUBLISHED);
> + PyModule_AddIntMacro(m, TIPC_WITHDRAWN);
> + PyModule_AddIntMacro(m, TIPC_SUBSCR_TIMEOUT);
> + PyModule_AddIntMacro(m, TIPC_CFG_SRV);
> + PyModule_AddIntMacro(m, TIPC_TOP_SRV);
> +#endif
> +
> +#ifdef HAVE_SOCKADDR_ALG
> + /* Socket options */
> + PyModule_AddIntMacro(m, ALG_SET_KEY);
> + PyModule_AddIntMacro(m, ALG_SET_IV);
> + PyModule_AddIntMacro(m, ALG_SET_OP);
> + PyModule_AddIntMacro(m, ALG_SET_AEAD_ASSOCLEN);
> + PyModule_AddIntMacro(m, ALG_SET_AEAD_AUTHSIZE);
> + PyModule_AddIntMacro(m, ALG_SET_PUBKEY);
> +
> + /* Operations */
> + PyModule_AddIntMacro(m, ALG_OP_DECRYPT);
> + PyModule_AddIntMacro(m, ALG_OP_ENCRYPT);
> + PyModule_AddIntMacro(m, ALG_OP_SIGN);
> + PyModule_AddIntMacro(m, ALG_OP_VERIFY);
> +#endif
> +
> + /* Socket types */
> + PyModule_AddIntMacro(m, SOCK_STREAM);
> + PyModule_AddIntMacro(m, SOCK_DGRAM);
> +/* We have incomplete socket support. */
> +#ifdef SOCK_RAW
> + /* SOCK_RAW is marked as optional in the POSIX specification */
> + PyModule_AddIntMacro(m, SOCK_RAW);
> +#endif
> + PyModule_AddIntMacro(m, SOCK_SEQPACKET);
> +#if defined(SOCK_RDM)
> + PyModule_AddIntMacro(m, SOCK_RDM);
> +#endif
> +#ifdef SOCK_CLOEXEC
> + PyModule_AddIntMacro(m, SOCK_CLOEXEC);
> +#endif
> +#ifdef SOCK_NONBLOCK
> + PyModule_AddIntMacro(m, SOCK_NONBLOCK);
> +#endif
> +
> +#ifdef SO_DEBUG
> + PyModule_AddIntMacro(m, SO_DEBUG);
> +#endif
> +#ifdef SO_ACCEPTCONN
> + PyModule_AddIntMacro(m, SO_ACCEPTCONN);
> +#endif
> +#ifdef SO_REUSEADDR
> + PyModule_AddIntMacro(m, SO_REUSEADDR);
> +#endif
> +#ifdef SO_EXCLUSIVEADDRUSE
> + PyModule_AddIntMacro(m, SO_EXCLUSIVEADDRUSE);
> +#endif
> +
> +#ifdef SO_KEEPALIVE
> + PyModule_AddIntMacro(m, SO_KEEPALIVE);
> +#endif
> +#ifdef SO_DONTROUTE
> + PyModule_AddIntMacro(m, SO_DONTROUTE);
> +#endif
> +#ifdef SO_BROADCAST
> + PyModule_AddIntMacro(m, SO_BROADCAST);
> +#endif
> +#ifdef SO_USELOOPBACK
> + PyModule_AddIntMacro(m, SO_USELOOPBACK);
> +#endif
> +#ifdef SO_LINGER
> + PyModule_AddIntMacro(m, SO_LINGER);
> +#endif
> +#ifdef SO_OOBINLINE
> + PyModule_AddIntMacro(m, SO_OOBINLINE);
> +#endif
> +#ifndef __GNU__
> +#ifdef SO_REUSEPORT
> + PyModule_AddIntMacro(m, SO_REUSEPORT);
> +#endif
> +#endif
> +#ifdef SO_SNDBUF
> + PyModule_AddIntMacro(m, SO_SNDBUF);
> +#endif
> +#ifdef SO_RCVBUF
> + PyModule_AddIntMacro(m, SO_RCVBUF);
> +#endif
> +#ifdef SO_SNDLOWAT
> + PyModule_AddIntMacro(m, SO_SNDLOWAT);
> +#endif
> +#ifdef SO_RCVLOWAT
> + PyModule_AddIntMacro(m, SO_RCVLOWAT);
> +#endif
> +#ifdef SO_SNDTIMEO
> + PyModule_AddIntMacro(m, SO_SNDTIMEO);
> +#endif
> +#ifdef SO_RCVTIMEO
> + PyModule_AddIntMacro(m, SO_RCVTIMEO);
> +#endif
> +#ifdef SO_ERROR
> + PyModule_AddIntMacro(m, SO_ERROR);
> +#endif
> +#ifdef SO_TYPE
> + PyModule_AddIntMacro(m, SO_TYPE);
> +#endif
> +#ifdef SO_SETFIB
> + PyModule_AddIntMacro(m, SO_SETFIB);
> +#endif
> +#ifdef SO_PASSCRED
> + PyModule_AddIntMacro(m, SO_PASSCRED);
> +#endif
> +#ifdef SO_PEERCRED
> + PyModule_AddIntMacro(m, SO_PEERCRED);
> +#endif
> +#ifdef LOCAL_PEERCRED
> + PyModule_AddIntMacro(m, LOCAL_PEERCRED);
> +#endif
> +#ifdef SO_PASSSEC
> + PyModule_AddIntMacro(m, SO_PASSSEC);
> +#endif
> +#ifdef SO_PEERSEC
> + PyModule_AddIntMacro(m, SO_PEERSEC);
> +#endif
> +#ifdef SO_BINDTODEVICE
> + PyModule_AddIntMacro(m, SO_BINDTODEVICE);
> +#endif
> +#ifdef SO_PRIORITY
> + PyModule_AddIntMacro(m, SO_PRIORITY);
> +#endif
> +#ifdef SO_MARK
> + PyModule_AddIntMacro(m, SO_MARK);
> +#endif
> +#ifdef SO_DOMAIN
> + PyModule_AddIntMacro(m, SO_DOMAIN);
> +#endif
> +#ifdef SO_PROTOCOL
> + PyModule_AddIntMacro(m, SO_PROTOCOL);
> +#endif
> +
> + /* Maximum number of connections for "listen" */
> +#ifdef SOMAXCONN
> + PyModule_AddIntMacro(m, SOMAXCONN);
> +#else
> + PyModule_AddIntConstant(m, "SOMAXCONN", 5); /* Common value */
> +#endif
> +
> + /* Ancillary message types */
> +#ifdef SCM_RIGHTS
> + PyModule_AddIntMacro(m, SCM_RIGHTS);
> +#endif
> +#ifdef SCM_CREDENTIALS
> + PyModule_AddIntMacro(m, SCM_CREDENTIALS);
> +#endif
> +#ifdef SCM_CREDS
> + PyModule_AddIntMacro(m, SCM_CREDS);
> +#endif
> +
> + /* Flags for send, recv */
> +#ifdef MSG_OOB
> + PyModule_AddIntMacro(m, MSG_OOB);
> +#endif
> +#ifdef MSG_PEEK
> + PyModule_AddIntMacro(m, MSG_PEEK);
> +#endif
> +#ifdef MSG_DONTROUTE
> + PyModule_AddIntMacro(m, MSG_DONTROUTE);
> +#endif
> +#ifdef MSG_DONTWAIT
> + PyModule_AddIntMacro(m, MSG_DONTWAIT);
> +#endif
> +#ifdef MSG_EOR
> + PyModule_AddIntMacro(m, MSG_EOR);
> +#endif
> +#ifdef MSG_TRUNC
> + PyModule_AddIntMacro(m, MSG_TRUNC);
> +#endif
> +#ifdef MSG_CTRUNC
> + PyModule_AddIntMacro(m, MSG_CTRUNC);
> +#endif
> +#ifdef MSG_WAITALL
> + PyModule_AddIntMacro(m, MSG_WAITALL);
> +#endif
> +#ifdef MSG_BTAG
> + PyModule_AddIntMacro(m, MSG_BTAG);
> +#endif
> +#ifdef MSG_ETAG
> + PyModule_AddIntMacro(m, MSG_ETAG);
> +#endif
> +#ifdef MSG_NOSIGNAL
> + PyModule_AddIntMacro(m, MSG_NOSIGNAL);
> +#endif
> +#ifdef MSG_NOTIFICATION
> + PyModule_AddIntMacro(m, MSG_NOTIFICATION);
> +#endif
> +#ifdef MSG_CMSG_CLOEXEC
> + PyModule_AddIntMacro(m, MSG_CMSG_CLOEXEC);
> +#endif
> +#ifdef MSG_ERRQUEUE
> + PyModule_AddIntMacro(m, MSG_ERRQUEUE);
> +#endif
> +#ifdef MSG_CONFIRM
> + PyModule_AddIntMacro(m, MSG_CONFIRM);
> +#endif
> +#ifdef MSG_MORE
> + PyModule_AddIntMacro(m, MSG_MORE);
> +#endif
> +#ifdef MSG_EOF
> + PyModule_AddIntMacro(m, MSG_EOF);
> +#endif
> +#ifdef MSG_BCAST
> + PyModule_AddIntMacro(m, MSG_BCAST);
> +#endif
> +#ifdef MSG_MCAST
> + PyModule_AddIntMacro(m, MSG_MCAST);
> +#endif
> +#ifdef MSG_FASTOPEN
> + PyModule_AddIntMacro(m, MSG_FASTOPEN);
> +#endif
> +
> + /* Protocol level and numbers, usable for [gs]etsockopt */
> +#ifdef SOL_SOCKET
> + PyModule_AddIntMacro(m, SOL_SOCKET);
> +#endif
> +#ifdef SOL_IP
> + PyModule_AddIntMacro(m, SOL_IP);
> +#else
> + PyModule_AddIntConstant(m, "SOL_IP", 0);
> +#endif
> +#ifdef SOL_IPX
> + PyModule_AddIntMacro(m, SOL_IPX);
> +#endif
> +#ifdef SOL_AX25
> + PyModule_AddIntMacro(m, SOL_AX25);
> +#endif
> +#ifdef SOL_ATALK
> + PyModule_AddIntMacro(m, SOL_ATALK);
> +#endif
> +#ifdef SOL_NETROM
> + PyModule_AddIntMacro(m, SOL_NETROM);
> +#endif
> +#ifdef SOL_ROSE
> + PyModule_AddIntMacro(m, SOL_ROSE);
> +#endif
> +#ifdef SOL_TCP
> + PyModule_AddIntMacro(m, SOL_TCP);
> +#else
> + PyModule_AddIntConstant(m, "SOL_TCP", 6);
> +#endif
> +#ifdef SOL_UDP
> + PyModule_AddIntMacro(m, SOL_UDP);
> +#else
> + PyModule_AddIntConstant(m, "SOL_UDP", 17);
> +#endif
> +#ifdef SOL_CAN_BASE
> + PyModule_AddIntMacro(m, SOL_CAN_BASE);
> +#endif
> +#ifdef SOL_CAN_RAW
> + PyModule_AddIntMacro(m, SOL_CAN_RAW);
> + PyModule_AddIntMacro(m, CAN_RAW);
> +#endif
> +#ifdef HAVE_LINUX_CAN_H
> + PyModule_AddIntMacro(m, CAN_EFF_FLAG);
> + PyModule_AddIntMacro(m, CAN_RTR_FLAG);
> + PyModule_AddIntMacro(m, CAN_ERR_FLAG);
> +
> + PyModule_AddIntMacro(m, CAN_SFF_MASK);
> + PyModule_AddIntMacro(m, CAN_EFF_MASK);
> + PyModule_AddIntMacro(m, CAN_ERR_MASK);
> +#endif
> +#ifdef HAVE_LINUX_CAN_RAW_H
> + PyModule_AddIntMacro(m, CAN_RAW_FILTER);
> + PyModule_AddIntMacro(m, CAN_RAW_ERR_FILTER);
> + PyModule_AddIntMacro(m, CAN_RAW_LOOPBACK);
> + PyModule_AddIntMacro(m, CAN_RAW_RECV_OWN_MSGS);
> +#endif
> +#ifdef HAVE_LINUX_CAN_RAW_FD_FRAMES
> + PyModule_AddIntMacro(m, CAN_RAW_FD_FRAMES);
> +#endif
> +#ifdef HAVE_LINUX_CAN_BCM_H
> + PyModule_AddIntMacro(m, CAN_BCM);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_SETUP", TX_SETUP);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_DELETE", TX_DELETE);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_READ", TX_READ);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_SEND", TX_SEND);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_SETUP", RX_SETUP);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_DELETE", RX_DELETE);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_READ", RX_READ);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_STATUS", TX_STATUS);
> + PyModule_AddIntConstant(m, "CAN_BCM_TX_EXPIRED", TX_EXPIRED);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_STATUS", RX_STATUS);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_TIMEOUT", RX_TIMEOUT);
> + PyModule_AddIntConstant(m, "CAN_BCM_RX_CHANGED", RX_CHANGED);
> +#endif
> +#ifdef SOL_RDS
> + PyModule_AddIntMacro(m, SOL_RDS);
> +#endif
> +#ifdef HAVE_SOCKADDR_ALG
> + PyModule_AddIntMacro(m, SOL_ALG);
> +#endif
> +#ifdef RDS_CANCEL_SENT_TO
> + PyModule_AddIntMacro(m, RDS_CANCEL_SENT_TO);
> +#endif
> +#ifdef RDS_GET_MR
> + PyModule_AddIntMacro(m, RDS_GET_MR);
> +#endif
> +#ifdef RDS_FREE_MR
> + PyModule_AddIntMacro(m, RDS_FREE_MR);
> +#endif
> +#ifdef RDS_RECVERR
> + PyModule_AddIntMacro(m, RDS_RECVERR);
> +#endif
> +#ifdef RDS_CONG_MONITOR
> + PyModule_AddIntMacro(m, RDS_CONG_MONITOR);
> +#endif
> +#ifdef RDS_GET_MR_FOR_DEST
> + PyModule_AddIntMacro(m, RDS_GET_MR_FOR_DEST);
> +#endif
> +#ifdef IPPROTO_IP
> + PyModule_AddIntMacro(m, IPPROTO_IP);
> +#else
> + PyModule_AddIntConstant(m, "IPPROTO_IP", 0);
> +#endif
> +#ifdef IPPROTO_HOPOPTS
> + PyModule_AddIntMacro(m, IPPROTO_HOPOPTS);
> +#endif
> +#ifdef IPPROTO_ICMP
> + PyModule_AddIntMacro(m, IPPROTO_ICMP);
> +#else
> + PyModule_AddIntConstant(m, "IPPROTO_ICMP", 1);
> +#endif
> +#ifdef IPPROTO_IGMP
> + PyModule_AddIntMacro(m, IPPROTO_IGMP);
> +#endif
> +#ifdef IPPROTO_GGP
> + PyModule_AddIntMacro(m, IPPROTO_GGP);
> +#endif
> +#ifdef IPPROTO_IPV4
> + PyModule_AddIntMacro(m, IPPROTO_IPV4);
> +#endif
> +#ifdef IPPROTO_IPV6
> + PyModule_AddIntMacro(m, IPPROTO_IPV6);
> +#endif
> +#ifdef IPPROTO_IPIP
> + PyModule_AddIntMacro(m, IPPROTO_IPIP);
> +#endif
> +#ifdef IPPROTO_TCP
> + PyModule_AddIntMacro(m, IPPROTO_TCP);
> +#else
> + PyModule_AddIntConstant(m, "IPPROTO_TCP", 6);
> +#endif
> +#ifdef IPPROTO_EGP
> + PyModule_AddIntMacro(m, IPPROTO_EGP);
> +#endif
> +#ifdef IPPROTO_PUP
> + PyModule_AddIntMacro(m, IPPROTO_PUP);
> +#endif
> +#ifdef IPPROTO_UDP
> + PyModule_AddIntMacro(m, IPPROTO_UDP);
> +#else
> + PyModule_AddIntConstant(m, "IPPROTO_UDP", 17);
> +#endif
> +#ifdef IPPROTO_IDP
> + PyModule_AddIntMacro(m, IPPROTO_IDP);
> +#endif
> +#ifdef IPPROTO_HELLO
> + PyModule_AddIntMacro(m, IPPROTO_HELLO);
> +#endif
> +#ifdef IPPROTO_ND
> + PyModule_AddIntMacro(m, IPPROTO_ND);
> +#endif
> +#ifdef IPPROTO_TP
> + PyModule_AddIntMacro(m, IPPROTO_TP);
> +#endif
> +#ifdef IPPROTO_IPV6
> + PyModule_AddIntMacro(m, IPPROTO_IPV6);
> +#endif
> +#ifdef IPPROTO_ROUTING
> + PyModule_AddIntMacro(m, IPPROTO_ROUTING);
> +#endif
> +#ifdef IPPROTO_FRAGMENT
> + PyModule_AddIntMacro(m, IPPROTO_FRAGMENT);
> +#endif
> +#ifdef IPPROTO_RSVP
> + PyModule_AddIntMacro(m, IPPROTO_RSVP);
> +#endif
> +#ifdef IPPROTO_GRE
> + PyModule_AddIntMacro(m, IPPROTO_GRE);
> +#endif
> +#ifdef IPPROTO_ESP
> + PyModule_AddIntMacro(m, IPPROTO_ESP);
> +#endif
> +#ifdef IPPROTO_AH
> + PyModule_AddIntMacro(m, IPPROTO_AH);
> +#endif
> +#ifdef IPPROTO_MOBILE
> + PyModule_AddIntMacro(m, IPPROTO_MOBILE);
> +#endif
> +#ifdef IPPROTO_ICMPV6
> + PyModule_AddIntMacro(m, IPPROTO_ICMPV6);
> +#endif
> +#ifdef IPPROTO_NONE
> + PyModule_AddIntMacro(m, IPPROTO_NONE);
> +#endif
> +#ifdef IPPROTO_DSTOPTS
> + PyModule_AddIntMacro(m, IPPROTO_DSTOPTS);
> +#endif
> +#ifdef IPPROTO_XTP
> + PyModule_AddIntMacro(m, IPPROTO_XTP);
> +#endif
> +#ifdef IPPROTO_EON
> + PyModule_AddIntMacro(m, IPPROTO_EON);
> +#endif
> +#ifdef IPPROTO_PIM
> + PyModule_AddIntMacro(m, IPPROTO_PIM);
> +#endif
> +#ifdef IPPROTO_IPCOMP
> + PyModule_AddIntMacro(m, IPPROTO_IPCOMP);
> +#endif
> +#ifdef IPPROTO_VRRP
> + PyModule_AddIntMacro(m, IPPROTO_VRRP);
> +#endif
> +#ifdef IPPROTO_SCTP
> + PyModule_AddIntMacro(m, IPPROTO_SCTP);
> +#endif
> +#ifdef IPPROTO_BIP
> + PyModule_AddIntMacro(m, IPPROTO_BIP);
> +#endif
> +/**/
> +#ifdef IPPROTO_RAW
> + PyModule_AddIntMacro(m, IPPROTO_RAW);
> +#else
> + PyModule_AddIntConstant(m, "IPPROTO_RAW", 255);
> +#endif
> +#ifdef IPPROTO_MAX
> + PyModule_AddIntMacro(m, IPPROTO_MAX);
> +#endif
> +
> +#ifdef SYSPROTO_CONTROL
> + PyModule_AddIntMacro(m, SYSPROTO_CONTROL);
> +#endif
> +
> + /* Some port configuration */
> +#ifdef IPPORT_RESERVED
> + PyModule_AddIntMacro(m, IPPORT_RESERVED);
> +#else
> + PyModule_AddIntConstant(m, "IPPORT_RESERVED", 1024);
> +#endif
> +#ifdef IPPORT_USERRESERVED
> + PyModule_AddIntMacro(m, IPPORT_USERRESERVED);
> +#else
> + PyModule_AddIntConstant(m, "IPPORT_USERRESERVED", 5000);
> +#endif
> +
> + /* Some reserved IP v.4 addresses */
> +#ifdef INADDR_ANY
> + PyModule_AddIntMacro(m, INADDR_ANY);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_ANY", 0x00000000);
> +#endif
> +#ifdef INADDR_BROADCAST
> + PyModule_AddIntMacro(m, INADDR_BROADCAST);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_BROADCAST", 0xffffffff);
> +#endif
> +#ifdef INADDR_LOOPBACK
> + PyModule_AddIntMacro(m, INADDR_LOOPBACK);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_LOOPBACK", 0x7F000001);
> +#endif
> +#ifdef INADDR_UNSPEC_GROUP
> + PyModule_AddIntMacro(m, INADDR_UNSPEC_GROUP);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_UNSPEC_GROUP", 0xe0000000);
> +#endif
> +#ifdef INADDR_ALLHOSTS_GROUP
> + PyModule_AddIntConstant(m, "INADDR_ALLHOSTS_GROUP",
> + INADDR_ALLHOSTS_GROUP);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_ALLHOSTS_GROUP", 0xe0000001);
> +#endif
> +#ifdef INADDR_MAX_LOCAL_GROUP
> + PyModule_AddIntMacro(m, INADDR_MAX_LOCAL_GROUP);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_MAX_LOCAL_GROUP", 0xe00000ff);
> +#endif
> +#ifdef INADDR_NONE
> + PyModule_AddIntMacro(m, INADDR_NONE);
> +#else
> + PyModule_AddIntConstant(m, "INADDR_NONE", 0xffffffff);
> +#endif
> +
> + /* IPv4 [gs]etsockopt options */
> +#ifdef IP_OPTIONS
> + PyModule_AddIntMacro(m, IP_OPTIONS);
> +#endif
> +#ifdef IP_HDRINCL
> + PyModule_AddIntMacro(m, IP_HDRINCL);
> +#endif
> +#ifdef IP_TOS
> + PyModule_AddIntMacro(m, IP_TOS);
> +#endif
> +#ifdef IP_TTL
> + PyModule_AddIntMacro(m, IP_TTL);
> +#endif
> +#ifdef IP_RECVOPTS
> + PyModule_AddIntMacro(m, IP_RECVOPTS);
> +#endif
> +#ifdef IP_RECVRETOPTS
> + PyModule_AddIntMacro(m, IP_RECVRETOPTS);
> +#endif
> +#ifdef IP_RECVDSTADDR
> + PyModule_AddIntMacro(m, IP_RECVDSTADDR);
> +#endif
> +#ifdef IP_RETOPTS
> + PyModule_AddIntMacro(m, IP_RETOPTS);
> +#endif
> +#ifdef IP_MULTICAST_IF
> + PyModule_AddIntMacro(m, IP_MULTICAST_IF);
> +#endif
> +#ifdef IP_MULTICAST_TTL
> + PyModule_AddIntMacro(m, IP_MULTICAST_TTL);
> +#endif
> +#ifdef IP_MULTICAST_LOOP
> + PyModule_AddIntMacro(m, IP_MULTICAST_LOOP);
> +#endif
> +#ifdef IP_ADD_MEMBERSHIP
> + PyModule_AddIntMacro(m, IP_ADD_MEMBERSHIP);
> +#endif
> +#ifdef IP_DROP_MEMBERSHIP
> + PyModule_AddIntMacro(m, IP_DROP_MEMBERSHIP);
> +#endif
> +#ifdef IP_DEFAULT_MULTICAST_TTL
> + PyModule_AddIntMacro(m, IP_DEFAULT_MULTICAST_TTL);
> +#endif
> +#ifdef IP_DEFAULT_MULTICAST_LOOP
> + PyModule_AddIntMacro(m, IP_DEFAULT_MULTICAST_LOOP);
> +#endif
> +#ifdef IP_MAX_MEMBERSHIPS
> + PyModule_AddIntMacro(m, IP_MAX_MEMBERSHIPS);
> +#endif
> +#ifdef IP_TRANSPARENT
> + PyModule_AddIntMacro(m, IP_TRANSPARENT);
> +#endif
> +
> + /* IPv6 [gs]etsockopt options, defined in RFC2553 */
> +#ifdef IPV6_JOIN_GROUP
> + PyModule_AddIntMacro(m, IPV6_JOIN_GROUP);
> +#endif
> +#ifdef IPV6_LEAVE_GROUP
> + PyModule_AddIntMacro(m, IPV6_LEAVE_GROUP);
> +#endif
> +#ifdef IPV6_MULTICAST_HOPS
> + PyModule_AddIntMacro(m, IPV6_MULTICAST_HOPS);
> +#endif
> +#ifdef IPV6_MULTICAST_IF
> + PyModule_AddIntMacro(m, IPV6_MULTICAST_IF);
> +#endif
> +#ifdef IPV6_MULTICAST_LOOP
> + PyModule_AddIntMacro(m, IPV6_MULTICAST_LOOP);
> +#endif
> +#ifdef IPV6_UNICAST_HOPS
> + PyModule_AddIntMacro(m, IPV6_UNICAST_HOPS);
> +#endif
> + /* Additional IPV6 socket options, defined in RFC 3493 */
> +#ifdef IPV6_V6ONLY
> + PyModule_AddIntMacro(m, IPV6_V6ONLY);
> +#endif
> + /* Advanced IPV6 socket options, from RFC 3542 */
> +#ifdef IPV6_CHECKSUM
> + PyModule_AddIntMacro(m, IPV6_CHECKSUM);
> +#endif
> +#ifdef IPV6_DONTFRAG
> + PyModule_AddIntMacro(m, IPV6_DONTFRAG);
> +#endif
> +#ifdef IPV6_DSTOPTS
> + PyModule_AddIntMacro(m, IPV6_DSTOPTS);
> +#endif
> +#ifdef IPV6_HOPLIMIT
> + PyModule_AddIntMacro(m, IPV6_HOPLIMIT);
> +#endif
> +#ifdef IPV6_HOPOPTS
> + PyModule_AddIntMacro(m, IPV6_HOPOPTS);
> +#endif
> +#ifdef IPV6_NEXTHOP
> + PyModule_AddIntMacro(m, IPV6_NEXTHOP);
> +#endif
> +#ifdef IPV6_PATHMTU
> + PyModule_AddIntMacro(m, IPV6_PATHMTU);
> +#endif
> +#ifdef IPV6_PKTINFO
> + PyModule_AddIntMacro(m, IPV6_PKTINFO);
> +#endif
> +#ifdef IPV6_RECVDSTOPTS
> + PyModule_AddIntMacro(m, IPV6_RECVDSTOPTS);
> +#endif
> +#ifdef IPV6_RECVHOPLIMIT
> + PyModule_AddIntMacro(m, IPV6_RECVHOPLIMIT);
> +#endif
> +#ifdef IPV6_RECVHOPOPTS
> + PyModule_AddIntMacro(m, IPV6_RECVHOPOPTS);
> +#endif
> +#ifdef IPV6_RECVPKTINFO
> + PyModule_AddIntMacro(m, IPV6_RECVPKTINFO);
> +#endif
> +#ifdef IPV6_RECVRTHDR
> + PyModule_AddIntMacro(m, IPV6_RECVRTHDR);
> +#endif
> +#ifdef IPV6_RECVTCLASS
> + PyModule_AddIntMacro(m, IPV6_RECVTCLASS);
> +#endif
> +#ifdef IPV6_RTHDR
> + PyModule_AddIntMacro(m, IPV6_RTHDR);
> +#endif
> +#ifdef IPV6_RTHDRDSTOPTS
> + PyModule_AddIntMacro(m, IPV6_RTHDRDSTOPTS);
> +#endif
> +#ifdef IPV6_RTHDR_TYPE_0
> + PyModule_AddIntMacro(m, IPV6_RTHDR_TYPE_0);
> +#endif
> +#ifdef IPV6_RECVPATHMTU
> + PyModule_AddIntMacro(m, IPV6_RECVPATHMTU);
> +#endif
> +#ifdef IPV6_TCLASS
> + PyModule_AddIntMacro(m, IPV6_TCLASS);
> +#endif
> +#ifdef IPV6_USE_MIN_MTU
> + PyModule_AddIntMacro(m, IPV6_USE_MIN_MTU);
> +#endif
> +
> + /* TCP options */
> +#ifdef TCP_NODELAY
> + PyModule_AddIntMacro(m, TCP_NODELAY);
> +#endif
> +#ifdef TCP_MAXSEG
> + PyModule_AddIntMacro(m, TCP_MAXSEG);
> +#endif
> +#ifdef TCP_CORK
> + PyModule_AddIntMacro(m, TCP_CORK);
> +#endif
> +#ifdef TCP_KEEPIDLE
> + PyModule_AddIntMacro(m, TCP_KEEPIDLE);
> +#endif
> +#ifdef TCP_KEEPINTVL
> + PyModule_AddIntMacro(m, TCP_KEEPINTVL);
> +#endif
> +#ifdef TCP_KEEPCNT
> + PyModule_AddIntMacro(m, TCP_KEEPCNT);
> +#endif
> +#ifdef TCP_SYNCNT
> + PyModule_AddIntMacro(m, TCP_SYNCNT);
> +#endif
> +#ifdef TCP_LINGER2
> + PyModule_AddIntMacro(m, TCP_LINGER2);
> +#endif
> +#ifdef TCP_DEFER_ACCEPT
> + PyModule_AddIntMacro(m, TCP_DEFER_ACCEPT);
> +#endif
> +#ifdef TCP_WINDOW_CLAMP
> + PyModule_AddIntMacro(m, TCP_WINDOW_CLAMP);
> +#endif
> +#ifdef TCP_INFO
> + PyModule_AddIntMacro(m, TCP_INFO);
> +#endif
> +#ifdef TCP_QUICKACK
> + PyModule_AddIntMacro(m, TCP_QUICKACK);
> +#endif
> +#ifdef TCP_FASTOPEN
> + PyModule_AddIntMacro(m, TCP_FASTOPEN);
> +#endif
> +#ifdef TCP_CONGESTION
> + PyModule_AddIntMacro(m, TCP_CONGESTION);
> +#endif
> +#ifdef TCP_USER_TIMEOUT
> + PyModule_AddIntMacro(m, TCP_USER_TIMEOUT);
> +#endif
> +
> + /* IPX options */
> +#ifdef IPX_TYPE
> + PyModule_AddIntMacro(m, IPX_TYPE);
> +#endif
> +
> +/* Reliable Datagram Sockets */
> +#ifdef RDS_CMSG_RDMA_ARGS
> + PyModule_AddIntMacro(m, RDS_CMSG_RDMA_ARGS);
> +#endif
> +#ifdef RDS_CMSG_RDMA_DEST
> + PyModule_AddIntMacro(m, RDS_CMSG_RDMA_DEST);
> +#endif
> +#ifdef RDS_CMSG_RDMA_MAP
> + PyModule_AddIntMacro(m, RDS_CMSG_RDMA_MAP);
> +#endif
> +#ifdef RDS_CMSG_RDMA_STATUS
> + PyModule_AddIntMacro(m, RDS_CMSG_RDMA_STATUS);
> +#endif
> +#ifdef RDS_CMSG_RDMA_UPDATE
> + PyModule_AddIntMacro(m, RDS_CMSG_RDMA_UPDATE);
> +#endif
> +#ifdef RDS_RDMA_READWRITE
> + PyModule_AddIntMacro(m, RDS_RDMA_READWRITE);
> +#endif
> +#ifdef RDS_RDMA_FENCE
> + PyModule_AddIntMacro(m, RDS_RDMA_FENCE);
> +#endif
> +#ifdef RDS_RDMA_INVALIDATE
> + PyModule_AddIntMacro(m, RDS_RDMA_INVALIDATE);
> +#endif
> +#ifdef RDS_RDMA_USE_ONCE
> + PyModule_AddIntMacro(m, RDS_RDMA_USE_ONCE);
> +#endif
> +#ifdef RDS_RDMA_DONTWAIT
> + PyModule_AddIntMacro(m, RDS_RDMA_DONTWAIT);
> +#endif
> +#ifdef RDS_RDMA_NOTIFY_ME
> + PyModule_AddIntMacro(m, RDS_RDMA_NOTIFY_ME);
> +#endif
> +#ifdef RDS_RDMA_SILENT
> + PyModule_AddIntMacro(m, RDS_RDMA_SILENT);
> +#endif
> +
> + /* get{addr,name}info parameters */
> +#ifdef EAI_ADDRFAMILY
> + PyModule_AddIntMacro(m, EAI_ADDRFAMILY);
> +#endif
> +#ifdef EAI_AGAIN
> + PyModule_AddIntMacro(m, EAI_AGAIN);
> +#endif
> +#ifdef EAI_BADFLAGS
> + PyModule_AddIntMacro(m, EAI_BADFLAGS);
> +#endif
> +#ifdef EAI_FAIL
> + PyModule_AddIntMacro(m, EAI_FAIL);
> +#endif
> +#ifdef EAI_FAMILY
> + PyModule_AddIntMacro(m, EAI_FAMILY);
> +#endif
> +#ifdef EAI_MEMORY
> + PyModule_AddIntMacro(m, EAI_MEMORY);
> +#endif
> +#ifdef EAI_NODATA
> + PyModule_AddIntMacro(m, EAI_NODATA);
> +#endif
> +#ifdef EAI_NONAME
> + PyModule_AddIntMacro(m, EAI_NONAME);
> +#endif
> +#ifdef EAI_OVERFLOW
> + PyModule_AddIntMacro(m, EAI_OVERFLOW);
> +#endif
> +#ifdef EAI_SERVICE
> + PyModule_AddIntMacro(m, EAI_SERVICE);
> +#endif
> +#ifdef EAI_SOCKTYPE
> + PyModule_AddIntMacro(m, EAI_SOCKTYPE);
> +#endif
> +#ifdef EAI_SYSTEM
> + PyModule_AddIntMacro(m, EAI_SYSTEM);
> +#endif
> +#ifdef EAI_BADHINTS
> + PyModule_AddIntMacro(m, EAI_BADHINTS);
> +#endif
> +#ifdef EAI_PROTOCOL
> + PyModule_AddIntMacro(m, EAI_PROTOCOL);
> +#endif
> +#ifdef EAI_MAX
> + PyModule_AddIntMacro(m, EAI_MAX);
> +#endif
> +#ifdef AI_PASSIVE
> + PyModule_AddIntMacro(m, AI_PASSIVE);
> +#endif
> +#ifdef AI_CANONNAME
> + PyModule_AddIntMacro(m, AI_CANONNAME);
> +#endif
> +#ifdef AI_NUMERICHOST
> + PyModule_AddIntMacro(m, AI_NUMERICHOST);
> +#endif
> +#ifdef AI_NUMERICSERV
> + PyModule_AddIntMacro(m, AI_NUMERICSERV);
> +#endif
> +#ifdef AI_MASK
> + PyModule_AddIntMacro(m, AI_MASK);
> +#endif
> +#ifdef AI_ALL
> + PyModule_AddIntMacro(m, AI_ALL);
> +#endif
> +#ifdef AI_V4MAPPED_CFG
> + PyModule_AddIntMacro(m, AI_V4MAPPED_CFG);
> +#endif
> +#ifdef AI_ADDRCONFIG
> + PyModule_AddIntMacro(m, AI_ADDRCONFIG);
> +#endif
> +#ifdef AI_V4MAPPED
> + PyModule_AddIntMacro(m, AI_V4MAPPED);
> +#endif
> +#ifdef AI_DEFAULT
> + PyModule_AddIntMacro(m, AI_DEFAULT);
> +#endif
> +#ifdef NI_MAXHOST
> + PyModule_AddIntMacro(m, NI_MAXHOST);
> +#endif
> +#ifdef NI_MAXSERV
> + PyModule_AddIntMacro(m, NI_MAXSERV);
> +#endif
> +#ifdef NI_NOFQDN
> + PyModule_AddIntMacro(m, NI_NOFQDN);
> +#endif
> +#ifdef NI_NUMERICHOST
> + PyModule_AddIntMacro(m, NI_NUMERICHOST);
> +#endif
> +#ifdef NI_NAMEREQD
> + PyModule_AddIntMacro(m, NI_NAMEREQD);
> +#endif
> +#ifdef NI_NUMERICSERV
> + PyModule_AddIntMacro(m, NI_NUMERICSERV);
> +#endif
> +#ifdef NI_DGRAM
> + PyModule_AddIntMacro(m, NI_DGRAM);
> +#endif
> +
> + /* shutdown() parameters */
> +#ifdef SHUT_RD
> + PyModule_AddIntMacro(m, SHUT_RD);
> +#elif defined(SD_RECEIVE)
> + PyModule_AddIntConstant(m, "SHUT_RD", SD_RECEIVE);
> +#else
> + PyModule_AddIntConstant(m, "SHUT_RD", 0);
> +#endif
> +#ifdef SHUT_WR
> + PyModule_AddIntMacro(m, SHUT_WR);
> +#elif defined(SD_SEND)
> + PyModule_AddIntConstant(m, "SHUT_WR", SD_SEND);
> +#else
> + PyModule_AddIntConstant(m, "SHUT_WR", 1);
> +#endif
> +#ifdef SHUT_RDWR
> + PyModule_AddIntMacro(m, SHUT_RDWR);
> +#elif defined(SD_BOTH)
> + PyModule_AddIntConstant(m, "SHUT_RDWR", SD_BOTH);
> +#else
> + PyModule_AddIntConstant(m, "SHUT_RDWR", 2);
> +#endif
> +
> +#ifdef SIO_RCVALL
> + {
> + DWORD codes[] = {SIO_RCVALL, SIO_KEEPALIVE_VALS,
> +#if defined(SIO_LOOPBACK_FAST_PATH)
> + SIO_LOOPBACK_FAST_PATH
> +#endif
> + };
> + const char *names[] = {"SIO_RCVALL", "SIO_KEEPALIVE_VALS",
> +#if defined(SIO_LOOPBACK_FAST_PATH)
> + "SIO_LOOPBACK_FAST_PATH"
> +#endif
> + };
> + int i;
> + for(i = 0; i<Py_ARRAY_LENGTH(codes); ++i) {
> + PyObject *tmp;
> + tmp = PyLong_FromUnsignedLong(codes[i]);
> + if (tmp == NULL)
> + return NULL;
> + PyModule_AddObject(m, names[i], tmp);
> + }
> + }
> + PyModule_AddIntMacro(m, RCVALL_OFF);
> + PyModule_AddIntMacro(m, RCVALL_ON);
> + PyModule_AddIntMacro(m, RCVALL_SOCKETLEVELONLY);
> +#ifdef RCVALL_IPLEVEL
> + PyModule_AddIntMacro(m, RCVALL_IPLEVEL);
> +#endif
> +#ifdef RCVALL_MAX
> + PyModule_AddIntMacro(m, RCVALL_MAX);
> +#endif
> +#endif /* _MSTCPIP_ */
> +
> + /* Initialize gethostbyname lock */
> +#if defined(USE_GETHOSTBYNAME_LOCK) || defined(USE_GETADDRINFO_LOCK)
> + netdb_lock = PyThread_allocate_lock();
> +#endif
> +
> +#ifdef MS_WINDOWS
> + /* removes some flags on older version Windows during run-time */
> + remove_unusable_flags(m);
> +#endif
> +
> + return m;
> +}
> +
> +
> +#ifndef HAVE_INET_PTON
> +#if !defined(NTDDI_VERSION) || (NTDDI_VERSION < NTDDI_LONGHORN)
> +
> +/* Simplistic emulation code for inet_pton that only works for IPv4 */
> +/* These are not exposed because they do not set errno properly */
> +
> +int
> +inet_pton(int af, const char *src, void *dst)
> +{
> + if (af == AF_INET) {
> +#if (SIZEOF_INT != 4)
> +#error "Not sure if in_addr_t exists and int is not 32-bits."
> +#endif
> + unsigned int packed_addr;
> + packed_addr = inet_addr(src);
> + if (packed_addr == INADDR_NONE)
> + return 0;
> + memcpy(dst, &packed_addr, 4);
> + return 1;
> + }
> + /* Should set errno to EAFNOSUPPORT */
> + return -1;
> +}
> +
> +const char *
> +inet_ntop(int af, const void *src, char *dst, socklen_t size)
> +{
> + if (af == AF_INET) {
> + struct in_addr packed_addr;
> + if (size < 16)
> + /* Should set errno to ENOSPC. */
> + return NULL;
> + memcpy(&packed_addr, src, sizeof(packed_addr));
> + return strncpy(dst, inet_ntoa(packed_addr), size);
> + }
> + /* Should set errno to EAFNOSUPPORT */
> + return NULL;
> +}
> +
> +#endif
> +#endif
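
The fallback above only covers AF_INET and, as its comments note, does not set errno. For comparison, a sketch of the standard inet_pton()/inet_ntop() round trip on a host with a full libc:

    /* Sketch of the inet_pton()/inet_ntop() round trip that the IPv4-only
     * fallback above emulates (host POSIX libc assumed). */
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct in_addr addr;
        char text[INET_ADDRSTRLEN];

        if (inet_pton(AF_INET, "192.0.2.1", &addr) != 1) {  /* text -> packed */
            fprintf(stderr, "invalid IPv4 literal\n");
            return 1;
        }
        if (inet_ntop(AF_INET, &addr, text, sizeof(text)) == NULL) {
            perror("inet_ntop");                            /* packed -> text */
            return 1;
        }
        printf("round trip: %s\n", text);
        return 0;
    }
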
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
> new file mode 100644
> index 00000000..ada048f4
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
> @@ -0,0 +1,282 @@
> +/* Socket module header file */
> +#ifdef UEFI_C_SOURCE
> +# undef CMSG_LEN // Hack to avoid including code related to CMSG_LEN
> +#endif
> +/* Includes needed for the sockaddr_* symbols below */
> +#ifndef MS_WINDOWS
> +#ifdef __VMS
> +# include <socket.h>
> +# else
> +# include <sys/socket.h>
> +# endif
> +# include <netinet/in.h>
> +# if !defined(__CYGWIN__)
> +# include <netinet/tcp.h>
> +# endif
> +
> +#else /* MS_WINDOWS */
> +# include <winsock2.h>
> +/* Windows 'supports' CMSG_LEN, but does not follow the POSIX standard
> + * interface at all, so there is no point including the code that
> + * attempts to use it.
> + */
> +# ifdef PySocket_BUILDING_SOCKET
> +# undef CMSG_LEN
> +# endif
> +# include <ws2tcpip.h>
> +/* VC6 is shipped with old platform headers, and does not have MSTcpIP.h
> + * Separate SDKs have all the functions we want, but older ones don't have
> + * any version information.
> + * I use SIO_GET_MULTICAST_FILTER to detect a decent SDK.
> + */
> +# ifdef SIO_GET_MULTICAST_FILTER
> +# include <MSTcpIP.h> /* for SIO_RCVALL */
> +# define HAVE_ADDRINFO
> +# define HAVE_SOCKADDR_STORAGE
> +# define HAVE_GETADDRINFO
> +# define HAVE_GETNAMEINFO
> +# define ENABLE_IPV6
> +# else
> +typedef int socklen_t;
> +# endif /* SIO_GET_MULTICAST_FILTER */
> +#endif /* MS_WINDOWS */
> +
> +#ifdef HAVE_SYS_UN_H
> +# include <sys/un.h>
> +#else
> +# undef AF_UNIX
> +#endif
> +
> +#ifdef HAVE_LINUX_NETLINK_H
> +# ifdef HAVE_ASM_TYPES_H
> +# include <asm/types.h>
> +# endif
> +# include <linux/netlink.h>
> +#else
> +# undef AF_NETLINK
> +#endif
> +
> +#ifdef HAVE_BLUETOOTH_BLUETOOTH_H
> +#include <bluetooth/bluetooth.h>
> +#include <bluetooth/rfcomm.h>
> +#include <bluetooth/l2cap.h>
> +#include <bluetooth/sco.h>
> +#include <bluetooth/hci.h>
> +#endif
> +
> +#ifdef HAVE_BLUETOOTH_H
> +#include <bluetooth.h>
> +#endif
> +
> +#ifdef HAVE_NET_IF_H
> +# include <net/if.h>
> +#endif
> +
> +#ifdef HAVE_NETPACKET_PACKET_H
> +# include <sys/ioctl.h>
> +# include <netpacket/packet.h>
> +#endif
> +
> +#ifdef HAVE_LINUX_TIPC_H
> +# include <linux/tipc.h>
> +#endif
> +
> +#ifdef HAVE_LINUX_CAN_H
> +# include <linux/can.h>
> +#else
> +# undef AF_CAN
> +# undef PF_CAN
> +#endif
> +
> +#ifdef HAVE_LINUX_CAN_RAW_H
> +#include <linux/can/raw.h>
> +#endif
> +
> +#ifdef HAVE_LINUX_CAN_BCM_H
> +#include <linux/can/bcm.h>
> +#endif
> +
> +#ifdef HAVE_SYS_SYS_DOMAIN_H
> +#include <sys/sys_domain.h>
> +#endif
> +#ifdef HAVE_SYS_KERN_CONTROL_H
> +#include <sys/kern_control.h>
> +#endif
> +
> +#ifdef HAVE_SOCKADDR_ALG
> +#include <linux/if_alg.h>
> +#ifndef AF_ALG
> +#define AF_ALG 38
> +#endif
> +#ifndef SOL_ALG
> +#define SOL_ALG 279
> +#endif
> +
> +/* Linux 3.19 */
> +#ifndef ALG_SET_AEAD_ASSOCLEN
> +#define ALG_SET_AEAD_ASSOCLEN 4
> +#endif
> +#ifndef ALG_SET_AEAD_AUTHSIZE
> +#define ALG_SET_AEAD_AUTHSIZE 5
> +#endif
> +/* Linux 4.8 */
> +#ifndef ALG_SET_PUBKEY
> +#define ALG_SET_PUBKEY 6
> +#endif
> +
> +#ifndef ALG_OP_SIGN
> +#define ALG_OP_SIGN 2
> +#endif
> +#ifndef ALG_OP_VERIFY
> +#define ALG_OP_VERIFY 3
> +#endif
> +
> +#endif /* HAVE_SOCKADDR_ALG */
> +
> +
> +#ifndef Py__SOCKET_H
> +#define Py__SOCKET_H
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Python module and C API name */
> +#define PySocket_MODULE_NAME "_socket"
> +#define PySocket_CAPI_NAME "CAPI"
> +#define PySocket_CAPSULE_NAME PySocket_MODULE_NAME "." PySocket_CAPI_NAME
> +
> +/* Abstract the socket file descriptor type */
> +#ifdef MS_WINDOWS
> +typedef SOCKET SOCKET_T;
> +# ifdef MS_WIN64
> +# define SIZEOF_SOCKET_T 8
> +# else
> +# define SIZEOF_SOCKET_T 4
> +# endif
> +#else
> +typedef int SOCKET_T;
> +# define SIZEOF_SOCKET_T SIZEOF_INT
> +#endif
> +
> +#if SIZEOF_SOCKET_T <= SIZEOF_LONG
> +#define PyLong_FromSocket_t(fd) PyLong_FromLong((SOCKET_T)(fd))
> +#define PyLong_AsSocket_t(fd) (SOCKET_T)PyLong_AsLong(fd)
> +#else
> +#define PyLong_FromSocket_t(fd) PyLong_FromLongLong((SOCKET_T)(fd))
> +#define PyLong_AsSocket_t(fd) (SOCKET_T)PyLong_AsLongLong(fd)
> +#endif
> +
> +/* Socket address */
> +typedef union sock_addr {
> + struct sockaddr_in in;
> + struct sockaddr sa;
> +#ifdef AF_UNIX
> + struct sockaddr_un un;
> +#endif
> +#ifdef AF_NETLINK
> + struct sockaddr_nl nl;
> +#endif
> +#ifdef ENABLE_IPV6
> + struct sockaddr_in6 in6;
> + struct sockaddr_storage storage;
> +#endif
> +#ifdef HAVE_BLUETOOTH_BLUETOOTH_H
> + struct sockaddr_l2 bt_l2;
> + struct sockaddr_rc bt_rc;
> + struct sockaddr_sco bt_sco;
> + struct sockaddr_hci bt_hci;
> +#endif
> +#ifdef HAVE_NETPACKET_PACKET_H
> + struct sockaddr_ll ll;
> +#endif
> +#ifdef HAVE_LINUX_CAN_H
> + struct sockaddr_can can;
> +#endif
> +#ifdef HAVE_SYS_KERN_CONTROL_H
> + struct sockaddr_ctl ctl;
> +#endif
> +#ifdef HAVE_SOCKADDR_ALG
> + struct sockaddr_alg alg;
> +#endif
> +} sock_addr_t;
> +
> +/* The object holding a socket. It holds some extra information,
> + like the address family, which is used to decode socket address
> + arguments properly. */
> +
> +typedef struct {
> + PyObject_HEAD
> + SOCKET_T sock_fd; /* Socket file descriptor */
> + int sock_family; /* Address family, e.g., AF_INET */
> + int sock_type; /* Socket type, e.g., SOCK_STREAM */
> + int sock_proto; /* Protocol type, usually 0 */
> + PyObject *(*errorhandler)(void); /* Error handler; checks
> + errno, returns NULL and
> + sets a Python exception */
> + _PyTime_t sock_timeout; /* Operation timeout in seconds;
> + 0.0 means non-blocking */
> +} PySocketSockObject;
> +
> +/* --- C API ----------------------------------------------------*/
> +
> +/* Short explanation of what this C API export mechanism does
> + and how it works:
> +
> + The _ssl module needs access to the type object defined in
> + the _socket module. Since cross-DLL linking introduces a lot of
> + problems on many platforms, the "trick" is to wrap the
> + C API of a module in a struct which then gets exported to
> + other modules via a PyCapsule.
> +
> + The code in socketmodule.c defines this struct (which currently
> + only contains the type object reference, but could very
> + well also include other C APIs needed by other modules)
> + and exports it as PyCapsule via the module dictionary
> + under the name "CAPI".
> +
> + Other modules can now include the socketmodule.h file
> + which defines the needed C APIs to import and set up
> + a static copy of this struct in the importing module.
> +
> + After initialization, the importing module can then
> + access the C APIs from the _socket module by simply
> + referring to the static struct, e.g.
> +
> + Load _socket module and its C API; this sets up the global
> + PySocketModule:
> +
> + if (PySocketModule_ImportModuleAndAPI())
> + return;
> +
> +
> + Now use the C API as if it were defined in the using
> + module:
> +
> + if (!PyArg_ParseTuple(args, "O!|zz:ssl",
> +
> + PySocketModule.Sock_Type,
> +
> + (PyObject*)&Sock,
> + &key_file, &cert_file))
> + return NULL;
> +
> + Support could easily be extended to export more C APIs/symbols
> + this way. Currently, only the type object is exported,
> + other candidates would be socket constructors and socket
> + access functions.
> +
> +*/
> +
> +/* C API for usage by other Python modules */
> +typedef struct {
> + PyTypeObject *Sock_Type;
> + PyObject *error;
> + PyObject *timeout_error;
> +} PySocketModule_APIObject;
> +
> +#define PySocketModule_ImportModuleAndAPI() PyCapsule_Import(PySocket_CAPSULE_NAME, 1)
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +#endif /* !Py__SOCKET_H */
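
The capsule mechanism described in the comment block above is what _ssl relies on, and any other extension can consume it the same way. A hedged sketch of such a consumer module follows; the module and function names are made up for the example, and only the header's own names (PySocketModule_APIObject, PySocketModule_ImportModuleAndAPI, Sock_Type) come from the patch:

    /* Illustrative consumer of the exported C API via the capsule pattern
     * documented above.  Module/function names are invented for the sketch. */
    #include <Python.h>
    #include "socketmodule.h"

    static PySocketModule_APIObject PySocketModule;      /* static copy of the CAPI */

    static PyObject *
    demo_is_socket(PyObject *self, PyObject *arg)
    {
        /* Use the imported type object to type-check an argument. */
        return PyBool_FromLong(PyObject_TypeCheck(arg, PySocketModule.Sock_Type));
    }

    static PyMethodDef demo_methods[] = {
        {"is_socket", demo_is_socket, METH_O, "True if arg is a _socket.socket."},
        {NULL, NULL}
    };

    static struct PyModuleDef demomodule = {
        PyModuleDef_HEAD_INIT, "_sockcapi_demo", NULL, -1, demo_methods
    };

    PyMODINIT_FUNC
    PyInit__sockcapi_demo(void)
    {
        PySocketModule_APIObject *capi = PySocketModule_ImportModuleAndAPI();
        if (capi == NULL)                   /* imports _socket, fetches "CAPI" */
            return NULL;
        PySocketModule = *capi;
        return PyModule_Create(&demomodule);
    }
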
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
> new file mode 100644
> index 00000000..a50dad0d
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
> @@ -0,0 +1,1372 @@
> +/*
> + * Secret Labs' Regular Expression Engine
> + *
> + * regular expression matching engine
> + *
> + * Copyright (c) 1997-2001 by Secret Labs AB. All rights reserved.
> + *
> + * See the _sre.c file for information on usage and redistribution.
> + */
> +
> +/* String matching engine */
> +
> +/* This file is included three times, with different character settings */
> +
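
The "included three times" note refers to the usual C template-by-inclusion technique: the including file (_sre.c, per the header comment) re-includes this body with different SRE_CHAR/SRE() definitions, presumably one instantiation per supported character width. A generic, hedged illustration of the pattern with made-up names (not the actual sre macros):

    /* sum_impl.h -- generic illustration of "template by re-inclusion":
     * define ELEM_T and FN(name) before each #include.  (Names are made up;
     * sre_lib.h is re-included with SRE_CHAR and SRE() in the same spirit.) */
    #ifndef FN
    #error "define ELEM_T and FN(name) before including sum_impl.h"
    #endif

    static ELEM_T
    FN(sum)(const ELEM_T *v, int n)
    {
        ELEM_T total = 0;
        for (int i = 0; i < n; i++)
            total += v[i];
        return total;
    }

    #undef ELEM_T
    #undef FN

    /* main.c -- instantiate the template twice, once per element type. */
    #include <stdio.h>

    #define ELEM_T int
    #define FN(name) int_##name
    #include "sum_impl.h"

    #define ELEM_T double
    #define FN(name) dbl_##name
    #include "sum_impl.h"

    int main(void)
    {
        int    a[] = {1, 2, 3};
        double b[] = {1.5, 2.5};
        printf("%d %.1f\n", int_sum(a, 3), dbl_sum(b, 2));
        return 0;
    }
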
> +LOCAL(int)
> +SRE(at)(SRE_STATE* state, SRE_CHAR* ptr, SRE_CODE at)
> +{
> + /* check if pointer is at given position */
> +
> + Py_ssize_t thisp, thatp;
> +
> + switch (at) {
> +
> + case SRE_AT_BEGINNING:
> + case SRE_AT_BEGINNING_STRING:
> + return ((void*) ptr == state->beginning);
> +
> + case SRE_AT_BEGINNING_LINE:
> + return ((void*) ptr == state->beginning ||
> + SRE_IS_LINEBREAK((int) ptr[-1]));
> +
> + case SRE_AT_END:
> + return (((SRE_CHAR *)state->end - ptr == 1 &&
> + SRE_IS_LINEBREAK((int) ptr[0])) ||
> + ((void*) ptr == state->end));
> +
> + case SRE_AT_END_LINE:
> + return ((void*) ptr == state->end ||
> + SRE_IS_LINEBREAK((int) ptr[0]));
> +
> + case SRE_AT_END_STRING:
> + return ((void*) ptr == state->end);
> +
> + case SRE_AT_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_IS_WORD((int) ptr[0]) : 0;
> + return thisp != thatp;
> +
> + case SRE_AT_NON_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_IS_WORD((int) ptr[0]) : 0;
> + return thisp == thatp;
> +
> + case SRE_AT_LOC_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_LOC_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_LOC_IS_WORD((int) ptr[0]) : 0;
> + return thisp != thatp;
> +
> + case SRE_AT_LOC_NON_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_LOC_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_LOC_IS_WORD((int) ptr[0]) : 0;
> + return thisp == thatp;
> +
> + case SRE_AT_UNI_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_UNI_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_UNI_IS_WORD((int) ptr[0]) : 0;
> + return thisp != thatp;
> +
> + case SRE_AT_UNI_NON_BOUNDARY:
> + if (state->beginning == state->end)
> + return 0;
> + thatp = ((void*) ptr > state->beginning) ?
> + SRE_UNI_IS_WORD((int) ptr[-1]) : 0;
> + thisp = ((void*) ptr < state->end) ?
> + SRE_UNI_IS_WORD((int) ptr[0]) : 0;
> + return thisp == thatp;
> +
> + }
> +
> + return 0;
> +}
> +
> +LOCAL(int)
> +SRE(charset)(SRE_STATE* state, SRE_CODE* set, SRE_CODE ch)
> +{
> + /* check if character is a member of the given set */
> +
> + int ok = 1;
> +
> + for (;;) {
> + switch (*set++) {
> +
> + case SRE_OP_FAILURE:
> + return !ok;
> +
> + case SRE_OP_LITERAL:
> + /* <LITERAL> <code> */
> + if (ch == set[0])
> + return ok;
> + set++;
> + break;
> +
> + case SRE_OP_CATEGORY:
> + /* <CATEGORY> <code> */
> + if (sre_category(set[0], (int) ch))
> + return ok;
> + set++;
> + break;
> +
> + case SRE_OP_CHARSET:
> + /* <CHARSET> <bitmap> */
> + if (ch < 256 &&
> + (set[ch/SRE_CODE_BITS] & (1u << (ch & (SRE_CODE_BITS-1)))))
> + return ok;
> + set += 256/SRE_CODE_BITS;
> + break;
> +
> + case SRE_OP_RANGE:
> + /* <RANGE> <lower> <upper> */
> + if (set[0] <= ch && ch <= set[1])
> + return ok;
> + set += 2;
> + break;
> +
> + case SRE_OP_RANGE_IGNORE:
> + /* <RANGE_IGNORE> <lower> <upper> */
> + {
> + SRE_CODE uch;
> + /* ch is already lower cased */
> + if (set[0] <= ch && ch <= set[1])
> + return ok;
> + uch = state->upper(ch);
> + if (set[0] <= uch && uch <= set[1])
> + return ok;
> + set += 2;
> + break;
> + }
> +
> + case SRE_OP_NEGATE:
> + ok = !ok;
> + break;
> +
> + case SRE_OP_BIGCHARSET:
> + /* <BIGCHARSET> <blockcount> <256 blockindices> <blocks> */
> + {
> + Py_ssize_t count, block;
> + count = *(set++);
> +
> + if (ch < 0x10000u)
> + block = ((unsigned char*)set)[ch >> 8];
> + else
> + block = -1;
> + set += 256/sizeof(SRE_CODE);
> + if (block >=0 &&
> + (set[(block * 256 + (ch & 255))/SRE_CODE_BITS] &
> + (1u << (ch & (SRE_CODE_BITS-1)))))
> + return ok;
> + set += count * (256/SRE_CODE_BITS);
> + break;
> + }
> +
> + default:
> + /* internal error -- there's not much we can do about it
> + here, so let's just pretend it didn't match... */
> + return 0;
> + }
> + }
> +}
> +
> +LOCAL(Py_ssize_t) SRE(match)(SRE_STATE* state, SRE_CODE* pattern, int match_all);
> +
> +LOCAL(Py_ssize_t)
> +SRE(count)(SRE_STATE* state, SRE_CODE* pattern, Py_ssize_t maxcount)
> +{
> + SRE_CODE chr;
> + SRE_CHAR c;
> + SRE_CHAR* ptr = (SRE_CHAR *)state->ptr;
> + SRE_CHAR* end = (SRE_CHAR *)state->end;
> + Py_ssize_t i;
> +
> + /* adjust end */
> + if (maxcount < end - ptr && maxcount != SRE_MAXREPEAT)
> + end = ptr + maxcount;
> +
> + switch (pattern[0]) {
> +
> + case SRE_OP_IN:
> + /* repeated set */
> + TRACE(("|%p|%p|COUNT IN\n", pattern, ptr));
> + while (ptr < end && SRE(charset)(state, pattern + 2, *ptr))
> + ptr++;
> + break;
> +
> + case SRE_OP_ANY:
> + /* repeated dot wildcard. */
> + TRACE(("|%p|%p|COUNT ANY\n", pattern, ptr));
> + while (ptr < end && !SRE_IS_LINEBREAK(*ptr))
> + ptr++;
> + break;
> +
> + case SRE_OP_ANY_ALL:
> + /* repeated dot wildcard. skip to the end of the target
> + string, and backtrack from there */
> + TRACE(("|%p|%p|COUNT ANY_ALL\n", pattern, ptr));
> + ptr = end;
> + break;
> +
> + case SRE_OP_LITERAL:
> + /* repeated literal */
> + chr = pattern[1];
> + TRACE(("|%p|%p|COUNT LITERAL %d\n", pattern, ptr, chr));
> + c = (SRE_CHAR) chr;
> +#if SIZEOF_SRE_CHAR < 4
> + if ((SRE_CODE) c != chr)
> + ; /* literal can't match: doesn't fit in char width */
> + else
> +#endif
> + while (ptr < end && *ptr == c)
> + ptr++;
> + break;
> +
> + case SRE_OP_LITERAL_IGNORE:
> + /* repeated literal */
> + chr = pattern[1];
> + TRACE(("|%p|%p|COUNT LITERAL_IGNORE %d\n", pattern, ptr, chr));
> + while (ptr < end && (SRE_CODE) state->lower(*ptr) == chr)
> + ptr++;
> + break;
> +
> + case SRE_OP_NOT_LITERAL:
> + /* repeated non-literal */
> + chr = pattern[1];
> + TRACE(("|%p|%p|COUNT NOT_LITERAL %d\n", pattern, ptr, chr));
> + c = (SRE_CHAR) chr;
> +#if SIZEOF_SRE_CHAR < 4
> + if ((SRE_CODE) c != chr)
> + ptr = end; /* literal can't match: doesn't fit in char width */
> + else
> +#endif
> + while (ptr < end && *ptr != c)
> + ptr++;
> + break;
> +
> + case SRE_OP_NOT_LITERAL_IGNORE:
> + /* repeated non-literal */
> + chr = pattern[1];
> + TRACE(("|%p|%p|COUNT NOT_LITERAL_IGNORE %d\n", pattern, ptr, chr));
> + while (ptr < end && (SRE_CODE) state->lower(*ptr) != chr)
> + ptr++;
> + break;
> +
> + default:
> + /* repeated single character pattern */
> + TRACE(("|%p|%p|COUNT SUBPATTERN\n", pattern, ptr));
> + while ((SRE_CHAR*) state->ptr < end) {
> + i = SRE(match)(state, pattern, 0);
> + if (i < 0)
> + return i;
> + if (!i)
> + break;
> + }
> + TRACE(("|%p|%p|COUNT %" PY_FORMAT_SIZE_T "d\n", pattern, ptr,
> + (SRE_CHAR*) state->ptr - ptr));
> + return (SRE_CHAR*) state->ptr - ptr;
> + }
> +
> + TRACE(("|%p|%p|COUNT %" PY_FORMAT_SIZE_T "d\n", pattern, ptr,
> + ptr - (SRE_CHAR*) state->ptr));
> + return ptr - (SRE_CHAR*) state->ptr;
> +}
> +
> +#if 0 /* not used in this release */
> +LOCAL(int)
> +SRE(info)(SRE_STATE* state, SRE_CODE* pattern)
> +{
> + /* check if an SRE_OP_INFO block matches at the current position.
> + returns the number of SRE_CODE objects to skip if successful, 0
> + if no match */
> +
> + SRE_CHAR* end = (SRE_CHAR*) state->end;
> + SRE_CHAR* ptr = (SRE_CHAR*) state->ptr;
> + Py_ssize_t i;
> +
> + /* check minimal length */
> + if (pattern[3] && end - ptr < pattern[3])
> + return 0;
> +
> + /* check known prefix */
> + if (pattern[2] & SRE_INFO_PREFIX && pattern[5] > 1) {
> + /* <length> <skip> <prefix data> <overlap data> */
> + for (i = 0; i < pattern[5]; i++)
> + if ((SRE_CODE) ptr[i] != pattern[7 + i])
> + return 0;
> + return pattern[0] + 2 * pattern[6];
> + }
> + return pattern[0];
> +}
> +#endif
> +
> +/* The macros below should be used to protect recursive SRE(match)()
> + * calls that *failed* and do *not* return immediately (IOW, those
> + * that will backtrack). Explaining:
> + *
> + * - Recursive SRE(match)() returned true: that's usually a success
> + * (besides atypical cases like ASSERT_NOT), therefore there's no
> + * reason to restore lastmark;
> + *
> + * - Recursive SRE(match)() returned false but the current SRE(match)()
> + * is returning to the caller: If the current SRE(match)() is the
> + * top function of the recursion, returning false will be a matching
> + * failure, and it doesn't matter where lastmark is pointing to.
> + * If it's *not* the top function, it will be a recursive SRE(match)()
> + * failure by itself, and the calling SRE(match)() will have to deal
> + * with the failure by the same rules explained here (it will restore
> + * lastmark by itself if necessary);
> + *
> + * - Recursive SRE(match)() returned false, and will continue the
> + * outside 'for' loop: must be protected when breaking, since the next
> + * OP could potentially depend on lastmark;
> + *
> + * - Recursive SRE(match)() returned false, and will be called again
> + * inside a local for/while loop: must be protected between each
> + * loop iteration, since the recursive SRE(match)() could do anything,
> + * and could potentially depend on lastmark.
> + *
> + * For more information, check the discussion at SF patch #712900.
> + */
> +#define LASTMARK_SAVE() \
> + do { \
> + ctx->lastmark = state->lastmark; \
> + ctx->lastindex = state->lastindex; \
> + } while (0)
> +#define LASTMARK_RESTORE() \
> + do { \
> + state->lastmark = ctx->lastmark; \
> + state->lastindex = ctx->lastindex; \
> + } while (0)
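
In short, the discipline described above is: snapshot lastmark/lastindex before an attempt that may backtrack, and put the snapshot back whenever the attempt fails but matching continues. A reduced sketch of that pattern with a toy state; toy_state and try_alternative() are made-up names, not the real SRE_STATE:

    #include <stdio.h>

    /* Toy stand-in for the two fields the macros save and restore. */
    struct toy_state {
        int lastmark;
        int lastindex;
    };

    /* A failed attempt may still have moved the marks around. */
    static int try_alternative(struct toy_state *s, int which)
    {
        s->lastmark = 100 + which;          /* side effect of the attempt */
        s->lastindex = which;
        return which == 2;                  /* only alternative 2 matches */
    }

    int main(void)
    {
        struct toy_state s = { -1, -1 };
        int saved_mark = s.lastmark;        /* LASTMARK_SAVE() */
        int saved_index = s.lastindex;

        for (int alt = 0; alt < 3; alt++) {
            if (try_alternative(&s, alt))
                break;                      /* success: keep the new marks */
            s.lastmark = saved_mark;        /* LASTMARK_RESTORE() before the */
            s.lastindex = saved_index;      /* next alternative is tried */
        }
        printf("lastmark=%d lastindex=%d\n", s.lastmark, s.lastindex);
        return 0;
    }
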
> +#ifdef UEFI_C_SOURCE
> +#undef RETURN_ERROR
> +#undef RETURN_SUCCESS
> +#endif
> +#define RETURN_ERROR(i) do { return i; } while(0)
> +#define RETURN_FAILURE do { ret = 0; goto exit; } while(0)
> +#define RETURN_SUCCESS do { ret = 1; goto exit; } while(0)
> +
> +#define RETURN_ON_ERROR(i) \
> + do { if (i < 0) RETURN_ERROR(i); } while (0)
> +#define RETURN_ON_SUCCESS(i) \
> + do { RETURN_ON_ERROR(i); if (i > 0) RETURN_SUCCESS; } while (0)
> +#define RETURN_ON_FAILURE(i) \
> + do { RETURN_ON_ERROR(i); if (i == 0) RETURN_FAILURE; } while (0)
> +
> +#define DATA_STACK_ALLOC(state, type, ptr) \
> +do { \
> + alloc_pos = state->data_stack_base; \
> + TRACE(("allocating %s in %" PY_FORMAT_SIZE_T "d " \
> + "(%" PY_FORMAT_SIZE_T "d)\n", \
> + Py_STRINGIFY(type), alloc_pos, sizeof(type))); \
> + if (sizeof(type) > state->data_stack_size - alloc_pos) { \
> + int j = data_stack_grow(state, sizeof(type)); \
> + if (j < 0) return j; \
> + if (ctx_pos != -1) \
> + DATA_STACK_LOOKUP_AT(state, SRE(match_context), ctx, ctx_pos); \
> + } \
> + ptr = (type*)(state->data_stack+alloc_pos); \
> + state->data_stack_base += sizeof(type); \
> +} while (0)
> +
> +#define DATA_STACK_LOOKUP_AT(state, type, ptr, pos) \
> +do { \
> + TRACE(("looking up %s at %" PY_FORMAT_SIZE_T "d\n", Py_STRINGIFY(type), pos)); \
> + ptr = (type*)(state->data_stack+pos); \
> +} while (0)
> +
> +#define DATA_STACK_PUSH(state, data, size) \
> +do { \
> + TRACE(("copy data in %p to %" PY_FORMAT_SIZE_T "d " \
> + "(%" PY_FORMAT_SIZE_T "d)\n", \
> + data, state->data_stack_base, size)); \
> + if (size > state->data_stack_size - state->data_stack_base) { \
> + int j = data_stack_grow(state, size); \
> + if (j < 0) return j; \
> + if (ctx_pos != -1) \
> + DATA_STACK_LOOKUP_AT(state, SRE(match_context), ctx, ctx_pos); \
> + } \
> + memcpy(state->data_stack+state->data_stack_base, data, size); \
> + state->data_stack_base += size; \
> +} while (0)
> +
> +#define DATA_STACK_POP(state, data, size, discard) \
> +do { \
> + TRACE(("copy data to %p from %" PY_FORMAT_SIZE_T "d " \
> + "(%" PY_FORMAT_SIZE_T "d)\n", \
> + data, state->data_stack_base-size, size)); \
> + memcpy(data, state->data_stack+state->data_stack_base-size, size); \
> + if (discard) \
> + state->data_stack_base -= size; \
> +} while (0)
> +
> +#define DATA_STACK_POP_DISCARD(state, size) \
> +do { \
> + TRACE(("discard data from %" PY_FORMAT_SIZE_T "d " \
> + "(%" PY_FORMAT_SIZE_T "d)\n", \
> + state->data_stack_base-size, size)); \
> + state->data_stack_base -= size; \
> +} while(0)
> +
> +#define DATA_PUSH(x) \
> + DATA_STACK_PUSH(state, (x), sizeof(*(x)))
> +#define DATA_POP(x) \
> + DATA_STACK_POP(state, (x), sizeof(*(x)), 1)
> +#define DATA_POP_DISCARD(x) \
> + DATA_STACK_POP_DISCARD(state, sizeof(*(x)))
> +#define DATA_ALLOC(t,p) \
> + DATA_STACK_ALLOC(state, t, p)
> +#define DATA_LOOKUP_AT(t,p,pos) \
> + DATA_STACK_LOOKUP_AT(state,t,p,pos)
> +
> +#define MARK_PUSH(lastmark) \
> + do if (lastmark > 0) { \
> + i = lastmark; /* ctx->lastmark may change if reallocated */ \
> + DATA_STACK_PUSH(state, state->mark, (i+1)*sizeof(void*)); \
> + } while (0)
> +#define MARK_POP(lastmark) \
> + do if (lastmark > 0) { \
> + DATA_STACK_POP(state, state->mark, (lastmark+1)*sizeof(void*), 1); \
> + } while (0)
> +#define MARK_POP_KEEP(lastmark) \
> + do if (lastmark > 0) { \
> + DATA_STACK_POP(state, state->mark, (lastmark+1)*sizeof(void*), 0); \
> + } while (0)
> +#define MARK_POP_DISCARD(lastmark) \
> + do if (lastmark > 0) { \
> + DATA_STACK_POP_DISCARD(state, (lastmark+1)*sizeof(void*)); \
> + } while (0)
> +
> +#define JUMP_NONE 0
> +#define JUMP_MAX_UNTIL_1 1
> +#define JUMP_MAX_UNTIL_2 2
> +#define JUMP_MAX_UNTIL_3 3
> +#define JUMP_MIN_UNTIL_1 4
> +#define JUMP_MIN_UNTIL_2 5
> +#define JUMP_MIN_UNTIL_3 6
> +#define JUMP_REPEAT 7
> +#define JUMP_REPEAT_ONE_1 8
> +#define JUMP_REPEAT_ONE_2 9
> +#define JUMP_MIN_REPEAT_ONE 10
> +#define JUMP_BRANCH 11
> +#define JUMP_ASSERT 12
> +#define JUMP_ASSERT_NOT 13
> +
> +#define DO_JUMPX(jumpvalue, jumplabel, nextpattern, matchall) \
> + DATA_ALLOC(SRE(match_context), nextctx); \
> + nextctx->last_ctx_pos = ctx_pos; \
> + nextctx->jump = jumpvalue; \
> + nextctx->pattern = nextpattern; \
> + nextctx->match_all = matchall; \
> + ctx_pos = alloc_pos; \
> + ctx = nextctx; \
> + goto entrance; \
> + jumplabel: \
> + while (0) /* gcc doesn't like labels at end of scopes */ \
> +
> +#define DO_JUMP(jumpvalue, jumplabel, nextpattern) \
> + DO_JUMPX(jumpvalue, jumplabel, nextpattern, ctx->match_all)
> +
> +#define DO_JUMP0(jumpvalue, jumplabel, nextpattern) \
> + DO_JUMPX(jumpvalue, jumplabel, nextpattern, 0)
> +
> +typedef struct {
> + Py_ssize_t last_ctx_pos;
> + Py_ssize_t jump;
> + SRE_CHAR* ptr;
> + SRE_CODE* pattern;
> + Py_ssize_t count;
> + Py_ssize_t lastmark;
> + Py_ssize_t lastindex;
> + union {
> + SRE_CODE chr;
> + SRE_REPEAT* rep;
> + } u;
> + int match_all;
> +} SRE(match_context);
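
The DO_JUMP*/entrance/exit machinery together with these match_context records turns what would be C recursion into an explicit stack of frames on state->data_stack, so deeply nested patterns cannot blow the C stack. The same idea in isolation, applied to a toy function; struct frame, fib() and the JUMP_* names below are purely illustrative, not the sre context layout:

    #include <stdio.h>

    /* fib(n) with an explicit frame stack and "resume labels", mirroring how
       SRE(match) replaces C recursion with match_context records plus the
       entrance/exit jump switch. Illustrative only. */

    enum { JUMP_NONE, JUMP_FIRST_DONE, JUMP_SECOND_DONE };

    struct frame {
        int n;      /* argument of this "call" */
        int first;  /* holds fib(n-1) once it is known */
        int jump;   /* where the *parent* resumes when this frame returns */
    };

    static int fib(int n)
    {
        struct frame stack[64];
        int top = 0;
        int ret = 0;

        stack[0] = (struct frame){ n, 0, JUMP_NONE };

    entrance:
        if (stack[top].n < 2) {                       /* base case */
            ret = stack[top].n;
            goto frame_exit;
        }
        /* "call" fib(n-1): the child records the parent's resume point */
        stack[top + 1] = (struct frame){ stack[top].n - 1, 0, JUMP_FIRST_DONE };
        top++;
        goto entrance;

    first_done:                                       /* back in the parent */
        stack[top].first = ret;
        stack[top + 1] = (struct frame){ stack[top].n - 2, 0, JUMP_SECOND_DONE };
        top++;
        goto entrance;

    second_done:
        ret = stack[top].first + ret;

    frame_exit:
        if (top == 0)
            return ret;                               /* outermost frame done */
        switch (stack[top--].jump) {                  /* pop, resume in parent */
        case JUMP_FIRST_DONE:  goto first_done;
        case JUMP_SECOND_DONE: goto second_done;
        }
        return ret;                                   /* not reached */
    }

    int main(void)
    {
        printf("fib(10) = %d\n", fib(10));            /* 55 */
        return 0;
    }
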
> +
> +/* check if string matches the given pattern. returns <0 for
> + error, 0 for failure, and 1 for success */
> +LOCAL(Py_ssize_t)
> +SRE(match)(SRE_STATE* state, SRE_CODE* pattern, int match_all)
> +{
> + SRE_CHAR* end = (SRE_CHAR *)state->end;
> + Py_ssize_t alloc_pos, ctx_pos = -1;
> + Py_ssize_t i, ret = 0;
> + Py_ssize_t jump;
> + unsigned int sigcount=0;
> +
> + SRE(match_context)* ctx;
> + SRE(match_context)* nextctx;
> +
> + TRACE(("|%p|%p|ENTER\n", pattern, state->ptr));
> +
> + DATA_ALLOC(SRE(match_context), ctx);
> + ctx->last_ctx_pos = -1;
> + ctx->jump = JUMP_NONE;
> + ctx->pattern = pattern;
> + ctx->match_all = match_all;
> + ctx_pos = alloc_pos;
> +
> +entrance:
> +
> + ctx->ptr = (SRE_CHAR *)state->ptr;
> +
> + if (ctx->pattern[0] == SRE_OP_INFO) {
> + /* optimization info block */
> + /* <INFO> <1=skip> <2=flags> <3=min> ... */
> + if (ctx->pattern[3] && (uintptr_t)(end - ctx->ptr) < ctx->pattern[3]) {
> + TRACE(("reject (got %" PY_FORMAT_SIZE_T "d chars, "
> + "need %" PY_FORMAT_SIZE_T "d)\n",
> + end - ctx->ptr, (Py_ssize_t) ctx->pattern[3]));
> + RETURN_FAILURE;
> + }
> + ctx->pattern += ctx->pattern[1] + 1;
> + }
> +
> + for (;;) {
> + ++sigcount;
> + if ((0 == (sigcount & 0xfff)) && PyErr_CheckSignals())
> + RETURN_ERROR(SRE_ERROR_INTERRUPTED);
> +
> + switch (*ctx->pattern++) {
> +
> + case SRE_OP_MARK:
> + /* set mark */
> + /* <MARK> <gid> */
> + TRACE(("|%p|%p|MARK %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[0]));
> + i = ctx->pattern[0];
> + if (i & 1)
> + state->lastindex = i/2 + 1;
> + if (i > state->lastmark) {
> + /* state->lastmark is the highest valid index in the
> + state->mark array. If it is increased by more than 1,
> + the intervening marks must be set to NULL to signal
> + that these marks have not been encountered. */
> + Py_ssize_t j = state->lastmark + 1;
> + while (j < i)
> + state->mark[j++] = NULL;
> + state->lastmark = i;
> + }
> + state->mark[i] = ctx->ptr;
> + ctx->pattern++;
> + break;
> +
> + case SRE_OP_LITERAL:
> + /* match literal string */
> + /* <LITERAL> <code> */
> + TRACE(("|%p|%p|LITERAL %d\n", ctx->pattern,
> + ctx->ptr, *ctx->pattern));
> + if (ctx->ptr >= end || (SRE_CODE) ctx->ptr[0] != ctx->pattern[0])
> + RETURN_FAILURE;
> + ctx->pattern++;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_NOT_LITERAL:
> + /* match anything that is not literal character */
> + /* <NOT_LITERAL> <code> */
> + TRACE(("|%p|%p|NOT_LITERAL %d\n", ctx->pattern,
> + ctx->ptr, *ctx->pattern));
> + if (ctx->ptr >= end || (SRE_CODE) ctx->ptr[0] == ctx->pattern[0])
> + RETURN_FAILURE;
> + ctx->pattern++;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_SUCCESS:
> + /* end of pattern */
> + TRACE(("|%p|%p|SUCCESS\n", ctx->pattern, ctx->ptr));
> + if (!ctx->match_all || ctx->ptr == state->end) {
> + state->ptr = ctx->ptr;
> + RETURN_SUCCESS;
> + }
> + RETURN_FAILURE;
> +
> + case SRE_OP_AT:
> + /* match at given position */
> + /* <AT> <code> */
> + TRACE(("|%p|%p|AT %d\n", ctx->pattern, ctx->ptr, *ctx->pattern));
> + if (!SRE(at)(state, ctx->ptr, *ctx->pattern))
> + RETURN_FAILURE;
> + ctx->pattern++;
> + break;
> +
> + case SRE_OP_CATEGORY:
> + /* match at given category */
> + /* <CATEGORY> <code> */
> + TRACE(("|%p|%p|CATEGORY %d\n", ctx->pattern,
> + ctx->ptr, *ctx->pattern));
> + if (ctx->ptr >= end || !sre_category(ctx->pattern[0], ctx->ptr[0]))
> + RETURN_FAILURE;
> + ctx->pattern++;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_ANY:
> + /* match anything (except a newline) */
> + /* <ANY> */
> + TRACE(("|%p|%p|ANY\n", ctx->pattern, ctx->ptr));
> + if (ctx->ptr >= end || SRE_IS_LINEBREAK(ctx->ptr[0]))
> + RETURN_FAILURE;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_ANY_ALL:
> + /* match anything */
> + /* <ANY_ALL> */
> + TRACE(("|%p|%p|ANY_ALL\n", ctx->pattern, ctx->ptr));
> + if (ctx->ptr >= end)
> + RETURN_FAILURE;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_IN:
> + /* match set member (or non_member) */
> + /* <IN> <skip> <set> */
> + TRACE(("|%p|%p|IN\n", ctx->pattern, ctx->ptr));
> + if (ctx->ptr >= end ||
> + !SRE(charset)(state, ctx->pattern + 1, *ctx->ptr))
> + RETURN_FAILURE;
> + ctx->pattern += ctx->pattern[0];
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_LITERAL_IGNORE:
> + TRACE(("|%p|%p|LITERAL_IGNORE %d\n",
> + ctx->pattern, ctx->ptr, ctx->pattern[0]));
> + if (ctx->ptr >= end ||
> + state->lower(*ctx->ptr) != state->lower(*ctx->pattern))
> + RETURN_FAILURE;
> + ctx->pattern++;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_NOT_LITERAL_IGNORE:
> + TRACE(("|%p|%p|NOT_LITERAL_IGNORE %d\n",
> + ctx->pattern, ctx->ptr, *ctx->pattern));
> + if (ctx->ptr >= end ||
> + state->lower(*ctx->ptr) == state->lower(*ctx->pattern))
> + RETURN_FAILURE;
> + ctx->pattern++;
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_IN_IGNORE:
> + TRACE(("|%p|%p|IN_IGNORE\n", ctx->pattern, ctx->ptr));
> + if (ctx->ptr >= end
> + || !SRE(charset)(state, ctx->pattern+1,
> + (SRE_CODE)state->lower(*ctx->ptr)))
> + RETURN_FAILURE;
> + ctx->pattern += ctx->pattern[0];
> + ctx->ptr++;
> + break;
> +
> + case SRE_OP_JUMP:
> + case SRE_OP_INFO:
> + /* jump forward */
> + /* <JUMP> <offset> */
> + TRACE(("|%p|%p|JUMP %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[0]));
> + ctx->pattern += ctx->pattern[0];
> + break;
> +
> + case SRE_OP_BRANCH:
> + /* alternation */
> + /* <BRANCH> <0=skip> code <JUMP> ... <NULL> */
> + TRACE(("|%p|%p|BRANCH\n", ctx->pattern, ctx->ptr));
> + LASTMARK_SAVE();
> + ctx->u.rep = state->repeat;
> + if (ctx->u.rep)
> + MARK_PUSH(ctx->lastmark);
> + for (; ctx->pattern[0]; ctx->pattern += ctx->pattern[0]) {
> + if (ctx->pattern[1] == SRE_OP_LITERAL &&
> + (ctx->ptr >= end ||
> + (SRE_CODE) *ctx->ptr != ctx->pattern[2]))
> + continue;
> + if (ctx->pattern[1] == SRE_OP_IN &&
> + (ctx->ptr >= end ||
> + !SRE(charset)(state, ctx->pattern + 3,
> + (SRE_CODE) *ctx->ptr)))
> + continue;
> + state->ptr = ctx->ptr;
> + DO_JUMP(JUMP_BRANCH, jump_branch, ctx->pattern+1);
> + if (ret) {
> + if (ctx->u.rep)
> + MARK_POP_DISCARD(ctx->lastmark);
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + if (ctx->u.rep)
> + MARK_POP_KEEP(ctx->lastmark);
> + LASTMARK_RESTORE();
> + }
> + if (ctx->u.rep)
> + MARK_POP_DISCARD(ctx->lastmark);
> + RETURN_FAILURE;
> +
> + case SRE_OP_REPEAT_ONE:
> + /* match repeated sequence (maximizing regexp) */
> +
> + /* this operator only works if the repeated item is
> + exactly one character wide, and we're not already
> + collecting backtracking points. for other cases,
> + use the MAX_REPEAT operator */
> +
> + /* <REPEAT_ONE> <skip> <1=min> <2=max> item <SUCCESS> tail */
> +
> + TRACE(("|%p|%p|REPEAT_ONE %d %d\n", ctx->pattern, ctx->ptr,
> + ctx->pattern[1], ctx->pattern[2]));
> +
> + if ((Py_ssize_t) ctx->pattern[1] > end - ctx->ptr)
> + RETURN_FAILURE; /* cannot match */
> +
> + state->ptr = ctx->ptr;
> +
> + ret = SRE(count)(state, ctx->pattern+3, ctx->pattern[2]);
> + RETURN_ON_ERROR(ret);
> + DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
> + ctx->count = ret;
> + ctx->ptr += ctx->count;
> +
> + /* when we arrive here, count contains the number of
> + matches, and ctx->ptr points to the tail of the target
> + string. check if the rest of the pattern matches,
> + and backtrack if not. */
> +
> + if (ctx->count < (Py_ssize_t) ctx->pattern[1])
> + RETURN_FAILURE;
> +
> + if (ctx->pattern[ctx->pattern[0]] == SRE_OP_SUCCESS &&
> + ctx->ptr == state->end) {
> + /* tail is empty. we're finished */
> + state->ptr = ctx->ptr;
> + RETURN_SUCCESS;
> + }
> +
> + LASTMARK_SAVE();
> +
> + if (ctx->pattern[ctx->pattern[0]] == SRE_OP_LITERAL) {
> + /* tail starts with a literal. skip positions where
> + the rest of the pattern cannot possibly match */
> + ctx->u.chr = ctx->pattern[ctx->pattern[0]+1];
> + for (;;) {
> + while (ctx->count >= (Py_ssize_t) ctx->pattern[1] &&
> + (ctx->ptr >= end || *ctx->ptr != ctx->u.chr)) {
> + ctx->ptr--;
> + ctx->count--;
> + }
> + if (ctx->count < (Py_ssize_t) ctx->pattern[1])
> + break;
> + state->ptr = ctx->ptr;
> + DO_JUMP(JUMP_REPEAT_ONE_1, jump_repeat_one_1,
> + ctx->pattern+ctx->pattern[0]);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> +
> + LASTMARK_RESTORE();
> +
> + ctx->ptr--;
> + ctx->count--;
> + }
> +
> + } else {
> + /* general case */
> + while (ctx->count >= (Py_ssize_t) ctx->pattern[1]) {
> + state->ptr = ctx->ptr;
> + DO_JUMP(JUMP_REPEAT_ONE_2, jump_repeat_one_2,
> + ctx->pattern+ctx->pattern[0]);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + ctx->ptr--;
> + ctx->count--;
> + LASTMARK_RESTORE();
> + }
> + }
> + RETURN_FAILURE;
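
So REPEAT_ONE is "maximize first, then give back": SRE(count) consumes as many single-width items as allowed, and the loops above return characters one at a time until the tail matches. The same strategy hand-rolled for the pattern a*ab; match_a_star_ab() is a sketch under that assumption, not sre code:

    #include <stddef.h>
    #include <stdio.h>

    /* Match "a*ab" at the start of s with the REPEAT_ONE strategy: greedily
       count the repeated item, then give characters back until the two
       character tail "ab" fits. */
    static int match_a_star_ab(const char *s)
    {
        size_t count = 0;
        while (s[count] == 'a')                 /* maximize, like SRE(count) */
            count++;

        for (;;) {                              /* then backtrack over the repeat */
            if (s[count] == 'a' && s[count + 1] == 'b')
                return 1;                       /* tail matches here */
            if (count == 0)
                return 0;                       /* nothing left to give back */
            count--;
        }
    }

    int main(void)
    {
        printf("%d\n", match_a_star_ab("aaab"));   /* 1: one 'a' is given back */
        printf("%d\n", match_a_star_ab("ab"));     /* 1: zero repeats kept */
        printf("%d\n", match_a_star_ab("aaaa"));   /* 0: tail never matches */
        return 0;
    }
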
> +
> + case SRE_OP_MIN_REPEAT_ONE:
> + /* match repeated sequence (minimizing regexp) */
> +
> + /* this operator only works if the repeated item is
> + exactly one character wide, and we're not already
> + collecting backtracking points. for other cases,
> + use the MIN_REPEAT operator */
> +
> + /* <MIN_REPEAT_ONE> <skip> <1=min> <2=max> item <SUCCESS> tail */
> +
> + TRACE(("|%p|%p|MIN_REPEAT_ONE %d %d\n", ctx->pattern, ctx->ptr,
> + ctx->pattern[1], ctx->pattern[2]));
> +
> + if ((Py_ssize_t) ctx->pattern[1] > end - ctx->ptr)
> + RETURN_FAILURE; /* cannot match */
> +
> + state->ptr = ctx->ptr;
> +
> + if (ctx->pattern[1] == 0)
> + ctx->count = 0;
> + else {
> + /* count using pattern min as the maximum */
> + ret = SRE(count)(state, ctx->pattern+3, ctx->pattern[1]);
> + RETURN_ON_ERROR(ret);
> + DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
> + if (ret < (Py_ssize_t) ctx->pattern[1])
> + /* didn't match minimum number of times */
> + RETURN_FAILURE;
> + /* advance past minimum matches of repeat */
> + ctx->count = ret;
> + ctx->ptr += ctx->count;
> + }
> +
> + if (ctx->pattern[ctx->pattern[0]] == SRE_OP_SUCCESS &&
> + (!match_all || ctx->ptr == state->end)) {
> + /* tail is empty. we're finished */
> + state->ptr = ctx->ptr;
> + RETURN_SUCCESS;
> +
> + } else {
> + /* general case */
> + LASTMARK_SAVE();
> + while ((Py_ssize_t)ctx->pattern[2] == SRE_MAXREPEAT
> + || ctx->count <= (Py_ssize_t)ctx->pattern[2]) {
> + state->ptr = ctx->ptr;
> + DO_JUMP(JUMP_MIN_REPEAT_ONE,jump_min_repeat_one,
> + ctx->pattern+ctx->pattern[0]);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + state->ptr = ctx->ptr;
> + ret = SRE(count)(state, ctx->pattern+3, 1);
> + RETURN_ON_ERROR(ret);
> + DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
> + if (ret == 0)
> + break;
> + assert(ret == 1);
> + ctx->ptr++;
> + ctx->count++;
> + LASTMARK_RESTORE();
> + }
> + }
> + RETURN_FAILURE;
> +
> + case SRE_OP_REPEAT:
> + /* create repeat context. all the hard work is done
> + by the UNTIL operator (MAX_UNTIL, MIN_UNTIL) */
> + /* <REPEAT> <skip> <1=min> <2=max> item <UNTIL> tail */
> + TRACE(("|%p|%p|REPEAT %d %d\n", ctx->pattern, ctx->ptr,
> + ctx->pattern[1], ctx->pattern[2]));
> +
> + /* install new repeat context */
> + ctx->u.rep = (SRE_REPEAT*) PyObject_MALLOC(sizeof(*ctx->u.rep));
> + if (!ctx->u.rep) {
> + PyErr_NoMemory();
> + RETURN_FAILURE;
> + }
> + ctx->u.rep->count = -1;
> + ctx->u.rep->pattern = ctx->pattern;
> + ctx->u.rep->prev = state->repeat;
> + ctx->u.rep->last_ptr = NULL;
> + state->repeat = ctx->u.rep;
> +
> + state->ptr = ctx->ptr;
> + DO_JUMP(JUMP_REPEAT, jump_repeat, ctx->pattern+ctx->pattern[0]);
> + state->repeat = ctx->u.rep->prev;
> + PyObject_FREE(ctx->u.rep);
> +
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + RETURN_FAILURE;
> +
> + case SRE_OP_MAX_UNTIL:
> + /* maximizing repeat */
> + /* <REPEAT> <skip> <1=min> <2=max> item <MAX_UNTIL> tail */
> +
> + /* FIXME: we probably need to deal with zero-width
> + matches in here... */
> +
> + ctx->u.rep = state->repeat;
> + if (!ctx->u.rep)
> + RETURN_ERROR(SRE_ERROR_STATE);
> +
> + state->ptr = ctx->ptr;
> +
> + ctx->count = ctx->u.rep->count+1;
> +
> + TRACE(("|%p|%p|MAX_UNTIL %" PY_FORMAT_SIZE_T "d\n", ctx->pattern,
> + ctx->ptr, ctx->count));
> +
> + if (ctx->count < (Py_ssize_t) ctx->u.rep->pattern[1]) {
> + /* not enough matches */
> + ctx->u.rep->count = ctx->count;
> + DO_JUMP(JUMP_MAX_UNTIL_1, jump_max_until_1,
> + ctx->u.rep->pattern+3);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + ctx->u.rep->count = ctx->count-1;
> + state->ptr = ctx->ptr;
> + RETURN_FAILURE;
> + }
> +
> + if ((ctx->count < (Py_ssize_t) ctx->u.rep->pattern[2] ||
> + ctx->u.rep->pattern[2] == SRE_MAXREPEAT) &&
> + state->ptr != ctx->u.rep->last_ptr) {
> + /* we may have enough matches, but if we can
> + match another item, do so */
> + ctx->u.rep->count = ctx->count;
> + LASTMARK_SAVE();
> + MARK_PUSH(ctx->lastmark);
> + /* zero-width match protection */
> + DATA_PUSH(&ctx->u.rep->last_ptr);
> + ctx->u.rep->last_ptr = state->ptr;
> + DO_JUMP(JUMP_MAX_UNTIL_2, jump_max_until_2,
> + ctx->u.rep->pattern+3);
> + DATA_POP(&ctx->u.rep->last_ptr);
> + if (ret) {
> + MARK_POP_DISCARD(ctx->lastmark);
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + MARK_POP(ctx->lastmark);
> + LASTMARK_RESTORE();
> + ctx->u.rep->count = ctx->count-1;
> + state->ptr = ctx->ptr;
> + }
> +
> + /* cannot match more repeated items here. make sure the
> + tail matches */
> + state->repeat = ctx->u.rep->prev;
> + DO_JUMP(JUMP_MAX_UNTIL_3, jump_max_until_3, ctx->pattern);
> + RETURN_ON_SUCCESS(ret);
> + state->repeat = ctx->u.rep;
> + state->ptr = ctx->ptr;
> + RETURN_FAILURE;
> +
> + case SRE_OP_MIN_UNTIL:
> + /* minimizing repeat */
> + /* <REPEAT> <skip> <1=min> <2=max> item <MIN_UNTIL> tail */
> +
> + ctx->u.rep = state->repeat;
> + if (!ctx->u.rep)
> + RETURN_ERROR(SRE_ERROR_STATE);
> +
> + state->ptr = ctx->ptr;
> +
> + ctx->count = ctx->u.rep->count+1;
> +
> + TRACE(("|%p|%p|MIN_UNTIL %" PY_FORMAT_SIZE_T "d %p\n", ctx->pattern,
> + ctx->ptr, ctx->count, ctx->u.rep->pattern));
> +
> + if (ctx->count < (Py_ssize_t) ctx->u.rep->pattern[1]) {
> + /* not enough matches */
> + ctx->u.rep->count = ctx->count;
> + DO_JUMP(JUMP_MIN_UNTIL_1, jump_min_until_1,
> + ctx->u.rep->pattern+3);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + ctx->u.rep->count = ctx->count-1;
> + state->ptr = ctx->ptr;
> + RETURN_FAILURE;
> + }
> +
> + LASTMARK_SAVE();
> +
> + /* see if the tail matches */
> + state->repeat = ctx->u.rep->prev;
> + DO_JUMP(JUMP_MIN_UNTIL_2, jump_min_until_2, ctx->pattern);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> +
> + state->repeat = ctx->u.rep;
> + state->ptr = ctx->ptr;
> +
> + LASTMARK_RESTORE();
> +
> + if ((ctx->count >= (Py_ssize_t) ctx->u.rep->pattern[2]
> + && ctx->u.rep->pattern[2] != SRE_MAXREPEAT) ||
> + state->ptr == ctx->u.rep->last_ptr)
> + RETURN_FAILURE;
> +
> + ctx->u.rep->count = ctx->count;
> + /* zero-width match protection */
> + DATA_PUSH(&ctx->u.rep->last_ptr);
> + ctx->u.rep->last_ptr = state->ptr;
> + DO_JUMP(JUMP_MIN_UNTIL_3,jump_min_until_3,
> + ctx->u.rep->pattern+3);
> + DATA_POP(&ctx->u.rep->last_ptr);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_SUCCESS;
> + }
> + ctx->u.rep->count = ctx->count-1;
> + state->ptr = ctx->ptr;
> + RETURN_FAILURE;
> +
> + case SRE_OP_GROUPREF:
> + /* match backreference */
> + TRACE(("|%p|%p|GROUPREF %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[0]));
> + i = ctx->pattern[0];
> + {
> + Py_ssize_t groupref = i+i;
> + if (groupref >= state->lastmark) {
> + RETURN_FAILURE;
> + } else {
> + SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
> + SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
> + if (!p || !e || e < p)
> + RETURN_FAILURE;
> + while (p < e) {
> + if (ctx->ptr >= end || *ctx->ptr != *p)
> + RETURN_FAILURE;
> + p++;
> + ctx->ptr++;
> + }
> + }
> + }
> + ctx->pattern++;
> + break;
> +
> + case SRE_OP_GROUPREF_IGNORE:
> + /* match backreference */
> + TRACE(("|%p|%p|GROUPREF_IGNORE %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[0]));
> + i = ctx->pattern[0];
> + {
> + Py_ssize_t groupref = i+i;
> + if (groupref >= state->lastmark) {
> + RETURN_FAILURE;
> + } else {
> + SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
> + SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
> + if (!p || !e || e < p)
> + RETURN_FAILURE;
> + while (p < e) {
> + if (ctx->ptr >= end ||
> + state->lower(*ctx->ptr) != state->lower(*p))
> + RETURN_FAILURE;
> + p++;
> + ctx->ptr++;
> + }
> + }
> + }
> + ctx->pattern++;
> + break;
> +
> + case SRE_OP_GROUPREF_EXISTS:
> + TRACE(("|%p|%p|GROUPREF_EXISTS %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[0]));
> + /* <GROUPREF_EXISTS> <group> <skip> codeyes <JUMP> codeno ... */
> + i = ctx->pattern[0];
> + {
> + Py_ssize_t groupref = i+i;
> + if (groupref >= state->lastmark) {
> + ctx->pattern += ctx->pattern[1];
> + break;
> + } else {
> + SRE_CHAR* p = (SRE_CHAR*) state->mark[groupref];
> + SRE_CHAR* e = (SRE_CHAR*) state->mark[groupref+1];
> + if (!p || !e || e < p) {
> + ctx->pattern += ctx->pattern[1];
> + break;
> + }
> + }
> + }
> + ctx->pattern += 2;
> + break;
> +
> + case SRE_OP_ASSERT:
> + /* assert subpattern */
> + /* <ASSERT> <skip> <back> <pattern> */
> + TRACE(("|%p|%p|ASSERT %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[1]));
> + if (ctx->ptr - (SRE_CHAR *)state->beginning < (Py_ssize_t)ctx->pattern[1])
> + RETURN_FAILURE;
> + state->ptr = ctx->ptr - ctx->pattern[1];
> + DO_JUMP0(JUMP_ASSERT, jump_assert, ctx->pattern+2);
> + RETURN_ON_FAILURE(ret);
> + ctx->pattern += ctx->pattern[0];
> + break;
> +
> + case SRE_OP_ASSERT_NOT:
> + /* assert not subpattern */
> + /* <ASSERT_NOT> <skip> <back> <pattern> */
> + TRACE(("|%p|%p|ASSERT_NOT %d\n", ctx->pattern,
> + ctx->ptr, ctx->pattern[1]));
> + if (ctx->ptr - (SRE_CHAR *)state->beginning >= (Py_ssize_t)ctx->pattern[1]) {
> + state->ptr = ctx->ptr - ctx->pattern[1];
> + DO_JUMP0(JUMP_ASSERT_NOT, jump_assert_not, ctx->pattern+2);
> + if (ret) {
> + RETURN_ON_ERROR(ret);
> + RETURN_FAILURE;
> + }
> + }
> + ctx->pattern += ctx->pattern[0];
> + break;
> +
> + case SRE_OP_FAILURE:
> + /* immediate failure */
> + TRACE(("|%p|%p|FAILURE\n", ctx->pattern, ctx->ptr));
> + RETURN_FAILURE;
> +
> + default:
> + TRACE(("|%p|%p|UNKNOWN %d\n", ctx->pattern, ctx->ptr,
> + ctx->pattern[-1]));
> + RETURN_ERROR(SRE_ERROR_ILLEGAL);
> + }
> + }
> +
> +exit:
> + ctx_pos = ctx->last_ctx_pos;
> + jump = ctx->jump;
> + DATA_POP_DISCARD(ctx);
> + if (ctx_pos == -1)
> + return ret;
> + DATA_LOOKUP_AT(SRE(match_context), ctx, ctx_pos);
> +
> + switch (jump) {
> + case JUMP_MAX_UNTIL_2:
> + TRACE(("|%p|%p|JUMP_MAX_UNTIL_2\n", ctx->pattern, ctx->ptr));
> + goto jump_max_until_2;
> + case JUMP_MAX_UNTIL_3:
> + TRACE(("|%p|%p|JUMP_MAX_UNTIL_3\n", ctx->pattern, ctx->ptr));
> + goto jump_max_until_3;
> + case JUMP_MIN_UNTIL_2:
> + TRACE(("|%p|%p|JUMP_MIN_UNTIL_2\n", ctx->pattern, ctx->ptr));
> + goto jump_min_until_2;
> + case JUMP_MIN_UNTIL_3:
> + TRACE(("|%p|%p|JUMP_MIN_UNTIL_3\n", ctx->pattern, ctx->ptr));
> + goto jump_min_until_3;
> + case JUMP_BRANCH:
> + TRACE(("|%p|%p|JUMP_BRANCH\n", ctx->pattern, ctx->ptr));
> + goto jump_branch;
> + case JUMP_MAX_UNTIL_1:
> + TRACE(("|%p|%p|JUMP_MAX_UNTIL_1\n", ctx->pattern, ctx->ptr));
> + goto jump_max_until_1;
> + case JUMP_MIN_UNTIL_1:
> + TRACE(("|%p|%p|JUMP_MIN_UNTIL_1\n", ctx->pattern, ctx->ptr));
> + goto jump_min_until_1;
> + case JUMP_REPEAT:
> + TRACE(("|%p|%p|JUMP_REPEAT\n", ctx->pattern, ctx->ptr));
> + goto jump_repeat;
> + case JUMP_REPEAT_ONE_1:
> + TRACE(("|%p|%p|JUMP_REPEAT_ONE_1\n", ctx->pattern, ctx->ptr));
> + goto jump_repeat_one_1;
> + case JUMP_REPEAT_ONE_2:
> + TRACE(("|%p|%p|JUMP_REPEAT_ONE_2\n", ctx->pattern, ctx->ptr));
> + goto jump_repeat_one_2;
> + case JUMP_MIN_REPEAT_ONE:
> + TRACE(("|%p|%p|JUMP_MIN_REPEAT_ONE\n", ctx->pattern, ctx->ptr));
> + goto jump_min_repeat_one;
> + case JUMP_ASSERT:
> + TRACE(("|%p|%p|JUMP_ASSERT\n", ctx->pattern, ctx->ptr));
> + goto jump_assert;
> + case JUMP_ASSERT_NOT:
> + TRACE(("|%p|%p|JUMP_ASSERT_NOT\n", ctx->pattern, ctx->ptr));
> + goto jump_assert_not;
> + case JUMP_NONE:
> + TRACE(("|%p|%p|RETURN %" PY_FORMAT_SIZE_T "d\n", ctx->pattern,
> + ctx->ptr, ret));
> + break;
> + }
> +
> + return ret; /* should never get here */
> +}
> +
> +LOCAL(Py_ssize_t)
> +SRE(search)(SRE_STATE* state, SRE_CODE* pattern)
> +{
> + SRE_CHAR* ptr = (SRE_CHAR *)state->start;
> + SRE_CHAR* end = (SRE_CHAR *)state->end;
> + Py_ssize_t status = 0;
> + Py_ssize_t prefix_len = 0;
> + Py_ssize_t prefix_skip = 0;
> + SRE_CODE* prefix = NULL;
> + SRE_CODE* charset = NULL;
> + SRE_CODE* overlap = NULL;
> + int flags = 0;
> +
> + if (ptr > end)
> + return 0;
> +
> + if (pattern[0] == SRE_OP_INFO) {
> + /* optimization info block */
> + /* <INFO> <1=skip> <2=flags> <3=min> <4=max> <5=prefix info> */
> +
> + flags = pattern[2];
> +
> + if (pattern[3] && end - ptr < (Py_ssize_t)pattern[3]) {
> + TRACE(("reject (got %u chars, need %u)\n",
> + (unsigned int)(end - ptr), pattern[3]));
> + return 0;
> + }
> + if (pattern[3] > 1) {
> + /* adjust end point (but make sure we leave at least one
> + character in there, so literal search will work) */
> + end -= pattern[3] - 1;
> + if (end <= ptr)
> + end = ptr;
> + }
> +
> + if (flags & SRE_INFO_PREFIX) {
> + /* pattern starts with a known prefix */
> + /* <length> <skip> <prefix data> <overlap data> */
> + prefix_len = pattern[5];
> + prefix_skip = pattern[6];
> + prefix = pattern + 7;
> + overlap = prefix + prefix_len - 1;
> + } else if (flags & SRE_INFO_CHARSET)
> + /* pattern starts with a character from a known set */
> + /* <charset> */
> + charset = pattern + 5;
> +
> + pattern += 1 + pattern[1];
> + }
> +
> + TRACE(("prefix = %p %" PY_FORMAT_SIZE_T "d %" PY_FORMAT_SIZE_T "d\n",
> + prefix, prefix_len, prefix_skip));
> + TRACE(("charset = %p\n", charset));
> +
> + if (prefix_len == 1) {
> + /* pattern starts with a literal character */
> + SRE_CHAR c = (SRE_CHAR) prefix[0];
> +#if SIZEOF_SRE_CHAR < 4
> + if ((SRE_CODE) c != prefix[0])
> + return 0; /* literal can't match: doesn't fit in char width */
> +#endif
> + end = (SRE_CHAR *)state->end;
> + while (ptr < end) {
> + while (*ptr != c) {
> + if (++ptr >= end)
> + return 0;
> + }
> + TRACE(("|%p|%p|SEARCH LITERAL\n", pattern, ptr));
> + state->start = ptr;
> + state->ptr = ptr + prefix_skip;
> + if (flags & SRE_INFO_LITERAL)
> + return 1; /* we got all of it */
> + status = SRE(match)(state, pattern + 2*prefix_skip, 0);
> + if (status != 0)
> + return status;
> + ++ptr;
> + }
> + return 0;
> + }
> +
> + if (prefix_len > 1) {
> + /* pattern starts with a known prefix. use the overlap
> + table to skip forward as fast as we possibly can */
> + Py_ssize_t i = 0;
> +
> + end = (SRE_CHAR *)state->end;
> + if (prefix_len > end - ptr)
> + return 0;
> +#if SIZEOF_SRE_CHAR < 4
> + for (i = 0; i < prefix_len; i++)
> + if ((SRE_CODE)(SRE_CHAR) prefix[i] != prefix[i])
> + return 0; /* literal can't match: doesn't fit in char width */
> +#endif
> + while (ptr < end) {
> + SRE_CHAR c = (SRE_CHAR) prefix[0];
> + while (*ptr++ != c) {
> + if (ptr >= end)
> + return 0;
> + }
> + if (ptr >= end)
> + return 0;
> +
> + i = 1;
> + do {
> + if (*ptr == (SRE_CHAR) prefix[i]) {
> + if (++i != prefix_len) {
> + if (++ptr >= end)
> + return 0;
> + continue;
> + }
> + /* found a potential match */
> + TRACE(("|%p|%p|SEARCH SCAN\n", pattern, ptr));
> + state->start = ptr - (prefix_len - 1);
> + state->ptr = ptr - (prefix_len - prefix_skip - 1);
> + if (flags & SRE_INFO_LITERAL)
> + return 1; /* we got all of it */
> + status = SRE(match)(state, pattern + 2*prefix_skip, 0);
> + if (status != 0)
> + return status;
> + /* close but no cigar -- try again */
> + if (++ptr >= end)
> + return 0;
> + }
> + i = overlap[i];
> + } while (i != 0);
> + }
> + return 0;
> + }
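
The overlap table consulted above is generated by sre_compile and indexed a little differently, but it carries essentially the same information as the classic KMP failure function: how far the prefix can overlap itself, so the search never re-examines characters. A compact standalone version of that idea; build_failure() and kmp_search() are illustrative names and the 64-entry table bound is just a demo limit:

    #include <stdio.h>
    #include <string.h>

    /* Classic failure table: fail[i] is the length of the longest proper
       prefix of pat[0..i] that is also a suffix of it. */
    static void build_failure(const char *pat, size_t m, size_t *fail)
    {
        size_t k = 0;
        fail[0] = 0;
        for (size_t i = 1; i < m; i++) {
            while (k > 0 && pat[i] != pat[k])
                k = fail[k - 1];
            if (pat[i] == pat[k])
                k++;
            fail[i] = k;
        }
    }

    /* Offset of the first occurrence of pat in text, or -1 if absent. */
    static long kmp_search(const char *text, const char *pat)
    {
        size_t n = strlen(text), m = strlen(pat);
        size_t fail[64];                     /* arbitrary bound for the demo */
        size_t k = 0;

        if (m == 0 || m > 64)
            return -1;
        build_failure(pat, m, fail);

        for (size_t i = 0; i < n; i++) {
            while (k > 0 && text[i] != pat[k])
                k = fail[k - 1];             /* slide the pattern over itself */
            if (text[i] == pat[k])
                k++;
            if (k == m)
                return (long)(i - m + 1);    /* match ends at position i */
        }
        return -1;
    }

    int main(void)
    {
        printf("%ld\n", kmp_search("abcabcabd", "abcabd"));   /* 3 */
        return 0;
    }
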
> +
> + if (charset) {
> + /* pattern starts with a character from a known set */
> + end = (SRE_CHAR *)state->end;
> + for (;;) {
> + while (ptr < end && !SRE(charset)(state, charset, *ptr))
> + ptr++;
> + if (ptr >= end)
> + return 0;
> + TRACE(("|%p|%p|SEARCH CHARSET\n", pattern, ptr));
> + state->start = ptr;
> + state->ptr = ptr;
> + status = SRE(match)(state, pattern, 0);
> + if (status != 0)
> + break;
> + ptr++;
> + }
> + } else {
> + /* general case */
> + assert(ptr <= end);
> + while (1) {
> + TRACE(("|%p|%p|SEARCH\n", pattern, ptr));
> + state->start = state->ptr = ptr;
> + status = SRE(match)(state, pattern, 0);
> + if (status != 0 || ptr >= end)
> + break;
> + ptr++;
> + }
> + }
> +
> + return status;
> +}
> +
> +#undef SRE_CHAR
> +#undef SIZEOF_SRE_CHAR
> +#undef SRE
> +
> +/* vim:ts=4:sw=4:et
> +*/
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
> new file mode 100644
> index 00000000..85b22142
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
> @@ -0,0 +1,1526 @@
> +/* Time module */
> +
> +#include "Python.h"
> +
> +#include <ctype.h>
> +
> +#ifdef HAVE_SYS_TIMES_H
> +#include <sys/times.h>
> +#endif
> +
> +#ifdef HAVE_SYS_TYPES_H
> +#include <sys/types.h>
> +#endif
> +
> +#if defined(HAVE_SYS_RESOURCE_H)
> +#include <sys/resource.h>
> +#endif
> +
> +#ifdef QUICKWIN
> +#include <io.h>
> +#endif
> +
> +#if defined(__WATCOMC__) && !defined(__QNX__)
> +#include <i86.h>
> +#else
> +#ifdef MS_WINDOWS
> +#define WIN32_LEAN_AND_MEAN
> +#include <windows.h>
> +#include "pythread.h"
> +#endif /* MS_WINDOWS */
> +#endif /* !__WATCOMC__ || __QNX__ */
> +
> +/* Forward declarations */
> +static int pysleep(_PyTime_t);
> +static PyObject* floattime(_Py_clock_info_t *info);
> +
> +static PyObject *
> +time_time(PyObject *self, PyObject *unused)
> +{
> + return floattime(NULL);
> +}
> +
> +PyDoc_STRVAR(time_doc,
> +"time() -> floating point number\n\
> +\n\
> +Return the current time in seconds since the Epoch.\n\
> +Fractions of a second may be present if the system clock provides them.");
> +
> +#if defined(HAVE_CLOCK)
> +
> +#ifndef CLOCKS_PER_SEC
> +#ifdef CLK_TCK
> +#define CLOCKS_PER_SEC CLK_TCK
> +#else
> +#define CLOCKS_PER_SEC 1000000
> +#endif
> +#endif
> +
> +static PyObject *
> +floatclock(_Py_clock_info_t *info)
> +{
> + clock_t value;
> + value = clock();
> + if (value == (clock_t)-1) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "the processor time used is not available "
> + "or its value cannot be represented");
> + return NULL;
> + }
> + if (info) {
> + info->implementation = "clock()";
> + info->resolution = 1.0 / (double)CLOCKS_PER_SEC;
> + info->monotonic = 1;
> + info->adjustable = 0;
> + }
> + return PyFloat_FromDouble((double)value / CLOCKS_PER_SEC);
> +}
> +#endif /* HAVE_CLOCK */
> +
> +#ifdef MS_WINDOWS
> +#define WIN32_PERF_COUNTER
> +/* Win32 has better clock replacement; we have our own version, due to Mark
> + Hammond and Tim Peters */
> +static PyObject*
> +win_perf_counter(_Py_clock_info_t *info)
> +{
> + static LONGLONG cpu_frequency = 0;
> + static LONGLONG ctrStart;
> + LARGE_INTEGER now;
> + double diff;
> +
> + if (cpu_frequency == 0) {
> + LARGE_INTEGER freq;
> + QueryPerformanceCounter(&now);
> + ctrStart = now.QuadPart;
> + if (!QueryPerformanceFrequency(&freq) || freq.QuadPart == 0) {
> + PyErr_SetFromWindowsErr(0);
> + return NULL;
> + }
> + cpu_frequency = freq.QuadPart;
> + }
> + QueryPerformanceCounter(&now);
> + diff = (double)(now.QuadPart - ctrStart);
> + if (info) {
> + info->implementation = "QueryPerformanceCounter()";
> + info->resolution = 1.0 / (double)cpu_frequency;
> + info->monotonic = 1;
> + info->adjustable = 0;
> + }
> + return PyFloat_FromDouble(diff / (double)cpu_frequency);
> +}
> +#endif /* MS_WINDOWS */
> +
> +#if defined(WIN32_PERF_COUNTER) || defined(HAVE_CLOCK)
> +#define PYCLOCK
> +static PyObject*
> +pyclock(_Py_clock_info_t *info)
> +{
> +#ifdef WIN32_PERF_COUNTER
> + return win_perf_counter(info);
> +#else
> + return floatclock(info);
> +#endif
> +}
> +
> +static PyObject *
> +time_clock(PyObject *self, PyObject *unused)
> +{
> + return pyclock(NULL);
> +}
> +
> +PyDoc_STRVAR(clock_doc,
> +"clock() -> floating point number\n\
> +\n\
> +Return the CPU time or real time since the start of the process or since\n\
> +the first call to clock(). This has as much precision as the system\n\
> +records.");
> +#endif
> +
> +#ifdef HAVE_CLOCK_GETTIME
> +static PyObject *
> +time_clock_gettime(PyObject *self, PyObject *args)
> +{
> + int ret;
> + int clk_id;
> + struct timespec tp;
> +
> + if (!PyArg_ParseTuple(args, "i:clock_gettime", &clk_id))
> + return NULL;
> +
> + ret = clock_gettime((clockid_t)clk_id, &tp);
> + if (ret != 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
> +}
> +
> +PyDoc_STRVAR(clock_gettime_doc,
> +"clock_gettime(clk_id) -> floating point number\n\
> +\n\
> +Return the time of the specified clock clk_id.");
> +#endif /* HAVE_CLOCK_GETTIME */
> +
> +#ifdef HAVE_CLOCK_SETTIME
> +static PyObject *
> +time_clock_settime(PyObject *self, PyObject *args)
> +{
> + int clk_id;
> + PyObject *obj;
> + _PyTime_t t;
> + struct timespec tp;
> + int ret;
> +
> + if (!PyArg_ParseTuple(args, "iO:clock_settime", &clk_id, &obj))
> + return NULL;
> +
> + if (_PyTime_FromSecondsObject(&t, obj, _PyTime_ROUND_FLOOR) < 0)
> + return NULL;
> +
> + if (_PyTime_AsTimespec(t, &tp) == -1)
> + return NULL;
> +
> + ret = clock_settime((clockid_t)clk_id, &tp);
> + if (ret != 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(clock_settime_doc,
> +"clock_settime(clk_id, time)\n\
> +\n\
> +Set the time of the specified clock clk_id.");
> +#endif /* HAVE_CLOCK_SETTIME */
> +
> +#ifdef HAVE_CLOCK_GETRES
> +static PyObject *
> +time_clock_getres(PyObject *self, PyObject *args)
> +{
> + int ret;
> + int clk_id;
> + struct timespec tp;
> +
> + if (!PyArg_ParseTuple(args, "i:clock_getres", &clk_id))
> + return NULL;
> +
> + ret = clock_getres((clockid_t)clk_id, &tp);
> + if (ret != 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
> +}
> +
> +PyDoc_STRVAR(clock_getres_doc,
> +"clock_getres(clk_id) -> floating point number\n\
> +\n\
> +Return the resolution (precision) of the specified clock clk_id.");
> +#endif /* HAVE_CLOCK_GETRES */
> +
> +static PyObject *
> +time_sleep(PyObject *self, PyObject *obj)
> +{
> + _PyTime_t secs;
> + if (_PyTime_FromSecondsObject(&secs, obj, _PyTime_ROUND_TIMEOUT))
> + return NULL;
> + if (secs < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "sleep length must be non-negative");
> + return NULL;
> + }
> + if (pysleep(secs) != 0)
> + return NULL;
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(sleep_doc,
> +"sleep(seconds)\n\
> +\n\
> +Delay execution for a given number of seconds. The argument may be\n\
> +a floating point number for subsecond precision.");
> +
> +static PyStructSequence_Field struct_time_type_fields[] = {
> + {"tm_year", "year, for example, 1993"},
> + {"tm_mon", "month of year, range [1, 12]"},
> + {"tm_mday", "day of month, range [1, 31]"},
> + {"tm_hour", "hours, range [0, 23]"},
> + {"tm_min", "minutes, range [0, 59]"},
> +    {"tm_sec", "seconds, range [0, 61]"},
> + {"tm_wday", "day of week, range [0, 6], Monday is 0"},
> + {"tm_yday", "day of year, range [1, 366]"},
> + {"tm_isdst", "1 if summer time is in effect, 0 if not, and -1 if unknown"},
> + {"tm_zone", "abbreviation of timezone name"},
> + {"tm_gmtoff", "offset from UTC in seconds"},
> + {0}
> +};
> +
> +static PyStructSequence_Desc struct_time_type_desc = {
> + "time.struct_time",
> + "The time value as returned by gmtime(), localtime(), and strptime(), and\n"
> + " accepted by asctime(), mktime() and strftime(). May be considered as a\n"
> + " sequence of 9 integers.\n\n"
> + " Note that several fields' values are not the same as those defined by\n"
> + " the C language standard for struct tm. For example, the value of the\n"
> + " field tm_year is the actual year, not year - 1900. See individual\n"
> + " fields' descriptions for details.",
> + struct_time_type_fields,
> + 9,
> +};
> +
> +static int initialized;
> +static PyTypeObject StructTimeType;
> +
> +
> +static PyObject *
> +tmtotuple(struct tm *p
> +#ifndef HAVE_STRUCT_TM_TM_ZONE
> + , const char *zone, time_t gmtoff
> +#endif
> +)
> +{
> + PyObject *v = PyStructSequence_New(&StructTimeType);
> + if (v == NULL)
> + return NULL;
> +
> +#define SET(i,val) PyStructSequence_SET_ITEM(v, i, PyLong_FromLong((long) val))
> +
> + SET(0, p->tm_year + 1900);
> + SET(1, p->tm_mon + 1); /* Want January == 1 */
> + SET(2, p->tm_mday);
> + SET(3, p->tm_hour);
> + SET(4, p->tm_min);
> + SET(5, p->tm_sec);
> + SET(6, (p->tm_wday + 6) % 7); /* Want Monday == 0 */
> + SET(7, p->tm_yday + 1); /* Want January, 1 == 1 */
> + SET(8, p->tm_isdst);
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + PyStructSequence_SET_ITEM(v, 9,
> + PyUnicode_DecodeLocale(p->tm_zone, "surrogateescape"));
> + SET(10, p->tm_gmtoff);
> +#else
> + PyStructSequence_SET_ITEM(v, 9,
> + PyUnicode_DecodeLocale(zone, "surrogateescape"));
> + PyStructSequence_SET_ITEM(v, 10, _PyLong_FromTime_t(gmtoff));
> +#endif /* HAVE_STRUCT_TM_TM_ZONE */
> +#undef SET
> + if (PyErr_Occurred()) {
> + Py_XDECREF(v);
> + return NULL;
> + }
> +
> + return v;
> +}
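
tmtotuple() fills a struct sequence, the named-tuple-like object declared through PyStructSequence_Desc, using the same PyStructSequence API that StructTimeType relies on. A minimal embedded-Python sketch of that API; demo.pair, pair_desc and PairType are made-up names and error handling is trimmed:

    #include <Python.h>
    #include <stdio.h>

    static PyStructSequence_Field pair_fields[] = {
        {"x", "first element"},
        {"y", "second element"},
        {0}
    };

    static PyStructSequence_Desc pair_desc = {
        "demo.pair",                       /* made-up type name */
        "A two-element struct sequence.",
        pair_fields,
        2,
    };

    static PyTypeObject PairType;          /* filled in by InitType2 */

    int main(void)
    {
        Py_Initialize();
        if (PyStructSequence_InitType2(&PairType, &pair_desc) < 0)
            return 1;

        PyObject *v = PyStructSequence_New(&PairType);
        /* error checks on the item constructors omitted for brevity */
        PyStructSequence_SET_ITEM(v, 0, PyLong_FromLong(3));
        PyStructSequence_SET_ITEM(v, 1, PyLong_FromLong(7));

        PyObject_Print(v, stdout, 0);      /* demo.pair(x=3, y=7) */
        printf("\n");

        Py_DECREF(v);
        Py_Finalize();
        return 0;
    }
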
> +
> +/* Parse arg tuple that can contain an optional float-or-None value;
> + format needs to be "|O:name".
> + Returns non-zero on success (parallels PyArg_ParseTuple).
> +*/
> +static int
> +parse_time_t_args(PyObject *args, const char *format, time_t *pwhen)
> +{
> + PyObject *ot = NULL;
> + time_t whent;
> +
> + if (!PyArg_ParseTuple(args, format, &ot))
> + return 0;
> + if (ot == NULL || ot == Py_None) {
> + whent = time(NULL);
> + }
> + else {
> + if (_PyTime_ObjectToTime_t(ot, &whent, _PyTime_ROUND_FLOOR) == -1)
> + return 0;
> + }
> + *pwhen = whent;
> + return 1;
> +}
> +
> +static PyObject *
> +time_gmtime(PyObject *self, PyObject *args)
> +{
> + time_t when;
> + struct tm buf;
> +
> + if (!parse_time_t_args(args, "|O:gmtime", &when))
> + return NULL;
> +
> + errno = 0;
> + if (_PyTime_gmtime(when, &buf) != 0)
> + return NULL;
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + return tmtotuple(&buf);
> +#else
> + return tmtotuple(&buf, "UTC", 0);
> +#endif
> +}
> +
> +#ifndef HAVE_TIMEGM
> +static time_t
> +timegm(struct tm *p)
> +{
> +    /* XXX: the following implementation will not work for tm_year < 1970,
> +       but it is likely that platforms that don't have timegm do not support
> +       negative timestamps anyway. */
> + return p->tm_sec + p->tm_min*60 + p->tm_hour*3600 + p->tm_yday*86400 +
> + (p->tm_year-70)*31536000 + ((p->tm_year-69)/4)*86400 -
> + ((p->tm_year-1)/100)*86400 + ((p->tm_year+299)/400)*86400;
> +}
> +#endif
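
The fallback timegm() above is pure day counting: tm_yday plus whole years since 1970, with the usual 4/100/400-year leap corrections. A quick standalone check of the same arithmetic against a known timestamp (2000-03-01 00:00:00 UTC is 951868800); simple_timegm() is an illustrative name and the struct is assumed to be normalized the way gmtime() fills it:

    #include <stdio.h>
    #include <time.h>

    /* Same day-counting arithmetic as the fallback timegm() above. */
    static long long simple_timegm(const struct tm *p)
    {
        return p->tm_sec + p->tm_min * 60LL + p->tm_hour * 3600LL
             + p->tm_yday * 86400LL
             + (p->tm_year - 70) * 31536000LL
             + ((p->tm_year - 69) / 4) * 86400LL
             - ((p->tm_year - 1) / 100) * 86400LL
             + ((p->tm_year + 299) / 400) * 86400LL;
    }

    int main(void)
    {
        struct tm t = {0};
        t.tm_year = 100;    /* 2000 */
        t.tm_yday = 60;     /* Jan (31) + Feb (29, leap year) => 1 March */
        printf("%lld\n", simple_timegm(&t));   /* 951868800 */
        return 0;
    }
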
> +
> +PyDoc_STRVAR(gmtime_doc,
> +"gmtime([seconds]) -> (tm_year, tm_mon, tm_mday, tm_hour, tm_min,\n\
> + tm_sec, tm_wday, tm_yday, tm_isdst)\n\
> +\n\
> +Convert seconds since the Epoch to a time tuple expressing UTC (a.k.a.\n\
> +GMT). When 'seconds' is not passed in, convert the current time instead.\n\
> +\n\
> +If the platform supports the tm_gmtoff and tm_zone, they are available as\n\
> +attributes only.");
> +
> +static PyObject *
> +time_localtime(PyObject *self, PyObject *args)
> +{
> + time_t when;
> + struct tm buf;
> +
> + if (!parse_time_t_args(args, "|O:localtime", &when))
> + return NULL;
> + if (_PyTime_localtime(when, &buf) != 0)
> + return NULL;
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + return tmtotuple(&buf);
> +#else
> + {
> + struct tm local = buf;
> + char zone[100];
> + time_t gmtoff;
> + strftime(zone, sizeof(zone), "%Z", &buf);
> + gmtoff = timegm(&buf) - when;
> + return tmtotuple(&local, zone, gmtoff);
> + }
> +#endif
> +}
> +
> +PyDoc_STRVAR(localtime_doc,
> +"localtime([seconds]) -> (tm_year,tm_mon,tm_mday,tm_hour,tm_min,\n\
> + tm_sec,tm_wday,tm_yday,tm_isdst)\n\
> +\n\
> +Convert seconds since the Epoch to a time tuple expressing local time.\n\
> +When 'seconds' is not passed in, convert the current time instead.");
> +
> +/* Convert 9-item tuple to tm structure. Return 1 on success, set
> + * an exception and return 0 on error.
> + */
> +static int
> +gettmarg(PyObject *args, struct tm *p)
> +{
> + int y;
> +
> + memset((void *) p, '\0', sizeof(struct tm));
> +
> + if (!PyTuple_Check(args)) {
> + PyErr_SetString(PyExc_TypeError,
> + "Tuple or struct_time argument required");
> + return 0;
> + }
> +
> + if (!PyArg_ParseTuple(args, "iiiiiiiii",
> + &y, &p->tm_mon, &p->tm_mday,
> + &p->tm_hour, &p->tm_min, &p->tm_sec,
> + &p->tm_wday, &p->tm_yday, &p->tm_isdst))
> + return 0;
> +
> + if (y < INT_MIN + 1900) {
> + PyErr_SetString(PyExc_OverflowError, "year out of range");
> + return 0;
> + }
> +
> + p->tm_year = y - 1900;
> + p->tm_mon--;
> + p->tm_wday = (p->tm_wday + 1) % 7;
> + p->tm_yday--;
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + if (Py_TYPE(args) == &StructTimeType) {
> + PyObject *item;
> + item = PyTuple_GET_ITEM(args, 9);
> + p->tm_zone = item == Py_None ? NULL : PyUnicode_AsUTF8(item);
> + item = PyTuple_GET_ITEM(args, 10);
> + p->tm_gmtoff = item == Py_None ? 0 : PyLong_AsLong(item);
> + if (PyErr_Occurred())
> + return 0;
> + }
> +#endif /* HAVE_STRUCT_TM_TM_ZONE */
> + return 1;
> +}
> +
> +/* Check values of the struct tm fields before it is passed to strftime() and
> + * asctime(). Return 1 if all values are valid, otherwise set an exception
> + * and return 0.
> + */
> +static int
> +checktm(struct tm* buf)
> +{
> +    /* Checks added to make sure strftime() and asctime() do not crash Python by
> + indexing blindly into some array for a textual representation
> + by some bad index (fixes bug #897625 and #6608).
> +
> + Also support values of zero from Python code for arguments in which
> + that is out of range by forcing that value to the lowest value that
> + is valid (fixed bug #1520914).
> +
> + Valid ranges based on what is allowed in struct tm:
> +
> + - tm_year: [0, max(int)] (1)
> + - tm_mon: [0, 11] (2)
> + - tm_mday: [1, 31]
> + - tm_hour: [0, 23]
> + - tm_min: [0, 59]
> + - tm_sec: [0, 60]
> + - tm_wday: [0, 6] (1)
> + - tm_yday: [0, 365] (2)
> + - tm_isdst: [-max(int), max(int)]
> +
> + (1) gettmarg() handles bounds-checking.
> + (2) Python's acceptable range is one greater than the range in C,
> + thus need to check against automatic decrement by gettmarg().
> + */
> + if (buf->tm_mon == -1)
> + buf->tm_mon = 0;
> + else if (buf->tm_mon < 0 || buf->tm_mon > 11) {
> + PyErr_SetString(PyExc_ValueError, "month out of range");
> + return 0;
> + }
> + if (buf->tm_mday == 0)
> + buf->tm_mday = 1;
> + else if (buf->tm_mday < 0 || buf->tm_mday > 31) {
> + PyErr_SetString(PyExc_ValueError, "day of month out of range");
> + return 0;
> + }
> + if (buf->tm_hour < 0 || buf->tm_hour > 23) {
> + PyErr_SetString(PyExc_ValueError, "hour out of range");
> + return 0;
> + }
> + if (buf->tm_min < 0 || buf->tm_min > 59) {
> + PyErr_SetString(PyExc_ValueError, "minute out of range");
> + return 0;
> + }
> + if (buf->tm_sec < 0 || buf->tm_sec > 61) {
> + PyErr_SetString(PyExc_ValueError, "seconds out of range");
> + return 0;
> + }
> + /* tm_wday does not need checking of its upper-bound since taking
> + ``% 7`` in gettmarg() automatically restricts the range. */
> + if (buf->tm_wday < 0) {
> + PyErr_SetString(PyExc_ValueError, "day of week out of range");
> + return 0;
> + }
> + if (buf->tm_yday == -1)
> + buf->tm_yday = 0;
> + else if (buf->tm_yday < 0 || buf->tm_yday > 365) {
> + PyErr_SetString(PyExc_ValueError, "day of year out of range");
> + return 0;
> + }
> + return 1;
> +}
> +
> +#ifdef MS_WINDOWS
> +    /* wcsftime() doesn't format time zones correctly, see issue #10653 */
> +# undef HAVE_WCSFTIME
> +#endif
> +#define STRFTIME_FORMAT_CODES \
> +"Commonly used format codes:\n\
> +\n\
> +%Y Year with century as a decimal number.\n\
> +%m Month as a decimal number [01,12].\n\
> +%d Day of the month as a decimal number [01,31].\n\
> +%H Hour (24-hour clock) as a decimal number [00,23].\n\
> +%M Minute as a decimal number [00,59].\n\
> +%S Second as a decimal number [00,61].\n\
> +%z Time zone offset from UTC.\n\
> +%a Locale's abbreviated weekday name.\n\
> +%A Locale's full weekday name.\n\
> +%b Locale's abbreviated month name.\n\
> +%B Locale's full month name.\n\
> +%c Locale's appropriate date and time representation.\n\
> +%I Hour (12-hour clock) as a decimal number [01,12].\n\
> +%p Locale's equivalent of either AM or PM.\n\
> +\n\
> +Other codes may be available on your platform. See documentation for\n\
> +the C library strftime function.\n"
> +
> +#ifdef HAVE_STRFTIME
> +#ifdef HAVE_WCSFTIME
> +#define time_char wchar_t
> +#define format_time wcsftime
> +#define time_strlen wcslen
> +#else
> +#define time_char char
> +#define format_time strftime
> +#define time_strlen strlen
> +#endif
> +
> +static PyObject *
> +time_strftime(PyObject *self, PyObject *args)
> +{
> + PyObject *tup = NULL;
> + struct tm buf;
> + const time_char *fmt;
> +#ifdef HAVE_WCSFTIME
> + wchar_t *format;
> +#else
> + PyObject *format;
> +#endif
> + PyObject *format_arg;
> + size_t fmtlen, buflen;
> + time_char *outbuf = NULL;
> + size_t i;
> + PyObject *ret = NULL;
> +
> + memset((void *) &buf, '\0', sizeof(buf));
> +
> + /* Will always expect a unicode string to be passed as format.
> + Given that there's no str type anymore in py3k this seems safe.
> + */
> + if (!PyArg_ParseTuple(args, "U|O:strftime", &format_arg, &tup))
> + return NULL;
> +
> + if (tup == NULL) {
> + time_t tt = time(NULL);
> + if (_PyTime_localtime(tt, &buf) != 0)
> + return NULL;
> + }
> + else if (!gettmarg(tup, &buf) || !checktm(&buf))
> + return NULL;
> +
> +#if defined(_MSC_VER) || defined(sun) || defined(_AIX)
> + if (buf.tm_year + 1900 < 1 || 9999 < buf.tm_year + 1900) {
> + PyErr_SetString(PyExc_ValueError,
> + "strftime() requires year in [1; 9999]");
> + return NULL;
> + }
> +#endif
> +
> + /* Normalize tm_isdst just in case someone foolishly implements %Z
> + based on the assumption that tm_isdst falls within the range of
> + [-1, 1] */
> + if (buf.tm_isdst < -1)
> + buf.tm_isdst = -1;
> + else if (buf.tm_isdst > 1)
> + buf.tm_isdst = 1;
> +
> +#ifdef HAVE_WCSFTIME
> + format = _PyUnicode_AsWideCharString(format_arg);
> + if (format == NULL)
> + return NULL;
> + fmt = format;
> +#else
> + /* Convert the unicode string to an ascii one */
> + format = PyUnicode_EncodeLocale(format_arg, "surrogateescape");
> + if (format == NULL)
> + return NULL;
> + fmt = PyBytes_AS_STRING(format);
> +#endif
> +
> +#if defined(MS_WINDOWS) && !defined(HAVE_WCSFTIME)
> + /* check that the format string contains only valid directives */
> + for (outbuf = strchr(fmt, '%');
> + outbuf != NULL;
> + outbuf = strchr(outbuf+2, '%'))
> + {
> + if (outbuf[1] == '#')
> + ++outbuf; /* not documented by python, */
> + if (outbuf[1] == '\0')
> + break;
> + if ((outbuf[1] == 'y') && buf.tm_year < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "format %y requires year >= 1900 on Windows");
> + Py_DECREF(format);
> + return NULL;
> + }
> + }
> +#elif (defined(_AIX) || defined(sun)) && defined(HAVE_WCSFTIME)
> + for (outbuf = wcschr(fmt, '%');
> + outbuf != NULL;
> + outbuf = wcschr(outbuf+2, '%'))
> + {
> + if (outbuf[1] == L'\0')
> + break;
> + /* Issue #19634: On AIX, wcsftime("y", (1899, 1, 1, 0, 0, 0, 0, 0, 0))
> + returns "0/" instead of "99" */
> + if (outbuf[1] == L'y' && buf.tm_year < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "format %y requires year >= 1900 on AIX");
> + PyMem_Free(format);
> + return NULL;
> + }
> + }
> +#endif
> +
> + fmtlen = time_strlen(fmt);
> +
> + /* I hate these functions that presume you know how big the output
> + * will be ahead of time...
> + */
> + for (i = 1024; ; i += i) {
> + outbuf = (time_char *)PyMem_Malloc(i*sizeof(time_char));
> + if (outbuf == NULL) {
> + PyErr_NoMemory();
> + break;
> + }
> +#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
> + errno = 0;
> +#endif
> + _Py_BEGIN_SUPPRESS_IPH
> + buflen = format_time(outbuf, i, fmt, &buf);
> + _Py_END_SUPPRESS_IPH
> +#if defined _MSC_VER && _MSC_VER >= 1400 && defined(__STDC_SECURE_LIB__)
> + /* VisualStudio .NET 2005 does this properly */
> + if (buflen == 0 && errno == EINVAL) {
> + PyErr_SetString(PyExc_ValueError, "Invalid format string");
> + PyMem_Free(outbuf);
> + break;
> + }
> +#endif
> + if (buflen > 0 || i >= 256 * fmtlen) {
> + /* If the buffer is 256 times as long as the format,
> + it's probably not failing for lack of room!
> + More likely, the format yields an empty result,
> + e.g. an empty format, or %Z when the timezone
> + is unknown. */
> +#ifdef HAVE_WCSFTIME
> + ret = PyUnicode_FromWideChar(outbuf, buflen);
> +#else
> + ret = PyUnicode_DecodeLocaleAndSize(outbuf, buflen,
> + "surrogateescape");
> +#endif
> + PyMem_Free(outbuf);
> + break;
> + }
> + PyMem_Free(outbuf);
> + }
> +#ifdef HAVE_WCSFTIME
> + PyMem_Free(format);
> +#else
> + Py_DECREF(format);
> +#endif
> + return ret;
> +}
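
The retry loop above keeps doubling the output buffer until format_time() reports a non-zero length, treating a buffer 256 times the format length as evidence that the result is genuinely empty rather than truncated. The same pattern with plain strftime(), sketched standalone; format_grow() is an illustrative name, not part of the module:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Format tm into a heap buffer that is doubled until strftime() succeeds,
       mirroring the retry loop in time_strftime() above. */
    static char *format_grow(const char *fmt, const struct tm *tm)
    {
        size_t fmtlen = strlen(fmt);

        for (size_t size = 1024; ; size += size) {
            char *buf = malloc(size);
            if (buf == NULL)
                return NULL;
            size_t len = strftime(buf, size, fmt, tm);
            if (len > 0 || size >= 256 * fmtlen) {
                if (len == 0)
                    buf[0] = '\0';   /* result is genuinely empty, not truncated */
                return buf;
            }
            free(buf);               /* too small: retry with twice the space */
        }
    }

    int main(void)
    {
        time_t now = time(NULL);
        struct tm *tm = gmtime(&now);
        if (tm == NULL)
            return 1;

        char *s = format_grow("%Y-%m-%d %H:%M:%S", tm);
        if (s != NULL) {
            puts(s);
            free(s);
        }
        return 0;
    }
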
> +
> +#undef time_char
> +#undef format_time
> +PyDoc_STRVAR(strftime_doc,
> +"strftime(format[, tuple]) -> string\n\
> +\n\
> +Convert a time tuple to a string according to a format specification.\n\
> +See the library reference manual for formatting codes. When the time tuple\n\
> +is not present, current time as returned by localtime() is used.\n\
> +\n" STRFTIME_FORMAT_CODES);
> +#endif /* HAVE_STRFTIME */
> +
> +static PyObject *
> +time_strptime(PyObject *self, PyObject *args)
> +{
> + PyObject *strptime_module = PyImport_ImportModuleNoBlock("_strptime");
> + PyObject *strptime_result;
> + _Py_IDENTIFIER(_strptime_time);
> +
> + if (!strptime_module)
> + return NULL;
> + strptime_result = _PyObject_CallMethodId(strptime_module,
> + &PyId__strptime_time, "O", args);
> + Py_DECREF(strptime_module);
> + return strptime_result;
> +}
> +
> +
> +PyDoc_STRVAR(strptime_doc,
> +"strptime(string, format) -> struct_time\n\
> +\n\
> +Parse a string to a time tuple according to a format specification.\n\
> +See the library reference manual for formatting codes (same as\n\
> +strftime()).\n\
> +\n" STRFTIME_FORMAT_CODES);
> +
> +static PyObject *
> +_asctime(struct tm *timeptr)
> +{
> + /* Inspired by Open Group reference implementation available at
> + * http://pubs.opengroup.org/onlinepubs/009695399/functions/asctime.html */
> + static const char wday_name[7][4] = {
> + "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"
> + };
> + static const char mon_name[12][4] = {
> + "Jan", "Feb", "Mar", "Apr", "May", "Jun",
> + "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"
> + };
> + return PyUnicode_FromFormat(
> + "%s %s%3d %.2d:%.2d:%.2d %d",
> + wday_name[timeptr->tm_wday],
> + mon_name[timeptr->tm_mon],
> + timeptr->tm_mday, timeptr->tm_hour,
> + timeptr->tm_min, timeptr->tm_sec,
> + 1900 + timeptr->tm_year);
> +}
> +
> +static PyObject *
> +time_asctime(PyObject *self, PyObject *args)
> +{
> + PyObject *tup = NULL;
> + struct tm buf;
> +
> + if (!PyArg_UnpackTuple(args, "asctime", 0, 1, &tup))
> + return NULL;
> + if (tup == NULL) {
> + time_t tt = time(NULL);
> + if (_PyTime_localtime(tt, &buf) != 0)
> + return NULL;
> +
> + } else if (!gettmarg(tup, &buf) || !checktm(&buf))
> + return NULL;
> + return _asctime(&buf);
> +}
> +
> +PyDoc_STRVAR(asctime_doc,
> +"asctime([tuple]) -> string\n\
> +\n\
> +Convert a time tuple to a string, e.g. 'Sat Jun 06 16:26:11 1998'.\n\
> +When the time tuple is not present, current time as returned by localtime()\n\
> +is used.");
> +
> +static PyObject *
> +time_ctime(PyObject *self, PyObject *args)
> +{
> + time_t tt;
> + struct tm buf;
> + if (!parse_time_t_args(args, "|O:ctime", &tt))
> + return NULL;
> + if (_PyTime_localtime(tt, &buf) != 0)
> + return NULL;
> + return _asctime(&buf);
> +}
> +
> +PyDoc_STRVAR(ctime_doc,
> +"ctime(seconds) -> string\n\
> +\n\
> +Convert a time in seconds since the Epoch to a string in local time.\n\
> +This is equivalent to asctime(localtime(seconds)). When the time tuple is\n\
> +not present, current time as returned by localtime() is used.");
> +
> +#ifdef HAVE_MKTIME
> +static PyObject *
> +time_mktime(PyObject *self, PyObject *tup)
> +{
> + struct tm buf;
> + time_t tt;
> + if (!gettmarg(tup, &buf))
> + return NULL;
> +#ifdef _AIX
> + /* year < 1902 or year > 2037 */
> + if (buf.tm_year < 2 || buf.tm_year > 137) {
> + /* Issue #19748: On AIX, mktime() doesn't report overflow error for
> + * timestamp < -2^31 or timestamp > 2**31-1. */
> + PyErr_SetString(PyExc_OverflowError,
> + "mktime argument out of range");
> + return NULL;
> + }
> +#else
> + buf.tm_wday = -1; /* sentinel; original value ignored */
> +#endif
> + tt = mktime(&buf);
> + /* Return value of -1 does not necessarily mean an error, but tm_wday
> + * cannot remain set to -1 if mktime succeeded. */
> + if (tt == (time_t)(-1)
> +#ifndef _AIX
> + /* Return value of -1 does not necessarily mean an error, but
> + * tm_wday cannot remain set to -1 if mktime succeeded. */
> + && buf.tm_wday == -1
> +#else
> + /* on AIX, tm_wday is always set, even on error */
> +#endif
> + )
> + {
> + PyErr_SetString(PyExc_OverflowError,
> + "mktime argument out of range");
> + return NULL;
> + }
> + return PyFloat_FromDouble((double)tt);
> +}
> +
> +PyDoc_STRVAR(mktime_doc,
> +"mktime(tuple) -> floating point number\n\
> +\n\
> +Convert a time tuple in local time to seconds since the Epoch.\n\
> +Note that mktime(gmtime(0)) will not generally return zero for most\n\
> +time zones; instead the returned value will either be equal to that\n\
> +of the timezone or altzone attributes on the time module.");
> +#endif /* HAVE_MKTIME */
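A small C sketch of the note in mktime_doc: mktime() interprets the broken-down time as local time, so feeding it the UTC epoch (gmtime(0)) generally yields the local UTC offset rather than zero (assuming a conventional POSIX epoch-based time_t):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t zero = 0;
        struct tm tm_utc = *gmtime(&zero);   /* broken-down UTC epoch */

        tm_utc.tm_isdst = -1;                /* let mktime decide about DST */
        time_t t = mktime(&tm_utc);          /* interpreted as *local* time */

        /* In a zone west of UTC this prints a positive offset in seconds. */
        printf("mktime(gmtime(0)) = %ld\n", (long)t);
        return 0;
    }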
> +
> +#ifdef HAVE_WORKING_TZSET
> +static int init_timezone(PyObject *module);
> +
> +static PyObject *
> +time_tzset(PyObject *self, PyObject *unused)
> +{
> + PyObject* m;
> +
> + m = PyImport_ImportModuleNoBlock("time");
> + if (m == NULL) {
> + return NULL;
> + }
> +
> + tzset();
> +
> + /* Reset timezone, altzone, daylight and tzname */
> + if (init_timezone(m) < 0) {
> + return NULL;
> + }
> + Py_DECREF(m);
> + if (PyErr_Occurred())
> + return NULL;
> +
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +PyDoc_STRVAR(tzset_doc,
> +"tzset()\n\
> +\n\
> +Initialize, or reinitialize, the local timezone to the value stored in\n\
> +os.environ['TZ']. The TZ environment variable should be specified in\n\
> +standard Unix timezone format as documented in the tzset man page\n\
> +(eg. 'US/Eastern', 'Europe/Amsterdam'). Unknown timezones will silently\n\
> +fall back to UTC. If the TZ environment variable is not set, the local\n\
> +timezone is set to the system's best guess of wallclock time.\n\
> +Changing the TZ environment variable without calling tzset *may* change\n\
> +the local timezone used by methods such as localtime, but this behaviour\n\
> +should not be relied on.");
> +#endif /* HAVE_WORKING_TZSET */
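A minimal sketch of the behaviour described in tzset_doc, assuming a POSIX environment where setenv() and tzset() are available (the zone names are the same examples the docstring uses):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        char buf[64];

        setenv("TZ", "Europe/Amsterdam", 1);  /* pick a zone for illustration */
        tzset();                              /* re-read TZ into timezone/daylight/tzname */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&now));
        printf("Amsterdam: %s\n", buf);

        setenv("TZ", "US/Eastern", 1);
        tzset();
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&now));
        printf("Eastern:   %s\n", buf);
        return 0;
    }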
> +
> +static PyObject *
> +pymonotonic(_Py_clock_info_t *info)
> +{
> + _PyTime_t t;
> + double d;
> + if (_PyTime_GetMonotonicClockWithInfo(&t, info) < 0) {
> + assert(info != NULL);
> + return NULL;
> + }
> + d = _PyTime_AsSecondsDouble(t);
> + return PyFloat_FromDouble(d);
> +}
> +
> +static PyObject *
> +time_monotonic(PyObject *self, PyObject *unused)
> +{
> + return pymonotonic(NULL);
> +}
> +
> +PyDoc_STRVAR(monotonic_doc,
> +"monotonic() -> float\n\
> +\n\
> +Monotonic clock, cannot go backward.");
> +
> +static PyObject*
> +perf_counter(_Py_clock_info_t *info)
> +{
> +#ifdef WIN32_PERF_COUNTER
> + return win_perf_counter(info);
> +#else
> + return pymonotonic(info);
> +#endif
> +}
> +
> +static PyObject *
> +time_perf_counter(PyObject *self, PyObject *unused)
> +{
> + return perf_counter(NULL);
> +}
> +
> +PyDoc_STRVAR(perf_counter_doc,
> +"perf_counter() -> float\n\
> +\n\
> +Performance counter for benchmarking.");
> +
> +static PyObject*
> +py_process_time(_Py_clock_info_t *info)
> +{
> +#if defined(MS_WINDOWS)
> + HANDLE process;
> + FILETIME creation_time, exit_time, kernel_time, user_time;
> + ULARGE_INTEGER large;
> + double total;
> + BOOL ok;
> +
> + process = GetCurrentProcess();
> + ok = GetProcessTimes(process, &creation_time, &exit_time, &kernel_time, &user_time);
> + if (!ok)
> + return PyErr_SetFromWindowsErr(0);
> +
> + large.u.LowPart = kernel_time.dwLowDateTime;
> + large.u.HighPart = kernel_time.dwHighDateTime;
> + total = (double)large.QuadPart;
> + large.u.LowPart = user_time.dwLowDateTime;
> + large.u.HighPart = user_time.dwHighDateTime;
> + total += (double)large.QuadPart;
> + if (info) {
> + info->implementation = "GetProcessTimes()";
> + info->resolution = 1e-7;
> + info->monotonic = 1;
> + info->adjustable = 0;
> + }
> + return PyFloat_FromDouble(total * 1e-7);
> +#else
> +
> +#if defined(HAVE_SYS_RESOURCE_H)
> + struct rusage ru;
> +#endif
> +#ifdef HAVE_TIMES
> + struct tms t;
> + static long ticks_per_second = -1;
> +#endif
> +
> +#if defined(HAVE_CLOCK_GETTIME) \
> + && (defined(CLOCK_PROCESS_CPUTIME_ID) || defined(CLOCK_PROF))
> + struct timespec tp;
> +#ifdef CLOCK_PROF
> + const clockid_t clk_id = CLOCK_PROF;
> + const char *function = "clock_gettime(CLOCK_PROF)";
> +#else
> + const clockid_t clk_id = CLOCK_PROCESS_CPUTIME_ID;
> + const char *function = "clock_gettime(CLOCK_PROCESS_CPUTIME_ID)";
> +#endif
> +
> + if (clock_gettime(clk_id, &tp) == 0) {
> + if (info) {
> + struct timespec res;
> + info->implementation = function;
> + info->monotonic = 1;
> + info->adjustable = 0;
> + if (clock_getres(clk_id, &res) == 0)
> + info->resolution = res.tv_sec + res.tv_nsec * 1e-9;
> + else
> + info->resolution = 1e-9;
> + }
> + return PyFloat_FromDouble(tp.tv_sec + tp.tv_nsec * 1e-9);
> + }
> +#endif
> +
> +#ifndef UEFI_C_SOURCE
> +#if defined(HAVE_SYS_RESOURCE_H)
> + if (getrusage(RUSAGE_SELF, &ru) == 0) {
> + double total;
> + total = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec * 1e-6;
> + total += ru.ru_stime.tv_sec + ru.ru_stime.tv_usec * 1e-6;
> + if (info) {
> + info->implementation = "getrusage(RUSAGE_SELF)";
> + info->monotonic = 1;
> + info->adjustable = 0;
> + info->resolution = 1e-6;
> + }
> + return PyFloat_FromDouble(total);
> + }
> +#endif
> +#endif
> +
> +#ifdef HAVE_TIMES
> + if (times(&t) != (clock_t)-1) {
> + double total;
> +
> + if (ticks_per_second == -1) {
> +#if defined(HAVE_SYSCONF) && defined(_SC_CLK_TCK)
> + ticks_per_second = sysconf(_SC_CLK_TCK);
> + if (ticks_per_second < 1)
> + ticks_per_second = -1;
> +#elif defined(HZ)
> + ticks_per_second = HZ;
> +#else
> + ticks_per_second = 60; /* magic fallback value; may be bogus */
> +#endif
> + }
> +
> + if (ticks_per_second != -1) {
> + total = (double)t.tms_utime / ticks_per_second;
> + total += (double)t.tms_stime / ticks_per_second;
> + if (info) {
> + info->implementation = "times()";
> + info->monotonic = 1;
> + info->adjustable = 0;
> + info->resolution = 1.0 / ticks_per_second;
> + }
> + return PyFloat_FromDouble(total);
> + }
> + }
> +#endif
> +
> + /* Currently, Python 3 requires clock() to build: see issue #22624 */
> + return floatclock(info);
> +#endif
> +}
> +
> +static PyObject *
> +time_process_time(PyObject *self, PyObject *unused)
> +{
> + return py_process_time(NULL);
> +}
> +
> +PyDoc_STRVAR(process_time_doc,
> +"process_time() -> float\n\
> +\n\
> +Process time for profiling: sum of the kernel and user-space CPU time.");
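For reference, the per-process CPU clock that the non-Windows path above prefers can be exercised directly; a sketch assuming clock_gettime() and CLOCK_PROCESS_CPUTIME_ID are available on the host:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec start, end;
        volatile unsigned long sink = 0;

        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start);
        for (unsigned long i = 0; i < 50000000UL; i++)
            sink += i;                      /* burn some user-space CPU time */
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &end);

        double cpu = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) * 1e-9;
        printf("process CPU time: %.6f s (sink=%lu)\n", cpu, sink);
        return 0;
    }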
> +
> +
> +static PyObject *
> +time_get_clock_info(PyObject *self, PyObject *args)
> +{
> + char *name;
> + _Py_clock_info_t info;
> + PyObject *obj = NULL, *dict, *ns;
> +
> + if (!PyArg_ParseTuple(args, "s:get_clock_info", &name))
> + return NULL;
> +
> +#ifdef Py_DEBUG
> + info.implementation = NULL;
> + info.monotonic = -1;
> + info.adjustable = -1;
> + info.resolution = -1.0;
> +#else
> + info.implementation = "";
> + info.monotonic = 0;
> + info.adjustable = 0;
> + info.resolution = 1.0;
> +#endif
> +
> + if (strcmp(name, "time") == 0)
> + obj = floattime(&info);
> +#ifdef PYCLOCK
> + else if (strcmp(name, "clock") == 0)
> + obj = pyclock(&info);
> +#endif
> + else if (strcmp(name, "monotonic") == 0)
> + obj = pymonotonic(&info);
> + else if (strcmp(name, "perf_counter") == 0)
> + obj = perf_counter(&info);
> + else if (strcmp(name, "process_time") == 0)
> + obj = py_process_time(&info);
> + else {
> + PyErr_SetString(PyExc_ValueError, "unknown clock");
> + return NULL;
> + }
> + if (obj == NULL)
> + return NULL;
> + Py_DECREF(obj);
> +
> + dict = PyDict_New();
> + if (dict == NULL)
> + return NULL;
> +
> + assert(info.implementation != NULL);
> + obj = PyUnicode_FromString(info.implementation);
> + if (obj == NULL)
> + goto error;
> + if (PyDict_SetItemString(dict, "implementation", obj) == -1)
> + goto error;
> + Py_CLEAR(obj);
> +
> + assert(info.monotonic != -1);
> + obj = PyBool_FromLong(info.monotonic);
> + if (obj == NULL)
> + goto error;
> + if (PyDict_SetItemString(dict, "monotonic", obj) == -1)
> + goto error;
> + Py_CLEAR(obj);
> +
> + assert(info.adjustable != -1);
> + obj = PyBool_FromLong(info.adjustable);
> + if (obj == NULL)
> + goto error;
> + if (PyDict_SetItemString(dict, "adjustable", obj) == -1)
> + goto error;
> + Py_CLEAR(obj);
> +
> + assert(info.resolution > 0.0);
> + assert(info.resolution <= 1.0);
> + obj = PyFloat_FromDouble(info.resolution);
> + if (obj == NULL)
> + goto error;
> + if (PyDict_SetItemString(dict, "resolution", obj) == -1)
> + goto error;
> + Py_CLEAR(obj);
> +
> + ns = _PyNamespace_New(dict);
> + Py_DECREF(dict);
> + return ns;
> +
> +error:
> + Py_DECREF(dict);
> + Py_XDECREF(obj);
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(get_clock_info_doc,
> +"get_clock_info(name: str) -> dict\n\
> +\n\
> +Get information of the specified clock.");
> +
> +static void
> +get_zone(char *zone, int n, struct tm *p)
> +{
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + strncpy(zone, p->tm_zone ? p->tm_zone : " ", n);
> +#else
> + tzset();
> + strftime(zone, n, "%Z", p);
> +#endif
> +}
> +
> +static time_t
> +get_gmtoff(time_t t, struct tm *p)
> +{
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + return p->tm_gmtoff;
> +#else
> + return timegm(p) - t;
> +#endif
> +}
> +
> +static int
> +init_timezone(PyObject *m)
> +{
> + assert(!PyErr_Occurred());
> +
> + /* This code moved from PyInit_time wholesale to allow calling it from
> + time_tzset. In the future, some parts of it can be moved back
> + (for platforms that don't HAVE_WORKING_TZSET, when we know what they
> + are), and the extraneous calls to tzset(3) should be removed.
> + I haven't done this yet, as I want to change this code as little as
> + possible when introducing the time.tzset and time.tzsetwall methods.
> + This should simply be a matter of doing the following once,
> + at the top of this function and removing the call to tzset() from
> + time_tzset():
> +
> + #ifdef HAVE_TZSET
> + tzset()
> + #endif
> +
> + And I'm lazy and hate C so nyer.
> + */
> +#if defined(HAVE_TZNAME) && !defined(__GLIBC__) && !defined(__CYGWIN__)
> + PyObject *otz0, *otz1;
> + tzset();
> + PyModule_AddIntConstant(m, "timezone", timezone);
> +#ifdef HAVE_ALTZONE
> + PyModule_AddIntConstant(m, "altzone", altzone);
> +#else
> + PyModule_AddIntConstant(m, "altzone", timezone-3600);
> +#endif
> + PyModule_AddIntConstant(m, "daylight", daylight);
> + otz0 = PyUnicode_DecodeLocale(tzname[0], "surrogateescape");
> + if (otz0 == NULL) {
> + return -1;
> + }
> + otz1 = PyUnicode_DecodeLocale(tzname[1], "surrogateescape");
> + if (otz1 == NULL) {
> + Py_DECREF(otz0);
> + return -1;
> + }
> + PyObject *tzname_obj = Py_BuildValue("(NN)", otz0, otz1);
> + if (tzname_obj == NULL) {
> + return -1;
> + }
> + PyModule_AddObject(m, "tzname", tzname_obj);
> +#else /* !HAVE_TZNAME || __GLIBC__ || __CYGWIN__*/
> +#ifdef HAVE_STRUCT_TM_TM_ZONE
> + {
> +#define YEAR ((time_t)((365 * 24 + 6) * 3600))
> + time_t t;
> + struct tm p;
> + time_t janzone_t, julyzone_t;
> + char janname[10], julyname[10];
> + t = (time((time_t *)0) / YEAR) * YEAR;
> + _PyTime_localtime(t, &p);
> + get_zone(janname, 9, &p);
> + janzone_t = -get_gmtoff(t, &p);
> + janname[9] = '\0';
> + t += YEAR/2;
> + _PyTime_localtime(t, &p);
> + get_zone(julyname, 9, &p);
> + julyzone_t = -get_gmtoff(t, &p);
> + julyname[9] = '\0';
> +
> +#ifndef UEFI_C_SOURCE
> + /* Sanity check; this does not validate the timezone itself.
> + In practice, the offset should fall roughly in the range -12 hours .. +14 hours. */
> +#define MAX_TIMEZONE (48 * 3600)
> + if (janzone_t < -MAX_TIMEZONE || janzone_t > MAX_TIMEZONE
> + || julyzone_t < -MAX_TIMEZONE || julyzone_t > MAX_TIMEZONE)
> + {
> + PyErr_SetString(PyExc_RuntimeError, "invalid GMT offset");
> + return -1;
> + }
> +#endif
> + int janzone = (int)janzone_t;
> + int julyzone = (int)julyzone_t;
> +
> + if( janzone < julyzone ) {
> + /* DST is reversed in the southern hemisphere */
> + PyModule_AddIntConstant(m, "timezone", julyzone);
> + PyModule_AddIntConstant(m, "altzone", janzone);
> + PyModule_AddIntConstant(m, "daylight",
> + janzone != julyzone);
> + PyModule_AddObject(m, "tzname",
> + Py_BuildValue("(zz)",
> + julyname, janname));
> + } else {
> + PyModule_AddIntConstant(m, "timezone", janzone);
> + PyModule_AddIntConstant(m, "altzone", julyzone);
> + PyModule_AddIntConstant(m, "daylight",
> + janzone != julyzone);
> + PyModule_AddObject(m, "tzname",
> + Py_BuildValue("(zz)",
> + janname, julyname));
> + }
> + }
> +#else /*HAVE_STRUCT_TM_TM_ZONE */
> +#ifdef __CYGWIN__
> + tzset();
> + PyModule_AddIntConstant(m, "timezone", _timezone);
> + PyModule_AddIntConstant(m, "altzone", _timezone-3600);
> + PyModule_AddIntConstant(m, "daylight", _daylight);
> + PyModule_AddObject(m, "tzname",
> + Py_BuildValue("(zz)", _tzname[0], _tzname[1]));
> +#endif /* __CYGWIN__ */
> +#endif
> +#endif /* !HAVE_TZNAME || __GLIBC__ || __CYGWIN__*/
> +
> + if (PyErr_Occurred()) {
> + return -1;
> + }
> + return 0;
> +}
> +
> +
> +static PyMethodDef time_methods[] = {
> + {"time", time_time, METH_NOARGS, time_doc},
> +#ifdef PYCLOCK
> + {"clock", time_clock, METH_NOARGS, clock_doc},
> +#endif
> +#ifdef HAVE_CLOCK_GETTIME
> + {"clock_gettime", time_clock_gettime, METH_VARARGS, clock_gettime_doc},
> +#endif
> +#ifdef HAVE_CLOCK_SETTIME
> + {"clock_settime", time_clock_settime, METH_VARARGS, clock_settime_doc},
> +#endif
> +#ifdef HAVE_CLOCK_GETRES
> + {"clock_getres", time_clock_getres, METH_VARARGS, clock_getres_doc},
> +#endif
> + {"sleep", time_sleep, METH_O, sleep_doc},
> + {"gmtime", time_gmtime, METH_VARARGS, gmtime_doc},
> + {"localtime", time_localtime, METH_VARARGS, localtime_doc},
> + {"asctime", time_asctime, METH_VARARGS, asctime_doc},
> + {"ctime", time_ctime, METH_VARARGS, ctime_doc},
> +#ifdef HAVE_MKTIME
> + {"mktime", time_mktime, METH_O, mktime_doc},
> +#endif
> +#ifdef HAVE_STRFTIME
> + {"strftime", time_strftime, METH_VARARGS, strftime_doc},
> +#endif
> + {"strptime", time_strptime, METH_VARARGS, strptime_doc},
> +#ifdef HAVE_WORKING_TZSET
> + {"tzset", time_tzset, METH_NOARGS, tzset_doc},
> +#endif
> + {"monotonic", time_monotonic, METH_NOARGS, monotonic_doc},
> + {"process_time", time_process_time, METH_NOARGS, process_time_doc},
> + {"perf_counter", time_perf_counter, METH_NOARGS, perf_counter_doc},
> + {"get_clock_info", time_get_clock_info, METH_VARARGS, get_clock_info_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +
> +PyDoc_STRVAR(module_doc,
> +"This module provides various functions to manipulate time values.\n\
> +\n\
> +There are two standard representations of time. One is the number\n\
> +of seconds since the Epoch, in UTC (a.k.a. GMT). It may be an integer\n\
> +or a floating point number (to represent fractions of seconds).\n\
> +The Epoch is system-defined; on Unix, it is generally January 1st, 1970.\n\
> +The actual value can be retrieved by calling gmtime(0).\n\
> +\n\
> +The other representation is a tuple of 9 integers giving local time.\n\
> +The tuple items are:\n\
> + year (including century, e.g. 1998)\n\
> + month (1-12)\n\
> + day (1-31)\n\
> + hours (0-23)\n\
> + minutes (0-59)\n\
> + seconds (0-59)\n\
> + weekday (0-6, Monday is 0)\n\
> + Julian day (day in the year, 1-366)\n\
> + DST (Daylight Savings Time) flag (-1, 0 or 1)\n\
> +If the DST flag is 0, the time is given in the regular time zone;\n\
> +if it is 1, the time is given in the DST time zone;\n\
> +if it is -1, mktime() should guess based on the date and time.\n");
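A short sketch of how the 9-tuple fields described above map onto C's struct tm (note the offsets: tm_year is years since 1900, tm_mon is 0-11, and tm_wday counts from Sunday == 0 whereas the tuple's weekday counts from Monday == 0):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t now = time(NULL);
        struct tm lt = *localtime(&now);

        /* Same ordering as the struct_time fields documented above. */
        printf("year    : %d\n", lt.tm_year + 1900);      /* tuple: full year */
        printf("month   : %d\n", lt.tm_mon + 1);          /* tuple: 1-12 */
        printf("day     : %d\n", lt.tm_mday);             /* tuple: 1-31 */
        printf("hour    : %d\n", lt.tm_hour);
        printf("minute  : %d\n", lt.tm_min);
        printf("second  : %d\n", lt.tm_sec);
        printf("weekday : %d\n", (lt.tm_wday + 6) % 7);   /* tuple: Monday == 0 */
        printf("yearday : %d\n", lt.tm_yday + 1);         /* tuple: 1-366 */
        printf("isdst   : %d\n", lt.tm_isdst);            /* tuple: -1, 0 or 1 */
        return 0;
    }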
> +
> +
> +
> +static struct PyModuleDef timemodule = {
> + PyModuleDef_HEAD_INIT,
> + "time",
> + module_doc,
> + -1,
> + time_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit_time(void)
> +{
> + PyObject *m;
> + m = PyModule_Create(&timemodule);
> + if (m == NULL)
> + return NULL;
> +
> + /* Set, or reset, module variables like time.timezone */
> + if (init_timezone(m) < 0) {
> + return NULL;
> + }
> +
> +#if defined(HAVE_CLOCK_GETTIME) || defined(HAVE_CLOCK_SETTIME) || defined(HAVE_CLOCK_GETRES)
> +
> +#ifdef CLOCK_REALTIME
> + PyModule_AddIntMacro(m, CLOCK_REALTIME);
> +#endif
> +#ifdef CLOCK_MONOTONIC
> + PyModule_AddIntMacro(m, CLOCK_MONOTONIC);
> +#endif
> +#ifdef CLOCK_MONOTONIC_RAW
> + PyModule_AddIntMacro(m, CLOCK_MONOTONIC_RAW);
> +#endif
> +#ifdef CLOCK_HIGHRES
> + PyModule_AddIntMacro(m, CLOCK_HIGHRES);
> +#endif
> +#ifdef CLOCK_PROCESS_CPUTIME_ID
> + PyModule_AddIntMacro(m, CLOCK_PROCESS_CPUTIME_ID);
> +#endif
> +#ifdef CLOCK_THREAD_CPUTIME_ID
> + PyModule_AddIntMacro(m, CLOCK_THREAD_CPUTIME_ID);
> +#endif
> +
> +#endif /* defined(HAVE_CLOCK_GETTIME) || defined(HAVE_CLOCK_SETTIME) || defined(HAVE_CLOCK_GETRES) */
> +
> + if (!initialized) {
> + if (PyStructSequence_InitType2(&StructTimeType,
> + &struct_time_type_desc) < 0)
> + return NULL;
> + }
> + Py_INCREF(&StructTimeType);
> + PyModule_AddIntConstant(m, "_STRUCT_TM_ITEMS", 11);
> + PyModule_AddObject(m, "struct_time", (PyObject*) &StructTimeType);
> + initialized = 1;
> +
> + if (PyErr_Occurred()) {
> + return NULL;
> + }
> + return m;
> +}
> +
> +static PyObject*
> +floattime(_Py_clock_info_t *info)
> +{
> + _PyTime_t t;
> + double d;
> + if (_PyTime_GetSystemClockWithInfo(&t, info) < 0) {
> + assert(info != NULL);
> + return NULL;
> + }
> + d = _PyTime_AsSecondsDouble(t);
> + return PyFloat_FromDouble(d);
> +}
> +
> +
> +/* Implement pysleep() for various platforms.
> + When interrupted (or when another error occurs), return -1 and
> + set an exception; else return 0. */
> +
> +static int
> +pysleep(_PyTime_t secs)
> +{
> + _PyTime_t deadline, monotonic;
> +#ifndef MS_WINDOWS
> + struct timeval timeout;
> + int err = 0;
> +#else
> + _PyTime_t millisecs;
> + unsigned long ul_millis;
> + DWORD rc;
> + HANDLE hInterruptEvent;
> +#endif
> +
> + deadline = _PyTime_GetMonotonicClock() + secs;
> +
> + do {
> +#ifndef MS_WINDOWS
> + if (_PyTime_AsTimeval(secs, &timeout, _PyTime_ROUND_CEILING) < 0)
> + return -1;
> +
> + Py_BEGIN_ALLOW_THREADS
> + err = select(0, (fd_set *)0, (fd_set *)0, (fd_set *)0, &timeout);
> + Py_END_ALLOW_THREADS
> +
> + if (err == 0)
> + break;
> +
> + if (errno != EINTR) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +#else
> + millisecs = _PyTime_AsMilliseconds(secs, _PyTime_ROUND_CEILING);
> + if (millisecs > (double)ULONG_MAX) {
> + PyErr_SetString(PyExc_OverflowError,
> + "sleep length is too large");
> + return -1;
> + }
> +
> + /* Allow sleep(0) to maintain win32 semantics, and as decreed
> + * by Guido, only the main thread can be interrupted.
> + */
> + ul_millis = (unsigned long)millisecs;
> + if (ul_millis == 0 || !_PyOS_IsMainThread()) {
> + Py_BEGIN_ALLOW_THREADS
> + Sleep(ul_millis);
> + Py_END_ALLOW_THREADS
> + break;
> + }
> +
> + hInterruptEvent = _PyOS_SigintEvent();
> + ResetEvent(hInterruptEvent);
> +
> + Py_BEGIN_ALLOW_THREADS
> + rc = WaitForSingleObjectEx(hInterruptEvent, ul_millis, FALSE);
> + Py_END_ALLOW_THREADS
> +
> + if (rc != WAIT_OBJECT_0)
> + break;
> +#endif
> +
> + /* sleep was interrupted by SIGINT */
> + if (PyErr_CheckSignals())
> + return -1;
> +
> + monotonic = _PyTime_GetMonotonicClock();
> + secs = deadline - monotonic;
> + if (secs < 0)
> + break;
> + /* retry with the recomputed delay */
> + } while (1);
> +
> + return 0;
> +}
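The loop above recomputes the remaining delay from a monotonic deadline after every interruption. The same pattern in plain POSIX C, using clock_gettime(CLOCK_MONOTONIC) and nanosleep() as an illustrative stand-in for the select()/WaitForSingleObjectEx waits (a sketch, not the patch's implementation):

    #include <errno.h>
    #include <stdio.h>
    #include <time.h>

    /* Sleep for the requested number of seconds, restarting after EINTR
     * against a monotonic deadline so signals cannot extend the total wait. */
    static int sleep_retry(double seconds)
    {
        struct timespec now, deadline, remaining;

        clock_gettime(CLOCK_MONOTONIC, &now);
        deadline.tv_sec  = now.tv_sec + (time_t)seconds;
        deadline.tv_nsec = now.tv_nsec + (long)((seconds - (time_t)seconds) * 1e9);
        if (deadline.tv_nsec >= 1000000000L) {
            deadline.tv_sec++;
            deadline.tv_nsec -= 1000000000L;
        }

        for (;;) {
            clock_gettime(CLOCK_MONOTONIC, &now);
            remaining.tv_sec  = deadline.tv_sec - now.tv_sec;
            remaining.tv_nsec = deadline.tv_nsec - now.tv_nsec;
            if (remaining.tv_nsec < 0) {
                remaining.tv_sec--;
                remaining.tv_nsec += 1000000000L;
            }
            if (remaining.tv_sec < 0)
                return 0;                     /* deadline already reached */
            if (nanosleep(&remaining, NULL) == 0)
                return 0;                     /* slept the full interval */
            if (errno != EINTR)
                return -1;                    /* real error */
            /* interrupted: loop and retry with the recomputed delay */
        }
    }

    int main(void)
    {
        return sleep_retry(0.25);
    }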
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
> new file mode 100644
> index 00000000..23fe2617
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
> @@ -0,0 +1,218 @@
> +/* gzguts.h -- zlib internal header definitions for gz* operations
> + * Copyright (C) 2004, 2005, 2010, 2011, 2012, 2013, 2016 Mark Adler
> + * For conditions of distribution and use, see copyright notice in zlib.h
> + */
> +
> +#ifdef _LARGEFILE64_SOURCE
> +# ifndef _LARGEFILE_SOURCE
> +# define _LARGEFILE_SOURCE 1
> +# endif
> +# ifdef _FILE_OFFSET_BITS
> +# undef _FILE_OFFSET_BITS
> +# endif
> +#endif
> +
> +#ifdef HAVE_HIDDEN
> +# define ZLIB_INTERNAL __attribute__((visibility ("hidden")))
> +#else
> +# define ZLIB_INTERNAL
> +#endif
> +
> +#include <stdio.h>
> +#include "zlib.h"
> +#ifdef STDC
> +# include <string.h>
> +# include <stdlib.h>
> +# include <limits.h>
> +#endif
> +
> +#ifndef _POSIX_SOURCE
> +# define _POSIX_SOURCE
> +#endif
> +#include <fcntl.h>
> +
> +#ifdef _WIN32
> +# include <stddef.h>
> +#endif
> +
> +#if (defined(__TURBOC__) || defined(_MSC_VER) || defined(_WIN32)) && !defined(UEFI_C_SOURCE)
> +# include <io.h>
> +#endif
> +
> +#if defined(_WIN32) || defined(__CYGWIN__)
> +# define WIDECHAR
> +#endif
> +
> +#ifdef WINAPI_FAMILY
> +# define open _open
> +# define read _read
> +# define write _write
> +# define close _close
> +#endif
> +
> +#ifdef NO_DEFLATE /* for compatibility with old definition */
> +# define NO_GZCOMPRESS
> +#endif
> +
> +#if defined(STDC99) || (defined(__TURBOC__) && __TURBOC__ >= 0x550)
> +# ifndef HAVE_VSNPRINTF
> +# define HAVE_VSNPRINTF
> +# endif
> +#endif
> +
> +#if defined(__CYGWIN__)
> +# ifndef HAVE_VSNPRINTF
> +# define HAVE_VSNPRINTF
> +# endif
> +#endif
> +
> +#if defined(MSDOS) && defined(__BORLANDC__) && (BORLANDC > 0x410)
> +# ifndef HAVE_VSNPRINTF
> +# define HAVE_VSNPRINTF
> +# endif
> +#endif
> +
> +#ifndef HAVE_VSNPRINTF
> +# ifdef MSDOS
> +/* vsnprintf may exist on some MS-DOS compilers (DJGPP?),
> + but for now we just assume it doesn't. */
> +# define NO_vsnprintf
> +# endif
> +# ifdef __TURBOC__
> +# define NO_vsnprintf
> +# endif
> +# ifdef WIN32
> +/* In Win32, vsnprintf is available as the "non-ANSI" _vsnprintf. */
> +# if !defined(vsnprintf) && !defined(NO_vsnprintf)
> +# if !defined(_MSC_VER) || ( defined(_MSC_VER) && _MSC_VER < 1500 )
> +# define vsnprintf _vsnprintf
> +# endif
> +# endif
> +# endif
> +# ifdef __SASC
> +# define NO_vsnprintf
> +# endif
> +# ifdef VMS
> +# define NO_vsnprintf
> +# endif
> +# ifdef __OS400__
> +# define NO_vsnprintf
> +# endif
> +# ifdef __MVS__
> +# define NO_vsnprintf
> +# endif
> +#endif
> +
> +/* unlike snprintf (which is required in C99), _snprintf does not guarantee
> + null termination of the result -- however this is only used in gzlib.c where
> + the result is assured to fit in the space provided */
> +#if defined(_MSC_VER) && _MSC_VER < 1900
> +# define snprintf _snprintf
> +#endif
> +
> +#ifndef local
> +# define local static
> +#endif
> +/* since "static" is used to mean two completely different things in C, we
> + define "local" for the non-static meaning of "static", for readability
> + (compile with -Dlocal if your debugger can't find static symbols) */
> +
> +/* gz* functions always use library allocation functions */
> +#ifndef STDC
> + extern voidp malloc OF((uInt size));
> + extern void free OF((voidpf ptr));
> +#endif
> +
> +/* get errno and strerror definition */
> +#if defined UNDER_CE
> +# include <windows.h>
> +# define zstrerror() gz_strwinerror((DWORD)GetLastError())
> +#else
> +# ifndef NO_STRERROR
> +# include <errno.h>
> +# define zstrerror() strerror(errno)
> +# else
> +# define zstrerror() "stdio error (consult errno)"
> +# endif
> +#endif
> +
> +/* provide prototypes for these when building zlib without LFS */
> +#if !defined(_LARGEFILE64_SOURCE) || _LFS64_LARGEFILE-0 == 0
> + ZEXTERN gzFile ZEXPORT gzopen64 OF((const char *, const char *));
> + ZEXTERN z_off64_t ZEXPORT gzseek64 OF((gzFile, z_off64_t, int));
> + ZEXTERN z_off64_t ZEXPORT gztell64 OF((gzFile));
> + ZEXTERN z_off64_t ZEXPORT gzoffset64 OF((gzFile));
> +#endif
> +
> +/* default memLevel */
> +#if MAX_MEM_LEVEL >= 8
> +# define DEF_MEM_LEVEL 8
> +#else
> +# define DEF_MEM_LEVEL MAX_MEM_LEVEL
> +#endif
> +
> +/* default i/o buffer size -- double this for output when reading (this and
> + twice this must be able to fit in an unsigned type) */
> +#define GZBUFSIZE 8192
> +
> +/* gzip modes, also provide a little integrity check on the passed structure */
> +#define GZ_NONE 0
> +#define GZ_READ 7247
> +#define GZ_WRITE 31153
> +#define GZ_APPEND 1 /* mode set to GZ_WRITE after the file is opened */
> +
> +/* values for gz_state how */
> +#define LOOK 0 /* look for a gzip header */
> +#define COPY 1 /* copy input directly */
> +#define GZIP 2 /* decompress a gzip stream */
> +
> +/* internal gzip file state data structure */
> +typedef struct {
> + /* exposed contents for gzgetc() macro */
> + struct gzFile_s x; /* "x" for exposed */
> + /* x.have: number of bytes available at x.next */
> + /* x.next: next output data to deliver or write */
> + /* x.pos: current position in uncompressed data */
> + /* used for both reading and writing */
> + int mode; /* see gzip modes above */
> + int fd; /* file descriptor */
> + char *path; /* path or fd for error messages */
> + unsigned size; /* buffer size, zero if not allocated yet */
> + unsigned want; /* requested buffer size, default is GZBUFSIZE */
> + unsigned char *in; /* input buffer (double-sized when writing) */
> + unsigned char *out; /* output buffer (double-sized when reading) */
> + int direct; /* 0 if processing gzip, 1 if transparent */
> + /* just for reading */
> + int how; /* 0: get header, 1: copy, 2: decompress */
> + z_off64_t start; /* where the gzip data started, for rewinding */
> + int eof; /* true if end of input file reached */
> + int past; /* true if read requested past end */
> + /* just for writing */
> + int level; /* compression level */
> + int strategy; /* compression strategy */
> + /* seek request */
> + z_off64_t skip; /* amount to skip (already rewound if backwards) */
> + int seek; /* true if seek request pending */
> + /* error information */
> + int err; /* error code */
> + char *msg; /* error message */
> + /* zlib inflate or deflate stream */
> + z_stream strm; /* stream structure in-place (not a pointer) */
> +} gz_state;
> +typedef gz_state FAR *gz_statep;
> +
> +/* shared functions */
> +void ZLIB_INTERNAL gz_error OF((gz_statep, int, const char *));
> +#if defined UNDER_CE
> +char ZLIB_INTERNAL *gz_strwinerror OF((DWORD error));
> +#endif
> +
> +/* GT_OFF(x), where x is an unsigned value, is true if x > maximum z_off64_t
> + value -- needed when comparing unsigned to z_off64_t, which is signed
> + (possible z_off64_t types off_t, off64_t, and long are all signed) */
> +#ifdef INT_MAX
> +# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > INT_MAX)
> +#else
> +unsigned ZLIB_INTERNAL gz_intmax OF((void));
> +# define GT_OFF(x) (sizeof(int) == sizeof(z_off64_t) && (x) > gz_intmax())
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
> new file mode 100644
> index 00000000..9eabd28c
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
> @@ -0,0 +1,4472 @@
> +/* Dictionary object implementation using a hash table */
> +
> +/* The distribution includes a separate file, Objects/dictnotes.txt,
> + describing explorations into dictionary design and optimization.
> + It covers typical dictionary use patterns, the parameters for
> + tuning dictionaries, and several ideas for possible optimizations.
> +*/
> +
> +/* PyDictKeysObject
> +
> +This implements the dictionary's hashtable.
> +
> +As of Python 3.6, this is compact and ordered. Basic idea is described here.
> +https://morepypy.blogspot.com/2015/01/faster-more-memory-efficient-and-more.html
> +
> +layout:
> +
> ++---------------+
> +| dk_refcnt |
> +| dk_size |
> +| dk_lookup |
> +| dk_usable |
> +| dk_nentries |
> ++---------------+
> +| dk_indices |
> +| |
> ++---------------+
> +| dk_entries |
> +| |
> ++---------------+
> +
> +dk_indices is the actual hashtable. Each slot holds an index into dk_entries,
> +or DKIX_EMPTY(-1) or DKIX_DUMMY(-2).
> +The number of slots is dk_size; the width of each index depends on dk_size:
> +
> +* int8 for dk_size <= 128
> +* int16 for 256 <= dk_size <= 2**15
> +* int32 for 2**16 <= dk_size <= 2**31
> +* int64 for 2**32 <= dk_size
> +
> +dk_entries is an array of PyDictKeyEntry. Its size is USABLE_FRACTION(dk_size).
> +DK_ENTRIES(dk) can be used to get a pointer to the entries.
> +
> +NOTE: Since negative values are reserved for DKIX_EMPTY and DKIX_DUMMY, the
> +dk_indices entries are signed integers, so int16 is already needed once
> +dk_size == 256.
> +*/
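A sketch of the sizing rule described in the comment above; the real code expresses the same choice through the DK_IXSIZE macro further down (on 32-bit builds the 8-byte case is never reached):

    #include <stdint.h>
    #include <stdio.h>

    /* Width in bytes of one dk_indices slot for a given dk_size, mirroring
     * the int8/int16/int32/int64 table in the comment above. */
    static size_t index_width(size_t dk_size)
    {
        if (dk_size <= 0xff)
            return 1;
        if (dk_size <= 0xffff)
            return 2;
        if (dk_size <= 0xffffffffUL)
            return 4;
        return 8;
    }

    int main(void)
    {
        size_t sizes[] = { 8, 128, 256, 65536, (size_t)1 << 20 };
        for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
            printf("dk_size=%zu -> %zu-byte indices\n",
                   sizes[i], index_width(sizes[i]));
        return 0;
    }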
> +
> +
> +/*
> +The DictObject can be in one of two forms.
> +
> +Either:
> + A combined table:
> + ma_values == NULL, dk_refcnt == 1.
> + Values are stored in the me_value field of the PyDictKeysObject.
> +Or:
> + A split table:
> + ma_values != NULL, dk_refcnt >= 1
> + Values are stored in the ma_values array.
> + Only string (unicode) keys are allowed.
> + All dicts sharing same key must have same insertion order.
> +
> +There are four kinds of slots in the table (slot is index, and
> +DK_ENTRIES(keys)[index] if index >= 0):
> +
> +1. Unused. index == DKIX_EMPTY
> + Does not hold an active (key, value) pair now and never did. Unused can
> + transition to Active upon key insertion. This is each slot's initial state.
> +
> +2. Active. index >= 0, me_key != NULL and me_value != NULL
> + Holds an active (key, value) pair. Active can transition to Dummy or
> + Pending upon key deletion (for combined and split tables respectively).
> + This is the only case in which me_value != NULL.
> +
> +3. Dummy. index == DKIX_DUMMY (combined only)
> + Previously held an active (key, value) pair, but that was deleted and an
> + active pair has not yet overwritten the slot. Dummy can transition to
> + Active upon key insertion. Dummy slots cannot be made Unused again
> + else the probe sequence in case of collision would have no way to know
> + they were once active.
> +
> +4. Pending. index >= 0, key != NULL, and value == NULL (split only)
> + Not yet inserted in split-table.
> +*/
> +
> +/*
> +Preserving insertion order
> +
> +For a combined table this is simple: since dk_entries is mostly append-only,
> +iterating dk_entries yields insertion order.
> +
> +One exception is .popitem(). It removes the last item in dk_entries and
> +decrements dk_nentries to achieve amortized O(1). Because DKIX_DUMMY markers
> +remain in dk_indices, we can't increment dk_usable even though dk_nentries is
> +decremented.
> +
> +In a split table, inserting into a pending entry is allowed only for
> +dk_entries[ix] where ix == mp->ma_used. Inserting at any other index, or
> +deleting an item, converts the dict to a combined table.
> +*/
> +
> +/* PyDict_MINSIZE is the starting size for any new dict.
> + * 8 allows dicts with no more than 5 active entries; experiments suggested
> + * this suffices for the majority of dicts (consisting mostly of usually-small
> + * dicts created to pass keyword arguments).
> + * Making this 8, rather than 4 reduces the number of resizes for most
> + * dictionaries, without any significant extra memory use.
> + */
> +#define PyDict_MINSIZE 8
> +
> +#include "Python.h"
> +#include "dict-common.h"
> +#include "stringlib/eq.h" /* to get unicode_eq() */
> +
> +/*[clinic input]
> +class dict "PyDictObject *" "&PyDict_Type"
> +[clinic start generated code]*/
> +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=f157a5a0ce9589d6]*/
> +
> +
> +/*
> +To ensure the lookup algorithm terminates, there must be at least one Unused
> +slot (NULL key) in the table.
> +To avoid slowing down lookups on a near-full table, we resize the table when
> +it's USABLE_FRACTION (currently two-thirds) full.
> +*/
> +
> +#define PERTURB_SHIFT 5
> +
> +/*
> +Major subtleties ahead: Most hash schemes depend on having a "good" hash
> +function, in the sense of simulating randomness. Python doesn't: its most
> +important hash functions (for ints) are very regular in common
> +cases:
> +
> + >>>[hash(i) for i in range(4)]
> + [0, 1, 2, 3]
> +
> +This isn't necessarily bad! To the contrary, in a table of size 2**i, taking
> +the low-order i bits as the initial table index is extremely fast, and there
> +are no collisions at all for dicts indexed by a contiguous range of ints. So
> +this gives better-than-random behavior in common cases, and that's very
> +desirable.
> +
> +OTOH, when collisions occur, the tendency to fill contiguous slices of the
> +hash table makes a good collision resolution strategy crucial. Taking only
> +the last i bits of the hash code is also vulnerable: for example, consider
> +the list [i << 16 for i in range(20000)] as a set of keys. Since ints are
> +their own hash codes, and this fits in a dict of size 2**15, the last 15 bits
> +of every hash code are all 0: they *all* map to the same table index.
> +
> +But catering to unusual cases should not slow the usual ones, so we just take
> +the last i bits anyway. It's up to collision resolution to do the rest. If
> +we *usually* find the key we're looking for on the first try (and, it turns
> +out, we usually do -- the table load factor is kept under 2/3, so the odds
> +are solidly in our favor), then it makes best sense to keep the initial index
> +computation dirt cheap.
> +
> +The first half of collision resolution is to visit table indices via this
> +recurrence:
> +
> + j = ((5*j) + 1) mod 2**i
> +
> +For any initial j in range(2**i), repeating that 2**i times generates each
> +int in range(2**i) exactly once (see any text on random-number generation for
> +proof). By itself, this doesn't help much: like linear probing (setting
> +j += 1, or j -= 1, on each loop trip), it scans the table entries in a fixed
> +order. This would be bad, except that's not the only thing we do, and it's
> +actually *good* in the common cases where hash keys are consecutive. In an
> +example that's really too small to make this entirely clear, for a table of
> +size 2**3 the order of indices is:
> +
> + 0 -> 1 -> 6 -> 7 -> 4 -> 5 -> 2 -> 3 -> 0 [and here it's repeating]
> +
> +If two things come in at index 5, the first place we look after is index 2,
> +not 6, so if another comes in at index 6 the collision at 5 didn't hurt it.
> +Linear probing is deadly in this case because there the fixed probe order
> +is the *same* as the order consecutive keys are likely to arrive. But it's
> +extremely unlikely hash codes will follow a 5*j+1 recurrence by accident,
> +and certain that consecutive hash codes do not.
> +
> +The other half of the strategy is to get the other bits of the hash code
> +into play. This is done by initializing a (unsigned) vrbl "perturb" to the
> +full hash code, and changing the recurrence to:
> +
> + perturb >>= PERTURB_SHIFT;
> + j = (5*j) + 1 + perturb;
> + use j % 2**i as the next table index;
> +
> +Now the probe sequence depends (eventually) on every bit in the hash code,
> +and the pseudo-scrambling property of recurring on 5*j+1 is more valuable,
> +because it quickly magnifies small differences in the bits that didn't affect
> +the initial index. Note that because perturb is unsigned, if the recurrence
> +is executed often enough perturb eventually becomes and remains 0. At that
> +point (very rarely reached) the recurrence is on (just) 5*j+1 again, and
> +that's certain to find an empty slot eventually (since it generates every int
> +in range(2**i), and we make sure there's always at least one empty slot).
> +
> +Selecting a good value for PERTURB_SHIFT is a balancing act. You want it
> +small so that the high bits of the hash code continue to affect the probe
> +sequence across iterations; but you want it large so that in really bad cases
> +the high-order hash bits have an effect on early iterations. 5 was "the
> +best" in minimizing total collisions across experiments Tim Peters ran (on
> +both normal and pathological cases), but 4 and 6 weren't significantly worse.
> +
> +Historical: Reimer Behrends contributed the idea of using a polynomial-based
> +approach, using repeated multiplication by x in GF(2**n) where an irreducible
> +polynomial for each table size was chosen such that x was a primitive root.
> +Christian Tismer later extended that to use division by x instead, as an
> +efficient way to get the high bits of the hash code into play. This scheme
> +also gave excellent collision statistics, but was more expensive: two
> +if-tests were required inside the loop; computing "the next" index took about
> +the same number of operations but without as much potential parallelism
> +(e.g., computing 5*j can go on at the same time as computing 1+perturb in the
> +above, and then shifting perturb can be done while the table index is being
> +masked); and the PyDictObject struct required a member to hold the table's
> +polynomial. In Tim's experiments the current scheme ran faster, produced
> +equally good collision statistics, needed less code & used less memory.
> +
> +*/
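For reference, a small sketch that prints the probe order the comment walks through for a table of size 2**3: first the bare 5*j+1 recurrence, then with the perturb term mixed in (the example hash value is arbitrary):

    #include <stdio.h>

    int main(void)
    {
        const size_t mask = 8 - 1;            /* table of size 2**3 */
        size_t j = 0;

        /* Bare recurrence: visits 0 1 6 7 4 5 2 3 and then repeats. */
        printf("plain  :");
        for (int n = 0; n < 8; n++) {
            printf(" %zu", j);
            j = (5 * j + 1) & mask;
        }
        printf("\n");

        /* With perturbation, the higher hash bits steer the early probes. */
        size_t hash = 0xDEADBEEF;             /* arbitrary example hash */
        size_t perturb = hash;
        j = hash & mask;
        printf("perturb:");
        for (int n = 0; n < 8; n++) {
            printf(" %zu", j);
            perturb >>= 5;                    /* PERTURB_SHIFT */
            j = (5 * j + 1 + perturb) & mask;
        }
        printf("\n");
        return 0;
    }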
> +
> +/* forward declarations */
> +static Py_ssize_t lookdict(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr,
> + Py_ssize_t *hashpos);
> +static Py_ssize_t lookdict_unicode(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr,
> + Py_ssize_t *hashpos);
> +static Py_ssize_t
> +lookdict_unicode_nodummy(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr,
> + Py_ssize_t *hashpos);
> +static Py_ssize_t lookdict_split(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr,
> + Py_ssize_t *hashpos);
> +
> +static int dictresize(PyDictObject *mp, Py_ssize_t minused);
> +
> +static PyObject* dict_iter(PyDictObject *dict);
> +
> +/*Global counter used to set ma_version_tag field of dictionary.
> + * It is incremented each time that a dictionary is created and each
> + * time that a dictionary is modified. */
> +static uint64_t pydict_global_version = 0;
> +
> +#define DICT_NEXT_VERSION() (++pydict_global_version)
> +
> +/* Dictionary reuse scheme to save calls to malloc and free */
> +#ifndef PyDict_MAXFREELIST
> +#define PyDict_MAXFREELIST 80
> +#endif
> +static PyDictObject *free_list[PyDict_MAXFREELIST];
> +static int numfree = 0;
> +static PyDictKeysObject *keys_free_list[PyDict_MAXFREELIST];
> +static int numfreekeys = 0;
> +
> +#include "clinic/dictobject.c.h"
> +
> +int
> +PyDict_ClearFreeList(void)
> +{
> + PyDictObject *op;
> + int ret = numfree + numfreekeys;
> + while (numfree) {
> + op = free_list[--numfree];
> + assert(PyDict_CheckExact(op));
> + PyObject_GC_Del(op);
> + }
> + while (numfreekeys) {
> + PyObject_FREE(keys_free_list[--numfreekeys]);
> + }
> + return ret;
> +}
> +
> +/* Print summary info about the state of the optimized allocator */
> +void
> +_PyDict_DebugMallocStats(FILE *out)
> +{
> + _PyDebugAllocatorStats(out,
> + "free PyDictObject", numfree, sizeof(PyDictObject));
> +}
> +
> +
> +void
> +PyDict_Fini(void)
> +{
> + PyDict_ClearFreeList();
> +}
> +
> +#define DK_SIZE(dk) ((dk)->dk_size)
> +#if SIZEOF_VOID_P > 4
> +#define DK_IXSIZE(dk) \
> + (DK_SIZE(dk) <= 0xff ? \
> + 1 : DK_SIZE(dk) <= 0xffff ? \
> + 2 : DK_SIZE(dk) <= 0xffffffff ? \
> + 4 : sizeof(int64_t))
> +#else
> +#define DK_IXSIZE(dk) \
> + (DK_SIZE(dk) <= 0xff ? \
> + 1 : DK_SIZE(dk) <= 0xffff ? \
> + 2 : sizeof(int32_t))
> +#endif
> +#define DK_ENTRIES(dk) \
> + ((PyDictKeyEntry*)(&((int8_t*)((dk)->dk_indices))[DK_SIZE(dk) * DK_IXSIZE(dk)]))
> +
> +#define DK_DEBUG_INCREF _Py_INC_REFTOTAL _Py_REF_DEBUG_COMMA
> +#define DK_DEBUG_DECREF _Py_DEC_REFTOTAL _Py_REF_DEBUG_COMMA
> +
> +#define DK_INCREF(dk) (DK_DEBUG_INCREF ++(dk)->dk_refcnt)
> +#define DK_DECREF(dk) if (DK_DEBUG_DECREF (--(dk)->dk_refcnt) == 0) free_keys_object(dk)
> +#define DK_MASK(dk) (((dk)->dk_size)-1)
> +#define IS_POWER_OF_2(x) (((x) & (x-1)) == 0)
> +
> +/* lookup indices. returns DKIX_EMPTY, DKIX_DUMMY, or ix >=0 */
> +static Py_ssize_t
> +dk_get_index(PyDictKeysObject *keys, Py_ssize_t i)
> +{
> + Py_ssize_t s = DK_SIZE(keys);
> + Py_ssize_t ix;
> +
> + if (s <= 0xff) {
> + int8_t *indices = (int8_t*)(keys->dk_indices);
> + ix = indices[i];
> + }
> + else if (s <= 0xffff) {
> + int16_t *indices = (int16_t*)(keys->dk_indices);
> + ix = indices[i];
> + }
> +#if SIZEOF_VOID_P > 4
> + else if (s > 0xffffffff) {
> + int64_t *indices = (int64_t*)(keys->dk_indices);
> + ix = indices[i];
> + }
> +#endif
> + else {
> + int32_t *indices = (int32_t*)(keys->dk_indices);
> + ix = indices[i];
> + }
> + assert(ix >= DKIX_DUMMY);
> + return ix;
> +}
> +
> +/* write to indices. */
> +static void
> +dk_set_index(PyDictKeysObject *keys, Py_ssize_t i, Py_ssize_t ix)
> +{
> + Py_ssize_t s = DK_SIZE(keys);
> +
> + assert(ix >= DKIX_DUMMY);
> +
> + if (s <= 0xff) {
> + int8_t *indices = (int8_t*)(keys->dk_indices);
> + assert(ix <= 0x7f);
> + indices[i] = (char)ix;
> + }
> + else if (s <= 0xffff) {
> + int16_t *indices = (int16_t*)(keys->dk_indices);
> + assert(ix <= 0x7fff);
> + indices[i] = (int16_t)ix;
> + }
> +#if SIZEOF_VOID_P > 4
> + else if (s > 0xffffffff) {
> + int64_t *indices = (int64_t*)(keys->dk_indices);
> + indices[i] = ix;
> + }
> +#endif
> + else {
> + int32_t *indices = (int32_t*)(keys->dk_indices);
> + assert(ix <= 0x7fffffff);
> + indices[i] = (int32_t)ix;
> + }
> +}
> +
> +
> +/* USABLE_FRACTION is the maximum dictionary load.
> + * Increasing this ratio makes dictionaries more dense resulting in more
> + * collisions. Decreasing it improves sparseness at the expense of spreading
> + * indices over more cache lines and at the cost of total memory consumed.
> + *
> + * USABLE_FRACTION must obey the following:
> + * (0 < USABLE_FRACTION(n) < n) for all n >= 2
> + *
> + * USABLE_FRACTION should be quick to calculate.
> + * Fractions around 1/2 to 2/3 seem to work well in practice.
> + */
> +#define USABLE_FRACTION(n) (((n) << 1)/3)
> +
> +/* ESTIMATE_SIZE is reverse function of USABLE_FRACTION.
> + * This can be used to reserve enough size to insert n entries without
> + * resizing.
> + */
> +#define ESTIMATE_SIZE(n) (((n)*3+1) >> 1)
> +
> +/* Alternative fraction that is otherwise close enough to 2n/3 to make
> + * little difference. 8 * 2/3 == 8 * 5/8 == 5. 16 * 2/3 == 16 * 5/8 == 10.
> + * 32 * 2/3 = 21, 32 * 5/8 = 20.
> + * Its advantage is that it is faster to compute on machines with slow division.
> + * #define USABLE_FRACTION(n) (((n) >> 1) + ((n) >> 2) - ((n) >> 3))
> + */
> +
> +/* GROWTH_RATE. Growth rate upon hitting maximum load.
> + * Currently set to used*2 + capacity/2.
> + * This means that dicts double in size when growing without deletions,
> + * but have more head room when the number of deletions is on a par with the
> + * number of insertions.
> + * Raising this to used*4 doubles memory consumption depending on the size of
> + * the dictionary, but results in half the number of resizes, less effort to
> + * resize.
> + * GROWTH_RATE was set to used*4 up to version 3.2.
> + * GROWTH_RATE was set to used*2 in version 3.3.0
> + */
> +#define GROWTH_RATE(d) (((d)->ma_used*2)+((d)->ma_keys->dk_size>>1))
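A quick numeric sketch of how these three sizing rules interact for a few table sizes. GROWTH_RATE is re-expressed here on (used, dk_size) instead of a dict pointer so the snippet stands alone; the formulas are otherwise the ones defined above:

    #include <stdio.h>

    #define USABLE_FRACTION(n)          (((n) << 1)/3)
    #define ESTIMATE_SIZE(n)            (((n)*3+1) >> 1)
    #define GROWTH_RATE(used, dk_size)  (((used)*2)+((dk_size)>>1))

    int main(void)
    {
        long sizes[] = { 8, 16, 32, 64 };
        for (int i = 0; i < 4; i++) {
            long size = sizes[i];
            long usable = USABLE_FRACTION(size);   /* e.g. 8 -> 5 entries */
            printf("dk_size=%-3ld usable=%-3ld estimate(usable)=%-3ld "
                   "growth_at_full=%ld\n",
                   size, usable, ESTIMATE_SIZE(usable),
                   GROWTH_RATE(usable, size));
        }
        return 0;
    }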
> +
> +#define ENSURE_ALLOWS_DELETIONS(d) \
> + if ((d)->ma_keys->dk_lookup == lookdict_unicode_nodummy) { \
> + (d)->ma_keys->dk_lookup = lookdict_unicode; \
> + }
> +
> +/* This immutable, empty PyDictKeysObject is used for PyDict_Clear()
> + * (which cannot fail and thus can do no allocation).
> + */
> +static PyDictKeysObject empty_keys_struct = {
> + 1, /* dk_refcnt */
> + 1, /* dk_size */
> + lookdict_split, /* dk_lookup */
> + 0, /* dk_usable (immutable) */
> + 0, /* dk_nentries */
> + {DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY,
> + DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY, DKIX_EMPTY}, /* dk_indices */
> +};
> +
> +static PyObject *empty_values[1] = { NULL };
> +
> +#define Py_EMPTY_KEYS &empty_keys_struct
> +
> +/* Uncomment to check the dict content in _PyDict_CheckConsistency() */
> +/* #define DEBUG_PYDICT */
> +
> +
> +#ifndef NDEBUG
> +static int
> +_PyDict_CheckConsistency(PyDictObject *mp)
> +{
> + PyDictKeysObject *keys = mp->ma_keys;
> + int splitted = _PyDict_HasSplitTable(mp);
> + Py_ssize_t usable = USABLE_FRACTION(keys->dk_size);
> +#ifdef DEBUG_PYDICT
> + PyDictKeyEntry *entries = DK_ENTRIES(keys);
> + Py_ssize_t i;
> +#endif
> +
> + assert(0 <= mp->ma_used && mp->ma_used <= usable);
> + assert(IS_POWER_OF_2(keys->dk_size));
> + assert(0 <= keys->dk_usable
> + && keys->dk_usable <= usable);
> + assert(0 <= keys->dk_nentries
> + && keys->dk_nentries <= usable);
> + assert(keys->dk_usable + keys->dk_nentries <= usable);
> +
> + if (!splitted) {
> + /* combined table */
> + assert(keys->dk_refcnt == 1);
> + }
> +
> +#ifdef DEBUG_PYDICT
> + for (i=0; i < keys->dk_size; i++) {
> + Py_ssize_t ix = dk_get_index(keys, i);
> + assert(DKIX_DUMMY <= ix && ix <= usable);
> + }
> +
> + for (i=0; i < usable; i++) {
> + PyDictKeyEntry *entry = &entries[i];
> + PyObject *key = entry->me_key;
> +
> + if (key != NULL) {
> + if (PyUnicode_CheckExact(key)) {
> + Py_hash_t hash = ((PyASCIIObject *)key)->hash;
> + assert(hash != -1);
> + assert(entry->me_hash == hash);
> + }
> + else {
> + /* test_dict fails if PyObject_Hash() is called again */
> + assert(entry->me_hash != -1);
> + }
> + if (!splitted) {
> + assert(entry->me_value != NULL);
> + }
> + }
> +
> + if (splitted) {
> + assert(entry->me_value == NULL);
> + }
> + }
> +
> + if (splitted) {
> + /* splitted table */
> + for (i=0; i < mp->ma_used; i++) {
> + assert(mp->ma_values[i] != NULL);
> + }
> + }
> +#endif
> +
> + return 1;
> +}
> +#endif
> +
> +
> +static PyDictKeysObject *new_keys_object(Py_ssize_t size)
> +{
> + PyDictKeysObject *dk;
> + Py_ssize_t es, usable;
> +
> + assert(size >= PyDict_MINSIZE);
> + assert(IS_POWER_OF_2(size));
> +
> + usable = USABLE_FRACTION(size);
> + if (size <= 0xff) {
> + es = 1;
> + }
> + else if (size <= 0xffff) {
> + es = 2;
> + }
> +#if SIZEOF_VOID_P > 4
> + else if (size <= 0xffffffff) {
> + es = 4;
> + }
> +#endif
> + else {
> + es = sizeof(Py_ssize_t);
> + }
> +
> + if (size == PyDict_MINSIZE && numfreekeys > 0) {
> + dk = keys_free_list[--numfreekeys];
> + }
> + else {
> + dk = PyObject_MALLOC(sizeof(PyDictKeysObject)
> + + es * size
> + + sizeof(PyDictKeyEntry) * usable);
> + if (dk == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + }
> + DK_DEBUG_INCREF dk->dk_refcnt = 1;
> + dk->dk_size = size;
> + dk->dk_usable = usable;
> + dk->dk_lookup = lookdict_unicode_nodummy;
> + dk->dk_nentries = 0;
> + memset(&dk->dk_indices[0], 0xff, es * size);
> + memset(DK_ENTRIES(dk), 0, sizeof(PyDictKeyEntry) * usable);
> + return dk;
> +}
> +
> +static void
> +free_keys_object(PyDictKeysObject *keys)
> +{
> + PyDictKeyEntry *entries = DK_ENTRIES(keys);
> + Py_ssize_t i, n;
> + for (i = 0, n = keys->dk_nentries; i < n; i++) {
> + Py_XDECREF(entries[i].me_key);
> + Py_XDECREF(entries[i].me_value);
> + }
> + if (keys->dk_size == PyDict_MINSIZE && numfreekeys < PyDict_MAXFREELIST) {
> + keys_free_list[numfreekeys++] = keys;
> + return;
> + }
> + PyObject_FREE(keys);
> +}
> +
> +#define new_values(size) PyMem_NEW(PyObject *, size)
> +#define free_values(values) PyMem_FREE(values)
> +
> +/* Consumes a reference to the keys object */
> +static PyObject *
> +new_dict(PyDictKeysObject *keys, PyObject **values)
> +{
> + PyDictObject *mp;
> + assert(keys != NULL);
> + if (numfree) {
> + mp = free_list[--numfree];
> + assert (mp != NULL);
> + assert (Py_TYPE(mp) == &PyDict_Type);
> + _Py_NewReference((PyObject *)mp);
> + }
> + else {
> + mp = PyObject_GC_New(PyDictObject, &PyDict_Type);
> + if (mp == NULL) {
> + DK_DECREF(keys);
> + free_values(values);
> + return NULL;
> + }
> + }
> + mp->ma_keys = keys;
> + mp->ma_values = values;
> + mp->ma_used = 0;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + assert(_PyDict_CheckConsistency(mp));
> + return (PyObject *)mp;
> +}
> +
> +/* Consumes a reference to the keys object */
> +static PyObject *
> +new_dict_with_shared_keys(PyDictKeysObject *keys)
> +{
> + PyObject **values;
> + Py_ssize_t i, size;
> +
> + size = USABLE_FRACTION(DK_SIZE(keys));
> + values = new_values(size);
> + if (values == NULL) {
> + DK_DECREF(keys);
> + return PyErr_NoMemory();
> + }
> + for (i = 0; i < size; i++) {
> + values[i] = NULL;
> + }
> + return new_dict(keys, values);
> +}
> +
> +PyObject *
> +PyDict_New(void)
> +{
> + PyDictKeysObject *keys = new_keys_object(PyDict_MINSIZE);
> + if (keys == NULL)
> + return NULL;
> + return new_dict(keys, NULL);
> +}
> +
> +/* Search index of hash table from offset of entry table */
> +static Py_ssize_t
> +lookdict_index(PyDictKeysObject *k, Py_hash_t hash, Py_ssize_t index)
> +{
> + size_t i;
> + size_t mask = DK_MASK(k);
> + Py_ssize_t ix;
> +
> + i = (size_t)hash & mask;
> + ix = dk_get_index(k, i);
> + if (ix == index) {
> + return i;
> + }
> + if (ix == DKIX_EMPTY) {
> + return DKIX_EMPTY;
> + }
> +
> + for (size_t perturb = hash;;) {
> + perturb >>= PERTURB_SHIFT;
> + i = mask & ((i << 2) + i + perturb + 1);
> + ix = dk_get_index(k, i);
> + if (ix == index) {
> + return i;
> + }
> + if (ix == DKIX_EMPTY) {
> + return DKIX_EMPTY;
> + }
> + }
> + assert(0); /* NOT REACHED */
> + return DKIX_ERROR;
> +}
> +
> +/*
> +The basic lookup function used by all operations.
> +This is based on Algorithm D from Knuth Vol. 3, Sec. 6.4.
> +Open addressing is preferred over chaining since the link overhead for
> +chaining would be substantial (100% with typical malloc overhead).
> +
> +The initial probe index is computed as hash mod the table size. Subsequent
> +probe indices are computed as explained earlier.
> +
> +All arithmetic on hash should ignore overflow.
> +
> +The details in this version are due to Tim Peters, building on many past
> +contributions by Reimer Behrends, Jyrki Alakuijala, Vladimir Marangozov and
> +Christian Tismer.
> +
> +lookdict() is general-purpose, and may return DKIX_ERROR if (and only if) a
> +comparison raises an exception.
> +lookdict_unicode() below is specialized to string keys, comparison of which can
> +never raise an exception; that function can never return DKIX_ERROR when key
> +is string. Otherwise, it falls back to lookdict().
> +lookdict_unicode_nodummy is further specialized for string keys that cannot be
> +the <dummy> value.
> +For both, when the key isn't found a DKIX_EMPTY is returned. hashpos returns
> +where the key index should be inserted.
> +*/
> +static Py_ssize_t
> +lookdict(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
> +{
> + size_t i, mask;
> + Py_ssize_t ix, freeslot;
> + int cmp;
> + PyDictKeysObject *dk;
> + PyDictKeyEntry *ep0, *ep;
> + PyObject *startkey;
> +
> +top:
> + dk = mp->ma_keys;
> + mask = DK_MASK(dk);
> + ep0 = DK_ENTRIES(dk);
> + i = (size_t)hash & mask;
> +
> + ix = dk_get_index(dk, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + if (ix == DKIX_DUMMY) {
> + freeslot = i;
> + }
> + else {
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL);
> + if (ep->me_key == key) {
> + *value_addr = &ep->me_value;
> + if (hashpos != NULL)
> + *hashpos = i;
> + return ix;
> + }
> + if (ep->me_hash == hash) {
> + startkey = ep->me_key;
> + Py_INCREF(startkey);
> + cmp = PyObject_RichCompareBool(startkey, key, Py_EQ);
> + Py_DECREF(startkey);
> + if (cmp < 0) {
> + *value_addr = NULL;
> + return DKIX_ERROR;
> + }
> + if (dk == mp->ma_keys && ep->me_key == startkey) {
> + if (cmp > 0) {
> + *value_addr = &ep->me_value;
> + if (hashpos != NULL)
> + *hashpos = i;
> + return ix;
> + }
> + }
> + else {
> + /* The dict was mutated, restart */
> + goto top;
> + }
> + }
> + freeslot = -1;
> + }
> +
> + for (size_t perturb = hash;;) {
> + perturb >>= PERTURB_SHIFT;
> + i = ((i << 2) + i + perturb + 1) & mask;
> + ix = dk_get_index(dk, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL) {
> + *hashpos = (freeslot == -1) ? (Py_ssize_t)i : freeslot;
> + }
> + *value_addr = NULL;
> + return ix;
> + }
> + if (ix == DKIX_DUMMY) {
> + if (freeslot == -1)
> + freeslot = i;
> + continue;
> + }
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL);
> + if (ep->me_key == key) {
> + if (hashpos != NULL) {
> + *hashpos = i;
> + }
> + *value_addr = &ep->me_value;
> + return ix;
> + }
> + if (ep->me_hash == hash) {
> + startkey = ep->me_key;
> + Py_INCREF(startkey);
> + cmp = PyObject_RichCompareBool(startkey, key, Py_EQ);
> + Py_DECREF(startkey);
> + if (cmp < 0) {
> + *value_addr = NULL;
> + return DKIX_ERROR;
> + }
> + if (dk == mp->ma_keys && ep->me_key == startkey) {
> + if (cmp > 0) {
> + if (hashpos != NULL) {
> + *hashpos = i;
> + }
> + *value_addr = &ep->me_value;
> + return ix;
> + }
> + }
> + else {
> + /* The dict was mutated, restart */
> + goto top;
> + }
> + }
> + }
> + assert(0); /* NOT REACHED */
> + return 0;
> +}
> +
> +/* Specialized version for string-only keys */
> +static Py_ssize_t
> +lookdict_unicode(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
> +{
> + size_t i;
> + size_t mask = DK_MASK(mp->ma_keys);
> + Py_ssize_t ix, freeslot;
> + PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
> +
> + assert(mp->ma_values == NULL);
> + /* Make sure this function doesn't have to handle non-unicode keys,
> + including subclasses of str; e.g., one reason to subclass
> + unicodes is to override __eq__, and for speed we don't cater to
> + that here. */
> + if (!PyUnicode_CheckExact(key)) {
> + mp->ma_keys->dk_lookup = lookdict;
> + return lookdict(mp, key, hash, value_addr, hashpos);
> + }
> + i = (size_t)hash & mask;
> + ix = dk_get_index(mp->ma_keys, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + if (ix == DKIX_DUMMY) {
> + freeslot = i;
> + }
> + else {
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL);
> + if (ep->me_key == key
> + || (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = &ep->me_value;
> + return ix;
> + }
> + freeslot = -1;
> + }
> +
> + for (size_t perturb = hash;;) {
> + perturb >>= PERTURB_SHIFT;
> + i = mask & ((i << 2) + i + perturb + 1);
> + ix = dk_get_index(mp->ma_keys, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL) {
> + *hashpos = (freeslot == -1) ? (Py_ssize_t)i : freeslot;
> + }
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + if (ix == DKIX_DUMMY) {
> + if (freeslot == -1)
> + freeslot = i;
> + continue;
> + }
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL);
> + if (ep->me_key == key
> + || (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + *value_addr = &ep->me_value;
> + if (hashpos != NULL) {
> + *hashpos = i;
> + }
> + return ix;
> + }
> + }
> + assert(0); /* NOT REACHED */
> + return 0;
> +}
> +
> +/* Faster version of lookdict_unicode when it is known that no <dummy> keys
> + * will be present. */
> +static Py_ssize_t
> +lookdict_unicode_nodummy(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr,
> + Py_ssize_t *hashpos)
> +{
> + size_t i;
> + size_t mask = DK_MASK(mp->ma_keys);
> + Py_ssize_t ix;
> + PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
> +
> + assert(mp->ma_values == NULL);
> + /* Make sure this function doesn't have to handle non-unicode keys,
> + including subclasses of str; e.g., one reason to subclass
> + unicodes is to override __eq__, and for speed we don't cater to
> + that here. */
> + if (!PyUnicode_CheckExact(key)) {
> + mp->ma_keys->dk_lookup = lookdict;
> + return lookdict(mp, key, hash, value_addr, hashpos);
> + }
> + i = (size_t)hash & mask;
> + ix = dk_get_index(mp->ma_keys, i);
> + assert (ix != DKIX_DUMMY);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL);
> + assert(PyUnicode_CheckExact(ep->me_key));
> + if (ep->me_key == key ||
> + (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = &ep->me_value;
> + return ix;
> + }
> + for (size_t perturb = hash;;) {
> + perturb >>= PERTURB_SHIFT;
> + i = mask & ((i << 2) + i + perturb + 1);
> + ix = dk_get_index(mp->ma_keys, i);
> + assert (ix != DKIX_DUMMY);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
> + if (ep->me_key == key ||
> + (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = &ep->me_value;
> + return ix;
> + }
> + }
> + assert(0); /* NOT REACHED */
> + return 0;
> +}
> +
> +/* Version of lookdict for split tables.
> + * All split tables and only split tables use this lookup function.
> + * Split tables only contain unicode keys and no dummy keys,
> + * so algorithm is the same as lookdict_unicode_nodummy.
> + */
> +static Py_ssize_t
> +lookdict_split(PyDictObject *mp, PyObject *key,
> + Py_hash_t hash, PyObject ***value_addr, Py_ssize_t *hashpos)
> +{
> + size_t i;
> + size_t mask = DK_MASK(mp->ma_keys);
> + Py_ssize_t ix;
> + PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
> +
> + /* mp must be a split table */
> + assert(mp->ma_values != NULL);
> + if (!PyUnicode_CheckExact(key)) {
> + ix = lookdict(mp, key, hash, value_addr, hashpos);
> + if (ix >= 0) {
> + *value_addr = &mp->ma_values[ix];
> + }
> + return ix;
> + }
> +
> + i = (size_t)hash & mask;
> + ix = dk_get_index(mp->ma_keys, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + assert(ix >= 0);
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
> + if (ep->me_key == key ||
> + (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = &mp->ma_values[ix];
> + return ix;
> + }
> + for (size_t perturb = hash;;) {
> + perturb >>= PERTURB_SHIFT;
> + i = mask & ((i << 2) + i + perturb + 1);
> + ix = dk_get_index(mp->ma_keys, i);
> + if (ix == DKIX_EMPTY) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = NULL;
> + return DKIX_EMPTY;
> + }
> + assert(ix >= 0);
> + ep = &ep0[ix];
> + assert(ep->me_key != NULL && PyUnicode_CheckExact(ep->me_key));
> + if (ep->me_key == key ||
> + (ep->me_hash == hash && unicode_eq(ep->me_key, key))) {
> + if (hashpos != NULL)
> + *hashpos = i;
> + *value_addr = &mp->ma_values[ix];
> + return ix;
> + }
> + }
> + assert(0); /* NOT REACHED */
> + return 0;
> +}
> +
> +int
> +_PyDict_HasOnlyStringKeys(PyObject *dict)
> +{
> + Py_ssize_t pos = 0;
> + PyObject *key, *value;
> + assert(PyDict_Check(dict));
> + /* Shortcut */
> + if (((PyDictObject *)dict)->ma_keys->dk_lookup != lookdict)
> + return 1;
> + while (PyDict_Next(dict, &pos, &key, &value))
> + if (!PyUnicode_Check(key))
> + return 0;
> + return 1;
> +}
> +
> +#define MAINTAIN_TRACKING(mp, key, value) \
> + do { \
> + if (!_PyObject_GC_IS_TRACKED(mp)) { \
> + if (_PyObject_GC_MAY_BE_TRACKED(key) || \
> + _PyObject_GC_MAY_BE_TRACKED(value)) { \
> + _PyObject_GC_TRACK(mp); \
> + } \
> + } \
> + } while(0)
> +
> +void
> +_PyDict_MaybeUntrack(PyObject *op)
> +{
> + PyDictObject *mp;
> + PyObject *value;
> + Py_ssize_t i, numentries;
> + PyDictKeyEntry *ep0;
> +
> + if (!PyDict_CheckExact(op) || !_PyObject_GC_IS_TRACKED(op))
> + return;
> +
> + mp = (PyDictObject *) op;
> + ep0 = DK_ENTRIES(mp->ma_keys);
> + numentries = mp->ma_keys->dk_nentries;
> + if (_PyDict_HasSplitTable(mp)) {
> + for (i = 0; i < numentries; i++) {
> + if ((value = mp->ma_values[i]) == NULL)
> + continue;
> + if (_PyObject_GC_MAY_BE_TRACKED(value)) {
> + assert(!_PyObject_GC_MAY_BE_TRACKED(ep0[i].me_key));
> + return;
> + }
> + }
> + }
> + else {
> + for (i = 0; i < numentries; i++) {
> + if ((value = ep0[i].me_value) == NULL)
> + continue;
> + if (_PyObject_GC_MAY_BE_TRACKED(value) ||
> + _PyObject_GC_MAY_BE_TRACKED(ep0[i].me_key))
> + return;
> + }
> + }
> + _PyObject_GC_UNTRACK(op);
> +}
> +
> +/* Internal function to find slot for an item from its hash
> + when it is known that the key is not present in the dict.
> +
> + The dict must be combined. */
> +static void
> +find_empty_slot(PyDictObject *mp, PyObject *key, Py_hash_t hash,
> + PyObject ***value_addr, Py_ssize_t *hashpos)
> +{
> + size_t i;
> + size_t mask = DK_MASK(mp->ma_keys);
> + Py_ssize_t ix;
> + PyDictKeyEntry *ep, *ep0 = DK_ENTRIES(mp->ma_keys);
> +
> + assert(!_PyDict_HasSplitTable(mp));
> + assert(hashpos != NULL);
> + assert(key != NULL);
> +
> + if (!PyUnicode_CheckExact(key))
> + mp->ma_keys->dk_lookup = lookdict;
> + i = hash & mask;
> + ix = dk_get_index(mp->ma_keys, i);
> + for (size_t perturb = hash; ix != DKIX_EMPTY;) {
> + perturb >>= PERTURB_SHIFT;
> + i = (i << 2) + i + perturb + 1;
> + ix = dk_get_index(mp->ma_keys, i & mask);
> + }
> + ep = &ep0[mp->ma_keys->dk_nentries];
> + *hashpos = i & mask;
> + assert(ep->me_value == NULL);
> + *value_addr = &ep->me_value;
> +}
> +
> +static int
> +insertion_resize(PyDictObject *mp)
> +{
> + return dictresize(mp, GROWTH_RATE(mp));
> +}
> +
> +/*
> +Internal routine to insert a new item into the table.
> +Used both by the internal resize routine and by the public insert routine.
> +Returns -1 if an error occurred, or 0 on success.
> +*/
> +static int
> +insertdict(PyDictObject *mp, PyObject *key, Py_hash_t hash, PyObject *value)
> +{
> + PyObject *old_value;
> + PyObject **value_addr;
> + PyDictKeyEntry *ep, *ep0;
> + Py_ssize_t hashpos, ix;
> +
> + Py_INCREF(key);
> + Py_INCREF(value);
> + if (mp->ma_values != NULL && !PyUnicode_CheckExact(key)) {
> + if (insertion_resize(mp) < 0)
> + goto Fail;
> + }
> +
> + ix = mp->ma_keys->dk_lookup(mp, key, hash, &value_addr, &hashpos);
> + if (ix == DKIX_ERROR)
> + goto Fail;
> +
> + assert(PyUnicode_CheckExact(key) || mp->ma_keys->dk_lookup == lookdict);
> + MAINTAIN_TRACKING(mp, key, value);
> +
> + /* When the insertion order differs from that of the shared keys, we
> + * can't share the keys anymore. Convert this instance to a combined table.
> + */
> + if (_PyDict_HasSplitTable(mp) &&
> + ((ix >= 0 && *value_addr == NULL && mp->ma_used != ix) ||
> + (ix == DKIX_EMPTY && mp->ma_used != mp->ma_keys->dk_nentries))) {
> + if (insertion_resize(mp) < 0)
> + goto Fail;
> + find_empty_slot(mp, key, hash, &value_addr, &hashpos);
> + ix = DKIX_EMPTY;
> + }
> +
> + if (ix == DKIX_EMPTY) {
> + /* Insert into new slot. */
> + if (mp->ma_keys->dk_usable <= 0) {
> + /* Need to resize. */
> + if (insertion_resize(mp) < 0)
> + goto Fail;
> + find_empty_slot(mp, key, hash, &value_addr, &hashpos);
> + }
> + ep0 = DK_ENTRIES(mp->ma_keys);
> + ep = &ep0[mp->ma_keys->dk_nentries];
> + dk_set_index(mp->ma_keys, hashpos, mp->ma_keys->dk_nentries);
> + ep->me_key = key;
> + ep->me_hash = hash;
> + if (mp->ma_values) {
> + assert (mp->ma_values[mp->ma_keys->dk_nentries] == NULL);
> + mp->ma_values[mp->ma_keys->dk_nentries] = value;
> + }
> + else {
> + ep->me_value = value;
> + }
> + mp->ma_used++;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + mp->ma_keys->dk_usable--;
> + mp->ma_keys->dk_nentries++;
> + assert(mp->ma_keys->dk_usable >= 0);
> + assert(_PyDict_CheckConsistency(mp));
> + return 0;
> + }
> +
> + assert(value_addr != NULL);
> +
> + old_value = *value_addr;
> + if (old_value != NULL) {
> + *value_addr = value;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + assert(_PyDict_CheckConsistency(mp));
> +
> + Py_DECREF(old_value); /* which **CAN** re-enter (see issue #22653) */
> + Py_DECREF(key);
> + return 0;
> + }
> +
> + /* pending state */
> + assert(_PyDict_HasSplitTable(mp));
> + assert(ix == mp->ma_used);
> + *value_addr = value;
> + mp->ma_used++;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + assert(_PyDict_CheckConsistency(mp));
> + Py_DECREF(key);
> + return 0;
> +
> +Fail:
> + Py_DECREF(value);
> + Py_DECREF(key);
> + return -1;
> +}
> +
> +/*
> +Internal routine used by dictresize() to insert an item which is
> +known to be absent from the dict. This routine also assumes that
> +the dict contains no deleted entries. Besides the performance benefit,
> +using insertdict() in dictresize() is dangerous (SF bug #1456209).
> +Note that no refcounts are changed by this routine; if needed, the caller
> +is responsible for incref'ing `key` and `value`.
> +Neither mp->ma_used nor k->dk_usable is modified by this routine; the caller
> +must set them correctly.
> +*/
> +static void
> +insertdict_clean(PyDictObject *mp, PyObject *key, Py_hash_t hash,
> + PyObject *value)
> +{
> + size_t i;
> + PyDictKeysObject *k = mp->ma_keys;
> + size_t mask = (size_t)DK_SIZE(k)-1;
> + PyDictKeyEntry *ep0 = DK_ENTRIES(mp->ma_keys);
> + PyDictKeyEntry *ep;
> +
> + assert(k->dk_lookup != NULL);
> + assert(value != NULL);
> + assert(key != NULL);
> + assert(PyUnicode_CheckExact(key) || k->dk_lookup == lookdict);
> + i = hash & mask;
> + for (size_t perturb = hash; dk_get_index(k, i) != DKIX_EMPTY;) {
> + perturb >>= PERTURB_SHIFT;
> + i = mask & ((i << 2) + i + perturb + 1);
> + }
> + ep = &ep0[k->dk_nentries];
> + assert(ep->me_value == NULL);
> + dk_set_index(k, i, k->dk_nentries);
> + k->dk_nentries++;
> + ep->me_key = key;
> + ep->me_hash = hash;
> + ep->me_value = value;
> +}
> +
> +/*
> +Restructure the table by allocating a new table and reinserting all
> +items again. When entries have been deleted, the new table may
> +actually be smaller than the old one.
> +If a table is split (its keys and hashes are shared, its values are not),
> +then the values are temporarily copied into the table, it is resized as
> +a combined table, then the me_value slots in the old table are NULLed out.
> +After resizing, the table is always combined,
> +but it can be resplit by make_keys_shared().
> +*/
> +static int
> +dictresize(PyDictObject *mp, Py_ssize_t minsize)
> +{
> + Py_ssize_t i, newsize;
> + PyDictKeysObject *oldkeys;
> + PyObject **oldvalues;
> + PyDictKeyEntry *ep0;
> +
> + /* Find the smallest table size >= minsize. */
> + for (newsize = PyDict_MINSIZE;
> + newsize < minsize && newsize > 0;
> + newsize <<= 1)
> + ;
> + if (newsize <= 0) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + oldkeys = mp->ma_keys;
> + oldvalues = mp->ma_values;
> + /* Allocate a new table. */
> + mp->ma_keys = new_keys_object(newsize);
> + if (mp->ma_keys == NULL) {
> + mp->ma_keys = oldkeys;
> + return -1;
> + }
> + // New table must be large enough.
> + assert(mp->ma_keys->dk_usable >= mp->ma_used);
> + if (oldkeys->dk_lookup == lookdict)
> + mp->ma_keys->dk_lookup = lookdict;
> + mp->ma_values = NULL;
> + ep0 = DK_ENTRIES(oldkeys);
> + /* Main loop below assumes we can transfer refcount to new keys
> + * and that value is stored in me_value.
> + * Increment ref-counts and copy values here to compensate
> + * This (resizing a split table) should be relatively rare */
> + if (oldvalues != NULL) {
> + for (i = 0; i < oldkeys->dk_nentries; i++) {
> + if (oldvalues[i] != NULL) {
> + Py_INCREF(ep0[i].me_key);
> + ep0[i].me_value = oldvalues[i];
> + }
> + }
> + }
> + /* Main loop */
> + for (i = 0; i < oldkeys->dk_nentries; i++) {
> + PyDictKeyEntry *ep = &ep0[i];
> + if (ep->me_value != NULL) {
> + insertdict_clean(mp, ep->me_key, ep->me_hash, ep->me_value);
> + }
> + }
> + mp->ma_keys->dk_usable -= mp->ma_used;
> + if (oldvalues != NULL) {
> + /* NULL out me_value slot in oldkeys, in case it was shared */
> + for (i = 0; i < oldkeys->dk_nentries; i++)
> + ep0[i].me_value = NULL;
> + DK_DECREF(oldkeys);
> + if (oldvalues != empty_values) {
> + free_values(oldvalues);
> + }
> + }
> + else {
> + assert(oldkeys->dk_lookup != lookdict_split);
> + assert(oldkeys->dk_refcnt == 1);
> + DK_DEBUG_DECREF PyObject_FREE(oldkeys);
> + }
> + return 0;
> +}
> +
> +/* Returns NULL if unable to split table.
> + * A NULL return does not necessarily indicate an error */
> +static PyDictKeysObject *
> +make_keys_shared(PyObject *op)
> +{
> + Py_ssize_t i;
> + Py_ssize_t size;
> + PyDictObject *mp = (PyDictObject *)op;
> +
> + if (!PyDict_CheckExact(op))
> + return NULL;
> + if (!_PyDict_HasSplitTable(mp)) {
> + PyDictKeyEntry *ep0;
> + PyObject **values;
> + assert(mp->ma_keys->dk_refcnt == 1);
> + if (mp->ma_keys->dk_lookup == lookdict) {
> + return NULL;
> + }
> + else if (mp->ma_keys->dk_lookup == lookdict_unicode) {
> + /* Remove dummy keys */
> + if (dictresize(mp, DK_SIZE(mp->ma_keys)))
> + return NULL;
> + }
> + assert(mp->ma_keys->dk_lookup == lookdict_unicode_nodummy);
> + /* Copy values into a new array */
> + ep0 = DK_ENTRIES(mp->ma_keys);
> + size = USABLE_FRACTION(DK_SIZE(mp->ma_keys));
> + values = new_values(size);
> + if (values == NULL) {
> + PyErr_SetString(PyExc_MemoryError,
> + "Not enough memory to allocate new values array");
> + return NULL;
> + }
> + for (i = 0; i < size; i++) {
> + values[i] = ep0[i].me_value;
> + ep0[i].me_value = NULL;
> + }
> + mp->ma_keys->dk_lookup = lookdict_split;
> + mp->ma_values = values;
> + }
> + DK_INCREF(mp->ma_keys);
> + return mp->ma_keys;
> +}
> +
> +PyObject *
> +_PyDict_NewPresized(Py_ssize_t minused)
> +{
> + const Py_ssize_t max_presize = 128 * 1024;
> + Py_ssize_t newsize;
> + PyDictKeysObject *new_keys;
> +
> + /* There is no strict guarantee that the returned dict can hold minused
> + * items without a resize, so we create a medium-sized dict instead of a
> + * very large dict or a MemoryError.
> + */
> + if (minused > USABLE_FRACTION(max_presize)) {
> + newsize = max_presize;
> + }
> + else {
> + Py_ssize_t minsize = ESTIMATE_SIZE(minused);
> + newsize = PyDict_MINSIZE;
> + while (newsize < minsize) {
> + newsize <<= 1;
> + }
> + }
> + assert(IS_POWER_OF_2(newsize));
> +
> + new_keys = new_keys_object(newsize);
> + if (new_keys == NULL)
> + return NULL;
> + return new_dict(new_keys, NULL);
> +}
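For reference, a minimal sketch of how a caller might use _PyDict_NewPresized() together with PyDict_SetItem() (which does not steal references); the helper name build_table and the choice of integer keys all mapped to Py_None are made up for illustration:

    #include <Python.h>

    /* Build a dict expected to hold about n entries (keys 0..n-1 -> None).
     * Returns a new reference, or NULL with an exception set. */
    static PyObject *
    build_table(Py_ssize_t n)
    {
        Py_ssize_t i;
        PyObject *d = _PyDict_NewPresized(n);
        if (d == NULL)
            return NULL;
        for (i = 0; i < n; i++) {
            PyObject *key = PyLong_FromSsize_t(i);
            if (key == NULL || PyDict_SetItem(d, key, Py_None) < 0) {
                Py_XDECREF(key);
                Py_DECREF(d);
                return NULL;
            }
            Py_DECREF(key);
        }
        return d;
    }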
> +
> +/* Note that, for historical reasons, PyDict_GetItem() suppresses all errors
> + * that may occur (originally dicts supported only string keys, and exceptions
> + * weren't possible). So, while the original intent was that a NULL return
> + * meant the key wasn't present, in reality it can mean that, or that an error
> + * (suppressed) occurred while computing the key's hash, or that some error
> + * (suppressed) occurred when comparing keys in the dict's internal probe
> + * sequence. A nasty example of the latter is when a Python-coded comparison
> + * function hits a stack-depth error, which can cause this to return NULL
> + * even if the key is present.
> + */
> +PyObject *
> +PyDict_GetItem(PyObject *op, PyObject *key)
> +{
> + Py_hash_t hash;
> + Py_ssize_t ix;
> + PyDictObject *mp = (PyDictObject *)op;
> + PyThreadState *tstate;
> + PyObject **value_addr;
> +
> + if (!PyDict_Check(op))
> + return NULL;
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1)
> + {
> + hash = PyObject_Hash(key);
> + if (hash == -1) {
> + PyErr_Clear();
> + return NULL;
> + }
> + }
> +
> + /* We can arrive here with a NULL tstate during initialization: try
> + running "python -Wi" for an example related to string interning.
> + Let's just hope that no exception occurs then... This must be
> + _PyThreadState_Current and not PyThreadState_GET() because in debug
> + mode, the latter complains if tstate is NULL. */
> + tstate = _PyThreadState_UncheckedGet();
> + if (tstate != NULL && tstate->curexc_type != NULL) {
> + /* preserve the existing exception */
> + PyObject *err_type, *err_value, *err_tb;
> + PyErr_Fetch(&err_type, &err_value, &err_tb);
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + /* ignore errors */
> + PyErr_Restore(err_type, err_value, err_tb);
> + if (ix < 0)
> + return NULL;
> + }
> + else {
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix < 0) {
> + PyErr_Clear();
> + return NULL;
> + }
> + }
> + return *value_addr;
> +}
> +
> +/* Same as PyDict_GetItemWithError() but with hash supplied by caller.
> + This returns NULL *with* an exception set if an exception occurred.
> + It returns NULL *without* an exception set if the key wasn't present.
> +*/
> +PyObject *
> +_PyDict_GetItem_KnownHash(PyObject *op, PyObject *key, Py_hash_t hash)
> +{
> + Py_ssize_t ix;
> + PyDictObject *mp = (PyDictObject *)op;
> + PyObject **value_addr;
> +
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix < 0) {
> + return NULL;
> + }
> + return *value_addr;
> +}
> +
> +/* Variant of PyDict_GetItem() that doesn't suppress exceptions.
> + This returns NULL *with* an exception set if an exception occurred.
> + It returns NULL *without* an exception set if the key wasn't present.
> +*/
> +PyObject *
> +PyDict_GetItemWithError(PyObject *op, PyObject *key)
> +{
> + Py_ssize_t ix;
> + Py_hash_t hash;
> + PyDictObject *mp = (PyDictObject *)op;
> + PyObject **value_addr;
> +
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1)
> + {
> + hash = PyObject_Hash(key);
> + if (hash == -1) {
> + return NULL;
> + }
> + }
> +
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix < 0)
> + return NULL;
> + return *value_addr;
> +}
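Since PyDict_GetItem() above swallows any error, a caller that needs to tell "key absent" from "lookup failed" typically uses PyDict_GetItemWithError() as sketched below; lookup_or_default and its parameter names are illustrative only:

    #include <Python.h>

    /* Returns a borrowed reference: the stored value, or fallback if the key
     * is absent; NULL only if the lookup itself raised an exception. */
    static PyObject *
    lookup_or_default(PyObject *d, PyObject *key, PyObject *fallback)
    {
        PyObject *v = PyDict_GetItemWithError(d, key);  /* borrowed */
        if (v == NULL) {
            if (PyErr_Occurred())
                return NULL;        /* hashing or comparison failed */
            return fallback;        /* key simply not present */
        }
        return v;
    }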
> +
> +PyObject *
> +_PyDict_GetItemIdWithError(PyObject *dp, struct _Py_Identifier *key)
> +{
> + PyObject *kv;
> + kv = _PyUnicode_FromId(key); /* borrowed */
> + if (kv == NULL)
> + return NULL;
> + return PyDict_GetItemWithError(dp, kv);
> +}
> +
> +/* Fast version of global value lookup (LOAD_GLOBAL).
> + * Lookup in globals, then builtins.
> + *
> + * Raise an exception and return NULL if an error occurred (e.g., computing the
> + * key hash failed, key comparison failed, ...). Return NULL if the key doesn't
> + * exist. Return the value if the key exists.
> + */
> +PyObject *
> +_PyDict_LoadGlobal(PyDictObject *globals, PyDictObject *builtins, PyObject *key)
> +{
> + Py_ssize_t ix;
> + Py_hash_t hash;
> + PyObject **value_addr;
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1)
> + {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> +
> + /* namespace 1: globals */
> + ix = globals->ma_keys->dk_lookup(globals, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return NULL;
> + if (ix != DKIX_EMPTY && *value_addr != NULL)
> + return *value_addr;
> +
> + /* namespace 2: builtins */
> + ix = builtins->ma_keys->dk_lookup(builtins, key, hash, &value_addr, NULL);
> + if (ix < 0)
> + return NULL;
> + return *value_addr;
> +}
> +
> +/* CAUTION: PyDict_SetItem() must guarantee that it won't resize the
> + * dictionary if it's merely replacing the value for an existing key.
> + * This means that it's safe to loop over a dictionary with PyDict_Next()
> + * and occasionally replace a value -- but you can't insert new keys or
> + * remove them.
> + */
> +int
> +PyDict_SetItem(PyObject *op, PyObject *key, PyObject *value)
> +{
> + PyDictObject *mp;
> + Py_hash_t hash;
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + assert(key);
> + assert(value);
> + mp = (PyDictObject *)op;
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1)
> + {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return -1;
> + }
> +
> + /* insertdict() handles any resizing that might be necessary */
> + return insertdict(mp, key, hash, value);
> +}
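The guarantee in the comment above is what makes a value-only rewrite during iteration legal: PyDict_Next() can keep walking while PyDict_SetItem() replaces values for keys that already exist. A sketch (the helper name none_to_false is invented here):

    #include <Python.h>

    /* Replace every None value with False; returns 0 on success, -1 on error. */
    static int
    none_to_false(PyObject *d)
    {
        Py_ssize_t pos = 0;
        PyObject *key, *value;                         /* borrowed references */

        while (PyDict_Next(d, &pos, &key, &value)) {
            if (value == Py_None &&
                PyDict_SetItem(d, key, Py_False) < 0)  /* existing key: no resize */
                return -1;
        }
        return 0;
    }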
> +
> +int
> +_PyDict_SetItem_KnownHash(PyObject *op, PyObject *key, PyObject *value,
> + Py_hash_t hash)
> +{
> + PyDictObject *mp;
> +
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + assert(key);
> + assert(value);
> + assert(hash != -1);
> + mp = (PyDictObject *)op;
> +
> + /* insertdict() handles any resizing that might be necessary */
> + return insertdict(mp, key, hash, value);
> +}
> +
> +static int
> +delitem_common(PyDictObject *mp, Py_ssize_t hashpos, Py_ssize_t ix,
> + PyObject **value_addr)
> +{
> + PyObject *old_key, *old_value;
> + PyDictKeyEntry *ep;
> +
> + old_value = *value_addr;
> + assert(old_value != NULL);
> + *value_addr = NULL;
> + mp->ma_used--;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + ep = &DK_ENTRIES(mp->ma_keys)[ix];
> + dk_set_index(mp->ma_keys, hashpos, DKIX_DUMMY);
> + ENSURE_ALLOWS_DELETIONS(mp);
> + old_key = ep->me_key;
> + ep->me_key = NULL;
> + Py_DECREF(old_key);
> + Py_DECREF(old_value);
> +
> + assert(_PyDict_CheckConsistency(mp));
> + return 0;
> +}
> +
> +int
> +PyDict_DelItem(PyObject *op, PyObject *key)
> +{
> + Py_hash_t hash;
> + assert(key);
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return -1;
> + }
> +
> + return _PyDict_DelItem_KnownHash(op, key, hash);
> +}
> +
> +int
> +_PyDict_DelItem_KnownHash(PyObject *op, PyObject *key, Py_hash_t hash)
> +{
> + Py_ssize_t hashpos, ix;
> + PyDictObject *mp;
> + PyObject **value_addr;
> +
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + assert(key);
> + assert(hash != -1);
> + mp = (PyDictObject *)op;
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + if (ix == DKIX_ERROR)
> + return -1;
> + if (ix == DKIX_EMPTY || *value_addr == NULL) {
> + _PyErr_SetKeyError(key);
> + return -1;
> + }
> + assert(dk_get_index(mp->ma_keys, hashpos) == ix);
> +
> + // Split table doesn't allow deletion. Combine it.
> + if (_PyDict_HasSplitTable(mp)) {
> + if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
> + return -1;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + assert(ix >= 0);
> + }
> + return delitem_common(mp, hashpos, ix, value_addr);
> +}
> +
> +/* This function promises that the predicate -> deletion sequence is atomic
> + * (i.e. protected by the GIL), assuming the predicate itself doesn't
> + * release the GIL.
> + */
> +int
> +_PyDict_DelItemIf(PyObject *op, PyObject *key,
> + int (*predicate)(PyObject *value))
> +{
> + Py_ssize_t hashpos, ix;
> + PyDictObject *mp;
> + Py_hash_t hash;
> + PyObject **value_addr;
> + int res;
> +
> + if (!PyDict_Check(op)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + assert(key);
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return -1;
> + mp = (PyDictObject *)op;
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + if (ix == DKIX_ERROR)
> + return -1;
> + if (ix == DKIX_EMPTY || *value_addr == NULL) {
> + _PyErr_SetKeyError(key);
> + return -1;
> + }
> + assert(dk_get_index(mp->ma_keys, hashpos) == ix);
> +
> + // Split table doesn't allow deletion. Combine it.
> + if (_PyDict_HasSplitTable(mp)) {
> + if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
> + return -1;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + assert(ix >= 0);
> + }
> +
> + res = predicate(*value_addr);
> + if (res == -1)
> + return -1;
> + if (res > 0)
> + return delitem_common(mp, hashpos, ix, value_addr);
> + else
> + return 0;
> +}
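A sketch of how the private helper above might be driven: the predicate sees the stored value (under the GIL) and a positive return deletes the entry, while a missing key still produces KeyError. The names value_is_none and drop_if_none are invented for illustration:

    #include <Python.h>

    static int
    value_is_none(PyObject *value)
    {
        return value == Py_None;    /* 1 = delete, 0 = keep (-1 would mean error) */
    }

    /* Delete d[key] only if its current value is None. */
    static int
    drop_if_none(PyObject *d, PyObject *key)
    {
        return _PyDict_DelItemIf(d, key, value_is_none);
    }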
> +
> +
> +void
> +PyDict_Clear(PyObject *op)
> +{
> + PyDictObject *mp;
> + PyDictKeysObject *oldkeys;
> + PyObject **oldvalues;
> + Py_ssize_t i, n;
> +
> + if (!PyDict_Check(op))
> + return;
> + mp = ((PyDictObject *)op);
> + oldkeys = mp->ma_keys;
> + oldvalues = mp->ma_values;
> + if (oldvalues == empty_values)
> + return;
> + /* Empty the dict... */
> + DK_INCREF(Py_EMPTY_KEYS);
> + mp->ma_keys = Py_EMPTY_KEYS;
> + mp->ma_values = empty_values;
> + mp->ma_used = 0;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + /* ...then clear the keys and values */
> + if (oldvalues != NULL) {
> + n = oldkeys->dk_nentries;
> + for (i = 0; i < n; i++)
> + Py_CLEAR(oldvalues[i]);
> + free_values(oldvalues);
> + DK_DECREF(oldkeys);
> + }
> + else {
> + assert(oldkeys->dk_refcnt == 1);
> + DK_DECREF(oldkeys);
> + }
> + assert(_PyDict_CheckConsistency(mp));
> +}
> +
> +/* Internal version of PyDict_Next that returns a hash value in addition
> + * to the key and value.
> + * Return 1 on success; return 0 when it has reached the end of the
> + * dictionary (or if op is not a dictionary).
> + */
> +int
> +_PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey,
> + PyObject **pvalue, Py_hash_t *phash)
> +{
> + Py_ssize_t i, n;
> + PyDictObject *mp;
> + PyDictKeyEntry *entry_ptr;
> + PyObject *value;
> +
> + if (!PyDict_Check(op))
> + return 0;
> + mp = (PyDictObject *)op;
> + i = *ppos;
> + n = mp->ma_keys->dk_nentries;
> + if ((size_t)i >= (size_t)n)
> + return 0;
> + if (mp->ma_values) {
> + PyObject **value_ptr = &mp->ma_values[i];
> + while (i < n && *value_ptr == NULL) {
> + value_ptr++;
> + i++;
> + }
> + if (i >= n)
> + return 0;
> + entry_ptr = &DK_ENTRIES(mp->ma_keys)[i];
> + value = *value_ptr;
> + }
> + else {
> + entry_ptr = &DK_ENTRIES(mp->ma_keys)[i];
> + while (i < n && entry_ptr->me_value == NULL) {
> + entry_ptr++;
> + i++;
> + }
> + if (i >= n)
> + return 0;
> + value = entry_ptr->me_value;
> + }
> + *ppos = i+1;
> + if (pkey)
> + *pkey = entry_ptr->me_key;
> + if (phash)
> + *phash = entry_ptr->me_hash;
> + if (pvalue)
> + *pvalue = value;
> + return 1;
> +}
> +
> +/*
> + * Iterate over a dict. Use like so:
> + *
> + * Py_ssize_t i;
> + * PyObject *key, *value;
> + * i = 0; # important! i should not otherwise be changed by you
> + * while (PyDict_Next(yourdict, &i, &key, &value)) {
> + * Refer to borrowed references in key and value.
> + * }
> + *
> + * Return 1 on success; return 0 when it has reached the end of the
> + * dictionary (or if op is not a dictionary).
> + *
> + * CAUTION: In general, it isn't safe to use PyDict_Next in a loop that
> + * mutates the dict. One exception: it is safe if the loop merely changes
> + * the values associated with the keys (but doesn't insert new keys or
> + * delete keys), via PyDict_SetItem().
> + */
> +int
> +PyDict_Next(PyObject *op, Py_ssize_t *ppos, PyObject **pkey, PyObject **pvalue)
> +{
> + return _PyDict_Next(op, ppos, pkey, pvalue, NULL);
> +}
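Because the internal _PyDict_Next() above also hands back the stored hash, it pairs naturally with _PyDict_SetItem_KnownHash() to copy entries without rehashing any key. A sketch; copy_entries and the src/dst names are illustrative:

    #include <Python.h>

    /* Copy every entry of src into dst, reusing the cached hashes.
     * Both arguments must be dicts; returns 0 on success, -1 on error. */
    static int
    copy_entries(PyObject *src, PyObject *dst)
    {
        Py_ssize_t pos = 0;
        PyObject *key, *value;                         /* borrowed references */
        Py_hash_t hash;

        while (_PyDict_Next(src, &pos, &key, &value, &hash)) {
            if (_PyDict_SetItem_KnownHash(dst, key, value, hash) < 0)
                return -1;
        }
        return 0;
    }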
> +
> +/* Internal version of dict.pop(). */
> +PyObject *
> +_PyDict_Pop_KnownHash(PyObject *dict, PyObject *key, Py_hash_t hash, PyObject *deflt)
> +{
> + Py_ssize_t ix, hashpos;
> + PyObject *old_value, *old_key;
> + PyDictKeyEntry *ep;
> + PyObject **value_addr;
> + PyDictObject *mp;
> +
> + assert(PyDict_Check(dict));
> + mp = (PyDictObject *)dict;
> +
> + if (mp->ma_used == 0) {
> + if (deflt) {
> + Py_INCREF(deflt);
> + return deflt;
> + }
> + _PyErr_SetKeyError(key);
> + return NULL;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + if (ix == DKIX_ERROR)
> + return NULL;
> + if (ix == DKIX_EMPTY || *value_addr == NULL) {
> + if (deflt) {
> + Py_INCREF(deflt);
> + return deflt;
> + }
> + _PyErr_SetKeyError(key);
> + return NULL;
> + }
> +
> + // Split table doesn't allow deletion. Combine it.
> + if (_PyDict_HasSplitTable(mp)) {
> + if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
> + return NULL;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + assert(ix >= 0);
> + }
> +
> + old_value = *value_addr;
> + assert(old_value != NULL);
> + *value_addr = NULL;
> + mp->ma_used--;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + dk_set_index(mp->ma_keys, hashpos, DKIX_DUMMY);
> + ep = &DK_ENTRIES(mp->ma_keys)[ix];
> + ENSURE_ALLOWS_DELETIONS(mp);
> + old_key = ep->me_key;
> + ep->me_key = NULL;
> + Py_DECREF(old_key);
> +
> + assert(_PyDict_CheckConsistency(mp));
> + return old_value;
> +}
> +
> +PyObject *
> +_PyDict_Pop(PyObject *dict, PyObject *key, PyObject *deflt)
> +{
> + Py_hash_t hash;
> +
> + if (((PyDictObject *)dict)->ma_used == 0) {
> + if (deflt) {
> + Py_INCREF(deflt);
> + return deflt;
> + }
> + _PyErr_SetKeyError(key);
> + return NULL;
> + }
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> + return _PyDict_Pop_KnownHash(dict, key, hash, deflt);
> +}
> +
> +/* Internal version of dict.fromkeys(). It is subclass-friendly. */
> +PyObject *
> +_PyDict_FromKeys(PyObject *cls, PyObject *iterable, PyObject *value)
> +{
> + PyObject *it; /* iter(iterable) */
> + PyObject *key;
> + PyObject *d;
> + int status;
> +
> + d = PyObject_CallObject(cls, NULL);
> + if (d == NULL)
> + return NULL;
> +
> + if (PyDict_CheckExact(d) && ((PyDictObject *)d)->ma_used == 0) {
> + if (PyDict_CheckExact(iterable)) {
> + PyDictObject *mp = (PyDictObject *)d;
> + PyObject *oldvalue;
> + Py_ssize_t pos = 0;
> + PyObject *key;
> + Py_hash_t hash;
> +
> + if (dictresize(mp, ESTIMATE_SIZE(((PyDictObject *)iterable)->ma_used))) {
> + Py_DECREF(d);
> + return NULL;
> + }
> +
> + while (_PyDict_Next(iterable, &pos, &key, &oldvalue, &hash)) {
> + if (insertdict(mp, key, hash, value)) {
> + Py_DECREF(d);
> + return NULL;
> + }
> + }
> + return d;
> + }
> + if (PyAnySet_CheckExact(iterable)) {
> + PyDictObject *mp = (PyDictObject *)d;
> + Py_ssize_t pos = 0;
> + PyObject *key;
> + Py_hash_t hash;
> +
> + if (dictresize(mp, ESTIMATE_SIZE(PySet_GET_SIZE(iterable)))) {
> + Py_DECREF(d);
> + return NULL;
> + }
> +
> + while (_PySet_NextEntry(iterable, &pos, &key, &hash)) {
> + if (insertdict(mp, key, hash, value)) {
> + Py_DECREF(d);
> + return NULL;
> + }
> + }
> + return d;
> + }
> + }
> +
> + it = PyObject_GetIter(iterable);
> + if (it == NULL){
> + Py_DECREF(d);
> + return NULL;
> + }
> +
> + if (PyDict_CheckExact(d)) {
> + while ((key = PyIter_Next(it)) != NULL) {
> + status = PyDict_SetItem(d, key, value);
> + Py_DECREF(key);
> + if (status < 0)
> + goto Fail;
> + }
> + } else {
> + while ((key = PyIter_Next(it)) != NULL) {
> + status = PyObject_SetItem(d, key, value);
> + Py_DECREF(key);
> + if (status < 0)
> + goto Fail;
> + }
> + }
> +
> + if (PyErr_Occurred())
> + goto Fail;
> + Py_DECREF(it);
> + return d;
> +
> +Fail:
> + Py_DECREF(it);
> + Py_DECREF(d);
> + return NULL;
> +}
> +
> +/* Methods */
> +
> +static void
> +dict_dealloc(PyDictObject *mp)
> +{
> + PyObject **values = mp->ma_values;
> + PyDictKeysObject *keys = mp->ma_keys;
> + Py_ssize_t i, n;
> +
> + /* bpo-31095: UnTrack is needed before calling any callbacks */
> + PyObject_GC_UnTrack(mp);
> + Py_TRASHCAN_SAFE_BEGIN(mp)
> + if (values != NULL) {
> + if (values != empty_values) {
> + for (i = 0, n = mp->ma_keys->dk_nentries; i < n; i++) {
> + Py_XDECREF(values[i]);
> + }
> + free_values(values);
> + }
> + DK_DECREF(keys);
> + }
> + else if (keys != NULL) {
> + assert(keys->dk_refcnt == 1);
> + DK_DECREF(keys);
> + }
> + if (numfree < PyDict_MAXFREELIST && Py_TYPE(mp) == &PyDict_Type)
> + free_list[numfree++] = mp;
> + else
> + Py_TYPE(mp)->tp_free((PyObject *)mp);
> + Py_TRASHCAN_SAFE_END(mp)
> +}
> +
> +
> +static PyObject *
> +dict_repr(PyDictObject *mp)
> +{
> + Py_ssize_t i;
> + PyObject *key = NULL, *value = NULL;
> + _PyUnicodeWriter writer;
> + int first;
> +
> + i = Py_ReprEnter((PyObject *)mp);
> + if (i != 0) {
> + return i > 0 ? PyUnicode_FromString("{...}") : NULL;
> + }
> +
> + if (mp->ma_used == 0) {
> + Py_ReprLeave((PyObject *)mp);
> + return PyUnicode_FromString("{}");
> + }
> +
> + _PyUnicodeWriter_Init(&writer);
> + writer.overallocate = 1;
> + /* "{" + "1: 2" + ", 3: 4" * (len - 1) + "}" */
> + writer.min_length = 1 + 4 + (2 + 4) * (mp->ma_used - 1) + 1;
> +
> + if (_PyUnicodeWriter_WriteChar(&writer, '{') < 0)
> + goto error;
> +
> + /* Do repr() on each key+value pair, and insert ": " between them.
> + Note that repr may mutate the dict. */
> + i = 0;
> + first = 1;
> + while (PyDict_Next((PyObject *)mp, &i, &key, &value)) {
> + PyObject *s;
> + int res;
> +
> + /* Prevent repr from deleting key or value during key format. */
> + Py_INCREF(key);
> + Py_INCREF(value);
> +
> + if (!first) {
> + if (_PyUnicodeWriter_WriteASCIIString(&writer, ", ", 2) < 0)
> + goto error;
> + }
> + first = 0;
> +
> + s = PyObject_Repr(key);
> + if (s == NULL)
> + goto error;
> + res = _PyUnicodeWriter_WriteStr(&writer, s);
> + Py_DECREF(s);
> + if (res < 0)
> + goto error;
> +
> + if (_PyUnicodeWriter_WriteASCIIString(&writer, ": ", 2) < 0)
> + goto error;
> +
> + s = PyObject_Repr(value);
> + if (s == NULL)
> + goto error;
> + res = _PyUnicodeWriter_WriteStr(&writer, s);
> + Py_DECREF(s);
> + if (res < 0)
> + goto error;
> +
> + Py_CLEAR(key);
> + Py_CLEAR(value);
> + }
> +
> + writer.overallocate = 0;
> + if (_PyUnicodeWriter_WriteChar(&writer, '}') < 0)
> + goto error;
> +
> + Py_ReprLeave((PyObject *)mp);
> +
> + return _PyUnicodeWriter_Finish(&writer);
> +
> +error:
> + Py_ReprLeave((PyObject *)mp);
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(key);
> + Py_XDECREF(value);
> + return NULL;
> +}
> +
> +static Py_ssize_t
> +dict_length(PyDictObject *mp)
> +{
> + return mp->ma_used;
> +}
> +
> +static PyObject *
> +dict_subscript(PyDictObject *mp, PyObject *key)
> +{
> + PyObject *v;
> + Py_ssize_t ix;
> + Py_hash_t hash;
> + PyObject **value_addr;
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return NULL;
> + if (ix == DKIX_EMPTY || *value_addr == NULL) {
> + if (!PyDict_CheckExact(mp)) {
> + /* Look up __missing__ method if we're a subclass. */
> + PyObject *missing, *res;
> + _Py_IDENTIFIER(__missing__);
> + missing = _PyObject_LookupSpecial((PyObject *)mp, &PyId___missing__);
> + if (missing != NULL) {
> + res = PyObject_CallFunctionObjArgs(missing,
> + key, NULL);
> + Py_DECREF(missing);
> + return res;
> + }
> + else if (PyErr_Occurred())
> + return NULL;
> + }
> + _PyErr_SetKeyError(key);
> + return NULL;
> + }
> + v = *value_addr;
> + Py_INCREF(v);
> + return v;
> +}
> +
> +static int
> +dict_ass_sub(PyDictObject *mp, PyObject *v, PyObject *w)
> +{
> + if (w == NULL)
> + return PyDict_DelItem((PyObject *)mp, v);
> + else
> + return PyDict_SetItem((PyObject *)mp, v, w);
> +}
> +
> +static PyMappingMethods dict_as_mapping = {
> + (lenfunc)dict_length, /*mp_length*/
> + (binaryfunc)dict_subscript, /*mp_subscript*/
> + (objobjargproc)dict_ass_sub, /*mp_ass_subscript*/
> +};
> +
> +static PyObject *
> +dict_keys(PyDictObject *mp)
> +{
> + PyObject *v;
> + Py_ssize_t i, j;
> + PyDictKeyEntry *ep;
> + Py_ssize_t size, n, offset;
> + PyObject **value_ptr;
> +
> + again:
> + n = mp->ma_used;
> + v = PyList_New(n);
> + if (v == NULL)
> + return NULL;
> + if (n != mp->ma_used) {
> + /* Durnit. The allocations caused the dict to resize.
> + * Just start over, this shouldn't normally happen.
> + */
> + Py_DECREF(v);
> + goto again;
> + }
> + ep = DK_ENTRIES(mp->ma_keys);
> + size = mp->ma_keys->dk_nentries;
> + if (mp->ma_values) {
> + value_ptr = mp->ma_values;
> + offset = sizeof(PyObject *);
> + }
> + else {
> + value_ptr = &ep[0].me_value;
> + offset = sizeof(PyDictKeyEntry);
> + }
> + for (i = 0, j = 0; i < size; i++) {
> + if (*value_ptr != NULL) {
> + PyObject *key = ep[i].me_key;
> + Py_INCREF(key);
> + PyList_SET_ITEM(v, j, key);
> + j++;
> + }
> + value_ptr = (PyObject **)(((char *)value_ptr) + offset);
> + }
> + assert(j == n);
> + return v;
> +}
> +
> +static PyObject *
> +dict_values(PyDictObject *mp)
> +{
> + PyObject *v;
> + Py_ssize_t i, j;
> + PyDictKeyEntry *ep;
> + Py_ssize_t size, n, offset;
> + PyObject **value_ptr;
> +
> + again:
> + n = mp->ma_used;
> + v = PyList_New(n);
> + if (v == NULL)
> + return NULL;
> + if (n != mp->ma_used) {
> + /* Durnit. The allocations caused the dict to resize.
> + * Just start over, this shouldn't normally happen.
> + */
> + Py_DECREF(v);
> + goto again;
> + }
> + ep = DK_ENTRIES(mp->ma_keys);
> + size = mp->ma_keys->dk_nentries;
> + if (mp->ma_values) {
> + value_ptr = mp->ma_values;
> + offset = sizeof(PyObject *);
> + }
> + else {
> + value_ptr = &ep[0].me_value;
> + offset = sizeof(PyDictKeyEntry);
> + }
> + for (i = 0, j = 0; i < size; i++) {
> + PyObject *value = *value_ptr;
> + value_ptr = (PyObject **)(((char *)value_ptr) + offset);
> + if (value != NULL) {
> + Py_INCREF(value);
> + PyList_SET_ITEM(v, j, value);
> + j++;
> + }
> + }
> + assert(j == n);
> + return v;
> +}
> +
> +static PyObject *
> +dict_items(PyDictObject *mp)
> +{
> + PyObject *v;
> + Py_ssize_t i, j, n;
> + Py_ssize_t size, offset;
> + PyObject *item, *key;
> + PyDictKeyEntry *ep;
> + PyObject **value_ptr;
> +
> + /* Preallocate the list of tuples, to avoid allocations during
> + * the loop over the items, which could trigger GC, which
> + * could resize the dict. :-(
> + */
> + again:
> + n = mp->ma_used;
> + v = PyList_New(n);
> + if (v == NULL)
> + return NULL;
> + for (i = 0; i < n; i++) {
> + item = PyTuple_New(2);
> + if (item == NULL) {
> + Py_DECREF(v);
> + return NULL;
> + }
> + PyList_SET_ITEM(v, i, item);
> + }
> + if (n != mp->ma_used) {
> + /* Durnit. The allocations caused the dict to resize.
> + * Just start over, this shouldn't normally happen.
> + */
> + Py_DECREF(v);
> + goto again;
> + }
> + /* Nothing we do below makes any function calls. */
> + ep = DK_ENTRIES(mp->ma_keys);
> + size = mp->ma_keys->dk_nentries;
> + if (mp->ma_values) {
> + value_ptr = mp->ma_values;
> + offset = sizeof(PyObject *);
> + }
> + else {
> + value_ptr = &ep[0].me_value;
> + offset = sizeof(PyDictKeyEntry);
> + }
> + for (i = 0, j = 0; i < size; i++) {
> + PyObject *value = *value_ptr;
> + value_ptr = (PyObject **)(((char *)value_ptr) + offset);
> + if (value != NULL) {
> + key = ep[i].me_key;
> + item = PyList_GET_ITEM(v, j);
> + Py_INCREF(key);
> + PyTuple_SET_ITEM(item, 0, key);
> + Py_INCREF(value);
> + PyTuple_SET_ITEM(item, 1, value);
> + j++;
> + }
> + }
> + assert(j == n);
> + return v;
> +}
> +
> +/*[clinic input]
> +@classmethod
> +dict.fromkeys
> + iterable: object
> + value: object=None
> + /
> +
> +Returns a new dict with keys from iterable and values equal to value.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +dict_fromkeys_impl(PyTypeObject *type, PyObject *iterable, PyObject *value)
> +/*[clinic end generated code: output=8fb98e4b10384999 input=b85a667f9bf4669d]*/
> +{
> + return _PyDict_FromKeys((PyObject *)type, iterable, value);
> +}
> +
> +static int
> +dict_update_common(PyObject *self, PyObject *args, PyObject *kwds,
> + const char *methname)
> +{
> + PyObject *arg = NULL;
> + int result = 0;
> +
> + if (!PyArg_UnpackTuple(args, methname, 0, 1, &arg))
> + result = -1;
> +
> + else if (arg != NULL) {
> + _Py_IDENTIFIER(keys);
> + if (_PyObject_HasAttrId(arg, &PyId_keys))
> + result = PyDict_Merge(self, arg, 1);
> + else
> + result = PyDict_MergeFromSeq2(self, arg, 1);
> + }
> + if (result == 0 && kwds != NULL) {
> + if (PyArg_ValidateKeywordArguments(kwds))
> + result = PyDict_Merge(self, kwds, 1);
> + else
> + result = -1;
> + }
> + return result;
> +}
> +
> +static PyObject *
> +dict_update(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + if (dict_update_common(self, args, kwds, "update") != -1)
> + Py_RETURN_NONE;
> + return NULL;
> +}
> +
> +/* Update unconditionally replaces existing items.
> + Merge has a 3rd argument 'override'; if set, it acts like Update,
> + otherwise it leaves existing items unchanged.
> +
> + PyDict_{Update,Merge} update/merge from a mapping object.
> +
> + PyDict_MergeFromSeq2 updates/merges from any iterable object
> + producing iterable objects of length 2.
> +*/
> +
> +int
> +PyDict_MergeFromSeq2(PyObject *d, PyObject *seq2, int override)
> +{
> + PyObject *it; /* iter(seq2) */
> + Py_ssize_t i; /* index into seq2 of current element */
> + PyObject *item; /* seq2[i] */
> + PyObject *fast; /* item as a 2-tuple or 2-list */
> +
> + assert(d != NULL);
> + assert(PyDict_Check(d));
> + assert(seq2 != NULL);
> +
> + it = PyObject_GetIter(seq2);
> + if (it == NULL)
> + return -1;
> +
> + for (i = 0; ; ++i) {
> + PyObject *key, *value;
> + Py_ssize_t n;
> +
> + fast = NULL;
> + item = PyIter_Next(it);
> + if (item == NULL) {
> + if (PyErr_Occurred())
> + goto Fail;
> + break;
> + }
> +
> + /* Convert item to sequence, and verify length 2. */
> + fast = PySequence_Fast(item, "");
> + if (fast == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError))
> + PyErr_Format(PyExc_TypeError,
> + "cannot convert dictionary update "
> + "sequence element #%zd to a sequence",
> + i);
> + goto Fail;
> + }
> + n = PySequence_Fast_GET_SIZE(fast);
> + if (n != 2) {
> + PyErr_Format(PyExc_ValueError,
> + "dictionary update sequence element #%zd "
> + "has length %zd; 2 is required",
> + i, n);
> + goto Fail;
> + }
> +
> + /* Update/merge with this (key, value) pair. */
> + key = PySequence_Fast_GET_ITEM(fast, 0);
> + value = PySequence_Fast_GET_ITEM(fast, 1);
> + Py_INCREF(key);
> + Py_INCREF(value);
> + if (override || PyDict_GetItem(d, key) == NULL) {
> + int status = PyDict_SetItem(d, key, value);
> + if (status < 0) {
> + Py_DECREF(key);
> + Py_DECREF(value);
> + goto Fail;
> + }
> + }
> + Py_DECREF(key);
> + Py_DECREF(value);
> + Py_DECREF(fast);
> + Py_DECREF(item);
> + }
> +
> + i = 0;
> + assert(_PyDict_CheckConsistency((PyDictObject *)d));
> + goto Return;
> +Fail:
> + Py_XDECREF(item);
> + Py_XDECREF(fast);
> + i = -1;
> +Return:
> + Py_DECREF(it);
> + return Py_SAFE_DOWNCAST(i, Py_ssize_t, int);
> +}
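To make the division of labour described before PyDict_MergeFromSeq2() concrete: PyDict_Update() covers the mapping case, PyDict_MergeFromSeq2() the iterable-of-pairs case. A rough dispatcher, with the name absorb invented here:

    #include <Python.h>

    /* Merge src into d, whether src is a mapping or a sequence of pairs.
     * Later keys win (override = 1). Returns 0 on success, -1 on error. */
    static int
    absorb(PyObject *d, PyObject *src)
    {
        if (PyDict_Check(src) || PyObject_HasAttrString(src, "keys"))
            return PyDict_Update(d, src);
        return PyDict_MergeFromSeq2(d, src, 1);
    }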
> +
> +static int
> +dict_merge(PyObject *a, PyObject *b, int override)
> +{
> + PyDictObject *mp, *other;
> + Py_ssize_t i, n;
> + PyDictKeyEntry *entry, *ep0;
> +
> + assert(0 <= override && override <= 2);
> +
> + /* We accept for the argument either a concrete dictionary object,
> + * or an abstract "mapping" object. For the former, we can do
> + * things quite efficiently. For the latter, we only require that
> + * PyMapping_Keys() and PyObject_GetItem() be supported.
> + */
> + if (a == NULL || !PyDict_Check(a) || b == NULL) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + mp = (PyDictObject*)a;
> + if (PyDict_Check(b) && (Py_TYPE(b)->tp_iter == (getiterfunc)dict_iter)) {
> + other = (PyDictObject*)b;
> + if (other == mp || other->ma_used == 0)
> + /* a.update(a) or a.update({}); nothing to do */
> + return 0;
> + if (mp->ma_used == 0)
> + /* Since the target dict is empty, PyDict_GetItem()
> + * always returns NULL. Setting override to 1
> + * skips the unnecessary test.
> + */
> + override = 1;
> + /* Do one big resize at the start, rather than
> + * incrementally resizing as we insert new items. Expect
> + * that there will be no (or few) overlapping keys.
> + */
> + if (USABLE_FRACTION(mp->ma_keys->dk_size) < other->ma_used) {
> + if (dictresize(mp, ESTIMATE_SIZE(mp->ma_used + other->ma_used))) {
> + return -1;
> + }
> + }
> + ep0 = DK_ENTRIES(other->ma_keys);
> + for (i = 0, n = other->ma_keys->dk_nentries; i < n; i++) {
> + PyObject *key, *value;
> + Py_hash_t hash;
> + entry = &ep0[i];
> + key = entry->me_key;
> + hash = entry->me_hash;
> + if (other->ma_values)
> + value = other->ma_values[i];
> + else
> + value = entry->me_value;
> +
> + if (value != NULL) {
> + int err = 0;
> + Py_INCREF(key);
> + Py_INCREF(value);
> + if (override == 1)
> + err = insertdict(mp, key, hash, value);
> + else if (_PyDict_GetItem_KnownHash(a, key, hash) == NULL) {
> + if (PyErr_Occurred()) {
> + Py_DECREF(value);
> + Py_DECREF(key);
> + return -1;
> + }
> + err = insertdict(mp, key, hash, value);
> + }
> + else if (override != 0) {
> + _PyErr_SetKeyError(key);
> + Py_DECREF(value);
> + Py_DECREF(key);
> + return -1;
> + }
> + Py_DECREF(value);
> + Py_DECREF(key);
> + if (err != 0)
> + return -1;
> +
> + if (n != other->ma_keys->dk_nentries) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "dict mutated during update");
> + return -1;
> + }
> + }
> + }
> + }
> + else {
> + /* Do it the generic, slower way */
> + PyObject *keys = PyMapping_Keys(b);
> + PyObject *iter;
> + PyObject *key, *value;
> + int status;
> +
> + if (keys == NULL)
> + /* Docstring says this is equivalent to E.keys() so
> + * if E doesn't have a .keys() method we want
> + * AttributeError to percolate up. Might as well
> + * do the same for any other error.
> + */
> + return -1;
> +
> + iter = PyObject_GetIter(keys);
> + Py_DECREF(keys);
> + if (iter == NULL)
> + return -1;
> +
> + for (key = PyIter_Next(iter); key; key = PyIter_Next(iter)) {
> + if (override != 1 && PyDict_GetItem(a, key) != NULL) {
> + if (override != 0) {
> + _PyErr_SetKeyError(key);
> + Py_DECREF(key);
> + Py_DECREF(iter);
> + return -1;
> + }
> + Py_DECREF(key);
> + continue;
> + }
> + value = PyObject_GetItem(b, key);
> + if (value == NULL) {
> + Py_DECREF(iter);
> + Py_DECREF(key);
> + return -1;
> + }
> + status = PyDict_SetItem(a, key, value);
> + Py_DECREF(key);
> + Py_DECREF(value);
> + if (status < 0) {
> + Py_DECREF(iter);
> + return -1;
> + }
> + }
> + Py_DECREF(iter);
> + if (PyErr_Occurred())
> + /* Iterator completed, via error */
> + return -1;
> + }
> + assert(_PyDict_CheckConsistency((PyDictObject *)a));
> + return 0;
> +}
> +
> +int
> +PyDict_Update(PyObject *a, PyObject *b)
> +{
> + return dict_merge(a, b, 1);
> +}
> +
> +int
> +PyDict_Merge(PyObject *a, PyObject *b, int override)
> +{
> + /* XXX Deprecate override not in (0, 1). */
> + return dict_merge(a, b, override != 0);
> +}
> +
> +int
> +_PyDict_MergeEx(PyObject *a, PyObject *b, int override)
> +{
> + return dict_merge(a, b, override);
> +}
> +
> +static PyObject *
> +dict_copy(PyDictObject *mp)
> +{
> + return PyDict_Copy((PyObject*)mp);
> +}
> +
> +PyObject *
> +PyDict_Copy(PyObject *o)
> +{
> + PyObject *copy;
> + PyDictObject *mp;
> + Py_ssize_t i, n;
> +
> + if (o == NULL || !PyDict_Check(o)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + mp = (PyDictObject *)o;
> + if (_PyDict_HasSplitTable(mp)) {
> + PyDictObject *split_copy;
> + Py_ssize_t size = USABLE_FRACTION(DK_SIZE(mp->ma_keys));
> + PyObject **newvalues;
> + newvalues = new_values(size);
> + if (newvalues == NULL)
> + return PyErr_NoMemory();
> + split_copy = PyObject_GC_New(PyDictObject, &PyDict_Type);
> + if (split_copy == NULL) {
> + free_values(newvalues);
> + return NULL;
> + }
> + split_copy->ma_values = newvalues;
> + split_copy->ma_keys = mp->ma_keys;
> + split_copy->ma_used = mp->ma_used;
> + split_copy->ma_version_tag = DICT_NEXT_VERSION();
> + DK_INCREF(mp->ma_keys);
> + for (i = 0, n = size; i < n; i++) {
> + PyObject *value = mp->ma_values[i];
> + Py_XINCREF(value);
> + split_copy->ma_values[i] = value;
> + }
> + if (_PyObject_GC_IS_TRACKED(mp))
> + _PyObject_GC_TRACK(split_copy);
> + return (PyObject *)split_copy;
> + }
> + copy = PyDict_New();
> + if (copy == NULL)
> + return NULL;
> + if (PyDict_Merge(copy, o, 1) == 0)
> + return copy;
> + Py_DECREF(copy);
> + return NULL;
> +}
> +
> +Py_ssize_t
> +PyDict_Size(PyObject *mp)
> +{
> + if (mp == NULL || !PyDict_Check(mp)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + return ((PyDictObject *)mp)->ma_used;
> +}
> +
> +PyObject *
> +PyDict_Keys(PyObject *mp)
> +{
> + if (mp == NULL || !PyDict_Check(mp)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + return dict_keys((PyDictObject *)mp);
> +}
> +
> +PyObject *
> +PyDict_Values(PyObject *mp)
> +{
> + if (mp == NULL || !PyDict_Check(mp)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + return dict_values((PyDictObject *)mp);
> +}
> +
> +PyObject *
> +PyDict_Items(PyObject *mp)
> +{
> + if (mp == NULL || !PyDict_Check(mp)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + return dict_items((PyDictObject *)mp);
> +}
> +
> +/* Return 1 if dicts equal, 0 if not, -1 if error.
> + * Gets out as soon as any difference is detected.
> + * Uses only Py_EQ comparison.
> + */
> +static int
> +dict_equal(PyDictObject *a, PyDictObject *b)
> +{
> + Py_ssize_t i;
> +
> + if (a->ma_used != b->ma_used)
> + /* can't be equal if the number of entries differs */
> + return 0;
> + /* Same # of entries -- check all of 'em. Exit early on any diff. */
> + for (i = 0; i < a->ma_keys->dk_nentries; i++) {
> + PyDictKeyEntry *ep = &DK_ENTRIES(a->ma_keys)[i];
> + PyObject *aval;
> + if (a->ma_values)
> + aval = a->ma_values[i];
> + else
> + aval = ep->me_value;
> + if (aval != NULL) {
> + int cmp;
> + PyObject *bval;
> + PyObject **vaddr;
> + PyObject *key = ep->me_key;
> + /* temporarily bump aval's refcount to ensure it stays
> + alive until we're done with it */
> + Py_INCREF(aval);
> + /* ditto for key */
> + Py_INCREF(key);
> + /* reuse the known hash value */
> + if ((b->ma_keys->dk_lookup)(b, key, ep->me_hash, &vaddr, NULL) < 0)
> + bval = NULL;
> + else
> + bval = *vaddr;
> + if (bval == NULL) {
> + Py_DECREF(key);
> + Py_DECREF(aval);
> + if (PyErr_Occurred())
> + return -1;
> + return 0;
> + }
> + cmp = PyObject_RichCompareBool(aval, bval, Py_EQ);
> + Py_DECREF(key);
> + Py_DECREF(aval);
> + if (cmp <= 0) /* error or not equal */
> + return cmp;
> + }
> + }
> + return 1;
> +}
> +
> +static PyObject *
> +dict_richcompare(PyObject *v, PyObject *w, int op)
> +{
> + int cmp;
> + PyObject *res;
> +
> + if (!PyDict_Check(v) || !PyDict_Check(w)) {
> + res = Py_NotImplemented;
> + }
> + else if (op == Py_EQ || op == Py_NE) {
> + cmp = dict_equal((PyDictObject *)v, (PyDictObject *)w);
> + if (cmp < 0)
> + return NULL;
> + res = (cmp == (op == Py_EQ)) ? Py_True : Py_False;
> + }
> + else
> + res = Py_NotImplemented;
> + Py_INCREF(res);
> + return res;
> +}
> +
> +/*[clinic input]
> +
> +@coexist
> +dict.__contains__
> +
> + key: object
> + /
> +
> +True if D has a key k, else False.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +dict___contains__(PyDictObject *self, PyObject *key)
> +/*[clinic end generated code: output=a3d03db709ed6e6b input=b852b2a19b51ab24]*/
> +{
> + register PyDictObject *mp = self;
> + Py_hash_t hash;
> + Py_ssize_t ix;
> + PyObject **value_addr;
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return NULL;
> + if (ix == DKIX_EMPTY || *value_addr == NULL)
> + Py_RETURN_FALSE;
> + Py_RETURN_TRUE;
> +}
> +
> +static PyObject *
> +dict_get(PyDictObject *mp, PyObject *args)
> +{
> + PyObject *key;
> + PyObject *failobj = Py_None;
> + PyObject *val = NULL;
> + Py_hash_t hash;
> + Py_ssize_t ix;
> + PyObject **value_addr;
> +
> + if (!PyArg_UnpackTuple(args, "get", 1, 2, &key, &failobj))
> + return NULL;
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return NULL;
> + if (ix == DKIX_EMPTY || *value_addr == NULL)
> + val = failobj;
> + else
> + val = *value_addr;
> + Py_INCREF(val);
> + return val;
> +}
> +
> +PyObject *
> +PyDict_SetDefault(PyObject *d, PyObject *key, PyObject *defaultobj)
> +{
> + PyDictObject *mp = (PyDictObject *)d;
> + PyObject *value;
> + Py_hash_t hash;
> + Py_ssize_t hashpos, ix;
> + PyObject **value_addr;
> +
> + if (!PyDict_Check(d)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return NULL;
> + }
> +
> + if (mp->ma_values != NULL && !PyUnicode_CheckExact(key)) {
> + if (insertion_resize(mp) < 0)
> + return NULL;
> + }
> +
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, &hashpos);
> + if (ix == DKIX_ERROR)
> + return NULL;
> +
> + if (_PyDict_HasSplitTable(mp) &&
> + ((ix >= 0 && *value_addr == NULL && mp->ma_used != ix) ||
> + (ix == DKIX_EMPTY && mp->ma_used != mp->ma_keys->dk_nentries))) {
> + if (insertion_resize(mp) < 0) {
> + return NULL;
> + }
> + find_empty_slot(mp, key, hash, &value_addr, &hashpos);
> + ix = DKIX_EMPTY;
> + }
> +
> + if (ix == DKIX_EMPTY) {
> + PyDictKeyEntry *ep, *ep0;
> + value = defaultobj;
> + if (mp->ma_keys->dk_usable <= 0) {
> + if (insertion_resize(mp) < 0) {
> + return NULL;
> + }
> + find_empty_slot(mp, key, hash, &value_addr, &hashpos);
> + }
> + ep0 = DK_ENTRIES(mp->ma_keys);
> + ep = &ep0[mp->ma_keys->dk_nentries];
> + dk_set_index(mp->ma_keys, hashpos, mp->ma_keys->dk_nentries);
> + Py_INCREF(key);
> + Py_INCREF(value);
> + MAINTAIN_TRACKING(mp, key, value);
> + ep->me_key = key;
> + ep->me_hash = hash;
> + if (mp->ma_values) {
> + assert(mp->ma_values[mp->ma_keys->dk_nentries] == NULL);
> + mp->ma_values[mp->ma_keys->dk_nentries] = value;
> + }
> + else {
> + ep->me_value = value;
> + }
> + mp->ma_used++;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + mp->ma_keys->dk_usable--;
> + mp->ma_keys->dk_nentries++;
> + assert(mp->ma_keys->dk_usable >= 0);
> + }
> + else if (*value_addr == NULL) {
> + value = defaultobj;
> + assert(_PyDict_HasSplitTable(mp));
> + assert(ix == mp->ma_used);
> + Py_INCREF(value);
> + MAINTAIN_TRACKING(mp, key, value);
> + *value_addr = value;
> + mp->ma_used++;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + }
> + else {
> + value = *value_addr;
> + }
> +
> + assert(_PyDict_CheckConsistency(mp));
> + return value;
> +}
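PyDict_SetDefault() above returns a borrowed reference to whatever ends up stored under the key, which is what the common "dict of lists" pattern relies on. A sketch assuming the existing values (if any) are lists; append_to_bucket is an invented name:

    #include <Python.h>

    /* Append item to the list stored under key, creating the list if needed.
     * Returns 0 on success, -1 on error. */
    static int
    append_to_bucket(PyObject *d, PyObject *key, PyObject *item)
    {
        PyObject *empty = PyList_New(0);
        if (empty == NULL)
            return -1;
        /* Borrowed: either the existing value or the freshly inserted list. */
        PyObject *bucket = PyDict_SetDefault(d, key, empty);
        Py_DECREF(empty);
        if (bucket == NULL)
            return -1;
        return PyList_Append(bucket, item);
    }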
> +
> +static PyObject *
> +dict_setdefault(PyDictObject *mp, PyObject *args)
> +{
> + PyObject *key, *val;
> + PyObject *defaultobj = Py_None;
> +
> + if (!PyArg_UnpackTuple(args, "setdefault", 1, 2, &key, &defaultobj))
> + return NULL;
> +
> + val = PyDict_SetDefault((PyObject *)mp, key, defaultobj);
> + Py_XINCREF(val);
> + return val;
> +}
> +
> +static PyObject *
> +dict_clear(PyDictObject *mp)
> +{
> + PyDict_Clear((PyObject *)mp);
> + Py_RETURN_NONE;
> +}
> +
> +static PyObject *
> +dict_pop(PyDictObject *mp, PyObject *args)
> +{
> + PyObject *key, *deflt = NULL;
> +
> + if(!PyArg_UnpackTuple(args, "pop", 1, 2, &key, &deflt))
> + return NULL;
> +
> + return _PyDict_Pop((PyObject*)mp, key, deflt);
> +}
> +
> +static PyObject *
> +dict_popitem(PyDictObject *mp)
> +{
> + Py_ssize_t i, j;
> + PyDictKeyEntry *ep0, *ep;
> + PyObject *res;
> +
> + /* Allocate the result tuple before checking the size. Believe it
> + * or not, this allocation could trigger a garbage collection which
> + * could empty the dict, so if we checked the size first and that
> + * happened, the result would be an infinite loop (searching for an
> + * entry that no longer exists). Note that the usual popitem()
> + * idiom is "while d: k, v = d.popitem()", so needing to throw the
> + * tuple away if the dict *is* empty isn't a significant
> + * inefficiency -- possible, but unlikely in practice.
> + */
> + res = PyTuple_New(2);
> + if (res == NULL)
> + return NULL;
> + if (mp->ma_used == 0) {
> + Py_DECREF(res);
> + PyErr_SetString(PyExc_KeyError,
> + "popitem(): dictionary is empty");
> + return NULL;
> + }
> + /* Convert split table to combined table */
> + if (mp->ma_keys->dk_lookup == lookdict_split) {
> + if (dictresize(mp, DK_SIZE(mp->ma_keys))) {
> + Py_DECREF(res);
> + return NULL;
> + }
> + }
> + ENSURE_ALLOWS_DELETIONS(mp);
> +
> + /* Pop last item */
> + ep0 = DK_ENTRIES(mp->ma_keys);
> + i = mp->ma_keys->dk_nentries - 1;
> + while (i >= 0 && ep0[i].me_value == NULL) {
> + i--;
> + }
> + assert(i >= 0);
> +
> + ep = &ep0[i];
> + j = lookdict_index(mp->ma_keys, ep->me_hash, i);
> + assert(j >= 0);
> + assert(dk_get_index(mp->ma_keys, j) == i);
> + dk_set_index(mp->ma_keys, j, DKIX_DUMMY);
> +
> + PyTuple_SET_ITEM(res, 0, ep->me_key);
> + PyTuple_SET_ITEM(res, 1, ep->me_value);
> + ep->me_key = NULL;
> + ep->me_value = NULL;
> + /* We can't dk_usable++ since there is DKIX_DUMMY in indices */
> + mp->ma_keys->dk_nentries = i;
> + mp->ma_used--;
> + mp->ma_version_tag = DICT_NEXT_VERSION();
> + assert(_PyDict_CheckConsistency(mp));
> + return res;
> +}
> +
> +static int
> +dict_traverse(PyObject *op, visitproc visit, void *arg)
> +{
> + PyDictObject *mp = (PyDictObject *)op;
> + PyDictKeysObject *keys = mp->ma_keys;
> + PyDictKeyEntry *entries = DK_ENTRIES(keys);
> + Py_ssize_t i, n = keys->dk_nentries;
> +
> + if (keys->dk_lookup == lookdict) {
> + for (i = 0; i < n; i++) {
> + if (entries[i].me_value != NULL) {
> + Py_VISIT(entries[i].me_value);
> + Py_VISIT(entries[i].me_key);
> + }
> + }
> + }
> + else {
> + if (mp->ma_values != NULL) {
> + for (i = 0; i < n; i++) {
> + Py_VISIT(mp->ma_values[i]);
> + }
> + }
> + else {
> + for (i = 0; i < n; i++) {
> + Py_VISIT(entries[i].me_value);
> + }
> + }
> + }
> + return 0;
> +}
> +
> +static int
> +dict_tp_clear(PyObject *op)
> +{
> + PyDict_Clear(op);
> + return 0;
> +}
> +
> +static PyObject *dictiter_new(PyDictObject *, PyTypeObject *);
> +
> +Py_ssize_t
> +_PyDict_SizeOf(PyDictObject *mp)
> +{
> + Py_ssize_t size, usable, res;
> +
> + size = DK_SIZE(mp->ma_keys);
> + usable = USABLE_FRACTION(size);
> +
> + res = _PyObject_SIZE(Py_TYPE(mp));
> + if (mp->ma_values)
> + res += usable * sizeof(PyObject*);
> + /* If the dictionary is split, the keys portion is accounted for
> + in the type object. */
> + if (mp->ma_keys->dk_refcnt == 1)
> + res += (sizeof(PyDictKeysObject)
> + + DK_IXSIZE(mp->ma_keys) * size
> + + sizeof(PyDictKeyEntry) * usable);
> + return res;
> +}
> +
> +Py_ssize_t
> +_PyDict_KeysSize(PyDictKeysObject *keys)
> +{
> + return (sizeof(PyDictKeysObject)
> + + DK_IXSIZE(keys) * DK_SIZE(keys)
> + + USABLE_FRACTION(DK_SIZE(keys)) * sizeof(PyDictKeyEntry));
> +}
> +
> +static PyObject *
> +dict_sizeof(PyDictObject *mp)
> +{
> + return PyLong_FromSsize_t(_PyDict_SizeOf(mp));
> +}
> +
> +PyDoc_STRVAR(getitem__doc__, "x.__getitem__(y) <==> x[y]");
> +
> +PyDoc_STRVAR(sizeof__doc__,
> +"D.__sizeof__() -> size of D in memory, in bytes");
> +
> +PyDoc_STRVAR(get__doc__,
> +"D.get(k[,d]) -> D[k] if k in D, else d. d defaults to None.");
> +
> +PyDoc_STRVAR(setdefault_doc__,
> +"D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D");
> +
> +PyDoc_STRVAR(pop__doc__,
> +"D.pop(k[,d]) -> v, remove specified key and return the corresponding value.\n\
> +If key is not found, d is returned if given, otherwise KeyError is raised");
> +
> +PyDoc_STRVAR(popitem__doc__,
> +"D.popitem() -> (k, v), remove and return some (key, value) pair as a\n\
> +2-tuple; but raise KeyError if D is empty.");
> +
> +PyDoc_STRVAR(update__doc__,
> +"D.update([E, ]**F) -> None. Update D from dict/iterable E and F.\n\
> +If E is present and has a .keys() method, then does: for k in E: D[k] = E[k]\n\
> +If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v\n\
> +In either case, this is followed by: for k in F: D[k] = F[k]");
> +
> +PyDoc_STRVAR(clear__doc__,
> +"D.clear() -> None. Remove all items from D.");
> +
> +PyDoc_STRVAR(copy__doc__,
> +"D.copy() -> a shallow copy of D");
> +
> +/* Forward */
> +static PyObject *dictkeys_new(PyObject *);
> +static PyObject *dictitems_new(PyObject *);
> +static PyObject *dictvalues_new(PyObject *);
> +
> +PyDoc_STRVAR(keys__doc__,
> + "D.keys() -> a set-like object providing a view on D's keys");
> +PyDoc_STRVAR(items__doc__,
> + "D.items() -> a set-like object providing a view on D's items");
> +PyDoc_STRVAR(values__doc__,
> + "D.values() -> an object providing a view on D's values");
> +
> +static PyMethodDef mapp_methods[] = {
> + DICT___CONTAINS___METHODDEF
> + {"__getitem__", (PyCFunction)dict_subscript, METH_O | METH_COEXIST,
> + getitem__doc__},
> + {"__sizeof__", (PyCFunction)dict_sizeof, METH_NOARGS,
> + sizeof__doc__},
> + {"get", (PyCFunction)dict_get, METH_VARARGS,
> + get__doc__},
> + {"setdefault", (PyCFunction)dict_setdefault, METH_VARARGS,
> + setdefault_doc__},
> + {"pop", (PyCFunction)dict_pop, METH_VARARGS,
> + pop__doc__},
> + {"popitem", (PyCFunction)dict_popitem, METH_NOARGS,
> + popitem__doc__},
> + {"keys", (PyCFunction)dictkeys_new, METH_NOARGS,
> + keys__doc__},
> + {"items", (PyCFunction)dictitems_new, METH_NOARGS,
> + items__doc__},
> + {"values", (PyCFunction)dictvalues_new, METH_NOARGS,
> + values__doc__},
> + {"update", (PyCFunction)dict_update, METH_VARARGS | METH_KEYWORDS,
> + update__doc__},
> + DICT_FROMKEYS_METHODDEF
> + {"clear", (PyCFunction)dict_clear, METH_NOARGS,
> + clear__doc__},
> + {"copy", (PyCFunction)dict_copy, METH_NOARGS,
> + copy__doc__},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +/* Return 1 if `key` is in dict `op`, 0 if not, and -1 on error. */
> +int
> +PyDict_Contains(PyObject *op, PyObject *key)
> +{
> + Py_hash_t hash;
> + Py_ssize_t ix;
> + PyDictObject *mp = (PyDictObject *)op;
> + PyObject **value_addr;
> +
> + if (!PyUnicode_CheckExact(key) ||
> + (hash = ((PyASCIIObject *) key)->hash) == -1) {
> + hash = PyObject_Hash(key);
> + if (hash == -1)
> + return -1;
> + }
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return -1;
> + return (ix != DKIX_EMPTY && *value_addr != NULL);
> +}
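
PyDict_Contains() reports membership with a three-way result (1 present, 0 absent, -1 on
error, e.g. an unhashable key), which is what the sq_contains slot below relies on. A
minimal usage sketch from embedding code, illustrative only and not part of the patch:

    /* Illustrative sketch only -- not part of the patch. */
    static int has_key(PyObject *dict, const char *name)
    {
        PyObject *key = PyUnicode_FromString(name);
        int rc;
        if (key == NULL)
            return -1;
        rc = PyDict_Contains(dict, key);   /* 1, 0, or -1 with an exception set */
        Py_DECREF(key);
        return rc;
    }
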
> +
> +/* Internal version of PyDict_Contains used when the hash value is already known */
> +int
> +_PyDict_Contains(PyObject *op, PyObject *key, Py_hash_t hash)
> +{
> + PyDictObject *mp = (PyDictObject *)op;
> + PyObject **value_addr;
> + Py_ssize_t ix;
> +
> + ix = (mp->ma_keys->dk_lookup)(mp, key, hash, &value_addr, NULL);
> + if (ix == DKIX_ERROR)
> + return -1;
> + return (ix != DKIX_EMPTY && *value_addr != NULL);
> +}
> +
> +/* Hack to implement "key in dict" */
> +static PySequenceMethods dict_as_sequence = {
> + 0, /* sq_length */
> + 0, /* sq_concat */
> + 0, /* sq_repeat */
> + 0, /* sq_item */
> + 0, /* sq_slice */
> + 0, /* sq_ass_item */
> + 0, /* sq_ass_slice */
> + PyDict_Contains, /* sq_contains */
> + 0, /* sq_inplace_concat */
> + 0, /* sq_inplace_repeat */
> +};
> +
> +static PyObject *
> +dict_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *self;
> + PyDictObject *d;
> +
> + assert(type != NULL && type->tp_alloc != NULL);
> + self = type->tp_alloc(type, 0);
> + if (self == NULL)
> + return NULL;
> + d = (PyDictObject *)self;
> +
> + /* The object has been implicitly tracked by tp_alloc */
> + if (type == &PyDict_Type)
> + _PyObject_GC_UNTRACK(d);
> +
> + d->ma_used = 0;
> + d->ma_version_tag = DICT_NEXT_VERSION();
> + d->ma_keys = new_keys_object(PyDict_MINSIZE);
> + if (d->ma_keys == NULL) {
> + Py_DECREF(self);
> + return NULL;
> + }
> + assert(_PyDict_CheckConsistency(d));
> + return self;
> +}
> +
> +static int
> +dict_init(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + return dict_update_common(self, args, kwds, "dict");
> +}
> +
> +static PyObject *
> +dict_iter(PyDictObject *dict)
> +{
> + return dictiter_new(dict, &PyDictIterKey_Type);
> +}
> +
> +PyDoc_STRVAR(dictionary_doc,
> +"dict() -> new empty dictionary\n"
> +"dict(mapping) -> new dictionary initialized from a mapping object's\n"
> +" (key, value) pairs\n"
> +"dict(iterable) -> new dictionary initialized as if via:\n"
> +" d = {}\n"
> +" for k, v in iterable:\n"
> +" d[k] = v\n"
> +"dict(**kwargs) -> new dictionary initialized with the name=value pairs\n"
> +" in the keyword argument list. For example: dict(one=1, two=2)");
> +
> +PyTypeObject PyDict_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict",
> + sizeof(PyDictObject),
> + 0,
> + (destructor)dict_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)dict_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + &dict_as_sequence, /* tp_as_sequence */
> + &dict_as_mapping, /* tp_as_mapping */
> + PyObject_HashNotImplemented, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
> + Py_TPFLAGS_BASETYPE | Py_TPFLAGS_DICT_SUBCLASS, /* tp_flags */
> + dictionary_doc, /* tp_doc */
> + dict_traverse, /* tp_traverse */
> + dict_tp_clear, /* tp_clear */
> + dict_richcompare, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + (getiterfunc)dict_iter, /* tp_iter */
> + 0, /* tp_iternext */
> + mapp_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + dict_init, /* tp_init */
> + PyType_GenericAlloc, /* tp_alloc */
> + dict_new, /* tp_new */
> + PyObject_GC_Del, /* tp_free */
> +};
> +
> +PyObject *
> +_PyDict_GetItemId(PyObject *dp, struct _Py_Identifier *key)
> +{
> + PyObject *kv;
> + kv = _PyUnicode_FromId(key); /* borrowed */
> + if (kv == NULL) {
> + PyErr_Clear();
> + return NULL;
> + }
> + return PyDict_GetItem(dp, kv);
> +}
> +
> +/* For backward compatibility with old dictionary interface */
> +
> +PyObject *
> +PyDict_GetItemString(PyObject *v, const char *key)
> +{
> + PyObject *kv, *rv;
> + kv = PyUnicode_FromString(key);
> + if (kv == NULL) {
> + PyErr_Clear();
> + return NULL;
> + }
> + rv = PyDict_GetItem(v, kv);
> + Py_DECREF(kv);
> + return rv;
> +}
> +
> +int
> +_PyDict_SetItemId(PyObject *v, struct _Py_Identifier *key, PyObject *item)
> +{
> + PyObject *kv;
> + kv = _PyUnicode_FromId(key); /* borrowed */
> + if (kv == NULL)
> + return -1;
> + return PyDict_SetItem(v, kv, item);
> +}
> +
> +int
> +PyDict_SetItemString(PyObject *v, const char *key, PyObject *item)
> +{
> + PyObject *kv;
> + int err;
> + kv = PyUnicode_FromString(key);
> + if (kv == NULL)
> + return -1;
> + PyUnicode_InternInPlace(&kv); /* XXX Should we really? */
> + err = PyDict_SetItem(v, kv, item);
> + Py_DECREF(kv);
> + return err;
> +}
> +
> +int
> +_PyDict_DelItemId(PyObject *v, _Py_Identifier *key)
> +{
> + PyObject *kv = _PyUnicode_FromId(key); /* borrowed */
> + if (kv == NULL)
> + return -1;
> + return PyDict_DelItem(v, kv);
> +}
> +
> +int
> +PyDict_DelItemString(PyObject *v, const char *key)
> +{
> + PyObject *kv;
> + int err;
> + kv = PyUnicode_FromString(key);
> + if (kv == NULL)
> + return -1;
> + err = PyDict_DelItem(v, kv);
> + Py_DECREF(kv);
> + return err;
> +}
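
The *ItemString wrappers above exist so that embedding code can use plain C strings as
keys without building key objects by hand. A minimal sketch of how they combine
(illustrative only, not part of the patch; bump_counter is a hypothetical helper, and
note that PyDict_GetItemString returns a borrowed reference):

    /* Illustrative sketch only -- not part of the patch. */
    static int bump_counter(PyObject *d, const char *name)
    {
        PyObject *old = PyDict_GetItemString(d, name);  /* borrowed, NULL if absent */
        long n = old ? PyLong_AsLong(old) : 0;
        PyObject *val;

        if (n == -1 && PyErr_Occurred())
            return -1;
        val = PyLong_FromLong(n + 1);
        if (val == NULL)
            return -1;
        if (PyDict_SetItemString(d, name, val) < 0) {
            Py_DECREF(val);
            return -1;
        }
        Py_DECREF(val);
        return 0;
    }
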
> +
> +/* Dictionary iterator types */
> +
> +typedef struct {
> + PyObject_HEAD
> + PyDictObject *di_dict; /* Set to NULL when iterator is exhausted */
> + Py_ssize_t di_used;
> + Py_ssize_t di_pos;
> + PyObject* di_result; /* reusable result tuple for iteritems */
> + Py_ssize_t len;
> +} dictiterobject;
> +
> +static PyObject *
> +dictiter_new(PyDictObject *dict, PyTypeObject *itertype)
> +{
> + dictiterobject *di;
> + di = PyObject_GC_New(dictiterobject, itertype);
> + if (di == NULL)
> + return NULL;
> + Py_INCREF(dict);
> + di->di_dict = dict;
> + di->di_used = dict->ma_used;
> + di->di_pos = 0;
> + di->len = dict->ma_used;
> + if (itertype == &PyDictIterItem_Type) {
> + di->di_result = PyTuple_Pack(2, Py_None, Py_None);
> + if (di->di_result == NULL) {
> + Py_DECREF(di);
> + return NULL;
> + }
> + }
> + else
> + di->di_result = NULL;
> + _PyObject_GC_TRACK(di);
> + return (PyObject *)di;
> +}
> +
> +static void
> +dictiter_dealloc(dictiterobject *di)
> +{
> + /* bpo-31095: UnTrack is needed before calling any callbacks */
> + _PyObject_GC_UNTRACK(di);
> + Py_XDECREF(di->di_dict);
> + Py_XDECREF(di->di_result);
> + PyObject_GC_Del(di);
> +}
> +
> +static int
> +dictiter_traverse(dictiterobject *di, visitproc visit, void *arg)
> +{
> + Py_VISIT(di->di_dict);
> + Py_VISIT(di->di_result);
> + return 0;
> +}
> +
> +static PyObject *
> +dictiter_len(dictiterobject *di)
> +{
> + Py_ssize_t len = 0;
> + if (di->di_dict != NULL && di->di_used == di->di_dict->ma_used)
> + len = di->len;
> + return PyLong_FromSize_t(len);
> +}
> +
> +PyDoc_STRVAR(length_hint_doc,
> + "Private method returning an estimate of len(list(it)).");
> +
> +static PyObject *
> +dictiter_reduce(dictiterobject *di);
> +
> +PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
> +
> +static PyMethodDef dictiter_methods[] = {
> + {"__length_hint__", (PyCFunction)dictiter_len, METH_NOARGS,
> + length_hint_doc},
> + {"__reduce__", (PyCFunction)dictiter_reduce, METH_NOARGS,
> + reduce_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +static PyObject*
> +dictiter_iternextkey(dictiterobject *di)
> +{
> + PyObject *key;
> + Py_ssize_t i, n;
> + PyDictKeysObject *k;
> + PyDictObject *d = di->di_dict;
> +
> + if (d == NULL)
> + return NULL;
> + assert (PyDict_Check(d));
> +
> + if (di->di_used != d->ma_used) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "dictionary changed size during iteration");
> + di->di_used = -1; /* Make this state sticky */
> + return NULL;
> + }
> +
> + i = di->di_pos;
> + k = d->ma_keys;
> + n = k->dk_nentries;
> + if (d->ma_values) {
> + PyObject **value_ptr = &d->ma_values[i];
> + while (i < n && *value_ptr == NULL) {
> + value_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + key = DK_ENTRIES(k)[i].me_key;
> + }
> + else {
> + PyDictKeyEntry *entry_ptr = &DK_ENTRIES(k)[i];
> + while (i < n && entry_ptr->me_value == NULL) {
> + entry_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + key = entry_ptr->me_key;
> + }
> + di->di_pos = i+1;
> + di->len--;
> + Py_INCREF(key);
> + return key;
> +
> +fail:
> + di->di_dict = NULL;
> + Py_DECREF(d);
> + return NULL;
> +}
> +
> +PyTypeObject PyDictIterKey_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_keyiterator", /* tp_name */
> + sizeof(dictiterobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictiter_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictiter_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)dictiter_iternextkey, /* tp_iternext */
> + dictiter_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +dictiter_iternextvalue(dictiterobject *di)
> +{
> + PyObject *value;
> + Py_ssize_t i, n;
> + PyDictObject *d = di->di_dict;
> +
> + if (d == NULL)
> + return NULL;
> + assert (PyDict_Check(d));
> +
> + if (di->di_used != d->ma_used) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "dictionary changed size during iteration");
> + di->di_used = -1; /* Make this state sticky */
> + return NULL;
> + }
> +
> + i = di->di_pos;
> + n = d->ma_keys->dk_nentries;
> + if (d->ma_values) {
> + PyObject **value_ptr = &d->ma_values[i];
> + while (i < n && *value_ptr == NULL) {
> + value_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + value = *value_ptr;
> + }
> + else {
> + PyDictKeyEntry *entry_ptr = &DK_ENTRIES(d->ma_keys)[i];
> + while (i < n && entry_ptr->me_value == NULL) {
> + entry_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + value = entry_ptr->me_value;
> + }
> + di->di_pos = i+1;
> + di->len--;
> + Py_INCREF(value);
> + return value;
> +
> +fail:
> + di->di_dict = NULL;
> + Py_DECREF(d);
> + return NULL;
> +}
> +
> +PyTypeObject PyDictIterValue_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_valueiterator", /* tp_name */
> + sizeof(dictiterobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictiter_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictiter_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)dictiter_iternextvalue, /* tp_iternext */
> + dictiter_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +dictiter_iternextitem(dictiterobject *di)
> +{
> + PyObject *key, *value, *result;
> + Py_ssize_t i, n;
> + PyDictObject *d = di->di_dict;
> +
> + if (d == NULL)
> + return NULL;
> + assert (PyDict_Check(d));
> +
> + if (di->di_used != d->ma_used) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "dictionary changed size during iteration");
> + di->di_used = -1; /* Make this state sticky */
> + return NULL;
> + }
> +
> + i = di->di_pos;
> + n = d->ma_keys->dk_nentries;
> + if (d->ma_values) {
> + PyObject **value_ptr = &d->ma_values[i];
> + while (i < n && *value_ptr == NULL) {
> + value_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + key = DK_ENTRIES(d->ma_keys)[i].me_key;
> + value = *value_ptr;
> + }
> + else {
> + PyDictKeyEntry *entry_ptr = &DK_ENTRIES(d->ma_keys)[i];
> + while (i < n && entry_ptr->me_value == NULL) {
> + entry_ptr++;
> + i++;
> + }
> + if (i >= n)
> + goto fail;
> + key = entry_ptr->me_key;
> + value = entry_ptr->me_value;
> + }
> + di->di_pos = i+1;
> + di->len--;
> + Py_INCREF(key);
> + Py_INCREF(value);
> + result = di->di_result;
> + if (Py_REFCNT(result) == 1) {
> + PyObject *oldkey = PyTuple_GET_ITEM(result, 0);
> + PyObject *oldvalue = PyTuple_GET_ITEM(result, 1);
> + PyTuple_SET_ITEM(result, 0, key); /* steals reference */
> + PyTuple_SET_ITEM(result, 1, value); /* steals reference */
> + Py_INCREF(result);
> + Py_DECREF(oldkey);
> + Py_DECREF(oldvalue);
> + }
> + else {
> + result = PyTuple_New(2);
> + if (result == NULL)
> + return NULL;
> + PyTuple_SET_ITEM(result, 0, key); /* steals reference */
> + PyTuple_SET_ITEM(result, 1, value); /* steals reference */
> + }
> + return result;
> +
> +fail:
> + di->di_dict = NULL;
> + Py_DECREF(d);
> + return NULL;
> +}
> +
> +PyTypeObject PyDictIterItem_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_itemiterator", /* tp_name */
> + sizeof(dictiterobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictiter_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictiter_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)dictiter_iternextitem, /* tp_iternext */
> + dictiter_methods, /* tp_methods */
> + 0,
> +};
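
The three iterator types above back Python-level iteration over keys, values and items.
From C, the usual way to walk a dict without creating an iterator object is the public
PyDict_Next(), which yields borrowed references and must not be mixed with mutation of
the dict being walked. Illustrative sketch only, not part of the patch (assumes
<Python.h> and <stdio.h>):

    /* Illustrative sketch only -- not part of the patch. */
    static void print_items(PyObject *d)
    {
        Py_ssize_t pos = 0;
        PyObject *key, *value;                 /* borrowed references */

        while (PyDict_Next(d, &pos, &key, &value)) {
            PyObject *k = PyObject_Str(key);
            PyObject *v = PyObject_Str(value);
            if (k != NULL && v != NULL) {
                const char *ks = PyUnicode_AsUTF8(k);
                const char *vs = PyUnicode_AsUTF8(v);
                if (ks != NULL && vs != NULL)
                    printf("%s = %s\n", ks, vs);
            }
            Py_XDECREF(k);
            Py_XDECREF(v);
            if (PyErr_Occurred())
                PyErr_Clear();                 /* keep the sketch simple */
        }
    }
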
> +
> +
> +static PyObject *
> +dictiter_reduce(dictiterobject *di)
> +{
> + PyObject *list;
> + dictiterobject tmp;
> +
> + list = PyList_New(0);
> + if (!list)
> + return NULL;
> +
> + /* copy the iterator state */
> + tmp = *di;
> + Py_XINCREF(tmp.di_dict);
> +
> + /* iterate the temporary into a list */
> + for(;;) {
> + PyObject *element = 0;
> + if (Py_TYPE(di) == &PyDictIterItem_Type)
> + element = dictiter_iternextitem(&tmp);
> + else if (Py_TYPE(di) == &PyDictIterKey_Type)
> + element = dictiter_iternextkey(&tmp);
> + else if (Py_TYPE(di) == &PyDictIterValue_Type)
> + element = dictiter_iternextvalue(&tmp);
> + else
> + assert(0);
> + if (element) {
> + if (PyList_Append(list, element)) {
> + Py_DECREF(element);
> + Py_DECREF(list);
> + Py_XDECREF(tmp.di_dict);
> + return NULL;
> + }
> + Py_DECREF(element);
> + } else
> + break;
> + }
> + Py_XDECREF(tmp.di_dict);
> + /* check for error */
> + if (tmp.di_dict != NULL) {
> + /* we have an error */
> + Py_DECREF(list);
> + return NULL;
> + }
> + return Py_BuildValue("N(N)", _PyObject_GetBuiltin("iter"), list);
> +}
> +
> +/***********************************************/
> +/* View objects for keys(), items(), values(). */
> +/***********************************************/
> +
> +/* The instance layout is the same for all three, but the type differs. */
> +
> +static void
> +dictview_dealloc(_PyDictViewObject *dv)
> +{
> + /* bpo-31095: UnTrack is needed before calling any callbacks */
> + _PyObject_GC_UNTRACK(dv);
> + Py_XDECREF(dv->dv_dict);
> + PyObject_GC_Del(dv);
> +}
> +
> +static int
> +dictview_traverse(_PyDictViewObject *dv, visitproc visit, void *arg)
> +{
> + Py_VISIT(dv->dv_dict);
> + return 0;
> +}
> +
> +static Py_ssize_t
> +dictview_len(_PyDictViewObject *dv)
> +{
> + Py_ssize_t len = 0;
> + if (dv->dv_dict != NULL)
> + len = dv->dv_dict->ma_used;
> + return len;
> +}
> +
> +PyObject *
> +_PyDictView_New(PyObject *dict, PyTypeObject *type)
> +{
> + _PyDictViewObject *dv;
> + if (dict == NULL) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (!PyDict_Check(dict)) {
> + /* XXX Get rid of this restriction later */
> + PyErr_Format(PyExc_TypeError,
> + "%s() requires a dict argument, not '%s'",
> + type->tp_name, dict->ob_type->tp_name);
> + return NULL;
> + }
> + dv = PyObject_GC_New(_PyDictViewObject, type);
> + if (dv == NULL)
> + return NULL;
> + Py_INCREF(dict);
> + dv->dv_dict = (PyDictObject *)dict;
> + _PyObject_GC_TRACK(dv);
> + return (PyObject *)dv;
> +}
> +
> +/* TODO(guido): The views objects are not complete:
> +
> + * support more set operations
> + * support arbitrary mappings?
> + - either these should be static or exported in dictobject.h
> + - if public then they should probably be in builtins
> +*/
> +
> +/* Return 1 if self is a subset of other, iterating over self;
> + 0 if not; -1 if an error occurred. */
> +static int
> +all_contained_in(PyObject *self, PyObject *other)
> +{
> + PyObject *iter = PyObject_GetIter(self);
> + int ok = 1;
> +
> + if (iter == NULL)
> + return -1;
> + for (;;) {
> + PyObject *next = PyIter_Next(iter);
> + if (next == NULL) {
> + if (PyErr_Occurred())
> + ok = -1;
> + break;
> + }
> + ok = PySequence_Contains(other, next);
> + Py_DECREF(next);
> + if (ok <= 0)
> + break;
> + }
> + Py_DECREF(iter);
> + return ok;
> +}
> +
> +static PyObject *
> +dictview_richcompare(PyObject *self, PyObject *other, int op)
> +{
> + Py_ssize_t len_self, len_other;
> + int ok;
> + PyObject *result;
> +
> + assert(self != NULL);
> + assert(PyDictViewSet_Check(self));
> + assert(other != NULL);
> +
> + if (!PyAnySet_Check(other) && !PyDictViewSet_Check(other))
> + Py_RETURN_NOTIMPLEMENTED;
> +
> + len_self = PyObject_Size(self);
> + if (len_self < 0)
> + return NULL;
> + len_other = PyObject_Size(other);
> + if (len_other < 0)
> + return NULL;
> +
> + ok = 0;
> + switch(op) {
> +
> + case Py_NE:
> + case Py_EQ:
> + if (len_self == len_other)
> + ok = all_contained_in(self, other);
> + if (op == Py_NE && ok >= 0)
> + ok = !ok;
> + break;
> +
> + case Py_LT:
> + if (len_self < len_other)
> + ok = all_contained_in(self, other);
> + break;
> +
> + case Py_LE:
> + if (len_self <= len_other)
> + ok = all_contained_in(self, other);
> + break;
> +
> + case Py_GT:
> + if (len_self > len_other)
> + ok = all_contained_in(other, self);
> + break;
> +
> + case Py_GE:
> + if (len_self >= len_other)
> + ok = all_contained_in(other, self);
> + break;
> +
> + }
> + if (ok < 0)
> + return NULL;
> + result = ok ? Py_True : Py_False;
> + Py_INCREF(result);
> + return result;
> +}
> +
> +static PyObject *
> +dictview_repr(_PyDictViewObject *dv)
> +{
> + PyObject *seq;
> + PyObject *result = NULL;
> + Py_ssize_t rc;
> +
> + rc = Py_ReprEnter((PyObject *)dv);
> + if (rc != 0) {
> + return rc > 0 ? PyUnicode_FromString("...") : NULL;
> + }
> + seq = PySequence_List((PyObject *)dv);
> + if (seq == NULL) {
> + goto Done;
> + }
> + result = PyUnicode_FromFormat("%s(%R)", Py_TYPE(dv)->tp_name, seq);
> + Py_DECREF(seq);
> +
> +Done:
> + Py_ReprLeave((PyObject *)dv);
> + return result;
> +}
> +
> +/*** dict_keys ***/
> +
> +static PyObject *
> +dictkeys_iter(_PyDictViewObject *dv)
> +{
> + if (dv->dv_dict == NULL) {
> + Py_RETURN_NONE;
> + }
> + return dictiter_new(dv->dv_dict, &PyDictIterKey_Type);
> +}
> +
> +static int
> +dictkeys_contains(_PyDictViewObject *dv, PyObject *obj)
> +{
> + if (dv->dv_dict == NULL)
> + return 0;
> + return PyDict_Contains((PyObject *)dv->dv_dict, obj);
> +}
> +
> +static PySequenceMethods dictkeys_as_sequence = {
> + (lenfunc)dictview_len, /* sq_length */
> + 0, /* sq_concat */
> + 0, /* sq_repeat */
> + 0, /* sq_item */
> + 0, /* sq_slice */
> + 0, /* sq_ass_item */
> + 0, /* sq_ass_slice */
> + (objobjproc)dictkeys_contains, /* sq_contains */
> +};
> +
> +static PyObject*
> +dictviews_sub(PyObject* self, PyObject *other)
> +{
> + PyObject *result = PySet_New(self);
> + PyObject *tmp;
> + _Py_IDENTIFIER(difference_update);
> +
> + if (result == NULL)
> + return NULL;
> +
> + tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_difference_update, other, NULL);
> + if (tmp == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +PyObject*
> +_PyDictView_Intersect(PyObject* self, PyObject *other)
> +{
> + PyObject *result = PySet_New(self);
> + PyObject *tmp;
> + _Py_IDENTIFIER(intersection_update);
> +
> + if (result == NULL)
> + return NULL;
> +
> + tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_intersection_update, other, NULL);
> + if (tmp == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +static PyObject*
> +dictviews_or(PyObject* self, PyObject *other)
> +{
> + PyObject *result = PySet_New(self);
> + PyObject *tmp;
> + _Py_IDENTIFIER(update);
> +
> + if (result == NULL)
> + return NULL;
> +
> + tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_update, other, NULL);
> + if (tmp == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +static PyObject*
> +dictviews_xor(PyObject* self, PyObject *other)
> +{
> + PyObject *result = PySet_New(self);
> + PyObject *tmp;
> + _Py_IDENTIFIER(symmetric_difference_update);
> +
> + if (result == NULL)
> + return NULL;
> +
> + tmp = _PyObject_CallMethodIdObjArgs(result, &PyId_symmetric_difference_update, other, NULL);
> + if (tmp == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +static PyNumberMethods dictviews_as_number = {
> + 0, /*nb_add*/
> + (binaryfunc)dictviews_sub, /*nb_subtract*/
> + 0, /*nb_multiply*/
> + 0, /*nb_remainder*/
> + 0, /*nb_divmod*/
> + 0, /*nb_power*/
> + 0, /*nb_negative*/
> + 0, /*nb_positive*/
> + 0, /*nb_absolute*/
> + 0, /*nb_bool*/
> + 0, /*nb_invert*/
> + 0, /*nb_lshift*/
> + 0, /*nb_rshift*/
> + (binaryfunc)_PyDictView_Intersect, /*nb_and*/
> + (binaryfunc)dictviews_xor, /*nb_xor*/
> + (binaryfunc)dictviews_or, /*nb_or*/
> +};
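
dictviews_as_number wires the views' &, |, ^ and - operators into set-style operations by
building a real set from the view and delegating to the set's in-place update methods.
From C the same operations go through the abstract number protocol; a minimal sketch,
illustrative only and not part of the patch (common_keys is a hypothetical helper,
assuming two existing dicts d1 and d2):

    /* Illustrative sketch only -- not part of the patch. */
    static PyObject *common_keys(PyObject *d1, PyObject *d2)
    {
        PyObject *k1 = PyObject_CallMethod(d1, "keys", NULL);
        PyObject *k2 = PyObject_CallMethod(d2, "keys", NULL);
        PyObject *common = NULL;

        if (k1 != NULL && k2 != NULL)
            common = PyNumber_And(k1, k2);  /* routes through _PyDictView_Intersect */
        Py_XDECREF(k1);
        Py_XDECREF(k2);
        return common;                      /* a new set object, or NULL on error */
    }
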
> +
> +static PyObject*
> +dictviews_isdisjoint(PyObject *self, PyObject *other)
> +{
> + PyObject *it;
> + PyObject *item = NULL;
> +
> + if (self == other) {
> + if (dictview_len((_PyDictViewObject *)self) == 0)
> + Py_RETURN_TRUE;
> + else
> + Py_RETURN_FALSE;
> + }
> +
> + /* Iterate over the shorter object (only if other is a set,
> + * because PySequence_Contains may be expensive otherwise): */
> + if (PyAnySet_Check(other) || PyDictViewSet_Check(other)) {
> + Py_ssize_t len_self = dictview_len((_PyDictViewObject *)self);
> + Py_ssize_t len_other = PyObject_Size(other);
> + if (len_other == -1)
> + return NULL;
> +
> + if ((len_other > len_self)) {
> + PyObject *tmp = other;
> + other = self;
> + self = tmp;
> + }
> + }
> +
> + it = PyObject_GetIter(other);
> + if (it == NULL)
> + return NULL;
> +
> + while ((item = PyIter_Next(it)) != NULL) {
> + int contains = PySequence_Contains(self, item);
> + Py_DECREF(item);
> + if (contains == -1) {
> + Py_DECREF(it);
> + return NULL;
> + }
> +
> + if (contains) {
> + Py_DECREF(it);
> + Py_RETURN_FALSE;
> + }
> + }
> + Py_DECREF(it);
> + if (PyErr_Occurred())
> + return NULL; /* PyIter_Next raised an exception. */
> + Py_RETURN_TRUE;
> +}
> +
> +PyDoc_STRVAR(isdisjoint_doc,
> +"Return True if the view and the given iterable have a null intersection.");
> +
> +static PyMethodDef dictkeys_methods[] = {
> + {"isdisjoint", (PyCFunction)dictviews_isdisjoint, METH_O,
> + isdisjoint_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyTypeObject PyDictKeys_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_keys", /* tp_name */
> + sizeof(_PyDictViewObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictview_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)dictview_repr, /* tp_repr */
> + &dictviews_as_number, /* tp_as_number */
> + &dictkeys_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictview_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + dictview_richcompare, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + (getiterfunc)dictkeys_iter, /* tp_iter */
> + 0, /* tp_iternext */
> + dictkeys_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +dictkeys_new(PyObject *dict)
> +{
> + return _PyDictView_New(dict, &PyDictKeys_Type);
> +}
> +
> +/*** dict_items ***/
> +
> +static PyObject *
> +dictitems_iter(_PyDictViewObject *dv)
> +{
> + if (dv->dv_dict == NULL) {
> + Py_RETURN_NONE;
> + }
> + return dictiter_new(dv->dv_dict, &PyDictIterItem_Type);
> +}
> +
> +static int
> +dictitems_contains(_PyDictViewObject *dv, PyObject *obj)
> +{
> + int result;
> + PyObject *key, *value, *found;
> + if (dv->dv_dict == NULL)
> + return 0;
> + if (!PyTuple_Check(obj) || PyTuple_GET_SIZE(obj) != 2)
> + return 0;
> + key = PyTuple_GET_ITEM(obj, 0);
> + value = PyTuple_GET_ITEM(obj, 1);
> + found = PyDict_GetItemWithError((PyObject *)dv->dv_dict, key);
> + if (found == NULL) {
> + if (PyErr_Occurred())
> + return -1;
> + return 0;
> + }
> + Py_INCREF(found);
> + result = PyObject_RichCompareBool(value, found, Py_EQ);
> + Py_DECREF(found);
> + return result;
> +}
> +
> +static PySequenceMethods dictitems_as_sequence = {
> + (lenfunc)dictview_len, /* sq_length */
> + 0, /* sq_concat */
> + 0, /* sq_repeat */
> + 0, /* sq_item */
> + 0, /* sq_slice */
> + 0, /* sq_ass_item */
> + 0, /* sq_ass_slice */
> + (objobjproc)dictitems_contains, /* sq_contains */
> +};
> +
> +static PyMethodDef dictitems_methods[] = {
> + {"isdisjoint", (PyCFunction)dictviews_isdisjoint, METH_O,
> + isdisjoint_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyTypeObject PyDictItems_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_items", /* tp_name */
> + sizeof(_PyDictViewObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictview_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)dictview_repr, /* tp_repr */
> + &dictviews_as_number, /* tp_as_number */
> + &dictitems_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictview_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + dictview_richcompare, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + (getiterfunc)dictitems_iter, /* tp_iter */
> + 0, /* tp_iternext */
> + dictitems_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +dictitems_new(PyObject *dict)
> +{
> + return _PyDictView_New(dict, &PyDictItems_Type);
> +}
> +
> +/*** dict_values ***/
> +
> +static PyObject *
> +dictvalues_iter(_PyDictViewObject *dv)
> +{
> + if (dv->dv_dict == NULL) {
> + Py_RETURN_NONE;
> + }
> + return dictiter_new(dv->dv_dict, &PyDictIterValue_Type);
> +}
> +
> +static PySequenceMethods dictvalues_as_sequence = {
> + (lenfunc)dictview_len, /* sq_length */
> + 0, /* sq_concat */
> + 0, /* sq_repeat */
> + 0, /* sq_item */
> + 0, /* sq_slice */
> + 0, /* sq_ass_item */
> + 0, /* sq_ass_slice */
> + (objobjproc)0, /* sq_contains */
> +};
> +
> +static PyMethodDef dictvalues_methods[] = {
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyTypeObject PyDictValues_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "dict_values", /* tp_name */
> + sizeof(_PyDictViewObject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)dictview_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)dictview_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + &dictvalues_as_sequence, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)dictview_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + (getiterfunc)dictvalues_iter, /* tp_iter */
> + 0, /* tp_iternext */
> + dictvalues_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +dictvalues_new(PyObject *dict)
> +{
> + return _PyDictView_New(dict, &PyDictValues_Type);
> +}
> +
> +/* Returns NULL if it cannot allocate a new PyDictKeysObject,
> + but does not set an error */
> +PyDictKeysObject *
> +_PyDict_NewKeysForClass(void)
> +{
> + PyDictKeysObject *keys = new_keys_object(PyDict_MINSIZE);
> + if (keys == NULL)
> + PyErr_Clear();
> + else
> + keys->dk_lookup = lookdict_split;
> + return keys;
> +}
> +
> +#define CACHED_KEYS(tp) (((PyHeapTypeObject*)tp)->ht_cached_keys)
> +
> +PyObject *
> +PyObject_GenericGetDict(PyObject *obj, void *context)
> +{
> + PyObject *dict, **dictptr = _PyObject_GetDictPtr(obj);
> + if (dictptr == NULL) {
> + PyErr_SetString(PyExc_AttributeError,
> + "This object has no __dict__");
> + return NULL;
> + }
> + dict = *dictptr;
> + if (dict == NULL) {
> + PyTypeObject *tp = Py_TYPE(obj);
> + if ((tp->tp_flags & Py_TPFLAGS_HEAPTYPE) && CACHED_KEYS(tp)) {
> + DK_INCREF(CACHED_KEYS(tp));
> + *dictptr = dict = new_dict_with_shared_keys(CACHED_KEYS(tp));
> + }
> + else {
> + *dictptr = dict = PyDict_New();
> + }
> + }
> + Py_XINCREF(dict);
> + return dict;
> +}
> +
> +int
> +_PyObjectDict_SetItem(PyTypeObject *tp, PyObject **dictptr,
> + PyObject *key, PyObject *value)
> +{
> + PyObject *dict;
> + int res;
> + PyDictKeysObject *cached;
> +
> + assert(dictptr != NULL);
> + if ((tp->tp_flags & Py_TPFLAGS_HEAPTYPE) && (cached = CACHED_KEYS(tp))) {
> + assert(dictptr != NULL);
> + dict = *dictptr;
> + if (dict == NULL) {
> + DK_INCREF(cached);
> + dict = new_dict_with_shared_keys(cached);
> + if (dict == NULL)
> + return -1;
> + *dictptr = dict;
> + }
> + if (value == NULL) {
> + res = PyDict_DelItem(dict, key);
> + // Since a key-sharing dict doesn't allow deletions, PyDict_DelItem()
> + // always converts the dict to the combined form.
> + if ((cached = CACHED_KEYS(tp)) != NULL) {
> + CACHED_KEYS(tp) = NULL;
> + DK_DECREF(cached);
> + }
> + }
> + else {
> + int was_shared = (cached == ((PyDictObject *)dict)->ma_keys);
> + res = PyDict_SetItem(dict, key, value);
> + if (was_shared &&
> + (cached = CACHED_KEYS(tp)) != NULL &&
> + cached != ((PyDictObject *)dict)->ma_keys) {
> + /* PyDict_SetItem() may call dictresize and convert the split table
> + * into a combined table. In such a case, convert it back to a split
> + * table and update the type's shared keys only when this is
> + * the only dict sharing keys with the type.
> + *
> + * This is to allow using shared key in class like this:
> + *
> + * class C:
> + * def __init__(self):
> + * # one dict resize happens
> + * self.a, self.b, self.c = 1, 2, 3
> + * self.d, self.e, self.f = 4, 5, 6
> + * a = C()
> + */
> + if (cached->dk_refcnt == 1) {
> + CACHED_KEYS(tp) = make_keys_shared(dict);
> + }
> + else {
> + CACHED_KEYS(tp) = NULL;
> + }
> + DK_DECREF(cached);
> + if (CACHED_KEYS(tp) == NULL && PyErr_Occurred())
> + return -1;
> + }
> + }
> + } else {
> + dict = *dictptr;
> + if (dict == NULL) {
> + dict = PyDict_New();
> + if (dict == NULL)
> + return -1;
> + *dictptr = dict;
> + }
> + if (value == NULL) {
> + res = PyDict_DelItem(dict, key);
> + } else {
> + res = PyDict_SetItem(dict, key, value);
> + }
> + }
> + return res;
> +}
> +
> +void
> +_PyDictKeys_DecRef(PyDictKeysObject *keys)
> +{
> + DK_DECREF(keys);
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
> new file mode 100644
> index 00000000..2b6449c7
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
> @@ -0,0 +1,3114 @@
> +/* Memoryview object implementation */
> +
> +#include "Python.h"
> +#include "pystrhex.h"
> +#include <stddef.h>
> +
> +
> +/****************************************************************************/
> +/* ManagedBuffer Object */
> +/****************************************************************************/
> +
> +/*
> + ManagedBuffer Object:
> + ---------------------
> +
> + The purpose of this object is to facilitate the handling of chained
> + memoryviews that have the same underlying exporting object. PEP-3118
> + allows the underlying object to change while a view is exported. This
> + could lead to unexpected results when constructing a new memoryview
> + from an existing memoryview.
> +
> + Rather than repeatedly redirecting buffer requests to the original base
> + object, all chained memoryviews use a single buffer snapshot. This
> + snapshot is generated by the constructor _PyManagedBuffer_FromObject().
> +
> + Ownership rules:
> + ----------------
> +
> + The master buffer inside a managed buffer is filled in by the original
> + base object. shape, strides, suboffsets and format are read-only for
> + all consumers.
> +
> + A memoryview's buffer is a private copy of the exporter's buffer. shape,
> + strides and suboffsets belong to the memoryview and are thus writable.
> +
> + If a memoryview itself exports several buffers via memory_getbuf(), all
> + buffer copies share shape, strides and suboffsets. In this case, the
> + arrays are NOT writable.
> +
> + Reference count assumptions:
> + ----------------------------
> +
> + The 'obj' member of a Py_buffer must either be NULL or refer to the
> + exporting base object. In the Python codebase, all getbufferprocs
> + return a new reference to view.obj (example: bytes_buffer_getbuffer()).
> +
> + PyBuffer_Release() decrements view.obj (if non-NULL), so the
> + releasebufferprocs must NOT decrement view.obj.
> +*/
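
The block comment above is the key design note for this file: every memoryview chained
off another memoryview shares one _PyManagedBufferObject, so the exporter's getbuffer is
called exactly once per chain. A minimal embedding sketch of such a chain, illustrative
only and not part of the patch (assumes an initialized interpreter):

    /* Illustrative sketch only -- not part of the patch. */
    static void chained_views(void)
    {
        PyObject *base = PyByteArray_FromStringAndSize("abcdef", 6);
        PyObject *mv1 = NULL, *mv2 = NULL;

        if (base == NULL)
            return;
        mv1 = PyMemoryView_FromObject(base);    /* one buffer snapshot is taken here */
        if (mv1 != NULL)
            mv2 = PyMemoryView_FromObject(mv1); /* registered with the same managed buffer */
        Py_XDECREF(mv2);
        Py_XDECREF(mv1);
        Py_DECREF(base);
    }
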
> +
> +
> +#define CHECK_MBUF_RELEASED(mbuf) \
> + if (((_PyManagedBufferObject *)mbuf)->flags&_Py_MANAGED_BUFFER_RELEASED) { \
> + PyErr_SetString(PyExc_ValueError, \
> + "operation forbidden on released memoryview object"); \
> + return NULL; \
> + }
> +
> +
> +static _PyManagedBufferObject *
> +mbuf_alloc(void)
> +{
> + _PyManagedBufferObject *mbuf;
> +
> + mbuf = (_PyManagedBufferObject *)
> + PyObject_GC_New(_PyManagedBufferObject, &_PyManagedBuffer_Type);
> + if (mbuf == NULL)
> + return NULL;
> + mbuf->flags = 0;
> + mbuf->exports = 0;
> + mbuf->master.obj = NULL;
> + _PyObject_GC_TRACK(mbuf);
> +
> + return mbuf;
> +}
> +
> +static PyObject *
> +_PyManagedBuffer_FromObject(PyObject *base)
> +{
> + _PyManagedBufferObject *mbuf;
> +
> + mbuf = mbuf_alloc();
> + if (mbuf == NULL)
> + return NULL;
> +
> + if (PyObject_GetBuffer(base, &mbuf->master, PyBUF_FULL_RO) < 0) {
> + mbuf->master.obj = NULL;
> + Py_DECREF(mbuf);
> + return NULL;
> + }
> +
> + return (PyObject *)mbuf;
> +}
> +
> +static void
> +mbuf_release(_PyManagedBufferObject *self)
> +{
> + if (self->flags&_Py_MANAGED_BUFFER_RELEASED)
> + return;
> +
> + /* NOTE: at this point self->exports can still be > 0 if this function
> + is called from mbuf_clear() to break up a reference cycle. */
> + self->flags |= _Py_MANAGED_BUFFER_RELEASED;
> +
> + /* PyBuffer_Release() decrements master->obj and sets it to NULL. */
> + _PyObject_GC_UNTRACK(self);
> + PyBuffer_Release(&self->master);
> +}
> +
> +static void
> +mbuf_dealloc(_PyManagedBufferObject *self)
> +{
> + assert(self->exports == 0);
> + mbuf_release(self);
> + if (self->flags&_Py_MANAGED_BUFFER_FREE_FORMAT)
> + PyMem_Free(self->master.format);
> + PyObject_GC_Del(self);
> +}
> +
> +static int
> +mbuf_traverse(_PyManagedBufferObject *self, visitproc visit, void *arg)
> +{
> + Py_VISIT(self->master.obj);
> + return 0;
> +}
> +
> +static int
> +mbuf_clear(_PyManagedBufferObject *self)
> +{
> + assert(self->exports >= 0);
> + mbuf_release(self);
> + return 0;
> +}
> +
> +PyTypeObject _PyManagedBuffer_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "managedbuffer",
> + sizeof(_PyManagedBufferObject),
> + 0,
> + (destructor)mbuf_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)mbuf_traverse, /* tp_traverse */
> + (inquiry)mbuf_clear /* tp_clear */
> +};
> +
> +
> +/****************************************************************************/
> +/* MemoryView Object */
> +/****************************************************************************/
> +
> +/* In the process of breaking reference cycles mbuf_release() can be
> + called before memory_release(). */
> +#define BASE_INACCESSIBLE(mv) \
> + (((PyMemoryViewObject *)mv)->flags&_Py_MEMORYVIEW_RELEASED || \
> + ((PyMemoryViewObject *)mv)->mbuf->flags&_Py_MANAGED_BUFFER_RELEASED)
> +
> +#define CHECK_RELEASED(mv) \
> + if (BASE_INACCESSIBLE(mv)) { \
> + PyErr_SetString(PyExc_ValueError, \
> + "operation forbidden on released memoryview object"); \
> + return NULL; \
> + }
> +
> +#define CHECK_RELEASED_INT(mv) \
> + if (BASE_INACCESSIBLE(mv)) { \
> + PyErr_SetString(PyExc_ValueError, \
> + "operation forbidden on released memoryview object"); \
> + return -1; \
> + }
> +
> +#define CHECK_LIST_OR_TUPLE(v) \
> + if (!PyList_Check(v) && !PyTuple_Check(v)) { \
> + PyErr_SetString(PyExc_TypeError, \
> + #v " must be a list or a tuple"); \
> + return NULL; \
> + }
> +
> +#define VIEW_ADDR(mv) (&((PyMemoryViewObject *)mv)->view)
> +
> +/* Check for the presence of suboffsets in the first dimension. */
> +#define HAVE_PTR(suboffsets, dim) (suboffsets && suboffsets[dim] >= 0)
> +/* Adjust ptr if suboffsets are present. */
> +#define ADJUST_PTR(ptr, suboffsets, dim) \
> + (HAVE_PTR(suboffsets, dim) ? *((char**)ptr) + suboffsets[dim] : ptr)
> +
> +/* Memoryview buffer properties */
> +#define MV_C_CONTIGUOUS(flags) (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C))
> +#define MV_F_CONTIGUOUS(flags) \
> + (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_FORTRAN))
> +#define MV_ANY_CONTIGUOUS(flags) \
> + (flags&(_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN))
> +
> +/* Fast contiguity test. Caller must ensure suboffsets==NULL and ndim==1. */
> +#define MV_CONTIGUOUS_NDIM1(view) \
> + ((view)->shape[0] == 1 || (view)->strides[0] == (view)->itemsize)
> +
> +/* getbuffer() requests */
> +#define REQ_INDIRECT(flags) ((flags&PyBUF_INDIRECT) == PyBUF_INDIRECT)
> +#define REQ_C_CONTIGUOUS(flags) ((flags&PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS)
> +#define REQ_F_CONTIGUOUS(flags) ((flags&PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS)
> +#define REQ_ANY_CONTIGUOUS(flags) ((flags&PyBUF_ANY_CONTIGUOUS) == PyBUF_ANY_CONTIGUOUS)
> +#define REQ_STRIDES(flags) ((flags&PyBUF_STRIDES) == PyBUF_STRIDES)
> +#define REQ_SHAPE(flags) ((flags&PyBUF_ND) == PyBUF_ND)
> +#define REQ_WRITABLE(flags) (flags&PyBUF_WRITABLE)
> +#define REQ_FORMAT(flags) (flags&PyBUF_FORMAT)
> +
> +
> +PyDoc_STRVAR(memory_doc,
> +"memoryview(object)\n--\n\
> +\n\
> +Create a new memoryview object which references the given object.");
> +
> +
> +/**************************************************************************/
> +/* Copy memoryview buffers */
> +/**************************************************************************/
> +
> +/* The functions in this section take a source and a destination buffer
> + with the same logical structure: format, itemsize, ndim and shape
> + are identical, with ndim > 0.
> +
> + NOTE: All buffers are assumed to have PyBUF_FULL information, which
> + is the case for memoryviews! */
> +
> +
> +/* Assumptions: ndim >= 1. The macro tests for a corner case that should
> + perhaps be explicitly forbidden in the PEP. */
> +#define HAVE_SUBOFFSETS_IN_LAST_DIM(view) \
> + (view->suboffsets && view->suboffsets[dest->ndim-1] >= 0)
> +
> +static int
> +last_dim_is_contiguous(const Py_buffer *dest, const Py_buffer *src)
> +{
> + assert(dest->ndim > 0 && src->ndim > 0);
> + return (!HAVE_SUBOFFSETS_IN_LAST_DIM(dest) &&
> + !HAVE_SUBOFFSETS_IN_LAST_DIM(src) &&
> + dest->strides[dest->ndim-1] == dest->itemsize &&
> + src->strides[src->ndim-1] == src->itemsize);
> +}
> +
> +/* This is not a general function for determining format equivalence.
> + It is used in copy_single() and copy_buffer() to weed out non-matching
> + formats. Skipping the '@' character is specifically used in slice
> + assignments, where the lvalue is already known to have a single character
> + format. This is a performance hack that could be rewritten (if properly
> + benchmarked). */
> +static int
> +equiv_format(const Py_buffer *dest, const Py_buffer *src)
> +{
> + const char *dfmt, *sfmt;
> +
> + assert(dest->format && src->format);
> + dfmt = dest->format[0] == '@' ? dest->format+1 : dest->format;
> + sfmt = src->format[0] == '@' ? src->format+1 : src->format;
> +
> + if (strcmp(dfmt, sfmt) != 0 ||
> + dest->itemsize != src->itemsize) {
> + return 0;
> + }
> +
> + return 1;
> +}
> +
> +/* Two shapes are equivalent if they are either equal or identical up
> + to a zero element at the same position. For example, in NumPy arrays
> + the shapes [1, 0, 5] and [1, 0, 7] are equivalent. */
> +static int
> +equiv_shape(const Py_buffer *dest, const Py_buffer *src)
> +{
> + int i;
> +
> + if (dest->ndim != src->ndim)
> + return 0;
> +
> + for (i = 0; i < dest->ndim; i++) {
> + if (dest->shape[i] != src->shape[i])
> + return 0;
> + if (dest->shape[i] == 0)
> + break;
> + }
> +
> + return 1;
> +}
> +
> +/* Check that the logical structure of the destination and source buffers
> + is identical. */
> +static int
> +equiv_structure(const Py_buffer *dest, const Py_buffer *src)
> +{
> + if (!equiv_format(dest, src) ||
> + !equiv_shape(dest, src)) {
> + PyErr_SetString(PyExc_ValueError,
> + "memoryview assignment: lvalue and rvalue have different "
> + "structures");
> + return 0;
> + }
> +
> + return 1;
> +}
> +
> +/* Base case for recursive multi-dimensional copying. Contiguous arrays are
> + copied with very little overhead. Assumptions: ndim == 1, mem == NULL or
> + sizeof(mem) == shape[0] * itemsize. */
> +static void
> +copy_base(const Py_ssize_t *shape, Py_ssize_t itemsize,
> + char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
> + char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
> + char *mem)
> +{
> + if (mem == NULL) { /* contiguous */
> + Py_ssize_t size = shape[0] * itemsize;
> + if (dptr + size < sptr || sptr + size < dptr)
> + memcpy(dptr, sptr, size); /* no overlapping */
> + else
> + memmove(dptr, sptr, size);
> + }
> + else {
> + char *p;
> + Py_ssize_t i;
> + for (i=0, p=mem; i < shape[0]; p+=itemsize, sptr+=sstrides[0], i++) {
> + char *xsptr = ADJUST_PTR(sptr, ssuboffsets, 0);
> + memcpy(p, xsptr, itemsize);
> + }
> + for (i=0, p=mem; i < shape[0]; p+=itemsize, dptr+=dstrides[0], i++) {
> + char *xdptr = ADJUST_PTR(dptr, dsuboffsets, 0);
> + memcpy(xdptr, p, itemsize);
> + }
> + }
> +
> +}
> +
> +/* Recursively copy a source buffer to a destination buffer. The two buffers
> + have the same ndim, shape and itemsize. */
> +static void
> +copy_rec(const Py_ssize_t *shape, Py_ssize_t ndim, Py_ssize_t itemsize,
> + char *dptr, const Py_ssize_t *dstrides, const Py_ssize_t *dsuboffsets,
> + char *sptr, const Py_ssize_t *sstrides, const Py_ssize_t *ssuboffsets,
> + char *mem)
> +{
> + Py_ssize_t i;
> +
> + assert(ndim >= 1);
> +
> + if (ndim == 1) {
> + copy_base(shape, itemsize,
> + dptr, dstrides, dsuboffsets,
> + sptr, sstrides, ssuboffsets,
> + mem);
> + return;
> + }
> +
> + for (i = 0; i < shape[0]; dptr+=dstrides[0], sptr+=sstrides[0], i++) {
> + char *xdptr = ADJUST_PTR(dptr, dsuboffsets, 0);
> + char *xsptr = ADJUST_PTR(sptr, ssuboffsets, 0);
> +
> + copy_rec(shape+1, ndim-1, itemsize,
> + xdptr, dstrides+1, dsuboffsets ? dsuboffsets+1 : NULL,
> + xsptr, sstrides+1, ssuboffsets ? ssuboffsets+1 : NULL,
> + mem);
> + }
> +}
> +
> +/* Faster copying of one-dimensional arrays. */
> +static int
> +copy_single(Py_buffer *dest, Py_buffer *src)
> +{
> + char *mem = NULL;
> +
> + assert(dest->ndim == 1);
> +
> + if (!equiv_structure(dest, src))
> + return -1;
> +
> + if (!last_dim_is_contiguous(dest, src)) {
> + mem = PyMem_Malloc(dest->shape[0] * dest->itemsize);
> + if (mem == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + }
> +
> + copy_base(dest->shape, dest->itemsize,
> + dest->buf, dest->strides, dest->suboffsets,
> + src->buf, src->strides, src->suboffsets,
> + mem);
> +
> + if (mem)
> + PyMem_Free(mem);
> +
> + return 0;
> +}
> +
> +/* Recursively copy src to dest. Both buffers must have the same basic
> + structure. Copying is atomic, the function never fails with a partial
> + copy. */
> +static int
> +copy_buffer(Py_buffer *dest, Py_buffer *src)
> +{
> + char *mem = NULL;
> +
> + assert(dest->ndim > 0);
> +
> + if (!equiv_structure(dest, src))
> + return -1;
> +
> + if (!last_dim_is_contiguous(dest, src)) {
> + mem = PyMem_Malloc(dest->shape[dest->ndim-1] * dest->itemsize);
> + if (mem == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + }
> +
> + copy_rec(dest->shape, dest->ndim, dest->itemsize,
> + dest->buf, dest->strides, dest->suboffsets,
> + src->buf, src->strides, src->suboffsets,
> + mem);
> +
> + if (mem)
> + PyMem_Free(mem);
> +
> + return 0;
> +}
> +
> +/* Initialize strides for a C-contiguous array. */
> +static void
> +init_strides_from_shape(Py_buffer *view)
> +{
> + Py_ssize_t i;
> +
> + assert(view->ndim > 0);
> +
> + view->strides[view->ndim-1] = view->itemsize;
> + for (i = view->ndim-2; i >= 0; i--)
> + view->strides[i] = view->strides[i+1] * view->shape[i+1];
> +}
> +
> +/* Initialize strides for a Fortran-contiguous array. */
> +static void
> +init_fortran_strides_from_shape(Py_buffer *view)
> +{
> + Py_ssize_t i;
> +
> + assert(view->ndim > 0);
> +
> + view->strides[0] = view->itemsize;
> + for (i = 1; i < view->ndim; i++)
> + view->strides[i] = view->strides[i-1] * view->shape[i-1];
> +}
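
The two helpers above rebuild strides from the shape when the exporter did not provide
them: C order makes the last dimension contiguous, Fortran order the first. For shape
{2, 3, 4} with itemsize 8 that gives C strides {96, 32, 8} and Fortran strides
{8, 16, 48}. A standalone sketch of the same arithmetic, illustrative only and not part
of the patch:

    /* Illustrative sketch only -- not part of the patch. */
    #include <stdio.h>

    int main(void)
    {
        const long shape[3] = {2, 3, 4};
        const long itemsize = 8;
        long c[3], f[3];
        int i;

        c[2] = itemsize;                   /* C order: last dimension is contiguous */
        for (i = 1; i >= 0; i--)
            c[i] = c[i + 1] * shape[i + 1];

        f[0] = itemsize;                   /* Fortran order: first dimension is contiguous */
        for (i = 1; i < 3; i++)
            f[i] = f[i - 1] * shape[i - 1];

        printf("C strides:       %ld %ld %ld\n", c[0], c[1], c[2]);   /* 96 32 8 */
        printf("Fortran strides: %ld %ld %ld\n", f[0], f[1], f[2]);   /* 8 16 48 */
        return 0;
    }
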
> +
> +/* Copy src to a contiguous representation. order is one of 'C', 'F' (Fortran)
> + or 'A' (Any). Assumptions: src has PyBUF_FULL information, src->ndim >= 1,
> + len(mem) == src->len. */
> +static int
> +buffer_to_contiguous(char *mem, Py_buffer *src, char order)
> +{
> + Py_buffer dest;
> + Py_ssize_t *strides;
> + int ret;
> +
> + assert(src->ndim >= 1);
> + assert(src->shape != NULL);
> + assert(src->strides != NULL);
> +
> + strides = PyMem_Malloc(src->ndim * (sizeof *src->strides));
> + if (strides == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> +
> + /* initialize dest */
> + dest = *src;
> + dest.buf = mem;
> + /* shape is constant and shared: the logical representation of the
> + array is unaltered. */
> +
> + /* The physical representation determined by strides (and possibly
> + suboffsets) may change. */
> + dest.strides = strides;
> + if (order == 'C' || order == 'A') {
> + init_strides_from_shape(&dest);
> + }
> + else {
> + init_fortran_strides_from_shape(&dest);
> + }
> +
> + dest.suboffsets = NULL;
> +
> + ret = copy_buffer(&dest, src);
> +
> + PyMem_Free(strides);
> + return ret;
> +}
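
buffer_to_contiguous() above is the internal workhorse; the public C-API counterpart for
flattening an arbitrary exporter into caller-owned memory is PyBuffer_ToContiguous(). A
minimal sketch, illustrative only and not part of the patch (copy_c_contiguous is a
hypothetical helper, assuming 'obj' supports the buffer protocol):

    /* Illustrative sketch only -- not part of the patch. */
    static char *copy_c_contiguous(PyObject *obj, Py_ssize_t *len_out)
    {
        Py_buffer view;
        char *mem = NULL;

        if (PyObject_GetBuffer(obj, &view, PyBUF_FULL_RO) < 0)
            return NULL;
        mem = PyMem_Malloc(view.len);
        if (mem == NULL) {
            PyErr_NoMemory();
        }
        else if (PyBuffer_ToContiguous(mem, &view, view.len, 'C') < 0) {
            PyMem_Free(mem);
            mem = NULL;
        }
        else {
            *len_out = view.len;
        }
        PyBuffer_Release(&view);
        return mem;
    }
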
> +
> +
> +/****************************************************************************/
> +/* Constructors */
> +/****************************************************************************/
> +
> +/* Initialize values that are shared with the managed buffer. */
> +static void
> +init_shared_values(Py_buffer *dest, const Py_buffer *src)
> +{
> + dest->obj = src->obj;
> + dest->buf = src->buf;
> + dest->len = src->len;
> + dest->itemsize = src->itemsize;
> + dest->readonly = src->readonly;
> + dest->format = src->format ? src->format : "B";
> + dest->internal = src->internal;
> +}
> +
> +/* Copy shape and strides. Reconstruct missing values. */
> +static void
> +init_shape_strides(Py_buffer *dest, const Py_buffer *src)
> +{
> + Py_ssize_t i;
> +
> + if (src->ndim == 0) {
> + dest->shape = NULL;
> + dest->strides = NULL;
> + return;
> + }
> + if (src->ndim == 1) {
> + dest->shape[0] = src->shape ? src->shape[0] : src->len / src->itemsize;
> + dest->strides[0] = src->strides ? src->strides[0] : src->itemsize;
> + return;
> + }
> +
> + for (i = 0; i < src->ndim; i++)
> + dest->shape[i] = src->shape[i];
> + if (src->strides) {
> + for (i = 0; i < src->ndim; i++)
> + dest->strides[i] = src->strides[i];
> + }
> + else {
> + init_strides_from_shape(dest);
> + }
> +}
> +
> +static void
> +init_suboffsets(Py_buffer *dest, const Py_buffer *src)
> +{
> + Py_ssize_t i;
> +
> + if (src->suboffsets == NULL) {
> + dest->suboffsets = NULL;
> + return;
> + }
> + for (i = 0; i < src->ndim; i++)
> + dest->suboffsets[i] = src->suboffsets[i];
> +}
> +
> +/* len = product(shape) * itemsize */
> +static void
> +init_len(Py_buffer *view)
> +{
> + Py_ssize_t i, len;
> +
> + len = 1;
> + for (i = 0; i < view->ndim; i++)
> + len *= view->shape[i];
> + len *= view->itemsize;
> +
> + view->len = len;
> +}
> +
> +/* Initialize memoryview buffer properties. */
> +static void
> +init_flags(PyMemoryViewObject *mv)
> +{
> + const Py_buffer *view = &mv->view;
> + int flags = 0;
> +
> + switch (view->ndim) {
> + case 0:
> + flags |= (_Py_MEMORYVIEW_SCALAR|_Py_MEMORYVIEW_C|
> + _Py_MEMORYVIEW_FORTRAN);
> + break;
> + case 1:
> + if (MV_CONTIGUOUS_NDIM1(view))
> + flags |= (_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
> + break;
> + default:
> + if (PyBuffer_IsContiguous(view, 'C'))
> + flags |= _Py_MEMORYVIEW_C;
> + if (PyBuffer_IsContiguous(view, 'F'))
> + flags |= _Py_MEMORYVIEW_FORTRAN;
> + break;
> + }
> +
> + if (view->suboffsets) {
> + flags |= _Py_MEMORYVIEW_PIL;
> + flags &= ~(_Py_MEMORYVIEW_C|_Py_MEMORYVIEW_FORTRAN);
> + }
> +
> + mv->flags = flags;
> +}
> +
> +/* Allocate a new memoryview and perform basic initialization. New memoryviews
> + are exclusively created through the mbuf_add functions. */
> +static PyMemoryViewObject *
> +memory_alloc(int ndim)
> +{
> + PyMemoryViewObject *mv;
> +
> + mv = (PyMemoryViewObject *)
> + PyObject_GC_NewVar(PyMemoryViewObject, &PyMemoryView_Type, 3*ndim);
> + if (mv == NULL)
> + return NULL;
> +
> + mv->mbuf = NULL;
> + mv->hash = -1;
> + mv->flags = 0;
> + mv->exports = 0;
> + mv->view.ndim = ndim;
> + mv->view.shape = mv->ob_array;
> + mv->view.strides = mv->ob_array + ndim;
> + mv->view.suboffsets = mv->ob_array + 2 * ndim;
> + mv->weakreflist = NULL;
> +
> + _PyObject_GC_TRACK(mv);
> + return mv;
> +}
> +
> +/*
> + Return a new memoryview that is registered with mbuf. If src is NULL,
> + use mbuf->master as the underlying buffer. Otherwise, use src.
> +
> + The new memoryview has full buffer information: shape and strides
> + are always present, suboffsets as needed. Arrays are copied to
> + the memoryview's ob_array field.
> + */
> +static PyObject *
> +mbuf_add_view(_PyManagedBufferObject *mbuf, const Py_buffer *src)
> +{
> + PyMemoryViewObject *mv;
> + Py_buffer *dest;
> +
> + if (src == NULL)
> + src = &mbuf->master;
> +
> + if (src->ndim > PyBUF_MAX_NDIM) {
> + PyErr_SetString(PyExc_ValueError,
> + "memoryview: number of dimensions must not exceed "
> + Py_STRINGIFY(PyBUF_MAX_NDIM));
> + return NULL;
> + }
> +
> + mv = memory_alloc(src->ndim);
> + if (mv == NULL)
> + return NULL;
> +
> + dest = &mv->view;
> + init_shared_values(dest, src);
> + init_shape_strides(dest, src);
> + init_suboffsets(dest, src);
> + init_flags(mv);
> +
> + mv->mbuf = mbuf;
> + Py_INCREF(mbuf);
> + mbuf->exports++;
> +
> + return (PyObject *)mv;
> +}
> +
> +/* Register an incomplete view: shape, strides, suboffsets and flags still
> + need to be initialized. Use 'ndim' instead of src->ndim to determine the
> + size of the memoryview's ob_array.
> +
> + Assumption: ndim <= PyBUF_MAX_NDIM. */
> +static PyObject *
> +mbuf_add_incomplete_view(_PyManagedBufferObject *mbuf, const Py_buffer *src,
> + int ndim)
> +{
> + PyMemoryViewObject *mv;
> + Py_buffer *dest;
> +
> + if (src == NULL)
> + src = &mbuf->master;
> +
> + assert(ndim <= PyBUF_MAX_NDIM);
> +
> + mv = memory_alloc(ndim);
> + if (mv == NULL)
> + return NULL;
> +
> + dest = &mv->view;
> + init_shared_values(dest, src);
> +
> + mv->mbuf = mbuf;
> + Py_INCREF(mbuf);
> + mbuf->exports++;
> +
> + return (PyObject *)mv;
> +}
> +
> +/* Expose a raw memory area as a view of contiguous bytes. flags can be
> + PyBUF_READ or PyBUF_WRITE. view->format is set to "B" (unsigned bytes).
> + The memoryview has complete buffer information. */
> +PyObject *
> +PyMemoryView_FromMemory(char *mem, Py_ssize_t size, int flags)
> +{
> + _PyManagedBufferObject *mbuf;
> + PyObject *mv;
> + int readonly;
> +
> + assert(mem != NULL);
> + assert(flags == PyBUF_READ || flags == PyBUF_WRITE);
> +
> + mbuf = mbuf_alloc();
> + if (mbuf == NULL)
> + return NULL;
> +
> + readonly = (flags == PyBUF_WRITE) ? 0 : 1;
> + (void)PyBuffer_FillInfo(&mbuf->master, NULL, mem, size, readonly,
> + PyBUF_FULL_RO);
> +
> + mv = mbuf_add_view(mbuf, NULL);
> + Py_DECREF(mbuf);
> +
> + return mv;
> +}
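
For reference, a minimal C sketch of exposing a caller-owned byte buffer
through this entry point (assumes an initialized interpreter; the helper
name and buffer are placeholders). The managed buffer created here has no
exporter object, so the caller must keep the memory alive for the lifetime
of the view and of any views derived from it:

    #include <Python.h>

    /* Wrap a caller-owned buffer as a read-only 1-D 'B' memoryview. */
    static PyObject *
    wrap_raw_buffer(char *buf, Py_ssize_t size)
    {
        return PyMemoryView_FromMemory(buf, size, PyBUF_READ);
    }
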
> +
> +/* Create a memoryview from a given Py_buffer. For simple byte views,
> + PyMemoryView_FromMemory() should be used instead.
> + This function is the only entry point that can create a master buffer
> + without full information. Because of this fact init_shape_strides()
> + must be able to reconstruct missing values. */
> +PyObject *
> +PyMemoryView_FromBuffer(Py_buffer *info)
> +{
> + _PyManagedBufferObject *mbuf;
> + PyObject *mv;
> +
> + if (info->buf == NULL) {
> + PyErr_SetString(PyExc_ValueError,
> + "PyMemoryView_FromBuffer(): info->buf must not be NULL");
> + return NULL;
> + }
> +
> + mbuf = mbuf_alloc();
> + if (mbuf == NULL)
> + return NULL;
> +
> + /* info->obj is either NULL or a borrowed reference. This reference
> + should not be decremented in PyBuffer_Release(). */
> + mbuf->master = *info;
> + mbuf->master.obj = NULL;
> +
> + mv = mbuf_add_view(mbuf, NULL);
> + Py_DECREF(mbuf);
> +
> + return mv;
> +}
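
For comparison, a sketch of the PyBuffer_FillInfo() + PyMemoryView_FromBuffer()
path that extension code typically uses when it already holds a Py_buffer
(illustrative names only; assumes an initialized interpreter):

    #include <Python.h>

    static PyObject *
    view_from_static_data(void)
    {
        static char data[16];
        Py_buffer info;

        /* Describe a complete 1-D read-only byte buffer; obj stays NULL,
           so the resulting view does not own a reference to an exporter. */
        if (PyBuffer_FillInfo(&info, NULL, data, sizeof data,
                              /*readonly=*/1, PyBUF_FULL_RO) < 0)
            return NULL;

        return PyMemoryView_FromBuffer(&info);
    }
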
> +
> +/* Create a memoryview from an object that implements the buffer protocol.
> + If the object is a memoryview, the new memoryview must be registered
> + with the same managed buffer. Otherwise, a new managed buffer is created. */
> +PyObject *
> +PyMemoryView_FromObject(PyObject *v)
> +{
> + _PyManagedBufferObject *mbuf;
> +
> + if (PyMemoryView_Check(v)) {
> + PyMemoryViewObject *mv = (PyMemoryViewObject *)v;
> + CHECK_RELEASED(mv);
> + return mbuf_add_view(mv->mbuf, &mv->view);
> + }
> + else if (PyObject_CheckBuffer(v)) {
> + PyObject *ret;
> + mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(v);
> + if (mbuf == NULL)
> + return NULL;
> + ret = mbuf_add_view(mbuf, NULL);
> + Py_DECREF(mbuf);
> + return ret;
> + }
> +
> + PyErr_Format(PyExc_TypeError,
> + "memoryview: a bytes-like object is required, not '%.200s'",
> + Py_TYPE(v)->tp_name);
> + return NULL;
> +}
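
A short sketch of the generic constructor over an arbitrary exporter (here
a bytes object; the helper name is hypothetical). The new managed buffer
keeps the exporter alive, so the local reference can be dropped:

    #include <Python.h>

    static PyObject *
    view_over_bytes(void)
    {
        PyObject *b = PyBytes_FromStringAndSize("UEFI", 4);
        PyObject *mv;

        if (b == NULL)
            return NULL;
        mv = PyMemoryView_FromObject(b);  /* new managed buffer over b */
        Py_DECREF(b);                     /* the view holds b via its buffer */
        return mv;
    }
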
> +
> +/* Copy the format string from a base object that might vanish. */
> +static int
> +mbuf_copy_format(_PyManagedBufferObject *mbuf, const char *fmt)
> +{
> + if (fmt != NULL) {
> + char *cp = PyMem_Malloc(strlen(fmt)+1);
> + if (cp == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + mbuf->master.format = strcpy(cp, fmt);
> + mbuf->flags |= _Py_MANAGED_BUFFER_FREE_FORMAT;
> + }
> +
> + return 0;
> +}
> +
> +/*
> + Return a memoryview that is based on a contiguous copy of src.
> + Assumptions: src has PyBUF_FULL_RO information, src->ndim > 0.
> +
> + Ownership rules:
> + 1) As usual, the returned memoryview has a private copy
> + of src->shape, src->strides and src->suboffsets.
> + 2) src->format is copied to the master buffer and released
> + in mbuf_dealloc(). The releasebufferproc of the bytes
> + object is NULL, so it does not matter that mbuf_release()
> + passes the altered format pointer to PyBuffer_Release().
> +*/
> +static PyObject *
> +memory_from_contiguous_copy(Py_buffer *src, char order)
> +{
> + _PyManagedBufferObject *mbuf;
> + PyMemoryViewObject *mv;
> + PyObject *bytes;
> + Py_buffer *dest;
> + int i;
> +
> + assert(src->ndim > 0);
> + assert(src->shape != NULL);
> +
> + bytes = PyBytes_FromStringAndSize(NULL, src->len);
> + if (bytes == NULL)
> + return NULL;
> +
> + mbuf = (_PyManagedBufferObject *)_PyManagedBuffer_FromObject(bytes);
> + Py_DECREF(bytes);
> + if (mbuf == NULL)
> + return NULL;
> +
> + if (mbuf_copy_format(mbuf, src->format) < 0) {
> + Py_DECREF(mbuf);
> + return NULL;
> + }
> +
> + mv = (PyMemoryViewObject *)mbuf_add_incomplete_view(mbuf, NULL, src->ndim);
> + Py_DECREF(mbuf);
> + if (mv == NULL)
> + return NULL;
> +
> + dest = &mv->view;
> +
> + /* shared values are initialized correctly except for itemsize */
> + dest->itemsize = src->itemsize;
> +
> + /* shape and strides */
> + for (i = 0; i < src->ndim; i++) {
> + dest->shape[i] = src->shape[i];
> + }
> + if (order == 'C' || order == 'A') {
> + init_strides_from_shape(dest);
> + }
> + else {
> + init_fortran_strides_from_shape(dest);
> + }
> + /* suboffsets */
> + dest->suboffsets = NULL;
> +
> + /* flags */
> + init_flags(mv);
> +
> + if (copy_buffer(dest, src) < 0) {
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + return (PyObject *)mv;
> +}
> +
> +/*
> + Return a new memoryview object based on a contiguous exporter with
> + buffertype={PyBUF_READ, PyBUF_WRITE} and order={'C', 'F'ortran, or 'A'ny}.
> + The logical structure of the input and output buffers is the same
> + (i.e. tolist(input) == tolist(output)), but the physical layout in
> + memory can be explicitly chosen.
> +
> + As usual, if buffertype=PyBUF_WRITE, the exporter's buffer must be writable,
> + otherwise it may be writable or read-only.
> +
> + If the exporter is already contiguous with the desired target order,
> + the memoryview will be directly based on the exporter.
> +
> + Otherwise, if the buffertype is PyBUF_READ, the memoryview will be
> + based on a new bytes object. If order={'C', 'A'ny}, use 'C' order,
> + 'F'ortran order otherwise.
> +*/
> +PyObject *
> +PyMemoryView_GetContiguous(PyObject *obj, int buffertype, char order)
> +{
> + PyMemoryViewObject *mv;
> + PyObject *ret;
> + Py_buffer *view;
> +
> + assert(buffertype == PyBUF_READ || buffertype == PyBUF_WRITE);
> + assert(order == 'C' || order == 'F' || order == 'A');
> +
> + mv = (PyMemoryViewObject *)PyMemoryView_FromObject(obj);
> + if (mv == NULL)
> + return NULL;
> +
> + view = &mv->view;
> + if (buffertype == PyBUF_WRITE && view->readonly) {
> + PyErr_SetString(PyExc_BufferError,
> + "underlying buffer is not writable");
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + if (PyBuffer_IsContiguous(view, order))
> + return (PyObject *)mv;
> +
> + if (buffertype == PyBUF_WRITE) {
> + PyErr_SetString(PyExc_BufferError,
> + "writable contiguous buffer requested "
> + "for a non-contiguous object.");
> + Py_DECREF(mv);
> + return NULL;
> + }
> +
> + ret = memory_from_contiguous_copy(view, order);
> + Py_DECREF(mv);
> + return ret;
> +}
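
Usage sketch (hypothetical caller): request a read-only C-contiguous view
of any exporter; per the logic above, a copy is made only when the exporter
is not already contiguous in the requested order:

    #include <Python.h>

    static PyObject *
    as_c_contiguous(PyObject *exporter)
    {
        return PyMemoryView_GetContiguous(exporter, PyBUF_READ, 'C');
    }
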
> +
> +
> +static PyObject *
> +memory_new(PyTypeObject *subtype, PyObject *args, PyObject *kwds)
> +{
> + PyObject *obj;
> + static char *kwlist[] = {"object", NULL};
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O:memoryview", kwlist,
> + &obj)) {
> + return NULL;
> + }
> +
> + return PyMemoryView_FromObject(obj);
> +}
> +
> +
> +/****************************************************************************/
> +/* Previously in abstract.c */
> +/****************************************************************************/
> +
> +typedef struct {
> + Py_buffer view;
> + Py_ssize_t array[1];
> +} Py_buffer_full;
> +
> +int
> +PyBuffer_ToContiguous(void *buf, Py_buffer *src, Py_ssize_t len, char order)
> +{
> + Py_buffer_full *fb = NULL;
> + int ret;
> +
> + assert(order == 'C' || order == 'F' || order == 'A');
> +
> + if (len != src->len) {
> + PyErr_SetString(PyExc_ValueError,
> + "PyBuffer_ToContiguous: len != view->len");
> + return -1;
> + }
> +
> + if (PyBuffer_IsContiguous(src, order)) {
> + memcpy((char *)buf, src->buf, len);
> + return 0;
> + }
> +
> + /* buffer_to_contiguous() assumes PyBUF_FULL */
> + fb = PyMem_Malloc(sizeof *fb + 3 * src->ndim * (sizeof *fb->array));
> + if (fb == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + fb->view.ndim = src->ndim;
> + fb->view.shape = fb->array;
> + fb->view.strides = fb->array + src->ndim;
> + fb->view.suboffsets = fb->array + 2 * src->ndim;
> +
> + init_shared_values(&fb->view, src);
> + init_shape_strides(&fb->view, src);
> + init_suboffsets(&fb->view, src);
> +
> + src = &fb->view;
> +
> + ret = buffer_to_contiguous(buf, src, order);
> + PyMem_Free(fb);
> + return ret;
> +}
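
Illustrative caller for the helper above (names are placeholders): flatten
a possibly strided buffer into a malloc'ed C-order block of src->len bytes,
with minimal error handling:

    #include <Python.h>
    #include <stdlib.h>

    static void *
    copy_to_c_order(Py_buffer *src)
    {
        void *dst = malloc(src->len);

        if (dst == NULL)
            return NULL;
        if (PyBuffer_ToContiguous(dst, src, src->len, 'C') < 0) {
            free(dst);
            return NULL;   /* a Python exception is set */
        }
        return dst;
    }
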
> +
> +
> +/****************************************************************************/
> +/* Release/GC management */
> +/****************************************************************************/
> +
> +/* Inform the managed buffer that this particular memoryview will not access
> + the underlying buffer again. If no other memoryviews are registered with
> + the managed buffer, the underlying buffer is released instantly and
> + marked as inaccessible for both the memoryview and the managed buffer.
> +
> + This function fails if the memoryview itself has exported buffers. */
> +static int
> +_memory_release(PyMemoryViewObject *self)
> +{
> + if (self->flags & _Py_MEMORYVIEW_RELEASED)
> + return 0;
> +
> + if (self->exports == 0) {
> + self->flags |= _Py_MEMORYVIEW_RELEASED;
> + assert(self->mbuf->exports > 0);
> + if (--self->mbuf->exports == 0)
> + mbuf_release(self->mbuf);
> + return 0;
> + }
> + if (self->exports > 0) {
> + PyErr_Format(PyExc_BufferError,
> + "memoryview has %zd exported buffer%s", self->exports,
> + self->exports==1 ? "" : "s");
> + return -1;
> + }
> +
> + Py_FatalError("_memory_release(): negative export count");
> + return -1;
> +}
> +
> +static PyObject *
> +memory_release(PyMemoryViewObject *self, PyObject *noargs)
> +{
> + if (_memory_release(self) < 0)
> + return NULL;
> + Py_RETURN_NONE;
> +}
> +
> +static void
> +memory_dealloc(PyMemoryViewObject *self)
> +{
> + assert(self->exports == 0);
> + _PyObject_GC_UNTRACK(self);
> + (void)_memory_release(self);
> + Py_CLEAR(self->mbuf);
> + if (self->weakreflist != NULL)
> + PyObject_ClearWeakRefs((PyObject *) self);
> + PyObject_GC_Del(self);
> +}
> +
> +static int
> +memory_traverse(PyMemoryViewObject *self, visitproc visit, void *arg)
> +{
> + Py_VISIT(self->mbuf);
> + return 0;
> +}
> +
> +static int
> +memory_clear(PyMemoryViewObject *self)
> +{
> + (void)_memory_release(self);
> + Py_CLEAR(self->mbuf);
> + return 0;
> +}
> +
> +static PyObject *
> +memory_enter(PyObject *self, PyObject *args)
> +{
> + CHECK_RELEASED(self);
> + Py_INCREF(self);
> + return self;
> +}
> +
> +static PyObject *
> +memory_exit(PyObject *self, PyObject *args)
> +{
> + return memory_release((PyMemoryViewObject *)self, NULL);
> +}
> +
> +
> +/****************************************************************************/
> +/* Casting format and shape */
> +/****************************************************************************/
> +
> +#define IS_BYTE_FORMAT(f) (f == 'b' || f == 'B' || f == 'c')
> +
> +static Py_ssize_t
> +get_native_fmtchar(char *result, const char *fmt)
> +{
> + Py_ssize_t size = -1;
> +
> + if (fmt[0] == '@') fmt++;
> +
> + switch (fmt[0]) {
> + case 'c': case 'b': case 'B': size = sizeof(char); break;
> + case 'h': case 'H': size = sizeof(short); break;
> + case 'i': case 'I': size = sizeof(int); break;
> + case 'l': case 'L': size = sizeof(long); break;
> + case 'q': case 'Q': size = sizeof(long long); break;
> + case 'n': case 'N': size = sizeof(Py_ssize_t); break;
> + case 'f': size = sizeof(float); break;
> + case 'd': size = sizeof(double); break;
> + case '?': size = sizeof(_Bool); break;
> + case 'P': size = sizeof(void *); break;
> + }
> +
> + if (size > 0 && fmt[1] == '\0') {
> + *result = fmt[0];
> + return size;
> + }
> +
> + return -1;
> +}
> +
> +static const char *
> +get_native_fmtstr(const char *fmt)
> +{
> + int at = 0;
> +
> + if (fmt[0] == '@') {
> + at = 1;
> + fmt++;
> + }
> + if (fmt[0] == '\0' || fmt[1] != '\0') {
> + return NULL;
> + }
> +
> +#define RETURN(s) do { return at ? "@" s : s; } while (0)
> +
> + switch (fmt[0]) {
> + case 'c': RETURN("c");
> + case 'b': RETURN("b");
> + case 'B': RETURN("B");
> + case 'h': RETURN("h");
> + case 'H': RETURN("H");
> + case 'i': RETURN("i");
> + case 'I': RETURN("I");
> + case 'l': RETURN("l");
> + case 'L': RETURN("L");
> + case 'q': RETURN("q");
> + case 'Q': RETURN("Q");
> + case 'n': RETURN("n");
> + case 'N': RETURN("N");
> + case 'f': RETURN("f");
> + case 'd': RETURN("d");
> + case '?': RETURN("?");
> + case 'P': RETURN("P");
> + }
> +
> + return NULL;
> +}
> +
> +
> +/* Cast a memoryview's data type to 'format'. The input array must be
> + C-contiguous. At least one of input-format, output-format must have
> + byte size. The output array is 1-D, with the same byte length as the
> + input array. Thus, view->len must be a multiple of the new itemsize. */
> +static int
> +cast_to_1D(PyMemoryViewObject *mv, PyObject *format)
> +{
> + Py_buffer *view = &mv->view;
> + PyObject *asciifmt;
> + char srcchar, destchar;
> + Py_ssize_t itemsize;
> + int ret = -1;
> +
> + assert(view->ndim >= 1);
> + assert(Py_SIZE(mv) == 3*view->ndim);
> + assert(view->shape == mv->ob_array);
> + assert(view->strides == mv->ob_array + view->ndim);
> + assert(view->suboffsets == mv->ob_array + 2*view->ndim);
> +
> + asciifmt = PyUnicode_AsASCIIString(format);
> + if (asciifmt == NULL)
> + return ret;
> +
> + itemsize = get_native_fmtchar(&destchar, PyBytes_AS_STRING(asciifmt));
> + if (itemsize < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "memoryview: destination format must be a native single "
> + "character format prefixed with an optional '@'");
> + goto out;
> + }
> +
> + if ((get_native_fmtchar(&srcchar, view->format) < 0 ||
> + !IS_BYTE_FORMAT(srcchar)) && !IS_BYTE_FORMAT(destchar)) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: cannot cast between two non-byte formats");
> + goto out;
> + }
> + if (view->len % itemsize) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: length is not a multiple of itemsize");
> + goto out;
> + }
> +
> + view->format = (char *)get_native_fmtstr(PyBytes_AS_STRING(asciifmt));
> + if (view->format == NULL) {
> + /* NOT_REACHED: get_native_fmtchar() already validates the format. */
> + PyErr_SetString(PyExc_RuntimeError,
> + "memoryview: internal error");
> + goto out;
> + }
> + view->itemsize = itemsize;
> +
> + view->ndim = 1;
> + view->shape[0] = view->len / view->itemsize;
> + view->strides[0] = view->itemsize;
> + view->suboffsets = NULL;
> +
> + init_flags(mv);
> +
> + ret = 0;
> +
> +out:
> + Py_DECREF(asciifmt);
> + return ret;
> +}
> +
> +/* The memoryview must have space for 3*len(seq) elements. */
> +static Py_ssize_t
> +copy_shape(Py_ssize_t *shape, const PyObject *seq, Py_ssize_t ndim,
> + Py_ssize_t itemsize)
> +{
> + Py_ssize_t x, i;
> + Py_ssize_t len = itemsize;
> +
> + for (i = 0; i < ndim; i++) {
> + PyObject *tmp = PySequence_Fast_GET_ITEM(seq, i);
> + if (!PyLong_Check(tmp)) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview.cast(): elements of shape must be integers");
> + return -1;
> + }
> + x = PyLong_AsSsize_t(tmp);
> + if (x == -1 && PyErr_Occurred()) {
> + return -1;
> + }
> + if (x <= 0) {
> + /* In general elements of shape may be 0, but not for casting. */
> + PyErr_Format(PyExc_ValueError,
> + "memoryview.cast(): elements of shape must be integers > 0");
> + return -1;
> + }
> + if (x > PY_SSIZE_T_MAX / len) {
> + PyErr_Format(PyExc_ValueError,
> + "memoryview.cast(): product(shape) > SSIZE_MAX");
> + return -1;
> + }
> + len *= x;
> + shape[i] = x;
> + }
> +
> + return len;
> +}
> +
> +/* Cast a 1-D array to a new shape. The result array will be C-contiguous.
> + If the result array does not have exactly the same byte length as the
> + input array, raise ValueError. */
> +static int
> +cast_to_ND(PyMemoryViewObject *mv, const PyObject *shape, int ndim)
> +{
> + Py_buffer *view = &mv->view;
> + Py_ssize_t len;
> +
> + assert(view->ndim == 1); /* ndim from cast_to_1D() */
> + assert(Py_SIZE(mv) == 3*(ndim==0?1:ndim)); /* ndim of result array */
> + assert(view->shape == mv->ob_array);
> + assert(view->strides == mv->ob_array + (ndim==0?1:ndim));
> + assert(view->suboffsets == NULL);
> +
> + view->ndim = ndim;
> + if (view->ndim == 0) {
> + view->shape = NULL;
> + view->strides = NULL;
> + len = view->itemsize;
> + }
> + else {
> + len = copy_shape(view->shape, shape, ndim, view->itemsize);
> + if (len < 0)
> + return -1;
> + init_strides_from_shape(view);
> + }
> +
> + if (view->len != len) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: product(shape) * itemsize != buffer size");
> + return -1;
> + }
> +
> + init_flags(mv);
> +
> + return 0;
> +}
> +
> +static int
> +zero_in_shape(PyMemoryViewObject *mv)
> +{
> + Py_buffer *view = &mv->view;
> + Py_ssize_t i;
> +
> + for (i = 0; i < view->ndim; i++)
> + if (view->shape[i] == 0)
> + return 1;
> +
> + return 0;
> +}
> +
> +/*
> + Cast a copy of 'self' to a different view. The input view must
> + be C-contiguous. The function always casts the input view to a
> + 1-D output according to 'format'. At least one of input-format,
> + output-format must have byte size.
> +
> + If 'shape' is given, the 1-D view from the previous step will
> + be cast to a C-contiguous view with new shape and strides.
> +
> + All casts must result in views that will have the exact byte
> + size of the original input. Otherwise, an error is raised.
> +*/
> +static PyObject *
> +memory_cast(PyMemoryViewObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"format", "shape", NULL};
> + PyMemoryViewObject *mv = NULL;
> + PyObject *shape = NULL;
> + PyObject *format;
> + Py_ssize_t ndim = 1;
> +
> + CHECK_RELEASED(self);
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O", kwlist,
> + &format, &shape)) {
> + return NULL;
> + }
> + if (!PyUnicode_Check(format)) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: format argument must be a string");
> + return NULL;
> + }
> + if (!MV_C_CONTIGUOUS(self->flags)) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: casts are restricted to C-contiguous views");
> + return NULL;
> + }
> + if ((shape || self->view.ndim != 1) && zero_in_shape(self)) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: cannot cast view with zeros in shape or strides");
> + return NULL;
> + }
> + if (shape) {
> + CHECK_LIST_OR_TUPLE(shape)
> + ndim = PySequence_Fast_GET_SIZE(shape);
> + if (ndim > PyBUF_MAX_NDIM) {
> + PyErr_SetString(PyExc_ValueError,
> + "memoryview: number of dimensions must not exceed "
> + Py_STRINGIFY(PyBUF_MAX_NDIM));
> + return NULL;
> + }
> + if (self->view.ndim != 1 && ndim != 1) {
> + PyErr_SetString(PyExc_TypeError,
> + "memoryview: cast must be 1D -> ND or ND -> 1D");
> + return NULL;
> + }
> + }
> +
> + mv = (PyMemoryViewObject *)
> + mbuf_add_incomplete_view(self->mbuf, &self->view, ndim==0 ? 1 : (int)ndim);
> + if (mv == NULL)
> + return NULL;
> +
> + if (cast_to_1D(mv, format) < 0)
> + goto error;
> + if (shape && cast_to_ND(mv, shape, (int)ndim) < 0)
> + goto error;
> +
> + return (PyObject *)mv;
> +
> +error:
> + Py_DECREF(mv);
> + return NULL;
> +}
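
cast() is only exposed at the Python level, so C code would reach it through
the method protocol. A minimal sketch (helper name is hypothetical; assumes
mv is a C-contiguous view with a byte format such as 'B' and a length that
is a multiple of 2):

    #include <Python.h>

    static PyObject *
    cast_to_unsigned_short(PyObject *mv)
    {
        /* Equivalent to mv.cast('H') in Python. */
        return PyObject_CallMethod(mv, "cast", "s", "H");
    }
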
> +
> +
> +/**************************************************************************/
> +/* getbuffer */
> +/**************************************************************************/
> +
> +static int
> +memory_getbuf(PyMemoryViewObject *self, Py_buffer *view, int flags)
> +{
> + Py_buffer *base = &self->view;
> + int baseflags = self->flags;
> +
> + CHECK_RELEASED_INT(self);
> +
> + /* start with complete information */
> + *view = *base;
> + view->obj = NULL;
> +
> + if (REQ_WRITABLE(flags) && base->readonly) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer is not writable");
> + return -1;
> + }
> + if (!REQ_FORMAT(flags)) {
> + /* NULL indicates that the buffer's data type has been cast to 'B'.
> + view->itemsize is the _previous_ itemsize. If shape is present,
> + the equality product(shape) * itemsize = len still holds at this
> + point. The equality calcsize(format) = itemsize does _not_ hold
> + from here on! */
> + view->format = NULL;
> + }
> +
> + if (REQ_C_CONTIGUOUS(flags) && !MV_C_CONTIGUOUS(baseflags)) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer is not C-contiguous");
> + return -1;
> + }
> + if (REQ_F_CONTIGUOUS(flags) && !MV_F_CONTIGUOUS(baseflags)) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer is not Fortran contiguous");
> + return -1;
> + }
> + if (REQ_ANY_CONTIGUOUS(flags) && !MV_ANY_CONTIGUOUS(baseflags)) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer is not contiguous");
> + return -1;
> + }
> + if (!REQ_INDIRECT(flags) && (baseflags & _Py_MEMORYVIEW_PIL)) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer requires suboffsets");
> + return -1;
> + }
> + if (!REQ_STRIDES(flags)) {
> + if (!MV_C_CONTIGUOUS(baseflags)) {
> + PyErr_SetString(PyExc_BufferError,
> + "memoryview: underlying buffer is not C-contiguous");
> + return -1;
> + }
> + view->strides = NULL;
> + }
> + if (!REQ_SHAPE(flags)) {
> + /* PyBUF_SIMPLE or PyBUF_WRITABLE: at this point buf is C-contiguous,
> + so base->buf = ndbuf->data. */
> + if (view->format != NULL) {
> + /* PyBUF_SIMPLE|PyBUF_FORMAT and PyBUF_WRITABLE|PyBUF_FORMAT do
> + not make sense. */
> + PyErr_Format(PyExc_BufferError,
> + "memoryview: cannot cast to unsigned bytes if the format flag "
> + "is present");
> + return -1;
> + }
> + /* product(shape) * itemsize = len and calcsize(format) = itemsize
> + do _not_ hold from here on! */
> + view->ndim = 1;
> + view->shape = NULL;
> + }
> +
> +
> + view->obj = (PyObject *)self;
> + Py_INCREF(view->obj);
> + self->exports++;
> +
> + return 0;
> +}
> +
> +static void
> +memory_releasebuf(PyMemoryViewObject *self, Py_buffer *view)
> +{
> + self->exports--;
> + return;
> + /* PyBuffer_Release() decrements view->obj after this function returns. */
> +}
> +
> +/* Buffer methods */
> +static PyBufferProcs memory_as_buffer = {
> + (getbufferproc)memory_getbuf, /* bf_getbuffer */
> + (releasebufferproc)memory_releasebuf, /* bf_releasebuffer */
> +};
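
Consumer-side view of the checks in memory_getbuf(): a sketch of a buffer
request against an existing memoryview, asking for a writable, C-contiguous
buffer with format information (helper name is illustrative):

    #include <Python.h>

    static int
    use_writable_buffer(PyObject *mv)
    {
        Py_buffer view;

        if (PyObject_GetBuffer(mv, &view,
                PyBUF_WRITABLE | PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) < 0)
            return -1;            /* BufferError set by memory_getbuf() */

        /* ... read/write view.buf, view.len bytes ... */

        PyBuffer_Release(&view);  /* ends up in memory_releasebuf() */
        return 0;
    }
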
> +
> +
> +/****************************************************************************/
> +/* Optimized pack/unpack for all native format specifiers */
> +/****************************************************************************/
> +
> +/*
> + Fix exceptions:
> + 1) Include format string in the error message.
> + 2) OverflowError -> ValueError.
> + 3) The error message from PyNumber_Index() is not ideal.
> +*/
> +static int
> +type_error_int(const char *fmt)
> +{
> + PyErr_Format(PyExc_TypeError,
> + "memoryview: invalid type for format '%s'", fmt);
> + return -1;
> +}
> +
> +static int
> +value_error_int(const char *fmt)
> +{
> + PyErr_Format(PyExc_ValueError,
> + "memoryview: invalid value for format '%s'", fmt);
> + return -1;
> +}
> +
> +static int
> +fix_error_int(const char *fmt)
> +{
> + assert(PyErr_Occurred());
> + if (PyErr_ExceptionMatches(PyExc_TypeError)) {
> + PyErr_Clear();
> + return type_error_int(fmt);
> + }
> + else if (PyErr_ExceptionMatches(PyExc_OverflowError) ||
> + PyErr_ExceptionMatches(PyExc_ValueError)) {
> + PyErr_Clear();
> + return value_error_int(fmt);
> + }
> +
> + return -1;
> +}
> +
> +/* Accept integer objects or objects with an __index__() method. */
> +static long
> +pylong_as_ld(PyObject *item)
> +{
> + PyObject *tmp;
> + long ld;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return -1;
> +
> + ld = PyLong_AsLong(tmp);
> + Py_DECREF(tmp);
> + return ld;
> +}
> +
> +static unsigned long
> +pylong_as_lu(PyObject *item)
> +{
> + PyObject *tmp;
> + unsigned long lu;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return (unsigned long)-1;
> +
> + lu = PyLong_AsUnsignedLong(tmp);
> + Py_DECREF(tmp);
> + return lu;
> +}
> +
> +static long long
> +pylong_as_lld(PyObject *item)
> +{
> + PyObject *tmp;
> + long long lld;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return -1;
> +
> + lld = PyLong_AsLongLong(tmp);
> + Py_DECREF(tmp);
> + return lld;
> +}
> +
> +static unsigned long long
> +pylong_as_llu(PyObject *item)
> +{
> + PyObject *tmp;
> + unsigned long long llu;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return (unsigned long long)-1;
> +
> + llu = PyLong_AsUnsignedLongLong(tmp);
> + Py_DECREF(tmp);
> + return llu;
> +}
> +
> +static Py_ssize_t
> +pylong_as_zd(PyObject *item)
> +{
> + PyObject *tmp;
> + Py_ssize_t zd;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return -1;
> +
> + zd = PyLong_AsSsize_t(tmp);
> + Py_DECREF(tmp);
> + return zd;
> +}
> +
> +static size_t
> +pylong_as_zu(PyObject *item)
> +{
> + PyObject *tmp;
> + size_t zu;
> +
> + tmp = PyNumber_Index(item);
> + if (tmp == NULL)
> + return (size_t)-1;
> +
> + zu = PyLong_AsSize_t(tmp);
> + Py_DECREF(tmp);
> + return zu;
> +}
> +
> +/* Timings with the ndarray from _testbuffer.c indicate that using the
> + struct module is around 15x slower than the two functions below. */
> +
> +#define UNPACK_SINGLE(dest, ptr, type) \
> + do { \
> + type x; \
> + memcpy((char *)&x, ptr, sizeof x); \
> + dest = x; \
> + } while (0)
> +
> +/* Unpack a single item. 'fmt' can be any native format character in struct
> + module syntax. This function is very sensitive to small changes. With this
> + layout gcc automatically generates a fast jump table. */
> +static PyObject *
> +unpack_single(const char *ptr, const char *fmt)
> +{
> + unsigned long long llu;
> + unsigned long lu;
> + size_t zu;
> + long long lld;
> + long ld;
> + Py_ssize_t zd;
> + double d;
> + unsigned char uc;
> + void *p;
> +
> + switch (fmt[0]) {
> +
> + /* signed integers and fast path for 'B' */
> + case 'B': uc = *((unsigned char *)ptr); goto convert_uc;
> + case 'b': ld = *((signed char *)ptr); goto convert_ld;
> + case 'h': UNPACK_SINGLE(ld, ptr, short); goto convert_ld;
> + case 'i': UNPACK_SINGLE(ld, ptr, int); goto convert_ld;
> + case 'l': UNPACK_SINGLE(ld, ptr, long); goto convert_ld;
> +
> + /* boolean */
> + case '?': UNPACK_SINGLE(ld, ptr, _Bool); goto convert_bool;
> +
> + /* unsigned integers */
> + case 'H': UNPACK_SINGLE(lu, ptr, unsigned short); goto convert_lu;
> + case 'I': UNPACK_SINGLE(lu, ptr, unsigned int); goto convert_lu;
> + case 'L': UNPACK_SINGLE(lu, ptr, unsigned long); goto convert_lu;
> +
> + /* native 64-bit */
> + case 'q': UNPACK_SINGLE(lld, ptr, long long); goto convert_lld;
> + case 'Q': UNPACK_SINGLE(llu, ptr, unsigned long long); goto convert_llu;
> +
> + /* ssize_t and size_t */
> + case 'n': UNPACK_SINGLE(zd, ptr, Py_ssize_t); goto convert_zd;
> + case 'N': UNPACK_SINGLE(zu, ptr, size_t); goto convert_zu;
> +
> + /* floats */
> + case 'f': UNPACK_SINGLE(d, ptr, float); goto convert_double;
> + case 'd': UNPACK_SINGLE(d, ptr, double); goto convert_double;
> +
> + /* bytes object */
> + case 'c': goto convert_bytes;
> +
> + /* pointer */
> + case 'P': UNPACK_SINGLE(p, ptr, void *); goto convert_pointer;
> +
> + /* default */
> + default: goto err_format;
> + }
> +
> +convert_uc:
> + /* PyLong_FromUnsignedLong() is slower */
> + return PyLong_FromLong(uc);
> +convert_ld:
> + return PyLong_FromLong(ld);
> +convert_lu:
> + return PyLong_FromUnsignedLong(lu);
> +convert_lld:
> + return PyLong_FromLongLong(lld);
> +convert_llu:
> + return PyLong_FromUnsignedLongLong(llu);
> +convert_zd:
> + return PyLong_FromSsize_t(zd);
> +convert_zu:
> + return PyLong_FromSize_t(zu);
> +convert_double:
> + return PyFloat_FromDouble(d);
> +convert_bool:
> + return PyBool_FromLong(ld);
> +convert_bytes:
> + return PyBytes_FromStringAndSize(ptr, 1);
> +convert_pointer:
> + return PyLong_FromVoidPtr(p);
> +err_format:
> + PyErr_Format(PyExc_NotImplementedError,
> + "memoryview: format %s not supported", fmt);
> + return NULL;
> +}
> +
> +#define PACK_SINGLE(ptr, src, type) \
> + do { \
> + type x; \
> + x = (type)src; \
> + memcpy(ptr, (char *)&x, sizeof x); \
> + } while (0)
> +
> +/* Pack a single item. 'fmt' can be any native format character in
> + struct module syntax. */
> +static int
> +pack_single(char *ptr, PyObject *item, const char *fmt)
> +{
> + unsigned long long llu;
> + unsigned long lu;
> + size_t zu;
> + long long lld;
> + long ld;
> + Py_ssize_t zd;
> + double d;
> + void *p;
> +
> + switch (fmt[0]) {
> + /* signed integers */
> + case 'b': case 'h': case 'i': case 'l':
> + ld = pylong_as_ld(item);
> + if (ld == -1 && PyErr_Occurred())
> + goto err_occurred;
> + switch (fmt[0]) {
> + case 'b':
> + if (ld < SCHAR_MIN || ld > SCHAR_MAX) goto err_range;
> + *((signed char *)ptr) = (signed char)ld; break;
> + case 'h':
> + if (ld < SHRT_MIN || ld > SHRT_MAX) goto err_range;
> + PACK_SINGLE(ptr, ld, short); break;
> + case 'i':
> + if (ld < INT_MIN || ld > INT_MAX) goto err_range;
> + PACK_SINGLE(ptr, ld, int); break;
> + default: /* 'l' */
> + PACK_SINGLE(ptr, ld, long); break;
> + }
> + break;
> +
> + /* unsigned integers */
> + case 'B': case 'H': case 'I': case 'L':
> + lu = pylong_as_lu(item);
> + if (lu == (unsigned long)-1 && PyErr_Occurred())
> + goto err_occurred;
> + switch (fmt[0]) {
> + case 'B':
> + if (lu > UCHAR_MAX) goto err_range;
> + *((unsigned char *)ptr) = (unsigned char)lu; break;
> + case 'H':
> + if (lu > USHRT_MAX) goto err_range;
> + PACK_SINGLE(ptr, lu, unsigned short); break;
> + case 'I':
> + if (lu > UINT_MAX) goto err_range;
> + PACK_SINGLE(ptr, lu, unsigned int); break;
> + default: /* 'L' */
> + PACK_SINGLE(ptr, lu, unsigned long); break;
> + }
> + break;
> +
> + /* native 64-bit */
> + case 'q':
> + lld = pylong_as_lld(item);
> + if (lld == -1 && PyErr_Occurred())
> + goto err_occurred;
> + PACK_SINGLE(ptr, lld, long long);
> + break;
> + case 'Q':
> + llu = pylong_as_llu(item);
> + if (llu == (unsigned long long)-1 && PyErr_Occurred())
> + goto err_occurred;
> + PACK_SINGLE(ptr, llu, unsigned long long);
> + break;
> +
> + /* ssize_t and size_t */
> + case 'n':
> + zd = pylong_as_zd(item);
> + if (zd == -1 && PyErr_Occurred())
> + goto err_occurred;
> + PACK_SINGLE(ptr, zd, Py_ssize_t);
> + break;
> + case 'N':
> + zu = pylong_as_zu(item);
> + if (zu == (size_t)-1 && PyErr_Occurred())
> + goto err_occurred;
> + PACK_SINGLE(ptr, zu, size_t);
> + break;
> +
> + /* floats */
> + case 'f': case 'd':
> + d = PyFloat_AsDouble(item);
> + if (d == -1.0 && PyErr_Occurred())
> + goto err_occurred;
> + if (fmt[0] == 'f') {
> + PACK_SINGLE(ptr, d, float);
> + }
> + else {
> + PACK_SINGLE(ptr, d, double);
> + }
> + break;
> +
> + /* bool */
> + case '?':
> + ld = PyObject_IsTrue(item);
> + if (ld < 0)
> + return -1; /* preserve original error */
> + PACK_SINGLE(ptr, ld, _Bool);
> + break;
> +
> + /* bytes object */
> + case 'c':
> + if (!PyBytes_Check(item))
> + return type_error_int(fmt);
> + if (PyBytes_GET_SIZE(item) != 1)
> + return value_error_int(fmt);
> + *ptr = PyBytes_AS_STRING(item)[0];
> + break;
> +
> + /* pointer */
> + case 'P':
> + p = PyLong_AsVoidPtr(item);
> + if (p == NULL && PyErr_Occurred())
> + goto err_occurred;
> + PACK_SINGLE(ptr, p, void *);
> + break;
> +
> + /* default */
> + default: goto err_format;
> + }
> +
> + return 0;
> +
> +err_occurred:
> + return fix_error_int(fmt);
> +err_range:
> + return value_error_int(fmt);
> +err_format:
> + PyErr_Format(PyExc_NotImplementedError,
> + "memoryview: format %s not supported", fmt);
> + return -1;
> +}
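
Side note on the UNPACK_SINGLE/PACK_SINGLE memcpy idiom used above: it keeps
loads and stores well-defined for unaligned element pointers (which strided
or suboffset-based views can produce), and compilers typically collapse the
memcpy into a single move. Standalone illustration, not tied to the patch:

    #include <string.h>

    /* Read a double from a possibly unaligned position in a buffer. */
    static double
    load_double(const char *ptr)
    {
        double x;
        memcpy(&x, ptr, sizeof x);
        return x;
    }
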
> +
> +
> +/****************************************************************************/
> +/* unpack using the struct module */
> +/****************************************************************************/
> +
> +/* For reasonable performance it is necessary to cache all objects required
> + for unpacking. An unpacker can handle the format passed to unpack_from().
> + Invariant: All pointer fields of the struct should either be NULL or valid
> + pointers. */
> +struct unpacker {
> + PyObject *unpack_from; /* Struct.unpack_from(format) */
> + PyObject *mview; /* cached memoryview */
> + char *item; /* buffer for mview */
> + Py_ssize_t itemsize; /* len(item) */
> +};
> +
> +static struct unpacker *
> +unpacker_new(void)
> +{
> + struct unpacker *x = PyMem_Malloc(sizeof *x);
> +
> + if (x == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + x->unpack_from = NULL;
> + x->mview = NULL;
> + x->item = NULL;
> + x->itemsize = 0;
> +
> + return x;
> +}
> +
> +static void
> +unpacker_free(struct unpacker *x)
> +{
> + if (x) {
> + Py_XDECREF(x->unpack_from);
> + Py_XDECREF(x->mview);
> + PyMem_Free(x->item);
> + PyMem_Free(x);
> + }
> +}
> +
> +/* Return a new unpacker for the given format. */
> +static struct unpacker *
> +struct_get_unpacker(const char *fmt, Py_ssize_t itemsize)
> +{
> + PyObject *structmodule; /* XXX cache these two */
> + PyObject *Struct = NULL; /* XXX in globals? */
> + PyObject *structobj = NULL;
> + PyObject *format = NULL;
> + struct unpacker *x = NULL;
> +
> + structmodule = PyImport_ImportModule("struct");
> + if (structmodule == NULL)
> + return NULL;
> +
> + Struct = PyObject_GetAttrString(structmodule, "Struct");
> + Py_DECREF(structmodule);
> + if (Struct == NULL)
> + return NULL;
> +
> + x = unpacker_new();
> + if (x == NULL)
> + goto error;
> +
> + format = PyBytes_FromString(fmt);
> + if (format == NULL)
> + goto error;
> +
> + structobj = PyObject_CallFunctionObjArgs(Struct, format, NULL);
> + if (structobj == NULL)
> + goto error;
> +
> + x->unpack_from = PyObject_GetAttrString(structobj, "unpack_from");
> + if (x->unpack_from == NULL)
> + goto error;
> +
> + x->item = PyMem_Malloc(itemsize);
> + if (x->item == NULL) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + x->itemsize = itemsize;
> +
> + x->mview = PyMemoryView_FromMemory(x->item, itemsize, PyBUF_WRITE);
> + if (x->mview == NULL)
> + goto error;
> +
> +
> +out:
> + Py_XDECREF(Struct);
> + Py_XDECREF(format);
> + Py_XDECREF(structobj);
> + return x;
> +
> +error:
> + unpacker_free(x);
> + x = NULL;
> + goto out;
> +}
> +
> +/* unpack a single item */
> +static PyObject *
> +struct_unpack_single(const char *ptr, struct unpacker *x)
> +{
> + PyObject *v;
> +
> + memcpy(x->item, ptr, x->itemsize);
> + v = PyObject_CallFunctionObjArgs(x->unpack_from, x->mview, NULL);
> + if (v == NULL)
> + return NULL;
> +
> + if (PyTuple_GET_SIZE(v) == 1) {
> + PyObject *tmp = PyTuple_GET_ITEM(v, 0);
> + Py_INCREF(tmp);
> + Py_DECREF(v);
> + return tmp;
> + }
> +
> + return v;
> +}
> +
> +
> +/****************************************************************************/
> +/* Representations */
> +/****************************************************************************/
> +
> +/* allow explicit form of native format */
> +static const char *
> +adjust_fmt(const Py_buffer *view)
> +{
> + const char *fmt;
> +
> + fmt = (view->format[0] == '@') ? view->format+1 : view->format;
> + if (fmt[0] && fmt[1] == '\0')
> + return fmt;
> +
> + PyErr_Format(PyExc_NotImplementedError,
> + "memoryview: unsupported format %s", view->format);
> + return NULL;
> +}
> +
> +/* Base case for multi-dimensional unpacking. Assumption: ndim == 1. */
> +static PyObject *
> +tolist_base(const char *ptr, const Py_ssize_t *shape,
> + const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
> + const char *fmt)
> +{
> + PyObject *lst, *item;
> + Py_ssize_t i;
> +
> + lst = PyList_New(shape[0]);
> + if (lst == NULL)
> + return NULL;
> +
> + for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
> + const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
> + item = unpack_single(xptr, fmt);
> + if (item == NULL) {
> + Py_DECREF(lst);
> + return NULL;
> + }
> + PyList_SET_ITEM(lst, i, item);
> + }
> +
> + return lst;
> +}
> +
> +/* Unpack a multi-dimensional array into a nested list.
> + Assumption: ndim >= 1. */
> +static PyObject *
> +tolist_rec(const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape,
> + const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
> + const char *fmt)
> +{
> + PyObject *lst, *item;
> + Py_ssize_t i;
> +
> + assert(ndim >= 1);
> + assert(shape != NULL);
> + assert(strides != NULL);
> +
> + if (ndim == 1)
> + return tolist_base(ptr, shape, strides, suboffsets, fmt);
> +
> + lst = PyList_New(shape[0]);
> + if (lst == NULL)
> + return NULL;
> +
> + for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
> + const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
> + item = tolist_rec(xptr, ndim-1, shape+1,
> + strides+1, suboffsets ? suboffsets+1 : NULL,
> + fmt);
> + if (item == NULL) {
> + Py_DECREF(lst);
> + return NULL;
> + }
> + PyList_SET_ITEM(lst, i, item);
> + }
> +
> + return lst;
> +}
> +
> +/* Return a list representation of the memoryview. Currently only buffers
> + with native format strings are supported. */
> +static PyObject *
> +memory_tolist(PyMemoryViewObject *mv, PyObject *noargs)
> +{
> + const Py_buffer *view = &(mv->view);
> + const char *fmt;
> +
> + CHECK_RELEASED(mv);
> +
> + fmt = adjust_fmt(view);
> + if (fmt == NULL)
> + return NULL;
> + if (view->ndim == 0) {
> + return unpack_single(view->buf, fmt);
> + }
> + else if (view->ndim == 1) {
> + return tolist_base(view->buf, view->shape,
> + view->strides, view->suboffsets,
> + fmt);
> + }
> + else {
> + return tolist_rec(view->buf, view->ndim, view->shape,
> + view->strides, view->suboffsets,
> + fmt);
> + }
> +}
> +
> +static PyObject *
> +memory_tobytes(PyMemoryViewObject *self, PyObject *dummy)
> +{
> + Py_buffer *src = VIEW_ADDR(self);
> + PyObject *bytes = NULL;
> +
> + CHECK_RELEASED(self);
> +
> + if (MV_C_CONTIGUOUS(self->flags)) {
> + return PyBytes_FromStringAndSize(src->buf, src->len);
> + }
> +
> + bytes = PyBytes_FromStringAndSize(NULL, src->len);
> + if (bytes == NULL)
> + return NULL;
> +
> + if (buffer_to_contiguous(PyBytes_AS_STRING(bytes), src, 'C') < 0) {
> + Py_DECREF(bytes);
> + return NULL;
> + }
> +
> + return bytes;
> +}
> +
> +static PyObject *
> +memory_hex(PyMemoryViewObject *self, PyObject *dummy)
> +{
> + Py_buffer *src = VIEW_ADDR(self);
> + PyObject *bytes;
> + PyObject *ret;
> +
> + CHECK_RELEASED(self);
> +
> + if (MV_C_CONTIGUOUS(self->flags)) {
> + return _Py_strhex(src->buf, src->len);
> + }
> +
> + bytes = memory_tobytes(self, dummy);
> + if (bytes == NULL)
> + return NULL;
> +
> + ret = _Py_strhex(PyBytes_AS_STRING(bytes), Py_SIZE(bytes));
> + Py_DECREF(bytes);
> +
> + return ret;
> +}
> +
> +static PyObject *
> +memory_repr(PyMemoryViewObject *self)
> +{
> + if (self->flags & _Py_MEMORYVIEW_RELEASED)
> + return PyUnicode_FromFormat("<released memory at %p>", self);
> + else
> + return PyUnicode_FromFormat("<memory at %p>", self);
> +}
> +
> +
> +/**************************************************************************/
> +/* Indexing and slicing */
> +/**************************************************************************/
> +
> +static char *
> +lookup_dimension(Py_buffer *view, char *ptr, int dim, Py_ssize_t index)
> +{
> + Py_ssize_t nitems; /* items in the given dimension */
> +
> + assert(view->shape);
> + assert(view->strides);
> +
> + nitems = view->shape[dim];
> + if (index < 0) {
> + index += nitems;
> + }
> + if (index < 0 || index >= nitems) {
> + PyErr_Format(PyExc_IndexError,
> + "index out of bounds on dimension %d", dim + 1);
> + return NULL;
> + }
> +
> + ptr += view->strides[dim] * index;
> +
> + ptr = ADJUST_PTR(ptr, view->suboffsets, dim);
> +
> + return ptr;
> +}
> +
> +/* Get the pointer to the item at index. */
> +static char *
> +ptr_from_index(Py_buffer *view, Py_ssize_t index)
> +{
> + char *ptr = (char *)view->buf;
> + return lookup_dimension(view, ptr, 0, index);
> +}
> +
> +/* Get the pointer to the item at tuple. */
> +static char *
> +ptr_from_tuple(Py_buffer *view, PyObject *tup)
> +{
> + char *ptr = (char *)view->buf;
> + Py_ssize_t dim, nindices = PyTuple_GET_SIZE(tup);
> +
> + if (nindices > view->ndim) {
> + PyErr_Format(PyExc_TypeError,
> + "cannot index %zd-dimension view with %zd-element tuple",
> + view->ndim, nindices);
> + return NULL;
> + }
> +
> + for (dim = 0; dim < nindices; dim++) {
> + Py_ssize_t index;
> + index = PyNumber_AsSsize_t(PyTuple_GET_ITEM(tup, dim),
> + PyExc_IndexError);
> + if (index == -1 && PyErr_Occurred())
> + return NULL;
> + ptr = lookup_dimension(view, ptr, (int)dim, index);
> + if (ptr == NULL)
> + return NULL;
> + }
> + return ptr;
> +}
> +
> +/* Return the item at index. In a one-dimensional view, this is an object
> + with the type specified by view->format. Otherwise, the item is a sub-view.
> + The function is used in memory_subscript() and memory_as_sequence. */
> +static PyObject *
> +memory_item(PyMemoryViewObject *self, Py_ssize_t index)
> +{
> + Py_buffer *view = &(self->view);
> + const char *fmt;
> +
> + CHECK_RELEASED(self);
> +
> + fmt = adjust_fmt(view);
> + if (fmt == NULL)
> + return NULL;
> +
> + if (view->ndim == 0) {
> + PyErr_SetString(PyExc_TypeError, "invalid indexing of 0-dim memory");
> + return NULL;
> + }
> + if (view->ndim == 1) {
> + char *ptr = ptr_from_index(view, index);
> + if (ptr == NULL)
> + return NULL;
> + return unpack_single(ptr, fmt);
> + }
> +
> + PyErr_SetString(PyExc_NotImplementedError,
> + "multi-dimensional sub-views are not implemented");
> + return NULL;
> +}
> +
> +/* Return the item at position *key* (a tuple of indices). */
> +static PyObject *
> +memory_item_multi(PyMemoryViewObject *self, PyObject *tup)
> +{
> + Py_buffer *view = &(self->view);
> + const char *fmt;
> + Py_ssize_t nindices = PyTuple_GET_SIZE(tup);
> + char *ptr;
> +
> + CHECK_RELEASED(self);
> +
> + fmt = adjust_fmt(view);
> + if (fmt == NULL)
> + return NULL;
> +
> + if (nindices < view->ndim) {
> + PyErr_SetString(PyExc_NotImplementedError,
> + "sub-views are not implemented");
> + return NULL;
> + }
> + ptr = ptr_from_tuple(view, tup);
> + if (ptr == NULL)
> + return NULL;
> + return unpack_single(ptr, fmt);
> +}
> +
> +static int
> +init_slice(Py_buffer *base, PyObject *key, int dim)
> +{
> + Py_ssize_t start, stop, step, slicelength;
> +
> + if (PySlice_Unpack(key, &start, &stop, &step) < 0) {
> + return -1;
> + }
> + slicelength = PySlice_AdjustIndices(base->shape[dim], &start, &stop, step);
> +
> +
> + if (base->suboffsets == NULL || dim == 0) {
> + adjust_buf:
> + base->buf = (char *)base->buf + base->strides[dim] * start;
> + }
> + else {
> + Py_ssize_t n = dim-1;
> + while (n >= 0 && base->suboffsets[n] < 0)
> + n--;
> + if (n < 0)
> + goto adjust_buf; /* all suboffsets are negative */
> + base->suboffsets[n] = base->suboffsets[n] + base->strides[dim] * start;
> + }
> + base->shape[dim] = slicelength;
> + base->strides[dim] = base->strides[dim] * step;
> +
> + return 0;
> +}
> +
> +static int
> +is_multislice(PyObject *key)
> +{
> + Py_ssize_t size, i;
> +
> + if (!PyTuple_Check(key))
> + return 0;
> + size = PyTuple_GET_SIZE(key);
> + if (size == 0)
> + return 0;
> +
> + for (i = 0; i < size; i++) {
> + PyObject *x = PyTuple_GET_ITEM(key, i);
> + if (!PySlice_Check(x))
> + return 0;
> + }
> + return 1;
> +}
> +
> +static Py_ssize_t
> +is_multiindex(PyObject *key)
> +{
> + Py_ssize_t size, i;
> +
> + if (!PyTuple_Check(key))
> + return 0;
> + size = PyTuple_GET_SIZE(key);
> + for (i = 0; i < size; i++) {
> + PyObject *x = PyTuple_GET_ITEM(key, i);
> + if (!PyIndex_Check(x))
> + return 0;
> + }
> + return 1;
> +}
> +
> +/* mv[obj] returns an object holding the data for one element if obj
> + fully indexes the memoryview or another memoryview object if it
> + does not.
> +
> + 0-d memoryview objects can be referenced using mv[...] or mv[()]
> + but not with anything else. */
> +static PyObject *
> +memory_subscript(PyMemoryViewObject *self, PyObject *key)
> +{
> + Py_buffer *view;
> + view = &(self->view);
> +
> + CHECK_RELEASED(self);
> +
> + if (view->ndim == 0) {
> + if (PyTuple_Check(key) && PyTuple_GET_SIZE(key) == 0) {
> + const char *fmt = adjust_fmt(view);
> + if (fmt == NULL)
> + return NULL;
> + return unpack_single(view->buf, fmt);
> + }
> + else if (key == Py_Ellipsis) {
> + Py_INCREF(self);
> + return (PyObject *)self;
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "invalid indexing of 0-dim memory");
> + return NULL;
> + }
> + }
> +
> + if (PyIndex_Check(key)) {
> + Py_ssize_t index;
> + index = PyNumber_AsSsize_t(key, PyExc_IndexError);
> + if (index == -1 && PyErr_Occurred())
> + return NULL;
> + return memory_item(self, index);
> + }
> + else if (PySlice_Check(key)) {
> + PyMemoryViewObject *sliced;
> +
> + sliced = (PyMemoryViewObject *)mbuf_add_view(self->mbuf, view);
> + if (sliced == NULL)
> + return NULL;
> +
> + if (init_slice(&sliced->view, key, 0) < 0) {
> + Py_DECREF(sliced);
> + return NULL;
> + }
> + init_len(&sliced->view);
> + init_flags(sliced);
> +
> + return (PyObject *)sliced;
> + }
> + else if (is_multiindex(key)) {
> + return memory_item_multi(self, key);
> + }
> + else if (is_multislice(key)) {
> + PyErr_SetString(PyExc_NotImplementedError,
> + "multi-dimensional slicing is not implemented");
> + return NULL;
> + }
> +
> + PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
> + return NULL;
> +}
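
The same indexing and slicing paths can be exercised from C through the
abstract object API; a sketch for a 1-D view, equivalent to mv[0:n]
(helper name is hypothetical):

    #include <Python.h>

    static PyObject *
    first_n_items(PyObject *mv, Py_ssize_t n)
    {
        PyObject *stop, *key, *sub;

        stop = PyLong_FromSsize_t(n);
        if (stop == NULL)
            return NULL;
        key = PySlice_New(NULL, stop, NULL);  /* slice(None, n, None) */
        Py_DECREF(stop);
        if (key == NULL)
            return NULL;
        sub = PyObject_GetItem(mv, key);      /* new view, same managed buffer */
        Py_DECREF(key);
        return sub;
    }
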
> +
> +static int
> +memory_ass_sub(PyMemoryViewObject *self, PyObject *key, PyObject *value)
> +{
> + Py_buffer *view = &(self->view);
> + Py_buffer src;
> + const char *fmt;
> + char *ptr;
> +
> + CHECK_RELEASED_INT(self);
> +
> + fmt = adjust_fmt(view);
> + if (fmt == NULL)
> + return -1;
> +
> + if (view->readonly) {
> + PyErr_SetString(PyExc_TypeError, "cannot modify read-only memory");
> + return -1;
> + }
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError, "cannot delete memory");
> + return -1;
> + }
> + if (view->ndim == 0) {
> + if (key == Py_Ellipsis ||
> + (PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) {
> + ptr = (char *)view->buf;
> + return pack_single(ptr, value, fmt);
> + }
> + else {
> + PyErr_SetString(PyExc_TypeError,
> + "invalid indexing of 0-dim memory");
> + return -1;
> + }
> + }
> +
> + if (PyIndex_Check(key)) {
> + Py_ssize_t index;
> + if (1 < view->ndim) {
> + PyErr_SetString(PyExc_NotImplementedError,
> + "sub-views are not implemented");
> + return -1;
> + }
> + index = PyNumber_AsSsize_t(key, PyExc_IndexError);
> + if (index == -1 && PyErr_Occurred())
> + return -1;
> + ptr = ptr_from_index(view, index);
> + if (ptr == NULL)
> + return -1;
> + return pack_single(ptr, value, fmt);
> + }
> + /* one-dimensional: fast path */
> + if (PySlice_Check(key) && view->ndim == 1) {
> + Py_buffer dest; /* sliced view */
> + Py_ssize_t arrays[3];
> + int ret = -1;
> +
> + /* rvalue must be an exporter */
> + if (PyObject_GetBuffer(value, &src, PyBUF_FULL_RO) < 0)
> + return ret;
> +
> + dest = *view;
> + dest.shape = &arrays[0]; dest.shape[0] = view->shape[0];
> + dest.strides = &arrays[1]; dest.strides[0] = view->strides[0];
> + if (view->suboffsets) {
> + dest.suboffsets = &arrays[2]; dest.suboffsets[0] = view->suboffsets[0];
> + }
> +
> + if (init_slice(&dest, key, 0) < 0)
> + goto end_block;
> + dest.len = dest.shape[0] * dest.itemsize;
> +
> + ret = copy_single(&dest, &src);
> +
> + end_block:
> + PyBuffer_Release(&src);
> + return ret;
> + }
> + if (is_multiindex(key)) {
> + char *ptr;
> + if (PyTuple_GET_SIZE(key) < view->ndim) {
> + PyErr_SetString(PyExc_NotImplementedError,
> + "sub-views are not implemented");
> + return -1;
> + }
> + ptr = ptr_from_tuple(view, key);
> + if (ptr == NULL)
> + return -1;
> + return pack_single(ptr, value, fmt);
> + }
> + if (PySlice_Check(key) || is_multislice(key)) {
> + /* Call memory_subscript() to produce a sliced lvalue, then copy
> + rvalue into lvalue. This is already implemented in _testbuffer.c. */
> + PyErr_SetString(PyExc_NotImplementedError,
> + "memoryview slice assignments are currently restricted "
> + "to ndim = 1");
> + return -1;
> + }
> +
> + PyErr_SetString(PyExc_TypeError, "memoryview: invalid slice key");
> + return -1;
> +}
> +
> +static Py_ssize_t
> +memory_length(PyMemoryViewObject *self)
> +{
> + CHECK_RELEASED_INT(self);
> + return self->view.ndim == 0 ? 1 : self->view.shape[0];
> +}
> +
> +/* As mapping */
> +static PyMappingMethods memory_as_mapping = {
> + (lenfunc)memory_length, /* mp_length */
> + (binaryfunc)memory_subscript, /* mp_subscript */
> + (objobjargproc)memory_ass_sub, /* mp_ass_subscript */
> +};
> +
> +/* As sequence */
> +static PySequenceMethods memory_as_sequence = {
> + (lenfunc)memory_length, /* sq_length */
> + 0, /* sq_concat */
> + 0, /* sq_repeat */
> + (ssizeargfunc)memory_item, /* sq_item */
> +};
> +
> +
> +/**************************************************************************/
> +/* Comparisons */
> +/**************************************************************************/
> +
> +#define MV_COMPARE_EX -1 /* exception */
> +#define MV_COMPARE_NOT_IMPL -2 /* not implemented */
> +
> +/* Translate a StructError to "not equal". Preserve other exceptions. */
> +static int
> +fix_struct_error_int(void)
> +{
> + assert(PyErr_Occurred());
> + /* XXX Cannot get at StructError directly? */
> + if (PyErr_ExceptionMatches(PyExc_ImportError) ||
> + PyErr_ExceptionMatches(PyExc_MemoryError)) {
> + return MV_COMPARE_EX;
> + }
> + /* StructError: invalid or unknown format -> not equal */
> + PyErr_Clear();
> + return 0;
> +}
> +
> +/* Unpack and compare single items of p and q using the struct module. */
> +static int
> +struct_unpack_cmp(const char *p, const char *q,
> + struct unpacker *unpack_p, struct unpacker *unpack_q)
> +{
> + PyObject *v, *w;
> + int ret;
> +
> + /* At this point any exception from the struct module should not be
> + StructError, since both formats have been accepted already. */
> + v = struct_unpack_single(p, unpack_p);
> + if (v == NULL)
> + return MV_COMPARE_EX;
> +
> + w = struct_unpack_single(q, unpack_q);
> + if (w == NULL) {
> + Py_DECREF(v);
> + return MV_COMPARE_EX;
> + }
> +
> + /* MV_COMPARE_EX == -1: exceptions are preserved */
> + ret = PyObject_RichCompareBool(v, w, Py_EQ);
> + Py_DECREF(v);
> + Py_DECREF(w);
> +
> + return ret;
> +}
> +
> +/* Unpack and compare single items of p and q. If both p and q have the same
> + single element native format, the comparison uses a fast path (gcc creates
> + a jump table and converts memcpy into simple assignments on x86/x64).
> +
> + Otherwise, the comparison is delegated to the struct module, which is
> + 30-60x slower. */
> +#define CMP_SINGLE(p, q, type) \
> + do { \
> + type x; \
> + type y; \
> + memcpy((char *)&x, p, sizeof x); \
> + memcpy((char *)&y, q, sizeof y); \
> + equal = (x == y); \
> + } while (0)
> +
> +static int
> +unpack_cmp(const char *p, const char *q, char fmt,
> + struct unpacker *unpack_p, struct unpacker *unpack_q)
> +{
> + int equal;
> +
> + switch (fmt) {
> +
> + /* signed integers and fast path for 'B' */
> + case 'B': return *((unsigned char *)p) == *((unsigned char *)q);
> + case 'b': return *((signed char *)p) == *((signed char *)q);
> + case 'h': CMP_SINGLE(p, q, short); return equal;
> + case 'i': CMP_SINGLE(p, q, int); return equal;
> + case 'l': CMP_SINGLE(p, q, long); return equal;
> +
> + /* boolean */
> + case '?': CMP_SINGLE(p, q, _Bool); return equal;
> +
> + /* unsigned integers */
> + case 'H': CMP_SINGLE(p, q, unsigned short); return equal;
> + case 'I': CMP_SINGLE(p, q, unsigned int); return equal;
> + case 'L': CMP_SINGLE(p, q, unsigned long); return equal;
> +
> + /* native 64-bit */
> + case 'q': CMP_SINGLE(p, q, long long); return equal;
> + case 'Q': CMP_SINGLE(p, q, unsigned long long); return equal;
> +
> + /* ssize_t and size_t */
> + case 'n': CMP_SINGLE(p, q, Py_ssize_t); return equal;
> + case 'N': CMP_SINGLE(p, q, size_t); return equal;
> +
> + /* floats */
> + /* XXX DBL_EPSILON? */
> + case 'f': CMP_SINGLE(p, q, float); return equal;
> + case 'd': CMP_SINGLE(p, q, double); return equal;
> +
> + /* bytes object */
> + case 'c': return *p == *q;
> +
> + /* pointer */
> + case 'P': CMP_SINGLE(p, q, void *); return equal;
> +
> + /* use the struct module */
> + case '_':
> + assert(unpack_p);
> + assert(unpack_q);
> + return struct_unpack_cmp(p, q, unpack_p, unpack_q);
> + }
> +
> + /* NOT REACHED */
> + PyErr_SetString(PyExc_RuntimeError,
> + "memoryview: internal error in richcompare");
> + return MV_COMPARE_EX;
> +}
> +
> +/* Base case for recursive array comparisons. Assumption: ndim == 1. */
> +static int
> +cmp_base(const char *p, const char *q, const Py_ssize_t *shape,
> + const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
> + const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
> + char fmt, struct unpacker *unpack_p, struct unpacker *unpack_q)
> +{
> + Py_ssize_t i;
> + int equal;
> +
> + for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
> + const char *xp = ADJUST_PTR(p, psuboffsets, 0);
> + const char *xq = ADJUST_PTR(q, qsuboffsets, 0);
> + equal = unpack_cmp(xp, xq, fmt, unpack_p, unpack_q);
> + if (equal <= 0)
> + return equal;
> + }
> +
> + return 1;
> +}
> +
> +/* Recursively compare two multi-dimensional arrays that have the same
> + logical structure. Assumption: ndim >= 1. */
> +static int
> +cmp_rec(const char *p, const char *q,
> + Py_ssize_t ndim, const Py_ssize_t *shape,
> + const Py_ssize_t *pstrides, const Py_ssize_t *psuboffsets,
> + const Py_ssize_t *qstrides, const Py_ssize_t *qsuboffsets,
> + char fmt, struct unpacker *unpack_p, struct unpacker *unpack_q)
> +{
> + Py_ssize_t i;
> + int equal;
> +
> + assert(ndim >= 1);
> + assert(shape != NULL);
> + assert(pstrides != NULL);
> + assert(qstrides != NULL);
> +
> + if (ndim == 1) {
> + return cmp_base(p, q, shape,
> + pstrides, psuboffsets,
> + qstrides, qsuboffsets,
> + fmt, unpack_p, unpack_q);
> + }
> +
> + for (i = 0; i < shape[0]; p+=pstrides[0], q+=qstrides[0], i++) {
> + const char *xp = ADJUST_PTR(p, psuboffsets, 0);
> + const char *xq = ADJUST_PTR(q, qsuboffsets, 0);
> + equal = cmp_rec(xp, xq, ndim-1, shape+1,
> + pstrides+1, psuboffsets ? psuboffsets+1 : NULL,
> + qstrides+1, qsuboffsets ? qsuboffsets+1 : NULL,
> + fmt, unpack_p, unpack_q);
> + if (equal <= 0)
> + return equal;
> + }
> +
> + return 1;
> +}
> +
> +static PyObject *
> +memory_richcompare(PyObject *v, PyObject *w, int op)
> +{
> + PyObject *res;
> + Py_buffer wbuf, *vv;
> + Py_buffer *ww = NULL;
> + struct unpacker *unpack_v = NULL;
> + struct unpacker *unpack_w = NULL;
> + char vfmt, wfmt;
> + int equal = MV_COMPARE_NOT_IMPL;
> +
> + if (op != Py_EQ && op != Py_NE)
> + goto result; /* Py_NotImplemented */
> +
> + assert(PyMemoryView_Check(v));
> + if (BASE_INACCESSIBLE(v)) {
> + equal = (v == w);
> + goto result;
> + }
> + vv = VIEW_ADDR(v);
> +
> + if (PyMemoryView_Check(w)) {
> + if (BASE_INACCESSIBLE(w)) {
> + equal = (v == w);
> + goto result;
> + }
> + ww = VIEW_ADDR(w);
> + }
> + else {
> + if (PyObject_GetBuffer(w, &wbuf, PyBUF_FULL_RO) < 0) {
> + PyErr_Clear();
> + goto result; /* Py_NotImplemented */
> + }
> + ww = &wbuf;
> + }
> +
> + if (!equiv_shape(vv, ww)) {
> + PyErr_Clear();
> + equal = 0;
> + goto result;
> + }
> +
> + /* Use fast unpacking for identical primitive C type formats. */
> + if (get_native_fmtchar(&vfmt, vv->format) < 0)
> + vfmt = '_';
> + if (get_native_fmtchar(&wfmt, ww->format) < 0)
> + wfmt = '_';
> + if (vfmt == '_' || wfmt == '_' || vfmt != wfmt) {
> + /* Use struct module unpacking. NOTE: Even for equal format strings,
> + memcmp() cannot be used for item comparison since it would give
> + incorrect results in the case of NaNs or uninitialized padding
> + bytes. */
> + vfmt = '_';
> + unpack_v = struct_get_unpacker(vv->format, vv->itemsize);
> + if (unpack_v == NULL) {
> + equal = fix_struct_error_int();
> + goto result;
> + }
> + unpack_w = struct_get_unpacker(ww->format, ww->itemsize);
> + if (unpack_w == NULL) {
> + equal = fix_struct_error_int();
> + goto result;
> + }
> + }
> +
> + if (vv->ndim == 0) {
> + equal = unpack_cmp(vv->buf, ww->buf,
> + vfmt, unpack_v, unpack_w);
> + }
> + else if (vv->ndim == 1) {
> + equal = cmp_base(vv->buf, ww->buf, vv->shape,
> + vv->strides, vv->suboffsets,
> + ww->strides, ww->suboffsets,
> + vfmt, unpack_v, unpack_w);
> + }
> + else {
> + equal = cmp_rec(vv->buf, ww->buf, vv->ndim, vv->shape,
> + vv->strides, vv->suboffsets,
> + ww->strides, ww->suboffsets,
> + vfmt, unpack_v, unpack_w);
> + }
> +
> +result:
> + if (equal < 0) {
> + if (equal == MV_COMPARE_NOT_IMPL)
> + res = Py_NotImplemented;
> + else /* exception */
> + res = NULL;
> + }
> + else if ((equal && op == Py_EQ) || (!equal && op == Py_NE))
> + res = Py_True;
> + else
> + res = Py_False;
> +
> + if (ww == &wbuf)
> + PyBuffer_Release(ww);
> +
> + unpacker_free(unpack_v);
> + unpacker_free(unpack_w);
> +
> + Py_XINCREF(res);
> + return res;
> +}
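
Equality is the only rich comparison implemented above; from C it can be
driven with the generic helper, against any exporter such as bytes (sketch,
helper name is illustrative):

    #include <Python.h>

    static int
    views_equal(PyObject *mv, PyObject *other)
    {
        /* 1 if equal, 0 if not equal, -1 with an exception set on error. */
        return PyObject_RichCompareBool(mv, other, Py_EQ);
    }
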
> +
> +/**************************************************************************/
> +/* Hash */
> +/**************************************************************************/
> +
> +static Py_hash_t
> +memory_hash(PyMemoryViewObject *self)
> +{
> + if (self->hash == -1) {
> + Py_buffer *view = &self->view;
> + char *mem = view->buf;
> + Py_ssize_t ret;
> + char fmt;
> +
> + CHECK_RELEASED_INT(self);
> +
> + if (!view->readonly) {
> + PyErr_SetString(PyExc_ValueError,
> + "cannot hash writable memoryview object");
> + return -1;
> + }
> + ret = get_native_fmtchar(&fmt, view->format);
> + if (ret < 0 || !IS_BYTE_FORMAT(fmt)) {
> + PyErr_SetString(PyExc_ValueError,
> + "memoryview: hashing is restricted to formats 'B', 'b' or 'c'");
> + return -1;
> + }
> + if (view->obj != NULL && PyObject_Hash(view->obj) == -1) {
> + /* Keep the original error message */
> + return -1;
> + }
> +
> + if (!MV_C_CONTIGUOUS(self->flags)) {
> + mem = PyMem_Malloc(view->len);
> + if (mem == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + if (buffer_to_contiguous(mem, view, 'C') < 0) {
> + PyMem_Free(mem);
> + return -1;
> + }
> + }
> +
> + /* Can't fail */
> + self->hash = _Py_HashBytes(mem, view->len);
> +
> + if (mem != view->buf)
> + PyMem_Free(mem);
> + }
> +
> + return self->hash;
> +}
> +
> +
> +/**************************************************************************/
> +/* getters */
> +/**************************************************************************/
> +
> +static PyObject *
> +_IntTupleFromSsizet(int len, Py_ssize_t *vals)
> +{
> + int i;
> + PyObject *o;
> + PyObject *intTuple;
> +
> + if (vals == NULL)
> + return PyTuple_New(0);
> +
> + intTuple = PyTuple_New(len);
> + if (!intTuple)
> + return NULL;
> + for (i=0; i<len; i++) {
> + o = PyLong_FromSsize_t(vals[i]);
> + if (!o) {
> + Py_DECREF(intTuple);
> + return NULL;
> + }
> + PyTuple_SET_ITEM(intTuple, i, o);
> + }
> + return intTuple;
> +}
> +
> +static PyObject *
> +memory_obj_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + Py_buffer *view = &self->view;
> +
> + CHECK_RELEASED(self);
> + if (view->obj == NULL) {
> + Py_RETURN_NONE;
> + }
> + Py_INCREF(view->obj);
> + return view->obj;
> +}
> +
> +static PyObject *
> +memory_nbytes_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return PyLong_FromSsize_t(self->view.len);
> +}
> +
> +static PyObject *
> +memory_format_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return PyUnicode_FromString(self->view.format);
> +}
> +
> +static PyObject *
> +memory_itemsize_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return PyLong_FromSsize_t(self->view.itemsize);
> +}
> +
> +static PyObject *
> +memory_shape_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return _IntTupleFromSsizet(self->view.ndim, self->view.shape);
> +}
> +
> +static PyObject *
> +memory_strides_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return _IntTupleFromSsizet(self->view.ndim, self->view.strides);
> +}
> +
> +static PyObject *
> +memory_suboffsets_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return _IntTupleFromSsizet(self->view.ndim, self->view.suboffsets);
> +}
> +
> +static PyObject *
> +memory_readonly_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return PyBool_FromLong(self->view.readonly);
> +}
> +
> +static PyObject *
> +memory_ndim_get(PyMemoryViewObject *self, void *Py_UNUSED(ignored))
> +{
> + CHECK_RELEASED(self);
> + return PyLong_FromLong(self->view.ndim);
> +}
> +
> +static PyObject *
> +memory_c_contiguous(PyMemoryViewObject *self, PyObject *dummy)
> +{
> + CHECK_RELEASED(self);
> + return PyBool_FromLong(MV_C_CONTIGUOUS(self->flags));
> +}
> +
> +static PyObject *
> +memory_f_contiguous(PyMemoryViewObject *self, PyObject *dummy)
> +{
> + CHECK_RELEASED(self);
> + return PyBool_FromLong(MV_F_CONTIGUOUS(self->flags));
> +}
> +
> +static PyObject *
> +memory_contiguous(PyMemoryViewObject *self, PyObject *dummy)
> +{
> + CHECK_RELEASED(self);
> + return PyBool_FromLong(MV_ANY_CONTIGUOUS(self->flags));
> +}
> +
> +PyDoc_STRVAR(memory_obj_doc,
> + "The underlying object of the memoryview.");
> +PyDoc_STRVAR(memory_nbytes_doc,
> + "The amount of space in bytes that the array would use in\n"
> + " a contiguous representation.");
> +PyDoc_STRVAR(memory_readonly_doc,
> + "A bool indicating whether the memory is read only.");
> +PyDoc_STRVAR(memory_itemsize_doc,
> + "The size in bytes of each element of the memoryview.");
> +PyDoc_STRVAR(memory_format_doc,
> + "A string containing the format (in struct module style)\n"
> + " for each element in the view.");
> +PyDoc_STRVAR(memory_ndim_doc,
> + "An integer indicating how many dimensions of a multi-dimensional\n"
> + " array the memory represents.");
> +PyDoc_STRVAR(memory_shape_doc,
> + "A tuple of ndim integers giving the shape of the memory\n"
> + " as an N-dimensional array.");
> +PyDoc_STRVAR(memory_strides_doc,
> + "A tuple of ndim integers giving the size in bytes to access\n"
> + " each element for each dimension of the array.");
> +PyDoc_STRVAR(memory_suboffsets_doc,
> + "A tuple of integers used internally for PIL-style arrays.");
> +PyDoc_STRVAR(memory_c_contiguous_doc,
> + "A bool indicating whether the memory is C contiguous.");
> +PyDoc_STRVAR(memory_f_contiguous_doc,
> + "A bool indicating whether the memory is Fortran contiguous.");
> +PyDoc_STRVAR(memory_contiguous_doc,
> + "A bool indicating whether the memory is contiguous.");
> +
> +
> +static PyGetSetDef memory_getsetlist[] = {
> + {"obj", (getter)memory_obj_get, NULL, memory_obj_doc},
> + {"nbytes", (getter)memory_nbytes_get, NULL, memory_nbytes_doc},
> + {"readonly", (getter)memory_readonly_get, NULL, memory_readonly_doc},
> + {"itemsize", (getter)memory_itemsize_get, NULL, memory_itemsize_doc},
> + {"format", (getter)memory_format_get, NULL, memory_format_doc},
> + {"ndim", (getter)memory_ndim_get, NULL, memory_ndim_doc},
> + {"shape", (getter)memory_shape_get, NULL, memory_shape_doc},
> + {"strides", (getter)memory_strides_get, NULL, memory_strides_doc},
> + {"suboffsets", (getter)memory_suboffsets_get, NULL, memory_suboffsets_doc},
> + {"c_contiguous", (getter)memory_c_contiguous, NULL, memory_c_contiguous_doc},
> + {"f_contiguous", (getter)memory_f_contiguous, NULL, memory_f_contiguous_doc},
> + {"contiguous", (getter)memory_contiguous, NULL, memory_contiguous_doc},
> + {NULL, NULL, NULL, NULL},
> +};
> +
> +PyDoc_STRVAR(memory_release_doc,
> +"release($self, /)\n--\n\
> +\n\
> +Release the underlying buffer exposed by the memoryview object.");
> +PyDoc_STRVAR(memory_tobytes_doc,
> +"tobytes($self, /)\n--\n\
> +\n\
> +Return the data in the buffer as a byte string.");
> +PyDoc_STRVAR(memory_hex_doc,
> +"hex($self, /)\n--\n\
> +\n\
> +Return the data in the buffer as a string of hexadecimal numbers.");
> +PyDoc_STRVAR(memory_tolist_doc,
> +"tolist($self, /)\n--\n\
> +\n\
> +Return the data in the buffer as a list of elements.");
> +PyDoc_STRVAR(memory_cast_doc,
> +"cast($self, /, format, *, shape)\n--\n\
> +\n\
> +Cast a memoryview to a new format or shape.");
> +
> +static PyMethodDef memory_methods[] = {
> + {"release", (PyCFunction)memory_release, METH_NOARGS, memory_release_doc},
> + {"tobytes", (PyCFunction)memory_tobytes, METH_NOARGS, memory_tobytes_doc},
> + {"hex", (PyCFunction)memory_hex, METH_NOARGS, memory_hex_doc},
> + {"tolist", (PyCFunction)memory_tolist, METH_NOARGS, memory_tolist_doc},
> + {"cast", (PyCFunction)memory_cast, METH_VARARGS|METH_KEYWORDS, memory_cast_doc},
> + {"__enter__", memory_enter, METH_NOARGS, NULL},
> + {"__exit__", memory_exit, METH_VARARGS, NULL},
> + {NULL, NULL}
> +};
> +
> +
> +PyTypeObject PyMemoryView_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "memoryview", /* tp_name */
> + offsetof(PyMemoryViewObject, ob_array), /* tp_basicsize */
> + sizeof(Py_ssize_t), /* tp_itemsize */
> + (destructor)memory_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + (reprfunc)memory_repr, /* tp_repr */
> + 0, /* tp_as_number */
> + &memory_as_sequence, /* tp_as_sequence */
> + &memory_as_mapping, /* tp_as_mapping */
> + (hashfunc)memory_hash, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + &memory_as_buffer, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC, /* tp_flags */
> + memory_doc, /* tp_doc */
> + (traverseproc)memory_traverse, /* tp_traverse */
> + (inquiry)memory_clear, /* tp_clear */
> + memory_richcompare, /* tp_richcompare */
> + offsetof(PyMemoryViewObject, weakreflist),/* tp_weaklistoffset */
> + 0, /* tp_iter */
> + 0, /* tp_iternext */
> + memory_methods, /* tp_methods */
> + 0, /* tp_members */
> + memory_getsetlist, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + memory_new, /* tp_new */
> +};
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
> new file mode 100644
> index 00000000..97b307da
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
> @@ -0,0 +1,2082 @@
> +
> +/* Generic object operations; and implementation of None */
> +
> +#include "Python.h"
> +#include "frameobject.h"
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +_Py_IDENTIFIER(Py_Repr);
> +_Py_IDENTIFIER(__bytes__);
> +_Py_IDENTIFIER(__dir__);
> +_Py_IDENTIFIER(__isabstractmethod__);
> +_Py_IDENTIFIER(builtins);
> +
> +#ifdef Py_REF_DEBUG
> +Py_ssize_t _Py_RefTotal;
> +
> +Py_ssize_t
> +_Py_GetRefTotal(void)
> +{
> + PyObject *o;
> + Py_ssize_t total = _Py_RefTotal;
> + o = _PySet_Dummy;
> + if (o != NULL)
> + total -= o->ob_refcnt;
> + return total;
> +}
> +
> +void
> +_PyDebug_PrintTotalRefs(void) {
> + PyObject *xoptions, *value;
> + _Py_IDENTIFIER(showrefcount);
> +
> + xoptions = PySys_GetXOptions();
> + if (xoptions == NULL)
> + return;
> + value = _PyDict_GetItemId(xoptions, &PyId_showrefcount);
> + if (value == Py_True)
> + fprintf(stderr,
> + "[%" PY_FORMAT_SIZE_T "d refs, "
> + "%" PY_FORMAT_SIZE_T "d blocks]\n",
> + _Py_GetRefTotal(), _Py_GetAllocatedBlocks());
> +}
> +#endif /* Py_REF_DEBUG */
> +
> +/* Object allocation routines used by NEWOBJ and NEWVAROBJ macros.
> + These are used by the individual routines for object creation.
> + Do not call them otherwise, they do not initialize the object! */
> +
> +#ifdef Py_TRACE_REFS
> +/* Head of circular doubly-linked list of all objects. These are linked
> + * together via the _ob_prev and _ob_next members of a PyObject, which
> + * exist only in a Py_TRACE_REFS build.
> + */
> +static PyObject refchain = {&refchain, &refchain};
> +
> +/* Insert op at the front of the list of all objects. If force is true,
> + * op is added even if _ob_prev and _ob_next are non-NULL already. If
> + * force is false and _ob_prev or _ob_next are non-NULL, do nothing.
> + * force should be true if and only if op points to freshly allocated,
> + * uninitialized memory, or you've unlinked op from the list and are
> + * relinking it into the front.
> + * Note that objects are normally added to the list via _Py_NewReference,
> + * which is called by PyObject_Init. Not all objects are initialized that
> + * way, though; exceptions include statically allocated type objects, and
> + * statically allocated singletons (like Py_True and Py_None).
> + */
> +void
> +_Py_AddToAllObjects(PyObject *op, int force)
> +{
> +#ifdef Py_DEBUG
> + if (!force) {
> + /* If it's initialized memory, op must be in or out of
> + * the list unambiguously.
> + */
> + assert((op->_ob_prev == NULL) == (op->_ob_next == NULL));
> + }
> +#endif
> + if (force || op->_ob_prev == NULL) {
> + op->_ob_next = refchain._ob_next;
> + op->_ob_prev = &refchain;
> + refchain._ob_next->_ob_prev = op;
> + refchain._ob_next = op;
> + }
> +}
> +#endif /* Py_TRACE_REFS */
> +
> +#ifdef COUNT_ALLOCS
> +static PyTypeObject *type_list;
> +/* All types are added to type_list, at least when
> + they get one object created. That makes them
> + immortal, which unfortunately contributes to
> + garbage itself. If unlist_types_without_objects
> + is set, they will be removed from the type_list
> + once the last object is deallocated. */
> +static int unlist_types_without_objects;
> +extern Py_ssize_t tuple_zero_allocs, fast_tuple_allocs;
> +extern Py_ssize_t quick_int_allocs, quick_neg_int_allocs;
> +extern Py_ssize_t null_strings, one_strings;
> +void
> +dump_counts(FILE* f)
> +{
> + PyTypeObject *tp;
> + PyObject *xoptions, *value;
> + _Py_IDENTIFIER(showalloccount);
> +
> + xoptions = PySys_GetXOptions();
> + if (xoptions == NULL)
> + return;
> + value = _PyDict_GetItemId(xoptions, &PyId_showalloccount);
> + if (value != Py_True)
> + return;
> +
> + for (tp = type_list; tp; tp = tp->tp_next)
> + fprintf(f, "%s alloc'd: %" PY_FORMAT_SIZE_T "d, "
> + "freed: %" PY_FORMAT_SIZE_T "d, "
> + "max in use: %" PY_FORMAT_SIZE_T "d\n",
> + tp->tp_name, tp->tp_allocs, tp->tp_frees,
> + tp->tp_maxalloc);
> + fprintf(f, "fast tuple allocs: %" PY_FORMAT_SIZE_T "d, "
> + "empty: %" PY_FORMAT_SIZE_T "d\n",
> + fast_tuple_allocs, tuple_zero_allocs);
> + fprintf(f, "fast int allocs: pos: %" PY_FORMAT_SIZE_T "d, "
> + "neg: %" PY_FORMAT_SIZE_T "d\n",
> + quick_int_allocs, quick_neg_int_allocs);
> + fprintf(f, "null strings: %" PY_FORMAT_SIZE_T "d, "
> + "1-strings: %" PY_FORMAT_SIZE_T "d\n",
> + null_strings, one_strings);
> +}
> +
> +PyObject *
> +get_counts(void)
> +{
> + PyTypeObject *tp;
> + PyObject *result;
> + PyObject *v;
> +
> + result = PyList_New(0);
> + if (result == NULL)
> + return NULL;
> + for (tp = type_list; tp; tp = tp->tp_next) {
> + v = Py_BuildValue("(snnn)", tp->tp_name, tp->tp_allocs,
> + tp->tp_frees, tp->tp_maxalloc);
> + if (v == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + if (PyList_Append(result, v) < 0) {
> + Py_DECREF(v);
> + Py_DECREF(result);
> + return NULL;
> + }
> + Py_DECREF(v);
> + }
> + return result;
> +}
> +
> +void
> +inc_count(PyTypeObject *tp)
> +{
> + if (tp->tp_next == NULL && tp->tp_prev == NULL) {
> + /* first time; insert in linked list */
> + if (tp->tp_next != NULL) /* sanity check */
> + Py_FatalError("XXX inc_count sanity check");
> + if (type_list)
> + type_list->tp_prev = tp;
> + tp->tp_next = type_list;
> + /* Note that as of Python 2.2, heap-allocated type objects
> + * can go away, but this code requires that they stay alive
> + * until program exit. That's why we're careful with
> + * refcounts here. type_list gets a new reference to tp,
> + * while ownership of the reference type_list used to hold
> + * (if any) was transferred to tp->tp_next in the line above.
> + * tp is thus effectively immortal after this.
> + */
> + Py_INCREF(tp);
> + type_list = tp;
> +#ifdef Py_TRACE_REFS
> + /* Also insert in the doubly-linked list of all objects,
> + * if not already there.
> + */
> + _Py_AddToAllObjects((PyObject *)tp, 0);
> +#endif
> + }
> + tp->tp_allocs++;
> + if (tp->tp_allocs - tp->tp_frees > tp->tp_maxalloc)
> + tp->tp_maxalloc = tp->tp_allocs - tp->tp_frees;
> +}
> +
> +void dec_count(PyTypeObject *tp)
> +{
> + tp->tp_frees++;
> + if (unlist_types_without_objects &&
> + tp->tp_allocs == tp->tp_frees) {
> + /* unlink the type from type_list */
> + if (tp->tp_prev)
> + tp->tp_prev->tp_next = tp->tp_next;
> + else
> + type_list = tp->tp_next;
> + if (tp->tp_next)
> + tp->tp_next->tp_prev = tp->tp_prev;
> + tp->tp_next = tp->tp_prev = NULL;
> + Py_DECREF(tp);
> + }
> +}
> +
> +#endif
> +
> +#ifdef Py_REF_DEBUG
> +/* Log a fatal error; doesn't return. */
> +void
> +_Py_NegativeRefcount(const char *fname, int lineno, PyObject *op)
> +{
> + char buf[300];
> +
> + PyOS_snprintf(buf, sizeof(buf),
> + "%s:%i object at %p has negative ref count "
> + "%" PY_FORMAT_SIZE_T "d",
> + fname, lineno, op, op->ob_refcnt);
> + Py_FatalError(buf);
> +}
> +
> +#endif /* Py_REF_DEBUG */
> +
> +void
> +Py_IncRef(PyObject *o)
> +{
> + Py_XINCREF(o);
> +}
> +
> +void
> +Py_DecRef(PyObject *o)
> +{
> + Py_XDECREF(o);
> +}
> +
> +PyObject *
> +PyObject_Init(PyObject *op, PyTypeObject *tp)
> +{
> + if (op == NULL)
> + return PyErr_NoMemory();
> + /* Any changes should be reflected in PyObject_INIT (objimpl.h) */
> + Py_TYPE(op) = tp;
> + _Py_NewReference(op);
> + return op;
> +}
> +
> +PyVarObject *
> +PyObject_InitVar(PyVarObject *op, PyTypeObject *tp, Py_ssize_t size)
> +{
> + if (op == NULL)
> + return (PyVarObject *) PyErr_NoMemory();
> + /* Any changes should be reflected in PyObject_INIT_VAR */
> + op->ob_size = size;
> + Py_TYPE(op) = tp;
> + _Py_NewReference((PyObject *)op);
> + return op;
> +}
> +
> +PyObject *
> +_PyObject_New(PyTypeObject *tp)
> +{
> + PyObject *op;
> + op = (PyObject *) PyObject_MALLOC(_PyObject_SIZE(tp));
> + if (op == NULL)
> + return PyErr_NoMemory();
> + return PyObject_INIT(op, tp);
> +}
> +
> +PyVarObject *
> +_PyObject_NewVar(PyTypeObject *tp, Py_ssize_t nitems)
> +{
> + PyVarObject *op;
> + const size_t size = _PyObject_VAR_SIZE(tp, nitems);
> + op = (PyVarObject *) PyObject_MALLOC(size);
> + if (op == NULL)
> + return (PyVarObject *)PyErr_NoMemory();
> + return PyObject_INIT_VAR(op, tp, nitems);
> +}
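> +
> +/* Illustrative sketch (not part of the upstream sources): extension modules
> + * normally reach the allocation routines above through the public
> + * PyObject_New()/PyObject_NewVar() macros and then initialize every field
> + * themselves, since the allocator does not. The names MyObject,
> + * MyObject_Type and 'value' below are hypothetical.
> + *
> + *     typedef struct {
> + *         PyObject_HEAD
> + *         int value;                  // example payload
> + *     } MyObject;
> + *
> + *     static MyObject *
> + *     myobject_create(int value)
> + *     {
> + *         MyObject *op = PyObject_New(MyObject, &MyObject_Type);
> + *         if (op == NULL)
> + *             return NULL;            // PyErr_NoMemory() already set
> + *         op->value = value;          // caller initializes its own fields
> + *         return op;
> + *     }
> + */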
> +
> +void
> +PyObject_CallFinalizer(PyObject *self)
> +{
> + PyTypeObject *tp = Py_TYPE(self);
> +
> + /* A missing Py_TPFLAGS_HAVE_FINALIZE flag can happen on heap types
> + created from the C API, e.g. PyType_FromSpec(). */
> + if (!PyType_HasFeature(tp, Py_TPFLAGS_HAVE_FINALIZE) ||
> + tp->tp_finalize == NULL)
> + return;
> + /* tp_finalize should only be called once. */
> + if (PyType_IS_GC(tp) && _PyGC_FINALIZED(self))
> + return;
> +
> + tp->tp_finalize(self);
> + if (PyType_IS_GC(tp))
> + _PyGC_SET_FINALIZED(self, 1);
> +}
> +
> +int
> +PyObject_CallFinalizerFromDealloc(PyObject *self)
> +{
> + Py_ssize_t refcnt;
> +
> + /* Temporarily resurrect the object. */
> + if (self->ob_refcnt != 0) {
> + Py_FatalError("PyObject_CallFinalizerFromDealloc called on "
> + "object with a non-zero refcount");
> + }
> + self->ob_refcnt = 1;
> +
> + PyObject_CallFinalizer(self);
> +
> + /* Undo the temporary resurrection; can't use DECREF here, it would
> + * cause a recursive call.
> + */
> + assert(self->ob_refcnt > 0);
> + if (--self->ob_refcnt == 0)
> + return 0; /* this is the normal path out */
> +
> + /* tp_finalize resurrected it! Make it look like the original Py_DECREF
> + * never happened.
> + */
> + refcnt = self->ob_refcnt;
> + _Py_NewReference(self);
> + self->ob_refcnt = refcnt;
> +
> + if (PyType_IS_GC(Py_TYPE(self))) {
> + assert(_PyGC_REFS(self) != _PyGC_REFS_UNTRACKED);
> + }
> + /* If Py_REF_DEBUG, _Py_NewReference bumped _Py_RefTotal, so
> + * we need to undo that. */
> + _Py_DEC_REFTOTAL;
> + /* If Py_TRACE_REFS, _Py_NewReference re-added self to the object
> + * chain, so no more to do there.
> + * If COUNT_ALLOCS, the original decref bumped tp_frees, and
> + * _Py_NewReference bumped tp_allocs: both of those need to be
> + * undone.
> + */
> +#ifdef COUNT_ALLOCS
> + --Py_TYPE(self)->tp_frees;
> + --Py_TYPE(self)->tp_allocs;
> +#endif
> + return -1;
> +}
> +
> +int
> +PyObject_Print(PyObject *op, FILE *fp, int flags)
> +{
> + int ret = 0;
> + if (PyErr_CheckSignals())
> + return -1;
> +#ifdef USE_STACKCHECK
> + if (PyOS_CheckStack()) {
> + PyErr_SetString(PyExc_MemoryError, "stack overflow");
> + return -1;
> + }
> +#endif
> + clearerr(fp); /* Clear any previous error condition */
> + if (op == NULL) {
> + Py_BEGIN_ALLOW_THREADS
> + fprintf(fp, "<nil>");
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + if (op->ob_refcnt <= 0)
> + /* XXX(twouters) cast refcount to long until %zd is
> + universally available */
> + Py_BEGIN_ALLOW_THREADS
> + fprintf(fp, "<refcnt %ld at %p>",
> + (long)op->ob_refcnt, op);
> + Py_END_ALLOW_THREADS
> + else {
> + PyObject *s;
> + if (flags & Py_PRINT_RAW)
> + s = PyObject_Str(op);
> + else
> + s = PyObject_Repr(op);
> + if (s == NULL)
> + ret = -1;
> + else if (PyBytes_Check(s)) {
> + fwrite(PyBytes_AS_STRING(s), 1,
> + PyBytes_GET_SIZE(s), fp);
> + }
> + else if (PyUnicode_Check(s)) {
> + PyObject *t;
> + t = PyUnicode_AsEncodedString(s, "utf-8", "backslashreplace");
> + if (t == NULL) {
> + ret = -1;
> + }
> + else {
> + fwrite(PyBytes_AS_STRING(t), 1,
> + PyBytes_GET_SIZE(t), fp);
> + Py_DECREF(t);
> + }
> + }
> + else {
> + PyErr_Format(PyExc_TypeError,
> + "str() or repr() returned '%.100s'",
> + s->ob_type->tp_name);
> + ret = -1;
> + }
> + Py_XDECREF(s);
> + }
> + }
> + if (ret == 0) {
> + if (ferror(fp)) {
> + PyErr_SetFromErrno(PyExc_IOError);
> + clearerr(fp);
> + ret = -1;
> + }
> + }
> + return ret;
> +}
> +
> +/* For debugging convenience. Set a breakpoint here and call it from your DLL */
> +void
> +_Py_BreakPoint(void)
> +{
> +}
> +
> +
> +/* Heuristic checking if the object memory has been deallocated.
> + Rely on the debug hooks of the Python memory allocators, which fill the
> + memory with DEADBYTE (0xDB) when memory is deallocated.
> +
> + The function can be used to prevent a segmentation fault when dereferencing
> + pointers like 0xdbdbdbdbdbdbdbdb. Such a pointer is very unlikely to be
> + mapped in memory. */
> +int
> +_PyObject_IsFreed(PyObject *op)
> +{
> + uintptr_t ptr = (uintptr_t)op;
> + if (_PyMem_IsFreed(&ptr, sizeof(ptr))) {
> + return 1;
> + }
> + int freed = _PyMem_IsFreed(&op->ob_type, sizeof(op->ob_type));
> + /* ignore op->ob_refcnt: its value can have been modified
> + by Py_INCREF() and Py_DECREF(). */
> +#ifdef Py_TRACE_REFS
> + freed &= _PyMem_IsFreed(&op->_ob_next, sizeof(op->_ob_next));
> + freed &= _PyMem_IsFreed(&op->_ob_prev, sizeof(op->_ob_prev));
> +#endif
> + return freed;
> +}
> +
> +
> +/* For debugging convenience. See Misc/gdbinit for some useful gdb hooks */
> +void
> +_PyObject_Dump(PyObject* op)
> +{
> + if (op == NULL) {
> + fprintf(stderr, "<NULL object>\n");
> + fflush(stderr);
> + return;
> + }
> +
> + if (_PyObject_IsFreed(op)) {
> + /* It seems like the object memory has been freed:
> + don't access it to prevent a segmentation fault. */
> + fprintf(stderr, "<freed object>\n");
> + return;
> + }
> +
> + PyGILState_STATE gil;
> + PyObject *error_type, *error_value, *error_traceback;
> +
> + fprintf(stderr, "object : ");
> + fflush(stderr);
> +#ifdef WITH_THREAD
> + gil = PyGILState_Ensure();
> +#endif
> + PyErr_Fetch(&error_type, &error_value, &error_traceback);
> + (void)PyObject_Print(op, stderr, 0);
> + fflush(stderr);
> + PyErr_Restore(error_type, error_value, error_traceback);
> +#ifdef WITH_THREAD
> + PyGILState_Release(gil);
> +#endif
> + /* XXX(twouters) cast refcount to long until %zd is
> + universally available */
> + fprintf(stderr, "\n"
> + "type : %s\n"
> + "refcount: %ld\n"
> + "address : %p\n",
> + Py_TYPE(op)==NULL ? "NULL" : Py_TYPE(op)->tp_name,
> + (long)op->ob_refcnt,
> + op);
> + fflush(stderr);
> +}
> +
> +PyObject *
> +PyObject_Repr(PyObject *v)
> +{
> + PyObject *res;
> + if (PyErr_CheckSignals())
> + return NULL;
> +#ifdef USE_STACKCHECK
> + if (PyOS_CheckStack()) {
> + PyErr_SetString(PyExc_MemoryError, "stack overflow");
> + return NULL;
> + }
> +#endif
> + if (v == NULL)
> + return PyUnicode_FromString("<NULL>");
> + if (Py_TYPE(v)->tp_repr == NULL)
> + return PyUnicode_FromFormat("<%s object at %p>",
> + v->ob_type->tp_name, v);
> +
> +#ifdef Py_DEBUG
> + /* PyObject_Repr() must not be called with an exception set,
> + because it may clear it (directly or indirectly) and so the
> + caller loses its exception */
> + assert(!PyErr_Occurred());
> +#endif
> +
> + /* It is possible for a type to have a tp_repr representation that loops
> + infinitely. */
> + if (Py_EnterRecursiveCall(" while getting the repr of an object"))
> + return NULL;
> + res = (*v->ob_type->tp_repr)(v);
> + Py_LeaveRecursiveCall();
> + if (res == NULL)
> + return NULL;
> + if (!PyUnicode_Check(res)) {
> + PyErr_Format(PyExc_TypeError,
> + "__repr__ returned non-string (type %.200s)",
> + res->ob_type->tp_name);
> + Py_DECREF(res);
> + return NULL;
> + }
> +#ifndef Py_DEBUG
> + if (PyUnicode_READY(res) < 0)
> + return NULL;
> +#endif
> + return res;
> +}
> +
> +PyObject *
> +PyObject_Str(PyObject *v)
> +{
> + PyObject *res;
> + if (PyErr_CheckSignals())
> + return NULL;
> +#ifdef USE_STACKCHECK
> + if (PyOS_CheckStack()) {
> + PyErr_SetString(PyExc_MemoryError, "stack overflow");
> + return NULL;
> + }
> +#endif
> + if (v == NULL)
> + return PyUnicode_FromString("<NULL>");
> + if (PyUnicode_CheckExact(v)) {
> +#ifndef Py_DEBUG
> + if (PyUnicode_READY(v) < 0)
> + return NULL;
> +#endif
> + Py_INCREF(v);
> + return v;
> + }
> + if (Py_TYPE(v)->tp_str == NULL)
> + return PyObject_Repr(v);
> +
> +#ifdef Py_DEBUG
> + /* PyObject_Str() must not be called with an exception set,
> + because it may clear it (directly or indirectly) and so the
> + caller loses its exception */
> + assert(!PyErr_Occurred());
> +#endif
> +
> + /* It is possible for a type to have a tp_str representation that loops
> + infinitely. */
> + if (Py_EnterRecursiveCall(" while getting the str of an object"))
> + return NULL;
> + res = (*Py_TYPE(v)->tp_str)(v);
> + Py_LeaveRecursiveCall();
> + if (res == NULL)
> + return NULL;
> + if (!PyUnicode_Check(res)) {
> + PyErr_Format(PyExc_TypeError,
> + "__str__ returned non-string (type %.200s)",
> + Py_TYPE(res)->tp_name);
> + Py_DECREF(res);
> + return NULL;
> + }
> +#ifndef Py_DEBUG
> + if (PyUnicode_READY(res) < 0)
> + return NULL;
> +#endif
> + assert(_PyUnicode_CheckConsistency(res, 1));
> + return res;
> +}
> +
> +PyObject *
> +PyObject_ASCII(PyObject *v)
> +{
> + PyObject *repr, *ascii, *res;
> +
> + repr = PyObject_Repr(v);
> + if (repr == NULL)
> + return NULL;
> +
> + if (PyUnicode_IS_ASCII(repr))
> + return repr;
> +
> + /* repr is guaranteed to be a PyUnicode object by PyObject_Repr */
> + ascii = _PyUnicode_AsASCIIString(repr, "backslashreplace");
> + Py_DECREF(repr);
> + if (ascii == NULL)
> + return NULL;
> +
> + res = PyUnicode_DecodeASCII(
> + PyBytes_AS_STRING(ascii),
> + PyBytes_GET_SIZE(ascii),
> + NULL);
> +
> + Py_DECREF(ascii);
> + return res;
> +}
> +
> +PyObject *
> +PyObject_Bytes(PyObject *v)
> +{
> + PyObject *result, *func;
> +
> + if (v == NULL)
> + return PyBytes_FromString("<NULL>");
> +
> + if (PyBytes_CheckExact(v)) {
> + Py_INCREF(v);
> + return v;
> + }
> +
> + func = _PyObject_LookupSpecial(v, &PyId___bytes__);
> + if (func != NULL) {
> + result = PyObject_CallFunctionObjArgs(func, NULL);
> + Py_DECREF(func);
> + if (result == NULL)
> + return NULL;
> + if (!PyBytes_Check(result)) {
> + PyErr_Format(PyExc_TypeError,
> + "__bytes__ returned non-bytes (type %.200s)",
> + Py_TYPE(result)->tp_name);
> + Py_DECREF(result);
> + return NULL;
> + }
> + return result;
> + }
> + else if (PyErr_Occurred())
> + return NULL;
> + return PyBytes_FromObject(v);
> +}
> +
> +/* For Python 3.0.1 and later, the old three-way comparison has been
> + completely removed in favour of rich comparisons. PyObject_Compare() and
> + PyObject_Cmp() are gone, and the builtin cmp function no longer exists.
> + The old tp_compare slot has been renamed to tp_reserved, and should no
> + longer be used. Use tp_richcompare instead.
> +
> + See (*) below for practical amendments.
> +
> + tp_richcompare gets called with a first argument of the appropriate type
> + and a second object of an arbitrary type. We never do any kind of
> + coercion.
> +
> + The tp_richcompare slot should return an object, as follows:
> +
> + NULL if an exception occurred
> + NotImplemented if the requested comparison is not implemented
> + any other false value if the requested comparison is false
> + any other true value if the requested comparison is true
> +
> + The PyObject_RichCompare[Bool]() wrappers raise TypeError when they get
> + NotImplemented.
> +
> + (*) Practical amendments:
> +
> + - If rich comparison returns NotImplemented, == and != are decided by
> + comparing the object pointer (i.e. falling back to the base object
> + implementation).
> +
> +*/
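> +
> +/* Illustrative sketch (not part of the upstream sources): a minimal
> + * tp_richcompare slot that follows the protocol described above, returning
> + * NotImplemented for unsupported operators or operand types. MyObject,
> + * MyObject_Check and the 'value' field are hypothetical.
> + *
> + *     static PyObject *
> + *     myobject_richcompare(PyObject *v, PyObject *w, int op)
> + *     {
> + *         int equal;
> + *         if (!MyObject_Check(v) || !MyObject_Check(w) ||
> + *             (op != Py_EQ && op != Py_NE))
> + *             Py_RETURN_NOTIMPLEMENTED;      // let the other operand try
> + *         equal = (((MyObject *)v)->value == ((MyObject *)w)->value);
> + *         if (op == Py_NE)
> + *             equal = !equal;
> + *         return PyBool_FromLong(equal);     // new reference, as required
> + *     }
> + */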
> +
> +/* Map rich comparison operators to their swapped version, e.g. LT <--> GT */
> +int _Py_SwappedOp[] = {Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, Py_LE};
> +
> +static const char * const opstrings[] = {"<", "<=", "==", "!=", ">", ">="};
> +
> +/* Perform a rich comparison, raising TypeError when the requested comparison
> + operator is not supported. */
> +static PyObject *
> +do_richcompare(PyObject *v, PyObject *w, int op)
> +{
> + richcmpfunc f;
> + PyObject *res;
> + int checked_reverse_op = 0;
> +
> + if (v->ob_type != w->ob_type &&
> + PyType_IsSubtype(w->ob_type, v->ob_type) &&
> + (f = w->ob_type->tp_richcompare) != NULL) {
> + checked_reverse_op = 1;
> + res = (*f)(w, v, _Py_SwappedOp[op]);
> + if (res != Py_NotImplemented)
> + return res;
> + Py_DECREF(res);
> + }
> + if ((f = v->ob_type->tp_richcompare) != NULL) {
> + res = (*f)(v, w, op);
> + if (res != Py_NotImplemented)
> + return res;
> + Py_DECREF(res);
> + }
> + if (!checked_reverse_op && (f = w->ob_type->tp_richcompare) != NULL) {
> + res = (*f)(w, v, _Py_SwappedOp[op]);
> + if (res != Py_NotImplemented)
> + return res;
> + Py_DECREF(res);
> + }
> + /* If neither object implements it, provide a sensible default
> + for == and !=, but raise an exception for ordering. */
> + switch (op) {
> + case Py_EQ:
> + res = (v == w) ? Py_True : Py_False;
> + break;
> + case Py_NE:
> + res = (v != w) ? Py_True : Py_False;
> + break;
> + default:
> + PyErr_Format(PyExc_TypeError,
> + "'%s' not supported between instances of '%.100s' and '%.100s'",
> + opstrings[op],
> + v->ob_type->tp_name,
> + w->ob_type->tp_name);
> + return NULL;
> + }
> + Py_INCREF(res);
> + return res;
> +}
> +
> +/* Perform a rich comparison with object result. This wraps do_richcompare()
> + with a check for NULL arguments and a recursion check. */
> +
> +PyObject *
> +PyObject_RichCompare(PyObject *v, PyObject *w, int op)
> +{
> + PyObject *res;
> +
> + assert(Py_LT <= op && op <= Py_GE);
> + if (v == NULL || w == NULL) {
> + if (!PyErr_Occurred())
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (Py_EnterRecursiveCall(" in comparison"))
> + return NULL;
> + res = do_richcompare(v, w, op);
> + Py_LeaveRecursiveCall();
> + return res;
> +}
> +
> +/* Perform a rich comparison with integer result. This wraps
> + PyObject_RichCompare(), returning -1 for error, 0 for false, 1 for true. */
> +int
> +PyObject_RichCompareBool(PyObject *v, PyObject *w, int op)
> +{
> + PyObject *res;
> + int ok;
> +
> + /* Quick result when objects are the same.
> + Guarantees that identity implies equality. */
> + if (v == w) {
> + if (op == Py_EQ)
> + return 1;
> + else if (op == Py_NE)
> + return 0;
> + }
> +
> + res = PyObject_RichCompare(v, w, op);
> + if (res == NULL)
> + return -1;
> + if (PyBool_Check(res))
> + ok = (res == Py_True);
> + else
> + ok = PyObject_IsTrue(res);
> + Py_DECREF(res);
> + return ok;
> +}
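> +
> +/* Illustrative sketch (not part of the upstream sources): caller-side
> + * handling of the -1/0/1 result convention used above; 'a' and 'b' stand
> + * for arbitrary PyObject pointers owned by the caller.
> + *
> + *     int eq = PyObject_RichCompareBool(a, b, Py_EQ);
> + *     if (eq < 0)
> + *         return NULL;       // the comparison raised; propagate the error
> + *     if (eq)
> + *         ...                // the objects compared equal
> + */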
> +
> +Py_hash_t
> +PyObject_HashNotImplemented(PyObject *v)
> +{
> + PyErr_Format(PyExc_TypeError, "unhashable type: '%.200s'",
> + Py_TYPE(v)->tp_name);
> + return -1;
> +}
> +
> +Py_hash_t
> +PyObject_Hash(PyObject *v)
> +{
> + PyTypeObject *tp = Py_TYPE(v);
> + if (tp->tp_hash != NULL)
> + return (*tp->tp_hash)(v);
> + /* To keep to the general practice that inheriting
> + * solely from object in C code should work without
> + * an explicit call to PyType_Ready, we implicitly call
> + * PyType_Ready here and then check the tp_hash slot again
> + */
> + if (tp->tp_dict == NULL) {
> + if (PyType_Ready(tp) < 0)
> + return -1;
> + if (tp->tp_hash != NULL)
> + return (*tp->tp_hash)(v);
> + }
> + /* Otherwise, the object can't be hashed */
> + return PyObject_HashNotImplemented(v);
> +}
> +
> +PyObject *
> +PyObject_GetAttrString(PyObject *v, const char *name)
> +{
> + PyObject *w, *res;
> +
> + if (Py_TYPE(v)->tp_getattr != NULL)
> + return (*Py_TYPE(v)->tp_getattr)(v, (char*)name);
> + w = PyUnicode_InternFromString(name);
> + if (w == NULL)
> + return NULL;
> + res = PyObject_GetAttr(v, w);
> + Py_DECREF(w);
> + return res;
> +}
> +
> +int
> +PyObject_HasAttrString(PyObject *v, const char *name)
> +{
> + PyObject *res = PyObject_GetAttrString(v, name);
> + if (res != NULL) {
> + Py_DECREF(res);
> + return 1;
> + }
> + PyErr_Clear();
> + return 0;
> +}
> +
> +int
> +PyObject_SetAttrString(PyObject *v, const char *name, PyObject *w)
> +{
> + PyObject *s;
> + int res;
> +
> + if (Py_TYPE(v)->tp_setattr != NULL)
> + return (*Py_TYPE(v)->tp_setattr)(v, (char*)name, w);
> + s = PyUnicode_InternFromString(name);
> + if (s == NULL)
> + return -1;
> + res = PyObject_SetAttr(v, s, w);
> + Py_XDECREF(s);
> + return res;
> +}
> +
> +int
> +_PyObject_IsAbstract(PyObject *obj)
> +{
> + int res;
> + PyObject* isabstract;
> +
> + if (obj == NULL)
> + return 0;
> +
> + isabstract = _PyObject_GetAttrId(obj, &PyId___isabstractmethod__);
> + if (isabstract == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
> + PyErr_Clear();
> + return 0;
> + }
> + return -1;
> + }
> + res = PyObject_IsTrue(isabstract);
> + Py_DECREF(isabstract);
> + return res;
> +}
> +
> +PyObject *
> +_PyObject_GetAttrId(PyObject *v, _Py_Identifier *name)
> +{
> + PyObject *result;
> + PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
> + if (!oname)
> + return NULL;
> + result = PyObject_GetAttr(v, oname);
> + return result;
> +}
> +
> +int
> +_PyObject_HasAttrId(PyObject *v, _Py_Identifier *name)
> +{
> + int result;
> + PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
> + if (!oname)
> + return -1;
> + result = PyObject_HasAttr(v, oname);
> + return result;
> +}
> +
> +int
> +_PyObject_SetAttrId(PyObject *v, _Py_Identifier *name, PyObject *w)
> +{
> + int result;
> + PyObject *oname = _PyUnicode_FromId(name); /* borrowed */
> + if (!oname)
> + return -1;
> + result = PyObject_SetAttr(v, oname, w);
> + return result;
> +}
> +
> +PyObject *
> +PyObject_GetAttr(PyObject *v, PyObject *name)
> +{
> + PyTypeObject *tp = Py_TYPE(v);
> +
> + if (!PyUnicode_Check(name)) {
> + PyErr_Format(PyExc_TypeError,
> + "attribute name must be string, not '%.200s'",
> + name->ob_type->tp_name);
> + return NULL;
> + }
> + if (tp->tp_getattro != NULL)
> + return (*tp->tp_getattro)(v, name);
> + if (tp->tp_getattr != NULL) {
> + char *name_str = PyUnicode_AsUTF8(name);
> + if (name_str == NULL)
> + return NULL;
> + return (*tp->tp_getattr)(v, name_str);
> + }
> + PyErr_Format(PyExc_AttributeError,
> + "'%.50s' object has no attribute '%U'",
> + tp->tp_name, name);
> + return NULL;
> +}
> +
> +int
> +PyObject_HasAttr(PyObject *v, PyObject *name)
> +{
> + PyObject *res = PyObject_GetAttr(v, name);
> + if (res != NULL) {
> + Py_DECREF(res);
> + return 1;
> + }
> + PyErr_Clear();
> + return 0;
> +}
> +
> +int
> +PyObject_SetAttr(PyObject *v, PyObject *name, PyObject *value)
> +{
> + PyTypeObject *tp = Py_TYPE(v);
> + int err;
> +
> + if (!PyUnicode_Check(name)) {
> + PyErr_Format(PyExc_TypeError,
> + "attribute name must be string, not '%.200s'",
> + name->ob_type->tp_name);
> + return -1;
> + }
> + Py_INCREF(name);
> +
> + PyUnicode_InternInPlace(&name);
> + if (tp->tp_setattro != NULL) {
> + err = (*tp->tp_setattro)(v, name, value);
> + Py_DECREF(name);
> + return err;
> + }
> + if (tp->tp_setattr != NULL) {
> + char *name_str = PyUnicode_AsUTF8(name);
> + if (name_str == NULL)
> + return -1;
> + err = (*tp->tp_setattr)(v, name_str, value);
> + Py_DECREF(name);
> + return err;
> + }
> + Py_DECREF(name);
> + assert(name->ob_refcnt >= 1);
> + if (tp->tp_getattr == NULL && tp->tp_getattro == NULL)
> + PyErr_Format(PyExc_TypeError,
> + "'%.100s' object has no attributes "
> + "(%s .%U)",
> + tp->tp_name,
> + value==NULL ? "del" : "assign to",
> + name);
> + else
> + PyErr_Format(PyExc_TypeError,
> + "'%.100s' object has only read-only attributes "
> + "(%s .%U)",
> + tp->tp_name,
> + value==NULL ? "del" : "assign to",
> + name);
> + return -1;
> +}
> +
> +/* Helper to get a pointer to an object's __dict__ slot, if any */
> +
> +PyObject **
> +_PyObject_GetDictPtr(PyObject *obj)
> +{
> + Py_ssize_t dictoffset;
> + PyTypeObject *tp = Py_TYPE(obj);
> +
> + dictoffset = tp->tp_dictoffset;
> + if (dictoffset == 0)
> + return NULL;
> + if (dictoffset < 0) {
> + Py_ssize_t tsize;
> + size_t size;
> +
> + tsize = ((PyVarObject *)obj)->ob_size;
> + if (tsize < 0)
> + tsize = -tsize;
> + size = _PyObject_VAR_SIZE(tp, tsize);
> +
> + dictoffset += (long)size;
> + assert(dictoffset > 0);
> + assert(dictoffset % SIZEOF_VOID_P == 0);
> + }
> + return (PyObject **) ((char *)obj + dictoffset);
> +}
> +
> +PyObject *
> +PyObject_SelfIter(PyObject *obj)
> +{
> + Py_INCREF(obj);
> + return obj;
> +}
> +
> +/* Convenience function to get a builtin from its name */
> +PyObject *
> +_PyObject_GetBuiltin(const char *name)
> +{
> + PyObject *mod_name, *mod, *attr;
> +
> + mod_name = _PyUnicode_FromId(&PyId_builtins); /* borrowed */
> + if (mod_name == NULL)
> + return NULL;
> + mod = PyImport_Import(mod_name);
> + if (mod == NULL)
> + return NULL;
> + attr = PyObject_GetAttrString(mod, name);
> + Py_DECREF(mod);
> + return attr;
> +}
> +
> +/* Helper used when the __next__ method is removed from a type:
> + tp_iternext is never NULL and can be safely called without checking
> + on every iteration.
> + */
> +
> +PyObject *
> +_PyObject_NextNotImplemented(PyObject *self)
> +{
> + PyErr_Format(PyExc_TypeError,
> + "'%.200s' object is not iterable",
> + Py_TYPE(self)->tp_name);
> + return NULL;
> +}
> +
> +/* Generic GetAttr functions - put these in your tp_[gs]etattro slot */
> +
> +PyObject *
> +_PyObject_GenericGetAttrWithDict(PyObject *obj, PyObject *name, PyObject *dict)
> +{
> + PyTypeObject *tp = Py_TYPE(obj);
> + PyObject *descr = NULL;
> + PyObject *res = NULL;
> + descrgetfunc f;
> + Py_ssize_t dictoffset;
> + PyObject **dictptr;
> +
> + if (!PyUnicode_Check(name)){
> + PyErr_Format(PyExc_TypeError,
> + "attribute name must be string, not '%.200s'",
> + name->ob_type->tp_name);
> + return NULL;
> + }
> + Py_INCREF(name);
> +
> + if (tp->tp_dict == NULL) {
> + if (PyType_Ready(tp) < 0)
> + goto done;
> + }
> +
> + descr = _PyType_Lookup(tp, name);
> +
> + f = NULL;
> + if (descr != NULL) {
> + Py_INCREF(descr);
> + f = descr->ob_type->tp_descr_get;
> + if (f != NULL && PyDescr_IsData(descr)) {
> + res = f(descr, obj, (PyObject *)obj->ob_type);
> + goto done;
> + }
> + }
> +
> + if (dict == NULL) {
> + /* Inline _PyObject_GetDictPtr */
> + dictoffset = tp->tp_dictoffset;
> + if (dictoffset != 0) {
> + if (dictoffset < 0) {
> + Py_ssize_t tsize;
> + size_t size;
> +
> + tsize = ((PyVarObject *)obj)->ob_size;
> + if (tsize < 0)
> + tsize = -tsize;
> + size = _PyObject_VAR_SIZE(tp, tsize);
> + assert(size <= PY_SSIZE_T_MAX);
> +
> + dictoffset += (Py_ssize_t)size;
> + assert(dictoffset > 0);
> + assert(dictoffset % SIZEOF_VOID_P == 0);
> + }
> + dictptr = (PyObject **) ((char *)obj + dictoffset);
> + dict = *dictptr;
> + }
> + }
> + if (dict != NULL) {
> + Py_INCREF(dict);
> + res = PyDict_GetItem(dict, name);
> + if (res != NULL) {
> + Py_INCREF(res);
> + Py_DECREF(dict);
> + goto done;
> + }
> + Py_DECREF(dict);
> + }
> +
> + if (f != NULL) {
> + res = f(descr, obj, (PyObject *)Py_TYPE(obj));
> + goto done;
> + }
> +
> + if (descr != NULL) {
> + res = descr;
> + descr = NULL;
> + goto done;
> + }
> +
> + PyErr_Format(PyExc_AttributeError,
> + "'%.50s' object has no attribute '%U'",
> + tp->tp_name, name);
> + done:
> + Py_XDECREF(descr);
> + Py_DECREF(name);
> + return res;
> +}
> +
> +PyObject *
> +PyObject_GenericGetAttr(PyObject *obj, PyObject *name)
> +{
> + return _PyObject_GenericGetAttrWithDict(obj, name, NULL);
> +}
> +
> +int
> +_PyObject_GenericSetAttrWithDict(PyObject *obj, PyObject *name,
> + PyObject *value, PyObject *dict)
> +{
> + PyTypeObject *tp = Py_TYPE(obj);
> + PyObject *descr;
> + descrsetfunc f;
> + PyObject **dictptr;
> + int res = -1;
> +
> + if (!PyUnicode_Check(name)){
> + PyErr_Format(PyExc_TypeError,
> + "attribute name must be string, not '%.200s'",
> + name->ob_type->tp_name);
> + return -1;
> + }
> +
> + if (tp->tp_dict == NULL && PyType_Ready(tp) < 0)
> + return -1;
> +
> + Py_INCREF(name);
> +
> + descr = _PyType_Lookup(tp, name);
> +
> + if (descr != NULL) {
> + Py_INCREF(descr);
> + f = descr->ob_type->tp_descr_set;
> + if (f != NULL) {
> + res = f(descr, obj, value);
> + goto done;
> + }
> + }
> +
> + if (dict == NULL) {
> + dictptr = _PyObject_GetDictPtr(obj);
> + if (dictptr == NULL) {
> + if (descr == NULL) {
> + PyErr_Format(PyExc_AttributeError,
> + "'%.100s' object has no attribute '%U'",
> + tp->tp_name, name);
> + }
> + else {
> + PyErr_Format(PyExc_AttributeError,
> + "'%.50s' object attribute '%U' is read-only",
> + tp->tp_name, name);
> + }
> + goto done;
> + }
> + res = _PyObjectDict_SetItem(tp, dictptr, name, value);
> + }
> + else {
> + Py_INCREF(dict);
> + if (value == NULL)
> + res = PyDict_DelItem(dict, name);
> + else
> + res = PyDict_SetItem(dict, name, value);
> + Py_DECREF(dict);
> + }
> + if (res < 0 && PyErr_ExceptionMatches(PyExc_KeyError))
> + PyErr_SetObject(PyExc_AttributeError, name);
> +
> + done:
> + Py_XDECREF(descr);
> + Py_DECREF(name);
> + return res;
> +}
> +
> +int
> +PyObject_GenericSetAttr(PyObject *obj, PyObject *name, PyObject *value)
> +{
> + return _PyObject_GenericSetAttrWithDict(obj, name, value, NULL);
> +}
> +
> +int
> +PyObject_GenericSetDict(PyObject *obj, PyObject *value, void *context)
> +{
> + PyObject **dictptr = _PyObject_GetDictPtr(obj);
> + if (dictptr == NULL) {
> + PyErr_SetString(PyExc_AttributeError,
> + "This object has no __dict__");
> + return -1;
> + }
> + if (value == NULL) {
> + PyErr_SetString(PyExc_TypeError, "cannot delete __dict__");
> + return -1;
> + }
> + if (!PyDict_Check(value)) {
> + PyErr_Format(PyExc_TypeError,
> + "__dict__ must be set to a dictionary, "
> + "not a '%.200s'", Py_TYPE(value)->tp_name);
> + return -1;
> + }
> + Py_INCREF(value);
> + Py_XSETREF(*dictptr, value);
> + return 0;
> +}
> +
> +
> +/* Test a value used as condition, e.g., in a for or if statement.
> + Return -1 if an error occurred */
> +
> +int
> +PyObject_IsTrue(PyObject *v)
> +{
> + Py_ssize_t res;
> + if (v == Py_True)
> + return 1;
> + if (v == Py_False)
> + return 0;
> + if (v == Py_None)
> + return 0;
> + else if (v->ob_type->tp_as_number != NULL &&
> + v->ob_type->tp_as_number->nb_bool != NULL)
> + res = (*v->ob_type->tp_as_number->nb_bool)(v);
> + else if (v->ob_type->tp_as_mapping != NULL &&
> + v->ob_type->tp_as_mapping->mp_length != NULL)
> + res = (*v->ob_type->tp_as_mapping->mp_length)(v);
> + else if (v->ob_type->tp_as_sequence != NULL &&
> + v->ob_type->tp_as_sequence->sq_length != NULL)
> + res = (*v->ob_type->tp_as_sequence->sq_length)(v);
> + else
> + return 1;
> + /* if it is negative, it should be either -1 or -2 */
> + return (res > 0) ? 1 : Py_SAFE_DOWNCAST(res, Py_ssize_t, int);
> +}
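> +
> +/* Illustrative sketch (not part of the upstream sources): because
> + * PyObject_IsTrue() returns -1 on error, its result must not be treated as
> + * a plain boolean; 'cond' stands for any PyObject pointer owned by the
> + * caller.
> + *
> + *     int istrue = PyObject_IsTrue(cond);
> + *     if (istrue < 0)
> + *         return NULL;       // nb_bool/__len__ raised; propagate the error
> + *     if (istrue)
> + *         ...                // truthy path
> + */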
> +
> +/* equivalent of 'not v'
> + Return -1 if an error occurred */
> +
> +int
> +PyObject_Not(PyObject *v)
> +{
> + int res;
> + res = PyObject_IsTrue(v);
> + if (res < 0)
> + return res;
> + return res == 0;
> +}
> +
> +/* Test whether an object can be called */
> +
> +int
> +PyCallable_Check(PyObject *x)
> +{
> + if (x == NULL)
> + return 0;
> + return x->ob_type->tp_call != NULL;
> +}
> +
> +
> +/* Helper for PyObject_Dir without arguments: returns the local scope. */
> +static PyObject *
> +_dir_locals(void)
> +{
> + PyObject *names;
> + PyObject *locals;
> +
> + locals = PyEval_GetLocals();
> + if (locals == NULL)
> + return NULL;
> +
> + names = PyMapping_Keys(locals);
> + if (!names)
> + return NULL;
> + if (!PyList_Check(names)) {
> + PyErr_Format(PyExc_TypeError,
> + "dir(): expected keys() of locals to be a list, "
> + "not '%.200s'", Py_TYPE(names)->tp_name);
> + Py_DECREF(names);
> + return NULL;
> + }
> + if (PyList_Sort(names)) {
> + Py_DECREF(names);
> + return NULL;
> + }
> + /* the locals don't need to be DECREF'd */
> + return names;
> +}
> +
> +/* Helper for PyObject_Dir: object introspection. */
> +static PyObject *
> +_dir_object(PyObject *obj)
> +{
> + PyObject *result, *sorted;
> + PyObject *dirfunc = _PyObject_LookupSpecial(obj, &PyId___dir__);
> +
> + assert(obj);
> + if (dirfunc == NULL) {
> + if (!PyErr_Occurred())
> + PyErr_SetString(PyExc_TypeError, "object does not provide __dir__");
> + return NULL;
> + }
> + /* use __dir__ */
> + result = PyObject_CallFunctionObjArgs(dirfunc, NULL);
> + Py_DECREF(dirfunc);
> + if (result == NULL)
> + return NULL;
> + /* return sorted(result) */
> + sorted = PySequence_List(result);
> + Py_DECREF(result);
> + if (sorted == NULL)
> + return NULL;
> + if (PyList_Sort(sorted)) {
> + Py_DECREF(sorted);
> + return NULL;
> + }
> + return sorted;
> +}
> +
> +/* Implementation of dir() -- if obj is NULL, returns the names in the current
> + (local) scope. Otherwise, performs introspection of the object: returns a
> + sorted list of attribute names (supposedly) accessible from the object
> +*/
> +PyObject *
> +PyObject_Dir(PyObject *obj)
> +{
> + return (obj == NULL) ? _dir_locals() : _dir_object(obj);
> +}
> +
> +/*
> +None is a non-NULL undefined value.
> +There is (and should be!) no way to create other objects of this type,
> +so there is exactly one (which is indestructible, by the way).
> +*/
> +
> +/* ARGSUSED */
> +static PyObject *
> +none_repr(PyObject *op)
> +{
> + return PyUnicode_FromString("None");
> +}
> +
> +/* ARGSUSED */
> +static void
> +none_dealloc(PyObject* ignore)
> +{
> + /* This should never get called, but we also don't want to SEGV if
> + * we accidentally decref None out of existence.
> + */
> + Py_FatalError("deallocating None");
> +}
> +
> +static PyObject *
> +none_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
> +{
> + if (PyTuple_GET_SIZE(args) || (kwargs && PyDict_Size(kwargs))) {
> + PyErr_SetString(PyExc_TypeError, "NoneType takes no arguments");
> + return NULL;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +static int
> +none_bool(PyObject *v)
> +{
> + return 0;
> +}
> +
> +static PyNumberMethods none_as_number = {
> + 0, /* nb_add */
> + 0, /* nb_subtract */
> + 0, /* nb_multiply */
> + 0, /* nb_remainder */
> + 0, /* nb_divmod */
> + 0, /* nb_power */
> + 0, /* nb_negative */
> + 0, /* nb_positive */
> + 0, /* nb_absolute */
> + (inquiry)none_bool, /* nb_bool */
> + 0, /* nb_invert */
> + 0, /* nb_lshift */
> + 0, /* nb_rshift */
> + 0, /* nb_and */
> + 0, /* nb_xor */
> + 0, /* nb_or */
> + 0, /* nb_int */
> + 0, /* nb_reserved */
> + 0, /* nb_float */
> + 0, /* nb_inplace_add */
> + 0, /* nb_inplace_subtract */
> + 0, /* nb_inplace_multiply */
> + 0, /* nb_inplace_remainder */
> + 0, /* nb_inplace_power */
> + 0, /* nb_inplace_lshift */
> + 0, /* nb_inplace_rshift */
> + 0, /* nb_inplace_and */
> + 0, /* nb_inplace_xor */
> + 0, /* nb_inplace_or */
> + 0, /* nb_floor_divide */
> + 0, /* nb_true_divide */
> + 0, /* nb_inplace_floor_divide */
> + 0, /* nb_inplace_true_divide */
> + 0, /* nb_index */
> +};
> +
> +PyTypeObject _PyNone_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "NoneType",
> + 0,
> + 0,
> + none_dealloc, /*tp_dealloc*/ /*never called*/
> + 0, /*tp_print*/
> + 0, /*tp_getattr*/
> + 0, /*tp_setattr*/
> + 0, /*tp_reserved*/
> + none_repr, /*tp_repr*/
> + &none_as_number, /*tp_as_number*/
> + 0, /*tp_as_sequence*/
> + 0, /*tp_as_mapping*/
> + 0, /*tp_hash */
> + 0, /*tp_call */
> + 0, /*tp_str */
> + 0, /*tp_getattro */
> + 0, /*tp_setattro */
> + 0, /*tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /*tp_flags */
> + 0, /*tp_doc */
> + 0, /*tp_traverse */
> + 0, /*tp_clear */
> + 0, /*tp_richcompare */
> + 0, /*tp_weaklistoffset */
> + 0, /*tp_iter */
> + 0, /*tp_iternext */
> + 0, /*tp_methods */
> + 0, /*tp_members */
> + 0, /*tp_getset */
> + 0, /*tp_base */
> + 0, /*tp_dict */
> + 0, /*tp_descr_get */
> + 0, /*tp_descr_set */
> + 0, /*tp_dictoffset */
> + 0, /*tp_init */
> + 0, /*tp_alloc */
> + none_new, /*tp_new */
> +};
> +
> +PyObject _Py_NoneStruct = {
> + _PyObject_EXTRA_INIT
> + 1, &_PyNone_Type
> +};
> +
> +/* NotImplemented is an object that can be used to signal that an
> + operation is not implemented for the given type combination. */
> +
> +static PyObject *
> +NotImplemented_repr(PyObject *op)
> +{
> + return PyUnicode_FromString("NotImplemented");
> +}
> +
> +static PyObject *
> +NotImplemented_reduce(PyObject *op)
> +{
> + return PyUnicode_FromString("NotImplemented");
> +}
> +
> +static PyMethodDef notimplemented_methods[] = {
> + {"__reduce__", (PyCFunction)NotImplemented_reduce, METH_NOARGS, NULL},
> + {NULL, NULL}
> +};
> +
> +static PyObject *
> +notimplemented_new(PyTypeObject *type, PyObject *args, PyObject *kwargs)
> +{
> + if (PyTuple_GET_SIZE(args) || (kwargs && PyDict_Size(kwargs))) {
> + PyErr_SetString(PyExc_TypeError, "NotImplementedType takes no arguments");
> + return NULL;
> + }
> + Py_RETURN_NOTIMPLEMENTED;
> +}
> +
> +static void
> +notimplemented_dealloc(PyObject* ignore)
> +{
> + /* This should never get called, but we also don't want to SEGV if
> + * we accidentally decref NotImplemented out of existence.
> + */
> + Py_FatalError("deallocating NotImplemented");
> +}
> +
> +PyTypeObject _PyNotImplemented_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "NotImplementedType",
> + 0,
> + 0,
> + notimplemented_dealloc, /*tp_dealloc*/ /*never called*/
> + 0, /*tp_print*/
> + 0, /*tp_getattr*/
> + 0, /*tp_setattr*/
> + 0, /*tp_reserved*/
> + NotImplemented_repr, /*tp_repr*/
> + 0, /*tp_as_number*/
> + 0, /*tp_as_sequence*/
> + 0, /*tp_as_mapping*/
> + 0, /*tp_hash */
> + 0, /*tp_call */
> + 0, /*tp_str */
> + 0, /*tp_getattro */
> + 0, /*tp_setattro */
> + 0, /*tp_as_buffer */
> + Py_TPFLAGS_DEFAULT, /*tp_flags */
> + 0, /*tp_doc */
> + 0, /*tp_traverse */
> + 0, /*tp_clear */
> + 0, /*tp_richcompare */
> + 0, /*tp_weaklistoffset */
> + 0, /*tp_iter */
> + 0, /*tp_iternext */
> + notimplemented_methods, /*tp_methods */
> + 0, /*tp_members */
> + 0, /*tp_getset */
> + 0, /*tp_base */
> + 0, /*tp_dict */
> + 0, /*tp_descr_get */
> + 0, /*tp_descr_set */
> + 0, /*tp_dictoffset */
> + 0, /*tp_init */
> + 0, /*tp_alloc */
> + notimplemented_new, /*tp_new */
> +};
> +
> +PyObject _Py_NotImplementedStruct = {
> + _PyObject_EXTRA_INIT
> + 1, &_PyNotImplemented_Type
> +};
> +
> +void
> +_Py_ReadyTypes(void)
> +{
> + if (PyType_Ready(&PyBaseObject_Type) < 0)
> + Py_FatalError("Can't initialize object type");
> +
> + if (PyType_Ready(&PyType_Type) < 0)
> + Py_FatalError("Can't initialize type type");
> +
> + if (PyType_Ready(&_PyWeakref_RefType) < 0)
> + Py_FatalError("Can't initialize weakref type");
> +
> + if (PyType_Ready(&_PyWeakref_CallableProxyType) < 0)
> + Py_FatalError("Can't initialize callable weakref proxy type");
> +
> + if (PyType_Ready(&_PyWeakref_ProxyType) < 0)
> + Py_FatalError("Can't initialize weakref proxy type");
> +
> + if (PyType_Ready(&PyLong_Type) < 0)
> + Py_FatalError("Can't initialize int type");
> +
> + if (PyType_Ready(&PyBool_Type) < 0)
> + Py_FatalError("Can't initialize bool type");
> +
> + if (PyType_Ready(&PyByteArray_Type) < 0)
> + Py_FatalError("Can't initialize bytearray type");
> +
> + if (PyType_Ready(&PyBytes_Type) < 0)
> + Py_FatalError("Can't initialize 'str'");
> +
> + if (PyType_Ready(&PyList_Type) < 0)
> + Py_FatalError("Can't initialize list type");
> +
> + if (PyType_Ready(&_PyNone_Type) < 0)
> + Py_FatalError("Can't initialize None type");
> +
> + if (PyType_Ready(&_PyNotImplemented_Type) < 0)
> + Py_FatalError("Can't initialize NotImplemented type");
> +
> + if (PyType_Ready(&PyTraceBack_Type) < 0)
> + Py_FatalError("Can't initialize traceback type");
> +
> + if (PyType_Ready(&PySuper_Type) < 0)
> + Py_FatalError("Can't initialize super type");
> +
> + if (PyType_Ready(&PyRange_Type) < 0)
> + Py_FatalError("Can't initialize range type");
> +
> + if (PyType_Ready(&PyDict_Type) < 0)
> + Py_FatalError("Can't initialize dict type");
> +
> + if (PyType_Ready(&PyDictKeys_Type) < 0)
> + Py_FatalError("Can't initialize dict keys type");
> +
> + if (PyType_Ready(&PyDictValues_Type) < 0)
> + Py_FatalError("Can't initialize dict values type");
> +
> + if (PyType_Ready(&PyDictItems_Type) < 0)
> + Py_FatalError("Can't initialize dict items type");
> +
> + if (PyType_Ready(&PyODict_Type) < 0)
> + Py_FatalError("Can't initialize OrderedDict type");
> +
> + if (PyType_Ready(&PyODictKeys_Type) < 0)
> + Py_FatalError("Can't initialize odict_keys type");
> +
> + if (PyType_Ready(&PyODictItems_Type) < 0)
> + Py_FatalError("Can't initialize odict_items type");
> +
> + if (PyType_Ready(&PyODictValues_Type) < 0)
> + Py_FatalError("Can't initialize odict_values type");
> +
> + if (PyType_Ready(&PyODictIter_Type) < 0)
> + Py_FatalError("Can't initialize odict_keyiterator type");
> +
> + if (PyType_Ready(&PySet_Type) < 0)
> + Py_FatalError("Can't initialize set type");
> +
> + if (PyType_Ready(&PyUnicode_Type) < 0)
> + Py_FatalError("Can't initialize str type");
> +
> + if (PyType_Ready(&PySlice_Type) < 0)
> + Py_FatalError("Can't initialize slice type");
> +
> + if (PyType_Ready(&PyStaticMethod_Type) < 0)
> + Py_FatalError("Can't initialize static method type");
> +
> + if (PyType_Ready(&PyComplex_Type) < 0)
> + Py_FatalError("Can't initialize complex type");
> +
> + if (PyType_Ready(&PyFloat_Type) < 0)
> + Py_FatalError("Can't initialize float type");
> +
> + if (PyType_Ready(&PyFrozenSet_Type) < 0)
> + Py_FatalError("Can't initialize frozenset type");
> +
> + if (PyType_Ready(&PyProperty_Type) < 0)
> + Py_FatalError("Can't initialize property type");
> +
> + if (PyType_Ready(&_PyManagedBuffer_Type) < 0)
> + Py_FatalError("Can't initialize managed buffer type");
> +
> + if (PyType_Ready(&PyMemoryView_Type) < 0)
> + Py_FatalError("Can't initialize memoryview type");
> +
> + if (PyType_Ready(&PyTuple_Type) < 0)
> + Py_FatalError("Can't initialize tuple type");
> +
> + if (PyType_Ready(&PyEnum_Type) < 0)
> + Py_FatalError("Can't initialize enumerate type");
> +
> + if (PyType_Ready(&PyReversed_Type) < 0)
> + Py_FatalError("Can't initialize reversed type");
> +
> + if (PyType_Ready(&PyStdPrinter_Type) < 0)
> + Py_FatalError("Can't initialize StdPrinter");
> +
> + if (PyType_Ready(&PyCode_Type) < 0)
> + Py_FatalError("Can't initialize code type");
> +
> + if (PyType_Ready(&PyFrame_Type) < 0)
> + Py_FatalError("Can't initialize frame type");
> +
> + if (PyType_Ready(&PyCFunction_Type) < 0)
> + Py_FatalError("Can't initialize builtin function type");
> +
> + if (PyType_Ready(&PyMethod_Type) < 0)
> + Py_FatalError("Can't initialize method type");
> +
> + if (PyType_Ready(&PyFunction_Type) < 0)
> + Py_FatalError("Can't initialize function type");
> +
> + if (PyType_Ready(&PyDictProxy_Type) < 0)
> + Py_FatalError("Can't initialize dict proxy type");
> +
> + if (PyType_Ready(&PyGen_Type) < 0)
> + Py_FatalError("Can't initialize generator type");
> +
> + if (PyType_Ready(&PyGetSetDescr_Type) < 0)
> + Py_FatalError("Can't initialize get-set descriptor type");
> +
> + if (PyType_Ready(&PyWrapperDescr_Type) < 0)
> + Py_FatalError("Can't initialize wrapper type");
> +
> + if (PyType_Ready(&_PyMethodWrapper_Type) < 0)
> + Py_FatalError("Can't initialize method wrapper type");
> +
> + if (PyType_Ready(&PyEllipsis_Type) < 0)
> + Py_FatalError("Can't initialize ellipsis type");
> +
> + if (PyType_Ready(&PyMemberDescr_Type) < 0)
> + Py_FatalError("Can't initialize member descriptor type");
> +
> + if (PyType_Ready(&_PyNamespace_Type) < 0)
> + Py_FatalError("Can't initialize namespace type");
> +
> + if (PyType_Ready(&PyCapsule_Type) < 0)
> + Py_FatalError("Can't initialize capsule type");
> +
> + if (PyType_Ready(&PyLongRangeIter_Type) < 0)
> + Py_FatalError("Can't initialize long range iterator type");
> +
> + if (PyType_Ready(&PyCell_Type) < 0)
> + Py_FatalError("Can't initialize cell type");
> +
> + if (PyType_Ready(&PyInstanceMethod_Type) < 0)
> + Py_FatalError("Can't initialize instance method type");
> +
> + if (PyType_Ready(&PyClassMethodDescr_Type) < 0)
> + Py_FatalError("Can't initialize class method descr type");
> +
> + if (PyType_Ready(&PyMethodDescr_Type) < 0)
> + Py_FatalError("Can't initialize method descr type");
> +
> + if (PyType_Ready(&PyCallIter_Type) < 0)
> + Py_FatalError("Can't initialize call iter type");
> +
> + if (PyType_Ready(&PySeqIter_Type) < 0)
> + Py_FatalError("Can't initialize sequence iterator type");
> +
> + if (PyType_Ready(&PyCoro_Type) < 0)
> + Py_FatalError("Can't initialize coroutine type");
> +
> + if (PyType_Ready(&_PyCoroWrapper_Type) < 0)
> + Py_FatalError("Can't initialize coroutine wrapper type");
> +}
> +
> +
> +#ifdef Py_TRACE_REFS
> +
> +void
> +_Py_NewReference(PyObject *op)
> +{
> + _Py_INC_REFTOTAL;
> + op->ob_refcnt = 1;
> + _Py_AddToAllObjects(op, 1);
> + _Py_INC_TPALLOCS(op);
> +}
> +
> +void
> +_Py_ForgetReference(PyObject *op)
> +{
> +#ifdef SLOW_UNREF_CHECK
> + PyObject *p;
> +#endif
> + if (op->ob_refcnt < 0)
> + Py_FatalError("UNREF negative refcnt");
> + if (op == &refchain ||
> + op->_ob_prev->_ob_next != op || op->_ob_next->_ob_prev != op) {
> + fprintf(stderr, "* ob\n");
> + _PyObject_Dump(op);
> + fprintf(stderr, "* op->_ob_prev->_ob_next\n");
> + _PyObject_Dump(op->_ob_prev->_ob_next);
> + fprintf(stderr, "* op->_ob_next->_ob_prev\n");
> + _PyObject_Dump(op->_ob_next->_ob_prev);
> + Py_FatalError("UNREF invalid object");
> + }
> +#ifdef SLOW_UNREF_CHECK
> + for (p = refchain._ob_next; p != &refchain; p = p->_ob_next) {
> + if (p == op)
> + break;
> + }
> + if (p == &refchain) /* Not found */
> + Py_FatalError("UNREF unknown object");
> +#endif
> + op->_ob_next->_ob_prev = op->_ob_prev;
> + op->_ob_prev->_ob_next = op->_ob_next;
> + op->_ob_next = op->_ob_prev = NULL;
> + _Py_INC_TPFREES(op);
> +}
> +
> +void
> +_Py_Dealloc(PyObject *op)
> +{
> + destructor dealloc = Py_TYPE(op)->tp_dealloc;
> + _Py_ForgetReference(op);
> + (*dealloc)(op);
> +}
> +
> +/* Print all live objects. Because PyObject_Print is called, the
> + * interpreter must be in a healthy state.
> + */
> +void
> +_Py_PrintReferences(FILE *fp)
> +{
> + PyObject *op;
> + fprintf(fp, "Remaining objects:\n");
> + for (op = refchain._ob_next; op != &refchain; op = op->_ob_next) {
> + fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] ", op, op->ob_refcnt);
> + if (PyObject_Print(op, fp, 0) != 0)
> + PyErr_Clear();
> + putc('\n', fp);
> + }
> +}
> +
> +/* Print the addresses of all live objects. Unlike _Py_PrintReferences, this
> + * doesn't make any calls to the Python C API, so is always safe to call.
> + */
> +void
> +_Py_PrintReferenceAddresses(FILE *fp)
> +{
> + PyObject *op;
> + fprintf(fp, "Remaining object addresses:\n");
> + for (op = refchain._ob_next; op != &refchain; op = op->_ob_next)
> + fprintf(fp, "%p [%" PY_FORMAT_SIZE_T "d] %s\n", op,
> + op->ob_refcnt, Py_TYPE(op)->tp_name);
> +}
> +
> +PyObject *
> +_Py_GetObjects(PyObject *self, PyObject *args)
> +{
> + int i, n;
> + PyObject *t = NULL;
> + PyObject *res, *op;
> +
> + if (!PyArg_ParseTuple(args, "i|O", &n, &t))
> + return NULL;
> + op = refchain._ob_next;
> + res = PyList_New(0);
> + if (res == NULL)
> + return NULL;
> + for (i = 0; (n == 0 || i < n) && op != &refchain; i++) {
> + while (op == self || op == args || op == res || op == t ||
> + (t != NULL && Py_TYPE(op) != (PyTypeObject *) t)) {
> + op = op->_ob_next;
> + if (op == &refchain)
> + return res;
> + }
> + if (PyList_Append(res, op) < 0) {
> + Py_DECREF(res);
> + return NULL;
> + }
> + op = op->_ob_next;
> + }
> + return res;
> +}
> +
> +#endif
> +
> +
> +/* Hack to force loading of abstract.o */
> +Py_ssize_t (*_Py_abstract_hack)(PyObject *) = PyObject_Size;
> +
> +
> +void
> +_PyObject_DebugTypeStats(FILE *out)
> +{
> + _PyCFunction_DebugMallocStats(out);
> + _PyDict_DebugMallocStats(out);
> + _PyFloat_DebugMallocStats(out);
> + _PyFrame_DebugMallocStats(out);
> + _PyList_DebugMallocStats(out);
> + _PyMethod_DebugMallocStats(out);
> + _PyTuple_DebugMallocStats(out);
> +}
> +
> +/* These methods are used to control infinite recursion in repr, str, print,
> + etc. Container objects that may recursively contain themselves,
> + e.g. builtin dictionaries and lists, should use Py_ReprEnter() and
> + Py_ReprLeave() to avoid infinite recursion.
> +
> + Py_ReprEnter() returns 0 the first time it is called for a particular
> + object and 1 every time thereafter. It returns -1 if an exception
> + occurred. Py_ReprLeave() has no return value.
> +
> + See dictobject.c and listobject.c for examples of use.
> +*/
> +
> +int
> +Py_ReprEnter(PyObject *obj)
> +{
> + PyObject *dict;
> + PyObject *list;
> + Py_ssize_t i;
> +
> + dict = PyThreadState_GetDict();
> + /* Ignore a missing thread-state, so that this function can be called
> + early on startup. */
> + if (dict == NULL)
> + return 0;
> + list = _PyDict_GetItemId(dict, &PyId_Py_Repr);
> + if (list == NULL) {
> + list = PyList_New(0);
> + if (list == NULL)
> + return -1;
> + if (_PyDict_SetItemId(dict, &PyId_Py_Repr, list) < 0)
> + return -1;
> + Py_DECREF(list);
> + }
> + i = PyList_GET_SIZE(list);
> + while (--i >= 0) {
> + if (PyList_GET_ITEM(list, i) == obj)
> + return 1;
> + }
> + if (PyList_Append(list, obj) < 0)
> + return -1;
> + return 0;
> +}
> +
> +void
> +Py_ReprLeave(PyObject *obj)
> +{
> + PyObject *dict;
> + PyObject *list;
> + Py_ssize_t i;
> + PyObject *error_type, *error_value, *error_traceback;
> +
> + PyErr_Fetch(&error_type, &error_value, &error_traceback);
> +
> + dict = PyThreadState_GetDict();
> + if (dict == NULL)
> + goto finally;
> +
> + list = _PyDict_GetItemId(dict, &PyId_Py_Repr);
> + if (list == NULL || !PyList_Check(list))
> + goto finally;
> +
> + i = PyList_GET_SIZE(list);
> + /* Count backwards because we always expect obj to be list[-1] */
> + while (--i >= 0) {
> + if (PyList_GET_ITEM(list, i) == obj) {
> + PyList_SetSlice(list, i, i + 1, NULL);
> + break;
> + }
> + }
> +
> +finally:
> + /* ignore exceptions because there is no way to report them. */
> + PyErr_Restore(error_type, error_value, error_traceback);
> +}
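> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): a container's tp_repr brackets its body with
> +        * Py_ReprEnter()/Py_ReprLeave() so that a self-referencing object
> +        * prints a placeholder instead of recursing forever. The body built
> +        * here is a hypothetical stand-in. */
> +static PyObject *
> +example_container_repr(PyObject *v)
> +{
> +    PyObject *result;
> +    int res;
> +
> +    res = Py_ReprEnter(v);
> +    if (res != 0) {
> +        /* res > 0: v is already being printed further up the stack. */
> +        return res > 0 ? PyUnicode_FromString("[...]") : NULL;
> +    }
> +    result = PyUnicode_FromString("[1, 2, 3]");  /* stand-in for real items */
> +    Py_ReprLeave(v);
> +    return result;
> +}
> +#endif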
> +
> +/* Trashcan support. */
> +
> +/* Current call-stack depth of tp_dealloc calls. */
> +int _PyTrash_delete_nesting = 0;
> +
> +/* List of objects that still need to be cleaned up, singly linked via their
> + * gc headers' gc_prev pointers.
> + */
> +PyObject *_PyTrash_delete_later = NULL;
> +
> +/* Add op to the _PyTrash_delete_later list. Called when the current
> + * call-stack depth gets large. op must be a currently untracked gc'ed
> + * object, with refcount 0. Py_DECREF must already have been called on it.
> + */
> +void
> +_PyTrash_deposit_object(PyObject *op)
> +{
> + assert(PyObject_IS_GC(op));
> + assert(_PyGC_REFS(op) == _PyGC_REFS_UNTRACKED);
> + assert(op->ob_refcnt == 0);
> + _Py_AS_GC(op)->gc.gc_prev = (PyGC_Head *)_PyTrash_delete_later;
> + _PyTrash_delete_later = op;
> +}
> +
> +/* The equivalent API, using per-thread state recursion info */
> +void
> +_PyTrash_thread_deposit_object(PyObject *op)
> +{
> + PyThreadState *tstate = PyThreadState_GET();
> + assert(PyObject_IS_GC(op));
> + assert(_PyGC_REFS(op) == _PyGC_REFS_UNTRACKED);
> + assert(op->ob_refcnt == 0);
> + _Py_AS_GC(op)->gc.gc_prev = (PyGC_Head *) tstate->trash_delete_later;
> + tstate->trash_delete_later = op;
> +}
> +
> +/* Deallocate all the objects in the _PyTrash_delete_later list. Called when
> + * the call-stack unwinds again.
> + */
> +void
> +_PyTrash_destroy_chain(void)
> +{
> + while (_PyTrash_delete_later) {
> + PyObject *op = _PyTrash_delete_later;
> + destructor dealloc = Py_TYPE(op)->tp_dealloc;
> +
> + _PyTrash_delete_later =
> + (PyObject*) _Py_AS_GC(op)->gc.gc_prev;
> +
> + /* Call the deallocator directly. This used to try to
> + * fool Py_DECREF into calling it indirectly, but
> + * Py_DECREF was already called on this object, and in
> + * assorted non-release builds calling Py_DECREF again ends
> + * up distorting allocation statistics.
> + */
> + assert(op->ob_refcnt == 0);
> + ++_PyTrash_delete_nesting;
> + (*dealloc)(op);
> + --_PyTrash_delete_nesting;
> + }
> +}
> +
> +/* The equivalent API, using per-thread state recursion info */
> +void
> +_PyTrash_thread_destroy_chain(void)
> +{
> + PyThreadState *tstate = PyThreadState_GET();
> + while (tstate->trash_delete_later) {
> + PyObject *op = tstate->trash_delete_later;
> + destructor dealloc = Py_TYPE(op)->tp_dealloc;
> +
> + tstate->trash_delete_later =
> + (PyObject*) _Py_AS_GC(op)->gc.gc_prev;
> +
> + /* Call the deallocator directly. This used to try to
> + * fool Py_DECREF into calling it indirectly, but
> + * Py_DECREF was already called on this object, and in
> + * assorted non-release builds calling Py_DECREF again ends
> + * up distorting allocation statistics.
> + */
> + assert(op->ob_refcnt == 0);
> + ++tstate->trash_delete_nesting;
> + (*dealloc)(op);
> + --tstate->trash_delete_nesting;
> + }
> +}
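> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): a GC container's tp_dealloc wraps its body in the
> +        * trashcan macros, which hand the object to the deposit/destroy
> +        * functions above once the deallocation call stack gets too deep.
> +        * clear_members() is a hypothetical helper. */
> +static void
> +example_container_dealloc(PyObject *op)
> +{
> +    PyObject_GC_UnTrack(op);
> +    Py_TRASHCAN_SAFE_BEGIN(op)
> +    clear_members(op);                  /* drop the references op holds */
> +    Py_TYPE(op)->tp_free((PyObject *)op);
> +    Py_TRASHCAN_SAFE_END(op)
> +}
> +#endif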
> +
> +#ifndef Py_TRACE_REFS
> +/* For Py_LIMITED_API, we need an out-of-line version of _Py_Dealloc.
> + Define this here, so we can undefine the macro. */
> +#undef _Py_Dealloc
> +PyAPI_FUNC(void) _Py_Dealloc(PyObject *);
> +void
> +_Py_Dealloc(PyObject *op)
> +{
> + _Py_INC_TPFREES(op) _Py_COUNT_ALLOCS_COMMA
> + (*Py_TYPE(op)->tp_dealloc)(op);
> +}
> +#endif
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
> new file mode 100644
> index 00000000..15c63eda
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
> @@ -0,0 +1,701 @@
> +#if STRINGLIB_IS_UNICODE
> +# error "transmogrify.h only compatible with byte-wise strings"
> +#endif
> +
> +/* the more complicated methods. parts of these should be pulled out into the
> + shared code in bytes_methods.c to cut down on duplicate code bloat. */
> +
> +static PyObject *
> +return_self(PyObject *self)
> +{
> +#if !STRINGLIB_MUTABLE
> + if (STRINGLIB_CHECK_EXACT(self)) {
> + Py_INCREF(self);
> + return self;
> + }
> +#endif
> + return STRINGLIB_NEW(STRINGLIB_STR(self), STRINGLIB_LEN(self));
> +}
> +
> +static PyObject*
> +stringlib_expandtabs(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + const char *e, *p;
> + char *q;
> + Py_ssize_t i, j;
> + PyObject *u;
> + static char *kwlist[] = {"tabsize", 0};
> + int tabsize = 8;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:expandtabs",
> + kwlist, &tabsize))
> + return NULL;
> +
> + /* First pass: determine size of output string */
> + i = j = 0;
> + e = STRINGLIB_STR(self) + STRINGLIB_LEN(self);
> + for (p = STRINGLIB_STR(self); p < e; p++) {
> + if (*p == '\t') {
> + if (tabsize > 0) {
> + Py_ssize_t incr = tabsize - (j % tabsize);
> + if (j > PY_SSIZE_T_MAX - incr)
> + goto overflow;
> + j += incr;
> + }
> + }
> + else {
> + if (j > PY_SSIZE_T_MAX - 1)
> + goto overflow;
> + j++;
> + if (*p == '\n' || *p == '\r') {
> + if (i > PY_SSIZE_T_MAX - j)
> + goto overflow;
> + i += j;
> + j = 0;
> + }
> + }
> + }
> +
> + if (i > PY_SSIZE_T_MAX - j)
> + goto overflow;
> +
> + /* Second pass: create output string and fill it */
> + u = STRINGLIB_NEW(NULL, i + j);
> + if (!u)
> + return NULL;
> +
> + j = 0;
> + q = STRINGLIB_STR(u);
> +
> + for (p = STRINGLIB_STR(self); p < e; p++) {
> + if (*p == '\t') {
> + if (tabsize > 0) {
> + i = tabsize - (j % tabsize);
> + j += i;
> + while (i--)
> + *q++ = ' ';
> + }
> + }
> + else {
> + j++;
> + *q++ = *p;
> + if (*p == '\n' || *p == '\r')
> + j = 0;
> + }
> + }
> +
> + return u;
> + overflow:
> + PyErr_SetString(PyExc_OverflowError, "result too long");
> + return NULL;
> +}
> +
> +static PyObject *
> +pad(PyObject *self, Py_ssize_t left, Py_ssize_t right, char fill)
> +{
> + PyObject *u;
> +
> + if (left < 0)
> + left = 0;
> + if (right < 0)
> + right = 0;
> +
> + if (left == 0 && right == 0) {
> + return return_self(self);
> + }
> +
> + u = STRINGLIB_NEW(NULL, left + STRINGLIB_LEN(self) + right);
> + if (u) {
> + if (left)
> + memset(STRINGLIB_STR(u), fill, left);
> + memcpy(STRINGLIB_STR(u) + left,
> + STRINGLIB_STR(self),
> + STRINGLIB_LEN(self));
> + if (right)
> + memset(STRINGLIB_STR(u) + left + STRINGLIB_LEN(self),
> + fill, right);
> + }
> +
> + return u;
> +}
> +
> +static PyObject *
> +stringlib_ljust(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t width;
> + char fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|c:ljust", &width, &fillchar))
> + return NULL;
> +
> + if (STRINGLIB_LEN(self) >= width) {
> + return return_self(self);
> + }
> +
> + return pad(self, 0, width - STRINGLIB_LEN(self), fillchar);
> +}
> +
> +
> +static PyObject *
> +stringlib_rjust(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t width;
> + char fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|c:rjust", &width, &fillchar))
> + return NULL;
> +
> + if (STRINGLIB_LEN(self) >= width) {
> + return return_self(self);
> + }
> +
> + return pad(self, width - STRINGLIB_LEN(self), 0, fillchar);
> +}
> +
> +
> +static PyObject *
> +stringlib_center(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t marg, left;
> + Py_ssize_t width;
> + char fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|c:center", &width, &fillchar))
> + return NULL;
> +
> + if (STRINGLIB_LEN(self) >= width) {
> + return return_self(self);
> + }
> +
> + marg = width - STRINGLIB_LEN(self);
> + left = marg / 2 + (marg & width & 1);
> +
> + return pad(self, left, marg - left, fillchar);
> +}
> +
> +static PyObject *
> +stringlib_zfill(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t fill;
> + PyObject *s;
> + char *p;
> + Py_ssize_t width;
> +
> + if (!PyArg_ParseTuple(args, "n:zfill", &width))
> + return NULL;
> +
> + if (STRINGLIB_LEN(self) >= width) {
> + return return_self(self);
> + }
> +
> + fill = width - STRINGLIB_LEN(self);
> +
> + s = pad(self, fill, 0, '0');
> +
> + if (s == NULL)
> + return NULL;
> +
> + p = STRINGLIB_STR(s);
> + if (p[fill] == '+' || p[fill] == '-') {
> + /* move sign to beginning of string */
> + p[0] = p[fill];
> + p[fill] = '0';
> + }
> +
> + return s;
> +}
> +
> +
> +/* find and count characters and substrings */
> +
> +#define findchar(target, target_len, c) \
> + ((char *)memchr((const void *)(target), c, target_len))
> +
> +
> +static Py_ssize_t
> +countchar(const char *target, Py_ssize_t target_len, char c,
> + Py_ssize_t maxcount)
> +{
> + Py_ssize_t count = 0;
> + const char *start = target;
> + const char *end = target + target_len;
> +
> + while ((start = findchar(start, end - start, c)) != NULL) {
> + count++;
> + if (count >= maxcount)
> + break;
> + start += 1;
> + }
> + return count;
> +}
> +
> +
> +/* Algorithms for different cases of string replacement */
> +
> +/* len(self)>=1, from="", len(to)>=1, maxcount>=1 */
> +static PyObject *
> +stringlib_replace_interleave(PyObject *self,
> + const char *to_s, Py_ssize_t to_len,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s;
> + char *result_s;
> + Py_ssize_t self_len, result_len;
> + Py_ssize_t count, i;
> + PyObject *result;
> +
> + self_len = STRINGLIB_LEN(self);
> +
> + /* 1 at the end plus 1 after every character;
> + count = min(maxcount, self_len + 1) */
> + if (maxcount <= self_len) {
> + count = maxcount;
> + }
> + else {
> + /* Can't overflow: self_len + 1 <= maxcount <= PY_SSIZE_T_MAX. */
> + count = self_len + 1;
> + }
> +
> + /* Check for overflow */
> + /* result_len = count * to_len + self_len; */
> + assert(count > 0);
> + if (to_len > (PY_SSIZE_T_MAX - self_len) / count) {
> + PyErr_SetString(PyExc_OverflowError,
> + "replace bytes are too long");
> + return NULL;
> + }
> + result_len = count * to_len + self_len;
> + result = STRINGLIB_NEW(NULL, result_len);
> + if (result == NULL) {
> + return NULL;
> + }
> +
> + self_s = STRINGLIB_STR(self);
> + result_s = STRINGLIB_STR(result);
> +
> + if (to_len > 1) {
> + /* Lay the first one down (guaranteed this will occur) */
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + count -= 1;
> +
> + for (i = 0; i < count; i++) {
> + *result_s++ = *self_s++;
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + }
> + }
> + else {
> + result_s[0] = to_s[0];
> + result_s += to_len;
> + count -= 1;
> + for (i = 0; i < count; i++) {
> + *result_s++ = *self_s++;
> + result_s[0] = to_s[0];
> + result_s += to_len;
> + }
> + }
> +
> + /* Copy the rest of the original string */
> + memcpy(result_s, self_s, self_len - i);
> +
> + return result;
> +}
> +
> +/* Special case for deleting a single character */
> +/* len(self)>=1, len(from)==1, to="", maxcount>=1 */
> +static PyObject *
> +stringlib_replace_delete_single_character(PyObject *self,
> + char from_c, Py_ssize_t maxcount)
> +{
> + const char *self_s, *start, *next, *end;
> + char *result_s;
> + Py_ssize_t self_len, result_len;
> + Py_ssize_t count;
> + PyObject *result;
> +
> + self_len = STRINGLIB_LEN(self);
> + self_s = STRINGLIB_STR(self);
> +
> + count = countchar(self_s, self_len, from_c, maxcount);
> + if (count == 0) {
> + return return_self(self);
> + }
> +
> + result_len = self_len - count; /* from_len == 1 */
> + assert(result_len>=0);
> +
> + result = STRINGLIB_NEW(NULL, result_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> +
> + start = self_s;
> + end = self_s + self_len;
> + while (count-- > 0) {
> + next = findchar(start, end - start, from_c);
> + if (next == NULL)
> + break;
> + memcpy(result_s, start, next - start);
> + result_s += (next - start);
> + start = next + 1;
> + }
> + memcpy(result_s, start, end - start);
> +
> + return result;
> +}
> +
> +/* len(self)>=1, len(from)>=2, to="", maxcount>=1 */
> +
> +static PyObject *
> +stringlib_replace_delete_substring(PyObject *self,
> + const char *from_s, Py_ssize_t from_len,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s, *start, *next, *end;
> + char *result_s;
> + Py_ssize_t self_len, result_len;
> + Py_ssize_t count, offset;
> + PyObject *result;
> +
> + self_len = STRINGLIB_LEN(self);
> + self_s = STRINGLIB_STR(self);
> +
> + count = stringlib_count(self_s, self_len,
> + from_s, from_len,
> + maxcount);
> +
> + if (count == 0) {
> + /* no matches */
> + return return_self(self);
> + }
> +
> + result_len = self_len - (count * from_len);
> + assert (result_len>=0);
> +
> + result = STRINGLIB_NEW(NULL, result_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> +
> + start = self_s;
> + end = self_s + self_len;
> + while (count-- > 0) {
> + offset = stringlib_find(start, end - start,
> + from_s, from_len,
> + 0);
> + if (offset == -1)
> + break;
> + next = start + offset;
> +
> + memcpy(result_s, start, next - start);
> +
> + result_s += (next - start);
> + start = next + from_len;
> + }
> + memcpy(result_s, start, end - start);
> + return result;
> +}
> +
> +/* len(self)>=1, len(from)==len(to)==1, maxcount>=1 */
> +static PyObject *
> +stringlib_replace_single_character_in_place(PyObject *self,
> + char from_c, char to_c,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s, *end;
> + char *result_s, *start, *next;
> + Py_ssize_t self_len;
> + PyObject *result;
> +
> + /* The result string will be the same size */
> + self_s = STRINGLIB_STR(self);
> + self_len = STRINGLIB_LEN(self);
> +
> + next = findchar(self_s, self_len, from_c);
> +
> + if (next == NULL) {
> + /* No matches; return the original bytes */
> + return return_self(self);
> + }
> +
> + /* Need to make a new bytes */
> + result = STRINGLIB_NEW(NULL, self_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> + memcpy(result_s, self_s, self_len);
> +
> + /* change everything in-place, starting with this one */
> + start = result_s + (next - self_s);
> + *start = to_c;
> + start++;
> + end = result_s + self_len;
> +
> + while (--maxcount > 0) {
> + next = findchar(start, end - start, from_c);
> + if (next == NULL)
> + break;
> + *next = to_c;
> + start = next + 1;
> + }
> +
> + return result;
> +}
> +
> +/* len(self)>=1, len(from)==len(to)>=2, maxcount>=1 */
> +static PyObject *
> +stringlib_replace_substring_in_place(PyObject *self,
> + const char *from_s, Py_ssize_t from_len,
> + const char *to_s, Py_ssize_t to_len,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s, *end;
> + char *result_s, *start;
> + Py_ssize_t self_len, offset;
> + PyObject *result;
> +
> + /* The result bytes will be the same size */
> +
> + self_s = STRINGLIB_STR(self);
> + self_len = STRINGLIB_LEN(self);
> +
> + offset = stringlib_find(self_s, self_len,
> + from_s, from_len,
> + 0);
> + if (offset == -1) {
> + /* No matches; return the original bytes */
> + return return_self(self);
> + }
> +
> + /* Need to make a new bytes */
> + result = STRINGLIB_NEW(NULL, self_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> + memcpy(result_s, self_s, self_len);
> +
> + /* change everything in-place, starting with this one */
> + start = result_s + offset;
> + memcpy(start, to_s, from_len);
> + start += from_len;
> + end = result_s + self_len;
> +
> +    while (--maxcount > 0) {
> + offset = stringlib_find(start, end - start,
> + from_s, from_len,
> + 0);
> + if (offset == -1)
> + break;
> + memcpy(start + offset, to_s, from_len);
> + start += offset + from_len;
> + }
> +
> + return result;
> +}
> +
> +/* len(self)>=1, len(from)==1, len(to)>=2, maxcount>=1 */
> +static PyObject *
> +stringlib_replace_single_character(PyObject *self,
> + char from_c,
> + const char *to_s, Py_ssize_t to_len,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s, *start, *next, *end;
> + char *result_s;
> + Py_ssize_t self_len, result_len;
> + Py_ssize_t count;
> + PyObject *result;
> +
> + self_s = STRINGLIB_STR(self);
> + self_len = STRINGLIB_LEN(self);
> +
> + count = countchar(self_s, self_len, from_c, maxcount);
> + if (count == 0) {
> + /* no matches, return unchanged */
> + return return_self(self);
> + }
> +
> + /* use the difference between current and new, hence the "-1" */
> + /* result_len = self_len + count * (to_len-1) */
> + assert(count > 0);
> + if (to_len - 1 > (PY_SSIZE_T_MAX - self_len) / count) {
> + PyErr_SetString(PyExc_OverflowError, "replace bytes is too long");
> + return NULL;
> + }
> + result_len = self_len + count * (to_len - 1);
> +
> + result = STRINGLIB_NEW(NULL, result_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> +
> + start = self_s;
> + end = self_s + self_len;
> + while (count-- > 0) {
> + next = findchar(start, end - start, from_c);
> + if (next == NULL)
> + break;
> +
> + if (next == start) {
> + /* replace with the 'to' */
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + start += 1;
> + } else {
> + /* copy the unchanged old then the 'to' */
> + memcpy(result_s, start, next - start);
> + result_s += (next - start);
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + start = next + 1;
> + }
> + }
> +    /* Copy the rest of the original bytes */
> + memcpy(result_s, start, end - start);
> +
> + return result;
> +}
> +
> +/* len(self)>=1, len(from)>=2, len(to)>=2, maxcount>=1 */
> +static PyObject *
> +stringlib_replace_substring(PyObject *self,
> + const char *from_s, Py_ssize_t from_len,
> + const char *to_s, Py_ssize_t to_len,
> + Py_ssize_t maxcount)
> +{
> + const char *self_s, *start, *next, *end;
> + char *result_s;
> + Py_ssize_t self_len, result_len;
> + Py_ssize_t count, offset;
> + PyObject *result;
> +
> + self_s = STRINGLIB_STR(self);
> + self_len = STRINGLIB_LEN(self);
> +
> + count = stringlib_count(self_s, self_len,
> + from_s, from_len,
> + maxcount);
> +
> + if (count == 0) {
> + /* no matches, return unchanged */
> + return return_self(self);
> + }
> +
> + /* Check for overflow */
> + /* result_len = self_len + count * (to_len-from_len) */
> + assert(count > 0);
> + if (to_len - from_len > (PY_SSIZE_T_MAX - self_len) / count) {
> + PyErr_SetString(PyExc_OverflowError, "replace bytes is too long");
> + return NULL;
> + }
> + result_len = self_len + count * (to_len - from_len);
> +
> + result = STRINGLIB_NEW(NULL, result_len);
> + if (result == NULL) {
> + return NULL;
> + }
> + result_s = STRINGLIB_STR(result);
> +
> + start = self_s;
> + end = self_s + self_len;
> + while (count-- > 0) {
> + offset = stringlib_find(start, end - start,
> + from_s, from_len,
> + 0);
> + if (offset == -1)
> + break;
> + next = start + offset;
> + if (next == start) {
> + /* replace with the 'to' */
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + start += from_len;
> + } else {
> + /* copy the unchanged old then the 'to' */
> + memcpy(result_s, start, next - start);
> + result_s += (next - start);
> + memcpy(result_s, to_s, to_len);
> + result_s += to_len;
> + start = next + from_len;
> + }
> + }
> +    /* Copy the rest of the original bytes */
> + memcpy(result_s, start, end - start);
> +
> + return result;
> +}
> +
> +
> +static PyObject *
> +stringlib_replace(PyObject *self,
> + const char *from_s, Py_ssize_t from_len,
> + const char *to_s, Py_ssize_t to_len,
> + Py_ssize_t maxcount)
> +{
> + if (maxcount < 0) {
> + maxcount = PY_SSIZE_T_MAX;
> + } else if (maxcount == 0 || STRINGLIB_LEN(self) == 0) {
> + /* nothing to do; return the original bytes */
> + return return_self(self);
> + }
> +
> + /* Handle zero-length special cases */
> + if (from_len == 0) {
> + if (to_len == 0) {
> + /* nothing to do; return the original bytes */
> + return return_self(self);
> + }
> + /* insert the 'to' bytes everywhere. */
> + /* >>> b"Python".replace(b"", b".") */
> + /* b'.P.y.t.h.o.n.' */
> + return stringlib_replace_interleave(self, to_s, to_len, maxcount);
> + }
> +
> + /* Except for b"".replace(b"", b"A") == b"A" there is no way beyond this */
> + /* point for an empty self bytes to generate a non-empty bytes */
> + /* Special case so the remaining code always gets a non-empty bytes */
> + if (STRINGLIB_LEN(self) == 0) {
> + return return_self(self);
> + }
> +
> + if (to_len == 0) {
> + /* delete all occurrences of 'from' bytes */
> + if (from_len == 1) {
> + return stringlib_replace_delete_single_character(
> + self, from_s[0], maxcount);
> + } else {
> + return stringlib_replace_delete_substring(
> + self, from_s, from_len, maxcount);
> + }
> + }
> +
> + /* Handle special case where both bytes have the same length */
> +
> + if (from_len == to_len) {
> + if (from_len == 1) {
> + return stringlib_replace_single_character_in_place(
> + self, from_s[0], to_s[0], maxcount);
> + } else {
> + return stringlib_replace_substring_in_place(
> + self, from_s, from_len, to_s, to_len, maxcount);
> + }
> + }
> +
> + /* Otherwise use the more generic algorithms */
> + if (from_len == 1) {
> + return stringlib_replace_single_character(
> + self, from_s[0], to_s, to_len, maxcount);
> + } else {
> + /* len('from')>=2, len('to')>=1 */
> + return stringlib_replace_substring(
> + self, from_s, from_len, to_s, to_len, maxcount);
> + }
> +}
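> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): from embedding code the dispatch above is reached
> +        * through the public bytes API; e.g. b"banana".replace(b"an", b"")
> +        * takes the stringlib_replace_delete_substring() path and yields
> +        * b"ba". */
> +static PyObject *
> +example_replace(void)
> +{
> +    PyObject *src, *out;
> +
> +    src = PyBytes_FromString("banana");
> +    if (src == NULL)
> +        return NULL;
> +    /* "yy" builds the two bytes arguments from C strings. */
> +    out = PyObject_CallMethod(src, "replace", "yy", "an", "");
> +    Py_DECREF(src);
> +    return out;                         /* b"ba" on success */
> +}
> +#endif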
> +
> +#undef findchar
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
> new file mode 100644
> index 00000000..1fdd5ec1
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
> @@ -0,0 +1,15773 @@
> +/*
> +
> +Unicode implementation based on original code by Fredrik Lundh,
> +modified by Marc-Andre Lemburg <mal@lemburg.com>.
> +
> +Major speed upgrades to the method implementations at the Reykjavik
> +NeedForSpeed sprint, by Fredrik Lundh and Andrew Dalke.
> +
> +Copyright (c) Corporation for National Research Initiatives.
> +
> +--------------------------------------------------------------------
> +The original string type implementation is:
> +
> + Copyright (c) 1999 by Secret Labs AB
> + Copyright (c) 1999 by Fredrik Lundh
> +
> +By obtaining, using, and/or copying this software and/or its
> +associated documentation, you agree that you have read, understood,
> +and will comply with the following terms and conditions:
> +
> +Permission to use, copy, modify, and distribute this software and its
> +associated documentation for any purpose and without fee is hereby
> +granted, provided that the above copyright notice appears in all
> +copies, and that both that copyright notice and this permission notice
> +appear in supporting documentation, and that the name of Secret Labs
> +AB or the author not be used in advertising or publicity pertaining to
> +distribution of the software without specific, written prior
> +permission.
> +
> +SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO
> +THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
> +FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR BE LIABLE FOR
> +ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
> +WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
> +ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
> +OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
> +--------------------------------------------------------------------
> +
> +*/
> +
> +#define PY_SSIZE_T_CLEAN
> +#include "Python.h"
> +#include "ucnhash.h"
> +#include "bytes_methods.h"
> +#include "stringlib/eq.h"
> +
> +#ifdef MS_WINDOWS
> +#include <windows.h>
> +#endif
> +
> +/*[clinic input]
> +class str "PyUnicodeObject *" "&PyUnicode_Type"
> +[clinic start generated code]*/
> +/*[clinic end generated code: output=da39a3ee5e6b4b0d input=604e916854800fa8]*/
> +
> +/* --- Globals ------------------------------------------------------------
> +
> +NOTE: In the interpreter's initialization phase, some globals are currently
> + initialized dynamically as needed. In the process Unicode objects may
> + be created before the Unicode type is ready.
> +
> +*/
> +
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/* Maximum code point of Unicode 6.0: 0x10ffff (1,114,111) */
> +#define MAX_UNICODE 0x10ffff
> +
> +#ifdef Py_DEBUG
> +# define _PyUnicode_CHECK(op) _PyUnicode_CheckConsistency(op, 0)
> +#else
> +# define _PyUnicode_CHECK(op) PyUnicode_Check(op)
> +#endif
> +
> +#define _PyUnicode_UTF8(op) \
> + (((PyCompactUnicodeObject*)(op))->utf8)
> +#define PyUnicode_UTF8(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + assert(PyUnicode_IS_READY(op)), \
> + PyUnicode_IS_COMPACT_ASCII(op) ? \
> + ((char*)((PyASCIIObject*)(op) + 1)) : \
> + _PyUnicode_UTF8(op))
> +#define _PyUnicode_UTF8_LENGTH(op) \
> + (((PyCompactUnicodeObject*)(op))->utf8_length)
> +#define PyUnicode_UTF8_LENGTH(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + assert(PyUnicode_IS_READY(op)), \
> + PyUnicode_IS_COMPACT_ASCII(op) ? \
> + ((PyASCIIObject*)(op))->length : \
> + _PyUnicode_UTF8_LENGTH(op))
> +#define _PyUnicode_WSTR(op) \
> + (((PyASCIIObject*)(op))->wstr)
> +#define _PyUnicode_WSTR_LENGTH(op) \
> + (((PyCompactUnicodeObject*)(op))->wstr_length)
> +#define _PyUnicode_LENGTH(op) \
> + (((PyASCIIObject *)(op))->length)
> +#define _PyUnicode_STATE(op) \
> + (((PyASCIIObject *)(op))->state)
> +#define _PyUnicode_HASH(op) \
> + (((PyASCIIObject *)(op))->hash)
> +#define _PyUnicode_KIND(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + ((PyASCIIObject *)(op))->state.kind)
> +#define _PyUnicode_GET_LENGTH(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + ((PyASCIIObject *)(op))->length)
> +#define _PyUnicode_DATA_ANY(op) \
> + (((PyUnicodeObject*)(op))->data.any)
> +
> +#undef PyUnicode_READY
> +#define PyUnicode_READY(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + (PyUnicode_IS_READY(op) ? \
> + 0 : \
> + _PyUnicode_Ready(op)))
> +
> +#define _PyUnicode_SHARE_UTF8(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + assert(!PyUnicode_IS_COMPACT_ASCII(op)), \
> + (_PyUnicode_UTF8(op) == PyUnicode_DATA(op)))
> +#define _PyUnicode_SHARE_WSTR(op) \
> + (assert(_PyUnicode_CHECK(op)), \
> + (_PyUnicode_WSTR(unicode) == PyUnicode_DATA(op)))
> +
> +/* true if the Unicode object has an allocated UTF-8 memory block
> + (not shared with other data) */
> +#define _PyUnicode_HAS_UTF8_MEMORY(op) \
> + ((!PyUnicode_IS_COMPACT_ASCII(op) \
> + && _PyUnicode_UTF8(op) \
> + && _PyUnicode_UTF8(op) != PyUnicode_DATA(op)))
> +
> +/* true if the Unicode object has an allocated wstr memory block
> + (not shared with other data) */
> +#define _PyUnicode_HAS_WSTR_MEMORY(op) \
> + ((_PyUnicode_WSTR(op) && \
> + (!PyUnicode_IS_READY(op) || \
> + _PyUnicode_WSTR(op) != PyUnicode_DATA(op))))
> +
> +/* Generic helper macro to convert characters of different types.
> + from_type and to_type have to be valid type names, begin and end
> + are pointers to the source characters which should be of type
> + "from_type *". to is a pointer of type "to_type *" and points to the
> + buffer where the result characters are written to. */
> +#define _PyUnicode_CONVERT_BYTES(from_type, to_type, begin, end, to) \
> + do { \
> + to_type *_to = (to_type *)(to); \
> + const from_type *_iter = (from_type *)(begin); \
> + const from_type *_end = (from_type *)(end); \
> + Py_ssize_t n = (_end) - (_iter); \
> + const from_type *_unrolled_end = \
> + _iter + _Py_SIZE_ROUND_DOWN(n, 4); \
> + while (_iter < (_unrolled_end)) { \
> + _to[0] = (to_type) _iter[0]; \
> + _to[1] = (to_type) _iter[1]; \
> + _to[2] = (to_type) _iter[2]; \
> + _to[3] = (to_type) _iter[3]; \
> + _iter += 4; _to += 4; \
> + } \
> + while (_iter < (_end)) \
> + *_to++ = (to_type) *_iter++; \
> + } while (0)
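> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): widening a UCS1 buffer into a freshly allocated
> +        * UCS2 buffer with the conversion macro above. */
> +static Py_UCS2 *
> +example_widen_ucs1(const Py_UCS1 *src, Py_ssize_t len)
> +{
> +    Py_UCS2 *dest = PyMem_New(Py_UCS2, len);
> +    if (dest == NULL)
> +        return NULL;
> +    _PyUnicode_CONVERT_BYTES(Py_UCS1, Py_UCS2, src, src + len, dest);
> +    return dest;
> +}
> +#endif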
> +
> +#ifdef MS_WINDOWS
> +   /* On Windows, overallocating by 50% is the best factor */
> +# define OVERALLOCATE_FACTOR 2
> +#else
> +   /* On Linux, overallocating by 25% is the best factor */
> +# define OVERALLOCATE_FACTOR 4
> +#endif
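> +
> +#if 0  /* Illustrative sketch (editor's note, not part of the upstream
> +        * CPython sources): the factor is applied as an extra
> +        * newlen / OVERALLOCATE_FACTOR on top of the requested length,
> +        * i.e. roughly +50% when the factor is 2 and +25% when it is 4. */
> +static Py_ssize_t
> +example_overallocate(Py_ssize_t newlen)
> +{
> +    if (newlen <= PY_SSIZE_T_MAX - newlen / OVERALLOCATE_FACTOR)
> +        newlen += newlen / OVERALLOCATE_FACTOR;
> +    return newlen;
> +}
> +#endif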
> +
> +/* This dictionary holds all interned unicode strings. Note that references
> + to strings in this dictionary are *not* counted in the string's ob_refcnt.
> + When the interned string reaches a refcnt of 0 the string deallocation
> + function will delete the reference from this dictionary.
> +
> +   Another way to look at this is to say that the actual reference
> + count of a string is: s->ob_refcnt + (s->state ? 2 : 0)
> +*/
> +static PyObject *interned = NULL;
> +
> +/* The empty Unicode object is shared to improve performance. */
> +static PyObject *unicode_empty = NULL;
> +
> +#define _Py_INCREF_UNICODE_EMPTY() \
> + do { \
> + if (unicode_empty != NULL) \
> + Py_INCREF(unicode_empty); \
> + else { \
> + unicode_empty = PyUnicode_New(0, 0); \
> + if (unicode_empty != NULL) { \
> + Py_INCREF(unicode_empty); \
> + assert(_PyUnicode_CheckConsistency(unicode_empty, 1)); \
> + } \
> + } \
> + } while (0)
> +
> +#define _Py_RETURN_UNICODE_EMPTY() \
> + do { \
> + _Py_INCREF_UNICODE_EMPTY(); \
> + return unicode_empty; \
> + } while (0)
> +
> +#define FILL(kind, data, value, start, length) \
> + do { \
> + assert(0 <= start); \
> + assert(kind != PyUnicode_WCHAR_KIND); \
> + switch (kind) { \
> + case PyUnicode_1BYTE_KIND: { \
> + assert(value <= 0xff); \
> + Py_UCS1 ch = (unsigned char)value; \
> + Py_UCS1 *to = (Py_UCS1 *)data + start; \
> + memset(to, ch, length); \
> + break; \
> + } \
> + case PyUnicode_2BYTE_KIND: { \
> + assert(value <= 0xffff); \
> + Py_UCS2 ch = (Py_UCS2)value; \
> + Py_UCS2 *to = (Py_UCS2 *)data + start; \
> + const Py_UCS2 *end = to + length; \
> + for (; to < end; ++to) *to = ch; \
> + break; \
> + } \
> + case PyUnicode_4BYTE_KIND: { \
> + assert(value <= MAX_UNICODE); \
> + Py_UCS4 ch = value; \
> + Py_UCS4 * to = (Py_UCS4 *)data + start; \
> + const Py_UCS4 *end = to + length; \
> + for (; to < end; ++to) *to = ch; \
> + break; \
> + } \
> + default: assert(0); \
> + } \
> + } while (0)
> +
> +
> +/* Forward declaration */
> +static int
> +_PyUnicodeWriter_WriteCharInline(_PyUnicodeWriter *writer, Py_UCS4 ch);
> +
> +/* List of static strings. */
> +static _Py_Identifier *static_strings = NULL;
> +
> +/* Single character Unicode strings in the Latin-1 range are being
> + shared as well. */
> +static PyObject *unicode_latin1[256] = {NULL};
> +
> +/* Fast detection of the most frequent whitespace characters */
> +const unsigned char _Py_ascii_whitespace[] = {
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +/* case 0x0009: * CHARACTER TABULATION */
> +/* case 0x000A: * LINE FEED */
> +/* case 0x000B: * LINE TABULATION */
> +/* case 0x000C: * FORM FEED */
> +/* case 0x000D: * CARRIAGE RETURN */
> + 0, 1, 1, 1, 1, 1, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +/* case 0x001C: * FILE SEPARATOR */
> +/* case 0x001D: * GROUP SEPARATOR */
> +/* case 0x001E: * RECORD SEPARATOR */
> +/* case 0x001F: * UNIT SEPARATOR */
> + 0, 0, 0, 0, 1, 1, 1, 1,
> +/* case 0x0020: * SPACE */
> + 1, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0
> +};
> +
> +/* forward */
> +static PyUnicodeObject *_PyUnicode_New(Py_ssize_t length);
> +static PyObject* get_latin1_char(unsigned char ch);
> +static int unicode_modifiable(PyObject *unicode);
> +
> +
> +static PyObject *
> +_PyUnicode_FromUCS1(const Py_UCS1 *s, Py_ssize_t size);
> +static PyObject *
> +_PyUnicode_FromUCS2(const Py_UCS2 *s, Py_ssize_t size);
> +static PyObject *
> +_PyUnicode_FromUCS4(const Py_UCS4 *s, Py_ssize_t size);
> +
> +static PyObject *
> +unicode_encode_call_errorhandler(const char *errors,
> + PyObject **errorHandler,const char *encoding, const char *reason,
> + PyObject *unicode, PyObject **exceptionObject,
> + Py_ssize_t startpos, Py_ssize_t endpos, Py_ssize_t *newpos);
> +
> +static void
> +raise_encode_exception(PyObject **exceptionObject,
> + const char *encoding,
> + PyObject *unicode,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + const char *reason);
> +
> +/* Same for linebreaks */
> +static const unsigned char ascii_linebreak[] = {
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +/* 0x000A, * LINE FEED */
> +/* 0x000B, * LINE TABULATION */
> +/* 0x000C, * FORM FEED */
> +/* 0x000D, * CARRIAGE RETURN */
> + 0, 0, 1, 1, 1, 1, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +/* 0x001C, * FILE SEPARATOR */
> +/* 0x001D, * GROUP SEPARATOR */
> +/* 0x001E, * RECORD SEPARATOR */
> + 0, 0, 0, 0, 1, 1, 1, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> +
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0,
> + 0, 0, 0, 0, 0, 0, 0, 0
> +};
> +
> +#include "clinic/unicodeobject.c.h"
> +
> +typedef enum {
> + _Py_ERROR_UNKNOWN=0,
> + _Py_ERROR_STRICT,
> + _Py_ERROR_SURROGATEESCAPE,
> + _Py_ERROR_REPLACE,
> + _Py_ERROR_IGNORE,
> + _Py_ERROR_BACKSLASHREPLACE,
> + _Py_ERROR_SURROGATEPASS,
> + _Py_ERROR_XMLCHARREFREPLACE,
> + _Py_ERROR_OTHER
> +} _Py_error_handler;
> +
> +static _Py_error_handler
> +get_error_handler(const char *errors)
> +{
> + if (errors == NULL || strcmp(errors, "strict") == 0) {
> + return _Py_ERROR_STRICT;
> + }
> + if (strcmp(errors, "surrogateescape") == 0) {
> + return _Py_ERROR_SURROGATEESCAPE;
> + }
> + if (strcmp(errors, "replace") == 0) {
> + return _Py_ERROR_REPLACE;
> + }
> + if (strcmp(errors, "ignore") == 0) {
> + return _Py_ERROR_IGNORE;
> + }
> + if (strcmp(errors, "backslashreplace") == 0) {
> + return _Py_ERROR_BACKSLASHREPLACE;
> + }
> + if (strcmp(errors, "surrogatepass") == 0) {
> + return _Py_ERROR_SURROGATEPASS;
> + }
> + if (strcmp(errors, "xmlcharrefreplace") == 0) {
> + return _Py_ERROR_XMLCHARREFREPLACE;
> + }
> + return _Py_ERROR_OTHER;
> +}
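> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): codec fast paths map the "errors" string to the
> +        * enum once, then branch on the enum inside the encode/decode loop
> +        * instead of re-comparing strings for every problematic character. */
> +static void
> +example_dispatch_error_handler(const char *errors)
> +{
> +    _Py_error_handler eh = get_error_handler(errors);
> +
> +    switch (eh) {
> +    case _Py_ERROR_STRICT:
> +        /* raise UnicodeEncodeError / UnicodeDecodeError */
> +        break;
> +    case _Py_ERROR_REPLACE:
> +        /* substitute '?' (encoding) or U+FFFD (decoding) */
> +        break;
> +    case _Py_ERROR_IGNORE:
> +        /* skip the offending character */
> +        break;
> +    default:
> +        /* look up and call the registered Python-level handler */
> +        break;
> +    }
> +}
> +#endif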
> +
> +/* The max unicode value is always 0x10FFFF while using the PEP-393 API.
> + This function is kept for backward compatibility with the old API. */
> +Py_UNICODE
> +PyUnicode_GetMax(void)
> +{
> +#ifdef Py_UNICODE_WIDE
> + return 0x10FFFF;
> +#else
> + /* This is actually an illegal character, so it should
> + not be passed to unichr. */
> + return 0xFFFF;
> +#endif
> +}
> +
> +#ifdef Py_DEBUG
> +int
> +_PyUnicode_CheckConsistency(PyObject *op, int check_content)
> +{
> + PyASCIIObject *ascii;
> + unsigned int kind;
> +
> + assert(PyUnicode_Check(op));
> +
> + ascii = (PyASCIIObject *)op;
> + kind = ascii->state.kind;
> +
> + if (ascii->state.ascii == 1 && ascii->state.compact == 1) {
> + assert(kind == PyUnicode_1BYTE_KIND);
> + assert(ascii->state.ready == 1);
> + }
> + else {
> + PyCompactUnicodeObject *compact = (PyCompactUnicodeObject *)op;
> + void *data;
> +
> + if (ascii->state.compact == 1) {
> + data = compact + 1;
> + assert(kind == PyUnicode_1BYTE_KIND
> + || kind == PyUnicode_2BYTE_KIND
> + || kind == PyUnicode_4BYTE_KIND);
> + assert(ascii->state.ascii == 0);
> + assert(ascii->state.ready == 1);
> + assert (compact->utf8 != data);
> + }
> + else {
> + PyUnicodeObject *unicode = (PyUnicodeObject *)op;
> +
> + data = unicode->data.any;
> + if (kind == PyUnicode_WCHAR_KIND) {
> + assert(ascii->length == 0);
> + assert(ascii->hash == -1);
> + assert(ascii->state.compact == 0);
> + assert(ascii->state.ascii == 0);
> + assert(ascii->state.ready == 0);
> + assert(ascii->state.interned == SSTATE_NOT_INTERNED);
> + assert(ascii->wstr != NULL);
> + assert(data == NULL);
> + assert(compact->utf8 == NULL);
> + }
> + else {
> + assert(kind == PyUnicode_1BYTE_KIND
> + || kind == PyUnicode_2BYTE_KIND
> + || kind == PyUnicode_4BYTE_KIND);
> + assert(ascii->state.compact == 0);
> + assert(ascii->state.ready == 1);
> + assert(data != NULL);
> + if (ascii->state.ascii) {
> + assert (compact->utf8 == data);
> + assert (compact->utf8_length == ascii->length);
> + }
> + else
> + assert (compact->utf8 != data);
> + }
> + }
> + if (kind != PyUnicode_WCHAR_KIND) {
> + if (
> +#if SIZEOF_WCHAR_T == 2
> + kind == PyUnicode_2BYTE_KIND
> +#else
> + kind == PyUnicode_4BYTE_KIND
> +#endif
> + )
> + {
> + assert(ascii->wstr == data);
> + assert(compact->wstr_length == ascii->length);
> + } else
> + assert(ascii->wstr != data);
> + }
> +
> + if (compact->utf8 == NULL)
> + assert(compact->utf8_length == 0);
> + if (ascii->wstr == NULL)
> + assert(compact->wstr_length == 0);
> + }
> + /* check that the best kind is used */
> + if (check_content && kind != PyUnicode_WCHAR_KIND)
> + {
> + Py_ssize_t i;
> + Py_UCS4 maxchar = 0;
> + void *data;
> + Py_UCS4 ch;
> +
> + data = PyUnicode_DATA(ascii);
> + for (i=0; i < ascii->length; i++)
> + {
> + ch = PyUnicode_READ(kind, data, i);
> + if (ch > maxchar)
> + maxchar = ch;
> + }
> + if (kind == PyUnicode_1BYTE_KIND) {
> + if (ascii->state.ascii == 0) {
> + assert(maxchar >= 128);
> + assert(maxchar <= 255);
> + }
> + else
> + assert(maxchar < 128);
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + assert(maxchar >= 0x100);
> + assert(maxchar <= 0xFFFF);
> + }
> + else {
> + assert(maxchar >= 0x10000);
> + assert(maxchar <= MAX_UNICODE);
> + }
> + assert(PyUnicode_READ(kind, data, ascii->length) == 0);
> + }
> + return 1;
> +}
> +#endif
> +
> +static PyObject*
> +unicode_result_wchar(PyObject *unicode)
> +{
> +#ifndef Py_DEBUG
> + Py_ssize_t len;
> +
> + len = _PyUnicode_WSTR_LENGTH(unicode);
> + if (len == 0) {
> + Py_DECREF(unicode);
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + if (len == 1) {
> + wchar_t ch = _PyUnicode_WSTR(unicode)[0];
> + if ((Py_UCS4)ch < 256) {
> + PyObject *latin1_char = get_latin1_char((unsigned char)ch);
> + Py_DECREF(unicode);
> + return latin1_char;
> + }
> + }
> +
> + if (_PyUnicode_Ready(unicode) < 0) {
> + Py_DECREF(unicode);
> + return NULL;
> + }
> +#else
> + assert(Py_REFCNT(unicode) == 1);
> +
> + /* don't make the result ready in debug mode to ensure that the caller
> + makes the string ready before using it */
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> +#endif
> + return unicode;
> +}
> +
> +static PyObject*
> +unicode_result_ready(PyObject *unicode)
> +{
> + Py_ssize_t length;
> +
> + length = PyUnicode_GET_LENGTH(unicode);
> + if (length == 0) {
> + if (unicode != unicode_empty) {
> + Py_DECREF(unicode);
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> + return unicode_empty;
> + }
> +
> + if (length == 1) {
> + void *data = PyUnicode_DATA(unicode);
> + int kind = PyUnicode_KIND(unicode);
> + Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
> + if (ch < 256) {
> + PyObject *latin1_char = unicode_latin1[ch];
> + if (latin1_char != NULL) {
> + if (unicode != latin1_char) {
> + Py_INCREF(latin1_char);
> + Py_DECREF(unicode);
> + }
> + return latin1_char;
> + }
> + else {
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + Py_INCREF(unicode);
> + unicode_latin1[ch] = unicode;
> + return unicode;
> + }
> + }
> + }
> +
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + return unicode;
> +}
> +
> +static PyObject*
> +unicode_result(PyObject *unicode)
> +{
> + assert(_PyUnicode_CHECK(unicode));
> + if (PyUnicode_IS_READY(unicode))
> + return unicode_result_ready(unicode);
> + else
> + return unicode_result_wchar(unicode);
> +}
> +
> +static PyObject*
> +unicode_result_unchanged(PyObject *unicode)
> +{
> + if (PyUnicode_CheckExact(unicode)) {
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + Py_INCREF(unicode);
> + return unicode;
> + }
> + else
> + /* Subtype -- return genuine unicode string with the same value. */
> + return _PyUnicode_Copy(unicode);
> +}
> +
> +/* Implementation of the "backslashreplace" error handler for 8-bit encodings:
> + ASCII, Latin1, UTF-8, etc. */
> +static char*
> +backslashreplace(_PyBytesWriter *writer, char *str,
> + PyObject *unicode, Py_ssize_t collstart, Py_ssize_t collend)
> +{
> + Py_ssize_t size, i;
> + Py_UCS4 ch;
> + enum PyUnicode_Kind kind;
> + void *data;
> +
> + assert(PyUnicode_IS_READY(unicode));
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> +
> + size = 0;
> + /* determine replacement size */
> + for (i = collstart; i < collend; ++i) {
> + Py_ssize_t incr;
> +
> + ch = PyUnicode_READ(kind, data, i);
> + if (ch < 0x100)
> + incr = 2+2;
> + else if (ch < 0x10000)
> + incr = 2+4;
> + else {
> + assert(ch <= MAX_UNICODE);
> + incr = 2+8;
> + }
> + if (size > PY_SSIZE_T_MAX - incr) {
> + PyErr_SetString(PyExc_OverflowError,
> + "encoded result is too long for a Python string");
> + return NULL;
> + }
> + size += incr;
> + }
> +
> + str = _PyBytesWriter_Prepare(writer, str, size);
> + if (str == NULL)
> + return NULL;
> +
> + /* generate replacement */
> + for (i = collstart; i < collend; ++i) {
> + ch = PyUnicode_READ(kind, data, i);
> + *str++ = '\\';
> + if (ch >= 0x00010000) {
> + *str++ = 'U';
> + *str++ = Py_hexdigits[(ch>>28)&0xf];
> + *str++ = Py_hexdigits[(ch>>24)&0xf];
> + *str++ = Py_hexdigits[(ch>>20)&0xf];
> + *str++ = Py_hexdigits[(ch>>16)&0xf];
> + *str++ = Py_hexdigits[(ch>>12)&0xf];
> + *str++ = Py_hexdigits[(ch>>8)&0xf];
> + }
> + else if (ch >= 0x100) {
> + *str++ = 'u';
> + *str++ = Py_hexdigits[(ch>>12)&0xf];
> + *str++ = Py_hexdigits[(ch>>8)&0xf];
> + }
> + else
> + *str++ = 'x';
> + *str++ = Py_hexdigits[(ch>>4)&0xf];
> + *str++ = Py_hexdigits[ch&0xf];
> + }
> + return str;
> +}
> +
> +/* Implementation of the "xmlcharrefreplace" error handler for 8-bit encodings:
> + ASCII, Latin1, UTF-8, etc. */
> +static char*
> +xmlcharrefreplace(_PyBytesWriter *writer, char *str,
> + PyObject *unicode, Py_ssize_t collstart, Py_ssize_t collend)
> +{
> + Py_ssize_t size, i;
> + Py_UCS4 ch;
> + enum PyUnicode_Kind kind;
> + void *data;
> +
> + assert(PyUnicode_IS_READY(unicode));
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> +
> + size = 0;
> + /* determine replacement size */
> + for (i = collstart; i < collend; ++i) {
> + Py_ssize_t incr;
> +
> + ch = PyUnicode_READ(kind, data, i);
> + if (ch < 10)
> + incr = 2+1+1;
> + else if (ch < 100)
> + incr = 2+2+1;
> + else if (ch < 1000)
> + incr = 2+3+1;
> + else if (ch < 10000)
> + incr = 2+4+1;
> + else if (ch < 100000)
> + incr = 2+5+1;
> + else if (ch < 1000000)
> + incr = 2+6+1;
> + else {
> + assert(ch <= MAX_UNICODE);
> + incr = 2+7+1;
> + }
> + if (size > PY_SSIZE_T_MAX - incr) {
> + PyErr_SetString(PyExc_OverflowError,
> + "encoded result is too long for a Python string");
> + return NULL;
> + }
> + size += incr;
> + }
> +
> + str = _PyBytesWriter_Prepare(writer, str, size);
> + if (str == NULL)
> + return NULL;
> +
> + /* generate replacement */
> + for (i = collstart; i < collend; ++i) {
> + str += sprintf(str, "&#%d;", PyUnicode_READ(kind, data, i));
> + }
> + return str;
> +}
> +
> +/* --- Bloom Filters ----------------------------------------------------- */
> +
> +/* Stuff to implement simple "bloom filters" for Unicode characters.
> +   To keep things simple, we use a single bitmask, using the least
> +   significant bits of each Unicode character (ch & (BLOOM_WIDTH - 1))
> +   as the bit index. */
> +
> +/* The linebreak mask is set up by _PyUnicode_Init() below */
> +
> +#if LONG_BIT >= 128
> +#define BLOOM_WIDTH 128
> +#elif LONG_BIT >= 64
> +#define BLOOM_WIDTH 64
> +#elif LONG_BIT >= 32
> +#define BLOOM_WIDTH 32
> +#else
> +#error "LONG_BIT is smaller than 32"
> +#endif
> +
> +#define BLOOM_MASK unsigned long
> +
> +static BLOOM_MASK bloom_linebreak = ~(BLOOM_MASK)0;
> +
> +#define BLOOM(mask, ch) ((mask & (1UL << ((ch) & (BLOOM_WIDTH - 1)))))
> +
> +#define BLOOM_LINEBREAK(ch) \
> + ((ch) < 128U ? ascii_linebreak[(ch)] : \
> + (BLOOM(bloom_linebreak, (ch)) && Py_UNICODE_ISLINEBREAK(ch)))
> +
> +static BLOOM_MASK
> +make_bloom_mask(int kind, void* ptr, Py_ssize_t len)
> +{
> +#define BLOOM_UPDATE(TYPE, MASK, PTR, LEN) \
> + do { \
> + TYPE *data = (TYPE *)PTR; \
> + TYPE *end = data + LEN; \
> + Py_UCS4 ch; \
> + for (; data != end; data++) { \
> + ch = *data; \
> + MASK |= (1UL << (ch & (BLOOM_WIDTH - 1))); \
> + } \
> + break; \
> + } while (0)
> +
> + /* calculate simple bloom-style bitmask for a given unicode string */
> +
> + BLOOM_MASK mask;
> +
> + mask = 0;
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + BLOOM_UPDATE(Py_UCS1, mask, ptr, len);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + BLOOM_UPDATE(Py_UCS2, mask, ptr, len);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + BLOOM_UPDATE(Py_UCS4, mask, ptr, len);
> + break;
> + default:
> + assert(0);
> + }
> + return mask;
> +
> +#undef BLOOM_UPDATE
> +}
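> +
> +#if 0  /* Illustrative usage sketch (editor's note, not part of the upstream
> +        * CPython sources): strip()/split()-style loops build a mask over the
> +        * separator characters once, then use BLOOM() as a cheap pre-filter;
> +        * only on a (possibly false-positive) hit is the exact membership
> +        * test performed. */
> +static int
> +example_is_separator(PyObject *sepobj, Py_UCS4 ch)
> +{
> +    BLOOM_MASK sepmask = make_bloom_mask(PyUnicode_KIND(sepobj),
> +                                         PyUnicode_DATA(sepobj),
> +                                         PyUnicode_GET_LENGTH(sepobj));
> +    if (!BLOOM(sepmask, ch))
> +        return 0;                       /* definitely not a separator */
> +    return PyUnicode_FindChar(sepobj, ch, 0,
> +                              PyUnicode_GET_LENGTH(sepobj), 1) >= 0;
> +}
> +#endif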
> +
> +static int
> +ensure_unicode(PyObject *obj)
> +{
> + if (!PyUnicode_Check(obj)) {
> + PyErr_Format(PyExc_TypeError,
> + "must be str, not %.100s",
> + Py_TYPE(obj)->tp_name);
> + return -1;
> + }
> + return PyUnicode_READY(obj);
> +}
> +
> +/* Compilation of templated routines */
> +
> +#include "stringlib/asciilib.h"
> +#include "stringlib/fastsearch.h"
> +#include "stringlib/partition.h"
> +#include "stringlib/split.h"
> +#include "stringlib/count.h"
> +#include "stringlib/find.h"
> +#include "stringlib/find_max_char.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs1lib.h"
> +#include "stringlib/fastsearch.h"
> +#include "stringlib/partition.h"
> +#include "stringlib/split.h"
> +#include "stringlib/count.h"
> +#include "stringlib/find.h"
> +#include "stringlib/replace.h"
> +#include "stringlib/find_max_char.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs2lib.h"
> +#include "stringlib/fastsearch.h"
> +#include "stringlib/partition.h"
> +#include "stringlib/split.h"
> +#include "stringlib/count.h"
> +#include "stringlib/find.h"
> +#include "stringlib/replace.h"
> +#include "stringlib/find_max_char.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs4lib.h"
> +#include "stringlib/fastsearch.h"
> +#include "stringlib/partition.h"
> +#include "stringlib/split.h"
> +#include "stringlib/count.h"
> +#include "stringlib/find.h"
> +#include "stringlib/replace.h"
> +#include "stringlib/find_max_char.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/unicodedefs.h"
> +#include "stringlib/fastsearch.h"
> +#include "stringlib/count.h"
> +#include "stringlib/find.h"
> +#include "stringlib/undef.h"
> +
> +/* --- Unicode Object ----------------------------------------------------- */
> +
> +static PyObject *
> +fixup(PyObject *self, Py_UCS4 (*fixfct)(PyObject *s));
> +
> +static Py_ssize_t
> +findchar(const void *s, int kind,
> + Py_ssize_t size, Py_UCS4 ch,
> + int direction)
> +{
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + if ((Py_UCS1) ch != ch)
> + return -1;
> + if (direction > 0)
> + return ucs1lib_find_char((Py_UCS1 *) s, size, (Py_UCS1) ch);
> + else
> + return ucs1lib_rfind_char((Py_UCS1 *) s, size, (Py_UCS1) ch);
> + case PyUnicode_2BYTE_KIND:
> + if ((Py_UCS2) ch != ch)
> + return -1;
> + if (direction > 0)
> + return ucs2lib_find_char((Py_UCS2 *) s, size, (Py_UCS2) ch);
> + else
> + return ucs2lib_rfind_char((Py_UCS2 *) s, size, (Py_UCS2) ch);
> + case PyUnicode_4BYTE_KIND:
> + if (direction > 0)
> + return ucs4lib_find_char((Py_UCS4 *) s, size, ch);
> + else
> + return ucs4lib_rfind_char((Py_UCS4 *) s, size, ch);
> + default:
> + assert(0);
> + return -1;
> + }
> +}
> +
> +#ifdef Py_DEBUG
> +/* Fill the data of a Unicode string with invalid characters to detect bugs
> + earlier.
> +
> + _PyUnicode_CheckConsistency(str, 1) detects invalid characters, at least for
> + ASCII and UCS-4 strings. U+00FF is invalid in ASCII and U+FFFFFFFF is an
> + invalid character in Unicode 6.0. */
> +static void
> +unicode_fill_invalid(PyObject *unicode, Py_ssize_t old_length)
> +{
> + int kind = PyUnicode_KIND(unicode);
> + Py_UCS1 *data = PyUnicode_1BYTE_DATA(unicode);
> + Py_ssize_t length = _PyUnicode_LENGTH(unicode);
> + if (length <= old_length)
> + return;
> + memset(data + old_length * kind, 0xff, (length - old_length) * kind);
> +}
> +#endif
> +
> +static PyObject*
> +resize_compact(PyObject *unicode, Py_ssize_t length)
> +{
> + Py_ssize_t char_size;
> + Py_ssize_t struct_size;
> + Py_ssize_t new_size;
> + int share_wstr;
> + PyObject *new_unicode;
> +#ifdef Py_DEBUG
> + Py_ssize_t old_length = _PyUnicode_LENGTH(unicode);
> +#endif
> +
> + assert(unicode_modifiable(unicode));
> + assert(PyUnicode_IS_READY(unicode));
> + assert(PyUnicode_IS_COMPACT(unicode));
> +
> + char_size = PyUnicode_KIND(unicode);
> + if (PyUnicode_IS_ASCII(unicode))
> + struct_size = sizeof(PyASCIIObject);
> + else
> + struct_size = sizeof(PyCompactUnicodeObject);
> + share_wstr = _PyUnicode_SHARE_WSTR(unicode);
> +
> + if (length > ((PY_SSIZE_T_MAX - struct_size) / char_size - 1)) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + new_size = (struct_size + (length + 1) * char_size);
> +
> + if (_PyUnicode_HAS_UTF8_MEMORY(unicode)) {
> + PyObject_DEL(_PyUnicode_UTF8(unicode));
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + }
> + _Py_DEC_REFTOTAL;
> + _Py_ForgetReference(unicode);
> +
> + new_unicode = (PyObject *)PyObject_REALLOC(unicode, new_size);
> + if (new_unicode == NULL) {
> + _Py_NewReference(unicode);
> + PyErr_NoMemory();
> + return NULL;
> + }
> + unicode = new_unicode;
> + _Py_NewReference(unicode);
> +
> + _PyUnicode_LENGTH(unicode) = length;
> + if (share_wstr) {
> + _PyUnicode_WSTR(unicode) = PyUnicode_DATA(unicode);
> + if (!PyUnicode_IS_ASCII(unicode))
> + _PyUnicode_WSTR_LENGTH(unicode) = length;
> + }
> + else if (_PyUnicode_HAS_WSTR_MEMORY(unicode)) {
> + PyObject_DEL(_PyUnicode_WSTR(unicode));
> + _PyUnicode_WSTR(unicode) = NULL;
> + if (!PyUnicode_IS_ASCII(unicode))
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> + }
> +#ifdef Py_DEBUG
> + unicode_fill_invalid(unicode, old_length);
> +#endif
> + PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode),
> + length, 0);
> + assert(_PyUnicode_CheckConsistency(unicode, 0));
> + return unicode;
> +}
> +
> +static int
> +resize_inplace(PyObject *unicode, Py_ssize_t length)
> +{
> + wchar_t *wstr;
> + Py_ssize_t new_size;
> + assert(!PyUnicode_IS_COMPACT(unicode));
> + assert(Py_REFCNT(unicode) == 1);
> +
> + if (PyUnicode_IS_READY(unicode)) {
> + Py_ssize_t char_size;
> + int share_wstr, share_utf8;
> + void *data;
> +#ifdef Py_DEBUG
> + Py_ssize_t old_length = _PyUnicode_LENGTH(unicode);
> +#endif
> +
> + data = _PyUnicode_DATA_ANY(unicode);
> + char_size = PyUnicode_KIND(unicode);
> + share_wstr = _PyUnicode_SHARE_WSTR(unicode);
> + share_utf8 = _PyUnicode_SHARE_UTF8(unicode);
> +
> + if (length > (PY_SSIZE_T_MAX / char_size - 1)) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + new_size = (length + 1) * char_size;
> +
> + if (!share_utf8 && _PyUnicode_HAS_UTF8_MEMORY(unicode))
> + {
> + PyObject_DEL(_PyUnicode_UTF8(unicode));
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + }
> +
> + data = (PyObject *)PyObject_REALLOC(data, new_size);
> + if (data == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_DATA_ANY(unicode) = data;
> + if (share_wstr) {
> + _PyUnicode_WSTR(unicode) = data;
> + _PyUnicode_WSTR_LENGTH(unicode) = length;
> + }
> + if (share_utf8) {
> + _PyUnicode_UTF8(unicode) = data;
> + _PyUnicode_UTF8_LENGTH(unicode) = length;
> + }
> + _PyUnicode_LENGTH(unicode) = length;
> + PyUnicode_WRITE(PyUnicode_KIND(unicode), data, length, 0);
> +#ifdef Py_DEBUG
> + unicode_fill_invalid(unicode, old_length);
> +#endif
> + if (share_wstr || _PyUnicode_WSTR(unicode) == NULL) {
> + assert(_PyUnicode_CheckConsistency(unicode, 0));
> + return 0;
> + }
> + }
> + assert(_PyUnicode_WSTR(unicode) != NULL);
> +
> + /* check for integer overflow */
> + if (length > PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(wchar_t) - 1) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + new_size = sizeof(wchar_t) * (length + 1);
> + wstr = _PyUnicode_WSTR(unicode);
> + wstr = PyObject_REALLOC(wstr, new_size);
> + if (!wstr) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_WSTR(unicode) = wstr;
> + _PyUnicode_WSTR(unicode)[length] = 0;
> + _PyUnicode_WSTR_LENGTH(unicode) = length;
> + assert(_PyUnicode_CheckConsistency(unicode, 0));
> + return 0;
> +}
> +
> +static PyObject*
> +resize_copy(PyObject *unicode, Py_ssize_t length)
> +{
> + Py_ssize_t copy_length;
> + if (_PyUnicode_KIND(unicode) != PyUnicode_WCHAR_KIND) {
> + PyObject *copy;
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> +
> + copy = PyUnicode_New(length, PyUnicode_MAX_CHAR_VALUE(unicode));
> + if (copy == NULL)
> + return NULL;
> +
> + copy_length = Py_MIN(length, PyUnicode_GET_LENGTH(unicode));
> + _PyUnicode_FastCopyCharacters(copy, 0, unicode, 0, copy_length);
> + return copy;
> + }
> + else {
> + PyObject *w;
> +
> + w = (PyObject*)_PyUnicode_New(length);
> + if (w == NULL)
> + return NULL;
> + copy_length = _PyUnicode_WSTR_LENGTH(unicode);
> + copy_length = Py_MIN(copy_length, length);
> + memcpy(_PyUnicode_WSTR(w), _PyUnicode_WSTR(unicode),
> + copy_length * sizeof(wchar_t));
> + return w;
> + }
> +}
> +
> +/* We allocate one more byte to make sure the string is
> + U+0000 terminated; some code (e.g. new_identifier)
> + relies on that.
> +
> + XXX This allocator could further be enhanced by assuring that the
> + free list never reduces its size below 1.
> +
> +*/
> +
> +static PyUnicodeObject *
> +_PyUnicode_New(Py_ssize_t length)
> +{
> + PyUnicodeObject *unicode;
> + size_t new_size;
> +
> + /* Optimization for empty strings */
> + if (length == 0 && unicode_empty != NULL) {
> + Py_INCREF(unicode_empty);
> + return (PyUnicodeObject*)unicode_empty;
> + }
> +
> + /* Ensure we won't overflow the size. */
> + if (length > ((PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(Py_UNICODE)) - 1)) {
> + return (PyUnicodeObject *)PyErr_NoMemory();
> + }
> + if (length < 0) {
> + PyErr_SetString(PyExc_SystemError,
> + "Negative size passed to _PyUnicode_New");
> + return NULL;
> + }
> +
> + unicode = PyObject_New(PyUnicodeObject, &PyUnicode_Type);
> + if (unicode == NULL)
> + return NULL;
> + new_size = sizeof(Py_UNICODE) * ((size_t)length + 1);
> +
> + _PyUnicode_WSTR_LENGTH(unicode) = length;
> + _PyUnicode_HASH(unicode) = -1;
> + _PyUnicode_STATE(unicode).interned = 0;
> + _PyUnicode_STATE(unicode).kind = 0;
> + _PyUnicode_STATE(unicode).compact = 0;
> + _PyUnicode_STATE(unicode).ready = 0;
> + _PyUnicode_STATE(unicode).ascii = 0;
> + _PyUnicode_DATA_ANY(unicode) = NULL;
> + _PyUnicode_LENGTH(unicode) = 0;
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> +
> + _PyUnicode_WSTR(unicode) = (Py_UNICODE*) PyObject_MALLOC(new_size);
> + if (!_PyUnicode_WSTR(unicode)) {
> + Py_DECREF(unicode);
> + PyErr_NoMemory();
> + return NULL;
> + }
> +
> + /* Initialize the first element to guard against cases where
> + * the caller fails before initializing str -- unicode_resize()
> + * reads str[0], and the Keep-Alive optimization can keep memory
> + * allocated for str alive across a call to unicode_dealloc(unicode).
> + * We don't want unicode_resize to read uninitialized memory in
> + * that case.
> + */
> + _PyUnicode_WSTR(unicode)[0] = 0;
> + _PyUnicode_WSTR(unicode)[length] = 0;
> +
> + assert(_PyUnicode_CheckConsistency((PyObject *)unicode, 0));
> + return unicode;
> +}
> +
> +static const char*
> +unicode_kind_name(PyObject *unicode)
> +{
> + /* don't check consistency: unicode_kind_name() is called from
> + _PyUnicode_Dump() */
> + if (!PyUnicode_IS_COMPACT(unicode))
> + {
> + if (!PyUnicode_IS_READY(unicode))
> + return "wstr";
> + switch (PyUnicode_KIND(unicode))
> + {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(unicode))
> + return "legacy ascii";
> + else
> + return "legacy latin1";
> + case PyUnicode_2BYTE_KIND:
> + return "legacy UCS2";
> + case PyUnicode_4BYTE_KIND:
> + return "legacy UCS4";
> + default:
> + return "<legacy invalid kind>";
> + }
> + }
> + assert(PyUnicode_IS_READY(unicode));
> + switch (PyUnicode_KIND(unicode)) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(unicode))
> + return "ascii";
> + else
> + return "latin1";
> + case PyUnicode_2BYTE_KIND:
> + return "UCS2";
> + case PyUnicode_4BYTE_KIND:
> + return "UCS4";
> + default:
> + return "<invalid compact kind>";
> + }
> +}
> +
> +#ifdef Py_DEBUG
> +/* Functions wrapping macros for use in debugger */
> +char *_PyUnicode_utf8(void *unicode){
> + return PyUnicode_UTF8(unicode);
> +}
> +
> +void *_PyUnicode_compact_data(void *unicode) {
> + return _PyUnicode_COMPACT_DATA(unicode);
> +}
> +void *_PyUnicode_data(void *unicode){
> + printf("obj %p\n", unicode);
> + printf("compact %d\n", PyUnicode_IS_COMPACT(unicode));
> + printf("compact ascii %d\n", PyUnicode_IS_COMPACT_ASCII(unicode));
> + printf("ascii op %p\n", ((void*)((PyASCIIObject*)(unicode) + 1)));
> + printf("compact op %p\n", ((void*)((PyCompactUnicodeObject*)(unicode) + 1)));
> + printf("compact data %p\n", _PyUnicode_COMPACT_DATA(unicode));
> + return PyUnicode_DATA(unicode);
> +}
> +
> +void
> +_PyUnicode_Dump(PyObject *op)
> +{
> + PyASCIIObject *ascii = (PyASCIIObject *)op;
> + PyCompactUnicodeObject *compact = (PyCompactUnicodeObject *)op;
> + PyUnicodeObject *unicode = (PyUnicodeObject *)op;
> + void *data;
> +
> + if (ascii->state.compact)
> + {
> + if (ascii->state.ascii)
> + data = (ascii + 1);
> + else
> + data = (compact + 1);
> + }
> + else
> + data = unicode->data.any;
> + printf("%s: len=%" PY_FORMAT_SIZE_T "u, ",
> + unicode_kind_name(op), ascii->length);
> +
> + if (ascii->wstr == data)
> + printf("shared ");
> + printf("wstr=%p", ascii->wstr);
> +
> + if (!(ascii->state.ascii == 1 && ascii->state.compact == 1)) {
> + printf(" (%" PY_FORMAT_SIZE_T "u), ", compact->wstr_length);
> + if (!ascii->state.compact && compact->utf8 == unicode->data.any)
> + printf("shared ");
> + printf("utf8=%p (%" PY_FORMAT_SIZE_T "u)",
> + compact->utf8, compact->utf8_length);
> + }
> + printf(", data=%p\n", data);
> +}
> +#endif
> +
> +PyObject *
> +PyUnicode_New(Py_ssize_t size, Py_UCS4 maxchar)
> +{
> + PyObject *obj;
> + PyCompactUnicodeObject *unicode;
> + void *data;
> + enum PyUnicode_Kind kind;
> + int is_sharing, is_ascii;
> + Py_ssize_t char_size;
> + Py_ssize_t struct_size;
> +
> + /* Optimization for empty strings */
> + if (size == 0 && unicode_empty != NULL) {
> + Py_INCREF(unicode_empty);
> + return unicode_empty;
> + }
> +
> + is_ascii = 0;
> + is_sharing = 0;
> + struct_size = sizeof(PyCompactUnicodeObject);
> + if (maxchar < 128) {
> + kind = PyUnicode_1BYTE_KIND;
> + char_size = 1;
> + is_ascii = 1;
> + struct_size = sizeof(PyASCIIObject);
> + }
> + else if (maxchar < 256) {
> + kind = PyUnicode_1BYTE_KIND;
> + char_size = 1;
> + }
> + else if (maxchar < 65536) {
> + kind = PyUnicode_2BYTE_KIND;
> + char_size = 2;
> + if (sizeof(wchar_t) == 2)
> + is_sharing = 1;
> + }
> + else {
> + if (maxchar > MAX_UNICODE) {
> + PyErr_SetString(PyExc_SystemError,
> + "invalid maximum character passed to PyUnicode_New");
> + return NULL;
> + }
> + kind = PyUnicode_4BYTE_KIND;
> + char_size = 4;
> + if (sizeof(wchar_t) == 4)
> + is_sharing = 1;
> + }
> +
> + /* Ensure we won't overflow the size. */
> + if (size < 0) {
> + PyErr_SetString(PyExc_SystemError,
> + "Negative size passed to PyUnicode_New");
> + return NULL;
> + }
> + if (size > ((PY_SSIZE_T_MAX - struct_size) / char_size - 1))
> + return PyErr_NoMemory();
> +
> + /* Duplicated allocation code from _PyObject_New() instead of a call to
> + * PyObject_New() so we are able to allocate space for the object and
> + * its data buffer.
> + */
> + obj = (PyObject *) PyObject_MALLOC(struct_size + (size + 1) * char_size);
> + if (obj == NULL)
> + return PyErr_NoMemory();
> + obj = PyObject_INIT(obj, &PyUnicode_Type);
> + if (obj == NULL)
> + return NULL;
> +
> + unicode = (PyCompactUnicodeObject *)obj;
> + if (is_ascii)
> + data = ((PyASCIIObject*)obj) + 1;
> + else
> + data = unicode + 1;
> + _PyUnicode_LENGTH(unicode) = size;
> + _PyUnicode_HASH(unicode) = -1;
> + _PyUnicode_STATE(unicode).interned = 0;
> + _PyUnicode_STATE(unicode).kind = kind;
> + _PyUnicode_STATE(unicode).compact = 1;
> + _PyUnicode_STATE(unicode).ready = 1;
> + _PyUnicode_STATE(unicode).ascii = is_ascii;
> + if (is_ascii) {
> + ((char*)data)[size] = 0;
> + _PyUnicode_WSTR(unicode) = NULL;
> + }
> + else if (kind == PyUnicode_1BYTE_KIND) {
> + ((char*)data)[size] = 0;
> + _PyUnicode_WSTR(unicode) = NULL;
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> + unicode->utf8 = NULL;
> + unicode->utf8_length = 0;
> + }
> + else {
> + unicode->utf8 = NULL;
> + unicode->utf8_length = 0;
> + if (kind == PyUnicode_2BYTE_KIND)
> + ((Py_UCS2*)data)[size] = 0;
> + else /* kind == PyUnicode_4BYTE_KIND */
> + ((Py_UCS4*)data)[size] = 0;
> + if (is_sharing) {
> + _PyUnicode_WSTR_LENGTH(unicode) = size;
> + _PyUnicode_WSTR(unicode) = (wchar_t *)data;
> + }
> + else {
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> + _PyUnicode_WSTR(unicode) = NULL;
> + }
> + }
> +#ifdef Py_DEBUG
> + unicode_fill_invalid((PyObject*)unicode, 0);
> +#endif
> + assert(_PyUnicode_CheckConsistency((PyObject*)unicode, 0));
> + return obj;
> +}
> +
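For readers reviewing the allocator changes: the compact constructor above is normally
used in an "allocate, then fill" pattern. A minimal sketch, assuming an initialized
interpreter and <Python.h>; make_abc() is a hypothetical helper, not part of this patch:

    /* Build the 3-character ASCII string "abc" with the compact allocator.
       PyUnicode_New() picks the narrowest representation that can hold
       maxchar (here 127, so one byte per character). */
    static PyObject *
    make_abc(void)
    {
        PyObject *s = PyUnicode_New(3, 127);    /* length 3, maxchar 127 */
        if (s == NULL)
            return NULL;
        PyUnicode_WRITE(PyUnicode_KIND(s), PyUnicode_DATA(s), 0, 'a');
        PyUnicode_WRITE(PyUnicode_KIND(s), PyUnicode_DATA(s), 1, 'b');
        PyUnicode_WRITE(PyUnicode_KIND(s), PyUnicode_DATA(s), 2, 'c');
        return s;
    }
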
> +#if SIZEOF_WCHAR_T == 2
> +/* Helper function to convert a 16-bit wchar_t representation to UCS4; this
> + will decode surrogate pairs, while the other conversions are implemented
> + as macros for efficiency.
> +
> + This function assumes that unicode can hold one more code point than wstr
> + characters for a terminating null character. */
> +static void
> +unicode_convert_wchar_to_ucs4(const wchar_t *begin, const wchar_t *end,
> + PyObject *unicode)
> +{
> + const wchar_t *iter;
> + Py_UCS4 *ucs4_out;
> +
> + assert(unicode != NULL);
> + assert(_PyUnicode_CHECK(unicode));
> + assert(_PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND);
> + ucs4_out = PyUnicode_4BYTE_DATA(unicode);
> +
> + for (iter = begin; iter < end; ) {
> + assert(ucs4_out < (PyUnicode_4BYTE_DATA(unicode) +
> + _PyUnicode_GET_LENGTH(unicode)));
> + if (Py_UNICODE_IS_HIGH_SURROGATE(iter[0])
> + && (iter+1) < end
> + && Py_UNICODE_IS_LOW_SURROGATE(iter[1]))
> + {
> + *ucs4_out++ = Py_UNICODE_JOIN_SURROGATES(iter[0], iter[1]);
> + iter += 2;
> + }
> + else {
> + *ucs4_out++ = *iter;
> + iter++;
> + }
> + }
> + assert(ucs4_out == (PyUnicode_4BYTE_DATA(unicode) +
> + _PyUnicode_GET_LENGTH(unicode)));
> +
> +}
> +#endif
> +
> +static int
> +unicode_check_modifiable(PyObject *unicode)
> +{
> + if (!unicode_modifiable(unicode)) {
> + PyErr_SetString(PyExc_SystemError,
> + "Cannot modify a string currently used");
> + return -1;
> + }
> + return 0;
> +}
> +
> +static int
> +_copy_characters(PyObject *to, Py_ssize_t to_start,
> + PyObject *from, Py_ssize_t from_start,
> + Py_ssize_t how_many, int check_maxchar)
> +{
> + unsigned int from_kind, to_kind;
> + void *from_data, *to_data;
> +
> + assert(0 <= how_many);
> + assert(0 <= from_start);
> + assert(0 <= to_start);
> + assert(PyUnicode_Check(from));
> + assert(PyUnicode_IS_READY(from));
> + assert(from_start + how_many <= PyUnicode_GET_LENGTH(from));
> +
> + assert(PyUnicode_Check(to));
> + assert(PyUnicode_IS_READY(to));
> + assert(to_start + how_many <= PyUnicode_GET_LENGTH(to));
> +
> + if (how_many == 0)
> + return 0;
> +
> + from_kind = PyUnicode_KIND(from);
> + from_data = PyUnicode_DATA(from);
> + to_kind = PyUnicode_KIND(to);
> + to_data = PyUnicode_DATA(to);
> +
> +#ifdef Py_DEBUG
> + if (!check_maxchar
> + && PyUnicode_MAX_CHAR_VALUE(from) > PyUnicode_MAX_CHAR_VALUE(to))
> + {
> + const Py_UCS4 to_maxchar = PyUnicode_MAX_CHAR_VALUE(to);
> + Py_UCS4 ch;
> + Py_ssize_t i;
> + for (i=0; i < how_many; i++) {
> + ch = PyUnicode_READ(from_kind, from_data, from_start + i);
> + assert(ch <= to_maxchar);
> + }
> + }
> +#endif
> +
> + if (from_kind == to_kind) {
> + if (check_maxchar
> + && !PyUnicode_IS_ASCII(from) && PyUnicode_IS_ASCII(to))
> + {
> + /* Writing Latin-1 characters into an ASCII string requires checking
> + that all written characters are pure ASCII */
> + Py_UCS4 max_char;
> + max_char = ucs1lib_find_max_char(from_data,
> + (Py_UCS1*)from_data + how_many);
> + if (max_char >= 128)
> + return -1;
> + }
> + memcpy((char*)to_data + to_kind * to_start,
> + (char*)from_data + from_kind * from_start,
> + to_kind * how_many);
> + }
> + else if (from_kind == PyUnicode_1BYTE_KIND
> + && to_kind == PyUnicode_2BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS2,
> + PyUnicode_1BYTE_DATA(from) + from_start,
> + PyUnicode_1BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_2BYTE_DATA(to) + to_start
> + );
> + }
> + else if (from_kind == PyUnicode_1BYTE_KIND
> + && to_kind == PyUnicode_4BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS4,
> + PyUnicode_1BYTE_DATA(from) + from_start,
> + PyUnicode_1BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_4BYTE_DATA(to) + to_start
> + );
> + }
> + else if (from_kind == PyUnicode_2BYTE_KIND
> + && to_kind == PyUnicode_4BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS2, Py_UCS4,
> + PyUnicode_2BYTE_DATA(from) + from_start,
> + PyUnicode_2BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_4BYTE_DATA(to) + to_start
> + );
> + }
> + else {
> + assert (PyUnicode_MAX_CHAR_VALUE(from) > PyUnicode_MAX_CHAR_VALUE(to));
> +
> + if (!check_maxchar) {
> + if (from_kind == PyUnicode_2BYTE_KIND
> + && to_kind == PyUnicode_1BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS2, Py_UCS1,
> + PyUnicode_2BYTE_DATA(from) + from_start,
> + PyUnicode_2BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_1BYTE_DATA(to) + to_start
> + );
> + }
> + else if (from_kind == PyUnicode_4BYTE_KIND
> + && to_kind == PyUnicode_1BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS4, Py_UCS1,
> + PyUnicode_4BYTE_DATA(from) + from_start,
> + PyUnicode_4BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_1BYTE_DATA(to) + to_start
> + );
> + }
> + else if (from_kind == PyUnicode_4BYTE_KIND
> + && to_kind == PyUnicode_2BYTE_KIND)
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS4, Py_UCS2,
> + PyUnicode_4BYTE_DATA(from) + from_start,
> + PyUnicode_4BYTE_DATA(from) + from_start + how_many,
> + PyUnicode_2BYTE_DATA(to) + to_start
> + );
> + }
> + else {
> + assert(0);
> + return -1;
> + }
> + }
> + else {
> + const Py_UCS4 to_maxchar = PyUnicode_MAX_CHAR_VALUE(to);
> + Py_UCS4 ch;
> + Py_ssize_t i;
> +
> + for (i=0; i < how_many; i++) {
> + ch = PyUnicode_READ(from_kind, from_data, from_start + i);
> + if (ch > to_maxchar)
> + return -1;
> + PyUnicode_WRITE(to_kind, to_data, to_start + i, ch);
> + }
> + }
> + }
> + return 0;
> +}
> +
> +void
> +_PyUnicode_FastCopyCharacters(
> + PyObject *to, Py_ssize_t to_start,
> + PyObject *from, Py_ssize_t from_start, Py_ssize_t how_many)
> +{
> + (void)_copy_characters(to, to_start, from, from_start, how_many, 0);
> +}
> +
> +Py_ssize_t
> +PyUnicode_CopyCharacters(PyObject *to, Py_ssize_t to_start,
> + PyObject *from, Py_ssize_t from_start,
> + Py_ssize_t how_many)
> +{
> + int err;
> +
> + if (!PyUnicode_Check(from) || !PyUnicode_Check(to)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> +
> + if (PyUnicode_READY(from) == -1)
> + return -1;
> + if (PyUnicode_READY(to) == -1)
> + return -1;
> +
> + if ((size_t)from_start > (size_t)PyUnicode_GET_LENGTH(from)) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return -1;
> + }
> + if ((size_t)to_start > (size_t)PyUnicode_GET_LENGTH(to)) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return -1;
> + }
> + if (how_many < 0) {
> + PyErr_SetString(PyExc_SystemError, "how_many cannot be negative");
> + return -1;
> + }
> + how_many = Py_MIN(PyUnicode_GET_LENGTH(from)-from_start, how_many);
> + if (to_start + how_many > PyUnicode_GET_LENGTH(to)) {
> + PyErr_Format(PyExc_SystemError,
> + "Cannot write %zi characters at %zi "
> + "in a string of %zi characters",
> + how_many, to_start, PyUnicode_GET_LENGTH(to));
> + return -1;
> + }
> +
> + if (how_many == 0)
> + return 0;
> +
> + if (unicode_check_modifiable(to))
> + return -1;
> +
> + err = _copy_characters(to, to_start, from, from_start, how_many, 1);
> + if (err) {
> + PyErr_Format(PyExc_SystemError,
> + "Cannot copy %s characters "
> + "into a string of %s characters",
> + unicode_kind_name(from),
> + unicode_kind_name(to));
> + return -1;
> + }
> + return how_many;
> +}
> +
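As context for the kind-conversion table in _copy_characters(): the public entry point
is typically used to assemble a new string from pieces, e.g. a concatenation. A sketch
only, assuming both inputs are str objects; concat2() is a hypothetical helper, not
part of this patch (overflow of la + lb is ignored for brevity):

    static PyObject *
    concat2(PyObject *a, PyObject *b)
    {
        Py_ssize_t la, lb;
        Py_UCS4 maxchar;
        PyObject *res;

        if (PyUnicode_READY(a) < 0 || PyUnicode_READY(b) < 0)
            return NULL;
        la = PyUnicode_GET_LENGTH(a);
        lb = PyUnicode_GET_LENGTH(b);
        maxchar = Py_MAX(PyUnicode_MAX_CHAR_VALUE(a),
                         PyUnicode_MAX_CHAR_VALUE(b));
        res = PyUnicode_New(la + lb, maxchar);     /* wide enough for both */
        if (res == NULL)
            return NULL;
        if (PyUnicode_CopyCharacters(res, 0,  a, 0, la) < 0 ||
            PyUnicode_CopyCharacters(res, la, b, 0, lb) < 0) {
            Py_DECREF(res);
            return NULL;
        }
        return res;
    }
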
> +/* Find the maximum code point and count the number of surrogate pairs so a
> + correct string length can be computed before converting a string to UCS4.
> + This function counts single surrogates as a character and not as a pair.
> +
> + Return 0 on success, or -1 on error. */
> +static int
> +find_maxchar_surrogates(const wchar_t *begin, const wchar_t *end,
> + Py_UCS4 *maxchar, Py_ssize_t *num_surrogates)
> +{
> + const wchar_t *iter;
> + Py_UCS4 ch;
> +
> + assert(num_surrogates != NULL && maxchar != NULL);
> + *num_surrogates = 0;
> + *maxchar = 0;
> +
> + for (iter = begin; iter < end; ) {
> +#if SIZEOF_WCHAR_T == 2
> + if (Py_UNICODE_IS_HIGH_SURROGATE(iter[0])
> + && (iter+1) < end
> + && Py_UNICODE_IS_LOW_SURROGATE(iter[1]))
> + {
> + ch = Py_UNICODE_JOIN_SURROGATES(iter[0], iter[1]);
> + ++(*num_surrogates);
> + iter += 2;
> + }
> + else
> +#endif
> + {
> + ch = *iter;
> + iter++;
> + }
> + if (ch > *maxchar) {
> + *maxchar = ch;
> + if (*maxchar > MAX_UNICODE) {
> + PyErr_Format(PyExc_ValueError,
> + "character U+%x is not in range [U+0000; U+10ffff]",
> + ch);
> + return -1;
> + }
> + }
> + }
> + return 0;
> +}
> +
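For reference, and if I read the macro in unicodeobject.h correctly, the pairing this
counts on is Py_UNICODE_JOIN_SURROGATES(high, low) = 0x10000 + ((high & 0x3FF) << 10)
+ (low & 0x3FF); e.g. the UTF-16 pair 0xD83D/0xDE00 combines to
0x10000 + 0xF400 + 0x200 = U+1F600. Each such pair shortens the UCS-4 result by one
unit, which is exactly what *num_surrogates accounts for when _PyUnicode_Ready()
computes the final length.
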
> +int
> +_PyUnicode_Ready(PyObject *unicode)
> +{
> + wchar_t *end;
> + Py_UCS4 maxchar = 0;
> + Py_ssize_t num_surrogates;
> +#if SIZEOF_WCHAR_T == 2
> + Py_ssize_t length_wo_surrogates;
> +#endif
> +
> + /* _PyUnicode_Ready() is only intended for old-style API usage where
> + strings were created using _PyObject_New() and where no canonical
> + representation (the str field) has been set yet aka strings
> + which are not yet ready. */
> + assert(_PyUnicode_CHECK(unicode));
> + assert(_PyUnicode_KIND(unicode) == PyUnicode_WCHAR_KIND);
> + assert(_PyUnicode_WSTR(unicode) != NULL);
> + assert(_PyUnicode_DATA_ANY(unicode) == NULL);
> + assert(_PyUnicode_UTF8(unicode) == NULL);
> + /* Actually, it should neither be interned nor be anything else: */
> + assert(_PyUnicode_STATE(unicode).interned == SSTATE_NOT_INTERNED);
> +
> + end = _PyUnicode_WSTR(unicode) + _PyUnicode_WSTR_LENGTH(unicode);
> + if (find_maxchar_surrogates(_PyUnicode_WSTR(unicode), end,
> + &maxchar, &num_surrogates) == -1)
> + return -1;
> +
> + if (maxchar < 256) {
> + _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(_PyUnicode_WSTR_LENGTH(unicode) + 1);
> + if (!_PyUnicode_DATA_ANY(unicode)) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_CONVERT_BYTES(wchar_t, unsigned char,
> + _PyUnicode_WSTR(unicode), end,
> + PyUnicode_1BYTE_DATA(unicode));
> + PyUnicode_1BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
> + _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
> + _PyUnicode_STATE(unicode).kind = PyUnicode_1BYTE_KIND;
> + if (maxchar < 128) {
> + _PyUnicode_STATE(unicode).ascii = 1;
> + _PyUnicode_UTF8(unicode) = _PyUnicode_DATA_ANY(unicode);
> + _PyUnicode_UTF8_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
> + }
> + else {
> + _PyUnicode_STATE(unicode).ascii = 0;
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + }
> + PyObject_FREE(_PyUnicode_WSTR(unicode));
> + _PyUnicode_WSTR(unicode) = NULL;
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> + }
> + /* In this case we might have to convert down from 4-byte native
> + wchar_t to 2-byte unicode. */
> + else if (maxchar < 65536) {
> + assert(num_surrogates == 0 &&
> + "FindMaxCharAndNumSurrogatePairs() messed up");
> +
> +#if SIZEOF_WCHAR_T == 2
> + /* We can share representations and are done. */
> + _PyUnicode_DATA_ANY(unicode) = _PyUnicode_WSTR(unicode);
> + PyUnicode_2BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
> + _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
> + _PyUnicode_STATE(unicode).kind = PyUnicode_2BYTE_KIND;
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> +#else
> + /* sizeof(wchar_t) == 4 */
> + _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(
> + 2 * (_PyUnicode_WSTR_LENGTH(unicode) + 1));
> + if (!_PyUnicode_DATA_ANY(unicode)) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_CONVERT_BYTES(wchar_t, Py_UCS2,
> + _PyUnicode_WSTR(unicode), end,
> + PyUnicode_2BYTE_DATA(unicode));
> + PyUnicode_2BYTE_DATA(unicode)[_PyUnicode_WSTR_LENGTH(unicode)] = '\0';
> + _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
> + _PyUnicode_STATE(unicode).kind = PyUnicode_2BYTE_KIND;
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + PyObject_FREE(_PyUnicode_WSTR(unicode));
> + _PyUnicode_WSTR(unicode) = NULL;
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> +#endif
> + }
> + /* maxchar exceeds 16 bit, we need 4 bytes for unicode characters */
> + else {
> +#if SIZEOF_WCHAR_T == 2
> + /* if the native representation is 2 bytes, we need to allocate a
> + new normalized 4-byte version. */
> + length_wo_surrogates = _PyUnicode_WSTR_LENGTH(unicode) - num_surrogates;
> + if (length_wo_surrogates > PY_SSIZE_T_MAX / 4 - 1) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_DATA_ANY(unicode) = PyObject_MALLOC(4 * (length_wo_surrogates + 1));
> + if (!_PyUnicode_DATA_ANY(unicode)) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + _PyUnicode_LENGTH(unicode) = length_wo_surrogates;
> + _PyUnicode_STATE(unicode).kind = PyUnicode_4BYTE_KIND;
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + /* unicode_convert_wchar_to_ucs4() requires a ready string */
> + _PyUnicode_STATE(unicode).ready = 1;
> + unicode_convert_wchar_to_ucs4(_PyUnicode_WSTR(unicode), end, unicode);
> + PyObject_FREE(_PyUnicode_WSTR(unicode));
> + _PyUnicode_WSTR(unicode) = NULL;
> + _PyUnicode_WSTR_LENGTH(unicode) = 0;
> +#else
> + assert(num_surrogates == 0);
> +
> + _PyUnicode_DATA_ANY(unicode) = _PyUnicode_WSTR(unicode);
> + _PyUnicode_LENGTH(unicode) = _PyUnicode_WSTR_LENGTH(unicode);
> + _PyUnicode_UTF8(unicode) = NULL;
> + _PyUnicode_UTF8_LENGTH(unicode) = 0;
> + _PyUnicode_STATE(unicode).kind = PyUnicode_4BYTE_KIND;
> +#endif
> + PyUnicode_4BYTE_DATA(unicode)[_PyUnicode_LENGTH(unicode)] = '\0';
> + }
> + _PyUnicode_STATE(unicode).ready = 1;
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + return 0;
> +}
> +
> +static void
> +unicode_dealloc(PyObject *unicode)
> +{
> + switch (PyUnicode_CHECK_INTERNED(unicode)) {
> + case SSTATE_NOT_INTERNED:
> + break;
> +
> + case SSTATE_INTERNED_MORTAL:
> + /* revive dead object temporarily for DelItem */
> + Py_REFCNT(unicode) = 3;
> + if (PyDict_DelItem(interned, unicode) != 0)
> + Py_FatalError(
> + "deletion of interned string failed");
> + break;
> +
> + case SSTATE_INTERNED_IMMORTAL:
> + Py_FatalError("Immortal interned string died.");
> + /* fall through */
> +
> + default:
> + Py_FatalError("Inconsistent interned string state.");
> + }
> +
> + if (_PyUnicode_HAS_WSTR_MEMORY(unicode))
> + PyObject_DEL(_PyUnicode_WSTR(unicode));
> + if (_PyUnicode_HAS_UTF8_MEMORY(unicode))
> + PyObject_DEL(_PyUnicode_UTF8(unicode));
> + if (!PyUnicode_IS_COMPACT(unicode) && _PyUnicode_DATA_ANY(unicode))
> + PyObject_DEL(_PyUnicode_DATA_ANY(unicode));
> +
> + Py_TYPE(unicode)->tp_free(unicode);
> +}
> +
> +#ifdef Py_DEBUG
> +static int
> +unicode_is_singleton(PyObject *unicode)
> +{
> + PyASCIIObject *ascii = (PyASCIIObject *)unicode;
> + if (unicode == unicode_empty)
> + return 1;
> + if (ascii->state.kind != PyUnicode_WCHAR_KIND && ascii->length == 1)
> + {
> + Py_UCS4 ch = PyUnicode_READ_CHAR(unicode, 0);
> + if (ch < 256 && unicode_latin1[ch] == unicode)
> + return 1;
> + }
> + return 0;
> +}
> +#endif
> +
> +static int
> +unicode_modifiable(PyObject *unicode)
> +{
> + assert(_PyUnicode_CHECK(unicode));
> + if (Py_REFCNT(unicode) != 1)
> + return 0;
> + if (_PyUnicode_HASH(unicode) != -1)
> + return 0;
> + if (PyUnicode_CHECK_INTERNED(unicode))
> + return 0;
> + if (!PyUnicode_CheckExact(unicode))
> + return 0;
> +#ifdef Py_DEBUG
> + /* singleton refcount is greater than 1 */
> + assert(!unicode_is_singleton(unicode));
> +#endif
> + return 1;
> +}
> +
> +static int
> +unicode_resize(PyObject **p_unicode, Py_ssize_t length)
> +{
> + PyObject *unicode;
> + Py_ssize_t old_length;
> +
> + assert(p_unicode != NULL);
> + unicode = *p_unicode;
> +
> + assert(unicode != NULL);
> + assert(PyUnicode_Check(unicode));
> + assert(0 <= length);
> +
> + if (_PyUnicode_KIND(unicode) == PyUnicode_WCHAR_KIND)
> + old_length = PyUnicode_WSTR_LENGTH(unicode);
> + else
> + old_length = PyUnicode_GET_LENGTH(unicode);
> + if (old_length == length)
> + return 0;
> +
> + if (length == 0) {
> + _Py_INCREF_UNICODE_EMPTY();
> + if (!unicode_empty)
> + return -1;
> + Py_SETREF(*p_unicode, unicode_empty);
> + return 0;
> + }
> +
> + if (!unicode_modifiable(unicode)) {
> + PyObject *copy = resize_copy(unicode, length);
> + if (copy == NULL)
> + return -1;
> + Py_SETREF(*p_unicode, copy);
> + return 0;
> + }
> +
> + if (PyUnicode_IS_COMPACT(unicode)) {
> + PyObject *new_unicode = resize_compact(unicode, length);
> + if (new_unicode == NULL)
> + return -1;
> + *p_unicode = new_unicode;
> + return 0;
> + }
> + return resize_inplace(unicode, length);
> +}
> +
> +int
> +PyUnicode_Resize(PyObject **p_unicode, Py_ssize_t length)
> +{
> + PyObject *unicode;
> + if (p_unicode == NULL) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + unicode = *p_unicode;
> + if (unicode == NULL || !PyUnicode_Check(unicode) || length < 0)
> + {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + return unicode_resize(p_unicode, length);
> +}
> +
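A note on the calling convention here: because unicode_resize() may swap in a copy,
the caller has to pass the address of its only reference and keep using the possibly
updated pointer afterwards. A minimal sketch; shrink_demo() is a hypothetical helper,
not part of this patch:

    static PyObject *
    shrink_demo(void)
    {
        PyObject *s = PyUnicode_New(8, 127);       /* over-allocate 8 slots */
        if (s == NULL)
            return NULL;
        PyUnicode_WRITE(PyUnicode_KIND(s), PyUnicode_DATA(s), 0, 'o');
        PyUnicode_WRITE(PyUnicode_KIND(s), PyUnicode_DATA(s), 1, 'k');
        /* Only 2 of the 8 slots were used, so shrink.  The object may be
           reallocated, hence &s rather than s. */
        if (PyUnicode_Resize(&s, 2) < 0) {
            Py_DECREF(s);                          /* s is left unchanged on error */
            return NULL;
        }
        return s;                                  /* the 2-character string "ok" */
    }
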
> +/* Copy an ASCII or latin1 char* string into a Python Unicode string.
> +
> + WARNING: The function doesn't copy the terminating null character and
> + doesn't check the maximum character (may write a latin1 character in an
> + ASCII string). */
> +static void
> +unicode_write_cstr(PyObject *unicode, Py_ssize_t index,
> + const char *str, Py_ssize_t len)
> +{
> + enum PyUnicode_Kind kind = PyUnicode_KIND(unicode);
> + void *data = PyUnicode_DATA(unicode);
> + const char *end = str + len;
> +
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND: {
> + assert(index + len <= PyUnicode_GET_LENGTH(unicode));
> +#ifdef Py_DEBUG
> + if (PyUnicode_IS_ASCII(unicode)) {
> + Py_UCS4 maxchar = ucs1lib_find_max_char(
> + (const Py_UCS1*)str,
> + (const Py_UCS1*)str + len);
> + assert(maxchar < 128);
> + }
> +#endif
> + memcpy((char *) data + index, str, len);
> + break;
> + }
> + case PyUnicode_2BYTE_KIND: {
> + Py_UCS2 *start = (Py_UCS2 *)data + index;
> + Py_UCS2 *ucs2 = start;
> + assert(index <= PyUnicode_GET_LENGTH(unicode));
> +
> + for (; str < end; ++ucs2, ++str)
> + *ucs2 = (Py_UCS2)*str;
> +
> + assert((ucs2 - start) <= PyUnicode_GET_LENGTH(unicode));
> + break;
> + }
> + default: {
> + Py_UCS4 *start = (Py_UCS4 *)data + index;
> + Py_UCS4 *ucs4 = start;
> + assert(kind == PyUnicode_4BYTE_KIND);
> + assert(index <= PyUnicode_GET_LENGTH(unicode));
> +
> + for (; str < end; ++ucs4, ++str)
> + *ucs4 = (Py_UCS4)*str;
> +
> + assert((ucs4 - start) <= PyUnicode_GET_LENGTH(unicode));
> + }
> + }
> +}
> +
> +static PyObject*
> +get_latin1_char(unsigned char ch)
> +{
> + PyObject *unicode = unicode_latin1[ch];
> + if (!unicode) {
> + unicode = PyUnicode_New(1, ch);
> + if (!unicode)
> + return NULL;
> + PyUnicode_1BYTE_DATA(unicode)[0] = ch;
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + unicode_latin1[ch] = unicode;
> + }
> + Py_INCREF(unicode);
> + return unicode;
> +}
> +
> +static PyObject*
> +unicode_char(Py_UCS4 ch)
> +{
> + PyObject *unicode;
> +
> + assert(ch <= MAX_UNICODE);
> +
> + if (ch < 256)
> + return get_latin1_char(ch);
> +
> + unicode = PyUnicode_New(1, ch);
> + if (unicode == NULL)
> + return NULL;
> + switch (PyUnicode_KIND(unicode)) {
> + case PyUnicode_1BYTE_KIND:
> + PyUnicode_1BYTE_DATA(unicode)[0] = (Py_UCS1)ch;
> + break;
> + case PyUnicode_2BYTE_KIND:
> + PyUnicode_2BYTE_DATA(unicode)[0] = (Py_UCS2)ch;
> + break;
> + default:
> + assert(PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND);
> + PyUnicode_4BYTE_DATA(unicode)[0] = ch;
> + }
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + return unicode;
> +}
> +
> +PyObject *
> +PyUnicode_FromUnicode(const Py_UNICODE *u, Py_ssize_t size)
> +{
> + PyObject *unicode;
> + Py_UCS4 maxchar = 0;
> + Py_ssize_t num_surrogates;
> +
> + if (u == NULL)
> + return (PyObject*)_PyUnicode_New(size);
> +
> + /* If the Unicode data is known at construction time, we can apply
> + some optimizations which share commonly used objects. */
> +
> + /* Optimization for empty strings */
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> +
> + /* Single character Unicode objects in the Latin-1 range are
> + shared when using this constructor */
> + if (size == 1 && (Py_UCS4)*u < 256)
> + return get_latin1_char((unsigned char)*u);
> +
> + /* If not empty and not single character, copy the Unicode data
> + into the new object */
> + if (find_maxchar_surrogates(u, u + size,
> + &maxchar, &num_surrogates) == -1)
> + return NULL;
> +
> + unicode = PyUnicode_New(size - num_surrogates, maxchar);
> + if (!unicode)
> + return NULL;
> +
> + switch (PyUnicode_KIND(unicode)) {
> + case PyUnicode_1BYTE_KIND:
> + _PyUnicode_CONVERT_BYTES(Py_UNICODE, unsigned char,
> + u, u + size, PyUnicode_1BYTE_DATA(unicode));
> + break;
> + case PyUnicode_2BYTE_KIND:
> +#if Py_UNICODE_SIZE == 2
> + memcpy(PyUnicode_2BYTE_DATA(unicode), u, size * 2);
> +#else
> + _PyUnicode_CONVERT_BYTES(Py_UNICODE, Py_UCS2,
> + u, u + size, PyUnicode_2BYTE_DATA(unicode));
> +#endif
> + break;
> + case PyUnicode_4BYTE_KIND:
> +#if SIZEOF_WCHAR_T == 2
> + /* This is the only case which has to process surrogates, thus
> + a simple copy loop is not enough and we need a function. */
> + unicode_convert_wchar_to_ucs4(u, u + size, unicode);
> +#else
> + assert(num_surrogates == 0);
> + memcpy(PyUnicode_4BYTE_DATA(unicode), u, size * 4);
> +#endif
> + break;
> + default:
> + assert(0 && "Impossible state");
> + }
> +
> + return unicode_result(unicode);
> +}
> +
> +PyObject *
> +PyUnicode_FromStringAndSize(const char *u, Py_ssize_t size)
> +{
> + if (size < 0) {
> + PyErr_SetString(PyExc_SystemError,
> + "Negative size passed to PyUnicode_FromStringAndSize");
> + return NULL;
> + }
> + if (u != NULL)
> + return PyUnicode_DecodeUTF8Stateful(u, size, NULL, NULL);
> + else
> + return (PyObject *)_PyUnicode_New(size);
> +}
> +
> +PyObject *
> +PyUnicode_FromString(const char *u)
> +{
> + size_t size = strlen(u);
> + if (size > PY_SSIZE_T_MAX) {
> + PyErr_SetString(PyExc_OverflowError, "input too long");
> + return NULL;
> + }
> + return PyUnicode_DecodeUTF8Stateful(u, (Py_ssize_t)size, NULL, NULL);
> +}
> +
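Both constructors above decode UTF-8; the only difference is whether the length comes
from strlen() or is passed explicitly (which allows embedded NULs). A quick
illustration; from_literals() is a hypothetical helper, not part of this patch:

    static void
    from_literals(void)
    {
        PyObject *a = PyUnicode_FromString("hello");               /* length 5 */
        PyObject *b = PyUnicode_FromStringAndSize("hi\0there", 8); /* length 8,
                                                      keeps the embedded NUL */
        Py_XDECREF(a);
        Py_XDECREF(b);
    }
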
> +PyObject *
> +_PyUnicode_FromId(_Py_Identifier *id)
> +{
> + if (!id->object) {
> + id->object = PyUnicode_DecodeUTF8Stateful(id->string,
> + strlen(id->string),
> + NULL, NULL);
> + if (!id->object)
> + return NULL;
> + PyUnicode_InternInPlace(&id->object);
> + assert(!id->next);
> + id->next = static_strings;
> + static_strings = id;
> + }
> + return id->object;
> +}
> +
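_PyUnicode_FromId() is what backs the _Py_IDENTIFIER() machinery used throughout the
interpreter: the str object is created and interned on first use, then cached in the
static _Py_Identifier; _PyUnicode_ClearStaticStrings() just below is the matching
cleanup. Typical use, sketched; call_readline() is a hypothetical helper, not part of
this patch:

    static PyObject *
    call_readline(PyObject *file)
    {
        _Py_IDENTIFIER(readline);          /* static, initialized once */
        /* looks up and calls file.readline() via the cached identifier */
        return _PyObject_CallMethodId(file, &PyId_readline, NULL);
    }
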
> +void
> +_PyUnicode_ClearStaticStrings()
> +{
> + _Py_Identifier *tmp, *s = static_strings;
> + while (s) {
> + Py_CLEAR(s->object);
> + tmp = s->next;
> + s->next = NULL;
> + s = tmp;
> + }
> + static_strings = NULL;
> +}
> +
> +/* Internal function, doesn't check maximum character */
> +
> +PyObject*
> +_PyUnicode_FromASCII(const char *buffer, Py_ssize_t size)
> +{
> + const unsigned char *s = (const unsigned char *)buffer;
> + PyObject *unicode;
> + if (size == 1) {
> +#ifdef Py_DEBUG
> + assert((unsigned char)s[0] < 128);
> +#endif
> + return get_latin1_char(s[0]);
> + }
> + unicode = PyUnicode_New(size, 127);
> + if (!unicode)
> + return NULL;
> + memcpy(PyUnicode_1BYTE_DATA(unicode), s, size);
> + assert(_PyUnicode_CheckConsistency(unicode, 1));
> + return unicode;
> +}
> +
> +static Py_UCS4
> +kind_maxchar_limit(unsigned int kind)
> +{
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + return 0x80;
> + case PyUnicode_2BYTE_KIND:
> + return 0x100;
> + case PyUnicode_4BYTE_KIND:
> + return 0x10000;
> + default:
> + assert(0 && "invalid kind");
> + return MAX_UNICODE;
> + }
> +}
> +
> +static Py_UCS4
> +align_maxchar(Py_UCS4 maxchar)
> +{
> + if (maxchar <= 127)
> + return 127;
> + else if (maxchar <= 255)
> + return 255;
> + else if (maxchar <= 65535)
> + return 65535;
> + else
> + return MAX_UNICODE;
> +}
> +
> +static PyObject*
> +_PyUnicode_FromUCS1(const Py_UCS1* u, Py_ssize_t size)
> +{
> + PyObject *res;
> + unsigned char max_char;
> +
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + assert(size > 0);
> + if (size == 1)
> + return get_latin1_char(u[0]);
> +
> + max_char = ucs1lib_find_max_char(u, u + size);
> + res = PyUnicode_New(size, max_char);
> + if (!res)
> + return NULL;
> + memcpy(PyUnicode_1BYTE_DATA(res), u, size);
> + assert(_PyUnicode_CheckConsistency(res, 1));
> + return res;
> +}
> +
> +static PyObject*
> +_PyUnicode_FromUCS2(const Py_UCS2 *u, Py_ssize_t size)
> +{
> + PyObject *res;
> + Py_UCS2 max_char;
> +
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + assert(size > 0);
> + if (size == 1)
> + return unicode_char(u[0]);
> +
> + max_char = ucs2lib_find_max_char(u, u + size);
> + res = PyUnicode_New(size, max_char);
> + if (!res)
> + return NULL;
> + if (max_char >= 256)
> + memcpy(PyUnicode_2BYTE_DATA(res), u, sizeof(Py_UCS2)*size);
> + else {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS2, Py_UCS1, u, u + size, PyUnicode_1BYTE_DATA(res));
> + }
> + assert(_PyUnicode_CheckConsistency(res, 1));
> + return res;
> +}
> +
> +static PyObject*
> +_PyUnicode_FromUCS4(const Py_UCS4 *u, Py_ssize_t size)
> +{
> + PyObject *res;
> + Py_UCS4 max_char;
> +
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + assert(size > 0);
> + if (size == 1)
> + return unicode_char(u[0]);
> +
> + max_char = ucs4lib_find_max_char(u, u + size);
> + res = PyUnicode_New(size, max_char);
> + if (!res)
> + return NULL;
> + if (max_char < 256)
> + _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS1, u, u + size,
> + PyUnicode_1BYTE_DATA(res));
> + else if (max_char < 0x10000)
> + _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS2, u, u + size,
> + PyUnicode_2BYTE_DATA(res));
> + else
> + memcpy(PyUnicode_4BYTE_DATA(res), u, sizeof(Py_UCS4)*size);
> + assert(_PyUnicode_CheckConsistency(res, 1));
> + return res;
> +}
> +
> +PyObject*
> +PyUnicode_FromKindAndData(int kind, const void *buffer, Py_ssize_t size)
> +{
> + if (size < 0) {
> + PyErr_SetString(PyExc_ValueError, "size must be positive");
> + return NULL;
> + }
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + return _PyUnicode_FromUCS1(buffer, size);
> + case PyUnicode_2BYTE_KIND:
> + return _PyUnicode_FromUCS2(buffer, size);
> + case PyUnicode_4BYTE_KIND:
> + return _PyUnicode_FromUCS4(buffer, size);
> + default:
> + PyErr_SetString(PyExc_SystemError, "invalid kind");
> + return NULL;
> + }
> +}
> +
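PyUnicode_FromKindAndData() is the convenient front end to the three UCS helpers
above; it always narrows to the smallest representation that fits the data. A sketch;
from_ucs4_buf() is a hypothetical helper, not part of this patch:

    static PyObject *
    from_ucs4_buf(void)
    {
        /* U+1F600, space, 'A' -- stays 4-byte because of U+1F600 */
        static const Py_UCS4 chars[3] = { 0x1F600, 0x0020, 0x0041 };
        return PyUnicode_FromKindAndData(PyUnicode_4BYTE_KIND, chars, 3);
    }
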
> +Py_UCS4
> +_PyUnicode_FindMaxChar(PyObject *unicode, Py_ssize_t start, Py_ssize_t end)
> +{
> + enum PyUnicode_Kind kind;
> + void *startptr, *endptr;
> +
> + assert(PyUnicode_IS_READY(unicode));
> + assert(0 <= start);
> + assert(end <= PyUnicode_GET_LENGTH(unicode));
> + assert(start <= end);
> +
> + if (start == 0 && end == PyUnicode_GET_LENGTH(unicode))
> + return PyUnicode_MAX_CHAR_VALUE(unicode);
> +
> + if (start == end)
> + return 127;
> +
> + if (PyUnicode_IS_ASCII(unicode))
> + return 127;
> +
> + kind = PyUnicode_KIND(unicode);
> + startptr = PyUnicode_DATA(unicode);
> + endptr = (char *)startptr + end * kind;
> + startptr = (char *)startptr + start * kind;
> + switch(kind) {
> + case PyUnicode_1BYTE_KIND:
> + return ucs1lib_find_max_char(startptr, endptr);
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_find_max_char(startptr, endptr);
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_find_max_char(startptr, endptr);
> + default:
> + assert(0);
> + return 0;
> + }
> +}
> +
> +/* Ensure that a string uses the most efficient storage; if it does not,
> + create a new string of the right kind. Write NULL into *p_unicode
> + on error. */
> +static void
> +unicode_adjust_maxchar(PyObject **p_unicode)
> +{
> + PyObject *unicode, *copy;
> + Py_UCS4 max_char;
> + Py_ssize_t len;
> + unsigned int kind;
> +
> + assert(p_unicode != NULL);
> + unicode = *p_unicode;
> + assert(PyUnicode_IS_READY(unicode));
> + if (PyUnicode_IS_ASCII(unicode))
> + return;
> +
> + len = PyUnicode_GET_LENGTH(unicode);
> + kind = PyUnicode_KIND(unicode);
> + if (kind == PyUnicode_1BYTE_KIND) {
> + const Py_UCS1 *u = PyUnicode_1BYTE_DATA(unicode);
> + max_char = ucs1lib_find_max_char(u, u + len);
> + if (max_char >= 128)
> + return;
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + const Py_UCS2 *u = PyUnicode_2BYTE_DATA(unicode);
> + max_char = ucs2lib_find_max_char(u, u + len);
> + if (max_char >= 256)
> + return;
> + }
> + else {
> + const Py_UCS4 *u = PyUnicode_4BYTE_DATA(unicode);
> + assert(kind == PyUnicode_4BYTE_KIND);
> + max_char = ucs4lib_find_max_char(u, u + len);
> + if (max_char >= 0x10000)
> + return;
> + }
> + copy = PyUnicode_New(len, max_char);
> + if (copy != NULL)
> + _PyUnicode_FastCopyCharacters(copy, 0, unicode, 0, len);
> + Py_DECREF(unicode);
> + *p_unicode = copy;
> +}
> +
> +PyObject*
> +_PyUnicode_Copy(PyObject *unicode)
> +{
> + Py_ssize_t length;
> + PyObject *copy;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> +
> + length = PyUnicode_GET_LENGTH(unicode);
> + copy = PyUnicode_New(length, PyUnicode_MAX_CHAR_VALUE(unicode));
> + if (!copy)
> + return NULL;
> + assert(PyUnicode_KIND(copy) == PyUnicode_KIND(unicode));
> +
> + memcpy(PyUnicode_DATA(copy), PyUnicode_DATA(unicode),
> + length * PyUnicode_KIND(unicode));
> + assert(_PyUnicode_CheckConsistency(copy, 1));
> + return copy;
> +}
> +
> +
> +/* Widen Unicode objects to larger buffers. Don't write terminating null
> + character. Return NULL on error. */
> +
> +void*
> +_PyUnicode_AsKind(PyObject *s, unsigned int kind)
> +{
> + Py_ssize_t len;
> + void *result;
> + unsigned int skind;
> +
> + if (PyUnicode_READY(s) == -1)
> + return NULL;
> +
> + len = PyUnicode_GET_LENGTH(s);
> + skind = PyUnicode_KIND(s);
> + if (skind >= kind) {
> + PyErr_SetString(PyExc_SystemError, "invalid widening attempt");
> + return NULL;
> + }
> + switch (kind) {
> + case PyUnicode_2BYTE_KIND:
> + result = PyMem_New(Py_UCS2, len);
> + if (!result)
> + return PyErr_NoMemory();
> + assert(skind == PyUnicode_1BYTE_KIND);
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS2,
> + PyUnicode_1BYTE_DATA(s),
> + PyUnicode_1BYTE_DATA(s) + len,
> + result);
> + return result;
> + case PyUnicode_4BYTE_KIND:
> + result = PyMem_New(Py_UCS4, len);
> + if (!result)
> + return PyErr_NoMemory();
> + if (skind == PyUnicode_2BYTE_KIND) {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS2, Py_UCS4,
> + PyUnicode_2BYTE_DATA(s),
> + PyUnicode_2BYTE_DATA(s) + len,
> + result);
> + }
> + else {
> + assert(skind == PyUnicode_1BYTE_KIND);
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS4,
> + PyUnicode_1BYTE_DATA(s),
> + PyUnicode_1BYTE_DATA(s) + len,
> + result);
> + }
> + return result;
> + default:
> + break;
> + }
> + PyErr_SetString(PyExc_SystemError, "invalid kind");
> + return NULL;
> +}
> +
> +static Py_UCS4*
> +as_ucs4(PyObject *string, Py_UCS4 *target, Py_ssize_t targetsize,
> + int copy_null)
> +{
> + int kind;
> + void *data;
> + Py_ssize_t len, targetlen;
> + if (PyUnicode_READY(string) == -1)
> + return NULL;
> + kind = PyUnicode_KIND(string);
> + data = PyUnicode_DATA(string);
> + len = PyUnicode_GET_LENGTH(string);
> + targetlen = len;
> + if (copy_null)
> + targetlen++;
> + if (!target) {
> + target = PyMem_New(Py_UCS4, targetlen);
> + if (!target) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + }
> + else {
> + if (targetsize < targetlen) {
> + PyErr_Format(PyExc_SystemError,
> + "string is longer than the buffer");
> + if (copy_null && 0 < targetsize)
> + target[0] = 0;
> + return NULL;
> + }
> + }
> + if (kind == PyUnicode_1BYTE_KIND) {
> + Py_UCS1 *start = (Py_UCS1 *) data;
> + _PyUnicode_CONVERT_BYTES(Py_UCS1, Py_UCS4, start, start + len, target);
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + Py_UCS2 *start = (Py_UCS2 *) data;
> + _PyUnicode_CONVERT_BYTES(Py_UCS2, Py_UCS4, start, start + len, target);
> + }
> + else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + memcpy(target, data, len * sizeof(Py_UCS4));
> + }
> + if (copy_null)
> + target[len] = 0;
> + return target;
> +}
> +
> +Py_UCS4*
> +PyUnicode_AsUCS4(PyObject *string, Py_UCS4 *target, Py_ssize_t targetsize,
> + int copy_null)
> +{
> + if (target == NULL || targetsize < 0) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + return as_ucs4(string, target, targetsize, copy_null);
> +}
> +
> +Py_UCS4*
> +PyUnicode_AsUCS4Copy(PyObject *string)
> +{
> + return as_ucs4(string, NULL, 0, 1);
> +}
> +
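The _Copy variant allocates with PyMem_New(), so the caller owns the buffer and must
release it with PyMem_Free(). A sketch; count_astral() is a hypothetical helper, not
part of this patch:

    static Py_ssize_t
    count_astral(PyObject *s)
    {
        Py_ssize_t i, n = 0;
        Py_UCS4 *buf = PyUnicode_AsUCS4Copy(s);    /* NUL-terminated copy */
        if (buf == NULL)
            return -1;                             /* error already set */
        for (i = 0; i < PyUnicode_GET_LENGTH(s); i++)
            if (buf[i] > 0xFFFF)
                n++;                               /* code points above the BMP */
        PyMem_Free(buf);                           /* caller owns the buffer */
        return n;
    }
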
> +#ifdef HAVE_WCHAR_H
> +
> +PyObject *
> +PyUnicode_FromWideChar(const wchar_t *w, Py_ssize_t size)
> +{
> + if (w == NULL) {
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + if (size == -1) {
> + size = wcslen(w);
> + }
> +
> + return PyUnicode_FromUnicode(w, size);
> +}
> +
> +#endif /* HAVE_WCHAR_H */
> +
> +/* maximum number of characters required for output of %lld or %p.
> + We need at most ceil(log10(256)*SIZEOF_LONG_LONG) digits,
> + plus 1 for the sign. 53/22 is an upper bound for log10(256). */
> +#define MAX_LONG_LONG_CHARS (2 + (SIZEOF_LONG_LONG*53-1) / 22)
> +
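Quick sanity check on that bound for SIZEOF_LONG_LONG == 8: the comment's estimate is
ceil(8 * log10(256)) = ceil(19.27) = 20 digits plus 1 for the sign, i.e. 21, and the
macro evaluates to 2 + (8*53 - 1)/22 = 2 + 423/22 = 2 + 19 = 21. The worst case
sprintf() output is 20 characters (a sign plus 19 digits, or 20 unsigned digits), so
the stack buffers below also have room for the terminating NUL.
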
> +static int
> +unicode_fromformat_write_str(_PyUnicodeWriter *writer, PyObject *str,
> + Py_ssize_t width, Py_ssize_t precision)
> +{
> + Py_ssize_t length, fill, arglen;
> + Py_UCS4 maxchar;
> +
> + if (PyUnicode_READY(str) == -1)
> + return -1;
> +
> + length = PyUnicode_GET_LENGTH(str);
> + if ((precision == -1 || precision >= length)
> + && width <= length)
> + return _PyUnicodeWriter_WriteStr(writer, str);
> +
> + if (precision != -1)
> + length = Py_MIN(precision, length);
> +
> + arglen = Py_MAX(length, width);
> + if (PyUnicode_MAX_CHAR_VALUE(str) > writer->maxchar)
> + maxchar = _PyUnicode_FindMaxChar(str, 0, length);
> + else
> + maxchar = writer->maxchar;
> +
> + if (_PyUnicodeWriter_Prepare(writer, arglen, maxchar) == -1)
> + return -1;
> +
> + if (width > length) {
> + fill = width - length;
> + if (PyUnicode_Fill(writer->buffer, writer->pos, fill, ' ') == -1)
> + return -1;
> + writer->pos += fill;
> + }
> +
> + _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
> + str, 0, length);
> + writer->pos += length;
> + return 0;
> +}
> +
> +static int
> +unicode_fromformat_write_cstr(_PyUnicodeWriter *writer, const char *str,
> + Py_ssize_t width, Py_ssize_t precision)
> +{
> + /* UTF-8 */
> + Py_ssize_t length;
> + PyObject *unicode;
> + int res;
> +
> + length = strlen(str);
> + if (precision != -1)
> + length = Py_MIN(length, precision);
> + unicode = PyUnicode_DecodeUTF8Stateful(str, length, "replace", NULL);
> + if (unicode == NULL)
> + return -1;
> +
> + res = unicode_fromformat_write_str(writer, unicode, width, -1);
> + Py_DECREF(unicode);
> + return res;
> +}
> +
> +static const char*
> +unicode_fromformat_arg(_PyUnicodeWriter *writer,
> + const char *f, va_list *vargs)
> +{
> + const char *p;
> + Py_ssize_t len;
> + int zeropad;
> + Py_ssize_t width;
> + Py_ssize_t precision;
> + int longflag;
> + int longlongflag;
> + int size_tflag;
> + Py_ssize_t fill;
> +
> + p = f;
> + f++;
> + zeropad = 0;
> + if (*f == '0') {
> + zeropad = 1;
> + f++;
> + }
> +
> + /* parse the width.precision part, e.g. "%2.5s" => width=2, precision=5 */
> + width = -1;
> + if (Py_ISDIGIT((unsigned)*f)) {
> + width = *f - '0';
> + f++;
> + while (Py_ISDIGIT((unsigned)*f)) {
> + if (width > (PY_SSIZE_T_MAX - ((int)*f - '0')) / 10) {
> + PyErr_SetString(PyExc_ValueError,
> + "width too big");
> + return NULL;
> + }
> + width = (width * 10) + (*f - '0');
> + f++;
> + }
> + }
> + precision = -1;
> + if (*f == '.') {
> + f++;
> + if (Py_ISDIGIT((unsigned)*f)) {
> + precision = (*f - '0');
> + f++;
> + while (Py_ISDIGIT((unsigned)*f)) {
> + if (precision > (PY_SSIZE_T_MAX - ((int)*f - '0')) / 10) {
> + PyErr_SetString(PyExc_ValueError,
> + "precision too big");
> + return NULL;
> + }
> + precision = (precision * 10) + (*f - '0');
> + f++;
> + }
> + }
> + if (*f == '%') {
> + /* "%.3%s" => f points to "3" */
> + f--;
> + }
> + }
> + if (*f == '\0') {
> + /* bogus format "%.123" => go backward, f points to "3" */
> + f--;
> + }
> +
> + /* Handle %ld, %lu, %lld and %llu. */
> + longflag = 0;
> + longlongflag = 0;
> + size_tflag = 0;
> + if (*f == 'l') {
> + if (f[1] == 'd' || f[1] == 'u' || f[1] == 'i') {
> + longflag = 1;
> + ++f;
> + }
> + else if (f[1] == 'l' &&
> + (f[2] == 'd' || f[2] == 'u' || f[2] == 'i')) {
> + longlongflag = 1;
> + f += 2;
> + }
> + }
> + /* handle the size_t flag. */
> + else if (*f == 'z' && (f[1] == 'd' || f[1] == 'u' || f[1] == 'i')) {
> + size_tflag = 1;
> + ++f;
> + }
> +
> + if (f[1] == '\0')
> + writer->overallocate = 0;
> +
> + switch (*f) {
> + case 'c':
> + {
> + int ordinal = va_arg(*vargs, int);
> + if (ordinal < 0 || ordinal > MAX_UNICODE) {
> + PyErr_SetString(PyExc_OverflowError,
> + "character argument not in range(0x110000)");
> + return NULL;
> + }
> + if (_PyUnicodeWriter_WriteCharInline(writer, ordinal) < 0)
> + return NULL;
> + break;
> + }
> +
> + case 'i':
> + case 'd':
> + case 'u':
> + case 'x':
> + {
> + /* used by sprintf */
> + char buffer[MAX_LONG_LONG_CHARS];
> + Py_ssize_t arglen;
> +
> + if (*f == 'u') {
> + if (longflag)
> + len = sprintf(buffer, "%lu",
> + va_arg(*vargs, unsigned long));
> + else if (longlongflag)
> + len = sprintf(buffer, "%llu",
> + va_arg(*vargs, unsigned long long));
> + else if (size_tflag)
> + len = sprintf(buffer, "%" PY_FORMAT_SIZE_T "u",
> + va_arg(*vargs, size_t));
> + else
> + len = sprintf(buffer, "%u",
> + va_arg(*vargs, unsigned int));
> + }
> + else if (*f == 'x') {
> + len = sprintf(buffer, "%x", va_arg(*vargs, int));
> + }
> + else {
> + if (longflag)
> + len = sprintf(buffer, "%li",
> + va_arg(*vargs, long));
> + else if (longlongflag)
> + len = sprintf(buffer, "%lli",
> + va_arg(*vargs, long long));
> + else if (size_tflag)
> + len = sprintf(buffer, "%" PY_FORMAT_SIZE_T "i",
> + va_arg(*vargs, Py_ssize_t));
> + else
> + len = sprintf(buffer, "%i",
> + va_arg(*vargs, int));
> + }
> + assert(len >= 0);
> +
> + if (precision < len)
> + precision = len;
> +
> + arglen = Py_MAX(precision, width);
> + if (_PyUnicodeWriter_Prepare(writer, arglen, 127) == -1)
> + return NULL;
> +
> + if (width > precision) {
> + Py_UCS4 fillchar;
> + fill = width - precision;
> + fillchar = zeropad?'0':' ';
> + if (PyUnicode_Fill(writer->buffer, writer->pos, fill, fillchar) == -1)
> + return NULL;
> + writer->pos += fill;
> + }
> + if (precision > len) {
> + fill = precision - len;
> + if (PyUnicode_Fill(writer->buffer, writer->pos, fill, '0') == -1)
> + return NULL;
> + writer->pos += fill;
> + }
> +
> + if (_PyUnicodeWriter_WriteASCIIString(writer, buffer, len) < 0)
> + return NULL;
> + break;
> + }
> +
> + case 'p':
> + {
> + char number[MAX_LONG_LONG_CHARS];
> +
> + len = sprintf(number, "%p", va_arg(*vargs, void*));
> + assert(len >= 0);
> +
> + /* %p is ill-defined: ensure leading 0x. */
> + if (number[1] == 'X')
> + number[1] = 'x';
> + else if (number[1] != 'x') {
> + memmove(number + 2, number,
> + strlen(number) + 1);
> + number[0] = '0';
> + number[1] = 'x';
> + len += 2;
> + }
> +
> + if (_PyUnicodeWriter_WriteASCIIString(writer, number, len) < 0)
> + return NULL;
> + break;
> + }
> +
> + case 's':
> + {
> + /* UTF-8 */
> + const char *s = va_arg(*vargs, const char*);
> + if (unicode_fromformat_write_cstr(writer, s, width, precision) < 0)
> + return NULL;
> + break;
> + }
> +
> + case 'U':
> + {
> + PyObject *obj = va_arg(*vargs, PyObject *);
> + assert(obj && _PyUnicode_CHECK(obj));
> +
> + if (unicode_fromformat_write_str(writer, obj, width, precision) == -1)
> + return NULL;
> + break;
> + }
> +
> + case 'V':
> + {
> + PyObject *obj = va_arg(*vargs, PyObject *);
> + const char *str = va_arg(*vargs, const char *);
> + if (obj) {
> + assert(_PyUnicode_CHECK(obj));
> + if (unicode_fromformat_write_str(writer, obj, width, precision) == -1)
> + return NULL;
> + }
> + else {
> + assert(str != NULL);
> + if (unicode_fromformat_write_cstr(writer, str, width, precision) < 0)
> + return NULL;
> + }
> + break;
> + }
> +
> + case 'S':
> + {
> + PyObject *obj = va_arg(*vargs, PyObject *);
> + PyObject *str;
> + assert(obj);
> + str = PyObject_Str(obj);
> + if (!str)
> + return NULL;
> + if (unicode_fromformat_write_str(writer, str, width, precision) == -1) {
> + Py_DECREF(str);
> + return NULL;
> + }
> + Py_DECREF(str);
> + break;
> + }
> +
> + case 'R':
> + {
> + PyObject *obj = va_arg(*vargs, PyObject *);
> + PyObject *repr;
> + assert(obj);
> + repr = PyObject_Repr(obj);
> + if (!repr)
> + return NULL;
> + if (unicode_fromformat_write_str(writer, repr, width, precision) == -1) {
> + Py_DECREF(repr);
> + return NULL;
> + }
> + Py_DECREF(repr);
> + break;
> + }
> +
> + case 'A':
> + {
> + PyObject *obj = va_arg(*vargs, PyObject *);
> + PyObject *ascii;
> + assert(obj);
> + ascii = PyObject_ASCII(obj);
> + if (!ascii)
> + return NULL;
> + if (unicode_fromformat_write_str(writer, ascii, width, precision) == -1) {
> + Py_DECREF(ascii);
> + return NULL;
> + }
> + Py_DECREF(ascii);
> + break;
> + }
> +
> + case '%':
> + if (_PyUnicodeWriter_WriteCharInline(writer, '%') < 0)
> + return NULL;
> + break;
> +
> + default:
> + /* if we stumble upon an unknown formatting code, copy the rest
> + of the format string to the output string. (we cannot just
> + skip the code, since there's no way to know what's in the
> + argument list) */
> + len = strlen(p);
> + if (_PyUnicodeWriter_WriteLatin1String(writer, p, len) == -1)
> + return NULL;
> + f = p+len;
> + return f;
> + }
> +
> + f++;
> + return f;
> +}
> +
> +PyObject *
> +PyUnicode_FromFormatV(const char *format, va_list vargs)
> +{
> + va_list vargs2;
> + const char *f;
> + _PyUnicodeWriter writer;
> +
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = strlen(format) + 100;
> + writer.overallocate = 1;
> +
> + // Copy varargs to be able to pass a reference to a subfunction.
> + va_copy(vargs2, vargs);
> +
> + for (f = format; *f; ) {
> + if (*f == '%') {
> + f = unicode_fromformat_arg(&writer, f, &vargs2);
> + if (f == NULL)
> + goto fail;
> + }
> + else {
> + const char *p;
> + Py_ssize_t len;
> +
> + p = f;
> + do
> + {
> + if ((unsigned char)*p > 127) {
> + PyErr_Format(PyExc_ValueError,
> + "PyUnicode_FromFormatV() expects an ASCII-encoded format "
> + "string, got a non-ASCII byte: 0x%02x",
> + (unsigned char)*p);
> + goto fail;
> + }
> + p++;
> + }
> + while (*p != '\0' && *p != '%');
> + len = p - f;
> +
> + if (*p == '\0')
> + writer.overallocate = 0;
> +
> + if (_PyUnicodeWriter_WriteASCIIString(&writer, f, len) < 0)
> + goto fail;
> +
> + f = p;
> + }
> + }
> + va_end(vargs2);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + fail:
> + va_end(vargs2);
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_FromFormat(const char *format, ...)
> +{
> + PyObject* ret;
> + va_list vargs;
> +
> +#ifdef HAVE_STDARG_PROTOTYPES
> + va_start(vargs, format);
> +#else
> + va_start(vargs);
> +#endif
> + ret = PyUnicode_FromFormatV(format, vargs);
> + va_end(vargs);
> + return ret;
> +}
> +
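For reviewers cross-checking the format parser above against its callers: %U consumes
a PyObject* that must be a str, %zd a Py_ssize_t, %s a UTF-8 encoded C string, and an
unknown code makes the parser copy the rest of the format literally, as the default
branch in unicode_fromformat_arg() shows. A sketch of a typical call site;
format_error() and its arguments are hypothetical, not part of this patch:

    static PyObject *
    format_error(PyObject *fname, Py_ssize_t lineno)
    {
        return PyUnicode_FromFormat("file %U, line %zd: %s",
                                    fname, lineno, "unexpected token");
    }
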
> +#ifdef HAVE_WCHAR_H
> +
> +/* Helper function for PyUnicode_AsWideChar() and PyUnicode_AsWideCharString():
> + convert a Unicode object to a wide character string.
> +
> + - If w is NULL: return the number of wide characters (including the null
> + character) required to convert the unicode object. Ignore size argument.
> +
> + - Otherwise: return the number of wide characters (excluding the null
> + character) written into w. Write at most size wide characters (including
> + the null character). */
> +static Py_ssize_t
> +unicode_aswidechar(PyObject *unicode,
> + wchar_t *w,
> + Py_ssize_t size)
> +{
> + Py_ssize_t res;
> + const wchar_t *wstr;
> +
> + wstr = PyUnicode_AsUnicodeAndSize(unicode, &res);
> + if (wstr == NULL)
> + return -1;
> +
> + if (w != NULL) {
> + if (size > res)
> + size = res + 1;
> + else
> + res = size;
> + memcpy(w, wstr, size * sizeof(wchar_t));
> + return res;
> + }
> + else
> + return res + 1;
> +}
> +
> +Py_ssize_t
> +PyUnicode_AsWideChar(PyObject *unicode,
> + wchar_t *w,
> + Py_ssize_t size)
> +{
> + if (unicode == NULL) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + return unicode_aswidechar(unicode, w, size);
> +}
> +
> +wchar_t*
> +PyUnicode_AsWideCharString(PyObject *unicode,
> + Py_ssize_t *size)
> +{
> + wchar_t* buffer;
> + Py_ssize_t buflen;
> +
> + if (unicode == NULL) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + buflen = unicode_aswidechar(unicode, NULL, 0);
> + if (buflen == -1)
> + return NULL;
> + buffer = PyMem_NEW(wchar_t, buflen);
> + if (buffer == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + buflen = unicode_aswidechar(unicode, buffer, buflen);
> + if (buflen == -1) {
> + PyMem_FREE(buffer);
> + return NULL;
> + }
> + if (size != NULL)
> + *size = buflen;
> + return buffer;
> +}
> +
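This one matters for the UEFI port, since shell and file-system protocols take CHAR16
strings and the MSVC toolchains used for this series have a 2-byte wchar_t. A sketch
of the ownership rules only; as_wide() is a hypothetical helper, not part of this
patch:

    static int
    as_wide(PyObject *path)
    {
        Py_ssize_t wlen;
        wchar_t *w = PyUnicode_AsWideCharString(path, &wlen);
        if (w == NULL)
            return -1;              /* error already set */
        /* ... hand w to a wide-character API ... */
        PyMem_Free(w);              /* the caller owns the returned buffer */
        return 0;
    }
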
> +wchar_t*
> +_PyUnicode_AsWideCharString(PyObject *unicode)
> +{
> + const wchar_t *wstr;
> + wchar_t *buffer;
> + Py_ssize_t buflen;
> +
> + if (unicode == NULL) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + wstr = PyUnicode_AsUnicodeAndSize(unicode, &buflen);
> + if (wstr == NULL) {
> + return NULL;
> + }
> + if (wcslen(wstr) != (size_t)buflen) {
> + PyErr_SetString(PyExc_ValueError,
> + "embedded null character");
> + return NULL;
> + }
> +
> + buffer = PyMem_NEW(wchar_t, buflen + 1);
> + if (buffer == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + memcpy(buffer, wstr, (buflen + 1) * sizeof(wchar_t));
> + return buffer;
> +}
> +
> +#endif /* HAVE_WCHAR_H */
> +
> +PyObject *
> +PyUnicode_FromOrdinal(int ordinal)
> +{
> + if (ordinal < 0 || ordinal > MAX_UNICODE) {
> + PyErr_SetString(PyExc_ValueError,
> + "chr() arg not in range(0x110000)");
> + return NULL;
> + }
> +
> + return unicode_char((Py_UCS4)ordinal);
> +}
> +
> +PyObject *
> +PyUnicode_FromObject(PyObject *obj)
> +{
> + /* XXX Perhaps we should make this API an alias of
> + PyObject_Str() instead ?! */
> + if (PyUnicode_CheckExact(obj)) {
> + if (PyUnicode_READY(obj) == -1)
> + return NULL;
> + Py_INCREF(obj);
> + return obj;
> + }
> + if (PyUnicode_Check(obj)) {
> + /* For a Unicode subtype that's not a Unicode object,
> + return a true Unicode object with the same data. */
> + return _PyUnicode_Copy(obj);
> + }
> + PyErr_Format(PyExc_TypeError,
> + "Can't convert '%.100s' object to str implicitly",
> + Py_TYPE(obj)->tp_name);
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_FromEncodedObject(PyObject *obj,
> + const char *encoding,
> + const char *errors)
> +{
> + Py_buffer buffer;
> + PyObject *v;
> +
> + if (obj == NULL) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + /* Decoding bytes objects is the most common case and should be fast */
> + if (PyBytes_Check(obj)) {
> + if (PyBytes_GET_SIZE(obj) == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + v = PyUnicode_Decode(
> + PyBytes_AS_STRING(obj), PyBytes_GET_SIZE(obj),
> + encoding, errors);
> + return v;
> + }
> +
> + if (PyUnicode_Check(obj)) {
> + PyErr_SetString(PyExc_TypeError,
> + "decoding str is not supported");
> + return NULL;
> + }
> +
> + /* Retrieve a bytes buffer view through the PEP 3118 buffer interface */
> + if (PyObject_GetBuffer(obj, &buffer, PyBUF_SIMPLE) < 0) {
> + PyErr_Format(PyExc_TypeError,
> + "decoding to str: need a bytes-like object, %.80s found",
> + Py_TYPE(obj)->tp_name);
> + return NULL;
> + }
> +
> + if (buffer.len == 0) {
> + PyBuffer_Release(&buffer);
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + v = PyUnicode_Decode((char*) buffer.buf, buffer.len, encoding, errors);
> + PyBuffer_Release(&buffer);
> + return v;
> +}
> +
> +/* Normalize an encoding name: similar to encodings.normalize_encoding(), but
> + also convert to lowercase. Return 1 on success, or 0 on error (encoding is
> + longer than lower_len-1). */
> +int
> +_Py_normalize_encoding(const char *encoding,
> + char *lower,
> + size_t lower_len)
> +{
> + const char *e;
> + char *l;
> + char *l_end;
> + int punct;
> +
> + assert(encoding != NULL);
> +
> + e = encoding;
> + l = lower;
> + l_end = &lower[lower_len - 1];
> + punct = 0;
> + while (1) {
> + char c = *e;
> + if (c == 0) {
> + break;
> + }
> +
> + if (Py_ISALNUM(c) || c == '.') {
> + if (punct && l != lower) {
> + if (l == l_end) {
> + return 0;
> + }
> + *l++ = '_';
> + }
> + punct = 0;
> +
> + if (l == l_end) {
> + return 0;
> + }
> + *l++ = Py_TOLOWER(c);
> + }
> + else {
> + punct = 1;
> + }
> +
> + e++;
> + }
> + *l = '\0';
> + return 1;
> +}
> +
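To make the fast paths below concrete: as far as I can tell this turns "UTF-8" into
"utf_8", "ISO-8859-1" into "iso_8859_1" and "us ascii" into "us_ascii" (lowercased,
with punctuation runs between alphanumerics collapsed to a single '_'), which is why
the strcmp() shortcuts in PyUnicode_Decode() match the common spellings without a
trip through the codec registry.
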
> +PyObject *
> +PyUnicode_Decode(const char *s,
> + Py_ssize_t size,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *buffer = NULL, *unicode;
> + Py_buffer info;
> + char buflower[11]; /* strlen("iso-8859-1\0") == 11, longest shortcut */
> +
> + if (encoding == NULL) {
> + return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
> + }
> +
> + /* Shortcuts for common default encodings */
> + if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
> + char *lower = buflower;
> +
> + /* Fast paths */
> + if (lower[0] == 'u' && lower[1] == 't' && lower[2] == 'f') {
> + lower += 3;
> + if (*lower == '_') {
> + /* Match "utf8" and "utf_8" */
> + lower++;
> + }
> +
> + if (lower[0] == '8' && lower[1] == 0) {
> + return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
> + }
> + else if (lower[0] == '1' && lower[1] == '6' && lower[2] == 0) {
> + return PyUnicode_DecodeUTF16(s, size, errors, 0);
> + }
> + else if (lower[0] == '3' && lower[1] == '2' && lower[2] == 0) {
> + return PyUnicode_DecodeUTF32(s, size, errors, 0);
> + }
> + }
> + else {
> + if (strcmp(lower, "ascii") == 0
> + || strcmp(lower, "us_ascii") == 0) {
> + return PyUnicode_DecodeASCII(s, size, errors);
> + }
> + #ifdef MS_WINDOWS
> + else if (strcmp(lower, "mbcs") == 0) {
> + return PyUnicode_DecodeMBCS(s, size, errors);
> + }
> + #endif
> + else if (strcmp(lower, "latin1") == 0
> + || strcmp(lower, "latin_1") == 0
> + || strcmp(lower, "iso_8859_1") == 0
> + || strcmp(lower, "iso8859_1") == 0) {
> + return PyUnicode_DecodeLatin1(s, size, errors);
> + }
> + }
> + }
> +
> + /* Decode via the codec registry */
> + buffer = NULL;
> + if (PyBuffer_FillInfo(&info, NULL, (void *)s, size, 1, PyBUF_FULL_RO) < 0)
> + goto onError;
> + buffer = PyMemoryView_FromBuffer(&info);
> + if (buffer == NULL)
> + goto onError;
> + unicode = _PyCodec_DecodeText(buffer, encoding, errors);
> + if (unicode == NULL)
> + goto onError;
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_Format(PyExc_TypeError,
> + "'%.400s' decoder returned '%.400s' instead of 'str'; "
> + "use codecs.decode() to decode to arbitrary types",
> + encoding,
> + Py_TYPE(unicode)->tp_name);
> + Py_DECREF(unicode);
> + goto onError;
> + }
> + Py_DECREF(buffer);
> + return unicode_result(unicode);
> +
> + onError:
> + Py_XDECREF(buffer);
> + return NULL;
> +}
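> +/* Usage sketch (with hypothetical `buf`/`len` holding the encoded bytes):
> + PyObject *s = PyUnicode_Decode(buf, len, "utf-8", "strict");
> + takes the UTF-8 fast path above; encodings without a shortcut go through
> + the codec registry. */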
> +
> +PyObject *
> +PyUnicode_AsDecodedObject(PyObject *unicode,
> + const char *encoding,
> + const char *errors)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> +
> + if (PyErr_WarnEx(PyExc_DeprecationWarning,
> + "PyUnicode_AsDecodedObject() is deprecated; "
> + "use PyCodec_Decode() to decode from str", 1) < 0)
> + return NULL;
> +
> + if (encoding == NULL)
> + encoding = PyUnicode_GetDefaultEncoding();
> +
> + /* Decode via the codec registry */
> + return PyCodec_Decode(unicode, encoding, errors);
> +}
> +
> +PyObject *
> +PyUnicode_AsDecodedUnicode(PyObject *unicode,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *v;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + goto onError;
> + }
> +
> + if (PyErr_WarnEx(PyExc_DeprecationWarning,
> + "PyUnicode_AsDecodedUnicode() is deprecated; "
> + "use PyCodec_Decode() to decode from str to str", 1) < 0)
> + return NULL;
> +
> + if (encoding == NULL)
> + encoding = PyUnicode_GetDefaultEncoding();
> +
> + /* Decode via the codec registry */
> + v = PyCodec_Decode(unicode, encoding, errors);
> + if (v == NULL)
> + goto onError;
> + if (!PyUnicode_Check(v)) {
> + PyErr_Format(PyExc_TypeError,
> + "'%.400s' decoder returned '%.400s' instead of 'str'; "
> + "use codecs.decode() to decode to arbitrary types",
> + encoding,
> + Py_TYPE(unicode)->tp_name);
> + Py_DECREF(v);
> + goto onError;
> + }
> + return unicode_result(v);
> +
> + onError:
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_Encode(const Py_UNICODE *s,
> + Py_ssize_t size,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *v, *unicode;
> +
> + unicode = PyUnicode_FromUnicode(s, size);
> + if (unicode == NULL)
> + return NULL;
> + v = PyUnicode_AsEncodedString(unicode, encoding, errors);
> + Py_DECREF(unicode);
> + return v;
> +}
> +
> +PyObject *
> +PyUnicode_AsEncodedObject(PyObject *unicode,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *v;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + goto onError;
> + }
> +
> + if (PyErr_WarnEx(PyExc_DeprecationWarning,
> + "PyUnicode_AsEncodedObject() is deprecated; "
> + "use PyUnicode_AsEncodedString() to encode from str to bytes "
> + "or PyCodec_Encode() for generic encoding", 1) < 0)
> + return NULL;
> +
> + if (encoding == NULL)
> + encoding = PyUnicode_GetDefaultEncoding();
> +
> + /* Encode via the codec registry */
> + v = PyCodec_Encode(unicode, encoding, errors);
> + if (v == NULL)
> + goto onError;
> + return v;
> +
> + onError:
> + return NULL;
> +}
> +
> +static size_t
> +wcstombs_errorpos(const wchar_t *wstr)
> +{
> + size_t len;
> +#if SIZEOF_WCHAR_T == 2
> + wchar_t buf[3];
> +#else
> + wchar_t buf[2];
> +#endif
> + char outbuf[MB_LEN_MAX];
> + const wchar_t *start, *previous;
> +
> +#if SIZEOF_WCHAR_T == 2
> + buf[2] = 0;
> +#else
> + buf[1] = 0;
> +#endif
> + start = wstr;
> + while (*wstr != L'\0')
> + {
> + previous = wstr;
> +#if SIZEOF_WCHAR_T == 2
> + if (Py_UNICODE_IS_HIGH_SURROGATE(wstr[0])
> + && Py_UNICODE_IS_LOW_SURROGATE(wstr[1]))
> + {
> + buf[0] = wstr[0];
> + buf[1] = wstr[1];
> + wstr += 2;
> + }
> + else {
> + buf[0] = *wstr;
> + buf[1] = 0;
> + wstr++;
> + }
> +#else
> + buf[0] = *wstr;
> + wstr++;
> +#endif
> + len = wcstombs(outbuf, buf, sizeof(outbuf));
> + if (len == (size_t)-1)
> + return previous - start;
> + }
> +
> + /* failed to find the unencodable character */
> + return 0;
> +}
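> +/* The helper above re-runs wcstombs() one character (or surrogate pair, on
> + 16-bit wchar_t builds) at a time, purely to locate the first unencodable
> + wide character for error reporting. */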
> +
> +static int
> +locale_error_handler(const char *errors, int *surrogateescape)
> +{
> + _Py_error_handler error_handler = get_error_handler(errors);
> + switch (error_handler)
> + {
> + case _Py_ERROR_STRICT:
> + *surrogateescape = 0;
> + return 0;
> + case _Py_ERROR_SURROGATEESCAPE:
> + *surrogateescape = 1;
> + return 0;
> + default:
> + PyErr_Format(PyExc_ValueError,
> + "only 'strict' and 'surrogateescape' error handlers "
> + "are supported, not '%s'",
> + errors);
> + return -1;
> + }
> +}
> +
> +static PyObject *
> +unicode_encode_locale(PyObject *unicode, const char *errors,
> + int current_locale)
> +{
> + Py_ssize_t wlen, wlen2;
> + wchar_t *wstr;
> + PyObject *bytes = NULL;
> + char *errmsg;
> + PyObject *reason = NULL;
> + PyObject *exc;
> + size_t error_pos;
> + int surrogateescape;
> +
> + if (locale_error_handler(errors, &surrogateescape) < 0)
> + return NULL;
> +
> + wstr = PyUnicode_AsWideCharString(unicode, &wlen);
> + if (wstr == NULL)
> + return NULL;
> +
> + wlen2 = wcslen(wstr);
> + if (wlen2 != wlen) {
> + PyMem_Free(wstr);
> + PyErr_SetString(PyExc_ValueError, "embedded null character");
> + return NULL;
> + }
> +
> + if (surrogateescape) {
> + /* "surrogateescape" error handler */
> + char *str;
> +
> + str = _Py_EncodeLocaleEx(wstr, &error_pos, current_locale);
> + if (str == NULL) {
> + if (error_pos == (size_t)-1) {
> + PyErr_NoMemory();
> + PyMem_Free(wstr);
> + return NULL;
> + }
> + else {
> + goto encode_error;
> + }
> + }
> + PyMem_Free(wstr);
> +
> + bytes = PyBytes_FromString(str);
> + PyMem_Free(str);
> + }
> + else {
> + /* strict mode */
> + size_t len, len2;
> +
> + len = wcstombs(NULL, wstr, 0);
> + if (len == (size_t)-1) {
> + error_pos = (size_t)-1;
> + goto encode_error;
> + }
> +
> + bytes = PyBytes_FromStringAndSize(NULL, len);
> + if (bytes == NULL) {
> + PyMem_Free(wstr);
> + return NULL;
> + }
> +
> + len2 = wcstombs(PyBytes_AS_STRING(bytes), wstr, len+1);
> + if (len2 == (size_t)-1 || len2 > len) {
> + error_pos = (size_t)-1;
> + goto encode_error;
> + }
> + PyMem_Free(wstr);
> + }
> + return bytes;
> +
> +encode_error:
> + errmsg = strerror(errno);
> + assert(errmsg != NULL);
> +
> + if (error_pos == (size_t)-1)
> + error_pos = wcstombs_errorpos(wstr);
> +
> + PyMem_Free(wstr);
> + Py_XDECREF(bytes);
> +
> + if (errmsg != NULL) {
> + size_t errlen;
> + wstr = Py_DecodeLocale(errmsg, &errlen);
> + if (wstr != NULL) {
> + reason = PyUnicode_FromWideChar(wstr, errlen);
> + PyMem_RawFree(wstr);
> + } else
> + errmsg = NULL;
> + }
> + if (errmsg == NULL)
> + reason = PyUnicode_FromString(
> + "wcstombs() encountered an unencodable "
> + "wide character");
> + if (reason == NULL)
> + return NULL;
> +
> + exc = PyObject_CallFunction(PyExc_UnicodeEncodeError, "sOnnO",
> + "locale", unicode,
> + (Py_ssize_t)error_pos,
> + (Py_ssize_t)(error_pos+1),
> + reason);
> + Py_DECREF(reason);
> + if (exc != NULL) {
> + PyCodec_StrictErrors(exc);
> + Py_XDECREF(exc);
> + }
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeLocale(PyObject *unicode, const char *errors)
> +{
> + return unicode_encode_locale(unicode, errors, 1);
> +}
> +
> +PyObject *
> +PyUnicode_EncodeFSDefault(PyObject *unicode)
> +{
> +#if defined(__APPLE__)
> + return _PyUnicode_AsUTF8String(unicode, Py_FileSystemDefaultEncodeErrors);
> +#else
> + PyInterpreterState *interp = PyThreadState_GET()->interp;
> + /* Bootstrap check: if the filesystem codec is implemented in Python, we
> + cannot use it to encode and decode filenames before it is loaded;
> + loading the Python codec itself requires encoding at least its own
> + filename. Use the C version of the locale codec until the codec
> + registry is initialized and the Python codec is loaded.
> +
> + Py_FileSystemDefaultEncoding is shared between all interpreters, so we
> + cannot rely on it alone: also check interp->fscodec_initialized for
> + subinterpreters. */
> + if (Py_FileSystemDefaultEncoding && interp->fscodec_initialized) {
> + return PyUnicode_AsEncodedString(unicode,
> + Py_FileSystemDefaultEncoding,
> + Py_FileSystemDefaultEncodeErrors);
> + }
> + else {
> + return unicode_encode_locale(unicode,
> + Py_FileSystemDefaultEncodeErrors, 0);
> + }
> +#endif
> +}
> +
> +PyObject *
> +PyUnicode_AsEncodedString(PyObject *unicode,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *v;
> + char buflower[11]; /* strlen("iso_8859_1") + 1 == 11, longest shortcut */
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> +
> + if (encoding == NULL) {
> + return _PyUnicode_AsUTF8String(unicode, errors);
> + }
> +
> + /* Shortcuts for common default encodings */
> + if (_Py_normalize_encoding(encoding, buflower, sizeof(buflower))) {
> + char *lower = buflower;
> +
> + /* Fast paths */
> + if (lower[0] == 'u' && lower[1] == 't' && lower[2] == 'f') {
> + lower += 3;
> + if (*lower == '_') {
> + /* Match "utf8" and "utf_8" */
> + lower++;
> + }
> +
> + if (lower[0] == '8' && lower[1] == 0) {
> + return _PyUnicode_AsUTF8String(unicode, errors);
> + }
> + else if (lower[0] == '1' && lower[1] == '6' && lower[2] == 0) {
> + return _PyUnicode_EncodeUTF16(unicode, errors, 0);
> + }
> + else if (lower[0] == '3' && lower[1] == '2' && lower[2] == 0) {
> + return _PyUnicode_EncodeUTF32(unicode, errors, 0);
> + }
> + }
> + else {
> + if (strcmp(lower, "ascii") == 0
> + || strcmp(lower, "us_ascii") == 0) {
> + return _PyUnicode_AsASCIIString(unicode, errors);
> + }
> +#ifdef MS_WINDOWS
> + else if (strcmp(lower, "mbcs") == 0) {
> + return PyUnicode_EncodeCodePage(CP_ACP, unicode, errors);
> + }
> +#endif
> + else if (strcmp(lower, "latin1") == 0 ||
> + strcmp(lower, "latin_1") == 0 ||
> + strcmp(lower, "iso_8859_1") == 0 ||
> + strcmp(lower, "iso8859_1") == 0) {
> + return _PyUnicode_AsLatin1String(unicode, errors);
> + }
> + }
> + }
> +
> + /* Encode via the codec registry */
> + v = _PyCodec_EncodeText(unicode, encoding, errors);
> + if (v == NULL)
> + return NULL;
> +
> + /* The normal path */
> + if (PyBytes_Check(v))
> + return v;
> +
> + /* If the codec returns a buffer, raise a warning and convert to bytes */
> + if (PyByteArray_Check(v)) {
> + int error;
> + PyObject *b;
> +
> + error = PyErr_WarnFormat(PyExc_RuntimeWarning, 1,
> + "encoder %s returned bytearray instead of bytes; "
> + "use codecs.encode() to encode to arbitrary types",
> + encoding);
> + if (error) {
> + Py_DECREF(v);
> + return NULL;
> + }
> +
> + b = PyBytes_FromStringAndSize(PyByteArray_AS_STRING(v), Py_SIZE(v));
> + Py_DECREF(v);
> + return b;
> + }
> +
> + PyErr_Format(PyExc_TypeError,
> + "'%.400s' encoder returned '%.400s' instead of 'bytes'; "
> + "use codecs.encode() to encode to arbitrary types",
> + encoding,
> + Py_TYPE(v)->tp_name);
> + Py_DECREF(v);
> + return NULL;
> +}
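> +/* Usage sketch (with a hypothetical str object `str_obj`):
> + PyObject *b = PyUnicode_AsEncodedString(str_obj, "ascii", "replace");
> + returns a new bytes object via the ASCII fast path above. */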
> +
> +PyObject *
> +PyUnicode_AsEncodedUnicode(PyObject *unicode,
> + const char *encoding,
> + const char *errors)
> +{
> + PyObject *v;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + goto onError;
> + }
> +
> + if (PyErr_WarnEx(PyExc_DeprecationWarning,
> + "PyUnicode_AsEncodedUnicode() is deprecated; "
> + "use PyCodec_Encode() to encode from str to str", 1) < 0)
> + return NULL;
> +
> + if (encoding == NULL)
> + encoding = PyUnicode_GetDefaultEncoding();
> +
> + /* Encode via the codec registry */
> + v = PyCodec_Encode(unicode, encoding, errors);
> + if (v == NULL)
> + goto onError;
> + if (!PyUnicode_Check(v)) {
> + PyErr_Format(PyExc_TypeError,
> + "'%.400s' encoder returned '%.400s' instead of 'str'; "
> + "use codecs.encode() to encode to arbitrary types",
> + encoding,
> + Py_TYPE(v)->tp_name);
> + Py_DECREF(v);
> + goto onError;
> + }
> + return v;
> +
> + onError:
> + return NULL;
> +}
> +
> +static size_t
> +mbstowcs_errorpos(const char *str, size_t len)
> +{
> +#ifdef HAVE_MBRTOWC
> + const char *start = str;
> + mbstate_t mbs;
> + size_t converted;
> + wchar_t ch;
> +
> + memset(&mbs, 0, sizeof mbs);
> + while (len)
> + {
> + converted = mbrtowc(&ch, str, len, &mbs);
> + if (converted == 0)
> + /* Reached end of string */
> + break;
> + if (converted == (size_t)-1 || converted == (size_t)-2) {
> + /* Conversion error or incomplete character */
> + return str - start;
> + }
> + else {
> + str += converted;
> + len -= converted;
> + }
> + }
> + /* failed to find the undecodable byte sequence */
> + return 0;
> +#endif
> + return 0;
> +}
> +
> +static PyObject*
> +unicode_decode_locale(const char *str, Py_ssize_t len,
> + const char *errors, int current_locale)
> +{
> + wchar_t smallbuf[256];
> + size_t smallbuf_len = Py_ARRAY_LENGTH(smallbuf);
> + wchar_t *wstr;
> + size_t wlen, wlen2;
> + PyObject *unicode;
> + int surrogateescape;
> + size_t error_pos;
> + char *errmsg;
> + PyObject *reason = NULL; /* initialize to prevent gcc warning */
> + PyObject *exc;
> +
> + if (locale_error_handler(errors, &surrogateescape) < 0)
> + return NULL;
> +
> + if (str[len] != '\0' || (size_t)len != strlen(str)) {
> + PyErr_SetString(PyExc_ValueError, "embedded null byte");
> + return NULL;
> + }
> +
> + if (surrogateescape) {
> + /* "surrogateescape" error handler */
> + wstr = _Py_DecodeLocaleEx(str, &wlen, current_locale);
> + if (wstr == NULL) {
> + if (wlen == (size_t)-1)
> + PyErr_NoMemory();
> + else
> + PyErr_SetFromErrno(PyExc_OSError);
> + return NULL;
> + }
> +
> + unicode = PyUnicode_FromWideChar(wstr, wlen);
> + PyMem_RawFree(wstr);
> + }
> + else {
> + /* strict mode */
> +#ifndef HAVE_BROKEN_MBSTOWCS
> + wlen = mbstowcs(NULL, str, 0);
> +#else
> + wlen = len;
> +#endif
> + if (wlen == (size_t)-1)
> + goto decode_error;
> + if (wlen+1 <= smallbuf_len) {
> + wstr = smallbuf;
> + }
> + else {
> + wstr = PyMem_New(wchar_t, wlen+1);
> + if (!wstr)
> + return PyErr_NoMemory();
> + }
> +
> + wlen2 = mbstowcs(wstr, str, wlen+1);
> + if (wlen2 == (size_t)-1) {
> + if (wstr != smallbuf)
> + PyMem_Free(wstr);
> + goto decode_error;
> + }
> +#ifdef HAVE_BROKEN_MBSTOWCS
> + assert(wlen2 == wlen);
> +#endif
> + unicode = PyUnicode_FromWideChar(wstr, wlen2);
> + if (wstr != smallbuf)
> + PyMem_Free(wstr);
> + }
> + return unicode;
> +
> +decode_error:
> + reason = NULL;
> + errmsg = strerror(errno);
> + assert(errmsg != NULL);
> +
> + error_pos = mbstowcs_errorpos(str, len);
> + if (errmsg != NULL) {
> + size_t errlen;
> + wstr = Py_DecodeLocale(errmsg, &errlen);
> + if (wstr != NULL) {
> + reason = PyUnicode_FromWideChar(wstr, errlen);
> + PyMem_RawFree(wstr);
> + }
> + }
> + if (reason == NULL)
> + reason = PyUnicode_FromString(
> + "mbstowcs() encountered an invalid multibyte sequence");
> + if (reason == NULL)
> + return NULL;
> +
> + exc = PyObject_CallFunction(PyExc_UnicodeDecodeError, "sy#nnO",
> + "locale", str, len,
> + (Py_ssize_t)error_pos,
> + (Py_ssize_t)(error_pos+1),
> + reason);
> + Py_DECREF(reason);
> + if (exc != NULL) {
> + PyCodec_StrictErrors(exc);
> + Py_XDECREF(exc);
> + }
> + return NULL;
> +}
> +
> +PyObject*
> +PyUnicode_DecodeLocaleAndSize(const char *str, Py_ssize_t size,
> + const char *errors)
> +{
> + return unicode_decode_locale(str, size, errors, 1);
> +}
> +
> +PyObject*
> +PyUnicode_DecodeLocale(const char *str, const char *errors)
> +{
> + Py_ssize_t size = (Py_ssize_t)strlen(str);
> + return unicode_decode_locale(str, size, errors, 1);
> +}
> +
> +
> +PyObject*
> +PyUnicode_DecodeFSDefault(const char *s) {
> + Py_ssize_t size = (Py_ssize_t)strlen(s);
> + return PyUnicode_DecodeFSDefaultAndSize(s, size);
> +}
> +
> +PyObject*
> +PyUnicode_DecodeFSDefaultAndSize(const char *s, Py_ssize_t size)
> +{
> +#if defined(__APPLE__)
> + return PyUnicode_DecodeUTF8Stateful(s, size, Py_FileSystemDefaultEncodeErrors, NULL);
> +#else
> + PyInterpreterState *interp = PyThreadState_GET()->interp;
> + /* Bootstrap check: if the filesystem codec is implemented in Python, we
> + cannot use it to encode and decode filenames before it is loaded;
> + loading the Python codec itself requires encoding at least its own
> + filename. Use the C version of the locale codec until the codec
> + registry is initialized and the Python codec is loaded.
> +
> + Py_FileSystemDefaultEncoding is shared between all interpreters, so we
> + cannot rely on it alone: also check interp->fscodec_initialized for
> + subinterpreters. */
> + if (Py_FileSystemDefaultEncoding && interp->fscodec_initialized) {
> + return PyUnicode_Decode(s, size,
> + Py_FileSystemDefaultEncoding,
> + Py_FileSystemDefaultEncodeErrors);
> + }
> + else {
> + return unicode_decode_locale(s, size,
> + Py_FileSystemDefaultEncodeErrors, 0);
> + }
> +#endif
> +}
> +
> +
> +int
> +PyUnicode_FSConverter(PyObject* arg, void* addr)
> +{
> + PyObject *path = NULL;
> + PyObject *output = NULL;
> + Py_ssize_t size;
> + void *data;
> + if (arg == NULL) {
> + Py_DECREF(*(PyObject**)addr);
> + *(PyObject**)addr = NULL;
> + return 1;
> + }
> + path = PyOS_FSPath(arg);
> + if (path == NULL) {
> + return 0;
> + }
> + if (PyBytes_Check(path)) {
> + output = path;
> + }
> + else { // PyOS_FSPath() guarantees its returned value is bytes or str.
> + output = PyUnicode_EncodeFSDefault(path);
> + Py_DECREF(path);
> + if (!output) {
> + return 0;
> + }
> + assert(PyBytes_Check(output));
> + }
> +
> + size = PyBytes_GET_SIZE(output);
> + data = PyBytes_AS_STRING(output);
> + if ((size_t)size != strlen(data)) {
> + PyErr_SetString(PyExc_ValueError, "embedded null byte");
> + Py_DECREF(output);
> + return 0;
> + }
> + *(PyObject**)addr = output;
> + return Py_CLEANUP_SUPPORTED;
> +}
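> +/* Note: PyUnicode_FSConverter() is designed to be used as an "O&" converter
> + with PyArg_ParseTuple(); returning Py_CLEANUP_SUPPORTED means the argument
> + parser may call it again with arg == NULL (handled above) to release the
> + converted bytes object if a later argument fails to convert. */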
> +
> +
> +int
> +PyUnicode_FSDecoder(PyObject* arg, void* addr)
> +{
> + int is_buffer = 0;
> + PyObject *path = NULL;
> + PyObject *output = NULL;
> + if (arg == NULL) {
> + Py_DECREF(*(PyObject**)addr);
> + *(PyObject**)addr = NULL;
> + return 1;
> + }
> +
> + is_buffer = PyObject_CheckBuffer(arg);
> + if (!is_buffer) {
> + path = PyOS_FSPath(arg);
> + if (path == NULL) {
> + return 0;
> + }
> + }
> + else {
> + path = arg;
> + Py_INCREF(arg);
> + }
> +
> + if (PyUnicode_Check(path)) {
> + if (PyUnicode_READY(path) == -1) {
> + Py_DECREF(path);
> + return 0;
> + }
> + output = path;
> + }
> + else if (PyBytes_Check(path) || is_buffer) {
> + PyObject *path_bytes = NULL;
> +
> + if (!PyBytes_Check(path) &&
> + PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
> + "path should be string, bytes, or os.PathLike, not %.200s",
> + Py_TYPE(arg)->tp_name)) {
> + Py_DECREF(path);
> + return 0;
> + }
> + path_bytes = PyBytes_FromObject(path);
> + Py_DECREF(path);
> + if (!path_bytes) {
> + return 0;
> + }
> + output = PyUnicode_DecodeFSDefaultAndSize(PyBytes_AS_STRING(path_bytes),
> + PyBytes_GET_SIZE(path_bytes));
> + Py_DECREF(path_bytes);
> + if (!output) {
> + return 0;
> + }
> + }
> + else {
> + PyErr_Format(PyExc_TypeError,
> + "path should be string, bytes, or os.PathLike, not %.200s",
> + Py_TYPE(arg)->tp_name);
> + Py_DECREF(path);
> + return 0;
> + }
> + if (PyUnicode_READY(output) == -1) {
> + Py_DECREF(output);
> + return 0;
> + }
> + if (findchar(PyUnicode_DATA(output), PyUnicode_KIND(output),
> + PyUnicode_GET_LENGTH(output), 0, 1) >= 0) {
> + PyErr_SetString(PyExc_ValueError, "embedded null character");
> + Py_DECREF(output);
> + return 0;
> + }
> + *(PyObject**)addr = output;
> + return Py_CLEANUP_SUPPORTED;
> +}
> +
> +
> +char*
> +PyUnicode_AsUTF8AndSize(PyObject *unicode, Py_ssize_t *psize)
> +{
> + PyObject *bytes;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> +
> + if (PyUnicode_UTF8(unicode) == NULL) {
> + assert(!PyUnicode_IS_COMPACT_ASCII(unicode));
> + bytes = _PyUnicode_AsUTF8String(unicode, NULL);
> + if (bytes == NULL)
> + return NULL;
> + _PyUnicode_UTF8(unicode) = PyObject_MALLOC(PyBytes_GET_SIZE(bytes) + 1);
> + if (_PyUnicode_UTF8(unicode) == NULL) {
> + PyErr_NoMemory();
> + Py_DECREF(bytes);
> + return NULL;
> + }
> + _PyUnicode_UTF8_LENGTH(unicode) = PyBytes_GET_SIZE(bytes);
> + memcpy(_PyUnicode_UTF8(unicode),
> + PyBytes_AS_STRING(bytes),
> + _PyUnicode_UTF8_LENGTH(unicode) + 1);
> + Py_DECREF(bytes);
> + }
> +
> + if (psize)
> + *psize = PyUnicode_UTF8_LENGTH(unicode);
> + return PyUnicode_UTF8(unicode);
> +}
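> +/* The UTF-8 representation built above is cached inside the unicode object
> + and freed together with it, so the returned pointer stays valid for the
> + lifetime of the str object and must not be freed by the caller. */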
> +
> +char*
> +PyUnicode_AsUTF8(PyObject *unicode)
> +{
> + return PyUnicode_AsUTF8AndSize(unicode, NULL);
> +}
> +
> +Py_UNICODE *
> +PyUnicode_AsUnicodeAndSize(PyObject *unicode, Py_ssize_t *size)
> +{
> + const unsigned char *one_byte;
> +#if SIZEOF_WCHAR_T == 4
> + const Py_UCS2 *two_bytes;
> +#else
> + const Py_UCS4 *four_bytes;
> + const Py_UCS4 *ucs4_end;
> + Py_ssize_t num_surrogates;
> +#endif
> + wchar_t *w;
> + wchar_t *wchar_end;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (_PyUnicode_WSTR(unicode) == NULL) {
> + /* Non-ASCII compact unicode object */
> + assert(_PyUnicode_KIND(unicode) != 0);
> + assert(PyUnicode_IS_READY(unicode));
> +
> + if (PyUnicode_KIND(unicode) == PyUnicode_4BYTE_KIND) {
> +#if SIZEOF_WCHAR_T == 2
> + four_bytes = PyUnicode_4BYTE_DATA(unicode);
> + ucs4_end = four_bytes + _PyUnicode_LENGTH(unicode);
> + num_surrogates = 0;
> +
> + for (; four_bytes < ucs4_end; ++four_bytes) {
> + if (*four_bytes > 0xFFFF)
> + ++num_surrogates;
> + }
> +
> + _PyUnicode_WSTR(unicode) = (wchar_t *) PyObject_MALLOC(
> + sizeof(wchar_t) * (_PyUnicode_LENGTH(unicode) + 1 + num_surrogates));
> + if (!_PyUnicode_WSTR(unicode)) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + _PyUnicode_WSTR_LENGTH(unicode) = _PyUnicode_LENGTH(unicode) + num_surrogates;
> +
> + w = _PyUnicode_WSTR(unicode);
> + wchar_end = w + _PyUnicode_WSTR_LENGTH(unicode);
> + four_bytes = PyUnicode_4BYTE_DATA(unicode);
> + for (; four_bytes < ucs4_end; ++four_bytes, ++w) {
> + if (*four_bytes > 0xFFFF) {
> + assert(*four_bytes <= MAX_UNICODE);
> + /* encode surrogate pair in this case */
> + *w++ = Py_UNICODE_HIGH_SURROGATE(*four_bytes);
> + *w = Py_UNICODE_LOW_SURROGATE(*four_bytes);
> + }
> + else
> + *w = *four_bytes;
> +
> + if (w > wchar_end) {
> + assert(0 && "Miscalculated string end");
> + }
> + }
> + *w = 0;
> +#else
> + /* sizeof(wchar_t) == 4 */
> + Py_FatalError("Impossible unicode object state, wstr and str "
> + "should share memory already.");
> + return NULL;
> +#endif
> + }
> + else {
> + if ((size_t)_PyUnicode_LENGTH(unicode) >
> + PY_SSIZE_T_MAX / sizeof(wchar_t) - 1) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + _PyUnicode_WSTR(unicode) = (wchar_t *) PyObject_MALLOC(sizeof(wchar_t) *
> + (_PyUnicode_LENGTH(unicode) + 1));
> + if (!_PyUnicode_WSTR(unicode)) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + if (!PyUnicode_IS_COMPACT_ASCII(unicode))
> + _PyUnicode_WSTR_LENGTH(unicode) = _PyUnicode_LENGTH(unicode);
> + w = _PyUnicode_WSTR(unicode);
> + wchar_end = w + _PyUnicode_LENGTH(unicode);
> +
> + if (PyUnicode_KIND(unicode) == PyUnicode_1BYTE_KIND) {
> + one_byte = PyUnicode_1BYTE_DATA(unicode);
> + for (; w < wchar_end; ++one_byte, ++w)
> + *w = *one_byte;
> + /* null-terminate the wstr */
> + *w = 0;
> + }
> + else if (PyUnicode_KIND(unicode) == PyUnicode_2BYTE_KIND) {
> +#if SIZEOF_WCHAR_T == 4
> + two_bytes = PyUnicode_2BYTE_DATA(unicode);
> + for (; w < wchar_end; ++two_bytes, ++w)
> + *w = *two_bytes;
> + /* null-terminate the wstr */
> + *w = 0;
> +#else
> + /* sizeof(wchar_t) == 2 */
> + PyObject_FREE(_PyUnicode_WSTR(unicode));
> + _PyUnicode_WSTR(unicode) = NULL;
> + Py_FatalError("Impossible unicode object state, wstr "
> + "and str should share memory already.");
> + return NULL;
> +#endif
> + }
> + else {
> + assert(0 && "This should never happen.");
> + }
> + }
> + }
> + if (size != NULL)
> + *size = PyUnicode_WSTR_LENGTH(unicode);
> + return _PyUnicode_WSTR(unicode);
> +}
> +
> +Py_UNICODE *
> +PyUnicode_AsUnicode(PyObject *unicode)
> +{
> + return PyUnicode_AsUnicodeAndSize(unicode, NULL);
> +}
> +
> +const Py_UNICODE *
> +_PyUnicode_AsUnicode(PyObject *unicode)
> +{
> + Py_ssize_t size;
> + const Py_UNICODE *wstr;
> +
> + wstr = PyUnicode_AsUnicodeAndSize(unicode, &size);
> + if (wstr && wcslen(wstr) != (size_t)size) {
> + PyErr_SetString(PyExc_ValueError, "embedded null character");
> + return NULL;
> + }
> + return wstr;
> +}
> +
> +
> +Py_ssize_t
> +PyUnicode_GetSize(PyObject *unicode)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + goto onError;
> + }
> + return PyUnicode_GET_SIZE(unicode);
> +
> + onError:
> + return -1;
> +}
> +
> +Py_ssize_t
> +PyUnicode_GetLength(PyObject *unicode)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return -1;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return -1;
> + return PyUnicode_GET_LENGTH(unicode);
> +}
> +
> +Py_UCS4
> +PyUnicode_ReadChar(PyObject *unicode, Py_ssize_t index)
> +{
> + void *data;
> + int kind;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return (Py_UCS4)-1;
> + }
> + if (PyUnicode_READY(unicode) == -1) {
> + return (Py_UCS4)-1;
> + }
> + if (index < 0 || index >= PyUnicode_GET_LENGTH(unicode)) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return (Py_UCS4)-1;
> + }
> + data = PyUnicode_DATA(unicode);
> + kind = PyUnicode_KIND(unicode);
> + return PyUnicode_READ(kind, data, index);
> +}
> +
> +int
> +PyUnicode_WriteChar(PyObject *unicode, Py_ssize_t index, Py_UCS4 ch)
> +{
> + if (!PyUnicode_Check(unicode) || !PyUnicode_IS_COMPACT(unicode)) {
> + PyErr_BadArgument();
> + return -1;
> + }
> + assert(PyUnicode_IS_READY(unicode));
> + if (index < 0 || index >= PyUnicode_GET_LENGTH(unicode)) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return -1;
> + }
> + if (unicode_check_modifiable(unicode))
> + return -1;
> + if (ch > PyUnicode_MAX_CHAR_VALUE(unicode)) {
> + PyErr_SetString(PyExc_ValueError, "character out of range");
> + return -1;
> + }
> + PyUnicode_WRITE(PyUnicode_KIND(unicode), PyUnicode_DATA(unicode),
> + index, ch);
> + return 0;
> +}
> +
> +const char *
> +PyUnicode_GetDefaultEncoding(void)
> +{
> + return "utf-8";
> +}
> +
> +/* create or adjust a UnicodeDecodeError */
> +static void
> +make_decode_exception(PyObject **exceptionObject,
> + const char *encoding,
> + const char *input, Py_ssize_t length,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + const char *reason)
> +{
> + if (*exceptionObject == NULL) {
> + *exceptionObject = PyUnicodeDecodeError_Create(
> + encoding, input, length, startpos, endpos, reason);
> + }
> + else {
> + if (PyUnicodeDecodeError_SetStart(*exceptionObject, startpos))
> + goto onError;
> + if (PyUnicodeDecodeError_SetEnd(*exceptionObject, endpos))
> + goto onError;
> + if (PyUnicodeDecodeError_SetReason(*exceptionObject, reason))
> + goto onError;
> + }
> + return;
> +
> +onError:
> + Py_CLEAR(*exceptionObject);
> +}
> +
> +#ifdef MS_WINDOWS
> +/* error handling callback helper:
> + build arguments, call the callback and check the arguments,
> + if no exception occurred, copy the replacement to the output
> + and adjust various state variables.
> + return 0 on success, -1 on error
> +*/
> +
> +static int
> +unicode_decode_call_errorhandler_wchar(
> + const char *errors, PyObject **errorHandler,
> + const char *encoding, const char *reason,
> + const char **input, const char **inend, Py_ssize_t *startinpos,
> + Py_ssize_t *endinpos, PyObject **exceptionObject, const char **inptr,
> + PyObject **output, Py_ssize_t *outpos)
> +{
> + static const char *argparse = "O!n;decoding error handler must return (str, int) tuple";
> +
> + PyObject *restuple = NULL;
> + PyObject *repunicode = NULL;
> + Py_ssize_t outsize;
> + Py_ssize_t insize;
> + Py_ssize_t requiredsize;
> + Py_ssize_t newpos;
> + PyObject *inputobj = NULL;
> + wchar_t *repwstr;
> + Py_ssize_t repwlen;
> +
> + assert (_PyUnicode_KIND(*output) == PyUnicode_WCHAR_KIND);
> + outsize = _PyUnicode_WSTR_LENGTH(*output);
> +
> + if (*errorHandler == NULL) {
> + *errorHandler = PyCodec_LookupError(errors);
> + if (*errorHandler == NULL)
> + goto onError;
> + }
> +
> + make_decode_exception(exceptionObject,
> + encoding,
> + *input, *inend - *input,
> + *startinpos, *endinpos,
> + reason);
> + if (*exceptionObject == NULL)
> + goto onError;
> +
> + restuple = PyObject_CallFunctionObjArgs(*errorHandler, *exceptionObject, NULL);
> + if (restuple == NULL)
> + goto onError;
> + if (!PyTuple_Check(restuple)) {
> + PyErr_SetString(PyExc_TypeError, &argparse[4]);
> + goto onError;
> + }
> + if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type, &repunicode, &newpos))
> + goto onError;
> +
> + /* Copy back the bytes variables, which might have been modified by the
> + callback */
> + inputobj = PyUnicodeDecodeError_GetObject(*exceptionObject);
> + if (!inputobj)
> + goto onError;
> + if (!PyBytes_Check(inputobj)) {
> + PyErr_Format(PyExc_TypeError, "exception attribute object must be bytes");
> + }
> + *input = PyBytes_AS_STRING(inputobj);
> + insize = PyBytes_GET_SIZE(inputobj);
> + *inend = *input + insize;
> + /* we can DECREF safely, as the exception has another reference,
> + so the object won't go away. */
> + Py_DECREF(inputobj);
> +
> + if (newpos<0)
> + newpos = insize+newpos;
> + if (newpos<0 || newpos>insize) {
> + PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", newpos);
> + goto onError;
> + }
> +
> + repwstr = PyUnicode_AsUnicodeAndSize(repunicode, &repwlen);
> + if (repwstr == NULL)
> + goto onError;
> + /* need more space? (at least enough for what we
> + have+the replacement+the rest of the string (starting
> + at the new input position), so we won't have to check space
> + when there are no errors in the rest of the string) */
> + requiredsize = *outpos;
> + if (requiredsize > PY_SSIZE_T_MAX - repwlen)
> + goto overflow;
> + requiredsize += repwlen;
> + if (requiredsize > PY_SSIZE_T_MAX - (insize - newpos))
> + goto overflow;
> + requiredsize += insize - newpos;
> + if (requiredsize > outsize) {
> + if (outsize <= PY_SSIZE_T_MAX/2 && requiredsize < 2*outsize)
> + requiredsize = 2*outsize;
> + if (unicode_resize(output, requiredsize) < 0)
> + goto onError;
> + }
> + wcsncpy(_PyUnicode_WSTR(*output) + *outpos, repwstr, repwlen);
> + *outpos += repwlen;
> + *endinpos = newpos;
> + *inptr = *input + newpos;
> +
> + /* we made it! */
> + Py_XDECREF(restuple);
> + return 0;
> +
> + overflow:
> + PyErr_SetString(PyExc_OverflowError,
> + "decoded result is too long for a Python string");
> +
> + onError:
> + Py_XDECREF(restuple);
> + return -1;
> +}
> +#endif /* MS_WINDOWS */
> +
> +static int
> +unicode_decode_call_errorhandler_writer(
> + const char *errors, PyObject **errorHandler,
> + const char *encoding, const char *reason,
> + const char **input, const char **inend, Py_ssize_t *startinpos,
> + Py_ssize_t *endinpos, PyObject **exceptionObject, const char **inptr,
> + _PyUnicodeWriter *writer /* PyObject **output, Py_ssize_t *outpos */)
> +{
> + static const char *argparse = "O!n;decoding error handler must return (str, int) tuple";
> +
> + PyObject *restuple = NULL;
> + PyObject *repunicode = NULL;
> + Py_ssize_t insize;
> + Py_ssize_t newpos;
> + Py_ssize_t replen;
> + Py_ssize_t remain;
> + PyObject *inputobj = NULL;
> + int need_to_grow = 0;
> + const char *new_inptr;
> +
> + if (*errorHandler == NULL) {
> + *errorHandler = PyCodec_LookupError(errors);
> + if (*errorHandler == NULL)
> + goto onError;
> + }
> +
> + make_decode_exception(exceptionObject,
> + encoding,
> + *input, *inend - *input,
> + *startinpos, *endinpos,
> + reason);
> + if (*exceptionObject == NULL)
> + goto onError;
> +
> + restuple = PyObject_CallFunctionObjArgs(*errorHandler, *exceptionObject, NULL);
> + if (restuple == NULL)
> + goto onError;
> + if (!PyTuple_Check(restuple)) {
> + PyErr_SetString(PyExc_TypeError, &argparse[4]);
> + goto onError;
> + }
> + if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type, &repunicode, &newpos))
> + goto onError;
> +
> + /* Copy back the bytes variables, which might have been modified by the
> + callback */
> + inputobj = PyUnicodeDecodeError_GetObject(*exceptionObject);
> + if (!inputobj)
> + goto onError;
> + if (!PyBytes_Check(inputobj)) {
> + PyErr_Format(PyExc_TypeError, "exception attribute object must be bytes");
> + }
> + remain = *inend - *input - *endinpos;
> + *input = PyBytes_AS_STRING(inputobj);
> + insize = PyBytes_GET_SIZE(inputobj);
> + *inend = *input + insize;
> + /* we can DECREF safely, as the exception has another reference,
> + so the object won't go away. */
> + Py_DECREF(inputobj);
> +
> + if (newpos<0)
> + newpos = insize+newpos;
> + if (newpos<0 || newpos>insize) {
> + PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", newpos);
> + goto onError;
> + }
> +
> + if (PyUnicode_READY(repunicode) < 0)
> + goto onError;
> + replen = PyUnicode_GET_LENGTH(repunicode);
> + if (replen > 1) {
> + writer->min_length += replen - 1;
> + need_to_grow = 1;
> + }
> + new_inptr = *input + newpos;
> + if (*inend - new_inptr > remain) {
> + /* We don't know the decoding algorithm here, so we make the worst-case
> + assumption that one byte decodes to one unicode character. If a single
> + byte could decode to more than one character, the decoder might write
> + out of bounds. Is that possible for any of the algorithms using this
> + function? */
> + writer->min_length += *inend - new_inptr - remain;
> + need_to_grow = 1;
> + }
> + if (need_to_grow) {
> + writer->overallocate = 1;
> + if (_PyUnicodeWriter_Prepare(writer, writer->min_length - writer->pos,
> + PyUnicode_MAX_CHAR_VALUE(repunicode)) == -1)
> + goto onError;
> + }
> + if (_PyUnicodeWriter_WriteStr(writer, repunicode) == -1)
> + goto onError;
> +
> + *endinpos = newpos;
> + *inptr = new_inptr;
> +
> + /* we made it! */
> + Py_XDECREF(restuple);
> + return 0;
> +
> + onError:
> + Py_XDECREF(restuple);
> + return -1;
> +}
> +
> +/* --- UTF-7 Codec -------------------------------------------------------- */
> +
> +/* See RFC2152 for details. We encode conservatively and decode liberally. */
> +
> +/* Three simple macros defining base-64. */
> +
> +/* Is c a base-64 character? */
> +
> +#define IS_BASE64(c) \
> + (((c) >= 'A' && (c) <= 'Z') || \
> + ((c) >= 'a' && (c) <= 'z') || \
> + ((c) >= '0' && (c) <= '9') || \
> + (c) == '+' || (c) == '/')
> +
> +/* given that c is a base-64 character, what is its base-64 value? */
> +
> +#define FROM_BASE64(c) \
> + (((c) >= 'A' && (c) <= 'Z') ? (c) - 'A' : \
> + ((c) >= 'a' && (c) <= 'z') ? (c) - 'a' + 26 : \
> + ((c) >= '0' && (c) <= '9') ? (c) - '0' + 52 : \
> + (c) == '+' ? 62 : 63)
> +
> +/* What is the base-64 character of the bottom 6 bits of n? */
> +
> +#define TO_BASE64(n) \
> + ("ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"[(n) & 0x3f])
> +
> +/* DECODE_DIRECT: this byte encountered in a UTF-7 string should be
> + * decoded as itself. We are permissive on decoding; the only ASCII
> + * byte not decoding to itself is the + which begins a base64
> + * string. */
> +
> +#define DECODE_DIRECT(c) \
> + ((c) <= 127 && (c) != '+')
> +
> +/* The UTF-7 encoder treats ASCII characters differently according to
> + * whether they are Set D, Set O, Whitespace, or special (i.e. none of
> + * the above). See RFC2152. This array identifies these different
> + * sets:
> + * 0 : "Set D"
> + * alphanumeric and '(),-./:?
> + * 1 : "Set O"
> + * !"#$%&*;<=>@[]^_`{|}
> + * 2 : "whitespace"
> + * ht nl cr sp
> + * 3 : special (must be base64 encoded)
> + * everything else (i.e. +\~ and non-printing codes 0-8 11-12 14-31 127)
> + */
> +
> +static
> +char utf7_category[128] = {
> +/* nul soh stx etx eot enq ack bel bs ht nl vt np cr so si */
> + 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 3, 3, 2, 3, 3,
> +/* dle dc1 dc2 dc3 dc4 nak syn etb can em sub esc fs gs rs us */
> + 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
> +/* sp ! " # $ % & ' ( ) * + , - . / */
> + 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 3, 0, 0, 0, 0,
> +/* 0 1 2 3 4 5 6 7 8 9 : ; < = > ? */
> + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0,
> +/* @ A B C D E F G H I J K L M N O */
> + 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> +/* P Q R S T U V W X Y Z [ \ ] ^ _ */
> + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 1, 1,
> +/* ` a b c d e f g h i j k l m n o */
> + 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
> +/* p q r s t u v w x y z { | } ~ del */
> + 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 3, 3,
> +};
> +
> +/* ENCODE_DIRECT: this character should be encoded as itself. The
> + * answer depends on whether we are encoding set O as itself, and also
> + * on whether we are encoding whitespace as itself. RFC2152 makes it
> + * clear that the answers to these questions vary between
> + * applications, so this code needs to be flexible. */
> +
> +#define ENCODE_DIRECT(c, directO, directWS) \
> + ((c) < 128 && (c) > 0 && \
> + ((utf7_category[(c)] == 0) || \
> + (directWS && (utf7_category[(c)] == 2)) || \
> + (directO && (utf7_category[(c)] == 1))))
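> +/* e.g. ENCODE_DIRECT('a', 0, 0) is true ('a' is in Set D, always direct),
> + ENCODE_DIRECT('!', 0, 1) is false ('!' is in Set O, direct only when
> + directO is set), and '+' (category 3) is never encoded directly. */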
> +
> +PyObject *
> +PyUnicode_DecodeUTF7(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + return PyUnicode_DecodeUTF7Stateful(s, size, errors, NULL);
> +}
> +
> +/* The decoder. The only state we preserve is our read position,
> + * i.e. how many characters we have consumed. So if we end in the
> + * middle of a shift sequence we have to back off the read position
> + * and the output to the beginning of the sequence, otherwise we lose
> + * all the shift state (seen bits, number of bits seen, high
> + * surrogate). */
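> +/* Example: the UTF-7 input "+AGk-" decodes to the single character 'i';
> + the base-64 run "AGk" supplies 18 bits whose first 16 form 0x0069, the
> + remaining padding bits are zero, and the trailing '-' is absorbed. */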
> +
> +PyObject *
> +PyUnicode_DecodeUTF7Stateful(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + Py_ssize_t *consumed)
> +{
> + const char *starts = s;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + const char *e;
> + _PyUnicodeWriter writer;
> + const char *errmsg = "";
> + int inShift = 0;
> + Py_ssize_t shiftOutStart;
> + unsigned int base64bits = 0;
> + unsigned long base64buffer = 0;
> + Py_UCS4 surrogate = 0;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> +
> + if (size == 0) {
> + if (consumed)
> + *consumed = 0;
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + /* Start off assuming it's all ASCII. Widen later as necessary. */
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> +
> + shiftOutStart = 0;
> + e = s + size;
> +
> + while (s < e) {
> + Py_UCS4 ch;
> + restart:
> + ch = (unsigned char) *s;
> +
> + if (inShift) { /* in a base-64 section */
> + if (IS_BASE64(ch)) { /* consume a base-64 character */
> + base64buffer = (base64buffer << 6) | FROM_BASE64(ch);
> + base64bits += 6;
> + s++;
> + if (base64bits >= 16) {
> + /* we have enough bits for a UTF-16 value */
> + Py_UCS4 outCh = (Py_UCS4)(base64buffer >> (base64bits-16));
> + base64bits -= 16;
> + base64buffer &= (1 << base64bits) - 1; /* clear high bits */
> + assert(outCh <= 0xffff);
> + if (surrogate) {
> + /* expecting a second surrogate */
> + if (Py_UNICODE_IS_LOW_SURROGATE(outCh)) {
> + Py_UCS4 ch2 = Py_UNICODE_JOIN_SURROGATES(surrogate, outCh);
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch2) < 0)
> + goto onError;
> + surrogate = 0;
> + continue;
> + }
> + else {
> + if (_PyUnicodeWriter_WriteCharInline(&writer, surrogate) < 0)
> + goto onError;
> + surrogate = 0;
> + }
> + }
> + if (Py_UNICODE_IS_HIGH_SURROGATE(outCh)) {
> + /* first surrogate */
> + surrogate = outCh;
> + }
> + else {
> + if (_PyUnicodeWriter_WriteCharInline(&writer, outCh) < 0)
> + goto onError;
> + }
> + }
> + }
> + else { /* now leaving a base-64 section */
> + inShift = 0;
> + if (base64bits > 0) { /* left-over bits */
> + if (base64bits >= 6) {
> + /* We've seen at least one base-64 character */
> + s++;
> + errmsg = "partial character in shift sequence";
> + goto utf7Error;
> + }
> + else {
> + /* Some bits remain; they should be zero */
> + if (base64buffer != 0) {
> + s++;
> + errmsg = "non-zero padding bits in shift sequence";
> + goto utf7Error;
> + }
> + }
> + }
> + if (surrogate && DECODE_DIRECT(ch)) {
> + if (_PyUnicodeWriter_WriteCharInline(&writer, surrogate) < 0)
> + goto onError;
> + }
> + surrogate = 0;
> + if (ch == '-') {
> + /* '-' is absorbed; other terminating
> + characters are preserved */
> + s++;
> + }
> + }
> + }
> + else if ( ch == '+' ) {
> + startinpos = s-starts;
> + s++; /* consume '+' */
> + if (s < e && *s == '-') { /* '+-' encodes '+' */
> + s++;
> + if (_PyUnicodeWriter_WriteCharInline(&writer, '+') < 0)
> + goto onError;
> + }
> + else { /* begin base64-encoded section */
> + inShift = 1;
> + surrogate = 0;
> + shiftOutStart = writer.pos;
> + base64bits = 0;
> + base64buffer = 0;
> + }
> + }
> + else if (DECODE_DIRECT(ch)) { /* character decodes as itself */
> + s++;
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
> + goto onError;
> + }
> + else {
> + startinpos = s-starts;
> + s++;
> + errmsg = "unexpected special character";
> + goto utf7Error;
> + }
> + continue;
> +utf7Error:
> + endinpos = s-starts;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "utf7", errmsg,
> + &starts, &e, &startinpos, &endinpos, &exc, &s,
> + &writer))
> + goto onError;
> + }
> +
> + /* end of string */
> +
> + if (inShift && !consumed) { /* in shift sequence, no more to follow */
> + /* if we're in an inconsistent state, that's an error */
> + inShift = 0;
> + if (surrogate ||
> + (base64bits >= 6) ||
> + (base64bits > 0 && base64buffer != 0)) {
> + endinpos = size;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "utf7", "unterminated shift sequence",
> + &starts, &e, &startinpos, &endinpos, &exc, &s,
> + &writer))
> + goto onError;
> + if (s < e)
> + goto restart;
> + }
> + }
> +
> + /* return state */
> + if (consumed) {
> + if (inShift) {
> + *consumed = startinpos;
> + if (writer.pos != shiftOutStart && writer.maxchar > 127) {
> + PyObject *result = PyUnicode_FromKindAndData(
> + writer.kind, writer.data, shiftOutStart);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + _PyUnicodeWriter_Dealloc(&writer);
> + return result;
> + }
> + writer.pos = shiftOutStart; /* back off output */
> + }
> + else {
> + *consumed = s-starts;
> + }
> + }
> +
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> +}
> +
> +
> +PyObject *
> +_PyUnicode_EncodeUTF7(PyObject *str,
> + int base64SetO,
> + int base64WhiteSpace,
> + const char *errors)
> +{
> + int kind;
> + void *data;
> + Py_ssize_t len;
> + PyObject *v;
> + int inShift = 0;
> + Py_ssize_t i;
> + unsigned int base64bits = 0;
> + unsigned long base64buffer = 0;
> + char * out;
> + char * start;
> +
> + if (PyUnicode_READY(str) == -1)
> + return NULL;
> + kind = PyUnicode_KIND(str);
> + data = PyUnicode_DATA(str);
> + len = PyUnicode_GET_LENGTH(str);
> +
> + if (len == 0)
> + return PyBytes_FromStringAndSize(NULL, 0);
> +
> + /* It might be possible to tighten this worst case */
> + if (len > PY_SSIZE_T_MAX / 8)
> + return PyErr_NoMemory();
> + v = PyBytes_FromStringAndSize(NULL, len * 8);
> + if (v == NULL)
> + return NULL;
> +
> + start = out = PyBytes_AS_STRING(v);
> + for (i = 0; i < len; ++i) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> +
> + if (inShift) {
> + if (ENCODE_DIRECT(ch, !base64SetO, !base64WhiteSpace)) {
> + /* shifting out */
> + if (base64bits) { /* output remaining bits */
> + *out++ = TO_BASE64(base64buffer << (6-base64bits));
> + base64buffer = 0;
> + base64bits = 0;
> + }
> + inShift = 0;
> + /* Characters not in the BASE64 set implicitly unshift the sequence
> + so no '-' is required, except if the character is itself a '-' */
> + if (IS_BASE64(ch) || ch == '-') {
> + *out++ = '-';
> + }
> + *out++ = (char) ch;
> + }
> + else {
> + goto encode_char;
> + }
> + }
> + else { /* not in a shift sequence */
> + if (ch == '+') {
> + *out++ = '+';
> + *out++ = '-';
> + }
> + else if (ENCODE_DIRECT(ch, !base64SetO, !base64WhiteSpace)) {
> + *out++ = (char) ch;
> + }
> + else {
> + *out++ = '+';
> + inShift = 1;
> + goto encode_char;
> + }
> + }
> + continue;
> +encode_char:
> + if (ch >= 0x10000) {
> + assert(ch <= MAX_UNICODE);
> +
> + /* code first surrogate */
> + base64bits += 16;
> + base64buffer = (base64buffer << 16) | Py_UNICODE_HIGH_SURROGATE(ch);
> + while (base64bits >= 6) {
> + *out++ = TO_BASE64(base64buffer >> (base64bits-6));
> + base64bits -= 6;
> + }
> + /* prepare second surrogate */
> + ch = Py_UNICODE_LOW_SURROGATE(ch);
> + }
> + base64bits += 16;
> + base64buffer = (base64buffer << 16) | ch;
> + while (base64bits >= 6) {
> + *out++ = TO_BASE64(base64buffer >> (base64bits-6));
> + base64bits -= 6;
> + }
> + }
> + if (base64bits)
> + *out++= TO_BASE64(base64buffer << (6-base64bits) );
> + if (inShift)
> + *out++ = '-';
> + if (_PyBytes_Resize(&v, out - start) < 0)
> + return NULL;
> + return v;
> +}
> +PyObject *
> +PyUnicode_EncodeUTF7(const Py_UNICODE *s,
> + Py_ssize_t size,
> + int base64SetO,
> + int base64WhiteSpace,
> + const char *errors)
> +{
> + PyObject *result;
> + PyObject *tmp = PyUnicode_FromUnicode(s, size);
> + if (tmp == NULL)
> + return NULL;
> + result = _PyUnicode_EncodeUTF7(tmp, base64SetO,
> + base64WhiteSpace, errors);
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +#undef IS_BASE64
> +#undef FROM_BASE64
> +#undef TO_BASE64
> +#undef DECODE_DIRECT
> +#undef ENCODE_DIRECT
> +
> +/* --- UTF-8 Codec -------------------------------------------------------- */
> +
> +PyObject *
> +PyUnicode_DecodeUTF8(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + return PyUnicode_DecodeUTF8Stateful(s, size, errors, NULL);
> +}
> +
> +#include "stringlib/asciilib.h"
> +#include "stringlib/codecs.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs1lib.h"
> +#include "stringlib/codecs.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs2lib.h"
> +#include "stringlib/codecs.h"
> +#include "stringlib/undef.h"
> +
> +#include "stringlib/ucs4lib.h"
> +#include "stringlib/codecs.h"
> +#include "stringlib/undef.h"
> +
> +/* Mask to quickly check whether a C 'long' contains a
> + non-ASCII, UTF8-encoded char. */
> +#if (SIZEOF_LONG == 8)
> +# define ASCII_CHAR_MASK 0x8080808080808080UL
> +#elif (SIZEOF_LONG == 4)
> +# define ASCII_CHAR_MASK 0x80808080UL
> +#else
> +# error C 'long' size should be either 4 or 8!
> +#endif
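> +/* A long's worth of packed bytes has (value & ASCII_CHAR_MASK) == 0 exactly
> + when every byte is < 0x80, i.e. pure ASCII, letting ascii_decode() below
> + handle SIZEOF_LONG bytes per iteration. */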
> +
> +static Py_ssize_t
> +ascii_decode(const char *start, const char *end, Py_UCS1 *dest)
> +{
> + const char *p = start;
> + const char *aligned_end = (const char *) _Py_ALIGN_DOWN(end, SIZEOF_LONG);
> +
> + /*
> + * Issue #17237: m68k is a bit different from most architectures in
> + * that objects do not use "natural alignment" - for example, int and
> + * long are only aligned at 2-byte boundaries. Therefore the assert()
> + * won't work; also, tests have shown that skipping the "optimised
> + * version" will even speed up m68k.
> + */
> +#if !defined(__m68k__)
> +#if SIZEOF_LONG <= SIZEOF_VOID_P
> + assert(_Py_IS_ALIGNED(dest, SIZEOF_LONG));
> + if (_Py_IS_ALIGNED(p, SIZEOF_LONG)) {
> + /* Fast path, see in STRINGLIB(utf8_decode) for
> + an explanation. */
> + /* Help allocation */
> + const char *_p = p;
> + Py_UCS1 * q = dest;
> + while (_p < aligned_end) {
> + unsigned long value = *(const unsigned long *) _p;
> + if (value & ASCII_CHAR_MASK)
> + break;
> + *((unsigned long *)q) = value;
> + _p += SIZEOF_LONG;
> + q += SIZEOF_LONG;
> + }
> + p = _p;
> + while (p < end) {
> + if ((unsigned char)*p & 0x80)
> + break;
> + *q++ = *p++;
> + }
> + return p - start;
> + }
> +#endif
> +#endif
> + while (p < end) {
> + /* Fast path, see in STRINGLIB(utf8_decode) in stringlib/codecs.h
> + for an explanation. */
> + if (_Py_IS_ALIGNED(p, SIZEOF_LONG)) {
> + /* Help allocation */
> + const char *_p = p;
> + while (_p < aligned_end) {
> + unsigned long value = *(unsigned long *) _p;
> + if (value & ASCII_CHAR_MASK)
> + break;
> + _p += SIZEOF_LONG;
> + }
> + p = _p;
> + if (_p == end)
> + break;
> + }
> + if ((unsigned char)*p & 0x80)
> + break;
> + ++p;
> + }
> + memcpy(dest, start, p - start);
> + return p - start;
> +}
> +
> +PyObject *
> +PyUnicode_DecodeUTF8Stateful(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + Py_ssize_t *consumed)
> +{
> + _PyUnicodeWriter writer;
> + const char *starts = s;
> + const char *end = s + size;
> +
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + const char *errmsg = "";
> + PyObject *error_handler_obj = NULL;
> + PyObject *exc = NULL;
> + _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
> +
> + if (size == 0) {
> + if (consumed)
> + *consumed = 0;
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + /* ASCII is equivalent to the first 128 ordinals in Unicode. */
> + if (size == 1 && (unsigned char)s[0] < 128) {
> + if (consumed)
> + *consumed = 1;
> + return get_latin1_char((unsigned char)s[0]);
> + }
> +
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> + if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
> + goto onError;
> +
> + writer.pos = ascii_decode(s, end, writer.data);
> + s += writer.pos;
> + while (s < end) {
> + Py_UCS4 ch;
> + int kind = writer.kind;
> +
> + if (kind == PyUnicode_1BYTE_KIND) {
> + if (PyUnicode_IS_ASCII(writer.buffer))
> + ch = asciilib_utf8_decode(&s, end, writer.data, &writer.pos);
> + else
> + ch = ucs1lib_utf8_decode(&s, end, writer.data, &writer.pos);
> + } else if (kind == PyUnicode_2BYTE_KIND) {
> + ch = ucs2lib_utf8_decode(&s, end, writer.data, &writer.pos);
> + } else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + ch = ucs4lib_utf8_decode(&s, end, writer.data, &writer.pos);
> + }
> +
> + switch (ch) {
> + case 0:
> + if (s == end || consumed)
> + goto End;
> + errmsg = "unexpected end of data";
> + startinpos = s - starts;
> + endinpos = end - starts;
> + break;
> + case 1:
> + errmsg = "invalid start byte";
> + startinpos = s - starts;
> + endinpos = startinpos + 1;
> + break;
> + case 2:
> + case 3:
> + case 4:
> + errmsg = "invalid continuation byte";
> + startinpos = s - starts;
> + endinpos = startinpos + ch - 1;
> + break;
> + default:
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
> + goto onError;
> + continue;
> + }
> +
> + if (error_handler == _Py_ERROR_UNKNOWN)
> + error_handler = get_error_handler(errors);
> +
> + switch (error_handler) {
> + case _Py_ERROR_IGNORE:
> + s += (endinpos - startinpos);
> + break;
> +
> + case _Py_ERROR_REPLACE:
> + if (_PyUnicodeWriter_WriteCharInline(&writer, 0xfffd) < 0)
> + goto onError;
> + s += (endinpos - startinpos);
> + break;
> +
> + case _Py_ERROR_SURROGATEESCAPE:
> + {
> + Py_ssize_t i;
> +
> + if (_PyUnicodeWriter_PrepareKind(&writer, PyUnicode_2BYTE_KIND) < 0)
> + goto onError;
> + for (i=startinpos; i<endinpos; i++) {
> + ch = (Py_UCS4)(unsigned char)(starts[i]);
> + PyUnicode_WRITE(writer.kind, writer.data, writer.pos,
> + ch + 0xdc00);
> + writer.pos++;
> + }
> + s += (endinpos - startinpos);
> + break;
> + }
> +
> + default:
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &error_handler_obj,
> + "utf-8", errmsg,
> + &starts, &end, &startinpos, &endinpos, &exc, &s,
> + &writer))
> + goto onError;
> + }
> + }
> +
> +End:
> + if (consumed)
> + *consumed = s - starts;
> +
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> +onError:
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> +}
> +
> +#if defined(__APPLE__) || defined(__ANDROID__)
> +
> +/* Simplified UTF-8 decoder using surrogateescape error handler,
> + used to decode the command line arguments on Mac OS X and Android.
> +
> + Return a pointer to a newly allocated wide character string (use
> + PyMem_RawFree() to free the memory), or NULL on memory allocation error. */
> +
> +wchar_t*
> +_Py_DecodeUTF8_surrogateescape(const char *s, Py_ssize_t size)
> +{
> + const char *e;
> + wchar_t *unicode;
> + Py_ssize_t outpos;
> +
> + /* Note: size is always at least as large as the resulting Unicode
> + character count */
> + if (PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(wchar_t) < (size + 1))
> + return NULL;
> + unicode = PyMem_RawMalloc((size + 1) * sizeof(wchar_t));
> + if (!unicode)
> + return NULL;
> +
> + /* Unpack UTF-8 encoded data */
> + e = s + size;
> + outpos = 0;
> + while (s < e) {
> + Py_UCS4 ch;
> +#if SIZEOF_WCHAR_T == 4
> + ch = ucs4lib_utf8_decode(&s, e, (Py_UCS4 *)unicode, &outpos);
> +#else
> + ch = ucs2lib_utf8_decode(&s, e, (Py_UCS2 *)unicode, &outpos);
> +#endif
> + if (ch > 0xFF) {
> +#if SIZEOF_WCHAR_T == 4
> + assert(0);
> +#else
> + assert(ch > 0xFFFF && ch <= MAX_UNICODE);
> + /* compute and append the two surrogates: */
> + unicode[outpos++] = (wchar_t)Py_UNICODE_HIGH_SURROGATE(ch);
> + unicode[outpos++] = (wchar_t)Py_UNICODE_LOW_SURROGATE(ch);
> +#endif
> + }
> + else {
> + if (!ch && s == e)
> + break;
> + /* surrogateescape */
> + unicode[outpos++] = 0xDC00 + (unsigned char)*s++;
> + }
> + }
> + unicode[outpos] = L'\0';
> + return unicode;
> +}
> +
> +#endif /* __APPLE__ or __ANDROID__ */
> +
> +/* Primary internal function which creates UTF-8 encoded bytes objects.
> +
> + Allocation strategy: if the string is short, convert into a stack buffer
> + and allocate exactly as much space as needed at the end. Otherwise allocate
> + the maximum that could be needed (4 result bytes per Unicode character) and
> + return the excess memory at the end.
> +*/
> +PyObject *
> +_PyUnicode_AsUTF8String(PyObject *unicode, const char *errors)
> +{
> + enum PyUnicode_Kind kind;
> + void *data;
> + Py_ssize_t size;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> +
> + if (PyUnicode_UTF8(unicode))
> + return PyBytes_FromStringAndSize(PyUnicode_UTF8(unicode),
> + PyUnicode_UTF8_LENGTH(unicode));
> +
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> + size = PyUnicode_GET_LENGTH(unicode);
> +
> + switch (kind) {
> + default:
> + assert(0);
> + case PyUnicode_1BYTE_KIND:
> + /* the string cannot be ASCII, or PyUnicode_UTF8() would be set */
> + assert(!PyUnicode_IS_ASCII(unicode));
> + return ucs1lib_utf8_encoder(unicode, data, size, errors);
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_utf8_encoder(unicode, data, size, errors);
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_utf8_encoder(unicode, data, size, errors);
> + }
> +}
> +
> +PyObject *
> +PyUnicode_EncodeUTF8(const Py_UNICODE *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + PyObject *v, *unicode;
> +
> + unicode = PyUnicode_FromUnicode(s, size);
> + if (unicode == NULL)
> + return NULL;
> + v = _PyUnicode_AsUTF8String(unicode, errors);
> + Py_DECREF(unicode);
> + return v;
> +}
> +
> +PyObject *
> +PyUnicode_AsUTF8String(PyObject *unicode)
> +{
> + return _PyUnicode_AsUTF8String(unicode, NULL);
> +}
> +
> +/* --- UTF-32 Codec ------------------------------------------------------- */
> +
> +PyObject *
> +PyUnicode_DecodeUTF32(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + int *byteorder)
> +{
> + return PyUnicode_DecodeUTF32Stateful(s, size, errors, byteorder, NULL);
> +}
> +
> +PyObject *
> +PyUnicode_DecodeUTF32Stateful(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + int *byteorder,
> + Py_ssize_t *consumed)
> +{
> + const char *starts = s;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + _PyUnicodeWriter writer;
> + const unsigned char *q, *e;
> + int le, bo = 0; /* assume native ordering by default */
> + const char *encoding;
> + const char *errmsg = "";
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> +
> + q = (unsigned char *)s;
> + e = q + size;
> +
> + if (byteorder)
> + bo = *byteorder;
> +
> + /* Check for BOM marks (U+FEFF) in the input and adjust current
> + byte order setting accordingly. In native mode, the leading BOM
> + mark is skipped, in all other modes, it is copied to the output
> + stream as-is (giving a ZWNBSP character). */
> + if (bo == 0 && size >= 4) {
> + Py_UCS4 bom = ((unsigned int)q[3] << 24) | (q[2] << 16) | (q[1] << 8) | q[0];
> + if (bom == 0x0000FEFF) {
> + bo = -1;
> + q += 4;
> + }
> + else if (bom == 0xFFFE0000) {
> + bo = 1;
> + q += 4;
> + }
> + if (byteorder)
> + *byteorder = bo;
> + }
> +
> + if (q == e) {
> + if (consumed)
> + *consumed = size;
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> +#ifdef WORDS_BIGENDIAN
> + le = bo < 0;
> +#else
> + le = bo <= 0;
> +#endif
> + encoding = le ? "utf-32-le" : "utf-32-be";
> +
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = (e - q + 3) / 4;
> + if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
> + goto onError;
> +
> + while (1) {
> + Py_UCS4 ch = 0;
> + Py_UCS4 maxch = PyUnicode_MAX_CHAR_VALUE(writer.buffer);
> +
> + if (e - q >= 4) {
> + enum PyUnicode_Kind kind = writer.kind;
> + void *data = writer.data;
> + const unsigned char *last = e - 4;
> + Py_ssize_t pos = writer.pos;
> + if (le) {
> + do {
> + ch = ((unsigned int)q[3] << 24) | (q[2] << 16) | (q[1] << 8) | q[0];
> + if (ch > maxch)
> + break;
> + if (kind != PyUnicode_1BYTE_KIND &&
> + Py_UNICODE_IS_SURROGATE(ch))
> + break;
> + PyUnicode_WRITE(kind, data, pos++, ch);
> + q += 4;
> + } while (q <= last);
> + }
> + else {
> + do {
> + ch = ((unsigned int)q[0] << 24) | (q[1] << 16) | (q[2] << 8) | q[3];
> + if (ch > maxch)
> + break;
> + if (kind != PyUnicode_1BYTE_KIND &&
> + Py_UNICODE_IS_SURROGATE(ch))
> + break;
> + PyUnicode_WRITE(kind, data, pos++, ch);
> + q += 4;
> + } while (q <= last);
> + }
> + writer.pos = pos;
> + }
> +
> + if (Py_UNICODE_IS_SURROGATE(ch)) {
> + errmsg = "code point in surrogate code point range(0xd800, 0xe000)";
> + startinpos = ((const char *)q) - starts;
> + endinpos = startinpos + 4;
> + }
> + else if (ch <= maxch) {
> + if (q == e || consumed)
> + break;
> + /* remaining bytes at the end? (size should be divisible by 4) */
> + errmsg = "truncated data";
> + startinpos = ((const char *)q) - starts;
> + endinpos = ((const char *)e) - starts;
> + }
> + else {
> + if (ch < 0x110000) {
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
> + goto onError;
> + q += 4;
> + continue;
> + }
> + errmsg = "code point not in range(0x110000)";
> + startinpos = ((const char *)q) - starts;
> + endinpos = startinpos + 4;
> + }
> +
> + /* The remaining input chars are ignored if the callback
> + chooses to skip the input */
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + encoding, errmsg,
> + &starts, (const char **)&e, &startinpos, &endinpos, &exc, (const char **)&q,
> + &writer))
> + goto onError;
> + }
> +
> + if (consumed)
> + *consumed = (const char *)q-starts;
> +
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
> +PyObject *
> +_PyUnicode_EncodeUTF32(PyObject *str,
> + const char *errors,
> + int byteorder)
> +{
> + enum PyUnicode_Kind kind;
> + const void *data;
> + Py_ssize_t len;
> + PyObject *v;
> + uint32_t *out;
> +#if PY_LITTLE_ENDIAN
> + int native_ordering = byteorder <= 0;
> +#else
> + int native_ordering = byteorder >= 0;
> +#endif
> + const char *encoding;
> + Py_ssize_t nsize, pos;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + PyObject *rep = NULL;
> +
> + if (!PyUnicode_Check(str)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(str) == -1)
> + return NULL;
> + kind = PyUnicode_KIND(str);
> + data = PyUnicode_DATA(str);
> + len = PyUnicode_GET_LENGTH(str);
> +
> + if (len > PY_SSIZE_T_MAX / 4 - (byteorder == 0))
> + return PyErr_NoMemory();
> + nsize = len + (byteorder == 0);
> + v = PyBytes_FromStringAndSize(NULL, nsize * 4);
> + if (v == NULL)
> + return NULL;
> +
> + /* output buffer is 4-bytes aligned */
> + assert(_Py_IS_ALIGNED(PyBytes_AS_STRING(v), 4));
> + out = (uint32_t *)PyBytes_AS_STRING(v);
> + if (byteorder == 0)
> + *out++ = 0xFEFF;
> + if (len == 0)
> + goto done;
> +
> + if (byteorder == -1)
> + encoding = "utf-32-le";
> + else if (byteorder == 1)
> + encoding = "utf-32-be";
> + else
> + encoding = "utf-32";
> +
> + if (kind == PyUnicode_1BYTE_KIND) {
> + ucs1lib_utf32_encode((const Py_UCS1 *)data, len, &out, native_ordering);
> + goto done;
> + }
> +
> + pos = 0;
> + while (pos < len) {
> + Py_ssize_t repsize, moreunits;
> +
> + if (kind == PyUnicode_2BYTE_KIND) {
> + pos += ucs2lib_utf32_encode((const Py_UCS2 *)data + pos, len - pos,
> + &out, native_ordering);
> + }
> + else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + pos += ucs4lib_utf32_encode((const Py_UCS4 *)data + pos, len - pos,
> + &out, native_ordering);
> + }
> + if (pos == len)
> + break;
> +
> + rep = unicode_encode_call_errorhandler(
> + errors, &errorHandler,
> + encoding, "surrogates not allowed",
> + str, &exc, pos, pos + 1, &pos);
> + if (!rep)
> + goto error;
> +
> + if (PyBytes_Check(rep)) {
> + repsize = PyBytes_GET_SIZE(rep);
> + if (repsize & 3) {
> + raise_encode_exception(&exc, encoding,
> + str, pos - 1, pos,
> + "surrogates not allowed");
> + goto error;
> + }
> + moreunits = repsize / 4;
> + }
> + else {
> + assert(PyUnicode_Check(rep));
> + if (PyUnicode_READY(rep) < 0)
> + goto error;
> + moreunits = repsize = PyUnicode_GET_LENGTH(rep);
> + if (!PyUnicode_IS_ASCII(rep)) {
> + raise_encode_exception(&exc, encoding,
> + str, pos - 1, pos,
> + "surrogates not allowed");
> + goto error;
> + }
> + }
> +
> + /* four bytes are reserved for each surrogate */
> + if (moreunits > 1) {
> + Py_ssize_t outpos = out - (uint32_t*) PyBytes_AS_STRING(v);
> + if (moreunits >= (PY_SSIZE_T_MAX - PyBytes_GET_SIZE(v)) / 4) {
> + /* integer overflow */
> + PyErr_NoMemory();
> + goto error;
> + }
> + if (_PyBytes_Resize(&v, PyBytes_GET_SIZE(v) + 4 * (moreunits - 1)) < 0)
> + goto error;
> + out = (uint32_t*) PyBytes_AS_STRING(v) + outpos;
> + }
> +
> + if (PyBytes_Check(rep)) {
> + memcpy(out, PyBytes_AS_STRING(rep), repsize);
> + out += moreunits;
> + } else /* rep is unicode */ {
> + assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
> + ucs1lib_utf32_encode(PyUnicode_1BYTE_DATA(rep), repsize,
> + &out, native_ordering);
> + }
> +
> + Py_CLEAR(rep);
> + }
> +
> +    /* Cut back to the size actually needed. This is necessary when, for
> +       example, the string contains isolated surrogates and the 'ignore'
> +       handler is used. */
> + nsize = (unsigned char*) out - (unsigned char*) PyBytes_AS_STRING(v);
> + if (nsize != PyBytes_GET_SIZE(v))
> + _PyBytes_Resize(&v, nsize);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + done:
> + return v;
> + error:
> + Py_XDECREF(rep);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + Py_XDECREF(v);
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeUTF32(const Py_UNICODE *s,
> + Py_ssize_t size,
> + const char *errors,
> + int byteorder)
> +{
> + PyObject *result;
> + PyObject *tmp = PyUnicode_FromUnicode(s, size);
> + if (tmp == NULL)
> + return NULL;
> + result = _PyUnicode_EncodeUTF32(tmp, errors, byteorder);
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +PyObject *
> +PyUnicode_AsUTF32String(PyObject *unicode)
> +{
> + return _PyUnicode_EncodeUTF32(unicode, NULL, 0);
> +}
> +
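(Also not part of the patch: a reminder of the byteorder contract used above, since it trips people up. Pass 0 to auto-detect and consume a BOM, with the detected order written back through *byteorder; pass -1 or +1 to force little or big endian, in which case a leading BOM is decoded as U+FEFF. Minimal sketch, with a hypothetical helper name:)

    /* Sketch only: decode a BOM-prefixed UTF-32-LE buffer. */
    #include <Python.h>

    static PyObject *
    utf32_decode_example(void)
    {
        const char buf[] = "\xff\xfe\x00\x00"   /* BOM (U+FEFF), little endian */
                           "\x41\x00\x00\x00";  /* 'A' */
        int byteorder = 0;                      /* 0 = auto-detect from the BOM */

        /* On success returns "A" and sets byteorder to -1 (little endian). */
        return PyUnicode_DecodeUTF32(buf, sizeof(buf) - 1, "strict", &byteorder);
    }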
> +/* --- UTF-16 Codec ------------------------------------------------------- */
> +
> +PyObject *
> +PyUnicode_DecodeUTF16(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + int *byteorder)
> +{
> + return PyUnicode_DecodeUTF16Stateful(s, size, errors, byteorder, NULL);
> +}
> +
> +PyObject *
> +PyUnicode_DecodeUTF16Stateful(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + int *byteorder,
> + Py_ssize_t *consumed)
> +{
> + const char *starts = s;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + _PyUnicodeWriter writer;
> + const unsigned char *q, *e;
> + int bo = 0; /* assume native ordering by default */
> + int native_ordering;
> + const char *errmsg = "";
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + const char *encoding;
> +
> + q = (unsigned char *)s;
> + e = q + size;
> +
> + if (byteorder)
> + bo = *byteorder;
> +
> + /* Check for BOM marks (U+FEFF) in the input and adjust current
> + byte order setting accordingly. In native mode, the leading BOM
> + mark is skipped, in all other modes, it is copied to the output
> + stream as-is (giving a ZWNBSP character). */
> + if (bo == 0 && size >= 2) {
> + const Py_UCS4 bom = (q[1] << 8) | q[0];
> + if (bom == 0xFEFF) {
> + q += 2;
> + bo = -1;
> + }
> + else if (bom == 0xFFFE) {
> + q += 2;
> + bo = 1;
> + }
> + if (byteorder)
> + *byteorder = bo;
> + }
> +
> + if (q == e) {
> + if (consumed)
> + *consumed = size;
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> +#if PY_LITTLE_ENDIAN
> + native_ordering = bo <= 0;
> + encoding = bo <= 0 ? "utf-16-le" : "utf-16-be";
> +#else
> + native_ordering = bo >= 0;
> + encoding = bo >= 0 ? "utf-16-be" : "utf-16-le";
> +#endif
> +
> +    /* Note: size will normally be larger than the resulting Unicode
> +       character count. The error handler will take care of
> +       resizing when needed. */
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = (e - q + 1) / 2;
> + if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
> + goto onError;
> +
> + while (1) {
> + Py_UCS4 ch = 0;
> + if (e - q >= 2) {
> + int kind = writer.kind;
> + if (kind == PyUnicode_1BYTE_KIND) {
> + if (PyUnicode_IS_ASCII(writer.buffer))
> + ch = asciilib_utf16_decode(&q, e,
> + (Py_UCS1*)writer.data, &writer.pos,
> + native_ordering);
> + else
> + ch = ucs1lib_utf16_decode(&q, e,
> + (Py_UCS1*)writer.data, &writer.pos,
> + native_ordering);
> + } else if (kind == PyUnicode_2BYTE_KIND) {
> + ch = ucs2lib_utf16_decode(&q, e,
> + (Py_UCS2*)writer.data, &writer.pos,
> + native_ordering);
> + } else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + ch = ucs4lib_utf16_decode(&q, e,
> + (Py_UCS4*)writer.data, &writer.pos,
> + native_ordering);
> + }
> + }
> +
> + switch (ch)
> + {
> + case 0:
> + /* remaining byte at the end? (size should be even) */
> + if (q == e || consumed)
> + goto End;
> + errmsg = "truncated data";
> + startinpos = ((const char *)q) - starts;
> + endinpos = ((const char *)e) - starts;
> + break;
> + /* The remaining input chars are ignored if the callback
> + chooses to skip the input */
> + case 1:
> + q -= 2;
> + if (consumed)
> + goto End;
> + errmsg = "unexpected end of data";
> + startinpos = ((const char *)q) - starts;
> + endinpos = ((const char *)e) - starts;
> + break;
> + case 2:
> + errmsg = "illegal encoding";
> + startinpos = ((const char *)q) - 2 - starts;
> + endinpos = startinpos + 2;
> + break;
> + case 3:
> + errmsg = "illegal UTF-16 surrogate";
> + startinpos = ((const char *)q) - 4 - starts;
> + endinpos = startinpos + 2;
> + break;
> + default:
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
> + goto onError;
> + continue;
> + }
> +
> + if (unicode_decode_call_errorhandler_writer(
> + errors,
> + &errorHandler,
> + encoding, errmsg,
> + &starts,
> + (const char **)&e,
> + &startinpos,
> + &endinpos,
> + &exc,
> + (const char **)&q,
> + &writer))
> + goto onError;
> + }
> +
> +End:
> + if (consumed)
> + *consumed = (const char *)q-starts;
> +
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
> +PyObject *
> +_PyUnicode_EncodeUTF16(PyObject *str,
> + const char *errors,
> + int byteorder)
> +{
> + enum PyUnicode_Kind kind;
> + const void *data;
> + Py_ssize_t len;
> + PyObject *v;
> + unsigned short *out;
> + Py_ssize_t pairs;
> +#if PY_BIG_ENDIAN
> + int native_ordering = byteorder >= 0;
> +#else
> + int native_ordering = byteorder <= 0;
> +#endif
> + const char *encoding;
> + Py_ssize_t nsize, pos;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + PyObject *rep = NULL;
> +
> + if (!PyUnicode_Check(str)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(str) == -1)
> + return NULL;
> + kind = PyUnicode_KIND(str);
> + data = PyUnicode_DATA(str);
> + len = PyUnicode_GET_LENGTH(str);
> +
> + pairs = 0;
> + if (kind == PyUnicode_4BYTE_KIND) {
> + const Py_UCS4 *in = (const Py_UCS4 *)data;
> + const Py_UCS4 *end = in + len;
> + while (in < end) {
> + if (*in++ >= 0x10000) {
> + pairs++;
> + }
> + }
> + }
> + if (len > PY_SSIZE_T_MAX / 2 - pairs - (byteorder == 0)) {
> + return PyErr_NoMemory();
> + }
> + nsize = len + pairs + (byteorder == 0);
> + v = PyBytes_FromStringAndSize(NULL, nsize * 2);
> + if (v == NULL) {
> + return NULL;
> + }
> +
> + /* output buffer is 2-bytes aligned */
> + assert(_Py_IS_ALIGNED(PyBytes_AS_STRING(v), 2));
> + out = (unsigned short *)PyBytes_AS_STRING(v);
> + if (byteorder == 0) {
> + *out++ = 0xFEFF;
> + }
> + if (len == 0) {
> + goto done;
> + }
> +
> + if (kind == PyUnicode_1BYTE_KIND) {
> + ucs1lib_utf16_encode((const Py_UCS1 *)data, len, &out, native_ordering);
> + goto done;
> + }
> +
> + if (byteorder < 0) {
> + encoding = "utf-16-le";
> + }
> + else if (byteorder > 0) {
> + encoding = "utf-16-be";
> + }
> + else {
> + encoding = "utf-16";
> + }
> +
> + pos = 0;
> + while (pos < len) {
> + Py_ssize_t repsize, moreunits;
> +
> + if (kind == PyUnicode_2BYTE_KIND) {
> + pos += ucs2lib_utf16_encode((const Py_UCS2 *)data + pos, len - pos,
> + &out, native_ordering);
> + }
> + else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + pos += ucs4lib_utf16_encode((const Py_UCS4 *)data + pos, len - pos,
> + &out, native_ordering);
> + }
> + if (pos == len)
> + break;
> +
> + rep = unicode_encode_call_errorhandler(
> + errors, &errorHandler,
> + encoding, "surrogates not allowed",
> + str, &exc, pos, pos + 1, &pos);
> + if (!rep)
> + goto error;
> +
> + if (PyBytes_Check(rep)) {
> + repsize = PyBytes_GET_SIZE(rep);
> + if (repsize & 1) {
> + raise_encode_exception(&exc, encoding,
> + str, pos - 1, pos,
> + "surrogates not allowed");
> + goto error;
> + }
> + moreunits = repsize / 2;
> + }
> + else {
> + assert(PyUnicode_Check(rep));
> + if (PyUnicode_READY(rep) < 0)
> + goto error;
> + moreunits = repsize = PyUnicode_GET_LENGTH(rep);
> + if (!PyUnicode_IS_ASCII(rep)) {
> + raise_encode_exception(&exc, encoding,
> + str, pos - 1, pos,
> + "surrogates not allowed");
> + goto error;
> + }
> + }
> +
> + /* two bytes are reserved for each surrogate */
> + if (moreunits > 1) {
> + Py_ssize_t outpos = out - (unsigned short*) PyBytes_AS_STRING(v);
> + if (moreunits >= (PY_SSIZE_T_MAX - PyBytes_GET_SIZE(v)) / 2) {
> + /* integer overflow */
> + PyErr_NoMemory();
> + goto error;
> + }
> + if (_PyBytes_Resize(&v, PyBytes_GET_SIZE(v) + 2 * (moreunits - 1)) < 0)
> + goto error;
> + out = (unsigned short*) PyBytes_AS_STRING(v) + outpos;
> + }
> +
> + if (PyBytes_Check(rep)) {
> + memcpy(out, PyBytes_AS_STRING(rep), repsize);
> + out += moreunits;
> + } else /* rep is unicode */ {
> + assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
> + ucs1lib_utf16_encode(PyUnicode_1BYTE_DATA(rep), repsize,
> + &out, native_ordering);
> + }
> +
> + Py_CLEAR(rep);
> + }
> +
> +    /* Cut back to the size actually needed. This is necessary when, for
> +       example, the string contains isolated surrogates and the 'ignore'
> +       handler is used. */
> + nsize = (unsigned char*) out - (unsigned char*) PyBytes_AS_STRING(v);
> + if (nsize != PyBytes_GET_SIZE(v))
> + _PyBytes_Resize(&v, nsize);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + done:
> + return v;
> + error:
> + Py_XDECREF(rep);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + Py_XDECREF(v);
> + return NULL;
> +#undef STORECHAR
> +}
> +
> +PyObject *
> +PyUnicode_EncodeUTF16(const Py_UNICODE *s,
> + Py_ssize_t size,
> + const char *errors,
> + int byteorder)
> +{
> + PyObject *result;
> + PyObject *tmp = PyUnicode_FromUnicode(s, size);
> + if (tmp == NULL)
> + return NULL;
> + result = _PyUnicode_EncodeUTF16(tmp, errors, byteorder);
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +PyObject *
> +PyUnicode_AsUTF16String(PyObject *unicode)
> +{
> + return _PyUnicode_EncodeUTF16(unicode, NULL, 0);
> +}
> +
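(Not part of the patch: the *Stateful entry point above is what incremental decoders build on. When consumed is non-NULL, a trailing odd byte or a lone high surrogate at the end of the chunk is not an error; it is simply left unconsumed for the next call. A sketch, helper name made up:)

    /* Sketch only: 'A' in UTF-16-LE followed by one stray trailing byte. */
    #include <Python.h>

    static PyObject *
    utf16_chunk_example(void)
    {
        const char buf[] = "\x41\x00" "\x42";   /* 'A' + an incomplete code unit */
        int byteorder = -1;                     /* force little endian, no BOM */
        Py_ssize_t consumed = 0;

        /* Returns "A" and sets consumed to 2; byte 3 waits for more input. */
        return PyUnicode_DecodeUTF16Stateful(buf, 3, "strict",
                                             &byteorder, &consumed);
    }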
> +/* --- Unicode Escape Codec ----------------------------------------------- */
> +
> +static _PyUnicode_Name_CAPI *ucnhash_CAPI = NULL;
> +
> +PyObject *
> +_PyUnicode_DecodeUnicodeEscape(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + const char **first_invalid_escape)
> +{
> + const char *starts = s;
> + _PyUnicodeWriter writer;
> + const char *end;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> +
> + // so we can remember if we've seen an invalid escape char or not
> + *first_invalid_escape = NULL;
> +
> + if (size == 0) {
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> + /* Escaped strings will always be longer than the resulting
> + Unicode string, so we start with size here and then reduce the
> + length after conversion to the true value.
> + (but if the error callback returns a long replacement string
> + we'll have to allocate more space) */
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> + if (_PyUnicodeWriter_Prepare(&writer, size, 127) < 0) {
> + goto onError;
> + }
> +
> + end = s + size;
> + while (s < end) {
> + unsigned char c = (unsigned char) *s++;
> + Py_UCS4 ch;
> + int count;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + const char *message;
> +
> +#define WRITE_ASCII_CHAR(ch) \
> + do { \
> + assert(ch <= 127); \
> + assert(writer.pos < writer.size); \
> + PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
> + } while(0)
> +
> +#define WRITE_CHAR(ch) \
> + do { \
> + if (ch <= writer.maxchar) { \
> + assert(writer.pos < writer.size); \
> + PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
> + } \
> + else if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0) { \
> + goto onError; \
> + } \
> + } while(0)
> +
> + /* Non-escape characters are interpreted as Unicode ordinals */
> + if (c != '\\') {
> + WRITE_CHAR(c);
> + continue;
> + }
> +
> + startinpos = s - starts - 1;
> + /* \ - Escapes */
> + if (s >= end) {
> + message = "\\ at end of string";
> + goto error;
> + }
> + c = (unsigned char) *s++;
> +
> + assert(writer.pos < writer.size);
> + switch (c) {
> +
> + /* \x escapes */
> + case '\n': continue;
> + case '\\': WRITE_ASCII_CHAR('\\'); continue;
> + case '\'': WRITE_ASCII_CHAR('\''); continue;
> + case '\"': WRITE_ASCII_CHAR('\"'); continue;
> + case 'b': WRITE_ASCII_CHAR('\b'); continue;
> + /* FF */
> + case 'f': WRITE_ASCII_CHAR('\014'); continue;
> + case 't': WRITE_ASCII_CHAR('\t'); continue;
> + case 'n': WRITE_ASCII_CHAR('\n'); continue;
> + case 'r': WRITE_ASCII_CHAR('\r'); continue;
> + /* VT */
> + case 'v': WRITE_ASCII_CHAR('\013'); continue;
> + /* BEL, not classic C */
> + case 'a': WRITE_ASCII_CHAR('\007'); continue;
> +
> + /* \OOO (octal) escapes */
> + case '0': case '1': case '2': case '3':
> + case '4': case '5': case '6': case '7':
> + ch = c - '0';
> + if (s < end && '0' <= *s && *s <= '7') {
> + ch = (ch<<3) + *s++ - '0';
> + if (s < end && '0' <= *s && *s <= '7') {
> + ch = (ch<<3) + *s++ - '0';
> + }
> + }
> + WRITE_CHAR(ch);
> + continue;
> +
> + /* hex escapes */
> + /* \xXX */
> + case 'x':
> + count = 2;
> + message = "truncated \\xXX escape";
> + goto hexescape;
> +
> + /* \uXXXX */
> + case 'u':
> + count = 4;
> + message = "truncated \\uXXXX escape";
> + goto hexescape;
> +
> + /* \UXXXXXXXX */
> + case 'U':
> + count = 8;
> + message = "truncated \\UXXXXXXXX escape";
> + hexescape:
> + for (ch = 0; count && s < end; ++s, --count) {
> + c = (unsigned char)*s;
> + ch <<= 4;
> + if (c >= '0' && c <= '9') {
> + ch += c - '0';
> + }
> + else if (c >= 'a' && c <= 'f') {
> + ch += c - ('a' - 10);
> + }
> + else if (c >= 'A' && c <= 'F') {
> + ch += c - ('A' - 10);
> + }
> + else {
> + break;
> + }
> + }
> + if (count) {
> + goto error;
> + }
> +
> + /* when we get here, ch is a 32-bit unicode character */
> + if (ch > MAX_UNICODE) {
> + message = "illegal Unicode character";
> + goto error;
> + }
> +
> + WRITE_CHAR(ch);
> + continue;
> +
> + /* \N{name} */
> + case 'N':
> + if (ucnhash_CAPI == NULL) {
> + /* load the unicode data module */
> + ucnhash_CAPI = (_PyUnicode_Name_CAPI *)PyCapsule_Import(
> + PyUnicodeData_CAPSULE_NAME, 1);
> + if (ucnhash_CAPI == NULL) {
> + PyErr_SetString(
> + PyExc_UnicodeError,
> + "\\N escapes not supported (can't load unicodedata module)"
> + );
> + goto onError;
> + }
> + }
> +
> + message = "malformed \\N character escape";
> + if (s < end && *s == '{') {
> + const char *start = ++s;
> + size_t namelen;
> + /* look for the closing brace */
> + while (s < end && *s != '}')
> + s++;
> + namelen = s - start;
> + if (namelen && s < end) {
> + /* found a name. look it up in the unicode database */
> + s++;
> + ch = 0xffffffff; /* in case 'getcode' messes up */
> + if (namelen <= INT_MAX &&
> + ucnhash_CAPI->getcode(NULL, start, (int)namelen,
> + &ch, 0)) {
> + assert(ch <= MAX_UNICODE);
> + WRITE_CHAR(ch);
> + continue;
> + }
> + message = "unknown Unicode character name";
> + }
> + }
> + goto error;
> +
> + default:
> + if (*first_invalid_escape == NULL) {
> + *first_invalid_escape = s-1; /* Back up one char, since we've
> + already incremented s. */
> + }
> + WRITE_ASCII_CHAR('\\');
> + WRITE_CHAR(c);
> + continue;
> + }
> +
> + error:
> + endinpos = s-starts;
> + writer.min_length = end - s + writer.pos;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "unicodeescape", message,
> + &starts, &end, &startinpos, &endinpos, &exc, &s,
> + &writer)) {
> + goto onError;
> + }
> + assert(end - s <= writer.size - writer.pos);
> +
> +#undef WRITE_ASCII_CHAR
> +#undef WRITE_CHAR
> + }
> +
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_DecodeUnicodeEscape(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + const char *first_invalid_escape;
> + PyObject *result = _PyUnicode_DecodeUnicodeEscape(s, size, errors,
> + &first_invalid_escape);
> + if (result == NULL)
> + return NULL;
> + if (first_invalid_escape != NULL) {
> + if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
> + "invalid escape sequence '\\%c'",
> + (unsigned char)*first_invalid_escape) < 0) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + }
> + return result;
> +}
> +
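(Aside, not part of the patch: the first_invalid_escape plumbing above is what turns an unrecognized escape such as \d into a DeprecationWarning instead of a hard error in 3.6. Sketch of the public entry point, helper name made up:)

    /* Sketch only: decode a buffer containing a \u escape and an unknown \d. */
    #include <Python.h>
    #include <string.h>

    static PyObject *
    unicode_escape_example(void)
    {
        const char *src = "caf\\u00e9 \\d";

        /* Returns "café \d" and emits a DeprecationWarning for '\d'
           (or fails if warnings are configured to raise). */
        return PyUnicode_DecodeUnicodeEscape(src, strlen(src), "strict");
    }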
> +/* Return a Unicode-Escape string version of the Unicode object. */
> +
> +PyObject *
> +PyUnicode_AsUnicodeEscapeString(PyObject *unicode)
> +{
> + Py_ssize_t i, len;
> + PyObject *repr;
> + char *p;
> + enum PyUnicode_Kind kind;
> + void *data;
> + Py_ssize_t expandsize;
> +
> + /* Initial allocation is based on the longest-possible character
> + escape.
> +
> + For UCS1 strings it's '\xxx', 4 bytes per source character.
> + For UCS2 strings it's '\uxxxx', 6 bytes per source character.
> + For UCS4 strings it's '\U00xxxxxx', 10 bytes per source character.
> + */
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1) {
> + return NULL;
> + }
> +
> + len = PyUnicode_GET_LENGTH(unicode);
> + if (len == 0) {
> + return PyBytes_FromStringAndSize(NULL, 0);
> + }
> +
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> + /* 4 byte characters can take up 10 bytes, 2 byte characters can take up 6
> + bytes, and 1 byte characters 4. */
> + expandsize = kind * 2 + 2;
> + if (len > PY_SSIZE_T_MAX / expandsize) {
> + return PyErr_NoMemory();
> + }
> + repr = PyBytes_FromStringAndSize(NULL, expandsize * len);
> + if (repr == NULL) {
> + return NULL;
> + }
> +
> + p = PyBytes_AS_STRING(repr);
> + for (i = 0; i < len; i++) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> +
> + /* U+0000-U+00ff range */
> + if (ch < 0x100) {
> + if (ch >= ' ' && ch < 127) {
> + if (ch != '\\') {
> + /* Copy printable US ASCII as-is */
> + *p++ = (char) ch;
> + }
> + /* Escape backslashes */
> + else {
> + *p++ = '\\';
> + *p++ = '\\';
> + }
> + }
> +
> +            /* Map special whitespace to '\t', '\n', '\r' */
> + else if (ch == '\t') {
> + *p++ = '\\';
> + *p++ = 't';
> + }
> + else if (ch == '\n') {
> + *p++ = '\\';
> + *p++ = 'n';
> + }
> + else if (ch == '\r') {
> + *p++ = '\\';
> + *p++ = 'r';
> + }
> +
> + /* Map non-printable US ASCII and 8-bit characters to '\xHH' */
> + else {
> + *p++ = '\\';
> + *p++ = 'x';
> + *p++ = Py_hexdigits[(ch >> 4) & 0x000F];
> + *p++ = Py_hexdigits[ch & 0x000F];
> + }
> + }
> + /* U+0100-U+ffff range: Map 16-bit characters to '\uHHHH' */
> + else if (ch < 0x10000) {
> + *p++ = '\\';
> + *p++ = 'u';
> + *p++ = Py_hexdigits[(ch >> 12) & 0x000F];
> + *p++ = Py_hexdigits[(ch >> 8) & 0x000F];
> + *p++ = Py_hexdigits[(ch >> 4) & 0x000F];
> + *p++ = Py_hexdigits[ch & 0x000F];
> + }
> + /* U+010000-U+10ffff range: Map 21-bit characters to '\U00HHHHHH' */
> + else {
> +
> + /* Make sure that the first two digits are zero */
> + assert(ch <= MAX_UNICODE && MAX_UNICODE <= 0x10ffff);
> + *p++ = '\\';
> + *p++ = 'U';
> + *p++ = '0';
> + *p++ = '0';
> + *p++ = Py_hexdigits[(ch >> 20) & 0x0000000F];
> + *p++ = Py_hexdigits[(ch >> 16) & 0x0000000F];
> + *p++ = Py_hexdigits[(ch >> 12) & 0x0000000F];
> + *p++ = Py_hexdigits[(ch >> 8) & 0x0000000F];
> + *p++ = Py_hexdigits[(ch >> 4) & 0x0000000F];
> + *p++ = Py_hexdigits[ch & 0x0000000F];
> + }
> + }
> +
> + assert(p - PyBytes_AS_STRING(repr) > 0);
> + if (_PyBytes_Resize(&repr, p - PyBytes_AS_STRING(repr)) < 0) {
> + return NULL;
> + }
> + return repr;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeUnicodeEscape(const Py_UNICODE *s,
> + Py_ssize_t size)
> +{
> + PyObject *result;
> + PyObject *tmp = PyUnicode_FromUnicode(s, size);
> + if (tmp == NULL) {
> + return NULL;
> + }
> +
> + result = PyUnicode_AsUnicodeEscapeString(tmp);
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +/* --- Raw Unicode Escape Codec ------------------------------------------- */
> +
> +PyObject *
> +PyUnicode_DecodeRawUnicodeEscape(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + const char *starts = s;
> + _PyUnicodeWriter writer;
> + const char *end;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> +
> + if (size == 0) {
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + /* Escaped strings will always be longer than the resulting
> + Unicode string, so we start with size here and then reduce the
> + length after conversion to the true value. (But decoding error
> + handler might have to resize the string) */
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> + if (_PyUnicodeWriter_Prepare(&writer, size, 127) < 0) {
> + goto onError;
> + }
> +
> + end = s + size;
> + while (s < end) {
> + unsigned char c = (unsigned char) *s++;
> + Py_UCS4 ch;
> + int count;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + const char *message;
> +
> +#define WRITE_CHAR(ch) \
> + do { \
> + if (ch <= writer.maxchar) { \
> + assert(writer.pos < writer.size); \
> + PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, ch); \
> + } \
> + else if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0) { \
> + goto onError; \
> + } \
> + } while(0)
> +
> + /* Non-escape characters are interpreted as Unicode ordinals */
> + if (c != '\\' || s >= end) {
> + WRITE_CHAR(c);
> + continue;
> + }
> +
> + c = (unsigned char) *s++;
> + if (c == 'u') {
> + count = 4;
> + message = "truncated \\uXXXX escape";
> + }
> + else if (c == 'U') {
> + count = 8;
> + message = "truncated \\UXXXXXXXX escape";
> + }
> + else {
> + assert(writer.pos < writer.size);
> + PyUnicode_WRITE(writer.kind, writer.data, writer.pos++, '\\');
> + WRITE_CHAR(c);
> + continue;
> + }
> + startinpos = s - starts - 2;
> +
> + /* \uHHHH with 4 hex digits, \U00HHHHHH with 8 */
> + for (ch = 0; count && s < end; ++s, --count) {
> + c = (unsigned char)*s;
> + ch <<= 4;
> + if (c >= '0' && c <= '9') {
> + ch += c - '0';
> + }
> + else if (c >= 'a' && c <= 'f') {
> + ch += c - ('a' - 10);
> + }
> + else if (c >= 'A' && c <= 'F') {
> + ch += c - ('A' - 10);
> + }
> + else {
> + break;
> + }
> + }
> + if (!count) {
> + if (ch <= MAX_UNICODE) {
> + WRITE_CHAR(ch);
> + continue;
> + }
> + message = "\\Uxxxxxxxx out of range";
> + }
> +
> + endinpos = s-starts;
> + writer.min_length = end - s + writer.pos;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "rawunicodeescape", message,
> + &starts, &end, &startinpos, &endinpos, &exc, &s,
> + &writer)) {
> + goto onError;
> + }
> + assert(end - s <= writer.size - writer.pos);
> +
> +#undef WRITE_CHAR
> + }
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return NULL;
> +
> +}
> +
> +
> +PyObject *
> +PyUnicode_AsRawUnicodeEscapeString(PyObject *unicode)
> +{
> + PyObject *repr;
> + char *p;
> + Py_ssize_t expandsize, pos;
> + int kind;
> + void *data;
> + Py_ssize_t len;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1) {
> + return NULL;
> + }
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> + len = PyUnicode_GET_LENGTH(unicode);
> + if (kind == PyUnicode_1BYTE_KIND) {
> + return PyBytes_FromStringAndSize(data, len);
> + }
> +
> + /* 4 byte characters can take up 10 bytes, 2 byte characters can take up 6
> + bytes, and 1 byte characters 4. */
> + expandsize = kind * 2 + 2;
> +
> + if (len > PY_SSIZE_T_MAX / expandsize) {
> + return PyErr_NoMemory();
> + }
> + repr = PyBytes_FromStringAndSize(NULL, expandsize * len);
> + if (repr == NULL) {
> + return NULL;
> + }
> + if (len == 0) {
> + return repr;
> + }
> +
> + p = PyBytes_AS_STRING(repr);
> + for (pos = 0; pos < len; pos++) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, pos);
> +
> + /* U+0000-U+00ff range: Copy 8-bit characters as-is */
> + if (ch < 0x100) {
> + *p++ = (char) ch;
> + }
> +        /* U+0100-U+ffff range: Map 16-bit characters to '\uHHHH' */
> + else if (ch < 0x10000) {
> + *p++ = '\\';
> + *p++ = 'u';
> + *p++ = Py_hexdigits[(ch >> 12) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 8) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 4) & 0xf];
> + *p++ = Py_hexdigits[ch & 15];
> + }
> + /* U+010000-U+10ffff range: Map 32-bit characters to '\U00HHHHHH' */
> + else {
> + assert(ch <= MAX_UNICODE && MAX_UNICODE <= 0x10ffff);
> + *p++ = '\\';
> + *p++ = 'U';
> + *p++ = '0';
> + *p++ = '0';
> + *p++ = Py_hexdigits[(ch >> 20) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 16) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 12) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 8) & 0xf];
> + *p++ = Py_hexdigits[(ch >> 4) & 0xf];
> + *p++ = Py_hexdigits[ch & 15];
> + }
> + }
> +
> + assert(p > PyBytes_AS_STRING(repr));
> + if (_PyBytes_Resize(&repr, p - PyBytes_AS_STRING(repr)) < 0) {
> + return NULL;
> + }
> + return repr;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeRawUnicodeEscape(const Py_UNICODE *s,
> + Py_ssize_t size)
> +{
> + PyObject *result;
> + PyObject *tmp = PyUnicode_FromUnicode(s, size);
> + if (tmp == NULL)
> + return NULL;
> + result = PyUnicode_AsRawUnicodeEscapeString(tmp);
> + Py_DECREF(tmp);
> + return result;
> +}
> +
> +/* --- Unicode Internal Codec ------------------------------------------- */
> +
> +PyObject *
> +_PyUnicode_DecodeUnicodeInternal(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + const char *starts = s;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + _PyUnicodeWriter writer;
> + const char *end;
> + const char *reason;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> +
> + if (PyErr_WarnEx(PyExc_DeprecationWarning,
> + "unicode_internal codec has been deprecated",
> + 1))
> + return NULL;
> +
> + if (size < 0) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> +
> + _PyUnicodeWriter_Init(&writer);
> + if (size / Py_UNICODE_SIZE > PY_SSIZE_T_MAX - 1) {
> + PyErr_NoMemory();
> + goto onError;
> + }
> + writer.min_length = (size + (Py_UNICODE_SIZE - 1)) / Py_UNICODE_SIZE;
> +
> + end = s + size;
> + while (s < end) {
> + Py_UNICODE uch;
> + Py_UCS4 ch;
> + if (end - s < Py_UNICODE_SIZE) {
> + endinpos = end-starts;
> + reason = "truncated input";
> + goto error;
> + }
> + /* We copy the raw representation one byte at a time because the
> + pointer may be unaligned (see test_codeccallbacks). */
> + ((char *) &uch)[0] = s[0];
> + ((char *) &uch)[1] = s[1];
> +#ifdef Py_UNICODE_WIDE
> + ((char *) &uch)[2] = s[2];
> + ((char *) &uch)[3] = s[3];
> +#endif
> + ch = uch;
> +#ifdef Py_UNICODE_WIDE
> + /* We have to sanity check the raw data, otherwise doom looms for
> + some malformed UCS-4 data. */
> + if (ch > 0x10ffff) {
> + endinpos = s - starts + Py_UNICODE_SIZE;
> + reason = "illegal code point (> 0x10FFFF)";
> + goto error;
> + }
> +#endif
> + s += Py_UNICODE_SIZE;
> +#ifndef Py_UNICODE_WIDE
> + if (Py_UNICODE_IS_HIGH_SURROGATE(ch) && end - s >= Py_UNICODE_SIZE)
> + {
> + Py_UNICODE uch2;
> + ((char *) &uch2)[0] = s[0];
> + ((char *) &uch2)[1] = s[1];
> + if (Py_UNICODE_IS_LOW_SURROGATE(uch2))
> + {
> + ch = Py_UNICODE_JOIN_SURROGATES(uch, uch2);
> + s += Py_UNICODE_SIZE;
> + }
> + }
> +#endif
> +
> + if (_PyUnicodeWriter_WriteCharInline(&writer, ch) < 0)
> + goto onError;
> + continue;
> +
> + error:
> + startinpos = s - starts;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "unicode_internal", reason,
> + &starts, &end, &startinpos, &endinpos, &exc, &s,
> + &writer))
> + goto onError;
> + }
> +
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
> +/* --- Latin-1 Codec ------------------------------------------------------ */
> +
> +PyObject *
> +PyUnicode_DecodeLatin1(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + /* Latin-1 is equivalent to the first 256 ordinals in Unicode. */
> + return _PyUnicode_FromUCS1((unsigned char*)s, size);
> +}
> +
> +/* create or adjust a UnicodeEncodeError */
> +static void
> +make_encode_exception(PyObject **exceptionObject,
> + const char *encoding,
> + PyObject *unicode,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + const char *reason)
> +{
> + if (*exceptionObject == NULL) {
> + *exceptionObject = PyObject_CallFunction(
> + PyExc_UnicodeEncodeError, "sOnns",
> + encoding, unicode, startpos, endpos, reason);
> + }
> + else {
> + if (PyUnicodeEncodeError_SetStart(*exceptionObject, startpos))
> + goto onError;
> + if (PyUnicodeEncodeError_SetEnd(*exceptionObject, endpos))
> + goto onError;
> + if (PyUnicodeEncodeError_SetReason(*exceptionObject, reason))
> + goto onError;
> + return;
> + onError:
> + Py_CLEAR(*exceptionObject);
> + }
> +}
> +
> +/* raises a UnicodeEncodeError */
> +static void
> +raise_encode_exception(PyObject **exceptionObject,
> + const char *encoding,
> + PyObject *unicode,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + const char *reason)
> +{
> + make_encode_exception(exceptionObject,
> + encoding, unicode, startpos, endpos, reason);
> + if (*exceptionObject != NULL)
> + PyCodec_StrictErrors(*exceptionObject);
> +}
> +
> +/* error handling callback helper:
> +   build the arguments, call the callback, check the returned result,
> +   put the new position into newpos and return the replacement string,
> +   which has to be freed by the caller */
> +static PyObject *
> +unicode_encode_call_errorhandler(const char *errors,
> + PyObject **errorHandler,
> + const char *encoding, const char *reason,
> + PyObject *unicode, PyObject **exceptionObject,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + Py_ssize_t *newpos)
> +{
> + static const char *argparse = "On;encoding error handler must return (str/bytes, int) tuple";
> + Py_ssize_t len;
> + PyObject *restuple;
> + PyObject *resunicode;
> +
> + if (*errorHandler == NULL) {
> + *errorHandler = PyCodec_LookupError(errors);
> + if (*errorHandler == NULL)
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + len = PyUnicode_GET_LENGTH(unicode);
> +
> + make_encode_exception(exceptionObject,
> + encoding, unicode, startpos, endpos, reason);
> + if (*exceptionObject == NULL)
> + return NULL;
> +
> + restuple = PyObject_CallFunctionObjArgs(
> + *errorHandler, *exceptionObject, NULL);
> + if (restuple == NULL)
> + return NULL;
> + if (!PyTuple_Check(restuple)) {
> + PyErr_SetString(PyExc_TypeError, &argparse[3]);
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + if (!PyArg_ParseTuple(restuple, argparse,
> + &resunicode, newpos)) {
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + if (!PyUnicode_Check(resunicode) && !PyBytes_Check(resunicode)) {
> + PyErr_SetString(PyExc_TypeError, &argparse[3]);
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + if (*newpos<0)
> + *newpos = len + *newpos;
> + if (*newpos<0 || *newpos>len) {
> + PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", *newpos);
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + Py_INCREF(resunicode);
> + Py_DECREF(restuple);
> + return resunicode;
> +}
> +
> +static PyObject *
> +unicode_encode_ucs1(PyObject *unicode,
> + const char *errors,
> + const Py_UCS4 limit)
> +{
> + /* input state */
> + Py_ssize_t pos=0, size;
> + int kind;
> + void *data;
> + /* pointer into the output */
> + char *str;
> + const char *encoding = (limit == 256) ? "latin-1" : "ascii";
> + const char *reason = (limit == 256) ? "ordinal not in range(256)" : "ordinal not in range(128)";
> + PyObject *error_handler_obj = NULL;
> + PyObject *exc = NULL;
> + _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
> + PyObject *rep = NULL;
> + /* output object */
> + _PyBytesWriter writer;
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + size = PyUnicode_GET_LENGTH(unicode);
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> +    /* allocate enough for a simple encoding without
> +       replacements; if we need more, we'll resize */
> + if (size == 0)
> + return PyBytes_FromStringAndSize(NULL, 0);
> +
> + _PyBytesWriter_Init(&writer);
> + str = _PyBytesWriter_Alloc(&writer, size);
> + if (str == NULL)
> + return NULL;
> +
> + while (pos < size) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, pos);
> +
> + /* can we encode this? */
> + if (ch < limit) {
> + /* no overflow check, because we know that the space is enough */
> + *str++ = (char)ch;
> + ++pos;
> + }
> + else {
> + Py_ssize_t newpos, i;
> + /* startpos for collecting unencodable chars */
> + Py_ssize_t collstart = pos;
> + Py_ssize_t collend = collstart + 1;
> +            /* find all unencodable characters */
> +
> + while ((collend < size) && (PyUnicode_READ(kind, data, collend) >= limit))
> + ++collend;
> +
> + /* Only overallocate the buffer if it's not the last write */
> + writer.overallocate = (collend < size);
> +
> + /* cache callback name lookup (if not done yet, i.e. it's the first error) */
> + if (error_handler == _Py_ERROR_UNKNOWN)
> + error_handler = get_error_handler(errors);
> +
> + switch (error_handler) {
> + case _Py_ERROR_STRICT:
> + raise_encode_exception(&exc, encoding, unicode, collstart, collend, reason);
> + goto onError;
> +
> + case _Py_ERROR_REPLACE:
> + memset(str, '?', collend - collstart);
> + str += (collend - collstart);
> + /* fall through */
> + case _Py_ERROR_IGNORE:
> + pos = collend;
> + break;
> +
> + case _Py_ERROR_BACKSLASHREPLACE:
> + /* subtract preallocated bytes */
> + writer.min_size -= (collend - collstart);
> + str = backslashreplace(&writer, str,
> + unicode, collstart, collend);
> + if (str == NULL)
> + goto onError;
> + pos = collend;
> + break;
> +
> + case _Py_ERROR_XMLCHARREFREPLACE:
> + /* subtract preallocated bytes */
> + writer.min_size -= (collend - collstart);
> + str = xmlcharrefreplace(&writer, str,
> + unicode, collstart, collend);
> + if (str == NULL)
> + goto onError;
> + pos = collend;
> + break;
> +
> + case _Py_ERROR_SURROGATEESCAPE:
> + for (i = collstart; i < collend; ++i) {
> + ch = PyUnicode_READ(kind, data, i);
> + if (ch < 0xdc80 || 0xdcff < ch) {
> + /* Not a UTF-8b surrogate */
> + break;
> + }
> + *str++ = (char)(ch - 0xdc00);
> + ++pos;
> + }
> + if (i >= collend)
> + break;
> + collstart = pos;
> + assert(collstart != collend);
> + /* fall through */
> +
> + default:
> + rep = unicode_encode_call_errorhandler(errors, &error_handler_obj,
> + encoding, reason, unicode, &exc,
> + collstart, collend, &newpos);
> + if (rep == NULL)
> + goto onError;
> +
> + /* subtract preallocated bytes */
> + writer.min_size -= 1;
> +
> + if (PyBytes_Check(rep)) {
> + /* Directly copy bytes result to output. */
> + str = _PyBytesWriter_WriteBytes(&writer, str,
> + PyBytes_AS_STRING(rep),
> + PyBytes_GET_SIZE(rep));
> + }
> + else {
> + assert(PyUnicode_Check(rep));
> +
> + if (PyUnicode_READY(rep) < 0)
> + goto onError;
> +
> + if (PyUnicode_IS_ASCII(rep)) {
> + /* Fast path: all characters are smaller than limit */
> + assert(limit >= 128);
> + assert(PyUnicode_KIND(rep) == PyUnicode_1BYTE_KIND);
> + str = _PyBytesWriter_WriteBytes(&writer, str,
> + PyUnicode_DATA(rep),
> + PyUnicode_GET_LENGTH(rep));
> + }
> + else {
> + Py_ssize_t repsize = PyUnicode_GET_LENGTH(rep);
> +
> + str = _PyBytesWriter_Prepare(&writer, str, repsize);
> + if (str == NULL)
> + goto onError;
> +
> + /* check if there is anything unencodable in the
> + replacement and copy it to the output */
> + for (i = 0; repsize-->0; ++i, ++str) {
> + ch = PyUnicode_READ_CHAR(rep, i);
> + if (ch >= limit) {
> + raise_encode_exception(&exc, encoding, unicode,
> + pos, pos+1, reason);
> + goto onError;
> + }
> + *str = (char)ch;
> + }
> + }
> + }
> + if (str == NULL)
> + goto onError;
> +
> + pos = newpos;
> + Py_CLEAR(rep);
> + }
> +
> + /* If overallocation was disabled, ensure that it was the last
> + write. Otherwise, we missed an optimization */
> + assert(writer.overallocate || pos == size);
> + }
> + }
> +
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + return _PyBytesWriter_Finish(&writer, str);
> +
> + onError:
> + Py_XDECREF(rep);
> + _PyBytesWriter_Dealloc(&writer);
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
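(Not part of the patch, but handy while testing the error-handler switch above: the errors string selects the branch at run time, and the generic PyUnicode_AsEncodedString() route should end up in unicode_encode_ucs1() for latin-1/ascii. Sketch, encoding the same non-Latin-1 text three ways; the helper name is made up:)

    /* Sketch only: "€1" through three error handlers of the latin-1 codec. */
    #include <Python.h>

    static void
    latin1_error_handler_example(void)
    {
        PyObject *s = PyUnicode_FromString("\xe2\x82\xac" "1");   /* "€1" */
        PyObject *b;

        if (s == NULL)
            return;

        b = PyUnicode_AsEncodedString(s, "latin-1", "strict");
        /* b == NULL, UnicodeEncodeError set */
        PyErr_Clear();
        Py_XDECREF(b);

        b = PyUnicode_AsEncodedString(s, "latin-1", "replace");
        /* b == b"?1" */
        Py_XDECREF(b);

        b = PyUnicode_AsEncodedString(s, "latin-1", "backslashreplace");
        /* b == b"\\u20ac1" */
        Py_XDECREF(b);

        Py_DECREF(s);
    }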
> +/* Deprecated */
> +PyObject *
> +PyUnicode_EncodeLatin1(const Py_UNICODE *p,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + PyObject *result;
> + PyObject *unicode = PyUnicode_FromUnicode(p, size);
> + if (unicode == NULL)
> + return NULL;
> + result = unicode_encode_ucs1(unicode, errors, 256);
> + Py_DECREF(unicode);
> + return result;
> +}
> +
> +PyObject *
> +_PyUnicode_AsLatin1String(PyObject *unicode, const char *errors)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + /* Fast path: if it is a one-byte string, construct
> + bytes object directly. */
> + if (PyUnicode_KIND(unicode) == PyUnicode_1BYTE_KIND)
> + return PyBytes_FromStringAndSize(PyUnicode_DATA(unicode),
> + PyUnicode_GET_LENGTH(unicode));
> + /* Non-Latin-1 characters present. Defer to above function to
> + raise the exception. */
> + return unicode_encode_ucs1(unicode, errors, 256);
> +}
> +
> +PyObject*
> +PyUnicode_AsLatin1String(PyObject *unicode)
> +{
> + return _PyUnicode_AsLatin1String(unicode, NULL);
> +}
> +
> +/* --- 7-bit ASCII Codec -------------------------------------------------- */
> +
> +PyObject *
> +PyUnicode_DecodeASCII(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + const char *starts = s;
> + _PyUnicodeWriter writer;
> + int kind;
> + void *data;
> + Py_ssize_t startinpos;
> + Py_ssize_t endinpos;
> + Py_ssize_t outpos;
> + const char *e;
> + PyObject *error_handler_obj = NULL;
> + PyObject *exc = NULL;
> + _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
> +
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> +
> + /* ASCII is equivalent to the first 128 ordinals in Unicode. */
> + if (size == 1 && (unsigned char)s[0] < 128)
> + return get_latin1_char((unsigned char)s[0]);
> +
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> + if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) < 0)
> + return NULL;
> +
> + e = s + size;
> + data = writer.data;
> + outpos = ascii_decode(s, e, (Py_UCS1 *)data);
> + writer.pos = outpos;
> + if (writer.pos == size)
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + s += writer.pos;
> + kind = writer.kind;
> + while (s < e) {
> + unsigned char c = (unsigned char)*s;
> + if (c < 128) {
> + PyUnicode_WRITE(kind, data, writer.pos, c);
> + writer.pos++;
> + ++s;
> + continue;
> + }
> +
> +        /* byte outside range 0x00..0x7f: call the error handler */
> +
> + if (error_handler == _Py_ERROR_UNKNOWN)
> + error_handler = get_error_handler(errors);
> +
> + switch (error_handler)
> + {
> + case _Py_ERROR_REPLACE:
> + case _Py_ERROR_SURROGATEESCAPE:
> + /* Fast-path: the error handler only writes one character,
> + but we may switch to UCS2 at the first write */
> + if (_PyUnicodeWriter_PrepareKind(&writer, PyUnicode_2BYTE_KIND) < 0)
> + goto onError;
> + kind = writer.kind;
> + data = writer.data;
> +
> + if (error_handler == _Py_ERROR_REPLACE)
> + PyUnicode_WRITE(kind, data, writer.pos, 0xfffd);
> + else
> + PyUnicode_WRITE(kind, data, writer.pos, c + 0xdc00);
> + writer.pos++;
> + ++s;
> + break;
> +
> + case _Py_ERROR_IGNORE:
> + ++s;
> + break;
> +
> + default:
> + startinpos = s-starts;
> + endinpos = startinpos + 1;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &error_handler_obj,
> + "ascii", "ordinal not in range(128)",
> + &starts, &e, &startinpos, &endinpos, &exc, &s,
> + &writer))
> + goto onError;
> + kind = writer.kind;
> + data = writer.data;
> + }
> + }
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(error_handler_obj);
> + Py_XDECREF(exc);
> + return NULL;
> +}
> +
> +/* Deprecated */
> +PyObject *
> +PyUnicode_EncodeASCII(const Py_UNICODE *p,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + PyObject *result;
> + PyObject *unicode = PyUnicode_FromUnicode(p, size);
> + if (unicode == NULL)
> + return NULL;
> + result = unicode_encode_ucs1(unicode, errors, 128);
> + Py_DECREF(unicode);
> + return result;
> +}
> +
> +PyObject *
> +_PyUnicode_AsASCIIString(PyObject *unicode, const char *errors)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + /* Fast path: if it is an ASCII-only string, construct bytes object
> + directly. Else defer to above function to raise the exception. */
> + if (PyUnicode_IS_ASCII(unicode))
> + return PyBytes_FromStringAndSize(PyUnicode_DATA(unicode),
> + PyUnicode_GET_LENGTH(unicode));
> + return unicode_encode_ucs1(unicode, errors, 128);
> +}
> +
> +PyObject *
> +PyUnicode_AsASCIIString(PyObject *unicode)
> +{
> + return _PyUnicode_AsASCIIString(unicode, NULL);
> +}
> +
> +#ifdef MS_WINDOWS
> +
> +/* --- MBCS codecs for Windows -------------------------------------------- */
> +
> +#if SIZEOF_INT < SIZEOF_SIZE_T
> +#define NEED_RETRY
> +#endif
> +
> +#ifndef WC_ERR_INVALID_CHARS
> +# define WC_ERR_INVALID_CHARS 0x0080
> +#endif
> +
> +static const char*
> +code_page_name(UINT code_page, PyObject **obj)
> +{
> + *obj = NULL;
> + if (code_page == CP_ACP)
> + return "mbcs";
> + if (code_page == CP_UTF7)
> + return "CP_UTF7";
> + if (code_page == CP_UTF8)
> + return "CP_UTF8";
> +
> + *obj = PyBytes_FromFormat("cp%u", code_page);
> + if (*obj == NULL)
> + return NULL;
> + return PyBytes_AS_STRING(*obj);
> +}
> +
> +static DWORD
> +decode_code_page_flags(UINT code_page)
> +{
> + if (code_page == CP_UTF7) {
> + /* The CP_UTF7 decoder only supports flags=0 */
> + return 0;
> + }
> + else
> + return MB_ERR_INVALID_CHARS;
> +}
> +
> +/*
> + * Decode a byte string from a Windows code page into unicode object in strict
> + * mode.
> + *
> + * Returns the consumed size on success, returns -2 on decode error, or
> + * raises an OSError and returns -1 on other errors.
> + */
> +static int
> +decode_code_page_strict(UINT code_page,
> + PyObject **v,
> + const char *in,
> + int insize)
> +{
> + const DWORD flags = decode_code_page_flags(code_page);
> + wchar_t *out;
> + DWORD outsize;
> +
> + /* First get the size of the result */
> + assert(insize > 0);
> + outsize = MultiByteToWideChar(code_page, flags, in, insize, NULL, 0);
> + if (outsize <= 0)
> + goto error;
> +
> + if (*v == NULL) {
> + /* Create unicode object */
> + /* FIXME: don't use _PyUnicode_New(), but allocate a wchar_t* buffer */
> + *v = (PyObject*)_PyUnicode_New(outsize);
> + if (*v == NULL)
> + return -1;
> + out = PyUnicode_AS_UNICODE(*v);
> + }
> + else {
> + /* Extend unicode object */
> + Py_ssize_t n = PyUnicode_GET_SIZE(*v);
> + if (unicode_resize(v, n + outsize) < 0)
> + return -1;
> + out = PyUnicode_AS_UNICODE(*v) + n;
> + }
> +
> + /* Do the conversion */
> + outsize = MultiByteToWideChar(code_page, flags, in, insize, out, outsize);
> + if (outsize <= 0)
> + goto error;
> + return insize;
> +
> +error:
> + if (GetLastError() == ERROR_NO_UNICODE_TRANSLATION)
> + return -2;
> + PyErr_SetFromWindowsErr(0);
> + return -1;
> +}
> +
> +/*
> + * Decode a byte string from a code page into unicode object with an error
> + * handler.
> + *
> + * Returns the consumed size on success, or raises an OSError or
> + * UnicodeDecodeError exception and returns -1 on error.
> + */
> +static int
> +decode_code_page_errors(UINT code_page,
> + PyObject **v,
> + const char *in, const int size,
> + const char *errors, int final)
> +{
> + const char *startin = in;
> + const char *endin = in + size;
> + const DWORD flags = decode_code_page_flags(code_page);
> + /* Ideally, we should get reason from FormatMessage. This is the Windows
> + 2000 English version of the message. */
> + const char *reason = "No mapping for the Unicode character exists "
> + "in the target code page.";
> + /* each step cannot decode more than 1 character, but a character can be
> + represented as a surrogate pair */
> + wchar_t buffer[2], *out;
> + int insize;
> + Py_ssize_t outsize;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + PyObject *encoding_obj = NULL;
> + const char *encoding;
> + DWORD err;
> + int ret = -1;
> +
> + assert(size > 0);
> +
> + encoding = code_page_name(code_page, &encoding_obj);
> + if (encoding == NULL)
> + return -1;
> +
> + if ((errors == NULL || strcmp(errors, "strict") == 0) && final) {
> +        /* The last error was ERROR_NO_UNICODE_TRANSLATION, so raise a
> +           UnicodeDecodeError. */
> + make_decode_exception(&exc, encoding, in, size, 0, 0, reason);
> + if (exc != NULL) {
> + PyCodec_StrictErrors(exc);
> + Py_CLEAR(exc);
> + }
> + goto error;
> + }
> +
> + if (*v == NULL) {
> + /* Create unicode object */
> + if (size > PY_SSIZE_T_MAX / (Py_ssize_t)Py_ARRAY_LENGTH(buffer)) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + /* FIXME: don't use _PyUnicode_New(), but allocate a wchar_t* buffer */
> + *v = (PyObject*)_PyUnicode_New(size * Py_ARRAY_LENGTH(buffer));
> + if (*v == NULL)
> + goto error;
> + out = PyUnicode_AS_UNICODE(*v);
> + }
> + else {
> + /* Extend unicode object */
> + Py_ssize_t n = PyUnicode_GET_SIZE(*v);
> + if (size > (PY_SSIZE_T_MAX - n) / (Py_ssize_t)Py_ARRAY_LENGTH(buffer)) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + if (unicode_resize(v, n + size * Py_ARRAY_LENGTH(buffer)) < 0)
> + goto error;
> + out = PyUnicode_AS_UNICODE(*v) + n;
> + }
> +
> + /* Decode the byte string character per character */
> + while (in < endin)
> + {
> + /* Decode a character */
> + insize = 1;
> + do
> + {
> + outsize = MultiByteToWideChar(code_page, flags,
> + in, insize,
> + buffer, Py_ARRAY_LENGTH(buffer));
> + if (outsize > 0)
> + break;
> + err = GetLastError();
> + if (err != ERROR_NO_UNICODE_TRANSLATION
> + && err != ERROR_INSUFFICIENT_BUFFER)
> + {
> + PyErr_SetFromWindowsErr(0);
> + goto error;
> + }
> + insize++;
> + }
> + /* 4=maximum length of a UTF-8 sequence */
> + while (insize <= 4 && (in + insize) <= endin);
> +
> + if (outsize <= 0) {
> + Py_ssize_t startinpos, endinpos, outpos;
> +
> + /* last character in partial decode? */
> + if (in + insize >= endin && !final)
> + break;
> +
> + startinpos = in - startin;
> + endinpos = startinpos + 1;
> + outpos = out - PyUnicode_AS_UNICODE(*v);
> + if (unicode_decode_call_errorhandler_wchar(
> + errors, &errorHandler,
> + encoding, reason,
> + &startin, &endin, &startinpos, &endinpos, &exc, &in,
> + v, &outpos))
> + {
> + goto error;
> + }
> + out = PyUnicode_AS_UNICODE(*v) + outpos;
> + }
> + else {
> + in += insize;
> + memcpy(out, buffer, outsize * sizeof(wchar_t));
> + out += outsize;
> + }
> + }
> +
> + /* write a NUL character at the end */
> + *out = 0;
> +
> + /* Extend unicode object */
> + outsize = out - PyUnicode_AS_UNICODE(*v);
> + assert(outsize <= PyUnicode_WSTR_LENGTH(*v));
> + if (unicode_resize(v, outsize) < 0)
> + goto error;
> + /* (in - startin) <= size and size is an int */
> + ret = Py_SAFE_DOWNCAST(in - startin, Py_ssize_t, int);
> +
> +error:
> + Py_XDECREF(encoding_obj);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return ret;
> +}
> +
> +static PyObject *
> +decode_code_page_stateful(int code_page,
> + const char *s, Py_ssize_t size,
> + const char *errors, Py_ssize_t *consumed)
> +{
> + PyObject *v = NULL;
> + int chunk_size, final, converted, done;
> +
> + if (code_page < 0) {
> + PyErr_SetString(PyExc_ValueError, "invalid code page number");
> + return NULL;
> + }
> + if (size < 0) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + if (consumed)
> + *consumed = 0;
> +
> + do
> + {
> +#ifdef NEED_RETRY
> + if (size > INT_MAX) {
> + chunk_size = INT_MAX;
> + final = 0;
> + done = 0;
> + }
> + else
> +#endif
> + {
> + chunk_size = (int)size;
> + final = (consumed == NULL);
> + done = 1;
> + }
> +
> + if (chunk_size == 0 && done) {
> + if (v != NULL)
> + break;
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + converted = decode_code_page_strict(code_page, &v,
> + s, chunk_size);
> + if (converted == -2)
> + converted = decode_code_page_errors(code_page, &v,
> + s, chunk_size,
> + errors, final);
> + assert(converted != 0 || done);
> +
> + if (converted < 0) {
> + Py_XDECREF(v);
> + return NULL;
> + }
> +
> + if (consumed)
> + *consumed += converted;
> +
> + s += converted;
> + size -= converted;
> + } while (!done);
> +
> + return unicode_result(v);
> +}
> +
> +PyObject *
> +PyUnicode_DecodeCodePageStateful(int code_page,
> + const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + Py_ssize_t *consumed)
> +{
> + return decode_code_page_stateful(code_page, s, size, errors, consumed);
> +}
> +
> +PyObject *
> +PyUnicode_DecodeMBCSStateful(const char *s,
> + Py_ssize_t size,
> + const char *errors,
> + Py_ssize_t *consumed)
> +{
> + return decode_code_page_stateful(CP_ACP, s, size, errors, consumed);
> +}
> +
> +PyObject *
> +PyUnicode_DecodeMBCS(const char *s,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + return PyUnicode_DecodeMBCSStateful(s, size, errors, NULL);
> +}
> +
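(Last aside, not part of the patch: this code-page machinery is only compiled under MS_WINDOWS, so it should not be reachable in the UEFI build, but for reference the public entry point is exercised as below. CP_ACP is the Win32 "current ANSI code page" constant from windows.h; the helper name and parameters are made up.)

    /* Sketch only (MS_WINDOWS builds): decode bytes in the current ANSI code
       page; with 'consumed' non-NULL a trailing partial sequence is left
       unconsumed for a later call. */
    #include <Python.h>
    #include <windows.h>

    static PyObject *
    acp_decode_example(const char *buf, Py_ssize_t len)
    {
        Py_ssize_t consumed = 0;

        return PyUnicode_DecodeCodePageStateful(CP_ACP, buf, len,
                                                "strict", &consumed);
    }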
> +static DWORD
> +encode_code_page_flags(UINT code_page, const char *errors)
> +{
> + if (code_page == CP_UTF8) {
> + return WC_ERR_INVALID_CHARS;
> + }
> + else if (code_page == CP_UTF7) {
> + /* CP_UTF7 only supports flags=0 */
> + return 0;
> + }
> + else {
> + if (errors != NULL && strcmp(errors, "replace") == 0)
> + return 0;
> + else
> + return WC_NO_BEST_FIT_CHARS;
> + }
> +}
> +
> +/*
> + * Encode a Unicode string to a Windows code page into a byte string in strict
> + * mode.
> + *
> + * Returns the number of consumed characters on success, returns -2 on
> + * encode error, or raises an OSError and returns -1 on other errors.
> + */
> +static int
> +encode_code_page_strict(UINT code_page, PyObject **outbytes,
> + PyObject *unicode, Py_ssize_t offset, int len,
> + const char* errors)
> +{
> + BOOL usedDefaultChar = FALSE;
> + BOOL *pusedDefaultChar = &usedDefaultChar;
> + int outsize;
> + wchar_t *p;
> + Py_ssize_t size;
> + const DWORD flags = encode_code_page_flags(code_page, NULL);
> + char *out;
> + /* Create a substring so that we can get the UTF-16 representation
> + of just the slice under consideration. */
> + PyObject *substring;
> +
> + assert(len > 0);
> +
> + if (code_page != CP_UTF8 && code_page != CP_UTF7)
> + pusedDefaultChar = &usedDefaultChar;
> + else
> + pusedDefaultChar = NULL;
> +
> + substring = PyUnicode_Substring(unicode, offset, offset+len);
> + if (substring == NULL)
> + return -1;
> + p = PyUnicode_AsUnicodeAndSize(substring, &size);
> + if (p == NULL) {
> + Py_DECREF(substring);
> + return -1;
> + }
> + assert(size <= INT_MAX);
> +
> + /* First get the size of the result */
> + outsize = WideCharToMultiByte(code_page, flags,
> + p, (int)size,
> + NULL, 0,
> + NULL, pusedDefaultChar);
> + if (outsize <= 0)
> + goto error;
> + /* If we used a default char, then we failed! */
> + if (pusedDefaultChar && *pusedDefaultChar) {
> + Py_DECREF(substring);
> + return -2;
> + }
> +
> + if (*outbytes == NULL) {
> + /* Create string object */
> + *outbytes = PyBytes_FromStringAndSize(NULL, outsize);
> + if (*outbytes == NULL) {
> + Py_DECREF(substring);
> + return -1;
> + }
> + out = PyBytes_AS_STRING(*outbytes);
> + }
> + else {
> + /* Extend string object */
> + const Py_ssize_t n = PyBytes_Size(*outbytes);
> + if (outsize > PY_SSIZE_T_MAX - n) {
> + PyErr_NoMemory();
> + Py_DECREF(substring);
> + return -1;
> + }
> + if (_PyBytes_Resize(outbytes, n + outsize) < 0) {
> + Py_DECREF(substring);
> + return -1;
> + }
> + out = PyBytes_AS_STRING(*outbytes) + n;
> + }
> +
> + /* Do the conversion */
> + outsize = WideCharToMultiByte(code_page, flags,
> + p, (int)size,
> + out, outsize,
> + NULL, pusedDefaultChar);
> + Py_CLEAR(substring);
> + if (outsize <= 0)
> + goto error;
> + if (pusedDefaultChar && *pusedDefaultChar)
> + return -2;
> + return 0;
> +
> +error:
> + Py_XDECREF(substring);
> + if (GetLastError() == ERROR_NO_UNICODE_TRANSLATION)
> + return -2;
> + PyErr_SetFromWindowsErr(0);
> + return -1;
> +}
> +
> +/*
> + * Encode a Unicode string to a Windows code page into a byte string using an
> + * error handler.
> + *
> + * Returns 0 on success, or sets an exception and returns -1 on error.
> + */
> +static int
> +encode_code_page_errors(UINT code_page, PyObject **outbytes,
> + PyObject *unicode, Py_ssize_t unicode_offset,
> + Py_ssize_t insize, const char* errors)
> +{
> + const DWORD flags = encode_code_page_flags(code_page, errors);
> + Py_ssize_t pos = unicode_offset;
> + Py_ssize_t endin = unicode_offset + insize;
> + /* Ideally, we should get reason from FormatMessage. This is the Windows
> + 2000 English version of the message. */
> + const char *reason = "invalid character";
> + /* 4=maximum length of a UTF-8 sequence */
> + char buffer[4];
> + BOOL usedDefaultChar = FALSE, *pusedDefaultChar;
> + Py_ssize_t outsize;
> + char *out;
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + PyObject *encoding_obj = NULL;
> + const char *encoding;
> + Py_ssize_t newpos, newoutsize;
> + PyObject *rep;
> + int ret = -1;
> +
> + assert(insize > 0);
> +
> + encoding = code_page_name(code_page, &encoding_obj);
> + if (encoding == NULL)
> + return -1;
> +
> + if (errors == NULL || strcmp(errors, "strict") == 0) {
> + /* The last error was ERROR_NO_UNICODE_TRANSLATION, so raise a
> + UnicodeEncodeError. */
> + make_encode_exception(&exc, encoding, unicode, 0, 0, reason);
> + if (exc != NULL) {
> + PyCodec_StrictErrors(exc);
> + Py_DECREF(exc);
> + }
> + Py_XDECREF(encoding_obj);
> + return -1;
> + }
> +
> + if (code_page != CP_UTF8 && code_page != CP_UTF7)
> + pusedDefaultChar = &usedDefaultChar;
> + else
> + pusedDefaultChar = NULL;
> +
> + if (Py_ARRAY_LENGTH(buffer) > PY_SSIZE_T_MAX / insize) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + outsize = insize * Py_ARRAY_LENGTH(buffer);
> +
> + if (*outbytes == NULL) {
> + /* Create string object */
> + *outbytes = PyBytes_FromStringAndSize(NULL, outsize);
> + if (*outbytes == NULL)
> + goto error;
> + out = PyBytes_AS_STRING(*outbytes);
> + }
> + else {
> + /* Extend string object */
> + Py_ssize_t n = PyBytes_Size(*outbytes);
> + if (n > PY_SSIZE_T_MAX - outsize) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + if (_PyBytes_Resize(outbytes, n + outsize) < 0)
> + goto error;
> + out = PyBytes_AS_STRING(*outbytes) + n;
> + }
> +
> + /* Encode the string character per character */
> + while (pos < endin)
> + {
> + Py_UCS4 ch = PyUnicode_READ_CHAR(unicode, pos);
> + wchar_t chars[2];
> + int charsize;
> + if (ch < 0x10000) {
> + chars[0] = (wchar_t)ch;
> + charsize = 1;
> + }
> + else {
> + chars[0] = Py_UNICODE_HIGH_SURROGATE(ch);
> + chars[1] = Py_UNICODE_LOW_SURROGATE(ch);
> + charsize = 2;
> + }
> +
> + outsize = WideCharToMultiByte(code_page, flags,
> + chars, charsize,
> + buffer, Py_ARRAY_LENGTH(buffer),
> + NULL, pusedDefaultChar);
> + if (outsize > 0) {
> + if (pusedDefaultChar == NULL || !(*pusedDefaultChar))
> + {
> + pos++;
> + memcpy(out, buffer, outsize);
> + out += outsize;
> + continue;
> + }
> + }
> + else if (GetLastError() != ERROR_NO_UNICODE_TRANSLATION) {
> + PyErr_SetFromWindowsErr(0);
> + goto error;
> + }
> +
> + rep = unicode_encode_call_errorhandler(
> + errors, &errorHandler, encoding, reason,
> + unicode, &exc,
> + pos, pos + 1, &newpos);
> + if (rep == NULL)
> + goto error;
> + pos = newpos;
> +
> + if (PyBytes_Check(rep)) {
> + outsize = PyBytes_GET_SIZE(rep);
> + if (outsize != 1) {
> + Py_ssize_t offset = out - PyBytes_AS_STRING(*outbytes);
> + newoutsize = PyBytes_GET_SIZE(*outbytes) + (outsize - 1);
> + if (_PyBytes_Resize(outbytes, newoutsize) < 0) {
> + Py_DECREF(rep);
> + goto error;
> + }
> + out = PyBytes_AS_STRING(*outbytes) + offset;
> + }
> + memcpy(out, PyBytes_AS_STRING(rep), outsize);
> + out += outsize;
> + }
> + else {
> + Py_ssize_t i;
> + enum PyUnicode_Kind kind;
> + void *data;
> +
> + if (PyUnicode_READY(rep) == -1) {
> + Py_DECREF(rep);
> + goto error;
> + }
> +
> + outsize = PyUnicode_GET_LENGTH(rep);
> + if (outsize != 1) {
> + Py_ssize_t offset = out - PyBytes_AS_STRING(*outbytes);
> + newoutsize = PyBytes_GET_SIZE(*outbytes) + (outsize - 1);
> + if (_PyBytes_Resize(outbytes, newoutsize) < 0) {
> + Py_DECREF(rep);
> + goto error;
> + }
> + out = PyBytes_AS_STRING(*outbytes) + offset;
> + }
> + kind = PyUnicode_KIND(rep);
> + data = PyUnicode_DATA(rep);
> + for (i=0; i < outsize; i++) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (ch > 127) {
> + raise_encode_exception(&exc,
> + encoding, unicode,
> + pos, pos + 1,
> + "unable to encode error handler result to ASCII");
> + Py_DECREF(rep);
> + goto error;
> + }
> + *out = (unsigned char)ch;
> + out++;
> + }
> + }
> + Py_DECREF(rep);
> + }
> + /* write a NUL byte */
> + *out = 0;
> + outsize = out - PyBytes_AS_STRING(*outbytes);
> + assert(outsize <= PyBytes_GET_SIZE(*outbytes));
> + if (_PyBytes_Resize(outbytes, outsize) < 0)
> + goto error;
> + ret = 0;
> +
> +error:
> + Py_XDECREF(encoding_obj);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return ret;
> +}
> +
> +static PyObject *
> +encode_code_page(int code_page,
> + PyObject *unicode,
> + const char *errors)
> +{
> + Py_ssize_t len;
> + PyObject *outbytes = NULL;
> + Py_ssize_t offset;
> + int chunk_len, ret, done;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + len = PyUnicode_GET_LENGTH(unicode);
> +
> + if (code_page < 0) {
> + PyErr_SetString(PyExc_ValueError, "invalid code page number");
> + return NULL;
> + }
> +
> + if (len == 0)
> + return PyBytes_FromStringAndSize(NULL, 0);
> +
> + offset = 0;
> + do
> + {
> +#ifdef NEED_RETRY
> + /* UTF-16 encoding may double the size, so use only INT_MAX/2
> + chunks. */
> + if (len > INT_MAX/2) {
> + chunk_len = INT_MAX/2;
> + done = 0;
> + }
> + else
> +#endif
> + {
> + chunk_len = (int)len;
> + done = 1;
> + }
> +
> + ret = encode_code_page_strict(code_page, &outbytes,
> + unicode, offset, chunk_len,
> + errors);
> + if (ret == -2)
> + ret = encode_code_page_errors(code_page, &outbytes,
> + unicode, offset,
> + chunk_len, errors);
> + if (ret < 0) {
> + Py_XDECREF(outbytes);
> + return NULL;
> + }
> +
> + offset += chunk_len;
> + len -= chunk_len;
> + } while (!done);
> +
> + return outbytes;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeMBCS(const Py_UNICODE *p,
> + Py_ssize_t size,
> + const char *errors)
> +{
> + PyObject *unicode, *res;
> + unicode = PyUnicode_FromUnicode(p, size);
> + if (unicode == NULL)
> + return NULL;
> + res = encode_code_page(CP_ACP, unicode, errors);
> + Py_DECREF(unicode);
> + return res;
> +}
> +
> +PyObject *
> +PyUnicode_EncodeCodePage(int code_page,
> + PyObject *unicode,
> + const char *errors)
> +{
> + return encode_code_page(code_page, unicode, errors);
> +}
> +
> +PyObject *
> +PyUnicode_AsMBCSString(PyObject *unicode)
> +{
> + return PyUnicode_EncodeCodePage(CP_ACP, unicode, NULL);
> +}
> +
> +#undef NEED_RETRY
> +
> +#endif /* MS_WINDOWS */
> +
> +/* --- Character Mapping Codec -------------------------------------------- */
> +
> +static int
> +charmap_decode_string(const char *s,
> + Py_ssize_t size,
> + PyObject *mapping,
> + const char *errors,
> + _PyUnicodeWriter *writer)
> +{
> + const char *starts = s;
> + const char *e;
> + Py_ssize_t startinpos, endinpos;
> + PyObject *errorHandler = NULL, *exc = NULL;
> + Py_ssize_t maplen;
> + enum PyUnicode_Kind mapkind;
> + void *mapdata;
> + Py_UCS4 x;
> + unsigned char ch;
> +
> + if (PyUnicode_READY(mapping) == -1)
> + return -1;
> +
> + maplen = PyUnicode_GET_LENGTH(mapping);
> + mapdata = PyUnicode_DATA(mapping);
> + mapkind = PyUnicode_KIND(mapping);
> +
> + e = s + size;
> +
> + if (mapkind == PyUnicode_1BYTE_KIND && maplen >= 256) {
> + /* fast-path for cp037, cp500 and iso8859_1 encodings. iso8859_1
> + * is disabled in encoding aliases, latin1 is preferred because
> + * its implementation is faster. */
> + Py_UCS1 *mapdata_ucs1 = (Py_UCS1 *)mapdata;
> + Py_UCS1 *outdata = (Py_UCS1 *)writer->data;
> + Py_UCS4 maxchar = writer->maxchar;
> +
> + assert (writer->kind == PyUnicode_1BYTE_KIND);
> + while (s < e) {
> + ch = *s;
> + x = mapdata_ucs1[ch];
> + if (x > maxchar) {
> + if (_PyUnicodeWriter_Prepare(writer, 1, 0xff) == -1)
> + goto onError;
> + maxchar = writer->maxchar;
> + outdata = (Py_UCS1 *)writer->data;
> + }
> + outdata[writer->pos] = x;
> + writer->pos++;
> + ++s;
> + }
> + return 0;
> + }
> +
> + while (s < e) {
> + if (mapkind == PyUnicode_2BYTE_KIND && maplen >= 256) {
> + enum PyUnicode_Kind outkind = writer->kind;
> + Py_UCS2 *mapdata_ucs2 = (Py_UCS2 *)mapdata;
> + if (outkind == PyUnicode_1BYTE_KIND) {
> + Py_UCS1 *outdata = (Py_UCS1 *)writer->data;
> + Py_UCS4 maxchar = writer->maxchar;
> + while (s < e) {
> + ch = *s;
> + x = mapdata_ucs2[ch];
> + if (x > maxchar)
> + goto Error;
> + outdata[writer->pos] = x;
> + writer->pos++;
> + ++s;
> + }
> + break;
> + }
> + else if (outkind == PyUnicode_2BYTE_KIND) {
> + Py_UCS2 *outdata = (Py_UCS2 *)writer->data;
> + while (s < e) {
> + ch = *s;
> + x = mapdata_ucs2[ch];
> + if (x == 0xFFFE)
> + goto Error;
> + outdata[writer->pos] = x;
> + writer->pos++;
> + ++s;
> + }
> + break;
> + }
> + }
> + ch = *s;
> +
> + if (ch < maplen)
> + x = PyUnicode_READ(mapkind, mapdata, ch);
> + else
> + x = 0xfffe; /* invalid value */
> +Error:
> + if (x == 0xfffe)
> + {
> + /* undefined mapping */
> + startinpos = s-starts;
> + endinpos = startinpos+1;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "charmap", "character maps to <undefined>",
> + &starts, &e, &startinpos, &endinpos, &exc, &s,
> + writer)) {
> + goto onError;
> + }
> + continue;
> + }
> +
> + if (_PyUnicodeWriter_WriteCharInline(writer, x) < 0)
> + goto onError;
> + ++s;
> + }
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return 0;
> +
> +onError:
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return -1;
> +}
> +
> +static int
> +charmap_decode_mapping(const char *s,
> + Py_ssize_t size,
> + PyObject *mapping,
> + const char *errors,
> + _PyUnicodeWriter *writer)
> +{
> + const char *starts = s;
> + const char *e;
> + Py_ssize_t startinpos, endinpos;
> + PyObject *errorHandler = NULL, *exc = NULL;
> + unsigned char ch;
> + PyObject *key, *item = NULL;
> +
> + e = s + size;
> +
> + while (s < e) {
> + ch = *s;
> +
> + /* Get mapping (char ordinal -> integer, Unicode char or None) */
> + key = PyLong_FromLong((long)ch);
> + if (key == NULL)
> + goto onError;
> +
> + item = PyObject_GetItem(mapping, key);
> + Py_DECREF(key);
> + if (item == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_LookupError)) {
> + /* No mapping found means: mapping is undefined. */
> + PyErr_Clear();
> + goto Undefined;
> + } else
> + goto onError;
> + }
> +
> + /* Apply mapping */
> + if (item == Py_None)
> + goto Undefined;
> + if (PyLong_Check(item)) {
> + long value = PyLong_AS_LONG(item);
> + if (value == 0xFFFE)
> + goto Undefined;
> + if (value < 0 || value > MAX_UNICODE) {
> + PyErr_Format(PyExc_TypeError,
> + "character mapping must be in range(0x%lx)",
> + (unsigned long)MAX_UNICODE + 1);
> + goto onError;
> + }
> +
> + if (_PyUnicodeWriter_WriteCharInline(writer, value) < 0)
> + goto onError;
> + }
> + else if (PyUnicode_Check(item)) {
> + if (PyUnicode_READY(item) == -1)
> + goto onError;
> + if (PyUnicode_GET_LENGTH(item) == 1) {
> + Py_UCS4 value = PyUnicode_READ_CHAR(item, 0);
> + if (value == 0xFFFE)
> + goto Undefined;
> + if (_PyUnicodeWriter_WriteCharInline(writer, value) < 0)
> + goto onError;
> + }
> + else {
> + writer->overallocate = 1;
> + if (_PyUnicodeWriter_WriteStr(writer, item) == -1)
> + goto onError;
> + }
> + }
> + else {
> + /* wrong return value */
> + PyErr_SetString(PyExc_TypeError,
> + "character mapping must return integer, None or str");
> + goto onError;
> + }
> + Py_CLEAR(item);
> + ++s;
> + continue;
> +
> +Undefined:
> + /* undefined mapping */
> + Py_CLEAR(item);
> + startinpos = s-starts;
> + endinpos = startinpos+1;
> + if (unicode_decode_call_errorhandler_writer(
> + errors, &errorHandler,
> + "charmap", "character maps to <undefined>",
> + &starts, &e, &startinpos, &endinpos, &exc, &s,
> + writer)) {
> + goto onError;
> + }
> + }
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return 0;
> +
> +onError:
> + Py_XDECREF(item);
> + Py_XDECREF(errorHandler);
> + Py_XDECREF(exc);
> + return -1;
> +}
> +
> +PyObject *
> +PyUnicode_DecodeCharmap(const char *s,
> + Py_ssize_t size,
> + PyObject *mapping,
> + const char *errors)
> +{
> + _PyUnicodeWriter writer;
> +
> + /* Default to Latin-1 */
> + if (mapping == NULL)
> + return PyUnicode_DecodeLatin1(s, size, errors);
> +
> + if (size == 0)
> + _Py_RETURN_UNICODE_EMPTY();
> + _PyUnicodeWriter_Init(&writer);
> + writer.min_length = size;
> + if (_PyUnicodeWriter_Prepare(&writer, writer.min_length, 127) == -1)
> + goto onError;
> +
> + if (PyUnicode_CheckExact(mapping)) {
> + if (charmap_decode_string(s, size, mapping, errors, &writer) < 0)
> + goto onError;
> + }
> + else {
> + if (charmap_decode_mapping(s, size, mapping, errors, &writer) < 0)
> + goto onError;
> + }
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> +}
> +
> +/* Charmap encoding: the lookup table */
> +
> +struct encoding_map {
> + PyObject_HEAD
> + unsigned char level1[32];
> + int count2, count3;
> + unsigned char level23[1];
> +};
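> +
> +/* The encoding map is a three-level trie over BMP code points: level1[32]
> + is indexed by ch >> 11 and selects one of count2 16-byte level-2 blocks
> + in level23, indexed by (ch >> 7) & 0xF; that in turn selects one of
> + count3 128-byte level-3 blocks, indexed by ch & 0x7F, which holds the
> + encoded byte. 0xFF in levels 1/2 and 0 in level 3 mark unmapped
> + characters (U+0000 itself always encodes to byte 0). */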
> +
> +static PyObject*
> +encoding_map_size(PyObject *obj, PyObject* args)
> +{
> + struct encoding_map *map = (struct encoding_map*)obj;
> + return PyLong_FromLong(sizeof(*map) - 1 + 16*map->count2 +
> + 128*map->count3);
> +}
> +
> +static PyMethodDef encoding_map_methods[] = {
> + {"size", encoding_map_size, METH_NOARGS,
> + PyDoc_STR("Return the size (in bytes) of this object") },
> + { 0 }
> +};
> +
> +static void
> +encoding_map_dealloc(PyObject* o)
> +{
> + PyObject_FREE(o);
> +}
> +
> +static PyTypeObject EncodingMapType = {
> + PyVarObject_HEAD_INIT(NULL, 0)
> + "EncodingMap", /*tp_name*/
> + sizeof(struct encoding_map), /*tp_basicsize*/
> + 0, /*tp_itemsize*/
> + /* methods */
> + encoding_map_dealloc, /*tp_dealloc*/
> + 0, /*tp_print*/
> + 0, /*tp_getattr*/
> + 0, /*tp_setattr*/
> + 0, /*tp_reserved*/
> + 0, /*tp_repr*/
> + 0, /*tp_as_number*/
> + 0, /*tp_as_sequence*/
> + 0, /*tp_as_mapping*/
> + 0, /*tp_hash*/
> + 0, /*tp_call*/
> + 0, /*tp_str*/
> + 0, /*tp_getattro*/
> + 0, /*tp_setattro*/
> + 0, /*tp_as_buffer*/
> + Py_TPFLAGS_DEFAULT, /*tp_flags*/
> + 0, /*tp_doc*/
> + 0, /*tp_traverse*/
> + 0, /*tp_clear*/
> + 0, /*tp_richcompare*/
> + 0, /*tp_weaklistoffset*/
> + 0, /*tp_iter*/
> + 0, /*tp_iternext*/
> + encoding_map_methods, /*tp_methods*/
> + 0, /*tp_members*/
> + 0, /*tp_getset*/
> + 0, /*tp_base*/
> + 0, /*tp_dict*/
> + 0, /*tp_descr_get*/
> + 0, /*tp_descr_set*/
> + 0, /*tp_dictoffset*/
> + 0, /*tp_init*/
> + 0, /*tp_alloc*/
> + 0, /*tp_new*/
> + 0, /*tp_free*/
> + 0, /*tp_is_gc*/
> +};
> +
> +PyObject*
> +PyUnicode_BuildEncodingMap(PyObject* string)
> +{
> + PyObject *result;
> + struct encoding_map *mresult;
> + int i;
> + int need_dict = 0;
> + unsigned char level1[32];
> + unsigned char level2[512];
> + unsigned char *mlevel1, *mlevel2, *mlevel3;
> + int count2 = 0, count3 = 0;
> + int kind;
> + void *data;
> + Py_ssize_t length;
> + Py_UCS4 ch;
> +
> + if (!PyUnicode_Check(string) || !PyUnicode_GET_LENGTH(string)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + kind = PyUnicode_KIND(string);
> + data = PyUnicode_DATA(string);
> + length = PyUnicode_GET_LENGTH(string);
> + length = Py_MIN(length, 256);
> + memset(level1, 0xFF, sizeof level1);
> + memset(level2, 0xFF, sizeof level2);
> +
> + /* If there isn't a one-to-one mapping of NULL to \0,
> + or if there are non-BMP characters, we need to use
> + a mapping dictionary. */
> + if (PyUnicode_READ(kind, data, 0) != 0)
> + need_dict = 1;
> + for (i = 1; i < length; i++) {
> + int l1, l2;
> + ch = PyUnicode_READ(kind, data, i);
> + if (ch == 0 || ch > 0xFFFF) {
> + need_dict = 1;
> + break;
> + }
> + if (ch == 0xFFFE)
> + /* unmapped character */
> + continue;
> + l1 = ch >> 11;
> + l2 = ch >> 7;
> + if (level1[l1] == 0xFF)
> + level1[l1] = count2++;
> + if (level2[l2] == 0xFF)
> + level2[l2] = count3++;
> + }
> +
> + if (count2 >= 0xFF || count3 >= 0xFF)
> + need_dict = 1;
> +
> + if (need_dict) {
> + PyObject *result = PyDict_New();
> + PyObject *key, *value;
> + if (!result)
> + return NULL;
> + for (i = 0; i < length; i++) {
> + key = PyLong_FromLong(PyUnicode_READ(kind, data, i));
> + value = PyLong_FromLong(i);
> + if (!key || !value)
> + goto failed1;
> + if (PyDict_SetItem(result, key, value) == -1)
> + goto failed1;
> + Py_DECREF(key);
> + Py_DECREF(value);
> + }
> + return result;
> + failed1:
> + Py_XDECREF(key);
> + Py_XDECREF(value);
> + Py_DECREF(result);
> + return NULL;
> + }
> +
> + /* Create a three-level trie */
> + result = PyObject_MALLOC(sizeof(struct encoding_map) +
> + 16*count2 + 128*count3 - 1);
> + if (!result)
> + return PyErr_NoMemory();
> + PyObject_Init(result, &EncodingMapType);
> + mresult = (struct encoding_map*)result;
> + mresult->count2 = count2;
> + mresult->count3 = count3;
> + mlevel1 = mresult->level1;
> + mlevel2 = mresult->level23;
> + mlevel3 = mresult->level23 + 16*count2;
> + memcpy(mlevel1, level1, 32);
> + memset(mlevel2, 0xFF, 16*count2);
> + memset(mlevel3, 0, 128*count3);
> + count3 = 0;
> + for (i = 1; i < length; i++) {
> + int o1, o2, o3, i2, i3;
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (ch == 0xFFFE)
> + /* unmapped character */
> + continue;
> + o1 = ch>>11;
> + o2 = (ch>>7) & 0xF;
> + i2 = 16*mlevel1[o1] + o2;
> + if (mlevel2[i2] == 0xFF)
> + mlevel2[i2] = count3++;
> + o3 = ch & 0x7F;
> + i3 = 128*mlevel2[i2] + o3;
> + mlevel3[i3] = i;
> + }
> + return result;
> +}
> +
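> +/* Look up BMP character c in the three-level trie. Return the encoded
> + byte value, or -1 if c is not mapped (or lies outside the BMP); only
> + c == 0 may yield the byte 0. */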
> +static int
> +encoding_map_lookup(Py_UCS4 c, PyObject *mapping)
> +{
> + struct encoding_map *map = (struct encoding_map*)mapping;
> + int l1 = c>>11;
> + int l2 = (c>>7) & 0xF;
> + int l3 = c & 0x7F;
> + int i;
> +
> + if (c > 0xFFFF)
> + return -1;
> + if (c == 0)
> + return 0;
> + /* level 1*/
> + i = map->level1[l1];
> + if (i == 0xFF) {
> + return -1;
> + }
> + /* level 2*/
> + i = map->level23[16*i+l2];
> + if (i == 0xFF) {
> + return -1;
> + }
> + /* level 3 */
> + i = map->level23[16*map->count2 + 128*i + l3];
> + if (i == 0) {
> + return -1;
> + }
> + return i;
> +}
> +
> +/* Look up the character c in the mapping. If the character
> + has no mapping, Py_None is returned (or NULL, if another
> + error occurred). */
> +static PyObject *
> +charmapencode_lookup(Py_UCS4 c, PyObject *mapping)
> +{
> + PyObject *w = PyLong_FromLong((long)c);
> + PyObject *x;
> +
> + if (w == NULL)
> + return NULL;
> + x = PyObject_GetItem(mapping, w);
> + Py_DECREF(w);
> + if (x == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_LookupError)) {
> + /* No mapping found means: mapping is undefined. */
> + PyErr_Clear();
> + x = Py_None;
> + Py_INCREF(x);
> + return x;
> + } else
> + return NULL;
> + }
> + else if (x == Py_None)
> + return x;
> + else if (PyLong_Check(x)) {
> + long value = PyLong_AS_LONG(x);
> + if (value < 0 || value > 255) {
> + PyErr_SetString(PyExc_TypeError,
> + "character mapping must be in range(256)");
> + Py_DECREF(x);
> + return NULL;
> + }
> + return x;
> + }
> + else if (PyBytes_Check(x))
> + return x;
> + else {
> + /* wrong return value */
> + PyErr_Format(PyExc_TypeError,
> + "character mapping must return integer, bytes or None, not %.400s",
> + x->ob_type->tp_name);
> + Py_DECREF(x);
> + return NULL;
> + }
> +}
> +
> +static int
> +charmapencode_resize(PyObject **outobj, Py_ssize_t *outpos, Py_ssize_t requiredsize)
> +{
> + Py_ssize_t outsize = PyBytes_GET_SIZE(*outobj);
> + /* exponentially overallocate to minimize reallocations */
> + if (requiredsize < 2*outsize)
> + requiredsize = 2*outsize;
> + if (_PyBytes_Resize(outobj, requiredsize))
> + return -1;
> + return 0;
> +}
> +
> +typedef enum charmapencode_result {
> + enc_SUCCESS, enc_FAILED, enc_EXCEPTION
> +} charmapencode_result;
> +/* Look up the character, put the result in the output string and adjust
> + various state variables. Resize the output bytes object if not enough
> + space is available. Return enc_SUCCESS if a mapping was found and
> + written, enc_FAILED if the mapping was undefined (in which case no
> + character is written), or enc_EXCEPTION if a lookup or reallocation
> + error occurred. */
> +static charmapencode_result
> +charmapencode_output(Py_UCS4 c, PyObject *mapping,
> + PyObject **outobj, Py_ssize_t *outpos)
> +{
> + PyObject *rep;
> + char *outstart;
> + Py_ssize_t outsize = PyBytes_GET_SIZE(*outobj);
> +
> + if (Py_TYPE(mapping) == &EncodingMapType) {
> + int res = encoding_map_lookup(c, mapping);
> + Py_ssize_t requiredsize = *outpos+1;
> + if (res == -1)
> + return enc_FAILED;
> + if (outsize<requiredsize)
> + if (charmapencode_resize(outobj, outpos, requiredsize))
> + return enc_EXCEPTION;
> + outstart = PyBytes_AS_STRING(*outobj);
> + outstart[(*outpos)++] = (char)res;
> + return enc_SUCCESS;
> + }
> +
> + rep = charmapencode_lookup(c, mapping);
> + if (rep==NULL)
> + return enc_EXCEPTION;
> + else if (rep==Py_None) {
> + Py_DECREF(rep);
> + return enc_FAILED;
> + } else {
> + if (PyLong_Check(rep)) {
> + Py_ssize_t requiredsize = *outpos+1;
> + if (outsize<requiredsize)
> + if (charmapencode_resize(outobj, outpos, requiredsize)) {
> + Py_DECREF(rep);
> + return enc_EXCEPTION;
> + }
> + outstart = PyBytes_AS_STRING(*outobj);
> + outstart[(*outpos)++] = (char)PyLong_AS_LONG(rep);
> + }
> + else {
> + const char *repchars = PyBytes_AS_STRING(rep);
> + Py_ssize_t repsize = PyBytes_GET_SIZE(rep);
> + Py_ssize_t requiredsize = *outpos+repsize;
> + if (outsize<requiredsize)
> + if (charmapencode_resize(outobj, outpos, requiredsize)) {
> + Py_DECREF(rep);
> + return enc_EXCEPTION;
> + }
> + outstart = PyBytes_AS_STRING(*outobj);
> + memcpy(outstart + *outpos, repchars, repsize);
> + *outpos += repsize;
> + }
> + }
> + Py_DECREF(rep);
> + return enc_SUCCESS;
> +}
> +
> +/* handle an error in PyUnicode_EncodeCharmap
> + Return 0 on success, -1 on error */
> +static int
> +charmap_encoding_error(
> + PyObject *unicode, Py_ssize_t *inpos, PyObject *mapping,
> + PyObject **exceptionObject,
> + _Py_error_handler *error_handler, PyObject **error_handler_obj, const char *errors,
> + PyObject **res, Py_ssize_t *respos)
> +{
> + PyObject *repunicode = NULL; /* initialize to prevent gcc warning */
> + Py_ssize_t size, repsize;
> + Py_ssize_t newpos;
> + enum PyUnicode_Kind kind;
> + void *data;
> + Py_ssize_t index;
> + /* startpos for collecting unencodable chars */
> + Py_ssize_t collstartpos = *inpos;
> + Py_ssize_t collendpos = *inpos+1;
> + Py_ssize_t collpos;
> + char *encoding = "charmap";
> + char *reason = "character maps to <undefined>";
> + charmapencode_result x;
> + Py_UCS4 ch;
> + int val;
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return -1;
> + size = PyUnicode_GET_LENGTH(unicode);
> + /* find all unencodable characters */
> + while (collendpos < size) {
> + PyObject *rep;
> + if (Py_TYPE(mapping) == &EncodingMapType) {
> + ch = PyUnicode_READ_CHAR(unicode, collendpos);
> + val = encoding_map_lookup(ch, mapping);
> + if (val != -1)
> + break;
> + ++collendpos;
> + continue;
> + }
> +
> + ch = PyUnicode_READ_CHAR(unicode, collendpos);
> + rep = charmapencode_lookup(ch, mapping);
> + if (rep==NULL)
> + return -1;
> + else if (rep!=Py_None) {
> + Py_DECREF(rep);
> + break;
> + }
> + Py_DECREF(rep);
> + ++collendpos;
> + }
> + /* cache callback name lookup
> + * (if not done yet, i.e. it's the first error) */
> + if (*error_handler == _Py_ERROR_UNKNOWN)
> + *error_handler = get_error_handler(errors);
> +
> + switch (*error_handler) {
> + case _Py_ERROR_STRICT:
> + raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
> + return -1;
> +
> + case _Py_ERROR_REPLACE:
> + for (collpos = collstartpos; collpos<collendpos; ++collpos) {
> + x = charmapencode_output('?', mapping, res, respos);
> + if (x==enc_EXCEPTION) {
> + return -1;
> + }
> + else if (x==enc_FAILED) {
> + raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
> + return -1;
> + }
> + }
> + /* fall through */
> + case _Py_ERROR_IGNORE:
> + *inpos = collendpos;
> + break;
> +
> + case _Py_ERROR_XMLCHARREFREPLACE:
> + /* generate a decimal XML character reference for each character */
> + for (collpos = collstartpos; collpos < collendpos; ++collpos) {
> + char buffer[2+29+1+1];
> + char *cp;
> + sprintf(buffer, "&#%d;", (int)PyUnicode_READ_CHAR(unicode, collpos));
> + for (cp = buffer; *cp; ++cp) {
> + x = charmapencode_output(*cp, mapping, res, respos);
> + if (x==enc_EXCEPTION)
> + return -1;
> + else if (x==enc_FAILED) {
> + raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
> + return -1;
> + }
> + }
> + }
> + *inpos = collendpos;
> + break;
> +
> + default:
> + repunicode = unicode_encode_call_errorhandler(errors, error_handler_obj,
> + encoding, reason, unicode, exceptionObject,
> + collstartpos, collendpos, &newpos);
> + if (repunicode == NULL)
> + return -1;
> + if (PyBytes_Check(repunicode)) {
> + /* Directly copy bytes result to output. */
> + Py_ssize_t outsize = PyBytes_Size(*res);
> + Py_ssize_t requiredsize;
> + repsize = PyBytes_Size(repunicode);
> + requiredsize = *respos + repsize;
> + if (requiredsize > outsize)
> + /* Make room for all additional bytes. */
> + if (charmapencode_resize(res, respos, requiredsize)) {
> + Py_DECREF(repunicode);
> + return -1;
> + }
> + memcpy(PyBytes_AsString(*res) + *respos,
> + PyBytes_AsString(repunicode), repsize);
> + *respos += repsize;
> + *inpos = newpos;
> + Py_DECREF(repunicode);
> + break;
> + }
> + /* generate replacement */
> + if (PyUnicode_READY(repunicode) == -1) {
> + Py_DECREF(repunicode);
> + return -1;
> + }
> + repsize = PyUnicode_GET_LENGTH(repunicode);
> + data = PyUnicode_DATA(repunicode);
> + kind = PyUnicode_KIND(repunicode);
> + for (index = 0; index < repsize; index++) {
> + Py_UCS4 repch = PyUnicode_READ(kind, data, index);
> + x = charmapencode_output(repch, mapping, res, respos);
> + if (x==enc_EXCEPTION) {
> + Py_DECREF(repunicode);
> + return -1;
> + }
> + else if (x==enc_FAILED) {
> + Py_DECREF(repunicode);
> + raise_encode_exception(exceptionObject, encoding, unicode, collstartpos, collendpos, reason);
> + return -1;
> + }
> + }
> + *inpos = newpos;
> + Py_DECREF(repunicode);
> + }
> + return 0;
> +}
> +
> +PyObject *
> +_PyUnicode_EncodeCharmap(PyObject *unicode,
> + PyObject *mapping,
> + const char *errors)
> +{
> + /* output object */
> + PyObject *res = NULL;
> + /* current input position */
> + Py_ssize_t inpos = 0;
> + Py_ssize_t size;
> + /* current output position */
> + Py_ssize_t respos = 0;
> + PyObject *error_handler_obj = NULL;
> + PyObject *exc = NULL;
> + _Py_error_handler error_handler = _Py_ERROR_UNKNOWN;
> + void *data;
> + int kind;
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + size = PyUnicode_GET_LENGTH(unicode);
> + data = PyUnicode_DATA(unicode);
> + kind = PyUnicode_KIND(unicode);
> +
> + /* Default to Latin-1 */
> + if (mapping == NULL)
> + return unicode_encode_ucs1(unicode, errors, 256);
> +
> + /* allocate enough for a simple encoding without
> + replacements; if we need more, we'll resize */
> + res = PyBytes_FromStringAndSize(NULL, size);
> + if (res == NULL)
> + goto onError;
> + if (size == 0)
> + return res;
> +
> + while (inpos<size) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, inpos);
> + /* try to encode it */
> + charmapencode_result x = charmapencode_output(ch, mapping, &res, &respos);
> + if (x==enc_EXCEPTION) /* error */
> + goto onError;
> + if (x==enc_FAILED) { /* unencodable character */
> + if (charmap_encoding_error(unicode, &inpos, mapping,
> + &exc,
> + &error_handler, &error_handler_obj, errors,
> + &res, &respos)) {
> + goto onError;
> + }
> + }
> + else
> + /* done with this character => adjust input position */
> + ++inpos;
> + }
> +
> + /* Resize if we allocated too much */
> + if (respos<PyBytes_GET_SIZE(res))
> + if (_PyBytes_Resize(&res, respos) < 0)
> + goto onError;
> +
> + Py_XDECREF(exc);
> + Py_XDECREF(error_handler_obj);
> + return res;
> +
> + onError:
> + Py_XDECREF(res);
> + Py_XDECREF(exc);
> + Py_XDECREF(error_handler_obj);
> + return NULL;
> +}
> +
> +/* Deprecated */
> +PyObject *
> +PyUnicode_EncodeCharmap(const Py_UNICODE *p,
> + Py_ssize_t size,
> + PyObject *mapping,
> + const char *errors)
> +{
> + PyObject *result;
> + PyObject *unicode = PyUnicode_FromUnicode(p, size);
> + if (unicode == NULL)
> + return NULL;
> + result = _PyUnicode_EncodeCharmap(unicode, mapping, errors);
> + Py_DECREF(unicode);
> + return result;
> +}
> +
> +PyObject *
> +PyUnicode_AsCharmapString(PyObject *unicode,
> + PyObject *mapping)
> +{
> + if (!PyUnicode_Check(unicode) || mapping == NULL) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + return _PyUnicode_EncodeCharmap(unicode, mapping, NULL);
> +}
> +
> +/* create or adjust a UnicodeTranslateError */
> +static void
> +make_translate_exception(PyObject **exceptionObject,
> + PyObject *unicode,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + const char *reason)
> +{
> + if (*exceptionObject == NULL) {
> + *exceptionObject = _PyUnicodeTranslateError_Create(
> + unicode, startpos, endpos, reason);
> + }
> + else {
> + if (PyUnicodeTranslateError_SetStart(*exceptionObject, startpos))
> + goto onError;
> + if (PyUnicodeTranslateError_SetEnd(*exceptionObject, endpos))
> + goto onError;
> + if (PyUnicodeTranslateError_SetReason(*exceptionObject, reason))
> + goto onError;
> + return;
> + onError:
> + Py_CLEAR(*exceptionObject);
> + }
> +}
> +
> +/* error handling callback helper:
> + build arguments, call the callback and check the arguments,
> + put the result into newpos and return the replacement string, which
> + has to be freed by the caller */
> +static PyObject *
> +unicode_translate_call_errorhandler(const char *errors,
> + PyObject **errorHandler,
> + const char *reason,
> + PyObject *unicode, PyObject **exceptionObject,
> + Py_ssize_t startpos, Py_ssize_t endpos,
> + Py_ssize_t *newpos)
> +{
> + static const char *argparse = "O!n;translating error handler must return (str, int) tuple";
> +
> + Py_ssize_t i_newpos;
> + PyObject *restuple;
> + PyObject *resunicode;
> +
> + if (*errorHandler == NULL) {
> + *errorHandler = PyCodec_LookupError(errors);
> + if (*errorHandler == NULL)
> + return NULL;
> + }
> +
> + make_translate_exception(exceptionObject,
> + unicode, startpos, endpos, reason);
> + if (*exceptionObject == NULL)
> + return NULL;
> +
> + restuple = PyObject_CallFunctionObjArgs(
> + *errorHandler, *exceptionObject, NULL);
> + if (restuple == NULL)
> + return NULL;
> + if (!PyTuple_Check(restuple)) {
> + PyErr_SetString(PyExc_TypeError, &argparse[4]);
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + if (!PyArg_ParseTuple(restuple, argparse, &PyUnicode_Type,
> + &resunicode, &i_newpos)) {
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + if (i_newpos<0)
> + *newpos = PyUnicode_GET_LENGTH(unicode)+i_newpos;
> + else
> + *newpos = i_newpos;
> + if (*newpos<0 || *newpos>PyUnicode_GET_LENGTH(unicode)) {
> + PyErr_Format(PyExc_IndexError, "position %zd from error handler out of bounds", *newpos);
> + Py_DECREF(restuple);
> + return NULL;
> + }
> + Py_INCREF(resunicode);
> + Py_DECREF(restuple);
> + return resunicode;
> +}
> +
> +/* Look up the character c in the mapping and put the result in *result,
> + which must be decref'ed by the caller. *result is set to NULL when the
> + mapping has no entry for c (the character then keeps its 1:1 mapping).
> + Return 0 on success, -1 on error */
> +static int
> +charmaptranslate_lookup(Py_UCS4 c, PyObject *mapping, PyObject **result)
> +{
> + PyObject *w = PyLong_FromLong((long)c);
> + PyObject *x;
> +
> + if (w == NULL)
> + return -1;
> + x = PyObject_GetItem(mapping, w);
> + Py_DECREF(w);
> + if (x == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_LookupError)) {
> + /* No mapping found means: use 1:1 mapping. */
> + PyErr_Clear();
> + *result = NULL;
> + return 0;
> + } else
> + return -1;
> + }
> + else if (x == Py_None) {
> + *result = x;
> + return 0;
> + }
> + else if (PyLong_Check(x)) {
> + long value = PyLong_AS_LONG(x);
> + if (value < 0 || value > MAX_UNICODE) {
> + PyErr_Format(PyExc_ValueError,
> + "character mapping must be in range(0x%x)",
> + MAX_UNICODE+1);
> + Py_DECREF(x);
> + return -1;
> + }
> + *result = x;
> + return 0;
> + }
> + else if (PyUnicode_Check(x)) {
> + *result = x;
> + return 0;
> + }
> + else {
> + /* wrong return value */
> + PyErr_SetString(PyExc_TypeError,
> + "character mapping must return integer, None or str");
> + Py_DECREF(x);
> + return -1;
> + }
> +}
> +
> +/* lookup the character, write the result into the writer.
> + Return 1 if the result was written into the writer, return 0 if the mapping
> + was undefined, or raise an exception and return -1 on error. */
> +static int
> +charmaptranslate_output(Py_UCS4 ch, PyObject *mapping,
> + _PyUnicodeWriter *writer)
> +{
> + PyObject *item;
> +
> + if (charmaptranslate_lookup(ch, mapping, &item))
> + return -1;
> +
> + if (item == NULL) {
> + /* not found => default to 1:1 mapping */
> + if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0) {
> + return -1;
> + }
> + return 1;
> + }
> +
> + if (item == Py_None) {
> + Py_DECREF(item);
> + return 0;
> + }
> +
> + if (PyLong_Check(item)) {
> + long ch = (Py_UCS4)PyLong_AS_LONG(item);
> + /* PyLong_AS_LONG() cannot fail, charmaptranslate_lookup() already
> + used it */
> + if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0) {
> + Py_DECREF(item);
> + return -1;
> + }
> + Py_DECREF(item);
> + return 1;
> + }
> +
> + if (!PyUnicode_Check(item)) {
> + Py_DECREF(item);
> + return -1;
> + }
> +
> + if (_PyUnicodeWriter_WriteStr(writer, item) < 0) {
> + Py_DECREF(item);
> + return -1;
> + }
> +
> + Py_DECREF(item);
> + return 1;
> +}
> +
> +static int
> +unicode_fast_translate_lookup(PyObject *mapping, Py_UCS1 ch,
> + Py_UCS1 *translate)
> +{
> + PyObject *item = NULL;
> + int ret = 0;
> +
> + if (charmaptranslate_lookup(ch, mapping, &item)) {
> + return -1;
> + }
> +
> + if (item == Py_None) {
> + /* deletion */
> + translate[ch] = 0xfe;
> + }
> + else if (item == NULL) {
> + /* not found => default to 1:1 mapping */
> + translate[ch] = ch;
> + return 1;
> + }
> + else if (PyLong_Check(item)) {
> + long replace = PyLong_AS_LONG(item);
> + /* PyLong_AS_LONG() cannot fail, charmaptranslate_lookup() already
> + used it */
> + if (127 < replace) {
> + /* invalid character or character outside ASCII:
> + skip the fast translate */
> + goto exit;
> + }
> + translate[ch] = (Py_UCS1)replace;
> + }
> + else if (PyUnicode_Check(item)) {
> + Py_UCS4 replace;
> +
> + if (PyUnicode_READY(item) == -1) {
> + Py_DECREF(item);
> + return -1;
> + }
> + if (PyUnicode_GET_LENGTH(item) != 1)
> + goto exit;
> +
> + replace = PyUnicode_READ_CHAR(item, 0);
> + if (replace > 127)
> + goto exit;
> + translate[ch] = (Py_UCS1)replace;
> + }
> + else {
> + /* not None, NULL, long or unicode */
> + goto exit;
> + }
> + ret = 1;
> +
> + exit:
> + Py_DECREF(item);
> + return ret;
> +}
> +
> +/* Fast path for ascii => ascii translation. Return 1 if the whole string
> + was translated into writer, return 0 if the input string was partially
> + translated into writer, raise an exception and return -1 on error. */
> +static int
> +unicode_fast_translate(PyObject *input, PyObject *mapping,
> + _PyUnicodeWriter *writer, int ignore,
> + Py_ssize_t *input_pos)
> +{
> + Py_UCS1 ascii_table[128], ch, ch2;
> + Py_ssize_t len;
> + Py_UCS1 *in, *end, *out;
> + int res = 0;
> +
> + len = PyUnicode_GET_LENGTH(input);
> +
> + memset(ascii_table, 0xff, 128);
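> + /* Each ascii_table slot caches the translation of one ASCII byte:
> + 0xff means "not looked up yet", 0xfe means "mapped to None"
> + (the character is deleted), and any value < 128 is the
> + replacement character. */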
> +
> + in = PyUnicode_1BYTE_DATA(input);
> + end = in + len;
> +
> + assert(PyUnicode_IS_ASCII(writer->buffer));
> + assert(PyUnicode_GET_LENGTH(writer->buffer) == len);
> + out = PyUnicode_1BYTE_DATA(writer->buffer);
> +
> + for (; in < end; in++) {
> + ch = *in;
> + ch2 = ascii_table[ch];
> + if (ch2 == 0xff) {
> + int translate = unicode_fast_translate_lookup(mapping, ch,
> + ascii_table);
> + if (translate < 0)
> + return -1;
> + if (translate == 0)
> + goto exit;
> + ch2 = ascii_table[ch];
> + }
> + if (ch2 == 0xfe) {
> + if (ignore)
> + continue;
> + goto exit;
> + }
> + assert(ch2 < 128);
> + *out = ch2;
> + out++;
> + }
> + res = 1;
> +
> +exit:
> + writer->pos = out - PyUnicode_1BYTE_DATA(writer->buffer);
> + *input_pos = in - PyUnicode_1BYTE_DATA(input);
> + return res;
> +}
> +
> +static PyObject *
> +_PyUnicode_TranslateCharmap(PyObject *input,
> + PyObject *mapping,
> + const char *errors)
> +{
> + /* input object */
> + char *data;
> + Py_ssize_t size, i;
> + int kind;
> + /* output buffer */
> + _PyUnicodeWriter writer;
> + /* error handler */
> + char *reason = "character maps to <undefined>";
> + PyObject *errorHandler = NULL;
> + PyObject *exc = NULL;
> + int ignore;
> + int res;
> +
> + if (mapping == NULL) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(input) == -1)
> + return NULL;
> + data = (char*)PyUnicode_DATA(input);
> + kind = PyUnicode_KIND(input);
> + size = PyUnicode_GET_LENGTH(input);
> +
> + if (size == 0)
> + return PyUnicode_FromObject(input);
> +
> + /* allocate enough for a simple 1:1 translation without
> + replacements; if we need more, we'll resize */
> + _PyUnicodeWriter_Init(&writer);
> + if (_PyUnicodeWriter_Prepare(&writer, size, 127) == -1)
> + goto onError;
> +
> + ignore = (errors != NULL && strcmp(errors, "ignore") == 0);
> +
> + if (PyUnicode_READY(input) == -1)
> + return NULL;
> + if (PyUnicode_IS_ASCII(input)) {
> + res = unicode_fast_translate(input, mapping, &writer, ignore, &i);
> + if (res < 0) {
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> + }
> + if (res == 1)
> + return _PyUnicodeWriter_Finish(&writer);
> + }
> + else {
> + i = 0;
> + }
> +
> + while (i<size) {
> + /* try to encode it */
> + int translate;
> + PyObject *repunicode = NULL; /* initialize to prevent gcc warning */
> + Py_ssize_t newpos;
> + /* startpos for collecting untranslatable chars */
> + Py_ssize_t collstart;
> + Py_ssize_t collend;
> + Py_UCS4 ch;
> +
> + ch = PyUnicode_READ(kind, data, i);
> + translate = charmaptranslate_output(ch, mapping, &writer);
> + if (translate < 0)
> + goto onError;
> +
> + if (translate != 0) {
> + /* it worked => adjust input pointer */
> + ++i;
> + continue;
> + }
> +
> + /* untranslatable character */
> + collstart = i;
> + collend = i+1;
> +
> + /* find all untranslatable characters */
> + while (collend < size) {
> + PyObject *x;
> + ch = PyUnicode_READ(kind, data, collend);
> + if (charmaptranslate_lookup(ch, mapping, &x))
> + goto onError;
> + Py_XDECREF(x);
> + if (x != Py_None)
> + break;
> + ++collend;
> + }
> +
> + if (ignore) {
> + i = collend;
> + }
> + else {
> + repunicode = unicode_translate_call_errorhandler(errors, &errorHandler,
> + reason, input, &exc,
> + collstart, collend, &newpos);
> + if (repunicode == NULL)
> + goto onError;
> + if (_PyUnicodeWriter_WriteStr(&writer, repunicode) < 0) {
> + Py_DECREF(repunicode);
> + goto onError;
> + }
> + Py_DECREF(repunicode);
> + i = newpos;
> + }
> + }
> + Py_XDECREF(exc);
> + Py_XDECREF(errorHandler);
> + return _PyUnicodeWriter_Finish(&writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&writer);
> + Py_XDECREF(exc);
> + Py_XDECREF(errorHandler);
> + return NULL;
> +}
> +
> +/* Deprecated. Use PyUnicode_Translate instead. */
> +PyObject *
> +PyUnicode_TranslateCharmap(const Py_UNICODE *p,
> + Py_ssize_t size,
> + PyObject *mapping,
> + const char *errors)
> +{
> + PyObject *result;
> + PyObject *unicode = PyUnicode_FromUnicode(p, size);
> + if (!unicode)
> + return NULL;
> + result = _PyUnicode_TranslateCharmap(unicode, mapping, errors);
> + Py_DECREF(unicode);
> + return result;
> +}
> +
> +PyObject *
> +PyUnicode_Translate(PyObject *str,
> + PyObject *mapping,
> + const char *errors)
> +{
> + if (ensure_unicode(str) < 0)
> + return NULL;
> + return _PyUnicode_TranslateCharmap(str, mapping, errors);
> +}
> +
> +static Py_UCS4
> +fix_decimal_and_space_to_ascii(PyObject *self)
> +{
> + /* No need to call PyUnicode_READY(self) because this function is only
> + called as a callback from fixup() which does it already. */
> + const Py_ssize_t len = PyUnicode_GET_LENGTH(self);
> + const int kind = PyUnicode_KIND(self);
> + void *data = PyUnicode_DATA(self);
> + Py_UCS4 maxchar = 127, ch, fixed;
> + int modified = 0;
> + Py_ssize_t i;
> +
> + for (i = 0; i < len; ++i) {
> + ch = PyUnicode_READ(kind, data, i);
> + fixed = 0;
> + if (ch > 127) {
> + if (Py_UNICODE_ISSPACE(ch))
> + fixed = ' ';
> + else {
> + const int decimal = Py_UNICODE_TODECIMAL(ch);
> + if (decimal >= 0)
> + fixed = '0' + decimal;
> + }
> + if (fixed != 0) {
> + modified = 1;
> + maxchar = Py_MAX(maxchar, fixed);
> + PyUnicode_WRITE(kind, data, i, fixed);
> + }
> + else
> + maxchar = Py_MAX(maxchar, ch);
> + }
> + }
> +
> + return (modified) ? maxchar : 0;
> +}
> +
> +PyObject *
> +_PyUnicode_TransformDecimalAndSpaceToASCII(PyObject *unicode)
> +{
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> + if (PyUnicode_MAX_CHAR_VALUE(unicode) <= 127) {
> + /* If the string is already ASCII, just return the same string */
> + Py_INCREF(unicode);
> + return unicode;
> + }
> + return fixup(unicode, fix_decimal_and_space_to_ascii);
> +}
> +
> +PyObject *
> +PyUnicode_TransformDecimalToASCII(Py_UNICODE *s,
> + Py_ssize_t length)
> +{
> + PyObject *decimal;
> + Py_ssize_t i;
> + Py_UCS4 maxchar;
> + enum PyUnicode_Kind kind;
> + void *data;
> +
> + maxchar = 127;
> + for (i = 0; i < length; i++) {
> + Py_UCS4 ch = s[i];
> + if (ch > 127) {
> + int decimal = Py_UNICODE_TODECIMAL(ch);
> + if (decimal >= 0)
> + ch = '0' + decimal;
> + maxchar = Py_MAX(maxchar, ch);
> + }
> + }
> +
> + /* Copy to a new string */
> + decimal = PyUnicode_New(length, maxchar);
> + if (decimal == NULL)
> + return decimal;
> + kind = PyUnicode_KIND(decimal);
> + data = PyUnicode_DATA(decimal);
> + /* Iterate over code points */
> + for (i = 0; i < length; i++) {
> + Py_UCS4 ch = s[i];
> + if (ch > 127) {
> + int decimal = Py_UNICODE_TODECIMAL(ch);
> + if (decimal >= 0)
> + ch = '0' + decimal;
> + }
> + PyUnicode_WRITE(kind, data, i, ch);
> + }
> + return unicode_result(decimal);
> +}
> +/* --- Decimal Encoder ---------------------------------------------------- */
> +
> +int
> +PyUnicode_EncodeDecimal(Py_UNICODE *s,
> + Py_ssize_t length,
> + char *output,
> + const char *errors)
> +{
> + PyObject *unicode;
> + Py_ssize_t i;
> + enum PyUnicode_Kind kind;
> + void *data;
> +
> + if (output == NULL) {
> + PyErr_BadArgument();
> + return -1;
> + }
> +
> + unicode = PyUnicode_FromUnicode(s, length);
> + if (unicode == NULL)
> + return -1;
> +
> + if (PyUnicode_READY(unicode) == -1) {
> + Py_DECREF(unicode);
> + return -1;
> + }
> + kind = PyUnicode_KIND(unicode);
> + data = PyUnicode_DATA(unicode);
> +
> + for (i=0; i < length; ) {
> + PyObject *exc;
> + Py_UCS4 ch;
> + int decimal;
> + Py_ssize_t startpos;
> +
> + ch = PyUnicode_READ(kind, data, i);
> +
> + if (Py_UNICODE_ISSPACE(ch)) {
> + *output++ = ' ';
> + i++;
> + continue;
> + }
> + decimal = Py_UNICODE_TODECIMAL(ch);
> + if (decimal >= 0) {
> + *output++ = '0' + decimal;
> + i++;
> + continue;
> + }
> + if (0 < ch && ch < 256) {
> + *output++ = (char)ch;
> + i++;
> + continue;
> + }
> +
> + startpos = i;
> + exc = NULL;
> + raise_encode_exception(&exc, "decimal", unicode,
> + startpos, startpos+1,
> + "invalid decimal Unicode string");
> + Py_XDECREF(exc);
> + Py_DECREF(unicode);
> + return -1;
> + }
> + /* 0-terminate the output string */
> + *output++ = '\0';
> + Py_DECREF(unicode);
> + return 0;
> +}
> +
> +/* --- Helpers ------------------------------------------------------------ */
> +
> +/* helper macro to fixup start/end slice values */
> +#define ADJUST_INDICES(start, end, len) \
> + if (end > len) \
> + end = len; \
> + else if (end < 0) { \
> + end += len; \
> + if (end < 0) \
> + end = 0; \
> + } \
> + if (start < 0) { \
> + start += len; \
> + if (start < 0) \
> + start = 0; \
> + }
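> +/* For example, with len == 10 the call ADJUST_INDICES(start, end, len)
> + turns start == -3, end == 100 into start == 7, end == 10, matching the
> + Python slice s[-3:100]. */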
> +
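> +/* Find the substring s2 in s1[start:end]. Return the index of the first
> + (direction > 0) or last (direction < 0) match, -1 if there is no match,
> + or -2 on error. */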
> +static Py_ssize_t
> +any_find_slice(PyObject* s1, PyObject* s2,
> + Py_ssize_t start,
> + Py_ssize_t end,
> + int direction)
> +{
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2, result;
> +
> + kind1 = PyUnicode_KIND(s1);
> + kind2 = PyUnicode_KIND(s2);
> + if (kind1 < kind2)
> + return -1;
> +
> + len1 = PyUnicode_GET_LENGTH(s1);
> + len2 = PyUnicode_GET_LENGTH(s2);
> + ADJUST_INDICES(start, end, len1);
> + if (end - start < len2)
> + return -1;
> +
> + buf1 = PyUnicode_DATA(s1);
> + buf2 = PyUnicode_DATA(s2);
> + if (len2 == 1) {
> + Py_UCS4 ch = PyUnicode_READ(kind2, buf2, 0);
> + result = findchar((const char *)buf1 + kind1*start,
> + kind1, end - start, ch, direction);
> + if (result == -1)
> + return -1;
> + else
> + return start + result;
> + }
> +
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(s2, kind1);
> + if (!buf2)
> + return -2;
> + }
> +
> + if (direction > 0) {
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(s1) && PyUnicode_IS_ASCII(s2))
> + result = asciilib_find_slice(buf1, len1, buf2, len2, start, end);
> + else
> + result = ucs1lib_find_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + result = ucs2lib_find_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + result = ucs4lib_find_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + default:
> + assert(0); result = -2;
> + }
> + }
> + else {
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(s1) && PyUnicode_IS_ASCII(s2))
> + result = asciilib_rfind_slice(buf1, len1, buf2, len2, start, end);
> + else
> + result = ucs1lib_rfind_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + result = ucs2lib_rfind_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + result = ucs4lib_rfind_slice(buf1, len1, buf2, len2, start, end);
> + break;
> + default:
> + assert(0); result = -2;
> + }
> + }
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return result;
> +}
> +
> +/* _PyUnicode_InsertThousandsGrouping() helper functions */
> +#include "stringlib/localeutil.h"
> +
> +/**
> + * InsertThousandsGrouping:
> + * @writer: Unicode writer (NULL in counting mode).
> + * @n_buffer: Number of characters available in the output buffer.
> + * @digits: Unicode object the digits are read from (NULL, and unused,
> + * in counting mode).
> + * @d_pos: Start of digits string.
> + * @n_digits: The number of digits in the string, in which we want
> + * to put the grouping chars.
> + * @min_width: The minimum width of the digits in the output string.
> + * Output will be zero-padded on the left to fill.
> + * @grouping: see definition in localeconv().
> + * @thousands_sep: see definition in localeconv().
> + *
> + * There are 2 modes: counting and filling. If @writer is NULL,
> + * we are in counting mode, else filling mode.
> + * If counting, the required buffer size is returned.
> + * If filling, we know the buffer will be large enough, so we don't
> + * need to pass in the buffer size.
> + * Inserts thousand grouping characters (as defined by grouping and
> + * thousands_sep) into @writer.
> + *
> + * Return value: -1 on error, number of characters otherwise.
> + **/
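> +/* For example, with grouping "\3" and thousands_sep ",", the seven digits
> + "1234567" are emitted as "1,234,567"; in counting mode the same call
> + returns 9, the number of characters needed. */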
> +Py_ssize_t
> +_PyUnicode_InsertThousandsGrouping(
> + _PyUnicodeWriter *writer,
> + Py_ssize_t n_buffer,
> + PyObject *digits,
> + Py_ssize_t d_pos,
> + Py_ssize_t n_digits,
> + Py_ssize_t min_width,
> + const char *grouping,
> + PyObject *thousands_sep,
> + Py_UCS4 *maxchar)
> +{
> + if (writer) {
> + assert(digits != NULL);
> + assert(maxchar == NULL);
> + }
> + else {
> + assert(digits == NULL);
> + assert(maxchar != NULL);
> + }
> + assert(0 <= d_pos);
> + assert(0 <= n_digits);
> + assert(0 <= min_width);
> + assert(grouping != NULL);
> +
> + if (digits != NULL) {
> + if (PyUnicode_READY(digits) == -1) {
> + return -1;
> + }
> + }
> + if (PyUnicode_READY(thousands_sep) == -1) {
> + return -1;
> + }
> +
> + Py_ssize_t count = 0;
> + Py_ssize_t n_zeros;
> + int loop_broken = 0;
> + int use_separator = 0; /* First time through, don't append the
> + separator. They only go between
> + groups. */
> + Py_ssize_t buffer_pos;
> + Py_ssize_t digits_pos;
> + Py_ssize_t len;
> + Py_ssize_t n_chars;
> + Py_ssize_t remaining = n_digits; /* Number of chars remaining to
> + be looked at */
> + /* A generator that returns all of the grouping widths, until it
> + returns 0. */
> + GroupGenerator groupgen;
> + GroupGenerator_init(&groupgen, grouping);
> + const Py_ssize_t thousands_sep_len = PyUnicode_GET_LENGTH(thousands_sep);
> +
> + /* if digits are not grouped, thousands separator
> + should be an empty string */
> + assert(!(grouping[0] == CHAR_MAX && thousands_sep_len != 0));
> +
> + digits_pos = d_pos + n_digits;
> + if (writer) {
> + buffer_pos = writer->pos + n_buffer;
> + assert(buffer_pos <= PyUnicode_GET_LENGTH(writer->buffer));
> + assert(digits_pos <= PyUnicode_GET_LENGTH(digits));
> + }
> + else {
> + buffer_pos = n_buffer;
> + }
> +
> + if (!writer) {
> + *maxchar = 127;
> + }
> +
> + while ((len = GroupGenerator_next(&groupgen)) > 0) {
> + len = Py_MIN(len, Py_MAX(Py_MAX(remaining, min_width), 1));
> + n_zeros = Py_MAX(0, len - remaining);
> + n_chars = Py_MAX(0, Py_MIN(remaining, len));
> +
> + /* Use n_zeros zeros and n_chars chars */
> +
> + /* Count only, don't do anything. */
> + count += (use_separator ? thousands_sep_len : 0) + n_zeros + n_chars;
> +
> + /* Copy into the writer. */
> + InsertThousandsGrouping_fill(writer, &buffer_pos,
> + digits, &digits_pos,
> + n_chars, n_zeros,
> + use_separator ? thousands_sep : NULL,
> + thousands_sep_len, maxchar);
> +
> + /* Use a separator next time. */
> + use_separator = 1;
> +
> + remaining -= n_chars;
> + min_width -= len;
> +
> + if (remaining <= 0 && min_width <= 0) {
> + loop_broken = 1;
> + break;
> + }
> + min_width -= thousands_sep_len;
> + }
> + if (!loop_broken) {
> + /* We left the loop without using a break statement. */
> +
> + len = Py_MAX(Py_MAX(remaining, min_width), 1);
> + n_zeros = Py_MAX(0, len - remaining);
> + n_chars = Py_MAX(0, Py_MIN(remaining, len));
> +
> + /* Use n_zeros zeros and n_chars chars */
> + count += (use_separator ? thousands_sep_len : 0) + n_zeros + n_chars;
> +
> + /* Copy into the writer. */
> + InsertThousandsGrouping_fill(writer, &buffer_pos,
> + digits, &digits_pos,
> + n_chars, n_zeros,
> + use_separator ? thousands_sep : NULL,
> + thousands_sep_len, maxchar);
> + }
> + return count;
> +}
> +
> +
> +Py_ssize_t
> +PyUnicode_Count(PyObject *str,
> + PyObject *substr,
> + Py_ssize_t start,
> + Py_ssize_t end)
> +{
> + Py_ssize_t result;
> + int kind1, kind2;
> + void *buf1 = NULL, *buf2 = NULL;
> + Py_ssize_t len1, len2;
> +
> + if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
> + return -1;
> +
> + kind1 = PyUnicode_KIND(str);
> + kind2 = PyUnicode_KIND(substr);
> + if (kind1 < kind2)
> + return 0;
> +
> + len1 = PyUnicode_GET_LENGTH(str);
> + len2 = PyUnicode_GET_LENGTH(substr);
> + ADJUST_INDICES(start, end, len1);
> + if (end - start < len2)
> + return 0;
> +
> + buf1 = PyUnicode_DATA(str);
> + buf2 = PyUnicode_DATA(substr);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(substr, kind1);
> + if (!buf2)
> + goto onError;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(str) && PyUnicode_IS_ASCII(substr))
> + result = asciilib_count(
> + ((Py_UCS1*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + else
> + result = ucs1lib_count(
> + ((Py_UCS1*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + case PyUnicode_2BYTE_KIND:
> + result = ucs2lib_count(
> + ((Py_UCS2*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + case PyUnicode_4BYTE_KIND:
> + result = ucs4lib_count(
> + ((Py_UCS4*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + default:
> + assert(0); result = 0;
> + }
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return result;
> + onError:
> + if (kind2 != kind1 && buf2)
> + PyMem_Free(buf2);
> + return -1;
> +}
> +
> +Py_ssize_t
> +PyUnicode_Find(PyObject *str,
> + PyObject *substr,
> + Py_ssize_t start,
> + Py_ssize_t end,
> + int direction)
> +{
> + if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
> + return -2;
> +
> + return any_find_slice(str, substr, start, end, direction);
> +}
> +
> +Py_ssize_t
> +PyUnicode_FindChar(PyObject *str, Py_UCS4 ch,
> + Py_ssize_t start, Py_ssize_t end,
> + int direction)
> +{
> + int kind;
> + Py_ssize_t result;
> + if (PyUnicode_READY(str) == -1)
> + return -2;
> + if (start < 0 || end < 0) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return -2;
> + }
> + if (end > PyUnicode_GET_LENGTH(str))
> + end = PyUnicode_GET_LENGTH(str);
> + if (start >= end)
> + return -1;
> + kind = PyUnicode_KIND(str);
> + result = findchar(PyUnicode_1BYTE_DATA(str) + kind*start,
> + kind, end-start, ch, direction);
> + if (result == -1)
> + return -1;
> + else
> + return start + result;
> +}
> +
> +static int
> +tailmatch(PyObject *self,
> + PyObject *substring,
> + Py_ssize_t start,
> + Py_ssize_t end,
> + int direction)
> +{
> + int kind_self;
> + int kind_sub;
> + void *data_self;
> + void *data_sub;
> + Py_ssize_t offset;
> + Py_ssize_t i;
> + Py_ssize_t end_sub;
> +
> + if (PyUnicode_READY(self) == -1 ||
> + PyUnicode_READY(substring) == -1)
> + return -1;
> +
> + ADJUST_INDICES(start, end, PyUnicode_GET_LENGTH(self));
> + end -= PyUnicode_GET_LENGTH(substring);
> + if (end < start)
> + return 0;
> +
> + if (PyUnicode_GET_LENGTH(substring) == 0)
> + return 1;
> +
> + kind_self = PyUnicode_KIND(self);
> + data_self = PyUnicode_DATA(self);
> + kind_sub = PyUnicode_KIND(substring);
> + data_sub = PyUnicode_DATA(substring);
> + end_sub = PyUnicode_GET_LENGTH(substring) - 1;
> +
> + if (direction > 0)
> + offset = end;
> + else
> + offset = start;
> +
> + if (PyUnicode_READ(kind_self, data_self, offset) ==
> + PyUnicode_READ(kind_sub, data_sub, 0) &&
> + PyUnicode_READ(kind_self, data_self, offset + end_sub) ==
> + PyUnicode_READ(kind_sub, data_sub, end_sub)) {
> + /* If both are of the same kind, memcmp is sufficient */
> + if (kind_self == kind_sub) {
> + return ! memcmp((char *)data_self +
> + (offset * PyUnicode_KIND(substring)),
> + data_sub,
> + PyUnicode_GET_LENGTH(substring) *
> + PyUnicode_KIND(substring));
> + }
> + /* otherwise we have to compare each character by first accessing it */
> + else {
> + /* We do not need to compare 0 and len(substring)-1 because
> + the if statement above already ensured that they are equal
> + when we end up here. */
> + for (i = 1; i < end_sub; ++i) {
> + if (PyUnicode_READ(kind_self, data_self, offset + i) !=
> + PyUnicode_READ(kind_sub, data_sub, i))
> + return 0;
> + }
> + return 1;
> + }
> + }
> +
> + return 0;
> +}
> +
> +Py_ssize_t
> +PyUnicode_Tailmatch(PyObject *str,
> + PyObject *substr,
> + Py_ssize_t start,
> + Py_ssize_t end,
> + int direction)
> +{
> + if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0)
> + return -1;
> +
> + return tailmatch(str, substr, start, end, direction);
> +}
> +
> +/* Apply fixfct filter to the Unicode object self and return a
> + reference to the modified object */
> +
> +static PyObject *
> +fixup(PyObject *self,
> + Py_UCS4 (*fixfct)(PyObject *s))
> +{
> + PyObject *u;
> + Py_UCS4 maxchar_old, maxchar_new = 0;
> + PyObject *v;
> +
> + u = _PyUnicode_Copy(self);
> + if (u == NULL)
> + return NULL;
> + maxchar_old = PyUnicode_MAX_CHAR_VALUE(u);
> +
> + /* fix functions return the new maximum character in a string;
> + if the kind of the resulting unicode object does not change,
> + everything is fine. Otherwise we need to change the string kind
> + and re-run the fix function. */
> + maxchar_new = fixfct(u);
> +
> + if (maxchar_new == 0) {
> + /* no changes */;
> + if (PyUnicode_CheckExact(self)) {
> + Py_DECREF(u);
> + Py_INCREF(self);
> + return self;
> + }
> + else
> + return u;
> + }
> +
> + maxchar_new = align_maxchar(maxchar_new);
> +
> + if (maxchar_new == maxchar_old)
> + return u;
> +
> + /* In case the maximum character changed, we need to
> + convert the string to the new category. */
> + v = PyUnicode_New(PyUnicode_GET_LENGTH(self), maxchar_new);
> + if (v == NULL) {
> + Py_DECREF(u);
> + return NULL;
> + }
> + if (maxchar_new > maxchar_old) {
> + /* If the maxchar increased so that the kind changed, not all
> + characters are representable anymore and we need to fix the
> + string again. This only happens in very few cases. */
> + _PyUnicode_FastCopyCharacters(v, 0,
> + self, 0, PyUnicode_GET_LENGTH(self));
> + maxchar_old = fixfct(v);
> + assert(maxchar_old > 0 && maxchar_old <= maxchar_new);
> + }
> + else {
> + _PyUnicode_FastCopyCharacters(v, 0,
> + u, 0, PyUnicode_GET_LENGTH(self));
> + }
> + Py_DECREF(u);
> + assert(_PyUnicode_CheckConsistency(v, 1));
> + return v;
> +}
> +
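> +/* Fast path for case conversion of ASCII-only strings: build the result with
> + a simple byte-wise lower/upper conversion. */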
> +static PyObject *
> +ascii_upper_or_lower(PyObject *self, int lower)
> +{
> + Py_ssize_t len = PyUnicode_GET_LENGTH(self);
> + char *resdata, *data = PyUnicode_DATA(self);
> + PyObject *res;
> +
> + res = PyUnicode_New(len, 127);
> + if (res == NULL)
> + return NULL;
> + resdata = PyUnicode_DATA(res);
> + if (lower)
> + _Py_bytes_lower(resdata, data, len);
> + else
> + _Py_bytes_upper(resdata, data, len);
> + return res;
> +}
> +
> +static Py_UCS4
> +handle_capital_sigma(int kind, void *data, Py_ssize_t length, Py_ssize_t i)
> +{
> + Py_ssize_t j;
> + int final_sigma;
> + Py_UCS4 c = 0; /* initialize to prevent gcc warning */
> + /* U+03A3 is in the Final_Sigma context when it is found like this:
> +
> + \p{cased}\p{case-ignorable}*U+03A3!(\p{case-ignorable}*\p{cased})
> +
> + where ! is a negation and \p{xxx} is a character with property xxx.
> + */
> + for (j = i - 1; j >= 0; j--) {
> + c = PyUnicode_READ(kind, data, j);
> + if (!_PyUnicode_IsCaseIgnorable(c))
> + break;
> + }
> + final_sigma = j >= 0 && _PyUnicode_IsCased(c);
> + if (final_sigma) {
> + for (j = i + 1; j < length; j++) {
> + c = PyUnicode_READ(kind, data, j);
> + if (!_PyUnicode_IsCaseIgnorable(c))
> + break;
> + }
> + final_sigma = j == length || !_PyUnicode_IsCased(c);
> + }
> + return (final_sigma) ? 0x3C2 : 0x3C3;
> +}
> +
> +static int
> +lower_ucs4(int kind, void *data, Py_ssize_t length, Py_ssize_t i,
> + Py_UCS4 c, Py_UCS4 *mapped)
> +{
> + /* Obscure special case. */
> + if (c == 0x3A3) {
> + mapped[0] = handle_capital_sigma(kind, data, length, i);
> + return 1;
> + }
> + return _PyUnicode_ToLowerFull(c, mapped);
> +}
> +
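> +/* str.capitalize() helper: uppercase the first character with the full case
> + mapping and lowercase the remaining characters. */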
> +static Py_ssize_t
> +do_capitalize(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
> +{
> + Py_ssize_t i, k = 0;
> + int n_res, j;
> + Py_UCS4 c, mapped[3];
> +
> + c = PyUnicode_READ(kind, data, 0);
> + n_res = _PyUnicode_ToUpperFull(c, mapped);
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> + for (i = 1; i < length; i++) {
> + c = PyUnicode_READ(kind, data, i);
> + n_res = lower_ucs4(kind, data, length, i, c, mapped);
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> + }
> + return k;
> +}
> +
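> +/* str.swapcase() helper: lowercase the uppercase characters, uppercase the
> + lowercase ones, and leave everything else unchanged. */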
> +static Py_ssize_t
> +do_swapcase(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar) {
> + Py_ssize_t i, k = 0;
> +
> + for (i = 0; i < length; i++) {
> + Py_UCS4 c = PyUnicode_READ(kind, data, i), mapped[3];
> + int n_res, j;
> + if (Py_UNICODE_ISUPPER(c)) {
> + n_res = lower_ucs4(kind, data, length, i, c, mapped);
> + }
> + else if (Py_UNICODE_ISLOWER(c)) {
> + n_res = _PyUnicode_ToUpperFull(c, mapped);
> + }
> + else {
> + n_res = 1;
> + mapped[0] = c;
> + }
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> + }
> + return k;
> +}
> +
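> +/* Shared helper for str.upper() and str.lower(): apply the full upper- or
> + lowercase mapping to every character, depending on the 'lower' flag. */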
> +static Py_ssize_t
> +do_upper_or_lower(int kind, void *data, Py_ssize_t length, Py_UCS4 *res,
> + Py_UCS4 *maxchar, int lower)
> +{
> + Py_ssize_t i, k = 0;
> +
> + for (i = 0; i < length; i++) {
> + Py_UCS4 c = PyUnicode_READ(kind, data, i), mapped[3];
> + int n_res, j;
> + if (lower)
> + n_res = lower_ucs4(kind, data, length, i, c, mapped);
> + else
> + n_res = _PyUnicode_ToUpperFull(c, mapped);
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> + }
> + return k;
> +}
> +
> +static Py_ssize_t
> +do_upper(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
> +{
> + return do_upper_or_lower(kind, data, length, res, maxchar, 0);
> +}
> +
> +static Py_ssize_t
> +do_lower(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
> +{
> + return do_upper_or_lower(kind, data, length, res, maxchar, 1);
> +}
> +
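> +/* str.casefold() helper: apply the full Unicode case folding to every
> + character. */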
> +static Py_ssize_t
> +do_casefold(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
> +{
> + Py_ssize_t i, k = 0;
> +
> + for (i = 0; i < length; i++) {
> + Py_UCS4 c = PyUnicode_READ(kind, data, i);
> + Py_UCS4 mapped[3];
> + int j, n_res = _PyUnicode_ToFoldedFull(c, mapped);
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> + }
> + return k;
> +}
> +
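> +/* str.title() helper: characters that follow a cased character are
> + lowercased, all other characters get the full titlecase mapping. */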
> +static Py_ssize_t
> +do_title(int kind, void *data, Py_ssize_t length, Py_UCS4 *res, Py_UCS4 *maxchar)
> +{
> + Py_ssize_t i, k = 0;
> + int previous_is_cased;
> +
> + previous_is_cased = 0;
> + for (i = 0; i < length; i++) {
> + const Py_UCS4 c = PyUnicode_READ(kind, data, i);
> + Py_UCS4 mapped[3];
> + int n_res, j;
> +
> + if (previous_is_cased)
> + n_res = lower_ucs4(kind, data, length, i, c, mapped);
> + else
> + n_res = _PyUnicode_ToTitleFull(c, mapped);
> +
> + for (j = 0; j < n_res; j++) {
> + *maxchar = Py_MAX(*maxchar, mapped[j]);
> + res[k++] = mapped[j];
> + }
> +
> + previous_is_cased = _PyUnicode_IsCased(c);
> + }
> + return k;
> +}
> +
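> +/* Apply one of the do_* case-mapping helpers to self and build the result
> + string. The temporary buffer holds 3 * length UCS4 values because a full
> + case mapping can expand a single character into up to three characters. */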
> +static PyObject *
> +case_operation(PyObject *self,
> + Py_ssize_t (*perform)(int, void *, Py_ssize_t, Py_UCS4 *, Py_UCS4 *))
> +{
> + PyObject *res = NULL;
> + Py_ssize_t length, newlength = 0;
> + int kind, outkind;
> + void *data, *outdata;
> + Py_UCS4 maxchar = 0, *tmp, *tmpend;
> +
> + assert(PyUnicode_IS_READY(self));
> +
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> + length = PyUnicode_GET_LENGTH(self);
> + if ((size_t) length > PY_SSIZE_T_MAX / (3 * sizeof(Py_UCS4))) {
> + PyErr_SetString(PyExc_OverflowError, "string is too long");
> + return NULL;
> + }
> + tmp = PyMem_MALLOC(sizeof(Py_UCS4) * 3 * length);
> + if (tmp == NULL)
> + return PyErr_NoMemory();
> + newlength = perform(kind, data, length, tmp, &maxchar);
> + res = PyUnicode_New(newlength, maxchar);
> + if (res == NULL)
> + goto leave;
> + tmpend = tmp + newlength;
> + outdata = PyUnicode_DATA(res);
> + outkind = PyUnicode_KIND(res);
> + switch (outkind) {
> + case PyUnicode_1BYTE_KIND:
> + _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS1, tmp, tmpend, outdata);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + _PyUnicode_CONVERT_BYTES(Py_UCS4, Py_UCS2, tmp, tmpend, outdata);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + memcpy(outdata, tmp, sizeof(Py_UCS4) * newlength);
> + break;
> + default:
> + assert(0);
> + break;
> + }
> + leave:
> + PyMem_FREE(tmp);
> + return res;
> +}
> +
> +PyObject *
> +PyUnicode_Join(PyObject *separator, PyObject *seq)
> +{
> + PyObject *res;
> + PyObject *fseq;
> + Py_ssize_t seqlen;
> + PyObject **items;
> +
> + fseq = PySequence_Fast(seq, "can only join an iterable");
> + if (fseq == NULL) {
> + return NULL;
> + }
> +
> + /* NOTE: the following code can't call back into Python code,
> + * so we are sure that fseq won't be mutated.
> + */
> +
> + items = PySequence_Fast_ITEMS(fseq);
> + seqlen = PySequence_Fast_GET_SIZE(fseq);
> + res = _PyUnicode_JoinArray(separator, items, seqlen);
> + Py_DECREF(fseq);
> + return res;
> +}
> +
> +PyObject *
> +_PyUnicode_JoinArray(PyObject *separator, PyObject **items, Py_ssize_t seqlen)
> +{
> + PyObject *res = NULL; /* the result */
> + PyObject *sep = NULL;
> + Py_ssize_t seplen;
> + PyObject *item;
> + Py_ssize_t sz, i, res_offset;
> + Py_UCS4 maxchar;
> + Py_UCS4 item_maxchar;
> + int use_memcpy;
> + unsigned char *res_data = NULL, *sep_data = NULL;
> + PyObject *last_obj;
> + unsigned int kind = 0;
> +
> + /* If empty sequence, return u"". */
> + if (seqlen == 0) {
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + /* If singleton sequence with an exact Unicode, return that. */
> + last_obj = NULL;
> + if (seqlen == 1) {
> + if (PyUnicode_CheckExact(items[0])) {
> + res = items[0];
> + Py_INCREF(res);
> + return res;
> + }
> + seplen = 0;
> + maxchar = 0;
> + }
> + else {
> + /* Set up sep and seplen */
> + if (separator == NULL) {
> + /* fall back to a blank space separator */
> + sep = PyUnicode_FromOrdinal(' ');
> + if (!sep)
> + goto onError;
> + seplen = 1;
> + maxchar = 32;
> + }
> + else {
> + if (!PyUnicode_Check(separator)) {
> + PyErr_Format(PyExc_TypeError,
> + "separator: expected str instance,"
> + " %.80s found",
> + Py_TYPE(separator)->tp_name);
> + goto onError;
> + }
> + if (PyUnicode_READY(separator))
> + goto onError;
> + sep = separator;
> + seplen = PyUnicode_GET_LENGTH(separator);
> + maxchar = PyUnicode_MAX_CHAR_VALUE(separator);
> + /* inc refcount to keep this code path symmetric with the
> + above case of a blank separator */
> + Py_INCREF(sep);
> + }
> + last_obj = sep;
> + }
> +
> + /* There are at least two things to join, or else we have a subclass
> + * of str in the sequence.
> + * Do a pre-pass to figure out the total amount of space we'll
> + * need (sz), and see whether all arguments are strings.
> + */
> + sz = 0;
> +#ifdef Py_DEBUG
> + use_memcpy = 0;
> +#else
> + use_memcpy = 1;
> +#endif
> + for (i = 0; i < seqlen; i++) {
> + size_t add_sz;
> + item = items[i];
> + if (!PyUnicode_Check(item)) {
> + PyErr_Format(PyExc_TypeError,
> + "sequence item %zd: expected str instance,"
> + " %.80s found",
> + i, Py_TYPE(item)->tp_name);
> + goto onError;
> + }
> + if (PyUnicode_READY(item) == -1)
> + goto onError;
> + add_sz = PyUnicode_GET_LENGTH(item);
> + item_maxchar = PyUnicode_MAX_CHAR_VALUE(item);
> + maxchar = Py_MAX(maxchar, item_maxchar);
> + if (i != 0) {
> + add_sz += seplen;
> + }
> + if (add_sz > (size_t)(PY_SSIZE_T_MAX - sz)) {
> + PyErr_SetString(PyExc_OverflowError,
> + "join() result is too long for a Python string");
> + goto onError;
> + }
> + sz += add_sz;
> + if (use_memcpy && last_obj != NULL) {
> + if (PyUnicode_KIND(last_obj) != PyUnicode_KIND(item))
> + use_memcpy = 0;
> + }
> + last_obj = item;
> + }
> +
> + res = PyUnicode_New(sz, maxchar);
> + if (res == NULL)
> + goto onError;
> +
> + /* Catenate everything. */
> +#ifdef Py_DEBUG
> + use_memcpy = 0;
> +#else
> + if (use_memcpy) {
> + res_data = PyUnicode_1BYTE_DATA(res);
> + kind = PyUnicode_KIND(res);
> + if (seplen != 0)
> + sep_data = PyUnicode_1BYTE_DATA(sep);
> + }
> +#endif
> + if (use_memcpy) {
> + for (i = 0; i < seqlen; ++i) {
> + Py_ssize_t itemlen;
> + item = items[i];
> +
> + /* Copy item, and maybe the separator. */
> + if (i && seplen != 0) {
> + memcpy(res_data,
> + sep_data,
> + kind * seplen);
> + res_data += kind * seplen;
> + }
> +
> + itemlen = PyUnicode_GET_LENGTH(item);
> + if (itemlen != 0) {
> + memcpy(res_data,
> + PyUnicode_DATA(item),
> + kind * itemlen);
> + res_data += kind * itemlen;
> + }
> + }
> + assert(res_data == PyUnicode_1BYTE_DATA(res)
> + + kind * PyUnicode_GET_LENGTH(res));
> + }
> + else {
> + for (i = 0, res_offset = 0; i < seqlen; ++i) {
> + Py_ssize_t itemlen;
> + item = items[i];
> +
> + /* Copy item, and maybe the separator. */
> + if (i && seplen != 0) {
> + _PyUnicode_FastCopyCharacters(res, res_offset, sep, 0, seplen);
> + res_offset += seplen;
> + }
> +
> + itemlen = PyUnicode_GET_LENGTH(item);
> + if (itemlen != 0) {
> + _PyUnicode_FastCopyCharacters(res, res_offset, item, 0, itemlen);
> + res_offset += itemlen;
> + }
> + }
> + assert(res_offset == PyUnicode_GET_LENGTH(res));
> + }
> +
> + Py_XDECREF(sep);
> + assert(_PyUnicode_CheckConsistency(res, 1));
> + return res;
> +
> + onError:
> + Py_XDECREF(sep);
> + Py_XDECREF(res);
> + return NULL;
> +}
> +
> +void
> +_PyUnicode_FastFill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length,
> + Py_UCS4 fill_char)
> +{
> + const enum PyUnicode_Kind kind = PyUnicode_KIND(unicode);
> + void *data = PyUnicode_DATA(unicode);
> + assert(PyUnicode_IS_READY(unicode));
> + assert(unicode_modifiable(unicode));
> + assert(fill_char <= PyUnicode_MAX_CHAR_VALUE(unicode));
> + assert(start >= 0);
> + assert(start + length <= PyUnicode_GET_LENGTH(unicode));
> + FILL(kind, data, fill_char, start, length);
> +}
> +
> +Py_ssize_t
> +PyUnicode_Fill(PyObject *unicode, Py_ssize_t start, Py_ssize_t length,
> + Py_UCS4 fill_char)
> +{
> + Py_ssize_t maxlen;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadInternalCall();
> + return -1;
> + }
> + if (PyUnicode_READY(unicode) == -1)
> + return -1;
> + if (unicode_check_modifiable(unicode))
> + return -1;
> +
> + if (start < 0) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return -1;
> + }
> + if (fill_char > PyUnicode_MAX_CHAR_VALUE(unicode)) {
> + PyErr_SetString(PyExc_ValueError,
> + "fill character is bigger than "
> + "the string maximum character");
> + return -1;
> + }
> +
> + maxlen = PyUnicode_GET_LENGTH(unicode) - start;
> + length = Py_MIN(maxlen, length);
> + if (length <= 0)
> + return 0;
> +
> + _PyUnicode_FastFill(unicode, start, length, fill_char);
> + return length;
> +}
> +
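> +/* Return a copy of self with 'left' copies of the fill character prepended
> + and 'right' copies appended; negative counts are treated as zero. */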
> +static PyObject *
> +pad(PyObject *self,
> + Py_ssize_t left,
> + Py_ssize_t right,
> + Py_UCS4 fill)
> +{
> + PyObject *u;
> + Py_UCS4 maxchar;
> + int kind;
> + void *data;
> +
> + if (left < 0)
> + left = 0;
> + if (right < 0)
> + right = 0;
> +
> + if (left == 0 && right == 0)
> + return unicode_result_unchanged(self);
> +
> + if (left > PY_SSIZE_T_MAX - _PyUnicode_LENGTH(self) ||
> + right > PY_SSIZE_T_MAX - (left + _PyUnicode_LENGTH(self))) {
> + PyErr_SetString(PyExc_OverflowError, "padded string is too long");
> + return NULL;
> + }
> + maxchar = PyUnicode_MAX_CHAR_VALUE(self);
> + maxchar = Py_MAX(maxchar, fill);
> + u = PyUnicode_New(left + _PyUnicode_LENGTH(self) + right, maxchar);
> + if (!u)
> + return NULL;
> +
> + kind = PyUnicode_KIND(u);
> + data = PyUnicode_DATA(u);
> + if (left)
> + FILL(kind, data, fill, 0, left);
> + if (right)
> + FILL(kind, data, fill, left + _PyUnicode_LENGTH(self), right);
> + _PyUnicode_FastCopyCharacters(u, left, self, 0, _PyUnicode_LENGTH(self));
> + assert(_PyUnicode_CheckConsistency(u, 1));
> + return u;
> +}
> +
> +PyObject *
> +PyUnicode_Splitlines(PyObject *string, int keepends)
> +{
> + PyObject *list;
> +
> + if (ensure_unicode(string) < 0)
> + return NULL;
> +
> + switch (PyUnicode_KIND(string)) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(string))
> + list = asciilib_splitlines(
> + string, PyUnicode_1BYTE_DATA(string),
> + PyUnicode_GET_LENGTH(string), keepends);
> + else
> + list = ucs1lib_splitlines(
> + string, PyUnicode_1BYTE_DATA(string),
> + PyUnicode_GET_LENGTH(string), keepends);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + list = ucs2lib_splitlines(
> + string, PyUnicode_2BYTE_DATA(string),
> + PyUnicode_GET_LENGTH(string), keepends);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + list = ucs4lib_splitlines(
> + string, PyUnicode_4BYTE_DATA(string),
> + PyUnicode_GET_LENGTH(string), keepends);
> + break;
> + default:
> + assert(0);
> + list = 0;
> + }
> + return list;
> +}
> +
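> +/* str.split() implementation: a NULL substring means split on runs of
> + whitespace, otherwise split on the given separator, doing at most
> + maxcount splits. */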
> +static PyObject *
> +split(PyObject *self,
> + PyObject *substring,
> + Py_ssize_t maxcount)
> +{
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2;
> + PyObject* out;
> +
> + if (maxcount < 0)
> + maxcount = PY_SSIZE_T_MAX;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (substring == NULL)
> + switch (PyUnicode_KIND(self)) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(self))
> + return asciilib_split_whitespace(
> + self, PyUnicode_1BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + else
> + return ucs1lib_split_whitespace(
> + self, PyUnicode_1BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_split_whitespace(
> + self, PyUnicode_2BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_split_whitespace(
> + self, PyUnicode_4BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + default:
> + assert(0);
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(substring) == -1)
> + return NULL;
> +
> + kind1 = PyUnicode_KIND(self);
> + kind2 = PyUnicode_KIND(substring);
> + len1 = PyUnicode_GET_LENGTH(self);
> + len2 = PyUnicode_GET_LENGTH(substring);
> + if (kind1 < kind2 || len1 < len2) {
> + out = PyList_New(1);
> + if (out == NULL)
> + return NULL;
> + Py_INCREF(self);
> + PyList_SET_ITEM(out, 0, self);
> + return out;
> + }
> + buf1 = PyUnicode_DATA(self);
> + buf2 = PyUnicode_DATA(substring);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(substring, kind1);
> + if (!buf2)
> + return NULL;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(self) && PyUnicode_IS_ASCII(substring))
> + out = asciilib_split(
> + self, buf1, len1, buf2, len2, maxcount);
> + else
> + out = ucs1lib_split(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + out = ucs2lib_split(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + out = ucs4lib_split(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + default:
> + out = NULL;
> + }
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> + return out;
> +}
> +
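> +/* str.rsplit() implementation: like split(), but the splits are made from
> + the right end of the string. */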
> +static PyObject *
> +rsplit(PyObject *self,
> + PyObject *substring,
> + Py_ssize_t maxcount)
> +{
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2;
> + PyObject* out;
> +
> + if (maxcount < 0)
> + maxcount = PY_SSIZE_T_MAX;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (substring == NULL)
> + switch (PyUnicode_KIND(self)) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(self))
> + return asciilib_rsplit_whitespace(
> + self, PyUnicode_1BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + else
> + return ucs1lib_rsplit_whitespace(
> + self, PyUnicode_1BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_rsplit_whitespace(
> + self, PyUnicode_2BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_rsplit_whitespace(
> + self, PyUnicode_4BYTE_DATA(self),
> + PyUnicode_GET_LENGTH(self), maxcount
> + );
> + default:
> + assert(0);
> + return NULL;
> + }
> +
> + if (PyUnicode_READY(substring) == -1)
> + return NULL;
> +
> + kind1 = PyUnicode_KIND(self);
> + kind2 = PyUnicode_KIND(substring);
> + len1 = PyUnicode_GET_LENGTH(self);
> + len2 = PyUnicode_GET_LENGTH(substring);
> + if (kind1 < kind2 || len1 < len2) {
> + out = PyList_New(1);
> + if (out == NULL)
> + return NULL;
> + Py_INCREF(self);
> + PyList_SET_ITEM(out, 0, self);
> + return out;
> + }
> + buf1 = PyUnicode_DATA(self);
> + buf2 = PyUnicode_DATA(substring);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(substring, kind1);
> + if (!buf2)
> + return NULL;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(self) && PyUnicode_IS_ASCII(substring))
> + out = asciilib_rsplit(
> + self, buf1, len1, buf2, len2, maxcount);
> + else
> + out = ucs1lib_rsplit(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + out = ucs2lib_rsplit(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + out = ucs4lib_rsplit(
> + self, buf1, len1, buf2, len2, maxcount);
> + break;
> + default:
> + out = NULL;
> + }
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> + return out;
> +}
> +
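> +/* Dispatch a raw-buffer substring search to the stringlib variant matching
> + 'kind'; returns the index of the first match or -1 if there is none. */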
> +static Py_ssize_t
> +anylib_find(int kind, PyObject *str1, void *buf1, Py_ssize_t len1,
> + PyObject *str2, void *buf2, Py_ssize_t len2, Py_ssize_t offset)
> +{
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(str1) && PyUnicode_IS_ASCII(str2))
> + return asciilib_find(buf1, len1, buf2, len2, offset);
> + else
> + return ucs1lib_find(buf1, len1, buf2, len2, offset);
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_find(buf1, len1, buf2, len2, offset);
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_find(buf1, len1, buf2, len2, offset);
> + }
> + assert(0);
> + return -1;
> +}
> +
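> +/* Like anylib_find(), but count non-overlapping occurrences of the substring,
> + up to maxcount. */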
> +static Py_ssize_t
> +anylib_count(int kind, PyObject *sstr, void* sbuf, Py_ssize_t slen,
> + PyObject *str1, void *buf1, Py_ssize_t len1, Py_ssize_t maxcount)
> +{
> + switch (kind) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(sstr) && PyUnicode_IS_ASCII(str1))
> + return asciilib_count(sbuf, slen, buf1, len1, maxcount);
> + else
> + return ucs1lib_count(sbuf, slen, buf1, len1, maxcount);
> + case PyUnicode_2BYTE_KIND:
> + return ucs2lib_count(sbuf, slen, buf1, len1, maxcount);
> + case PyUnicode_4BYTE_KIND:
> + return ucs4lib_count(sbuf, slen, buf1, len1, maxcount);
> + }
> + assert(0);
> + return 0;
> +}
> +
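> +/* Replace occurrences of the single character u1 with u2 directly in the
> + buffer of u, starting at position pos and performing at most maxcount
> + replacements. */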
> +static void
> +replace_1char_inplace(PyObject *u, Py_ssize_t pos,
> + Py_UCS4 u1, Py_UCS4 u2, Py_ssize_t maxcount)
> +{
> + int kind = PyUnicode_KIND(u);
> + void *data = PyUnicode_DATA(u);
> + Py_ssize_t len = PyUnicode_GET_LENGTH(u);
> + if (kind == PyUnicode_1BYTE_KIND) {
> + ucs1lib_replace_1char_inplace((Py_UCS1 *)data + pos,
> + (Py_UCS1 *)data + len,
> + u1, u2, maxcount);
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + ucs2lib_replace_1char_inplace((Py_UCS2 *)data + pos,
> + (Py_UCS2 *)data + len,
> + u1, u2, maxcount);
> + }
> + else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + ucs4lib_replace_1char_inplace((Py_UCS4 *)data + pos,
> + (Py_UCS4 *)data + len,
> + u1, u2, maxcount);
> + }
> +}
> +
> +static PyObject *
> +replace(PyObject *self, PyObject *str1,
> + PyObject *str2, Py_ssize_t maxcount)
> +{
> + PyObject *u;
> + char *sbuf = PyUnicode_DATA(self);
> + char *buf1 = PyUnicode_DATA(str1);
> + char *buf2 = PyUnicode_DATA(str2);
> + int srelease = 0, release1 = 0, release2 = 0;
> + int skind = PyUnicode_KIND(self);
> + int kind1 = PyUnicode_KIND(str1);
> + int kind2 = PyUnicode_KIND(str2);
> + Py_ssize_t slen = PyUnicode_GET_LENGTH(self);
> + Py_ssize_t len1 = PyUnicode_GET_LENGTH(str1);
> + Py_ssize_t len2 = PyUnicode_GET_LENGTH(str2);
> + int mayshrink;
> + Py_UCS4 maxchar, maxchar_str1, maxchar_str2;
> +
> + if (maxcount < 0)
> + maxcount = PY_SSIZE_T_MAX;
> + else if (maxcount == 0 || slen == 0)
> + goto nothing;
> +
> + if (str1 == str2)
> + goto nothing;
> +
> + maxchar = PyUnicode_MAX_CHAR_VALUE(self);
> + maxchar_str1 = PyUnicode_MAX_CHAR_VALUE(str1);
> + if (maxchar < maxchar_str1)
> + /* substring too wide to be present */
> + goto nothing;
> + maxchar_str2 = PyUnicode_MAX_CHAR_VALUE(str2);
> + /* Replacing str1 with str2 may cause a maxchar reduction in the
> + result string. */
> + mayshrink = (maxchar_str2 < maxchar_str1) && (maxchar == maxchar_str1);
> + maxchar = Py_MAX(maxchar, maxchar_str2);
> +
> + if (len1 == len2) {
> + /* same length */
> + if (len1 == 0)
> + goto nothing;
> + if (len1 == 1) {
> + /* replace characters */
> + Py_UCS4 u1, u2;
> + Py_ssize_t pos;
> +
> + u1 = PyUnicode_READ(kind1, buf1, 0);
> + pos = findchar(sbuf, skind, slen, u1, 1);
> + if (pos < 0)
> + goto nothing;
> + u2 = PyUnicode_READ(kind2, buf2, 0);
> + u = PyUnicode_New(slen, maxchar);
> + if (!u)
> + goto error;
> +
> + _PyUnicode_FastCopyCharacters(u, 0, self, 0, slen);
> + replace_1char_inplace(u, pos, u1, u2, maxcount);
> + }
> + else {
> + int rkind = skind;
> + char *res;
> + Py_ssize_t i;
> +
> + if (kind1 < rkind) {
> + /* widen substring */
> + buf1 = _PyUnicode_AsKind(str1, rkind);
> + if (!buf1) goto error;
> + release1 = 1;
> + }
> + i = anylib_find(rkind, self, sbuf, slen, str1, buf1, len1, 0);
> + if (i < 0)
> + goto nothing;
> + if (rkind > kind2) {
> + /* widen replacement */
> + buf2 = _PyUnicode_AsKind(str2, rkind);
> + if (!buf2) goto error;
> + release2 = 1;
> + }
> + else if (rkind < kind2) {
> + /* widen self and buf1 */
> + rkind = kind2;
> + if (release1) PyMem_Free(buf1);
> + release1 = 0;
> + sbuf = _PyUnicode_AsKind(self, rkind);
> + if (!sbuf) goto error;
> + srelease = 1;
> + buf1 = _PyUnicode_AsKind(str1, rkind);
> + if (!buf1) goto error;
> + release1 = 1;
> + }
> + u = PyUnicode_New(slen, maxchar);
> + if (!u)
> + goto error;
> + assert(PyUnicode_KIND(u) == rkind);
> + res = PyUnicode_DATA(u);
> +
> + memcpy(res, sbuf, rkind * slen);
> + /* change everything in-place, starting with this one */
> + memcpy(res + rkind * i,
> + buf2,
> + rkind * len2);
> + i += len1;
> +
> + while ( --maxcount > 0) {
> + i = anylib_find(rkind, self,
> + sbuf+rkind*i, slen-i,
> + str1, buf1, len1, i);
> + if (i == -1)
> + break;
> + memcpy(res + rkind * i,
> + buf2,
> + rkind * len2);
> + i += len1;
> + }
> + }
> + }
> + else {
> + Py_ssize_t n, i, j, ires;
> + Py_ssize_t new_size;
> + int rkind = skind;
> + char *res;
> +
> + if (kind1 < rkind) {
> + /* widen substring */
> + buf1 = _PyUnicode_AsKind(str1, rkind);
> + if (!buf1) goto error;
> + release1 = 1;
> + }
> + n = anylib_count(rkind, self, sbuf, slen, str1, buf1, len1, maxcount);
> + if (n == 0)
> + goto nothing;
> + if (kind2 < rkind) {
> + /* widen replacement */
> + buf2 = _PyUnicode_AsKind(str2, rkind);
> + if (!buf2) goto error;
> + release2 = 1;
> + }
> + else if (kind2 > rkind) {
> + /* widen self and buf1 */
> + rkind = kind2;
> + sbuf = _PyUnicode_AsKind(self, rkind);
> + if (!sbuf) goto error;
> + srelease = 1;
> + if (release1) PyMem_Free(buf1);
> + release1 = 0;
> + buf1 = _PyUnicode_AsKind(str1, rkind);
> + if (!buf1) goto error;
> + release1 = 1;
> + }
> + /* new_size = PyUnicode_GET_LENGTH(self) + n * (PyUnicode_GET_LENGTH(str2) -
> + PyUnicode_GET_LENGTH(str1)); */
> + if (len1 < len2 && len2 - len1 > (PY_SSIZE_T_MAX - slen) / n) {
> + PyErr_SetString(PyExc_OverflowError,
> + "replace string is too long");
> + goto error;
> + }
> + new_size = slen + n * (len2 - len1);
> + if (new_size == 0) {
> + _Py_INCREF_UNICODE_EMPTY();
> + if (!unicode_empty)
> + goto error;
> + u = unicode_empty;
> + goto done;
> + }
> + if (new_size > (PY_SSIZE_T_MAX / rkind)) {
> + PyErr_SetString(PyExc_OverflowError,
> + "replace string is too long");
> + goto error;
> + }
> + u = PyUnicode_New(new_size, maxchar);
> + if (!u)
> + goto error;
> + assert(PyUnicode_KIND(u) == rkind);
> + res = PyUnicode_DATA(u);
> + ires = i = 0;
> + if (len1 > 0) {
> + while (n-- > 0) {
> + /* look for next match */
> + j = anylib_find(rkind, self,
> + sbuf + rkind * i, slen-i,
> + str1, buf1, len1, i);
> + if (j == -1)
> + break;
> + else if (j > i) {
> + /* copy unchanged part [i:j] */
> + memcpy(res + rkind * ires,
> + sbuf + rkind * i,
> + rkind * (j-i));
> + ires += j - i;
> + }
> + /* copy substitution string */
> + if (len2 > 0) {
> + memcpy(res + rkind * ires,
> + buf2,
> + rkind * len2);
> + ires += len2;
> + }
> + i = j + len1;
> + }
> + if (i < slen)
> + /* copy tail [i:] */
> + memcpy(res + rkind * ires,
> + sbuf + rkind * i,
> + rkind * (slen-i));
> + }
> + else {
> + /* interleave */
> + while (n > 0) {
> + memcpy(res + rkind * ires,
> + buf2,
> + rkind * len2);
> + ires += len2;
> + if (--n <= 0)
> + break;
> + memcpy(res + rkind * ires,
> + sbuf + rkind * i,
> + rkind);
> + ires++;
> + i++;
> + }
> + memcpy(res + rkind * ires,
> + sbuf + rkind * i,
> + rkind * (slen-i));
> + }
> + }
> +
> + if (mayshrink) {
> + unicode_adjust_maxchar(&u);
> + if (u == NULL)
> + goto error;
> + }
> +
> + done:
> + if (srelease)
> + PyMem_FREE(sbuf);
> + if (release1)
> + PyMem_FREE(buf1);
> + if (release2)
> + PyMem_FREE(buf2);
> + assert(_PyUnicode_CheckConsistency(u, 1));
> + return u;
> +
> + nothing:
> + /* nothing to replace; return original string (when possible) */
> + if (srelease)
> + PyMem_FREE(sbuf);
> + if (release1)
> + PyMem_FREE(buf1);
> + if (release2)
> + PyMem_FREE(buf2);
> + return unicode_result_unchanged(self);
> +
> + error:
> + if (srelease && sbuf)
> + PyMem_FREE(sbuf);
> + if (release1 && buf1)
> + PyMem_FREE(buf1);
> + if (release2 && buf2)
> + PyMem_FREE(buf2);
> + return NULL;
> +}
> +
> +/* --- Unicode Object Methods --------------------------------------------- */
> +
> +PyDoc_STRVAR(title__doc__,
> + "S.title() -> str\n\
> +\n\
> +Return a titlecased version of S, i.e. words start with title case\n\
> +characters, all remaining cased characters have lower case.");
> +
> +static PyObject*
> +unicode_title(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + return case_operation(self, do_title);
> +}
> +
> +PyDoc_STRVAR(capitalize__doc__,
> + "S.capitalize() -> str\n\
> +\n\
> +Return a capitalized version of S, i.e. make the first character\n\
> +have upper case and the rest lower case.");
> +
> +static PyObject*
> +unicode_capitalize(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + if (PyUnicode_GET_LENGTH(self) == 0)
> + return unicode_result_unchanged(self);
> + return case_operation(self, do_capitalize);
> +}
> +
> +PyDoc_STRVAR(casefold__doc__,
> + "S.casefold() -> str\n\
> +\n\
> +Return a version of S suitable for caseless comparisons.");
> +
> +static PyObject *
> +unicode_casefold(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + if (PyUnicode_IS_ASCII(self))
> + return ascii_upper_or_lower(self, 1);
> + return case_operation(self, do_casefold);
> +}
> +
> +
> +/* Argument converter. Accepts a single Unicode character. */
> +
> +static int
> +convert_uc(PyObject *obj, void *addr)
> +{
> + Py_UCS4 *fillcharloc = (Py_UCS4 *)addr;
> +
> + if (!PyUnicode_Check(obj)) {
> + PyErr_Format(PyExc_TypeError,
> + "The fill character must be a unicode character, "
> + "not %.100s", Py_TYPE(obj)->tp_name);
> + return 0;
> + }
> + if (PyUnicode_READY(obj) < 0)
> + return 0;
> + if (PyUnicode_GET_LENGTH(obj) != 1) {
> + PyErr_SetString(PyExc_TypeError,
> + "The fill character must be exactly one character long");
> + return 0;
> + }
> + *fillcharloc = PyUnicode_READ_CHAR(obj, 0);
> + return 1;
> +}
> +
> +PyDoc_STRVAR(center__doc__,
> + "S.center(width[, fillchar]) -> str\n\
> +\n\
> +Return S centered in a string of length width. Padding is\n\
> +done using the specified fill character (default is a space)");
> +
> +static PyObject *
> +unicode_center(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t marg, left;
> + Py_ssize_t width;
> + Py_UCS4 fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|O&:center", &width, convert_uc, &fillchar))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (PyUnicode_GET_LENGTH(self) >= width)
> + return unicode_result_unchanged(self);
> +
> + marg = width - PyUnicode_GET_LENGTH(self);
> + left = marg / 2 + (marg & width & 1);
> +
> + return pad(self, left, marg - left, fillchar);
> +}
> +
> +/* This function assumes that str1 and str2 are readied by the caller. */
> +
> +static int
> +unicode_compare(PyObject *str1, PyObject *str2)
> +{
> +#define COMPARE(TYPE1, TYPE2) \
> + do { \
> + TYPE1* p1 = (TYPE1 *)data1; \
> + TYPE2* p2 = (TYPE2 *)data2; \
> + TYPE1* end = p1 + len; \
> + Py_UCS4 c1, c2; \
> + for (; p1 != end; p1++, p2++) { \
> + c1 = *p1; \
> + c2 = *p2; \
> + if (c1 != c2) \
> + return (c1 < c2) ? -1 : 1; \
> + } \
> + } \
> + while (0)
> +
> + int kind1, kind2;
> + void *data1, *data2;
> + Py_ssize_t len1, len2, len;
> +
> + kind1 = PyUnicode_KIND(str1);
> + kind2 = PyUnicode_KIND(str2);
> + data1 = PyUnicode_DATA(str1);
> + data2 = PyUnicode_DATA(str2);
> + len1 = PyUnicode_GET_LENGTH(str1);
> + len2 = PyUnicode_GET_LENGTH(str2);
> + len = Py_MIN(len1, len2);
> +
> + switch(kind1) {
> + case PyUnicode_1BYTE_KIND:
> + {
> + switch(kind2) {
> + case PyUnicode_1BYTE_KIND:
> + {
> + int cmp = memcmp(data1, data2, len);
> + /* normalize result of memcmp() into the range [-1; 1] */
> + if (cmp < 0)
> + return -1;
> + if (cmp > 0)
> + return 1;
> + break;
> + }
> + case PyUnicode_2BYTE_KIND:
> + COMPARE(Py_UCS1, Py_UCS2);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + COMPARE(Py_UCS1, Py_UCS4);
> + break;
> + default:
> + assert(0);
> + }
> + break;
> + }
> + case PyUnicode_2BYTE_KIND:
> + {
> + switch(kind2) {
> + case PyUnicode_1BYTE_KIND:
> + COMPARE(Py_UCS2, Py_UCS1);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + {
> + COMPARE(Py_UCS2, Py_UCS2);
> + break;
> + }
> + case PyUnicode_4BYTE_KIND:
> + COMPARE(Py_UCS2, Py_UCS4);
> + break;
> + default:
> + assert(0);
> + }
> + break;
> + }
> + case PyUnicode_4BYTE_KIND:
> + {
> + switch(kind2) {
> + case PyUnicode_1BYTE_KIND:
> + COMPARE(Py_UCS4, Py_UCS1);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + COMPARE(Py_UCS4, Py_UCS2);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + {
> +#if defined(HAVE_WMEMCMP) && SIZEOF_WCHAR_T == 4
> + int cmp = wmemcmp((wchar_t *)data1, (wchar_t *)data2, len);
> + /* normalize result of wmemcmp() into the range [-1; 1] */
> + if (cmp < 0)
> + return -1;
> + if (cmp > 0)
> + return 1;
> +#else
> + COMPARE(Py_UCS4, Py_UCS4);
> +#endif
> + break;
> + }
> + default:
> + assert(0);
> + }
> + break;
> + }
> + default:
> + assert(0);
> + }
> +
> + if (len1 == len2)
> + return 0;
> + if (len1 < len2)
> + return -1;
> + else
> + return 1;
> +
> +#undef COMPARE
> +}
> +
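> +/* Equality-only comparison: two ready strings are equal exactly when their
> + length, kind and raw data match. */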
> +static int
> +unicode_compare_eq(PyObject *str1, PyObject *str2)
> +{
> + int kind;
> + void *data1, *data2;
> + Py_ssize_t len;
> + int cmp;
> +
> + len = PyUnicode_GET_LENGTH(str1);
> + if (PyUnicode_GET_LENGTH(str2) != len)
> + return 0;
> + kind = PyUnicode_KIND(str1);
> + if (PyUnicode_KIND(str2) != kind)
> + return 0;
> + data1 = PyUnicode_DATA(str1);
> + data2 = PyUnicode_DATA(str2);
> +
> + cmp = memcmp(data1, data2, len * kind);
> + return (cmp == 0);
> +}
> +
> +
> +int
> +PyUnicode_Compare(PyObject *left, PyObject *right)
> +{
> + if (PyUnicode_Check(left) && PyUnicode_Check(right)) {
> + if (PyUnicode_READY(left) == -1 ||
> + PyUnicode_READY(right) == -1)
> + return -1;
> +
> + /* a string is equal to itself */
> + if (left == right)
> + return 0;
> +
> + return unicode_compare(left, right);
> + }
> + PyErr_Format(PyExc_TypeError,
> + "Can't compare %.100s and %.100s",
> + left->ob_type->tp_name,
> + right->ob_type->tp_name);
> + return -1;
> +}
> +
> +int
> +PyUnicode_CompareWithASCIIString(PyObject* uni, const char* str)
> +{
> + Py_ssize_t i;
> + int kind;
> + Py_UCS4 chr;
> + const unsigned char *ustr = (const unsigned char *)str;
> +
> + assert(_PyUnicode_CHECK(uni));
> + if (!PyUnicode_IS_READY(uni)) {
> + const wchar_t *ws = _PyUnicode_WSTR(uni);
> + /* Compare Unicode string and source character set string */
> + for (i = 0; (chr = ws[i]) && ustr[i]; i++) {
> + if (chr != ustr[i])
> + return (chr < ustr[i]) ? -1 : 1;
> + }
> + /* This check keeps Python strings that end in '\0' from comparing equal
> + to C strings identical up to that point. */
> + if (_PyUnicode_WSTR_LENGTH(uni) != i || chr)
> + return 1; /* uni is longer */
> + if (ustr[i])
> + return -1; /* str is longer */
> + return 0;
> + }
> + kind = PyUnicode_KIND(uni);
> + if (kind == PyUnicode_1BYTE_KIND) {
> + const void *data = PyUnicode_1BYTE_DATA(uni);
> + size_t len1 = (size_t)PyUnicode_GET_LENGTH(uni);
> + size_t len, len2 = strlen(str);
> + int cmp;
> +
> + len = Py_MIN(len1, len2);
> + cmp = memcmp(data, str, len);
> + if (cmp != 0) {
> + if (cmp < 0)
> + return -1;
> + else
> + return 1;
> + }
> + if (len1 > len2)
> + return 1; /* uni is longer */
> + if (len1 < len2)
> + return -1; /* str is longer */
> + return 0;
> + }
> + else {
> + void *data = PyUnicode_DATA(uni);
> + /* Compare Unicode string and source character set string */
> + for (i = 0; (chr = PyUnicode_READ(kind, data, i)) && str[i]; i++)
> + if (chr != (unsigned char)str[i])
> + return (chr < (unsigned char)(str[i])) ? -1 : 1;
> + /* This check keeps Python strings that end in '\0' from comparing equal
> + to C strings identical up to that point. */
> + if (PyUnicode_GET_LENGTH(uni) != i || chr)
> + return 1; /* uni is longer */
> + if (str[i])
> + return -1; /* str is longer */
> + return 0;
> + }
> +}
> +
> +static int
> +non_ready_unicode_equal_to_ascii_string(PyObject *unicode, const char *str)
> +{
> + size_t i, len;
> + const wchar_t *p;
> + len = (size_t)_PyUnicode_WSTR_LENGTH(unicode);
> + if (strlen(str) != len)
> + return 0;
> + p = _PyUnicode_WSTR(unicode);
> + assert(p);
> + for (i = 0; i < len; i++) {
> + unsigned char c = (unsigned char)str[i];
> + if (c >= 128 || p[i] != (wchar_t)c)
> + return 0;
> + }
> + return 1;
> +}
> +
> +int
> +_PyUnicode_EqualToASCIIString(PyObject *unicode, const char *str)
> +{
> + size_t len;
> + assert(_PyUnicode_CHECK(unicode));
> + assert(str);
> +#ifndef NDEBUG
> + for (const char *p = str; *p; p++) {
> + assert((unsigned char)*p < 128);
> + }
> +#endif
> + if (PyUnicode_READY(unicode) == -1) {
> + /* Memory error or bad data */
> + PyErr_Clear();
> + return non_ready_unicode_equal_to_ascii_string(unicode, str);
> + }
> + if (!PyUnicode_IS_ASCII(unicode))
> + return 0;
> + len = (size_t)PyUnicode_GET_LENGTH(unicode);
> + return strlen(str) == len &&
> + memcmp(PyUnicode_1BYTE_DATA(unicode), str, len) == 0;
> +}
> +
> +int
> +_PyUnicode_EqualToASCIIId(PyObject *left, _Py_Identifier *right)
> +{
> + PyObject *right_uni;
> + Py_hash_t hash;
> +
> + assert(_PyUnicode_CHECK(left));
> + assert(right->string);
> +#ifndef NDEBUG
> + for (const char *p = right->string; *p; p++) {
> + assert((unsigned char)*p < 128);
> + }
> +#endif
> +
> + if (PyUnicode_READY(left) == -1) {
> + /* memory error or bad data */
> + PyErr_Clear();
> + return non_ready_unicode_equal_to_ascii_string(left, right->string);
> + }
> +
> + if (!PyUnicode_IS_ASCII(left))
> + return 0;
> +
> + right_uni = _PyUnicode_FromId(right); /* borrowed */
> + if (right_uni == NULL) {
> + /* memory error or bad data */
> + PyErr_Clear();
> + return _PyUnicode_EqualToASCIIString(left, right->string);
> + }
> +
> + if (left == right_uni)
> + return 1;
> +
> + if (PyUnicode_CHECK_INTERNED(left))
> + return 0;
> +
> + assert(_PyUnicode_HASH(right_uni) != -1);
> + hash = _PyUnicode_HASH(left);
> + if (hash != -1 && hash != _PyUnicode_HASH(right_uni))
> + return 0;
> +
> + return unicode_compare_eq(left, right_uni);
> +}
> +
> +#define TEST_COND(cond) \
> + ((cond) ? Py_True : Py_False)
> +
> +PyObject *
> +PyUnicode_RichCompare(PyObject *left, PyObject *right, int op)
> +{
> + int result;
> + PyObject *v;
> +
> + if (!PyUnicode_Check(left) || !PyUnicode_Check(right))
> + Py_RETURN_NOTIMPLEMENTED;
> +
> + if (PyUnicode_READY(left) == -1 ||
> + PyUnicode_READY(right) == -1)
> + return NULL;
> +
> + if (left == right) {
> + switch (op) {
> + case Py_EQ:
> + case Py_LE:
> + case Py_GE:
> + /* a string is equal to itself */
> + v = Py_True;
> + break;
> + case Py_NE:
> + case Py_LT:
> + case Py_GT:
> + v = Py_False;
> + break;
> + default:
> + PyErr_BadArgument();
> + return NULL;
> + }
> + }
> + else if (op == Py_EQ || op == Py_NE) {
> + result = unicode_compare_eq(left, right);
> + result ^= (op == Py_NE);
> + v = TEST_COND(result);
> + }
> + else {
> + result = unicode_compare(left, right);
> +
> + /* Convert the return value to a Boolean */
> + switch (op) {
> + case Py_LE:
> + v = TEST_COND(result <= 0);
> + break;
> + case Py_GE:
> + v = TEST_COND(result >= 0);
> + break;
> + case Py_LT:
> + v = TEST_COND(result == -1);
> + break;
> + case Py_GT:
> + v = TEST_COND(result == 1);
> + break;
> + default:
> + PyErr_BadArgument();
> + return NULL;
> + }
> + }
> + Py_INCREF(v);
> + return v;
> +}
> +
> +int
> +_PyUnicode_EQ(PyObject *aa, PyObject *bb)
> +{
> + return unicode_eq(aa, bb);
> +}
> +
> +int
> +PyUnicode_Contains(PyObject *str, PyObject *substr)
> +{
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2;
> + int result;
> +
> + if (!PyUnicode_Check(substr)) {
> + PyErr_Format(PyExc_TypeError,
> + "'in <string>' requires string as left operand, not %.100s",
> + Py_TYPE(substr)->tp_name);
> + return -1;
> + }
> + if (PyUnicode_READY(substr) == -1)
> + return -1;
> + if (ensure_unicode(str) < 0)
> + return -1;
> +
> + kind1 = PyUnicode_KIND(str);
> + kind2 = PyUnicode_KIND(substr);
> + if (kind1 < kind2)
> + return 0;
> + len1 = PyUnicode_GET_LENGTH(str);
> + len2 = PyUnicode_GET_LENGTH(substr);
> + if (len1 < len2)
> + return 0;
> + buf1 = PyUnicode_DATA(str);
> + buf2 = PyUnicode_DATA(substr);
> + if (len2 == 1) {
> + Py_UCS4 ch = PyUnicode_READ(kind2, buf2, 0);
> + result = findchar((const char *)buf1, kind1, len1, ch, 1) != -1;
> + return result;
> + }
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(substr, kind1);
> + if (!buf2)
> + return -1;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + result = ucs1lib_find(buf1, len1, buf2, len2, 0) != -1;
> + break;
> + case PyUnicode_2BYTE_KIND:
> + result = ucs2lib_find(buf1, len1, buf2, len2, 0) != -1;
> + break;
> + case PyUnicode_4BYTE_KIND:
> + result = ucs4lib_find(buf1, len1, buf2, len2, 0) != -1;
> + break;
> + default:
> + result = -1;
> + assert(0);
> + }
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return result;
> +}
> +
> +/* Concatenate two Unicode objects, giving a new Unicode object. */
> +
> +PyObject *
> +PyUnicode_Concat(PyObject *left, PyObject *right)
> +{
> + PyObject *result;
> + Py_UCS4 maxchar, maxchar2;
> + Py_ssize_t left_len, right_len, new_len;
> +
> + if (ensure_unicode(left) < 0 || ensure_unicode(right) < 0)
> + return NULL;
> +
> + /* Shortcuts */
> + if (left == unicode_empty)
> + return PyUnicode_FromObject(right);
> + if (right == unicode_empty)
> + return PyUnicode_FromObject(left);
> +
> + left_len = PyUnicode_GET_LENGTH(left);
> + right_len = PyUnicode_GET_LENGTH(right);
> + if (left_len > PY_SSIZE_T_MAX - right_len) {
> + PyErr_SetString(PyExc_OverflowError,
> + "strings are too large to concat");
> + return NULL;
> + }
> + new_len = left_len + right_len;
> +
> + maxchar = PyUnicode_MAX_CHAR_VALUE(left);
> + maxchar2 = PyUnicode_MAX_CHAR_VALUE(right);
> + maxchar = Py_MAX(maxchar, maxchar2);
> +
> + /* Concat the two Unicode strings */
> + result = PyUnicode_New(new_len, maxchar);
> + if (result == NULL)
> + return NULL;
> + _PyUnicode_FastCopyCharacters(result, 0, left, 0, left_len);
> + _PyUnicode_FastCopyCharacters(result, left_len, right, 0, right_len);
> + assert(_PyUnicode_CheckConsistency(result, 1));
> + return result;
> +}
> +
> +void
> +PyUnicode_Append(PyObject **p_left, PyObject *right)
> +{
> + PyObject *left, *res;
> + Py_UCS4 maxchar, maxchar2;
> + Py_ssize_t left_len, right_len, new_len;
> +
> + if (p_left == NULL) {
> + if (!PyErr_Occurred())
> + PyErr_BadInternalCall();
> + return;
> + }
> + left = *p_left;
> + if (right == NULL || left == NULL
> + || !PyUnicode_Check(left) || !PyUnicode_Check(right)) {
> + if (!PyErr_Occurred())
> + PyErr_BadInternalCall();
> + goto error;
> + }
> +
> + if (PyUnicode_READY(left) == -1)
> + goto error;
> + if (PyUnicode_READY(right) == -1)
> + goto error;
> +
> + /* Shortcuts */
> + if (left == unicode_empty) {
> + Py_DECREF(left);
> + Py_INCREF(right);
> + *p_left = right;
> + return;
> + }
> + if (right == unicode_empty)
> + return;
> +
> + left_len = PyUnicode_GET_LENGTH(left);
> + right_len = PyUnicode_GET_LENGTH(right);
> + if (left_len > PY_SSIZE_T_MAX - right_len) {
> + PyErr_SetString(PyExc_OverflowError,
> + "strings are too large to concat");
> + goto error;
> + }
> + new_len = left_len + right_len;
> +
> + if (unicode_modifiable(left)
> + && PyUnicode_CheckExact(right)
> + && PyUnicode_KIND(right) <= PyUnicode_KIND(left)
> + /* Don't resize for ascii += latin1. Converting ascii to latin1 requires
> + changing the structure size, but characters are stored just after
> + the structure, so all characters would have to be moved, which is
> + not much different from duplicating the string. */
> + && !(PyUnicode_IS_ASCII(left) && !PyUnicode_IS_ASCII(right)))
> + {
> + /* append inplace */
> + if (unicode_resize(p_left, new_len) != 0)
> + goto error;
> +
> + /* copy 'right' into the newly allocated area of 'left' */
> + _PyUnicode_FastCopyCharacters(*p_left, left_len, right, 0, right_len);
> + }
> + else {
> + maxchar = PyUnicode_MAX_CHAR_VALUE(left);
> + maxchar2 = PyUnicode_MAX_CHAR_VALUE(right);
> + maxchar = Py_MAX(maxchar, maxchar2);
> +
> + /* Concat the two Unicode strings */
> + res = PyUnicode_New(new_len, maxchar);
> + if (res == NULL)
> + goto error;
> + _PyUnicode_FastCopyCharacters(res, 0, left, 0, left_len);
> + _PyUnicode_FastCopyCharacters(res, left_len, right, 0, right_len);
> + Py_DECREF(left);
> + *p_left = res;
> + }
> + assert(_PyUnicode_CheckConsistency(*p_left, 1));
> + return;
> +
> +error:
> + Py_CLEAR(*p_left);
> +}
> +
> +void
> +PyUnicode_AppendAndDel(PyObject **pleft, PyObject *right)
> +{
> + PyUnicode_Append(pleft, right);
> + Py_XDECREF(right);
> +}
> +
> +/*
> +Wraps stringlib_parse_args_finds() and additionally ensures that the
> +first argument is a unicode object.
> +*/
> +
> +static int
> +parse_args_finds_unicode(const char * function_name, PyObject *args,
> + PyObject **substring,
> + Py_ssize_t *start, Py_ssize_t *end)
> +{
> + if(stringlib_parse_args_finds(function_name, args, substring,
> + start, end)) {
> + if (ensure_unicode(*substring) < 0)
> + return 0;
> + return 1;
> + }
> + return 0;
> +}
> +
> +PyDoc_STRVAR(count__doc__,
> + "S.count(sub[, start[, end]]) -> int\n\
> +\n\
> +Return the number of non-overlapping occurrences of substring sub in\n\
> +string S[start:end]. Optional arguments start and end are\n\
> +interpreted as in slice notation.");
> +
> +static PyObject *
> +unicode_count(PyObject *self, PyObject *args)
> +{
> + PyObject *substring = NULL; /* initialize to fix a compiler warning */
> + Py_ssize_t start = 0;
> + Py_ssize_t end = PY_SSIZE_T_MAX;
> + PyObject *result;
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2, iresult;
> +
> + if (!parse_args_finds_unicode("count", args, &substring, &start, &end))
> + return NULL;
> +
> + kind1 = PyUnicode_KIND(self);
> + kind2 = PyUnicode_KIND(substring);
> + if (kind1 < kind2)
> + return PyLong_FromLong(0);
> +
> + len1 = PyUnicode_GET_LENGTH(self);
> + len2 = PyUnicode_GET_LENGTH(substring);
> + ADJUST_INDICES(start, end, len1);
> + if (end - start < len2)
> + return PyLong_FromLong(0);
> +
> + buf1 = PyUnicode_DATA(self);
> + buf2 = PyUnicode_DATA(substring);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(substring, kind1);
> + if (!buf2)
> + return NULL;
> + }
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + iresult = ucs1lib_count(
> + ((Py_UCS1*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + case PyUnicode_2BYTE_KIND:
> + iresult = ucs2lib_count(
> + ((Py_UCS2*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + case PyUnicode_4BYTE_KIND:
> + iresult = ucs4lib_count(
> + ((Py_UCS4*)buf1) + start, end - start,
> + buf2, len2, PY_SSIZE_T_MAX
> + );
> + break;
> + default:
> + assert(0); iresult = 0;
> + }
> +
> + result = PyLong_FromSsize_t(iresult);
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return result;
> +}
> +
> +PyDoc_STRVAR(encode__doc__,
> + "S.encode(encoding='utf-8', errors='strict') -> bytes\n\
> +\n\
> +Encode S using the codec registered for encoding. Default encoding\n\
> +is 'utf-8'. errors may be given to set a different error\n\
> +handling scheme. Default is 'strict' meaning that encoding errors raise\n\
> +a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and\n\
> +'xmlcharrefreplace' as well as any other name registered with\n\
> +codecs.register_error that can handle UnicodeEncodeErrors.");
> +
> +static PyObject *
> +unicode_encode(PyObject *self, PyObject *args, PyObject *kwargs)
> +{
> + static char *kwlist[] = {"encoding", "errors", 0};
> + char *encoding = NULL;
> + char *errors = NULL;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|ss:encode",
> + kwlist, &encoding, &errors))
> + return NULL;
> + return PyUnicode_AsEncodedString(self, encoding, errors);
> +}
> +
> +PyDoc_STRVAR(expandtabs__doc__,
> + "S.expandtabs(tabsize=8) -> str\n\
> +\n\
> +Return a copy of S where all tab characters are expanded using spaces.\n\
> +If tabsize is not given, a tab size of 8 characters is assumed.");
> +
> +static PyObject*
> +unicode_expandtabs(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + Py_ssize_t i, j, line_pos, src_len, incr;
> + Py_UCS4 ch;
> + PyObject *u;
> + void *src_data, *dest_data;
> + static char *kwlist[] = {"tabsize", 0};
> + int tabsize = 8;
> + int kind;
> + int found;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:expandtabs",
> + kwlist, &tabsize))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + /* First pass: determine size of output string */
> + src_len = PyUnicode_GET_LENGTH(self);
> + i = j = line_pos = 0;
> + kind = PyUnicode_KIND(self);
> + src_data = PyUnicode_DATA(self);
> + found = 0;
> + for (; i < src_len; i++) {
> + ch = PyUnicode_READ(kind, src_data, i);
> + if (ch == '\t') {
> + found = 1;
> + if (tabsize > 0) {
> + incr = tabsize - (line_pos % tabsize); /* cannot overflow */
> + if (j > PY_SSIZE_T_MAX - incr)
> + goto overflow;
> + line_pos += incr;
> + j += incr;
> + }
> + }
> + else {
> + if (j > PY_SSIZE_T_MAX - 1)
> + goto overflow;
> + line_pos++;
> + j++;
> + if (ch == '\n' || ch == '\r')
> + line_pos = 0;
> + }
> + }
> + if (!found)
> + return unicode_result_unchanged(self);
> +
> + /* Second pass: create output string and fill it */
> + u = PyUnicode_New(j, PyUnicode_MAX_CHAR_VALUE(self));
> + if (!u)
> + return NULL;
> + dest_data = PyUnicode_DATA(u);
> +
> + i = j = line_pos = 0;
> +
> + for (; i < src_len; i++) {
> + ch = PyUnicode_READ(kind, src_data, i);
> + if (ch == '\t') {
> + if (tabsize > 0) {
> + incr = tabsize - (line_pos % tabsize);
> + line_pos += incr;
> + FILL(kind, dest_data, ' ', j, incr);
> + j += incr;
> + }
> + }
> + else {
> + line_pos++;
> + PyUnicode_WRITE(kind, dest_data, j, ch);
> + j++;
> + if (ch == '\n' || ch == '\r')
> + line_pos = 0;
> + }
> + }
> + assert (j == PyUnicode_GET_LENGTH(u));
> + return unicode_result(u);
> +
> + overflow:
> + PyErr_SetString(PyExc_OverflowError, "new string is too long");
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(find__doc__,
> + "S.find(sub[, start[, end]]) -> int\n\
> +\n\
> +Return the lowest index in S where substring sub is found,\n\
> +such that sub is contained within S[start:end]. Optional\n\
> +arguments start and end are interpreted as in slice notation.\n\
> +\n\
> +Return -1 on failure.");
> +
> +static PyObject *
> +unicode_find(PyObject *self, PyObject *args)
> +{
> + /* initialize variables to prevent gcc warning */
> + PyObject *substring = NULL;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = 0;
> + Py_ssize_t result;
> +
> + if (!parse_args_finds_unicode("find", args, &substring, &start, &end))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + result = any_find_slice(self, substring, start, end, 1);
> +
> + if (result == -2)
> + return NULL;
> +
> + return PyLong_FromSsize_t(result);
> +}
> +
> +static PyObject *
> +unicode_getitem(PyObject *self, Py_ssize_t index)
> +{
> + void *data;
> + enum PyUnicode_Kind kind;
> + Py_UCS4 ch;
> +
> + if (!PyUnicode_Check(self)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + if (PyUnicode_READY(self) == -1) {
> + return NULL;
> + }
> + if (index < 0 || index >= PyUnicode_GET_LENGTH(self)) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return NULL;
> + }
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> + ch = PyUnicode_READ(kind, data, index);
> + return unicode_char(ch);
> +}
> +
> +/* Believe it or not, this produces the same value for ASCII strings
> + as bytes_hash(). */
> +static Py_hash_t
> +unicode_hash(PyObject *self)
> +{
> + Py_ssize_t len;
> + Py_uhash_t x; /* Unsigned for defined overflow behavior. */
> +
> +#ifdef Py_DEBUG
> + assert(_Py_HashSecret_Initialized);
> +#endif
> + if (_PyUnicode_HASH(self) != -1)
> + return _PyUnicode_HASH(self);
> + if (PyUnicode_READY(self) == -1)
> + return -1;
> + len = PyUnicode_GET_LENGTH(self);
> + /*
> + We make the hash of the empty string be 0, rather than using
> + (prefix ^ suffix), since this slightly obfuscates the hash secret
> + */
> + if (len == 0) {
> + _PyUnicode_HASH(self) = 0;
> + return 0;
> + }
> + x = _Py_HashBytes(PyUnicode_DATA(self),
> + PyUnicode_GET_LENGTH(self) * PyUnicode_KIND(self));
> + _PyUnicode_HASH(self) = x;
> + return x;
> +}
> +
> +PyDoc_STRVAR(index__doc__,
> + "S.index(sub[, start[, end]]) -> int\n\
> +\n\
> +Return the lowest index in S where substring sub is found, \n\
> +such that sub is contained within S[start:end]. Optional\n\
> +arguments start and end are interpreted as in slice notation.\n\
> +\n\
> +Raises ValueError when the substring is not found.");
> +
> +static PyObject *
> +unicode_index(PyObject *self, PyObject *args)
> +{
> + /* initialize variables to prevent gcc warning */
> + Py_ssize_t result;
> + PyObject *substring = NULL;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = 0;
> +
> + if (!parse_args_finds_unicode("index", args, &substring, &start, &end))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + result = any_find_slice(self, substring, start, end, 1);
> +
> + if (result == -2)
> + return NULL;
> +
> + if (result < 0) {
> + PyErr_SetString(PyExc_ValueError, "substring not found");
> + return NULL;
> + }
> +
> + return PyLong_FromSsize_t(result);
> +}
> +
> +PyDoc_STRVAR(islower__doc__,
> + "S.islower() -> bool\n\
> +\n\
> +Return True if all cased characters in S are lowercase and there is\n\
> +at least one cased character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_islower(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> + int cased;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISLOWER(PyUnicode_READ(kind, data, 0)));
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + cased = 0;
> + for (i = 0; i < length; i++) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> +
> + if (Py_UNICODE_ISUPPER(ch) || Py_UNICODE_ISTITLE(ch))
> + return PyBool_FromLong(0);
> + else if (!cased && Py_UNICODE_ISLOWER(ch))
> + cased = 1;
> + }
> + return PyBool_FromLong(cased);
> +}
> +
> +PyDoc_STRVAR(isupper__doc__,
> + "S.isupper() -> bool\n\
> +\n\
> +Return True if all cased characters in S are uppercase and there is\n\
> +at least one cased character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_isupper(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> + int cased;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISUPPER(PyUnicode_READ(kind, data, 0)) != 0);
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + cased = 0;
> + for (i = 0; i < length; i++) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> +
> + if (Py_UNICODE_ISLOWER(ch) || Py_UNICODE_ISTITLE(ch))
> + return PyBool_FromLong(0);
> + else if (!cased && Py_UNICODE_ISUPPER(ch))
> + cased = 1;
> + }
> + return PyBool_FromLong(cased);
> +}
> +
> +PyDoc_STRVAR(istitle__doc__,
> + "S.istitle() -> bool\n\
> +\n\
> +Return True if S is a titlecased string and there is at least one\n\
> +character in S, i.e. upper- and titlecase characters may only\n\
> +follow uncased characters and lowercase characters only cased ones.\n\
> +Return False otherwise.");
> +
> +static PyObject*
> +unicode_istitle(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> + int cased, previous_is_cased;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
> + return PyBool_FromLong((Py_UNICODE_ISTITLE(ch) != 0) ||
> + (Py_UNICODE_ISUPPER(ch) != 0));
> + }
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + cased = 0;
> + previous_is_cased = 0;
> + for (i = 0; i < length; i++) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> +
> + if (Py_UNICODE_ISUPPER(ch) || Py_UNICODE_ISTITLE(ch)) {
> + if (previous_is_cased)
> + return PyBool_FromLong(0);
> + previous_is_cased = 1;
> + cased = 1;
> + }
> + else if (Py_UNICODE_ISLOWER(ch)) {
> + if (!previous_is_cased)
> + return PyBool_FromLong(0);
> + previous_is_cased = 1;
> + cased = 1;
> + }
> + else
> + previous_is_cased = 0;
> + }
> + return PyBool_FromLong(cased);
> +}
> +
> +PyDoc_STRVAR(isspace__doc__,
> + "S.isspace() -> bool\n\
> +\n\
> +Return True if all characters in S are whitespace\n\
> +and there is at least one character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_isspace(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISSPACE(PyUnicode_READ(kind, data, 0)));
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < length; i++) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (!Py_UNICODE_ISSPACE(ch))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +PyDoc_STRVAR(isalpha__doc__,
> + "S.isalpha() -> bool\n\
> +\n\
> +Return True if all characters in S are alphabetic\n\
> +and there is at least one character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_isalpha(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISALPHA(PyUnicode_READ(kind, data, 0)));
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < length; i++) {
> + if (!Py_UNICODE_ISALPHA(PyUnicode_READ(kind, data, i)))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +PyDoc_STRVAR(isalnum__doc__,
> + "S.isalnum() -> bool\n\
> +\n\
> +Return True if all characters in S are alphanumeric\n\
> +and there is at least one character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_isalnum(PyObject *self)
> +{
> + int kind;
> + void *data;
> + Py_ssize_t len, i;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> + len = PyUnicode_GET_LENGTH(self);
> +
> + /* Shortcut for single character strings */
> + if (len == 1) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
> + return PyBool_FromLong(Py_UNICODE_ISALNUM(ch));
> + }
> +
> + /* Special case for empty strings */
> + if (len == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < len; i++) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (!Py_UNICODE_ISALNUM(ch))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +PyDoc_STRVAR(isdecimal__doc__,
> + "S.isdecimal() -> bool\n\
> +\n\
> +Return True if there are only decimal characters in S,\n\
> +False otherwise.");
> +
> +static PyObject*
> +unicode_isdecimal(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISDECIMAL(PyUnicode_READ(kind, data, 0)));
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < length; i++) {
> + if (!Py_UNICODE_ISDECIMAL(PyUnicode_READ(kind, data, i)))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +PyDoc_STRVAR(isdigit__doc__,
> + "S.isdigit() -> bool\n\
> +\n\
> +Return True if all characters in S are digits\n\
> +and there is at least one character in S, False otherwise.");
> +
> +static PyObject*
> +unicode_isdigit(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1) {
> + const Py_UCS4 ch = PyUnicode_READ(kind, data, 0);
> + return PyBool_FromLong(Py_UNICODE_ISDIGIT(ch));
> + }
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < length; i++) {
> + if (!Py_UNICODE_ISDIGIT(PyUnicode_READ(kind, data, i)))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +PyDoc_STRVAR(isnumeric__doc__,
> + "S.isnumeric() -> bool\n\
> +\n\
> +Return True if there are only numeric characters in S,\n\
> +False otherwise.");
> +
> +static PyObject*
> +unicode_isnumeric(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISNUMERIC(PyUnicode_READ(kind, data, 0)));
> +
> + /* Special case for empty strings */
> + if (length == 0)
> + return PyBool_FromLong(0);
> +
> + for (i = 0; i < length; i++) {
> + if (!Py_UNICODE_ISNUMERIC(PyUnicode_READ(kind, data, i)))
> + return PyBool_FromLong(0);
> + }
> + return PyBool_FromLong(1);
> +}
> +
> +int
> +PyUnicode_IsIdentifier(PyObject *self)
> +{
> + int kind;
> + void *data;
> + Py_ssize_t i;
> + Py_UCS4 first;
> +
> + if (PyUnicode_READY(self) == -1) {
> + Py_FatalError("identifier not ready");
> + return 0;
> + }
> +
> + /* Special case for empty strings */
> + if (PyUnicode_GET_LENGTH(self) == 0)
> + return 0;
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* PEP 3131 says that the first character must be in
> + XID_Start and subsequent characters in XID_Continue,
> + and for the ASCII range, the 2.x rules apply (i.e.
> + start with letters and underscore, continue with
> + letters, digits, underscore). However, given the current
> + definition of XID_Start and XID_Continue, it is sufficient
> + to check just for these, except that _ must be allowed
> + as starting an identifier. */
> + first = PyUnicode_READ(kind, data, 0);
> + if (!_PyUnicode_IsXidStart(first) && first != 0x5F /* LOW LINE */)
> + return 0;
> +
> + for (i = 1; i < PyUnicode_GET_LENGTH(self); i++)
> + if (!_PyUnicode_IsXidContinue(PyUnicode_READ(kind, data, i)))
> + return 0;
> + return 1;
> +}
> +
> +PyDoc_STRVAR(isidentifier__doc__,
> + "S.isidentifier() -> bool\n\
> +\n\
> +Return True if S is a valid identifier according\n\
> +to the language definition.\n\
> +\n\
> +Use keyword.iskeyword() to test for reserved identifiers\n\
> +such as \"def\" and \"class\".\n");
> +
> +static PyObject*
> +unicode_isidentifier(PyObject *self)
> +{
> + return PyBool_FromLong(PyUnicode_IsIdentifier(self));
> +}
> +
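The PEP 3131 check described in the comment above is exposed as PyUnicode_IsIdentifier(). An illustrative sketch, not part of this patch (helper name is invented), assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* First char must be XID_Start or '_', the rest XID_Continue,
       exactly as PyUnicode_IsIdentifier() above implements. */
    static void
    check_identifiers(void)
    {
        PyObject *ok  = PyUnicode_FromString("_uefi_shell1");
        PyObject *bad = PyUnicode_FromString("1shell");
        if (ok != NULL && bad != NULL) {
            printf("%d %d\n",
                   PyUnicode_IsIdentifier(ok),   /* expected: 1 */
                   PyUnicode_IsIdentifier(bad)); /* expected: 0 */
        }
        Py_XDECREF(ok);
        Py_XDECREF(bad);
    }
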
> +PyDoc_STRVAR(isprintable__doc__,
> + "S.isprintable() -> bool\n\
> +\n\
> +Return True if all characters in S are considered\n\
> +printable in repr() or S is empty, False otherwise.");
> +
> +static PyObject*
> +unicode_isprintable(PyObject *self)
> +{
> + Py_ssize_t i, length;
> + int kind;
> + void *data;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + length = PyUnicode_GET_LENGTH(self);
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> +
> + /* Shortcut for single character strings */
> + if (length == 1)
> + return PyBool_FromLong(
> + Py_UNICODE_ISPRINTABLE(PyUnicode_READ(kind, data, 0)));
> +
> + for (i = 0; i < length; i++) {
> + if (!Py_UNICODE_ISPRINTABLE(PyUnicode_READ(kind, data, i))) {
> + Py_RETURN_FALSE;
> + }
> + }
> + Py_RETURN_TRUE;
> +}
> +
> +PyDoc_STRVAR(join__doc__,
> + "S.join(iterable) -> str\n\
> +\n\
> +Return a string which is the concatenation of the strings in the\n\
> +iterable. The separator between elements is S.");
> +
> +static PyObject*
> +unicode_join(PyObject *self, PyObject *data)
> +{
> + return PyUnicode_Join(self, data);
> +}
> +
> +static Py_ssize_t
> +unicode_length(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return -1;
> + return PyUnicode_GET_LENGTH(self);
> +}
> +
> +PyDoc_STRVAR(ljust__doc__,
> + "S.ljust(width[, fillchar]) -> str\n\
> +\n\
> +Return S left-justified in a Unicode string of length width. Padding is\n\
> +done using the specified fill character (default is a space).");
> +
> +static PyObject *
> +unicode_ljust(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t width;
> + Py_UCS4 fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|O&:ljust", &width, convert_uc, &fillchar))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (PyUnicode_GET_LENGTH(self) >= width)
> + return unicode_result_unchanged(self);
> +
> + return pad(self, 0, width - PyUnicode_GET_LENGTH(self), fillchar);
> +}
> +
> +PyDoc_STRVAR(lower__doc__,
> + "S.lower() -> str\n\
> +\n\
> +Return a copy of the string S converted to lowercase.");
> +
> +static PyObject*
> +unicode_lower(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + if (PyUnicode_IS_ASCII(self))
> + return ascii_upper_or_lower(self, 1);
> + return case_operation(self, do_lower);
> +}
> +
> +#define LEFTSTRIP 0
> +#define RIGHTSTRIP 1
> +#define BOTHSTRIP 2
> +
> +/* Arrays indexed by above */
> +static const char * const stripformat[] = {"|O:lstrip", "|O:rstrip", "|O:strip"};
> +
> +#define STRIPNAME(i) (stripformat[i]+3)
> +
> +/* externally visible for str.strip(unicode) */
> +PyObject *
> +_PyUnicode_XStrip(PyObject *self, int striptype, PyObject *sepobj)
> +{
> + void *data;
> + int kind;
> + Py_ssize_t i, j, len;
> + BLOOM_MASK sepmask;
> + Py_ssize_t seplen;
> +
> + if (PyUnicode_READY(self) == -1 || PyUnicode_READY(sepobj) == -1)
> + return NULL;
> +
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_DATA(self);
> + len = PyUnicode_GET_LENGTH(self);
> + seplen = PyUnicode_GET_LENGTH(sepobj);
> + sepmask = make_bloom_mask(PyUnicode_KIND(sepobj),
> + PyUnicode_DATA(sepobj),
> + seplen);
> +
> + i = 0;
> + if (striptype != RIGHTSTRIP) {
> + while (i < len) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (!BLOOM(sepmask, ch))
> + break;
> + if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0)
> + break;
> + i++;
> + }
> + }
> +
> + j = len;
> + if (striptype != LEFTSTRIP) {
> + j--;
> + while (j >= i) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, j);
> + if (!BLOOM(sepmask, ch))
> + break;
> + if (PyUnicode_FindChar(sepobj, ch, 0, seplen, 1) < 0)
> + break;
> + j--;
> + }
> +
> + j++;
> + }
> +
> + return PyUnicode_Substring(self, i, j);
> +}
> +
> +PyObject*
> +PyUnicode_Substring(PyObject *self, Py_ssize_t start, Py_ssize_t end)
> +{
> + unsigned char *data;
> + int kind;
> + Py_ssize_t length;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + length = PyUnicode_GET_LENGTH(self);
> + end = Py_MIN(end, length);
> +
> + if (start == 0 && end == length)
> + return unicode_result_unchanged(self);
> +
> + if (start < 0 || end < 0) {
> + PyErr_SetString(PyExc_IndexError, "string index out of range");
> + return NULL;
> + }
> + if (start >= length || end < start)
> + _Py_RETURN_UNICODE_EMPTY();
> +
> + length = end - start;
> + if (PyUnicode_IS_ASCII(self)) {
> + data = PyUnicode_1BYTE_DATA(self);
> + return _PyUnicode_FromASCII((char*)(data + start), length);
> + }
> + else {
> + kind = PyUnicode_KIND(self);
> + data = PyUnicode_1BYTE_DATA(self);
> + return PyUnicode_FromKindAndData(kind,
> + data + kind * start,
> + length);
> + }
> +}
> +
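PyUnicode_Substring() above clamps end to the string length but rejects negative indices with IndexError. A small sketch, not part of this patch and with an invented helper name, assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* end is clamped with Py_MIN(end, length), so an oversized end is fine;
       a negative start or end raises IndexError instead. */
    static void
    substring_demo(void)
    {
        PyObject *s = PyUnicode_FromString("edk2-libc");
        if (s != NULL) {
            PyObject *tail = PyUnicode_Substring(s, 5, 100);  /* -> "libc" */
            if (tail != NULL) {
                printf("%s\n", PyUnicode_AsUTF8(tail));
                Py_DECREF(tail);
            }
            Py_DECREF(s);
        }
    }
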
> +static PyObject *
> +do_strip(PyObject *self, int striptype)
> +{
> + Py_ssize_t len, i, j;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + len = PyUnicode_GET_LENGTH(self);
> +
> + if (PyUnicode_IS_ASCII(self)) {
> + Py_UCS1 *data = PyUnicode_1BYTE_DATA(self);
> +
> + i = 0;
> + if (striptype != RIGHTSTRIP) {
> + while (i < len) {
> + Py_UCS1 ch = data[i];
> + if (!_Py_ascii_whitespace[ch])
> + break;
> + i++;
> + }
> + }
> +
> + j = len;
> + if (striptype != LEFTSTRIP) {
> + j--;
> + while (j >= i) {
> + Py_UCS1 ch = data[j];
> + if (!_Py_ascii_whitespace[ch])
> + break;
> + j--;
> + }
> + j++;
> + }
> + }
> + else {
> + int kind = PyUnicode_KIND(self);
> + void *data = PyUnicode_DATA(self);
> +
> + i = 0;
> + if (striptype != RIGHTSTRIP) {
> + while (i < len) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, i);
> + if (!Py_UNICODE_ISSPACE(ch))
> + break;
> + i++;
> + }
> + }
> +
> + j = len;
> + if (striptype != LEFTSTRIP) {
> + j--;
> + while (j >= i) {
> + Py_UCS4 ch = PyUnicode_READ(kind, data, j);
> + if (!Py_UNICODE_ISSPACE(ch))
> + break;
> + j--;
> + }
> + j++;
> + }
> + }
> +
> + return PyUnicode_Substring(self, i, j);
> +}
> +
> +
> +static PyObject *
> +do_argstrip(PyObject *self, int striptype, PyObject *args)
> +{
> + PyObject *sep = NULL;
> +
> + if (!PyArg_ParseTuple(args, stripformat[striptype], &sep))
> + return NULL;
> +
> + if (sep != NULL && sep != Py_None) {
> + if (PyUnicode_Check(sep))
> + return _PyUnicode_XStrip(self, striptype, sep);
> + else {
> + PyErr_Format(PyExc_TypeError,
> + "%s arg must be None or str",
> + STRIPNAME(striptype));
> + return NULL;
> + }
> + }
> +
> + return do_strip(self, striptype);
> +}
> +
> +
> +PyDoc_STRVAR(strip__doc__,
> + "S.strip([chars]) -> str\n\
> +\n\
> +Return a copy of the string S with leading and trailing\n\
> +whitespace removed.\n\
> +If chars is given and not None, remove characters in chars instead.");
> +
> +static PyObject *
> +unicode_strip(PyObject *self, PyObject *args)
> +{
> + if (PyTuple_GET_SIZE(args) == 0)
> + return do_strip(self, BOTHSTRIP); /* Common case */
> + else
> + return do_argstrip(self, BOTHSTRIP, args);
> +}
> +
> +
> +PyDoc_STRVAR(lstrip__doc__,
> + "S.lstrip([chars]) -> str\n\
> +\n\
> +Return a copy of the string S with leading whitespace removed.\n\
> +If chars is given and not None, remove characters in chars instead.");
> +
> +static PyObject *
> +unicode_lstrip(PyObject *self, PyObject *args)
> +{
> + if (PyTuple_GET_SIZE(args) == 0)
> + return do_strip(self, LEFTSTRIP); /* Common case */
> + else
> + return do_argstrip(self, LEFTSTRIP, args);
> +}
> +
> +
> +PyDoc_STRVAR(rstrip__doc__,
> + "S.rstrip([chars]) -> str\n\
> +\n\
> +Return a copy of the string S with trailing whitespace removed.\n\
> +If chars is given and not None, remove characters in chars instead.");
> +
> +static PyObject *
> +unicode_rstrip(PyObject *self, PyObject *args)
> +{
> + if (PyTuple_GET_SIZE(args) == 0)
> + return do_strip(self, RIGHTSTRIP); /* Common case */
> + else
> + return do_argstrip(self, RIGHTSTRIP, args);
> +}
> +
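The strip family above dispatches to _PyUnicode_XStrip() (with its BLOOM-mask fast path) when a chars argument is given, and to do_strip() for plain whitespace. From C the call goes through the method table; an illustrative sketch, not part of this patch, assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* Equivalent of "__main__".strip("_"); the "_" argument routes the call
       through do_argstrip() and _PyUnicode_XStrip() above. */
    static void
    strip_demo(void)
    {
        PyObject *s = PyUnicode_FromString("__main__");
        if (s != NULL) {
            PyObject *r = PyObject_CallMethod(s, "strip", "s", "_");  /* -> "main" */
            if (r != NULL) {
                printf("%s\n", PyUnicode_AsUTF8(r));
                Py_DECREF(r);
            }
            Py_DECREF(s);
        }
    }
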
> +
> +static PyObject*
> +unicode_repeat(PyObject *str, Py_ssize_t len)
> +{
> + PyObject *u;
> + Py_ssize_t nchars, n;
> +
> + if (len < 1)
> + _Py_RETURN_UNICODE_EMPTY();
> +
> + /* no repeat, return original string */
> + if (len == 1)
> + return unicode_result_unchanged(str);
> +
> + if (PyUnicode_READY(str) == -1)
> + return NULL;
> +
> + if (PyUnicode_GET_LENGTH(str) > PY_SSIZE_T_MAX / len) {
> + PyErr_SetString(PyExc_OverflowError,
> + "repeated string is too long");
> + return NULL;
> + }
> + nchars = len * PyUnicode_GET_LENGTH(str);
> +
> + u = PyUnicode_New(nchars, PyUnicode_MAX_CHAR_VALUE(str));
> + if (!u)
> + return NULL;
> + assert(PyUnicode_KIND(u) == PyUnicode_KIND(str));
> +
> + if (PyUnicode_GET_LENGTH(str) == 1) {
> + const int kind = PyUnicode_KIND(str);
> + const Py_UCS4 fill_char = PyUnicode_READ(kind, PyUnicode_DATA(str), 0);
> + if (kind == PyUnicode_1BYTE_KIND) {
> + void *to = PyUnicode_DATA(u);
> + memset(to, (unsigned char)fill_char, len);
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + Py_UCS2 *ucs2 = PyUnicode_2BYTE_DATA(u);
> + for (n = 0; n < len; ++n)
> + ucs2[n] = fill_char;
> + } else {
> + Py_UCS4 *ucs4 = PyUnicode_4BYTE_DATA(u);
> + assert(kind == PyUnicode_4BYTE_KIND);
> + for (n = 0; n < len; ++n)
> + ucs4[n] = fill_char;
> + }
> + }
> + else {
> + /* number of characters copied so far */
> + Py_ssize_t done = PyUnicode_GET_LENGTH(str);
> + const Py_ssize_t char_size = PyUnicode_KIND(str);
> + char *to = (char *) PyUnicode_DATA(u);
> + memcpy(to, PyUnicode_DATA(str),
> + PyUnicode_GET_LENGTH(str) * char_size);
> + while (done < nchars) {
> + n = (done <= nchars-done) ? done : nchars-done;
> + memcpy(to + (done * char_size), to, n * char_size);
> + done += n;
> + }
> + }
> +
> + assert(_PyUnicode_CheckConsistency(u, 1));
> + return u;
> +}
> +
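The "done < nchars" loop in unicode_repeat() above fills the buffer with O(log n) memcpy() calls by repeatedly doubling the portion already in place. The same idea in isolation on a plain byte buffer, as an illustrative sketch (not part of this patch):

    #include <string.h>

    /* Repeat src (srclen bytes) count times into dst, which must hold
       srclen * count bytes: copy it once, then keep doubling the filled
       prefix, exactly like the loop above. */
    static void
    repeat_bytes(char *dst, const char *src, size_t srclen, size_t count)
    {
        size_t total = srclen * count;
        size_t done;

        if (total == 0)
            return;
        memcpy(dst, src, srclen);
        done = srclen;
        while (done < total) {
            size_t n = (done <= total - done) ? done : total - done;
            memcpy(dst + done, dst, n);
            done += n;
        }
    }
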
> +PyObject *
> +PyUnicode_Replace(PyObject *str,
> + PyObject *substr,
> + PyObject *replstr,
> + Py_ssize_t maxcount)
> +{
> + if (ensure_unicode(str) < 0 || ensure_unicode(substr) < 0 ||
> + ensure_unicode(replstr) < 0)
> + return NULL;
> + return replace(str, substr, replstr, maxcount);
> +}
> +
> +PyDoc_STRVAR(replace__doc__,
> + "S.replace(old, new[, count]) -> str\n\
> +\n\
> +Return a copy of S with all occurrences of substring\n\
> +old replaced by new. If the optional argument count is\n\
> +given, only the first count occurrences are replaced.");
> +
> +static PyObject*
> +unicode_replace(PyObject *self, PyObject *args)
> +{
> + PyObject *str1;
> + PyObject *str2;
> + Py_ssize_t maxcount = -1;
> +
> + if (!PyArg_ParseTuple(args, "UU|n:replace", &str1, &str2, &maxcount))
> + return NULL;
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + return replace(self, str1, str2, maxcount);
> +}
> +
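PyUnicode_Replace() above is the C entry point for the same replace() helper; maxcount = -1 means "replace every occurrence". An illustrative sketch, not part of this patch (helper name invented), assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* "a.b.c".replace(".", "/") via the C API shown above. */
    static void
    replace_demo(void)
    {
        PyObject *s    = PyUnicode_FromString("a.b.c");
        PyObject *old  = PyUnicode_FromString(".");
        PyObject *repl = PyUnicode_FromString("/");
        if (s != NULL && old != NULL && repl != NULL) {
            PyObject *r = PyUnicode_Replace(s, old, repl, -1);  /* -> "a/b/c" */
            if (r != NULL) {
                printf("%s\n", PyUnicode_AsUTF8(r));
                Py_DECREF(r);
            }
        }
        Py_XDECREF(s);
        Py_XDECREF(old);
        Py_XDECREF(repl);
    }
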
> +static PyObject *
> +unicode_repr(PyObject *unicode)
> +{
> + PyObject *repr;
> + Py_ssize_t isize;
> + Py_ssize_t osize, squote, dquote, i, o;
> + Py_UCS4 max, quote;
> + int ikind, okind, unchanged;
> + void *idata, *odata;
> +
> + if (PyUnicode_READY(unicode) == -1)
> + return NULL;
> +
> + isize = PyUnicode_GET_LENGTH(unicode);
> + idata = PyUnicode_DATA(unicode);
> +
> + /* Compute length of output, quote characters, and
> + maximum character */
> + osize = 0;
> + max = 127;
> + squote = dquote = 0;
> + ikind = PyUnicode_KIND(unicode);
> + for (i = 0; i < isize; i++) {
> + Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
> + Py_ssize_t incr = 1;
> + switch (ch) {
> + case '\'': squote++; break;
> + case '"': dquote++; break;
> + case '\\': case '\t': case '\r': case '\n':
> + incr = 2;
> + break;
> + default:
> + /* Fast-path ASCII */
> + if (ch < ' ' || ch == 0x7f)
> + incr = 4; /* \xHH */
> + else if (ch < 0x7f)
> + ;
> + else if (Py_UNICODE_ISPRINTABLE(ch))
> + max = ch > max ? ch : max;
> + else if (ch < 0x100)
> + incr = 4; /* \xHH */
> + else if (ch < 0x10000)
> + incr = 6; /* \uHHHH */
> + else
> + incr = 10; /* \UHHHHHHHH */
> + }
> + if (osize > PY_SSIZE_T_MAX - incr) {
> + PyErr_SetString(PyExc_OverflowError,
> + "string is too long to generate repr");
> + return NULL;
> + }
> + osize += incr;
> + }
> +
> + quote = '\'';
> + unchanged = (osize == isize);
> + if (squote) {
> + unchanged = 0;
> + if (dquote)
> + /* Both squote and dquote present. Use squote,
> + and escape them */
> + osize += squote;
> + else
> + quote = '"';
> + }
> + osize += 2; /* quotes */
> +
> + repr = PyUnicode_New(osize, max);
> + if (repr == NULL)
> + return NULL;
> + okind = PyUnicode_KIND(repr);
> + odata = PyUnicode_DATA(repr);
> +
> + PyUnicode_WRITE(okind, odata, 0, quote);
> + PyUnicode_WRITE(okind, odata, osize-1, quote);
> + if (unchanged) {
> + _PyUnicode_FastCopyCharacters(repr, 1,
> + unicode, 0,
> + isize);
> + }
> + else {
> + for (i = 0, o = 1; i < isize; i++) {
> + Py_UCS4 ch = PyUnicode_READ(ikind, idata, i);
> +
> + /* Escape quotes and backslashes */
> + if ((ch == quote) || (ch == '\\')) {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + PyUnicode_WRITE(okind, odata, o++, ch);
> + continue;
> + }
> +
> + /* Map special whitespace to '\t', '\n', '\r' */
> + if (ch == '\t') {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + PyUnicode_WRITE(okind, odata, o++, 't');
> + }
> + else if (ch == '\n') {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + PyUnicode_WRITE(okind, odata, o++, 'n');
> + }
> + else if (ch == '\r') {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + PyUnicode_WRITE(okind, odata, o++, 'r');
> + }
> +
> + /* Map non-printable US ASCII to '\xhh' */
> + else if (ch < ' ' || ch == 0x7F) {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + PyUnicode_WRITE(okind, odata, o++, 'x');
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0x000F]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0x000F]);
> + }
> +
> + /* Copy ASCII characters as-is */
> + else if (ch < 0x7F) {
> + PyUnicode_WRITE(okind, odata, o++, ch);
> + }
> +
> + /* Non-ASCII characters */
> + else {
> + /* Map Unicode whitespace and control characters
> + (categories Z* and C* except ASCII space)
> + */
> + if (!Py_UNICODE_ISPRINTABLE(ch)) {
> + PyUnicode_WRITE(okind, odata, o++, '\\');
> + /* Map 8-bit characters to '\xhh' */
> + if (ch <= 0xff) {
> + PyUnicode_WRITE(okind, odata, o++, 'x');
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0x000F]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0x000F]);
> + }
> + /* Map 16-bit characters to '\uxxxx' */
> + else if (ch <= 0xffff) {
> + PyUnicode_WRITE(okind, odata, o++, 'u');
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 12) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 8) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0xF]);
> + }
> + /* Map 21-bit characters to '\U00xxxxxx' */
> + else {
> + PyUnicode_WRITE(okind, odata, o++, 'U');
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 28) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 24) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 20) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 16) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 12) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 8) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[(ch >> 4) & 0xF]);
> + PyUnicode_WRITE(okind, odata, o++, Py_hexdigits[ch & 0xF]);
> + }
> + }
> + /* Copy characters as-is */
> + else {
> + PyUnicode_WRITE(okind, odata, o++, ch);
> + }
> + }
> + }
> + }
> + /* Closing quote already added at the beginning */
> + assert(_PyUnicode_CheckConsistency(repr, 1));
> + return repr;
> +}
> +
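unicode_repr() above makes two passes: the first sizes the output and counts quote characters (single quotes win unless the string contains ' but no "), the second writes the escaped characters. The effect is visible through PyObject_Repr(); an illustrative sketch, not part of this patch, assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* repr("it's") comes back double-quoted because the input contains a
       single quote and no double quote, per the quote selection above. */
    static void
    repr_demo(void)
    {
        PyObject *s = PyUnicode_FromString("it's");
        if (s != NULL) {
            PyObject *r = PyObject_Repr(s);   /* -> "it's" with double quotes */
            if (r != NULL) {
                printf("%s\n", PyUnicode_AsUTF8(r));
                Py_DECREF(r);
            }
            Py_DECREF(s);
        }
    }
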
> +PyDoc_STRVAR(rfind__doc__,
> + "S.rfind(sub[, start[, end]]) -> int\n\
> +\n\
> +Return the highest index in S where substring sub is found,\n\
> +such that sub is contained within S[start:end]. Optional\n\
> +arguments start and end are interpreted as in slice notation.\n\
> +\n\
> +Return -1 on failure.");
> +
> +static PyObject *
> +unicode_rfind(PyObject *self, PyObject *args)
> +{
> + /* initialize variables to prevent gcc warning */
> + PyObject *substring = NULL;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = 0;
> + Py_ssize_t result;
> +
> + if (!parse_args_finds_unicode("rfind", args, &substring, &start, &end))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + result = any_find_slice(self, substring, start, end, -1);
> +
> + if (result == -2)
> + return NULL;
> +
> + return PyLong_FromSsize_t(result);
> +}
> +
> +PyDoc_STRVAR(rindex__doc__,
> + "S.rindex(sub[, start[, end]]) -> int\n\
> +\n\
> +Return the highest index in S where substring sub is found,\n\
> +such that sub is contained within S[start:end]. Optional\n\
> +arguments start and end are interpreted as in slice notation.\n\
> +\n\
> +Raises ValueError when the substring is not found.");
> +
> +static PyObject *
> +unicode_rindex(PyObject *self, PyObject *args)
> +{
> + /* initialize variables to prevent gcc warning */
> + PyObject *substring = NULL;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = 0;
> + Py_ssize_t result;
> +
> + if (!parse_args_finds_unicode("rindex", args, &substring, &start, &end))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + result = any_find_slice(self, substring, start, end, -1);
> +
> + if (result == -2)
> + return NULL;
> +
> + if (result < 0) {
> + PyErr_SetString(PyExc_ValueError, "substring not found");
> + return NULL;
> + }
> +
> + return PyLong_FromSsize_t(result);
> +}
> +
> +PyDoc_STRVAR(rjust__doc__,
> + "S.rjust(width[, fillchar]) -> str\n\
> +\n\
> +Return S right-justified in a string of length width. Padding is\n\
> +done using the specified fill character (default is a space).");
> +
> +static PyObject *
> +unicode_rjust(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t width;
> + Py_UCS4 fillchar = ' ';
> +
> + if (!PyArg_ParseTuple(args, "n|O&:rjust", &width, convert_uc, &fillchar))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (PyUnicode_GET_LENGTH(self) >= width)
> + return unicode_result_unchanged(self);
> +
> + return pad(self, width - PyUnicode_GET_LENGTH(self), 0, fillchar);
> +}
> +
> +PyObject *
> +PyUnicode_Split(PyObject *s, PyObject *sep, Py_ssize_t maxsplit)
> +{
> + if (ensure_unicode(s) < 0 || (sep != NULL && ensure_unicode(sep) < 0))
> + return NULL;
> +
> + return split(s, sep, maxsplit);
> +}
> +
> +PyDoc_STRVAR(split__doc__,
> + "S.split(sep=None, maxsplit=-1) -> list of strings\n\
> +\n\
> +Return a list of the words in S, using sep as the\n\
> +delimiter string. If maxsplit is given, at most maxsplit\n\
> +splits are done. If sep is not specified or is None, any\n\
> +whitespace string is a separator and empty strings are\n\
> +removed from the result.");
> +
> +static PyObject*
> +unicode_split(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"sep", "maxsplit", 0};
> + PyObject *substring = Py_None;
> + Py_ssize_t maxcount = -1;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|On:split",
> + kwlist, &substring, &maxcount))
> + return NULL;
> +
> + if (substring == Py_None)
> + return split(self, NULL, maxcount);
> +
> + if (PyUnicode_Check(substring))
> + return split(self, substring, maxcount);
> +
> + PyErr_Format(PyExc_TypeError,
> + "must be str or None, not %.100s",
> + Py_TYPE(substring)->tp_name);
> + return NULL;
> +}
> +
> +PyObject *
> +PyUnicode_Partition(PyObject *str_obj, PyObject *sep_obj)
> +{
> + PyObject* out;
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2;
> +
> + if (ensure_unicode(str_obj) < 0 || ensure_unicode(sep_obj) < 0)
> + return NULL;
> +
> + kind1 = PyUnicode_KIND(str_obj);
> + kind2 = PyUnicode_KIND(sep_obj);
> + len1 = PyUnicode_GET_LENGTH(str_obj);
> + len2 = PyUnicode_GET_LENGTH(sep_obj);
> + if (kind1 < kind2 || len1 < len2) {
> + _Py_INCREF_UNICODE_EMPTY();
> + if (!unicode_empty)
> + out = NULL;
> + else {
> + out = PyTuple_Pack(3, str_obj, unicode_empty, unicode_empty);
> + Py_DECREF(unicode_empty);
> + }
> + return out;
> + }
> + buf1 = PyUnicode_DATA(str_obj);
> + buf2 = PyUnicode_DATA(sep_obj);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(sep_obj, kind1);
> + if (!buf2)
> + return NULL;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(str_obj) && PyUnicode_IS_ASCII(sep_obj))
> + out = asciilib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + else
> + out = ucs1lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + out = ucs2lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + out = ucs4lib_partition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + default:
> + assert(0);
> + out = 0;
> + }
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return out;
> +}
> +
> +
> +PyObject *
> +PyUnicode_RPartition(PyObject *str_obj, PyObject *sep_obj)
> +{
> + PyObject* out;
> + int kind1, kind2;
> + void *buf1, *buf2;
> + Py_ssize_t len1, len2;
> +
> + if (ensure_unicode(str_obj) < 0 || ensure_unicode(sep_obj) < 0)
> + return NULL;
> +
> + kind1 = PyUnicode_KIND(str_obj);
> + kind2 = PyUnicode_KIND(sep_obj);
> + len1 = PyUnicode_GET_LENGTH(str_obj);
> + len2 = PyUnicode_GET_LENGTH(sep_obj);
> + if (kind1 < kind2 || len1 < len2) {
> + _Py_INCREF_UNICODE_EMPTY();
> + if (!unicode_empty)
> + out = NULL;
> + else {
> + out = PyTuple_Pack(3, unicode_empty, unicode_empty, str_obj);
> + Py_DECREF(unicode_empty);
> + }
> + return out;
> + }
> + buf1 = PyUnicode_DATA(str_obj);
> + buf2 = PyUnicode_DATA(sep_obj);
> + if (kind2 != kind1) {
> + buf2 = _PyUnicode_AsKind(sep_obj, kind1);
> + if (!buf2)
> + return NULL;
> + }
> +
> + switch (kind1) {
> + case PyUnicode_1BYTE_KIND:
> + if (PyUnicode_IS_ASCII(str_obj) && PyUnicode_IS_ASCII(sep_obj))
> + out = asciilib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + else
> + out = ucs1lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + case PyUnicode_2BYTE_KIND:
> + out = ucs2lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + case PyUnicode_4BYTE_KIND:
> + out = ucs4lib_rpartition(str_obj, buf1, len1, sep_obj, buf2, len2);
> + break;
> + default:
> + assert(0);
> + out = 0;
> + }
> +
> + if (kind2 != kind1)
> + PyMem_Free(buf2);
> +
> + return out;
> +}
> +
> +PyDoc_STRVAR(partition__doc__,
> + "S.partition(sep) -> (head, sep, tail)\n\
> +\n\
> +Search for the separator sep in S, and return the part before it,\n\
> +the separator itself, and the part after it. If the separator is not\n\
> +found, return S and two empty strings.");
> +
> +static PyObject*
> +unicode_partition(PyObject *self, PyObject *separator)
> +{
> + return PyUnicode_Partition(self, separator);
> +}
> +
> +PyDoc_STRVAR(rpartition__doc__,
> + "S.rpartition(sep) -> (head, sep, tail)\n\
> +\n\
> +Search for the separator sep in S, starting at the end of S, and return\n\
> +the part before it, the separator itself, and the part after it. If the\n\
> +separator is not found, return two empty strings and S.");
> +
> +static PyObject*
> +unicode_rpartition(PyObject *self, PyObject *separator)
> +{
> + return PyUnicode_RPartition(self, separator);
> +}
> +
> +PyObject *
> +PyUnicode_RSplit(PyObject *s, PyObject *sep, Py_ssize_t maxsplit)
> +{
> + if (ensure_unicode(s) < 0 || (sep != NULL && ensure_unicode(sep) < 0))
> + return NULL;
> +
> + return rsplit(s, sep, maxsplit);
> +}
> +
> +PyDoc_STRVAR(rsplit__doc__,
> + "S.rsplit(sep=None, maxsplit=-1) -> list of strings\n\
> +\n\
> +Return a list of the words in S, using sep as the\n\
> +delimiter string, starting at the end of the string and\n\
> +working to the front. If maxsplit is given, at most maxsplit\n\
> +splits are done. If sep is not specified, any whitespace string\n\
> +is a separator.");
> +
> +static PyObject*
> +unicode_rsplit(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"sep", "maxsplit", 0};
> + PyObject *substring = Py_None;
> + Py_ssize_t maxcount = -1;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|On:rsplit",
> + kwlist, &substring, &maxcount))
> + return NULL;
> +
> + if (substring == Py_None)
> + return rsplit(self, NULL, maxcount);
> +
> + if (PyUnicode_Check(substring))
> + return rsplit(self, substring, maxcount);
> +
> + PyErr_Format(PyExc_TypeError,
> + "must be str or None, not %.100s",
> + Py_TYPE(substring)->tp_name);
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(splitlines__doc__,
> + "S.splitlines([keepends]) -> list of strings\n\
> +\n\
> +Return a list of the lines in S, breaking at line boundaries.\n\
> +Line breaks are not included in the resulting list unless keepends\n\
> +is given and true.");
> +
> +static PyObject*
> +unicode_splitlines(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"keepends", 0};
> + int keepends = 0;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|i:splitlines",
> + kwlist, &keepends))
> + return NULL;
> +
> + return PyUnicode_Splitlines(self, keepends);
> +}
> +
> +static
> +PyObject *unicode_str(PyObject *self)
> +{
> + return unicode_result_unchanged(self);
> +}
> +
> +PyDoc_STRVAR(swapcase__doc__,
> + "S.swapcase() -> str\n\
> +\n\
> +Return a copy of S with uppercase characters converted to lowercase\n\
> +and vice versa.");
> +
> +static PyObject*
> +unicode_swapcase(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + return case_operation(self, do_swapcase);
> +}
> +
> +/*[clinic input]
> +
> +@staticmethod
> +str.maketrans as unicode_maketrans
> +
> + x: object
> +
> + y: unicode=NULL
> +
> + z: unicode=NULL
> +
> + /
> +
> +Return a translation table usable for str.translate().
> +
> +If there is only one argument, it must be a dictionary mapping Unicode
> +ordinals (integers) or characters to Unicode ordinals, strings or None.
> +Character keys will be then converted to ordinals.
> +If there are two arguments, they must be strings of equal length, and
> +in the resulting dictionary, each character in x will be mapped to the
> +character at the same position in y. If there is a third argument, it
> +must be a string, whose characters will be mapped to None in the result.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +unicode_maketrans_impl(PyObject *x, PyObject *y, PyObject *z)
> +/*[clinic end generated code: output=a925c89452bd5881 input=7bfbf529a293c6c5]*/
> +{
> + PyObject *new = NULL, *key, *value;
> + Py_ssize_t i = 0;
> + int res;
> +
> + new = PyDict_New();
> + if (!new)
> + return NULL;
> + if (y != NULL) {
> + int x_kind, y_kind, z_kind;
> + void *x_data, *y_data, *z_data;
> +
> + /* x must be a string too, of equal length */
> + if (!PyUnicode_Check(x)) {
> + PyErr_SetString(PyExc_TypeError, "first maketrans argument must "
> + "be a string if there is a second argument");
> + goto err;
> + }
> + if (PyUnicode_GET_LENGTH(x) != PyUnicode_GET_LENGTH(y)) {
> + PyErr_SetString(PyExc_ValueError, "the first two maketrans "
> + "arguments must have equal length");
> + goto err;
> + }
> + /* create entries for translating chars in x to those in y */
> + x_kind = PyUnicode_KIND(x);
> + y_kind = PyUnicode_KIND(y);
> + x_data = PyUnicode_DATA(x);
> + y_data = PyUnicode_DATA(y);
> + for (i = 0; i < PyUnicode_GET_LENGTH(x); i++) {
> + key = PyLong_FromLong(PyUnicode_READ(x_kind, x_data, i));
> + if (!key)
> + goto err;
> + value = PyLong_FromLong(PyUnicode_READ(y_kind, y_data, i));
> + if (!value) {
> + Py_DECREF(key);
> + goto err;
> + }
> + res = PyDict_SetItem(new, key, value);
> + Py_DECREF(key);
> + Py_DECREF(value);
> + if (res < 0)
> + goto err;
> + }
> + /* create entries for deleting chars in z */
> + if (z != NULL) {
> + z_kind = PyUnicode_KIND(z);
> + z_data = PyUnicode_DATA(z);
> + for (i = 0; i < PyUnicode_GET_LENGTH(z); i++) {
> + key = PyLong_FromLong(PyUnicode_READ(z_kind, z_data, i));
> + if (!key)
> + goto err;
> + res = PyDict_SetItem(new, key, Py_None);
> + Py_DECREF(key);
> + if (res < 0)
> + goto err;
> + }
> + }
> + } else {
> + int kind;
> + void *data;
> +
> + /* x must be a dict */
> + if (!PyDict_CheckExact(x)) {
> + PyErr_SetString(PyExc_TypeError, "if you give only one argument "
> + "to maketrans it must be a dict");
> + goto err;
> + }
> + /* copy entries into the new dict, converting string keys to int keys */
> + while (PyDict_Next(x, &i, &key, &value)) {
> + if (PyUnicode_Check(key)) {
> + /* convert string keys to integer keys */
> + PyObject *newkey;
> + if (PyUnicode_GET_LENGTH(key) != 1) {
> + PyErr_SetString(PyExc_ValueError, "string keys in translate "
> + "table must be of length 1");
> + goto err;
> + }
> + kind = PyUnicode_KIND(key);
> + data = PyUnicode_DATA(key);
> + newkey = PyLong_FromLong(PyUnicode_READ(kind, data, 0));
> + if (!newkey)
> + goto err;
> + res = PyDict_SetItem(new, newkey, value);
> + Py_DECREF(newkey);
> + if (res < 0)
> + goto err;
> + } else if (PyLong_Check(key)) {
> + /* just keep integer keys */
> + if (PyDict_SetItem(new, key, value) < 0)
> + goto err;
> + } else {
> + PyErr_SetString(PyExc_TypeError, "keys in translate table must "
> + "be strings or integers");
> + goto err;
> + }
> + }
> + }
> + return new;
> + err:
> + Py_DECREF(new);
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(translate__doc__,
> + "S.translate(table) -> str\n\
> +\n\
> +Return a copy of the string S in which each character has been mapped\n\
> +through the given translation table. The table must implement\n\
> +lookup/indexing via __getitem__, for instance a dictionary or list,\n\
> +mapping Unicode ordinals to Unicode ordinals, strings, or None. If\n\
> +this operation raises LookupError, the character is left untouched.\n\
> +Characters mapped to None are deleted.");
> +
> +static PyObject*
> +unicode_translate(PyObject *self, PyObject *table)
> +{
> + return _PyUnicode_TranslateCharmap(self, table, "ignore");
> +}
> +
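A translation table is just a mapping of code points to code points, strings, or None (delete), as the maketrans docstring above describes; PyUnicode_Translate() consumes such a table directly. An illustrative sketch, not part of this patch, assuming an initialized interpreter and with error checks trimmed:

    #include <Python.h>
    #include <stdio.h>

    /* Map 'a' -> 'b' and delete 'c': "abcabc" translates to "bbbb". */
    static void
    translate_demo(void)
    {
        PyObject *s     = PyUnicode_FromString("abcabc");
        PyObject *table = PyDict_New();
        if (s != NULL && table != NULL) {
            PyObject *ka = PyLong_FromLong('a');
            PyObject *kc = PyLong_FromLong('c');
            PyObject *vb = PyLong_FromLong('b');
            if (ka != NULL && kc != NULL && vb != NULL) {
                PyDict_SetItem(table, ka, vb);       /* 'a' -> 'b' */
                PyDict_SetItem(table, kc, Py_None);  /* delete 'c' */
            }
            Py_XDECREF(ka);
            Py_XDECREF(kc);
            Py_XDECREF(vb);

            PyObject *r = PyUnicode_Translate(s, table, "ignore");  /* -> "bbbb" */
            if (r != NULL) {
                printf("%s\n", PyUnicode_AsUTF8(r));
                Py_DECREF(r);
            }
        }
        Py_XDECREF(s);
        Py_XDECREF(table);
    }
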
> +PyDoc_STRVAR(upper__doc__,
> + "S.upper() -> str\n\
> +\n\
> +Return a copy of S converted to uppercase.");
> +
> +static PyObject*
> +unicode_upper(PyObject *self)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + if (PyUnicode_IS_ASCII(self))
> + return ascii_upper_or_lower(self, 0);
> + return case_operation(self, do_upper);
> +}
> +
> +PyDoc_STRVAR(zfill__doc__,
> + "S.zfill(width) -> str\n\
> +\n\
> +Pad a numeric string S with zeros on the left, to fill a field\n\
> +of the specified width. The string S is never truncated.");
> +
> +static PyObject *
> +unicode_zfill(PyObject *self, PyObject *args)
> +{
> + Py_ssize_t fill;
> + PyObject *u;
> + Py_ssize_t width;
> + int kind;
> + void *data;
> + Py_UCS4 chr;
> +
> + if (!PyArg_ParseTuple(args, "n:zfill", &width))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (PyUnicode_GET_LENGTH(self) >= width)
> + return unicode_result_unchanged(self);
> +
> + fill = width - PyUnicode_GET_LENGTH(self);
> +
> + u = pad(self, fill, 0, '0');
> +
> + if (u == NULL)
> + return NULL;
> +
> + kind = PyUnicode_KIND(u);
> + data = PyUnicode_DATA(u);
> + chr = PyUnicode_READ(kind, data, fill);
> +
> + if (chr == '+' || chr == '-') {
> + /* move sign to beginning of string */
> + PyUnicode_WRITE(kind, data, 0, chr);
> + PyUnicode_WRITE(kind, data, fill, '0');
> + }
> +
> + assert(_PyUnicode_CheckConsistency(u, 1));
> + return u;
> +}
> +
> +#if 0
> +static PyObject *
> +unicode__decimal2ascii(PyObject *self)
> +{
> + return PyUnicode_TransformDecimalAndSpaceToASCII(self);
> +}
> +#endif
> +
> +PyDoc_STRVAR(startswith__doc__,
> + "S.startswith(prefix[, start[, end]]) -> bool\n\
> +\n\
> +Return True if S starts with the specified prefix, False otherwise.\n\
> +With optional start, test S beginning at that position.\n\
> +With optional end, stop comparing S at that position.\n\
> +prefix can also be a tuple of strings to try.");
> +
> +static PyObject *
> +unicode_startswith(PyObject *self,
> + PyObject *args)
> +{
> + PyObject *subobj;
> + PyObject *substring;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = PY_SSIZE_T_MAX;
> + int result;
> +
> + if (!stringlib_parse_args_finds("startswith", args, &subobj, &start, &end))
> + return NULL;
> + if (PyTuple_Check(subobj)) {
> + Py_ssize_t i;
> + for (i = 0; i < PyTuple_GET_SIZE(subobj); i++) {
> + substring = PyTuple_GET_ITEM(subobj, i);
> + if (!PyUnicode_Check(substring)) {
> + PyErr_Format(PyExc_TypeError,
> + "tuple for startswith must only contain str, "
> + "not %.100s",
> + Py_TYPE(substring)->tp_name);
> + return NULL;
> + }
> + result = tailmatch(self, substring, start, end, -1);
> + if (result == -1)
> + return NULL;
> + if (result) {
> + Py_RETURN_TRUE;
> + }
> + }
> + /* nothing matched */
> + Py_RETURN_FALSE;
> + }
> + if (!PyUnicode_Check(subobj)) {
> + PyErr_Format(PyExc_TypeError,
> + "startswith first arg must be str or "
> + "a tuple of str, not %.100s", Py_TYPE(subobj)->tp_name);
> + return NULL;
> + }
> + result = tailmatch(self, subobj, start, end, -1);
> + if (result == -1)
> + return NULL;
> + return PyBool_FromLong(result);
> +}
> +
> +
> +PyDoc_STRVAR(endswith__doc__,
> + "S.endswith(suffix[, start[, end]]) -> bool\n\
> +\n\
> +Return True if S ends with the specified suffix, False otherwise.\n\
> +With optional start, test S beginning at that position.\n\
> +With optional end, stop comparing S at that position.\n\
> +suffix can also be a tuple of strings to try.");
> +
> +static PyObject *
> +unicode_endswith(PyObject *self,
> + PyObject *args)
> +{
> + PyObject *subobj;
> + PyObject *substring;
> + Py_ssize_t start = 0;
> + Py_ssize_t end = PY_SSIZE_T_MAX;
> + int result;
> +
> + if (!stringlib_parse_args_finds("endswith", args, &subobj, &start, &end))
> + return NULL;
> + if (PyTuple_Check(subobj)) {
> + Py_ssize_t i;
> + for (i = 0; i < PyTuple_GET_SIZE(subobj); i++) {
> + substring = PyTuple_GET_ITEM(subobj, i);
> + if (!PyUnicode_Check(substring)) {
> + PyErr_Format(PyExc_TypeError,
> + "tuple for endswith must only contain str, "
> + "not %.100s",
> + Py_TYPE(substring)->tp_name);
> + return NULL;
> + }
> + result = tailmatch(self, substring, start, end, +1);
> + if (result == -1)
> + return NULL;
> + if (result) {
> + Py_RETURN_TRUE;
> + }
> + }
> + Py_RETURN_FALSE;
> + }
> + if (!PyUnicode_Check(subobj)) {
> + PyErr_Format(PyExc_TypeError,
> + "endswith first arg must be str or "
> + "a tuple of str, not %.100s", Py_TYPE(subobj)->tp_name);
> + return NULL;
> + }
> + result = tailmatch(self, subobj, start, end, +1);
> + if (result == -1)
> + return NULL;
> + return PyBool_FromLong(result);
> +}
> +
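Both startswith and endswith above funnel into tailmatch(); the public C spelling is PyUnicode_Tailmatch(), with direction -1 for a prefix test and +1 for a suffix test. An illustrative sketch, not part of this patch, assuming an initialized interpreter:

    #include <Python.h>
    #include <stdio.h>

    /* Prefix/suffix tests on "Python-3.6.8"; both lines should print 1. */
    static void
    tailmatch_demo(void)
    {
        PyObject *s   = PyUnicode_FromString("Python-3.6.8");
        PyObject *pre = PyUnicode_FromString("Python-");
        PyObject *suf = PyUnicode_FromString(".8");
        if (s != NULL && pre != NULL && suf != NULL) {
            printf("%zd %zd\n",
                   PyUnicode_Tailmatch(s, pre, 0, PY_SSIZE_T_MAX, -1),
                   PyUnicode_Tailmatch(s, suf, 0, PY_SSIZE_T_MAX, +1));
        }
        Py_XDECREF(s);
        Py_XDECREF(pre);
        Py_XDECREF(suf);
    }
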
> +static void
> +_PyUnicodeWriter_Update(_PyUnicodeWriter *writer)
> +{
> + writer->maxchar = PyUnicode_MAX_CHAR_VALUE(writer->buffer);
> + writer->data = PyUnicode_DATA(writer->buffer);
> +
> + if (!writer->readonly) {
> + writer->kind = PyUnicode_KIND(writer->buffer);
> + writer->size = PyUnicode_GET_LENGTH(writer->buffer);
> + }
> + else {
> + /* use a value smaller than PyUnicode_1BYTE_KIND() so
> + _PyUnicodeWriter_PrepareKind() will copy the buffer. */
> + writer->kind = PyUnicode_WCHAR_KIND;
> + assert(writer->kind <= PyUnicode_1BYTE_KIND);
> +
> + /* Copy-on-write mode: set buffer size to 0 so
> + * _PyUnicodeWriter_Prepare() will copy (and enlarge) the buffer on
> + * next write. */
> + writer->size = 0;
> + }
> +}
> +
> +void
> +_PyUnicodeWriter_Init(_PyUnicodeWriter *writer)
> +{
> + memset(writer, 0, sizeof(*writer));
> +
> + /* ASCII is the bare minimum */
> + writer->min_char = 127;
> +
> + /* use a value smaller than PyUnicode_1BYTE_KIND() so
> + _PyUnicodeWriter_PrepareKind() will copy the buffer. */
> + writer->kind = PyUnicode_WCHAR_KIND;
> + assert(writer->kind <= PyUnicode_1BYTE_KIND);
> +}
> +
> +int
> +_PyUnicodeWriter_PrepareInternal(_PyUnicodeWriter *writer,
> + Py_ssize_t length, Py_UCS4 maxchar)
> +{
> + Py_ssize_t newlen;
> + PyObject *newbuffer;
> +
> + assert(maxchar <= MAX_UNICODE);
> +
> + /* ensure that the _PyUnicodeWriter_Prepare macro was used */
> + assert((maxchar > writer->maxchar && length >= 0)
> + || length > 0);
> +
> + if (length > PY_SSIZE_T_MAX - writer->pos) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + newlen = writer->pos + length;
> +
> + maxchar = Py_MAX(maxchar, writer->min_char);
> +
> + if (writer->buffer == NULL) {
> + assert(!writer->readonly);
> + if (writer->overallocate
> + && newlen <= (PY_SSIZE_T_MAX - newlen / OVERALLOCATE_FACTOR)) {
> + /* overallocate to limit the number of realloc() */
> + newlen += newlen / OVERALLOCATE_FACTOR;
> + }
> + if (newlen < writer->min_length)
> + newlen = writer->min_length;
> +
> + writer->buffer = PyUnicode_New(newlen, maxchar);
> + if (writer->buffer == NULL)
> + return -1;
> + }
> + else if (newlen > writer->size) {
> + if (writer->overallocate
> + && newlen <= (PY_SSIZE_T_MAX - newlen / OVERALLOCATE_FACTOR)) {
> + /* overallocate to limit the number of realloc() */
> + newlen += newlen / OVERALLOCATE_FACTOR;
> + }
> + if (newlen < writer->min_length)
> + newlen = writer->min_length;
> +
> + if (maxchar > writer->maxchar || writer->readonly) {
> + /* resize + widen */
> + maxchar = Py_MAX(maxchar, writer->maxchar);
> + newbuffer = PyUnicode_New(newlen, maxchar);
> + if (newbuffer == NULL)
> + return -1;
> + _PyUnicode_FastCopyCharacters(newbuffer, 0,
> + writer->buffer, 0, writer->pos);
> + Py_DECREF(writer->buffer);
> + writer->readonly = 0;
> + }
> + else {
> + newbuffer = resize_compact(writer->buffer, newlen);
> + if (newbuffer == NULL)
> + return -1;
> + }
> + writer->buffer = newbuffer;
> + }
> + else if (maxchar > writer->maxchar) {
> + assert(!writer->readonly);
> + newbuffer = PyUnicode_New(writer->size, maxchar);
> + if (newbuffer == NULL)
> + return -1;
> + _PyUnicode_FastCopyCharacters(newbuffer, 0,
> + writer->buffer, 0, writer->pos);
> + Py_SETREF(writer->buffer, newbuffer);
> + }
> + _PyUnicodeWriter_Update(writer);
> + return 0;
> +
> +#undef OVERALLOCATE_FACTOR
> +}
> +
> +int
> +_PyUnicodeWriter_PrepareKindInternal(_PyUnicodeWriter *writer,
> + enum PyUnicode_Kind kind)
> +{
> + Py_UCS4 maxchar;
> +
> + /* ensure that the _PyUnicodeWriter_PrepareKind macro was used */
> + assert(writer->kind < kind);
> +
> + switch (kind)
> + {
> + case PyUnicode_1BYTE_KIND: maxchar = 0xff; break;
> + case PyUnicode_2BYTE_KIND: maxchar = 0xffff; break;
> + case PyUnicode_4BYTE_KIND: maxchar = 0x10ffff; break;
> + default:
> + assert(0 && "invalid kind");
> + return -1;
> + }
> +
> + return _PyUnicodeWriter_PrepareInternal(writer, 0, maxchar);
> +}
> +
> +static int
> +_PyUnicodeWriter_WriteCharInline(_PyUnicodeWriter *writer, Py_UCS4 ch)
> +{
> + assert(ch <= MAX_UNICODE);
> + if (_PyUnicodeWriter_Prepare(writer, 1, ch) < 0)
> + return -1;
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos, ch);
> + writer->pos++;
> + return 0;
> +}
> +
> +int
> +_PyUnicodeWriter_WriteChar(_PyUnicodeWriter *writer, Py_UCS4 ch)
> +{
> + return _PyUnicodeWriter_WriteCharInline(writer, ch);
> +}
> +
> +int
> +_PyUnicodeWriter_WriteStr(_PyUnicodeWriter *writer, PyObject *str)
> +{
> + Py_UCS4 maxchar;
> + Py_ssize_t len;
> +
> + if (PyUnicode_READY(str) == -1)
> + return -1;
> + len = PyUnicode_GET_LENGTH(str);
> + if (len == 0)
> + return 0;
> + maxchar = PyUnicode_MAX_CHAR_VALUE(str);
> + if (maxchar > writer->maxchar || len > writer->size - writer->pos) {
> + if (writer->buffer == NULL && !writer->overallocate) {
> + assert(_PyUnicode_CheckConsistency(str, 1));
> + writer->readonly = 1;
> + Py_INCREF(str);
> + writer->buffer = str;
> + _PyUnicodeWriter_Update(writer);
> + writer->pos += len;
> + return 0;
> + }
> + if (_PyUnicodeWriter_PrepareInternal(writer, len, maxchar) == -1)
> + return -1;
> + }
> + _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
> + str, 0, len);
> + writer->pos += len;
> + return 0;
> +}
> +
> +int
> +_PyUnicodeWriter_WriteSubstring(_PyUnicodeWriter *writer, PyObject *str,
> + Py_ssize_t start, Py_ssize_t end)
> +{
> + Py_UCS4 maxchar;
> + Py_ssize_t len;
> +
> + if (PyUnicode_READY(str) == -1)
> + return -1;
> +
> + assert(0 <= start);
> + assert(end <= PyUnicode_GET_LENGTH(str));
> + assert(start <= end);
> +
> + if (end == 0)
> + return 0;
> +
> + if (start == 0 && end == PyUnicode_GET_LENGTH(str))
> + return _PyUnicodeWriter_WriteStr(writer, str);
> +
> + if (PyUnicode_MAX_CHAR_VALUE(str) > writer->maxchar)
> + maxchar = _PyUnicode_FindMaxChar(str, start, end);
> + else
> + maxchar = writer->maxchar;
> + len = end - start;
> +
> + if (_PyUnicodeWriter_Prepare(writer, len, maxchar) < 0)
> + return -1;
> +
> + _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
> + str, start, len);
> + writer->pos += len;
> + return 0;
> +}
> +
> +int
> +_PyUnicodeWriter_WriteASCIIString(_PyUnicodeWriter *writer,
> + const char *ascii, Py_ssize_t len)
> +{
> + if (len == -1)
> + len = strlen(ascii);
> +
> + assert(ucs1lib_find_max_char((Py_UCS1*)ascii, (Py_UCS1*)ascii + len) < 128);
> +
> + if (writer->buffer == NULL && !writer->overallocate) {
> + PyObject *str;
> +
> + str = _PyUnicode_FromASCII(ascii, len);
> + if (str == NULL)
> + return -1;
> +
> + writer->readonly = 1;
> + writer->buffer = str;
> + _PyUnicodeWriter_Update(writer);
> + writer->pos += len;
> + return 0;
> + }
> +
> + if (_PyUnicodeWriter_Prepare(writer, len, 127) == -1)
> + return -1;
> +
> + switch (writer->kind)
> + {
> + case PyUnicode_1BYTE_KIND:
> + {
> + const Py_UCS1 *str = (const Py_UCS1 *)ascii;
> + Py_UCS1 *data = writer->data;
> +
> + memcpy(data + writer->pos, str, len);
> + break;
> + }
> + case PyUnicode_2BYTE_KIND:
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS2,
> + ascii, ascii + len,
> + (Py_UCS2 *)writer->data + writer->pos);
> + break;
> + }
> + case PyUnicode_4BYTE_KIND:
> + {
> + _PyUnicode_CONVERT_BYTES(
> + Py_UCS1, Py_UCS4,
> + ascii, ascii + len,
> + (Py_UCS4 *)writer->data + writer->pos);
> + break;
> + }
> + default:
> + assert(0);
> + }
> +
> + writer->pos += len;
> + return 0;
> +}
> +
> +int
> +_PyUnicodeWriter_WriteLatin1String(_PyUnicodeWriter *writer,
> + const char *str, Py_ssize_t len)
> +{
> + Py_UCS4 maxchar;
> +
> + maxchar = ucs1lib_find_max_char((Py_UCS1*)str, (Py_UCS1*)str + len);
> + if (_PyUnicodeWriter_Prepare(writer, len, maxchar) == -1)
> + return -1;
> + unicode_write_cstr(writer->buffer, writer->pos, str, len);
> + writer->pos += len;
> + return 0;
> +}
> +
> +PyObject *
> +_PyUnicodeWriter_Finish(_PyUnicodeWriter *writer)
> +{
> + PyObject *str;
> +
> + if (writer->pos == 0) {
> + Py_CLEAR(writer->buffer);
> + _Py_RETURN_UNICODE_EMPTY();
> + }
> +
> + str = writer->buffer;
> + writer->buffer = NULL;
> +
> + if (writer->readonly) {
> + assert(PyUnicode_GET_LENGTH(str) == writer->pos);
> + return str;
> + }
> +
> + if (PyUnicode_GET_LENGTH(str) != writer->pos) {
> + PyObject *str2;
> + str2 = resize_compact(str, writer->pos);
> + if (str2 == NULL) {
> + Py_DECREF(str);
> + return NULL;
> + }
> + str = str2;
> + }
> +
> + assert(_PyUnicode_CheckConsistency(str, 1));
> + return unicode_result_ready(str);
> +}
> +
> +void
> +_PyUnicodeWriter_Dealloc(_PyUnicodeWriter *writer)
> +{
> + Py_CLEAR(writer->buffer);
> +}
> +
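The writer helpers above follow one pattern throughout this file: _PyUnicodeWriter_Init(), a series of Write*() calls that widen and enlarge the buffer on demand, then _PyUnicodeWriter_Finish() on success or _PyUnicodeWriter_Dealloc() on the error path. A condensed sketch of that life cycle (illustrative only, not part of this patch; function name is invented):

    #include <Python.h>

    /* Build "Py " + U+20AC with the writer; the non-ASCII character forces
       the buffer to widen from 1-byte to 2-byte kind via PrepareInternal(). */
    static PyObject *
    writer_demo(void)
    {
        _PyUnicodeWriter writer;

        _PyUnicodeWriter_Init(&writer);
        writer.overallocate = 1;    /* several writes expected */

        if (_PyUnicodeWriter_WriteASCIIString(&writer, "Py ", 3) < 0)
            goto error;
        if (_PyUnicodeWriter_WriteChar(&writer, 0x20AC) < 0)
            goto error;
        return _PyUnicodeWriter_Finish(&writer);

      error:
        _PyUnicodeWriter_Dealloc(&writer);
        return NULL;
    }
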
> +#include "stringlib/unicode_format.h"
> +
> +PyDoc_STRVAR(format__doc__,
> + "S.format(*args, **kwargs) -> str\n\
> +\n\
> +Return a formatted version of S, using substitutions from args and kwargs.\n\
> +The substitutions are identified by braces ('{' and '}').");
> +
> +PyDoc_STRVAR(format_map__doc__,
> + "S.format_map(mapping) -> str\n\
> +\n\
> +Return a formatted version of S, using substitutions from mapping.\n\
> +The substitutions are identified by braces ('{' and '}').");
> +
> +static PyObject *
> +unicode__format__(PyObject* self, PyObject* args)
> +{
> + PyObject *format_spec;
> + _PyUnicodeWriter writer;
> + int ret;
> +
> + if (!PyArg_ParseTuple(args, "U:__format__", &format_spec))
> + return NULL;
> +
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> + _PyUnicodeWriter_Init(&writer);
> + ret = _PyUnicode_FormatAdvancedWriter(&writer,
> + self, format_spec, 0,
> + PyUnicode_GET_LENGTH(format_spec));
> + if (ret == -1) {
> + _PyUnicodeWriter_Dealloc(&writer);
> + return NULL;
> + }
> + return _PyUnicodeWriter_Finish(&writer);
> +}
> +
> +PyDoc_STRVAR(p_format__doc__,
> + "S.__format__(format_spec) -> str\n\
> +\n\
> +Return a formatted version of S as described by format_spec.");
> +
> +static PyObject *
> +unicode__sizeof__(PyObject *v)
> +{
> + Py_ssize_t size;
> +
> + /* If it's a compact object, account for base structure +
> + character data. */
> + if (PyUnicode_IS_COMPACT_ASCII(v))
> + size = sizeof(PyASCIIObject) + PyUnicode_GET_LENGTH(v) + 1;
> + else if (PyUnicode_IS_COMPACT(v))
> + size = sizeof(PyCompactUnicodeObject) +
> + (PyUnicode_GET_LENGTH(v) + 1) * PyUnicode_KIND(v);
> + else {
> + /* If it is a two-block object, account for base object, and
> + for character block if present. */
> + size = sizeof(PyUnicodeObject);
> + if (_PyUnicode_DATA_ANY(v))
> + size += (PyUnicode_GET_LENGTH(v) + 1) *
> + PyUnicode_KIND(v);
> + }
> + /* If the wstr pointer is present, account for it unless it is shared
> + with the data pointer. Check if the data is not shared. */
> + if (_PyUnicode_HAS_WSTR_MEMORY(v))
> + size += (PyUnicode_WSTR_LENGTH(v) + 1) * sizeof(wchar_t);
> + if (_PyUnicode_HAS_UTF8_MEMORY(v))
> + size += PyUnicode_UTF8_LENGTH(v) + 1;
> +
> + return PyLong_FromSsize_t(size);
> +}
> +
> +PyDoc_STRVAR(sizeof__doc__,
> + "S.__sizeof__() -> size of S in memory, in bytes");
> +
> +static PyObject *
> +unicode_getnewargs(PyObject *v)
> +{
> + PyObject *copy = _PyUnicode_Copy(v);
> + if (!copy)
> + return NULL;
> + return Py_BuildValue("(N)", copy);
> +}
> +
> +static PyMethodDef unicode_methods[] = {
> + {"encode", (PyCFunction) unicode_encode, METH_VARARGS | METH_KEYWORDS, encode__doc__},
> + {"replace", (PyCFunction) unicode_replace, METH_VARARGS, replace__doc__},
> + {"split", (PyCFunction) unicode_split, METH_VARARGS | METH_KEYWORDS, split__doc__},
> + {"rsplit", (PyCFunction) unicode_rsplit, METH_VARARGS | METH_KEYWORDS, rsplit__doc__},
> + {"join", (PyCFunction) unicode_join, METH_O, join__doc__},
> + {"capitalize", (PyCFunction) unicode_capitalize, METH_NOARGS, capitalize__doc__},
> + {"casefold", (PyCFunction) unicode_casefold, METH_NOARGS, casefold__doc__},
> + {"title", (PyCFunction) unicode_title, METH_NOARGS, title__doc__},
> + {"center", (PyCFunction) unicode_center, METH_VARARGS, center__doc__},
> + {"count", (PyCFunction) unicode_count, METH_VARARGS, count__doc__},
> + {"expandtabs", (PyCFunction) unicode_expandtabs,
> + METH_VARARGS | METH_KEYWORDS, expandtabs__doc__},
> + {"find", (PyCFunction) unicode_find, METH_VARARGS, find__doc__},
> + {"partition", (PyCFunction) unicode_partition, METH_O, partition__doc__},
> + {"index", (PyCFunction) unicode_index, METH_VARARGS, index__doc__},
> + {"ljust", (PyCFunction) unicode_ljust, METH_VARARGS, ljust__doc__},
> + {"lower", (PyCFunction) unicode_lower, METH_NOARGS, lower__doc__},
> + {"lstrip", (PyCFunction) unicode_lstrip, METH_VARARGS, lstrip__doc__},
> + {"rfind", (PyCFunction) unicode_rfind, METH_VARARGS, rfind__doc__},
> + {"rindex", (PyCFunction) unicode_rindex, METH_VARARGS, rindex__doc__},
> + {"rjust", (PyCFunction) unicode_rjust, METH_VARARGS, rjust__doc__},
> + {"rstrip", (PyCFunction) unicode_rstrip, METH_VARARGS, rstrip__doc__},
> + {"rpartition", (PyCFunction) unicode_rpartition, METH_O, rpartition__doc__},
> + {"splitlines", (PyCFunction) unicode_splitlines,
> + METH_VARARGS | METH_KEYWORDS, splitlines__doc__},
> + {"strip", (PyCFunction) unicode_strip, METH_VARARGS, strip__doc__},
> + {"swapcase", (PyCFunction) unicode_swapcase, METH_NOARGS, swapcase__doc__},
> + {"translate", (PyCFunction) unicode_translate, METH_O, translate__doc__},
> + {"upper", (PyCFunction) unicode_upper, METH_NOARGS, upper__doc__},
> + {"startswith", (PyCFunction) unicode_startswith, METH_VARARGS, startswith__doc__},
> + {"endswith", (PyCFunction) unicode_endswith, METH_VARARGS, endswith__doc__},
> + {"islower", (PyCFunction) unicode_islower, METH_NOARGS, islower__doc__},
> + {"isupper", (PyCFunction) unicode_isupper, METH_NOARGS, isupper__doc__},
> + {"istitle", (PyCFunction) unicode_istitle, METH_NOARGS, istitle__doc__},
> + {"isspace", (PyCFunction) unicode_isspace, METH_NOARGS, isspace__doc__},
> + {"isdecimal", (PyCFunction) unicode_isdecimal, METH_NOARGS, isdecimal__doc__},
> + {"isdigit", (PyCFunction) unicode_isdigit, METH_NOARGS, isdigit__doc__},
> + {"isnumeric", (PyCFunction) unicode_isnumeric, METH_NOARGS, isnumeric__doc__},
> + {"isalpha", (PyCFunction) unicode_isalpha, METH_NOARGS, isalpha__doc__},
> + {"isalnum", (PyCFunction) unicode_isalnum, METH_NOARGS, isalnum__doc__},
> + {"isidentifier", (PyCFunction) unicode_isidentifier, METH_NOARGS, isidentifier__doc__},
> + {"isprintable", (PyCFunction) unicode_isprintable, METH_NOARGS, isprintable__doc__},
> + {"zfill", (PyCFunction) unicode_zfill, METH_VARARGS, zfill__doc__},
> + {"format", (PyCFunction) do_string_format, METH_VARARGS | METH_KEYWORDS, format__doc__},
> + {"format_map", (PyCFunction) do_string_format_map, METH_O, format_map__doc__},
> + {"__format__", (PyCFunction) unicode__format__, METH_VARARGS, p_format__doc__},
> + UNICODE_MAKETRANS_METHODDEF
> + {"__sizeof__", (PyCFunction) unicode__sizeof__, METH_NOARGS, sizeof__doc__},
> +#if 0
> + /* These methods are just used for debugging the implementation. */
> + {"_decimal2ascii", (PyCFunction) unicode__decimal2ascii, METH_NOARGS},
> +#endif
> +
> + {"__getnewargs__", (PyCFunction)unicode_getnewargs, METH_NOARGS},
> + {NULL, NULL}
> +};
> +
> +static PyObject *
> +unicode_mod(PyObject *v, PyObject *w)
> +{
> + if (!PyUnicode_Check(v))
> + Py_RETURN_NOTIMPLEMENTED;
> + return PyUnicode_Format(v, w);
> +}
> +
> +static PyNumberMethods unicode_as_number = {
> + 0, /*nb_add*/
> + 0, /*nb_subtract*/
> + 0, /*nb_multiply*/
> + unicode_mod, /*nb_remainder*/
> +};
> +
> +static PySequenceMethods unicode_as_sequence = {
> + (lenfunc) unicode_length, /* sq_length */
> + PyUnicode_Concat, /* sq_concat */
> + (ssizeargfunc) unicode_repeat, /* sq_repeat */
> + (ssizeargfunc) unicode_getitem, /* sq_item */
> + 0, /* sq_slice */
> + 0, /* sq_ass_item */
> + 0, /* sq_ass_slice */
> + PyUnicode_Contains, /* sq_contains */
> +};
> +
> +static PyObject*
> +unicode_subscript(PyObject* self, PyObject* item)
> +{
> + if (PyUnicode_READY(self) == -1)
> + return NULL;
> +
> + if (PyIndex_Check(item)) {
> + Py_ssize_t i = PyNumber_AsSsize_t(item, PyExc_IndexError);
> + if (i == -1 && PyErr_Occurred())
> + return NULL;
> + if (i < 0)
> + i += PyUnicode_GET_LENGTH(self);
> + return unicode_getitem(self, i);
> + } else if (PySlice_Check(item)) {
> + Py_ssize_t start, stop, step, slicelength, cur, i;
> + PyObject *result;
> + void *src_data, *dest_data;
> + int src_kind, dest_kind;
> + Py_UCS4 ch, max_char, kind_limit;
> +
> + if (PySlice_Unpack(item, &start, &stop, &step) < 0) {
> + return NULL;
> + }
> + slicelength = PySlice_AdjustIndices(PyUnicode_GET_LENGTH(self),
> + &start, &stop, step);
> +
> + if (slicelength <= 0) {
> + _Py_RETURN_UNICODE_EMPTY();
> + } else if (start == 0 && step == 1 &&
> + slicelength == PyUnicode_GET_LENGTH(self)) {
> + return unicode_result_unchanged(self);
> + } else if (step == 1) {
> + return PyUnicode_Substring(self,
> + start, start + slicelength);
> + }
> + /* General case */
> + src_kind = PyUnicode_KIND(self);
> + src_data = PyUnicode_DATA(self);
> + if (!PyUnicode_IS_ASCII(self)) {
> + kind_limit = kind_maxchar_limit(src_kind);
> + max_char = 0;
> + for (cur = start, i = 0; i < slicelength; cur += step, i++) {
> + ch = PyUnicode_READ(src_kind, src_data, cur);
> + if (ch > max_char) {
> + max_char = ch;
> + if (max_char >= kind_limit)
> + break;
> + }
> + }
> + }
> + else
> + max_char = 127;
> + result = PyUnicode_New(slicelength, max_char);
> + if (result == NULL)
> + return NULL;
> + dest_kind = PyUnicode_KIND(result);
> + dest_data = PyUnicode_DATA(result);
> +
> + for (cur = start, i = 0; i < slicelength; cur += step, i++) {
> + Py_UCS4 ch = PyUnicode_READ(src_kind, src_data, cur);
> + PyUnicode_WRITE(dest_kind, dest_data, i, ch);
> + }
> + assert(_PyUnicode_CheckConsistency(result, 1));
> + return result;
> + } else {
> + PyErr_SetString(PyExc_TypeError, "string indices must be integers");
> + return NULL;
> + }
> +}
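> +
> +/* Illustrative examples of the three subscript paths above (a sketch of the
> + standard str indexing/slicing semantics): with s = 'abcdef', s[2] takes the
> + PyIndex_Check() branch and yields 'c'; s[1:4] uses the step == 1 branch
> + (PyUnicode_Substring) and yields 'bcd'; s[::2] falls through to the general
> + case, copying every other code point into a new buffer, and yields 'ace'. */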
> +
> +static PyMappingMethods unicode_as_mapping = {
> + (lenfunc)unicode_length, /* mp_length */
> + (binaryfunc)unicode_subscript, /* mp_subscript */
> + (objobjargproc)0, /* mp_ass_subscript */
> +};
> +
> +
> +/* Helpers for PyUnicode_Format() */
> +
> +struct unicode_formatter_t {
> + PyObject *args;
> + int args_owned;
> + Py_ssize_t arglen, argidx;
> + PyObject *dict;
> +
> + enum PyUnicode_Kind fmtkind;
> + Py_ssize_t fmtcnt, fmtpos;
> + void *fmtdata;
> + PyObject *fmtstr;
> +
> + _PyUnicodeWriter writer;
> +};
> +
> +struct unicode_format_arg_t {
> + Py_UCS4 ch;
> + int flags;
> + Py_ssize_t width;
> + int prec;
> + int sign;
> +};
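> +
> +/* For illustration (assuming the printf-style parse implemented below): for
> + the conversion "%+08.2f", unicode_format_arg_parse() fills this struct with
> + ch = 'f', flags = F_SIGN | F_ZERO, width = 8 and prec = 2; the sign field
> + is set later by the per-type formatting helpers. */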
> +
> +static PyObject *
> +unicode_format_getnextarg(struct unicode_formatter_t *ctx)
> +{
> + Py_ssize_t argidx = ctx->argidx;
> +
> + if (argidx < ctx->arglen) {
> + ctx->argidx++;
> + if (ctx->arglen < 0)
> + return ctx->args;
> + else
> + return PyTuple_GetItem(ctx->args, argidx);
> + }
> + PyErr_SetString(PyExc_TypeError,
> + "not enough arguments for format string");
> + return NULL;
> +}
> +
> +/* Returns a new reference to a PyUnicode object, or NULL on failure. */
> +
> +/* Format a float into the writer if the writer is not NULL, or into *p_output
> + otherwise.
> +
> + Return 0 on success, raise an exception and return -1 on error. */
> +static int
> +formatfloat(PyObject *v, struct unicode_format_arg_t *arg,
> + PyObject **p_output,
> + _PyUnicodeWriter *writer)
> +{
> + char *p;
> + double x;
> + Py_ssize_t len;
> + int prec;
> + int dtoa_flags;
> +
> + x = PyFloat_AsDouble(v);
> + if (x == -1.0 && PyErr_Occurred())
> + return -1;
> +
> + prec = arg->prec;
> + if (prec < 0)
> + prec = 6;
> +
> + if (arg->flags & F_ALT)
> + dtoa_flags = Py_DTSF_ALT;
> + else
> + dtoa_flags = 0;
> + p = PyOS_double_to_string(x, arg->ch, prec, dtoa_flags, NULL);
> + if (p == NULL)
> + return -1;
> + len = strlen(p);
> + if (writer) {
> + if (_PyUnicodeWriter_WriteASCIIString(writer, p, len) < 0) {
> + PyMem_Free(p);
> + return -1;
> + }
> + }
> + else
> + *p_output = _PyUnicode_FromASCII(p, len);
> + PyMem_Free(p);
> + return 0;
> +}
> +
> +/* formatlong() emulates the format codes d, u, o, x and X, and
> + * the F_ALT flag, for Python's long (unbounded) ints. It's not used for
> + * Python's regular ints.
> + * Return value: a new PyUnicodeObject*, or NULL if error.
> + * The output string is of the form
> + * "-"? ("0x" | "0X")? digit+
> + * "0x"/"0X" are present only for x and X conversions, with F_ALT
> + * set in flags. The case of hex digits will be correct.
> + * There will be at least prec digits, zero-filled on the left if
> + * necessary to get that many.
> + * val object to be converted
> + * flags bitmask of format flags; only F_ALT is looked at
> + * prec minimum number of digits; 0-fill on left if needed
> + * type a character in [duoxX]; u acts the same as d
> + *
> + * CAUTION: o, x and X conversions on regular ints can never
> + * produce a '-' sign, but can for Python's unbounded ints.
> + */
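> +/* Worked examples of the contract above (illustrative only): with val = -255,
> + type = 'x', prec = 4 and F_ALT set, the result is "-0x00ff"; with F_ALT
> + clear the base marker is stripped, giving "-00ff". */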
> +PyObject *
> +_PyUnicode_FormatLong(PyObject *val, int alt, int prec, int type)
> +{
> + PyObject *result = NULL;
> + char *buf;
> + Py_ssize_t i;
> + int sign; /* 1 if '-', else 0 */
> + int len; /* number of characters */
> + Py_ssize_t llen;
> + int numdigits; /* len == numnondigits + numdigits */
> + int numnondigits = 0;
> +
> + /* Avoid exceeding SSIZE_T_MAX */
> + if (prec > INT_MAX-3) {
> + PyErr_SetString(PyExc_OverflowError,
> + "precision too large");
> + return NULL;
> + }
> +
> + assert(PyLong_Check(val));
> +
> + switch (type) {
> + default:
> + assert(!"'type' not in [diuoxX]");
> + case 'd':
> + case 'i':
> + case 'u':
> + /* int and int subclasses should print numerically when a numeric */
> + /* format code is used (see issue18780) */
> + result = PyNumber_ToBase(val, 10);
> + break;
> + case 'o':
> + numnondigits = 2;
> + result = PyNumber_ToBase(val, 8);
> + break;
> + case 'x':
> + case 'X':
> + numnondigits = 2;
> + result = PyNumber_ToBase(val, 16);
> + break;
> + }
> + if (!result)
> + return NULL;
> +
> + assert(unicode_modifiable(result));
> + assert(PyUnicode_IS_READY(result));
> + assert(PyUnicode_IS_ASCII(result));
> +
> + /* To modify the string in-place, there can only be one reference. */
> + if (Py_REFCNT(result) != 1) {
> + Py_DECREF(result);
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + buf = PyUnicode_DATA(result);
> + llen = PyUnicode_GET_LENGTH(result);
> + if (llen > INT_MAX) {
> + Py_DECREF(result);
> + PyErr_SetString(PyExc_ValueError,
> + "string too large in _PyUnicode_FormatLong");
> + return NULL;
> + }
> + len = (int)llen;
> + sign = buf[0] == '-';
> + numnondigits += sign;
> + numdigits = len - numnondigits;
> + assert(numdigits > 0);
> +
> + /* Get rid of base marker unless F_ALT */
> + if (((alt) == 0 &&
> + (type == 'o' || type == 'x' || type == 'X'))) {
> + assert(buf[sign] == '0');
> + assert(buf[sign+1] == 'x' || buf[sign+1] == 'X' ||
> + buf[sign+1] == 'o');
> + numnondigits -= 2;
> + buf += 2;
> + len -= 2;
> + if (sign)
> + buf[0] = '-';
> + assert(len == numnondigits + numdigits);
> + assert(numdigits > 0);
> + }
> +
> + /* Fill with leading zeroes to meet minimum width. */
> + if (prec > numdigits) {
> + PyObject *r1 = PyBytes_FromStringAndSize(NULL,
> + numnondigits + prec);
> + char *b1;
> + if (!r1) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + b1 = PyBytes_AS_STRING(r1);
> + for (i = 0; i < numnondigits; ++i)
> + *b1++ = *buf++;
> + for (i = 0; i < prec - numdigits; i++)
> + *b1++ = '0';
> + for (i = 0; i < numdigits; i++)
> + *b1++ = *buf++;
> + *b1 = '\0';
> + Py_DECREF(result);
> + result = r1;
> + buf = PyBytes_AS_STRING(result);
> + len = numnondigits + prec;
> + }
> +
> + /* Fix up case for hex conversions. */
> + if (type == 'X') {
> + /* Need to convert all lower case letters to upper case,
> + and need to convert 0x to 0X (and -0x to -0X). */
> + for (i = 0; i < len; i++)
> + if (buf[i] >= 'a' && buf[i] <= 'x')
> + buf[i] -= 'a'-'A';
> + }
> + if (!PyUnicode_Check(result)
> + || buf != PyUnicode_DATA(result)) {
> + PyObject *unicode;
> + unicode = _PyUnicode_FromASCII(buf, len);
> + Py_DECREF(result);
> + result = unicode;
> + }
> + else if (len != PyUnicode_GET_LENGTH(result)) {
> + if (PyUnicode_Resize(&result, len) < 0)
> + Py_CLEAR(result);
> + }
> + return result;
> +}
> +
> +/* Format an integer or a float as an integer.
> + * Return 1 if the number has been formatted into the writer,
> + * 0 if the number has been formatted into *p_output
> + * -1 and raise an exception on error */
> +static int
> +mainformatlong(PyObject *v,
> + struct unicode_format_arg_t *arg,
> + PyObject **p_output,
> + _PyUnicodeWriter *writer)
> +{
> + PyObject *iobj, *res;
> + char type = (char)arg->ch;
> +
> + if (!PyNumber_Check(v))
> + goto wrongtype;
> +
> + /* make sure number is a type of integer for o, x, and X */
> + if (!PyLong_Check(v)) {
> + if (type == 'o' || type == 'x' || type == 'X') {
> + iobj = PyNumber_Index(v);
> + if (iobj == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError))
> + goto wrongtype;
> + return -1;
> + }
> + }
> + else {
> + iobj = PyNumber_Long(v);
> + if (iobj == NULL ) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError))
> + goto wrongtype;
> + return -1;
> + }
> + }
> + assert(PyLong_Check(iobj));
> + }
> + else {
> + iobj = v;
> + Py_INCREF(iobj);
> + }
> +
> + if (PyLong_CheckExact(v)
> + && arg->width == -1 && arg->prec == -1
> + && !(arg->flags & (F_SIGN | F_BLANK))
> + && type != 'X')
> + {
> + /* Fast path */
> + int alternate = arg->flags & F_ALT;
> + int base;
> +
> + switch(type)
> + {
> + default:
> + assert(0 && "'type' not in [diuoxX]");
> + case 'd':
> + case 'i':
> + case 'u':
> + base = 10;
> + break;
> + case 'o':
> + base = 8;
> + break;
> + case 'x':
> + case 'X':
> + base = 16;
> + break;
> + }
> +
> + if (_PyLong_FormatWriter(writer, v, base, alternate) == -1) {
> + Py_DECREF(iobj);
> + return -1;
> + }
> + Py_DECREF(iobj);
> + return 1;
> + }
> +
> + res = _PyUnicode_FormatLong(iobj, arg->flags & F_ALT, arg->prec, type);
> + Py_DECREF(iobj);
> + if (res == NULL)
> + return -1;
> + *p_output = res;
> + return 0;
> +
> +wrongtype:
> + switch(type)
> + {
> + case 'o':
> + case 'x':
> + case 'X':
> + PyErr_Format(PyExc_TypeError,
> + "%%%c format: an integer is required, "
> + "not %.200s",
> + type, Py_TYPE(v)->tp_name);
> + break;
> + default:
> + PyErr_Format(PyExc_TypeError,
> + "%%%c format: a number is required, "
> + "not %.200s",
> + type, Py_TYPE(v)->tp_name);
> + break;
> + }
> + return -1;
> +}
> +
> +static Py_UCS4
> +formatchar(PyObject *v)
> +{
> + /* Return the code point for a "%c" argument, or (Py_UCS4)-1 on error. */
> + if (PyUnicode_Check(v)) {
> + if (PyUnicode_GET_LENGTH(v) == 1) {
> + return PyUnicode_READ_CHAR(v, 0);
> + }
> + goto onError;
> + }
> + else {
> + PyObject *iobj;
> + long x;
> + /* make sure number is a type of integer */
> + if (!PyLong_Check(v)) {
> + iobj = PyNumber_Index(v);
> + if (iobj == NULL) {
> + goto onError;
> + }
> + x = PyLong_AsLong(iobj);
> + Py_DECREF(iobj);
> + }
> + else {
> + x = PyLong_AsLong(v);
> + }
> + if (x == -1 && PyErr_Occurred())
> + goto onError;
> +
> + if (x < 0 || x > MAX_UNICODE) {
> + PyErr_SetString(PyExc_OverflowError,
> + "%c arg not in range(0x110000)");
> + return (Py_UCS4) -1;
> + }
> +
> + return (Py_UCS4) x;
> + }
> +
> + onError:
> + PyErr_SetString(PyExc_TypeError,
> + "%c requires int or char");
> + return (Py_UCS4) -1;
> +}
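> +
> +/* Illustrative outcomes for formatchar() under standard "%c" semantics:
> + "%c" % 97 and "%c" % 'a' both format as 'a'; "%c" % 'ab' raises TypeError
> + ("%c requires int or char"); "%c" % 0x110000 raises OverflowError because
> + the value exceeds MAX_UNICODE. */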
> +
> +/* Parse options of an argument: flags, width, precision.
> + Handle also "%(name)" syntax.
> +
> + Return 0 on success.
> + Raise an exception and return -1 on error. */
> +static int
> +unicode_format_arg_parse(struct unicode_formatter_t *ctx,
> + struct unicode_format_arg_t *arg)
> +{
> +#define FORMAT_READ(ctx) \
> + PyUnicode_READ((ctx)->fmtkind, (ctx)->fmtdata, (ctx)->fmtpos)
> +
> + PyObject *v;
> +
> + if (arg->ch == '(') {
> + /* Get argument value from a dictionary. Example: "%(name)s". */
> + Py_ssize_t keystart;
> + Py_ssize_t keylen;
> + PyObject *key;
> + int pcount = 1;
> +
> + if (ctx->dict == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "format requires a mapping");
> + return -1;
> + }
> + ++ctx->fmtpos;
> + --ctx->fmtcnt;
> + keystart = ctx->fmtpos;
> + /* Skip over balanced parentheses */
> + while (pcount > 0 && --ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + if (arg->ch == ')')
> + --pcount;
> + else if (arg->ch == '(')
> + ++pcount;
> + ctx->fmtpos++;
> + }
> + keylen = ctx->fmtpos - keystart - 1;
> + if (ctx->fmtcnt < 0 || pcount > 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "incomplete format key");
> + return -1;
> + }
> + key = PyUnicode_Substring(ctx->fmtstr,
> + keystart, keystart + keylen);
> + if (key == NULL)
> + return -1;
> + if (ctx->args_owned) {
> + ctx->args_owned = 0;
> + Py_DECREF(ctx->args);
> + }
> + ctx->args = PyObject_GetItem(ctx->dict, key);
> + Py_DECREF(key);
> + if (ctx->args == NULL)
> + return -1;
> + ctx->args_owned = 1;
> + ctx->arglen = -1;
> + ctx->argidx = -2;
> + }
> +
> + /* Parse flags. Example: "%+i" => flags=F_SIGN. */
> + while (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + switch (arg->ch) {
> + case '-': arg->flags |= F_LJUST; continue;
> + case '+': arg->flags |= F_SIGN; continue;
> + case ' ': arg->flags |= F_BLANK; continue;
> + case '#': arg->flags |= F_ALT; continue;
> + case '0': arg->flags |= F_ZERO; continue;
> + }
> + break;
> + }
> +
> + /* Parse width. Example: "%10s" => width=10 */
> + if (arg->ch == '*') {
> + v = unicode_format_getnextarg(ctx);
> + if (v == NULL)
> + return -1;
> + if (!PyLong_Check(v)) {
> + PyErr_SetString(PyExc_TypeError,
> + "* wants int");
> + return -1;
> + }
> + arg->width = PyLong_AsSsize_t(v);
> + if (arg->width == -1 && PyErr_Occurred())
> + return -1;
> + if (arg->width < 0) {
> + arg->flags |= F_LJUST;
> + arg->width = -arg->width;
> + }
> + if (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + }
> + }
> + else if (arg->ch >= '0' && arg->ch <= '9') {
> + arg->width = arg->ch - '0';
> + while (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + if (arg->ch < '0' || arg->ch > '9')
> + break;
> + /* Since arg->ch is unsigned, the RHS would end up as unsigned,
> + mixing signed and unsigned comparison. Since arg->ch is between
> + '0' and '9', casting to int is safe. */
> + if (arg->width > (PY_SSIZE_T_MAX - ((int)arg->ch - '0')) / 10) {
> + PyErr_SetString(PyExc_ValueError,
> + "width too big");
> + return -1;
> + }
> + arg->width = arg->width*10 + (arg->ch - '0');
> + }
> + }
> +
> + /* Parse precision. Example: "%.3f" => prec=3 */
> + if (arg->ch == '.') {
> + arg->prec = 0;
> + if (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + }
> + if (arg->ch == '*') {
> + v = unicode_format_getnextarg(ctx);
> + if (v == NULL)
> + return -1;
> + if (!PyLong_Check(v)) {
> + PyErr_SetString(PyExc_TypeError,
> + "* wants int");
> + return -1;
> + }
> + arg->prec = _PyLong_AsInt(v);
> + if (arg->prec == -1 && PyErr_Occurred())
> + return -1;
> + if (arg->prec < 0)
> + arg->prec = 0;
> + if (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + }
> + }
> + else if (arg->ch >= '0' && arg->ch <= '9') {
> + arg->prec = arg->ch - '0';
> + while (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + if (arg->ch < '0' || arg->ch > '9')
> + break;
> + if (arg->prec > (INT_MAX - ((int)arg->ch - '0')) / 10) {
> + PyErr_SetString(PyExc_ValueError,
> + "precision too big");
> + return -1;
> + }
> + arg->prec = arg->prec*10 + (arg->ch - '0');
> + }
> + }
> + }
> +
> + /* Ignore "h", "l" and "L" format prefix (ex: "%hi" or "%ls") */
> + if (ctx->fmtcnt >= 0) {
> + if (arg->ch == 'h' || arg->ch == 'l' || arg->ch == 'L') {
> + if (--ctx->fmtcnt >= 0) {
> + arg->ch = FORMAT_READ(ctx);
> + ctx->fmtpos++;
> + }
> + }
> + }
> + if (ctx->fmtcnt < 0) {
> + PyErr_SetString(PyExc_ValueError,
> + "incomplete format");
> + return -1;
> + }
> + return 0;
> +
> +#undef FORMAT_READ
> +}
> +
> +/* Format one argument. Supported conversion specifiers:
> +
> + - "s", "r", "a": any type
> + - "i", "d", "u": int or float
> + - "o", "x", "X": int
> + - "e", "E", "f", "F", "g", "G": float
> + - "c": int or str (1 character)
> +
> + When possible, the output is written directly into the Unicode writer
> + (ctx->writer). A string is created when padding is required.
> +
> + Return 0 if the argument has been formatted into *p_str,
> + 1 if the argument has been written into ctx->writer,
> + -1 on error. */
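> +/* Illustrative examples of the specifiers above (standard %-formatting
> + semantics): "%s" % 42 takes the integer fast path and yields '42';
> + "%10.3s" % 'python' truncates to 'pyt' and right-justifies it in a field of
> + width 10; "%r" % 'a' yields "'a'". */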
> +static int
> +unicode_format_arg_format(struct unicode_formatter_t *ctx,
> + struct unicode_format_arg_t *arg,
> + PyObject **p_str)
> +{
> + PyObject *v;
> + _PyUnicodeWriter *writer = &ctx->writer;
> +
> + if (ctx->fmtcnt == 0)
> + ctx->writer.overallocate = 0;
> +
> + if (arg->ch == '%') {
> + if (_PyUnicodeWriter_WriteCharInline(writer, '%') < 0)
> + return -1;
> + return 1;
> + }
> +
> + v = unicode_format_getnextarg(ctx);
> + if (v == NULL)
> + return -1;
> +
> +
> + switch (arg->ch) {
> + case 's':
> + case 'r':
> + case 'a':
> + if (PyLong_CheckExact(v) && arg->width == -1 && arg->prec == -1) {
> + /* Fast path */
> + if (_PyLong_FormatWriter(writer, v, 10, arg->flags & F_ALT) == -1)
> + return -1;
> + return 1;
> + }
> +
> + if (PyUnicode_CheckExact(v) && arg->ch == 's') {
> + *p_str = v;
> + Py_INCREF(*p_str);
> + }
> + else {
> + if (arg->ch == 's')
> + *p_str = PyObject_Str(v);
> + else if (arg->ch == 'r')
> + *p_str = PyObject_Repr(v);
> + else
> + *p_str = PyObject_ASCII(v);
> + }
> + break;
> +
> + case 'i':
> + case 'd':
> + case 'u':
> + case 'o':
> + case 'x':
> + case 'X':
> + {
> + int ret = mainformatlong(v, arg, p_str, writer);
> + if (ret != 0)
> + return ret;
> + arg->sign = 1;
> + break;
> + }
> +
> + case 'e':
> + case 'E':
> + case 'f':
> + case 'F':
> + case 'g':
> + case 'G':
> + if (arg->width == -1 && arg->prec == -1
> + && !(arg->flags & (F_SIGN | F_BLANK)))
> + {
> + /* Fast path */
> + if (formatfloat(v, arg, NULL, writer) == -1)
> + return -1;
> + return 1;
> + }
> +
> + arg->sign = 1;
> + if (formatfloat(v, arg, p_str, NULL) == -1)
> + return -1;
> + break;
> +
> + case 'c':
> + {
> + Py_UCS4 ch = formatchar(v);
> + if (ch == (Py_UCS4) -1)
> + return -1;
> + if (arg->width == -1 && arg->prec == -1) {
> + /* Fast path */
> + if (_PyUnicodeWriter_WriteCharInline(writer, ch) < 0)
> + return -1;
> + return 1;
> + }
> + *p_str = PyUnicode_FromOrdinal(ch);
> + break;
> + }
> +
> + default:
> + PyErr_Format(PyExc_ValueError,
> + "unsupported format character '%c' (0x%x) "
> + "at index %zd",
> + (31<=arg->ch && arg->ch<=126) ? (char)arg->ch : '?',
> + (int)arg->ch,
> + ctx->fmtpos - 1);
> + return -1;
> + }
> + if (*p_str == NULL)
> + return -1;
> + assert (PyUnicode_Check(*p_str));
> + return 0;
> +}
> +
> +static int
> +unicode_format_arg_output(struct unicode_formatter_t *ctx,
> + struct unicode_format_arg_t *arg,
> + PyObject *str)
> +{
> + Py_ssize_t len;
> + enum PyUnicode_Kind kind;
> + void *pbuf;
> + Py_ssize_t pindex;
> + Py_UCS4 signchar;
> + Py_ssize_t buflen;
> + Py_UCS4 maxchar;
> + Py_ssize_t sublen;
> + _PyUnicodeWriter *writer = &ctx->writer;
> + Py_UCS4 fill;
> +
> + fill = ' ';
> + if (arg->sign && arg->flags & F_ZERO)
> + fill = '0';
> +
> + if (PyUnicode_READY(str) == -1)
> + return -1;
> +
> + len = PyUnicode_GET_LENGTH(str);
> + if ((arg->width == -1 || arg->width <= len)
> + && (arg->prec == -1 || arg->prec >= len)
> + && !(arg->flags & (F_SIGN | F_BLANK)))
> + {
> + /* Fast path */
> + if (_PyUnicodeWriter_WriteStr(writer, str) == -1)
> + return -1;
> + return 0;
> + }
> +
> + /* Truncate the string for "s", "r" and "a" formats
> + if the precision is set */
> + if (arg->ch == 's' || arg->ch == 'r' || arg->ch == 'a') {
> + if (arg->prec >= 0 && len > arg->prec)
> + len = arg->prec;
> + }
> +
> + /* Adjust sign and width */
> + kind = PyUnicode_KIND(str);
> + pbuf = PyUnicode_DATA(str);
> + pindex = 0;
> + signchar = '\0';
> + if (arg->sign) {
> + Py_UCS4 ch = PyUnicode_READ(kind, pbuf, pindex);
> + if (ch == '-' || ch == '+') {
> + signchar = ch;
> + len--;
> + pindex++;
> + }
> + else if (arg->flags & F_SIGN)
> + signchar = '+';
> + else if (arg->flags & F_BLANK)
> + signchar = ' ';
> + else
> + arg->sign = 0;
> + }
> + if (arg->width < len)
> + arg->width = len;
> +
> + /* Prepare the writer */
> + maxchar = writer->maxchar;
> + if (!(arg->flags & F_LJUST)) {
> + if (arg->sign) {
> + if ((arg->width-1) > len)
> + maxchar = Py_MAX(maxchar, fill);
> + }
> + else {
> + if (arg->width > len)
> + maxchar = Py_MAX(maxchar, fill);
> + }
> + }
> + if (PyUnicode_MAX_CHAR_VALUE(str) > maxchar) {
> + Py_UCS4 strmaxchar = _PyUnicode_FindMaxChar(str, 0, pindex+len);
> + maxchar = Py_MAX(maxchar, strmaxchar);
> + }
> +
> + buflen = arg->width;
> + if (arg->sign && len == arg->width)
> + buflen++;
> + if (_PyUnicodeWriter_Prepare(writer, buflen, maxchar) == -1)
> + return -1;
> +
> + /* Write the sign if needed */
> + if (arg->sign) {
> + if (fill != ' ') {
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos, signchar);
> + writer->pos += 1;
> + }
> + if (arg->width > len)
> + arg->width--;
> + }
> +
> + /* Write the numeric prefix for "x", "X" and "o" formats
> + if the alternate form is used.
> + For example, write "0x" for the "%#x" format. */
> + if ((arg->flags & F_ALT) && (arg->ch == 'x' || arg->ch == 'X' || arg->ch == 'o')) {
> + assert(PyUnicode_READ(kind, pbuf, pindex) == '0');
> + assert(PyUnicode_READ(kind, pbuf, pindex + 1) == arg->ch);
> + if (fill != ' ') {
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos, '0');
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos+1, arg->ch);
> + writer->pos += 2;
> + pindex += 2;
> + }
> + arg->width -= 2;
> + if (arg->width < 0)
> + arg->width = 0;
> + len -= 2;
> + }
> +
> + /* Pad left with the fill character if needed */
> + if (arg->width > len && !(arg->flags & F_LJUST)) {
> + sublen = arg->width - len;
> + FILL(writer->kind, writer->data, fill, writer->pos, sublen);
> + writer->pos += sublen;
> + arg->width = len;
> + }
> +
> + /* If padding with spaces: write sign if needed and/or numeric prefix if
> + the alternate form is used */
> + if (fill == ' ') {
> + if (arg->sign) {
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos, signchar);
> + writer->pos += 1;
> + }
> + if ((arg->flags & F_ALT) && (arg->ch == 'x' || arg->ch == 'X' || arg->ch == 'o')) {
> + assert(PyUnicode_READ(kind, pbuf, pindex) == '0');
> + assert(PyUnicode_READ(kind, pbuf, pindex+1) == arg->ch);
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos, '0');
> + PyUnicode_WRITE(writer->kind, writer->data, writer->pos+1, arg->ch);
> + writer->pos += 2;
> + pindex += 2;
> + }
> + }
> +
> + /* Write characters */
> + if (len) {
> + _PyUnicode_FastCopyCharacters(writer->buffer, writer->pos,
> + str, pindex, len);
> + writer->pos += len;
> + }
> +
> + /* Pad right with the fill character if needed */
> + if (arg->width > len) {
> + sublen = arg->width - len;
> + FILL(writer->kind, writer->data, ' ', writer->pos, sublen);
> + writer->pos += sublen;
> + }
> + return 0;
> +}
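> +
> +/* Worked examples for the padding logic above (illustrative): "%#10x" % 255
> + space-pads on the left, then writes the "0x" prefix and digits, yielding a
> + 10-character field ending in '0xff'; "%-6d" % 42 takes the F_LJUST path and
> + pads on the right; "%+.2f" % 3.14159 emits the sign first, giving '+3.14'. */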
> +
> +/* Helper of PyUnicode_Format(): format one arg.
> + Return 0 on success, raise an exception and return -1 on error. */
> +static int
> +unicode_format_arg(struct unicode_formatter_t *ctx)
> +{
> + struct unicode_format_arg_t arg;
> + PyObject *str;
> + int ret;
> +
> + arg.ch = PyUnicode_READ(ctx->fmtkind, ctx->fmtdata, ctx->fmtpos);
> + arg.flags = 0;
> + arg.width = -1;
> + arg.prec = -1;
> + arg.sign = 0;
> + str = NULL;
> +
> + ret = unicode_format_arg_parse(ctx, &arg);
> + if (ret == -1)
> + return -1;
> +
> + ret = unicode_format_arg_format(ctx, &arg, &str);
> + if (ret == -1)
> + return -1;
> +
> + if (ret != 1) {
> + ret = unicode_format_arg_output(ctx, &arg, str);
> + Py_DECREF(str);
> + if (ret == -1)
> + return -1;
> + }
> +
> + if (ctx->dict && (ctx->argidx < ctx->arglen) && arg.ch != '%') {
> + PyErr_SetString(PyExc_TypeError,
> + "not all arguments converted during string formatting");
> + return -1;
> + }
> + return 0;
> +}
> +
> +PyObject *
> +PyUnicode_Format(PyObject *format, PyObject *args)
> +{
> + struct unicode_formatter_t ctx;
> +
> + if (format == NULL || args == NULL) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> +
> + if (ensure_unicode(format) < 0)
> + return NULL;
> +
> + ctx.fmtstr = format;
> + ctx.fmtdata = PyUnicode_DATA(ctx.fmtstr);
> + ctx.fmtkind = PyUnicode_KIND(ctx.fmtstr);
> + ctx.fmtcnt = PyUnicode_GET_LENGTH(ctx.fmtstr);
> + ctx.fmtpos = 0;
> +
> + _PyUnicodeWriter_Init(&ctx.writer);
> + ctx.writer.min_length = ctx.fmtcnt + 100;
> + ctx.writer.overallocate = 1;
> +
> + if (PyTuple_Check(args)) {
> + ctx.arglen = PyTuple_Size(args);
> + ctx.argidx = 0;
> + }
> + else {
> + ctx.arglen = -1;
> + ctx.argidx = -2;
> + }
> + ctx.args_owned = 0;
> + if (PyMapping_Check(args) && !PyTuple_Check(args) && !PyUnicode_Check(args))
> + ctx.dict = args;
> + else
> + ctx.dict = NULL;
> + ctx.args = args;
> +
> + while (--ctx.fmtcnt >= 0) {
> + if (PyUnicode_READ(ctx.fmtkind, ctx.fmtdata, ctx.fmtpos) != '%') {
> + Py_ssize_t nonfmtpos;
> +
> + nonfmtpos = ctx.fmtpos++;
> + while (ctx.fmtcnt >= 0 &&
> + PyUnicode_READ(ctx.fmtkind, ctx.fmtdata, ctx.fmtpos) != '%') {
> + ctx.fmtpos++;
> + ctx.fmtcnt--;
> + }
> + if (ctx.fmtcnt < 0) {
> + ctx.fmtpos--;
> + ctx.writer.overallocate = 0;
> + }
> +
> + if (_PyUnicodeWriter_WriteSubstring(&ctx.writer, ctx.fmtstr,
> + nonfmtpos, ctx.fmtpos) < 0)
> + goto onError;
> + }
> + else {
> + ctx.fmtpos++;
> + if (unicode_format_arg(&ctx) == -1)
> + goto onError;
> + }
> + }
> +
> + if (ctx.argidx < ctx.arglen && !ctx.dict) {
> + PyErr_SetString(PyExc_TypeError,
> + "not all arguments converted during string formatting");
> + goto onError;
> + }
> +
> + if (ctx.args_owned) {
> + Py_DECREF(ctx.args);
> + }
> + return _PyUnicodeWriter_Finish(&ctx.writer);
> +
> + onError:
> + _PyUnicodeWriter_Dealloc(&ctx.writer);
> + if (ctx.args_owned) {
> + Py_DECREF(ctx.args);
> + }
> + return NULL;
> +}
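> +
> +/* Illustrative use of PyUnicode_Format(), i.e. the "%" operator on str:
> + "%(name)s is %(age)d" % {'name': 'Ada', 'age': 36} takes the mapping branch
> + and yields 'Ada is 36', while "%d %d" % (1,) raises TypeError
> + ("not enough arguments for format string"). */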
> +
> +static PyObject *
> +unicode_subtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds);
> +
> +static PyObject *
> +unicode_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *x = NULL;
> + static char *kwlist[] = {"object", "encoding", "errors", 0};
> + char *encoding = NULL;
> + char *errors = NULL;
> +
> + if (type != &PyUnicode_Type)
> + return unicode_subtype_new(type, args, kwds);
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "|Oss:str",
> + kwlist, &x, &encoding, &errors))
> + return NULL;
> + if (x == NULL)
> + _Py_RETURN_UNICODE_EMPTY();
> + if (encoding == NULL && errors == NULL)
> + return PyObject_Str(x);
> + else
> + return PyUnicode_FromEncodedObject(x, encoding, errors);
> +}
> +
> +static PyObject *
> +unicode_subtype_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *unicode, *self;
> + Py_ssize_t length, char_size;
> + int share_wstr, share_utf8;
> + unsigned int kind;
> + void *data;
> +
> + assert(PyType_IsSubtype(type, &PyUnicode_Type));
> +
> + unicode = unicode_new(&PyUnicode_Type, args, kwds);
> + if (unicode == NULL)
> + return NULL;
> + assert(_PyUnicode_CHECK(unicode));
> + if (PyUnicode_READY(unicode) == -1) {
> + Py_DECREF(unicode);
> + return NULL;
> + }
> +
> + self = type->tp_alloc(type, 0);
> + if (self == NULL) {
> + Py_DECREF(unicode);
> + return NULL;
> + }
> + kind = PyUnicode_KIND(unicode);
> + length = PyUnicode_GET_LENGTH(unicode);
> +
> + _PyUnicode_LENGTH(self) = length;
> +#ifdef Py_DEBUG
> + _PyUnicode_HASH(self) = -1;
> +#else
> + _PyUnicode_HASH(self) = _PyUnicode_HASH(unicode);
> +#endif
> + _PyUnicode_STATE(self).interned = 0;
> + _PyUnicode_STATE(self).kind = kind;
> + _PyUnicode_STATE(self).compact = 0;
> + _PyUnicode_STATE(self).ascii = _PyUnicode_STATE(unicode).ascii;
> + _PyUnicode_STATE(self).ready = 1;
> + _PyUnicode_WSTR(self) = NULL;
> + _PyUnicode_UTF8_LENGTH(self) = 0;
> + _PyUnicode_UTF8(self) = NULL;
> + _PyUnicode_WSTR_LENGTH(self) = 0;
> + _PyUnicode_DATA_ANY(self) = NULL;
> +
> + share_utf8 = 0;
> + share_wstr = 0;
> + if (kind == PyUnicode_1BYTE_KIND) {
> + char_size = 1;
> + if (PyUnicode_MAX_CHAR_VALUE(unicode) < 128)
> + share_utf8 = 1;
> + }
> + else if (kind == PyUnicode_2BYTE_KIND) {
> + char_size = 2;
> + if (sizeof(wchar_t) == 2)
> + share_wstr = 1;
> + }
> + else {
> + assert(kind == PyUnicode_4BYTE_KIND);
> + char_size = 4;
> + if (sizeof(wchar_t) == 4)
> + share_wstr = 1;
> + }
> +
> + /* Ensure we won't overflow the length. */
> + if (length > (PY_SSIZE_T_MAX / char_size - 1)) {
> + PyErr_NoMemory();
> + goto onError;
> + }
> + data = PyObject_MALLOC((length + 1) * char_size);
> + if (data == NULL) {
> + PyErr_NoMemory();
> + goto onError;
> + }
> +
> + _PyUnicode_DATA_ANY(self) = data;
> + if (share_utf8) {
> + _PyUnicode_UTF8_LENGTH(self) = length;
> + _PyUnicode_UTF8(self) = data;
> + }
> + if (share_wstr) {
> + _PyUnicode_WSTR_LENGTH(self) = length;
> + _PyUnicode_WSTR(self) = (wchar_t *)data;
> + }
> +
> + memcpy(data, PyUnicode_DATA(unicode),
> + kind * (length + 1));
> + assert(_PyUnicode_CheckConsistency(self, 1));
> +#ifdef Py_DEBUG
> + _PyUnicode_HASH(self) = _PyUnicode_HASH(unicode);
> +#endif
> + Py_DECREF(unicode);
> + return self;
> +
> +onError:
> + Py_DECREF(unicode);
> + Py_DECREF(self);
> + return NULL;
> +}
> +
> +PyDoc_STRVAR(unicode_doc,
> +"str(object='') -> str\n\
> +str(bytes_or_buffer[, encoding[, errors]]) -> str\n\
> +\n\
> +Create a new string object from the given object. If encoding or\n\
> +errors is specified, then the object must expose a data buffer\n\
> +that will be decoded using the given encoding and error handler.\n\
> +Otherwise, returns the result of object.__str__() (if defined)\n\
> +or repr(object).\n\
> +encoding defaults to sys.getdefaultencoding().\n\
> +errors defaults to 'strict'.");
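> +
> +/* Illustrative calls matching the docstring above: str(42) returns '42' via
> + PyObject_Str(), while str(b'abc', 'ascii') decodes through
> + PyUnicode_FromEncodedObject() and returns 'abc'. */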
> +
> +static PyObject *unicode_iter(PyObject *seq);
> +
> +PyTypeObject PyUnicode_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "str", /* tp_name */
> + sizeof(PyUnicodeObject), /* tp_size */
> + 0, /* tp_itemsize */
> + /* Slots */
> + (destructor)unicode_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + unicode_repr, /* tp_repr */
> + &unicode_as_number, /* tp_as_number */
> + &unicode_as_sequence, /* tp_as_sequence */
> + &unicode_as_mapping, /* tp_as_mapping */
> + (hashfunc) unicode_hash, /* tp_hash*/
> + 0, /* tp_call*/
> + (reprfunc) unicode_str, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE |
> + Py_TPFLAGS_UNICODE_SUBCLASS, /* tp_flags */
> + unicode_doc, /* tp_doc */
> + 0, /* tp_traverse */
> + 0, /* tp_clear */
> + PyUnicode_RichCompare, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + unicode_iter, /* tp_iter */
> + 0, /* tp_iternext */
> + unicode_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + &PyBaseObject_Type, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + 0, /* tp_alloc */
> + unicode_new, /* tp_new */
> + PyObject_Del, /* tp_free */
> +};
> +
> +/* Initialize the Unicode implementation */
> +
> +int _PyUnicode_Init(void)
> +{
> + /* XXX - move this array to unicodectype.c ? */
> + Py_UCS2 linebreak[] = {
> + 0x000A, /* LINE FEED */
> + 0x000D, /* CARRIAGE RETURN */
> + 0x001C, /* FILE SEPARATOR */
> + 0x001D, /* GROUP SEPARATOR */
> + 0x001E, /* RECORD SEPARATOR */
> + 0x0085, /* NEXT LINE */
> + 0x2028, /* LINE SEPARATOR */
> + 0x2029, /* PARAGRAPH SEPARATOR */
> + };
> +
> + /* Init the implementation */
> + _Py_INCREF_UNICODE_EMPTY();
> + if (!unicode_empty)
> + Py_FatalError("Can't create empty string");
> + Py_DECREF(unicode_empty);
> +
> + if (PyType_Ready(&PyUnicode_Type) < 0)
> + Py_FatalError("Can't initialize 'unicode'");
> +
> + /* initialize the linebreak bloom filter */
> + bloom_linebreak = make_bloom_mask(
> + PyUnicode_2BYTE_KIND, linebreak,
> + Py_ARRAY_LENGTH(linebreak));
> +
> + if (PyType_Ready(&EncodingMapType) < 0)
> + Py_FatalError("Can't initialize encoding map type");
> +
> + if (PyType_Ready(&PyFieldNameIter_Type) < 0)
> + Py_FatalError("Can't initialize field name iterator type");
> +
> + if (PyType_Ready(&PyFormatterIter_Type) < 0)
> + Py_FatalError("Can't initialize formatter iter type");
> +
> + return 0;
> +}
> +
> +/* Finalize the Unicode implementation */
> +
> +int
> +PyUnicode_ClearFreeList(void)
> +{
> + return 0;
> +}
> +
> +void
> +_PyUnicode_Fini(void)
> +{
> + int i;
> +
> + Py_CLEAR(unicode_empty);
> +
> + for (i = 0; i < 256; i++)
> + Py_CLEAR(unicode_latin1[i]);
> + _PyUnicode_ClearStaticStrings();
> + (void)PyUnicode_ClearFreeList();
> +}
> +
> +void
> +PyUnicode_InternInPlace(PyObject **p)
> +{
> + PyObject *s = *p;
> + PyObject *t;
> +#ifdef Py_DEBUG
> + assert(s != NULL);
> + assert(_PyUnicode_CHECK(s));
> +#else
> + if (s == NULL || !PyUnicode_Check(s))
> + return;
> +#endif
> + /* If it's a subclass, we don't really know what putting
> + it in the interned dict might do. */
> + if (!PyUnicode_CheckExact(s))
> + return;
> + if (PyUnicode_CHECK_INTERNED(s))
> + return;
> + if (interned == NULL) {
> + interned = PyDict_New();
> + if (interned == NULL) {
> + PyErr_Clear(); /* Don't leave an exception */
> + return;
> + }
> + }
> + Py_ALLOW_RECURSION
> + t = PyDict_SetDefault(interned, s, s);
> + Py_END_ALLOW_RECURSION
> + if (t == NULL) {
> + PyErr_Clear();
> + return;
> + }
> + if (t != s) {
> + Py_INCREF(t);
> + Py_SETREF(*p, t);
> + return;
> + }
> + /* The two references in interned are not counted by refcnt.
> + The deallocator will take care of this */
> + Py_REFCNT(s) -= 2;
> + _PyUnicode_STATE(s).interned = SSTATE_INTERNED_MORTAL;
> +}
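> +
> +/* Illustrative effect of interning (exposed in Python as sys.intern): after
> + a = sys.intern('foo' + 'bar') and b = sys.intern('foobar'), the expression
> + "a is b" evaluates to True, since both names resolve to the single entry
> + kept in the interned dict. */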
> +
> +void
> +PyUnicode_InternImmortal(PyObject **p)
> +{
> + PyUnicode_InternInPlace(p);
> + if (PyUnicode_CHECK_INTERNED(*p) != SSTATE_INTERNED_IMMORTAL) {
> + _PyUnicode_STATE(*p).interned = SSTATE_INTERNED_IMMORTAL;
> + Py_INCREF(*p);
> + }
> +}
> +
> +PyObject *
> +PyUnicode_InternFromString(const char *cp)
> +{
> + PyObject *s = PyUnicode_FromString(cp);
> + if (s == NULL)
> + return NULL;
> + PyUnicode_InternInPlace(&s);
> + return s;
> +}
> +
> +void
> +_Py_ReleaseInternedUnicodeStrings(void)
> +{
> + PyObject *keys;
> + PyObject *s;
> + Py_ssize_t i, n;
> + Py_ssize_t immortal_size = 0, mortal_size = 0;
> +
> + if (interned == NULL || !PyDict_Check(interned))
> + return;
> + keys = PyDict_Keys(interned);
> + if (keys == NULL || !PyList_Check(keys)) {
> + PyErr_Clear();
> + return;
> + }
> +
> + /* Since _Py_ReleaseInternedUnicodeStrings() is intended to help a leak
> + detector, interned unicode strings are not forcibly deallocated;
> + rather, we give them their stolen references back, and then clear
> + and DECREF the interned dict. */
> +
> + n = PyList_GET_SIZE(keys);
> + fprintf(stderr, "releasing %" PY_FORMAT_SIZE_T "d interned strings\n",
> + n);
> + for (i = 0; i < n; i++) {
> + s = PyList_GET_ITEM(keys, i);
> + if (PyUnicode_READY(s) == -1) {
> + assert(0 && "could not ready string");
> + fprintf(stderr, "could not ready string\n");
> + }
> + switch (PyUnicode_CHECK_INTERNED(s)) {
> + case SSTATE_NOT_INTERNED:
> + /* XXX Shouldn't happen */
> + break;
> + case SSTATE_INTERNED_IMMORTAL:
> + Py_REFCNT(s) += 1;
> + immortal_size += PyUnicode_GET_LENGTH(s);
> + break;
> + case SSTATE_INTERNED_MORTAL:
> + Py_REFCNT(s) += 2;
> + mortal_size += PyUnicode_GET_LENGTH(s);
> + break;
> + default:
> + Py_FatalError("Inconsistent interned string state.");
> + }
> + _PyUnicode_STATE(s).interned = SSTATE_NOT_INTERNED;
> + }
> + fprintf(stderr, "total size of all interned strings: "
> + "%" PY_FORMAT_SIZE_T "d/%" PY_FORMAT_SIZE_T "d "
> + "mortal/immortal\n", mortal_size, immortal_size);
> + Py_DECREF(keys);
> + PyDict_Clear(interned);
> + Py_CLEAR(interned);
> +}
> +
> +
> +/********************* Unicode Iterator **************************/
> +
> +typedef struct {
> + PyObject_HEAD
> + Py_ssize_t it_index;
> + PyObject *it_seq; /* Set to NULL when iterator is exhausted */
> +} unicodeiterobject;
> +
> +static void
> +unicodeiter_dealloc(unicodeiterobject *it)
> +{
> + _PyObject_GC_UNTRACK(it);
> + Py_XDECREF(it->it_seq);
> + PyObject_GC_Del(it);
> +}
> +
> +static int
> +unicodeiter_traverse(unicodeiterobject *it, visitproc visit, void *arg)
> +{
> + Py_VISIT(it->it_seq);
> + return 0;
> +}
> +
> +static PyObject *
> +unicodeiter_next(unicodeiterobject *it)
> +{
> + PyObject *seq, *item;
> +
> + assert(it != NULL);
> + seq = it->it_seq;
> + if (seq == NULL)
> + return NULL;
> + assert(_PyUnicode_CHECK(seq));
> +
> + if (it->it_index < PyUnicode_GET_LENGTH(seq)) {
> + int kind = PyUnicode_KIND(seq);
> + void *data = PyUnicode_DATA(seq);
> + Py_UCS4 chr = PyUnicode_READ(kind, data, it->it_index);
> + item = PyUnicode_FromOrdinal(chr);
> + if (item != NULL)
> + ++it->it_index;
> + return item;
> + }
> +
> + it->it_seq = NULL;
> + Py_DECREF(seq);
> + return NULL;
> +}
> +
> +static PyObject *
> +unicodeiter_len(unicodeiterobject *it)
> +{
> + Py_ssize_t len = 0;
> + if (it->it_seq)
> + len = PyUnicode_GET_LENGTH(it->it_seq) - it->it_index;
> + return PyLong_FromSsize_t(len);
> +}
> +
> +PyDoc_STRVAR(length_hint_doc, "Private method returning an estimate of len(list(it)).");
> +
> +static PyObject *
> +unicodeiter_reduce(unicodeiterobject *it)
> +{
> + if (it->it_seq != NULL) {
> + return Py_BuildValue("N(O)n", _PyObject_GetBuiltin("iter"),
> + it->it_seq, it->it_index);
> + } else {
> + PyObject *u = PyUnicode_FromUnicode(NULL, 0);
> + if (u == NULL)
> + return NULL;
> + return Py_BuildValue("N(N)", _PyObject_GetBuiltin("iter"), u);
> + }
> +}
> +
> +PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
> +
> +static PyObject *
> +unicodeiter_setstate(unicodeiterobject *it, PyObject *state)
> +{
> + Py_ssize_t index = PyLong_AsSsize_t(state);
> + if (index == -1 && PyErr_Occurred())
> + return NULL;
> + if (it->it_seq != NULL) {
> + if (index < 0)
> + index = 0;
> + else if (index > PyUnicode_GET_LENGTH(it->it_seq))
> + index = PyUnicode_GET_LENGTH(it->it_seq); /* iterator truncated */
> + it->it_index = index;
> + }
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(setstate_doc, "Set state information for unpickling.");
> +
> +static PyMethodDef unicodeiter_methods[] = {
> + {"__length_hint__", (PyCFunction)unicodeiter_len, METH_NOARGS,
> + length_hint_doc},
> + {"__reduce__", (PyCFunction)unicodeiter_reduce, METH_NOARGS,
> + reduce_doc},
> + {"__setstate__", (PyCFunction)unicodeiter_setstate, METH_O,
> + setstate_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyTypeObject PyUnicodeIter_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "str_iterator", /* tp_name */
> + sizeof(unicodeiterobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)unicodeiter_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,/* tp_flags */
> + 0, /* tp_doc */
> + (traverseproc)unicodeiter_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)unicodeiter_next, /* tp_iternext */
> + unicodeiter_methods, /* tp_methods */
> + 0,
> +};
> +
> +static PyObject *
> +unicode_iter(PyObject *seq)
> +{
> + unicodeiterobject *it;
> +
> + if (!PyUnicode_Check(seq)) {
> + PyErr_BadInternalCall();
> + return NULL;
> + }
> + if (PyUnicode_READY(seq) == -1)
> + return NULL;
> + it = PyObject_GC_New(unicodeiterobject, &PyUnicodeIter_Type);
> + if (it == NULL)
> + return NULL;
> + it->it_index = 0;
> + Py_INCREF(seq);
> + it->it_seq = seq;
> + _PyObject_GC_TRACK(it);
> + return (PyObject *)it;
> +}
> +
> +
> +size_t
> +Py_UNICODE_strlen(const Py_UNICODE *u)
> +{
> + int res = 0;
> + while(*u++)
> + res++;
> + return res;
> +}
> +
> +Py_UNICODE*
> +Py_UNICODE_strcpy(Py_UNICODE *s1, const Py_UNICODE *s2)
> +{
> + Py_UNICODE *u = s1;
> + while ((*u++ = *s2++));
> + return s1;
> +}
> +
> +Py_UNICODE*
> +Py_UNICODE_strncpy(Py_UNICODE *s1, const Py_UNICODE *s2, size_t n)
> +{
> + Py_UNICODE *u = s1;
> + while ((*u++ = *s2++))
> + if (n-- == 0)
> + break;
> + return s1;
> +}
> +
> +Py_UNICODE*
> +Py_UNICODE_strcat(Py_UNICODE *s1, const Py_UNICODE *s2)
> +{
> + Py_UNICODE *u1 = s1;
> + u1 += Py_UNICODE_strlen(u1);
> + Py_UNICODE_strcpy(u1, s2);
> + return s1;
> +}
> +
> +int
> +Py_UNICODE_strcmp(const Py_UNICODE *s1, const Py_UNICODE *s2)
> +{
> + while (*s1 && *s2 && *s1 == *s2)
> + s1++, s2++;
> + if (*s1 && *s2)
> + return (*s1 < *s2) ? -1 : +1;
> + if (*s1)
> + return 1;
> + if (*s2)
> + return -1;
> + return 0;
> +}
> +
> +int
> +Py_UNICODE_strncmp(const Py_UNICODE *s1, const Py_UNICODE *s2, size_t n)
> +{
> + Py_UNICODE u1, u2;
> + for (; n != 0; n--) {
> + u1 = *s1;
> + u2 = *s2;
> + if (u1 != u2)
> + return (u1 < u2) ? -1 : +1;
> + if (u1 == '\0')
> + return 0;
> + s1++;
> + s2++;
> + }
> + return 0;
> +}
> +
> +Py_UNICODE*
> +Py_UNICODE_strchr(const Py_UNICODE *s, Py_UNICODE c)
> +{
> + const Py_UNICODE *p;
> + for (p = s; *p; p++)
> + if (*p == c)
> + return (Py_UNICODE*)p;
> + return NULL;
> +}
> +
> +Py_UNICODE*
> +Py_UNICODE_strrchr(const Py_UNICODE *s, Py_UNICODE c)
> +{
> + const Py_UNICODE *p;
> + p = s + Py_UNICODE_strlen(s);
> + while (p != s) {
> + p--;
> + if (*p == c)
> + return (Py_UNICODE*)p;
> + }
> + return NULL;
> +}
> +
> +Py_UNICODE*
> +PyUnicode_AsUnicodeCopy(PyObject *unicode)
> +{
> + Py_UNICODE *u, *copy;
> + Py_ssize_t len, size;
> +
> + if (!PyUnicode_Check(unicode)) {
> + PyErr_BadArgument();
> + return NULL;
> + }
> + u = PyUnicode_AsUnicodeAndSize(unicode, &len);
> + if (u == NULL)
> + return NULL;
> + /* Ensure we won't overflow the size. */
> + if (len > ((PY_SSIZE_T_MAX / (Py_ssize_t)sizeof(Py_UNICODE)) - 1)) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + size = len + 1; /* copy the null character */
> + size *= sizeof(Py_UNICODE);
> + copy = PyMem_Malloc(size);
> + if (copy == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + memcpy(copy, u, size);
> + return copy;
> +}
> +
> +/* A _string module, to export formatter_parser and formatter_field_name_split
> + to the string.Formatter class implemented in Python. */
> +
> +static PyMethodDef _string_methods[] = {
> + {"formatter_field_name_split", (PyCFunction) formatter_field_name_split,
> + METH_O, PyDoc_STR("split the argument as a field name")},
> + {"formatter_parser", (PyCFunction) formatter_parser,
> + METH_O, PyDoc_STR("parse the argument as a format string")},
> + {NULL, NULL}
> +};
> +
> +static struct PyModuleDef _string_module = {
> + PyModuleDef_HEAD_INIT,
> + "_string",
> + PyDoc_STR("string helper module"),
> + 0,
> + _string_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyInit__string(void)
> +{
> + return PyModule_Create(&_string_module);
> +}
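> +
> +/* Illustrative use of the helpers exported above, as consumed by
> + string.Formatter: list(_string.formatter_parser('{0}-{x!r:>5}')) yields
> + [('', '0', '', None), ('-', 'x', '>5', 'r')], i.e. tuples of
> + (literal_text, field_name, format_spec, conversion). */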
> +
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
> new file mode 100644
> index 00000000..deb243ec
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
> @@ -0,0 +1,2794 @@
> +/** @file
> + builtin module.
> +
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +/* Built-in functions */
> +
> +#include "Python.h"
> +#include "Python-ast.h"
> +
> +#include "node.h"
> +#include "code.h"
> +
> +#include "asdl.h"
> +#include "ast.h"
> +
> +#include <ctype.h>
> +
> +#ifdef HAVE_LANGINFO_H
> +#include <langinfo.h> /* CODESET */
> +#endif
> +
> +/* The default encoding used by the platform file system APIs
> + Can remain NULL for all platforms that don't have such a concept
> +
> + Don't forget to modify PyUnicode_DecodeFSDefault() if you touch any of the
> + values for Py_FileSystemDefaultEncoding!
> +*/
> +#if defined(__APPLE__)
> +const char *Py_FileSystemDefaultEncoding = "utf-8";
> +int Py_HasFileSystemDefaultEncoding = 1;
> +#elif defined(MS_WINDOWS)
> +/* may be changed by initfsencoding(), but should never be free()d */
> +const char *Py_FileSystemDefaultEncoding = "utf-8";
> +int Py_HasFileSystemDefaultEncoding = 1;
> +#elif defined(UEFI_MSVC_64) || defined(UEFI_MSVC_32)
> +const char *Py_FileSystemDefaultEncoding = "utf-8";
> +int Py_HasFileSystemDefaultEncoding = 1;
> +#else
> +const char *Py_FileSystemDefaultEncoding = NULL; /* set by initfsencoding() */
> +int Py_HasFileSystemDefaultEncoding = 0;
> +#endif
> +const char *Py_FileSystemDefaultEncodeErrors = "surrogateescape";
> +
> +_Py_IDENTIFIER(__builtins__);
> +_Py_IDENTIFIER(__dict__);
> +_Py_IDENTIFIER(__prepare__);
> +_Py_IDENTIFIER(__round__);
> +_Py_IDENTIFIER(encoding);
> +_Py_IDENTIFIER(errors);
> +_Py_IDENTIFIER(fileno);
> +_Py_IDENTIFIER(flush);
> +_Py_IDENTIFIER(metaclass);
> +_Py_IDENTIFIER(sort);
> +_Py_IDENTIFIER(stdin);
> +_Py_IDENTIFIER(stdout);
> +_Py_IDENTIFIER(stderr);
> +
> +#include "clinic/bltinmodule.c.h"
> +
> +/* AC: cannot convert yet, waiting for *args support */
> +static PyObject *
> +builtin___build_class__(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *func, *name, *bases, *mkw, *meta, *winner, *prep, *ns;
> + PyObject *cls = NULL, *cell = NULL;
> + Py_ssize_t nargs;
> + int isclass = 0; /* initialize to prevent gcc warning */
> +
> + assert(args != NULL);
> + if (!PyTuple_Check(args)) {
> + PyErr_SetString(PyExc_TypeError,
> + "__build_class__: args is not a tuple");
> + return NULL;
> + }
> + nargs = PyTuple_GET_SIZE(args);
> + if (nargs < 2) {
> + PyErr_SetString(PyExc_TypeError,
> + "__build_class__: not enough arguments");
> + return NULL;
> + }
> + func = PyTuple_GET_ITEM(args, 0); /* Better be callable */
> + if (!PyFunction_Check(func)) {
> + PyErr_SetString(PyExc_TypeError,
> + "__build_class__: func must be a function");
> + return NULL;
> + }
> + name = PyTuple_GET_ITEM(args, 1);
> + if (!PyUnicode_Check(name)) {
> + PyErr_SetString(PyExc_TypeError,
> + "__build_class__: name is not a string");
> + return NULL;
> + }
> + bases = PyTuple_GetSlice(args, 2, nargs);
> + if (bases == NULL)
> + return NULL;
> +
> + if (kwds == NULL) {
> + meta = NULL;
> + mkw = NULL;
> + }
> + else {
> + mkw = PyDict_Copy(kwds); /* Don't modify kwds passed in! */
> + if (mkw == NULL) {
> + Py_DECREF(bases);
> + return NULL;
> + }
> + meta = _PyDict_GetItemId(mkw, &PyId_metaclass);
> + if (meta != NULL) {
> + Py_INCREF(meta);
> + if (_PyDict_DelItemId(mkw, &PyId_metaclass) < 0) {
> + Py_DECREF(meta);
> + Py_DECREF(mkw);
> + Py_DECREF(bases);
> + return NULL;
> + }
> + /* metaclass is explicitly given, check if it's indeed a class */
> + isclass = PyType_Check(meta);
> + }
> + }
> + if (meta == NULL) {
> + /* if there are no bases, use type: */
> + if (PyTuple_GET_SIZE(bases) == 0) {
> + meta = (PyObject *) (&PyType_Type);
> + }
> + /* else get the type of the first base */
> + else {
> + PyObject *base0 = PyTuple_GET_ITEM(bases, 0);
> + meta = (PyObject *) (base0->ob_type);
> + }
> + Py_INCREF(meta);
> + isclass = 1; /* meta is really a class */
> + }
> +
> + if (isclass) {
> + /* meta is really a class, so check for a more derived
> + metaclass, or possible metaclass conflicts: */
> + winner = (PyObject *)_PyType_CalculateMetaclass((PyTypeObject *)meta,
> + bases);
> + if (winner == NULL) {
> + Py_DECREF(meta);
> + Py_XDECREF(mkw);
> + Py_DECREF(bases);
> + return NULL;
> + }
> + if (winner != meta) {
> + Py_DECREF(meta);
> + meta = winner;
> + Py_INCREF(meta);
> + }
> + }
> + /* else: meta is not a class, so we cannot do the metaclass
> + calculation, so we will use the explicitly given object as it is */
> + prep = _PyObject_GetAttrId(meta, &PyId___prepare__);
> + if (prep == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
> + PyErr_Clear();
> + ns = PyDict_New();
> + }
> + else {
> + Py_DECREF(meta);
> + Py_XDECREF(mkw);
> + Py_DECREF(bases);
> + return NULL;
> + }
> + }
> + else {
> + PyObject *pargs[2] = {name, bases};
> + ns = _PyObject_FastCallDict(prep, pargs, 2, mkw);
> + Py_DECREF(prep);
> + }
> + if (ns == NULL) {
> + Py_DECREF(meta);
> + Py_XDECREF(mkw);
> + Py_DECREF(bases);
> + return NULL;
> + }
> + if (!PyMapping_Check(ns)) {
> + PyErr_Format(PyExc_TypeError,
> + "%.200s.__prepare__() must return a mapping, not %.200s",
> + isclass ? ((PyTypeObject *)meta)->tp_name : "<metaclass>",
> + Py_TYPE(ns)->tp_name);
> + goto error;
> + }
> + cell = PyEval_EvalCodeEx(PyFunction_GET_CODE(func), PyFunction_GET_GLOBALS(func), ns,
> + NULL, 0, NULL, 0, NULL, 0, NULL,
> + PyFunction_GET_CLOSURE(func));
> + if (cell != NULL) {
> + PyObject *margs[3] = {name, bases, ns};
> + cls = _PyObject_FastCallDict(meta, margs, 3, mkw);
> + if (cls != NULL && PyType_Check(cls) && PyCell_Check(cell)) {
> + PyObject *cell_cls = PyCell_GET(cell);
> + if (cell_cls != cls) {
> + /* TODO: In 3.7, DeprecationWarning will become RuntimeError.
> + * At that point, cell_error won't be needed.
> + */
> + int cell_error;
> + if (cell_cls == NULL) {
> + const char *msg =
> + "__class__ not set defining %.200R as %.200R. "
> + "Was __classcell__ propagated to type.__new__?";
> + cell_error = PyErr_WarnFormat(
> + PyExc_DeprecationWarning, 1, msg, name, cls);
> + } else {
> + const char *msg =
> + "__class__ set to %.200R defining %.200R as %.200R";
> + PyErr_Format(PyExc_TypeError, msg, cell_cls, name, cls);
> + cell_error = 1;
> + }
> + if (cell_error) {
> + Py_DECREF(cls);
> + cls = NULL;
> + goto error;
> + } else {
> + /* Fill in the cell, since type.__new__ didn't do it */
> + PyCell_Set(cell, cls);
> + }
> + }
> + }
> + }
> +error:
> + Py_XDECREF(cell);
> + Py_DECREF(ns);
> + Py_DECREF(meta);
> + Py_XDECREF(mkw);
> + Py_DECREF(bases);
> + return cls;
> +}
> +
> +PyDoc_STRVAR(build_class_doc,
> +"__build_class__(func, name, *bases, metaclass=None, **kwds) -> class\n\
> +\n\
> +Internal helper function used by the class statement.");
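> +
> +/* Illustrative lowering performed by the compiler (not by this file): a
> + statement such as "class C(Base, metaclass=Meta): ..." is executed as
> + C = __build_class__(<function for the class body>, 'C', Base,
> + metaclass=Meta). */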
> +
> +static PyObject *
> +builtin___import__(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"name", "globals", "locals", "fromlist",
> + "level", 0};
> + PyObject *name, *globals = NULL, *locals = NULL, *fromlist = NULL;
> + int level = 0;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "U|OOOi:__import__",
> + kwlist, &name, &globals, &locals, &fromlist, &level))
> + return NULL;
> + return PyImport_ImportModuleLevelObject(name, globals, locals,
> + fromlist, level);
> +}
> +
> +PyDoc_STRVAR(import_doc,
> +"__import__(name, globals=None, locals=None, fromlist=(), level=0) -> module\n\
> +\n\
> +Import a module. Because this function is meant for use by the Python\n\
> +interpreter and not for general use, it is better to use\n\
> +importlib.import_module() to programmatically import a module.\n\
> +\n\
> +The globals argument is only used to determine the context;\n\
> +they are not modified. The locals argument is unused. The fromlist\n\
> +should be a list of names to emulate ``from name import ...'', or an\n\
> +empty list to emulate ``import name''.\n\
> +When importing a module from a package, note that __import__('A.B', ...)\n\
> +returns package A when fromlist is empty, but its submodule B when\n\
> +fromlist is not empty. The level argument is used to determine whether to\n\
> +perform absolute or relative imports: 0 is absolute, while a positive number\n\
> +is the number of parent directories to search relative to the current module.");
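> +
> +/* Illustrative behavior described by the docstring above:
> + __import__('os.path') returns the top-level package 'os', whereas
> + __import__('os.path', fromlist=['join']) returns the submodule 'os.path'. */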
> +
> +
> +/*[clinic input]
> +abs as builtin_abs
> +
> + x: object
> + /
> +
> +Return the absolute value of the argument.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_abs(PyObject *module, PyObject *x)
> +/*[clinic end generated code: output=b1b433b9e51356f5 input=bed4ca14e29c20d1]*/
> +{
> + return PyNumber_Absolute(x);
> +}
> +
> +/*[clinic input]
> +all as builtin_all
> +
> + iterable: object
> + /
> +
> +Return True if bool(x) is True for all values x in the iterable.
> +
> +If the iterable is empty, return True.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_all(PyObject *module, PyObject *iterable)
> +/*[clinic end generated code: output=ca2a7127276f79b3 input=1a7c5d1bc3438a21]*/
> +{
> + PyObject *it, *item;
> + PyObject *(*iternext)(PyObject *);
> + int cmp;
> +
> + it = PyObject_GetIter(iterable);
> + if (it == NULL)
> + return NULL;
> + iternext = *Py_TYPE(it)->tp_iternext;
> +
> + for (;;) {
> + item = iternext(it);
> + if (item == NULL)
> + break;
> + cmp = PyObject_IsTrue(item);
> + Py_DECREF(item);
> + if (cmp < 0) {
> + Py_DECREF(it);
> + return NULL;
> + }
> + if (cmp == 0) {
> + Py_DECREF(it);
> + Py_RETURN_FALSE;
> + }
> + }
> + Py_DECREF(it);
> + if (PyErr_Occurred()) {
> + if (PyErr_ExceptionMatches(PyExc_StopIteration))
> + PyErr_Clear();
> + else
> + return NULL;
> + }
> + Py_RETURN_TRUE;
> +}
> +
> +/*[clinic input]
> +any as builtin_any
> +
> + iterable: object
> + /
> +
> +Return True if bool(x) is True for any x in the iterable.
> +
> +If the iterable is empty, return False.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_any(PyObject *module, PyObject *iterable)
> +/*[clinic end generated code: output=fa65684748caa60e input=41d7451c23384f24]*/
> +{
> + PyObject *it, *item;
> + PyObject *(*iternext)(PyObject *);
> + int cmp;
> +
> + it = PyObject_GetIter(iterable);
> + if (it == NULL)
> + return NULL;
> + iternext = *Py_TYPE(it)->tp_iternext;
> +
> + for (;;) {
> + item = iternext(it);
> + if (item == NULL)
> + break;
> + cmp = PyObject_IsTrue(item);
> + Py_DECREF(item);
> + if (cmp < 0) {
> + Py_DECREF(it);
> + return NULL;
> + }
> + if (cmp > 0) {
> + Py_DECREF(it);
> + Py_RETURN_TRUE;
> + }
> + }
> + Py_DECREF(it);
> + if (PyErr_Occurred()) {
> + if (PyErr_ExceptionMatches(PyExc_StopIteration))
> + PyErr_Clear();
> + else
> + return NULL;
> + }
> + Py_RETURN_FALSE;
> +}
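> +
> +/* Illustrative results for the two predicates above: all([]) is True and
> + any([]) is False for empty iterables; any(n > 2 for n in (1, 3)) is True;
> + all(n > 0 for n in (1, -1)) is False. */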
> +
> +/*[clinic input]
> +ascii as builtin_ascii
> +
> + obj: object
> + /
> +
> +Return an ASCII-only representation of an object.
> +
> +As repr(), return a string containing a printable representation of an
> +object, but escape the non-ASCII characters in the string returned by
> +repr() using \\x, \\u or \\U escapes. This generates a string similar
> +to that returned by repr() in Python 2.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_ascii(PyObject *module, PyObject *obj)
> +/*[clinic end generated code: output=6d37b3f0984c7eb9 input=4c62732e1b3a3cc9]*/
> +{
> + return PyObject_ASCII(obj);
> +}
> +
> +
> +/*[clinic input]
> +bin as builtin_bin
> +
> + number: object
> + /
> +
> +Return the binary representation of an integer.
> +
> + >>> bin(2796202)
> + '0b1010101010101010101010'
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_bin(PyObject *module, PyObject *number)
> +/*[clinic end generated code: output=b6fc4ad5e649f4f7 input=53f8a0264bacaf90]*/
> +{
> + return PyNumber_ToBase(number, 2);
> +}
> +
> +
> +/*[clinic input]
> +callable as builtin_callable
> +
> + obj: object
> + /
> +
> +Return whether the object is callable (i.e., some kind of function).
> +
> +Note that classes are callable, as are instances of classes with a
> +__call__() method.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_callable(PyObject *module, PyObject *obj)
> +/*[clinic end generated code: output=2b095d59d934cb7e input=1423bab99cc41f58]*/
> +{
> + return PyBool_FromLong((long)PyCallable_Check(obj));
> +}
> +
> +
> +typedef struct {
> + PyObject_HEAD
> + PyObject *func;
> + PyObject *it;
> +} filterobject;
> +
> +static PyObject *
> +filter_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *func, *seq;
> + PyObject *it;
> + filterobject *lz;
> +
> + if (type == &PyFilter_Type && !_PyArg_NoKeywords("filter()", kwds))
> + return NULL;
> +
> + if (!PyArg_UnpackTuple(args, "filter", 2, 2, &func, &seq))
> + return NULL;
> +
> + /* Get iterator. */
> + it = PyObject_GetIter(seq);
> + if (it == NULL)
> + return NULL;
> +
> + /* create filterobject structure */
> + lz = (filterobject *)type->tp_alloc(type, 0);
> + if (lz == NULL) {
> + Py_DECREF(it);
> + return NULL;
> + }
> + Py_INCREF(func);
> + lz->func = func;
> + lz->it = it;
> +
> + return (PyObject *)lz;
> +}
> +
> +static void
> +filter_dealloc(filterobject *lz)
> +{
> + PyObject_GC_UnTrack(lz);
> + Py_XDECREF(lz->func);
> + Py_XDECREF(lz->it);
> + Py_TYPE(lz)->tp_free(lz);
> +}
> +
> +static int
> +filter_traverse(filterobject *lz, visitproc visit, void *arg)
> +{
> + Py_VISIT(lz->it);
> + Py_VISIT(lz->func);
> + return 0;
> +}
> +
> +static PyObject *
> +filter_next(filterobject *lz)
> +{
> + PyObject *item;
> + PyObject *it = lz->it;
> + long ok;
> + PyObject *(*iternext)(PyObject *);
> + int checktrue = lz->func == Py_None || lz->func == (PyObject *)&PyBool_Type;
> +
> + iternext = *Py_TYPE(it)->tp_iternext;
> + for (;;) {
> + item = iternext(it);
> + if (item == NULL)
> + return NULL;
> +
> + if (checktrue) {
> + ok = PyObject_IsTrue(item);
> + } else {
> + PyObject *good;
> + good = PyObject_CallFunctionObjArgs(lz->func, item, NULL);
> + if (good == NULL) {
> + Py_DECREF(item);
> + return NULL;
> + }
> + ok = PyObject_IsTrue(good);
> + Py_DECREF(good);
> + }
> + if (ok > 0)
> + return item;
> + Py_DECREF(item);
> + if (ok < 0)
> + return NULL;
> + }
> +}
> +
> +static PyObject *
> +filter_reduce(filterobject *lz)
> +{
> + return Py_BuildValue("O(OO)", Py_TYPE(lz), lz->func, lz->it);
> +}
> +
> +PyDoc_STRVAR(reduce_doc, "Return state information for pickling.");
> +
> +static PyMethodDef filter_methods[] = {
> + {"__reduce__", (PyCFunction)filter_reduce, METH_NOARGS, reduce_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyDoc_STRVAR(filter_doc,
> +"filter(function or None, iterable) --> filter object\n\
> +\n\
> +Return an iterator yielding those items of iterable for which function(item)\n\
> +is true. If function is None, return the items that are true.");
> +
> +PyTypeObject PyFilter_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "filter", /* tp_name */
> + sizeof(filterobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)filter_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
> + Py_TPFLAGS_BASETYPE, /* tp_flags */
> + filter_doc, /* tp_doc */
> + (traverseproc)filter_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)filter_next, /* tp_iternext */
> + filter_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + PyType_GenericAlloc, /* tp_alloc */
> + filter_new, /* tp_new */
> + PyObject_GC_Del, /* tp_free */
> +};
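For reviewers: the checktrue fast path above means passing None or bool as the
predicate is equivalent. A minimal sketch, assuming nothing beyond stock
builtins:

data = [0, 1, '', 'UEFI', None, []]
assert list(filter(None, data)) == [1, 'UEFI']    # truth-test fast path
assert list(filter(bool, data)) == [1, 'UEFI']    # bool takes the same path
assert list(filter(len, ['', 'a', 'bb'])) == ['a', 'bb']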
> +
> +
> +/*[clinic input]
> +format as builtin_format
> +
> + value: object
> + format_spec: unicode(c_default="NULL") = ''
> + /
> +
> +Return value.__format__(format_spec)
> +
> +format_spec defaults to the empty string.
> +See the Format Specification Mini-Language section of help('FORMATTING') for
> +details.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_format_impl(PyObject *module, PyObject *value, PyObject *format_spec)
> +/*[clinic end generated code: output=2f40bdfa4954b077 input=88339c93ea522b33]*/
> +{
> + return PyObject_Format(value, format_spec);
> +}
> +
> +/*[clinic input]
> +chr as builtin_chr
> +
> + i: int
> + /
> +
> +Return a Unicode string of one character with ordinal i; 0 <= i <= 0x10ffff.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_chr_impl(PyObject *module, int i)
> +/*[clinic end generated code: output=c733afcd200afcb7 input=3f604ef45a70750d]*/
> +{
> + return PyUnicode_FromOrdinal(i);
> +}
> +
> +
> +static const char *
> +source_as_string(PyObject *cmd, const char *funcname, const char *what, PyCompilerFlags *cf, PyObject **cmd_copy)
> +{
> + const char *str;
> + Py_ssize_t size;
> + Py_buffer view;
> +
> + *cmd_copy = NULL;
> + if (PyUnicode_Check(cmd)) {
> + cf->cf_flags |= PyCF_IGNORE_COOKIE;
> + str = PyUnicode_AsUTF8AndSize(cmd, &size);
> + if (str == NULL)
> + return NULL;
> + }
> + else if (PyBytes_Check(cmd)) {
> + str = PyBytes_AS_STRING(cmd);
> + size = PyBytes_GET_SIZE(cmd);
> + }
> + else if (PyByteArray_Check(cmd)) {
> + str = PyByteArray_AS_STRING(cmd);
> + size = PyByteArray_GET_SIZE(cmd);
> + }
> + else if (PyObject_GetBuffer(cmd, &view, PyBUF_SIMPLE) == 0) {
> + /* Copy to NUL-terminated buffer. */
> + *cmd_copy = PyBytes_FromStringAndSize(
> + (const char *)view.buf, view.len);
> + PyBuffer_Release(&view);
> + if (*cmd_copy == NULL) {
> + return NULL;
> + }
> + str = PyBytes_AS_STRING(*cmd_copy);
> + size = PyBytes_GET_SIZE(*cmd_copy);
> + }
> + else {
> + PyErr_Format(PyExc_TypeError,
> + "%s() arg 1 must be a %s object",
> + funcname, what);
> + return NULL;
> + }
> +
> + if (strlen(str) != (size_t)size) {
> + PyErr_SetString(PyExc_ValueError,
> + "source code string cannot contain null bytes");
> + Py_CLEAR(*cmd_copy);
> + return NULL;
> + }
> + return str;
> +}
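The strlen()-vs-size check above is what surfaces as the "null bytes"
ValueError at the Python level. A quick illustration (standard behaviour, not
a new test for this port):

try:
    compile('1 + 1\x00', '<embedded>', 'eval')
except ValueError as exc:
    assert 'null bytes' in str(exc)
else:
    raise AssertionError('embedded NUL was not rejected')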
> +
> +/*[clinic input]
> +compile as builtin_compile
> +
> + source: object
> + filename: object(converter="PyUnicode_FSDecoder")
> + mode: str
> + flags: int = 0
> + dont_inherit: int(c_default="0") = False
> + optimize: int = -1
> +
> +Compile source into a code object that can be executed by exec() or eval().
> +
> +The source code may represent a Python module, statement or expression.
> +The filename will be used for run-time error messages.
> +The mode must be 'exec' to compile a module, 'single' to compile a
> +single (interactive) statement, or 'eval' to compile an expression.
> +The flags argument, if present, controls which future statements influence
> +the compilation of the code.
> +The dont_inherit argument, if true, stops the compilation inheriting
> +the effects of any future statements in effect in the code calling
> +compile; if absent or false these statements do influence the compilation,
> +in addition to any features explicitly specified.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_compile_impl(PyObject *module, PyObject *source, PyObject *filename,
> + const char *mode, int flags, int dont_inherit,
> + int optimize)
> +/*[clinic end generated code: output=1fa176e33452bb63 input=9d53e8cfb3c86414]*/
> +{
> + PyObject *source_copy;
> + const char *str;
> + int compile_mode = -1;
> + int is_ast;
> + PyCompilerFlags cf;
> + int start[] = {Py_file_input, Py_eval_input, Py_single_input};
> + PyObject *result;
> +
> + cf.cf_flags = flags | PyCF_SOURCE_IS_UTF8;
> +
> + if (flags &
> + ~(PyCF_MASK | PyCF_MASK_OBSOLETE | PyCF_DONT_IMPLY_DEDENT | PyCF_ONLY_AST))
> + {
> + PyErr_SetString(PyExc_ValueError,
> + "compile(): unrecognised flags");
> + goto error;
> + }
> + /* XXX Warn if (supplied_flags & PyCF_MASK_OBSOLETE) != 0? */
> +
> + if (optimize < -1 || optimize > 2) {
> + PyErr_SetString(PyExc_ValueError,
> + "compile(): invalid optimize value");
> + goto error;
> + }
> +
> + if (!dont_inherit) {
> + PyEval_MergeCompilerFlags(&cf);
> + }
> +
> + if (strcmp(mode, "exec") == 0)
> + compile_mode = 0;
> + else if (strcmp(mode, "eval") == 0)
> + compile_mode = 1;
> + else if (strcmp(mode, "single") == 0)
> + compile_mode = 2;
> + else {
> + PyErr_SetString(PyExc_ValueError,
> + "compile() mode must be 'exec', 'eval' or 'single'");
> + goto error;
> + }
> +
> + is_ast = PyAST_Check(source);
> + if (is_ast == -1)
> + goto error;
> + if (is_ast) {
> + if (flags & PyCF_ONLY_AST) {
> + Py_INCREF(source);
> + result = source;
> + }
> + else {
> + PyArena *arena;
> + mod_ty mod;
> +
> + arena = PyArena_New();
> + if (arena == NULL)
> + goto error;
> + mod = PyAST_obj2mod(source, arena, compile_mode);
> + if (mod == NULL) {
> + PyArena_Free(arena);
> + goto error;
> + }
> + if (!PyAST_Validate(mod)) {
> + PyArena_Free(arena);
> + goto error;
> + }
> + result = (PyObject*)PyAST_CompileObject(mod, filename,
> + &cf, optimize, arena);
> + PyArena_Free(arena);
> + }
> + goto finally;
> + }
> +
> + str = source_as_string(source, "compile", "string, bytes or AST", &cf, &source_copy);
> + if (str == NULL)
> + goto error;
> +
> + result = Py_CompileStringObject(str, filename, start[compile_mode], &cf, optimize);
> + Py_XDECREF(source_copy);
> + goto finally;
> +
> +error:
> + result = NULL;
> +finally:
> + Py_DECREF(filename);
> + return result;
> +}
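Sketch of the three compile modes and the PyCF_ONLY_AST branch handled above.
Ordinary CPython 3.6 usage; whether the ast module is shipped in the UEFI
image is an assumption here:

import ast
expr = compile('40 + 2', '<review>', 'eval')       # 'eval': one expression
assert eval(expr) == 42
stmts = compile('x = 40 + 2', '<review>', 'exec')  # 'exec': module/statements
ns = {}
exec(stmts, ns)
assert ns['x'] == 42
tree = compile('40 + 2', '<review>', 'eval', ast.PyCF_ONLY_AST)
assert isinstance(tree, ast.Expression)            # the is_ast/ONLY_AST path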
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
> +static PyObject *
> +builtin_dir(PyObject *self, PyObject *args)
> +{
> + PyObject *arg = NULL;
> +
> + if (!PyArg_UnpackTuple(args, "dir", 0, 1, &arg))
> + return NULL;
> + return PyObject_Dir(arg);
> +}
> +
> +PyDoc_STRVAR(dir_doc,
> +"dir([object]) -> list of strings\n"
> +"\n"
> +"If called without an argument, return the names in the current scope.\n"
> +"Else, return an alphabetized list of names comprising (some of) the attributes\n"
> +"of the given object, and of attributes reachable from it.\n"
> +"If the object supplies a method named __dir__, it will be used; otherwise\n"
> +"the default dir() logic is used and returns:\n"
> +" for a module object: the module's attributes.\n"
> +" for a class object: its attributes, and recursively the attributes\n"
> +" of its bases.\n"
> +" for any other object: its attributes, its class's attributes, and\n"
> +" recursively the attributes of its class's base classes.");
> +
> +/*[clinic input]
> +divmod as builtin_divmod
> +
> + x: object
> + y: object
> + /
> +
> +Return the tuple (x//y, x%y). Invariant: div*y + mod == x.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_divmod_impl(PyObject *module, PyObject *x, PyObject *y)
> +/*[clinic end generated code: output=b06d8a5f6e0c745e input=175ad9c84ff41a85]*/
> +{
> + return PyNumber_Divmod(x, y);
> +}
> +
> +
> +/*[clinic input]
> +eval as builtin_eval
> +
> + source: object
> + globals: object = None
> + locals: object = None
> + /
> +
> +Evaluate the given source in the context of globals and locals.
> +
> +The source may be a string representing a Python expression
> +or a code object as returned by compile().
> +The globals must be a dictionary and locals can be any mapping,
> +defaulting to the current globals and locals.
> +If only globals is given, locals defaults to it.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_eval_impl(PyObject *module, PyObject *source, PyObject *globals,
> + PyObject *locals)
> +/*[clinic end generated code: output=0a0824aa70093116 input=11ee718a8640e527]*/
> +{
> + PyObject *result, *source_copy;
> + const char *str;
> + PyCompilerFlags cf;
> +
> + if (locals != Py_None && !PyMapping_Check(locals)) {
> + PyErr_SetString(PyExc_TypeError, "locals must be a mapping");
> + return NULL;
> + }
> + if (globals != Py_None && !PyDict_Check(globals)) {
> + PyErr_SetString(PyExc_TypeError, PyMapping_Check(globals) ?
> + "globals must be a real dict; try eval(expr, {}, mapping)"
> + : "globals must be a dict");
> + return NULL;
> + }
> + if (globals == Py_None) {
> + globals = PyEval_GetGlobals();
> + if (locals == Py_None) {
> + locals = PyEval_GetLocals();
> + if (locals == NULL)
> + return NULL;
> + }
> + }
> + else if (locals == Py_None)
> + locals = globals;
> +
> + if (globals == NULL || locals == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "eval must be given globals and locals "
> + "when called without a frame");
> + return NULL;
> + }
> +
> + if (_PyDict_GetItemId(globals, &PyId___builtins__) == NULL) {
> + if (_PyDict_SetItemId(globals, &PyId___builtins__,
> + PyEval_GetBuiltins()) != 0)
> + return NULL;
> + }
> +
> + if (PyCode_Check(source)) {
> + if (PyCode_GetNumFree((PyCodeObject *)source) > 0) {
> + PyErr_SetString(PyExc_TypeError,
> + "code object passed to eval() may not contain free variables");
> + return NULL;
> + }
> + return PyEval_EvalCode(source, globals, locals);
> + }
> +
> + cf.cf_flags = PyCF_SOURCE_IS_UTF8;
> + str = source_as_string(source, "eval", "string, bytes or code", &cf, &source_copy);
> + if (str == NULL)
> + return NULL;
> +
> + while (*str == ' ' || *str == '\t')
> + str++;
> +
> + (void)PyEval_MergeCompilerFlags(&cf);
> + result = PyRun_StringFlags(str, Py_eval_input, globals, locals, &cf);
> + Py_XDECREF(source_copy);
> + return result;
> +}
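Reviewer sketch of the globals/locals defaulting and the leading-whitespace
skip implemented above (stock semantics; not part of the patch):

g = {'x': 10}
assert eval('x + 1', g) == 11        # locals omitted -> defaults to globals
assert '__builtins__' in g           # injected into the globals dict on use
assert eval('\t x + 1', g) == 11     # leading spaces/tabs are skipped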
> +
> +/*[clinic input]
> +exec as builtin_exec
> +
> + source: object
> + globals: object = None
> + locals: object = None
> + /
> +
> +Execute the given source in the context of globals and locals.
> +
> +The source may be a string representing one or more Python statements
> +or a code object as returned by compile().
> +The globals must be a dictionary and locals can be any mapping,
> +defaulting to the current globals and locals.
> +If only globals is given, locals defaults to it.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_exec_impl(PyObject *module, PyObject *source, PyObject *globals,
> + PyObject *locals)
> +/*[clinic end generated code: output=3c90efc6ab68ef5d input=01ca3e1c01692829]*/
> +{
> + PyObject *v;
> +
> + if (globals == Py_None) {
> + globals = PyEval_GetGlobals();
> + if (locals == Py_None) {
> + locals = PyEval_GetLocals();
> + if (locals == NULL)
> + return NULL;
> + }
> + if (!globals || !locals) {
> + PyErr_SetString(PyExc_SystemError,
> + "globals and locals cannot be NULL");
> + return NULL;
> + }
> + }
> + else if (locals == Py_None)
> + locals = globals;
> +
> + if (!PyDict_Check(globals)) {
> + PyErr_Format(PyExc_TypeError, "exec() globals must be a dict, not %.100s",
> + globals->ob_type->tp_name);
> + return NULL;
> + }
> + if (!PyMapping_Check(locals)) {
> + PyErr_Format(PyExc_TypeError,
> + "locals must be a mapping or None, not %.100s",
> + locals->ob_type->tp_name);
> + return NULL;
> + }
> + if (_PyDict_GetItemId(globals, &PyId___builtins__) == NULL) {
> + if (_PyDict_SetItemId(globals, &PyId___builtins__,
> + PyEval_GetBuiltins()) != 0)
> + return NULL;
> + }
> +
> + if (PyCode_Check(source)) {
> + if (PyCode_GetNumFree((PyCodeObject *)source) > 0) {
> + PyErr_SetString(PyExc_TypeError,
> + "code object passed to exec() may not "
> + "contain free variables");
> + return NULL;
> + }
> + v = PyEval_EvalCode(source, globals, locals);
> + }
> + else {
> + PyObject *source_copy;
> + const char *str;
> + PyCompilerFlags cf;
> + cf.cf_flags = PyCF_SOURCE_IS_UTF8;
> + str = source_as_string(source, "exec",
> + "string, bytes or code", &cf,
> + &source_copy);
> + if (str == NULL)
> + return NULL;
> + if (PyEval_MergeCompilerFlags(&cf))
> + v = PyRun_StringFlags(str, Py_file_input, globals,
> + locals, &cf);
> + else
> + v = PyRun_String(str, Py_file_input, globals, locals);
> + Py_XDECREF(source_copy);
> + }
> + if (v == NULL)
> + return NULL;
> + Py_DECREF(v);
> + Py_RETURN_NONE;
> +}
> +
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
> +static PyObject *
> +builtin_getattr(PyObject *self, PyObject *args)
> +{
> + PyObject *v, *result, *dflt = NULL;
> + PyObject *name;
> +
> + if (!PyArg_UnpackTuple(args, "getattr", 2, 3, &v, &name, &dflt))
> + return NULL;
> +
> + if (!PyUnicode_Check(name)) {
> + PyErr_SetString(PyExc_TypeError,
> + "getattr(): attribute name must be string");
> + return NULL;
> + }
> + result = PyObject_GetAttr(v, name);
> + if (result == NULL && dflt != NULL &&
> + PyErr_ExceptionMatches(PyExc_AttributeError))
> + {
> + PyErr_Clear();
> + Py_INCREF(dflt);
> + result = dflt;
> + }
> + return result;
> +}
> +
> +PyDoc_STRVAR(getattr_doc,
> +"getattr(object, name[, default]) -> value\n\
> +\n\
> +Get a named attribute from an object; getattr(x, 'y') is equivalent to x.y.\n\
> +When a default argument is given, it is returned when the attribute doesn't\n\
> +exist; without it, an exception is raised in that case.");
> +
> +
> +/*[clinic input]
> +globals as builtin_globals
> +
> +Return the dictionary containing the current scope's global variables.
> +
> +NOTE: Updates to this dictionary *will* affect name lookups in the current
> +global scope and vice-versa.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_globals_impl(PyObject *module)
> +/*[clinic end generated code: output=e5dd1527067b94d2 input=9327576f92bb48ba]*/
> +{
> + PyObject *d;
> +
> + d = PyEval_GetGlobals();
> + Py_XINCREF(d);
> + return d;
> +}
> +
> +
> +/*[clinic input]
> +hasattr as builtin_hasattr
> +
> + obj: object
> + name: object
> + /
> +
> +Return whether the object has an attribute with the given name.
> +
> +This is done by calling getattr(obj, name) and catching AttributeError.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_hasattr_impl(PyObject *module, PyObject *obj, PyObject *name)
> +/*[clinic end generated code: output=a7aff2090a4151e5 input=0faec9787d979542]*/
> +{
> + PyObject *v;
> +
> + if (!PyUnicode_Check(name)) {
> + PyErr_SetString(PyExc_TypeError,
> + "hasattr(): attribute name must be string");
> + return NULL;
> + }
> + v = PyObject_GetAttr(obj, name);
> + if (v == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_AttributeError)) {
> + PyErr_Clear();
> + Py_RETURN_FALSE;
> + }
> + return NULL;
> + }
> + Py_DECREF(v);
> + Py_RETURN_TRUE;
> +}
> +
> +
> +/* AC: gdb's integration with CPython relies on builtin_id having
> + * the *exact* parameter names of "self" and "v", so we ensure we
> + * preserve those name rather than using the AC defaults.
> + */
> +/*[clinic input]
> +id as builtin_id
> +
> + self: self(type="PyModuleDef *")
> + obj as v: object
> + /
> +
> +Return the identity of an object.
> +
> +This is guaranteed to be unique among simultaneously existing objects.
> +(CPython uses the object's memory address.)
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_id(PyModuleDef *self, PyObject *v)
> +/*[clinic end generated code: output=0aa640785f697f65 input=5a534136419631f4]*/
> +{
> + return PyLong_FromVoidPtr(v);
> +}
> +
> +
> +/* map object ************************************************************/
> +
> +typedef struct {
> + PyObject_HEAD
> + PyObject *iters;
> + PyObject *func;
> +} mapobject;
> +
> +static PyObject *
> +map_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + PyObject *it, *iters, *func;
> + mapobject *lz;
> + Py_ssize_t numargs, i;
> +
> + if (type == &PyMap_Type && !_PyArg_NoKeywords("map()", kwds))
> + return NULL;
> +
> + numargs = PyTuple_Size(args);
> + if (numargs < 2) {
> + PyErr_SetString(PyExc_TypeError,
> + "map() must have at least two arguments.");
> + return NULL;
> + }
> +
> + iters = PyTuple_New(numargs-1);
> + if (iters == NULL)
> + return NULL;
> +
> + for (i=1 ; i<numargs ; i++) {
> + /* Get iterator. */
> + it = PyObject_GetIter(PyTuple_GET_ITEM(args, i));
> + if (it == NULL) {
> + Py_DECREF(iters);
> + return NULL;
> + }
> + PyTuple_SET_ITEM(iters, i-1, it);
> + }
> +
> + /* create mapobject structure */
> + lz = (mapobject *)type->tp_alloc(type, 0);
> + if (lz == NULL) {
> + Py_DECREF(iters);
> + return NULL;
> + }
> + lz->iters = iters;
> + func = PyTuple_GET_ITEM(args, 0);
> + Py_INCREF(func);
> + lz->func = func;
> +
> + return (PyObject *)lz;
> +}
> +
> +static void
> +map_dealloc(mapobject *lz)
> +{
> + PyObject_GC_UnTrack(lz);
> + Py_XDECREF(lz->iters);
> + Py_XDECREF(lz->func);
> + Py_TYPE(lz)->tp_free(lz);
> +}
> +
> +static int
> +map_traverse(mapobject *lz, visitproc visit, void *arg)
> +{
> + Py_VISIT(lz->iters);
> + Py_VISIT(lz->func);
> + return 0;
> +}
> +
> +static PyObject *
> +map_next(mapobject *lz)
> +{
> + PyObject *small_stack[5];
> + PyObject **stack;
> + Py_ssize_t niters, nargs, i;
> + PyObject *result = NULL;
> +
> + niters = PyTuple_GET_SIZE(lz->iters);
> + if (niters <= (Py_ssize_t)Py_ARRAY_LENGTH(small_stack)) {
> + stack = small_stack;
> + }
> + else {
> + stack = PyMem_Malloc(niters * sizeof(stack[0]));
> + if (stack == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + }
> +
> + nargs = 0;
> + for (i=0; i < niters; i++) {
> + PyObject *it = PyTuple_GET_ITEM(lz->iters, i);
> + PyObject *val = Py_TYPE(it)->tp_iternext(it);
> + if (val == NULL) {
> + goto exit;
> + }
> + stack[i] = val;
> + nargs++;
> + }
> +
> + result = _PyObject_FastCall(lz->func, stack, nargs);
> +
> +exit:
> + for (i=0; i < nargs; i++) {
> + Py_DECREF(stack[i]);
> + }
> + if (stack != small_stack) {
> + PyMem_Free(stack);
> + }
> + return result;
> +}
> +
> +static PyObject *
> +map_reduce(mapobject *lz)
> +{
> + Py_ssize_t numargs = PyTuple_GET_SIZE(lz->iters);
> + PyObject *args = PyTuple_New(numargs+1);
> + Py_ssize_t i;
> + if (args == NULL)
> + return NULL;
> + Py_INCREF(lz->func);
> + PyTuple_SET_ITEM(args, 0, lz->func);
> + for (i = 0; i<numargs; i++){
> + PyObject *it = PyTuple_GET_ITEM(lz->iters, i);
> + Py_INCREF(it);
> + PyTuple_SET_ITEM(args, i+1, it);
> + }
> +
> + return Py_BuildValue("ON", Py_TYPE(lz), args);
> +}
> +
> +static PyMethodDef map_methods[] = {
> + {"__reduce__", (PyCFunction)map_reduce, METH_NOARGS, reduce_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +
> +PyDoc_STRVAR(map_doc,
> +"map(func, *iterables) --> map object\n\
> +\n\
> +Make an iterator that computes the function using arguments from\n\
> +each of the iterables. Stops when the shortest iterable is exhausted.");
> +
> +PyTypeObject PyMap_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "map", /* tp_name */
> + sizeof(mapobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)map_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
> + Py_TPFLAGS_BASETYPE, /* tp_flags */
> + map_doc, /* tp_doc */
> + (traverseproc)map_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)map_next, /* tp_iternext */
> + map_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + PyType_GenericAlloc, /* tp_alloc */
> + map_new, /* tp_new */
> + PyObject_GC_Del, /* tp_free */
> +};
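Quick illustration of map()'s stop-at-shortest behaviour from the iterator
loop above -- plain CPython semantics, no port-specific assumptions:

assert list(map(pow, [2, 3, 4], [3, 2])) == [8, 9]    # shortest iterable wins
assert list(map(str.upper, ['ia32', 'x64'])) == ['IA32', 'X64']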
> +
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
> +static PyObject *
> +builtin_next(PyObject *self, PyObject *args)
> +{
> + PyObject *it, *res;
> + PyObject *def = NULL;
> +
> + if (!PyArg_UnpackTuple(args, "next", 1, 2, &it, &def))
> + return NULL;
> + if (!PyIter_Check(it)) {
> + PyErr_Format(PyExc_TypeError,
> + "'%.200s' object is not an iterator",
> + it->ob_type->tp_name);
> + return NULL;
> + }
> +
> + res = (*it->ob_type->tp_iternext)(it);
> + if (res != NULL) {
> + return res;
> + } else if (def != NULL) {
> + if (PyErr_Occurred()) {
> + if(!PyErr_ExceptionMatches(PyExc_StopIteration))
> + return NULL;
> + PyErr_Clear();
> + }
> + Py_INCREF(def);
> + return def;
> + } else if (PyErr_Occurred()) {
> + return NULL;
> + } else {
> + PyErr_SetNone(PyExc_StopIteration);
> + return NULL;
> + }
> +}
> +
> +PyDoc_STRVAR(next_doc,
> +"next(iterator[, default])\n\
> +\n\
> +Return the next item from the iterator. If default is given and the iterator\n\
> +is exhausted, it is returned instead of raising StopIteration.");
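Minimal sketch of the default-argument path in builtin_next() above (standard
behaviour, included only as reviewer context):

it = iter([1])
assert next(it) == 1
assert next(it, 'done') == 'done'    # default swallows StopIteration
try:
    next(it)
except StopIteration:
    pass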
> +
> +
> +/*[clinic input]
> +setattr as builtin_setattr
> +
> + obj: object
> + name: object
> + value: object
> + /
> +
> +Sets the named attribute on the given object to the specified value.
> +
> +setattr(x, 'y', v) is equivalent to ``x.y = v''
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_setattr_impl(PyObject *module, PyObject *obj, PyObject *name,
> + PyObject *value)
> +/*[clinic end generated code: output=dc2ce1d1add9acb4 input=bd2b7ca6875a1899]*/
> +{
> + if (PyObject_SetAttr(obj, name, value) != 0)
> + return NULL;
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
> +
> +
> +/*[clinic input]
> +delattr as builtin_delattr
> +
> + obj: object
> + name: object
> + /
> +
> +Deletes the named attribute from the given object.
> +
> +delattr(x, 'y') is equivalent to ``del x.y''
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_delattr_impl(PyObject *module, PyObject *obj, PyObject *name)
> +/*[clinic end generated code: output=85134bc58dff79fa input=db16685d6b4b9410]*/
> +{
> + if (PyObject_SetAttr(obj, name, (PyObject *)NULL) != 0)
> + return NULL;
> + Py_INCREF(Py_None);
> + return Py_None;
> +}
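One combined sketch covering getattr()/hasattr() earlier in this hunk plus
setattr()/delattr() above -- all stock CPython behaviour, shown only for
review context:

class Box:
    pass

b = Box()
setattr(b, 'size', 3)
assert getattr(b, 'size') == 3
assert getattr(b, 'colour', 'n/a') == 'n/a'   # default instead of AttributeError
assert hasattr(b, 'size') and not hasattr(b, 'colour')
delattr(b, 'size')
assert not hasattr(b, 'size')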
> +
> +
> +/*[clinic input]
> +hash as builtin_hash
> +
> + obj: object
> + /
> +
> +Return the hash value for the given object.
> +
> +Two objects that compare equal must also have the same hash value, but the
> +reverse is not necessarily true.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_hash(PyObject *module, PyObject *obj)
> +/*[clinic end generated code: output=237668e9d7688db7 input=58c48be822bf9c54]*/
> +{
> + Py_hash_t x;
> +
> + x = PyObject_Hash(obj);
> + if (x == -1)
> + return NULL;
> + return PyLong_FromSsize_t(x);
> +}
> +
> +
> +/*[clinic input]
> +hex as builtin_hex
> +
> + number: object
> + /
> +
> +Return the hexadecimal representation of an integer.
> +
> + >>> hex(12648430)
> + '0xc0ffee'
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_hex(PyObject *module, PyObject *number)
> +/*[clinic end generated code: output=e46b612169099408 input=e645aff5fc7d540e]*/
> +{
> + return PyNumber_ToBase(number, 16);
> +}
> +
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
> +static PyObject *
> +builtin_iter(PyObject *self, PyObject *args)
> +{
> + PyObject *v, *w = NULL;
> +
> + if (!PyArg_UnpackTuple(args, "iter", 1, 2, &v, &w))
> + return NULL;
> + if (w == NULL)
> + return PyObject_GetIter(v);
> + if (!PyCallable_Check(v)) {
> + PyErr_SetString(PyExc_TypeError,
> + "iter(v, w): v must be callable");
> + return NULL;
> + }
> + return PyCallIter_New(v, w);
> +}
> +
> +PyDoc_STRVAR(iter_doc,
> +"iter(iterable) -> iterator\n\
> +iter(callable, sentinel) -> iterator\n\
> +\n\
> +Get an iterator from an object. In the first form, the argument must\n\
> +supply its own iterator, or be a sequence.\n\
> +In the second form, the callable is called until it returns the sentinel.");
> +
> +
> +/*[clinic input]
> +len as builtin_len
> +
> + obj: object
> + /
> +
> +Return the number of items in a container.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_len(PyObject *module, PyObject *obj)
> +/*[clinic end generated code: output=fa7a270d314dfb6c input=bc55598da9e9c9b5]*/
> +{
> + Py_ssize_t res;
> +
> + res = PyObject_Size(obj);
> + if (res < 0 && PyErr_Occurred())
> + return NULL;
> + return PyLong_FromSsize_t(res);
> +}
> +
> +
> +/*[clinic input]
> +locals as builtin_locals
> +
> +Return a dictionary containing the current scope's local variables.
> +
> +NOTE: Whether or not updates to this dictionary will affect name lookups in
> +the local scope and vice-versa is *implementation dependent* and not
> +covered by any backwards compatibility guarantees.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_locals_impl(PyObject *module)
> +/*[clinic end generated code: output=b46c94015ce11448 input=7874018d478d5c4b]*/
> +{
> + PyObject *d;
> +
> + d = PyEval_GetLocals();
> + Py_XINCREF(d);
> + return d;
> +}
> +
> +
> +static PyObject *
> +min_max(PyObject *args, PyObject *kwds, int op)
> +{
> + PyObject *v, *it, *item, *val, *maxitem, *maxval, *keyfunc=NULL;
> + PyObject *emptytuple, *defaultval = NULL;
> + static char *kwlist[] = {"key", "default", NULL};
> + const char *name = op == Py_LT ? "min" : "max";
> + const int positional = PyTuple_Size(args) > 1;
> + int ret;
> +
> + if (positional)
> + v = args;
> + else if (!PyArg_UnpackTuple(args, name, 1, 1, &v))
> + return NULL;
> +
> + emptytuple = PyTuple_New(0);
> + if (emptytuple == NULL)
> + return NULL;
> + ret = PyArg_ParseTupleAndKeywords(emptytuple, kwds, "|$OO", kwlist,
> + &keyfunc, &defaultval);
> + Py_DECREF(emptytuple);
> + if (!ret)
> + return NULL;
> +
> + if (positional && defaultval != NULL) {
> + PyErr_Format(PyExc_TypeError,
> + "Cannot specify a default for %s() with multiple "
> + "positional arguments", name);
> + return NULL;
> + }
> +
> + it = PyObject_GetIter(v);
> + if (it == NULL) {
> + return NULL;
> + }
> +
> + maxitem = NULL; /* the result */
> + maxval = NULL; /* the value associated with the result */
> + while (( item = PyIter_Next(it) )) {
> + /* get the value from the key function */
> + if (keyfunc != NULL) {
> + val = PyObject_CallFunctionObjArgs(keyfunc, item, NULL);
> + if (val == NULL)
> + goto Fail_it_item;
> + }
> + /* no key function; the value is the item */
> + else {
> + val = item;
> + Py_INCREF(val);
> + }
> +
> + /* maximum value and item are unset; set them */
> + if (maxval == NULL) {
> + maxitem = item;
> + maxval = val;
> + }
> + /* maximum value and item are set; update them as necessary */
> + else {
> + int cmp = PyObject_RichCompareBool(val, maxval, op);
> + if (cmp < 0)
> + goto Fail_it_item_and_val;
> + else if (cmp > 0) {
> + Py_DECREF(maxval);
> + Py_DECREF(maxitem);
> + maxval = val;
> + maxitem = item;
> + }
> + else {
> + Py_DECREF(item);
> + Py_DECREF(val);
> + }
> + }
> + }
> + if (PyErr_Occurred())
> + goto Fail_it;
> + if (maxval == NULL) {
> + assert(maxitem == NULL);
> + if (defaultval != NULL) {
> + Py_INCREF(defaultval);
> + maxitem = defaultval;
> + } else {
> + PyErr_Format(PyExc_ValueError,
> + "%s() arg is an empty sequence", name);
> + }
> + }
> + else
> + Py_DECREF(maxval);
> + Py_DECREF(it);
> + return maxitem;
> +
> +Fail_it_item_and_val:
> + Py_DECREF(val);
> +Fail_it_item:
> + Py_DECREF(item);
> +Fail_it:
> + Py_XDECREF(maxval);
> + Py_XDECREF(maxitem);
> + Py_DECREF(it);
> + return NULL;
> +}
> +
> +/* AC: cannot convert yet, waiting for *args support */
> +static PyObject *
> +builtin_min(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + return min_max(args, kwds, Py_LT);
> +}
> +
> +PyDoc_STRVAR(min_doc,
> +"min(iterable, *[, default=obj, key=func]) -> value\n\
> +min(arg1, arg2, *args, *[, key=func]) -> value\n\
> +\n\
> +With a single iterable argument, return its smallest item. The\n\
> +default keyword-only argument specifies an object to return if\n\
> +the provided iterable is empty.\n\
> +With two or more arguments, return the smallest argument.");
> +
> +
> +/* AC: cannot convert yet, waiting for *args support */
> +static PyObject *
> +builtin_max(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + return min_max(args, kwds, Py_GT);
> +}
> +
> +PyDoc_STRVAR(max_doc,
> +"max(iterable, *[, default=obj, key=func]) -> value\n\
> +max(arg1, arg2, *args, *[, key=func]) -> value\n\
> +\n\
> +With a single iterable argument, return its biggest item. The\n\
> +default keyword-only argument specifies an object to return if\n\
> +the provided iterable is empty.\n\
> +With two or more arguments, return the largest argument.");
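The shared min_max() helper above drives both builtins; a short sketch of the
key/default handling it implements (standard semantics, nothing assumed about
the port):

assert min([3, 1, 2]) == 1 and max(3, 1, 2) == 3
assert min([], default='empty') == 'empty'        # only legal for iterables
assert max(['aa', 'b', 'ccc'], key=len) == 'ccc'
try:
    min(1, 2, default=0)     # default + multiple positionals -> TypeError
except TypeError:
    pass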
> +
> +
> +/*[clinic input]
> +oct as builtin_oct
> +
> + number: object
> + /
> +
> +Return the octal representation of an integer.
> +
> + >>> oct(342391)
> + '0o1234567'
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_oct(PyObject *module, PyObject *number)
> +/*[clinic end generated code: output=40a34656b6875352 input=ad6b274af4016c72]*/
> +{
> + return PyNumber_ToBase(number, 8);
> +}
> +
> +
> +/*[clinic input]
> +ord as builtin_ord
> +
> + c: object
> + /
> +
> +Return the Unicode code point for a one-character string.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_ord(PyObject *module, PyObject *c)
> +/*[clinic end generated code: output=4fa5e87a323bae71 input=3064e5d6203ad012]*/
> +{
> + long ord;
> + Py_ssize_t size;
> +
> + if (PyBytes_Check(c)) {
> + size = PyBytes_GET_SIZE(c);
> + if (size == 1) {
> + ord = (long)((unsigned char)*PyBytes_AS_STRING(c));
> + return PyLong_FromLong(ord);
> + }
> + }
> + else if (PyUnicode_Check(c)) {
> + if (PyUnicode_READY(c) == -1)
> + return NULL;
> + size = PyUnicode_GET_LENGTH(c);
> + if (size == 1) {
> + ord = (long)PyUnicode_READ_CHAR(c, 0);
> + return PyLong_FromLong(ord);
> + }
> + }
> + else if (PyByteArray_Check(c)) {
> + /* XXX Hopefully this is temporary */
> + size = PyByteArray_GET_SIZE(c);
> + if (size == 1) {
> + ord = (long)((unsigned char)*PyByteArray_AS_STRING(c));
> + return PyLong_FromLong(ord);
> + }
> + }
> + else {
> + PyErr_Format(PyExc_TypeError,
> + "ord() expected string of length 1, but " \
> + "%.200s found", c->ob_type->tp_name);
> + return NULL;
> + }
> +
> + PyErr_Format(PyExc_TypeError,
> + "ord() expected a character, "
> + "but string of length %zd found",
> + size);
> + return NULL;
> +}
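Sketch of the accepted input types for ord() above (and its chr() counterpart
earlier in the hunk) -- stock CPython 3.6 behaviour:

assert ord('A') == 65 and chr(65) == 'A'
assert ord(b'\x7f') == 127               # length-1 bytes/bytearray also accepted
assert chr(0x10FFFF) == '\U0010ffff'
try:
    ord('ab')                            # more than one character -> TypeError
except TypeError:
    pass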
> +
> +
> +/*[clinic input]
> +pow as builtin_pow
> +
> + x: object
> + y: object
> + z: object = None
> + /
> +
> +Equivalent to x**y (with two arguments) or x**y % z (with three arguments)
> +
> +Some types, such as ints, are able to use a more efficient algorithm when
> +invoked using the three argument form.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_pow_impl(PyObject *module, PyObject *x, PyObject *y, PyObject *z)
> +/*[clinic end generated code: output=50a14d5d130d404b input=653d57d38d41fc07]*/
> +{
> + return PyNumber_Power(x, y, z);
> +}
> +
> +
> +/* AC: cannot convert yet, waiting for *args support */
> +static PyObject *
> +builtin_print(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + static char *kwlist[] = {"sep", "end", "file", "flush", 0};
> + static PyObject *dummy_args;
> + PyObject *sep = NULL, *end = NULL, *file = NULL, *flush = NULL;
> + int i, err;
> +
> + if (dummy_args == NULL && !(dummy_args = PyTuple_New(0)))
> + return NULL;
> + if (!PyArg_ParseTupleAndKeywords(dummy_args, kwds, "|OOOO:print",
> + kwlist, &sep, &end, &file, &flush))
> + return NULL;
> + if (file == NULL || file == Py_None) {
> + file = _PySys_GetObjectId(&PyId_stdout);
> + if (file == NULL) {
> + PyErr_SetString(PyExc_RuntimeError, "lost sys.stdout");
> + return NULL;
> + }
> +
> + /* sys.stdout may be None when FILE* stdout isn't connected */
> + if (file == Py_None)
> + Py_RETURN_NONE;
> + }
> +
> + if (sep == Py_None) {
> + sep = NULL;
> + }
> + else if (sep && !PyUnicode_Check(sep)) {
> + PyErr_Format(PyExc_TypeError,
> + "sep must be None or a string, not %.200s",
> + sep->ob_type->tp_name);
> + return NULL;
> + }
> + if (end == Py_None) {
> + end = NULL;
> + }
> + else if (end && !PyUnicode_Check(end)) {
> + PyErr_Format(PyExc_TypeError,
> + "end must be None or a string, not %.200s",
> + end->ob_type->tp_name);
> + return NULL;
> + }
> +
> + for (i = 0; i < PyTuple_Size(args); i++) {
> + if (i > 0) {
> + if (sep == NULL)
> + err = PyFile_WriteString(" ", file);
> + else
> + err = PyFile_WriteObject(sep, file,
> + Py_PRINT_RAW);
> + if (err)
> + return NULL;
> + }
> + err = PyFile_WriteObject(PyTuple_GetItem(args, i), file,
> + Py_PRINT_RAW);
> + if (err)
> + return NULL;
> + }
> +
> + if (end == NULL)
> + err = PyFile_WriteString("\n", file);
> + else
> + err = PyFile_WriteObject(end, file, Py_PRINT_RAW);
> + if (err)
> + return NULL;
> +
> + if (flush != NULL) {
> + PyObject *tmp;
> + int do_flush = PyObject_IsTrue(flush);
> + if (do_flush == -1)
> + return NULL;
> + else if (do_flush) {
> + tmp = _PyObject_CallMethodId(file, &PyId_flush, NULL);
> + if (tmp == NULL)
> + return NULL;
> + else
> + Py_DECREF(tmp);
> + }
> + }
> +
> + Py_RETURN_NONE;
> +}
> +
> +PyDoc_STRVAR(print_doc,
> +"print(value, ..., sep=' ', end='\\n', file=sys.stdout, flush=False)\n\
> +\n\
> +Prints the values to a stream, or to sys.stdout by default.\n\
> +Optional keyword arguments:\n\
> +file: a file-like object (stream); defaults to the current sys.stdout.\n\
> +sep: string inserted between values, default a space.\n\
> +end: string appended after the last value, default a newline.\n\
> +flush: whether to forcibly flush the stream.");
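A minimal, hedged check of the sep/end/file/flush handling above; it writes to
io.StringIO rather than sys.stdout, so it sidesteps any UEFI console wiring:

import io
buf = io.StringIO()
print('Py', '3.6.8', sep='-', end='!\n', file=buf, flush=True)
assert buf.getvalue() == 'Py-3.6.8!\n'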
> +
> +
> +/*[clinic input]
> +input as builtin_input
> +
> + prompt: object(c_default="NULL") = None
> + /
> +
> +Read a string from standard input. The trailing newline is stripped.
> +
> +The prompt string, if given, is printed to standard output without a
> +trailing newline before reading input.
> +
> +If the user hits EOF (*nix: Ctrl-D, Windows: Ctrl-Z+Return), raise EOFError.
> +On *nix systems, readline is used if available.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_input_impl(PyObject *module, PyObject *prompt)
> +/*[clinic end generated code: output=83db5a191e7a0d60 input=5e8bb70c2908fe3c]*/
> +{
> + PyObject *fin = _PySys_GetObjectId(&PyId_stdin);
> + PyObject *fout = _PySys_GetObjectId(&PyId_stdout);
> + PyObject *ferr = _PySys_GetObjectId(&PyId_stderr);
> + PyObject *tmp;
> + long fd;
> + int tty;
> +
> + /* Check that stdin/out/err are intact */
> + if (fin == NULL || fin == Py_None) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "input(): lost sys.stdin");
> + return NULL;
> + }
> + if (fout == NULL || fout == Py_None) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "input(): lost sys.stdout");
> + return NULL;
> + }
> + if (ferr == NULL || ferr == Py_None) {
> + PyErr_SetString(PyExc_RuntimeError,
> + "input(): lost sys.stderr");
> + return NULL;
> + }
> +
> + /* First of all, flush stderr */
> + tmp = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
> + if (tmp == NULL)
> + PyErr_Clear();
> + else
> + Py_DECREF(tmp);
> +
> + /* We should only use (GNU) readline if Python's sys.stdin and
> + sys.stdout are the same as C's stdin and stdout, because we
> + need to pass it those. */
> + tmp = _PyObject_CallMethodId(fin, &PyId_fileno, NULL);
> + if (tmp == NULL) {
> + PyErr_Clear();
> + tty = 0;
> + }
> + else {
> + fd = PyLong_AsLong(tmp);
> + Py_DECREF(tmp);
> + if (fd < 0 && PyErr_Occurred())
> + return NULL;
> + tty = fd == fileno(stdin) && isatty(fd);
> + }
> + if (tty) {
> + tmp = _PyObject_CallMethodId(fout, &PyId_fileno, NULL);
> + if (tmp == NULL) {
> + PyErr_Clear();
> + tty = 0;
> + }
> + else {
> + fd = PyLong_AsLong(tmp);
> + Py_DECREF(tmp);
> + if (fd < 0 && PyErr_Occurred())
> + return NULL;
> + tty = fd == fileno(stdout) && isatty(fd);
> + }
> + }
> +
> + /* If we're interactive, use (GNU) readline */
> + if (tty) {
> + PyObject *po = NULL;
> + char *promptstr;
> + char *s = NULL;
> + PyObject *stdin_encoding = NULL, *stdin_errors = NULL;
> + PyObject *stdout_encoding = NULL, *stdout_errors = NULL;
> + char *stdin_encoding_str, *stdin_errors_str;
> + PyObject *result;
> + size_t len;
> +
> + /* stdin is a text stream, so it must have an encoding. */
> + stdin_encoding = _PyObject_GetAttrId(fin, &PyId_encoding);
> + stdin_errors = _PyObject_GetAttrId(fin, &PyId_errors);
> + if (!stdin_encoding || !stdin_errors ||
> + !PyUnicode_Check(stdin_encoding) ||
> + !PyUnicode_Check(stdin_errors)) {
> + tty = 0;
> + goto _readline_errors;
> + }
> + stdin_encoding_str = PyUnicode_AsUTF8(stdin_encoding);
> + stdin_errors_str = PyUnicode_AsUTF8(stdin_errors);
> + if (!stdin_encoding_str || !stdin_errors_str)
> + goto _readline_errors;
> + tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
> + if (tmp == NULL)
> + PyErr_Clear();
> + else
> + Py_DECREF(tmp);
> + if (prompt != NULL) {
> + /* We have a prompt, encode it as stdout would */
> + char *stdout_encoding_str, *stdout_errors_str;
> + PyObject *stringpo;
> + stdout_encoding = _PyObject_GetAttrId(fout, &PyId_encoding);
> + stdout_errors = _PyObject_GetAttrId(fout, &PyId_errors);
> + if (!stdout_encoding || !stdout_errors ||
> + !PyUnicode_Check(stdout_encoding) ||
> + !PyUnicode_Check(stdout_errors)) {
> + tty = 0;
> + goto _readline_errors;
> + }
> + stdout_encoding_str = PyUnicode_AsUTF8(stdout_encoding);
> + stdout_errors_str = PyUnicode_AsUTF8(stdout_errors);
> + if (!stdout_encoding_str || !stdout_errors_str)
> + goto _readline_errors;
> + stringpo = PyObject_Str(prompt);
> + if (stringpo == NULL)
> + goto _readline_errors;
> + po = PyUnicode_AsEncodedString(stringpo,
> + stdout_encoding_str, stdout_errors_str);
> + Py_CLEAR(stdout_encoding);
> + Py_CLEAR(stdout_errors);
> + Py_CLEAR(stringpo);
> + if (po == NULL)
> + goto _readline_errors;
> + assert(PyBytes_Check(po));
> + promptstr = PyBytes_AS_STRING(po);
> + }
> + else {
> + po = NULL;
> + promptstr = "";
> + }
> + s = PyOS_Readline(stdin, stdout, promptstr);
> + if (s == NULL) {
> + PyErr_CheckSignals();
> + if (!PyErr_Occurred())
> + PyErr_SetNone(PyExc_KeyboardInterrupt);
> + goto _readline_errors;
> + }
> +
> + len = strlen(s);
> + if (len == 0) {
> + PyErr_SetNone(PyExc_EOFError);
> + result = NULL;
> + }
> + else {
> + if (len > PY_SSIZE_T_MAX) {
> + PyErr_SetString(PyExc_OverflowError,
> + "input: input too long");
> + result = NULL;
> + }
> + else {
> + len--; /* strip trailing '\n' */
> + if (len != 0 && s[len-1] == '\r')
> + len--; /* strip trailing '\r' */
> + result = PyUnicode_Decode(s, len, stdin_encoding_str,
> + stdin_errors_str);
> + }
> + }
> + Py_DECREF(stdin_encoding);
> + Py_DECREF(stdin_errors);
> + Py_XDECREF(po);
> + PyMem_FREE(s);
> + return result;
> +
> + _readline_errors:
> + Py_XDECREF(stdin_encoding);
> + Py_XDECREF(stdout_encoding);
> + Py_XDECREF(stdin_errors);
> + Py_XDECREF(stdout_errors);
> + Py_XDECREF(po);
> + if (tty)
> + return NULL;
> +
> + PyErr_Clear();
> + }
> +
> + /* Fallback if we're not interactive */
> + if (prompt != NULL) {
> + if (PyFile_WriteObject(prompt, fout, Py_PRINT_RAW) != 0)
> + return NULL;
> + }
> + tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
> + if (tmp == NULL)
> + PyErr_Clear();
> + else
> + Py_DECREF(tmp);
> + return PyFile_GetLine(fin, -1);
> +}
> +
> +
> +/*[clinic input]
> +repr as builtin_repr
> +
> + obj: object
> + /
> +
> +Return the canonical string representation of the object.
> +
> +For many object types, including most builtins, eval(repr(obj)) == obj.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_repr(PyObject *module, PyObject *obj)
> +/*[clinic end generated code: output=7ed3778c44fd0194 input=1c9e6d66d3e3be04]*/
> +{
> + return PyObject_Repr(obj);
> +}
> +
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect
> + * or a semantic change to accept None for "ndigits"
> + */
> +static PyObject *
> +builtin_round(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *ndigits = NULL;
> + static char *kwlist[] = {"number", "ndigits", 0};
> + PyObject *number, *round, *result;
> +
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|O:round",
> + kwlist, &number, &ndigits))
> + return NULL;
> +
> + if (Py_TYPE(number)->tp_dict == NULL) {
> + if (PyType_Ready(Py_TYPE(number)) < 0)
> + return NULL;
> + }
> +
> + round = _PyObject_LookupSpecial(number, &PyId___round__);
> + if (round == NULL) {
> + if (!PyErr_Occurred())
> + PyErr_Format(PyExc_TypeError,
> + "type %.100s doesn't define __round__ method",
> + Py_TYPE(number)->tp_name);
> + return NULL;
> + }
> +
> + if (ndigits == NULL || ndigits == Py_None)
> + result = PyObject_CallFunctionObjArgs(round, NULL);
> + else
> + result = PyObject_CallFunctionObjArgs(round, ndigits, NULL);
> + Py_DECREF(round);
> + return result;
> +}
> +
> +PyDoc_STRVAR(round_doc,
> +"round(number[, ndigits]) -> number\n\
> +\n\
> +Round a number to a given precision in decimal digits (default 0 digits).\n\
> +This returns an int when called with one argument, otherwise the\n\
> +same type as the number. ndigits may be negative.");
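Reviewer sketch of the __round__ delegation documented above -- results follow
IEEE-754 binary floats and round-half-to-even, which is worth remembering when
eyeballing output in the shell:

assert isinstance(round(3.7), int)           # one-argument form returns an int
assert round(0.5) == 0 and round(1.5) == 2   # ties go to the even neighbour
assert round(2.675, 2) == 2.67               # binary floats, not decimal rounding
assert round(1234, -2) == 1200               # ndigits may be negative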
> +
> +
> +/*AC: we need to keep the kwds dict intact to easily call into the
> + * list.sort method, which isn't currently supported in AC. So we just use
> + * the initially generated signature with a custom implementation.
> + */
> +/* [disabled clinic input]
> +sorted as builtin_sorted
> +
> + iterable as seq: object
> + key as keyfunc: object = None
> + reverse: object = False
> +
> +Return a new list containing all items from the iterable in ascending order.
> +
> +A custom key function can be supplied to customize the sort order, and the
> +reverse flag can be set to request the result in descending order.
> +[end disabled clinic input]*/
> +
> +PyDoc_STRVAR(builtin_sorted__doc__,
> +"sorted($module, iterable, /, *, key=None, reverse=False)\n"
> +"--\n"
> +"\n"
> +"Return a new list containing all items from the iterable in ascending order.\n"
> +"\n"
> +"A custom key function can be supplied to customize the sort order, and the\n"
> +"reverse flag can be set to request the result in descending order.");
> +
> +#define BUILTIN_SORTED_METHODDEF \
> + {"sorted", (PyCFunction)builtin_sorted, METH_VARARGS|METH_KEYWORDS, builtin_sorted__doc__},
> +
> +static PyObject *
> +builtin_sorted(PyObject *self, PyObject *args, PyObject *kwds)
> +{
> + PyObject *newlist, *v, *seq, *keyfunc=NULL, **newargs;
> + PyObject *callable;
> + static char *kwlist[] = {"", "key", "reverse", 0};
> + int reverse;
> + Py_ssize_t nargs;
> +
> + /* args 1-3 should match listsort in Objects/listobject.c */
> + if (!PyArg_ParseTupleAndKeywords(args, kwds, "O|Oi:sorted",
> + kwlist, &seq, &keyfunc, &reverse))
> + return NULL;
> +
> + newlist = PySequence_List(seq);
> + if (newlist == NULL)
> + return NULL;
> +
> + callable = _PyObject_GetAttrId(newlist, &PyId_sort);
> + if (callable == NULL) {
> + Py_DECREF(newlist);
> + return NULL;
> + }
> +
> + assert(PyTuple_GET_SIZE(args) >= 1);
> + newargs = &PyTuple_GET_ITEM(args, 1);
> + nargs = PyTuple_GET_SIZE(args) - 1;
> + v = _PyObject_FastCallDict(callable, newargs, nargs, kwds);
> + Py_DECREF(callable);
> + if (v == NULL) {
> + Py_DECREF(newlist);
> + return NULL;
> + }
> + Py_DECREF(v);
> + return newlist;
> +}
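Small sketch of the sorted() wrapper above (it copies the input and forwards
key/reverse to list.sort); standard behaviour, not a new test:

names = ['pear', 'fig', 'apple']
assert sorted(names) == ['apple', 'fig', 'pear']
assert sorted(names, key=len, reverse=True) == ['apple', 'pear', 'fig']
assert names == ['pear', 'fig', 'apple']     # the input is left untouched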
> +
> +
> +/* AC: cannot convert yet, as needs PEP 457 group support in inspect */
> +static PyObject *
> +builtin_vars(PyObject *self, PyObject *args)
> +{
> + PyObject *v = NULL;
> + PyObject *d;
> +
> + if (!PyArg_UnpackTuple(args, "vars", 0, 1, &v))
> + return NULL;
> + if (v == NULL) {
> + d = PyEval_GetLocals();
> + if (d == NULL)
> + return NULL;
> + Py_INCREF(d);
> + }
> + else {
> + d = _PyObject_GetAttrId(v, &PyId___dict__);
> + if (d == NULL) {
> + PyErr_SetString(PyExc_TypeError,
> + "vars() argument must have __dict__ attribute");
> + return NULL;
> + }
> + }
> + return d;
> +}
> +
> +PyDoc_STRVAR(vars_doc,
> +"vars([object]) -> dictionary\n\
> +\n\
> +Without arguments, equivalent to locals().\n\
> +With an argument, equivalent to object.__dict__.");
> +
> +
> +/*[clinic input]
> +sum as builtin_sum
> +
> + iterable: object
> + start: object(c_default="NULL") = 0
> + /
> +
> +Return the sum of a 'start' value (default: 0) plus an iterable of numbers
> +
> +When the iterable is empty, return the start value.
> +This function is intended specifically for use with numeric values and may
> +reject non-numeric types.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_sum_impl(PyObject *module, PyObject *iterable, PyObject *start)
> +/*[clinic end generated code: output=df758cec7d1d302f input=3b5b7a9d7611c73a]*/
> +{
> + PyObject *result = start;
> + PyObject *temp, *item, *iter;
> +
> + iter = PyObject_GetIter(iterable);
> + if (iter == NULL)
> + return NULL;
> +
> + if (result == NULL) {
> + result = PyLong_FromLong(0);
> + if (result == NULL) {
> + Py_DECREF(iter);
> + return NULL;
> + }
> + } else {
> + /* reject string values for 'start' parameter */
> + if (PyUnicode_Check(result)) {
> + PyErr_SetString(PyExc_TypeError,
> + "sum() can't sum strings [use ''.join(seq) instead]");
> + Py_DECREF(iter);
> + return NULL;
> + }
> + if (PyBytes_Check(result)) {
> + PyErr_SetString(PyExc_TypeError,
> + "sum() can't sum bytes [use b''.join(seq) instead]");
> + Py_DECREF(iter);
> + return NULL;
> + }
> + if (PyByteArray_Check(result)) {
> + PyErr_SetString(PyExc_TypeError,
> + "sum() can't sum bytearray [use b''.join(seq) instead]");
> + Py_DECREF(iter);
> + return NULL;
> + }
> + Py_INCREF(result);
> + }
> +
> +#ifndef SLOW_SUM
> + /* Fast addition by keeping temporary sums in C instead of new Python objects.
> + Assumes all inputs are the same type. If the assumption fails, default
> + to the more general routine.
> + */
> + if (PyLong_CheckExact(result)) {
> + int overflow;
> + long i_result = PyLong_AsLongAndOverflow(result, &overflow);
> + /* If this already overflowed, don't even enter the loop. */
> + if (overflow == 0) {
> + Py_DECREF(result);
> + result = NULL;
> + }
> + while(result == NULL) {
> + item = PyIter_Next(iter);
> + if (item == NULL) {
> + Py_DECREF(iter);
> + if (PyErr_Occurred())
> + return NULL;
> + return PyLong_FromLong(i_result);
> + }
> + if (PyLong_CheckExact(item)) {
> + long b = PyLong_AsLongAndOverflow(item, &overflow);
> + long x = i_result + b;
> + if (overflow == 0 && ((x^i_result) >= 0 || (x^b) >= 0)) {
> + i_result = x;
> + Py_DECREF(item);
> + continue;
> + }
> + }
> + /* Either overflowed or is not an int. Restore real objects and process normally */
> + result = PyLong_FromLong(i_result);
> + if (result == NULL) {
> + Py_DECREF(item);
> + Py_DECREF(iter);
> + return NULL;
> + }
> + temp = PyNumber_Add(result, item);
> + Py_DECREF(result);
> + Py_DECREF(item);
> + result = temp;
> + if (result == NULL) {
> + Py_DECREF(iter);
> + return NULL;
> + }
> + }
> + }
> +
> + if (PyFloat_CheckExact(result)) {
> + double f_result = PyFloat_AS_DOUBLE(result);
> + Py_DECREF(result);
> + result = NULL;
> + while(result == NULL) {
> + item = PyIter_Next(iter);
> + if (item == NULL) {
> + Py_DECREF(iter);
> + if (PyErr_Occurred())
> + return NULL;
> + return PyFloat_FromDouble(f_result);
> + }
> + if (PyFloat_CheckExact(item)) {
> + PyFPE_START_PROTECT("add", Py_DECREF(item); Py_DECREF(iter); return 0)
> + f_result += PyFloat_AS_DOUBLE(item);
> + PyFPE_END_PROTECT(f_result)
> + Py_DECREF(item);
> + continue;
> + }
> + if (PyLong_CheckExact(item)) {
> + long value;
> + int overflow;
> + value = PyLong_AsLongAndOverflow(item, &overflow);
> + if (!overflow) {
> + PyFPE_START_PROTECT("add", Py_DECREF(item); Py_DECREF(iter); return 0)
> + f_result += (double)value;
> + PyFPE_END_PROTECT(f_result)
> + Py_DECREF(item);
> + continue;
> + }
> + }
> + result = PyFloat_FromDouble(f_result);
> + if (result == NULL) {
> + Py_DECREF(item);
> + Py_DECREF(iter);
> + return NULL;
> + }
> + temp = PyNumber_Add(result, item);
> + Py_DECREF(result);
> + Py_DECREF(item);
> + result = temp;
> + if (result == NULL) {
> + Py_DECREF(iter);
> + return NULL;
> + }
> + }
> + }
> +#endif
> +
> + for(;;) {
> + item = PyIter_Next(iter);
> + if (item == NULL) {
> + /* error, or end-of-sequence */
> + if (PyErr_Occurred()) {
> + Py_DECREF(result);
> + result = NULL;
> + }
> + break;
> + }
> + /* It's tempting to use PyNumber_InPlaceAdd instead of
> + PyNumber_Add here, to avoid quadratic running time
> + when doing 'sum(list_of_lists, [])'. However, this
> + would produce a change in behaviour: a snippet like
> +
> + empty = []
> + sum([[x] for x in range(10)], empty)
> +
> + would change the value of empty. */
> + temp = PyNumber_Add(result, item);
> + Py_DECREF(result);
> + Py_DECREF(item);
> + result = temp;
> + if (result == NULL)
> + break;
> + }
> + Py_DECREF(iter);
> + return result;
> +}
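The fast paths and the str/bytes rejection above in a few asserts -- plain
CPython semantics; the quadratic list case is allowed, as the in-line comment
explains:

assert sum([1, 2, 3]) == 6 and sum([1, 2, 3], 10) == 16
assert sum([], 5) == 5                       # empty iterable -> start value
try:
    sum(['a', 'b'], '')                      # str start is explicitly rejected
except TypeError:
    pass
assert sum([[1], [2]], []) == [1, 2]         # works, but is quadratic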
> +
> +
> +/*[clinic input]
> +isinstance as builtin_isinstance
> +
> + obj: object
> + class_or_tuple: object
> + /
> +
> +Return whether an object is an instance of a class or of a subclass thereof.
> +
> +A tuple, as in ``isinstance(x, (A, B, ...))``, may be given as the target to
> +check against. This is equivalent to ``isinstance(x, A) or isinstance(x, B)
> +or ...`` etc.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_isinstance_impl(PyObject *module, PyObject *obj,
> + PyObject *class_or_tuple)
> +/*[clinic end generated code: output=6faf01472c13b003 input=ffa743db1daf7549]*/
> +{
> + int retval;
> +
> + retval = PyObject_IsInstance(obj, class_or_tuple);
> + if (retval < 0)
> + return NULL;
> + return PyBool_FromLong(retval);
> +}
> +
> +
> +/*[clinic input]
> +issubclass as builtin_issubclass
> +
> + cls: object
> + class_or_tuple: object
> + /
> +
> +Return whether 'cls' is derived from another class or is the same class.
> +
> +A tuple, as in ``issubclass(x, (A, B, ...))``, may be given as the target to
> +check against. This is equivalent to ``issubclass(x, A) or issubclass(x, B)
> +or ...`` etc.
> +[clinic start generated code]*/
> +
> +static PyObject *
> +builtin_issubclass_impl(PyObject *module, PyObject *cls,
> + PyObject *class_or_tuple)
> +/*[clinic end generated code: output=358412410cd7a250 input=af5f35e9ceaddaf6]*/
> +{
> + int retval;
> +
> + retval = PyObject_IsSubclass(cls, class_or_tuple);
> + if (retval < 0)
> + return NULL;
> + return PyBool_FromLong(retval);
> +}
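Tiny sketch of the tuple form both checks above accept (stock behaviour, for
review context only):

assert isinstance(True, int) and issubclass(bool, int)
assert isinstance('x', (bytes, str))         # tuple form: any match wins
assert not issubclass(int, (bytes, str))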
> +
> +
> +typedef struct {
> + PyObject_HEAD
> + Py_ssize_t tuplesize;
> + PyObject *ittuple; /* tuple of iterators */
> + PyObject *result;
> +} zipobject;
> +
> +static PyObject *
> +zip_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
> +{
> + zipobject *lz;
> + Py_ssize_t i;
> + PyObject *ittuple; /* tuple of iterators */
> + PyObject *result;
> + Py_ssize_t tuplesize = PySequence_Length(args);
> +
> + if (type == &PyZip_Type && !_PyArg_NoKeywords("zip()", kwds))
> + return NULL;
> +
> + /* args must be a tuple */
> + assert(PyTuple_Check(args));
> +
> + /* obtain iterators */
> + ittuple = PyTuple_New(tuplesize);
> + if (ittuple == NULL)
> + return NULL;
> + for (i=0; i < tuplesize; ++i) {
> + PyObject *item = PyTuple_GET_ITEM(args, i);
> + PyObject *it = PyObject_GetIter(item);
> + if (it == NULL) {
> + if (PyErr_ExceptionMatches(PyExc_TypeError))
> + PyErr_Format(PyExc_TypeError,
> + "zip argument #%zd must support iteration",
> + i+1);
> + Py_DECREF(ittuple);
> + return NULL;
> + }
> + PyTuple_SET_ITEM(ittuple, i, it);
> + }
> +
> + /* create a result holder */
> + result = PyTuple_New(tuplesize);
> + if (result == NULL) {
> + Py_DECREF(ittuple);
> + return NULL;
> + }
> + for (i=0 ; i < tuplesize ; i++) {
> + Py_INCREF(Py_None);
> + PyTuple_SET_ITEM(result, i, Py_None);
> + }
> +
> + /* create zipobject structure */
> + lz = (zipobject *)type->tp_alloc(type, 0);
> + if (lz == NULL) {
> + Py_DECREF(ittuple);
> + Py_DECREF(result);
> + return NULL;
> + }
> + lz->ittuple = ittuple;
> + lz->tuplesize = tuplesize;
> + lz->result = result;
> +
> + return (PyObject *)lz;
> +}
> +
> +static void
> +zip_dealloc(zipobject *lz)
> +{
> + PyObject_GC_UnTrack(lz);
> + Py_XDECREF(lz->ittuple);
> + Py_XDECREF(lz->result);
> + Py_TYPE(lz)->tp_free(lz);
> +}
> +
> +static int
> +zip_traverse(zipobject *lz, visitproc visit, void *arg)
> +{
> + Py_VISIT(lz->ittuple);
> + Py_VISIT(lz->result);
> + return 0;
> +}
> +
> +static PyObject *
> +zip_next(zipobject *lz)
> +{
> + Py_ssize_t i;
> + Py_ssize_t tuplesize = lz->tuplesize;
> + PyObject *result = lz->result;
> + PyObject *it;
> + PyObject *item;
> + PyObject *olditem;
> +
> + if (tuplesize == 0)
> + return NULL;
> + if (Py_REFCNT(result) == 1) {
> + Py_INCREF(result);
> + for (i=0 ; i < tuplesize ; i++) {
> + it = PyTuple_GET_ITEM(lz->ittuple, i);
> + item = (*Py_TYPE(it)->tp_iternext)(it);
> + if (item == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + olditem = PyTuple_GET_ITEM(result, i);
> + PyTuple_SET_ITEM(result, i, item);
> + Py_DECREF(olditem);
> + }
> + } else {
> + result = PyTuple_New(tuplesize);
> + if (result == NULL)
> + return NULL;
> + for (i=0 ; i < tuplesize ; i++) {
> + it = PyTuple_GET_ITEM(lz->ittuple, i);
> + item = (*Py_TYPE(it)->tp_iternext)(it);
> + if (item == NULL) {
> + Py_DECREF(result);
> + return NULL;
> + }
> + PyTuple_SET_ITEM(result, i, item);
> + }
> + }
> + return result;
> +}
> +
> +static PyObject *
> +zip_reduce(zipobject *lz)
> +{
> + /* Just recreate the zip with the internal iterator tuple */
> + return Py_BuildValue("OO", Py_TYPE(lz), lz->ittuple);
> +}
> +
> +static PyMethodDef zip_methods[] = {
> + {"__reduce__", (PyCFunction)zip_reduce, METH_NOARGS, reduce_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +PyDoc_STRVAR(zip_doc,
> +"zip(iter1 [,iter2 [...]]) --> zip object\n\
> +\n\
> +Return a zip object whose .__next__() method returns a tuple where\n\
> +the i-th element comes from the i-th iterable argument. The .__next__()\n\
> +method continues until the shortest iterable in the argument sequence\n\
> +is exhausted and then it raises StopIteration.");
> +
> +PyTypeObject PyZip_Type = {
> + PyVarObject_HEAD_INIT(&PyType_Type, 0)
> + "zip", /* tp_name */
> + sizeof(zipobject), /* tp_basicsize */
> + 0, /* tp_itemsize */
> + /* methods */
> + (destructor)zip_dealloc, /* tp_dealloc */
> + 0, /* tp_print */
> + 0, /* tp_getattr */
> + 0, /* tp_setattr */
> + 0, /* tp_reserved */
> + 0, /* tp_repr */
> + 0, /* tp_as_number */
> + 0, /* tp_as_sequence */
> + 0, /* tp_as_mapping */
> + 0, /* tp_hash */
> + 0, /* tp_call */
> + 0, /* tp_str */
> + PyObject_GenericGetAttr, /* tp_getattro */
> + 0, /* tp_setattro */
> + 0, /* tp_as_buffer */
> + Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
> + Py_TPFLAGS_BASETYPE, /* tp_flags */
> + zip_doc, /* tp_doc */
> + (traverseproc)zip_traverse, /* tp_traverse */
> + 0, /* tp_clear */
> + 0, /* tp_richcompare */
> + 0, /* tp_weaklistoffset */
> + PyObject_SelfIter, /* tp_iter */
> + (iternextfunc)zip_next, /* tp_iternext */
> + zip_methods, /* tp_methods */
> + 0, /* tp_members */
> + 0, /* tp_getset */
> + 0, /* tp_base */
> + 0, /* tp_dict */
> + 0, /* tp_descr_get */
> + 0, /* tp_descr_set */
> + 0, /* tp_dictoffset */
> + 0, /* tp_init */
> + PyType_GenericAlloc, /* tp_alloc */
> + zip_new, /* tp_new */
> + PyObject_GC_Del, /* tp_free */
> +};
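
For context, PyZip_Type can also be driven directly from C once it is exposed
as builtins.zip further down; a minimal sketch (the helper name first_pair is
illustrative, error handling abbreviated):

    #include "Python.h"

    /* Build zip(a, b) from C and pull the first tuple out of it. */
    static PyObject *
    first_pair(PyObject *a, PyObject *b)
    {
        PyObject *z, *pair;

        z = PyObject_CallFunctionObjArgs((PyObject *)&PyZip_Type, a, b, NULL);
        if (z == NULL)
            return NULL;
        pair = PyIter_Next(z);   /* NULL with no exception set means exhausted */
        Py_DECREF(z);
        return pair;
    }
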
> +
> +
> +static PyMethodDef builtin_methods[] = {
> + {"__build_class__", (PyCFunction)builtin___build_class__,
> + METH_VARARGS | METH_KEYWORDS, build_class_doc},
> + {"__import__", (PyCFunction)builtin___import__, METH_VARARGS | METH_KEYWORDS, import_doc},
> + BUILTIN_ABS_METHODDEF
> + BUILTIN_ALL_METHODDEF
> + BUILTIN_ANY_METHODDEF
> + BUILTIN_ASCII_METHODDEF
> + BUILTIN_BIN_METHODDEF
> + BUILTIN_CALLABLE_METHODDEF
> + BUILTIN_CHR_METHODDEF
> + BUILTIN_COMPILE_METHODDEF
> + BUILTIN_DELATTR_METHODDEF
> + {"dir", builtin_dir, METH_VARARGS, dir_doc},
> + BUILTIN_DIVMOD_METHODDEF
> + BUILTIN_EVAL_METHODDEF
> + BUILTIN_EXEC_METHODDEF
> + BUILTIN_FORMAT_METHODDEF
> + {"getattr", builtin_getattr, METH_VARARGS, getattr_doc},
> + BUILTIN_GLOBALS_METHODDEF
> + BUILTIN_HASATTR_METHODDEF
> + BUILTIN_HASH_METHODDEF
> + BUILTIN_HEX_METHODDEF
> + BUILTIN_ID_METHODDEF
> + BUILTIN_INPUT_METHODDEF
> + BUILTIN_ISINSTANCE_METHODDEF
> + BUILTIN_ISSUBCLASS_METHODDEF
> + {"iter", builtin_iter, METH_VARARGS, iter_doc},
> + BUILTIN_LEN_METHODDEF
> + BUILTIN_LOCALS_METHODDEF
> + {"max", (PyCFunction)builtin_max, METH_VARARGS | METH_KEYWORDS, max_doc},
> + {"min", (PyCFunction)builtin_min, METH_VARARGS | METH_KEYWORDS, min_doc},
> + {"next", (PyCFunction)builtin_next, METH_VARARGS, next_doc},
> + BUILTIN_OCT_METHODDEF
> + BUILTIN_ORD_METHODDEF
> + BUILTIN_POW_METHODDEF
> + {"print", (PyCFunction)builtin_print, METH_VARARGS | METH_KEYWORDS, print_doc},
> + BUILTIN_REPR_METHODDEF
> + {"round", (PyCFunction)builtin_round, METH_VARARGS | METH_KEYWORDS, round_doc},
> + BUILTIN_SETATTR_METHODDEF
> + BUILTIN_SORTED_METHODDEF
> + BUILTIN_SUM_METHODDEF
> + {"vars", builtin_vars, METH_VARARGS, vars_doc},
> + {NULL, NULL},
> +};
> +
> +PyDoc_STRVAR(builtin_doc,
> +"Built-in functions, exceptions, and other objects.\n\
> +\n\
> +Noteworthy: None is the `nil' object; Ellipsis represents `...' in slices.");
> +
> +static struct PyModuleDef builtinsmodule = {
> + PyModuleDef_HEAD_INIT,
> + "builtins",
> + builtin_doc,
> + -1, /* multiple "initialization" just copies the module dict. */
> + builtin_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +
> +PyObject *
> +_PyBuiltin_Init(void)
> +{
> + PyObject *mod, *dict, *debug;
> +
> + if (PyType_Ready(&PyFilter_Type) < 0 ||
> + PyType_Ready(&PyMap_Type) < 0 ||
> + PyType_Ready(&PyZip_Type) < 0)
> + return NULL;
> +
> + mod = PyModule_Create(&builtinsmodule);
> + if (mod == NULL)
> + return NULL;
> + dict = PyModule_GetDict(mod);
> +
> +#ifdef Py_TRACE_REFS
> + /* "builtins" exposes a number of statically allocated objects
> + * that, before this code was added in 2.3, never showed up in
> + * the list of "all objects" maintained by Py_TRACE_REFS. As a
> + * result, programs leaking references to None and False (etc)
> + * couldn't be diagnosed by examining sys.getobjects(0).
> + */
> +#define ADD_TO_ALL(OBJECT) _Py_AddToAllObjects((PyObject *)(OBJECT), 0)
> +#else
> +#define ADD_TO_ALL(OBJECT) (void)0
> +#endif
> +
> +#define SETBUILTIN(NAME, OBJECT) \
> + if (PyDict_SetItemString(dict, NAME, (PyObject *)OBJECT) < 0) \
> + return NULL; \
> + ADD_TO_ALL(OBJECT)
> +
> + SETBUILTIN("None", Py_None);
> + SETBUILTIN("Ellipsis", Py_Ellipsis);
> + SETBUILTIN("NotImplemented", Py_NotImplemented);
> + SETBUILTIN("False", Py_False);
> + SETBUILTIN("True", Py_True);
> + SETBUILTIN("bool", &PyBool_Type);
> + SETBUILTIN("memoryview", &PyMemoryView_Type);
> + SETBUILTIN("bytearray", &PyByteArray_Type);
> + SETBUILTIN("bytes", &PyBytes_Type);
> + SETBUILTIN("classmethod", &PyClassMethod_Type);
> + SETBUILTIN("complex", &PyComplex_Type);
> + SETBUILTIN("dict", &PyDict_Type);
> + SETBUILTIN("enumerate", &PyEnum_Type);
> + SETBUILTIN("filter", &PyFilter_Type);
> + SETBUILTIN("float", &PyFloat_Type);
> + SETBUILTIN("frozenset", &PyFrozenSet_Type);
> + SETBUILTIN("property", &PyProperty_Type);
> + SETBUILTIN("int", &PyLong_Type);
> + SETBUILTIN("list", &PyList_Type);
> + SETBUILTIN("map", &PyMap_Type);
> + SETBUILTIN("object", &PyBaseObject_Type);
> + SETBUILTIN("range", &PyRange_Type);
> + SETBUILTIN("reversed", &PyReversed_Type);
> + SETBUILTIN("set", &PySet_Type);
> + SETBUILTIN("slice", &PySlice_Type);
> + SETBUILTIN("staticmethod", &PyStaticMethod_Type);
> + SETBUILTIN("str", &PyUnicode_Type);
> + SETBUILTIN("super", &PySuper_Type);
> + SETBUILTIN("tuple", &PyTuple_Type);
> + SETBUILTIN("type", &PyType_Type);
> + SETBUILTIN("zip", &PyZip_Type);
> + debug = PyBool_FromLong(Py_OptimizeFlag == 0);
> + if (PyDict_SetItemString(dict, "__debug__", debug) < 0) {
> + Py_DECREF(debug);
> + return NULL;
> + }
> + Py_DECREF(debug);
> +
> + return mod;
> +#undef ADD_TO_ALL
> +#undef SETBUILTIN
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
> new file mode 100644
> index 00000000..3367e296
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
> @@ -0,0 +1,1767 @@
> +/** @file
> + File Utilities
> +
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +#include "Python.h"
> +#include "osdefs.h"
> +#include <locale.h>
> +
> +#ifdef MS_WINDOWS
> +# include <malloc.h>
> +# include <windows.h>
> +extern int winerror_to_errno(int);
> +#endif
> +
> +#ifdef HAVE_LANGINFO_H
> +#include <langinfo.h>
> +#endif
> +
> +#ifdef HAVE_SYS_IOCTL_H
> +#include <sys/ioctl.h>
> +#endif
> +
> +#ifdef HAVE_FCNTL_H
> +#include <fcntl.h>
> +#endif /* HAVE_FCNTL_H */
> +
> +#if defined(__APPLE__) || defined(__ANDROID__)
> +extern wchar_t* _Py_DecodeUTF8_surrogateescape(const char *s, Py_ssize_t size);
> +#endif
> +
> +#ifdef O_CLOEXEC
> +/* Does open() support the O_CLOEXEC flag? Possible values:
> +
> + -1: unknown
> + 0: open() ignores O_CLOEXEC flag, ex: Linux kernel older than 2.6.23
> + 1: open() supports O_CLOEXEC flag, close-on-exec is set
> +
> + The flag is used by _Py_open(), _Py_open_noraise(), io.FileIO
> + and os.open(). */
> +int _Py_open_cloexec_works = -1;
> +#endif
> +
> +PyObject *
> +_Py_device_encoding(int fd)
> +{
> +#if defined(MS_WINDOWS)
> + UINT cp;
> +#endif
> + int valid;
> + _Py_BEGIN_SUPPRESS_IPH
> + valid = isatty(fd);
> + _Py_END_SUPPRESS_IPH
> + if (!valid)
> + Py_RETURN_NONE;
> +
> +#if defined(MS_WINDOWS)
> + if (fd == 0)
> + cp = GetConsoleCP();
> + else if (fd == 1 || fd == 2)
> + cp = GetConsoleOutputCP();
> + else
> + cp = 0;
> + /* GetConsoleCP() and GetConsoleOutputCP() return 0 if the application
> + has no console */
> + if (cp != 0)
> + return PyUnicode_FromFormat("cp%u", (unsigned int)cp);
> +#elif defined(CODESET)
> + {
> + char *codeset = nl_langinfo(CODESET);
> + if (codeset != NULL && codeset[0] != 0)
> + return PyUnicode_FromString(codeset);
> + }
> +#endif
> + Py_RETURN_NONE;
> +}
> +
> +#if !defined(__APPLE__) && !defined(__ANDROID__) && !defined(MS_WINDOWS)
> +
> +#define USE_FORCE_ASCII
> +
> +extern int _Py_normalize_encoding(const char *, char *, size_t);
> +
> +/* Work around a FreeBSD and OpenIndiana locale encoding issue with the C locale.
> + On these operating systems, nl_langinfo(CODESET) announces an alias of the
> + ASCII encoding, whereas mbstowcs() and wcstombs() functions use the
> + ISO-8859-1 encoding. The problem is that os.fsencode() and os.fsdecode() use
> + locale.getpreferredencoding() codec. For example, if command line arguments
> + are decoded by mbstowcs() and encoded back by os.fsencode(), we get a
> + UnicodeEncodeError instead of retrieving the original byte string.
> +
> + The workaround is enabled if setlocale(LC_CTYPE, NULL) returns "C",
> + nl_langinfo(CODESET) announces "ascii" (or an alias to ASCII), and at least
> + one byte in range 0x80-0xff can be decoded from the locale encoding. The
> + workaround is also enabled on error, for example if getting the locale
> + failed.
> +
> + Values of force_ascii:
> +
> + 1: the workaround is used: Py_EncodeLocale() uses
> + encode_ascii_surrogateescape() and Py_DecodeLocale() uses
> + decode_ascii_surrogateescape()
> + 0: the workaround is not used: Py_EncodeLocale() uses wcstombs() and
> + Py_DecodeLocale() uses mbstowcs()
> + -1: unknown, need to call check_force_ascii() to get the value
> +*/
> +static int force_ascii = -1;
> +
> +static int
> +check_force_ascii(void)
> +{
> + char *loc;
> +#if defined(HAVE_LANGINFO_H) && defined(CODESET)
> + char *codeset, **alias;
> + char encoding[20]; /* longest name: "iso_646.irv_1991\0" */
> + int is_ascii;
> + unsigned int i;
> + char* ascii_aliases[] = {
> + "ascii",
> + /* Aliases from Lib/encodings/aliases.py */
> + "646",
> + "ansi_x3.4_1968",
> + "ansi_x3.4_1986",
> + "ansi_x3_4_1968",
> + "cp367",
> + "csascii",
> + "ibm367",
> + "iso646_us",
> + "iso_646.irv_1991",
> + "iso_ir_6",
> + "us",
> + "us_ascii",
> + NULL
> + };
> +#endif
> +
> + loc = setlocale(LC_CTYPE, NULL);
> + if (loc == NULL)
> + goto error;
> + if (strcmp(loc, "C") != 0 && strcmp(loc, "POSIX") != 0) {
> + /* the LC_CTYPE locale is different than C */
> + return 0;
> + }
> +
> +#if defined(HAVE_LANGINFO_H) && defined(CODESET)
> + codeset = nl_langinfo(CODESET);
> + if (!codeset || codeset[0] == '\0') {
> + /* CODESET is not set or empty */
> + goto error;
> + }
> + if (!_Py_normalize_encoding(codeset, encoding, sizeof(encoding)))
> + goto error;
> +
> + is_ascii = 0;
> + for (alias=ascii_aliases; *alias != NULL; alias++) {
> + if (strcmp(encoding, *alias) == 0) {
> + is_ascii = 1;
> + break;
> + }
> + }
> + if (!is_ascii) {
> + /* nl_langinfo(CODESET) is not "ascii" or an alias of ASCII */
> + return 0;
> + }
> +
> + for (i=0x80; i<0xff; i++) {
> + unsigned char ch;
> + wchar_t wch;
> + size_t res;
> +
> + ch = (unsigned char)i;
> + res = mbstowcs(&wch, (char*)&ch, 1);
> + if (res != (size_t)-1) {
> + /* decoding a non-ASCII character from the locale encoding succeeded:
> + the locale encoding is not ASCII, force ASCII */
> + return 1;
> + }
> + }
> + /* None of the bytes in the range 0x80-0xff can be decoded from the locale
> + encoding: the locale encoding is really ASCII */
> + return 0;
> +#else
> + /* nl_langinfo(CODESET) is not available: always force ASCII */
> + return 1;
> +#endif
> +
> +error:
> + /* if an error occurred, force the ASCII encoding */
> + return 1;
> +}
> +
> +static char*
> +encode_ascii_surrogateescape(const wchar_t *text, size_t *error_pos)
> +{
> + char *result = NULL, *out;
> + size_t len, i;
> + wchar_t ch;
> +
> + if (error_pos != NULL)
> + *error_pos = (size_t)-1;
> +
> + len = wcslen(text);
> +
> + result = PyMem_Malloc(len + 1); /* +1 for NUL byte */
> + if (result == NULL)
> + return NULL;
> +
> + out = result;
> + for (i=0; i<len; i++) {
> + ch = text[i];
> +
> + if (ch <= 0x7f) {
> + /* ASCII character */
> + *out++ = (char)ch;
> + }
> + else if (0xdc80 <= ch && ch <= 0xdcff) {
> + /* UTF-8b surrogate */
> + *out++ = (char)(ch - 0xdc00);
> + }
> + else {
> + if (error_pos != NULL)
> + *error_pos = i;
> + PyMem_Free(result);
> + return NULL;
> + }
> + }
> + *out = '\0';
> + return result;
> +}
> +#endif /* !defined(__APPLE__) && !defined(__ANDROID__) && !defined(MS_WINDOWS) */
> +
> +#if !defined(HAVE_MBRTOWC) || defined(USE_FORCE_ASCII)
> +static wchar_t*
> +decode_ascii_surrogateescape(const char *arg, size_t *size)
> +{
> + wchar_t *res;
> + unsigned char *in;
> + wchar_t *out;
> + size_t argsize = strlen(arg) + 1;
> +
> + if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
> + return NULL;
> + res = PyMem_RawMalloc(argsize*sizeof(wchar_t));
> + if (!res)
> + return NULL;
> +
> + in = (unsigned char*)arg;
> + out = res;
> + while(*in)
> + if(*in < 128)
> + *out++ = *in++;
> + else
> + *out++ = 0xdc00 + *in++;
> + *out = 0;
> + if (size != NULL)
> + *size = out - res;
> + return res;
> +}
> +#endif
> +
> +
> +static wchar_t*
> +decode_current_locale(const char* arg, size_t *size)
> +{
> + wchar_t *res;
> + size_t argsize;
> + size_t count;
> +#ifdef HAVE_MBRTOWC
> + unsigned char *in;
> + wchar_t *out;
> + mbstate_t mbs;
> +#endif
> +
> +#ifdef HAVE_BROKEN_MBSTOWCS
> + /* Some platforms have a broken implementation of
> + * mbstowcs which does not count the characters that
> + * would result from conversion. Use an upper bound.
> + */
> + argsize = strlen(arg);
> +#else
> + argsize = mbstowcs(NULL, arg, 0);
> +#endif
> + if (argsize != (size_t)-1) {
> + if (argsize == PY_SSIZE_T_MAX)
> + goto oom;
> + argsize += 1;
> + if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
> + goto oom;
> + res = (wchar_t *)PyMem_RawMalloc(argsize*sizeof(wchar_t));
> + if (!res)
> + goto oom;
> + count = mbstowcs(res, arg, argsize);
> + if (count != (size_t)-1) {
> + wchar_t *tmp;
> + /* Only use the result if it contains no
> + surrogate characters. */
> + for (tmp = res; *tmp != 0 &&
> + !Py_UNICODE_IS_SURROGATE(*tmp); tmp++)
> + ;
> + if (*tmp == 0) {
> + if (size != NULL)
> + *size = count;
> + return res;
> + }
> + }
> + PyMem_RawFree(res);
> + }
> + /* Conversion failed. Fall back to escaping with surrogateescape. */
> +#ifdef HAVE_MBRTOWC
> + /* Try conversion with mbrtowc (C99), and escape non-decodable bytes. */
> +
> + /* Overallocate; as multi-byte characters are in the argument, the
> + actual output could use less memory. */
> + argsize = strlen(arg) + 1;
> + if (argsize > PY_SSIZE_T_MAX/sizeof(wchar_t))
> + goto oom;
> + res = (wchar_t*)PyMem_RawMalloc(argsize*sizeof(wchar_t));
> + if (!res)
> + goto oom;
> + in = (unsigned char*)arg;
> + out = res;
> + memset(&mbs, 0, sizeof mbs);
> + while (argsize) {
> + size_t converted = mbrtowc(out, (char*)in, argsize, &mbs);
> + if (converted == 0)
> + /* Reached end of string; null char stored. */
> + break;
> + if (converted == (size_t)-2) {
> + /* Incomplete character. This should never happen,
> + since we provide everything that we have -
> + unless there is a bug in the C library, or I
> + misunderstood how mbrtowc works. */
> + PyMem_RawFree(res);
> + if (size != NULL)
> + *size = (size_t)-2;
> + return NULL;
> + }
> + if (converted == (size_t)-1) {
> + /* Conversion error. Escape as UTF-8b, and start over
> + in the initial shift state. */
> + *out++ = 0xdc00 + *in++;
> + argsize--;
> + memset(&mbs, 0, sizeof mbs);
> + continue;
> + }
> + if (Py_UNICODE_IS_SURROGATE(*out)) {
> + /* Surrogate character. Escape the original
> + byte sequence with surrogateescape. */
> + argsize -= converted;
> + while (converted--)
> + *out++ = 0xdc00 + *in++;
> + continue;
> + }
> + /* successfully converted some bytes */
> + in += converted;
> + argsize -= converted;
> + out++;
> + }
> + if (size != NULL)
> + *size = out - res;
> +#else /* HAVE_MBRTOWC */
> + /* Cannot use C locale for escaping; manually escape as if charset
> + is ASCII (i.e. escape all bytes >= 128). This will still roundtrip
> + correctly in the locale's charset, which must be an ASCII superset. */
> + res = decode_ascii_surrogateescape(arg, size);
> + if (res == NULL)
> + goto oom;
> +#endif /* HAVE_MBRTOWC */
> + return res;
> +
> +oom:
> + if (size != NULL)
> + *size = (size_t)-1;
> + return NULL;
> +}
> +
> +
> +static wchar_t*
> +decode_locale(const char* arg, size_t *size, int current_locale)
> +{
> + if (current_locale) {
> + return decode_current_locale(arg, size);
> + }
> +
> +#if defined(__APPLE__) || defined(__ANDROID__)
> + wchar_t *wstr;
> + wstr = _Py_DecodeUTF8_surrogateescape(arg, strlen(arg));
> + if (size != NULL) {
> + if (wstr != NULL)
> + *size = wcslen(wstr);
> + else
> + *size = (size_t)-1;
> + }
> + return wstr;
> +#else
> +
> +#ifdef USE_FORCE_ASCII
> + if (force_ascii == -1) {
> + force_ascii = check_force_ascii();
> + }
> +
> + if (force_ascii) {
> + /* force ASCII encoding to workaround mbstowcs() issue */
> + wchar_t *res = decode_ascii_surrogateescape(arg, size);
> + if (res == NULL) {
> + if (size != NULL)
> + *size = (size_t)-1;
> + return NULL;
> + }
> + return res;
> + }
> +#endif
> +
> + return decode_current_locale(arg, size);
> +#endif /* __APPLE__ or __ANDROID__ */
> +}
> +
> +
> +/* Decode a byte string from the locale encoding with the
> + surrogateescape error handler: undecodable bytes are decoded as characters
> + in range U+DC80..U+DCFF. If a byte sequence can be decoded as a surrogate
> + character, escape the bytes using the surrogateescape error handler instead
> + of decoding them.
> +
> + Return a pointer to a newly allocated wide character string, use
> + PyMem_RawFree() to free the memory. If size is not NULL, write the number of
> + wide characters excluding the null character into *size.
> +
> + Return NULL on decoding error or memory allocation error. If *size* is not
> + NULL, *size is set to (size_t)-1 on memory error or set to (size_t)-2 on
> + decoding error.
> +
> + Decoding errors should never happen, unless there is a bug in the C
> + library.
> +
> + Use the Py_EncodeLocale() function to encode the character string back to a
> + byte string. */
> +wchar_t*
> +Py_DecodeLocale(const char* arg, size_t *size)
> +{
> + return decode_locale(arg, size, 0);
> +}
> +
> +
> +wchar_t*
> +_Py_DecodeLocaleEx(const char* arg, size_t *size, int current_locale)
> +{
> + return decode_locale(arg, size, current_locale);
> +}
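
A minimal usage sketch of the decode API documented above (the helper name
show_decoded is illustrative; assumes the normal Python.h embedding headers):

    #include <stdio.h>
    #include "Python.h"

    /* Decode a locale-encoded C string into a wide string. */
    static void
    show_decoded(const char *arg)
    {
        size_t len;
        wchar_t *warg = Py_DecodeLocale(arg, &len);  /* undecodable bytes become U+DC80..U+DCFF */
        if (warg == NULL) {
            fprintf(stderr, "decode failed\n");      /* len is (size_t)-1 or (size_t)-2 */
            return;
        }
        printf("decoded %zu wide characters\n", len);
        PyMem_RawFree(warg);                         /* result must be freed with PyMem_RawFree() */
    }
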
> +
> +
> +static char*
> +encode_current_locale(const wchar_t *text, size_t *error_pos)
> +{
> + const size_t len = wcslen(text);
> + char *result = NULL, *bytes = NULL;
> + size_t i, size, converted;
> + wchar_t c, buf[2];
> +
> + /* The function works in two steps:
> + 1. compute the length of the output buffer in bytes (size)
> + 2. output the bytes */
> + size = 0;
> + buf[1] = 0;
> + while (1) {
> + for (i=0; i < len; i++) {
> + c = text[i];
> + if (c >= 0xdc80 && c <= 0xdcff) {
> + /* UTF-8b surrogate */
> + if (bytes != NULL) {
> + *bytes++ = c - 0xdc00;
> + size--;
> + }
> + else
> + size++;
> + continue;
> + }
> + else {
> + buf[0] = c;
> + if (bytes != NULL)
> + converted = wcstombs(bytes, buf, size);
> + else
> + converted = wcstombs(NULL, buf, 0);
> + if (converted == (size_t)-1) {
> + if (result != NULL)
> + PyMem_Free(result);
> + if (error_pos != NULL)
> + *error_pos = i;
> + return NULL;
> + }
> + if (bytes != NULL) {
> + bytes += converted;
> + size -= converted;
> + }
> + else
> + size += converted;
> + }
> + }
> + if (result != NULL) {
> + *bytes = '\0';
> + break;
> + }
> +
> + size += 1; /* nul byte at the end */
> + result = PyMem_Malloc(size);
> + if (result == NULL) {
> + if (error_pos != NULL)
> + *error_pos = (size_t)-1;
> + return NULL;
> + }
> + bytes = result;
> + }
> + return result;
> +}
> +
> +
> +static char*
> +encode_locale(const wchar_t *text, size_t *error_pos, int current_locale)
> +{
> + if (current_locale) {
> + return encode_current_locale(text, error_pos);
> + }
> +
> +#if defined(__APPLE__) || defined(__ANDROID__)
> + Py_ssize_t len;
> + PyObject *unicode, *bytes = NULL;
> + char *cpath;
> +
> + unicode = PyUnicode_FromWideChar(text, wcslen(text));
> + if (unicode == NULL)
> + return NULL;
> +
> + bytes = _PyUnicode_AsUTF8String(unicode, "surrogateescape");
> + Py_DECREF(unicode);
> + if (bytes == NULL) {
> + PyErr_Clear();
> + if (error_pos != NULL)
> + *error_pos = (size_t)-1;
> + return NULL;
> + }
> +
> + len = PyBytes_GET_SIZE(bytes);
> + cpath = PyMem_Malloc(len+1);
> + if (cpath == NULL) {
> + PyErr_Clear();
> + Py_DECREF(bytes);
> + if (error_pos != NULL)
> + *error_pos = (size_t)-1;
> + return NULL;
> + }
> + memcpy(cpath, PyBytes_AsString(bytes), len + 1);
> + Py_DECREF(bytes);
> + return cpath;
> +#else /* __APPLE__ or __ANDROID__ */
> +
> +#ifdef USE_FORCE_ASCII
> + if (force_ascii == -1) {
> + force_ascii = check_force_ascii();
> + }
> +
> + if (force_ascii) {
> + return encode_ascii_surrogateescape(text, error_pos);
> + }
> +#endif
> +
> + return encode_current_locale(text, error_pos);
> +#endif /* __APPLE__ or __ANDROID__ */
> +}
> +
> +
> +/* Encode a wide character string to the locale encoding with the
> + surrogateescape error handler: surrogate characters in the range
> + U+DC80..U+DCFF are converted to bytes 0x80..0xFF.
> +
> + Return a pointer to a newly allocated byte string, use PyMem_Free() to free
> + the memory. Return NULL on encoding or memory allocation error.
> +
> + If error_pos is not NULL, *error_pos is set to the index of the invalid
> + character on encoding error, or set to (size_t)-1 otherwise.
> +
> + Use the Py_DecodeLocale() function to decode the bytes string back to a wide
> + character string. */
> +char*
> +Py_EncodeLocale(const wchar_t *text, size_t *error_pos)
> +{
> + return encode_locale(text, error_pos, 0);
> +}
> +
> +
> +char*
> +_Py_EncodeLocaleEx(const wchar_t *text, size_t *error_pos, int current_locale)
> +{
> + return encode_locale(text, error_pos, current_locale);
> +}
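
And the matching encode direction (the helper name show_encoded is
illustrative; note the result is freed with PyMem_Free(), not PyMem_RawFree()):

    #include <stdio.h>
    #include "Python.h"

    /* Encode a wide path back to the locale encoding. */
    static void
    show_encoded(const wchar_t *wpath)
    {
        size_t error_pos;
        char *path = Py_EncodeLocale(wpath, &error_pos);
        if (path == NULL) {
            fprintf(stderr, "encode failed at index %zu\n", error_pos);
            return;
        }
        printf("encoded: %s\n", path);
        PyMem_Free(path);
    }
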
> +
> +
> +#ifdef MS_WINDOWS
> +static __int64 secs_between_epochs = 11644473600; /* Seconds between 1.1.1601 and 1.1.1970 */
> +
> +static void
> +FILE_TIME_to_time_t_nsec(FILETIME *in_ptr, time_t *time_out, int* nsec_out)
> +{
> + /* XXX endianness. Shouldn't matter, as all Windows implementations are little-endian */
> + /* Cannot simply cast and dereference in_ptr,
> + since it might not be aligned properly */
> + __int64 in;
> + memcpy(&in, in_ptr, sizeof(in));
> + *nsec_out = (int)(in % 10000000) * 100; /* FILETIME is in units of 100 nsec. */
> + *time_out = Py_SAFE_DOWNCAST((in / 10000000) - secs_between_epochs, __int64, time_t);
> +}
> +
> +void
> +_Py_time_t_to_FILE_TIME(time_t time_in, int nsec_in, FILETIME *out_ptr)
> +{
> + /* XXX endianness */
> + __int64 out;
> + out = time_in + secs_between_epochs;
> + out = out * 10000000 + nsec_in / 100;
> + memcpy(out_ptr, &out, sizeof(out));
> +}
> +
> +/* Below, we *know* that ugo+r is 0444 */
> +#if _S_IREAD != 0400
> +#error Unsupported C library
> +#endif
> +static int
> +attributes_to_mode(DWORD attr)
> +{
> + int m = 0;
> + if (attr & FILE_ATTRIBUTE_DIRECTORY)
> + m |= _S_IFDIR | 0111; /* IFEXEC for user,group,other */
> + else
> + m |= _S_IFREG;
> + if (attr & FILE_ATTRIBUTE_READONLY)
> + m |= 0444;
> + else
> + m |= 0666;
> + return m;
> +}
> +
> +void
> +_Py_attribute_data_to_stat(BY_HANDLE_FILE_INFORMATION *info, ULONG reparse_tag,
> + struct _Py_stat_struct *result)
> +{
> + memset(result, 0, sizeof(*result));
> + result->st_mode = attributes_to_mode(info->dwFileAttributes);
> + result->st_size = (((__int64)info->nFileSizeHigh)<<32) + info->nFileSizeLow;
> + result->st_dev = info->dwVolumeSerialNumber;
> + result->st_rdev = result->st_dev;
> + FILE_TIME_to_time_t_nsec(&info->ftCreationTime, &result->st_ctime, &result->st_ctime_nsec);
> + FILE_TIME_to_time_t_nsec(&info->ftLastWriteTime, &result->st_mtime, &result->st_mtime_nsec);
> + FILE_TIME_to_time_t_nsec(&info->ftLastAccessTime, &result->st_atime, &result->st_atime_nsec);
> + result->st_nlink = info->nNumberOfLinks;
> + result->st_ino = (((uint64_t)info->nFileIndexHigh) << 32) + info->nFileIndexLow;
> + if (reparse_tag == IO_REPARSE_TAG_SYMLINK) {
> + /* first clear the S_IFMT bits */
> + result->st_mode ^= (result->st_mode & S_IFMT);
> + /* now set the bits that make this a symlink */
> + result->st_mode |= S_IFLNK;
> + }
> + result->st_file_attributes = info->dwFileAttributes;
> +}
> +#endif
> +
> +/* Return information about a file.
> +
> + On POSIX, use fstat().
> +
> + On Windows, use GetFileType() and GetFileInformationByHandle() which support
> + files larger than 2 GB. fstat() may fail with EOVERFLOW on files larger
> + than 2 GB because the file size type is a signed 32-bit integer: see issue
> + #23152.
> +
> + On Windows, set the last Windows error and return nonzero on error. On
> + POSIX, set errno and return nonzero on error. Fill status and return 0 on
> + success. */
> +int
> +_Py_fstat_noraise(int fd, struct _Py_stat_struct *status)
> +{
> +#ifdef MS_WINDOWS
> + BY_HANDLE_FILE_INFORMATION info;
> + HANDLE h;
> + int type;
> +
> + _Py_BEGIN_SUPPRESS_IPH
> + h = (HANDLE)_get_osfhandle(fd);
> + _Py_END_SUPPRESS_IPH
> +
> + if (h == INVALID_HANDLE_VALUE) {
> + /* errno is already set by _get_osfhandle, but we also set
> + the Win32 error for callers who expect that */
> + SetLastError(ERROR_INVALID_HANDLE);
> + return -1;
> + }
> + memset(status, 0, sizeof(*status));
> +
> + type = GetFileType(h);
> + if (type == FILE_TYPE_UNKNOWN) {
> + DWORD error = GetLastError();
> + if (error != 0) {
> + errno = winerror_to_errno(error);
> + return -1;
> + }
> + /* else: valid but unknown file */
> + }
> +
> + if (type != FILE_TYPE_DISK) {
> + if (type == FILE_TYPE_CHAR)
> + status->st_mode = _S_IFCHR;
> + else if (type == FILE_TYPE_PIPE)
> + status->st_mode = _S_IFIFO;
> + return 0;
> + }
> +
> + if (!GetFileInformationByHandle(h, &info)) {
> + /* The Win32 error is already set, but we also set errno for
> + callers who expect it */
> + errno = winerror_to_errno(GetLastError());
> + return -1;
> + }
> +
> + _Py_attribute_data_to_stat(&info, 0, status);
> + /* specific to fstat() */
> + status->st_ino = (((uint64_t)info.nFileIndexHigh) << 32) + info.nFileIndexLow;
> + return 0;
> +#else
> + return fstat(fd, status);
> +#endif
> +}
> +
> +/* Return information about a file.
> +
> + On POSIX, use fstat().
> +
> + On Windows, use GetFileType() and GetFileInformationByHandle() which support
> + files larger than 2 GB. fstat() may fail with EOVERFLOW on files larger
> + than 2 GB because the file size type is a signed 32-bit integer: see issue
> + #23152.
> +
> + Raise an exception and return -1 on error. On Windows, set the last Windows
> + error on error. On POSIX, set errno on error. Fill status and return 0 on
> + success.
> +
> + Release the GIL to call GetFileType() and GetFileInformationByHandle(), or
> + to call fstat(). The caller must hold the GIL. */
> +int
> +_Py_fstat(int fd, struct _Py_stat_struct *status)
> +{
> + int res;
> +
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> + Py_BEGIN_ALLOW_THREADS
> + res = _Py_fstat_noraise(fd, status);
> + Py_END_ALLOW_THREADS
> +
> + if (res != 0) {
> +#ifdef MS_WINDOWS
> + PyErr_SetFromWindowsErr(0);
> +#else
> + PyErr_SetFromErrno(PyExc_OSError);
> +#endif
> + return -1;
> + }
> + return 0;
> +}
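
A short sketch of how this private helper is typically called (fd_size is an
illustrative name; the GIL must be held, as noted above):

    #include "Python.h"

    /* Return the size of an already-open file descriptor, or -1 with an
       OSError set. */
    static Py_ssize_t
    fd_size(int fd)
    {
        struct _Py_stat_struct st;
        if (_Py_fstat(fd, &st) != 0)
            return -1;                 /* exception already raised */
        return (Py_ssize_t)st.st_size; /* may truncate on 32-bit builds */
    }
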
> +
> +/* Call _wstat() on Windows, or encode the path to the filesystem encoding and
> + call stat() otherwise. Only fill st_mode attribute on Windows.
> +
> + Return 0 on success, -1 on _wstat() / stat() error, -2 if an exception was
> + raised. */
> +
> +int
> +_Py_stat(PyObject *path, struct stat *statbuf)
> +{
> +#ifdef MS_WINDOWS
> + int err;
> + struct _stat wstatbuf;
> + const wchar_t *wpath;
> +
> + wpath = _PyUnicode_AsUnicode(path);
> + if (wpath == NULL)
> + return -2;
> +
> + err = _wstat(wpath, &wstatbuf);
> + if (!err)
> + statbuf->st_mode = wstatbuf.st_mode;
> + return err;
> +#else
> + int ret;
> + PyObject *bytes;
> + char *cpath;
> +
> + bytes = PyUnicode_EncodeFSDefault(path);
> + if (bytes == NULL)
> + return -2;
> +
> + /* check for embedded null bytes */
> + if (PyBytes_AsStringAndSize(bytes, &cpath, NULL) == -1) {
> + Py_DECREF(bytes);
> + return -2;
> + }
> +
> + ret = stat(cpath, statbuf);
> + Py_DECREF(bytes);
> + return ret;
> +#endif
> +}
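
A sketch of the PyObject-path variant (path_is_dir is illustrative; on Windows
only st_mode is filled in, and S_ISDIR may need a fallback there):

    #include <sys/stat.h>
    #include "Python.h"

    /* Return 1 if path names a directory, 0 if not, -1 with an exception set. */
    static int
    path_is_dir(PyObject *path)
    {
        struct stat st;
        int rc = _Py_stat(path, &st);
        if (rc == -2)
            return -1;                 /* exception already set */
        if (rc < 0)
            return 0;                  /* stat()/_wstat() failed */
        return S_ISDIR(st.st_mode) ? 1 : 0;
    }
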
> +
> +
> +/* This function MUST be kept async-signal-safe on POSIX when raise=0. */
> +static int
> +get_inheritable(int fd, int raise)
> +{
> +#ifdef MS_WINDOWS
> + HANDLE handle;
> + DWORD flags;
> +
> + _Py_BEGIN_SUPPRESS_IPH
> + handle = (HANDLE)_get_osfhandle(fd);
> + _Py_END_SUPPRESS_IPH
> + if (handle == INVALID_HANDLE_VALUE) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + if (!GetHandleInformation(handle, &flags)) {
> + if (raise)
> + PyErr_SetFromWindowsErr(0);
> + return -1;
> + }
> +
> + return (flags & HANDLE_FLAG_INHERIT);
> +#else
> + int flags;
> +
> + flags = fcntl(fd, F_GETFD, 0);
> + if (flags == -1) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + return !(flags & FD_CLOEXEC);
> +#endif
> +}
> +
> +/* Get the inheritable flag of the specified file descriptor.
> + Return 1 if the file descriptor can be inherited, 0 if it cannot,
> + raise an exception and return -1 on error. */
> +int
> +_Py_get_inheritable(int fd)
> +{
> + return get_inheritable(fd, 1);
> +}
> +
> +
> +/* This function MUST be kept async-signal-safe on POSIX when raise=0. */
> +static int
> +set_inheritable(int fd, int inheritable, int raise, int *atomic_flag_works)
> +{
> +#ifdef MS_WINDOWS
> + HANDLE handle;
> + DWORD flags;
> +#else
> +#if defined(HAVE_SYS_IOCTL_H) && defined(FIOCLEX) && defined(FIONCLEX)
> + static int ioctl_works = -1;
> + int request;
> + int err;
> +#endif
> + int flags, new_flags;
> + int res;
> +#endif
> +
> + /* atomic_flag_works can only be used to make the file descriptor
> + non-inheritable */
> + assert(!(atomic_flag_works != NULL && inheritable));
> +
> + if (atomic_flag_works != NULL && !inheritable) {
> + if (*atomic_flag_works == -1) {
> + int isInheritable = get_inheritable(fd, raise);
> + if (isInheritable == -1)
> + return -1;
> + *atomic_flag_works = !isInheritable;
> + }
> +
> + if (*atomic_flag_works)
> + return 0;
> + }
> +
> +#ifdef MS_WINDOWS
> + _Py_BEGIN_SUPPRESS_IPH
> + handle = (HANDLE)_get_osfhandle(fd);
> + _Py_END_SUPPRESS_IPH
> + if (handle == INVALID_HANDLE_VALUE) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + if (inheritable)
> + flags = HANDLE_FLAG_INHERIT;
> + else
> + flags = 0;
> + if (!SetHandleInformation(handle, HANDLE_FLAG_INHERIT, flags)) {
> + if (raise)
> + PyErr_SetFromWindowsErr(0);
> + return -1;
> + }
> + return 0;
> +
> +#else
> +
> +#if defined(HAVE_SYS_IOCTL_H) && defined(FIOCLEX) && defined(FIONCLEX)
> + if (ioctl_works != 0 && raise != 0) {
> + /* fast-path: ioctl() only requires one syscall */
> + /* caveat: raise=0 indicates that we must be async-signal-safe,
> + * so we avoid ioctl() and skip this fast-path. */
> + if (inheritable)
> + request = FIONCLEX;
> + else
> + request = FIOCLEX;
> + err = ioctl(fd, request, NULL);
> + if (!err) {
> + ioctl_works = 1;
> + return 0;
> + }
> +
> + if (errno != ENOTTY && errno != EACCES) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + else {
> + /* Issue #22258: Here, ENOTTY means "Inappropriate ioctl for
> + device". The ioctl is declared but not supported by the kernel.
> + Remember that ioctl() doesn't work. It is the case on
> + Illumos-based OS for example.
> +
> + Issue #27057: When SELinux policy disallows ioctl it will fail
> + with EACCES. While FIOCLEX is safe operation it may be
> + unavailable because ioctl was denied altogether.
> + This can be the case on Android. */
> + ioctl_works = 0;
> + }
> + /* fallback to fcntl() if ioctl() does not work */
> + }
> +#endif
> +
> + /* slow-path: fcntl() requires two syscalls */
> + flags = fcntl(fd, F_GETFD);
> + if (flags < 0) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + if (inheritable) {
> + new_flags = flags & ~FD_CLOEXEC;
> + }
> + else {
> + new_flags = flags | FD_CLOEXEC;
> + }
> +
> + if (new_flags == flags) {
> + /* FD_CLOEXEC flag already set/cleared: nothing to do */
> + return 0;
> + }
> +
> + res = fcntl(fd, F_SETFD, new_flags);
> + if (res < 0) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + return 0;
> +#endif
> +}
> +
> +/* Make the file descriptor non-inheritable.
> + Return 0 on success, set errno and return -1 on error. */
> +static int
> +make_non_inheritable(int fd)
> +{
> + return set_inheritable(fd, 0, 0, NULL);
> +}
> +
> +/* Set the inheritable flag of the specified file descriptor.
> + On success: return 0, on error: raise an exception and return -1.
> +
> + If atomic_flag_works is not NULL:
> +
> + * if *atomic_flag_works==-1, check whether the file descriptor is already
> + non-inheritable: if it is, the atomic flag worked, so set *atomic_flag_works
> + to 1 and do nothing more; otherwise set it to 0 and clear the inheritable
> + flag explicitly
> + * if *atomic_flag_works==1: do nothing
> + * if *atomic_flag_works==0: set inheritable flag to False
> +
> + Set atomic_flag_works to NULL if no atomic flag was used to create the
> + file descriptor.
> +
> + atomic_flag_works can only be used to make a file descriptor
> + non-inheritable: atomic_flag_works must be NULL if inheritable=1. */
> +int
> +_Py_set_inheritable(int fd, int inheritable, int *atomic_flag_works)
> +{
> + return set_inheritable(fd, inheritable, 1, atomic_flag_works);
> +}
> +
> +/* Same as _Py_set_inheritable() but on error, set errno and
> + don't raise an exception.
> + This function is async-signal-safe. */
> +int
> +_Py_set_inheritable_async_safe(int fd, int inheritable, int *atomic_flag_works)
> +{
> + return set_inheritable(fd, inheritable, 0, atomic_flag_works);
> +}
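
Typical call site for the non-inheritable case (make_fd_private is an
illustrative name):

    #include "Python.h"

    /* Prevent fd from leaking into child processes. */
    static int
    make_fd_private(int fd)
    {
        /* NULL: no atomic flag (O_CLOEXEC/O_NOINHERIT) was used when creating
           fd, so the helper falls back to ioctl()/fcntl(). */
        return _Py_set_inheritable(fd, 0, NULL);  /* -1 with OSError on failure */
    }
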
> +
> +static int
> +_Py_open_impl(const char *pathname, int flags, int gil_held)
> +{
> + int fd;
> + int async_err = 0;
> +#ifndef MS_WINDOWS
> + int *atomic_flag_works;
> +#endif
> +
> +#ifdef MS_WINDOWS
> + flags |= O_NOINHERIT;
> +#elif defined(O_CLOEXEC)
> + atomic_flag_works = &_Py_open_cloexec_works;
> + flags |= O_CLOEXEC;
> +#else
> + atomic_flag_works = NULL;
> +#endif
> +
> + if (gil_held) {
> + do {
> + Py_BEGIN_ALLOW_THREADS
> +#ifdef UEFI_C_SOURCE
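> + /* UEFI: pass an explicit mode of 0 to the StdLib open() */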
> + fd = open(pathname, flags, 0);
> +#else
> + fd = open(pathname, flags);
> +#endif
> + Py_END_ALLOW_THREADS
> + } while (fd < 0
> + && errno == EINTR && !(async_err = PyErr_CheckSignals()));
> + if (async_err)
> + return -1;
> + if (fd < 0) {
> + PyErr_SetFromErrnoWithFilename(PyExc_OSError, pathname);
> + return -1;
> + }
> + }
> + else {
> +#ifdef UEFI_C_SOURCE
> + fd = open(pathname, flags, 0);
> +#else
> + fd = open(pathname, flags);
> +#endif
> + if (fd < 0)
> + return -1;
> + }
> +
> +#ifndef MS_WINDOWS
> + if (set_inheritable(fd, 0, gil_held, atomic_flag_works) < 0) {
> + close(fd);
> + return -1;
> + }
> +#endif
> +
> + return fd;
> +}
> +
> +/* Open a file with the specified flags (wrapper to open() function).
> + Return a file descriptor on success. Raise an exception and return -1 on
> + error.
> +
> + The file descriptor is created non-inheritable.
> +
> + When interrupted by a signal (open() fails with EINTR), retry the syscall,
> + except if the Python signal handler raises an exception.
> +
> + Release the GIL to call open(). The caller must hold the GIL. */
> +int
> +_Py_open(const char *pathname, int flags)
> +{
> +#ifdef WITH_THREAD
> + /* _Py_open() must be called with the GIL held. */
> + assert(PyGILState_Check());
> +#endif
> + return _Py_open_impl(pathname, flags, 1);
> +}
> +
> +/* Open a file with the specified flags (wrapper to open() function).
> + Return a file descriptor on success. Set errno and return -1 on error.
> +
> + The file descriptor is created non-inheritable.
> +
> + If interrupted by a signal, fail with EINTR. */
> +int
> +_Py_open_noraise(const char *pathname, int flags)
> +{
> + return _Py_open_impl(pathname, flags, 0);
> +}
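
Usage sketch for the two open wrappers (open_config is illustrative; O_RDONLY
comes from the platform fcntl.h):

    #include <fcntl.h>
    #include "Python.h"

    /* Open a file read-only; the descriptor is created non-inheritable. */
    static int
    open_config(const char *path, int have_gil)
    {
        if (have_gil)
            return _Py_open(path, O_RDONLY);       /* raises OSError, returns -1 */
        return _Py_open_noraise(path, O_RDONLY);   /* sets errno, returns -1 */
    }
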
> +
> +/* Open a file. Use _wfopen() on Windows, encode the path to the locale
> + encoding and use fopen() otherwise.
> +
> + The file descriptor is created non-inheritable.
> +
> + If interrupted by a signal, fail with EINTR. */
> +FILE *
> +_Py_wfopen(const wchar_t *path, const wchar_t *mode)
> +{
> + FILE *f;
> +#ifndef MS_WINDOWS
> + char *cpath;
> + char cmode[10];
> + size_t r;
> + r = wcstombs(cmode, mode, 10);
> + if (r == (size_t)-1 || r >= 10) {
> + errno = EINVAL;
> + return NULL;
> + }
> + cpath = Py_EncodeLocale(path, NULL);
> + if (cpath == NULL)
> + return NULL;
> + f = fopen(cpath, cmode);
> + PyMem_Free(cpath);
> +#else
> + f = _wfopen(path, mode);
> +#endif
> + if (f == NULL)
> + return NULL;
> + if (make_non_inheritable(fileno(f)) < 0) {
> + fclose(f);
> + return NULL;
> + }
> + return f;
> +}
> +
> +/* Wrapper to fopen().
> +
> + The file descriptor is created non-inheritable.
> +
> + If interrupted by a signal, fail with EINTR. */
> +FILE*
> +_Py_fopen(const char *pathname, const char *mode)
> +{
> + FILE *f = fopen(pathname, mode);
> + if (f == NULL)
> + return NULL;
> + if (make_non_inheritable(fileno(f)) < 0) {
> + fclose(f);
> + return NULL;
> + }
> + return f;
> +}
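
The FILE*-based wrapper is used the same way (open_log is illustrative):

    #include <stdio.h>
    #include "Python.h"

    /* fopen() wrapper; the underlying descriptor is made non-inheritable. */
    static FILE *
    open_log(const char *path)
    {
        return _Py_fopen(path, "a");   /* NULL on error: errno set, no exception raised */
    }
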
> +
> +/* Open a file. Call _wfopen() on Windows, or encode the path to the filesystem
> + encoding and call fopen() otherwise.
> +
> + Return the new file object on success. Raise an exception and return NULL
> + on error.
> +
> + The file descriptor is created non-inheritable.
> +
> + When interrupted by a signal (open() fails with EINTR), retry the syscall,
> + except if the Python signal handler raises an exception.
> +
> + Release the GIL to call _wfopen() or fopen(). The caller must hold
> + the GIL. */
> +FILE*
> +_Py_fopen_obj(PyObject *path, const char *mode)
> +{
> + FILE *f;
> + int async_err = 0;
> +#ifdef MS_WINDOWS
> + const wchar_t *wpath;
> + wchar_t wmode[10];
> + int usize;
> +
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> + if (!PyUnicode_Check(path)) {
> + PyErr_Format(PyExc_TypeError,
> + "str file path expected under Windows, got %R",
> + Py_TYPE(path));
> + return NULL;
> + }
> + wpath = _PyUnicode_AsUnicode(path);
> + if (wpath == NULL)
> + return NULL;
> +
> + usize = MultiByteToWideChar(CP_ACP, 0, mode, -1,
> + wmode, Py_ARRAY_LENGTH(wmode));
> + if (usize == 0) {
> + PyErr_SetFromWindowsErr(0);
> + return NULL;
> + }
> +
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + f = _wfopen(wpath, wmode);
> + Py_END_ALLOW_THREADS
> + } while (f == NULL
> + && errno == EINTR && !(async_err = PyErr_CheckSignals()));
> +#else
> + PyObject *bytes;
> + char *path_bytes;
> +
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> + if (!PyUnicode_FSConverter(path, &bytes))
> + return NULL;
> + path_bytes = PyBytes_AS_STRING(bytes);
> +
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + f = fopen(path_bytes, mode);
> + Py_END_ALLOW_THREADS
> + } while (f == NULL
> + && errno == EINTR && !(async_err = PyErr_CheckSignals()));
> +
> + Py_DECREF(bytes);
> +#endif
> + if (async_err)
> + return NULL;
> +
> + if (f == NULL) {
> + PyErr_SetFromErrnoWithFilenameObject(PyExc_OSError, path);
> + return NULL;
> + }
> +
> + if (set_inheritable(fileno(f), 0, 1, NULL) < 0) {
> + fclose(f);
> + return NULL;
> + }
> + return f;
> +}
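
And the PyObject-path flavour (open_from_object is illustrative; the GIL must
be held):

    #include <stdio.h>
    #include "Python.h"

    /* Open a file whose name is held in a Python str (or bytes on POSIX). */
    static FILE *
    open_from_object(PyObject *path)
    {
        FILE *fp = _Py_fopen_obj(path, "rb");
        if (fp == NULL)
            return NULL;               /* exception already set */
        return fp;
    }
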
> +
> +/* Read count bytes from fd into buf.
> +
> + On success, return the number of bytes read; it can be lower than count.
> + If the current file offset is at or past the end of file, no bytes are read,
> + and read() returns zero.
> +
> + On error, raise an exception, set errno and return -1.
> +
> + When interrupted by a signal (read() fails with EINTR), retry the syscall.
> + If the Python signal handler raises an exception, the function returns -1
> + (the syscall is not retried).
> +
> + Release the GIL to call read(). The caller must hold the GIL. */
> +Py_ssize_t
> +_Py_read(int fd, void *buf, size_t count)
> +{
> + Py_ssize_t n;
> + int err;
> + int async_err = 0;
> +
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> + /* _Py_read() must not be called with an exception set, otherwise the
> + * caller may think that read() was interrupted by a signal and the signal
> + * handler raised an exception. */
> + assert(!PyErr_Occurred());
> +
> + if (count > _PY_READ_MAX) {
> + count = _PY_READ_MAX;
> + }
> +
> + _Py_BEGIN_SUPPRESS_IPH
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> +#ifdef MS_WINDOWS
> + n = read(fd, buf, (int)count);
> +#else
> + n = read(fd, buf, count);
> +#endif
> + /* save/restore errno because PyErr_CheckSignals()
> + * and PyErr_SetFromErrno() can modify it */
> + err = errno;
> + Py_END_ALLOW_THREADS
> + } while (n < 0 && err == EINTR &&
> + !(async_err = PyErr_CheckSignals()));
> + _Py_END_SUPPRESS_IPH
> +
> + if (async_err) {
> + /* read() was interrupted by a signal (failed with EINTR)
> + * and the Python signal handler raised an exception */
> + errno = err;
> + assert(errno == EINTR && PyErr_Occurred());
> + return -1;
> + }
> + if (n < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + errno = err;
> + return -1;
> + }
> +
> + return n;
> +}
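
Read-side usage sketch (read_chunk is illustrative; the GIL must be held):

    #include "Python.h"

    /* Read up to size bytes, retrying transparently on EINTR. */
    static Py_ssize_t
    read_chunk(int fd, char *buf, size_t size)
    {
        Py_ssize_t n = _Py_read(fd, buf, size);
        if (n < 0)
            return -1;   /* OSError set, or the signal handler raised */
        return n;        /* 0 means end of file */
    }
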
> +
> +static Py_ssize_t
> +_Py_write_impl(int fd, const void *buf, size_t count, int gil_held)
> +{
> + Py_ssize_t n = 0;
> + int err;
> + int async_err = 0;
> +
> + _Py_BEGIN_SUPPRESS_IPH
> +#ifdef MS_WINDOWS
> + if (count > 32767 && isatty(fd)) {
> + /* Issue #11395: the Windows console returns an error (12: not
> + enough space error) on writing into stdout if stdout mode is
> + binary and the length is greater than 66,000 bytes (or less,
> + depending on heap usage). */
> + count = 32767;
> + }
> +#endif
> + if (count > _PY_WRITE_MAX) {
> + count = _PY_WRITE_MAX;
> + }
> +
> + if (gil_held) {
> + do {
> + Py_BEGIN_ALLOW_THREADS
> + errno = 0;
> +#ifdef MS_WINDOWS
> + n = write(fd, buf, (int)count);
> +#elif UEFI_C_SOURCE
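> + /* UEFI: accumulate partial writes; the loop condition below keeps
> + retrying until the full count has been written. */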
> + n += write(fd, ((char *)buf + n), count - n);
> +#else
> + n = write(fd, buf, count);
> +#endif
> + /* save/restore errno because PyErr_CheckSignals()
> + * and PyErr_SetFromErrno() can modify it */
> + err = errno;
> + Py_END_ALLOW_THREADS
> +#ifdef UEFI_C_SOURCE
> + } while ((n < 0 && err == EINTR &&
> + !(async_err = PyErr_CheckSignals())) || (n < count));
> + }
> +#else
> + } while (n < 0 && err == EINTR &&
> + !(async_err = PyErr_CheckSignals()));
> + }
> +#endif
> + else {
> + do {
> + errno = 0;
> +#ifdef MS_WINDOWS
> + n = write(fd, buf, (int)count);
> +#else
> + n = write(fd, buf, count);
> +#endif
> + err = errno;
> + } while (n < 0 && err == EINTR);
> + }
> + _Py_END_SUPPRESS_IPH
> +
> + if (async_err) {
> + /* write() was interrupted by a signal (failed with EINTR)
> + and the Python signal handler raised an exception (if gil_held is
> + nonzero). */
> + errno = err;
> + assert(errno == EINTR && (!gil_held || PyErr_Occurred()));
> + return -1;
> + }
> + if (n < 0) {
> + if (gil_held)
> + PyErr_SetFromErrno(PyExc_OSError);
> + errno = err;
> + return -1;
> + }
> +
> + return n;
> +}
> +
> +/* Write count bytes of buf into fd.
> +
> + On success, return the number of bytes written; it can be lower than count,
> + including 0. On error, raise an exception, set errno and return -1.
> +
> + When interrupted by a signal (write() fails with EINTR), retry the syscall.
> + If the Python signal handler raises an exception, the function returns -1
> + (the syscall is not retried).
> +
> + Release the GIL to call write(). The caller must hold the GIL. */
> +Py_ssize_t
> +_Py_write(int fd, const void *buf, size_t count)
> +{
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> + /* _Py_write() must not be called with an exception set, otherwise the
> + * caller may think that write() was interrupted by a signal and the signal
> + * handler raised an exception. */
> + assert(!PyErr_Occurred());
> +
> + return _Py_write_impl(fd, buf, count, 1);
> +}
> +
> +/* Write count bytes of buf into fd.
> + *
> + * On success, return the number of bytes written; it can be lower than count,
> + * including 0. On error, set errno and return -1.
> + *
> + * When interrupted by a signal (write() fails with EINTR), retry the syscall
> + * without calling the Python signal handler. */
> +Py_ssize_t
> +_Py_write_noraise(int fd, const void *buf, size_t count)
> +{
> + return _Py_write_impl(fd, buf, count, 0);
> +}
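
Write-side sketch (write_all is illustrative): on most platforms a short write
is still possible, so callers loop; under UEFI_C_SOURCE the implementation
above already retries until the full count is written.

    #include <string.h>
    #include "Python.h"

    /* Write an entire message, looping over short writes (GIL must be held). */
    static int
    write_all(int fd, const char *msg)
    {
        size_t remaining = strlen(msg);
        while (remaining > 0) {
            Py_ssize_t n = _Py_write(fd, msg, remaining);
            if (n < 0)
                return -1;            /* OSError set */
            msg += n;
            remaining -= (size_t)n;
        }
        return 0;
    }
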
> +
> +#ifdef HAVE_READLINK
> +
> +/* Read value of symbolic link. Encode the path to the locale encoding, decode
> + the result from the locale encoding. Return -1 on error. */
> +
> +int
> +_Py_wreadlink(const wchar_t *path, wchar_t *buf, size_t bufsiz)
> +{
> + char *cpath;
> + char cbuf[MAXPATHLEN];
> + wchar_t *wbuf;
> + int res;
> + size_t r1;
> +
> + cpath = Py_EncodeLocale(path, NULL);
> + if (cpath == NULL) {
> + errno = EINVAL;
> + return -1;
> + }
> + res = (int)readlink(cpath, cbuf, Py_ARRAY_LENGTH(cbuf));
> + PyMem_Free(cpath);
> + if (res == -1)
> + return -1;
> + if (res == Py_ARRAY_LENGTH(cbuf)) {
> + errno = EINVAL;
> + return -1;
> + }
> + cbuf[res] = '\0'; /* buf will be null terminated */
> + wbuf = Py_DecodeLocale(cbuf, &r1);
> + if (wbuf == NULL) {
> + errno = EINVAL;
> + return -1;
> + }
> + if (bufsiz <= r1) {
> + PyMem_RawFree(wbuf);
> + errno = EINVAL;
> + return -1;
> + }
> + wcsncpy(buf, wbuf, bufsiz);
> + PyMem_RawFree(wbuf);
> + return (int)r1;
> +}
> +#endif
> +
> +#ifdef HAVE_REALPATH
> +
> +/* Return the canonicalized absolute pathname. Encode path to the locale
> + encoding, decode the result from the locale encoding.
> + Return NULL on error. */
> +
> +wchar_t*
> +_Py_wrealpath(const wchar_t *path,
> + wchar_t *resolved_path, size_t resolved_path_size)
> +{
> + char *cpath;
> + char cresolved_path[MAXPATHLEN];
> + wchar_t *wresolved_path;
> + char *res;
> + size_t r;
> + cpath = Py_EncodeLocale(path, NULL);
> + if (cpath == NULL) {
> + errno = EINVAL;
> + return NULL;
> + }
> + res = realpath(cpath, cresolved_path);
> + PyMem_Free(cpath);
> + if (res == NULL)
> + return NULL;
> +
> + wresolved_path = Py_DecodeLocale(cresolved_path, &r);
> + if (wresolved_path == NULL) {
> + errno = EINVAL;
> + return NULL;
> + }
> + if (resolved_path_size <= r) {
> + PyMem_RawFree(wresolved_path);
> + errno = EINVAL;
> + return NULL;
> + }
> + wcsncpy(resolved_path, wresolved_path, resolved_path_size);
> + PyMem_RawFree(wresolved_path);
> + return resolved_path;
> +}
> +#endif
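
Sketch for the wide-path realpath wrapper (canonicalize is illustrative; this
code is only built when HAVE_REALPATH is defined, so it may not apply to the
UEFI port):

    #include "Python.h"

    /* Resolve a wide path to its canonical absolute form. */
    static int
    canonicalize(const wchar_t *path)
    {
        wchar_t resolved[512];
        if (_Py_wrealpath(path, resolved, Py_ARRAY_LENGTH(resolved)) == NULL)
            return -1;   /* errno set (EINVAL on encode/decode error or truncation) */
        return 0;
    }
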
> +
> +/* Get the current directory. size is the buffer size in wide characters
> + including the null character. Decode the path from the locale encoding.
> + Return NULL on error. */
> +
> +wchar_t*
> +_Py_wgetcwd(wchar_t *buf, size_t size)
> +{
> +#ifdef MS_WINDOWS
> + int isize = (int)Py_MIN(size, INT_MAX);
> + return _wgetcwd(buf, isize);
> +#else
> + char fname[MAXPATHLEN];
> + wchar_t *wname;
> + size_t len;
> +
> + if (getcwd(fname, Py_ARRAY_LENGTH(fname)) == NULL)
> + return NULL;
> + wname = Py_DecodeLocale(fname, &len);
> + if (wname == NULL)
> + return NULL;
> + if (size <= len) {
> + PyMem_RawFree(wname);
> + return NULL;
> + }
> + wcsncpy(buf, wname, size);
> + PyMem_RawFree(wname);
> + return buf;
> +#endif
> +}
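
Sketch for the wide getcwd wrapper (get_cwd is illustrative):

    #include "Python.h"

    /* Fetch the current directory into a caller-provided wide buffer. */
    static int
    get_cwd(wchar_t *buf, size_t size)
    {
        if (_Py_wgetcwd(buf, size) == NULL)
            return -1;   /* getcwd() failed, decode failed, or buffer too small */
        return 0;        /* buf now holds the null-terminated directory */
    }
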
> +
> +/* Duplicate a file descriptor. The new file descriptor is created as
> + non-inheritable. Return a new file descriptor on success, raise an OSError
> + exception and return -1 on error.
> +
> + The GIL is released to call dup(). The caller must hold the GIL. */
> +int
> +_Py_dup(int fd)
> +{
> +#ifdef MS_WINDOWS
> + HANDLE handle;
> + DWORD ftype;
> +#endif
> +
> +#ifdef WITH_THREAD
> + assert(PyGILState_Check());
> +#endif
> +
> +#ifdef MS_WINDOWS
> + _Py_BEGIN_SUPPRESS_IPH
> + handle = (HANDLE)_get_osfhandle(fd);
> + _Py_END_SUPPRESS_IPH
> + if (handle == INVALID_HANDLE_VALUE) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + /* get the file type, ignore the error if it failed */
> + ftype = GetFileType(handle);
> +
> + Py_BEGIN_ALLOW_THREADS
> + _Py_BEGIN_SUPPRESS_IPH
> + fd = dup(fd);
> + _Py_END_SUPPRESS_IPH
> + Py_END_ALLOW_THREADS
> + if (fd < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + /* Character files like the console cannot be made non-inheritable */
> + if (ftype != FILE_TYPE_CHAR) {
> + if (_Py_set_inheritable(fd, 0, NULL) < 0) {
> + _Py_BEGIN_SUPPRESS_IPH
> + close(fd);
> + _Py_END_SUPPRESS_IPH
> + return -1;
> + }
> + }
> +#elif defined(HAVE_FCNTL_H) && defined(F_DUPFD_CLOEXEC)
> + Py_BEGIN_ALLOW_THREADS
> + _Py_BEGIN_SUPPRESS_IPH
> + fd = fcntl(fd, F_DUPFD_CLOEXEC, 0);
> + _Py_END_SUPPRESS_IPH
> + Py_END_ALLOW_THREADS
> + if (fd < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> +#else
> + Py_BEGIN_ALLOW_THREADS
> + _Py_BEGIN_SUPPRESS_IPH
> + fd = dup(fd);
> + _Py_END_SUPPRESS_IPH
> + Py_END_ALLOW_THREADS
> + if (fd < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + if (_Py_set_inheritable(fd, 0, NULL) < 0) {
> + _Py_BEGIN_SUPPRESS_IPH
> + close(fd);
> + _Py_END_SUPPRESS_IPH
> + return -1;
> + }
> +#endif
> + return fd;
> +}
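
Sketch for the dup wrapper (dup_private is illustrative; the GIL must be held):

    #include "Python.h"

    /* Duplicate fd; the copy is created non-inheritable. */
    static int
    dup_private(int fd)
    {
        int fd2 = _Py_dup(fd);
        return fd2;      /* -1 with an OSError set on failure */
    }
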
> +
> +#ifndef MS_WINDOWS
> +/* Get the blocking mode of the file descriptor.
> + Return 0 if the O_NONBLOCK flag is set, 1 if the flag is cleared,
> + raise an exception and return -1 on error. */
> +int
> +_Py_get_blocking(int fd)
> +{
> + int flags;
> + _Py_BEGIN_SUPPRESS_IPH
> + flags = fcntl(fd, F_GETFL, 0);
> + _Py_END_SUPPRESS_IPH
> + if (flags < 0) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> +
> + return !(flags & O_NONBLOCK);
> +}
> +
> +/* Set the blocking mode of the specified file descriptor.
> +
> + Set the O_NONBLOCK flag if blocking is False, clear the O_NONBLOCK flag
> + otherwise.
> +
> + Return 0 on success, raise an exception and return -1 on error. */
> +int
> +_Py_set_blocking(int fd, int blocking)
> +{
> +#if defined(HAVE_SYS_IOCTL_H) && defined(FIONBIO)
> + int arg = !blocking;
> + if (ioctl(fd, FIONBIO, &arg) < 0)
> + goto error;
> +#else
> + int flags, res;
> +
> + _Py_BEGIN_SUPPRESS_IPH
> + flags = fcntl(fd, F_GETFL, 0);
> + if (flags >= 0) {
> + if (blocking)
> + flags = flags & (~O_NONBLOCK);
> + else
> + flags = flags | O_NONBLOCK;
> +
> + res = fcntl(fd, F_SETFL, flags);
> + } else {
> + res = -1;
> + }
> + _Py_END_SUPPRESS_IPH
> +
> + if (res < 0)
> + goto error;
> +#endif
> + return 0;
> +
> +error:
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> +}
> +#endif
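
Sketch combining the two blocking-mode helpers (toggle_blocking is
illustrative; these helpers are only compiled on non-Windows builds):

    #include "Python.h"

    /* Switch a descriptor to non-blocking mode and restore it afterwards. */
    static int
    toggle_blocking(int fd)
    {
        int was_blocking = _Py_get_blocking(fd);   /* 1 = blocking, 0 = non-blocking */
        if (was_blocking < 0)
            return -1;                             /* OSError set */
        if (_Py_set_blocking(fd, 0) < 0)
            return -1;
        /* ... perform non-blocking I/O here ... */
        return _Py_set_blocking(fd, was_blocking); /* restore the original mode */
    }
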
> +
> +
> +int
> +_Py_GetLocaleconvNumeric(PyObject **decimal_point, PyObject **thousands_sep,
> + const char **grouping)
> +{
> + int res = -1;
> +
> + struct lconv *lc = localeconv();
> +
> + int change_locale = 0;
> + if (decimal_point != NULL &&
> + (strlen(lc->decimal_point) > 1 || ((unsigned char)lc->decimal_point[0]) > 127))
> + {
> + change_locale = 1;
> + }
> + if (thousands_sep != NULL &&
> + (strlen(lc->thousands_sep) > 1 || ((unsigned char)lc->thousands_sep[0]) > 127))
> + {
> + change_locale = 1;
> + }
> +
> + /* Keep a copy of the LC_CTYPE locale */
> + char *oldloc = NULL, *loc = NULL;
> + if (change_locale) {
> + oldloc = setlocale(LC_CTYPE, NULL);
> + if (!oldloc) {
> + PyErr_SetString(PyExc_RuntimeWarning, "failed to get LC_CTYPE locale");
> + return -1;
> + }
> +
> + oldloc = _PyMem_Strdup(oldloc);
> + if (!oldloc) {
> + PyErr_NoMemory();
> + return -1;
> + }
> +
> + loc = setlocale(LC_NUMERIC, NULL);
> + if (loc != NULL && strcmp(loc, oldloc) == 0) {
> + loc = NULL;
> + }
> +
> + if (loc != NULL) {
> + /* Temporarily set the LC_CTYPE locale to the LC_NUMERIC locale
> + only if the two locales differ and decimal_point and/or
> + thousands_sep are non-ASCII or longer than 1 byte */
> + setlocale(LC_CTYPE, loc);
> + }
> + }
> +
> + if (decimal_point != NULL) {
> + *decimal_point = PyUnicode_DecodeLocale(lc->decimal_point, NULL);
> + if (*decimal_point == NULL) {
> + goto error;
> + }
> + }
> + if (thousands_sep != NULL) {
> + *thousands_sep = PyUnicode_DecodeLocale(lc->thousands_sep, NULL);
> + if (*thousands_sep == NULL) {
> + goto error;
> + }
> + }
> +
> + if (grouping != NULL) {
> + *grouping = lc->grouping;
> + }
> +
> + res = 0;
> +
> +error:
> + if (loc != NULL) {
> + setlocale(LC_CTYPE, oldloc);
> + }
> + PyMem_Free(oldloc);
> + return res;
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
> new file mode 100644
> index 00000000..6c01d6ac
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
> @@ -0,0 +1,38 @@
> +/** @file
> + Return the copyright string. This is updated manually.
> +
> + Copyright (c) 2015, Daryl McDaniel. All rights reserved.<BR>
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +#include "Python.h"
> +
> +static const char cprt[] =
> +"\
> +Copyright (c) 2010-2021 Intel Corporation.\n\
> +All Rights Reserved.\n\
> +\n\
> +Copyright (c) 2001-2018 Python Software Foundation.\n\
> +All Rights Reserved.\n\
> +\n\
> +Copyright (c) 2000 BeOpen.com.\n\
> +All Rights Reserved.\n\
> +\n\
> +Copyright (c) 1995-2001 Corporation for National Research Initiatives.\n\
> +All Rights Reserved.\n\
> +\n\
> +Copyright (c) 1991-1995 Stichting Mathematisch Centrum, Amsterdam.\n\
> +All Rights Reserved.";
> +
> +const char *
> +Py_GetCopyright(void)
> +{
> + return cprt;
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
> new file mode 100644
> index 00000000..25a4f885
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
> @@ -0,0 +1,2431 @@
> +/* Auto-generated by Programs/_freeze_importlib.c */
> +const unsigned char _Py_M__importlib_external[] = {
> + 99,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,
> + 0,64,0,0,0,115,248,1,0,0,100,0,90,0,100,91,
> + 90,1,100,92,90,2,101,2,101,1,23,0,90,3,100,4,
> + 100,5,132,0,90,4,100,6,100,7,132,0,90,5,100,8,
> + 100,9,132,0,90,6,100,10,100,11,132,0,90,7,100,12,
> + 100,13,132,0,90,8,100,14,100,15,132,0,90,9,100,16,
> + 100,17,132,0,90,10,100,18,100,19,132,0,90,11,100,20,
> + 100,21,132,0,90,12,100,93,100,23,100,24,132,1,90,13,
> + 101,14,101,13,106,15,131,1,90,16,100,25,106,17,100,26,
> + 100,27,131,2,100,28,23,0,90,18,101,19,106,20,101,18,
> + 100,27,131,2,90,21,100,29,90,22,100,30,90,23,100,31,
> + 103,1,90,24,100,32,103,1,90,25,101,25,4,0,90,26,
> + 90,27,100,94,100,33,100,34,156,1,100,35,100,36,132,3,
> + 90,28,100,37,100,38,132,0,90,29,100,39,100,40,132,0,
> + 90,30,100,41,100,42,132,0,90,31,100,43,100,44,132,0,
> + 90,32,100,45,100,46,132,0,90,33,100,47,100,48,132,0,
> + 90,34,100,95,100,49,100,50,132,1,90,35,100,96,100,51,
> + 100,52,132,1,90,36,100,97,100,54,100,55,132,1,90,37,
> + 100,56,100,57,132,0,90,38,101,39,131,0,90,40,100,98,
> + 100,33,101,40,100,58,156,2,100,59,100,60,132,3,90,41,
> + 71,0,100,61,100,62,132,0,100,62,131,2,90,42,71,0,
> + 100,63,100,64,132,0,100,64,131,2,90,43,71,0,100,65,
> + 100,66,132,0,100,66,101,43,131,3,90,44,71,0,100,67,
> + 100,68,132,0,100,68,131,2,90,45,71,0,100,69,100,70,
> + 132,0,100,70,101,45,101,44,131,4,90,46,71,0,100,71,
> + 100,72,132,0,100,72,101,45,101,43,131,4,90,47,103,0,
> + 90,48,71,0,100,73,100,74,132,0,100,74,101,45,101,43,
> + 131,4,90,49,71,0,100,75,100,76,132,0,100,76,131,2,
> + 90,50,71,0,100,77,100,78,132,0,100,78,131,2,90,51,
> + 71,0,100,79,100,80,132,0,100,80,131,2,90,52,71,0,
> + 100,81,100,82,132,0,100,82,131,2,90,53,100,99,100,83,
> + 100,84,132,1,90,54,100,85,100,86,132,0,90,55,100,87,
> + 100,88,132,0,90,56,100,89,100,90,132,0,90,57,100,33,
> + 83,0,41,100,97,94,1,0,0,67,111,114,101,32,105,109,
> + 112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,32,
> + 112,97,116,104,45,98,97,115,101,100,32,105,109,112,111,114,
> + 116,46,10,10,84,104,105,115,32,109,111,100,117,108,101,32,
> + 105,115,32,78,79,84,32,109,101,97,110,116,32,116,111,32,
> + 98,101,32,100,105,114,101,99,116,108,121,32,105,109,112,111,
> + 114,116,101,100,33,32,73,116,32,104,97,115,32,98,101,101,
> + 110,32,100,101,115,105,103,110,101,100,32,115,117,99,104,10,
> + 116,104,97,116,32,105,116,32,99,97,110,32,98,101,32,98,
> + 111,111,116,115,116,114,97,112,112,101,100,32,105,110,116,111,
> + 32,80,121,116,104,111,110,32,97,115,32,116,104,101,32,105,
> + 109,112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,
> + 32,105,109,112,111,114,116,46,32,65,115,10,115,117,99,104,
> + 32,105,116,32,114,101,113,117,105,114,101,115,32,116,104,101,
> + 32,105,110,106,101,99,116,105,111,110,32,111,102,32,115,112,
> + 101,99,105,102,105,99,32,109,111,100,117,108,101,115,32,97,
> + 110,100,32,97,116,116,114,105,98,117,116,101,115,32,105,110,
> + 32,111,114,100,101,114,32,116,111,10,119,111,114,107,46,32,
> + 79,110,101,32,115,104,111,117,108,100,32,117,115,101,32,105,
> + 109,112,111,114,116,108,105,98,32,97,115,32,116,104,101,32,
> + 112,117,98,108,105,99,45,102,97,99,105,110,103,32,118,101,
> + 114,115,105,111,110,32,111,102,32,116,104,105,115,32,109,111,
> + 100,117,108,101,46,10,10,218,3,119,105,110,218,6,99,121,
> + 103,119,105,110,218,6,100,97,114,119,105,110,99,0,0,0,
> + 0,0,0,0,0,1,0,0,0,3,0,0,0,3,0,0,
> + 0,115,60,0,0,0,116,0,106,1,106,2,116,3,131,1,
> + 114,48,116,0,106,1,106,2,116,4,131,1,114,30,100,1,
> + 137,0,110,4,100,2,137,0,135,0,102,1,100,3,100,4,
> + 132,8,125,0,110,8,100,5,100,4,132,0,125,0,124,0,
> + 83,0,41,6,78,90,12,80,89,84,72,79,78,67,65,83,
> + 69,79,75,115,12,0,0,0,80,89,84,72,79,78,67,65,
> + 83,69,79,75,99,0,0,0,0,0,0,0,0,0,0,0,
> + 0,2,0,0,0,19,0,0,0,115,10,0,0,0,136,0,
> + 116,0,106,1,107,6,83,0,41,1,122,53,84,114,117,101,
> + 32,105,102,32,102,105,108,101,110,97,109,101,115,32,109,117,
> + 115,116,32,98,101,32,99,104,101,99,107,101,100,32,99,97,
> + 115,101,45,105,110,115,101,110,115,105,116,105,118,101,108,121,
> + 46,41,2,218,3,95,111,115,90,7,101,110,118,105,114,111,
> + 110,169,0,41,1,218,3,107,101,121,114,4,0,0,0,250,
> + 38,60,102,114,111,122,101,110,32,105,109,112,111,114,116,108,
> + 105,98,46,95,98,111,111,116,115,116,114,97,112,95,101,120,
> + 116,101,114,110,97,108,62,218,11,95,114,101,108,97,120,95,
> + 99,97,115,101,37,0,0,0,115,2,0,0,0,0,2,122,
> + 37,95,109,97,107,101,95,114,101,108,97,120,95,99,97,115,
> + 101,46,60,108,111,99,97,108,115,62,46,95,114,101,108,97,
> + 120,95,99,97,115,101,99,0,0,0,0,0,0,0,0,0,
> + 0,0,0,1,0,0,0,83,0,0,0,115,4,0,0,0,
> + 100,1,83,0,41,2,122,53,84,114,117,101,32,105,102,32,
> + 102,105,108,101,110,97,109,101,115,32,109,117,115,116,32,98,
> + 101,32,99,104,101,99,107,101,100,32,99,97,115,101,45,105,
> + 110,115,101,110,115,105,116,105,118,101,108,121,46,70,114,4,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,4,0,
> + 0,0,114,6,0,0,0,114,7,0,0,0,41,0,0,0,
> + 115,2,0,0,0,0,2,41,5,218,3,115,121,115,218,8,
> + 112,108,97,116,102,111,114,109,218,10,115,116,97,114,116,115,
> + 119,105,116,104,218,27,95,67,65,83,69,95,73,78,83,69,
> + 78,83,73,84,73,86,69,95,80,76,65,84,70,79,82,77,
> + 83,218,35,95,67,65,83,69,95,73,78,83,69,78,83,73,
> + 84,73,86,69,95,80,76,65,84,70,79,82,77,83,95,83,
> + 84,82,95,75,69,89,41,1,114,7,0,0,0,114,4,0,
> + 0,0,41,1,114,5,0,0,0,114,6,0,0,0,218,16,
> + 95,109,97,107,101,95,114,101,108,97,120,95,99,97,115,101,
> + 30,0,0,0,115,14,0,0,0,0,1,12,1,12,1,6,
> + 2,4,2,14,4,8,3,114,13,0,0,0,99,1,0,0,
> + 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
> + 0,115,20,0,0,0,116,0,124,0,131,1,100,1,64,0,
> + 106,1,100,2,100,3,131,2,83,0,41,4,122,42,67,111,
> + 110,118,101,114,116,32,97,32,51,50,45,98,105,116,32,105,
> + 110,116,101,103,101,114,32,116,111,32,108,105,116,116,108,101,
> + 45,101,110,100,105,97,110,46,108,3,0,0,0,255,127,255,
> + 127,3,0,233,4,0,0,0,218,6,108,105,116,116,108,101,
> + 41,2,218,3,105,110,116,218,8,116,111,95,98,121,116,101,
> + 115,41,1,218,1,120,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,218,7,95,119,95,108,111,110,103,47,0,
> + 0,0,115,2,0,0,0,0,2,114,19,0,0,0,99,1,
> + 0,0,0,0,0,0,0,1,0,0,0,3,0,0,0,67,
> + 0,0,0,115,12,0,0,0,116,0,106,1,124,0,100,1,
> + 131,2,83,0,41,2,122,47,67,111,110,118,101,114,116,32,
> + 52,32,98,121,116,101,115,32,105,110,32,108,105,116,116,108,
> + 101,45,101,110,100,105,97,110,32,116,111,32,97,110,32,105,
> + 110,116,101,103,101,114,46,114,15,0,0,0,41,2,114,16,
> + 0,0,0,218,10,102,114,111,109,95,98,121,116,101,115,41,
> + 1,90,9,105,110,116,95,98,121,116,101,115,114,4,0,0,
> + 0,114,4,0,0,0,114,6,0,0,0,218,7,95,114,95,
> + 108,111,110,103,52,0,0,0,115,2,0,0,0,0,2,114,
> + 21,0,0,0,99,0,0,0,0,0,0,0,0,1,0,0,
> + 0,3,0,0,0,71,0,0,0,115,20,0,0,0,116,0,
> + 106,1,100,1,100,2,132,0,124,0,68,0,131,1,131,1,
> + 83,0,41,3,122,31,82,101,112,108,97,99,101,109,101,110,
> + 116,32,102,111,114,32,111,115,46,112,97,116,104,46,106,111,
> + 105,110,40,41,46,99,1,0,0,0,0,0,0,0,2,0,
> + 0,0,4,0,0,0,83,0,0,0,115,26,0,0,0,103,
> + 0,124,0,93,18,125,1,124,1,114,4,124,1,106,0,116,
> + 1,131,1,145,2,113,4,83,0,114,4,0,0,0,41,2,
> + 218,6,114,115,116,114,105,112,218,15,112,97,116,104,95,115,
> + 101,112,97,114,97,116,111,114,115,41,2,218,2,46,48,218,
> + 4,112,97,114,116,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,250,10,60,108,105,115,116,99,111,109,112,62,
> + 59,0,0,0,115,2,0,0,0,6,1,122,30,95,112,97,
> + 116,104,95,106,111,105,110,46,60,108,111,99,97,108,115,62,
> + 46,60,108,105,115,116,99,111,109,112,62,41,2,218,8,112,
> + 97,116,104,95,115,101,112,218,4,106,111,105,110,41,1,218,
> + 10,112,97,116,104,95,112,97,114,116,115,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,218,10,95,112,97,116,
> + 104,95,106,111,105,110,57,0,0,0,115,4,0,0,0,0,
> + 2,10,1,114,30,0,0,0,99,1,0,0,0,0,0,0,
> + 0,5,0,0,0,5,0,0,0,67,0,0,0,115,96,0,
> + 0,0,116,0,116,1,131,1,100,1,107,2,114,36,124,0,
> + 106,2,116,3,131,1,92,3,125,1,125,2,125,3,124,1,
> + 124,3,102,2,83,0,120,50,116,4,124,0,131,1,68,0,
> + 93,38,125,4,124,4,116,1,107,6,114,46,124,0,106,5,
> + 124,4,100,1,100,2,141,2,92,2,125,1,125,3,124,1,
> + 124,3,102,2,83,0,113,46,87,0,100,3,124,0,102,2,
> + 83,0,41,4,122,32,82,101,112,108,97,99,101,109,101,110,
> + 116,32,102,111,114,32,111,115,46,112,97,116,104,46,115,112,
> + 108,105,116,40,41,46,233,1,0,0,0,41,1,90,8,109,
> + 97,120,115,112,108,105,116,218,0,41,6,218,3,108,101,110,
> + 114,23,0,0,0,218,10,114,112,97,114,116,105,116,105,111,
> + 110,114,27,0,0,0,218,8,114,101,118,101,114,115,101,100,
> + 218,6,114,115,112,108,105,116,41,5,218,4,112,97,116,104,
> + 90,5,102,114,111,110,116,218,1,95,218,4,116,97,105,108,
> + 114,18,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,218,11,95,112,97,116,104,95,115,112,108,105,
> + 116,63,0,0,0,115,16,0,0,0,0,2,12,1,16,1,
> + 8,1,14,1,8,1,18,1,12,1,114,40,0,0,0,99,
> + 1,0,0,0,0,0,0,0,1,0,0,0,2,0,0,0,
> + 67,0,0,0,115,10,0,0,0,116,0,106,1,124,0,131,
> + 1,83,0,41,1,122,126,83,116,97,116,32,116,104,101,32,
> + 112,97,116,104,46,10,10,32,32,32,32,77,97,100,101,32,
> + 97,32,115,101,112,97,114,97,116,101,32,102,117,110,99,116,
> + 105,111,110,32,116,111,32,109,97,107,101,32,105,116,32,101,
> + 97,115,105,101,114,32,116,111,32,111,118,101,114,114,105,100,
> + 101,32,105,110,32,101,120,112,101,114,105,109,101,110,116,115,
> + 10,32,32,32,32,40,101,46,103,46,32,99,97,99,104,101,
> + 32,115,116,97,116,32,114,101,115,117,108,116,115,41,46,10,
> + 10,32,32,32,32,41,2,114,3,0,0,0,90,4,115,116,
> + 97,116,41,1,114,37,0,0,0,114,4,0,0,0,114,4,
> + 0,0,0,114,6,0,0,0,218,10,95,112,97,116,104,95,
> + 115,116,97,116,75,0,0,0,115,2,0,0,0,0,7,114,
> + 41,0,0,0,99,2,0,0,0,0,0,0,0,3,0,0,
> + 0,11,0,0,0,67,0,0,0,115,48,0,0,0,121,12,
> + 116,0,124,0,131,1,125,2,87,0,110,20,4,0,116,1,
> + 107,10,114,32,1,0,1,0,1,0,100,1,83,0,88,0,
> + 124,2,106,2,100,2,64,0,124,1,107,2,83,0,41,3,
> + 122,49,84,101,115,116,32,119,104,101,116,104,101,114,32,116,
> + 104,101,32,112,97,116,104,32,105,115,32,116,104,101,32,115,
> + 112,101,99,105,102,105,101,100,32,109,111,100,101,32,116,121,
> + 112,101,46,70,105,0,240,0,0,41,3,114,41,0,0,0,
> + 218,7,79,83,69,114,114,111,114,218,7,115,116,95,109,111,
> + 100,101,41,3,114,37,0,0,0,218,4,109,111,100,101,90,
> + 9,115,116,97,116,95,105,110,102,111,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,218,18,95,112,97,116,104,
> + 95,105,115,95,109,111,100,101,95,116,121,112,101,85,0,0,
> + 0,115,10,0,0,0,0,2,2,1,12,1,14,1,6,1,
> + 114,45,0,0,0,99,1,0,0,0,0,0,0,0,1,0,
> + 0,0,3,0,0,0,67,0,0,0,115,10,0,0,0,116,
> + 0,124,0,100,1,131,2,83,0,41,2,122,31,82,101,112,
> + 108,97,99,101,109,101,110,116,32,102,111,114,32,111,115,46,
> + 112,97,116,104,46,105,115,102,105,108,101,46,105,0,128,0,
> + 0,41,1,114,45,0,0,0,41,1,114,37,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,12,
> + 95,112,97,116,104,95,105,115,102,105,108,101,94,0,0,0,
> + 115,2,0,0,0,0,2,114,46,0,0,0,99,1,0,0,
> + 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
> + 0,115,22,0,0,0,124,0,115,12,116,0,106,1,131,0,
> + 125,0,116,2,124,0,100,1,131,2,83,0,41,2,122,30,
> + 82,101,112,108,97,99,101,109,101,110,116,32,102,111,114,32,
> + 111,115,46,112,97,116,104,46,105,115,100,105,114,46,105,0,
> + 64,0,0,41,3,114,3,0,0,0,218,6,103,101,116,99,
> + 119,100,114,45,0,0,0,41,1,114,37,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,218,11,95,
> + 112,97,116,104,95,105,115,100,105,114,99,0,0,0,115,6,
> + 0,0,0,0,2,4,1,8,1,114,48,0,0,0,233,182,
> + 1,0,0,99,3,0,0,0,0,0,0,0,6,0,0,0,
> + 17,0,0,0,67,0,0,0,115,162,0,0,0,100,1,106,
> + 0,124,0,116,1,124,0,131,1,131,2,125,3,116,2,106,
> + 3,124,3,116,2,106,4,116,2,106,5,66,0,116,2,106,
> + 6,66,0,124,2,100,2,64,0,131,3,125,4,121,50,116,
> + 7,106,8,124,4,100,3,131,2,143,16,125,5,124,5,106,
> + 9,124,1,131,1,1,0,87,0,100,4,81,0,82,0,88,
> + 0,116,2,106,10,124,3,124,0,131,2,1,0,87,0,110,
> + 58,4,0,116,11,107,10,114,156,1,0,1,0,1,0,121,
> + 14,116,2,106,12,124,3,131,1,1,0,87,0,110,20,4,
> + 0,116,11,107,10,114,148,1,0,1,0,1,0,89,0,110,
> + 2,88,0,130,0,89,0,110,2,88,0,100,4,83,0,41,
> + 5,122,162,66,101,115,116,45,101,102,102,111,114,116,32,102,
> + 117,110,99,116,105,111,110,32,116,111,32,119,114,105,116,101,
> + 32,100,97,116,97,32,116,111,32,97,32,112,97,116,104,32,
> + 97,116,111,109,105,99,97,108,108,121,46,10,32,32,32,32,
> + 66,101,32,112,114,101,112,97,114,101,100,32,116,111,32,104,
> + 97,110,100,108,101,32,97,32,70,105,108,101,69,120,105,115,
> + 116,115,69,114,114,111,114,32,105,102,32,99,111,110,99,117,
> + 114,114,101,110,116,32,119,114,105,116,105,110,103,32,111,102,
> + 32,116,104,101,10,32,32,32,32,116,101,109,112,111,114,97,
> + 114,121,32,102,105,108,101,32,105,115,32,97,116,116,101,109,
> + 112,116,101,100,46,122,5,123,125,46,123,125,105,182,1,0,
> + 0,90,2,119,98,78,41,13,218,6,102,111,114,109,97,116,
> + 218,2,105,100,114,3,0,0,0,90,4,111,112,101,110,90,
> + 6,79,95,69,88,67,76,90,7,79,95,67,82,69,65,84,
> + 90,8,79,95,87,82,79,78,76,89,218,3,95,105,111,218,
> + 6,70,105,108,101,73,79,218,5,119,114,105,116,101,90,6,
> + 114,101,110,97,109,101,114,42,0,0,0,90,6,117,110,108,
> + 105,110,107,41,6,114,37,0,0,0,218,4,100,97,116,97,
> + 114,44,0,0,0,90,8,112,97,116,104,95,116,109,112,90,
> + 2,102,100,218,4,102,105,108,101,114,4,0,0,0,114,4,
> + 0,0,0,114,6,0,0,0,218,13,95,119,114,105,116,101,
> + 95,97,116,111,109,105,99,106,0,0,0,115,26,0,0,0,
> + 0,5,16,1,6,1,26,1,2,3,14,1,20,1,16,1,
> + 14,1,2,1,14,1,14,1,6,1,114,57,0,0,0,105,
> + 51,13,0,0,233,2,0,0,0,114,15,0,0,0,115,2,
> + 0,0,0,13,10,90,11,95,95,112,121,99,97,99,104,101,
> + 95,95,122,4,111,112,116,45,122,3,46,112,121,122,4,46,
> + 112,121,99,78,41,1,218,12,111,112,116,105,109,105,122,97,
> + 116,105,111,110,99,2,0,0,0,1,0,0,0,11,0,0,
> + 0,6,0,0,0,67,0,0,0,115,234,0,0,0,124,1,
> + 100,1,107,9,114,52,116,0,106,1,100,2,116,2,131,2,
> + 1,0,124,2,100,1,107,9,114,40,100,3,125,3,116,3,
> + 124,3,131,1,130,1,124,1,114,48,100,4,110,2,100,5,
> + 125,2,116,4,124,0,131,1,92,2,125,4,125,5,124,5,
> + 106,5,100,6,131,1,92,3,125,6,125,7,125,8,116,6,
> + 106,7,106,8,125,9,124,9,100,1,107,8,114,104,116,9,
> + 100,7,131,1,130,1,100,4,106,10,124,6,114,116,124,6,
> + 110,2,124,8,124,7,124,9,103,3,131,1,125,10,124,2,
> + 100,1,107,8,114,162,116,6,106,11,106,12,100,8,107,2,
> + 114,154,100,4,125,2,110,8,116,6,106,11,106,12,125,2,
> + 116,13,124,2,131,1,125,2,124,2,100,4,107,3,114,214,
> + 124,2,106,14,131,0,115,200,116,15,100,9,106,16,124,2,
> + 131,1,131,1,130,1,100,10,106,16,124,10,116,17,124,2,
> + 131,3,125,10,116,18,124,4,116,19,124,10,116,20,100,8,
> + 25,0,23,0,131,3,83,0,41,11,97,254,2,0,0,71,
> + 105,118,101,110,32,116,104,101,32,112,97,116,104,32,116,111,
> + 32,97,32,46,112,121,32,102,105,108,101,44,32,114,101,116,
> + 117,114,110,32,116,104,101,32,112,97,116,104,32,116,111,32,
> + 105,116,115,32,46,112,121,99,32,102,105,108,101,46,10,10,
> + 32,32,32,32,84,104,101,32,46,112,121,32,102,105,108,101,
> + 32,100,111,101,115,32,110,111,116,32,110,101,101,100,32,116,
> + 111,32,101,120,105,115,116,59,32,116,104,105,115,32,115,105,
> + 109,112,108,121,32,114,101,116,117,114,110,115,32,116,104,101,
> + 32,112,97,116,104,32,116,111,32,116,104,101,10,32,32,32,
> + 32,46,112,121,99,32,102,105,108,101,32,99,97,108,99,117,
> + 108,97,116,101,100,32,97,115,32,105,102,32,116,104,101,32,
> + 46,112,121,32,102,105,108,101,32,119,101,114,101,32,105,109,
> + 112,111,114,116,101,100,46,10,10,32,32,32,32,84,104,101,
> + 32,39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,
> + 112,97,114,97,109,101,116,101,114,32,99,111,110,116,114,111,
> + 108,115,32,116,104,101,32,112,114,101,115,117,109,101,100,32,
> + 111,112,116,105,109,105,122,97,116,105,111,110,32,108,101,118,
> + 101,108,32,111,102,10,32,32,32,32,116,104,101,32,98,121,
> + 116,101,99,111,100,101,32,102,105,108,101,46,32,73,102,32,
> + 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,105,
> + 115,32,110,111,116,32,78,111,110,101,44,32,116,104,101,32,
> + 115,116,114,105,110,103,32,114,101,112,114,101,115,101,110,116,
> + 97,116,105,111,110,10,32,32,32,32,111,102,32,116,104,101,
> + 32,97,114,103,117,109,101,110,116,32,105,115,32,116,97,107,
> + 101,110,32,97,110,100,32,118,101,114,105,102,105,101,100,32,
> + 116,111,32,98,101,32,97,108,112,104,97,110,117,109,101,114,
> + 105,99,32,40,101,108,115,101,32,86,97,108,117,101,69,114,
> + 114,111,114,10,32,32,32,32,105,115,32,114,97,105,115,101,
> + 100,41,46,10,10,32,32,32,32,84,104,101,32,100,101,98,
> + 117,103,95,111,118,101,114,114,105,100,101,32,112,97,114,97,
> + 109,101,116,101,114,32,105,115,32,100,101,112,114,101,99,97,
> + 116,101,100,46,32,73,102,32,100,101,98,117,103,95,111,118,
> + 101,114,114,105,100,101,32,105,115,32,110,111,116,32,78,111,
> + 110,101,44,10,32,32,32,32,97,32,84,114,117,101,32,118,
> + 97,108,117,101,32,105,115,32,116,104,101,32,115,97,109,101,
> + 32,97,115,32,115,101,116,116,105,110,103,32,39,111,112,116,
> + 105,109,105,122,97,116,105,111,110,39,32,116,111,32,116,104,
> + 101,32,101,109,112,116,121,32,115,116,114,105,110,103,10,32,
> + 32,32,32,119,104,105,108,101,32,97,32,70,97,108,115,101,
> + 32,118,97,108,117,101,32,105,115,32,101,113,117,105,118,97,
> + 108,101,110,116,32,116,111,32,115,101,116,116,105,110,103,32,
> + 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,116,
> + 111,32,39,49,39,46,10,10,32,32,32,32,73,102,32,115,
> + 121,115,46,105,109,112,108,101,109,101,110,116,97,116,105,111,
> + 110,46,99,97,99,104,101,95,116,97,103,32,105,115,32,78,
> + 111,110,101,32,116,104,101,110,32,78,111,116,73,109,112,108,
> + 101,109,101,110,116,101,100,69,114,114,111,114,32,105,115,32,
> + 114,97,105,115,101,100,46,10,10,32,32,32,32,78,122,70,
> + 116,104,101,32,100,101,98,117,103,95,111,118,101,114,114,105,
> + 100,101,32,112,97,114,97,109,101,116,101,114,32,105,115,32,
> + 100,101,112,114,101,99,97,116,101,100,59,32,117,115,101,32,
> + 39,111,112,116,105,109,105,122,97,116,105,111,110,39,32,105,
> + 110,115,116,101,97,100,122,50,100,101,98,117,103,95,111,118,
> + 101,114,114,105,100,101,32,111,114,32,111,112,116,105,109,105,
> + 122,97,116,105,111,110,32,109,117,115,116,32,98,101,32,115,
> + 101,116,32,116,111,32,78,111,110,101,114,32,0,0,0,114,
> + 31,0,0,0,218,1,46,122,36,115,121,115,46,105,109,112,
> + 108,101,109,101,110,116,97,116,105,111,110,46,99,97,99,104,
> + 101,95,116,97,103,32,105,115,32,78,111,110,101,233,0,0,
> + 0,0,122,24,123,33,114,125,32,105,115,32,110,111,116,32,
> + 97,108,112,104,97,110,117,109,101,114,105,99,122,7,123,125,
> + 46,123,125,123,125,41,21,218,9,95,119,97,114,110,105,110,
> + 103,115,218,4,119,97,114,110,218,18,68,101,112,114,101,99,
> + 97,116,105,111,110,87,97,114,110,105,110,103,218,9,84,121,
> + 112,101,69,114,114,111,114,114,40,0,0,0,114,34,0,0,
> + 0,114,8,0,0,0,218,14,105,109,112,108,101,109,101,110,
> + 116,97,116,105,111,110,218,9,99,97,99,104,101,95,116,97,
> + 103,218,19,78,111,116,73,109,112,108,101,109,101,110,116,101,
> + 100,69,114,114,111,114,114,28,0,0,0,218,5,102,108,97,
> + 103,115,218,8,111,112,116,105,109,105,122,101,218,3,115,116,
> + 114,218,7,105,115,97,108,110,117,109,218,10,86,97,108,117,
> + 101,69,114,114,111,114,114,50,0,0,0,218,4,95,79,80,
> + 84,114,30,0,0,0,218,8,95,80,89,67,65,67,72,69,
> + 218,17,66,89,84,69,67,79,68,69,95,83,85,70,70,73,
> + 88,69,83,41,11,114,37,0,0,0,90,14,100,101,98,117,
> + 103,95,111,118,101,114,114,105,100,101,114,59,0,0,0,218,
> + 7,109,101,115,115,97,103,101,218,4,104,101,97,100,114,39,
> + 0,0,0,90,4,98,97,115,101,218,3,115,101,112,218,4,
> + 114,101,115,116,90,3,116,97,103,90,15,97,108,109,111,115,
> + 116,95,102,105,108,101,110,97,109,101,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,218,17,99,97,99,104,101,
> + 95,102,114,111,109,95,115,111,117,114,99,101,7,1,0,0,
> + 115,46,0,0,0,0,18,8,1,6,1,6,1,8,1,4,
> + 1,8,1,12,2,12,1,16,1,8,1,8,1,8,1,24,
> + 1,8,1,12,1,6,2,8,1,8,1,8,1,8,1,14,
> + 1,14,1,114,81,0,0,0,99,1,0,0,0,0,0,0,
> + 0,8,0,0,0,5,0,0,0,67,0,0,0,115,220,0,
> + 0,0,116,0,106,1,106,2,100,1,107,8,114,20,116,3,
> + 100,2,131,1,130,1,116,4,124,0,131,1,92,2,125,1,
> + 125,2,116,4,124,1,131,1,92,2,125,1,125,3,124,3,
> + 116,5,107,3,114,68,116,6,100,3,106,7,116,5,124,0,
> + 131,2,131,1,130,1,124,2,106,8,100,4,131,1,125,4,
> + 124,4,100,11,107,7,114,102,116,6,100,7,106,7,124,2,
> + 131,1,131,1,130,1,110,86,124,4,100,6,107,2,114,188,
> + 124,2,106,9,100,4,100,5,131,2,100,12,25,0,125,5,
> + 124,5,106,10,116,11,131,1,115,150,116,6,100,8,106,7,
> + 116,11,131,1,131,1,130,1,124,5,116,12,116,11,131,1,
> + 100,1,133,2,25,0,125,6,124,6,106,13,131,0,115,188,
> + 116,6,100,9,106,7,124,5,131,1,131,1,130,1,124,2,
> + 106,14,100,4,131,1,100,10,25,0,125,7,116,15,124,1,
> + 124,7,116,16,100,10,25,0,23,0,131,2,83,0,41,13,
> + 97,110,1,0,0,71,105,118,101,110,32,116,104,101,32,112,
> + 97,116,104,32,116,111,32,97,32,46,112,121,99,46,32,102,
> + 105,108,101,44,32,114,101,116,117,114,110,32,116,104,101,32,
> + 112,97,116,104,32,116,111,32,105,116,115,32,46,112,121,32,
> + 102,105,108,101,46,10,10,32,32,32,32,84,104,101,32,46,
> + 112,121,99,32,102,105,108,101,32,100,111,101,115,32,110,111,
> + 116,32,110,101,101,100,32,116,111,32,101,120,105,115,116,59,
> + 32,116,104,105,115,32,115,105,109,112,108,121,32,114,101,116,
> + 117,114,110,115,32,116,104,101,32,112,97,116,104,32,116,111,
> + 10,32,32,32,32,116,104,101,32,46,112,121,32,102,105,108,
> + 101,32,99,97,108,99,117,108,97,116,101,100,32,116,111,32,
> + 99,111,114,114,101,115,112,111,110,100,32,116,111,32,116,104,
> + 101,32,46,112,121,99,32,102,105,108,101,46,32,32,73,102,
> + 32,112,97,116,104,32,100,111,101,115,10,32,32,32,32,110,
> + 111,116,32,99,111,110,102,111,114,109,32,116,111,32,80,69,
> + 80,32,51,49,52,55,47,52,56,56,32,102,111,114,109,97,
> + 116,44,32,86,97,108,117,101,69,114,114,111,114,32,119,105,
> + 108,108,32,98,101,32,114,97,105,115,101,100,46,32,73,102,
> + 10,32,32,32,32,115,121,115,46,105,109,112,108,101,109,101,
> + 110,116,97,116,105,111,110,46,99,97,99,104,101,95,116,97,
> + 103,32,105,115,32,78,111,110,101,32,116,104,101,110,32,78,
> + 111,116,73,109,112,108,101,109,101,110,116,101,100,69,114,114,
> + 111,114,32,105,115,32,114,97,105,115,101,100,46,10,10,32,
> + 32,32,32,78,122,36,115,121,115,46,105,109,112,108,101,109,
> + 101,110,116,97,116,105,111,110,46,99,97,99,104,101,95,116,
> + 97,103,32,105,115,32,78,111,110,101,122,37,123,125,32,110,
> + 111,116,32,98,111,116,116,111,109,45,108,101,118,101,108,32,
> + 100,105,114,101,99,116,111,114,121,32,105,110,32,123,33,114,
> + 125,114,60,0,0,0,114,58,0,0,0,233,3,0,0,0,
> + 122,33,101,120,112,101,99,116,101,100,32,111,110,108,121,32,
> + 50,32,111,114,32,51,32,100,111,116,115,32,105,110,32,123,
> + 33,114,125,122,57,111,112,116,105,109,105,122,97,116,105,111,
> + 110,32,112,111,114,116,105,111,110,32,111,102,32,102,105,108,
> + 101,110,97,109,101,32,100,111,101,115,32,110,111,116,32,115,
> + 116,97,114,116,32,119,105,116,104,32,123,33,114,125,122,52,
> + 111,112,116,105,109,105,122,97,116,105,111,110,32,108,101,118,
> + 101,108,32,123,33,114,125,32,105,115,32,110,111,116,32,97,
> + 110,32,97,108,112,104,97,110,117,109,101,114,105,99,32,118,
> + 97,108,117,101,114,61,0,0,0,62,2,0,0,0,114,58,
> + 0,0,0,114,82,0,0,0,233,254,255,255,255,41,17,114,
> + 8,0,0,0,114,66,0,0,0,114,67,0,0,0,114,68,
> + 0,0,0,114,40,0,0,0,114,75,0,0,0,114,73,0,
> + 0,0,114,50,0,0,0,218,5,99,111,117,110,116,114,36,
> + 0,0,0,114,10,0,0,0,114,74,0,0,0,114,33,0,
> + 0,0,114,72,0,0,0,218,9,112,97,114,116,105,116,105,
> + 111,110,114,30,0,0,0,218,15,83,79,85,82,67,69,95,
> + 83,85,70,70,73,88,69,83,41,8,114,37,0,0,0,114,
> + 78,0,0,0,90,16,112,121,99,97,99,104,101,95,102,105,
> + 108,101,110,97,109,101,90,7,112,121,99,97,99,104,101,90,
> + 9,100,111,116,95,99,111,117,110,116,114,59,0,0,0,90,
> + 9,111,112,116,95,108,101,118,101,108,90,13,98,97,115,101,
> + 95,102,105,108,101,110,97,109,101,114,4,0,0,0,114,4,
> + 0,0,0,114,6,0,0,0,218,17,115,111,117,114,99,101,
> + 95,102,114,111,109,95,99,97,99,104,101,52,1,0,0,115,
> + 44,0,0,0,0,9,12,1,8,2,12,1,12,1,8,1,
> + 6,1,10,1,10,1,8,1,6,1,10,1,8,1,16,1,
> + 10,1,6,1,8,1,16,1,8,1,6,1,8,1,14,1,
> + 114,87,0,0,0,99,1,0,0,0,0,0,0,0,5,0,
> + 0,0,12,0,0,0,67,0,0,0,115,128,0,0,0,116,
> + 0,124,0,131,1,100,1,107,2,114,16,100,2,83,0,124,
> + 0,106,1,100,3,131,1,92,3,125,1,125,2,125,3,124,
> + 1,12,0,115,58,124,3,106,2,131,0,100,7,100,8,133,
> + 2,25,0,100,6,107,3,114,62,124,0,83,0,121,12,116,
> + 3,124,0,131,1,125,4,87,0,110,36,4,0,116,4,116,
> + 5,102,2,107,10,114,110,1,0,1,0,1,0,124,0,100,
> + 2,100,9,133,2,25,0,125,4,89,0,110,2,88,0,116,
> + 6,124,4,131,1,114,124,124,4,83,0,124,0,83,0,41,
> + 10,122,188,67,111,110,118,101,114,116,32,97,32,98,121,116,
> + 101,99,111,100,101,32,102,105,108,101,32,112,97,116,104,32,
> + 116,111,32,97,32,115,111,117,114,99,101,32,112,97,116,104,
> + 32,40,105,102,32,112,111,115,115,105,98,108,101,41,46,10,
> + 10,32,32,32,32,84,104,105,115,32,102,117,110,99,116,105,
> + 111,110,32,101,120,105,115,116,115,32,112,117,114,101,108,121,
> + 32,102,111,114,32,98,97,99,107,119,97,114,100,115,45,99,
> + 111,109,112,97,116,105,98,105,108,105,116,121,32,102,111,114,
> + 10,32,32,32,32,80,121,73,109,112,111,114,116,95,69,120,
> + 101,99,67,111,100,101,77,111,100,117,108,101,87,105,116,104,
> + 70,105,108,101,110,97,109,101,115,40,41,32,105,110,32,116,
> + 104,101,32,67,32,65,80,73,46,10,10,32,32,32,32,114,
> + 61,0,0,0,78,114,60,0,0,0,114,82,0,0,0,114,
> + 31,0,0,0,90,2,112,121,233,253,255,255,255,233,255,255,
> + 255,255,114,89,0,0,0,41,7,114,33,0,0,0,114,34,
> + 0,0,0,218,5,108,111,119,101,114,114,87,0,0,0,114,
> + 68,0,0,0,114,73,0,0,0,114,46,0,0,0,41,5,
> + 218,13,98,121,116,101,99,111,100,101,95,112,97,116,104,114,
> + 80,0,0,0,114,38,0,0,0,90,9,101,120,116,101,110,
> + 115,105,111,110,218,11,115,111,117,114,99,101,95,112,97,116,
> + 104,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 218,15,95,103,101,116,95,115,111,117,114,99,101,102,105,108,
> + 101,86,1,0,0,115,20,0,0,0,0,7,12,1,4,1,
> + 16,1,26,1,4,1,2,1,12,1,18,1,18,1,114,93,
> + 0,0,0,99,1,0,0,0,0,0,0,0,1,0,0,0,
> + 11,0,0,0,67,0,0,0,115,72,0,0,0,124,0,106,
> + 0,116,1,116,2,131,1,131,1,114,46,121,8,116,3,124,
> + 0,131,1,83,0,4,0,116,4,107,10,114,42,1,0,1,
> + 0,1,0,89,0,113,68,88,0,110,22,124,0,106,0,116,
> + 1,116,5,131,1,131,1,114,64,124,0,83,0,100,0,83,
> + 0,100,0,83,0,41,1,78,41,6,218,8,101,110,100,115,
> + 119,105,116,104,218,5,116,117,112,108,101,114,86,0,0,0,
> + 114,81,0,0,0,114,68,0,0,0,114,76,0,0,0,41,
> + 1,218,8,102,105,108,101,110,97,109,101,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,218,11,95,103,101,116,
> + 95,99,97,99,104,101,100,105,1,0,0,115,16,0,0,0,
> + 0,1,14,1,2,1,8,1,14,1,8,1,14,1,4,2,
> + 114,97,0,0,0,99,1,0,0,0,0,0,0,0,2,0,
> + 0,0,11,0,0,0,67,0,0,0,115,52,0,0,0,121,
> + 14,116,0,124,0,131,1,106,1,125,1,87,0,110,24,4,
> + 0,116,2,107,10,114,38,1,0,1,0,1,0,100,1,125,
> + 1,89,0,110,2,88,0,124,1,100,2,79,0,125,1,124,
> + 1,83,0,41,3,122,51,67,97,108,99,117,108,97,116,101,
> + 32,116,104,101,32,109,111,100,101,32,112,101,114,109,105,115,
> + 115,105,111,110,115,32,102,111,114,32,97,32,98,121,116,101,
> + 99,111,100,101,32,102,105,108,101,46,105,182,1,0,0,233,
> + 128,0,0,0,41,3,114,41,0,0,0,114,43,0,0,0,
> + 114,42,0,0,0,41,2,114,37,0,0,0,114,44,0,0,
> + 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 218,10,95,99,97,108,99,95,109,111,100,101,117,1,0,0,
> + 115,12,0,0,0,0,2,2,1,14,1,14,1,10,3,8,
> + 1,114,99,0,0,0,99,1,0,0,0,0,0,0,0,3,
> + 0,0,0,11,0,0,0,3,0,0,0,115,68,0,0,0,
> + 100,6,135,0,102,1,100,2,100,3,132,9,125,1,121,10,
> + 116,0,106,1,125,2,87,0,110,28,4,0,116,2,107,10,
> + 114,52,1,0,1,0,1,0,100,4,100,5,132,0,125,2,
> + 89,0,110,2,88,0,124,2,124,1,136,0,131,2,1,0,
> + 124,1,83,0,41,7,122,252,68,101,99,111,114,97,116,111,
> + 114,32,116,111,32,118,101,114,105,102,121,32,116,104,97,116,
> + 32,116,104,101,32,109,111,100,117,108,101,32,98,101,105,110,
> + 103,32,114,101,113,117,101,115,116,101,100,32,109,97,116,99,
> + 104,101,115,32,116,104,101,32,111,110,101,32,116,104,101,10,
> + 32,32,32,32,108,111,97,100,101,114,32,99,97,110,32,104,
> + 97,110,100,108,101,46,10,10,32,32,32,32,84,104,101,32,
> + 102,105,114,115,116,32,97,114,103,117,109,101,110,116,32,40,
> + 115,101,108,102,41,32,109,117,115,116,32,100,101,102,105,110,
> + 101,32,95,110,97,109,101,32,119,104,105,99,104,32,116,104,
> + 101,32,115,101,99,111,110,100,32,97,114,103,117,109,101,110,
> + 116,32,105,115,10,32,32,32,32,99,111,109,112,97,114,101,
> + 100,32,97,103,97,105,110,115,116,46,32,73,102,32,116,104,
> + 101,32,99,111,109,112,97,114,105,115,111,110,32,102,97,105,
> + 108,115,32,116,104,101,110,32,73,109,112,111,114,116,69,114,
> + 114,111,114,32,105,115,32,114,97,105,115,101,100,46,10,10,
> + 32,32,32,32,78,99,2,0,0,0,0,0,0,0,4,0,
> + 0,0,4,0,0,0,31,0,0,0,115,66,0,0,0,124,
> + 1,100,0,107,8,114,16,124,0,106,0,125,1,110,32,124,
> + 0,106,0,124,1,107,3,114,48,116,1,100,1,124,0,106,
> + 0,124,1,102,2,22,0,124,1,100,2,141,2,130,1,136,
> + 0,124,0,124,1,102,2,124,2,158,2,124,3,142,1,83,
> + 0,41,3,78,122,30,108,111,97,100,101,114,32,102,111,114,
> + 32,37,115,32,99,97,110,110,111,116,32,104,97,110,100,108,
> + 101,32,37,115,41,1,218,4,110,97,109,101,41,2,114,100,
> + 0,0,0,218,11,73,109,112,111,114,116,69,114,114,111,114,
> + 41,4,218,4,115,101,108,102,114,100,0,0,0,218,4,97,
> + 114,103,115,90,6,107,119,97,114,103,115,41,1,218,6,109,
> + 101,116,104,111,100,114,4,0,0,0,114,6,0,0,0,218,
> + 19,95,99,104,101,99,107,95,110,97,109,101,95,119,114,97,
> + 112,112,101,114,137,1,0,0,115,12,0,0,0,0,1,8,
> + 1,8,1,10,1,4,1,18,1,122,40,95,99,104,101,99,
> + 107,95,110,97,109,101,46,60,108,111,99,97,108,115,62,46,
> + 95,99,104,101,99,107,95,110,97,109,101,95,119,114,97,112,
> + 112,101,114,99,2,0,0,0,0,0,0,0,3,0,0,0,
> + 7,0,0,0,83,0,0,0,115,60,0,0,0,120,40,100,
> + 5,68,0,93,32,125,2,116,0,124,1,124,2,131,2,114,
> + 6,116,1,124,0,124,2,116,2,124,1,124,2,131,2,131,
> + 3,1,0,113,6,87,0,124,0,106,3,106,4,124,1,106,
> + 3,131,1,1,0,100,0,83,0,41,6,78,218,10,95,95,
> + 109,111,100,117,108,101,95,95,218,8,95,95,110,97,109,101,
> + 95,95,218,12,95,95,113,117,97,108,110,97,109,101,95,95,
> + 218,7,95,95,100,111,99,95,95,41,4,114,106,0,0,0,
> + 114,107,0,0,0,114,108,0,0,0,114,109,0,0,0,41,
> + 5,218,7,104,97,115,97,116,116,114,218,7,115,101,116,97,
> + 116,116,114,218,7,103,101,116,97,116,116,114,218,8,95,95,
> + 100,105,99,116,95,95,218,6,117,112,100,97,116,101,41,3,
> + 90,3,110,101,119,90,3,111,108,100,218,7,114,101,112,108,
> + 97,99,101,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,5,95,119,114,97,112,148,1,0,0,115,8,0,
> + 0,0,0,1,10,1,10,1,22,1,122,26,95,99,104,101,
> + 99,107,95,110,97,109,101,46,60,108,111,99,97,108,115,62,
> + 46,95,119,114,97,112,41,1,78,41,3,218,10,95,98,111,
> + 111,116,115,116,114,97,112,114,116,0,0,0,218,9,78,97,
> + 109,101,69,114,114,111,114,41,3,114,104,0,0,0,114,105,
> + 0,0,0,114,116,0,0,0,114,4,0,0,0,41,1,114,
> + 104,0,0,0,114,6,0,0,0,218,11,95,99,104,101,99,
> + 107,95,110,97,109,101,129,1,0,0,115,14,0,0,0,0,
> + 8,14,7,2,1,10,1,14,2,14,5,10,1,114,119,0,
> + 0,0,99,2,0,0,0,0,0,0,0,5,0,0,0,4,
> + 0,0,0,67,0,0,0,115,60,0,0,0,124,0,106,0,
> + 124,1,131,1,92,2,125,2,125,3,124,2,100,1,107,8,
> + 114,56,116,1,124,3,131,1,114,56,100,2,125,4,116,2,
> + 106,3,124,4,106,4,124,3,100,3,25,0,131,1,116,5,
> + 131,2,1,0,124,2,83,0,41,4,122,155,84,114,121,32,
> + 116,111,32,102,105,110,100,32,97,32,108,111,97,100,101,114,
> + 32,102,111,114,32,116,104,101,32,115,112,101,99,105,102,105,
> + 101,100,32,109,111,100,117,108,101,32,98,121,32,100,101,108,
> + 101,103,97,116,105,110,103,32,116,111,10,32,32,32,32,115,
> + 101,108,102,46,102,105,110,100,95,108,111,97,100,101,114,40,
> + 41,46,10,10,32,32,32,32,84,104,105,115,32,109,101,116,
> + 104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,101,
> + 100,32,105,110,32,102,97,118,111,114,32,111,102,32,102,105,
> + 110,100,101,114,46,102,105,110,100,95,115,112,101,99,40,41,
> + 46,10,10,32,32,32,32,78,122,44,78,111,116,32,105,109,
> + 112,111,114,116,105,110,103,32,100,105,114,101,99,116,111,114,
> + 121,32,123,125,58,32,109,105,115,115,105,110,103,32,95,95,
> + 105,110,105,116,95,95,114,61,0,0,0,41,6,218,11,102,
> + 105,110,100,95,108,111,97,100,101,114,114,33,0,0,0,114,
> + 62,0,0,0,114,63,0,0,0,114,50,0,0,0,218,13,
> + 73,109,112,111,114,116,87,97,114,110,105,110,103,41,5,114,
> + 102,0,0,0,218,8,102,117,108,108,110,97,109,101,218,6,
> + 108,111,97,100,101,114,218,8,112,111,114,116,105,111,110,115,
> + 218,3,109,115,103,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,218,17,95,102,105,110,100,95,109,111,100,117,
> + 108,101,95,115,104,105,109,157,1,0,0,115,10,0,0,0,
> + 0,10,14,1,16,1,4,1,22,1,114,126,0,0,0,99,
> + 4,0,0,0,0,0,0,0,11,0,0,0,19,0,0,0,
> + 67,0,0,0,115,136,1,0,0,105,0,125,4,124,2,100,
> + 1,107,9,114,22,124,2,124,4,100,2,60,0,110,4,100,
> + 3,125,2,124,3,100,1,107,9,114,42,124,3,124,4,100,
> + 4,60,0,124,0,100,1,100,5,133,2,25,0,125,5,124,
> + 0,100,5,100,6,133,2,25,0,125,6,124,0,100,6,100,
> + 7,133,2,25,0,125,7,124,5,116,0,107,3,114,124,100,
> + 8,106,1,124,2,124,5,131,2,125,8,116,2,106,3,100,
> + 9,124,8,131,2,1,0,116,4,124,8,102,1,124,4,142,
> + 1,130,1,110,86,116,5,124,6,131,1,100,5,107,3,114,
> + 168,100,10,106,1,124,2,131,1,125,8,116,2,106,3,100,
> + 9,124,8,131,2,1,0,116,6,124,8,131,1,130,1,110,
> + 42,116,5,124,7,131,1,100,5,107,3,114,210,100,11,106,
> + 1,124,2,131,1,125,8,116,2,106,3,100,9,124,8,131,
> + 2,1,0,116,6,124,8,131,1,130,1,124,1,100,1,107,
> + 9,144,1,114,124,121,16,116,7,124,1,100,12,25,0,131,
> + 1,125,9,87,0,110,22,4,0,116,8,107,10,144,1,114,
> + 2,1,0,1,0,1,0,89,0,110,50,88,0,116,9,124,
> + 6,131,1,124,9,107,3,144,1,114,52,100,13,106,1,124,
> + 2,131,1,125,8,116,2,106,3,100,9,124,8,131,2,1,
> + 0,116,4,124,8,102,1,124,4,142,1,130,1,121,16,124,
> + 1,100,14,25,0,100,15,64,0,125,10,87,0,110,22,4,
> + 0,116,8,107,10,144,1,114,90,1,0,1,0,1,0,89,
> + 0,110,34,88,0,116,9,124,7,131,1,124,10,107,3,144,
> + 1,114,124,116,4,100,13,106,1,124,2,131,1,102,1,124,
> + 4,142,1,130,1,124,0,100,7,100,1,133,2,25,0,83,
> + 0,41,16,97,122,1,0,0,86,97,108,105,100,97,116,101,
> + 32,116,104,101,32,104,101,97,100,101,114,32,111,102,32,116,
> + 104,101,32,112,97,115,115,101,100,45,105,110,32,98,121,116,
> + 101,99,111,100,101,32,97,103,97,105,110,115,116,32,115,111,
> + 117,114,99,101,95,115,116,97,116,115,32,40,105,102,10,32,
> + 32,32,32,103,105,118,101,110,41,32,97,110,100,32,114,101,
> + 116,117,114,110,105,110,103,32,116,104,101,32,98,121,116,101,
> + 99,111,100,101,32,116,104,97,116,32,99,97,110,32,98,101,
> + 32,99,111,109,112,105,108,101,100,32,98,121,32,99,111,109,
> + 112,105,108,101,40,41,46,10,10,32,32,32,32,65,108,108,
> + 32,111,116,104,101,114,32,97,114,103,117,109,101,110,116,115,
> + 32,97,114,101,32,117,115,101,100,32,116,111,32,101,110,104,
> + 97,110,99,101,32,101,114,114,111,114,32,114,101,112,111,114,
> + 116,105,110,103,46,10,10,32,32,32,32,73,109,112,111,114,
> + 116,69,114,114,111,114,32,105,115,32,114,97,105,115,101,100,
> + 32,119,104,101,110,32,116,104,101,32,109,97,103,105,99,32,
> + 110,117,109,98,101,114,32,105,115,32,105,110,99,111,114,114,
> + 101,99,116,32,111,114,32,116,104,101,32,98,121,116,101,99,
> + 111,100,101,32,105,115,10,32,32,32,32,102,111,117,110,100,
> + 32,116,111,32,98,101,32,115,116,97,108,101,46,32,69,79,
> + 70,69,114,114,111,114,32,105,115,32,114,97,105,115,101,100,
> + 32,119,104,101,110,32,116,104,101,32,100,97,116,97,32,105,
> + 115,32,102,111,117,110,100,32,116,111,32,98,101,10,32,32,
> + 32,32,116,114,117,110,99,97,116,101,100,46,10,10,32,32,
> + 32,32,78,114,100,0,0,0,122,10,60,98,121,116,101,99,
> + 111,100,101,62,114,37,0,0,0,114,14,0,0,0,233,8,
> + 0,0,0,233,12,0,0,0,122,30,98,97,100,32,109,97,
> + 103,105,99,32,110,117,109,98,101,114,32,105,110,32,123,33,
> + 114,125,58,32,123,33,114,125,122,2,123,125,122,43,114,101,
> + 97,99,104,101,100,32,69,79,70,32,119,104,105,108,101,32,
> + 114,101,97,100,105,110,103,32,116,105,109,101,115,116,97,109,
> + 112,32,105,110,32,123,33,114,125,122,48,114,101,97,99,104,
> + 101,100,32,69,79,70,32,119,104,105,108,101,32,114,101,97,
> + 100,105,110,103,32,115,105,122,101,32,111,102,32,115,111,117,
> + 114,99,101,32,105,110,32,123,33,114,125,218,5,109,116,105,
> + 109,101,122,26,98,121,116,101,99,111,100,101,32,105,115,32,
> + 115,116,97,108,101,32,102,111,114,32,123,33,114,125,218,4,
> + 115,105,122,101,108,3,0,0,0,255,127,255,127,3,0,41,
> + 10,218,12,77,65,71,73,67,95,78,85,77,66,69,82,114,
> + 50,0,0,0,114,117,0,0,0,218,16,95,118,101,114,98,
> + 111,115,101,95,109,101,115,115,97,103,101,114,101,0,0,0,
> + 114,33,0,0,0,218,8,69,79,70,69,114,114,111,114,114,
> + 16,0,0,0,218,8,75,101,121,69,114,114,111,114,114,21,
> + 0,0,0,41,11,114,55,0,0,0,218,12,115,111,117,114,
> + 99,101,95,115,116,97,116,115,114,100,0,0,0,114,37,0,
> + 0,0,90,11,101,120,99,95,100,101,116,97,105,108,115,90,
> + 5,109,97,103,105,99,90,13,114,97,119,95,116,105,109,101,
> + 115,116,97,109,112,90,8,114,97,119,95,115,105,122,101,114,
> + 77,0,0,0,218,12,115,111,117,114,99,101,95,109,116,105,
> + 109,101,218,11,115,111,117,114,99,101,95,115,105,122,101,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,25,
> + 95,118,97,108,105,100,97,116,101,95,98,121,116,101,99,111,
> + 100,101,95,104,101,97,100,101,114,174,1,0,0,115,76,0,
> + 0,0,0,11,4,1,8,1,10,3,4,1,8,1,8,1,
> + 12,1,12,1,12,1,8,1,12,1,12,1,14,1,12,1,
> + 10,1,12,1,10,1,12,1,10,1,12,1,8,1,10,1,
> + 2,1,16,1,16,1,6,2,14,1,10,1,12,1,12,1,
> + 2,1,16,1,16,1,6,2,14,1,12,1,6,1,114,138,
> + 0,0,0,99,4,0,0,0,0,0,0,0,5,0,0,0,
> + 5,0,0,0,67,0,0,0,115,80,0,0,0,116,0,106,
> + 1,124,0,131,1,125,4,116,2,124,4,116,3,131,2,114,
> + 56,116,4,106,5,100,1,124,2,131,2,1,0,124,3,100,
> + 2,107,9,114,52,116,6,106,7,124,4,124,3,131,2,1,
> + 0,124,4,83,0,116,8,100,3,106,9,124,2,131,1,124,
> + 1,124,2,100,4,141,3,130,1,100,2,83,0,41,5,122,
> + 60,67,111,109,112,105,108,101,32,98,121,116,101,99,111,100,
> + 101,32,97,115,32,114,101,116,117,114,110,101,100,32,98,121,
> + 32,95,118,97,108,105,100,97,116,101,95,98,121,116,101,99,
> + 111,100,101,95,104,101,97,100,101,114,40,41,46,122,21,99,
> + 111,100,101,32,111,98,106,101,99,116,32,102,114,111,109,32,
> + 123,33,114,125,78,122,23,78,111,110,45,99,111,100,101,32,
> + 111,98,106,101,99,116,32,105,110,32,123,33,114,125,41,2,
> + 114,100,0,0,0,114,37,0,0,0,41,10,218,7,109,97,
> + 114,115,104,97,108,90,5,108,111,97,100,115,218,10,105,115,
> + 105,110,115,116,97,110,99,101,218,10,95,99,111,100,101,95,
> + 116,121,112,101,114,117,0,0,0,114,132,0,0,0,218,4,
> + 95,105,109,112,90,16,95,102,105,120,95,99,111,95,102,105,
> + 108,101,110,97,109,101,114,101,0,0,0,114,50,0,0,0,
> + 41,5,114,55,0,0,0,114,100,0,0,0,114,91,0,0,
> + 0,114,92,0,0,0,218,4,99,111,100,101,114,4,0,0,
> + 0,114,4,0,0,0,114,6,0,0,0,218,17,95,99,111,
> + 109,112,105,108,101,95,98,121,116,101,99,111,100,101,229,1,
> + 0,0,115,16,0,0,0,0,2,10,1,10,1,12,1,8,
> + 1,12,1,4,2,10,1,114,144,0,0,0,114,61,0,0,
> + 0,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
> + 0,0,67,0,0,0,115,56,0,0,0,116,0,116,1,131,
> + 1,125,3,124,3,106,2,116,3,124,1,131,1,131,1,1,
> + 0,124,3,106,2,116,3,124,2,131,1,131,1,1,0,124,
> + 3,106,2,116,4,106,5,124,0,131,1,131,1,1,0,124,
> + 3,83,0,41,1,122,80,67,111,109,112,105,108,101,32,97,
> + 32,99,111,100,101,32,111,98,106,101,99,116,32,105,110,116,
> + 111,32,98,121,116,101,99,111,100,101,32,102,111,114,32,119,
> + 114,105,116,105,110,103,32,111,117,116,32,116,111,32,97,32,
> + 98,121,116,101,45,99,111,109,112,105,108,101,100,10,32,32,
> + 32,32,102,105,108,101,46,41,6,218,9,98,121,116,101,97,
> + 114,114,97,121,114,131,0,0,0,218,6,101,120,116,101,110,
> + 100,114,19,0,0,0,114,139,0,0,0,90,5,100,117,109,
> + 112,115,41,4,114,143,0,0,0,114,129,0,0,0,114,137,
> + 0,0,0,114,55,0,0,0,114,4,0,0,0,114,4,0,
> + 0,0,114,6,0,0,0,218,17,95,99,111,100,101,95,116,
> + 111,95,98,121,116,101,99,111,100,101,241,1,0,0,115,10,
> + 0,0,0,0,3,8,1,14,1,14,1,16,1,114,147,0,
> + 0,0,99,1,0,0,0,0,0,0,0,5,0,0,0,4,
> + 0,0,0,67,0,0,0,115,62,0,0,0,100,1,100,2,
> + 108,0,125,1,116,1,106,2,124,0,131,1,106,3,125,2,
> + 124,1,106,4,124,2,131,1,125,3,116,1,106,5,100,2,
> + 100,3,131,2,125,4,124,4,106,6,124,0,106,6,124,3,
> + 100,1,25,0,131,1,131,1,83,0,41,4,122,121,68,101,
> + 99,111,100,101,32,98,121,116,101,115,32,114,101,112,114,101,
> + 115,101,110,116,105,110,103,32,115,111,117,114,99,101,32,99,
> + 111,100,101,32,97,110,100,32,114,101,116,117,114,110,32,116,
> + 104,101,32,115,116,114,105,110,103,46,10,10,32,32,32,32,
> + 85,110,105,118,101,114,115,97,108,32,110,101,119,108,105,110,
> + 101,32,115,117,112,112,111,114,116,32,105,115,32,117,115,101,
> + 100,32,105,110,32,116,104,101,32,100,101,99,111,100,105,110,
> + 103,46,10,32,32,32,32,114,61,0,0,0,78,84,41,7,
> + 218,8,116,111,107,101,110,105,122,101,114,52,0,0,0,90,
> + 7,66,121,116,101,115,73,79,90,8,114,101,97,100,108,105,
> + 110,101,90,15,100,101,116,101,99,116,95,101,110,99,111,100,
> + 105,110,103,90,25,73,110,99,114,101,109,101,110,116,97,108,
> + 78,101,119,108,105,110,101,68,101,99,111,100,101,114,218,6,
> + 100,101,99,111,100,101,41,5,218,12,115,111,117,114,99,101,
> + 95,98,121,116,101,115,114,148,0,0,0,90,21,115,111,117,
> + 114,99,101,95,98,121,116,101,115,95,114,101,97,100,108,105,
> + 110,101,218,8,101,110,99,111,100,105,110,103,90,15,110,101,
> + 119,108,105,110,101,95,100,101,99,111,100,101,114,114,4,0,
> + 0,0,114,4,0,0,0,114,6,0,0,0,218,13,100,101,
> + 99,111,100,101,95,115,111,117,114,99,101,251,1,0,0,115,
> + 10,0,0,0,0,5,8,1,12,1,10,1,12,1,114,152,
> + 0,0,0,41,2,114,123,0,0,0,218,26,115,117,98,109,
> + 111,100,117,108,101,95,115,101,97,114,99,104,95,108,111,99,
> + 97,116,105,111,110,115,99,2,0,0,0,2,0,0,0,9,
> + 0,0,0,19,0,0,0,67,0,0,0,115,12,1,0,0,
> + 124,1,100,1,107,8,114,60,100,2,125,1,116,0,124,2,
> + 100,3,131,2,114,64,121,14,124,2,106,1,124,0,131,1,
> + 125,1,87,0,113,64,4,0,116,2,107,10,114,56,1,0,
> + 1,0,1,0,89,0,113,64,88,0,110,4,124,1,125,1,
> + 116,3,106,4,124,0,124,2,124,1,100,4,141,3,125,4,
> + 100,5,124,4,95,5,124,2,100,1,107,8,114,150,120,54,
> + 116,6,131,0,68,0,93,40,92,2,125,5,125,6,124,1,
> + 106,7,116,8,124,6,131,1,131,1,114,102,124,5,124,0,
> + 124,1,131,2,125,2,124,2,124,4,95,9,80,0,113,102,
> + 87,0,100,1,83,0,124,3,116,10,107,8,114,216,116,0,
> + 124,2,100,6,131,2,114,222,121,14,124,2,106,11,124,0,
> + 131,1,125,7,87,0,110,20,4,0,116,2,107,10,114,202,
> + 1,0,1,0,1,0,89,0,113,222,88,0,124,7,114,222,
> + 103,0,124,4,95,12,110,6,124,3,124,4,95,12,124,4,
> + 106,12,103,0,107,2,144,1,114,8,124,1,144,1,114,8,
> + 116,13,124,1,131,1,100,7,25,0,125,8,124,4,106,12,
> + 106,14,124,8,131,1,1,0,124,4,83,0,41,8,97,61,
> + 1,0,0,82,101,116,117,114,110,32,97,32,109,111,100,117,
> + 108,101,32,115,112,101,99,32,98,97,115,101,100,32,111,110,
> + 32,97,32,102,105,108,101,32,108,111,99,97,116,105,111,110,
> + 46,10,10,32,32,32,32,84,111,32,105,110,100,105,99,97,
> + 116,101,32,116,104,97,116,32,116,104,101,32,109,111,100,117,
> + 108,101,32,105,115,32,97,32,112,97,99,107,97,103,101,44,
> + 32,115,101,116,10,32,32,32,32,115,117,98,109,111,100,117,
> + 108,101,95,115,101,97,114,99,104,95,108,111,99,97,116,105,
> + 111,110,115,32,116,111,32,97,32,108,105,115,116,32,111,102,
> + 32,100,105,114,101,99,116,111,114,121,32,112,97,116,104,115,
> + 46,32,32,65,110,10,32,32,32,32,101,109,112,116,121,32,
> + 108,105,115,116,32,105,115,32,115,117,102,102,105,99,105,101,
> + 110,116,44,32,116,104,111,117,103,104,32,105,116,115,32,110,
> + 111,116,32,111,116,104,101,114,119,105,115,101,32,117,115,101,
> + 102,117,108,32,116,111,32,116,104,101,10,32,32,32,32,105,
> + 109,112,111,114,116,32,115,121,115,116,101,109,46,10,10,32,
> + 32,32,32,84,104,101,32,108,111,97,100,101,114,32,109,117,
> + 115,116,32,116,97,107,101,32,97,32,115,112,101,99,32,97,
> + 115,32,105,116,115,32,111,110,108,121,32,95,95,105,110,105,
> + 116,95,95,40,41,32,97,114,103,46,10,10,32,32,32,32,
> + 78,122,9,60,117,110,107,110,111,119,110,62,218,12,103,101,
> + 116,95,102,105,108,101,110,97,109,101,41,1,218,6,111,114,
> + 105,103,105,110,84,218,10,105,115,95,112,97,99,107,97,103,
> + 101,114,61,0,0,0,41,15,114,110,0,0,0,114,154,0,
> + 0,0,114,101,0,0,0,114,117,0,0,0,218,10,77,111,
> + 100,117,108,101,83,112,101,99,90,13,95,115,101,116,95,102,
> + 105,108,101,97,116,116,114,218,27,95,103,101,116,95,115,117,
> + 112,112,111,114,116,101,100,95,102,105,108,101,95,108,111,97,
> + 100,101,114,115,114,94,0,0,0,114,95,0,0,0,114,123,
> + 0,0,0,218,9,95,80,79,80,85,76,65,84,69,114,156,
> + 0,0,0,114,153,0,0,0,114,40,0,0,0,218,6,97,
> + 112,112,101,110,100,41,9,114,100,0,0,0,90,8,108,111,
> + 99,97,116,105,111,110,114,123,0,0,0,114,153,0,0,0,
> + 218,4,115,112,101,99,218,12,108,111,97,100,101,114,95,99,
> + 108,97,115,115,218,8,115,117,102,102,105,120,101,115,114,156,
> + 0,0,0,90,7,100,105,114,110,97,109,101,114,4,0,0,
> + 0,114,4,0,0,0,114,6,0,0,0,218,23,115,112,101,
> + 99,95,102,114,111,109,95,102,105,108,101,95,108,111,99,97,
> + 116,105,111,110,12,2,0,0,115,62,0,0,0,0,12,8,
> + 4,4,1,10,2,2,1,14,1,14,1,8,3,4,7,16,
> + 1,6,3,8,1,16,1,14,1,10,1,6,1,6,2,4,
> + 3,8,2,10,1,2,1,14,1,14,1,6,2,4,1,8,
> + 2,6,1,12,1,6,1,12,1,12,2,114,164,0,0,0,
> + 99,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,
> + 0,64,0,0,0,115,80,0,0,0,101,0,90,1,100,0,
> + 90,2,100,1,90,3,100,2,90,4,100,3,90,5,100,4,
> + 90,6,101,7,100,5,100,6,132,0,131,1,90,8,101,7,
> + 100,7,100,8,132,0,131,1,90,9,101,7,100,14,100,10,
> + 100,11,132,1,131,1,90,10,101,7,100,15,100,12,100,13,
> + 132,1,131,1,90,11,100,9,83,0,41,16,218,21,87,105,
> + 110,100,111,119,115,82,101,103,105,115,116,114,121,70,105,110,
> + 100,101,114,122,62,77,101,116,97,32,112,97,116,104,32,102,
> + 105,110,100,101,114,32,102,111,114,32,109,111,100,117,108,101,
> + 115,32,100,101,99,108,97,114,101,100,32,105,110,32,116,104,
> + 101,32,87,105,110,100,111,119,115,32,114,101,103,105,115,116,
> + 114,121,46,122,59,83,111,102,116,119,97,114,101,92,80,121,
> + 116,104,111,110,92,80,121,116,104,111,110,67,111,114,101,92,
> + 123,115,121,115,95,118,101,114,115,105,111,110,125,92,77,111,
> + 100,117,108,101,115,92,123,102,117,108,108,110,97,109,101,125,
> + 122,65,83,111,102,116,119,97,114,101,92,80,121,116,104,111,
> + 110,92,80,121,116,104,111,110,67,111,114,101,92,123,115,121,
> + 115,95,118,101,114,115,105,111,110,125,92,77,111,100,117,108,
> + 101,115,92,123,102,117,108,108,110,97,109,101,125,92,68,101,
> + 98,117,103,70,99,2,0,0,0,0,0,0,0,2,0,0,
> + 0,11,0,0,0,67,0,0,0,115,50,0,0,0,121,14,
> + 116,0,106,1,116,0,106,2,124,1,131,2,83,0,4,0,
> + 116,3,107,10,114,44,1,0,1,0,1,0,116,0,106,1,
> + 116,0,106,4,124,1,131,2,83,0,88,0,100,0,83,0,
> + 41,1,78,41,5,218,7,95,119,105,110,114,101,103,90,7,
> + 79,112,101,110,75,101,121,90,17,72,75,69,89,95,67,85,
> + 82,82,69,78,84,95,85,83,69,82,114,42,0,0,0,90,
> + 18,72,75,69,89,95,76,79,67,65,76,95,77,65,67,72,
> + 73,78,69,41,2,218,3,99,108,115,114,5,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,14,
> + 95,111,112,101,110,95,114,101,103,105,115,116,114,121,92,2,
> + 0,0,115,8,0,0,0,0,2,2,1,14,1,14,1,122,
> + 36,87,105,110,100,111,119,115,82,101,103,105,115,116,114,121,
> + 70,105,110,100,101,114,46,95,111,112,101,110,95,114,101,103,
> + 105,115,116,114,121,99,2,0,0,0,0,0,0,0,6,0,
> + 0,0,16,0,0,0,67,0,0,0,115,112,0,0,0,124,
> + 0,106,0,114,14,124,0,106,1,125,2,110,6,124,0,106,
> + 2,125,2,124,2,106,3,124,1,100,1,116,4,106,5,100,
> + 0,100,2,133,2,25,0,22,0,100,3,141,2,125,3,121,
> + 38,124,0,106,6,124,3,131,1,143,18,125,4,116,7,106,
> + 8,124,4,100,4,131,2,125,5,87,0,100,0,81,0,82,
> + 0,88,0,87,0,110,20,4,0,116,9,107,10,114,106,1,
> + 0,1,0,1,0,100,0,83,0,88,0,124,5,83,0,41,
> + 5,78,122,5,37,100,46,37,100,114,58,0,0,0,41,2,
> + 114,122,0,0,0,90,11,115,121,115,95,118,101,114,115,105,
> + 111,110,114,32,0,0,0,41,10,218,11,68,69,66,85,71,
> + 95,66,85,73,76,68,218,18,82,69,71,73,83,84,82,89,
> + 95,75,69,89,95,68,69,66,85,71,218,12,82,69,71,73,
> + 83,84,82,89,95,75,69,89,114,50,0,0,0,114,8,0,
> + 0,0,218,12,118,101,114,115,105,111,110,95,105,110,102,111,
> + 114,168,0,0,0,114,166,0,0,0,90,10,81,117,101,114,
> + 121,86,97,108,117,101,114,42,0,0,0,41,6,114,167,0,
> + 0,0,114,122,0,0,0,90,12,114,101,103,105,115,116,114,
> + 121,95,107,101,121,114,5,0,0,0,90,4,104,107,101,121,
> + 218,8,102,105,108,101,112,97,116,104,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,218,16,95,115,101,97,114,
> + 99,104,95,114,101,103,105,115,116,114,121,99,2,0,0,115,
> + 22,0,0,0,0,2,6,1,8,2,6,1,6,1,22,1,
> + 2,1,12,1,26,1,14,1,6,1,122,38,87,105,110,100,
> + 111,119,115,82,101,103,105,115,116,114,121,70,105,110,100,101,
> + 114,46,95,115,101,97,114,99,104,95,114,101,103,105,115,116,
> + 114,121,78,99,4,0,0,0,0,0,0,0,8,0,0,0,
> + 14,0,0,0,67,0,0,0,115,120,0,0,0,124,0,106,
> + 0,124,1,131,1,125,4,124,4,100,0,107,8,114,22,100,
> + 0,83,0,121,12,116,1,124,4,131,1,1,0,87,0,110,
> + 20,4,0,116,2,107,10,114,54,1,0,1,0,1,0,100,
> + 0,83,0,88,0,120,58,116,3,131,0,68,0,93,48,92,
> + 2,125,5,125,6,124,4,106,4,116,5,124,6,131,1,131,
> + 1,114,64,116,6,106,7,124,1,124,5,124,1,124,4,131,
> + 2,124,4,100,1,141,3,125,7,124,7,83,0,113,64,87,
> + 0,100,0,83,0,41,2,78,41,1,114,155,0,0,0,41,
> + 8,114,174,0,0,0,114,41,0,0,0,114,42,0,0,0,
> + 114,158,0,0,0,114,94,0,0,0,114,95,0,0,0,114,
> + 117,0,0,0,218,16,115,112,101,99,95,102,114,111,109,95,
> + 108,111,97,100,101,114,41,8,114,167,0,0,0,114,122,0,
> + 0,0,114,37,0,0,0,218,6,116,97,114,103,101,116,114,
> + 173,0,0,0,114,123,0,0,0,114,163,0,0,0,114,161,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,9,102,105,110,100,95,115,112,101,99,114,2,0,
> + 0,115,26,0,0,0,0,2,10,1,8,1,4,1,2,1,
> + 12,1,14,1,6,1,16,1,14,1,6,1,8,1,8,1,
> + 122,31,87,105,110,100,111,119,115,82,101,103,105,115,116,114,
> + 121,70,105,110,100,101,114,46,102,105,110,100,95,115,112,101,
> + 99,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
> + 0,0,67,0,0,0,115,34,0,0,0,124,0,106,0,124,
> + 1,124,2,131,2,125,3,124,3,100,1,107,9,114,26,124,
> + 3,106,1,83,0,100,1,83,0,100,1,83,0,41,2,122,
> + 108,70,105,110,100,32,109,111,100,117,108,101,32,110,97,109,
> + 101,100,32,105,110,32,116,104,101,32,114,101,103,105,115,116,
> + 114,121,46,10,10,32,32,32,32,32,32,32,32,84,104,105,
> + 115,32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,
> + 101,99,97,116,101,100,46,32,32,85,115,101,32,101,120,101,
> + 99,95,109,111,100,117,108,101,40,41,32,105,110,115,116,101,
> + 97,100,46,10,10,32,32,32,32,32,32,32,32,78,41,2,
> + 114,177,0,0,0,114,123,0,0,0,41,4,114,167,0,0,
> + 0,114,122,0,0,0,114,37,0,0,0,114,161,0,0,0,
> + 114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,
> + 11,102,105,110,100,95,109,111,100,117,108,101,130,2,0,0,
> + 115,8,0,0,0,0,7,12,1,8,1,6,2,122,33,87,
> + 105,110,100,111,119,115,82,101,103,105,115,116,114,121,70,105,
> + 110,100,101,114,46,102,105,110,100,95,109,111,100,117,108,101,
> + 41,2,78,78,41,1,78,41,12,114,107,0,0,0,114,106,
> + 0,0,0,114,108,0,0,0,114,109,0,0,0,114,171,0,
> + 0,0,114,170,0,0,0,114,169,0,0,0,218,11,99,108,
> + 97,115,115,109,101,116,104,111,100,114,168,0,0,0,114,174,
> + 0,0,0,114,177,0,0,0,114,178,0,0,0,114,4,0,
> + 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,165,0,0,0,80,2,0,0,115,20,0,0,0,8,
> + 2,4,3,4,3,4,2,4,2,12,7,12,15,2,1,12,
> + 15,2,1,114,165,0,0,0,99,0,0,0,0,0,0,0,
> + 0,0,0,0,0,2,0,0,0,64,0,0,0,115,48,0,
> + 0,0,101,0,90,1,100,0,90,2,100,1,90,3,100,2,
> + 100,3,132,0,90,4,100,4,100,5,132,0,90,5,100,6,
> + 100,7,132,0,90,6,100,8,100,9,132,0,90,7,100,10,
> + 83,0,41,11,218,13,95,76,111,97,100,101,114,66,97,115,
> + 105,99,115,122,83,66,97,115,101,32,99,108,97,115,115,32,
> + 111,102,32,99,111,109,109,111,110,32,99,111,100,101,32,110,
> + 101,101,100,101,100,32,98,121,32,98,111,116,104,32,83,111,
> + 117,114,99,101,76,111,97,100,101,114,32,97,110,100,10,32,
> + 32,32,32,83,111,117,114,99,101,108,101,115,115,70,105,108,
> + 101,76,111,97,100,101,114,46,99,2,0,0,0,0,0,0,
> + 0,5,0,0,0,3,0,0,0,67,0,0,0,115,64,0,
> + 0,0,116,0,124,0,106,1,124,1,131,1,131,1,100,1,
> + 25,0,125,2,124,2,106,2,100,2,100,1,131,2,100,3,
> + 25,0,125,3,124,1,106,3,100,2,131,1,100,4,25,0,
> + 125,4,124,3,100,5,107,2,111,62,124,4,100,5,107,3,
> + 83,0,41,6,122,141,67,111,110,99,114,101,116,101,32,105,
> + 109,112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,
> + 32,73,110,115,112,101,99,116,76,111,97,100,101,114,46,105,
> + 115,95,112,97,99,107,97,103,101,32,98,121,32,99,104,101,
> + 99,107,105,110,103,32,105,102,10,32,32,32,32,32,32,32,
> + 32,116,104,101,32,112,97,116,104,32,114,101,116,117,114,110,
> + 101,100,32,98,121,32,103,101,116,95,102,105,108,101,110,97,
> + 109,101,32,104,97,115,32,97,32,102,105,108,101,110,97,109,
> + 101,32,111,102,32,39,95,95,105,110,105,116,95,95,46,112,
> + 121,39,46,114,31,0,0,0,114,60,0,0,0,114,61,0,
> + 0,0,114,58,0,0,0,218,8,95,95,105,110,105,116,95,
> + 95,41,4,114,40,0,0,0,114,154,0,0,0,114,36,0,
> + 0,0,114,34,0,0,0,41,5,114,102,0,0,0,114,122,
> + 0,0,0,114,96,0,0,0,90,13,102,105,108,101,110,97,
> + 109,101,95,98,97,115,101,90,9,116,97,105,108,95,110,97,
> + 109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,156,0,0,0,149,2,0,0,115,8,0,0,0,0,
> + 3,18,1,16,1,14,1,122,24,95,76,111,97,100,101,114,
> + 66,97,115,105,99,115,46,105,115,95,112,97,99,107,97,103,
> + 101,99,2,0,0,0,0,0,0,0,2,0,0,0,1,0,
> + 0,0,67,0,0,0,115,4,0,0,0,100,1,83,0,41,
> + 2,122,42,85,115,101,32,100,101,102,97,117,108,116,32,115,
> + 101,109,97,110,116,105,99,115,32,102,111,114,32,109,111,100,
> + 117,108,101,32,99,114,101,97,116,105,111,110,46,78,114,4,
> + 0,0,0,41,2,114,102,0,0,0,114,161,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,13,
> + 99,114,101,97,116,101,95,109,111,100,117,108,101,157,2,0,
> + 0,115,0,0,0,0,122,27,95,76,111,97,100,101,114,66,
> + 97,115,105,99,115,46,99,114,101,97,116,101,95,109,111,100,
> + 117,108,101,99,2,0,0,0,0,0,0,0,3,0,0,0,
> + 4,0,0,0,67,0,0,0,115,56,0,0,0,124,0,106,
> + 0,124,1,106,1,131,1,125,2,124,2,100,1,107,8,114,
> + 36,116,2,100,2,106,3,124,1,106,1,131,1,131,1,130,
> + 1,116,4,106,5,116,6,124,2,124,1,106,7,131,3,1,
> + 0,100,1,83,0,41,3,122,19,69,120,101,99,117,116,101,
> + 32,116,104,101,32,109,111,100,117,108,101,46,78,122,52,99,
> + 97,110,110,111,116,32,108,111,97,100,32,109,111,100,117,108,
> + 101,32,123,33,114,125,32,119,104,101,110,32,103,101,116,95,
> + 99,111,100,101,40,41,32,114,101,116,117,114,110,115,32,78,
> + 111,110,101,41,8,218,8,103,101,116,95,99,111,100,101,114,
> + 107,0,0,0,114,101,0,0,0,114,50,0,0,0,114,117,
> + 0,0,0,218,25,95,99,97,108,108,95,119,105,116,104,95,
> + 102,114,97,109,101,115,95,114,101,109,111,118,101,100,218,4,
> + 101,120,101,99,114,113,0,0,0,41,3,114,102,0,0,0,
> + 218,6,109,111,100,117,108,101,114,143,0,0,0,114,4,0,
> + 0,0,114,4,0,0,0,114,6,0,0,0,218,11,101,120,
> + 101,99,95,109,111,100,117,108,101,160,2,0,0,115,10,0,
> + 0,0,0,2,12,1,8,1,6,1,10,1,122,25,95,76,
> + 111,97,100,101,114,66,97,115,105,99,115,46,101,120,101,99,
> + 95,109,111,100,117,108,101,99,2,0,0,0,0,0,0,0,
> + 2,0,0,0,3,0,0,0,67,0,0,0,115,12,0,0,
> + 0,116,0,106,1,124,0,124,1,131,2,83,0,41,1,122,
> + 26,84,104,105,115,32,109,111,100,117,108,101,32,105,115,32,
> + 100,101,112,114,101,99,97,116,101,100,46,41,2,114,117,0,
> + 0,0,218,17,95,108,111,97,100,95,109,111,100,117,108,101,
> + 95,115,104,105,109,41,2,114,102,0,0,0,114,122,0,0,
> + 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 218,11,108,111,97,100,95,109,111,100,117,108,101,168,2,0,
> + 0,115,2,0,0,0,0,2,122,25,95,76,111,97,100,101,
> + 114,66,97,115,105,99,115,46,108,111,97,100,95,109,111,100,
> + 117,108,101,78,41,8,114,107,0,0,0,114,106,0,0,0,
> + 114,108,0,0,0,114,109,0,0,0,114,156,0,0,0,114,
> + 182,0,0,0,114,187,0,0,0,114,189,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,114,180,0,0,0,144,2,0,0,115,10,0,0,0,
> + 8,3,4,2,8,8,8,3,8,8,114,180,0,0,0,99,
> + 0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,
> + 64,0,0,0,115,74,0,0,0,101,0,90,1,100,0,90,
> + 2,100,1,100,2,132,0,90,3,100,3,100,4,132,0,90,
> + 4,100,5,100,6,132,0,90,5,100,7,100,8,132,0,90,
> + 6,100,9,100,10,132,0,90,7,100,18,100,12,156,1,100,
> + 13,100,14,132,2,90,8,100,15,100,16,132,0,90,9,100,
> + 17,83,0,41,19,218,12,83,111,117,114,99,101,76,111,97,
> + 100,101,114,99,2,0,0,0,0,0,0,0,2,0,0,0,
> + 1,0,0,0,67,0,0,0,115,8,0,0,0,116,0,130,
> + 1,100,1,83,0,41,2,122,178,79,112,116,105,111,110,97,
> + 108,32,109,101,116,104,111,100,32,116,104,97,116,32,114,101,
> + 116,117,114,110,115,32,116,104,101,32,109,111,100,105,102,105,
> + 99,97,116,105,111,110,32,116,105,109,101,32,40,97,110,32,
> + 105,110,116,41,32,102,111,114,32,116,104,101,10,32,32,32,
> + 32,32,32,32,32,115,112,101,99,105,102,105,101,100,32,112,
> + 97,116,104,44,32,119,104,101,114,101,32,112,97,116,104,32,
> + 105,115,32,97,32,115,116,114,46,10,10,32,32,32,32,32,
> + 32,32,32,82,97,105,115,101,115,32,73,79,69,114,114,111,
> + 114,32,119,104,101,110,32,116,104,101,32,112,97,116,104,32,
> + 99,97,110,110,111,116,32,98,101,32,104,97,110,100,108,101,
> + 100,46,10,32,32,32,32,32,32,32,32,78,41,1,218,7,
> + 73,79,69,114,114,111,114,41,2,114,102,0,0,0,114,37,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,10,112,97,116,104,95,109,116,105,109,101,175,2,
> + 0,0,115,2,0,0,0,0,6,122,23,83,111,117,114,99,
> + 101,76,111,97,100,101,114,46,112,97,116,104,95,109,116,105,
> + 109,101,99,2,0,0,0,0,0,0,0,2,0,0,0,3,
> + 0,0,0,67,0,0,0,115,14,0,0,0,100,1,124,0,
> + 106,0,124,1,131,1,105,1,83,0,41,2,97,170,1,0,
> + 0,79,112,116,105,111,110,97,108,32,109,101,116,104,111,100,
> + 32,114,101,116,117,114,110,105,110,103,32,97,32,109,101,116,
> + 97,100,97,116,97,32,100,105,99,116,32,102,111,114,32,116,
> + 104,101,32,115,112,101,99,105,102,105,101,100,32,112,97,116,
> + 104,10,32,32,32,32,32,32,32,32,116,111,32,98,121,32,
> + 116,104,101,32,112,97,116,104,32,40,115,116,114,41,46,10,
> + 32,32,32,32,32,32,32,32,80,111,115,115,105,98,108,101,
> + 32,107,101,121,115,58,10,32,32,32,32,32,32,32,32,45,
> + 32,39,109,116,105,109,101,39,32,40,109,97,110,100,97,116,
> + 111,114,121,41,32,105,115,32,116,104,101,32,110,117,109,101,
> + 114,105,99,32,116,105,109,101,115,116,97,109,112,32,111,102,
> + 32,108,97,115,116,32,115,111,117,114,99,101,10,32,32,32,
> + 32,32,32,32,32,32,32,99,111,100,101,32,109,111,100,105,
> + 102,105,99,97,116,105,111,110,59,10,32,32,32,32,32,32,
> + 32,32,45,32,39,115,105,122,101,39,32,40,111,112,116,105,
> + 111,110,97,108,41,32,105,115,32,116,104,101,32,115,105,122,
> + 101,32,105,110,32,98,121,116,101,115,32,111,102,32,116,104,
> + 101,32,115,111,117,114,99,101,32,99,111,100,101,46,10,10,
> + 32,32,32,32,32,32,32,32,73,109,112,108,101,109,101,110,
> + 116,105,110,103,32,116,104,105,115,32,109,101,116,104,111,100,
> + 32,97,108,108,111,119,115,32,116,104,101,32,108,111,97,100,
> + 101,114,32,116,111,32,114,101,97,100,32,98,121,116,101,99,
> + 111,100,101,32,102,105,108,101,115,46,10,32,32,32,32,32,
> + 32,32,32,82,97,105,115,101,115,32,73,79,69,114,114,111,
> + 114,32,119,104,101,110,32,116,104,101,32,112,97,116,104,32,
> + 99,97,110,110,111,116,32,98,101,32,104,97,110,100,108,101,
> + 100,46,10,32,32,32,32,32,32,32,32,114,129,0,0,0,
> + 41,1,114,192,0,0,0,41,2,114,102,0,0,0,114,37,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,10,112,97,116,104,95,115,116,97,116,115,183,2,
> + 0,0,115,2,0,0,0,0,11,122,23,83,111,117,114,99,
> + 101,76,111,97,100,101,114,46,112,97,116,104,95,115,116,97,
> + 116,115,99,4,0,0,0,0,0,0,0,4,0,0,0,3,
> + 0,0,0,67,0,0,0,115,12,0,0,0,124,0,106,0,
> + 124,2,124,3,131,2,83,0,41,1,122,228,79,112,116,105,
> + 111,110,97,108,32,109,101,116,104,111,100,32,119,104,105,99,
> + 104,32,119,114,105,116,101,115,32,100,97,116,97,32,40,98,
> + 121,116,101,115,41,32,116,111,32,97,32,102,105,108,101,32,
> + 112,97,116,104,32,40,97,32,115,116,114,41,46,10,10,32,
> + 32,32,32,32,32,32,32,73,109,112,108,101,109,101,110,116,
> + 105,110,103,32,116,104,105,115,32,109,101,116,104,111,100,32,
> + 97,108,108,111,119,115,32,102,111,114,32,116,104,101,32,119,
> + 114,105,116,105,110,103,32,111,102,32,98,121,116,101,99,111,
> + 100,101,32,102,105,108,101,115,46,10,10,32,32,32,32,32,
> + 32,32,32,84,104,101,32,115,111,117,114,99,101,32,112,97,
> + 116,104,32,105,115,32,110,101,101,100,101,100,32,105,110,32,
> + 111,114,100,101,114,32,116,111,32,99,111,114,114,101,99,116,
> + 108,121,32,116,114,97,110,115,102,101,114,32,112,101,114,109,
> + 105,115,115,105,111,110,115,10,32,32,32,32,32,32,32,32,
> + 41,1,218,8,115,101,116,95,100,97,116,97,41,4,114,102,
> + 0,0,0,114,92,0,0,0,90,10,99,97,99,104,101,95,
> + 112,97,116,104,114,55,0,0,0,114,4,0,0,0,114,4,
> + 0,0,0,114,6,0,0,0,218,15,95,99,97,99,104,101,
> + 95,98,121,116,101,99,111,100,101,196,2,0,0,115,2,0,
> + 0,0,0,8,122,28,83,111,117,114,99,101,76,111,97,100,
> + 101,114,46,95,99,97,99,104,101,95,98,121,116,101,99,111,
> + 100,101,99,3,0,0,0,0,0,0,0,3,0,0,0,1,
> + 0,0,0,67,0,0,0,115,4,0,0,0,100,1,83,0,
> + 41,2,122,150,79,112,116,105,111,110,97,108,32,109,101,116,
> + 104,111,100,32,119,104,105,99,104,32,119,114,105,116,101,115,
> + 32,100,97,116,97,32,40,98,121,116,101,115,41,32,116,111,
> + 32,97,32,102,105,108,101,32,112,97,116,104,32,40,97,32,
> + 115,116,114,41,46,10,10,32,32,32,32,32,32,32,32,73,
> + 109,112,108,101,109,101,110,116,105,110,103,32,116,104,105,115,
> + 32,109,101,116,104,111,100,32,97,108,108,111,119,115,32,102,
> + 111,114,32,116,104,101,32,119,114,105,116,105,110,103,32,111,
> + 102,32,98,121,116,101,99,111,100,101,32,102,105,108,101,115,
> + 46,10,32,32,32,32,32,32,32,32,78,114,4,0,0,0,
> + 41,3,114,102,0,0,0,114,37,0,0,0,114,55,0,0,
> + 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 114,194,0,0,0,206,2,0,0,115,0,0,0,0,122,21,
> + 83,111,117,114,99,101,76,111,97,100,101,114,46,115,101,116,
> + 95,100,97,116,97,99,2,0,0,0,0,0,0,0,5,0,
> + 0,0,16,0,0,0,67,0,0,0,115,82,0,0,0,124,
> + 0,106,0,124,1,131,1,125,2,121,14,124,0,106,1,124,
> + 2,131,1,125,3,87,0,110,48,4,0,116,2,107,10,114,
> + 72,1,0,125,4,1,0,122,20,116,3,100,1,124,1,100,
> + 2,141,2,124,4,130,2,87,0,89,0,100,3,100,3,125,
> + 4,126,4,88,0,110,2,88,0,116,4,124,3,131,1,83,
> + 0,41,4,122,52,67,111,110,99,114,101,116,101,32,105,109,
> + 112,108,101,109,101,110,116,97,116,105,111,110,32,111,102,32,
> + 73,110,115,112,101,99,116,76,111,97,100,101,114,46,103,101,
> + 116,95,115,111,117,114,99,101,46,122,39,115,111,117,114,99,
> + 101,32,110,111,116,32,97,118,97,105,108,97,98,108,101,32,
> + 116,104,114,111,117,103,104,32,103,101,116,95,100,97,116,97,
> + 40,41,41,1,114,100,0,0,0,78,41,5,114,154,0,0,
> + 0,218,8,103,101,116,95,100,97,116,97,114,42,0,0,0,
> + 114,101,0,0,0,114,152,0,0,0,41,5,114,102,0,0,
> + 0,114,122,0,0,0,114,37,0,0,0,114,150,0,0,0,
> + 218,3,101,120,99,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,218,10,103,101,116,95,115,111,117,114,99,101,
> + 213,2,0,0,115,14,0,0,0,0,2,10,1,2,1,14,
> + 1,16,1,4,1,28,1,122,23,83,111,117,114,99,101,76,
> + 111,97,100,101,114,46,103,101,116,95,115,111,117,114,99,101,
> + 114,31,0,0,0,41,1,218,9,95,111,112,116,105,109,105,
> + 122,101,99,3,0,0,0,1,0,0,0,4,0,0,0,8,
> + 0,0,0,67,0,0,0,115,22,0,0,0,116,0,106,1,
> + 116,2,124,1,124,2,100,1,100,2,124,3,100,3,141,6,
> + 83,0,41,4,122,130,82,101,116,117,114,110,32,116,104,101,
> + 32,99,111,100,101,32,111,98,106,101,99,116,32,99,111,109,
> + 112,105,108,101,100,32,102,114,111,109,32,115,111,117,114,99,
> + 101,46,10,10,32,32,32,32,32,32,32,32,84,104,101,32,
> + 39,100,97,116,97,39,32,97,114,103,117,109,101,110,116,32,
> + 99,97,110,32,98,101,32,97,110,121,32,111,98,106,101,99,
> + 116,32,116,121,112,101,32,116,104,97,116,32,99,111,109,112,
> + 105,108,101,40,41,32,115,117,112,112,111,114,116,115,46,10,
> + 32,32,32,32,32,32,32,32,114,185,0,0,0,84,41,2,
> + 218,12,100,111,110,116,95,105,110,104,101,114,105,116,114,70,
> + 0,0,0,41,3,114,117,0,0,0,114,184,0,0,0,218,
> + 7,99,111,109,112,105,108,101,41,4,114,102,0,0,0,114,
> + 55,0,0,0,114,37,0,0,0,114,199,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,218,14,115,
> + 111,117,114,99,101,95,116,111,95,99,111,100,101,223,2,0,
> + 0,115,4,0,0,0,0,5,12,1,122,27,83,111,117,114,
> + 99,101,76,111,97,100,101,114,46,115,111,117,114,99,101,95,
> + 116,111,95,99,111,100,101,99,2,0,0,0,0,0,0,0,
> + 10,0,0,0,43,0,0,0,67,0,0,0,115,94,1,0,
> + 0,124,0,106,0,124,1,131,1,125,2,100,1,125,3,121,
> + 12,116,1,124,2,131,1,125,4,87,0,110,24,4,0,116,
> + 2,107,10,114,50,1,0,1,0,1,0,100,1,125,4,89,
> + 0,110,162,88,0,121,14,124,0,106,3,124,2,131,1,125,
> + 5,87,0,110,20,4,0,116,4,107,10,114,86,1,0,1,
> + 0,1,0,89,0,110,126,88,0,116,5,124,5,100,2,25,
> + 0,131,1,125,3,121,14,124,0,106,6,124,4,131,1,125,
> + 6,87,0,110,20,4,0,116,7,107,10,114,134,1,0,1,
> + 0,1,0,89,0,110,78,88,0,121,20,116,8,124,6,124,
> + 5,124,1,124,4,100,3,141,4,125,7,87,0,110,24,4,
> + 0,116,9,116,10,102,2,107,10,114,180,1,0,1,0,1,
> + 0,89,0,110,32,88,0,116,11,106,12,100,4,124,4,124,
> + 2,131,3,1,0,116,13,124,7,124,1,124,4,124,2,100,
> + 5,141,4,83,0,124,0,106,6,124,2,131,1,125,8,124,
> + 0,106,14,124,8,124,2,131,2,125,9,116,11,106,12,100,
> + 6,124,2,131,2,1,0,116,15,106,16,12,0,144,1,114,
> + 90,124,4,100,1,107,9,144,1,114,90,124,3,100,1,107,
> + 9,144,1,114,90,116,17,124,9,124,3,116,18,124,8,131,
> + 1,131,3,125,6,121,30,124,0,106,19,124,2,124,4,124,
> + 6,131,3,1,0,116,11,106,12,100,7,124,4,131,2,1,
> + 0,87,0,110,22,4,0,116,2,107,10,144,1,114,88,1,
> + 0,1,0,1,0,89,0,110,2,88,0,124,9,83,0,41,
> + 8,122,190,67,111,110,99,114,101,116,101,32,105,109,112,108,
> + 101,109,101,110,116,97,116,105,111,110,32,111,102,32,73,110,
> + 115,112,101,99,116,76,111,97,100,101,114,46,103,101,116,95,
> + 99,111,100,101,46,10,10,32,32,32,32,32,32,32,32,82,
> + 101,97,100,105,110,103,32,111,102,32,98,121,116,101,99,111,
> + 100,101,32,114,101,113,117,105,114,101,115,32,112,97,116,104,
> + 95,115,116,97,116,115,32,116,111,32,98,101,32,105,109,112,
> + 108,101,109,101,110,116,101,100,46,32,84,111,32,119,114,105,
> + 116,101,10,32,32,32,32,32,32,32,32,98,121,116,101,99,
> + 111,100,101,44,32,115,101,116,95,100,97,116,97,32,109,117,
> + 115,116,32,97,108,115,111,32,98,101,32,105,109,112,108,101,
> + 109,101,110,116,101,100,46,10,10,32,32,32,32,32,32,32,
> + 32,78,114,129,0,0,0,41,3,114,135,0,0,0,114,100,
> + 0,0,0,114,37,0,0,0,122,13,123,125,32,109,97,116,
> + 99,104,101,115,32,123,125,41,3,114,100,0,0,0,114,91,
> + 0,0,0,114,92,0,0,0,122,19,99,111,100,101,32,111,
> + 98,106,101,99,116,32,102,114,111,109,32,123,125,122,10,119,
> + 114,111,116,101,32,123,33,114,125,41,20,114,154,0,0,0,
> + 114,81,0,0,0,114,68,0,0,0,114,193,0,0,0,114,
> + 191,0,0,0,114,16,0,0,0,114,196,0,0,0,114,42,
> + 0,0,0,114,138,0,0,0,114,101,0,0,0,114,133,0,
> + 0,0,114,117,0,0,0,114,132,0,0,0,114,144,0,0,
> + 0,114,202,0,0,0,114,8,0,0,0,218,19,100,111,110,
> + 116,95,119,114,105,116,101,95,98,121,116,101,99,111,100,101,
> + 114,147,0,0,0,114,33,0,0,0,114,195,0,0,0,41,
> + 10,114,102,0,0,0,114,122,0,0,0,114,92,0,0,0,
> + 114,136,0,0,0,114,91,0,0,0,218,2,115,116,114,55,
> + 0,0,0,218,10,98,121,116,101,115,95,100,97,116,97,114,
> + 150,0,0,0,90,11,99,111,100,101,95,111,98,106,101,99,
> + 116,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 114,183,0,0,0,231,2,0,0,115,78,0,0,0,0,7,
> + 10,1,4,1,2,1,12,1,14,1,10,2,2,1,14,1,
> + 14,1,6,2,12,1,2,1,14,1,14,1,6,2,2,1,
> + 4,1,4,1,12,1,18,1,6,2,8,1,6,1,6,1,
> + 2,1,8,1,10,1,12,1,12,1,20,1,10,1,6,1,
> + 10,1,2,1,14,1,16,1,16,1,6,1,122,21,83,111,
> + 117,114,99,101,76,111,97,100,101,114,46,103,101,116,95,99,
> + 111,100,101,78,114,89,0,0,0,41,10,114,107,0,0,0,
> + 114,106,0,0,0,114,108,0,0,0,114,192,0,0,0,114,
> + 193,0,0,0,114,195,0,0,0,114,194,0,0,0,114,198,
> + 0,0,0,114,202,0,0,0,114,183,0,0,0,114,4,0,
> + 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,190,0,0,0,173,2,0,0,115,14,0,0,0,8,
> + 2,8,8,8,13,8,10,8,7,8,10,14,8,114,190,0,
> + 0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,4,
> + 0,0,0,0,0,0,0,115,80,0,0,0,101,0,90,1,
> + 100,0,90,2,100,1,90,3,100,2,100,3,132,0,90,4,
> + 100,4,100,5,132,0,90,5,100,6,100,7,132,0,90,6,
> + 101,7,135,0,102,1,100,8,100,9,132,8,131,1,90,8,
> + 101,7,100,10,100,11,132,0,131,1,90,9,100,12,100,13,
> + 132,0,90,10,135,0,4,0,90,11,83,0,41,14,218,10,
> + 70,105,108,101,76,111,97,100,101,114,122,103,66,97,115,101,
> + 32,102,105,108,101,32,108,111,97,100,101,114,32,99,108,97,
> + 115,115,32,119,104,105,99,104,32,105,109,112,108,101,109,101,
> + 110,116,115,32,116,104,101,32,108,111,97,100,101,114,32,112,
> + 114,111,116,111,99,111,108,32,109,101,116,104,111,100,115,32,
> + 116,104,97,116,10,32,32,32,32,114,101,113,117,105,114,101,
> + 32,102,105,108,101,32,115,121,115,116,101,109,32,117,115,97,
> + 103,101,46,99,3,0,0,0,0,0,0,0,3,0,0,0,
> + 2,0,0,0,67,0,0,0,115,16,0,0,0,124,1,124,
> + 0,95,0,124,2,124,0,95,1,100,1,83,0,41,2,122,
> + 75,67,97,99,104,101,32,116,104,101,32,109,111,100,117,108,
> + 101,32,110,97,109,101,32,97,110,100,32,116,104,101,32,112,
> + 97,116,104,32,116,111,32,116,104,101,32,102,105,108,101,32,
> + 102,111,117,110,100,32,98,121,32,116,104,101,10,32,32,32,
> + 32,32,32,32,32,102,105,110,100,101,114,46,78,41,2,114,
> + 100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,0,
> + 114,122,0,0,0,114,37,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,181,0,0,0,32,3,
> + 0,0,115,4,0,0,0,0,3,6,1,122,19,70,105,108,
> + 101,76,111,97,100,101,114,46,95,95,105,110,105,116,95,95,
> + 99,2,0,0,0,0,0,0,0,2,0,0,0,2,0,0,
> + 0,67,0,0,0,115,24,0,0,0,124,0,106,0,124,1,
> + 106,0,107,2,111,22,124,0,106,1,124,1,106,1,107,2,
> + 83,0,41,1,78,41,2,218,9,95,95,99,108,97,115,115,
> + 95,95,114,113,0,0,0,41,2,114,102,0,0,0,218,5,
> + 111,116,104,101,114,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,218,6,95,95,101,113,95,95,38,3,0,0,
> + 115,4,0,0,0,0,1,12,1,122,17,70,105,108,101,76,
> + 111,97,100,101,114,46,95,95,101,113,95,95,99,1,0,0,
> + 0,0,0,0,0,1,0,0,0,3,0,0,0,67,0,0,
> + 0,115,20,0,0,0,116,0,124,0,106,1,131,1,116,0,
> + 124,0,106,2,131,1,65,0,83,0,41,1,78,41,3,218,
> + 4,104,97,115,104,114,100,0,0,0,114,37,0,0,0,41,
> + 1,114,102,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,218,8,95,95,104,97,115,104,95,95,42,
> + 3,0,0,115,2,0,0,0,0,1,122,19,70,105,108,101,
> + 76,111,97,100,101,114,46,95,95,104,97,115,104,95,95,99,
> + 2,0,0,0,0,0,0,0,2,0,0,0,3,0,0,0,
> + 3,0,0,0,115,16,0,0,0,116,0,116,1,124,0,131,
> + 2,106,2,124,1,131,1,83,0,41,1,122,100,76,111,97,
> + 100,32,97,32,109,111,100,117,108,101,32,102,114,111,109,32,
> + 97,32,102,105,108,101,46,10,10,32,32,32,32,32,32,32,
> + 32,84,104,105,115,32,109,101,116,104,111,100,32,105,115,32,
> + 100,101,112,114,101,99,97,116,101,100,46,32,32,85,115,101,
> + 32,101,120,101,99,95,109,111,100,117,108,101,40,41,32,105,
> + 110,115,116,101,97,100,46,10,10,32,32,32,32,32,32,32,
> + 32,41,3,218,5,115,117,112,101,114,114,206,0,0,0,114,
> + 189,0,0,0,41,2,114,102,0,0,0,114,122,0,0,0,
> + 41,1,114,207,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,189,0,0,0,45,3,0,0,115,2,0,0,0,0,
> + 10,122,22,70,105,108,101,76,111,97,100,101,114,46,108,111,
> + 97,100,95,109,111,100,117,108,101,99,2,0,0,0,0,0,
> + 0,0,2,0,0,0,1,0,0,0,67,0,0,0,115,6,
> + 0,0,0,124,0,106,0,83,0,41,1,122,58,82,101,116,
> + 117,114,110,32,116,104,101,32,112,97,116,104,32,116,111,32,
> + 116,104,101,32,115,111,117,114,99,101,32,102,105,108,101,32,
> + 97,115,32,102,111,117,110,100,32,98,121,32,116,104,101,32,
> + 102,105,110,100,101,114,46,41,1,114,37,0,0,0,41,2,
> + 114,102,0,0,0,114,122,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,154,0,0,0,57,3,
> + 0,0,115,2,0,0,0,0,3,122,23,70,105,108,101,76,
> + 111,97,100,101,114,46,103,101,116,95,102,105,108,101,110,97,
> + 109,101,99,2,0,0,0,0,0,0,0,3,0,0,0,9,
> + 0,0,0,67,0,0,0,115,32,0,0,0,116,0,106,1,
> + 124,1,100,1,131,2,143,10,125,2,124,2,106,2,131,0,
> + 83,0,81,0,82,0,88,0,100,2,83,0,41,3,122,39,
> + 82,101,116,117,114,110,32,116,104,101,32,100,97,116,97,32,
> + 102,114,111,109,32,112,97,116,104,32,97,115,32,114,97,119,
> + 32,98,121,116,101,115,46,218,1,114,78,41,3,114,52,0,
> + 0,0,114,53,0,0,0,90,4,114,101,97,100,41,3,114,
> + 102,0,0,0,114,37,0,0,0,114,56,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,114,196,0,
> + 0,0,62,3,0,0,115,4,0,0,0,0,2,14,1,122,
> + 19,70,105,108,101,76,111,97,100,101,114,46,103,101,116,95,
> + 100,97,116,97,41,12,114,107,0,0,0,114,106,0,0,0,
> + 114,108,0,0,0,114,109,0,0,0,114,181,0,0,0,114,
> + 209,0,0,0,114,211,0,0,0,114,119,0,0,0,114,189,
> + 0,0,0,114,154,0,0,0,114,196,0,0,0,90,13,95,
> + 95,99,108,97,115,115,99,101,108,108,95,95,114,4,0,0,
> + 0,114,4,0,0,0,41,1,114,207,0,0,0,114,6,0,
> + 0,0,114,206,0,0,0,27,3,0,0,115,14,0,0,0,
> + 8,3,4,2,8,6,8,4,8,3,16,12,12,5,114,206,
> + 0,0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,
> + 3,0,0,0,64,0,0,0,115,46,0,0,0,101,0,90,
> + 1,100,0,90,2,100,1,90,3,100,2,100,3,132,0,90,
> + 4,100,4,100,5,132,0,90,5,100,6,100,7,156,1,100,
> + 8,100,9,132,2,90,6,100,10,83,0,41,11,218,16,83,
> + 111,117,114,99,101,70,105,108,101,76,111,97,100,101,114,122,
> + 62,67,111,110,99,114,101,116,101,32,105,109,112,108,101,109,
> + 101,110,116,97,116,105,111,110,32,111,102,32,83,111,117,114,
> + 99,101,76,111,97,100,101,114,32,117,115,105,110,103,32,116,
> + 104,101,32,102,105,108,101,32,115,121,115,116,101,109,46,99,
> + 2,0,0,0,0,0,0,0,3,0,0,0,3,0,0,0,
> + 67,0,0,0,115,22,0,0,0,116,0,124,1,131,1,125,
> + 2,124,2,106,1,124,2,106,2,100,1,156,2,83,0,41,
> + 2,122,33,82,101,116,117,114,110,32,116,104,101,32,109,101,
> + 116,97,100,97,116,97,32,102,111,114,32,116,104,101,32,112,
> + 97,116,104,46,41,2,114,129,0,0,0,114,130,0,0,0,
> + 41,3,114,41,0,0,0,218,8,115,116,95,109,116,105,109,
> + 101,90,7,115,116,95,115,105,122,101,41,3,114,102,0,0,
> + 0,114,37,0,0,0,114,204,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,193,0,0,0,72,
> + 3,0,0,115,4,0,0,0,0,2,8,1,122,27,83,111,
> + 117,114,99,101,70,105,108,101,76,111,97,100,101,114,46,112,
> + 97,116,104,95,115,116,97,116,115,99,4,0,0,0,0,0,
> + 0,0,5,0,0,0,5,0,0,0,67,0,0,0,115,24,
> + 0,0,0,116,0,124,1,131,1,125,4,124,0,106,1,124,
> + 2,124,3,124,4,100,1,141,3,83,0,41,2,78,41,1,
> + 218,5,95,109,111,100,101,41,2,114,99,0,0,0,114,194,
> + 0,0,0,41,5,114,102,0,0,0,114,92,0,0,0,114,
> + 91,0,0,0,114,55,0,0,0,114,44,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,114,195,0,
> + 0,0,77,3,0,0,115,4,0,0,0,0,2,8,1,122,
> + 32,83,111,117,114,99,101,70,105,108,101,76,111,97,100,101,
> + 114,46,95,99,97,99,104,101,95,98,121,116,101,99,111,100,
> + 101,105,182,1,0,0,41,1,114,216,0,0,0,99,3,0,
> + 0,0,1,0,0,0,9,0,0,0,17,0,0,0,67,0,
> + 0,0,115,250,0,0,0,116,0,124,1,131,1,92,2,125,
> + 4,125,5,103,0,125,6,120,40,124,4,114,56,116,1,124,
> + 4,131,1,12,0,114,56,116,0,124,4,131,1,92,2,125,
> + 4,125,7,124,6,106,2,124,7,131,1,1,0,113,18,87,
> + 0,120,108,116,3,124,6,131,1,68,0,93,96,125,7,116,
> + 4,124,4,124,7,131,2,125,4,121,14,116,5,106,6,124,
> + 4,131,1,1,0,87,0,113,68,4,0,116,7,107,10,114,
> + 118,1,0,1,0,1,0,119,68,89,0,113,68,4,0,116,
> + 8,107,10,114,162,1,0,125,8,1,0,122,18,116,9,106,
> + 10,100,1,124,4,124,8,131,3,1,0,100,2,83,0,100,
> + 2,125,8,126,8,88,0,113,68,88,0,113,68,87,0,121,
> + 28,116,11,124,1,124,2,124,3,131,3,1,0,116,9,106,
> + 10,100,3,124,1,131,2,1,0,87,0,110,48,4,0,116,
> + 8,107,10,114,244,1,0,125,8,1,0,122,20,116,9,106,
> + 10,100,1,124,1,124,8,131,3,1,0,87,0,89,0,100,
> + 2,100,2,125,8,126,8,88,0,110,2,88,0,100,2,83,
> + 0,41,4,122,27,87,114,105,116,101,32,98,121,116,101,115,
> + 32,100,97,116,97,32,116,111,32,97,32,102,105,108,101,46,
> + 122,27,99,111,117,108,100,32,110,111,116,32,99,114,101,97,
> + 116,101,32,123,33,114,125,58,32,123,33,114,125,78,122,12,
> + 99,114,101,97,116,101,100,32,123,33,114,125,41,12,114,40,
> + 0,0,0,114,48,0,0,0,114,160,0,0,0,114,35,0,
> + 0,0,114,30,0,0,0,114,3,0,0,0,90,5,109,107,
> + 100,105,114,218,15,70,105,108,101,69,120,105,115,116,115,69,
> + 114,114,111,114,114,42,0,0,0,114,117,0,0,0,114,132,
> + 0,0,0,114,57,0,0,0,41,9,114,102,0,0,0,114,
> + 37,0,0,0,114,55,0,0,0,114,216,0,0,0,218,6,
> + 112,97,114,101,110,116,114,96,0,0,0,114,29,0,0,0,
> + 114,25,0,0,0,114,197,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,194,0,0,0,82,3,
> + 0,0,115,42,0,0,0,0,2,12,1,4,2,16,1,12,
> + 1,14,2,14,1,10,1,2,1,14,1,14,2,6,1,16,
> + 3,6,1,8,1,20,1,2,1,12,1,16,1,16,2,8,
> + 1,122,25,83,111,117,114,99,101,70,105,108,101,76,111,97,
> + 100,101,114,46,115,101,116,95,100,97,116,97,78,41,7,114,
> + 107,0,0,0,114,106,0,0,0,114,108,0,0,0,114,109,
> + 0,0,0,114,193,0,0,0,114,195,0,0,0,114,194,0,
> + 0,0,114,4,0,0,0,114,4,0,0,0,114,4,0,0,
> + 0,114,6,0,0,0,114,214,0,0,0,68,3,0,0,115,
> + 8,0,0,0,8,2,4,2,8,5,8,5,114,214,0,0,
> + 0,99,0,0,0,0,0,0,0,0,0,0,0,0,2,0,
> + 0,0,64,0,0,0,115,32,0,0,0,101,0,90,1,100,
> + 0,90,2,100,1,90,3,100,2,100,3,132,0,90,4,100,
> + 4,100,5,132,0,90,5,100,6,83,0,41,7,218,20,83,
> + 111,117,114,99,101,108,101,115,115,70,105,108,101,76,111,97,
> + 100,101,114,122,45,76,111,97,100,101,114,32,119,104,105,99,
> + 104,32,104,97,110,100,108,101,115,32,115,111,117,114,99,101,
> + 108,101,115,115,32,102,105,108,101,32,105,109,112,111,114,116,
> + 115,46,99,2,0,0,0,0,0,0,0,5,0,0,0,5,
> + 0,0,0,67,0,0,0,115,48,0,0,0,124,0,106,0,
> + 124,1,131,1,125,2,124,0,106,1,124,2,131,1,125,3,
> + 116,2,124,3,124,1,124,2,100,1,141,3,125,4,116,3,
> + 124,4,124,1,124,2,100,2,141,3,83,0,41,3,78,41,
> + 2,114,100,0,0,0,114,37,0,0,0,41,2,114,100,0,
> + 0,0,114,91,0,0,0,41,4,114,154,0,0,0,114,196,
> + 0,0,0,114,138,0,0,0,114,144,0,0,0,41,5,114,
> + 102,0,0,0,114,122,0,0,0,114,37,0,0,0,114,55,
> + 0,0,0,114,205,0,0,0,114,4,0,0,0,114,4,0,
> + 0,0,114,6,0,0,0,114,183,0,0,0,117,3,0,0,
> + 115,8,0,0,0,0,1,10,1,10,1,14,1,122,29,83,
> + 111,117,114,99,101,108,101,115,115,70,105,108,101,76,111,97,
> + 100,101,114,46,103,101,116,95,99,111,100,101,99,2,0,0,
> + 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
> + 0,115,4,0,0,0,100,1,83,0,41,2,122,39,82,101,
> + 116,117,114,110,32,78,111,110,101,32,97,115,32,116,104,101,
> + 114,101,32,105,115,32,110,111,32,115,111,117,114,99,101,32,
> + 99,111,100,101,46,78,114,4,0,0,0,41,2,114,102,0,
> + 0,0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,
> + 0,114,6,0,0,0,114,198,0,0,0,123,3,0,0,115,
> + 2,0,0,0,0,2,122,31,83,111,117,114,99,101,108,101,
> + 115,115,70,105,108,101,76,111,97,100,101,114,46,103,101,116,
> + 95,115,111,117,114,99,101,78,41,6,114,107,0,0,0,114,
> + 106,0,0,0,114,108,0,0,0,114,109,0,0,0,114,183,
> + 0,0,0,114,198,0,0,0,114,4,0,0,0,114,4,0,
> + 0,0,114,4,0,0,0,114,6,0,0,0,114,219,0,0,
> + 0,113,3,0,0,115,6,0,0,0,8,2,4,2,8,6,
> + 114,219,0,0,0,99,0,0,0,0,0,0,0,0,0,0,
> + 0,0,3,0,0,0,64,0,0,0,115,92,0,0,0,101,
> + 0,90,1,100,0,90,2,100,1,90,3,100,2,100,3,132,
> + 0,90,4,100,4,100,5,132,0,90,5,100,6,100,7,132,
> + 0,90,6,100,8,100,9,132,0,90,7,100,10,100,11,132,
> + 0,90,8,100,12,100,13,132,0,90,9,100,14,100,15,132,
> + 0,90,10,100,16,100,17,132,0,90,11,101,12,100,18,100,
> + 19,132,0,131,1,90,13,100,20,83,0,41,21,218,19,69,
> + 120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,
> + 101,114,122,93,76,111,97,100,101,114,32,102,111,114,32,101,
> + 120,116,101,110,115,105,111,110,32,109,111,100,117,108,101,115,
> + 46,10,10,32,32,32,32,84,104,101,32,99,111,110,115,116,
> + 114,117,99,116,111,114,32,105,115,32,100,101,115,105,103,110,
> + 101,100,32,116,111,32,119,111,114,107,32,119,105,116,104,32,
> + 70,105,108,101,70,105,110,100,101,114,46,10,10,32,32,32,
> + 32,99,3,0,0,0,0,0,0,0,3,0,0,0,2,0,
> + 0,0,67,0,0,0,115,16,0,0,0,124,1,124,0,95,
> + 0,124,2,124,0,95,1,100,0,83,0,41,1,78,41,2,
> + 114,100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,
> + 0,114,100,0,0,0,114,37,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,181,0,0,0,140,
> + 3,0,0,115,4,0,0,0,0,1,6,1,122,28,69,120,
> + 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
> + 114,46,95,95,105,110,105,116,95,95,99,2,0,0,0,0,
> + 0,0,0,2,0,0,0,2,0,0,0,67,0,0,0,115,
> + 24,0,0,0,124,0,106,0,124,1,106,0,107,2,111,22,
> + 124,0,106,1,124,1,106,1,107,2,83,0,41,1,78,41,
> + 2,114,207,0,0,0,114,113,0,0,0,41,2,114,102,0,
> + 0,0,114,208,0,0,0,114,4,0,0,0,114,4,0,0,
> + 0,114,6,0,0,0,114,209,0,0,0,144,3,0,0,115,
> + 4,0,0,0,0,1,12,1,122,26,69,120,116,101,110,115,
> + 105,111,110,70,105,108,101,76,111,97,100,101,114,46,95,95,
> + 101,113,95,95,99,1,0,0,0,0,0,0,0,1,0,0,
> + 0,3,0,0,0,67,0,0,0,115,20,0,0,0,116,0,
> + 124,0,106,1,131,1,116,0,124,0,106,2,131,1,65,0,
> + 83,0,41,1,78,41,3,114,210,0,0,0,114,100,0,0,
> + 0,114,37,0,0,0,41,1,114,102,0,0,0,114,4,0,
> + 0,0,114,4,0,0,0,114,6,0,0,0,114,211,0,0,
> + 0,148,3,0,0,115,2,0,0,0,0,1,122,28,69,120,
> + 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
> + 114,46,95,95,104,97,115,104,95,95,99,2,0,0,0,0,
> + 0,0,0,3,0,0,0,4,0,0,0,67,0,0,0,115,
> + 36,0,0,0,116,0,106,1,116,2,106,3,124,1,131,2,
> + 125,2,116,0,106,4,100,1,124,1,106,5,124,0,106,6,
> + 131,3,1,0,124,2,83,0,41,2,122,38,67,114,101,97,
> + 116,101,32,97,110,32,117,110,105,116,105,97,108,105,122,101,
> + 100,32,101,120,116,101,110,115,105,111,110,32,109,111,100,117,
> + 108,101,122,38,101,120,116,101,110,115,105,111,110,32,109,111,
> + 100,117,108,101,32,123,33,114,125,32,108,111,97,100,101,100,
> + 32,102,114,111,109,32,123,33,114,125,41,7,114,117,0,0,
> + 0,114,184,0,0,0,114,142,0,0,0,90,14,99,114,101,
> + 97,116,101,95,100,121,110,97,109,105,99,114,132,0,0,0,
> + 114,100,0,0,0,114,37,0,0,0,41,3,114,102,0,0,
> + 0,114,161,0,0,0,114,186,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,182,0,0,0,151,
> + 3,0,0,115,10,0,0,0,0,2,4,1,10,1,6,1,
> + 12,1,122,33,69,120,116,101,110,115,105,111,110,70,105,108,
> + 101,76,111,97,100,101,114,46,99,114,101,97,116,101,95,109,
> + 111,100,117,108,101,99,2,0,0,0,0,0,0,0,2,0,
> + 0,0,4,0,0,0,67,0,0,0,115,36,0,0,0,116,
> + 0,106,1,116,2,106,3,124,1,131,2,1,0,116,0,106,
> + 4,100,1,124,0,106,5,124,0,106,6,131,3,1,0,100,
> + 2,83,0,41,3,122,30,73,110,105,116,105,97,108,105,122,
> + 101,32,97,110,32,101,120,116,101,110,115,105,111,110,32,109,
> + 111,100,117,108,101,122,40,101,120,116,101,110,115,105,111,110,
> + 32,109,111,100,117,108,101,32,123,33,114,125,32,101,120,101,
> + 99,117,116,101,100,32,102,114,111,109,32,123,33,114,125,78,
> + 41,7,114,117,0,0,0,114,184,0,0,0,114,142,0,0,
> + 0,90,12,101,120,101,99,95,100,121,110,97,109,105,99,114,
> + 132,0,0,0,114,100,0,0,0,114,37,0,0,0,41,2,
> + 114,102,0,0,0,114,186,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,187,0,0,0,159,3,
> + 0,0,115,6,0,0,0,0,2,14,1,6,1,122,31,69,
> + 120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,
> + 101,114,46,101,120,101,99,95,109,111,100,117,108,101,99,2,
> + 0,0,0,0,0,0,0,2,0,0,0,4,0,0,0,3,
> + 0,0,0,115,36,0,0,0,116,0,124,0,106,1,131,1,
> + 100,1,25,0,137,0,116,2,135,0,102,1,100,2,100,3,
> + 132,8,116,3,68,0,131,1,131,1,83,0,41,4,122,49,
> + 82,101,116,117,114,110,32,84,114,117,101,32,105,102,32,116,
> + 104,101,32,101,120,116,101,110,115,105,111,110,32,109,111,100,
> + 117,108,101,32,105,115,32,97,32,112,97,99,107,97,103,101,
> + 46,114,31,0,0,0,99,1,0,0,0,0,0,0,0,2,
> + 0,0,0,4,0,0,0,51,0,0,0,115,26,0,0,0,
> + 124,0,93,18,125,1,136,0,100,0,124,1,23,0,107,2,
> + 86,0,1,0,113,2,100,1,83,0,41,2,114,181,0,0,
> + 0,78,114,4,0,0,0,41,2,114,24,0,0,0,218,6,
> + 115,117,102,102,105,120,41,1,218,9,102,105,108,101,95,110,
> + 97,109,101,114,4,0,0,0,114,6,0,0,0,250,9,60,
> + 103,101,110,101,120,112,114,62,168,3,0,0,115,2,0,0,
> + 0,4,1,122,49,69,120,116,101,110,115,105,111,110,70,105,
> + 108,101,76,111,97,100,101,114,46,105,115,95,112,97,99,107,
> + 97,103,101,46,60,108,111,99,97,108,115,62,46,60,103,101,
> + 110,101,120,112,114,62,41,4,114,40,0,0,0,114,37,0,
> + 0,0,218,3,97,110,121,218,18,69,88,84,69,78,83,73,
> + 79,78,95,83,85,70,70,73,88,69,83,41,2,114,102,0,
> + 0,0,114,122,0,0,0,114,4,0,0,0,41,1,114,222,
> + 0,0,0,114,6,0,0,0,114,156,0,0,0,165,3,0,
> + 0,115,6,0,0,0,0,2,14,1,12,1,122,30,69,120,
> + 116,101,110,115,105,111,110,70,105,108,101,76,111,97,100,101,
> + 114,46,105,115,95,112,97,99,107,97,103,101,99,2,0,0,
> + 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
> + 0,115,4,0,0,0,100,1,83,0,41,2,122,63,82,101,
> + 116,117,114,110,32,78,111,110,101,32,97,115,32,97,110,32,
> + 101,120,116,101,110,115,105,111,110,32,109,111,100,117,108,101,
> + 32,99,97,110,110,111,116,32,99,114,101,97,116,101,32,97,
> + 32,99,111,100,101,32,111,98,106,101,99,116,46,78,114,4,
> + 0,0,0,41,2,114,102,0,0,0,114,122,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,114,183,
> + 0,0,0,171,3,0,0,115,2,0,0,0,0,2,122,28,
> + 69,120,116,101,110,115,105,111,110,70,105,108,101,76,111,97,
> + 100,101,114,46,103,101,116,95,99,111,100,101,99,2,0,0,
> + 0,0,0,0,0,2,0,0,0,1,0,0,0,67,0,0,
> + 0,115,4,0,0,0,100,1,83,0,41,2,122,53,82,101,
> + 116,117,114,110,32,78,111,110,101,32,97,115,32,101,120,116,
> + 101,110,115,105,111,110,32,109,111,100,117,108,101,115,32,104,
> + 97,118,101,32,110,111,32,115,111,117,114,99,101,32,99,111,
> + 100,101,46,78,114,4,0,0,0,41,2,114,102,0,0,0,
> + 114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,114,198,0,0,0,175,3,0,0,115,2,0,
> + 0,0,0,2,122,30,69,120,116,101,110,115,105,111,110,70,
> + 105,108,101,76,111,97,100,101,114,46,103,101,116,95,115,111,
> + 117,114,99,101,99,2,0,0,0,0,0,0,0,2,0,0,
> + 0,1,0,0,0,67,0,0,0,115,6,0,0,0,124,0,
> + 106,0,83,0,41,1,122,58,82,101,116,117,114,110,32,116,
> + 104,101,32,112,97,116,104,32,116,111,32,116,104,101,32,115,
> + 111,117,114,99,101,32,102,105,108,101,32,97,115,32,102,111,
> + 117,110,100,32,98,121,32,116,104,101,32,102,105,110,100,101,
> + 114,46,41,1,114,37,0,0,0,41,2,114,102,0,0,0,
> + 114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,114,154,0,0,0,179,3,0,0,115,2,0,
> + 0,0,0,3,122,32,69,120,116,101,110,115,105,111,110,70,
> + 105,108,101,76,111,97,100,101,114,46,103,101,116,95,102,105,
> + 108,101,110,97,109,101,78,41,14,114,107,0,0,0,114,106,
> + 0,0,0,114,108,0,0,0,114,109,0,0,0,114,181,0,
> + 0,0,114,209,0,0,0,114,211,0,0,0,114,182,0,0,
> + 0,114,187,0,0,0,114,156,0,0,0,114,183,0,0,0,
> + 114,198,0,0,0,114,119,0,0,0,114,154,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,114,220,0,0,0,132,3,0,0,115,20,0,0,
> + 0,8,6,4,2,8,4,8,4,8,3,8,8,8,6,8,
> + 6,8,4,8,4,114,220,0,0,0,99,0,0,0,0,0,
> + 0,0,0,0,0,0,0,2,0,0,0,64,0,0,0,115,
> + 96,0,0,0,101,0,90,1,100,0,90,2,100,1,90,3,
> + 100,2,100,3,132,0,90,4,100,4,100,5,132,0,90,5,
> + 100,6,100,7,132,0,90,6,100,8,100,9,132,0,90,7,
> + 100,10,100,11,132,0,90,8,100,12,100,13,132,0,90,9,
> + 100,14,100,15,132,0,90,10,100,16,100,17,132,0,90,11,
> + 100,18,100,19,132,0,90,12,100,20,100,21,132,0,90,13,
> + 100,22,83,0,41,23,218,14,95,78,97,109,101,115,112,97,
> + 99,101,80,97,116,104,97,38,1,0,0,82,101,112,114,101,
> + 115,101,110,116,115,32,97,32,110,97,109,101,115,112,97,99,
> + 101,32,112,97,99,107,97,103,101,39,115,32,112,97,116,104,
> + 46,32,32,73,116,32,117,115,101,115,32,116,104,101,32,109,
> + 111,100,117,108,101,32,110,97,109,101,10,32,32,32,32,116,
> + 111,32,102,105,110,100,32,105,116,115,32,112,97,114,101,110,
> + 116,32,109,111,100,117,108,101,44,32,97,110,100,32,102,114,
> + 111,109,32,116,104,101,114,101,32,105,116,32,108,111,111,107,
> + 115,32,117,112,32,116,104,101,32,112,97,114,101,110,116,39,
> + 115,10,32,32,32,32,95,95,112,97,116,104,95,95,46,32,
> + 32,87,104,101,110,32,116,104,105,115,32,99,104,97,110,103,
> + 101,115,44,32,116,104,101,32,109,111,100,117,108,101,39,115,
> + 32,111,119,110,32,112,97,116,104,32,105,115,32,114,101,99,
> + 111,109,112,117,116,101,100,44,10,32,32,32,32,117,115,105,
> + 110,103,32,112,97,116,104,95,102,105,110,100,101,114,46,32,
> + 32,70,111,114,32,116,111,112,45,108,101,118,101,108,32,109,
> + 111,100,117,108,101,115,44,32,116,104,101,32,112,97,114,101,
> + 110,116,32,109,111,100,117,108,101,39,115,32,112,97,116,104,
> + 10,32,32,32,32,105,115,32,115,121,115,46,112,97,116,104,
> + 46,99,4,0,0,0,0,0,0,0,4,0,0,0,2,0,
> + 0,0,67,0,0,0,115,36,0,0,0,124,1,124,0,95,
> + 0,124,2,124,0,95,1,116,2,124,0,106,3,131,0,131,
> + 1,124,0,95,4,124,3,124,0,95,5,100,0,83,0,41,
> + 1,78,41,6,218,5,95,110,97,109,101,218,5,95,112,97,
> + 116,104,114,95,0,0,0,218,16,95,103,101,116,95,112,97,
> + 114,101,110,116,95,112,97,116,104,218,17,95,108,97,115,116,
> + 95,112,97,114,101,110,116,95,112,97,116,104,218,12,95,112,
> + 97,116,104,95,102,105,110,100,101,114,41,4,114,102,0,0,
> + 0,114,100,0,0,0,114,37,0,0,0,218,11,112,97,116,
> + 104,95,102,105,110,100,101,114,114,4,0,0,0,114,4,0,
> + 0,0,114,6,0,0,0,114,181,0,0,0,192,3,0,0,
> + 115,8,0,0,0,0,1,6,1,6,1,14,1,122,23,95,
> + 78,97,109,101,115,112,97,99,101,80,97,116,104,46,95,95,
> + 105,110,105,116,95,95,99,1,0,0,0,0,0,0,0,4,
> + 0,0,0,3,0,0,0,67,0,0,0,115,38,0,0,0,
> + 124,0,106,0,106,1,100,1,131,1,92,3,125,1,125,2,
> + 125,3,124,2,100,2,107,2,114,30,100,6,83,0,124,1,
> + 100,5,102,2,83,0,41,7,122,62,82,101,116,117,114,110,
> + 115,32,97,32,116,117,112,108,101,32,111,102,32,40,112,97,
> + 114,101,110,116,45,109,111,100,117,108,101,45,110,97,109,101,
> + 44,32,112,97,114,101,110,116,45,112,97,116,104,45,97,116,
> + 116,114,45,110,97,109,101,41,114,60,0,0,0,114,32,0,
> + 0,0,114,8,0,0,0,114,37,0,0,0,90,8,95,95,
> + 112,97,116,104,95,95,41,2,114,8,0,0,0,114,37,0,
> + 0,0,41,2,114,227,0,0,0,114,34,0,0,0,41,4,
> + 114,102,0,0,0,114,218,0,0,0,218,3,100,111,116,90,
> + 2,109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,23,95,102,105,110,100,95,112,97,114,101,110,116,
> + 95,112,97,116,104,95,110,97,109,101,115,198,3,0,0,115,
> + 8,0,0,0,0,2,18,1,8,2,4,3,122,38,95,78,
> + 97,109,101,115,112,97,99,101,80,97,116,104,46,95,102,105,
> + 110,100,95,112,97,114,101,110,116,95,112,97,116,104,95,110,
> + 97,109,101,115,99,1,0,0,0,0,0,0,0,3,0,0,
> + 0,3,0,0,0,67,0,0,0,115,28,0,0,0,124,0,
> + 106,0,131,0,92,2,125,1,125,2,116,1,116,2,106,3,
> + 124,1,25,0,124,2,131,2,83,0,41,1,78,41,4,114,
> + 234,0,0,0,114,112,0,0,0,114,8,0,0,0,218,7,
> + 109,111,100,117,108,101,115,41,3,114,102,0,0,0,90,18,
> + 112,97,114,101,110,116,95,109,111,100,117,108,101,95,110,97,
> + 109,101,90,14,112,97,116,104,95,97,116,116,114,95,110,97,
> + 109,101,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,229,0,0,0,208,3,0,0,115,4,0,0,0,0,
> + 1,12,1,122,31,95,78,97,109,101,115,112,97,99,101,80,
> + 97,116,104,46,95,103,101,116,95,112,97,114,101,110,116,95,
> + 112,97,116,104,99,1,0,0,0,0,0,0,0,3,0,0,
> + 0,3,0,0,0,67,0,0,0,115,80,0,0,0,116,0,
> + 124,0,106,1,131,0,131,1,125,1,124,1,124,0,106,2,
> + 107,3,114,74,124,0,106,3,124,0,106,4,124,1,131,2,
> + 125,2,124,2,100,0,107,9,114,68,124,2,106,5,100,0,
> + 107,8,114,68,124,2,106,6,114,68,124,2,106,6,124,0,
> + 95,7,124,1,124,0,95,2,124,0,106,7,83,0,41,1,
> + 78,41,8,114,95,0,0,0,114,229,0,0,0,114,230,0,
> + 0,0,114,231,0,0,0,114,227,0,0,0,114,123,0,0,
> + 0,114,153,0,0,0,114,228,0,0,0,41,3,114,102,0,
> + 0,0,90,11,112,97,114,101,110,116,95,112,97,116,104,114,
> + 161,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,218,12,95,114,101,99,97,108,99,117,108,97,116,
> + 101,212,3,0,0,115,16,0,0,0,0,2,12,1,10,1,
> + 14,3,18,1,6,1,8,1,6,1,122,27,95,78,97,109,
> + 101,115,112,97,99,101,80,97,116,104,46,95,114,101,99,97,
> + 108,99,117,108,97,116,101,99,1,0,0,0,0,0,0,0,
> + 1,0,0,0,2,0,0,0,67,0,0,0,115,12,0,0,
> + 0,116,0,124,0,106,1,131,0,131,1,83,0,41,1,78,
> + 41,2,218,4,105,116,101,114,114,236,0,0,0,41,1,114,
> + 102,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,218,8,95,95,105,116,101,114,95,95,225,3,0,
> + 0,115,2,0,0,0,0,1,122,23,95,78,97,109,101,115,
> + 112,97,99,101,80,97,116,104,46,95,95,105,116,101,114,95,
> + 95,99,3,0,0,0,0,0,0,0,3,0,0,0,3,0,
> + 0,0,67,0,0,0,115,14,0,0,0,124,2,124,0,106,
> + 0,124,1,60,0,100,0,83,0,41,1,78,41,1,114,228,
> + 0,0,0,41,3,114,102,0,0,0,218,5,105,110,100,101,
> + 120,114,37,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,218,11,95,95,115,101,116,105,116,101,109,
> + 95,95,228,3,0,0,115,2,0,0,0,0,1,122,26,95,
> + 78,97,109,101,115,112,97,99,101,80,97,116,104,46,95,95,
> + 115,101,116,105,116,101,109,95,95,99,1,0,0,0,0,0,
> + 0,0,1,0,0,0,2,0,0,0,67,0,0,0,115,12,
> + 0,0,0,116,0,124,0,106,1,131,0,131,1,83,0,41,
> + 1,78,41,2,114,33,0,0,0,114,236,0,0,0,41,1,
> + 114,102,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,218,7,95,95,108,101,110,95,95,231,3,0,
> + 0,115,2,0,0,0,0,1,122,22,95,78,97,109,101,115,
> + 112,97,99,101,80,97,116,104,46,95,95,108,101,110,95,95,
> + 99,1,0,0,0,0,0,0,0,1,0,0,0,2,0,0,
> + 0,67,0,0,0,115,12,0,0,0,100,1,106,0,124,0,
> + 106,1,131,1,83,0,41,2,78,122,20,95,78,97,109,101,
> + 115,112,97,99,101,80,97,116,104,40,123,33,114,125,41,41,
> + 2,114,50,0,0,0,114,228,0,0,0,41,1,114,102,0,
> + 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,218,8,95,95,114,101,112,114,95,95,234,3,0,0,115,
> + 2,0,0,0,0,1,122,23,95,78,97,109,101,115,112,97,
> + 99,101,80,97,116,104,46,95,95,114,101,112,114,95,95,99,
> + 2,0,0,0,0,0,0,0,2,0,0,0,2,0,0,0,
> + 67,0,0,0,115,12,0,0,0,124,1,124,0,106,0,131,
> + 0,107,6,83,0,41,1,78,41,1,114,236,0,0,0,41,
> + 2,114,102,0,0,0,218,4,105,116,101,109,114,4,0,0,
> + 0,114,4,0,0,0,114,6,0,0,0,218,12,95,95,99,
> + 111,110,116,97,105,110,115,95,95,237,3,0,0,115,2,0,
> + 0,0,0,1,122,27,95,78,97,109,101,115,112,97,99,101,
> + 80,97,116,104,46,95,95,99,111,110,116,97,105,110,115,95,
> + 95,99,2,0,0,0,0,0,0,0,2,0,0,0,2,0,
> + 0,0,67,0,0,0,115,16,0,0,0,124,0,106,0,106,
> + 1,124,1,131,1,1,0,100,0,83,0,41,1,78,41,2,
> + 114,228,0,0,0,114,160,0,0,0,41,2,114,102,0,0,
> + 0,114,243,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,114,160,0,0,0,240,3,0,0,115,2,
> + 0,0,0,0,1,122,21,95,78,97,109,101,115,112,97,99,
> + 101,80,97,116,104,46,97,112,112,101,110,100,78,41,14,114,
> + 107,0,0,0,114,106,0,0,0,114,108,0,0,0,114,109,
> + 0,0,0,114,181,0,0,0,114,234,0,0,0,114,229,0,
> + 0,0,114,236,0,0,0,114,238,0,0,0,114,240,0,0,
> + 0,114,241,0,0,0,114,242,0,0,0,114,244,0,0,0,
> + 114,160,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,226,0,0,0,185,3,
> + 0,0,115,22,0,0,0,8,5,4,2,8,6,8,10,8,
> + 4,8,13,8,3,8,3,8,3,8,3,8,3,114,226,0,
> + 0,0,99,0,0,0,0,0,0,0,0,0,0,0,0,3,
> + 0,0,0,64,0,0,0,115,80,0,0,0,101,0,90,1,
> + 100,0,90,2,100,1,100,2,132,0,90,3,101,4,100,3,
> + 100,4,132,0,131,1,90,5,100,5,100,6,132,0,90,6,
> + 100,7,100,8,132,0,90,7,100,9,100,10,132,0,90,8,
> + 100,11,100,12,132,0,90,9,100,13,100,14,132,0,90,10,
> + 100,15,100,16,132,0,90,11,100,17,83,0,41,18,218,16,
> + 95,78,97,109,101,115,112,97,99,101,76,111,97,100,101,114,
> + 99,4,0,0,0,0,0,0,0,4,0,0,0,4,0,0,
> + 0,67,0,0,0,115,18,0,0,0,116,0,124,1,124,2,
> + 124,3,131,3,124,0,95,1,100,0,83,0,41,1,78,41,
> + 2,114,226,0,0,0,114,228,0,0,0,41,4,114,102,0,
> + 0,0,114,100,0,0,0,114,37,0,0,0,114,232,0,0,
> + 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 114,181,0,0,0,246,3,0,0,115,2,0,0,0,0,1,
> + 122,25,95,78,97,109,101,115,112,97,99,101,76,111,97,100,
> + 101,114,46,95,95,105,110,105,116,95,95,99,2,0,0,0,
> + 0,0,0,0,2,0,0,0,2,0,0,0,67,0,0,0,
> + 115,12,0,0,0,100,1,106,0,124,1,106,1,131,1,83,
> + 0,41,2,122,115,82,101,116,117,114,110,32,114,101,112,114,
> + 32,102,111,114,32,116,104,101,32,109,111,100,117,108,101,46,
> + 10,10,32,32,32,32,32,32,32,32,84,104,101,32,109,101,
> + 116,104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,
> + 101,100,46,32,32,84,104,101,32,105,109,112,111,114,116,32,
> + 109,97,99,104,105,110,101,114,121,32,100,111,101,115,32,116,
> + 104,101,32,106,111,98,32,105,116,115,101,108,102,46,10,10,
> + 32,32,32,32,32,32,32,32,122,25,60,109,111,100,117,108,
> + 101,32,123,33,114,125,32,40,110,97,109,101,115,112,97,99,
> + 101,41,62,41,2,114,50,0,0,0,114,107,0,0,0,41,
> + 2,114,167,0,0,0,114,186,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,218,11,109,111,100,117,
> + 108,101,95,114,101,112,114,249,3,0,0,115,2,0,0,0,
> + 0,7,122,28,95,78,97,109,101,115,112,97,99,101,76,111,
> + 97,100,101,114,46,109,111,100,117,108,101,95,114,101,112,114,
> + 99,2,0,0,0,0,0,0,0,2,0,0,0,1,0,0,
> + 0,67,0,0,0,115,4,0,0,0,100,1,83,0,41,2,
> + 78,84,114,4,0,0,0,41,2,114,102,0,0,0,114,122,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,114,156,0,0,0,2,4,0,0,115,2,0,0,0,
> + 0,1,122,27,95,78,97,109,101,115,112,97,99,101,76,111,
> + 97,100,101,114,46,105,115,95,112,97,99,107,97,103,101,99,
> + 2,0,0,0,0,0,0,0,2,0,0,0,1,0,0,0,
> + 67,0,0,0,115,4,0,0,0,100,1,83,0,41,2,78,
> + 114,32,0,0,0,114,4,0,0,0,41,2,114,102,0,0,
> + 0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,114,198,0,0,0,5,4,0,0,115,2,
> + 0,0,0,0,1,122,27,95,78,97,109,101,115,112,97,99,
> + 101,76,111,97,100,101,114,46,103,101,116,95,115,111,117,114,
> + 99,101,99,2,0,0,0,0,0,0,0,2,0,0,0,6,
> + 0,0,0,67,0,0,0,115,16,0,0,0,116,0,100,1,
> + 100,2,100,3,100,4,100,5,141,4,83,0,41,6,78,114,
> + 32,0,0,0,122,8,60,115,116,114,105,110,103,62,114,185,
> + 0,0,0,84,41,1,114,200,0,0,0,41,1,114,201,0,
> + 0,0,41,2,114,102,0,0,0,114,122,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,114,183,0,
> + 0,0,8,4,0,0,115,2,0,0,0,0,1,122,25,95,
> + 78,97,109,101,115,112,97,99,101,76,111,97,100,101,114,46,
> + 103,101,116,95,99,111,100,101,99,2,0,0,0,0,0,0,
> + 0,2,0,0,0,1,0,0,0,67,0,0,0,115,4,0,
> + 0,0,100,1,83,0,41,2,122,42,85,115,101,32,100,101,
> + 102,97,117,108,116,32,115,101,109,97,110,116,105,99,115,32,
> + 102,111,114,32,109,111,100,117,108,101,32,99,114,101,97,116,
> + 105,111,110,46,78,114,4,0,0,0,41,2,114,102,0,0,
> + 0,114,161,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,114,182,0,0,0,11,4,0,0,115,0,
> + 0,0,0,122,30,95,78,97,109,101,115,112,97,99,101,76,
> + 111,97,100,101,114,46,99,114,101,97,116,101,95,109,111,100,
> + 117,108,101,99,2,0,0,0,0,0,0,0,2,0,0,0,
> + 1,0,0,0,67,0,0,0,115,4,0,0,0,100,0,83,
> + 0,41,1,78,114,4,0,0,0,41,2,114,102,0,0,0,
> + 114,186,0,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,114,187,0,0,0,14,4,0,0,115,2,0,
> + 0,0,0,1,122,28,95,78,97,109,101,115,112,97,99,101,
> + 76,111,97,100,101,114,46,101,120,101,99,95,109,111,100,117,
> + 108,101,99,2,0,0,0,0,0,0,0,2,0,0,0,3,
> + 0,0,0,67,0,0,0,115,26,0,0,0,116,0,106,1,
> + 100,1,124,0,106,2,131,2,1,0,116,0,106,3,124,0,
> + 124,1,131,2,83,0,41,2,122,98,76,111,97,100,32,97,
> + 32,110,97,109,101,115,112,97,99,101,32,109,111,100,117,108,
> + 101,46,10,10,32,32,32,32,32,32,32,32,84,104,105,115,
> + 32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,101,
> + 99,97,116,101,100,46,32,32,85,115,101,32,101,120,101,99,
> + 95,109,111,100,117,108,101,40,41,32,105,110,115,116,101,97,
> + 100,46,10,10,32,32,32,32,32,32,32,32,122,38,110,97,
> + 109,101,115,112,97,99,101,32,109,111,100,117,108,101,32,108,
> + 111,97,100,101,100,32,119,105,116,104,32,112,97,116,104,32,
> + 123,33,114,125,41,4,114,117,0,0,0,114,132,0,0,0,
> + 114,228,0,0,0,114,188,0,0,0,41,2,114,102,0,0,
> + 0,114,122,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,114,189,0,0,0,17,4,0,0,115,6,
> + 0,0,0,0,7,6,1,8,1,122,28,95,78,97,109,101,
> + 115,112,97,99,101,76,111,97,100,101,114,46,108,111,97,100,
> + 95,109,111,100,117,108,101,78,41,12,114,107,0,0,0,114,
> + 106,0,0,0,114,108,0,0,0,114,181,0,0,0,114,179,
> + 0,0,0,114,246,0,0,0,114,156,0,0,0,114,198,0,
> + 0,0,114,183,0,0,0,114,182,0,0,0,114,187,0,0,
> + 0,114,189,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,245,0,0,0,245,
> + 3,0,0,115,16,0,0,0,8,1,8,3,12,9,8,3,
> + 8,3,8,3,8,3,8,3,114,245,0,0,0,99,0,0,
> + 0,0,0,0,0,0,0,0,0,0,4,0,0,0,64,0,
> + 0,0,115,106,0,0,0,101,0,90,1,100,0,90,2,100,
> + 1,90,3,101,4,100,2,100,3,132,0,131,1,90,5,101,
> + 4,100,4,100,5,132,0,131,1,90,6,101,4,100,6,100,
> + 7,132,0,131,1,90,7,101,4,100,8,100,9,132,0,131,
> + 1,90,8,101,4,100,17,100,11,100,12,132,1,131,1,90,
> + 9,101,4,100,18,100,13,100,14,132,1,131,1,90,10,101,
> + 4,100,19,100,15,100,16,132,1,131,1,90,11,100,10,83,
> + 0,41,20,218,10,80,97,116,104,70,105,110,100,101,114,122,
> + 62,77,101,116,97,32,112,97,116,104,32,102,105,110,100,101,
> + 114,32,102,111,114,32,115,121,115,46,112,97,116,104,32,97,
> + 110,100,32,112,97,99,107,97,103,101,32,95,95,112,97,116,
> + 104,95,95,32,97,116,116,114,105,98,117,116,101,115,46,99,
> + 1,0,0,0,0,0,0,0,2,0,0,0,4,0,0,0,
> + 67,0,0,0,115,42,0,0,0,120,36,116,0,106,1,106,
> + 2,131,0,68,0,93,22,125,1,116,3,124,1,100,1,131,
> + 2,114,12,124,1,106,4,131,0,1,0,113,12,87,0,100,
> + 2,83,0,41,3,122,125,67,97,108,108,32,116,104,101,32,
> + 105,110,118,97,108,105,100,97,116,101,95,99,97,99,104,101,
> + 115,40,41,32,109,101,116,104,111,100,32,111,110,32,97,108,
> + 108,32,112,97,116,104,32,101,110,116,114,121,32,102,105,110,
> + 100,101,114,115,10,32,32,32,32,32,32,32,32,115,116,111,
> + 114,101,100,32,105,110,32,115,121,115,46,112,97,116,104,95,
> + 105,109,112,111,114,116,101,114,95,99,97,99,104,101,115,32,
> + 40,119,104,101,114,101,32,105,109,112,108,101,109,101,110,116,
> + 101,100,41,46,218,17,105,110,118,97,108,105,100,97,116,101,
> + 95,99,97,99,104,101,115,78,41,5,114,8,0,0,0,218,
> + 19,112,97,116,104,95,105,109,112,111,114,116,101,114,95,99,
> + 97,99,104,101,218,6,118,97,108,117,101,115,114,110,0,0,
> + 0,114,248,0,0,0,41,2,114,167,0,0,0,218,6,102,
> + 105,110,100,101,114,114,4,0,0,0,114,4,0,0,0,114,
> + 6,0,0,0,114,248,0,0,0,35,4,0,0,115,6,0,
> + 0,0,0,4,16,1,10,1,122,28,80,97,116,104,70,105,
> + 110,100,101,114,46,105,110,118,97,108,105,100,97,116,101,95,
> + 99,97,99,104,101,115,99,2,0,0,0,0,0,0,0,3,
> + 0,0,0,12,0,0,0,67,0,0,0,115,86,0,0,0,
> + 116,0,106,1,100,1,107,9,114,30,116,0,106,1,12,0,
> + 114,30,116,2,106,3,100,2,116,4,131,2,1,0,120,50,
> + 116,0,106,1,68,0,93,36,125,2,121,8,124,2,124,1,
> + 131,1,83,0,4,0,116,5,107,10,114,72,1,0,1,0,
> + 1,0,119,38,89,0,113,38,88,0,113,38,87,0,100,1,
> + 83,0,100,1,83,0,41,3,122,46,83,101,97,114,99,104,
> + 32,115,121,115,46,112,97,116,104,95,104,111,111,107,115,32,
> + 102,111,114,32,97,32,102,105,110,100,101,114,32,102,111,114,
> + 32,39,112,97,116,104,39,46,78,122,23,115,121,115,46,112,
> + 97,116,104,95,104,111,111,107,115,32,105,115,32,101,109,112,
> + 116,121,41,6,114,8,0,0,0,218,10,112,97,116,104,95,
> + 104,111,111,107,115,114,62,0,0,0,114,63,0,0,0,114,
> + 121,0,0,0,114,101,0,0,0,41,3,114,167,0,0,0,
> + 114,37,0,0,0,90,4,104,111,111,107,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,218,11,95,112,97,116,
> + 104,95,104,111,111,107,115,43,4,0,0,115,16,0,0,0,
> + 0,3,18,1,12,1,12,1,2,1,8,1,14,1,12,2,
> + 122,22,80,97,116,104,70,105,110,100,101,114,46,95,112,97,
> + 116,104,95,104,111,111,107,115,99,2,0,0,0,0,0,0,
> + 0,3,0,0,0,19,0,0,0,67,0,0,0,115,102,0,
> + 0,0,124,1,100,1,107,2,114,42,121,12,116,0,106,1,
> + 131,0,125,1,87,0,110,20,4,0,116,2,107,10,114,40,
> + 1,0,1,0,1,0,100,2,83,0,88,0,121,14,116,3,
> + 106,4,124,1,25,0,125,2,87,0,110,40,4,0,116,5,
> + 107,10,114,96,1,0,1,0,1,0,124,0,106,6,124,1,
> + 131,1,125,2,124,2,116,3,106,4,124,1,60,0,89,0,
> + 110,2,88,0,124,2,83,0,41,3,122,210,71,101,116,32,
> + 116,104,101,32,102,105,110,100,101,114,32,102,111,114,32,116,
> + 104,101,32,112,97,116,104,32,101,110,116,114,121,32,102,114,
> + 111,109,32,115,121,115,46,112,97,116,104,95,105,109,112,111,
> + 114,116,101,114,95,99,97,99,104,101,46,10,10,32,32,32,
> + 32,32,32,32,32,73,102,32,116,104,101,32,112,97,116,104,
> + 32,101,110,116,114,121,32,105,115,32,110,111,116,32,105,110,
> + 32,116,104,101,32,99,97,99,104,101,44,32,102,105,110,100,
> + 32,116,104,101,32,97,112,112,114,111,112,114,105,97,116,101,
> + 32,102,105,110,100,101,114,10,32,32,32,32,32,32,32,32,
> + 97,110,100,32,99,97,99,104,101,32,105,116,46,32,73,102,
> + 32,110,111,32,102,105,110,100,101,114,32,105,115,32,97,118,
> + 97,105,108,97,98,108,101,44,32,115,116,111,114,101,32,78,
> + 111,110,101,46,10,10,32,32,32,32,32,32,32,32,114,32,
> + 0,0,0,78,41,7,114,3,0,0,0,114,47,0,0,0,
> + 218,17,70,105,108,101,78,111,116,70,111,117,110,100,69,114,
> + 114,111,114,114,8,0,0,0,114,249,0,0,0,114,134,0,
> + 0,0,114,253,0,0,0,41,3,114,167,0,0,0,114,37,
> + 0,0,0,114,251,0,0,0,114,4,0,0,0,114,4,0,
> + 0,0,114,6,0,0,0,218,20,95,112,97,116,104,95,105,
> + 109,112,111,114,116,101,114,95,99,97,99,104,101,56,4,0,
> + 0,115,22,0,0,0,0,8,8,1,2,1,12,1,14,3,
> + 6,1,2,1,14,1,14,1,10,1,16,1,122,31,80,97,
> + 116,104,70,105,110,100,101,114,46,95,112,97,116,104,95,105,
> + 109,112,111,114,116,101,114,95,99,97,99,104,101,99,3,0,
> + 0,0,0,0,0,0,6,0,0,0,3,0,0,0,67,0,
> + 0,0,115,82,0,0,0,116,0,124,2,100,1,131,2,114,
> + 26,124,2,106,1,124,1,131,1,92,2,125,3,125,4,110,
> + 14,124,2,106,2,124,1,131,1,125,3,103,0,125,4,124,
> + 3,100,0,107,9,114,60,116,3,106,4,124,1,124,3,131,
> + 2,83,0,116,3,106,5,124,1,100,0,131,2,125,5,124,
> + 4,124,5,95,6,124,5,83,0,41,2,78,114,120,0,0,
> + 0,41,7,114,110,0,0,0,114,120,0,0,0,114,178,0,
> + 0,0,114,117,0,0,0,114,175,0,0,0,114,157,0,0,
> + 0,114,153,0,0,0,41,6,114,167,0,0,0,114,122,0,
> + 0,0,114,251,0,0,0,114,123,0,0,0,114,124,0,0,
> + 0,114,161,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,6,0,0,0,218,16,95,108,101,103,97,99,121,95,103,
> + 101,116,95,115,112,101,99,78,4,0,0,115,18,0,0,0,
> + 0,4,10,1,16,2,10,1,4,1,8,1,12,1,12,1,
> + 6,1,122,27,80,97,116,104,70,105,110,100,101,114,46,95,
> + 108,101,103,97,99,121,95,103,101,116,95,115,112,101,99,78,
> + 99,4,0,0,0,0,0,0,0,9,0,0,0,5,0,0,
> + 0,67,0,0,0,115,170,0,0,0,103,0,125,4,120,160,
> + 124,2,68,0,93,130,125,5,116,0,124,5,116,1,116,2,
> + 102,2,131,2,115,30,113,10,124,0,106,3,124,5,131,1,
> + 125,6,124,6,100,1,107,9,114,10,116,4,124,6,100,2,
> + 131,2,114,72,124,6,106,5,124,1,124,3,131,2,125,7,
> + 110,12,124,0,106,6,124,1,124,6,131,2,125,7,124,7,
> + 100,1,107,8,114,94,113,10,124,7,106,7,100,1,107,9,
> + 114,108,124,7,83,0,124,7,106,8,125,8,124,8,100,1,
> + 107,8,114,130,116,9,100,3,131,1,130,1,124,4,106,10,
> + 124,8,131,1,1,0,113,10,87,0,116,11,106,12,124,1,
> + 100,1,131,2,125,7,124,4,124,7,95,8,124,7,83,0,
> + 100,1,83,0,41,4,122,63,70,105,110,100,32,116,104,101,
> + 32,108,111,97,100,101,114,32,111,114,32,110,97,109,101,115,
> + 112,97,99,101,95,112,97,116,104,32,102,111,114,32,116,104,
> + 105,115,32,109,111,100,117,108,101,47,112,97,99,107,97,103,
> + 101,32,110,97,109,101,46,78,114,177,0,0,0,122,19,115,
> + 112,101,99,32,109,105,115,115,105,110,103,32,108,111,97,100,
> + 101,114,41,13,114,140,0,0,0,114,71,0,0,0,218,5,
> + 98,121,116,101,115,114,255,0,0,0,114,110,0,0,0,114,
> + 177,0,0,0,114,0,1,0,0,114,123,0,0,0,114,153,
> + 0,0,0,114,101,0,0,0,114,146,0,0,0,114,117,0,
> + 0,0,114,157,0,0,0,41,9,114,167,0,0,0,114,122,
> + 0,0,0,114,37,0,0,0,114,176,0,0,0,218,14,110,
> + 97,109,101,115,112,97,99,101,95,112,97,116,104,90,5,101,
> + 110,116,114,121,114,251,0,0,0,114,161,0,0,0,114,124,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,9,95,103,101,116,95,115,112,101,99,93,4,0,
> + 0,115,40,0,0,0,0,5,4,1,10,1,14,1,2,1,
> + 10,1,8,1,10,1,14,2,12,1,8,1,2,1,10,1,
> + 4,1,6,1,8,1,8,5,14,2,12,1,6,1,122,20,
> + 80,97,116,104,70,105,110,100,101,114,46,95,103,101,116,95,
> + 115,112,101,99,99,4,0,0,0,0,0,0,0,6,0,0,
> + 0,4,0,0,0,67,0,0,0,115,100,0,0,0,124,2,
> + 100,1,107,8,114,14,116,0,106,1,125,2,124,0,106,2,
> + 124,1,124,2,124,3,131,3,125,4,124,4,100,1,107,8,
> + 114,40,100,1,83,0,124,4,106,3,100,1,107,8,114,92,
> + 124,4,106,4,125,5,124,5,114,86,100,2,124,4,95,5,
> + 116,6,124,1,124,5,124,0,106,2,131,3,124,4,95,4,
> + 124,4,83,0,100,1,83,0,110,4,124,4,83,0,100,1,
> + 83,0,41,3,122,141,84,114,121,32,116,111,32,102,105,110,
> + 100,32,97,32,115,112,101,99,32,102,111,114,32,39,102,117,
> + 108,108,110,97,109,101,39,32,111,110,32,115,121,115,46,112,
> + 97,116,104,32,111,114,32,39,112,97,116,104,39,46,10,10,
> + 32,32,32,32,32,32,32,32,84,104,101,32,115,101,97,114,
> + 99,104,32,105,115,32,98,97,115,101,100,32,111,110,32,115,
> + 121,115,46,112,97,116,104,95,104,111,111,107,115,32,97,110,
> + 100,32,115,121,115,46,112,97,116,104,95,105,109,112,111,114,
> + 116,101,114,95,99,97,99,104,101,46,10,32,32,32,32,32,
> + 32,32,32,78,90,9,110,97,109,101,115,112,97,99,101,41,
> + 7,114,8,0,0,0,114,37,0,0,0,114,3,1,0,0,
> + 114,123,0,0,0,114,153,0,0,0,114,155,0,0,0,114,
> + 226,0,0,0,41,6,114,167,0,0,0,114,122,0,0,0,
> + 114,37,0,0,0,114,176,0,0,0,114,161,0,0,0,114,
> + 2,1,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,114,177,0,0,0,125,4,0,0,115,26,0,0,
> + 0,0,6,8,1,6,1,14,1,8,1,4,1,10,1,6,
> + 1,4,3,6,1,16,1,4,2,6,2,122,20,80,97,116,
> + 104,70,105,110,100,101,114,46,102,105,110,100,95,115,112,101,
> + 99,99,3,0,0,0,0,0,0,0,4,0,0,0,3,0,
> + 0,0,67,0,0,0,115,30,0,0,0,124,0,106,0,124,
> + 1,124,2,131,2,125,3,124,3,100,1,107,8,114,24,100,
> + 1,83,0,124,3,106,1,83,0,41,2,122,170,102,105,110,
> + 100,32,116,104,101,32,109,111,100,117,108,101,32,111,110,32,
> + 115,121,115,46,112,97,116,104,32,111,114,32,39,112,97,116,
> + 104,39,32,98,97,115,101,100,32,111,110,32,115,121,115,46,
> + 112,97,116,104,95,104,111,111,107,115,32,97,110,100,10,32,
> + 32,32,32,32,32,32,32,115,121,115,46,112,97,116,104,95,
> + 105,109,112,111,114,116,101,114,95,99,97,99,104,101,46,10,
> + 10,32,32,32,32,32,32,32,32,84,104,105,115,32,109,101,
> + 116,104,111,100,32,105,115,32,100,101,112,114,101,99,97,116,
> + 101,100,46,32,32,85,115,101,32,102,105,110,100,95,115,112,
> + 101,99,40,41,32,105,110,115,116,101,97,100,46,10,10,32,
> + 32,32,32,32,32,32,32,78,41,2,114,177,0,0,0,114,
> + 123,0,0,0,41,4,114,167,0,0,0,114,122,0,0,0,
> + 114,37,0,0,0,114,161,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,114,178,0,0,0,149,4,
> + 0,0,115,8,0,0,0,0,8,12,1,8,1,4,1,122,
> + 22,80,97,116,104,70,105,110,100,101,114,46,102,105,110,100,
> + 95,109,111,100,117,108,101,41,1,78,41,2,78,78,41,1,
> + 78,41,12,114,107,0,0,0,114,106,0,0,0,114,108,0,
> + 0,0,114,109,0,0,0,114,179,0,0,0,114,248,0,0,
> + 0,114,253,0,0,0,114,255,0,0,0,114,0,1,0,0,
> + 114,3,1,0,0,114,177,0,0,0,114,178,0,0,0,114,
> + 4,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,114,247,0,0,0,31,4,0,0,115,22,0,0,
> + 0,8,2,4,2,12,8,12,13,12,22,12,15,2,1,12,
> + 31,2,1,12,23,2,1,114,247,0,0,0,99,0,0,0,
> + 0,0,0,0,0,0,0,0,0,3,0,0,0,64,0,0,
> + 0,115,90,0,0,0,101,0,90,1,100,0,90,2,100,1,
> + 90,3,100,2,100,3,132,0,90,4,100,4,100,5,132,0,
> + 90,5,101,6,90,7,100,6,100,7,132,0,90,8,100,8,
> + 100,9,132,0,90,9,100,19,100,11,100,12,132,1,90,10,
> + 100,13,100,14,132,0,90,11,101,12,100,15,100,16,132,0,
> + 131,1,90,13,100,17,100,18,132,0,90,14,100,10,83,0,
> + 41,20,218,10,70,105,108,101,70,105,110,100,101,114,122,172,
> + 70,105,108,101,45,98,97,115,101,100,32,102,105,110,100,101,
> + 114,46,10,10,32,32,32,32,73,110,116,101,114,97,99,116,
> + 105,111,110,115,32,119,105,116,104,32,116,104,101,32,102,105,
> + 108,101,32,115,121,115,116,101,109,32,97,114,101,32,99,97,
> + 99,104,101,100,32,102,111,114,32,112,101,114,102,111,114,109,
> + 97,110,99,101,44,32,98,101,105,110,103,10,32,32,32,32,
> + 114,101,102,114,101,115,104,101,100,32,119,104,101,110,32,116,
> + 104,101,32,100,105,114,101,99,116,111,114,121,32,116,104,101,
> + 32,102,105,110,100,101,114,32,105,115,32,104,97,110,100,108,
> + 105,110,103,32,104,97,115,32,98,101,101,110,32,109,111,100,
> + 105,102,105,101,100,46,10,10,32,32,32,32,99,2,0,0,
> + 0,0,0,0,0,5,0,0,0,5,0,0,0,7,0,0,
> + 0,115,88,0,0,0,103,0,125,3,120,40,124,2,68,0,
> + 93,32,92,2,137,0,125,4,124,3,106,0,135,0,102,1,
> + 100,1,100,2,132,8,124,4,68,0,131,1,131,1,1,0,
> + 113,10,87,0,124,3,124,0,95,1,124,1,112,58,100,3,
> + 124,0,95,2,100,6,124,0,95,3,116,4,131,0,124,0,
> + 95,5,116,4,131,0,124,0,95,6,100,5,83,0,41,7,
> + 122,154,73,110,105,116,105,97,108,105,122,101,32,119,105,116,
> + 104,32,116,104,101,32,112,97,116,104,32,116,111,32,115,101,
> + 97,114,99,104,32,111,110,32,97,110,100,32,97,32,118,97,
> + 114,105,97,98,108,101,32,110,117,109,98,101,114,32,111,102,
> + 10,32,32,32,32,32,32,32,32,50,45,116,117,112,108,101,
> + 115,32,99,111,110,116,97,105,110,105,110,103,32,116,104,101,
> + 32,108,111,97,100,101,114,32,97,110,100,32,116,104,101,32,
> + 102,105,108,101,32,115,117,102,102,105,120,101,115,32,116,104,
> + 101,32,108,111,97,100,101,114,10,32,32,32,32,32,32,32,
> + 32,114,101,99,111,103,110,105,122,101,115,46,99,1,0,0,
> + 0,0,0,0,0,2,0,0,0,3,0,0,0,51,0,0,
> + 0,115,22,0,0,0,124,0,93,14,125,1,124,1,136,0,
> + 102,2,86,0,1,0,113,2,100,0,83,0,41,1,78,114,
> + 4,0,0,0,41,2,114,24,0,0,0,114,221,0,0,0,
> + 41,1,114,123,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,223,0,0,0,178,4,0,0,115,2,0,0,0,4,
> + 0,122,38,70,105,108,101,70,105,110,100,101,114,46,95,95,
> + 105,110,105,116,95,95,46,60,108,111,99,97,108,115,62,46,
> + 60,103,101,110,101,120,112,114,62,114,60,0,0,0,114,31,
> + 0,0,0,78,114,89,0,0,0,41,7,114,146,0,0,0,
> + 218,8,95,108,111,97,100,101,114,115,114,37,0,0,0,218,
> + 11,95,112,97,116,104,95,109,116,105,109,101,218,3,115,101,
> + 116,218,11,95,112,97,116,104,95,99,97,99,104,101,218,19,
> + 95,114,101,108,97,120,101,100,95,112,97,116,104,95,99,97,
> + 99,104,101,41,5,114,102,0,0,0,114,37,0,0,0,218,
> + 14,108,111,97,100,101,114,95,100,101,116,97,105,108,115,90,
> + 7,108,111,97,100,101,114,115,114,163,0,0,0,114,4,0,
> + 0,0,41,1,114,123,0,0,0,114,6,0,0,0,114,181,
> + 0,0,0,172,4,0,0,115,16,0,0,0,0,4,4,1,
> + 14,1,28,1,6,2,10,1,6,1,8,1,122,19,70,105,
> + 108,101,70,105,110,100,101,114,46,95,95,105,110,105,116,95,
> + 95,99,1,0,0,0,0,0,0,0,1,0,0,0,2,0,
> + 0,0,67,0,0,0,115,10,0,0,0,100,3,124,0,95,
> + 0,100,2,83,0,41,4,122,31,73,110,118,97,108,105,100,
> + 97,116,101,32,116,104,101,32,100,105,114,101,99,116,111,114,
> + 121,32,109,116,105,109,101,46,114,31,0,0,0,78,114,89,
> + 0,0,0,41,1,114,6,1,0,0,41,1,114,102,0,0,
> + 0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,0,
> + 114,248,0,0,0,186,4,0,0,115,2,0,0,0,0,2,
> + 122,28,70,105,108,101,70,105,110,100,101,114,46,105,110,118,
> + 97,108,105,100,97,116,101,95,99,97,99,104,101,115,99,2,
> + 0,0,0,0,0,0,0,3,0,0,0,2,0,0,0,67,
> + 0,0,0,115,42,0,0,0,124,0,106,0,124,1,131,1,
> + 125,2,124,2,100,1,107,8,114,26,100,1,103,0,102,2,
> + 83,0,124,2,106,1,124,2,106,2,112,38,103,0,102,2,
> + 83,0,41,2,122,197,84,114,121,32,116,111,32,102,105,110,
> + 100,32,97,32,108,111,97,100,101,114,32,102,111,114,32,116,
> + 104,101,32,115,112,101,99,105,102,105,101,100,32,109,111,100,
> + 117,108,101,44,32,111,114,32,116,104,101,32,110,97,109,101,
> + 115,112,97,99,101,10,32,32,32,32,32,32,32,32,112,97,
> + 99,107,97,103,101,32,112,111,114,116,105,111,110,115,46,32,
> + 82,101,116,117,114,110,115,32,40,108,111,97,100,101,114,44,
> + 32,108,105,115,116,45,111,102,45,112,111,114,116,105,111,110,
> + 115,41,46,10,10,32,32,32,32,32,32,32,32,84,104,105,
> + 115,32,109,101,116,104,111,100,32,105,115,32,100,101,112,114,
> + 101,99,97,116,101,100,46,32,32,85,115,101,32,102,105,110,
> + 100,95,115,112,101,99,40,41,32,105,110,115,116,101,97,100,
> + 46,10,10,32,32,32,32,32,32,32,32,78,41,3,114,177,
> + 0,0,0,114,123,0,0,0,114,153,0,0,0,41,3,114,
> + 102,0,0,0,114,122,0,0,0,114,161,0,0,0,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,114,120,0,
> + 0,0,192,4,0,0,115,8,0,0,0,0,7,10,1,8,
> + 1,8,1,122,22,70,105,108,101,70,105,110,100,101,114,46,
> + 102,105,110,100,95,108,111,97,100,101,114,99,6,0,0,0,
> + 0,0,0,0,7,0,0,0,6,0,0,0,67,0,0,0,
> + 115,26,0,0,0,124,1,124,2,124,3,131,2,125,6,116,
> + 0,124,2,124,3,124,6,124,4,100,1,141,4,83,0,41,
> + 2,78,41,2,114,123,0,0,0,114,153,0,0,0,41,1,
> + 114,164,0,0,0,41,7,114,102,0,0,0,114,162,0,0,
> + 0,114,122,0,0,0,114,37,0,0,0,90,4,115,109,115,
> + 108,114,176,0,0,0,114,123,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,3,1,0,0,204,
> + 4,0,0,115,6,0,0,0,0,1,10,1,8,1,122,20,
> + 70,105,108,101,70,105,110,100,101,114,46,95,103,101,116,95,
> + 115,112,101,99,78,99,3,0,0,0,0,0,0,0,14,0,
> + 0,0,15,0,0,0,67,0,0,0,115,98,1,0,0,100,
> + 1,125,3,124,1,106,0,100,2,131,1,100,3,25,0,125,
> + 4,121,24,116,1,124,0,106,2,112,34,116,3,106,4,131,
> + 0,131,1,106,5,125,5,87,0,110,24,4,0,116,6,107,
> + 10,114,66,1,0,1,0,1,0,100,10,125,5,89,0,110,
> + 2,88,0,124,5,124,0,106,7,107,3,114,92,124,0,106,
> + 8,131,0,1,0,124,5,124,0,95,7,116,9,131,0,114,
> + 114,124,0,106,10,125,6,124,4,106,11,131,0,125,7,110,
> + 10,124,0,106,12,125,6,124,4,125,7,124,7,124,6,107,
> + 6,114,218,116,13,124,0,106,2,124,4,131,2,125,8,120,
> + 72,124,0,106,14,68,0,93,54,92,2,125,9,125,10,100,
> + 5,124,9,23,0,125,11,116,13,124,8,124,11,131,2,125,
> + 12,116,15,124,12,131,1,114,152,124,0,106,16,124,10,124,
> + 1,124,12,124,8,103,1,124,2,131,5,83,0,113,152,87,
> + 0,116,17,124,8,131,1,125,3,120,88,124,0,106,14,68,
> + 0,93,78,92,2,125,9,125,10,116,13,124,0,106,2,124,
> + 4,124,9,23,0,131,2,125,12,116,18,106,19,100,6,124,
> + 12,100,3,100,7,141,3,1,0,124,7,124,9,23,0,124,
> + 6,107,6,114,226,116,15,124,12,131,1,114,226,124,0,106,
> + 16,124,10,124,1,124,12,100,8,124,2,131,5,83,0,113,
> + 226,87,0,124,3,144,1,114,94,116,18,106,19,100,9,124,
> + 8,131,2,1,0,116,18,106,20,124,1,100,8,131,2,125,
> + 13,124,8,103,1,124,13,95,21,124,13,83,0,100,8,83,
> + 0,41,11,122,111,84,114,121,32,116,111,32,102,105,110,100,
> + 32,97,32,115,112,101,99,32,102,111,114,32,116,104,101,32,
> + 115,112,101,99,105,102,105,101,100,32,109,111,100,117,108,101,
> + 46,10,10,32,32,32,32,32,32,32,32,82,101,116,117,114,
> + 110,115,32,116,104,101,32,109,97,116,99,104,105,110,103,32,
> + 115,112,101,99,44,32,111,114,32,78,111,110,101,32,105,102,
> + 32,110,111,116,32,102,111,117,110,100,46,10,32,32,32,32,
> + 32,32,32,32,70,114,60,0,0,0,114,58,0,0,0,114,
> + 31,0,0,0,114,181,0,0,0,122,9,116,114,121,105,110,
> + 103,32,123,125,41,1,90,9,118,101,114,98,111,115,105,116,
> + 121,78,122,25,112,111,115,115,105,98,108,101,32,110,97,109,
> + 101,115,112,97,99,101,32,102,111,114,32,123,125,114,89,0,
> + 0,0,41,22,114,34,0,0,0,114,41,0,0,0,114,37,
> + 0,0,0,114,3,0,0,0,114,47,0,0,0,114,215,0,
> + 0,0,114,42,0,0,0,114,6,1,0,0,218,11,95,102,
> + 105,108,108,95,99,97,99,104,101,114,7,0,0,0,114,9,
> + 1,0,0,114,90,0,0,0,114,8,1,0,0,114,30,0,
> + 0,0,114,5,1,0,0,114,46,0,0,0,114,3,1,0,
> + 0,114,48,0,0,0,114,117,0,0,0,114,132,0,0,0,
> + 114,157,0,0,0,114,153,0,0,0,41,14,114,102,0,0,
> + 0,114,122,0,0,0,114,176,0,0,0,90,12,105,115,95,
> + 110,97,109,101,115,112,97,99,101,90,11,116,97,105,108,95,
> + 109,111,100,117,108,101,114,129,0,0,0,90,5,99,97,99,
> + 104,101,90,12,99,97,99,104,101,95,109,111,100,117,108,101,
> + 90,9,98,97,115,101,95,112,97,116,104,114,221,0,0,0,
> + 114,162,0,0,0,90,13,105,110,105,116,95,102,105,108,101,
> + 110,97,109,101,90,9,102,117,108,108,95,112,97,116,104,114,
> + 161,0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,114,177,0,0,0,209,4,0,0,115,70,0,0,
> + 0,0,5,4,1,14,1,2,1,24,1,14,1,10,1,10,
> + 1,8,1,6,2,6,1,6,1,10,2,6,1,4,2,8,
> + 1,12,1,16,1,8,1,10,1,8,1,24,4,8,2,16,
> + 1,16,1,16,1,12,1,8,1,10,1,12,1,6,1,12,
> + 1,12,1,8,1,4,1,122,20,70,105,108,101,70,105,110,
> + 100,101,114,46,102,105,110,100,95,115,112,101,99,99,1,0,
> + 0,0,0,0,0,0,9,0,0,0,13,0,0,0,67,0,
> + 0,0,115,194,0,0,0,124,0,106,0,125,1,121,22,116,
> + 1,106,2,124,1,112,22,116,1,106,3,131,0,131,1,125,
> + 2,87,0,110,30,4,0,116,4,116,5,116,6,102,3,107,
> + 10,114,58,1,0,1,0,1,0,103,0,125,2,89,0,110,
> + 2,88,0,116,7,106,8,106,9,100,1,131,1,115,84,116,
> + 10,124,2,131,1,124,0,95,11,110,78,116,10,131,0,125,
> + 3,120,64,124,2,68,0,93,56,125,4,124,4,106,12,100,
> + 2,131,1,92,3,125,5,125,6,125,7,124,6,114,138,100,
> + 3,106,13,124,5,124,7,106,14,131,0,131,2,125,8,110,
> + 4,124,5,125,8,124,3,106,15,124,8,131,1,1,0,113,
> + 96,87,0,124,3,124,0,95,11,116,7,106,8,106,9,116,
> + 16,131,1,114,190,100,4,100,5,132,0,124,2,68,0,131,
> + 1,124,0,95,17,100,6,83,0,41,7,122,68,70,105,108,
> + 108,32,116,104,101,32,99,97,99,104,101,32,111,102,32,112,
> + 111,116,101,110,116,105,97,108,32,109,111,100,117,108,101,115,
> + 32,97,110,100,32,112,97,99,107,97,103,101,115,32,102,111,
> + 114,32,116,104,105,115,32,100,105,114,101,99,116,111,114,121,
> + 46,114,0,0,0,0,114,60,0,0,0,122,5,123,125,46,
> + 123,125,99,1,0,0,0,0,0,0,0,2,0,0,0,3,
> + 0,0,0,83,0,0,0,115,20,0,0,0,104,0,124,0,
> + 93,12,125,1,124,1,106,0,131,0,146,2,113,4,83,0,
> + 114,4,0,0,0,41,1,114,90,0,0,0,41,2,114,24,
> + 0,0,0,90,2,102,110,114,4,0,0,0,114,4,0,0,
> + 0,114,6,0,0,0,250,9,60,115,101,116,99,111,109,112,
> + 62,30,5,0,0,115,2,0,0,0,6,0,122,41,70,105,
> + 108,101,70,105,110,100,101,114,46,95,102,105,108,108,95,99,
> + 97,99,104,101,46,60,108,111,99,97,108,115,62,46,60,115,
> + 101,116,99,111,109,112,62,78,41,18,114,37,0,0,0,114,
> + 3,0,0,0,90,7,108,105,115,116,100,105,114,114,47,0,
> + 0,0,114,254,0,0,0,218,15,80,101,114,109,105,115,115,
> + 105,111,110,69,114,114,111,114,218,18,78,111,116,65,68,105,
> + 114,101,99,116,111,114,121,69,114,114,111,114,114,8,0,0,
> + 0,114,9,0,0,0,114,10,0,0,0,114,7,1,0,0,
> + 114,8,1,0,0,114,85,0,0,0,114,50,0,0,0,114,
> + 90,0,0,0,218,3,97,100,100,114,11,0,0,0,114,9,
> + 1,0,0,41,9,114,102,0,0,0,114,37,0,0,0,90,
> + 8,99,111,110,116,101,110,116,115,90,21,108,111,119,101,114,
> + 95,115,117,102,102,105,120,95,99,111,110,116,101,110,116,115,
> + 114,243,0,0,0,114,100,0,0,0,114,233,0,0,0,114,
> + 221,0,0,0,90,8,110,101,119,95,110,97,109,101,114,4,
> + 0,0,0,114,4,0,0,0,114,6,0,0,0,114,11,1,
> + 0,0,1,5,0,0,115,34,0,0,0,0,2,6,1,2,
> + 1,22,1,20,3,10,3,12,1,12,7,6,1,10,1,16,
> + 1,4,1,18,2,4,1,14,1,6,1,12,1,122,22,70,
> + 105,108,101,70,105,110,100,101,114,46,95,102,105,108,108,95,
> + 99,97,99,104,101,99,1,0,0,0,0,0,0,0,3,0,
> + 0,0,3,0,0,0,7,0,0,0,115,18,0,0,0,135,
> + 0,135,1,102,2,100,1,100,2,132,8,125,2,124,2,83,
> + 0,41,3,97,20,1,0,0,65,32,99,108,97,115,115,32,
> + 109,101,116,104,111,100,32,119,104,105,99,104,32,114,101,116,
> + 117,114,110,115,32,97,32,99,108,111,115,117,114,101,32,116,
> + 111,32,117,115,101,32,111,110,32,115,121,115,46,112,97,116,
> + 104,95,104,111,111,107,10,32,32,32,32,32,32,32,32,119,
> + 104,105,99,104,32,119,105,108,108,32,114,101,116,117,114,110,
> + 32,97,110,32,105,110,115,116,97,110,99,101,32,117,115,105,
> + 110,103,32,116,104,101,32,115,112,101,99,105,102,105,101,100,
> + 32,108,111,97,100,101,114,115,32,97,110,100,32,116,104,101,
> + 32,112,97,116,104,10,32,32,32,32,32,32,32,32,99,97,
> + 108,108,101,100,32,111,110,32,116,104,101,32,99,108,111,115,
> + 117,114,101,46,10,10,32,32,32,32,32,32,32,32,73,102,
> + 32,116,104,101,32,112,97,116,104,32,99,97,108,108,101,100,
> + 32,111,110,32,116,104,101,32,99,108,111,115,117,114,101,32,
> + 105,115,32,110,111,116,32,97,32,100,105,114,101,99,116,111,
> + 114,121,44,32,73,109,112,111,114,116,69,114,114,111,114,32,
> + 105,115,10,32,32,32,32,32,32,32,32,114,97,105,115,101,
> + 100,46,10,10,32,32,32,32,32,32,32,32,99,1,0,0,
> + 0,0,0,0,0,1,0,0,0,4,0,0,0,19,0,0,
> + 0,115,34,0,0,0,116,0,124,0,131,1,115,20,116,1,
> + 100,1,124,0,100,2,141,2,130,1,136,0,124,0,102,1,
> + 136,1,158,2,142,0,83,0,41,3,122,45,80,97,116,104,
> + 32,104,111,111,107,32,102,111,114,32,105,109,112,111,114,116,
> + 108,105,98,46,109,97,99,104,105,110,101,114,121,46,70,105,
> + 108,101,70,105,110,100,101,114,46,122,30,111,110,108,121,32,
> + 100,105,114,101,99,116,111,114,105,101,115,32,97,114,101,32,
> + 115,117,112,112,111,114,116,101,100,41,1,114,37,0,0,0,
> + 41,2,114,48,0,0,0,114,101,0,0,0,41,1,114,37,
> + 0,0,0,41,2,114,167,0,0,0,114,10,1,0,0,114,
> + 4,0,0,0,114,6,0,0,0,218,24,112,97,116,104,95,
> + 104,111,111,107,95,102,111,114,95,70,105,108,101,70,105,110,
> + 100,101,114,42,5,0,0,115,6,0,0,0,0,2,8,1,
> + 12,1,122,54,70,105,108,101,70,105,110,100,101,114,46,112,
> + 97,116,104,95,104,111,111,107,46,60,108,111,99,97,108,115,
> + 62,46,112,97,116,104,95,104,111,111,107,95,102,111,114,95,
> + 70,105,108,101,70,105,110,100,101,114,114,4,0,0,0,41,
> + 3,114,167,0,0,0,114,10,1,0,0,114,16,1,0,0,
> + 114,4,0,0,0,41,2,114,167,0,0,0,114,10,1,0,
> + 0,114,6,0,0,0,218,9,112,97,116,104,95,104,111,111,
> + 107,32,5,0,0,115,4,0,0,0,0,10,14,6,122,20,
> + 70,105,108,101,70,105,110,100,101,114,46,112,97,116,104,95,
> + 104,111,111,107,99,1,0,0,0,0,0,0,0,1,0,0,
> + 0,2,0,0,0,67,0,0,0,115,12,0,0,0,100,1,
> + 106,0,124,0,106,1,131,1,83,0,41,2,78,122,16,70,
> + 105,108,101,70,105,110,100,101,114,40,123,33,114,125,41,41,
> + 2,114,50,0,0,0,114,37,0,0,0,41,1,114,102,0,
> + 0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,0,
> + 0,114,242,0,0,0,50,5,0,0,115,2,0,0,0,0,
> + 1,122,19,70,105,108,101,70,105,110,100,101,114,46,95,95,
> + 114,101,112,114,95,95,41,1,78,41,15,114,107,0,0,0,
> + 114,106,0,0,0,114,108,0,0,0,114,109,0,0,0,114,
> + 181,0,0,0,114,248,0,0,0,114,126,0,0,0,114,178,
> + 0,0,0,114,120,0,0,0,114,3,1,0,0,114,177,0,
> + 0,0,114,11,1,0,0,114,179,0,0,0,114,17,1,0,
> + 0,114,242,0,0,0,114,4,0,0,0,114,4,0,0,0,
> + 114,4,0,0,0,114,6,0,0,0,114,4,1,0,0,163,
> + 4,0,0,115,20,0,0,0,8,7,4,2,8,14,8,4,
> + 4,2,8,12,8,5,10,48,8,31,12,18,114,4,1,0,
> + 0,99,4,0,0,0,0,0,0,0,6,0,0,0,11,0,
> + 0,0,67,0,0,0,115,146,0,0,0,124,0,106,0,100,
> + 1,131,1,125,4,124,0,106,0,100,2,131,1,125,5,124,
> + 4,115,66,124,5,114,36,124,5,106,1,125,4,110,30,124,
> + 2,124,3,107,2,114,56,116,2,124,1,124,2,131,2,125,
> + 4,110,10,116,3,124,1,124,2,131,2,125,4,124,5,115,
> + 84,116,4,124,1,124,2,124,4,100,3,141,3,125,5,121,
> + 36,124,5,124,0,100,2,60,0,124,4,124,0,100,1,60,
> + 0,124,2,124,0,100,4,60,0,124,3,124,0,100,5,60,
> + 0,87,0,110,20,4,0,116,5,107,10,114,140,1,0,1,
> + 0,1,0,89,0,110,2,88,0,100,0,83,0,41,6,78,
> + 218,10,95,95,108,111,97,100,101,114,95,95,218,8,95,95,
> + 115,112,101,99,95,95,41,1,114,123,0,0,0,90,8,95,
> + 95,102,105,108,101,95,95,90,10,95,95,99,97,99,104,101,
> + 100,95,95,41,6,218,3,103,101,116,114,123,0,0,0,114,
> + 219,0,0,0,114,214,0,0,0,114,164,0,0,0,218,9,
> + 69,120,99,101,112,116,105,111,110,41,6,90,2,110,115,114,
> + 100,0,0,0,90,8,112,97,116,104,110,97,109,101,90,9,
> + 99,112,97,116,104,110,97,109,101,114,123,0,0,0,114,161,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,218,14,95,102,105,120,95,117,112,95,109,111,100,117,
> + 108,101,56,5,0,0,115,34,0,0,0,0,2,10,1,10,
> + 1,4,1,4,1,8,1,8,1,12,2,10,1,4,1,14,
> + 1,2,1,8,1,8,1,8,1,12,1,14,2,114,22,1,
> + 0,0,99,0,0,0,0,0,0,0,0,3,0,0,0,3,
> + 0,0,0,67,0,0,0,115,38,0,0,0,116,0,116,1,
> + 106,2,131,0,102,2,125,0,116,3,116,4,102,2,125,1,
> + 116,5,116,6,102,2,125,2,124,0,124,1,124,2,103,3,
> + 83,0,41,1,122,95,82,101,116,117,114,110,115,32,97,32,
> + 108,105,115,116,32,111,102,32,102,105,108,101,45,98,97,115,
> + 101,100,32,109,111,100,117,108,101,32,108,111,97,100,101,114,
> + 115,46,10,10,32,32,32,32,69,97,99,104,32,105,116,101,
> + 109,32,105,115,32,97,32,116,117,112,108,101,32,40,108,111,
> + 97,100,101,114,44,32,115,117,102,102,105,120,101,115,41,46,
> + 10,32,32,32,32,41,7,114,220,0,0,0,114,142,0,0,
> + 0,218,18,101,120,116,101,110,115,105,111,110,95,115,117,102,
> + 102,105,120,101,115,114,214,0,0,0,114,86,0,0,0,114,
> + 219,0,0,0,114,76,0,0,0,41,3,90,10,101,120,116,
> + 101,110,115,105,111,110,115,90,6,115,111,117,114,99,101,90,
> + 8,98,121,116,101,99,111,100,101,114,4,0,0,0,114,4,
> + 0,0,0,114,6,0,0,0,114,158,0,0,0,79,5,0,
> + 0,115,8,0,0,0,0,5,12,1,8,1,8,1,114,158,
> + 0,0,0,99,1,0,0,0,0,0,0,0,12,0,0,0,
> + 12,0,0,0,67,0,0,0,115,198,1,0,0,124,0,97,
> + 0,116,0,106,1,97,1,116,0,106,2,97,2,116,1,106,
> + 3,116,4,25,0,125,1,120,56,100,27,68,0,93,48,125,
> + 2,124,2,116,1,106,3,107,7,114,58,116,0,106,5,124,
> + 2,131,1,125,3,110,10,116,1,106,3,124,2,25,0,125,
> + 3,116,6,124,1,124,2,124,3,131,3,1,0,113,32,87,
> + 0,100,5,100,6,103,1,102,2,100,7,100,8,100,6,103,
> + 2,102,2,100,9,100,8,100,6,103,2,102,2,102,3,125,
> + 4,120,118,124,4,68,0,93,102,92,2,125,5,125,6,116,
> + 7,100,10,100,11,132,0,124,6,68,0,131,1,131,1,115,
> + 152,116,8,130,1,124,6,100,12,25,0,125,7,124,5,116,
> + 1,106,3,107,6,114,184,116,1,106,3,124,5,25,0,125,
> + 8,80,0,113,122,121,16,116,0,106,5,124,5,131,1,125,
> + 8,80,0,87,0,113,122,4,0,116,9,107,10,114,222,1,
> + 0,1,0,1,0,119,122,89,0,113,122,88,0,113,122,87,
> + 0,116,9,100,13,131,1,130,1,116,6,124,1,100,14,124,
> + 8,131,3,1,0,116,6,124,1,100,15,124,7,131,3,1,
> + 0,116,6,124,1,100,16,100,17,106,10,124,6,131,1,131,
> + 3,1,0,121,14,116,0,106,5,100,18,131,1,125,9,87,
> + 0,110,26,4,0,116,9,107,10,144,1,114,62,1,0,1,
> + 0,1,0,100,19,125,9,89,0,110,2,88,0,116,6,124,
> + 1,100,18,124,9,131,3,1,0,116,0,106,5,100,20,131,
> + 1,125,10,116,6,124,1,100,20,124,10,131,3,1,0,124,
> + 5,100,7,107,2,144,1,114,130,116,0,106,5,100,21,131,
> + 1,125,11,116,6,124,1,100,22,124,11,131,3,1,0,116,
> + 6,124,1,100,23,116,11,131,0,131,3,1,0,116,12,106,
> + 13,116,2,106,14,131,0,131,1,1,0,124,5,100,7,107,
> + 2,144,1,114,194,116,15,106,16,100,24,131,1,1,0,100,
> + 25,116,12,107,6,144,1,114,194,100,26,116,17,95,18,100,
> + 19,83,0,41,28,122,205,83,101,116,117,112,32,116,104,101,
> + 32,112,97,116,104,45,98,97,115,101,100,32,105,109,112,111,
> + 114,116,101,114,115,32,102,111,114,32,105,109,112,111,114,116,
> + 108,105,98,32,98,121,32,105,109,112,111,114,116,105,110,103,
> + 32,110,101,101,100,101,100,10,32,32,32,32,98,117,105,108,
> + 116,45,105,110,32,109,111,100,117,108,101,115,32,97,110,100,
> + 32,105,110,106,101,99,116,105,110,103,32,116,104,101,109,32,
> + 105,110,116,111,32,116,104,101,32,103,108,111,98,97,108,32,
> + 110,97,109,101,115,112,97,99,101,46,10,10,32,32,32,32,
> + 79,116,104,101,114,32,99,111,109,112,111,110,101,110,116,115,
> + 32,97,114,101,32,101,120,116,114,97,99,116,101,100,32,102,
> + 114,111,109,32,116,104,101,32,99,111,114,101,32,98,111,111,
> + 116,115,116,114,97,112,32,109,111,100,117,108,101,46,10,10,
> + 32,32,32,32,114,52,0,0,0,114,62,0,0,0,218,8,
> + 98,117,105,108,116,105,110,115,114,139,0,0,0,90,5,112,
> + 111,115,105,120,250,1,47,90,2,110,116,250,1,92,90,4,
> + 101,100,107,50,99,1,0,0,0,0,0,0,0,2,0,0,
> + 0,3,0,0,0,115,0,0,0,115,26,0,0,0,124,0,
> + 93,18,125,1,116,0,124,1,131,1,100,0,107,2,86,0,
> + 1,0,113,2,100,1,83,0,41,2,114,31,0,0,0,78,
> + 41,1,114,33,0,0,0,41,2,114,24,0,0,0,114,79,
> + 0,0,0,114,4,0,0,0,114,4,0,0,0,114,6,0,
> + 0,0,114,223,0,0,0,115,5,0,0,115,2,0,0,0,
> + 4,0,122,25,95,115,101,116,117,112,46,60,108,111,99,97,
> + 108,115,62,46,60,103,101,110,101,120,112,114,62,114,61,0,
> + 0,0,122,38,105,109,112,111,114,116,108,105,98,32,114,101,
> + 113,117,105,114,101,115,32,112,111,115,105,120,32,111,114,32,
> + 110,116,32,111,114,32,101,100,107,50,114,3,0,0,0,114,
> + 27,0,0,0,114,23,0,0,0,114,32,0,0,0,90,7,
> + 95,116,104,114,101,97,100,78,90,8,95,119,101,97,107,114,
> + 101,102,90,6,119,105,110,114,101,103,114,166,0,0,0,114,
> + 7,0,0,0,122,4,46,112,121,119,122,6,95,100,46,112,
> + 121,100,84,41,4,114,52,0,0,0,114,62,0,0,0,114,
> + 24,1,0,0,114,139,0,0,0,41,19,114,117,0,0,0,
> + 114,8,0,0,0,114,142,0,0,0,114,235,0,0,0,114,
> + 107,0,0,0,90,18,95,98,117,105,108,116,105,110,95,102,
> + 114,111,109,95,110,97,109,101,114,111,0,0,0,218,3,97,
> + 108,108,218,14,65,115,115,101,114,116,105,111,110,69,114,114,
> + 111,114,114,101,0,0,0,114,28,0,0,0,114,13,0,0,
> + 0,114,225,0,0,0,114,146,0,0,0,114,23,1,0,0,
> + 114,86,0,0,0,114,160,0,0,0,114,165,0,0,0,114,
> + 169,0,0,0,41,12,218,17,95,98,111,111,116,115,116,114,
> + 97,112,95,109,111,100,117,108,101,90,11,115,101,108,102,95,
> + 109,111,100,117,108,101,90,12,98,117,105,108,116,105,110,95,
> + 110,97,109,101,90,14,98,117,105,108,116,105,110,95,109,111,
> + 100,117,108,101,90,10,111,115,95,100,101,116,97,105,108,115,
> + 90,10,98,117,105,108,116,105,110,95,111,115,114,23,0,0,
> + 0,114,27,0,0,0,90,9,111,115,95,109,111,100,117,108,
> + 101,90,13,116,104,114,101,97,100,95,109,111,100,117,108,101,
> + 90,14,119,101,97,107,114,101,102,95,109,111,100,117,108,101,
> + 90,13,119,105,110,114,101,103,95,109,111,100,117,108,101,114,
> + 4,0,0,0,114,4,0,0,0,114,6,0,0,0,218,6,
> + 95,115,101,116,117,112,90,5,0,0,115,82,0,0,0,0,
> + 8,4,1,6,1,6,3,10,1,10,1,10,1,12,2,10,
> + 1,16,3,32,1,14,2,22,1,8,1,10,1,10,1,4,
> + 2,2,1,10,1,6,1,14,1,12,2,8,1,12,1,12,
> + 1,18,3,2,1,14,1,16,2,10,1,12,3,10,1,12,
> + 3,10,1,10,1,12,3,14,1,14,1,10,1,10,1,10,
> + 1,114,30,1,0,0,99,1,0,0,0,0,0,0,0,2,
> + 0,0,0,3,0,0,0,67,0,0,0,115,50,0,0,0,
> + 116,0,124,0,131,1,1,0,116,1,131,0,125,1,116,2,
> + 106,3,106,4,116,5,106,6,124,1,142,0,103,1,131,1,
> + 1,0,116,2,106,7,106,8,116,9,131,1,1,0,100,1,
> + 83,0,41,2,122,41,73,110,115,116,97,108,108,32,116,104,
> + 101,32,112,97,116,104,45,98,97,115,101,100,32,105,109,112,
> + 111,114,116,32,99,111,109,112,111,110,101,110,116,115,46,78,
> + 41,10,114,30,1,0,0,114,158,0,0,0,114,8,0,0,
> + 0,114,252,0,0,0,114,146,0,0,0,114,4,1,0,0,
> + 114,17,1,0,0,218,9,109,101,116,97,95,112,97,116,104,
> + 114,160,0,0,0,114,247,0,0,0,41,2,114,29,1,0,
> + 0,90,17,115,117,112,112,111,114,116,101,100,95,108,111,97,
> + 100,101,114,115,114,4,0,0,0,114,4,0,0,0,114,6,
> + 0,0,0,218,8,95,105,110,115,116,97,108,108,158,5,0,
> + 0,115,8,0,0,0,0,2,8,1,6,1,20,1,114,32,
> + 1,0,0,41,1,114,0,0,0,0,41,2,114,1,0,0,
> + 0,114,2,0,0,0,41,1,114,49,0,0,0,41,1,78,
> + 41,3,78,78,78,41,3,78,78,78,41,2,114,61,0,0,
> + 0,114,61,0,0,0,41,1,78,41,1,78,41,58,114,109,
> + 0,0,0,114,12,0,0,0,90,37,95,67,65,83,69,95,
> + 73,78,83,69,78,83,73,84,73,86,69,95,80,76,65,84,
> + 70,79,82,77,83,95,66,89,84,69,83,95,75,69,89,114,
> + 11,0,0,0,114,13,0,0,0,114,19,0,0,0,114,21,
> + 0,0,0,114,30,0,0,0,114,40,0,0,0,114,41,0,
> + 0,0,114,45,0,0,0,114,46,0,0,0,114,48,0,0,
> + 0,114,57,0,0,0,218,4,116,121,112,101,218,8,95,95,
> + 99,111,100,101,95,95,114,141,0,0,0,114,17,0,0,0,
> + 114,131,0,0,0,114,16,0,0,0,114,20,0,0,0,90,
> + 17,95,82,65,87,95,77,65,71,73,67,95,78,85,77,66,
> + 69,82,114,75,0,0,0,114,74,0,0,0,114,86,0,0,
> + 0,114,76,0,0,0,90,23,68,69,66,85,71,95,66,89,
> + 84,69,67,79,68,69,95,83,85,70,70,73,88,69,83,90,
> + 27,79,80,84,73,77,73,90,69,68,95,66,89,84,69,67,
> + 79,68,69,95,83,85,70,70,73,88,69,83,114,81,0,0,
> + 0,114,87,0,0,0,114,93,0,0,0,114,97,0,0,0,
> + 114,99,0,0,0,114,119,0,0,0,114,126,0,0,0,114,
> + 138,0,0,0,114,144,0,0,0,114,147,0,0,0,114,152,
> + 0,0,0,218,6,111,98,106,101,99,116,114,159,0,0,0,
> + 114,164,0,0,0,114,165,0,0,0,114,180,0,0,0,114,
> + 190,0,0,0,114,206,0,0,0,114,214,0,0,0,114,219,
> + 0,0,0,114,225,0,0,0,114,220,0,0,0,114,226,0,
> + 0,0,114,245,0,0,0,114,247,0,0,0,114,4,1,0,
> + 0,114,22,1,0,0,114,158,0,0,0,114,30,1,0,0,
> + 114,32,1,0,0,114,4,0,0,0,114,4,0,0,0,114,
> + 4,0,0,0,114,6,0,0,0,218,8,60,109,111,100,117,
> + 108,101,62,8,0,0,0,115,108,0,0,0,4,16,4,1,
> + 4,1,2,1,6,3,8,17,8,5,8,5,8,6,8,12,
> + 8,10,8,9,8,5,8,7,10,22,10,123,16,1,12,2,
> + 4,1,4,2,6,2,6,2,8,2,16,45,8,34,8,19,
> + 8,12,8,12,8,28,8,17,10,55,10,12,10,10,8,14,
> + 6,3,4,1,14,67,14,64,14,29,16,110,14,41,18,45,
> + 18,16,4,3,18,53,14,60,14,42,14,127,0,5,14,127,
> + 0,22,10,23,8,11,8,68,
> +};
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
> new file mode 100644
> index 00000000..dbe75e3b
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
> @@ -0,0 +1,1861 @@
> +
> +/* Write Python objects to files and read them back.
> + This is primarily intended for writing and reading compiled Python code,
> + even though dicts, lists, sets and frozensets, not commonly seen in
> + code objects, are supported.
> + Version 3 of this protocol properly supports circular links
> + and sharing. */
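
For anyone reviewing the port who has not used this API before, a minimal
round trip through the two string-based entry points defined further down in
this file (PyMarshal_WriteObjectToString / PyMarshal_ReadObjectFromString)
looks roughly like the sketch below. It assumes Py_Initialize() has already
run and trims most error handling; marshal_roundtrip_demo is a hypothetical
helper for illustration, not part of the patch.

    #include "Python.h"
    #include "marshal.h"

    static int marshal_roundtrip_demo(void)
    {
        PyObject *value = PyLong_FromLong(42);   /* object to marshal */
        PyObject *blob  = NULL;
        PyObject *copy  = NULL;
        int ok = -1;

        if (value == NULL)
            return -1;
        blob = PyMarshal_WriteObjectToString(value, Py_MARSHAL_VERSION);
        if (blob != NULL) {
            /* The result is an ordinary bytes object; feed it straight back. */
            copy = PyMarshal_ReadObjectFromString(PyBytes_AS_STRING(blob),
                                                  PyBytes_GET_SIZE(blob));
            ok = (copy != NULL) ? 0 : -1;
        }
        Py_XDECREF(copy);
        Py_XDECREF(blob);
        Py_DECREF(value);
        return ok;
    }
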
> +
> +#define PY_SSIZE_T_CLEAN
> +
> +#include "Python.h"
> +#include "longintrepr.h"
> +#include "code.h"
> +#include "marshal.h"
> +#include "../Modules/hashtable.h"
> +
> +/* High water mark to determine when the marshalled object is dangerously deep
> + * and risks coring the interpreter. When the object stack gets this deep,
> + * raise an exception instead of continuing.
> + * On Windows debug builds, reduce this value.
> + */
> +#if defined(MS_WINDOWS) && defined(_DEBUG)
> +#define MAX_MARSHAL_STACK_DEPTH 1000
> +#else
> +#define MAX_MARSHAL_STACK_DEPTH 2000
> +#endif
> +
> +#define TYPE_NULL '0'
> +#define TYPE_NONE 'N'
> +#define TYPE_FALSE 'F'
> +#define TYPE_TRUE 'T'
> +#define TYPE_STOPITER 'S'
> +#define TYPE_ELLIPSIS '.'
> +#define TYPE_INT 'i'
> +/* TYPE_INT64 is not generated anymore.
> + Supported for backward compatibility only. */
> +#define TYPE_INT64 'I'
> +#define TYPE_FLOAT 'f'
> +#define TYPE_BINARY_FLOAT 'g'
> +#define TYPE_COMPLEX 'x'
> +#define TYPE_BINARY_COMPLEX 'y'
> +#define TYPE_LONG 'l'
> +#define TYPE_STRING 's'
> +#define TYPE_INTERNED 't'
> +#define TYPE_REF 'r'
> +#define TYPE_TUPLE '('
> +#define TYPE_LIST '['
> +#define TYPE_DICT '{'
> +#define TYPE_CODE 'c'
> +#define TYPE_UNICODE 'u'
> +#define TYPE_UNKNOWN '?'
> +#define TYPE_SET '<'
> +#define TYPE_FROZENSET '>'
> +#define FLAG_REF '\x80' /* with a type, add obj to index */
> +
> +#define TYPE_ASCII 'a'
> +#define TYPE_ASCII_INTERNED 'A'
> +#define TYPE_SMALL_TUPLE ')'
> +#define TYPE_SHORT_ASCII 'z'
> +#define TYPE_SHORT_ASCII_INTERNED 'Z'
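
To make the type-code table above concrete: each marshalled item is one type
byte, OR-ed with FLAG_REF (0x80) when the object is registered for sharing,
followed by a type-specific payload. For a small integer (one that fits
TYPE_INT) the payload is four little-endian bytes written by w_long(), so an
unshared value of 5 comes out roughly as the illustrative array below:

    const unsigned char example_int[5] = { 'i', 0x05, 0x00, 0x00, 0x00 };
    /* 'i' == TYPE_INT; a shared int would carry 'i' | 0x80 instead */
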
> +
> +#define WFERR_OK 0
> +#define WFERR_UNMARSHALLABLE 1
> +#define WFERR_NESTEDTOODEEP 2
> +#define WFERR_NOMEMORY 3
> +
> +typedef struct {
> + FILE *fp;
> + int error; /* see WFERR_* values */
> + int depth;
> + PyObject *str;
> + char *ptr;
> + char *end;
> + char *buf;
> + _Py_hashtable_t *hashtable;
> + int version;
> +} WFILE;
> +
> +#define w_byte(c, p) do { \
> + if ((p)->ptr != (p)->end || w_reserve((p), 1)) \
> + *(p)->ptr++ = (c); \
> + } while(0)
> +
> +static void
> +w_flush(WFILE *p)
> +{
> + assert(p->fp != NULL);
> + fwrite(p->buf, 1, p->ptr - p->buf, p->fp);
> + p->ptr = p->buf;
> +}
> +
> +static int
> +w_reserve(WFILE *p, Py_ssize_t needed)
> +{
> + Py_ssize_t pos, size, delta;
> + if (p->ptr == NULL)
> + return 0; /* An error already occurred */
> + if (p->fp != NULL) {
> + w_flush(p);
> + return needed <= p->end - p->ptr;
> + }
> + assert(p->str != NULL);
> + pos = p->ptr - p->buf;
> + size = PyBytes_Size(p->str);
> + if (size > 16*1024*1024)
> + delta = (size >> 3); /* 12.5% overallocation */
> + else
> + delta = size + 1024;
> + delta = Py_MAX(delta, needed);
> + if (delta > PY_SSIZE_T_MAX - size) {
> + p->error = WFERR_NOMEMORY;
> + return 0;
> + }
> + size += delta;
> + if (_PyBytes_Resize(&p->str, size) != 0) {
> + p->ptr = p->buf = p->end = NULL;
> + return 0;
> + }
> + else {
> + p->buf = PyBytes_AS_STRING(p->str);
> + p->ptr = p->buf + pos;
> + p->end = p->buf + size;
> + return 1;
> + }
> +}
> +
> +static void
> +w_string(const char *s, Py_ssize_t n, WFILE *p)
> +{
> + Py_ssize_t m;
> + if (!n || p->ptr == NULL)
> + return;
> + m = p->end - p->ptr;
> + if (p->fp != NULL) {
> + if (n <= m) {
> + memcpy(p->ptr, s, n);
> + p->ptr += n;
> + }
> + else {
> + w_flush(p);
> + fwrite(s, 1, n, p->fp);
> + }
> + }
> + else {
> + if (n <= m || w_reserve(p, n - m)) {
> + memcpy(p->ptr, s, n);
> + p->ptr += n;
> + }
> + }
> +}
> +
> +static void
> +w_short(int x, WFILE *p)
> +{
> + w_byte((char)( x & 0xff), p);
> + w_byte((char)((x>> 8) & 0xff), p);
> +}
> +
> +static void
> +w_long(long x, WFILE *p)
> +{
> + w_byte((char)( x & 0xff), p);
> + w_byte((char)((x>> 8) & 0xff), p);
> + w_byte((char)((x>>16) & 0xff), p);
> + w_byte((char)((x>>24) & 0xff), p);
> +}
> +
> +#define SIZE32_MAX 0x7FFFFFFF
> +
> +#if SIZEOF_SIZE_T > 4
> +# define W_SIZE(n, p) do { \
> + if ((n) > SIZE32_MAX) { \
> + (p)->depth--; \
> + (p)->error = WFERR_UNMARSHALLABLE; \
> + return; \
> + } \
> + w_long((long)(n), p); \
> + } while(0)
> +#else
> +# define W_SIZE w_long
> +#endif
> +
> +static void
> +w_pstring(const char *s, Py_ssize_t n, WFILE *p)
> +{
> + W_SIZE(n, p);
> + w_string(s, n, p);
> +}
> +
> +static void
> +w_short_pstring(const char *s, Py_ssize_t n, WFILE *p)
> +{
> + w_byte(Py_SAFE_DOWNCAST(n, Py_ssize_t, unsigned char), p);
> + w_string(s, n, p);
> +}
> +
> +/* We assume that Python ints are stored internally in base some power of
> + 2**15; for the sake of portability we'll always read and write them in base
> + exactly 2**15. */
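
As a worked example of this base-2**15 split used by w_PyLong() below: the
value 100000 is 3*32768 + 1696, so it is written as a signed digit count of 2
followed by the digits, least significant first. Conceptually:

    w_long(2, p);        /* two 15-bit marshal digits, positive value */
    w_short(1696, p);    /* 100000 & 0x7FFF                           */
    w_short(3, p);       /* 100000 >> 15                              */
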
> +
> +#define PyLong_MARSHAL_SHIFT 15
> +#define PyLong_MARSHAL_BASE ((short)1 << PyLong_MARSHAL_SHIFT)
> +#define PyLong_MARSHAL_MASK (PyLong_MARSHAL_BASE - 1)
> +#if PyLong_SHIFT % PyLong_MARSHAL_SHIFT != 0
> +#error "PyLong_SHIFT must be a multiple of PyLong_MARSHAL_SHIFT"
> +#endif
> +#define PyLong_MARSHAL_RATIO (PyLong_SHIFT / PyLong_MARSHAL_SHIFT)
> +
> +#define W_TYPE(t, p) do { \
> + w_byte((t) | flag, (p)); \
> +} while(0)
> +
> +static void
> +w_PyLong(const PyLongObject *ob, char flag, WFILE *p)
> +{
> + Py_ssize_t i, j, n, l;
> + digit d;
> +
> + W_TYPE(TYPE_LONG, p);
> + if (Py_SIZE(ob) == 0) {
> + w_long((long)0, p);
> + return;
> + }
> +
> + /* set l to number of base PyLong_MARSHAL_BASE digits */
> + n = Py_ABS(Py_SIZE(ob));
> + l = (n-1) * PyLong_MARSHAL_RATIO;
> + d = ob->ob_digit[n-1];
> + assert(d != 0); /* a PyLong is always normalized */
> + do {
> + d >>= PyLong_MARSHAL_SHIFT;
> + l++;
> + } while (d != 0);
> + if (l > SIZE32_MAX) {
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + w_long((long)(Py_SIZE(ob) > 0 ? l : -l), p);
> +
> + for (i=0; i < n-1; i++) {
> + d = ob->ob_digit[i];
> + for (j=0; j < PyLong_MARSHAL_RATIO; j++) {
> + w_short(d & PyLong_MARSHAL_MASK, p);
> + d >>= PyLong_MARSHAL_SHIFT;
> + }
> + assert (d == 0);
> + }
> + d = ob->ob_digit[n-1];
> + do {
> + w_short(d & PyLong_MARSHAL_MASK, p);
> + d >>= PyLong_MARSHAL_SHIFT;
> + } while (d != 0);
> +}
> +
> +static int
> +w_ref(PyObject *v, char *flag, WFILE *p)
> +{
> + _Py_hashtable_entry_t *entry;
> + int w;
> +
> + if (p->version < 3 || p->hashtable == NULL)
> + return 0; /* not writing object references */
> +
> + /* if it has only one reference, it definitely isn't shared */
> + if (Py_REFCNT(v) == 1)
> + return 0;
> +
> + entry = _Py_HASHTABLE_GET_ENTRY(p->hashtable, v);
> + if (entry != NULL) {
> + /* write the reference index to the stream */
> + _Py_HASHTABLE_ENTRY_READ_DATA(p->hashtable, entry, w);
> + /* we don't store "long" indices in the dict */
> + assert(0 <= w && w <= 0x7fffffff);
> + w_byte(TYPE_REF, p);
> + w_long(w, p);
> + return 1;
> + } else {
> + size_t s = p->hashtable->entries;
> + /* we don't support long indices */
> + if (s >= 0x7fffffff) {
> + PyErr_SetString(PyExc_ValueError, "too many objects");
> + goto err;
> + }
> + w = (int)s;
> + Py_INCREF(v);
> + if (_Py_HASHTABLE_SET(p->hashtable, v, w) < 0) {
> + Py_DECREF(v);
> + goto err;
> + }
> + *flag |= FLAG_REF;
> + return 0;
> + }
> +err:
> + p->error = WFERR_UNMARSHALLABLE;
> + return 1;
> +}
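
Put differently: at version >= 3 the first time an object with more than one
reference is written, it goes out in full with FLAG_REF set and is assigned
the next free slot; every later occurrence shrinks to a type byte plus an
index. For a tuple (s, s) where the string s ends up in slot 0, the second
element is effectively just:

    w_byte(TYPE_REF, p);    /* 'r'                               */
    w_long(0, p);           /* slot assigned at first occurrence */
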
> +
> +static void
> +w_complex_object(PyObject *v, char flag, WFILE *p);
> +
> +static void
> +w_object(PyObject *v, WFILE *p)
> +{
> + char flag = '\0';
> +
> + p->depth++;
> +
> + if (p->depth > MAX_MARSHAL_STACK_DEPTH) {
> + p->error = WFERR_NESTEDTOODEEP;
> + }
> + else if (v == NULL) {
> + w_byte(TYPE_NULL, p);
> + }
> + else if (v == Py_None) {
> + w_byte(TYPE_NONE, p);
> + }
> + else if (v == PyExc_StopIteration) {
> + w_byte(TYPE_STOPITER, p);
> + }
> + else if (v == Py_Ellipsis) {
> + w_byte(TYPE_ELLIPSIS, p);
> + }
> + else if (v == Py_False) {
> + w_byte(TYPE_FALSE, p);
> + }
> + else if (v == Py_True) {
> + w_byte(TYPE_TRUE, p);
> + }
> + else if (!w_ref(v, &flag, p))
> + w_complex_object(v, flag, p);
> +
> + p->depth--;
> +}
> +
> +static void
> +w_complex_object(PyObject *v, char flag, WFILE *p)
> +{
> + Py_ssize_t i, n;
> +
> + if (PyLong_CheckExact(v)) {
> + long x = PyLong_AsLong(v);
> + if ((x == -1) && PyErr_Occurred()) {
> + PyLongObject *ob = (PyLongObject *)v;
> + PyErr_Clear();
> + w_PyLong(ob, flag, p);
> + }
> + else {
> +#if SIZEOF_LONG > 4
> + long y = Py_ARITHMETIC_RIGHT_SHIFT(long, x, 31);
> + if (y && y != -1) {
> + /* Too large for TYPE_INT */
> + w_PyLong((PyLongObject*)v, flag, p);
> + }
> + else
> +#endif
> + {
> + W_TYPE(TYPE_INT, p);
> + w_long(x, p);
> + }
> + }
> + }
> + else if (PyFloat_CheckExact(v)) {
> + if (p->version > 1) {
> + unsigned char buf[8];
> + if (_PyFloat_Pack8(PyFloat_AsDouble(v),
> + buf, 1) < 0) {
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + W_TYPE(TYPE_BINARY_FLOAT, p);
> + w_string((char*)buf, 8, p);
> + }
> + else {
> + char *buf = PyOS_double_to_string(PyFloat_AS_DOUBLE(v),
> + 'g', 17, 0, NULL);
> + if (!buf) {
> + p->error = WFERR_NOMEMORY;
> + return;
> + }
> + n = strlen(buf);
> + W_TYPE(TYPE_FLOAT, p);
> + w_byte((int)n, p);
> + w_string(buf, n, p);
> + PyMem_Free(buf);
> + }
> + }
> + else if (PyComplex_CheckExact(v)) {
> + if (p->version > 1) {
> + unsigned char buf[8];
> + if (_PyFloat_Pack8(PyComplex_RealAsDouble(v),
> + buf, 1) < 0) {
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + W_TYPE(TYPE_BINARY_COMPLEX, p);
> + w_string((char*)buf, 8, p);
> + if (_PyFloat_Pack8(PyComplex_ImagAsDouble(v),
> + buf, 1) < 0) {
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + w_string((char*)buf, 8, p);
> + }
> + else {
> + char *buf;
> + W_TYPE(TYPE_COMPLEX, p);
> + buf = PyOS_double_to_string(PyComplex_RealAsDouble(v),
> + 'g', 17, 0, NULL);
> + if (!buf) {
> + p->error = WFERR_NOMEMORY;
> + return;
> + }
> + n = strlen(buf);
> + w_byte((int)n, p);
> + w_string(buf, n, p);
> + PyMem_Free(buf);
> + buf = PyOS_double_to_string(PyComplex_ImagAsDouble(v),
> + 'g', 17, 0, NULL);
> + if (!buf) {
> + p->error = WFERR_NOMEMORY;
> + return;
> + }
> + n = strlen(buf);
> + w_byte((int)n, p);
> + w_string(buf, n, p);
> + PyMem_Free(buf);
> + }
> + }
> + else if (PyBytes_CheckExact(v)) {
> + W_TYPE(TYPE_STRING, p);
> + w_pstring(PyBytes_AS_STRING(v), PyBytes_GET_SIZE(v), p);
> + }
> + else if (PyUnicode_CheckExact(v)) {
> + if (p->version >= 4 && PyUnicode_IS_ASCII(v)) {
> + int is_short = PyUnicode_GET_LENGTH(v) < 256;
> + if (is_short) {
> + if (PyUnicode_CHECK_INTERNED(v))
> + W_TYPE(TYPE_SHORT_ASCII_INTERNED, p);
> + else
> + W_TYPE(TYPE_SHORT_ASCII, p);
> + w_short_pstring((char *) PyUnicode_1BYTE_DATA(v),
> + PyUnicode_GET_LENGTH(v), p);
> + }
> + else {
> + if (PyUnicode_CHECK_INTERNED(v))
> + W_TYPE(TYPE_ASCII_INTERNED, p);
> + else
> + W_TYPE(TYPE_ASCII, p);
> + w_pstring((char *) PyUnicode_1BYTE_DATA(v),
> + PyUnicode_GET_LENGTH(v), p);
> + }
> + }
> + else {
> + PyObject *utf8;
> + utf8 = PyUnicode_AsEncodedString(v, "utf8", "surrogatepass");
> + if (utf8 == NULL) {
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + if (p->version >= 3 && PyUnicode_CHECK_INTERNED(v))
> + W_TYPE(TYPE_INTERNED, p);
> + else
> + W_TYPE(TYPE_UNICODE, p);
> + w_pstring(PyBytes_AS_STRING(utf8), PyBytes_GET_SIZE(utf8), p);
> + Py_DECREF(utf8);
> + }
> + }
> + else if (PyTuple_CheckExact(v)) {
> + n = PyTuple_Size(v);
> + if (p->version >= 4 && n < 256) {
> + W_TYPE(TYPE_SMALL_TUPLE, p);
> + w_byte((unsigned char)n, p);
> + }
> + else {
> + W_TYPE(TYPE_TUPLE, p);
> + W_SIZE(n, p);
> + }
> + for (i = 0; i < n; i++) {
> + w_object(PyTuple_GET_ITEM(v, i), p);
> + }
> + }
> + else if (PyList_CheckExact(v)) {
> + W_TYPE(TYPE_LIST, p);
> + n = PyList_GET_SIZE(v);
> + W_SIZE(n, p);
> + for (i = 0; i < n; i++) {
> + w_object(PyList_GET_ITEM(v, i), p);
> + }
> + }
> + else if (PyDict_CheckExact(v)) {
> + Py_ssize_t pos;
> + PyObject *key, *value;
> + W_TYPE(TYPE_DICT, p);
> + /* This one is NULL object terminated! */
> + pos = 0;
> + while (PyDict_Next(v, &pos, &key, &value)) {
> + w_object(key, p);
> + w_object(value, p);
> + }
> + w_object((PyObject *)NULL, p);
> + }
> + else if (PyAnySet_CheckExact(v)) {
> + PyObject *value, *it;
> +
> + if (PyObject_TypeCheck(v, &PySet_Type))
> + W_TYPE(TYPE_SET, p);
> + else
> + W_TYPE(TYPE_FROZENSET, p);
> + n = PyObject_Size(v);
> + if (n == -1) {
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + W_SIZE(n, p);
> + it = PyObject_GetIter(v);
> + if (it == NULL) {
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + while ((value = PyIter_Next(it)) != NULL) {
> + w_object(value, p);
> + Py_DECREF(value);
> + }
> + Py_DECREF(it);
> + if (PyErr_Occurred()) {
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + }
> + else if (PyCode_Check(v)) {
> + PyCodeObject *co = (PyCodeObject *)v;
> + W_TYPE(TYPE_CODE, p);
> + w_long(co->co_argcount, p);
> + w_long(co->co_kwonlyargcount, p);
> + w_long(co->co_nlocals, p);
> + w_long(co->co_stacksize, p);
> + w_long(co->co_flags, p);
> + w_object(co->co_code, p);
> + w_object(co->co_consts, p);
> + w_object(co->co_names, p);
> + w_object(co->co_varnames, p);
> + w_object(co->co_freevars, p);
> + w_object(co->co_cellvars, p);
> + w_object(co->co_filename, p);
> + w_object(co->co_name, p);
> + w_long(co->co_firstlineno, p);
> + w_object(co->co_lnotab, p);
> + }
> + else if (PyObject_CheckBuffer(v)) {
> + /* Write unknown bytes-like objects as a bytes object */
> + Py_buffer view;
> + if (PyObject_GetBuffer(v, &view, PyBUF_SIMPLE) != 0) {
> + w_byte(TYPE_UNKNOWN, p);
> + p->depth--;
> + p->error = WFERR_UNMARSHALLABLE;
> + return;
> + }
> + W_TYPE(TYPE_STRING, p);
> + w_pstring(view.buf, view.len, p);
> + PyBuffer_Release(&view);
> + }
> + else {
> + W_TYPE(TYPE_UNKNOWN, p);
> + p->error = WFERR_UNMARSHALLABLE;
> + }
> +}
> +
> +static int
> +w_init_refs(WFILE *wf, int version)
> +{
> + if (version >= 3) {
> + wf->hashtable = _Py_hashtable_new(sizeof(PyObject *), sizeof(int),
> + _Py_hashtable_hash_ptr,
> + _Py_hashtable_compare_direct);
> + if (wf->hashtable == NULL) {
> + PyErr_NoMemory();
> + return -1;
> + }
> + }
> + return 0;
> +}
> +
> +static int
> +w_decref_entry(_Py_hashtable_t *ht, _Py_hashtable_entry_t *entry,
> + void *Py_UNUSED(data))
> +{
> + PyObject *entry_key;
> +
> + _Py_HASHTABLE_ENTRY_READ_KEY(ht, entry, entry_key);
> + Py_XDECREF(entry_key);
> + return 0;
> +}
> +
> +static void
> +w_clear_refs(WFILE *wf)
> +{
> + if (wf->hashtable != NULL) {
> + _Py_hashtable_foreach(wf->hashtable, w_decref_entry, NULL);
> + _Py_hashtable_destroy(wf->hashtable);
> + }
> +}
> +
> +/* version currently has no effect for writing ints. */
> +void
> +PyMarshal_WriteLongToFile(long x, FILE *fp, int version)
> +{
> + char buf[4];
> + WFILE wf;
> + memset(&wf, 0, sizeof(wf));
> + wf.fp = fp;
> + wf.ptr = wf.buf = buf;
> + wf.end = wf.ptr + sizeof(buf);
> + wf.error = WFERR_OK;
> + wf.version = version;
> + w_long(x, &wf);
> + w_flush(&wf);
> +}
> +
> +void
> +PyMarshal_WriteObjectToFile(PyObject *x, FILE *fp, int version)
> +{
> + char buf[BUFSIZ];
> + WFILE wf;
> + memset(&wf, 0, sizeof(wf));
> + wf.fp = fp;
> + wf.ptr = wf.buf = buf;
> + wf.end = wf.ptr + sizeof(buf);
> + wf.error = WFERR_OK;
> + wf.version = version;
> + if (w_init_refs(&wf, version))
> +        return; /* caller must check PyErr_Occurred() */
> + w_object(x, &wf);
> + w_clear_refs(&wf);
> + w_flush(&wf);
> +}
> +
> +typedef struct {
> + FILE *fp;
> + int depth;
> + PyObject *readable; /* Stream-like object being read from */
> + PyObject *current_filename;
> + char *ptr;
> + char *end;
> + char *buf;
> + Py_ssize_t buf_size;
> + PyObject *refs; /* a list */
> +} RFILE;
> +
> +static const char *
> +r_string(Py_ssize_t n, RFILE *p)
> +{
> + Py_ssize_t read = -1;
> +
> + if (p->ptr != NULL) {
> + /* Fast path for loads() */
> + char *res = p->ptr;
> + Py_ssize_t left = p->end - p->ptr;
> + if (left < n) {
> + PyErr_SetString(PyExc_EOFError,
> + "marshal data too short");
> + return NULL;
> + }
> + p->ptr += n;
> + return res;
> + }
> + if (p->buf == NULL) {
> + p->buf = PyMem_MALLOC(n);
> + if (p->buf == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + p->buf_size = n;
> + }
> + else if (p->buf_size < n) {
> + char *tmp = PyMem_REALLOC(p->buf, n);
> + if (tmp == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + p->buf = tmp;
> + p->buf_size = n;
> + }
> +
> + if (!p->readable) {
> + assert(p->fp != NULL);
> + read = fread(p->buf, 1, n, p->fp);
> + }
> + else {
> + _Py_IDENTIFIER(readinto);
> + PyObject *res, *mview;
> + Py_buffer buf;
> +
> + if (PyBuffer_FillInfo(&buf, NULL, p->buf, n, 0, PyBUF_CONTIG) == -1)
> + return NULL;
> + mview = PyMemoryView_FromBuffer(&buf);
> + if (mview == NULL)
> + return NULL;
> +
> + res = _PyObject_CallMethodId(p->readable, &PyId_readinto, "N", mview);
> + if (res != NULL) {
> + read = PyNumber_AsSsize_t(res, PyExc_ValueError);
> + Py_DECREF(res);
> + }
> + }
> + if (read != n) {
> + if (!PyErr_Occurred()) {
> + if (read > n)
> + PyErr_Format(PyExc_ValueError,
> + "read() returned too much data: "
> + "%zd bytes requested, %zd returned",
> + n, read);
> + else
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where not expected");
> + }
> + return NULL;
> + }
> + return p->buf;
> +}
> +
> +static int
> +r_byte(RFILE *p)
> +{
> + int c = EOF;
> +
> + if (p->ptr != NULL) {
> + if (p->ptr < p->end)
> + c = (unsigned char) *p->ptr++;
> + return c;
> + }
> + if (!p->readable) {
> + assert(p->fp);
> + c = getc(p->fp);
> + }
> + else {
> + const char *ptr = r_string(1, p);
> + if (ptr != NULL)
> + c = *(unsigned char *) ptr;
> + }
> + return c;
> +}
> +
> +static int
> +r_short(RFILE *p)
> +{
> + short x = -1;
> + const unsigned char *buffer;
> +
> + buffer = (const unsigned char *) r_string(2, p);
> + if (buffer != NULL) {
> + x = buffer[0];
> + x |= buffer[1] << 8;
> + /* Sign-extension, in case short greater than 16 bits */
> + x |= -(x & 0x8000);
> + }
> + return x;
> +}
> +
> +static long
> +r_long(RFILE *p)
> +{
> + long x = -1;
> + const unsigned char *buffer;
> +
> + buffer = (const unsigned char *) r_string(4, p);
> + if (buffer != NULL) {
> + x = buffer[0];
> + x |= (long)buffer[1] << 8;
> + x |= (long)buffer[2] << 16;
> + x |= (long)buffer[3] << 24;
> +#if SIZEOF_LONG > 4
> + /* Sign extension for 64-bit machines */
> + x |= -(x & 0x80000000L);
> +#endif
> + }
> + return x;
> +}
> +
> +/* r_long64 deals with the TYPE_INT64 code. */
> +static PyObject *
> +r_long64(RFILE *p)
> +{
> + const unsigned char *buffer = (const unsigned char *) r_string(8, p);
> + if (buffer == NULL) {
> + return NULL;
> + }
> + return _PyLong_FromByteArray(buffer, 8,
> + 1 /* little endian */,
> + 1 /* signed */);
> +}
> +
> +static PyObject *
> +r_PyLong(RFILE *p)
> +{
> + PyLongObject *ob;
> + long n, size, i;
> + int j, md, shorts_in_top_digit;
> + digit d;
> +
> + n = r_long(p);
> + if (PyErr_Occurred())
> + return NULL;
> + if (n == 0)
> + return (PyObject *)_PyLong_New(0);
> + if (n < -SIZE32_MAX || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError,
> + "bad marshal data (long size out of range)");
> + return NULL;
> + }
> +
> + size = 1 + (Py_ABS(n) - 1) / PyLong_MARSHAL_RATIO;
> + shorts_in_top_digit = 1 + (Py_ABS(n) - 1) % PyLong_MARSHAL_RATIO;
> + ob = _PyLong_New(size);
> + if (ob == NULL)
> + return NULL;
> +
> + Py_SIZE(ob) = n > 0 ? size : -size;
> +
> + for (i = 0; i < size-1; i++) {
> + d = 0;
> + for (j=0; j < PyLong_MARSHAL_RATIO; j++) {
> + md = r_short(p);
> + if (PyErr_Occurred()) {
> + Py_DECREF(ob);
> + return NULL;
> + }
> + if (md < 0 || md > PyLong_MARSHAL_BASE)
> + goto bad_digit;
> + d += (digit)md << j*PyLong_MARSHAL_SHIFT;
> + }
> + ob->ob_digit[i] = d;
> + }
> +
> + d = 0;
> + for (j=0; j < shorts_in_top_digit; j++) {
> + md = r_short(p);
> + if (PyErr_Occurred()) {
> + Py_DECREF(ob);
> + return NULL;
> + }
> + if (md < 0 || md > PyLong_MARSHAL_BASE)
> + goto bad_digit;
> + /* topmost marshal digit should be nonzero */
> + if (md == 0 && j == shorts_in_top_digit - 1) {
> + Py_DECREF(ob);
> + PyErr_SetString(PyExc_ValueError,
> + "bad marshal data (unnormalized long data)");
> + return NULL;
> + }
> + d += (digit)md << j*PyLong_MARSHAL_SHIFT;
> + }
> + if (PyErr_Occurred()) {
> + Py_DECREF(ob);
> + return NULL;
> + }
> + /* top digit should be nonzero, else the resulting PyLong won't be
> + normalized */
> + ob->ob_digit[size-1] = d;
> + return (PyObject *)ob;
> + bad_digit:
> + Py_DECREF(ob);
> + PyErr_SetString(PyExc_ValueError,
> + "bad marshal data (digit out of range in long)");
> + return NULL;
> +}
> +
> +/* allocate the reflist index for a new object. Return -1 on failure */
> +static Py_ssize_t
> +r_ref_reserve(int flag, RFILE *p)
> +{
> + if (flag) { /* currently only FLAG_REF is defined */
> + Py_ssize_t idx = PyList_GET_SIZE(p->refs);
> + if (idx >= 0x7ffffffe) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (index list too large)");
> + return -1;
> + }
> + if (PyList_Append(p->refs, Py_None) < 0)
> + return -1;
> + return idx;
> + } else
> + return 0;
> +}
> +
> +/* insert the new object 'o' to the reflist at previously
> + * allocated index 'idx'.
> + * 'o' can be NULL, in which case nothing is done.
> + * if 'o' was non-NULL, and the function succeeds, 'o' is returned.
> + * if 'o' was non-NULL, and the function fails, 'o' is released and
> + * NULL returned. This simplifies error checking at the call site since
> + * a single test for NULL for the function result is enough.
> + */
> +static PyObject *
> +r_ref_insert(PyObject *o, Py_ssize_t idx, int flag, RFILE *p)
> +{
> + if (o != NULL && flag) { /* currently only FLAG_REF is defined */
> + PyObject *tmp = PyList_GET_ITEM(p->refs, idx);
> + Py_INCREF(o);
> + PyList_SET_ITEM(p->refs, idx, o);
> + Py_DECREF(tmp);
> + }
> + return o;
> +}
> +
> +/* combination of both above, used when an object can be
> + * created whenever it is seen in the file, as opposed to
> + * after having loaded its sub-objects.
> + */
> +static PyObject *
> +r_ref(PyObject *o, int flag, RFILE *p)
> +{
> + assert(flag & FLAG_REF);
> + if (o == NULL)
> + return NULL;
> + if (PyList_Append(p->refs, o) < 0) {
> + Py_DECREF(o); /* release the new object */
> + return NULL;
> + }
> + return o;
> +}
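
Taken together, these helpers give r_object() below two registration patterns.
Most containers register themselves before reading their children; frozensets
and code objects reserve a slot first and fill it in afterwards, since they
cannot be built until their contents exist. A condensed sketch, not literal
code from this file:

    /* pattern 1: lists, dicts, sets, tuples */
    v = PyList_New(n);
    R_REF(v);                         /* r_ref(): append to p->refs now */
    /* ... read the n children into v ... */

    /* pattern 2: frozensets and code objects */
    idx = r_ref_reserve(flag, p);     /* placeholder slot */
    /* ... read children, then construct v ... */
    v = r_ref_insert(v, idx, flag, p);
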
> +
> +static PyObject *
> +r_object(RFILE *p)
> +{
> +    /* NULL is a valid return value, it does not necessarily mean that
> + an exception is set. */
> + PyObject *v, *v2;
> + Py_ssize_t idx = 0;
> + long i, n;
> + int type, code = r_byte(p);
> + int flag, is_interned = 0;
> + PyObject *retval = NULL;
> +
> + if (code == EOF) {
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where object expected");
> + return NULL;
> + }
> +
> + p->depth++;
> +
> + if (p->depth > MAX_MARSHAL_STACK_DEPTH) {
> + p->depth--;
> + PyErr_SetString(PyExc_ValueError, "recursion limit exceeded");
> + return NULL;
> + }
> +
> + flag = code & FLAG_REF;
> + type = code & ~FLAG_REF;
> +
> +#define R_REF(O) do{\
> + if (flag) \
> + O = r_ref(O, flag, p);\
> +} while (0)
> +
> + switch (type) {
> +
> + case TYPE_NULL:
> + break;
> +
> + case TYPE_NONE:
> + Py_INCREF(Py_None);
> + retval = Py_None;
> + break;
> +
> + case TYPE_STOPITER:
> + Py_INCREF(PyExc_StopIteration);
> + retval = PyExc_StopIteration;
> + break;
> +
> + case TYPE_ELLIPSIS:
> + Py_INCREF(Py_Ellipsis);
> + retval = Py_Ellipsis;
> + break;
> +
> + case TYPE_FALSE:
> + Py_INCREF(Py_False);
> + retval = Py_False;
> + break;
> +
> + case TYPE_TRUE:
> + Py_INCREF(Py_True);
> + retval = Py_True;
> + break;
> +
> + case TYPE_INT:
> + n = r_long(p);
> + retval = PyErr_Occurred() ? NULL : PyLong_FromLong(n);
> + R_REF(retval);
> + break;
> +
> + case TYPE_INT64:
> + retval = r_long64(p);
> + R_REF(retval);
> + break;
> +
> + case TYPE_LONG:
> + retval = r_PyLong(p);
> + R_REF(retval);
> + break;
> +
> + case TYPE_FLOAT:
> + {
> + char buf[256];
> + const char *ptr;
> + double dx;
> + n = r_byte(p);
> + if (n == EOF) {
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where object expected");
> + break;
> + }
> + ptr = r_string(n, p);
> + if (ptr == NULL)
> + break;
> + memcpy(buf, ptr, n);
> + buf[n] = '\0';
> + dx = PyOS_string_to_double(buf, NULL, NULL);
> + if (dx == -1.0 && PyErr_Occurred())
> + break;
> + retval = PyFloat_FromDouble(dx);
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_BINARY_FLOAT:
> + {
> + const unsigned char *buf;
> + double x;
> + buf = (const unsigned char *) r_string(8, p);
> + if (buf == NULL)
> + break;
> + x = _PyFloat_Unpack8(buf, 1);
> + if (x == -1.0 && PyErr_Occurred())
> + break;
> + retval = PyFloat_FromDouble(x);
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_COMPLEX:
> + {
> + char buf[256];
> + const char *ptr;
> + Py_complex c;
> + n = r_byte(p);
> + if (n == EOF) {
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where object expected");
> + break;
> + }
> + ptr = r_string(n, p);
> + if (ptr == NULL)
> + break;
> + memcpy(buf, ptr, n);
> + buf[n] = '\0';
> + c.real = PyOS_string_to_double(buf, NULL, NULL);
> + if (c.real == -1.0 && PyErr_Occurred())
> + break;
> + n = r_byte(p);
> + if (n == EOF) {
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where object expected");
> + break;
> + }
> + ptr = r_string(n, p);
> + if (ptr == NULL)
> + break;
> + memcpy(buf, ptr, n);
> + buf[n] = '\0';
> + c.imag = PyOS_string_to_double(buf, NULL, NULL);
> + if (c.imag == -1.0 && PyErr_Occurred())
> + break;
> + retval = PyComplex_FromCComplex(c);
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_BINARY_COMPLEX:
> + {
> + const unsigned char *buf;
> + Py_complex c;
> + buf = (const unsigned char *) r_string(8, p);
> + if (buf == NULL)
> + break;
> + c.real = _PyFloat_Unpack8(buf, 1);
> + if (c.real == -1.0 && PyErr_Occurred())
> + break;
> + buf = (const unsigned char *) r_string(8, p);
> + if (buf == NULL)
> + break;
> + c.imag = _PyFloat_Unpack8(buf, 1);
> + if (c.imag == -1.0 && PyErr_Occurred())
> + break;
> + retval = PyComplex_FromCComplex(c);
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_STRING:
> + {
> + const char *ptr;
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (bytes object size out of range)");
> + break;
> + }
> + v = PyBytes_FromStringAndSize((char *)NULL, n);
> + if (v == NULL)
> + break;
> + ptr = r_string(n, p);
> + if (ptr == NULL) {
> + Py_DECREF(v);
> + break;
> + }
> + memcpy(PyBytes_AS_STRING(v), ptr, n);
> + retval = v;
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_ASCII_INTERNED:
> + is_interned = 1;
> + /* fall through */
> + case TYPE_ASCII:
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)");
> + break;
> + }
> + goto _read_ascii;
> +
> + case TYPE_SHORT_ASCII_INTERNED:
> + is_interned = 1;
> + /* fall through */
> + case TYPE_SHORT_ASCII:
> + n = r_byte(p);
> + if (n == EOF) {
> + PyErr_SetString(PyExc_EOFError,
> + "EOF read where object expected");
> + break;
> + }
> + _read_ascii:
> + {
> + const char *ptr;
> + ptr = r_string(n, p);
> + if (ptr == NULL)
> + break;
> + v = PyUnicode_FromKindAndData(PyUnicode_1BYTE_KIND, ptr, n);
> + if (v == NULL)
> + break;
> + if (is_interned)
> + PyUnicode_InternInPlace(&v);
> + retval = v;
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_INTERNED:
> + is_interned = 1;
> + /* fall through */
> + case TYPE_UNICODE:
> + {
> + const char *buffer;
> +
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (string size out of range)");
> + break;
> + }
> + if (n != 0) {
> + buffer = r_string(n, p);
> + if (buffer == NULL)
> + break;
> + v = PyUnicode_DecodeUTF8(buffer, n, "surrogatepass");
> + }
> + else {
> + v = PyUnicode_New(0, 0);
> + }
> + if (v == NULL)
> + break;
> + if (is_interned)
> + PyUnicode_InternInPlace(&v);
> + retval = v;
> + R_REF(retval);
> + break;
> + }
> +
> + case TYPE_SMALL_TUPLE:
> + n = (unsigned char) r_byte(p);
> + if (PyErr_Occurred())
> + break;
> + goto _read_tuple;
> + case TYPE_TUPLE:
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (tuple size out of range)");
> + break;
> + }
> + _read_tuple:
> + v = PyTuple_New(n);
> + R_REF(v);
> + if (v == NULL)
> + break;
> +
> + for (i = 0; i < n; i++) {
> + v2 = r_object(p);
> + if ( v2 == NULL ) {
> + if (!PyErr_Occurred())
> + PyErr_SetString(PyExc_TypeError,
> + "NULL object in marshal data for tuple");
> + Py_DECREF(v);
> + v = NULL;
> + break;
> + }
> + PyTuple_SET_ITEM(v, i, v2);
> + }
> + retval = v;
> + break;
> +
> + case TYPE_LIST:
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (list size out of range)");
> + break;
> + }
> + v = PyList_New(n);
> + R_REF(v);
> + if (v == NULL)
> + break;
> + for (i = 0; i < n; i++) {
> + v2 = r_object(p);
> + if ( v2 == NULL ) {
> + if (!PyErr_Occurred())
> + PyErr_SetString(PyExc_TypeError,
> + "NULL object in marshal data for list");
> + Py_DECREF(v);
> + v = NULL;
> + break;
> + }
> + PyList_SET_ITEM(v, i, v2);
> + }
> + retval = v;
> + break;
> +
> + case TYPE_DICT:
> + v = PyDict_New();
> + R_REF(v);
> + if (v == NULL)
> + break;
> + for (;;) {
> + PyObject *key, *val;
> + key = r_object(p);
> + if (key == NULL)
> + break;
> + val = r_object(p);
> + if (val == NULL) {
> + Py_DECREF(key);
> + break;
> + }
> + if (PyDict_SetItem(v, key, val) < 0) {
> + Py_DECREF(key);
> + Py_DECREF(val);
> + break;
> + }
> + Py_DECREF(key);
> + Py_DECREF(val);
> + }
> + if (PyErr_Occurred()) {
> + Py_DECREF(v);
> + v = NULL;
> + }
> + retval = v;
> + break;
> +
> + case TYPE_SET:
> + case TYPE_FROZENSET:
> + n = r_long(p);
> + if (PyErr_Occurred())
> + break;
> + if (n < 0 || n > SIZE32_MAX) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (set size out of range)");
> + break;
> + }
> +
> + if (n == 0 && type == TYPE_FROZENSET) {
> + /* call frozenset() to get the empty frozenset singleton */
> + v = PyObject_CallFunction((PyObject*)&PyFrozenSet_Type, NULL);
> + if (v == NULL)
> + break;
> + R_REF(v);
> + retval = v;
> + }
> + else {
> + v = (type == TYPE_SET) ? PySet_New(NULL) : PyFrozenSet_New(NULL);
> + if (type == TYPE_SET) {
> + R_REF(v);
> + } else {
> + /* must use delayed registration of frozensets because they must
> + * be init with a refcount of 1
> + */
> + idx = r_ref_reserve(flag, p);
> + if (idx < 0)
> + Py_CLEAR(v); /* signal error */
> + }
> + if (v == NULL)
> + break;
> +
> + for (i = 0; i < n; i++) {
> + v2 = r_object(p);
> + if ( v2 == NULL ) {
> + if (!PyErr_Occurred())
> + PyErr_SetString(PyExc_TypeError,
> + "NULL object in marshal data for set");
> + Py_DECREF(v);
> + v = NULL;
> + break;
> + }
> + if (PySet_Add(v, v2) == -1) {
> + Py_DECREF(v);
> + Py_DECREF(v2);
> + v = NULL;
> + break;
> + }
> + Py_DECREF(v2);
> + }
> + if (type != TYPE_SET)
> + v = r_ref_insert(v, idx, flag, p);
> + retval = v;
> + }
> + break;
> +
> + case TYPE_CODE:
> + {
> + int argcount;
> + int kwonlyargcount;
> + int nlocals;
> + int stacksize;
> + int flags;
> + PyObject *code = NULL;
> + PyObject *consts = NULL;
> + PyObject *names = NULL;
> + PyObject *varnames = NULL;
> + PyObject *freevars = NULL;
> + PyObject *cellvars = NULL;
> + PyObject *filename = NULL;
> + PyObject *name = NULL;
> + int firstlineno;
> + PyObject *lnotab = NULL;
> +
> + idx = r_ref_reserve(flag, p);
> + if (idx < 0)
> + break;
> +
> + v = NULL;
> +
> + /* XXX ignore long->int overflows for now */
> + argcount = (int)r_long(p);
> + if (PyErr_Occurred())
> + goto code_error;
> + kwonlyargcount = (int)r_long(p);
> + if (PyErr_Occurred())
> + goto code_error;
> + nlocals = (int)r_long(p);
> + if (PyErr_Occurred())
> + goto code_error;
> + stacksize = (int)r_long(p);
> + if (PyErr_Occurred())
> + goto code_error;
> + flags = (int)r_long(p);
> + if (PyErr_Occurred())
> + goto code_error;
> + code = r_object(p);
> + if (code == NULL)
> + goto code_error;
> + consts = r_object(p);
> + if (consts == NULL)
> + goto code_error;
> + names = r_object(p);
> + if (names == NULL)
> + goto code_error;
> + varnames = r_object(p);
> + if (varnames == NULL)
> + goto code_error;
> + freevars = r_object(p);
> + if (freevars == NULL)
> + goto code_error;
> + cellvars = r_object(p);
> + if (cellvars == NULL)
> + goto code_error;
> + filename = r_object(p);
> + if (filename == NULL)
> + goto code_error;
> + if (PyUnicode_CheckExact(filename)) {
> + if (p->current_filename != NULL) {
> + if (!PyUnicode_Compare(filename, p->current_filename)) {
> + Py_DECREF(filename);
> + Py_INCREF(p->current_filename);
> + filename = p->current_filename;
> + }
> + }
> + else {
> + p->current_filename = filename;
> + }
> + }
> + name = r_object(p);
> + if (name == NULL)
> + goto code_error;
> + firstlineno = (int)r_long(p);
> + if (firstlineno == -1 && PyErr_Occurred())
> + break;
> + lnotab = r_object(p);
> + if (lnotab == NULL)
> + goto code_error;
> +
> + v = (PyObject *) PyCode_New(
> + argcount, kwonlyargcount,
> + nlocals, stacksize, flags,
> + code, consts, names, varnames,
> + freevars, cellvars, filename, name,
> + firstlineno, lnotab);
> + v = r_ref_insert(v, idx, flag, p);
> +
> + code_error:
> + Py_XDECREF(code);
> + Py_XDECREF(consts);
> + Py_XDECREF(names);
> + Py_XDECREF(varnames);
> + Py_XDECREF(freevars);
> + Py_XDECREF(cellvars);
> + Py_XDECREF(filename);
> + Py_XDECREF(name);
> + Py_XDECREF(lnotab);
> + }
> + retval = v;
> + break;
> +
> + case TYPE_REF:
> + n = r_long(p);
> + if (n < 0 || n >= PyList_GET_SIZE(p->refs)) {
> + if (n == -1 && PyErr_Occurred())
> + break;
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (invalid reference)");
> + break;
> + }
> + v = PyList_GET_ITEM(p->refs, n);
> + if (v == Py_None) {
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (invalid reference)");
> + break;
> + }
> + Py_INCREF(v);
> + retval = v;
> + break;
> +
> + default:
> + /* Bogus data got written, which isn't ideal.
> + This will let you keep working and recover. */
> + PyErr_SetString(PyExc_ValueError, "bad marshal data (unknown type code)");
> + break;
> +
> + }
> + p->depth--;
> + return retval;
> +}
> +
> +static PyObject *
> +read_object(RFILE *p)
> +{
> + PyObject *v;
> + if (PyErr_Occurred()) {
> + fprintf(stderr, "XXX readobject called with exception set\n");
> + return NULL;
> + }
> + v = r_object(p);
> + if (v == NULL && !PyErr_Occurred())
> + PyErr_SetString(PyExc_TypeError, "NULL object in marshal data for object");
> + return v;
> +}
> +
> +int
> +PyMarshal_ReadShortFromFile(FILE *fp)
> +{
> + RFILE rf;
> + int res;
> + assert(fp);
> + rf.readable = NULL;
> + rf.fp = fp;
> + rf.current_filename = NULL;
> + rf.end = rf.ptr = NULL;
> + rf.buf = NULL;
> + res = r_short(&rf);
> + if (rf.buf != NULL)
> + PyMem_FREE(rf.buf);
> + return res;
> +}
> +
> +long
> +PyMarshal_ReadLongFromFile(FILE *fp)
> +{
> + RFILE rf;
> + long res;
> + rf.fp = fp;
> + rf.readable = NULL;
> + rf.current_filename = NULL;
> + rf.ptr = rf.end = NULL;
> + rf.buf = NULL;
> + res = r_long(&rf);
> + if (rf.buf != NULL)
> + PyMem_FREE(rf.buf);
> + return res;
> +}
> +
> +/* Return size of file in bytes; < 0 if unknown or INT_MAX if too big */
> +static off_t
> +getfilesize(FILE *fp)
> +{
> + struct _Py_stat_struct st;
> + if (_Py_fstat_noraise(fileno(fp), &st) != 0)
> + return -1;
> +#if SIZEOF_OFF_T == 4
> + else if (st.st_size >= INT_MAX)
> + return (off_t)INT_MAX;
> +#endif
> + else
> + return (off_t)st.st_size;
> +}
> +
> +/* If we can get the size of the file up-front, and it's reasonably small,
> + * read it in one gulp and delegate to ...FromString() instead. Much quicker
> + * than reading a byte at a time from file; speeds .pyc imports.
> + * CAUTION: since this may read the entire remainder of the file, don't
> + * call it unless you know you're done with the file.
> + */
> +PyObject *
> +PyMarshal_ReadLastObjectFromFile(FILE *fp)
> +{
> +/* REASONABLE_FILE_LIMIT is by defn something big enough for Tkinter.pyc. */
> +#define REASONABLE_FILE_LIMIT (1L << 18)
> + off_t filesize;
> + filesize = getfilesize(fp);
> + if (filesize > 0 && filesize <= REASONABLE_FILE_LIMIT) {
> + char* pBuf = (char *)PyMem_MALLOC(filesize);
> + if (pBuf != NULL) {
> + size_t n = fread(pBuf, 1, (size_t)filesize, fp);
> + PyObject* v = PyMarshal_ReadObjectFromString(pBuf, n);
> + PyMem_FREE(pBuf);
> + return v;
> + }
> +
> + }
> + /* We don't have fstat, or we do but the file is larger than
> + * REASONABLE_FILE_LIMIT or malloc failed -- read a byte at a time.
> + */
> + return PyMarshal_ReadObjectFromFile(fp);
> +
> +#undef REASONABLE_FILE_LIMIT
> +}
> +
> +PyObject *
> +PyMarshal_ReadObjectFromFile(FILE *fp)
> +{
> + RFILE rf;
> + PyObject *result;
> + rf.fp = fp;
> + rf.readable = NULL;
> + rf.current_filename = NULL;
> + rf.depth = 0;
> + rf.ptr = rf.end = NULL;
> + rf.buf = NULL;
> + rf.refs = PyList_New(0);
> + if (rf.refs == NULL)
> + return NULL;
> + result = r_object(&rf);
> + Py_DECREF(rf.refs);
> + if (rf.buf != NULL)
> + PyMem_FREE(rf.buf);
> + return result;
> +}
> +
> +PyObject *
> +PyMarshal_ReadObjectFromString(const char *str, Py_ssize_t len)
> +{
> + RFILE rf;
> + PyObject *result;
> + rf.fp = NULL;
> + rf.readable = NULL;
> + rf.current_filename = NULL;
> + rf.ptr = (char *)str;
> + rf.end = (char *)str + len;
> + rf.buf = NULL;
> + rf.depth = 0;
> + rf.refs = PyList_New(0);
> + if (rf.refs == NULL)
> + return NULL;
> + result = r_object(&rf);
> + Py_DECREF(rf.refs);
> + if (rf.buf != NULL)
> + PyMem_FREE(rf.buf);
> + return result;
> +}
> +
> +PyObject *
> +PyMarshal_WriteObjectToString(PyObject *x, int version)
> +{
> + WFILE wf;
> +
> + memset(&wf, 0, sizeof(wf));
> + wf.str = PyBytes_FromStringAndSize((char *)NULL, 50);
> + if (wf.str == NULL)
> + return NULL;
> + wf.ptr = wf.buf = PyBytes_AS_STRING((PyBytesObject *)wf.str);
> + wf.end = wf.ptr + PyBytes_Size(wf.str);
> + wf.error = WFERR_OK;
> + wf.version = version;
> + if (w_init_refs(&wf, version)) {
> + Py_DECREF(wf.str);
> + return NULL;
> + }
> + w_object(x, &wf);
> + w_clear_refs(&wf);
> + if (wf.str != NULL) {
> + char *base = PyBytes_AS_STRING((PyBytesObject *)wf.str);
> + if (wf.ptr - base > PY_SSIZE_T_MAX) {
> + Py_DECREF(wf.str);
> + PyErr_SetString(PyExc_OverflowError,
> + "too much marshal data for a bytes object");
> + return NULL;
> + }
> + if (_PyBytes_Resize(&wf.str, (Py_ssize_t)(wf.ptr - base)) < 0)
> + return NULL;
> + }
> + if (wf.error != WFERR_OK) {
> + Py_XDECREF(wf.str);
> + if (wf.error == WFERR_NOMEMORY)
> + PyErr_NoMemory();
> + else
> + PyErr_SetString(PyExc_ValueError,
> + (wf.error==WFERR_UNMARSHALLABLE)?"unmarshallable object"
> + :"object too deeply nested to marshal");
> + return NULL;
> + }
> + return wf.str;
> +}
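
For reviewers of the string-based pair above, a minimal in-memory round-trip sketch
using only the public calls in this hunk (assumes an already-initialized interpreter;
the demo function name is made up):

    #include "Python.h"
    #include "marshal.h"

    /* Marshal an object to bytes and read it straight back. */
    static int
    marshal_roundtrip_demo(void)
    {
        PyObject *obj  = PyLong_FromLong(42);
        PyObject *blob = NULL, *back = NULL;
        int ok;

        if (obj != NULL)
            blob = PyMarshal_WriteObjectToString(obj, Py_MARSHAL_VERSION);
        if (blob != NULL)
            back = PyMarshal_ReadObjectFromString(PyBytes_AS_STRING(blob),
                                                  PyBytes_GET_SIZE(blob));
        ok = (back != NULL);                  /* back compares equal to obj */
        Py_XDECREF(obj);
        Py_XDECREF(blob);
        Py_XDECREF(back);
        return ok;
    }
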
> +
> +/* And an interface for Python programs... */
> +
> +static PyObject *
> +marshal_dump(PyObject *self, PyObject *args)
> +{
> + /* XXX Quick hack -- need to do this differently */
> + PyObject *x;
> + PyObject *f;
> + int version = Py_MARSHAL_VERSION;
> + PyObject *s;
> + PyObject *res;
> + _Py_IDENTIFIER(write);
> +
> + if (!PyArg_ParseTuple(args, "OO|i:dump", &x, &f, &version))
> + return NULL;
> + s = PyMarshal_WriteObjectToString(x, version);
> + if (s == NULL)
> + return NULL;
> + res = _PyObject_CallMethodId(f, &PyId_write, "O", s);
> + Py_DECREF(s);
> + return res;
> +}
> +
> +PyDoc_STRVAR(dump_doc,
> +"dump(value, file[, version])\n\
> +\n\
> +Write the value on the open file. The value must be a supported type.\n\
> +The file must be a writeable binary file.\n\
> +\n\
> +If the value has (or contains an object that has) an unsupported type, a\n\
> +ValueError exception is raised - but garbage data will also be written\n\
> +to the file. The object will not be properly read back by load().\n\

> +\n\
> +The version argument indicates the data format that dump should use.");
> +
> +static PyObject *
> +marshal_load(PyObject *self, PyObject *f)
> +{
> + PyObject *data, *result;
> + _Py_IDENTIFIER(read);
> + RFILE rf;
> +
> + /*
> + * Make a call to the read method, but read zero bytes.
> + * This is to ensure that the object passed in at least
> + * has a read method which returns bytes.
> + * This can be removed if we guarantee good error handling
> + * for r_string()
> + */
> + data = _PyObject_CallMethodId(f, &PyId_read, "i", 0);
> + if (data == NULL)
> + return NULL;
> + if (!PyBytes_Check(data)) {
> + PyErr_Format(PyExc_TypeError,
> + "f.read() returned not bytes but %.100s",
> + data->ob_type->tp_name);
> + result = NULL;
> + }
> + else {
> + rf.depth = 0;
> + rf.fp = NULL;
> + rf.readable = f;
> + rf.current_filename = NULL;
> + rf.ptr = rf.end = NULL;
> + rf.buf = NULL;
> + if ((rf.refs = PyList_New(0)) != NULL) {
> + result = read_object(&rf);
> + Py_DECREF(rf.refs);
> + if (rf.buf != NULL)
> + PyMem_FREE(rf.buf);
> + } else
> + result = NULL;
> + }
> + Py_DECREF(data);
> + return result;
> +}
> +
> +PyDoc_STRVAR(load_doc,
> +"load(file)\n\
> +\n\
> +Read one value from the open file and return it. If no valid value is\n\
> +read (e.g. because the data has a different Python version's\n\
> +incompatible marshal format), raise EOFError, ValueError or TypeError.\n\
> +The file must be a readable binary file.\n\
> +\n\
> +Note: If an object containing an unsupported type was marshalled with\n\
> +dump(), load() will substitute None for the unmarshallable type.");
> +
> +
> +static PyObject *
> +marshal_dumps(PyObject *self, PyObject *args)
> +{
> + PyObject *x;
> + int version = Py_MARSHAL_VERSION;
> + if (!PyArg_ParseTuple(args, "O|i:dumps", &x, &version))
> + return NULL;
> + return PyMarshal_WriteObjectToString(x, version);
> +}
> +
> +PyDoc_STRVAR(dumps_doc,
> +"dumps(value[, version])\n\
> +\n\
> +Return the bytes object that would be written to a file by dump(value, file).\n\
> +The value must be a supported type. Raise a ValueError exception if\n\
> +value has (or contains an object that has) an unsupported type.\n\
> +\n\
> +The version argument indicates the data format that dumps should use.");
> +
> +
> +static PyObject *
> +marshal_loads(PyObject *self, PyObject *args)
> +{
> + RFILE rf;
> + Py_buffer p;
> + char *s;
> + Py_ssize_t n;
> + PyObject* result;
> + if (!PyArg_ParseTuple(args, "y*:loads", &p))
> + return NULL;
> + s = p.buf;
> + n = p.len;
> + rf.fp = NULL;
> + rf.readable = NULL;
> + rf.current_filename = NULL;
> + rf.ptr = s;
> + rf.end = s + n;
> + rf.depth = 0;
> + if ((rf.refs = PyList_New(0)) == NULL)
> + return NULL;
> + result = read_object(&rf);
> + PyBuffer_Release(&p);
> + Py_DECREF(rf.refs);
> + return result;
> +}
> +
> +PyDoc_STRVAR(loads_doc,
> +"loads(bytes)\n\
> +\n\
> +Convert the bytes-like object to a value. If no valid value is found,\n\
> +raise EOFError, ValueError or TypeError. Extra bytes in the input are\n\
> +ignored.");
> +
> +static PyMethodDef marshal_methods[] = {
> + {"dump", marshal_dump, METH_VARARGS, dump_doc},
> + {"load", marshal_load, METH_O, load_doc},
> + {"dumps", marshal_dumps, METH_VARARGS, dumps_doc},
> + {"loads", marshal_loads, METH_VARARGS, loads_doc},
> + {NULL, NULL} /* sentinel */
> +};
> +
> +
> +PyDoc_STRVAR(module_doc,
> +"This module contains functions that can read and write Python values in\n\
> +a binary format. The format is specific to Python, but independent of\n\
> +machine architecture issues.\n\
> +\n\
> +Not all Python object types are supported; in general, only objects\n\
> +whose value is independent from a particular invocation of Python can be\n\
> +written and read by this module. The following types are supported:\n\
> +None, integers, floating point numbers, strings, bytes, bytearrays,\n\
> +tuples, lists, sets, dictionaries, and code objects, where it\n\
> +should be understood that tuples, lists and dictionaries are only\n\
> +supported as long as the values contained therein are themselves\n\
> +supported; and recursive lists and dictionaries should not be written\n\
> +(they will cause infinite loops).\n\
> +\n\
> +Variables:\n\
> +\n\
> +version -- indicates the format that the module uses. Version 0 is the\n\
> + historical format, version 1 shares interned strings and version 2\n\
> + uses a binary format for floating point numbers.\n\
> + Version 3 shares common object references (New in version 3.4).\n\
> +\n\
> +Functions:\n\
> +\n\
> +dump() -- write value to a file\n\
> +load() -- read value from a file\n\
> +dumps() -- marshal value as a bytes object\n\
> +loads() -- read value from a bytes-like object");
> +
> +
> +
> +static struct PyModuleDef marshalmodule = {
> + PyModuleDef_HEAD_INIT,
> + "marshal",
> + module_doc,
> + 0,
> + marshal_methods,
> + NULL,
> + NULL,
> + NULL,
> + NULL
> +};
> +
> +PyMODINIT_FUNC
> +PyMarshal_Init(void)
> +{
> + PyObject *mod = PyModule_Create(&marshalmodule);
> + if (mod == NULL)
> + return NULL;
> + PyModule_AddIntConstant(mod, "version", Py_MARSHAL_VERSION);
> + return mod;
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
> new file mode 100644
> index 00000000..32ae0e46
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
> @@ -0,0 +1,437 @@
> +/** @file
> + Hash utility functions.
> +
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +/* Set of hash utility functions to help maintaining the invariant that
> + if a==b then hash(a)==hash(b)
> +
> + All the utility functions (_Py_Hash*()) return "-1" to signify an error.
> +*/
> +#include "Python.h"
> +
> +#ifdef __APPLE__
> +# include <libkern/OSByteOrder.h>
> +#elif defined(HAVE_LE64TOH) && defined(HAVE_ENDIAN_H)
> +# include <endian.h>
> +#elif defined(HAVE_LE64TOH) && defined(HAVE_SYS_ENDIAN_H)
> +# include <sys/endian.h>
> +#endif
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +_Py_HashSecret_t _Py_HashSecret;
> +
> +#if Py_HASH_ALGORITHM == Py_HASH_EXTERNAL
> +extern PyHash_FuncDef PyHash_Func;
> +#else
> +static PyHash_FuncDef PyHash_Func;
> +#endif
> +
> +/* Count _Py_HashBytes() calls */
> +#ifdef Py_HASH_STATS
> +#define Py_HASH_STATS_MAX 32
> +static Py_ssize_t hashstats[Py_HASH_STATS_MAX + 1] = {0};
> +#endif
> +
> +/* For numeric types, the hash of a number x is based on the reduction
> + of x modulo the prime P = 2**_PyHASH_BITS - 1. It's designed so that
> + hash(x) == hash(y) whenever x and y are numerically equal, even if
> + x and y have different types.
> +
> + A quick summary of the hashing strategy:
> +
> + (1) First define the 'reduction of x modulo P' for any rational
> + number x; this is a standard extension of the usual notion of
> + reduction modulo P for integers. If x == p/q (written in lowest
> + terms), the reduction is interpreted as the reduction of p times
> + the inverse of the reduction of q, all modulo P; if q is exactly
> + divisible by P then define the reduction to be infinity. So we've
> + got a well-defined map
> +
> + reduce : { rational numbers } -> { 0, 1, 2, ..., P-1, infinity }.
> +
> + (2) Now for a rational number x, define hash(x) by:
> +
> + reduce(x) if x >= 0
> + -reduce(-x) if x < 0
> +
> + If the result of the reduction is infinity (this is impossible for
> + integers, floats and Decimals) then use the predefined hash value
> + _PyHASH_INF for x >= 0, or -_PyHASH_INF for x < 0, instead.
> + _PyHASH_INF, -_PyHASH_INF and _PyHASH_NAN are also used for the
> + hashes of float and Decimal infinities and nans.
> +
> + A selling point for the above strategy is that it makes it possible
> + to compute hashes of decimal and binary floating-point numbers
> + efficiently, even if the exponent of the binary or decimal number
> + is large. The key point is that
> +
> + reduce(x * y) == reduce(x) * reduce(y) (modulo _PyHASH_MODULUS)
> +
> + provided that {reduce(x), reduce(y)} != {0, infinity}. The reduction of a
> + binary or decimal float is never infinity, since the denominator is a power
> + of 2 (for binary) or a divisor of a power of 10 (for decimal). So we have,
> + for nonnegative x,
> +
> + reduce(x * 2**e) == reduce(x) * reduce(2**e) % _PyHASH_MODULUS
> +
> + reduce(x * 10**e) == reduce(x) * reduce(10**e) % _PyHASH_MODULUS
> +
> + and reduce(10**e) can be computed efficiently by the usual modular
> + exponentiation algorithm. For reduce(2**e) it's even better: since
> + P is of the form 2**n-1, reduce(2**e) is 2**(e mod n), and multiplication
> + by 2**(e mod n) modulo 2**n-1 just amounts to a rotation of bits.
> +
> + */
> +
> +Py_hash_t
> +_Py_HashDouble(double v)
> +{
> + int e, sign;
> + double m;
> + Py_uhash_t x, y;
> +
> + if (!Py_IS_FINITE(v)) {
> + if (Py_IS_INFINITY(v))
> + return v > 0 ? _PyHASH_INF : -_PyHASH_INF;
> + else
> + return _PyHASH_NAN;
> + }
> +
> + m = frexp(v, &e);
> +
> + sign = 1;
> + if (m < 0) {
> + sign = -1;
> + m = -m;
> + }
> +
> + /* process 28 bits at a time; this should work well both for binary
> + and hexadecimal floating point. */
> + x = 0;
> + while (m) {
> + x = ((x << 28) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - 28);
> + m *= 268435456.0; /* 2**28 */
> + e -= 28;
> + y = (Py_uhash_t)m; /* pull out integer part */
> + m -= y;
> + x += y;
> + if (x >= _PyHASH_MODULUS)
> + x -= _PyHASH_MODULUS;
> + }
> +
> + /* adjust for the exponent; first reduce it modulo _PyHASH_BITS */
> + e = e >= 0 ? e % _PyHASH_BITS : _PyHASH_BITS-1-((-1-e) % _PyHASH_BITS);
> + x = ((x << e) & _PyHASH_MODULUS) | x >> (_PyHASH_BITS - e);
> +
> + x = x * sign;
> + if (x == (Py_uhash_t)-1)
> + x = (Py_uhash_t)-2;
> + return (Py_hash_t)x;
> +}
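
To make the strategy comment above concrete: on a 64-bit build _PyHASH_BITS is 61, so
P = 2**61 - 1. For x = 0.5 = 1/2 the reduction is the inverse of 2 modulo P, which is
(P + 1)/2 = 2**60, so _Py_HashDouble(0.5) returns 1152921504606846976 -- the same value
hash(0.5) reports at the Python level. A tiny self-check sketch (not part of the patch,
and it assumes a 64-bit target):

    #include <assert.h>
    #include "Python.h"

    /* Assumes SIZEOF_VOID_P == 8, i.e. _PyHASH_BITS == 61. */
    static void
    check_hash_half(void)
    {
        assert(_Py_HashDouble(0.5) == ((Py_hash_t)1 << 60));
    }
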
> +
> +Py_hash_t
> +_Py_HashPointer(void *p)
> +{
> + Py_hash_t x;
> + size_t y = (size_t)p;
> + /* bottom 3 or 4 bits are likely to be 0; rotate y by 4 to avoid
> + excessive hash collisions for dicts and sets */
> + y = (y >> 4) | (y << (8 * SIZEOF_VOID_P - 4));
> + x = (Py_hash_t)y;
> + if (x == -1)
> + x = -2;
> + return x;
> +}
> +
> +Py_hash_t
> +_Py_HashBytes(const void *src, Py_ssize_t len)
> +{
> + Py_hash_t x;
> + /*
> + We make the hash of the empty string be 0, rather than using
> + (prefix ^ suffix), since this slightly obfuscates the hash secret
> + */
> + if (len == 0) {
> + return 0;
> + }
> +
> +#ifdef Py_HASH_STATS
> + hashstats[(len <= Py_HASH_STATS_MAX) ? len : 0]++;
> +#endif
> +
> +#if Py_HASH_CUTOFF > 0
> + if (len < Py_HASH_CUTOFF) {
> + /* Optimize hashing of very small strings with inline DJBX33A. */
> + Py_uhash_t hash;
> + const unsigned char *p = src;
> + hash = 5381; /* DJBX33A starts with 5381 */
> +
> + switch(len) {
> + /* ((hash << 5) + hash) + *p == hash * 33 + *p */
> + case 7: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 6: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 5: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 4: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 3: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 2: hash = ((hash << 5) + hash) + *p++; /* fallthrough */
> + case 1: hash = ((hash << 5) + hash) + *p++; break;
> + default:
> + assert(0);
> + }
> + hash ^= len;
> + hash ^= (Py_uhash_t) _Py_HashSecret.djbx33a.suffix;
> + x = (Py_hash_t)hash;
> + }
> + else
> +#endif /* Py_HASH_CUTOFF */
> + x = PyHash_Func.hash(src, len);
> +
> + if (x == -1)
> + return -2;
> + return x;
> +}
> +
> +void
> +_PyHash_Fini(void)
> +{
> +#ifdef Py_HASH_STATS
> + int i;
> + Py_ssize_t total = 0;
> + char *fmt = "%2i %8" PY_FORMAT_SIZE_T "d %8" PY_FORMAT_SIZE_T "d\n";
> +
> + fprintf(stderr, "len calls total\n");
> + for (i = 1; i <= Py_HASH_STATS_MAX; i++) {
> + total += hashstats[i];
> + fprintf(stderr, fmt, i, hashstats[i], total);
> + }
> + total += hashstats[0];
> + fprintf(stderr, "> %8" PY_FORMAT_SIZE_T "d %8" PY_FORMAT_SIZE_T "d\n",
> + hashstats[0], total);
> +#endif
> +}
> +
> +PyHash_FuncDef *
> +PyHash_GetFuncDef(void)
> +{
> + return &PyHash_Func;
> +}
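
For reference, the descriptor returned here is (as far as I can tell) what sys.hash_info
is populated from; a minimal sketch that just reports which algorithm a given build
selected (the helper name is made up):

    #include <stdio.h>
    #include "Python.h"

    static void
    report_hash_func(void)
    {
        PyHash_FuncDef *def = PyHash_GetFuncDef();
        printf("hash: %s (%d-bit hash, %d-bit key)\n",
               def->name, def->hash_bits, def->seed_bits);
    }
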
> +
> +/* Optimized memcpy() for Windows */
> +#ifdef _MSC_VER
> +# if SIZEOF_PY_UHASH_T == 4
> +# define PY_UHASH_CPY(dst, src) do { \
> + dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3]; \
> + } while(0)
> +# elif SIZEOF_PY_UHASH_T == 8
> +# define PY_UHASH_CPY(dst, src) do { \
> + dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3]; \
> + dst[4] = src[4]; dst[5] = src[5]; dst[6] = src[6]; dst[7] = src[7]; \
> + } while(0)
> +# else
> +# error SIZEOF_PY_UHASH_T must be 4 or 8
> +# endif /* SIZEOF_PY_UHASH_T */
> +#else /* not Windows */
> +# define PY_UHASH_CPY(dst, src) memcpy(dst, src, SIZEOF_PY_UHASH_T)
> +#endif /* _MSC_VER */
> +
> +
> +#if Py_HASH_ALGORITHM == Py_HASH_FNV
> +/* **************************************************************************
> + * Modified Fowler-Noll-Vo (FNV) hash function
> + */
> +static Py_hash_t
> +fnv(const void *src, Py_ssize_t len)
> +{
> + const unsigned char *p = src;
> + Py_uhash_t x;
> + Py_ssize_t remainder, blocks;
> + union {
> + Py_uhash_t value;
> + unsigned char bytes[SIZEOF_PY_UHASH_T];
> + } block;
> +
> +#ifdef Py_DEBUG
> + assert(_Py_HashSecret_Initialized);
> +#endif
> + remainder = len % SIZEOF_PY_UHASH_T;
> + if (remainder == 0) {
> + /* Process at least one block byte by byte to reduce hash collisions
> + * for strings with common prefixes. */
> + remainder = SIZEOF_PY_UHASH_T;
> + }
> + blocks = (len - remainder) / SIZEOF_PY_UHASH_T;
> +
> + x = (Py_uhash_t) _Py_HashSecret.fnv.prefix;
> + x ^= (Py_uhash_t) *p << 7;
> + while (blocks--) {
> + PY_UHASH_CPY(block.bytes, p);
> + x = (_PyHASH_MULTIPLIER * x) ^ block.value;
> + p += SIZEOF_PY_UHASH_T;
> + }
> + /* add remainder */
> + for (; remainder > 0; remainder--)
> + x = (_PyHASH_MULTIPLIER * x) ^ (Py_uhash_t) *p++;
> + x ^= (Py_uhash_t) len;
> + x ^= (Py_uhash_t) _Py_HashSecret.fnv.suffix;
> + if (x == (Py_uhash_t) -1) {
> + x = (Py_uhash_t) -2;
> + }
> + return x;
> +}
> +
> +static PyHash_FuncDef PyHash_Func = {fnv, "fnv", 8 * SIZEOF_PY_HASH_T,
> + 16 * SIZEOF_PY_HASH_T};
> +
> +#endif /* Py_HASH_ALGORITHM == Py_HASH_FNV */
> +
> +
> +#if Py_HASH_ALGORITHM == Py_HASH_SIPHASH24
> +/* **************************************************************************
> + <MIT License>
> + Copyright (c) 2013 Marek Majkowski <marek@popcount.org>
> +
> + Permission is hereby granted, free of charge, to any person obtaining a copy
> + of this software and associated documentation files (the "Software"), to deal
> + in the Software without restriction, including without limitation the rights
> + to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
> + copies of the Software, and to permit persons to whom the Software is
> + furnished to do so, subject to the following conditions:
> +
> + The above copyright notice and this permission notice shall be included in
> + all copies or substantial portions of the Software.
> +
> + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
> + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
> + THE SOFTWARE.
> + </MIT License>
> +
> + Original location:
> + https://github.com/majek/csiphash/
> +
> + Solution inspired by code from:
> + Samuel Neves (supercop/crypto_auth/siphash24/little)
> + djb (supercop/crypto_auth/siphash24/little2)
> + Jean-Philippe Aumasson (https://131002.net/siphash/siphash24.c)
> +
> + Modified for Python by Christian Heimes:
> + - C89 / MSVC compatibility
> + - _rotl64() on Windows
> + - letoh64() fallback
> +*/
> +
> +/* byte swap little endian to host endian
> + * Endian conversion not only ensures that the hash function returns the same
> + * value on all platforms. It is also required for a good dispersion of
> + * the hash values' least significant bits.
> + */
> +#if PY_LITTLE_ENDIAN
> +# define _le64toh(x) ((uint64_t)(x))
> +#elif defined(__APPLE__)
> +# define _le64toh(x) OSSwapLittleToHostInt64(x)
> +#elif defined(HAVE_LETOH64)
> +# define _le64toh(x) le64toh(x)
> +#else
> +# define _le64toh(x) (((uint64_t)(x) << 56) | \
> + (((uint64_t)(x) << 40) & 0xff000000000000ULL) | \
> + (((uint64_t)(x) << 24) & 0xff0000000000ULL) | \
> + (((uint64_t)(x) << 8) & 0xff00000000ULL) | \
> + (((uint64_t)(x) >> 8) & 0xff000000ULL) | \
> + (((uint64_t)(x) >> 24) & 0xff0000ULL) | \
> + (((uint64_t)(x) >> 40) & 0xff00ULL) | \
> + ((uint64_t)(x) >> 56))
> +#endif
> +
> +
> +#if defined(_MSC_VER) && !defined(UEFI_MSVC_64) && !defined(UEFI_MSVC_32)
> +# define ROTATE(x, b) _rotl64(x, b)
> +#else
> +# define ROTATE(x, b) (uint64_t)( ((x) << (b)) | ( (x) >> (64 - (b))) )
> +#endif
> +
> +#define HALF_ROUND(a,b,c,d,s,t) \
> + a += b; c += d; \
> + b = ROTATE(b, s) ^ a; \
> + d = ROTATE(d, t) ^ c; \
> + a = ROTATE(a, 32);
> +
> +#define DOUBLE_ROUND(v0,v1,v2,v3) \
> + HALF_ROUND(v0,v1,v2,v3,13,16); \
> + HALF_ROUND(v2,v1,v0,v3,17,21); \
> + HALF_ROUND(v0,v1,v2,v3,13,16); \
> + HALF_ROUND(v2,v1,v0,v3,17,21);
> +
> +
> +static Py_hash_t
> +siphash24(const void *src, Py_ssize_t src_sz) {
> + uint64_t k0 = _le64toh(_Py_HashSecret.siphash.k0);
> + uint64_t k1 = _le64toh(_Py_HashSecret.siphash.k1);
> + uint64_t b = (uint64_t)src_sz << 56;
> + const uint8_t *in = (uint8_t*)src;
> +
> + uint64_t v0 = k0 ^ 0x736f6d6570736575ULL;
> + uint64_t v1 = k1 ^ 0x646f72616e646f6dULL;
> + uint64_t v2 = k0 ^ 0x6c7967656e657261ULL;
> + uint64_t v3 = k1 ^ 0x7465646279746573ULL;
> +
> + uint64_t t;
> + uint8_t *pt;
> +
> + while (src_sz >= 8) {
> + uint64_t mi;
> + memcpy(&mi, in, sizeof(mi));
> + mi = _le64toh(mi);
> + in += sizeof(mi);
> + src_sz -= sizeof(mi);
> + v3 ^= mi;
> + DOUBLE_ROUND(v0,v1,v2,v3);
> + v0 ^= mi;
> + }
> +
> + t = 0;
> + pt = (uint8_t *)&t;
> + switch (src_sz) {
> + case 7: pt[6] = in[6]; /* fall through */
> + case 6: pt[5] = in[5]; /* fall through */
> + case 5: pt[4] = in[4]; /* fall through */
> + case 4: memcpy(pt, in, sizeof(uint32_t)); break;
> + case 3: pt[2] = in[2]; /* fall through */
> + case 2: pt[1] = in[1]; /* fall through */
> + case 1: pt[0] = in[0]; /* fall through */
> + }
> + b |= _le64toh(t);
> +
> + v3 ^= b;
> + DOUBLE_ROUND(v0,v1,v2,v3);
> + v0 ^= b;
> + v2 ^= 0xff;
> + DOUBLE_ROUND(v0,v1,v2,v3);
> + DOUBLE_ROUND(v0,v1,v2,v3);
> +
> + /* modified */
> + t = (v0 ^ v1) ^ (v2 ^ v3);
> + return (Py_hash_t)t;
> +}
> +
> +static PyHash_FuncDef PyHash_Func = {siphash24, "siphash24", 64, 128};
> +
> +#endif /* Py_HASH_ALGORITHM == Py_HASH_SIPHASH24 */
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
> new file mode 100644
> index 00000000..919b5c18
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
> @@ -0,0 +1,1726 @@
> +/** @file
> + Python interpreter top-level routines, including init/exit
> +
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +#include "Python.h"
> +
> +#include "Python-ast.h"
> +#undef Yield /* undefine macro conflicting with winbase.h */
> +#include "grammar.h"
> +#include "node.h"
> +#include "token.h"
> +#include "parsetok.h"
> +#include "errcode.h"
> +#include "code.h"
> +#include "symtable.h"
> +#include "ast.h"
> +#include "marshal.h"
> +#include "osdefs.h"
> +#include <locale.h>
> +
> +#ifdef HAVE_SIGNAL_H
> +#include <signal.h>
> +#endif
> +
> +#ifdef MS_WINDOWS
> +#include "malloc.h" /* for alloca */
> +#endif
> +
> +#ifdef HAVE_LANGINFO_H
> +#include <langinfo.h>
> +#endif
> +
> +#ifdef MS_WINDOWS
> +#undef BYTE
> +#include "windows.h"
> +
> +extern PyTypeObject PyWindowsConsoleIO_Type;
> +#define PyWindowsConsoleIO_Check(op) (PyObject_TypeCheck((op), &PyWindowsConsoleIO_Type))
> +#endif
> +
> +_Py_IDENTIFIER(flush);
> +_Py_IDENTIFIER(name);
> +_Py_IDENTIFIER(stdin);
> +_Py_IDENTIFIER(stdout);
> +_Py_IDENTIFIER(stderr);
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +extern wchar_t *Py_GetPath(void);
> +
> +extern grammar _PyParser_Grammar; /* From graminit.c */
> +
> +/* Forward */
> +static void initmain(PyInterpreterState *interp);
> +static int initfsencoding(PyInterpreterState *interp);
> +static void initsite(void);
> +static int initstdio(void);
> +static void initsigs(void);
> +static void call_py_exitfuncs(void);
> +static void wait_for_thread_shutdown(void);
> +static void call_ll_exitfuncs(void);
> +extern int _PyUnicode_Init(void);
> +extern int _PyStructSequence_Init(void);
> +extern void _PyUnicode_Fini(void);
> +extern int _PyLong_Init(void);
> +extern void PyLong_Fini(void);
> +extern int _PyFaulthandler_Init(void);
> +extern void _PyFaulthandler_Fini(void);
> +extern void _PyHash_Fini(void);
> +extern int _PyTraceMalloc_Init(void);
> +extern int _PyTraceMalloc_Fini(void);
> +
> +#ifdef WITH_THREAD
> +extern void _PyGILState_Init(PyInterpreterState *, PyThreadState *);
> +extern void _PyGILState_Fini(void);
> +#endif /* WITH_THREAD */
> +
> +/* Global configuration variable declarations are in pydebug.h */
> +/* XXX (ncoghlan): move those declarations to pylifecycle.h? */
> +int Py_DebugFlag; /* Needed by parser.c */
> +int Py_VerboseFlag; /* Needed by import.c */
> +int Py_QuietFlag; /* Needed by sysmodule.c */
> +int Py_InteractiveFlag; /* Needed by Py_FdIsInteractive() below */
> +int Py_InspectFlag; /* Needed to determine whether to exit at SystemExit */
> +int Py_OptimizeFlag = 0; /* Needed by compile.c */
> +int Py_NoSiteFlag; /* Suppress 'import site' */
> +int Py_BytesWarningFlag; /* Warn on str(bytes) and str(buffer) */
> +int Py_UseClassExceptionsFlag = 1; /* Needed by bltinmodule.c: deprecated */
> +int Py_FrozenFlag; /* Needed by getpath.c */
> +int Py_IgnoreEnvironmentFlag; /* e.g. PYTHONPATH, PYTHONHOME */
> +int Py_DontWriteBytecodeFlag; /* Suppress writing bytecode files (*.pyc) */
> +int Py_NoUserSiteDirectory = 0; /* for -s and site.py */
> +int Py_UnbufferedStdioFlag = 0; /* Unbuffered binary std{in,out,err} */
> +int Py_HashRandomizationFlag = 0; /* for -R and PYTHONHASHSEED */
> +int Py_IsolatedFlag = 0; /* for -I, isolate from user's env */
> +#ifdef MS_WINDOWS
> +int Py_LegacyWindowsFSEncodingFlag = 0; /* Uses mbcs instead of utf-8 */
> +int Py_LegacyWindowsStdioFlag = 0; /* Uses FileIO instead of WindowsConsoleIO */
> +#endif
> +
> +PyThreadState *_Py_Finalizing = NULL;
> +
> +/* Hack to force loading of object files */
> +int (*_PyOS_mystrnicmp_hack)(const char *, const char *, Py_ssize_t) = \
> + PyOS_mystrnicmp; /* Python/pystrcmp.o */
> +
> +/* PyModule_GetWarningsModule is no longer necessary as of 2.6
> +since _warnings is builtin. This API should not be used. */
> +PyObject *
> +PyModule_GetWarningsModule(void)
> +{
> + return PyImport_ImportModule("warnings");
> +}
> +
> +static int initialized = 0;
> +
> +/* API to access the initialized flag -- useful for esoteric use */
> +
> +int
> +Py_IsInitialized(void)
> +{
> + return initialized;
> +}
> +
> +/* Helper to allow an embedding application to override the normal
> + * mechanism that attempts to figure out an appropriate IO encoding
> + */
> +
> +static char *_Py_StandardStreamEncoding = NULL;
> +static char *_Py_StandardStreamErrors = NULL;
> +
> +int
> +Py_SetStandardStreamEncoding(const char *encoding, const char *errors)
> +{
> + if (Py_IsInitialized()) {
> + /* This is too late to have any effect */
> + return -1;
> + }
> + /* Can't call PyErr_NoMemory() on errors, as Python hasn't been
> + * initialised yet.
> + *
> + * However, the raw memory allocators are initialised appropriately
> + * as C static variables, so _PyMem_RawStrdup is OK even though
> + * Py_Initialize hasn't been called yet.
> + */
> + if (encoding) {
> + _Py_StandardStreamEncoding = _PyMem_RawStrdup(encoding);
> + if (!_Py_StandardStreamEncoding) {
> + return -2;
> + }
> + }
> + if (errors) {
> + _Py_StandardStreamErrors = _PyMem_RawStrdup(errors);
> + if (!_Py_StandardStreamErrors) {
> + if (_Py_StandardStreamEncoding) {
> + PyMem_RawFree(_Py_StandardStreamEncoding);
> + }
> + return -3;
> + }
> + }
> +#ifdef MS_WINDOWS
> + if (_Py_StandardStreamEncoding) {
> + /* Overriding the stream encoding implies legacy streams */
> + Py_LegacyWindowsStdioFlag = 1;
> + }
> +#endif
> + return 0;
> +}
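
As the comment notes, this hook is for embedders and only has an effect before
Py_Initialize(); a hedged sketch of the intended call order (the wrapper function name
is made up):

    #include "Python.h"

    /* Force UTF-8 std streams before bringing the interpreter up. */
    static int
    run_embedded_utf8(void)
    {
        if (Py_SetStandardStreamEncoding("utf-8", "surrogateescape") != 0)
            return -1;                /* too late once Py_Initialize() ran */
        Py_Initialize();
        PyRun_SimpleString("import sys; print(sys.stdout.encoding)");
        return Py_FinalizeEx();
    }
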
> +
> +/* Global initializations. Can be undone by Py_FinalizeEx(). Don't
> + call this twice without an intervening Py_FinalizeEx() call. When
> + initializations fail, a fatal error is issued and the function does
> + not return. On return, the first thread and interpreter state have
> + been created.
> +
> + Locking: you must hold the interpreter lock while calling this.
> + (If the lock has not yet been initialized, that's equivalent to
> + having the lock, but you cannot use multiple threads.)
> +
> +*/
> +
> +static int
> +add_flag(int flag, const char *envs)
> +{
> + int env = atoi(envs);
> + if (flag < env)
> + flag = env;
> + if (flag < 1)
> + flag = 1;
> + return flag;
> +}
> +
> +static char*
> +get_codec_name(const char *encoding)
> +{
> + char *name_utf8, *name_str;
> + PyObject *codec, *name = NULL;
> +
> + codec = _PyCodec_Lookup(encoding);
> + if (!codec)
> + goto error;
> +
> + name = _PyObject_GetAttrId(codec, &PyId_name);
> + Py_CLEAR(codec);
> + if (!name)
> + goto error;
> +
> + name_utf8 = PyUnicode_AsUTF8(name);
> + if (name_utf8 == NULL)
> + goto error;
> + name_str = _PyMem_RawStrdup(name_utf8);
> + Py_DECREF(name);
> + if (name_str == NULL) {
> + PyErr_NoMemory();
> + return NULL;
> + }
> + return name_str;
> +
> +error:
> + Py_XDECREF(codec);
> + Py_XDECREF(name);
> + return NULL;
> +}
> +
> +static char*
> +get_locale_encoding(void)
> +{
> +#ifdef MS_WINDOWS
> + char codepage[100];
> + PyOS_snprintf(codepage, sizeof(codepage), "cp%d", GetACP());
> + return get_codec_name(codepage);
> +#elif defined(HAVE_LANGINFO_H) && defined(CODESET)
> + char* codeset = nl_langinfo(CODESET);
> + if (!codeset || codeset[0] == '\0') {
> + PyErr_SetString(PyExc_ValueError, "CODESET is not set or empty");
> + return NULL;
> + }
> + return get_codec_name(codeset);
> +#elif defined(__ANDROID__)
> + return get_codec_name("UTF-8");
> +#else
> + PyErr_SetNone(PyExc_NotImplementedError);
> + return NULL;
> +#endif
> +}
> +
> +static void
> +import_init(PyInterpreterState *interp, PyObject *sysmod)
> +{
> + PyObject *importlib;
> + PyObject *impmod;
> + PyObject *sys_modules;
> + PyObject *value;
> +
> + /* Import _importlib through its frozen version, _frozen_importlib. */
> + if (PyImport_ImportFrozenModule("_frozen_importlib") <= 0) {
> + Py_FatalError("Py_Initialize: can't import _frozen_importlib");
> + }
> + else if (Py_VerboseFlag) {
> + PySys_FormatStderr("import _frozen_importlib # frozen\n");
> + }
> + importlib = PyImport_AddModule("_frozen_importlib");
> + if (importlib == NULL) {
> + Py_FatalError("Py_Initialize: couldn't get _frozen_importlib from "
> + "sys.modules");
> + }
> + interp->importlib = importlib;
> + Py_INCREF(interp->importlib);
> +
> + interp->import_func = PyDict_GetItemString(interp->builtins, "__import__");
> + if (interp->import_func == NULL)
> + Py_FatalError("Py_Initialize: __import__ not found");
> + Py_INCREF(interp->import_func);
> +
> + /* Import the _imp module */
> + impmod = PyInit_imp();
> + if (impmod == NULL) {
> + Py_FatalError("Py_Initialize: can't import _imp");
> + }
> + else if (Py_VerboseFlag) {
> + PySys_FormatStderr("import _imp # builtin\n");
> + }
> + sys_modules = PyImport_GetModuleDict();
> + if (Py_VerboseFlag) {
> + PySys_FormatStderr("import sys # builtin\n");
> + }
> + if (PyDict_SetItemString(sys_modules, "_imp", impmod) < 0) {
> + Py_FatalError("Py_Initialize: can't save _imp to sys.modules");
> + }
> +
> + /* Install importlib as the implementation of import */
> + value = PyObject_CallMethod(importlib, "_install", "OO", sysmod, impmod);
> + if (value == NULL) {
> + PyErr_Print();
> + Py_FatalError("Py_Initialize: importlib install failed");
> + }
> + Py_DECREF(value);
> + Py_DECREF(impmod);
> +
> + _PyImportZip_Init();
> +}
> +
> +
> +void
> +_Py_InitializeEx_Private(int install_sigs, int install_importlib)
> +{
> + PyInterpreterState *interp;
> + PyThreadState *tstate;
> + PyObject *bimod, *sysmod, *pstderr;
> + char *p;
> + extern void _Py_ReadyTypes(void);
> +
> + if (initialized)
> + return;
> + initialized = 1;
> + _Py_Finalizing = NULL;
> +
> +#ifdef HAVE_SETLOCALE
> + /* Set up the LC_CTYPE locale, so we can obtain
> + the locale's charset without having to switch
> + locales. */
> + setlocale(LC_CTYPE, "");
> +#endif
> +
> + if ((p = Py_GETENV("PYTHONDEBUG")) && *p != '\0')
> + Py_DebugFlag = add_flag(Py_DebugFlag, p);
> + if ((p = Py_GETENV("PYTHONVERBOSE")) && *p != '\0')
> + Py_VerboseFlag = add_flag(Py_VerboseFlag, p);
> + if ((p = Py_GETENV("PYTHONOPTIMIZE")) && *p != '\0')
> + Py_OptimizeFlag = add_flag(Py_OptimizeFlag, p);
> + if ((p = Py_GETENV("PYTHONDONTWRITEBYTECODE")) && *p != '\0')
> + Py_DontWriteBytecodeFlag = add_flag(Py_DontWriteBytecodeFlag, p);
> +#ifdef MS_WINDOWS
> + if ((p = Py_GETENV("PYTHONLEGACYWINDOWSFSENCODING")) && *p != '\0')
> + Py_LegacyWindowsFSEncodingFlag = add_flag(Py_LegacyWindowsFSEncodingFlag, p);
> + if ((p = Py_GETENV("PYTHONLEGACYWINDOWSSTDIO")) && *p != '\0')
> + Py_LegacyWindowsStdioFlag = add_flag(Py_LegacyWindowsStdioFlag, p);
> +#endif
> +
> + _PyRandom_Init();
> +
> + interp = PyInterpreterState_New();
> + if (interp == NULL)
> + Py_FatalError("Py_Initialize: can't make first interpreter");
> +
> + tstate = PyThreadState_New(interp);
> + if (tstate == NULL)
> + Py_FatalError("Py_Initialize: can't make first thread");
> + (void) PyThreadState_Swap(tstate);
> +
> +#ifdef WITH_THREAD
> + /* We can't call _PyEval_FiniThreads() in Py_FinalizeEx because
> + destroying the GIL might fail when it is being referenced from
> + another running thread (see issue #9901).
> + Instead we destroy the previously created GIL here, which ensures
> + that we can call Py_Initialize / Py_FinalizeEx multiple times. */
> + _PyEval_FiniThreads();
> +
> + /* Auto-thread-state API */
> + _PyGILState_Init(interp, tstate);
> +#endif /* WITH_THREAD */
> +
> + _Py_ReadyTypes();
> +
> + if (!_PyFrame_Init())
> + Py_FatalError("Py_Initialize: can't init frames");
> +
> + if (!_PyLong_Init())
> + Py_FatalError("Py_Initialize: can't init longs");
> +
> + if (!PyByteArray_Init())
> + Py_FatalError("Py_Initialize: can't init bytearray");
> +
> + if (!_PyFloat_Init())
> + Py_FatalError("Py_Initialize: can't init float");
> +
> + interp->modules = PyDict_New();
> + if (interp->modules == NULL)
> + Py_FatalError("Py_Initialize: can't make modules dictionary");
> +
> + /* Init Unicode implementation; relies on the codec registry */
> + if (_PyUnicode_Init() < 0)
> + Py_FatalError("Py_Initialize: can't initialize unicode");
> + if (_PyStructSequence_Init() < 0)
> + Py_FatalError("Py_Initialize: can't initialize structseq");
> +
> + bimod = _PyBuiltin_Init();
> + if (bimod == NULL)
> + Py_FatalError("Py_Initialize: can't initialize builtins modules");
> + _PyImport_FixupBuiltin(bimod, "builtins");
> + interp->builtins = PyModule_GetDict(bimod);
> + if (interp->builtins == NULL)
> + Py_FatalError("Py_Initialize: can't initialize builtins dict");
> + Py_INCREF(interp->builtins);
> +
> + /* initialize builtin exceptions */
> + _PyExc_Init(bimod);
> +
> + sysmod = _PySys_Init();
> + if (sysmod == NULL)
> + Py_FatalError("Py_Initialize: can't initialize sys");
> + interp->sysdict = PyModule_GetDict(sysmod);
> + if (interp->sysdict == NULL)
> + Py_FatalError("Py_Initialize: can't initialize sys dict");
> + Py_INCREF(interp->sysdict);
> + _PyImport_FixupBuiltin(sysmod, "sys");
> + PySys_SetPath(Py_GetPath());
> + PyDict_SetItemString(interp->sysdict, "modules",
> + interp->modules);
> +
> + /* Set up a preliminary stderr printer until we have enough
> + infrastructure for the io module in place. */
> + pstderr = PyFile_NewStdPrinter(fileno(stderr));
> + if (pstderr == NULL)
> + Py_FatalError("Py_Initialize: can't set preliminary stderr");
> + _PySys_SetObjectId(&PyId_stderr, pstderr);
> + PySys_SetObject("__stderr__", pstderr);
> + Py_DECREF(pstderr);
> +
> + _PyImport_Init();
> +
> + _PyImportHooks_Init();
> +
> + /* Initialize _warnings. */
> + _PyWarnings_Init();
> +
> + if (!install_importlib)
> + return;
> +
> + if (_PyTime_Init() < 0)
> + Py_FatalError("Py_Initialize: can't initialize time");
> +
> + import_init(interp, sysmod);
> +
> + /* initialize the faulthandler module */
> + if (_PyFaulthandler_Init())
> + Py_FatalError("Py_Initialize: can't initialize faulthandler");
> +#ifndef UEFI_C_SOURCE
> + if (initfsencoding(interp) < 0)
> + Py_FatalError("Py_Initialize: unable to load the file system codec");
> +
> + if (install_sigs)
> + initsigs(); /* Signal handling stuff, including initintr() */
> +
> + if (_PyTraceMalloc_Init() < 0)
> + Py_FatalError("Py_Initialize: can't initialize tracemalloc");
> +#endif
> + initmain(interp); /* Module __main__ */
> + if (initstdio() < 0)
> + Py_FatalError(
> + "Py_Initialize: can't initialize sys standard streams");
> +
> + /* Initialize warnings. */
> + if (PySys_HasWarnOptions()) {
> + PyObject *warnings_module = PyImport_ImportModule("warnings");
> + if (warnings_module == NULL) {
> + fprintf(stderr, "'import warnings' failed; traceback:\n");
> + PyErr_Print();
> + }
> + Py_XDECREF(warnings_module);
> + }
> +
> + if (!Py_NoSiteFlag)
> + initsite(); /* Module site */
> +}
> +
> +void
> +Py_InitializeEx(int install_sigs)
> +{
> + _Py_InitializeEx_Private(install_sigs, 1);
> +}
> +
> +void
> +Py_Initialize(void)
> +{
> + Py_InitializeEx(1);
> +}
> +
> +
> +#ifdef COUNT_ALLOCS
> +extern void dump_counts(FILE*);
> +#endif
> +
> +/* Flush stdout and stderr */
> +
> +static int
> +file_is_closed(PyObject *fobj)
> +{
> + int r;
> + PyObject *tmp = PyObject_GetAttrString(fobj, "closed");
> + if (tmp == NULL) {
> + PyErr_Clear();
> + return 0;
> + }
> + r = PyObject_IsTrue(tmp);
> + Py_DECREF(tmp);
> + if (r < 0)
> + PyErr_Clear();
> + return r > 0;
> +}
> +
> +static int
> +flush_std_files(void)
> +{
> + PyObject *fout = _PySys_GetObjectId(&PyId_stdout);
> + PyObject *ferr = _PySys_GetObjectId(&PyId_stderr);
> + PyObject *tmp;
> + int status = 0;
> +
> + if (fout != NULL && fout != Py_None && !file_is_closed(fout)) {
> + tmp = _PyObject_CallMethodId(fout, &PyId_flush, NULL);
> + if (tmp == NULL) {
> + PyErr_WriteUnraisable(fout);
> + status = -1;
> + }
> + else
> + Py_DECREF(tmp);
> + }
> +
> + if (ferr != NULL && ferr != Py_None && !file_is_closed(ferr)) {
> + tmp = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
> + if (tmp == NULL) {
> + PyErr_Clear();
> + status = -1;
> + }
> + else
> + Py_DECREF(tmp);
> + }
> +
> + return status;
> +}
> +
> +/* Undo the effect of Py_Initialize().
> +
> + Beware: if multiple interpreter and/or thread states exist, these
> + are not wiped out; only the current thread and interpreter state
> + are deleted. But since everything else is deleted, those other
> + interpreter and thread states should no longer be used.
> +
> + (XXX We should do better, e.g. wipe out all interpreters and
> + threads.)
> +
> + Locking: as above.
> +
> +*/
> +
> +int
> +Py_FinalizeEx(void)
> +{
> + PyInterpreterState *interp;
> + PyThreadState *tstate;
> + int status = 0;
> +
> + if (!initialized)
> + return status;
> +
> + wait_for_thread_shutdown();
> +
> + /* The interpreter is still entirely intact at this point, and the
> + * exit funcs may be relying on that. In particular, if some thread
> + * or exit func is still waiting to do an import, the import machinery
> + * expects Py_IsInitialized() to return true. So don't say the
> + * interpreter is uninitialized until after the exit funcs have run.
> + * Note that Threading.py uses an exit func to do a join on all the
> + * threads created thru it, so this also protects pending imports in
> + * the threads created via Threading.
> + */
> + call_py_exitfuncs();
> +
> + /* Get current thread state and interpreter pointer */
> + tstate = PyThreadState_GET();
> + interp = tstate->interp;
> +
> + /* Remaining threads (e.g. daemon threads) will automatically exit
> + after taking the GIL (in PyEval_RestoreThread()). */
> + _Py_Finalizing = tstate;
> + initialized = 0;
> +
> + /* Flush sys.stdout and sys.stderr */
> + if (flush_std_files() < 0) {
> + status = -1;
> + }
> +
> + /* Disable signal handling */
> + PyOS_FiniInterrupts();
> +
> + /* Collect garbage. This may call finalizers; it's nice to call these
> + * before all modules are destroyed.
> + * XXX If a __del__ or weakref callback is triggered here, and tries to
> + * XXX import a module, bad things can happen, because Python no
> + * XXX longer believes it's initialized.
> + * XXX Fatal Python error: Interpreter not initialized (version mismatch?)
> + * XXX is easy to provoke that way. I've also seen, e.g.,
> + * XXX Exception exceptions.ImportError: 'No module named sha'
> + * XXX in <function callback at 0x008F5718> ignored
> + * XXX but I'm unclear on exactly how that one happens. In any case,
> + * XXX I haven't seen a real-life report of either of these.
> + */
> + _PyGC_CollectIfEnabled();
> +#ifdef COUNT_ALLOCS
> + /* With COUNT_ALLOCS, it helps to run GC multiple times:
> + each collection might release some types from the type
> + list, so they become garbage. */
> + while (_PyGC_CollectIfEnabled() > 0)
> + /* nothing */;
> +#endif
> + /* Destroy all modules */
> + PyImport_Cleanup();
> +
> + /* Flush sys.stdout and sys.stderr (again, in case more was printed) */
> + if (flush_std_files() < 0) {
> + status = -1;
> + }
> +
> + /* Collect final garbage. This disposes of cycles created by
> + * class definitions, for example.
> + * XXX This is disabled because it caused too many problems. If
> + * XXX a __del__ or weakref callback triggers here, Python code has
> + * XXX a hard time running, because even the sys module has been
> + * XXX cleared out (sys.stdout is gone, sys.excepthook is gone, etc).
> + * XXX One symptom is a sequence of information-free messages
> + * XXX coming from threads (if a __del__ or callback is invoked,
> + * XXX other threads can execute too, and any exception they encounter
> + * XXX triggers a comedy of errors as subsystem after subsystem
> + * XXX fails to find what it *expects* to find in sys to help report
> + * XXX the exception and consequent unexpected failures). I've also
> + * XXX seen segfaults then, after adding print statements to the
> + * XXX Python code getting called.
> + */
> +#if 0
> + _PyGC_CollectIfEnabled();
> +#endif
> +
> + /* Disable tracemalloc after all Python objects have been destroyed,
> + so it is possible to use tracemalloc in objects destructor. */
> + _PyTraceMalloc_Fini();
> +
> + /* Destroy the database used by _PyImport_{Fixup,Find}Extension */
> + _PyImport_Fini();
> +
> + /* Cleanup typeobject.c's internal caches. */
> + _PyType_Fini();
> +
> + /* unload faulthandler module */
> + _PyFaulthandler_Fini();
> +
> + /* Debugging stuff */
> +#ifdef COUNT_ALLOCS
> + dump_counts(stderr);
> +#endif
> + /* dump hash stats */
> + _PyHash_Fini();
> +
> + _PY_DEBUG_PRINT_TOTAL_REFS();
> +
> +#ifdef Py_TRACE_REFS
> + /* Display all objects still alive -- this can invoke arbitrary
> + * __repr__ overrides, so requires a mostly-intact interpreter.
> + * Alas, a lot of stuff may still be alive now that will be cleaned
> + * up later.
> + */
> + if (Py_GETENV("PYTHONDUMPREFS"))
> + _Py_PrintReferences(stderr);
> +#endif /* Py_TRACE_REFS */
> +
> + /* Clear interpreter state and all thread states. */
> + PyInterpreterState_Clear(interp);
> +
> + /* Now we decref the exception classes. After this point nothing
> + can raise an exception. That's okay, because each Fini() method
> + below has been checked to make sure no exceptions are ever
> + raised.
> + */
> +
> + _PyExc_Fini();
> +
> + /* Sundry finalizers */
> + PyMethod_Fini();
> + PyFrame_Fini();
> + PyCFunction_Fini();
> + PyTuple_Fini();
> + PyList_Fini();
> + PySet_Fini();
> + PyBytes_Fini();
> + PyByteArray_Fini();
> + PyLong_Fini();
> + PyFloat_Fini();
> + PyDict_Fini();
> + PySlice_Fini();
> + _PyGC_Fini();
> + _PyRandom_Fini();
> + _PyArg_Fini();
> + PyAsyncGen_Fini();
> +
> + /* Cleanup Unicode implementation */
> + _PyUnicode_Fini();
> +
> + /* reset file system default encoding */
> + if (!Py_HasFileSystemDefaultEncoding && Py_FileSystemDefaultEncoding) {
> + PyMem_RawFree((char*)Py_FileSystemDefaultEncoding);
> + Py_FileSystemDefaultEncoding = NULL;
> + }
> +
> + /* XXX Still allocated:
> + - various static ad-hoc pointers to interned strings
> + - int and float free list blocks
> + - whatever various modules and libraries allocate
> + */
> +
> + PyGrammar_RemoveAccelerators(&_PyParser_Grammar);
> +
> + /* Cleanup auto-thread-state */
> +#ifdef WITH_THREAD
> + _PyGILState_Fini();
> +#endif /* WITH_THREAD */
> +
> + /* Delete current thread. After this, many C API calls become crashy. */
> + PyThreadState_Swap(NULL);
> +
> + PyInterpreterState_Delete(interp);
> +
> +#ifdef Py_TRACE_REFS
> + /* Display addresses (& refcnts) of all objects still alive.
> + * An address can be used to find the repr of the object, printed
> + * above by _Py_PrintReferences.
> + */
> + if (Py_GETENV("PYTHONDUMPREFS"))
> + _Py_PrintReferenceAddresses(stderr);
> +#endif /* Py_TRACE_REFS */
> +#ifdef WITH_PYMALLOC
> + if (_PyMem_PymallocEnabled()) {
> + char *opt = Py_GETENV("PYTHONMALLOCSTATS");
> + if (opt != NULL && *opt != '\0')
> + _PyObject_DebugMallocStats(stderr);
> + }
> +#endif
> +
> + call_ll_exitfuncs();
> + return status;
> +}
> +
> +void
> +Py_Finalize(void)
> +{
> + Py_FinalizeEx();
> +}
> +
> +/* Create and initialize a new interpreter and thread, and return the
> + new thread. This requires that Py_Initialize() has been called
> + first.
> +
> + Unsuccessful initialization yields a NULL pointer. Note that *no*
> + exception information is available even in this case -- the
> + exception information is held in the thread, and there is no
> + thread.
> +
> + Locking: as above.
> +
> +*/
> +
> +PyThreadState *
> +Py_NewInterpreter(void)
> +{
> + PyInterpreterState *interp;
> + PyThreadState *tstate, *save_tstate;
> + PyObject *bimod, *sysmod;
> +
> + if (!initialized)
> + Py_FatalError("Py_NewInterpreter: call Py_Initialize first");
> +
> +#ifdef WITH_THREAD
> + /* Issue #10915, #15751: The GIL API doesn't work with multiple
> + interpreters: disable PyGILState_Check(). */
> + _PyGILState_check_enabled = 0;
> +#endif
> +
> + interp = PyInterpreterState_New();
> + if (interp == NULL)
> + return NULL;
> +
> + tstate = PyThreadState_New(interp);
> + if (tstate == NULL) {
> + PyInterpreterState_Delete(interp);
> + return NULL;
> + }
> +
> + save_tstate = PyThreadState_Swap(tstate);
> +
> + /* XXX The following is lax in error checking */
> +
> + interp->modules = PyDict_New();
> +
> + bimod = _PyImport_FindBuiltin("builtins");
> + if (bimod != NULL) {
> + interp->builtins = PyModule_GetDict(bimod);
> + if (interp->builtins == NULL)
> + goto handle_error;
> + Py_INCREF(interp->builtins);
> + }
> + else if (PyErr_Occurred()) {
> + goto handle_error;
> + }
> +
> + /* initialize builtin exceptions */
> + _PyExc_Init(bimod);
> +
> + sysmod = _PyImport_FindBuiltin("sys");
> + if (bimod != NULL && sysmod != NULL) {
> + PyObject *pstderr;
> +
> + interp->sysdict = PyModule_GetDict(sysmod);
> + if (interp->sysdict == NULL)
> + goto handle_error;
> + Py_INCREF(interp->sysdict);
> + PySys_SetPath(Py_GetPath());
> + PyDict_SetItemString(interp->sysdict, "modules",
> + interp->modules);
> + /* Set up a preliminary stderr printer until we have enough
> + infrastructure for the io module in place. */
> + pstderr = PyFile_NewStdPrinter(fileno(stderr));
> + if (pstderr == NULL)
> + Py_FatalError("Py_Initialize: can't set preliminary stderr");
> + _PySys_SetObjectId(&PyId_stderr, pstderr);
> + PySys_SetObject("__stderr__", pstderr);
> + Py_DECREF(pstderr);
> +
> + _PyImportHooks_Init();
> +
> + import_init(interp, sysmod);
> +
> + if (initfsencoding(interp) < 0)
> + goto handle_error;
> +
> + if (initstdio() < 0)
> + Py_FatalError(
> + "Py_Initialize: can't initialize sys standard streams");
> + initmain(interp);
> + if (!Py_NoSiteFlag)
> + initsite();
> + }
> +
> + if (!PyErr_Occurred())
> + return tstate;
> +
> +handle_error:
> + /* Oops, it didn't work. Undo it all. */
> +
> + PyErr_PrintEx(0);
> + PyThreadState_Clear(tstate);
> + PyThreadState_Swap(save_tstate);
> + PyThreadState_Delete(tstate);
> + PyInterpreterState_Delete(interp);
> +
> + return NULL;
> +}
> +
> +/* Delete an interpreter and its last thread. This requires that the
> + given thread state is current, that the thread has no remaining
> + frames, and that it is its interpreter's only remaining thread.
> + It is a fatal error to violate these constraints.
> +
> + (Py_FinalizeEx() doesn't have these constraints -- it zaps
> + everything, regardless.)
> +
> + Locking: as above.
> +
> +*/
> +
> +void
> +Py_EndInterpreter(PyThreadState *tstate)
> +{
> + PyInterpreterState *interp = tstate->interp;
> +
> + if (tstate != PyThreadState_GET())
> + Py_FatalError("Py_EndInterpreter: thread is not current");
> + if (tstate->frame != NULL)
> + Py_FatalError("Py_EndInterpreter: thread still has a frame");
> +
> + wait_for_thread_shutdown();
> +
> + if (tstate != interp->tstate_head || tstate->next != NULL)
> + Py_FatalError("Py_EndInterpreter: not the last thread");
> +
> + PyImport_Cleanup();
> + PyInterpreterState_Clear(interp);
> + PyThreadState_Swap(NULL);
> + PyInterpreterState_Delete(interp);
> +}
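
A small usage sketch tying Py_NewInterpreter() and Py_EndInterpreter() together, since
the constraints spelled out in the two comments above are easy to get wrong. It assumes
the main interpreter is already initialized and the caller holds its thread state; the
helper name is illustrative:

    #include "Python.h"

    /* Run a snippet in a throwaway sub-interpreter, then restore the
       caller's thread state. */
    static void
    run_in_subinterpreter(const char *snippet)
    {
        PyThreadState *main_ts = PyThreadState_Get();
        PyThreadState *sub = Py_NewInterpreter();  /* swaps to sub's tstate */

        if (sub == NULL)
            return;                                /* no exception is set */
        PyRun_SimpleString(snippet);
        Py_EndInterpreter(sub);        /* must be the sub's only thread */
        PyThreadState_Swap(main_ts);   /* current tstate is NULL afterwards */
    }
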
> +
> +#ifdef MS_WINDOWS
> +static wchar_t *progname = L"python";
> +#else
> +static wchar_t *progname = L"python3";
> +#endif
> +
> +void
> +Py_SetProgramName(wchar_t *pn)
> +{
> + if (pn && *pn)
> + progname = pn;
> +}
> +
> +wchar_t *
> +Py_GetProgramName(void)
> +{
> + return progname;
> +}
> +
> +static wchar_t *default_home = NULL;
> +static wchar_t env_home[MAXPATHLEN+1];
> +
> +void
> +Py_SetPythonHome(wchar_t *home)
> +{
> + default_home = home;
> +}
> +
> +wchar_t *
> +Py_GetPythonHome(void)
> +{
> + wchar_t *home = default_home;
> + if (home == NULL && !Py_IgnoreEnvironmentFlag) {
> + char* chome = Py_GETENV("PYTHONHOME");
> + if (chome) {
> + size_t size = Py_ARRAY_LENGTH(env_home);
> + size_t r = mbstowcs(env_home, chome, size);
> + if (r != (size_t)-1 && r < size)
> + home = env_home;
> + }
> +
> + }
> + return home;
> +}
> +
> +/* Create __main__ module */
> +
> +static void
> +initmain(PyInterpreterState *interp)
> +{
> + PyObject *m, *d, *loader, *ann_dict;
> + m = PyImport_AddModule("__main__");
> + if (m == NULL)
> + Py_FatalError("can't create __main__ module");
> + d = PyModule_GetDict(m);
> + ann_dict = PyDict_New();
> + if ((ann_dict == NULL) ||
> + (PyDict_SetItemString(d, "__annotations__", ann_dict) < 0)) {
> + Py_FatalError("Failed to initialize __main__.__annotations__");
> + }
> + Py_DECREF(ann_dict);
> + if (PyDict_GetItemString(d, "__builtins__") == NULL) {
> + PyObject *bimod = PyImport_ImportModule("builtins");
> + if (bimod == NULL) {
> + Py_FatalError("Failed to retrieve builtins module");
> + }
> + if (PyDict_SetItemString(d, "__builtins__", bimod) < 0) {
> + Py_FatalError("Failed to initialize __main__.__builtins__");
> + }
> + Py_DECREF(bimod);
> + }
> + /* Main is a little special - imp.is_builtin("__main__") will return
> + * False, but BuiltinImporter is still the most appropriate initial
> + * setting for its __loader__ attribute. A more suitable value will
> + * be set if __main__ gets further initialized later in the startup
> + * process.
> + */
> + loader = PyDict_GetItemString(d, "__loader__");
> + if (loader == NULL || loader == Py_None) {
> + PyObject *loader = PyObject_GetAttrString(interp->importlib,
> + "BuiltinImporter");
> + if (loader == NULL) {
> + Py_FatalError("Failed to retrieve BuiltinImporter");
> + }
> + if (PyDict_SetItemString(d, "__loader__", loader) < 0) {
> + Py_FatalError("Failed to initialize __main__.__loader__");
> + }
> + Py_DECREF(loader);
> + }
> +}
> +
> +static int
> +initfsencoding(PyInterpreterState *interp)
> +{
> + PyObject *codec;
> +
> +#ifdef UEFI_MSVC_64
> + Py_FileSystemDefaultEncoding = "utf-8";
> + Py_FileSystemDefaultEncodeErrors = "surrogatepass";
> +#elif defined(UEFI_MSVC_32)
> + Py_FileSystemDefaultEncoding = "utf-8";
> + Py_FileSystemDefaultEncodeErrors = "surrogatepass";
> +#elif defined(MS_WINDOWS)
> + if (Py_LegacyWindowsFSEncodingFlag)
> + {
> + Py_FileSystemDefaultEncoding = "mbcs";
> + Py_FileSystemDefaultEncodeErrors = "replace";
> + }
> + else
> + {
> + Py_FileSystemDefaultEncoding = "utf-8";
> + Py_FileSystemDefaultEncodeErrors = "surrogatepass";
> + }
> +#else
> + if (Py_FileSystemDefaultEncoding == NULL)
> + {
> + Py_FileSystemDefaultEncoding = get_locale_encoding();
> + if (Py_FileSystemDefaultEncoding == NULL)
> + Py_FatalError("Py_Initialize: Unable to get the locale encoding");
> +
> + Py_HasFileSystemDefaultEncoding = 0;
> + interp->fscodec_initialized = 1;
> + return 0;
> + }
> +#endif
> +
> + /* the encoding is mbcs, utf-8 or ascii */
> + codec = _PyCodec_Lookup(Py_FileSystemDefaultEncoding);
> + if (!codec) {
> + /* Such an error can only occur in critical situations: out of
> + * memory, a standard-library module failed to import,
> + * etc. */
> + return -1;
> + }
> + Py_DECREF(codec);
> + interp->fscodec_initialized = 1;
> + return 0;
> +}
> +
> +/* Import the site module (not into __main__ though) */
> +
> +static void
> +initsite(void)
> +{
> + PyObject *m;
> + m = PyImport_ImportModule("site");
> + if (m == NULL) {
> + fprintf(stderr, "Failed to import the site module\n");
> + PyErr_Print();
> + Py_Finalize();
> + exit(1);
> + }
> + else {
> + Py_DECREF(m);
> + }
> +}
> +
> +/* Check if a file descriptor is valid or not.
> + Return 0 if the file descriptor is invalid, return non-zero otherwise. */
> +static int
> +is_valid_fd(int fd)
> +{
> +#ifdef __APPLE__
> + /* bpo-30225: On macOS Tiger, when stdout is redirected to a pipe
> + and the other side of the pipe is closed, dup(1) succeeds, whereas
> + fstat(1, &st) fails with EBADF. Prefer fstat() over dup() to detect
> + such errors. */
> + struct stat st;
> + return (fstat(fd, &st) == 0);
> +#else
> + int fd2;
> + if (fd < 0)
> + return 0;
> + _Py_BEGIN_SUPPRESS_IPH
> + /* Prefer dup() over fstat(). fstat() can require input/output, whereas
> + dup() doesn't, and there is a low risk of EMFILE/ENFILE at Python
> + startup. */
> + fd2 = dup(fd);
> + if (fd2 >= 0)
> + close(fd2);
> + _Py_END_SUPPRESS_IPH
> + return fd2 >= 0;
> +#endif
> +}
> +
> +/* returns Py_None if the fd is not valid */
> +static PyObject*
> +create_stdio(PyObject* io,
> + int fd, int write_mode, const char* name,
> + const char* encoding, const char* errors)
> +{
> + PyObject *buf = NULL, *stream = NULL, *text = NULL, *raw = NULL, *res;
> + const char* mode;
> + const char* newline;
> + PyObject *line_buffering;
> + int buffering, isatty;
> + _Py_IDENTIFIER(open);
> + _Py_IDENTIFIER(isatty);
> + _Py_IDENTIFIER(TextIOWrapper);
> + _Py_IDENTIFIER(mode);
> +
> + if (!is_valid_fd(fd))
> + Py_RETURN_NONE;
> +
> + /* stdin is always opened in buffered mode, first because it shouldn't
> + make a difference in common use cases, second because TextIOWrapper
> + depends on the presence of a read1() method which only exists on
> + buffered streams.
> + */
> + if (Py_UnbufferedStdioFlag && write_mode)
> + buffering = 0;
> + else
> + buffering = -1;
> + if (write_mode)
> + mode = "wb";
> + else
> + mode = "rb";
> + buf = _PyObject_CallMethodId(io, &PyId_open, "isiOOOi",
> + fd, mode, buffering,
> + Py_None, Py_None, /* encoding, errors */
> + Py_None, 0); /* newline, closefd */
> + if (buf == NULL)
> + goto error;
> +
> + if (buffering) {
> + _Py_IDENTIFIER(raw);
> + raw = _PyObject_GetAttrId(buf, &PyId_raw);
> + if (raw == NULL)
> + goto error;
> + }
> + else {
> + raw = buf;
> + Py_INCREF(raw);
> + }
> +
> +#ifdef MS_WINDOWS
> + /* Windows console IO is always UTF-8 encoded */
> + if (PyWindowsConsoleIO_Check(raw))
> + encoding = "utf-8";
> +#endif
> +
> + text = PyUnicode_FromString(name);
> + if (text == NULL || _PyObject_SetAttrId(raw, &PyId_name, text) < 0)
> + goto error;
> + res = _PyObject_CallMethodId(raw, &PyId_isatty, NULL);
> + if (res == NULL)
> + goto error;
> + isatty = PyObject_IsTrue(res);
> + Py_DECREF(res);
> + if (isatty == -1)
> + goto error;
> + if (isatty || Py_UnbufferedStdioFlag)
> + line_buffering = Py_True;
> + else
> + line_buffering = Py_False;
> +
> + Py_CLEAR(raw);
> + Py_CLEAR(text);
> +
> +#ifdef MS_WINDOWS
> + /* sys.stdin: enable universal newline mode, translate "\r\n" and "\r"
> + newlines to "\n".
> + sys.stdout and sys.stderr: translate "\n" to "\r\n". */
> + newline = NULL;
> +#else
> + /* sys.stdin: split lines at "\n".
> + sys.stdout and sys.stderr: don't translate newlines (use "\n"). */
> + newline = "\n";
> +#endif
> +
> + stream = _PyObject_CallMethodId(io, &PyId_TextIOWrapper, "OsssO",
> + buf, encoding, errors,
> + newline, line_buffering);
> + Py_CLEAR(buf);
> + if (stream == NULL)
> + goto error;
> +
> + if (write_mode)
> + mode = "w";
> + else
> + mode = "r";
> + text = PyUnicode_FromString(mode);
> + if (!text || _PyObject_SetAttrId(stream, &PyId_mode, text) < 0)
> + goto error;
> + Py_CLEAR(text);
> + return stream;
> +
> +error:
> + Py_XDECREF(buf);
> + Py_XDECREF(stream);
> + Py_XDECREF(text);
> + Py_XDECREF(raw);
> +
> + if (PyErr_ExceptionMatches(PyExc_OSError) && !is_valid_fd(fd)) {
> + /* Issue #24891: the file descriptor was closed after the first
> + is_valid_fd() check was called. Ignore the OSError and set the
> + stream to None. */
> + PyErr_Clear();
> + Py_RETURN_NONE;
> + }
> + return NULL;
> +}
> +
> +/* Initialize sys.stdin, stdout, stderr and builtins.open */
> +static int
> +initstdio(void)
> +{
> + PyObject *iomod = NULL, *wrapper;
> + PyObject *bimod = NULL;
> + PyObject *m;
> + PyObject *std = NULL;
> + int status = 0, fd;
> + PyObject * encoding_attr;
> + char *pythonioencoding = NULL, *encoding, *errors;
> +
> + /* Hack to avoid a nasty recursion issue when Python is invoked
> + in verbose mode: pre-import the Latin-1 and UTF-8 codecs */
> + if ((m = PyImport_ImportModule("encodings.utf_8")) == NULL) {
> + goto error;
> + }
> + Py_DECREF(m);
> +
> + if (!(m = PyImport_ImportModule("encodings.latin_1"))) {
> + goto error;
> + }
> + Py_DECREF(m);
> +
> + if (!(bimod = PyImport_ImportModule("builtins"))) {
> + goto error;
> + }
> +
> + if (!(iomod = PyImport_ImportModule("io"))) {
> + goto error;
> + }
> + if (!(wrapper = PyObject_GetAttrString(iomod, "OpenWrapper"))) {
> + goto error;
> + }
> +
> + /* Set builtins.open */
> + if (PyObject_SetAttrString(bimod, "open", wrapper) == -1) {
> + Py_DECREF(wrapper);
> + goto error;
> + }
> + Py_DECREF(wrapper);
> +
> + encoding = _Py_StandardStreamEncoding;
> + errors = _Py_StandardStreamErrors;
> + if (!encoding || !errors) {
> + pythonioencoding = Py_GETENV("PYTHONIOENCODING");
> + if (pythonioencoding) {
> + char *err;
> + pythonioencoding = _PyMem_Strdup(pythonioencoding);
> + if (pythonioencoding == NULL) {
> + PyErr_NoMemory();
> + goto error;
> + }
> + err = strchr(pythonioencoding, ':');
> + if (err) {
> + *err = '\0';
> + err++;
> + if (*err && !errors) {
> + errors = err;
> + }
> + }
> + if (*pythonioencoding && !encoding) {
> + encoding = pythonioencoding;
> + }
> + }
> + if (!errors && !(pythonioencoding && *pythonioencoding)) {
> + /* When the LC_CTYPE locale is the POSIX locale ("C locale"),
> + stdin and stdout use the surrogateescape error handler by
> + default, instead of the strict error handler. */
> + char *loc = setlocale(LC_CTYPE, NULL);
> + if (loc != NULL && strcmp(loc, "C") == 0)
> + errors = "surrogateescape";
> + }
> + }
> +
> + /* Set sys.stdin */
> + fd = fileno(stdin);
> + /* Under some conditions stdin, stdout and stderr may not be connected
> + * and fileno() may point to an invalid file descriptor. For example
> + * GUI apps don't have valid standard streams by default.
> + */
> + std = create_stdio(iomod, fd, 0, "<stdin>", encoding, errors);
> + if (std == NULL)
> + goto error;
> + PySys_SetObject("__stdin__", std);
> + _PySys_SetObjectId(&PyId_stdin, std);
> + Py_DECREF(std);
> +#ifdef UEFI_C_SOURCE
> + // The UEFI Shell doesn't provide a separate stderr,
> + // so reuse the stderr stream object for stdout as well.
> + std = PySys_GetObject("stderr");
> + PySys_SetObject("__stdout__", std);
> + _PySys_SetObjectId(&PyId_stdout, std);
> + Py_DECREF(std);
> +#endif
> +
> +#ifndef UEFI_C_SOURCE // Couldn't get this code working on the EFI shell; the interpreter application hangs here
> + /* Set sys.stdout */
> + fd = fileno(stdout);
> + std = create_stdio(iomod, fd, 1, "<stdout>", encoding, errors);
> + if (std == NULL)
> + goto error;
> + PySys_SetObject("__stdout__", std);
> + _PySys_SetObjectId(&PyId_stdout, std);
> + Py_DECREF(std);
> +#endif
> +
> +
> +#if 0 /* Disable this if you have trouble debugging bootstrap stuff */
> + /* Set sys.stderr, replaces the preliminary stderr */
> + fd = fileno(stderr);
> + std = create_stdio(iomod, fd, 1, "<stderr>", encoding, "backslashreplace");
> + if (std == NULL)
> + goto error;
> +
> + /* Same as hack above, pre-import stderr's codec to avoid recursion
> + when import.c tries to write to stderr in verbose mode. */
> + encoding_attr = PyObject_GetAttrString(std, "encoding");
> + if (encoding_attr != NULL) {
> + const char * std_encoding;
> + std_encoding = PyUnicode_AsUTF8(encoding_attr);
> + if (std_encoding != NULL) {
> + PyObject *codec_info = _PyCodec_Lookup(std_encoding);
> + Py_XDECREF(codec_info);
> + }
> + Py_DECREF(encoding_attr);
> + }
> + PyErr_Clear(); /* Not a fatal error if codec isn't available */
> +
> + if (PySys_SetObject("__stderr__", std) < 0) {
> + Py_DECREF(std);
> + goto error;
> + }
> + if (_PySys_SetObjectId(&PyId_stderr, std) < 0) {
> + Py_DECREF(std);
> + goto error;
> + }
> + Py_DECREF(std);
> +#endif
> +
> + if (0) {
> + error:
> + status = -1;
> + }
> +
> + /* We won't need them anymore. */
> + if (_Py_StandardStreamEncoding) {
> + PyMem_RawFree(_Py_StandardStreamEncoding);
> + _Py_StandardStreamEncoding = NULL;
> + }
> + if (_Py_StandardStreamErrors) {
> + PyMem_RawFree(_Py_StandardStreamErrors);
> + _Py_StandardStreamErrors = NULL;
> + }
> + PyMem_Free(pythonioencoding);
> + Py_XDECREF(bimod);
> + Py_XDECREF(iomod);
> + return status;
> +}
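
A consequence of the UEFI_C_SOURCE branch in initstdio() above is that sys.stdout and sys.stderr end up referring to the same stream object. A hedged embedding sketch, not part of the patch, that makes the aliasing observable; on a conventional build the two objects are expected to differ:

    #include <Python.h>
    #include <stdio.h>

    int main(void)
    {
        Py_Initialize();
        PyObject *out = PySys_GetObject("stdout");   /* borrowed reference */
        PyObject *err = PySys_GetObject("stderr");   /* borrowed reference */
        printf("sys.stdout %s sys.stderr\n", out == err ? "is" : "is not");
        return Py_FinalizeEx() < 0 ? 120 : 0;
    }
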
> +
> +
> +static void
> +_Py_FatalError_DumpTracebacks(int fd)
> +{
> + fputc('\n', stderr);
> + fflush(stderr);
> +
> + /* display the current Python stack */
> + _Py_DumpTracebackThreads(fd, NULL, NULL);
> +}
> +
> +/* Print the current exception (if an exception is set) with its traceback,
> + or display the current Python stack.
> +
> + Don't call PyErr_PrintEx() and the except hook, because Py_FatalError() is
> + called on catastrophic cases.
> +
> + Return 1 if the traceback was displayed, 0 otherwise. */
> +
> +static int
> +_Py_FatalError_PrintExc(int fd)
> +{
> + PyObject *ferr, *res;
> + PyObject *exception, *v, *tb;
> + int has_tb;
> +
> + PyErr_Fetch(&exception, &v, &tb);
> + if (exception == NULL) {
> + /* No current exception */
> + return 0;
> + }
> +
> + ferr = _PySys_GetObjectId(&PyId_stderr);
> + if (ferr == NULL || ferr == Py_None) {
> + /* sys.stderr is not set yet or set to None,
> + no need to try to display the exception */
> + return 0;
> + }
> +
> + PyErr_NormalizeException(&exception, &v, &tb);
> + if (tb == NULL) {
> + tb = Py_None;
> + Py_INCREF(tb);
> + }
> + PyException_SetTraceback(v, tb);
> + if (exception == NULL) {
> + /* PyErr_NormalizeException() failed */
> + return 0;
> + }
> +
> + has_tb = (tb != Py_None);
> + PyErr_Display(exception, v, tb);
> + Py_XDECREF(exception);
> + Py_XDECREF(v);
> + Py_XDECREF(tb);
> +
> + /* sys.stderr may be buffered: call sys.stderr.flush() */
> + res = _PyObject_CallMethodId(ferr, &PyId_flush, NULL);
> + if (res == NULL)
> + PyErr_Clear();
> + else
> + Py_DECREF(res);
> +
> + return has_tb;
> +}
> +
> +/* Print fatal error message and abort */
> +
> +void
> +Py_FatalError(const char *msg)
> +{
> + const int fd = fileno(stderr);
> + static int reentrant = 0;
> + PyThreadState *tss_tstate = NULL;
> +#ifdef MS_WINDOWS
> + size_t len;
> + WCHAR* buffer;
> + size_t i;
> +#endif
> +
> + if (reentrant) {
> + /* Py_FatalError() caused a second fatal error.
> + Example: flush_std_files() raises a recursion error. */
> + goto exit;
> + }
> + reentrant = 1;
> +
> + fprintf(stderr, "Fatal Python error: %s\n", msg);
> + fflush(stderr); /* it helps in Windows debug build */
> +
> +#ifdef WITH_THREAD
> + /* Check if the current thread has a Python thread state
> + and holds the GIL */
> + tss_tstate = PyGILState_GetThisThreadState();
> + if (tss_tstate != NULL) {
> + PyThreadState *tstate = PyThreadState_GET();
> + if (tss_tstate != tstate) {
> + /* The Python thread does not hold the GIL */
> + tss_tstate = NULL;
> + }
> + }
> + else {
> + /* Py_FatalError() has been called from a C thread
> + which has no Python thread state. */
> + }
> +#endif
> + int has_tstate_and_gil = (tss_tstate != NULL);
> +
> + if (has_tstate_and_gil) {
> + /* If an exception is set, print the exception with its traceback */
> + if (!_Py_FatalError_PrintExc(fd)) {
> + /* No exception is set, or an exception is set without traceback */
> + _Py_FatalError_DumpTracebacks(fd);
> + }
> + }
> + else {
> + _Py_FatalError_DumpTracebacks(fd);
> + }
> +
> + /* The main purpose of faulthandler is to display the traceback. We already
> + * did our best to display it. So faulthandler can now be disabled.
> + * (Don't trigger it on abort().) */
> + _PyFaulthandler_Fini();
> +
> + /* Check if the current Python thread holds the GIL */
> + if (has_tstate_and_gil) {
> + /* Flush sys.stdout and sys.stderr */
> + flush_std_files();
> + }
> +
> +#ifdef MS_WINDOWS
> + len = strlen(msg);
> +
> + /* Convert the message to wchar_t. This uses a simple one-to-one
> + conversion, assuming that this error message actually uses ASCII
> + only. If this ceases to be true, we will have to convert. */
> + buffer = alloca( (len+1) * (sizeof *buffer));
> + for( i=0; i<=len; ++i)
> + buffer[i] = msg[i];
> + OutputDebugStringW(L"Fatal Python error: ");
> + OutputDebugStringW(buffer);
> + OutputDebugStringW(L"\n");
> +#endif /* MS_WINDOWS */
> +
> +exit:
> +#if defined(MS_WINDOWS) && defined(_DEBUG)
> + DebugBreak();
> +#endif
> + abort();
> +}
> +
> +/* Clean up and exit */
> +
> +#ifdef WITH_THREAD
> +# include "pythread.h"
> +#endif
> +
> +static void (*pyexitfunc)(void) = NULL;
> +/* For the atexit module. */
> +void _Py_PyAtExit(void (*func)(void))
> +{
> + pyexitfunc = func;
> +}
> +
> +static void
> +call_py_exitfuncs(void)
> +{
> + if (pyexitfunc == NULL)
> + return;
> +
> + (*pyexitfunc)();
> + PyErr_Clear();
> +}
> +
> +/* Wait until threading._shutdown completes, provided
> + the threading module was imported in the first place.
> + The shutdown routine will wait until all non-daemon
> + "threading" threads have completed. */
> +static void
> +wait_for_thread_shutdown(void)
> +{
> +#ifdef WITH_THREAD
> + _Py_IDENTIFIER(_shutdown);
> + PyObject *result;
> + PyThreadState *tstate = PyThreadState_GET();
> + PyObject *threading = PyMapping_GetItemString(tstate->interp->modules,
> + "threading");
> + if (threading == NULL) {
> + /* threading not imported */
> + PyErr_Clear();
> + return;
> + }
> + result = _PyObject_CallMethodId(threading, &PyId__shutdown, NULL);
> + if (result == NULL) {
> + PyErr_WriteUnraisable(threading);
> + }
> + else {
> + Py_DECREF(result);
> + }
> + Py_DECREF(threading);
> +#endif
> +}
> +
> +#define NEXITFUNCS 32
> +static void (*exitfuncs[NEXITFUNCS])(void);
> +static int nexitfuncs = 0;
> +
> +int Py_AtExit(void (*func)(void))
> +{
> + if (nexitfuncs >= NEXITFUNCS)
> + return -1;
> + exitfuncs[nexitfuncs++] = func;
> + return 0;
> +}
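
The low-level exit hooks above are registered through Py_AtExit() and run from call_ll_exitfuncs() during finalization, after the interpreter's own cleanup, so they must not call back into the Python C API. A small hedged usage sketch; on_interpreter_exit is a hypothetical callback:

    #include <Python.h>
    #include <stdio.h>

    static void on_interpreter_exit(void)
    {
        /* Runs after interpreter cleanup: plain C only, no Python C API here. */
        fputs("interpreter finalized\n", stderr);
    }

    int main(void)
    {
        Py_Initialize();
        if (Py_AtExit(on_interpreter_exit) < 0)   /* only NEXITFUNCS (32) slots */
            fputs("too many exit functions\n", stderr);
        return Py_FinalizeEx() < 0 ? 120 : 0;
    }
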
> +
> +static void
> +call_ll_exitfuncs(void)
> +{
> + while (nexitfuncs > 0)
> + (*exitfuncs[--nexitfuncs])();
> +
> + fflush(stdout);
> + fflush(stderr);
> +}
> +
> +void
> +Py_Exit(int sts)
> +{
> + if (Py_FinalizeEx() < 0) {
> + sts = 120;
> + }
> +
> + exit(sts);
> +}
> +
> +static void
> +initsigs(void)
> +{
> +#ifdef SIGPIPE
> + PyOS_setsig(SIGPIPE, SIG_IGN);
> +#endif
> +#ifdef SIGXFZ
> + PyOS_setsig(SIGXFZ, SIG_IGN);
> +#endif
> +#ifdef SIGXFSZ
> + PyOS_setsig(SIGXFSZ, SIG_IGN);
> +#endif
> + PyOS_InitInterrupts(); /* May imply initsignal() */
> + if (PyErr_Occurred()) {
> + Py_FatalError("Py_Initialize: can't import signal");
> + }
> +}
> +
> +
> +/* Restore signals that the interpreter has called SIG_IGN on to SIG_DFL.
> + *
> + * All of the code in this function must only use async-signal-safe functions,
> + * listed at `man 7 signal` or
> + * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html.
> + */
> +void
> +_Py_RestoreSignals(void)
> +{
> +#ifdef SIGPIPE
> + PyOS_setsig(SIGPIPE, SIG_DFL);
> +#endif
> +#ifdef SIGXFZ
> + PyOS_setsig(SIGXFZ, SIG_DFL);
> +#endif
> +#ifdef SIGXFSZ
> + PyOS_setsig(SIGXFSZ, SIG_DFL);
> +#endif
> +}
> +
> +
> +/*
> + * The file descriptor fd is considered ``interactive'' if either
> + * a) isatty(fd) is TRUE, or
> + * b) the -i flag was given, and the filename associated with
> + * the descriptor is NULL or "<stdin>" or "???".
> + */
> +int
> +Py_FdIsInteractive(FILE *fp, const char *filename)
> +{
> + if (isatty((int)fileno(fp)))
> + return 1;
> + if (!Py_InteractiveFlag)
> + return 0;
> + return (filename == NULL) ||
> + (strcmp(filename, "<stdin>") == 0) ||
> + (strcmp(filename, "???") == 0);
> +}
> +
> +
> +/* Wrappers around sigaction() or signal(). */
> +
> +PyOS_sighandler_t
> +PyOS_getsig(int sig)
> +{
> +#ifdef HAVE_SIGACTION
> + struct sigaction context;
> + if (sigaction(sig, NULL, &context) == -1)
> + return SIG_ERR;
> + return context.sa_handler;
> +#else
> + PyOS_sighandler_t handler;
> +/* Special signal handling for the secure CRT in Visual Studio 2005 */
> +#if defined(_MSC_VER) && _MSC_VER >= 1400
> + switch (sig) {
> + /* Only these signals are valid */
> + case SIGINT:
> + case SIGILL:
> + case SIGFPE:
> + case SIGSEGV:
> + case SIGTERM:
> + case SIGBREAK:
> + case SIGABRT:
> + break;
> + /* Don't call signal() with other values or it will assert */
> + default:
> + return SIG_ERR;
> + }
> +#endif /* _MSC_VER && _MSC_VER >= 1400 */
> + handler = signal(sig, SIG_IGN);
> + if (handler != SIG_ERR)
> + signal(sig, handler);
> + return handler;
> +#endif
> +}
> +
> +/*
> + * All of the code in this function must only use async-signal-safe functions,
> + * listed at `man 7 signal` or
> + * http://www.opengroup.org/onlinepubs/009695399/functions/xsh_chap02_04.html.
> + */
> +PyOS_sighandler_t
> +PyOS_setsig(int sig, PyOS_sighandler_t handler)
> +{
> +#ifdef HAVE_SIGACTION
> + /* Some code in Modules/signalmodule.c depends on sigaction() being
> + * used here if HAVE_SIGACTION is defined. Fix that if this code
> + * changes to invalidate that assumption.
> + */
> + struct sigaction context, ocontext;
> + context.sa_handler = handler;
> + sigemptyset(&context.sa_mask);
> + context.sa_flags = 0;
> + if (sigaction(sig, &context, &ocontext) == -1)
> + return SIG_ERR;
> + return ocontext.sa_handler;
> +#else
> + PyOS_sighandler_t oldhandler;
> + oldhandler = signal(sig, handler);
> +#ifdef HAVE_SIGINTERRUPT
> + siginterrupt(sig, 1);
> +#endif
> + return oldhandler;
> +#endif
> +}
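
Hedged usage sketch for the wrappers above, not taken from the patch: temporarily ignore SIGINT around a section that must not be interrupted, then restore whatever handler was installed before (assumes the C library in use defines SIGINT):

    #include <Python.h>
    #include <signal.h>

    static void do_uninterruptible_work(void)
    {
        PyOS_sighandler_t old = PyOS_setsig(SIGINT, SIG_IGN);
        /* ... work that must not be interrupted by Ctrl-C ... */
        PyOS_setsig(SIGINT, old);    /* restore the previous handler */
    }
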
> +
> +#ifdef __cplusplus
> +}
> +#endif
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
> new file mode 100644
> index 00000000..df5f0eda
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
> @@ -0,0 +1,969 @@
> +/** @file
> + Thread and interpreter state structures and their interfaces
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +#include "Python.h"
> +
> +#define GET_TSTATE() \
> + ((PyThreadState*)_Py_atomic_load_relaxed(&_PyThreadState_Current))
> +#define SET_TSTATE(value) \
> + _Py_atomic_store_relaxed(&_PyThreadState_Current, (uintptr_t)(value))
> +#define GET_INTERP_STATE() \
> + (GET_TSTATE()->interp)
> +
> +
> +/* --------------------------------------------------------------------------
> +CAUTION
> +
> +Always use PyMem_RawMalloc() and PyMem_RawFree() directly in this file. A
> +number of these functions are advertised as safe to call when the GIL isn't
> +held, and in a debug build Python redirects (e.g.) PyMem_NEW (etc) to Python's
> +debugging obmalloc functions. Those aren't thread-safe (they rely on the GIL
> +to avoid the expense of doing their own locking).
> +-------------------------------------------------------------------------- */
> +
> +#ifdef HAVE_DLOPEN
> +#ifdef HAVE_DLFCN_H
> +#include <dlfcn.h>
> +#endif
> +#if !HAVE_DECL_RTLD_LAZY
> +#define RTLD_LAZY 1
> +#endif
> +#endif
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +int _PyGILState_check_enabled = 1;
> +
> +#ifdef WITH_THREAD
> +#include "pythread.h"
> +static PyThread_type_lock head_mutex = NULL; /* Protects interp->tstate_head */
> +#define HEAD_INIT() (void)(head_mutex || (head_mutex = PyThread_allocate_lock()))
> +#define HEAD_LOCK() PyThread_acquire_lock(head_mutex, WAIT_LOCK)
> +#define HEAD_UNLOCK() PyThread_release_lock(head_mutex)
> +
> +/* The single PyInterpreterState used by this process'
> + GILState implementation
> +*/
> +static PyInterpreterState *autoInterpreterState = NULL;
> +static int autoTLSkey = -1;
> +#else
> +#define HEAD_INIT() /* Nothing */
> +#define HEAD_LOCK() /* Nothing */
> +#define HEAD_UNLOCK() /* Nothing */
> +#endif
> +
> +static PyInterpreterState *interp_head = NULL;
> +static __PyCodeExtraState *coextra_head = NULL;
> +
> +/* Assuming the current thread holds the GIL, this is the
> + PyThreadState for the current thread. */
> +_Py_atomic_address _PyThreadState_Current = {0};
> +PyThreadFrameGetter _PyThreadState_GetFrame = NULL;
> +
> +#ifdef WITH_THREAD
> +static void _PyGILState_NoteThreadState(PyThreadState* tstate);
> +#endif
> +
> +
> +PyInterpreterState *
> +PyInterpreterState_New(void)
> +{
> + PyInterpreterState *interp = (PyInterpreterState *)
> + PyMem_RawMalloc(sizeof(PyInterpreterState));
> +
> + if (interp != NULL) {
> + __PyCodeExtraState* coextra = PyMem_RawMalloc(sizeof(__PyCodeExtraState));
> + if (coextra == NULL) {
> + PyMem_RawFree(interp);
> + return NULL;
> + }
> +
> + HEAD_INIT();
> +#ifdef WITH_THREAD
> + if (head_mutex == NULL)
> + Py_FatalError("Can't initialize threads for interpreter");
> +#endif
> + interp->modules = NULL;
> + interp->modules_by_index = NULL;
> + interp->sysdict = NULL;
> + interp->builtins = NULL;
> + interp->builtins_copy = NULL;
> + interp->tstate_head = NULL;
> + interp->codec_search_path = NULL;
> + interp->codec_search_cache = NULL;
> + interp->codec_error_registry = NULL;
> + interp->codecs_initialized = 0;
> + interp->fscodec_initialized = 0;
> + interp->importlib = NULL;
> + interp->import_func = NULL;
> + interp->eval_frame = _PyEval_EvalFrameDefault;
> + coextra->co_extra_user_count = 0;
> + coextra->interp = interp;
> +#ifdef HAVE_DLOPEN
> +#if HAVE_DECL_RTLD_NOW
> + interp->dlopenflags = RTLD_NOW;
> +#else
> + interp->dlopenflags = RTLD_LAZY;
> +#endif
> +#endif
> +
> + HEAD_LOCK();
> + interp->next = interp_head;
> + interp_head = interp;
> + coextra->next = coextra_head;
> + coextra_head = coextra;
> + HEAD_UNLOCK();
> + }
> +
> + return interp;
> +}
> +
> +
> +void
> +PyInterpreterState_Clear(PyInterpreterState *interp)
> +{
> + PyThreadState *p;
> + HEAD_LOCK();
> + for (p = interp->tstate_head; p != NULL; p = p->next)
> + PyThreadState_Clear(p);
> + HEAD_UNLOCK();
> + Py_CLEAR(interp->codec_search_path);
> + Py_CLEAR(interp->codec_search_cache);
> + Py_CLEAR(interp->codec_error_registry);
> + Py_CLEAR(interp->modules);
> + Py_CLEAR(interp->modules_by_index);
> + Py_CLEAR(interp->sysdict);
> + Py_CLEAR(interp->builtins);
> + Py_CLEAR(interp->builtins_copy);
> + Py_CLEAR(interp->importlib);
> + Py_CLEAR(interp->import_func);
> +}
> +
> +
> +static void
> +zapthreads(PyInterpreterState *interp)
> +{
> + PyThreadState *p;
> + /* No need to lock the mutex here because this should only happen
> + when the threads are all really dead (XXX famous last words). */
> + while ((p = interp->tstate_head) != NULL) {
> + PyThreadState_Delete(p);
> + }
> +}
> +
> +
> +void
> +PyInterpreterState_Delete(PyInterpreterState *interp)
> +{
> + PyInterpreterState **p;
> + __PyCodeExtraState **pextra;
> + __PyCodeExtraState* extra;
> + zapthreads(interp);
> + HEAD_LOCK();
> + for (p = &interp_head; /* N/A */; p = &(*p)->next) {
> + if (*p == NULL)
> + Py_FatalError(
> + "PyInterpreterState_Delete: invalid interp");
> + if (*p == interp)
> + break;
> + }
> + if (interp->tstate_head != NULL)
> + Py_FatalError("PyInterpreterState_Delete: remaining threads");
> + *p = interp->next;
> +
> + for (pextra = &coextra_head; ; pextra = &(*pextra)->next) {
> + if (*pextra == NULL)
> + Py_FatalError(
> + "PyInterpreterState_Delete: invalid extra");
> + extra = *pextra;
> + if (extra->interp == interp) {
> + *pextra = extra->next;
> + PyMem_RawFree(extra);
> + break;
> + }
> + }
> + HEAD_UNLOCK();
> + PyMem_RawFree(interp);
> +#ifdef WITH_THREAD
> + if (interp_head == NULL && head_mutex != NULL) {
> + PyThread_free_lock(head_mutex);
> + head_mutex = NULL;
> + }
> +#endif
> +}
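
The interp_head list managed by PyInterpreterState_New()/PyInterpreterState_Delete() above is what an embedder exercises through the public sub-interpreter API. A hedged sketch of that lifecycle, not part of the patch; run_in_subinterpreter is a hypothetical helper:

    #include <Python.h>

    static void run_in_subinterpreter(const char *code)
    {
        PyThreadState *main_ts = PyThreadState_Get();
        PyThreadState *sub = Py_NewInterpreter();   /* allocates a new PyInterpreterState */
        if (sub != NULL) {
            PyRun_SimpleString(code);
            Py_EndInterpreter(sub);                 /* ends in PyInterpreterState_Delete() */
        }
        PyThreadState_Swap(main_ts);                /* re-select the main thread state */
    }
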
> +
> +
> +/* Default implementation for _PyThreadState_GetFrame */
> +static struct _frame *
> +threadstate_getframe(PyThreadState *self)
> +{
> + return self->frame;
> +}
> +
> +static PyThreadState *
> +new_threadstate(PyInterpreterState *interp, int init)
> +{
> + PyThreadState *tstate = (PyThreadState *)PyMem_RawMalloc(sizeof(PyThreadState));
> +
> + if (_PyThreadState_GetFrame == NULL)
> + _PyThreadState_GetFrame = threadstate_getframe;
> +
> + if (tstate != NULL) {
> + tstate->interp = interp;
> +
> + tstate->frame = NULL;
> + tstate->recursion_depth = 0;
> + tstate->overflowed = 0;
> + tstate->recursion_critical = 0;
> + tstate->tracing = 0;
> + tstate->use_tracing = 0;
> + tstate->gilstate_counter = 0;
> + tstate->async_exc = NULL;
> +#ifdef WITH_THREAD
> + tstate->thread_id = PyThread_get_thread_ident();
> +#else
> + tstate->thread_id = 0;
> +#endif
> +
> + tstate->dict = NULL;
> +
> + tstate->curexc_type = NULL;
> + tstate->curexc_value = NULL;
> + tstate->curexc_traceback = NULL;
> +
> + tstate->exc_type = NULL;
> + tstate->exc_value = NULL;
> + tstate->exc_traceback = NULL;
> +
> + tstate->c_profilefunc = NULL;
> + tstate->c_tracefunc = NULL;
> + tstate->c_profileobj = NULL;
> + tstate->c_traceobj = NULL;
> +
> + tstate->trash_delete_nesting = 0;
> + tstate->trash_delete_later = NULL;
> + tstate->on_delete = NULL;
> + tstate->on_delete_data = NULL;
> +
> + tstate->coroutine_wrapper = NULL;
> + tstate->in_coroutine_wrapper = 0;
> +
> + tstate->async_gen_firstiter = NULL;
> + tstate->async_gen_finalizer = NULL;
> +
> + if (init)
> + _PyThreadState_Init(tstate);
> +
> + HEAD_LOCK();
> + tstate->prev = NULL;
> + tstate->next = interp->tstate_head;
> + if (tstate->next)
> + tstate->next->prev = tstate;
> + interp->tstate_head = tstate;
> + HEAD_UNLOCK();
> + }
> +
> + return tstate;
> +}
> +
> +PyThreadState *
> +PyThreadState_New(PyInterpreterState *interp)
> +{
> + return new_threadstate(interp, 1);
> +}
> +
> +PyThreadState *
> +_PyThreadState_Prealloc(PyInterpreterState *interp)
> +{
> + return new_threadstate(interp, 0);
> +}
> +
> +void
> +_PyThreadState_Init(PyThreadState *tstate)
> +{
> +#ifdef WITH_THREAD
> + _PyGILState_NoteThreadState(tstate);
> +#endif
> +}
> +
> +PyObject*
> +PyState_FindModule(struct PyModuleDef* module)
> +{
> + Py_ssize_t index = module->m_base.m_index;
> + PyInterpreterState *state = GET_INTERP_STATE();
> + PyObject *res;
> + if (module->m_slots) {
> + return NULL;
> + }
> + if (index == 0)
> + return NULL;
> + if (state->modules_by_index == NULL)
> + return NULL;
> + if (index >= PyList_GET_SIZE(state->modules_by_index))
> + return NULL;
> + res = PyList_GET_ITEM(state->modules_by_index, index);
> + return res==Py_None ? NULL : res;
> +}
> +
> +int
> +_PyState_AddModule(PyObject* module, struct PyModuleDef* def)
> +{
> + PyInterpreterState *state;
> + if (!def) {
> + assert(PyErr_Occurred());
> + return -1;
> + }
> + if (def->m_slots) {
> + PyErr_SetString(PyExc_SystemError,
> + "PyState_AddModule called on module with slots");
> + return -1;
> + }
> + state = GET_INTERP_STATE();
> + if (!state->modules_by_index) {
> + state->modules_by_index = PyList_New(0);
> + if (!state->modules_by_index)
> + return -1;
> + }
> + while(PyList_GET_SIZE(state->modules_by_index) <= def->m_base.m_index)
> + if (PyList_Append(state->modules_by_index, Py_None) < 0)
> + return -1;
> + Py_INCREF(module);
> + return PyList_SetItem(state->modules_by_index,
> + def->m_base.m_index, module);
> +}
> +
> +int
> +PyState_AddModule(PyObject* module, struct PyModuleDef* def)
> +{
> + Py_ssize_t index;
> + PyInterpreterState *state = GET_INTERP_STATE();
> + if (!def) {
> + Py_FatalError("PyState_AddModule: Module Definition is NULL");
> + return -1;
> + }
> + index = def->m_base.m_index;
> + if (state->modules_by_index) {
> + if(PyList_GET_SIZE(state->modules_by_index) >= index) {
> + if(module == PyList_GET_ITEM(state->modules_by_index, index)) {
> + Py_FatalError("PyState_AddModule: Module already added!");
> + return -1;
> + }
> + }
> + }
> + return _PyState_AddModule(module, def);
> +}
> +
> +int
> +PyState_RemoveModule(struct PyModuleDef* def)
> +{
> + PyInterpreterState *state;
> + Py_ssize_t index = def->m_base.m_index;
> + if (def->m_slots) {
> + PyErr_SetString(PyExc_SystemError,
> + "PyState_RemoveModule called on module with slots");
> + return -1;
> + }
> + state = GET_INTERP_STATE();
> + if (index == 0) {
> + Py_FatalError("PyState_RemoveModule: Module index invalid.");
> + return -1;
> + }
> + if (state->modules_by_index == NULL) {
> + Py_FatalError("PyState_RemoveModule: Interpreters module-list not acessible.");
> + return -1;
> + }
> + if (index > PyList_GET_SIZE(state->modules_by_index)) {
> + Py_FatalError("PyState_RemoveModule: Module index out of bounds.");
> + return -1;
> + }
> + Py_INCREF(Py_None);
> + return PyList_SetItem(state->modules_by_index, index, Py_None);
> +}
> +
> +/* used by import.c:PyImport_Cleanup */
> +void
> +_PyState_ClearModules(void)
> +{
> + PyInterpreterState *state = GET_INTERP_STATE();
> + if (state->modules_by_index) {
> + Py_ssize_t i;
> + for (i = 0; i < PyList_GET_SIZE(state->modules_by_index); i++) {
> + PyObject *m = PyList_GET_ITEM(state->modules_by_index, i);
> + if (PyModule_Check(m)) {
> + /* cleanup the saved copy of module dicts */
> + PyModuleDef *md = PyModule_GetDef(m);
> + if (md)
> + Py_CLEAR(md->m_base.m_copy);
> + }
> + }
> + /* Setting modules_by_index to NULL could be dangerous, so we
> + clear the list instead. */
> + if (PyList_SetSlice(state->modules_by_index,
> + 0, PyList_GET_SIZE(state->modules_by_index),
> + NULL))
> + PyErr_WriteUnraisable(state->modules_by_index);
> + }
> +}
> +
> +void
> +PyThreadState_Clear(PyThreadState *tstate)
> +{
> + if (Py_VerboseFlag && tstate->frame != NULL)
> + fprintf(stderr,
> + "PyThreadState_Clear: warning: thread still has a frame\n");
> +
> + Py_CLEAR(tstate->frame);
> +
> + Py_CLEAR(tstate->dict);
> + Py_CLEAR(tstate->async_exc);
> +
> + Py_CLEAR(tstate->curexc_type);
> + Py_CLEAR(tstate->curexc_value);
> + Py_CLEAR(tstate->curexc_traceback);
> +
> + Py_CLEAR(tstate->exc_type);
> + Py_CLEAR(tstate->exc_value);
> + Py_CLEAR(tstate->exc_traceback);
> +
> + tstate->c_profilefunc = NULL;
> + tstate->c_tracefunc = NULL;
> + Py_CLEAR(tstate->c_profileobj);
> + Py_CLEAR(tstate->c_traceobj);
> +
> + Py_CLEAR(tstate->coroutine_wrapper);
> + Py_CLEAR(tstate->async_gen_firstiter);
> + Py_CLEAR(tstate->async_gen_finalizer);
> +}
> +
> +
> +/* Common code for PyThreadState_Delete() and PyThreadState_DeleteCurrent() */
> +static void
> +tstate_delete_common(PyThreadState *tstate)
> +{
> + PyInterpreterState *interp;
> + if (tstate == NULL)
> + Py_FatalError("PyThreadState_Delete: NULL tstate");
> + interp = tstate->interp;
> + if (interp == NULL)
> + Py_FatalError("PyThreadState_Delete: NULL interp");
> + HEAD_LOCK();
> + if (tstate->prev)
> + tstate->prev->next = tstate->next;
> + else
> + interp->tstate_head = tstate->next;
> + if (tstate->next)
> + tstate->next->prev = tstate->prev;
> + HEAD_UNLOCK();
> + if (tstate->on_delete != NULL) {
> + tstate->on_delete(tstate->on_delete_data);
> + }
> + PyMem_RawFree(tstate);
> +}
> +
> +
> +void
> +PyThreadState_Delete(PyThreadState *tstate)
> +{
> + if (tstate == GET_TSTATE())
> + Py_FatalError("PyThreadState_Delete: tstate is still current");
> +#ifdef WITH_THREAD
> + if (autoInterpreterState && PyThread_get_key_value(autoTLSkey) == tstate)
> + PyThread_delete_key_value(autoTLSkey);
> +#endif /* WITH_THREAD */
> + tstate_delete_common(tstate);
> +}
> +
> +
> +#ifdef WITH_THREAD
> +void
> +PyThreadState_DeleteCurrent()
> +{
> + PyThreadState *tstate = GET_TSTATE();
> + if (tstate == NULL)
> + Py_FatalError(
> + "PyThreadState_DeleteCurrent: no current tstate");
> + tstate_delete_common(tstate);
> + if (autoInterpreterState && PyThread_get_key_value(autoTLSkey) == tstate)
> + PyThread_delete_key_value(autoTLSkey);
> + SET_TSTATE(NULL);
> + PyEval_ReleaseLock();
> +}
> +#endif /* WITH_THREAD */
> +
> +
> +/*
> + * Delete all thread states except the one passed as argument.
> + * Note that, if there is a current thread state, it *must* be the one
> + * passed as argument. Also, this won't touch any other interpreters
> + * than the current one, since we don't know which thread state should
> + be kept in those other interpreters.
> + */
> +void
> +_PyThreadState_DeleteExcept(PyThreadState *tstate)
> +{
> + PyInterpreterState *interp = tstate->interp;
> + PyThreadState *p, *next, *garbage;
> + HEAD_LOCK();
> + /* Remove all thread states, except tstate, from the linked list of
> + thread states. This will allow calling PyThreadState_Clear()
> + without holding the lock. */
> + garbage = interp->tstate_head;
> + if (garbage == tstate)
> + garbage = tstate->next;
> + if (tstate->prev)
> + tstate->prev->next = tstate->next;
> + if (tstate->next)
> + tstate->next->prev = tstate->prev;
> + tstate->prev = tstate->next = NULL;
> + interp->tstate_head = tstate;
> + HEAD_UNLOCK();
> + /* Clear and deallocate all stale thread states. Even if this
> + executes Python code, we should be safe since it executes
> + in the current thread, not one of the stale threads. */
> + for (p = garbage; p; p = next) {
> + next = p->next;
> + PyThreadState_Clear(p);
> + PyMem_RawFree(p);
> + }
> +}
> +
> +
> +PyThreadState *
> +_PyThreadState_UncheckedGet(void)
> +{
> + return GET_TSTATE();
> +}
> +
> +
> +PyThreadState *
> +PyThreadState_Get(void)
> +{
> + PyThreadState *tstate = GET_TSTATE();
> + if (tstate == NULL)
> + Py_FatalError("PyThreadState_Get: no current thread");
> +
> + return tstate;
> +}
> +
> +
> +PyThreadState *
> +PyThreadState_Swap(PyThreadState *newts)
> +{
> + PyThreadState *oldts = GET_TSTATE();
> +
> + SET_TSTATE(newts);
> + /* It should not be possible for more than one thread state
> + to be used for a thread. Check this the best we can in debug
> + builds.
> + */
> +#if defined(Py_DEBUG) && defined(WITH_THREAD)
> + if (newts) {
> + /* This can be called from PyEval_RestoreThread(). Similar
> + to it, we need to ensure errno doesn't change.
> + */
> + int err = errno;
> + PyThreadState *check = PyGILState_GetThisThreadState();
> + if (check && check->interp == newts->interp && check != newts)
> + Py_FatalError("Invalid thread state for this thread");
> + errno = err;
> + }
> +#endif
> + return oldts;
> +}
> +
> +__PyCodeExtraState*
> +__PyCodeExtraState_Get(void) {
> + PyInterpreterState* interp = PyThreadState_Get()->interp;
> +
> + HEAD_LOCK();
> + for (__PyCodeExtraState* cur = coextra_head; cur != NULL; cur = cur->next) {
> + if (cur->interp == interp) {
> + HEAD_UNLOCK();
> + return cur;
> + }
> + }
> + HEAD_UNLOCK();
> +
> + Py_FatalError("__PyCodeExtraState_Get: no code state for interpreter");
> + return NULL;
> +}
> +
> +/* An extension mechanism to store arbitrary additional per-thread state.
> + PyThreadState_GetDict() returns a dictionary that can be used to hold such
> + state; the caller should pick a unique key and store its state there. If
> + PyThreadState_GetDict() returns NULL, an exception has *not* been raised
> + and the caller should assume no per-thread state is available. */
> +
> +PyObject *
> +PyThreadState_GetDict(void)
> +{
> + PyThreadState *tstate = GET_TSTATE();
> + if (tstate == NULL)
> + return NULL;
> +
> + if (tstate->dict == NULL) {
> + PyObject *d;
> + tstate->dict = d = PyDict_New();
> + if (d == NULL)
> + PyErr_Clear();
> + }
> + return tstate->dict;
> +}
> +
> +
> +/* Asynchronously raise an exception in a thread.
> + Requested by Just van Rossum and Alex Martelli.
> + To prevent naive misuse, you must write your own extension
> + to call this, or use ctypes. Must be called with the GIL held.
> + Returns the number of tstates modified (normally 1, but 0 if `id` didn't
> + match any known thread id). Can be called with exc=NULL to clear an
> + existing async exception. This raises no exceptions. */
> +
> +int
> +PyThreadState_SetAsyncExc(long id, PyObject *exc) {
> + PyInterpreterState *interp = GET_INTERP_STATE();
> + PyThreadState *p;
> +
> + /* Although the GIL is held, a few C API functions can be called
> + * without the GIL held, and in particular some that create and
> + * destroy thread and interpreter states. Those can mutate the
> + * list of thread states we're traversing, so to prevent that we lock
> + * head_mutex for the duration.
> + */
> + HEAD_LOCK();
> + for (p = interp->tstate_head; p != NULL; p = p->next) {
> + if (p->thread_id == id) {
> + /* Tricky: we need to decref the current value
> + * (if any) in p->async_exc, but that can in turn
> + * allow arbitrary Python code to run, including
> + * perhaps calls to this function. To prevent
> + * deadlock, we need to release head_mutex before
> + * the decref.
> + */
> + PyObject *old_exc = p->async_exc;
> + Py_XINCREF(exc);
> + p->async_exc = exc;
> + HEAD_UNLOCK();
> + Py_XDECREF(old_exc);
> + _PyEval_SignalAsyncExc();
> + return 1;
> + }
> + }
> + HEAD_UNLOCK();
> + return 0;
> +}
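
Hedged sketch of calling the function above from extension code; interrupt_thread is a hypothetical helper. The GIL must be held, and the return value tells the caller whether a thread state with the given id was found:

    #include <Python.h>

    static int interrupt_thread(long thread_id)
    {
        PyGILState_STATE g = PyGILState_Ensure();   /* take the GIL */
        int n = PyThreadState_SetAsyncExc(thread_id, PyExc_KeyboardInterrupt);
        PyGILState_Release(g);
        return n;   /* 1 if a matching thread state was found, 0 otherwise */
    }
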
> +
> +
> +/* Routines for advanced debuggers, requested by David Beazley.
> + Don't use unless you know what you are doing! */
> +
> +PyInterpreterState *
> +PyInterpreterState_Head(void)
> +{
> + return interp_head;
> +}
> +
> +PyInterpreterState *
> +PyInterpreterState_Next(PyInterpreterState *interp) {
> + return interp->next;
> +}
> +
> +PyThreadState *
> +PyInterpreterState_ThreadHead(PyInterpreterState *interp) {
> + return interp->tstate_head;
> +}
> +
> +PyThreadState *
> +PyThreadState_Next(PyThreadState *tstate) {
> + return tstate->next;
> +}
> +
> +/* The implementation of sys._current_frames(). This is intended to be
> + called with the GIL held, as it will be when called via
> + sys._current_frames(). It's possible it would work fine even without
> + the GIL held, but we haven't thought enough about that.
> +*/
> +PyObject *
> +_PyThread_CurrentFrames(void)
> +{
> + PyObject *result;
> + PyInterpreterState *i;
> +
> + result = PyDict_New();
> + if (result == NULL)
> + return NULL;
> +
> + /* for i in all interpreters:
> + * for t in all of i's thread states:
> + * if t's frame isn't NULL, map t's id to its frame
> + * Because these lists can mutate even when the GIL is held, we
> + * need to grab head_mutex for the duration.
> + */
> + HEAD_LOCK();
> + for (i = interp_head; i != NULL; i = i->next) {
> + PyThreadState *t;
> + for (t = i->tstate_head; t != NULL; t = t->next) {
> + PyObject *id;
> + int stat;
> + struct _frame *frame = t->frame;
> + if (frame == NULL)
> + continue;
> + id = PyLong_FromLong(t->thread_id);
> + if (id == NULL)
> + goto Fail;
> + stat = PyDict_SetItem(result, id, (PyObject *)frame);
> + Py_DECREF(id);
> + if (stat < 0)
> + goto Fail;
> + }
> + }
> + HEAD_UNLOCK();
> + return result;
> +
> + Fail:
> + HEAD_UNLOCK();
> + Py_DECREF(result);
> + return NULL;
> +}
> +
> +/* Python "auto thread state" API. */
> +#ifdef WITH_THREAD
> +
> +/* Keep this as a static, as it is not reliable! It can only
> + ever be compared to the state for the *current* thread.
> + * If not equal, then it doesn't matter that the actual
> + value may change immediately after comparison, as it can't
> + possibly change to the current thread's state.
> + * If equal, then the current thread holds the lock, so the value can't
> + change until we yield the lock.
> +*/
> +static int
> +PyThreadState_IsCurrent(PyThreadState *tstate)
> +{
> + /* Must be the tstate for this thread */
> + assert(PyGILState_GetThisThreadState()==tstate);
> + return tstate == GET_TSTATE();
> +}
> +
> +/* Internal initialization/finalization functions called by
> + Py_Initialize/Py_FinalizeEx
> +*/
> +void
> +_PyGILState_Init(PyInterpreterState *i, PyThreadState *t)
> +{
> + assert(i && t); /* must init with valid states */
> + autoTLSkey = PyThread_create_key();
> + if (autoTLSkey == -1)
> + Py_FatalError("Could not allocate TLS entry");
> + autoInterpreterState = i;
> + assert(PyThread_get_key_value(autoTLSkey) == NULL);
> + assert(t->gilstate_counter == 0);
> +
> + _PyGILState_NoteThreadState(t);
> +}
> +
> +PyInterpreterState *
> +_PyGILState_GetInterpreterStateUnsafe(void)
> +{
> + return autoInterpreterState;
> +}
> +
> +void
> +_PyGILState_Fini(void)
> +{
> + PyThread_delete_key(autoTLSkey);
> + autoTLSkey = -1;
> + autoInterpreterState = NULL;
> +}
> +
> +/* Reset the TLS key - called by PyOS_AfterFork().
> + * This should not be necessary, but some - buggy - pthread implementations
> + * don't reset TLS upon fork(), see issue #10517.
> + */
> +void
> +_PyGILState_Reinit(void)
> +{
> +#ifdef WITH_THREAD
> + head_mutex = NULL;
> + HEAD_INIT();
> +#endif
> + PyThreadState *tstate = PyGILState_GetThisThreadState();
> + PyThread_delete_key(autoTLSkey);
> + if ((autoTLSkey = PyThread_create_key()) == -1)
> + Py_FatalError("Could not allocate TLS entry");
> +
> + /* If the thread had an associated auto thread state, reassociate it with
> + * the new key. */
> + if (tstate && PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
> + Py_FatalError("Couldn't create autoTLSkey mapping");
> +}
> +
> +/* When a thread state is created for a thread by some mechanism other than
> + PyGILState_Ensure, it's important that the GILState machinery knows about
> + it so it doesn't try to create another thread state for the thread (this is
> + a better fix for SF bug #1010677 than the first one attempted).
> +*/
> +static void
> +_PyGILState_NoteThreadState(PyThreadState* tstate)
> +{
> + /* If autoTLSkey isn't initialized, this must be the very first
> + threadstate created in Py_Initialize(). Don't do anything for now
> + (we'll be back here when _PyGILState_Init is called). */
> + if (!autoInterpreterState)
> + return;
> +
> + /* Stick the thread state for this thread in thread local storage.
> +
> + The only situation where you can legitimately have more than one
> + thread state for an OS level thread is when there are multiple
> + interpreters.
> +
> + You shouldn't really be using the PyGILState_ APIs anyway (see issues
> + #10915 and #15751).
> +
> + The first thread state created for that given OS level thread will
> + "win", which seems reasonable behaviour.
> + */
> + if (PyThread_get_key_value(autoTLSkey) == NULL) {
> + if (PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
> + Py_FatalError("Couldn't create autoTLSkey mapping");
> + }
> +
> + /* PyGILState_Release must not try to delete this thread state. */
> + tstate->gilstate_counter = 1;
> +}
> +
> +/* The public functions */
> +PyThreadState *
> +PyGILState_GetThisThreadState(void)
> +{
> + if (autoInterpreterState == NULL)
> + return NULL;
> + return (PyThreadState *)PyThread_get_key_value(autoTLSkey);
> +}
> +
> +int
> +PyGILState_Check(void)
> +{
> + PyThreadState *tstate;
> +
> + if (!_PyGILState_check_enabled)
> + return 1;
> +
> + if (autoTLSkey == -1)
> + return 1;
> +
> + tstate = GET_TSTATE();
> + if (tstate == NULL)
> + return 0;
> +
> + return (tstate == PyGILState_GetThisThreadState());
> +}
> +
> +PyGILState_STATE
> +PyGILState_Ensure(void)
> +{
> + int current;
> + PyThreadState *tcur;
> + int need_init_threads = 0;
> +
> + /* Note that we do not auto-init Python here - apart from
> + potential races with 2 threads auto-initializing, pep-311
> + spells out other issues. Embedders are expected to have
> + called Py_Initialize() and usually PyEval_InitThreads().
> + */
> + assert(autoInterpreterState); /* Py_Initialize() hasn't been called! */
> + tcur = (PyThreadState *)PyThread_get_key_value(autoTLSkey);
> + if (tcur == NULL) {
> + need_init_threads = 1;
> +
> + /* Create a new thread state for this thread */
> + tcur = PyThreadState_New(autoInterpreterState);
> + if (tcur == NULL)
> + Py_FatalError("Couldn't create thread-state for new thread");
> + /* This is our thread state! We'll need to delete it in the
> + matching call to PyGILState_Release(). */
> + tcur->gilstate_counter = 0;
> + current = 0; /* new thread state is never current */
> + }
> + else {
> + current = PyThreadState_IsCurrent(tcur);
> + }
> +
> + if (current == 0) {
> + PyEval_RestoreThread(tcur);
> + }
> +
> + /* Update our counter in the thread-state - no need for locks:
> + - tcur will remain valid as we hold the GIL.
> + - the counter is safe as we are the only thread "allowed"
> + to modify this value
> + */
> + ++tcur->gilstate_counter;
> +
> + if (need_init_threads) {
> + /* At startup, Python has no concrete GIL. If PyGILState_Ensure() is
> + called from a new thread for the first time, we need to create the
> + GIL. */
> + PyEval_InitThreads();
> + }
> +
> + return current ? PyGILState_LOCKED : PyGILState_UNLOCKED;
> +}
> +
> +void
> +PyGILState_Release(PyGILState_STATE oldstate)
> +{
> + PyThreadState *tcur = (PyThreadState *)PyThread_get_key_value(
> + autoTLSkey);
> + if (tcur == NULL)
> + Py_FatalError("auto-releasing thread-state, "
> + "but no thread-state for this thread");
> + /* We must hold the GIL and have our thread state current */
> + /* XXX - remove the check - the assert should be fine,
> + but while this is very new (April 2003), the extra check
> + by release-only users can't hurt.
> + */
> + if (! PyThreadState_IsCurrent(tcur))
> + Py_FatalError("This thread state must be current when releasing");
> + assert(PyThreadState_IsCurrent(tcur));
> + --tcur->gilstate_counter;
> + assert(tcur->gilstate_counter >= 0); /* illegal counter value */
> +
> + /* If we're going to destroy this thread-state, we must
> + * clear it while the GIL is held, as destructors may run.
> + */
> + if (tcur->gilstate_counter == 0) {
> + /* can't have been locked when we created it */
> + assert(oldstate == PyGILState_UNLOCKED);
> + PyThreadState_Clear(tcur);
> + /* Delete the thread-state. Note this releases the GIL too!
> + * It's vital that the GIL be held here, to avoid shutdown
> + * races; see bugs 225673 and 1061968 (that nasty bug has a
> + * habit of coming back).
> + */
> + PyThreadState_DeleteCurrent();
> + }
> + /* Release the lock if necessary */
> + else if (oldstate == PyGILState_UNLOCKED)
> + PyEval_SaveThread();
> +}
> +
> +#endif /* WITH_THREAD */
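
The point of the GILState block that ends here is to let foreign C threads call into the interpreter without tracking thread states by hand. A hedged embedding sketch, not part of the patch; call_python_from_c_thread is a hypothetical helper:

    #include <Python.h>

    static void call_python_from_c_thread(void)
    {
        PyGILState_STATE gstate = PyGILState_Ensure();   /* acquires the GIL, creating a
                                                            thread state for this thread
                                                            if it has none yet */
        PyRun_SimpleString("print('hello from a C thread')");
        PyGILState_Release(gstate);                      /* restores the previous state */
    }
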
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
> new file mode 100644
> index 00000000..0dedf035
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
> @@ -0,0 +1,749 @@
> +/** @file
> + Time related functions
> +
> + Copyright (c) 2010 - 2021, Intel Corporation. All rights reserved.<BR>
> + This program and the accompanying materials are licensed and made available under
> + the terms and conditions of the BSD License that accompanies this distribution.
> + The full text of the license may be found at
> + http://opensource.org/licenses/bsd-license.
> +
> + THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> + WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +**/
> +
> +#include "Python.h"
> +#ifdef MS_WINDOWS
> +#include <windows.h>
> +#endif
> +
> +#if defined(__APPLE__)
> +#include <mach/mach_time.h> /* mach_absolute_time(), mach_timebase_info() */
> +#endif
> +
> +#define _PyTime_check_mul_overflow(a, b) \
> + (assert(b > 0), \
> + (_PyTime_t)(a) < _PyTime_MIN / (_PyTime_t)(b) \
> + || _PyTime_MAX / (_PyTime_t)(b) < (_PyTime_t)(a))
> +
> +/* To millisecond (10^-3) */
> +#define SEC_TO_MS 1000
> +
> +/* To microseconds (10^-6) */
> +#define MS_TO_US 1000
> +#define SEC_TO_US (SEC_TO_MS * MS_TO_US)
> +
> +/* To nanoseconds (10^-9) */
> +#define US_TO_NS 1000
> +#define MS_TO_NS (MS_TO_US * US_TO_NS)
> +#define SEC_TO_NS (SEC_TO_MS * MS_TO_NS)
> +
> +/* Conversion from nanoseconds */
> +#define NS_TO_MS (1000 * 1000)
> +#define NS_TO_US (1000)
> +
> +static void
> +error_time_t_overflow(void)
> +{
> + PyErr_SetString(PyExc_OverflowError,
> + "timestamp out of range for platform time_t");
> +}
> +
> +time_t
> +_PyLong_AsTime_t(PyObject *obj)
> +{
> +#if SIZEOF_TIME_T == SIZEOF_LONG_LONG
> + long long val;
> + val = PyLong_AsLongLong(obj);
> +#else
> + long val;
> + Py_BUILD_ASSERT(sizeof(time_t) <= sizeof(long));
> + val = PyLong_AsLong(obj);
> +#endif
> + if (val == -1 && PyErr_Occurred()) {
> + if (PyErr_ExceptionMatches(PyExc_OverflowError))
> + error_time_t_overflow();
> + return -1;
> + }
> + return (time_t)val;
> +}
> +
> +PyObject *
> +_PyLong_FromTime_t(time_t t)
> +{
> +#if SIZEOF_TIME_T == SIZEOF_LONG_LONG
> + return PyLong_FromLongLong((long long)t);
> +#else
> + Py_BUILD_ASSERT(sizeof(time_t) <= sizeof(long));
> + return PyLong_FromLong((long)t);
> +#endif
> +}
> +
> +/* Round to nearest with ties going to nearest even integer
> + (_PyTime_ROUND_HALF_EVEN) */
> +static double
> +_PyTime_RoundHalfEven(double x)
> +{
> + double rounded = round(x);
> + if (fabs(x-rounded) == 0.5)
> + /* halfway case: round to even */
> + rounded = 2.0*round(x/2.0);
> + return rounded;
> +}
> +
> +static double
> +_PyTime_Round(double x, _PyTime_round_t round)
> +{
> + /* volatile avoids optimization changing how numbers are rounded */
> + volatile double d;
> +
> + d = x;
> + if (round == _PyTime_ROUND_HALF_EVEN){
> + d = _PyTime_RoundHalfEven(d);
> + }
> + else if (round == _PyTime_ROUND_CEILING){
> + d = ceil(d);
> + }
> + else if (round == _PyTime_ROUND_FLOOR) {
> + d = floor(d);
> + }
> + else {
> + assert(round == _PyTime_ROUND_UP);
> + d = (d >= 0.0) ? ceil(d) : floor(d);
> + }
> + return d;
> +}
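
The half-even tie-breaking used by _PyTime_RoundHalfEven() above can be checked with a standalone sketch, not part of the patch: ties such as 2.5 and 3.5 both land on the even neighbour, 2 and 4 (build with -lm).

    #include <stdio.h>
    #include <math.h>

    static double round_half_even(double x)
    {
        double rounded = round(x);              /* C99 round(): halves go away from zero */
        if (fabs(x - rounded) == 0.5)
            rounded = 2.0 * round(x / 2.0);     /* tie: pick the even neighbour instead */
        return rounded;
    }

    int main(void)
    {
        printf("2.5 -> %.0f\n", round_half_even(2.5));   /* 2 */
        printf("3.5 -> %.0f\n", round_half_even(3.5));   /* 4 */
        return 0;
    }
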
> +
> +static int
> +_PyTime_DoubleToDenominator(double d, time_t *sec, long *numerator,
> + double denominator, _PyTime_round_t round)
> +{
> + double intpart;
> + /* volatile avoids optimization changing how numbers are rounded */
> + volatile double floatpart;
> +
> + floatpart = modf(d, &intpart);
> +
> + floatpart *= denominator;
> + floatpart = _PyTime_Round(floatpart, round);
> + if (floatpart >= denominator) {
> + floatpart -= denominator;
> + intpart += 1.0;
> + }
> + else if (floatpart < 0) {
> + floatpart += denominator;
> + intpart -= 1.0;
> + }
> + assert(0.0 <= floatpart && floatpart < denominator);
> +
> + if (!_Py_InIntegralTypeRange(time_t, intpart)) {
> + error_time_t_overflow();
> + return -1;
> + }
> + *sec = (time_t)intpart;
> + *numerator = (long)floatpart;
> +
> + return 0;
> +}
> +
> +static int
> +_PyTime_ObjectToDenominator(PyObject *obj, time_t *sec, long *numerator,
> + double denominator, _PyTime_round_t round)
> +{
> + assert(denominator <= (double)LONG_MAX);
> +
> + if (PyFloat_Check(obj)) {
> + double d = PyFloat_AsDouble(obj);
> + if (Py_IS_NAN(d)) {
> + *numerator = 0;
> + PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
> + return -1;
> + }
> + return _PyTime_DoubleToDenominator(d, sec, numerator,
> + denominator, round);
> + }
> + else {
> + *sec = _PyLong_AsTime_t(obj);
> + *numerator = 0;
> + if (*sec == (time_t)-1 && PyErr_Occurred())
> + return -1;
> + return 0;
> + }
> +}
> +
> +int
> +_PyTime_ObjectToTime_t(PyObject *obj, time_t *sec, _PyTime_round_t round)
> +{
> + if (PyFloat_Check(obj)) {
> + double intpart;
> + /* volatile avoids optimization changing how numbers are rounded */
> + volatile double d;
> +
> + d = PyFloat_AsDouble(obj);
> + if (Py_IS_NAN(d)) {
> + PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
> + return -1;
> + }
> +
> + d = _PyTime_Round(d, round);
> + (void)modf(d, &intpart);
> +
> + if (!_Py_InIntegralTypeRange(time_t, intpart)) {
> + error_time_t_overflow();
> + return -1;
> + }
> + *sec = (time_t)intpart;
> + return 0;
> + }
> + else {
> + *sec = _PyLong_AsTime_t(obj);
> + if (*sec == (time_t)-1 && PyErr_Occurred())
> + return -1;
> + return 0;
> + }
> +}
> +
> +int
> +_PyTime_ObjectToTimespec(PyObject *obj, time_t *sec, long *nsec,
> + _PyTime_round_t round)
> +{
> + int res;
> + res = _PyTime_ObjectToDenominator(obj, sec, nsec, 1e9, round);
> + if (res == 0) {
> + assert(0 <= *nsec && *nsec < SEC_TO_NS);
> + }
> + return res;
> +}
> +
> +int
> +_PyTime_ObjectToTimeval(PyObject *obj, time_t *sec, long *usec,
> + _PyTime_round_t round)
> +{
> + int res;
> + res = _PyTime_ObjectToDenominator(obj, sec, usec, 1e6, round);
> + if (res == 0) {
> + assert(0 <= *usec && *usec < SEC_TO_US);
> + }
> + return res;
> +}
> +
> +static void
> +_PyTime_overflow(void)
> +{
> + PyErr_SetString(PyExc_OverflowError,
> + "timestamp too large to convert to C _PyTime_t");
> +}
> +
> +_PyTime_t
> +_PyTime_FromSeconds(int seconds)
> +{
> + _PyTime_t t;
> + t = (_PyTime_t)seconds;
> + /* ensure that integer overflow cannot happen, int type should have 32
> + bits, whereas _PyTime_t type has at least 64 bits (SEC_TO_MS takes 30
> + bits). */
> + Py_BUILD_ASSERT(INT_MAX <= _PyTime_MAX / SEC_TO_NS);
> + Py_BUILD_ASSERT(INT_MIN >= _PyTime_MIN / SEC_TO_NS);
> + assert((t >= 0 && t <= _PyTime_MAX / SEC_TO_NS)
> + || (t < 0 && t >= _PyTime_MIN / SEC_TO_NS));
> + t *= SEC_TO_NS;
> + return t;
> +}
> +
> +_PyTime_t
> +_PyTime_FromNanoseconds(long long ns)
> +{
> + _PyTime_t t;
> + Py_BUILD_ASSERT(sizeof(long long) <= sizeof(_PyTime_t));
> + t = Py_SAFE_DOWNCAST(ns, long long, _PyTime_t);
> + return t;
> +}
> +
> +static int
> +_PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts, int raise)
> +{
> + _PyTime_t t;
> + int res = 0;
> +
> + Py_BUILD_ASSERT(sizeof(ts->tv_sec) <= sizeof(_PyTime_t));
> + t = (_PyTime_t)ts->tv_sec;
> +
> + if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
> + if (raise)
> + _PyTime_overflow();
> + res = -1;
> + }
> + t = t * SEC_TO_NS;
> +
> + t += ts->tv_nsec;
> +
> + *tp = t;
> + return res;
> +}
> +
> +
> +#ifdef HAVE_CLOCK_GETTIME
> +static int
> +_PyTime_FromTimespec(_PyTime_t *tp, struct timespec *ts, int raise)
> +{
> + _PyTime_t t;
> + int res = 0;
> +
> + Py_BUILD_ASSERT(sizeof(ts->tv_sec) <= sizeof(_PyTime_t));
> + t = (_PyTime_t)ts->tv_sec;
> +
> + if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
> + if (raise)
> + _PyTime_overflow();
> + res = -1;
> + }
> + t = t * SEC_TO_NS;
> +
> + t += ts->tv_nsec;
> +
> + *tp = t;
> + return res;
> +}
> +#elif !defined(MS_WINDOWS)
> +static int
> +_PyTime_FromTimeval(_PyTime_t *tp, struct timeval *tv, int raise)
> +{
> + _PyTime_t t;
> + int res = 0;
> +
> + Py_BUILD_ASSERT(sizeof(tv->tv_sec) <= sizeof(_PyTime_t));
> + t = (_PyTime_t)tv->tv_sec;
> +
> + if (_PyTime_check_mul_overflow(t, SEC_TO_NS)) {
> + if (raise)
> + _PyTime_overflow();
> + res = -1;
> + }
> + t = t * SEC_TO_NS;
> +
> + t += (_PyTime_t)tv->tv_usec * US_TO_NS;
> +
> + *tp = t;
> + return res;
> +}
> +#endif
> +
> +static int
> +_PyTime_FromFloatObject(_PyTime_t *t, double value, _PyTime_round_t round,
> + long unit_to_ns)
> +{
> + /* volatile avoids optimization changing how numbers are rounded */
> + volatile double d;
> +
> + /* convert to a number of nanoseconds */
> + d = value;
> + d *= (double)unit_to_ns;
> + d = _PyTime_Round(d, round);
> +
> + if (!_Py_InIntegralTypeRange(_PyTime_t, d)) {
> + _PyTime_overflow();
> + return -1;
> + }
> + *t = (_PyTime_t)d;
> + return 0;
> +}
> +
> +static int
> +_PyTime_FromObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round,
> + long unit_to_ns)
> +{
> + if (PyFloat_Check(obj)) {
> + double d;
> + d = PyFloat_AsDouble(obj);
> + if (Py_IS_NAN(d)) {
> + PyErr_SetString(PyExc_ValueError, "Invalid value NaN (not a number)");
> + return -1;
> + }
> + return _PyTime_FromFloatObject(t, d, round, unit_to_ns);
> + }
> + else {
> + long long sec;
> + Py_BUILD_ASSERT(sizeof(long long) <= sizeof(_PyTime_t));
> +
> + sec = PyLong_AsLongLong(obj);
> + if (sec == -1 && PyErr_Occurred()) {
> + if (PyErr_ExceptionMatches(PyExc_OverflowError))
> + _PyTime_overflow();
> + return -1;
> + }
> +
> + if (_PyTime_check_mul_overflow(sec, unit_to_ns)) {
> + _PyTime_overflow();
> + return -1;
> + }
> + *t = sec * unit_to_ns;
> + return 0;
> + }
> +}
> +
> +int
> +_PyTime_FromSecondsObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round)
> +{
> + return _PyTime_FromObject(t, obj, round, SEC_TO_NS);
> +}
> +
> +int
> +_PyTime_FromMillisecondsObject(_PyTime_t *t, PyObject *obj, _PyTime_round_t round)
> +{
> + return _PyTime_FromObject(t, obj, round, MS_TO_NS);
> +}
> +
> +double
> +_PyTime_AsSecondsDouble(_PyTime_t t)
> +{
> + /* volatile avoids optimization changing how numbers are rounded */
> + volatile double d;
> +
> + if (t % SEC_TO_NS == 0) {
> + _PyTime_t secs;
> + /* Divide using integers to avoid rounding issues on the integer part.
> + 1e-9 cannot be stored exactly in IEEE 64-bit. */
> + secs = t / SEC_TO_NS;
> + d = (double)secs;
> + }
> + else {
> + d = (double)t;
> + d /= 1e9;
> + }
> + return d;
> +}
> +
> +PyObject *
> +_PyTime_AsNanosecondsObject(_PyTime_t t)
> +{
> + Py_BUILD_ASSERT(sizeof(long long) >= sizeof(_PyTime_t));
> + return PyLong_FromLongLong((long long)t);
> +}
> +
> +static _PyTime_t
> +_PyTime_Divide(const _PyTime_t t, const _PyTime_t k,
> + const _PyTime_round_t round)
> +{
> + assert(k > 1);
> + if (round == _PyTime_ROUND_HALF_EVEN) {
> + _PyTime_t x, r, abs_r;
> + x = t / k;
> + r = t % k;
> + abs_r = Py_ABS(r);
> + if (abs_r > k / 2 || (abs_r == k / 2 && (Py_ABS(x) & 1))) {
> + if (t >= 0)
> + x++;
> + else
> + x--;
> + }
> + return x;
> + }
> + else if (round == _PyTime_ROUND_CEILING) {
> + if (t >= 0){
> + return (t + k - 1) / k;
> + }
> + else{
> + return t / k;
> + }
> + }
> + else if (round == _PyTime_ROUND_FLOOR){
> + if (t >= 0) {
> + return t / k;
> + }
> + else{
> + return (t - (k - 1)) / k;
> + }
> + }
> + else {
> + assert(round == _PyTime_ROUND_UP);
> + if (t >= 0) {
> + return (t + k - 1) / k;
> + }
> + else {
> + return (t - (k - 1)) / k;
> + }
> + }
> +}
> +
> +_PyTime_t
> +_PyTime_AsMilliseconds(_PyTime_t t, _PyTime_round_t round)
> +{
> + return _PyTime_Divide(t, NS_TO_MS, round);
> +}
> +
> +_PyTime_t
> +_PyTime_AsMicroseconds(_PyTime_t t, _PyTime_round_t round)
> +{
> + return _PyTime_Divide(t, NS_TO_US, round);
> +}
> +
> +static int
> +_PyTime_AsTimeval_impl(_PyTime_t t, _PyTime_t *p_secs, int *p_us,
> + _PyTime_round_t round)
> +{
> + _PyTime_t secs, ns;
> + int usec;
> + int res = 0;
> +
> + secs = t / SEC_TO_NS;
> + ns = t % SEC_TO_NS;
> +
> + usec = (int)_PyTime_Divide(ns, US_TO_NS, round);
> + if (usec < 0) {
> + usec += SEC_TO_US;
> + if (secs != _PyTime_MIN)
> + secs -= 1;
> + else
> + res = -1;
> + }
> + else if (usec >= SEC_TO_US) {
> + usec -= SEC_TO_US;
> + if (secs != _PyTime_MAX)
> + secs += 1;
> + else
> + res = -1;
> + }
> + assert(0 <= usec && usec < SEC_TO_US);
> +
> + *p_secs = secs;
> + *p_us = usec;
> +
> + return res;
> +}
> +
> +static int
> +_PyTime_AsTimevalStruct_impl(_PyTime_t t, struct timeval *tv,
> + _PyTime_round_t round, int raise)
> +{
> + _PyTime_t secs, secs2;
> + int us;
> + int res;
> +
> + res = _PyTime_AsTimeval_impl(t, &secs, &us, round);
> +
> +#ifdef MS_WINDOWS
> + tv->tv_sec = (long)secs;
> +#else
> + tv->tv_sec = secs;
> +#endif
> + tv->tv_usec = us;
> +
> + secs2 = (_PyTime_t)tv->tv_sec;
> + if (res < 0 || secs2 != secs) {
> + if (raise)
> + error_time_t_overflow();
> + return -1;
> + }
> + return 0;
> +}
> +
> +int
> +_PyTime_AsTimeval(_PyTime_t t, struct timeval *tv, _PyTime_round_t round)
> +{
> + return _PyTime_AsTimevalStruct_impl(t, tv, round, 1);
> +}
> +
> +int
> +_PyTime_AsTimeval_noraise(_PyTime_t t, struct timeval *tv, _PyTime_round_t round)
> +{
> + return _PyTime_AsTimevalStruct_impl(t, tv, round, 0);
> +}
> +
> +int
> +_PyTime_AsTimevalTime_t(_PyTime_t t, time_t *p_secs, int *us,
> + _PyTime_round_t round)
> +{
> + _PyTime_t secs;
> + int res;
> +
> + res = _PyTime_AsTimeval_impl(t, &secs, us, round);
> +
> + *p_secs = secs;
> +
> + if (res < 0 || (_PyTime_t)*p_secs != secs) {
> + error_time_t_overflow();
> + return -1;
> + }
> + return 0;
> +}
> +
> +
> +#if defined(HAVE_CLOCK_GETTIME) || defined(HAVE_KQUEUE)
> +int
> +_PyTime_AsTimespec(_PyTime_t t, struct timespec *ts)
> +{
> + _PyTime_t secs, nsec;
> +
> + secs = t / SEC_TO_NS;
> + nsec = t % SEC_TO_NS;
> + if (nsec < 0) {
> + nsec += SEC_TO_NS;
> + secs -= 1;
> + }
> + ts->tv_sec = (time_t)secs;
> + assert(0 <= nsec && nsec < SEC_TO_NS);
> + ts->tv_nsec = nsec;
> +
> + if ((_PyTime_t)ts->tv_sec != secs) {
> + error_time_t_overflow();
> + return -1;
> + }
> + return 0;
> +}
> +#endif
> +
> +static int
> +pygettimeofday(_PyTime_t *tp, _Py_clock_info_t *info, int raise)
> +{
> + int err;
> + struct timeval tv;
> +
> + assert(info == NULL || raise);
> + err = gettimeofday(&tv, (struct timezone *)NULL);
> + if (err) {
> + if (raise)
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + if (_PyTime_FromTimeval(tp, &tv, raise) < 0)
> + return -1;
> +
> + if (info) {
> + info->implementation = "gettimeofday()";
> + info->resolution = 1e-6;
> + info->monotonic = 0;
> + info->adjustable = 1;
> + }
> + return 0;
> +}
> +
> +_PyTime_t
> +_PyTime_GetSystemClock(void)
> +{
> + _PyTime_t t;
> + if (pygettimeofday(&t, NULL, 0) < 0) {
> + /* should not happen, _PyTime_Init() checked the clock at startup */
> + assert(0);
> +
> + /* use a fixed value instead of a random value from the stack */
> + t = 0;
> + }
> + return t;
> +}
> +
> +int
> +_PyTime_GetSystemClockWithInfo(_PyTime_t *t, _Py_clock_info_t *info)
> +{
> + return pygettimeofday(t, info, 1);
> +}
> +
> +static int
> +pymonotonic(_PyTime_t *tp, _Py_clock_info_t *info, int raise)
> +{
> + struct timespec ts = { 0 };
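> + /* Note (UEFI port): no monotonic time source is queried in this function,
> + so the zero-initialized timespec above is what gets converted below and
> + callers effectively receive a constant value rather than a true monotonic
> + reading. */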
> +
> + assert(info == NULL || raise);
> +
> + if (info) {
> + info->implementation = "gettimeofday()";
> + info->resolution = 1e-6;
> + info->monotonic = 0;
> + info->adjustable = 1;
> + }
> +
> + if (_PyTime_FromTimespec(tp, &ts, raise) < 0)
> + return -1;
> + return 0;
> +}
> +
> +_PyTime_t
> +_PyTime_GetMonotonicClock(void)
> +{
> + _PyTime_t t;
> + if (pymonotonic(&t, NULL, 0) < 0) {
> + /* should not happen, _PyTime_Init() checked the monotonic clock at
> + startup */
> + assert(0);
> +
> + /* use a fixed value instead of a random value from the stack */
> + t = 0;
> + }
> + return t;
> +}
> +
> +int
> +_PyTime_GetMonotonicClockWithInfo(_PyTime_t *tp, _Py_clock_info_t *info)
> +{
> + return pymonotonic(tp, info, 1);
> +}
> +
> +int
> +_PyTime_Init(void)
> +{
> + _PyTime_t t;
> +
> + /* ensure that the system clock works */
> + if (_PyTime_GetSystemClockWithInfo(&t, NULL) < 0)
> + return -1;
> +
> + /* ensure that the operating system provides a monotonic clock */
> + if (_PyTime_GetMonotonicClockWithInfo(&t, NULL) < 0)
> + return -1;
> +
> + return 0;
> +}
> +
> +int
> +_PyTime_localtime(time_t t, struct tm *tm)
> +{
> +#ifdef MS_WINDOWS
> + int error;
> +
> + error = localtime_s(tm, &t);
> + if (error != 0) {
> + errno = error;
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + return 0;
> +#else /* !MS_WINDOWS */
> + struct tm *temp = NULL;
> + if ((temp = localtime(&t)) == NULL) {
> +#ifdef EINVAL
> + if (errno == 0)
> + errno = EINVAL;
> +#endif
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + *tm = *temp;
> + return 0;
> +#endif /* MS_WINDOWS */
> +}
> +
> +int
> +_PyTime_gmtime(time_t t, struct tm *tm)
> +{
> +#ifdef MS_WINDOWS
> + int error;
> +
> + error = gmtime_s(tm, &t);
> + if (error != 0) {
> + errno = error;
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + return 0;
> +#else /* !MS_WINDOWS */
> + struct tm *temp = NULL;
> + if ((temp = gmtime(&t)) == NULL) {
> +#ifdef EINVAL
> + if (errno == 0)
> + errno = EINVAL;
> +#endif
> + PyErr_SetFromErrno(PyExc_OSError);
> + return -1;
> + }
> + *tm = *temp;
> + return 0;
> +#endif /* MS_WINDOWS */
> +}
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
> new file mode 100644
> index 00000000..73c756a0
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
> @@ -0,0 +1,636 @@
> +#include "Python.h"
> +#ifdef MS_WINDOWS
> +# include <windows.h>
> +/* All sample MSDN wincrypt programs include the header below. It is at least
> + * required with MinGW. */
> +# include <wincrypt.h>
> +#else
> +# include <fcntl.h>
> +# ifdef HAVE_SYS_STAT_H
> +# include <sys/stat.h>
> +# endif
> +# ifdef HAVE_LINUX_RANDOM_H
> +# include <linux/random.h>
> +# endif
> +# if defined(HAVE_SYS_RANDOM_H) && (defined(HAVE_GETRANDOM) || defined(HAVE_GETENTROPY))
> +# include <sys/random.h>
> +# endif
> +# if !defined(HAVE_GETRANDOM) && defined(HAVE_GETRANDOM_SYSCALL)
> +# include <sys/syscall.h>
> +# endif
> +#endif
> +
> +#ifdef _Py_MEMORY_SANITIZER
> +# include <sanitizer/msan_interface.h>
> +#endif
> +
> +#ifdef Py_DEBUG
> +int _Py_HashSecret_Initialized = 0;
> +#else
> +static int _Py_HashSecret_Initialized = 0;
> +#endif
> +
> +#ifdef MS_WINDOWS
> +static HCRYPTPROV hCryptProv = 0;
> +
> +static int
> +win32_urandom_init(int raise)
> +{
> + /* Acquire context */
> + if (!CryptAcquireContext(&hCryptProv, NULL, NULL,
> + PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
> + goto error;
> +
> + return 0;
> +
> +error:
> + if (raise) {
> + PyErr_SetFromWindowsErr(0);
> + }
> + return -1;
> +}
> +
> +/* Fill buffer with size pseudo-random bytes generated by the Windows CryptoGen
> + API. Return 0 on success, or raise an exception and return -1 on error. */
> +static int
> +win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
> +{
> + Py_ssize_t chunk;
> +
> + if (hCryptProv == 0)
> + {
> + if (win32_urandom_init(raise) == -1) {
> + return -1;
> + }
> + }
> +
> + while (size > 0)
> + {
> + chunk = size > INT_MAX ? INT_MAX : size;
> + if (!CryptGenRandom(hCryptProv, (DWORD)chunk, buffer))
> + {
> + /* CryptGenRandom() failed */
> + if (raise) {
> + PyErr_SetFromWindowsErr(0);
> + }
> + return -1;
> + }
> + buffer += chunk;
> + size -= chunk;
> + }
> + return 0;
> +}
> +
> +#else /* !MS_WINDOWS */
> +
> +#if defined(HAVE_GETRANDOM) || defined(HAVE_GETRANDOM_SYSCALL)
> +#define PY_GETRANDOM 1
> +
> +/* Call getrandom() to get random bytes:
> +
> + - Return 1 on success
> + - Return 0 if getrandom() is not available (failed with ENOSYS or EPERM),
> + or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom not
> + initialized yet) and raise=0.
> + - Raise an exception (if raise is non-zero) and return -1 on error:
> + if getrandom() failed with EINTR, raise is non-zero and the Python signal
> + handler raised an exception, or if getrandom() failed with a different
> + error.
> +
> + getrandom() is retried if it failed with EINTR: interrupted by a signal. */
> +static int
> +py_getrandom(void *buffer, Py_ssize_t size, int blocking, int raise)
> +{
> + /* Is getrandom() supported by the running kernel? Set to 0 if getrandom()
> + failed with ENOSYS or EPERM. Need Linux kernel 3.17 or newer, or Solaris
> + 11.3 or newer */
> + static int getrandom_works = 1;
> + int flags;
> + char *dest;
> + long n;
> +
> + if (!getrandom_works) {
> + return 0;
> + }
> +
> + flags = blocking ? 0 : GRND_NONBLOCK;
> + dest = buffer;
> + while (0 < size) {
> +#ifdef sun
> + /* Issue #26735: On Solaris, getrandom() is limited to returning up
> + to 1024 bytes. Call it multiple times if more bytes are
> + requested. */
> + n = Py_MIN(size, 1024);
> +#else
> + n = Py_MIN(size, LONG_MAX);
> +#endif
> +
> + errno = 0;
> +#ifdef HAVE_GETRANDOM
> + if (raise) {
> + Py_BEGIN_ALLOW_THREADS
> + n = getrandom(dest, n, flags);
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + n = getrandom(dest, n, flags);
> + }
> +#else
> + /* On Linux, use the syscall() function because the GNU libc doesn't
> + expose the Linux getrandom() syscall yet. See:
> + https://sourceware.org/bugzilla/show_bug.cgi?id=17252 */
> + if (raise) {
> + Py_BEGIN_ALLOW_THREADS
> + n = syscall(SYS_getrandom, dest, n, flags);
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + n = syscall(SYS_getrandom, dest, n, flags);
> + }
> +# ifdef _Py_MEMORY_SANITIZER
> + if (n > 0) {
> + __msan_unpoison(dest, n);
> + }
> +# endif
> +#endif
> +
> + if (n < 0) {
> + /* ENOSYS: the syscall is not supported by the kernel.
> + EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
> + or something else. */
> + if (errno == ENOSYS || errno == EPERM) {
> + getrandom_works = 0;
> + return 0;
> + }
> +
> + /* getrandom(GRND_NONBLOCK) fails with EAGAIN if the system urandom
> + is not initialized yet. For _PyRandom_Init(), we ignore the
> + error and fall back on reading /dev/urandom which never blocks,
> + even if the system urandom is not initialized yet:
> + see the PEP 524. */
> + if (errno == EAGAIN && !raise && !blocking) {
> + return 0;
> + }
> +
> + if (errno == EINTR) {
> + if (raise) {
> + if (PyErr_CheckSignals()) {
> + return -1;
> + }
> + }
> +
> + /* retry getrandom() if it was interrupted by a signal */
> + continue;
> + }
> +
> + if (raise) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + }
> + return -1;
> + }
> +
> + dest += n;
> + size -= n;
> + }
> + return 1;
> +}
> +
> +#elif defined(HAVE_GETENTROPY)
> +#define PY_GETENTROPY 1
> +
> +/* Fill buffer with size pseudo-random bytes generated by getentropy():
> +
> + - Return 1 on success
> + - Return 0 if getentropy() syscall is not available (failed with ENOSYS or
> + EPERM).
> + - Raise an exception (if raise is non-zero) and return -1 on error:
> + if getentropy() failed with EINTR, raise is non-zero and the Python signal
> + handler raised an exception, or if getentropy() failed with a different
> + error.
> +
> + getentropy() is retried if it failed with EINTR: interrupted by a signal. */
> +static int
> +py_getentropy(char *buffer, Py_ssize_t size, int raise)
> +{
> + /* Is getentropy() supported by the running kernel? Set to 0 if
> + getentropy() failed with ENOSYS or EPERM. */
> + static int getentropy_works = 1;
> +
> + if (!getentropy_works) {
> + return 0;
> + }
> +
> + while (size > 0) {
> + /* getentropy() is limited to returning up to 256 bytes. Call it
> + multiple times if more bytes are requested. */
> + Py_ssize_t len = Py_MIN(size, 256);
> + int res;
> +
> + if (raise) {
> + Py_BEGIN_ALLOW_THREADS
> + res = getentropy(buffer, len);
> + Py_END_ALLOW_THREADS
> + }
> + else {
> + res = getentropy(buffer, len);
> + }
> +
> + if (res < 0) {
> + /* ENOSYS: the syscall is not supported by the running kernel.
> + EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
> + or something else. */
> + if (errno == ENOSYS || errno == EPERM) {
> + getentropy_works = 0;
> + return 0;
> + }
> +
> + if (errno == EINTR) {
> + if (raise) {
> + if (PyErr_CheckSignals()) {
> + return -1;
> + }
> + }
> +
> + /* retry getentropy() if it was interrupted by a signal */
> + continue;
> + }
> +
> + if (raise) {
> + PyErr_SetFromErrno(PyExc_OSError);
> + }
> + return -1;
> + }
> +
> + buffer += len;
> + size -= len;
> + }
> + return 1;
> +}
> +#endif /* defined(HAVE_GETENTROPY) && !defined(sun) */
> +
> +
> +#if !defined(MS_WINDOWS) && !defined(__VMS)
> +
> +static struct {
> + int fd;
> +#ifdef HAVE_STRUCT_STAT_ST_DEV
> + dev_t st_dev;
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_INO
> + ino_t st_ino;
> +#endif
> +} urandom_cache = { -1 };
> +
> +/* Read random bytes from the /dev/urandom device:
> +
> + - Return 0 on success
> + - Raise an exception (if raise is non-zero) and return -1 on error
> +
> + Possible causes of errors:
> +
> + - open() failed with ENOENT, ENXIO, ENODEV, EACCES: the /dev/urandom device
> + was not found. For example, it was removed manually or not exposed in a
> + chroot or container.
> + - open() failed with a different error
> + - fstat() failed
> + - read() failed or returned 0
> +
> + read() is retried if it failed with EINTR: interrupted by a signal.
> +
> + The file descriptor of the device is kept open between calls to avoid using
> + many file descriptors when run in parallel from multiple threads:
> + see the issue #18756.
> +
> + st_dev and st_ino fields of the file descriptor (from fstat()) are cached to
> + check if the file descriptor was replaced by a different file (which is
> + likely a bug in the application): see the issue #21207.
> +
> + If the file descriptor was closed or replaced, open a new file descriptor
> + but don't close the old file descriptor: it probably points to something
> + important for some third-party code. */
> +static int
> +dev_urandom(char *buffer, Py_ssize_t size, int raise)
> +{
> + int fd;
> + Py_ssize_t n;
> +
> + if (raise) {
> + struct _Py_stat_struct st;
> + int fstat_result;
> +
> + if (urandom_cache.fd >= 0) {
> + Py_BEGIN_ALLOW_THREADS
> + fstat_result = _Py_fstat_noraise(urandom_cache.fd, &st);
> + Py_END_ALLOW_THREADS
> +
> + /* Does the fd point to the same thing as before? (issue #21207) */
> + if (fstat_result
> +#ifdef HAVE_STRUCT_STAT_ST_DEV
> + || st.st_dev != urandom_cache.st_dev
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_INO
> + || st.st_ino != urandom_cache.st_ino
> +#endif
> + )
> + {
> + /* Something changed: forget the cached fd (but don't close it,
> + since it probably points to something important for some
> + third-party code). */
> + urandom_cache.fd = -1;
> + }
> + }
> + if (urandom_cache.fd >= 0)
> + fd = urandom_cache.fd;
> + else {
> + fd = _Py_open("/dev/urandom", O_RDONLY);
> + if (fd < 0) {
> + if (errno == ENOENT || errno == ENXIO ||
> + errno == ENODEV || errno == EACCES) {
> + PyErr_SetString(PyExc_NotImplementedError,
> + "/dev/urandom (or equivalent) not found");
> + }
> + /* otherwise, keep the OSError exception raised by _Py_open() */
> + return -1;
> + }
> + if (urandom_cache.fd >= 0) {
> + /* urandom_fd was initialized by another thread while we were
> + not holding the GIL, keep it. */
> + close(fd);
> + fd = urandom_cache.fd;
> + }
> + else {
> + if (_Py_fstat(fd, &st)) {
> + close(fd);
> + return -1;
> + }
> + else {
> + urandom_cache.fd = fd;
> +#ifdef HAVE_STRUCT_STAT_ST_DEV
> + urandom_cache.st_dev = st.st_dev;
> +#endif
> +#ifdef HAVE_STRUCT_STAT_ST_INO
> + urandom_cache.st_ino = st.st_ino;
> +#endif
> + }
> + }
> + }
> +
> + do {
> + n = _Py_read(fd, buffer, (size_t)size);
> + if (n == -1)
> + return -1;
> + if (n == 0) {
> + PyErr_Format(PyExc_RuntimeError,
> + "Failed to read %zi bytes from /dev/urandom",
> + size);
> + return -1;
> + }
> +
> + buffer += n;
> + size -= n;
> + } while (0 < size);
> + }
> + else {
> + fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
> + if (fd < 0) {
> + return -1;
> + }
> +
> + while (0 < size)
> + {
> + do {
> + n = read(fd, buffer, (size_t)size);
> + } while (n < 0 && errno == EINTR);
> +
> + if (n <= 0) {
> + /* stop on error or if read(size) returned 0 */
> + close(fd);
> + return -1;
> + }
> +
> + buffer += n;
> + size -= n;
> + }
> + close(fd);
> + }
> + return 0;
> +}
> +
> +static void
> +dev_urandom_close(void)
> +{
> + if (urandom_cache.fd >= 0) {
> + close(urandom_cache.fd);
> + urandom_cache.fd = -1;
> + }
> +}
> +#endif /* !MS_WINDOWS */
> +
> +
> +/* Fill buffer with pseudo-random bytes generated by a linear congruential
> + generator (LCG):
> +
> + x(n+1) = (x(n) * 214013 + 2531011) % 2^32
> +
> + Use bits 23..16 of x(n) to generate a byte. */
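> +/* Example: with x0 = 1, the first iteration gives x = 1*214013 + 2531011
> + = 2745024, so the first output byte is (2745024 >> 16) & 0xff = 0x29. */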
> +static void
> +lcg_urandom(unsigned int x0, unsigned char *buffer, size_t size)
> +{
> + size_t index;
> + unsigned int x;
> +
> + x = x0;
> + for (index=0; index < size; index++) {
> + x *= 214013;
> + x += 2531011;
> + /* modulo 2 ^ (8 * sizeof(int)) */
> + buffer[index] = (x >> 16) & 0xff;
> + }
> +}
> +
> +/* Read random bytes:
> +
> + - Return 0 on success
> + - Raise an exception (if raise is non-zero) and return -1 on error
> +
> + Used sources of entropy ordered by preference, preferred source first:
> +
> + - CryptGenRandom() on Windows
> + - getrandom() function (ex: Linux and Solaris): call py_getrandom()
> + - getentropy() function (ex: OpenBSD): call py_getentropy()
> + - /dev/urandom device
> +
> + Read from the /dev/urandom device if getrandom() or getentropy() function
> + is not available or does not work.
> +
> + Prefer getrandom() over getentropy() because getrandom() supports blocking
> + and non-blocking mode: see the PEP 524. Python requires non-blocking RNG at
> + startup to initialize its hash secret, but os.urandom() must block until the
> + system urandom is initialized (at least on Linux 3.17 and newer).
> +
> + Prefer getrandom() and getentropy() over reading directly /dev/urandom
> + because these functions don't need file descriptors and so avoid ENFILE or
> + EMFILE errors (too many open files): see the issue #18756.
> +
> + Only the getrandom() function supports non-blocking mode.
> +
> + Only use RNGs running in the kernel. They are more secure because it is
> + harder to get the internal state of an RNG running in kernel land than one
> + running in user land. The kernel also has direct access to the hardware and
> + to hardware RNGs, which are used as entropy sources.
> +
> + Note: the OpenSSL RAND_pseudo_bytes() function does not automatically reseed
> + its RNG on fork(), so two child processes (with the same pid) generate the
> + same random numbers: see issue #18747. Kernel RNGs don't have this issue;
> + they have access to good-quality entropy sources.
> +
> + If raise is zero:
> +
> + - Don't raise an exception on error
> + - Don't call the Python signal handler (don't call PyErr_CheckSignals()) if
> + a function fails with EINTR: retry directly the interrupted function
> + - Don't release the GIL to call functions.
> +*/
> +static int
> +pyurandom(void *buffer, Py_ssize_t size, int blocking, int raise)
> +{
> +#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
> + int res;
> +#endif
> +
> + if (size < 0) {
> + if (raise) {
> + PyErr_Format(PyExc_ValueError,
> + "negative argument not allowed");
> + }
> + return -1;
> + }
> +
> + if (size == 0) {
> + return 0;
> + }
> +
> +#ifdef MS_WINDOWS
> + return win32_urandom((unsigned char *)buffer, size, raise);
> +#else
> +
> +#if defined(PY_GETRANDOM) || defined(PY_GETENTROPY)
> +#ifdef PY_GETRANDOM
> + res = py_getrandom(buffer, size, blocking, raise);
> +#else
> + res = py_getentropy(buffer, size, raise);
> +#endif
> + if (res < 0) {
> + return -1;
> + }
> + if (res == 1) {
> + return 0;
> + }
> + /* getrandom() or getentropy() function is not available: failed with
> + ENOSYS or EPERM. Fall back on reading from /dev/urandom. */
> +#endif
> +
> + return dev_urandom(buffer, size, raise);
> +#endif
> +}
> +
> +/* Fill buffer with size pseudo-random bytes from the operating system random
> + number generator (RNG). It is suitable for most cryptographic purposes
> + except long-lived private keys for asymmetric encryption.
> +
> + On Linux 3.17 and newer, the getrandom() syscall is used in blocking mode:
> + block until the system urandom entropy pool is initialized (128 bits are
> + collected by the kernel).
> +
> + Return 0 on success. Raise an exception and return -1 on error. */
> +int
> +_PyOS_URandom(void *buffer, Py_ssize_t size)
> +{
> + return pyurandom(buffer, size, 1, 1);
> +}
> +
> +/* Fill buffer with size pseudo-random bytes from the operating system random
> + number generator (RNG). It is not suitable for cryptographic purposes.
> +
> + On Linux 3.17 and newer (when getrandom() syscall is used), if the system
> + urandom is not initialized yet, the function returns "weak" entropy read
> + from /dev/urandom.
> +
> + Return 0 on success. Raise an exception and return -1 on error. */
> +int
> +_PyOS_URandomNonblock(void *buffer, Py_ssize_t size)
> +{
> + return pyurandom(buffer, size, 0, 1);
> +}
> +
> +void
> +_PyRandom_Init(void)
> +{
> +
> + char *env;
> + unsigned char *secret = (unsigned char *)&_Py_HashSecret.uc;
> + Py_ssize_t secret_size = sizeof(_Py_HashSecret_t);
> + Py_BUILD_ASSERT(sizeof(_Py_HashSecret_t) == sizeof(_Py_HashSecret.uc));
> +
> + if (_Py_HashSecret_Initialized)
> + return;
> + _Py_HashSecret_Initialized = 1;
> +
> + /*
> + Hash randomization is enabled. Generate a per-process secret,
> + using PYTHONHASHSEED if provided.
> + */
> +
> + env = Py_GETENV("PYTHONHASHSEED");
> + if (env && *env != '\0' && strcmp(env, "random") != 0) {
> + char *endptr = env;
> + unsigned long seed;
> + seed = strtoul(env, &endptr, 10);
> + if (*endptr != '\0'
> + || seed > 4294967295UL
> + || (errno == ERANGE && seed == ULONG_MAX))
> + {
> + Py_FatalError("PYTHONHASHSEED must be \"random\" or an integer "
> + "in range [0; 4294967295]");
> + }
> + if (seed == 0) {
> + /* disable the randomized hash */
> + memset(secret, 0, secret_size);
> + Py_HashRandomizationFlag = 0;
> + }
> + else {
> + lcg_urandom(seed, secret, secret_size);
> + Py_HashRandomizationFlag = 1;
> + }
> + }
> + else {
> + int res;
> +
> + /* _PyRandom_Init() is called very early in the Python initialization
> + and so exceptions cannot be used (use raise=0).
> +
> + _PyRandom_Init() must not block Python initialization: call
> + pyurandom() in non-blocking mode (blocking=0): see the PEP 524. */
> + res = pyurandom(secret, secret_size, 0, 0);
> + if (res < 0) {
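> + /* Note (UEFI port): the fatal error below is left commented out so a
> + failure to obtain OS entropy does not abort start-up; the hash secret
> + then simply stays zero-initialized. */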
> + //Py_FatalError("failed to get random numbers to initialize Python");
> + }
> + Py_HashRandomizationFlag = 1;
> + }
> +
> +}
> +
> +void
> +_PyRandom_Fini(void)
> +{
> +#ifdef MS_WINDOWS
> + if (hCryptProv) {
> + CryptReleaseContext(hCryptProv, 0);
> + hCryptProv = 0;
> + }
> +#else
> + dev_urandom_close();
> +#endif
> +}
> +
> +#endif
> \ No newline at end of file
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/Python368.inf b/AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> new file mode 100644
> index 00000000..d2e6e734
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> @@ -0,0 +1,275 @@
> +## @file
> +# Python368.inf
> +#
> +# Copyright (c) 2011-2021, Intel Corporation. All rights reserved.<BR>
> +# This program and the accompanying materials
> +# are licensed and made available under the terms and conditions of the BSD License
> +# which accompanies this distribution. The full text of the license may be found at
> +# http://opensource.org/licenses/bsd-license.
> +#
> +# THE PROGRAM IS DISTRIBUTED UNDER THE BSD LICENSE ON AN "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
> +#
> +##
> +
> +[Defines]
> + INF_VERSION = 0x00010016
> + BASE_NAME = Python368
> + FILE_GUID = 9DA30E98-094C-4FF0-94CB-81C10E69F750
> + MODULE_TYPE = UEFI_APPLICATION
> + VERSION_STRING = 0.1
> + ENTRY_POINT = ShellCEntryLib
> +
> + DEFINE PYTHON_VERSION = 3.6.8
> +
> +#
> +# VALID_ARCHITECTURES = IA32 X64
> +#
> +
> +[Packages]
> + StdLib/StdLib.dec
> + MdePkg/MdePkg.dec
> + MdeModulePkg/MdeModulePkg.dec
> +
> +[LibraryClasses]
> + UefiLib
> + DebugLib
> + LibC
> + LibString
> + LibStdio
> + LibMath
> + LibWchar
> + LibGen
> + LibNetUtil
> + DevMedia
> + #BsdSocketLib
> + #EfiSocketLib
> +
> +[FixedPcd]
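> + # PcdDebugPropertyMask 0x0F enables the DebugLib ASSERT, DEBUG print, debug
> + # code, and clear-memory behaviors; PcdDebugPrintErrorLevel 0x80000040
> + # enables the DEBUG_ERROR and DEBUG_INFO message levels.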
> + gEfiMdePkgTokenSpaceGuid.PcdDebugPropertyMask|0x0F
> + gEfiMdePkgTokenSpaceGuid.PcdDebugPrintErrorLevel|0x80000040
> +
> +[Sources]
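> +# Sources under PyMod-$(PYTHON_VERSION)/ are the UEFI-modified copies and are
> +# built in place of the corresponding files from the unmodified CPython 3.6.8
> +# tree; everything else is used as released by the Python project.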
> +#Parser
> + Parser/acceler.c
> + Parser/bitset.c
> + Parser/firstsets.c
> + Parser/grammar.c
> + Parser/grammar1.c
> + Parser/listnode.c
> + Parser/metagrammar.c
> + Parser/myreadline.c
> + Parser/node.c
> + Parser/parser.c
> + Parser/parsetok.c
> + Parser/tokenizer.c
> +
> +#Python
> + PyMod-$(PYTHON_VERSION)/Python/bltinmodule.c
> + PyMod-$(PYTHON_VERSION)/Python/getcopyright.c
> + PyMod-$(PYTHON_VERSION)/Python/marshal.c
> + PyMod-$(PYTHON_VERSION)/Python/random.c
> + PyMod-$(PYTHON_VERSION)/Python/fileutils.c
> + PyMod-$(PYTHON_VERSION)/Python/pytime.c
> + PyMod-$(PYTHON_VERSION)/Python/pylifecycle.c
> + PyMod-$(PYTHON_VERSION)/Python/pyhash.c
> + PyMod-$(PYTHON_VERSION)/Python/pystate.c
> +
> + Python/_warnings.c
> + Python/asdl.c
> + Python/ast.c
> + Python/ceval.c
> + Python/codecs.c
> + Python/compile.c
> + Python/dtoa.c
> + Python/dynload_stub.c
> + Python/errors.c
> + Python/formatter_unicode.c
> + Python/frozen.c
> + Python/future.c
> + Python/getargs.c
> + Python/getcompiler.c
> + Python/getopt.c
> + Python/getplatform.c
> + Python/getversion.c
> + Python/graminit.c
> + Python/import.c
> + Python/importdl.c
> + Python/modsupport.c
> + Python/mysnprintf.c
> + Python/mystrtoul.c
> + Python/peephole.c
> + Python/pyarena.c
> + Python/pyctype.c
> + Python/pyfpe.c
> + Python/pymath.c
> + Python/pystrcmp.c
> + Python/pystrtod.c
> + Python/Python-ast.c
> + Python/pythonrun.c
> + Python/structmember.c
> + Python/symtable.c
> + Python/sysmodule.c
> + Python/traceback.c
> + Python/pystrhex.c
> +
> +#Objects
> + PyMod-$(PYTHON_VERSION)/Objects/dictobject.c
> + PyMod-$(PYTHON_VERSION)/Objects/memoryobject.c
> + PyMod-$(PYTHON_VERSION)/Objects/object.c
> + PyMod-$(PYTHON_VERSION)/Objects/unicodeobject.c
> +
> + Objects/accu.c
> + Objects/abstract.c
> + Objects/boolobject.c
> + Objects/bytesobject.c
> + Objects/bytearrayobject.c
> + Objects/bytes_methods.c
> + Objects/capsule.c
> + Objects/cellobject.c
> + Objects/classobject.c
> + Objects/codeobject.c
> + Objects/complexobject.c
> + Objects/descrobject.c
> + Objects/enumobject.c
> + Objects/exceptions.c
> + Objects/fileobject.c
> + Objects/floatobject.c
> + Objects/frameobject.c
> + Objects/funcobject.c
> + Objects/genobject.c
> + Objects/longobject.c
> + Objects/iterobject.c
> + Objects/listobject.c
> + Objects/methodobject.c
> + Objects/moduleobject.c
> + Objects/obmalloc.c
> + Objects/odictobject.c
> + Objects/rangeobject.c
> + Objects/setobject.c
> + Objects/sliceobject.c
> + Objects/structseq.c
> + Objects/tupleobject.c
> + Objects/typeobject.c
> + Objects/unicodectype.c
> + Objects/weakrefobject.c
> + Objects/namespaceobject.c
> +
> + # Mandatory Modules -- These must always be built in.
> + PyMod-$(PYTHON_VERSION)/Modules/config.c
> + PyMod-$(PYTHON_VERSION)/Modules/edk2module.c
> + PyMod-$(PYTHON_VERSION)/Modules/errnomodule.c
> + PyMod-$(PYTHON_VERSION)/Modules/getpath.c
> + PyMod-$(PYTHON_VERSION)/Modules/main.c
> + PyMod-$(PYTHON_VERSION)/Modules/selectmodule.c
> + PyMod-$(PYTHON_VERSION)/Modules/faulthandler.c
> + PyMod-$(PYTHON_VERSION)/Modules/timemodule.c
> +
> + Modules/_functoolsmodule.c
> + Modules/gcmodule.c
> + Modules/getbuildinfo.c
> + Programs/python.c
> + Modules/hashtable.c
> + Modules/_stat.c
> + Modules/_opcode.c
> + Modules/_sre.c
> + Modules/_tracemalloc.c
> + Modules/_bisectmodule.c #
> + Modules/_codecsmodule.c #
> + Modules/_collectionsmodule.c #
> + Modules/_csv.c #
> + Modules/_heapqmodule.c #
> + Modules/_json.c #
> + Modules/_localemodule.c #
> + Modules/_math.c #
> + Modules/_randommodule.c #
> + Modules/_struct.c #
> + Modules/_weakref.c #
> + Modules/arraymodule.c #
> + Modules/binascii.c #
> + Modules/cmathmodule.c #
> + Modules/_datetimemodule.c #
> + Modules/itertoolsmodule.c #
> + Modules/mathmodule.c #
> + Modules/md5module.c #
> + Modules/_operator.c #
> + Modules/parsermodule.c #
> + Modules/sha256module.c #
> + Modules/sha512module.c #
> + Modules/sha1module.c #
> + Modules/_blake2/blake2module.c #
> + Modules/_blake2/blake2b_impl.c #
> + Modules/_blake2/blake2s_impl.c #
> + Modules/_sha3/sha3module.c #
> + Modules/signalmodule.c #
> + #Modules/socketmodule.c #
> + Modules/symtablemodule.c #
> + Modules/unicodedata.c #
> + Modules/xxsubtype.c #
> + Modules/zipimport.c #
> + Modules/zlibmodule.c #
> + Modules/_io/_iomodule.c #
> + Modules/_io/bufferedio.c #
> + Modules/_io/bytesio.c #
> + Modules/_io/fileio.c #
> + Modules/_io/iobase.c #
> + Modules/_io/stringio.c #
> + Modules/_io/textio.c #
> +
> +#Modules/cjkcodecs
> + Modules/cjkcodecs/multibytecodec.c #
> + Modules/cjkcodecs/_codecs_cn.c #
> + Modules/cjkcodecs/_codecs_hk.c #
> + Modules/cjkcodecs/_codecs_iso2022.c #
> + Modules/cjkcodecs/_codecs_jp.c #
> + Modules/cjkcodecs/_codecs_kr.c #
> + Modules/cjkcodecs/_codecs_tw.c #
> +
> +#Modules/expat
> + Modules/pyexpat.c #
> + Modules/expat/xmlrole.c #
> + Modules/expat/xmltok.c #
> + Modules/expat/xmlparse.c #
> +
> +#Modules/zlib
> + Modules/zlib/adler32.c #
> + Modules/zlib/compress.c #
> + Modules/zlib/crc32.c #
> + Modules/zlib/deflate.c #
> + Modules/zlib/gzclose.c #
> + Modules/zlib/gzlib.c #
> + Modules/zlib/gzread.c #
> + Modules/zlib/gzwrite.c #
> +
> + Modules/zlib/infback.c #
> + Modules/zlib/inffast.c #
> + Modules/zlib/inflate.c #
> + Modules/zlib/inftrees.c #
> + Modules/zlib/trees.c #
> + Modules/zlib/uncompr.c #
> + Modules/zlib/zutil.c #
> +
> +#Modules/ctypes
> + PyMod-$(PYTHON_VERSION)/Modules/_ctypes/_ctypes.c #
> + Modules/_ctypes/stgdict.c #
> + Modules/_ctypes/libffi_msvc/prep_cif.c #
> + PyMod-$(PYTHON_VERSION)/Modules/_ctypes/malloc_closure.c #
> + PyMod-$(PYTHON_VERSION)/Modules/_ctypes/libffi_msvc/ffi.c #
> + Modules/_ctypes/cfield.c #
> + PyMod-$(PYTHON_VERSION)/Modules/_ctypes/callproc.c #
> + Modules/_ctypes/callbacks.c #
> +
> +[Sources.IA32]
> + Modules/_ctypes/libffi_msvc/win32.c #
> +
> +[Sources.X64]
> + Modules/_ctypes/libffi_msvc/win64.asm #
> +
> +[BuildOptions]
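> + # /GL- and /Oi- disable whole-program optimization and compiler intrinsics,
> + # the /wdNNNN switches silence specific MSVC warnings, /WX- keeps warnings
> + # from being treated as errors, and the preprocessor defines select the expat
> + # options and the UEFI-specific code paths (UEFI, UEFI_C_SOURCE).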
> + MSFT:*_*_*_CC_FLAGS = /GL- /Oi- /wd4018 /wd4054 /wd4055 /wd4101 /wd4131 /wd4152 /wd4204 /wd4210 /wd4244 /wd4267 /wd4305 /wd4310 /wd4389 /wd4701 /wd4702 /wd4706 /wd4456 /wd4312 /wd4457 /wd4459 /wd4474 /wd4476 /I$(WORKSPACE)\AppPkg\Applications\Python\Python-3.6.8\Include /DHAVE_MEMMOVE /DUSE_PYEXPAT_CAPI /DXML_STATIC -D UEFI /WX- /DXML_POOR_ENTROPY /DUEFI_C_SOURCE
> +
> +[BuildOptions.IA32]
> + MSFT:*_*_*_CC_FLAGS = /DUEFI_MSVC_32
> +
> +[BuildOptions.X64]
> + MSFT:*_*_*_CC_FLAGS = /DUEFI_MSVC_64
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat b/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
> new file mode 100644
> index 00000000..6bbdbd9e
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
> @@ -0,0 +1,48 @@
> +@echo off
> +
> +set TOOL_CHAIN_TAG=%1
> +set TARGET=%2
> +set OUT_FOLDER=%3
> +if "%TOOL_CHAIN_TAG%"=="" goto usage
> +if "%TARGET%"=="" goto usage
> +if "%OUT_FOLDER%"=="" goto usage
> +goto continue
> +
> +:usage
> +echo.
> +echo.
> +echo.
> +echo Creates Python EFI Package.
> +echo.
> +echo "Usage: %0 <ToolChain> <Target> <OutFolder>"
> +echo.
> +echo ToolChain = one of VS2013x86, VS2015x86, VS2017, VS2019
> +echo Target = one of RELEASE, DEBUG
> +echo OutFolder = Target folder where the package will be created
> +echo.
> +echo.
> +echo.
> +
> +goto :eof
> +
> +:continue
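> +REM Copy the built Python368.efi and the Python standard library into the
> +REM <OutFolder>\EFI\... layout expected on the target UEFI volume.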
> +cd ..\..\..\..\
> +IF NOT EXIST Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\Python368.efi goto error
> +mkdir %OUT_FOLDER%\EFI\Tools
> +xcopy Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\Python368.efi %OUT_FOLDER%\EFI\Tools\ /y
> +mkdir %OUT_FOLDER%\EFI\StdLib\lib\python36.8
> +mkdir %OUT_FOLDER%\EFI\StdLib\etc
> +xcopy AppPkg\Applications\Python\Python-3.6.8\Lib\* %OUT_FOLDER%\EFI\StdLib\lib\python36.8\ /Y /S /I
> +xcopy StdLib\Efi\StdLib\etc\* %OUT_FOLDER%\EFI\StdLib\etc\ /Y /S /I
> +set ec=0
> +goto all_done
> +
> +:error
> +echo Failed to create the Python 3.6.8 package, Python368.efi is not available in the build location Build\AppPkg\%TARGET%_%TOOL_CHAIN_TAG%\X64\
> +set ec=1
> +
> +:all_done
> +exit /b %ec%
> +
> +
> +
> +
> diff --git a/AppPkg/Applications/Python/Python-3.6.8/srcprep.py b/AppPkg/Applications/Python/Python-3.6.8/srcprep.py
> new file mode 100644
> index 00000000..622cea01
> --- /dev/null
> +++ b/AppPkg/Applications/Python/Python-3.6.8/srcprep.py
> @@ -0,0 +1,30 @@
> +"""Python module to copy specific file to respective destination folder"""
> +import os
> +import shutil
> +import stat
> +
> +def copyDirTree(root_src_dir,root_dst_dir):
> + """
> + Copy directory tree. Overwrites also read only files.
> + :param root_src_dir: source directory
> + :param root_dst_dir: destination directory
> + """
> + for src_dir, dirs, files in os.walk(root_src_dir):
> + dst_dir = src_dir.replace(root_src_dir, root_dst_dir, 1)
> + if not os.path.exists(dst_dir):
> + os.makedirs(dst_dir)
> + for file_ in files:
> + src_file = os.path.join(src_dir, file_)
> + dst_file = os.path.join(dst_dir, file_)
> + if '.h' in src_file or '.py' in src_file:
> + if os.path.exists(dst_file):
> + try:
> + os.remove(dst_file)
> + except PermissionError as exc:
> + os.chmod(dst_file, stat.S_IWUSR)
> + os.remove(dst_file)
> +
> + shutil.copy(src_file, dst_dir)
> +
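> +# Overlay the UEFI-modified header (.h) and Python (.py) files from PyMod-3.6.8
> +# onto the CPython 3.6.8 sources in the current working directory; the modified
> +# C files are instead referenced directly through PyMod paths in Python368.inf.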
> +src = r'PyMod-3.6.8'
> +dest = os.getcwd()
> +copyDirTree(src,dest)
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
2021-09-02 20:46 ` Michael D Kinney
@ 2021-09-02 20:48 ` Rebecca Cran
0 siblings, 0 replies; 8+ messages in thread
From: Rebecca Cran @ 2021-09-02 20:48 UTC (permalink / raw)
To: Kinney, Michael D, devel@edk2.groups.io; +Cc: Jayaprakash, N
Thanks, that makes sense. I haven't reviewed the entire patch, so I've
given an Acked-by instead.
--
Rebecca Cran
On 9/2/21 2:46 PM, Kinney, Michael D wrote:
> Hi Rebecca,
>
> Responses below.
>
> Some of the items you are observing are due to following the exact
> same pattern as the Python 2.x ports. There are many things that can
> get cleaned up in the Python 3.x ports. I would prefer to see this
> initial functional version go in and add new BZs for additional cleanups.
>
> Thanks,
>
> Mike
>
>> -----Original Message-----
>> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Rebecca Cran
>> Sent: Thursday, September 2, 2021 11:41 AM
>> To: devel@edk2.groups.io; Kinney, Michael D <michael.d.kinney@intel.com>
>> Cc: Jayaprakash, N <n.jayaprakash@intel.com>
>> Subject: Re: [edk2-devel] [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
>>
>> On 9/2/21 11:12 AM, Michael D Kinney wrote:
>>
>>> AppPkg/AppPkg.dsc | 3 +
>>> .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
>> This looks like it's formatted using Markdown, so should it be
>> Py368ReadMe.md?
> It looks like there are elements that do not follow MarkDown and the formatting
> looks bad when using a MarkDown viewer. I would recommend leaving it as .txt for
> now. We can enter a new issue to convert to MD.
>
>>> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
>> The xcopy commands should probably have error checking after them.
> There are several limitations to the BAT file. It is just being reused from the
> Python 2.x ports. I think it would be better to port this to a Python script and
> add all error checking in that version. We can enter a new issue for this Python
> port.
>
>>
>> --
>>
>> Rebecca Cran
>>
>>
>>
>>
>>
^ permalink raw reply [flat|nested] 8+ messages in thread
* Re: [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8
2021-09-02 17:12 [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
2021-09-02 18:22 ` [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
@ 2021-09-03 1:35 ` Michael D Kinney
2 siblings, 0 replies; 8+ messages in thread
From: Michael D Kinney @ 2021-09-03 1:35 UTC (permalink / raw)
To: devel@edk2.groups.io, Kinney, Michael D; +Cc: Rebecca Cran, Jayaprakash, N
Series Reviewed-by: Michael D Kinney <michael.d.kinney@intel.com>
> -----Original Message-----
> From: devel@edk2.groups.io <devel@edk2.groups.io> On Behalf Of Michael D Kinney
> Sent: Thursday, September 2, 2021 10:13 AM
> To: devel@edk2.groups.io
> Cc: Rebecca Cran <rebecca@nuviainc.com>; Jayaprakash, N <n.jayaprakash@intel.com>
> Subject: [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8
>
> REF: https://bugzilla.tianocore.org/show_bug.cgi?id=3588
>
> This patch series contains the modifications required to
> support Python 3.6.8 in the UEFI Shell. Currently supports
> building Py3.6.8 for UEFI with IA32 and X64 architectures using
> VS2017, VS2019 with the latest edk2/master.
>
> There is an additional patch that must be applied first that
> contains the source code from the Python project that is too
> large to send as an email and does not need to be reviewed since
> it is unmodified content from the Python project
> https://github.com/python/cpython/tree/v3.6.8.
>
> https://github.com/jpshivakavi/edk2-libc/tree/py36_base_code_from_python_project
> https://github.com/jpshivakavi/edk2-libc/commit/d9f7b2e5748c382ad988a98bd3e5e4bb2d50c5c0
>
> Cc: Rebecca Cran <rebecca@nuviainc.com>
> Cc: Michael D Kinney <michael.d.kinney@intel.com>
> Signed-off-by: Jayaprakash N <n.jayaprakash@intel.com>
>
> Jayaprakash Nevara (1):
> AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes
>
> AppPkg/AppPkg.dsc | 3 +
> .../Python/Python-3.6.8/Py368ReadMe.txt | 220 +
> .../PyMod-3.6.8/Include/fileutils.h | 159 +
> .../Python-3.6.8/PyMod-3.6.8/Include/osdefs.h | 51 +
> .../PyMod-3.6.8/Include/pyconfig.h | 1322 ++
> .../PyMod-3.6.8/Include/pydtrace.h | 74 +
> .../Python-3.6.8/PyMod-3.6.8/Include/pyport.h | 788 +
> .../PyMod-3.6.8/Lib/ctypes/__init__.py | 549 +
> .../PyMod-3.6.8/Lib/genericpath.py | 157 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/glob.py | 110 +
> .../PyMod-3.6.8/Lib/http/client.py | 1481 ++
> .../Lib/importlib/_bootstrap_external.py | 1443 ++
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py | 99 +
> .../PyMod-3.6.8/Lib/logging/__init__.py | 2021 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py | 568 +
> .../Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py | 792 +
> .../Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py | 2686 +++
> .../Python-3.6.8/PyMod-3.6.8/Lib/shutil.py | 1160 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/site.py | 529 +
> .../PyMod-3.6.8/Lib/subprocess.py | 1620 ++
> .../Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py | 2060 ++
> .../PyMod-3.6.8/Modules/_blake2/impl/blake2.h | 161 +
> .../PyMod-3.6.8/Modules/_ctypes/_ctypes.c | 5623 ++++++
> .../PyMod-3.6.8/Modules/_ctypes/callproc.c | 1871 ++
> .../Modules/_ctypes/ctypes_dlfcn.h | 29 +
> .../Modules/_ctypes/libffi_msvc/ffi.c | 572 +
> .../Modules/_ctypes/libffi_msvc/ffi.h | 331 +
> .../Modules/_ctypes/libffi_msvc/ffi_common.h | 85 +
> .../Modules/_ctypes/malloc_closure.c | 128 +
> .../Python-3.6.8/PyMod-3.6.8/Modules/config.c | 159 +
> .../PyMod-3.6.8/Modules/edk2module.c | 4348 +++++
> .../PyMod-3.6.8/Modules/errnomodule.c | 890 +
> .../PyMod-3.6.8/Modules/faulthandler.c | 1414 ++
> .../PyMod-3.6.8/Modules/getpath.c | 1283 ++
> .../Python-3.6.8/PyMod-3.6.8/Modules/main.c | 878 +
> .../PyMod-3.6.8/Modules/selectmodule.c | 2638 +++
> .../PyMod-3.6.8/Modules/socketmodule.c | 7810 ++++++++
> .../PyMod-3.6.8/Modules/socketmodule.h | 282 +
> .../PyMod-3.6.8/Modules/sre_lib.h | 1372 ++
> .../PyMod-3.6.8/Modules/timemodule.c | 1526 ++
> .../PyMod-3.6.8/Modules/zlib/gzguts.h | 218 +
> .../PyMod-3.6.8/Objects/dictobject.c | 4472 +++++
> .../PyMod-3.6.8/Objects/memoryobject.c | 3114 +++
> .../Python-3.6.8/PyMod-3.6.8/Objects/object.c | 2082 ++
> .../Objects/stringlib/transmogrify.h | 701 +
> .../PyMod-3.6.8/Objects/unicodeobject.c | 15773 ++++++++++++++++
> .../PyMod-3.6.8/Python/bltinmodule.c | 2794 +++
> .../PyMod-3.6.8/Python/fileutils.c | 1767 ++
> .../PyMod-3.6.8/Python/getcopyright.c | 38 +
> .../PyMod-3.6.8/Python/importlib_external.h | 2431 +++
> .../Python-3.6.8/PyMod-3.6.8/Python/marshal.c | 1861 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pyhash.c | 437 +
> .../PyMod-3.6.8/Python/pylifecycle.c | 1726 ++
> .../Python-3.6.8/PyMod-3.6.8/Python/pystate.c | 969 +
> .../Python-3.6.8/PyMod-3.6.8/Python/pytime.c | 749 +
> .../Python-3.6.8/PyMod-3.6.8/Python/random.c | 636 +
> .../Python/Python-3.6.8/Python368.inf | 275 +
> .../Python-3.6.8/create_python368_pkg.bat | 48 +
> .../Python/Python-3.6.8/srcprep.py | 30 +
> 59 files changed, 89413 insertions(+)
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Py368ReadMe.txt
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/fileutils.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/osdefs.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyconfig.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pydtrace.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Include/pyport.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ctypes/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/genericpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/glob.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/http/client.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/importlib/_bootstrap_external.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/io.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/logging/__init__.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/ntpath.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/os.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/pydoc.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/shutil.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/site.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/subprocess.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Lib/zipfile.py
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_blake2/impl/blake2.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/_ctypes.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/callproc.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/ctypes_dlfcn.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/libffi_msvc/ffi_common.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/_ctypes/malloc_closure.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/config.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/edk2module.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/errnomodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/faulthandler.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/getpath.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/main.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/selectmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/socketmodule.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/sre_lib.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/timemodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Modules/zlib/gzguts.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/dictobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/memoryobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/object.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/stringlib/transmogrify.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Objects/unicodeobject.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/bltinmodule.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/fileutils.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/getcopyright.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/importlib_external.h
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/marshal.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pyhash.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pylifecycle.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pystate.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/pytime.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/PyMod-3.6.8/Python/random.c
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/Python368.inf
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/create_python368_pkg.bat
> create mode 100644 AppPkg/Applications/Python/Python-3.6.8/srcprep.py
>
> --
> 2.32.0.windows.1
>
>
>
>
>
^ permalink raw reply [flat|nested] 8+ messages in thread
end of thread, other threads:[~2021-09-03 1:35 UTC | newest]
Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-02 17:12 [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
2021-09-02 17:12 ` [edk2-libc Patch 1/1] AppPkg/Applications/Python/Python-3.6.8: Py 3.6.8 UEFI changes Michael D Kinney
2021-09-02 18:41 ` [edk2-devel] " Rebecca Cran
2021-09-02 20:46 ` Michael D Kinney
2021-09-02 20:48 ` Rebecca Cran
2021-09-02 20:48 ` Rebecca Cran
2021-09-02 18:22 ` [edk2-devel] [edk2-libc Patch 0/1] Add Python 3.6.8 Michael D Kinney
2021-09-03 1:35 ` Michael D Kinney