pure pynvml-11.5.0

Commit 4712ff518f by clore, 2024-04-30 13:33:13 +00:00
15 changed files with 10157 additions and 0 deletions

LICENSE.txt (new file, 27 lines)

@@ -0,0 +1,27 @@
Copyright (c) 2011-2021, NVIDIA Corporation.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of staged-recipes nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MANIFEST.in (new file, 2 lines)

@@ -0,0 +1,2 @@
include versioneer.py
include pynvml/_version.py

PKG-INFO (new file, 233 lines)

@@ -0,0 +1,233 @@
Metadata-Version: 2.1
Name: pynvml
Version: 11.5.0
Summary: Python Bindings for the NVIDIA Management Library
Home-page: http://www.nvidia.com/
Author: NVIDIA Corporation
Author-email: rzamora@nvidia.com
License: BSD
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Hardware
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE.txt
Python bindings to the NVIDIA Management Library
================================================
Provides a Python interface to GPU management and monitoring functions.
This is a wrapper around the NVML library.
For information about the NVML library, see the NVML developer page
http://developer.nvidia.com/nvidia-management-library-nvml
As of version 11.0.0, the NVML wrappers used in pynvml are identical
to those published through [nvidia-ml-py](https://pypi.org/project/nvidia-ml-py/).
Note that this file can be run with 'python -m doctest -v README.txt',
although the results are system-dependent.
Requires
--------
Python 3, or an earlier version with the ctypes module.
Installation
------------
pip install .
Usage
-----
You can use the lower-level nvml bindings:
```python
>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
... handle = nvmlDeviceGetHandleByIndex(i)
... print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100
>>> nvmlShutdown()
```
Or the higher-level nvidia_smi API:
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
```
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
```
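For reference, here is a minimal sketch of consuming the DeviceQuery result. The nested key layout (a 'gpu' list whose entries carry an 'fb_memory_usage' dict with 'total'/'free'/'unit') is assumed from the current smi.py output format; verify it against your installed version:
```python
from pynvml.smi import nvidia_smi

nvsmi = nvidia_smi.getInstance()
result = nvsmi.DeviceQuery('memory.free, memory.total')
# Assumed layout: {'gpu': [{'fb_memory_usage': {'total': ..., 'free': ..., 'unit': 'MiB'}}, ...]}
for idx, gpu in enumerate(result['gpu']):
    mem = gpu['fb_memory_usage']
    print(f"GPU {idx}: {mem['free']} / {mem['total']} {mem['unit']} free")
```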
Functions
---------
Python methods wrap NVML functions, implemented in a C shared library.
Each function's use is the same, with the following exceptions:
- Instead of returning error codes, failing calls raise Python exceptions.
```python
>>> try:
... nvmlDeviceGetCount()
... except NVMLError as error:
... print(error)
...
Uninitialized
```
- C function output parameters are returned from the corresponding
Python function left to right.
```c
nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
nvmlEnableState_t *current,
nvmlEnableState_t *pending);
```
```python
>>> nvmlInit()
>>> handle = nvmlDeviceGetHandleByIndex(0)
>>> (current, pending) = nvmlDeviceGetEccMode(handle)
```
- C structs are converted into Python classes.
```c
nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
nvmlMemory_t *memory);
typedef struct nvmlMemory_st {
unsigned long long total;
unsigned long long free;
unsigned long long used;
} nvmlMemory_t;
```
```python
>>> info = nvmlDeviceGetMemoryInfo(handle)
>>> print("Total memory:", info.total)
Total memory: 5636292608
>>> print("Free memory:", info.free)
Free memory: 5578420224
>>> print("Used memory:", info.used)
Used memory: 57872384
```
- Python handles string buffer creation.
```c
nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
unsigned int length);
```
```python
>>> version = nvmlSystemGetDriverVersion()
>>> nvmlShutdown()
```
For usage information see the NVML documentation.
Variables
---------
All meaningful NVML constants and enums are exposed in Python.
The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is mapped to the field.
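As an illustration of that None mapping, nvmlDeviceGetComputeRunningProcesses is one wrapper known to substitute None for an unavailable usedGpuMemory value; a hedged sketch (the field choice is illustrative, and availability varies by platform and driver):
```python
from pynvml import (nvmlInit, nvmlShutdown, nvmlDeviceGetHandleByIndex,
                    nvmlDeviceGetComputeRunningProcesses)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)
for proc in nvmlDeviceGetComputeRunningProcesses(handle):
    # usedGpuMemory arrives as None (not NVML_VALUE_NOT_AVAILABLE) when the
    # driver cannot report it, so guard before treating it as a number.
    mem = "N/A" if proc.usedGpuMemory is None else f"{proc.usedGpuMemory} B"
    print(f"pid {proc.pid}: {mem}")
nvmlShutdown()
```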
NVML Permissions
----------------
Many of the `pynvml` wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions may prevent pynvml from querying GPU performance counters. For example:
```
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
```
A simple way to check the permission status is to look for `RmProfilingAdminOnly` in the driver `params` file (note that `RmProfilingAdminOnly == 1` means admin/sudo access is required):
```
$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
```
For more information on setting/unsetting the relevant admin privileges, see [these notes](https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters) on resolving `ERR_NVGPUCTRPERM` errors.
Release Notes
-------------
- Version 2.285.0
  - Added new functions for NVML 2.285. See NVML documentation for more information.
  - Ported to support Python 3.0 and Python 2.0 syntax.
  - Added nvidia_smi.py tool as a sample app.
- Version 3.295.0
  - Added new functions for NVML 3.295. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
    - Includes additional error handling
- Version 4.304.0
  - Added new functions for NVML 4.304. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
- Version 4.304.3
  - Fixed nvmlUnitGetDeviceCount bug
- Version 5.319.0
  - Added new functions for NVML 5.319. See NVML documentation for more information.
- Version 6.340.0
  - Added new functions for NVML 6.340. See NVML documentation for more information.
- Version 7.346.0
  - Added new functions for NVML 7.346. See NVML documentation for more information.
- Version 7.352.0
  - Added new functions for NVML 7.352. See NVML documentation for more information.
- Version 8.0.0
  - Refactored code into an nvidia_smi singleton class
  - Added DeviceQuery, which returns a dictionary of (name, value) pairs
  - Added filter parameters on DeviceQuery to match the query API in nvidia-smi
  - Added filter parameters on XmlDeviceQuery to match the query API in nvidia-smi
  - Added integer enumeration for filter strings to reduce overhead for performance monitoring
  - Added loop(filter) method with async and callback support
- Version 8.0.1
  - Restructured directories into two packages (pynvml and nvidia_smi)
  - Added initial tests for both packages
  - Some name-convention cleanup in pynvml
- Version 8.0.2
  - Added NVLink function wrappers for the pynvml module
- Version 8.0.3
  - Added versioneer
  - Fixed nvmlDeviceGetNvLinkUtilizationCounter bug
- Version 8.0.4
  - Added nvmlDeviceGetTotalEnergyConsumption
  - Added notes about NVML permissions
  - Fixed version-check testing
- Version 11.0.0
  - Updated nvml.py to CUDA 11
  - Updated smi.py DeviceQuery to R460
  - Aligned nvml.py with the latest nvidia-ml-py deployment
- Version 11.4.0
  - Updated nvml.py to CUDA 11.4
  - Updated smi.py NVML_BRAND_NAMES
  - Aligned nvml.py with the latest nvidia-ml-py deployment (11.495.46)
- Version 11.4.1
  - Fixed comma bugs in nvml.py
- Version 11.5.0
  - Updated nvml.py to support CUDA 11.5 and CUDA 12
  - Aligned with the latest nvidia-ml-py deployment (11.525.84)

README.md (new executable file, 211 lines)

@@ -0,0 +1,211 @@
Python bindings to the NVIDIA Management Library
================================================
Provides a Python interface to GPU management and monitoring functions.
This is a wrapper around the NVML library.
For information about the NVML library, see the NVML developer page
http://developer.nvidia.com/nvidia-management-library-nvml
As of version 11.0.0, the NVML wrappers used in pynvml are identical
to those published through [nvidia-ml-py](https://pypi.org/project/nvidia-ml-py/).
Note that this file can be run with 'python -m doctest -v README.txt',
although the results are system-dependent.
Requires
--------
Python 3, or an earlier version with the ctypes module.
Installation
------------
pip install .
Usage
-----
You can use the lower-level nvml bindings:
```python
>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
... handle = nvmlDeviceGetHandleByIndex(i)
... print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100
>>> nvmlShutdown()
```
Or the higher-level nvidia_smi API:
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
```
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
```
Functions
---------
Python methods wrap NVML functions, implemented in a C shared library.
Each function's use is the same, with the following exceptions:
- Instead of returning error codes, failing calls raise Python exceptions.
```python
>>> try:
... nvmlDeviceGetCount()
... except NVMLError as error:
... print(error)
...
Uninitialized
```
- C function output parameters are returned from the corresponding
Python function left to right.
```c
nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
nvmlEnableState_t *current,
nvmlEnableState_t *pending);
```
```python
>>> nvmlInit()
>>> handle = nvmlDeviceGetHandleByIndex(0)
>>> (current, pending) = nvmlDeviceGetEccMode(handle)
```
- C structs are converted into Python classes.
```c
nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
nvmlMemory_t *memory);
typedef struct nvmlMemory_st {
unsigned long long total;
unsigned long long free;
unsigned long long used;
} nvmlMemory_t;
```
```python
>>> info = nvmlDeviceGetMemoryInfo(handle)
>>> print("Total memory:", info.total)
Total memory: 5636292608
>>> print("Free memory:", info.free)
Free memory: 5578420224
>>> print("Used memory:", info.used)
Used memory: 57872384
```
- Python handles string buffer creation.
```c
nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
unsigned int length);
```
```python
>>> version = nvmlSystemGetDriverVersion()
>>> nvmlShutdown()
```
For usage information see the NVML documentation.
Variables
---------
All meaningful NVML constants and enums are exposed in Python.
The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is mapped to the field.
NVML Permissions
----------------
Many of the `pynvml` wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions may prevent pynvml from querying GPU performance counters. For example:
```
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
```
A simple way to check the permission status is to look for `RmProfilingAdminOnly` in the driver `params` file (note that `RmProfilingAdminOnly == 1` means admin/sudo access is required):
```
$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
```
For more information on setting/unsetting the relevant admin privileges, see [these notes](https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters) on resolving `ERR_NVGPUCTRPERM` errors.
Release Notes
-------------
- Version 2.285.0
  - Added new functions for NVML 2.285. See NVML documentation for more information.
  - Ported to support Python 3.0 and Python 2.0 syntax.
  - Added nvidia_smi.py tool as a sample app.
- Version 3.295.0
  - Added new functions for NVML 3.295. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
    - Includes additional error handling
- Version 4.304.0
  - Added new functions for NVML 4.304. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
- Version 4.304.3
  - Fixed nvmlUnitGetDeviceCount bug
- Version 5.319.0
  - Added new functions for NVML 5.319. See NVML documentation for more information.
- Version 6.340.0
  - Added new functions for NVML 6.340. See NVML documentation for more information.
- Version 7.346.0
  - Added new functions for NVML 7.346. See NVML documentation for more information.
- Version 7.352.0
  - Added new functions for NVML 7.352. See NVML documentation for more information.
- Version 8.0.0
  - Refactored code into an nvidia_smi singleton class
  - Added DeviceQuery, which returns a dictionary of (name, value) pairs
  - Added filter parameters on DeviceQuery to match the query API in nvidia-smi
  - Added filter parameters on XmlDeviceQuery to match the query API in nvidia-smi
  - Added integer enumeration for filter strings to reduce overhead for performance monitoring
  - Added loop(filter) method with async and callback support
- Version 8.0.1
  - Restructured directories into two packages (pynvml and nvidia_smi)
  - Added initial tests for both packages
  - Some name-convention cleanup in pynvml
- Version 8.0.2
  - Added NVLink function wrappers for the pynvml module
- Version 8.0.3
  - Added versioneer
  - Fixed nvmlDeviceGetNvLinkUtilizationCounter bug
- Version 8.0.4
  - Added nvmlDeviceGetTotalEnergyConsumption
  - Added notes about NVML permissions
  - Fixed version-check testing
- Version 11.0.0
  - Updated nvml.py to CUDA 11
  - Updated smi.py DeviceQuery to R460
  - Aligned nvml.py with the latest nvidia-ml-py deployment
- Version 11.4.0
  - Updated nvml.py to CUDA 11.4
  - Updated smi.py NVML_BRAND_NAMES
  - Aligned nvml.py with the latest nvidia-ml-py deployment (11.495.46)
- Version 11.4.1
  - Fixed comma bugs in nvml.py
- Version 11.5.0
  - Updated nvml.py to support CUDA 11.5 and CUDA 12
  - Aligned with the latest nvidia-ml-py deployment (11.525.84)

pynvml.egg-info/PKG-INFO (new file, 233 lines)

@@ -0,0 +1,233 @@
Metadata-Version: 2.1
Name: pynvml
Version: 11.5.0
Summary: Python Bindings for the NVIDIA Management Library
Home-page: http://www.nvidia.com/
Author: NVIDIA Corporation
Author-email: rzamora@nvidia.com
License: BSD
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: BSD License
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: System :: Hardware
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.6
Description-Content-Type: text/markdown
License-File: LICENSE.txt
Python bindings to the NVIDIA Management Library
================================================
Provides a Python interface to GPU management and monitoring functions.
This is a wrapper around the NVML library.
For information about the NVML library, see the NVML developer page
http://developer.nvidia.com/nvidia-management-library-nvml
As of version 11.0.0, the NVML wrappers used in pynvml are identical
to those published through [nvidia-ml-py](https://pypi.org/project/nvidia-ml-py/).
Note that this file can be run with 'python -m doctest -v README.txt',
although the results are system-dependent.
Requires
--------
Python 3, or an earlier version with the ctypes module.
Installation
------------
pip install .
Usage
-----
You can use the lower-level nvml bindings:
```python
>>> from pynvml import *
>>> nvmlInit()
>>> print("Driver Version:", nvmlSystemGetDriverVersion())
Driver Version: 410.00
>>> deviceCount = nvmlDeviceGetCount()
>>> for i in range(deviceCount):
... handle = nvmlDeviceGetHandleByIndex(i)
... print("Device", i, ":", nvmlDeviceGetName(handle))
...
Device 0 : Tesla V100
>>> nvmlShutdown()
```
Or the higher-level nvidia_smi API:
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
nvsmi.DeviceQuery('memory.free, memory.total')
```
```python
from pynvml.smi import nvidia_smi
nvsmi = nvidia_smi.getInstance()
print(nvsmi.DeviceQuery('--help-query-gpu'), end='\n')
```
Functions
---------
Python methods wrap NVML functions, implemented in a C shared library.
Each function's use is the same, with the following exceptions:
- Instead of returning error codes, failing calls raise Python exceptions.
```python
>>> try:
... nvmlDeviceGetCount()
... except NVMLError as error:
... print(error)
...
Uninitialized
```
- C function output parameters are returned from the corresponding
Python function left to right.
```c
nvmlReturn_t nvmlDeviceGetEccMode(nvmlDevice_t device,
nvmlEnableState_t *current,
nvmlEnableState_t *pending);
```
```python
>>> nvmlInit()
>>> handle = nvmlDeviceGetHandleByIndex(0)
>>> (current, pending) = nvmlDeviceGetEccMode(handle)
```
- C structs are converted into Python classes.
```c
nvmlReturn_t DECLDIR nvmlDeviceGetMemoryInfo(nvmlDevice_t device,
nvmlMemory_t *memory);
typedef struct nvmlMemory_st {
unsigned long long total;
unsigned long long free;
unsigned long long used;
} nvmlMemory_t;
```
```python
>>> info = nvmlDeviceGetMemoryInfo(handle)
>>> print("Total memory:", info.total)
Total memory: 5636292608
>>> print("Free memory:", info.free)
Free memory: 5578420224
>>> print("Used memory:", info.used)
Used memory: 57872384
```
- Python handles string buffer creation.
```c
nvmlReturn_t nvmlSystemGetDriverVersion(char* version,
unsigned int length);
```
```python
>>> version = nvmlSystemGetDriverVersion()
>>> nvmlShutdown()
```
For usage information see the NVML documentation.
Variables
---------
All meaningful NVML constants and enums are exposed in Python.
The NVML_VALUE_NOT_AVAILABLE constant is not used. Instead, None is mapped to the field.
NVML Permissions
----------------
Many of the `pynvml` wrappers assume that the underlying NVIDIA Management Library (NVML) API can be used without admin/root privileges. However, system permissions may prevent pynvml from querying GPU performance counters. For example:
```
$ nvidia-smi nvlink -g 0
GPU 0: Tesla V100-SXM2-32GB (UUID: GPU-96ab329d-7a1f-73a8-a9b7-18b4b2855f92)
NVML: Unable to get the NvLink link utilization counter control for link 0: Insufficient Permissions
```
A simple way to check the permission status is to look for `RmProfilingAdminOnly` in the driver `params` file (note that `RmProfilingAdminOnly == 1` means admin/sudo access is required):
```
$ cat /proc/driver/nvidia/params | grep RmProfilingAdminOnly
RmProfilingAdminOnly: 1
```
For more information on setting/unsetting the relevant admin privileges, see [these notes](https://developer.nvidia.com/nvidia-development-tools-solutions-ERR_NVGPUCTRPERM-permission-issue-performance-counters) on resolving `ERR_NVGPUCTRPERM` errors.
Release Notes
-------------
- Version 2.285.0
  - Added new functions for NVML 2.285. See NVML documentation for more information.
  - Ported to support Python 3.0 and Python 2.0 syntax.
  - Added nvidia_smi.py tool as a sample app.
- Version 3.295.0
  - Added new functions for NVML 3.295. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
    - Includes additional error handling
- Version 4.304.0
  - Added new functions for NVML 4.304. See NVML documentation for more information.
  - Updated nvidia_smi.py tool
- Version 4.304.3
  - Fixed nvmlUnitGetDeviceCount bug
- Version 5.319.0
  - Added new functions for NVML 5.319. See NVML documentation for more information.
- Version 6.340.0
  - Added new functions for NVML 6.340. See NVML documentation for more information.
- Version 7.346.0
  - Added new functions for NVML 7.346. See NVML documentation for more information.
- Version 7.352.0
  - Added new functions for NVML 7.352. See NVML documentation for more information.
- Version 8.0.0
  - Refactored code into an nvidia_smi singleton class
  - Added DeviceQuery, which returns a dictionary of (name, value) pairs
  - Added filter parameters on DeviceQuery to match the query API in nvidia-smi
  - Added filter parameters on XmlDeviceQuery to match the query API in nvidia-smi
  - Added integer enumeration for filter strings to reduce overhead for performance monitoring
  - Added loop(filter) method with async and callback support
- Version 8.0.1
  - Restructured directories into two packages (pynvml and nvidia_smi)
  - Added initial tests for both packages
  - Some name-convention cleanup in pynvml
- Version 8.0.2
  - Added NVLink function wrappers for the pynvml module
- Version 8.0.3
  - Added versioneer
  - Fixed nvmlDeviceGetNvLinkUtilizationCounter bug
- Version 8.0.4
  - Added nvmlDeviceGetTotalEnergyConsumption
  - Added notes about NVML permissions
  - Fixed version-check testing
- Version 11.0.0
  - Updated nvml.py to CUDA 11
  - Updated smi.py DeviceQuery to R460
  - Aligned nvml.py with the latest nvidia-ml-py deployment
- Version 11.4.0
  - Updated nvml.py to CUDA 11.4
  - Updated smi.py NVML_BRAND_NAMES
  - Aligned nvml.py with the latest nvidia-ml-py deployment (11.495.46)
- Version 11.4.1
  - Fixed comma bugs in nvml.py
- Version 11.5.0
  - Updated nvml.py to support CUDA 11.5 and CUDA 12
  - Aligned with the latest nvidia-ml-py deployment (11.525.84)

pynvml.egg-info/SOURCES.txt (new file, 14 lines)

@@ -0,0 +1,14 @@
LICENSE.txt
MANIFEST.in
README.md
setup.cfg
setup.py
versioneer.py
pynvml/__init__.py
pynvml/_version.py
pynvml/nvml.py
pynvml/smi.py
pynvml.egg-info/PKG-INFO
pynvml.egg-info/SOURCES.txt
pynvml.egg-info/dependency_links.txt
pynvml.egg-info/top_level.txt

pynvml.egg-info/dependency_links.txt (new file, 1 line)

@@ -0,0 +1 @@

pynvml.egg-info/top_level.txt (new file, 1 line)

@@ -0,0 +1 @@
pynvml

pynvml/__init__.py (new file, 5 lines)

@@ -0,0 +1,5 @@
from .nvml import *
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
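
Given the four lines above, the installed version is available at runtime; a minimal usage sketch:
```python
import pynvml

# __version__ is filled in from pynvml/_version.py at import time.
print(pynvml.__version__)  # '11.5.0' for this snapshot
```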

pynvml/_version.py (new file, 21 lines)

@@ -0,0 +1,21 @@
# This file was generated by 'versioneer.py' (0.18) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
import json
version_json = '''
{
"date": "2023-02-14T19:25:14-0800",
"dirty": false,
"error": null,
"full-revisionid": "43a7803c42358a87e765bb66373f3ae536d589a3",
"version": "11.5.0"
}
''' # END VERSION_JSON
def get_versions():
return json.loads(version_json)
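
Because the JSON blob is baked in, get_versions() reduces to a dictionary parse in source distributions; a sketch of what this snapshot yields:
```python
from pynvml._version import get_versions

info = get_versions()
# Keys mirror the version_json blob above.
assert info['version'] == '11.5.0'
print(info['full-revisionid'], info['date'], info['dirty'])
```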

pynvml/nvml.py (new file, 4532 lines)

File diff suppressed because it is too large.

pynvml/smi.py (new executable file, 2996 lines)

File diff suppressed because it is too large.

setup.cfg (new file, 15 lines)

@@ -0,0 +1,15 @@
[metadata]
license_file = LICENSE.txt
[versioneer]
VCS = git
style = pep440
versionfile_source = pynvml/_version.py
versionfile_build = pynvml/_version.py
tag_prefix =
parentdir_prefix = pynvml-
[egg_info]
tag_build =
tag_date = 0
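
Under the [versioneer] section above, the version is derived from git metadata (VCS = git) and rendered in pep440 style; because tag_prefix is empty, a tag such as 11.5.0 becomes the version string verbatim, and parentdir_prefix = pynvml- lets an unpacked pynvml-11.5.0/ tarball report its version without git. A hedged sketch (the exact suffix format for untagged commits follows versioneer's pep440 rules):
```python
import versioneer

# On the commit tagged '11.5.0' this returns '11.5.0'; a few commits past
# the tag it would look roughly like '11.5.0+2.g1234abc' (pep440 style).
print(versioneer.get_version())
```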

setup.py (new executable file, 44 lines)

@@ -0,0 +1,44 @@
from setuptools import setup, find_packages
from os import path
from io import open
import versioneer
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
# earlier versions don't support all classifiers
#if version < '2.2.3':
# from distutils.dist import DistributionMetadata
# DistributionMetadata.classifiers = None
# DistributionMetadata.download_url = None
setup(name='pynvml',
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
python_requires='>=3.6',
description='Python Bindings for the NVIDIA Management Library',
long_description=long_description,
long_description_content_type='text/markdown',
packages=find_packages(exclude=['notebooks', 'docs', 'tests']),
package_data={'pynvml': ['README.md','help_query_gpu.txt']},
license="BSD",
url="http://www.nvidia.com/",
author="NVIDIA Corporation",
author_email="rzamora@nvidia.com",
classifiers=[
'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'License :: OSI Approved :: BSD License',
'Operating System :: Microsoft :: Windows',
'Operating System :: POSIX :: Linux',
'Programming Language :: Python',
'Topic :: Software Development :: Libraries :: Python Modules',
'Topic :: System :: Hardware',
'Topic :: System :: Systems Administration',
],
)

versioneer.py (new file, 1822 lines)

File diff suppressed because it is too large.