PrimeGrid

Message boards : Number crunching : nvidia-cuda-mps-control

composite
Volunteer tester
Joined: 16 Feb 10
Posts: 750
ID: 55391
Credit: 655,509,925
RAC: 408,754
Message 137574 - Posted: 9 Feb 2020 | 0:23:07 UTC

Has anyone tried using the nvidia-cuda-mps-control daemon in Linux with BOINC projects?

From the man page:

MPS is a runtime service designed to let multiple MPI processes using CUDA to run concurrently in a way that's transparent to the MPI program. A CUDA program runs in MPS mode if the MPS control daemon is running on the system. When CUDA is first initialized in a program, the CUDA driver attempts to connect to the MPS control daemon. If the connection attempt fails, the program continues to run as it normally would without MPS. ... Currently, CUDA MPS is available on 64-bit Linux only, requires a device that supports Unified Virtual Address (UVA) and has compute capability SM 3.5 or higher. Applications requiring pre-CUDA 4.0 APIs are not supported under CUDA MPS. Certain capabilities are only available starting with compute capability SM 7.0.


GTX 10xx and RTX 20xx cards, as well as some less powerful GPUs, have the required compute capability.

I wonder if running this daemon fixes the 100% CPU core utilization problem with Genefer subprojects.
What CUDA API version is used by Genefer?
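
For anyone wondering whether a particular card qualifies, the compute capability is easy to query from the CUDA runtime API. A minimal sketch (not from the original post; the file name and build line are only illustrative), assuming the CUDA toolkit headers and libraries are installed:

/* check_sm.c - print the compute capability of each CUDA device.
 * Build, for example:
 *   gcc check_sm.c -o check_sm -I/usr/local/cuda/include \
 *       -L/usr/local/cuda/lib64 -lcudart
 * Per NVIDIA's MPS documentation, the daemon itself is started separately
 * with "nvidia-cuda-mps-control -d" and stopped with
 * "echo quit | nvidia-cuda-mps-control".
 */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA devices found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        struct cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        int ok = (prop.major > 3) || (prop.major == 3 && prop.minor >= 5);
        printf("Device %d: %s, compute capability %d.%d -> %s\n",
               i, prop.name, prop.major, prop.minor,
               ok ? "meets the SM 3.5 MPS requirement" : "below SM 3.5");
    }
    return 0;
}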

Yves Gallot
Volunteer developer
Project scientist
Joined: 19 Aug 12
Posts: 556
ID: 164101
Credit: 304,715,793
RAC: 0
Message 137592 - Posted: 9 Feb 2020 | 9:03:42 UTC - in response to Message 137574.

Genefer is an OpenCL 1.1 application.

This code fixes 100% CPU core utilization.
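
The code Yves refers to isn't reproduced here, but the usual way to avoid the busy-wait on NVIDIA's OpenCL runtime is to flush the queue and poll the kernel's completion event with a short sleep, instead of blocking in clFinish()/clWaitForEvents(), which spins a CPU core. A sketch of that general pattern (the helper name and the 0.1 ms polling interval are illustrative, and queue/kernel/work sizes are assumed to be set up elsewhere):

/* Enqueue a kernel and wait for it without burning a CPU core:
 * flush, then poll the event status with a short sleep. */
#include <CL/cl.h>
#include <unistd.h>   /* usleep */

static cl_int run_kernel_low_cpu(cl_command_queue queue, cl_kernel kernel,
                                 size_t gsize, size_t lsize)
{
    cl_event evt;
    cl_int err = clEnqueueNDRangeKernel(queue, kernel, 1, NULL,
                                        &gsize, &lsize, 0, NULL, &evt);
    if (err != CL_SUCCESS) return err;

    clFlush(queue);   /* make sure the kernel is actually submitted */

    /* CL_QUEUED(3) -> CL_SUBMITTED(2) -> CL_RUNNING(1) -> CL_COMPLETE(0);
     * a negative status means an error occurred. */
    cl_int status = CL_QUEUED;
    do {
        usleep(100);  /* 0.1 ms between polls keeps the core essentially idle */
        err = clGetEventInfo(evt, CL_EVENT_COMMAND_EXECUTION_STATUS,
                             sizeof(status), &status, NULL);
        if (err != CL_SUCCESS) break;
    } while (status > CL_COMPLETE);

    clReleaseEvent(evt);
    return err;
}

The trade-off is a fraction of a millisecond of extra latency per wait in exchange for a CPU core that stays essentially idle.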

composite
Volunteer tester
Joined: 16 Feb 10
Posts: 750
ID: 55391
Credit: 655,509,925
RAC: 408,754
Message 137595 - Posted: 9 Feb 2020 | 9:49:28 UTC - in response to Message 137592.
Last modified: 9 Feb 2020 | 10:13:55 UTC

Yes, I remember that fix but I didn't remember that Genefer is OpenCL. My bad.

According to our favorite *pedia "CUDA provides both a low level API (CUDA Driver API, non single-source) and a higher level API (CUDA Runtime API, single-source)".

It's not apparent whether OpenCL communicates directly with the GPU card or goes through the CUDA driver. If the latter, then Genefer should run blissfully unaware that nvidia-cuda-mps-control is sharing the card among multiple GPU applications. Alas, I cannot test this; my GPU card has only compute capability 3.0.

EDIT:
I use the Nvidia proprietary driver 390.77 downloaded from the Nvidia site.
The Nvidia package provided by my distro that uses the same version numbering scheme is the "NVIDIA CUDA Driver Library", whereas the "NVIDIA CUDA Runtime Library" has a version number like 8.0.44-4.

This lends credence to the possibility that OpenCL talks to the GPU through the CUDA driver.

Yves Gallot
Volunteer developer
Project scientist
Joined: 19 Aug 12
Posts: 556
ID: 164101
Credit: 304,715,793
RAC: 0
Message 137596 - Posted: 9 Feb 2020 | 11:01:07 UTC - in response to Message 137595.

This lends credence to the possibility that OpenCL talks to the GPU through the CUDA driver.

I think that OpenCL is translated into CUDA. The platform is called "NVIDIA CUDA" and the version is "OpenCL 1.2 CUDA".
The OpenCL code is compiled into a PTX file (you can find it in %HOMEPATH%\AppData\Roaming\NVIDIA\ComputeCache on Windows) and the OpenCL function calls might be transposed into a sequence of CUDA functions.
I translated some OpenCL applications into CUDA applications (because the CUDA profiler is more advanced) and both versions run at exactly the same speed.
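
This is easy to check from the host side. A small sketch (illustrative only; it assumes nothing beyond the vendor's OpenCL headers and ICD loader) that prints each platform's name and version string:

/* List OpenCL platforms; with the NVIDIA driver installed this typically
 * reports a platform named "NVIDIA CUDA" with an "OpenCL 1.2 CUDA ..."
 * version string. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_uint n = 0;
    clGetPlatformIDs(0, NULL, &n);
    if (n == 0) { printf("No OpenCL platforms found\n"); return 1; }
    if (n > 16) n = 16;

    cl_platform_id ids[16];
    clGetPlatformIDs(n, ids, NULL);

    for (cl_uint i = 0; i < n; ++i) {
        char name[256], version[256];
        clGetPlatformInfo(ids[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(ids[i], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
        printf("Platform %u: %s / %s\n", i, name, version);
    }
    return 0;
}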

stream
Volunteer moderator
Project administrator
Volunteer developer
Volunteer tester
Joined: 1 Mar 14
Posts: 717
ID: 301928
Credit: 475,993,698
RAC: 98,079
Message 137654 - Posted: 10 Feb 2020 | 13:55:42 UTC - in response to Message 137596.

This lends credence to the possibility that OpenCL talks to the GPU through the CUDA driver.

I think that OpenCL is translated into CUDA. The platform is called "NVIDIA CUDA" and the version is "OpenCL 1.2 CUDA". The OpenCL code is compiled into a PTX file (you can find it in %HOMEPATH%\AppData\Roaming\NVIDIA\ComputeCache on Windows) and the OpenCL function calls might be transposed into a sequence of CUDA functions.

It's not correct to say so. It's like saying that "Pascal programs are translated to C". Both CUDA and OpenCL programs are compiled to intermediate code (PTX) on the first pass, and the PTX files are compiled into machine code for the specific GPU on the second pass. Once you have the PTX files, it does not matter which source language was used to produce them. Starting from PTX, everything is passed through the same set of compilers/libraries/drivers. Language-specific function calls are handled by the high-level compiler, which may produce either PTX primitives or hidden calls to a runtime library.
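
The intermediate code is also easy to inspect directly: on NVIDIA's OpenCL implementation, the "binary" that clGetProgramInfo returns for a built program is human-readable PTX. A sketch (the helper name is illustrative, and it assumes the cl_program was built for a single device):

/* Write the device "binary" of an already-built cl_program to a file;
 * on NVIDIA this is PTX text, the same intermediate code that compiled
 * CUDA goes through. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

static void dump_ptx(cl_program program, const char *path)
{
    size_t size = 0;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     sizeof(size), &size, NULL);
    if (size == 0) return;

    unsigned char *binary = malloc(size);
    unsigned char *binaries[1] = { binary };
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     sizeof(binaries), binaries, NULL);

    FILE *f = fopen(path, "wb");
    if (f) {
        fwrite(binary, 1, size, f);   /* readable PTX on NVIDIA */
        fclose(f);
    }
    free(binary);
}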
