
Message boards : Sieving : sr2sieve -t switch

warddr (Project donor, Volunteer tester)
Joined: 7 Feb 08
Posts: 254
ID: 18735
Credit: 24,054,820
RAC: 0
Message 14255 - Posted: 7 Mar 2009 | 14:36:14 UTC
Last modified: 7 Mar 2009 | 14:46:28 UTC

I don't think the -t switch of sr2sieve is working as it should. I did some tests on this host (Core 2 Quad Q8300, 2.5 GHz, Ubuntu 8.10 x64), and the results surprised me:

-t4 : 18427818 p/sec
-t5 : 20679812 p/sec
-t6 : 22012325 p/sec
-t7 : 22641903 p/sec
-t8 : 23433708 p/sec
-t9 : not accepted, but it would probably increase the speed even further if it were possible

With -t8 on my quad, every core is at about 95% usage, so I switched sr2sieve to maximum priority (-ZZ) because there is still enough spare capacity left for my own work.
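For reference, the relative gain from oversubscribing can be read off those figures with a quick calculation (a throwaway sketch of my own, not part of sr2sieve):

```python
# p/sec rates reported above, keyed by the -t thread count.
rates = {4: 18427818, 5: 20679812, 6: 22012325, 7: 22641903, 8: 23433708}

baseline = rates[4]  # one thread per physical core
for t in sorted(rates):
    gain = (rates[t] / baseline - 1) * 100
    print(f"-t{t}: {rates[t]} p/sec  (+{gain:.1f}% vs -t4)")
```

-t8 comes out roughly 27% faster than -t4 on this host, so each extra thread beyond the core count is still buying a few percent.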

geoff (Project donor, Volunteer developer)
Joined: 3 Aug 07
Posts: 99
ID: 10427
Credit: 343,437
RAC: 0
Message 14288 - Posted: 8 Mar 2009 | 21:50:53 UTC - in response to Message 14255.

One possibility is that with -t4 the parent thread is getting bounced from one processor to another; this can be costly on some machines. By running more child threads than there are processors, the scheduler might have more options to run a waiting child thread instead of the parent thread, and so the parent might stay on the same processor for longer.

Normally the kernel scheduler will settle on the best arrangement by itself if given a bit of time; it can take a few minutes. You could try to intervene manually by restricting the processor affinity of the parent thread after sr2sieve has started and spawned its child threads: taskset -pc <CPU> <PID> will restrict process PID to processor number CPU. However, this limits the scheduler's options, so it probably won't help in the long run.
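The same pinning can be done from Python with os.sched_setaffinity (Linux only). A hedged sketch of the idea, demonstrated on the current process since sr2sieve may not be running; to affect sr2sieve you would pass its PID instead of 0:

```python
import os

# 0 means "the calling process". Show which CPUs it may currently run on.
print(os.sched_getaffinity(0))

# Restrict it to processor 0 only, the equivalent of `taskset -pc 0 <PID>`.
os.sched_setaffinity(0, {0})
print(os.sched_getaffinity(0))   # now {0}
```

As noted above, this narrows the scheduler's options, so it is worth undoing (pass the original CPU set back) if it doesn't help.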

Another possibility, though it probably doesn't apply in your case, is that another program is running, so spawning more sr2sieve threads gives sr2sieve a bigger share of total processor time at the expense of that other program.

There are two different threading models I am considering for use in future:

1. Each thread has its own prime generator, and so all threads can generate primes concurrently. This model will use (4+4T) bytes per sieve prime for T threads.

2. All threads access a single prime generator in shared memory, but only one can use it to generate primes at a time. This model will use 8 bytes per sieve prime, independent of the number of threads.

Both of these models require less communication between threads than the current model, and neither requires a separate parent thread, so the bouncing problem described above cannot occur. Ideally I would implement both models and choose between them at runtime depending on how much memory is available and on the L2/L3 cache configuration.

Copyright © 2005 - 2023 Rytis Slatkevičius and PrimeGrid community.