I had two WUs run in almost the same time - about 80,000 seconds.
The credit for one was 1022 and the other 2215; why is there such a difference?
____________
Michael Goetz Volunteer moderator Project administrator
I had two WUs run in almost the same time - about 80,000 seconds.
The credit for one was 1022 and the other 2215; why is there such a difference?
Assuming they're both 321 tasks, the answer is that credit is determined by measuring how much work was done. "Work" is calculated as the amount of CPU time consumed multiplied by the benchmarked speed of your computer.
Simple so far, but here's where it gets all sideways:
The credit is then averaged amongst all the computers that returned a valid result. That's usually two computers, you and your wingman. BOINC's benchmarks are notoriously terrible, and some computers register much higher benchmarks than others, for no good reason. Therefore, some computers, doing the same work, get more credit than others. (No, it's not fair at all. One of my long term goals is to fix this.)
So what happened on your tasks is that for one of them you had a wingman (with very high benchmarks) that reported that he should get lots of credit, and for the other your wingman (with very low benchmarks) reported that he should get a lot less credit. The disparity in the granted credit is the result.
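For anyone who wants to see the mechanics, here is a minimal sketch of that scheme in Python. This is not PrimeGrid's or BOINC's actual server code: the scale factor, benchmark numbers, and function names are invented purely for illustration (the real claimed credit comes from the BOINC whetstone/dhrystone benchmarks).

```python
# Minimal sketch (NOT actual BOINC/PrimeGrid server code) of the scheme
# described above: each host "claims" credit proportional to CPU time
# times its own benchmark, and granted credit is the average of the
# claims from all hosts that returned a valid result.

def claimed_credit(cpu_seconds, benchmark):
    # 'benchmark' stands in for the BOINC benchmark score, normalized so
    # a typical host is 1.0; the 0.028 scale factor is arbitrary.
    return cpu_seconds * benchmark * 0.028

def granted_credit(claims):
    # Averaged over you and your wingman (all valid results).
    return sum(claims) / len(claims)

# Two tasks of identical difficulty, ~80,000 s of CPU time on my host.
# Task A's wingman reports an inflated benchmark, task B's a low one.
task_a = granted_credit([claimed_credit(80_000, 1.0), claimed_credit(80_000, 1.5)])
task_b = granted_credit([claimed_credit(80_000, 1.0), claimed_credit(80_000, 0.5)])
print(round(task_a), round(task_b))  # e.g. 2800 vs 1680
```

Identical work on your machine, very different granted credit, purely because of the wingman's benchmark - which is the disparity between the two 80,000-second tasks above.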
____________
My lucky number is 75898^524288+1
The major problem now is AVX. Because the tasks run 30-50% faster, you get substantially less credit than you would running them without it.
It would be nice if BOINC allowed setting the minimum credit for a subproject as well as the maximum.
The major problem now is AVX. Because the tasks run 30-50% faster, you get substantially less credit than you would running them without it.
It would be nice if BOINC allowed setting the minimum credit for a subproject as well as the maximum.
So it is: in PPS LLR in BOINC I get ~15k/day, but in the same MEGA project in PRPNet I get 50k/day.
A long time has passed since I last saw less than 1000 credits for a 321 task.
http://www.primegrid.com/workunit.php?wuid=319753573
____________
676754^262144+1 is prime
Honza Volunteer moderator Volunteer tester Project scientist
Hmm, granted credit 917.30
I recently switched to 321 and got granted credit 693.40:
http://www.primegrid.com/workunit.php?wuid=320796913
In other cases I got 2,360.70 on the very same host, because I was the 3rd to report and the other hosts took 3x and 5x longer to complete the task.
It's all a bit of a lottery.
I don't have an average number yet, since most of my 321 tasks are waiting for validation.
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186
I recently switched to 321 and got granted credit 693.40.
In other cases I got 2,360.70 on the very same host, because I was the 3rd to report and the other hosts took 3x and 5x longer to complete the task.
It's all a bit of a lottery.
I don't have an average number yet, since most of my 321 tasks are waiting for validation.
This is the 'average temperature of the patients in a hospital'. BOINC credit is based on calculation time, not on the task size.
Honza Volunteer moderator Volunteer tester Project scientist
This is the 'average temperature of the patients in a hospital'. BOINC credit is based on calculation time, not on the task size.
The very same host, very similar task size (3*2^9381850-1 vs 3*2^9387260+1), very similar CPU time.
CPU time Credit
28,867.88 2,360.70
28,798.85 693.40
Credit is based on how other instances of the same WU are doing and on the sequence in which results are sent back to the server.
I know this is the way BOINC is designed, and credit is not my priority, so it's not a big deal; I just wanted to illustrate it with some numbers.
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186
CPU time Credit
28,867.88 2,360.70
28,798.85 693.40
In the first case the initial task was calculated and returned (earlier?) by a slow computer; in the second case the task was calculated and returned by a fast computer. I think so...
Hmm, granted credit 917.30
I recently switched to 321 and got granted credit 693.40:
http://www.primegrid.com/workunit.php?wuid=320796913
In other cases I got 2,360.70 on the very same host, because I was the 3rd to report and the other hosts took 3x and 5x longer to complete the task.
It's all a bit of a lottery.
I don't have an average number yet, since most of my 321 tasks are waiting for validation.
The other host on that WU is mine. When two AVX-capable (and somewhat overclocked) CPUs return the same task, the credit is really low. A minimum credit, as suggested before, would reduce the lottery.
____________
676754^262144+1 is prime
rroonnaalldd Volunteer developer Volunteer tester
Credit parity is not possible, in my eyes. On the CPU side we now have the same situation for AVX vs. non-AVX capable hosts that we saw before with PPS-Sieve on GPU vs. CPU.
A minimum credit would be a good thing for slower or non-AVX CPUs, but on the one hand such a minimum credit setting would also result in an advantage for AVX-capable hosts, and on the other hand credit granting is constrained by the BOINC server software and does not expose all the needed settings. Some of PG's settings were written by the PG admins or their contributors themselves.
Take the PPS-Sieve settings for CUDA, OpenCL and CPU as an example of this.
____________
Best wishes. Knowledge is power. by jjwhalen
On the CPU side we now have the same situation for AVX vs. non-AVX capable hosts that we saw before with PPS-Sieve on GPU vs. CPU.
This is not 'the same'; in this case a different approach is used. Compare, for example, LLR AVX and LLR CUDA - where is the great advantage?
rroonnaalldd Volunteer developer Volunteer tester
This is not 'the same'; in this case a different approach is used. Compare, for example, LLR AVX and LLR CUDA - where is the great advantage?
I own only a slow GTS 450. The last time I checked, the llrCUDA app only reached the level of an outdated 2.5 GHz Core2Duo. llrAVX is faster and better suited to calculating numbers with bigger k values; llrCUDA supports only k*2^b+-1 (small k, big b). Shoichiro Yamada aka msft stopped development of llrCUDA in January 2012...
____________
Best wishes. Knowledge is power. by jjwhalen
A minimum credit would be a good thing for slower or non-AVX CPUs.
No, it's definitely the faster CPUs it would benefit.
On SoB tasks, pre-AVX, I was getting an average of around 16K each. With AVX I'm only getting around 10K, despite the tasks' presumably greater size now. If I dared to turn off hyperthreading, that would drop even more.
It makes a faster CPU entirely redundant from a credit perspective.
rroonnaalldd Volunteer developer Volunteer tester
On SoB tasks, pre-AVX, I was getting an average of around 16K each. With AVX I'm only getting around 10K, despite the tasks' presumably greater size now. If I dared to turn off hyperthreading, that would drop even more.
It makes a faster CPU entirely redundant from a credit perspective.
Here comes something to think about, and it is not your problem that I own only two little Core2 CPUs on a 65 nm architecture.
I calculated my last SoB unit 14 months ago. I needed 900k seconds (more than 10 days of calculation!) and was granted 12k credits.
According to your host 193489 with an "Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz", you need around 380k seconds and are granted between ~10k and 14k credits every time. You said that the units are getting bigger now.
That means you compute one SoB unit in less than half the time I would need for the entire unit... And the differences in computing time for the same work unit will become even greater when Intel's next-gen CPU (Haswell, with AVX2) enters the arena.
Either I have a translation problem or I urgently need some sleep.
____________
Best wishes. Knowledge is power. by jjwhalen
That means you compute one SoB unit in less than half the time I would need for the entire unit... And the differences in computing time for the same work unit will become even greater when Intel's next-gen CPU (Haswell, with AVX2) enters the arena.
Sure, I'm getting more credit per day with the newer CPU, but as we're doing the same amount of work whether it takes 10 s or 1,000,000 s, we should be getting the same amount of credit for it.
As it is, your 14-month-old WU gained you more credit than the new ones are giving me on average, and, as evidenced by the 300% variation in 321 credits posted above, the current system is just a complete lottery. If we're going to be "paid" in credits, then they at least need to be fairly awarded.
Can't BOINC credit for primality-proving subprojects be based on "prime score"? That's what is used for challenges, and it seems to be a method accepted by most without controversy. I believe it is based on the difficulty/amount of work required, not on how fast/capable the computer running it happens to be. Is there some technical BOINC reason for not using it?
Apologies in advance if this question has been asked and answered before. I know "BOINC credit" is an oft-discussed (and justly maligned) topic.
--Gary
Michael Goetz Volunteer moderator Project administrator
Can't BOINC credit for primality-proving subprojects be based on "prime score"? That's what is used for challenges, and it seems to be a method accepted by most without controversy. I believe it is based on the difficulty/amount of work required, not on how fast/capable the computer running it happens to be. Is there some technical BOINC reason for not using it?
Apologies in advance if this question has been asked and answered before. I know "BOINC credit" is an oft-discussed (and justly maligned) topic.
Prime score is not the difficulty of testing the primality of a number of a given size; it's the difficulty of finding a prime of that size. Prime score rises much more quickly than the associated credit should.
A better model -- not perfect, but much better than anything else -- is Rogue's scoring in PRPNet. That comes very close to estimating the difficulty, although it doesn't account for FFT size transitions or the fact that some forms of numbers may be more difficult to compute. Compared to the totally messed up scoring we have now, it would be a huge improvement.
We're rather busy with the server moves right now (we MUST get PRPNet moved soon), and we also have some app versions we want to get live, but fixing the credit is high on my priority list.
____________
My lucky number is 75898^524288+1
pschoefer Volunteer developer Volunteer tester
Can't BOINC credit for primality-proving subprojects be based on "prime score"? That's what is used for challenges, and it seems to be a method accepted by most without controversy. I believe it is based on the difficulty/amount of work required, not on how fast/capable the computer running it happens to be. Is there some technical BOINC reason for not using it?
Prime score does actually reflect the difficulty of finding a prime, so it grows much faster than the amount of work needed to test the number. While this is not a problem in a challenge (= no big change in the size of the numbers tested), it would give too much of a bonus to the big numbers like SoB LLR. The prime score for a current SoB test is roughly 5000 times the prime score for an SGS test, while the runtime of a SoB test is only about 300 times the runtime of an SGS test (on my i7-3770K; likely different on other hosts).
For sieving work, runtime is roughly the same for every WU of the same subproject, so it's an easy choice to grant fixed credit. For genefer, runtime is proportional to log(b)*N^2, so credits are based on that. LLR, however, is much more difficult: runtime is something like function(FFT size)*log(number), but the FFT size might be different on different hardware.
One solution would be to define a reference system and grant credits based on runtime on this reference system. There would still be some credit variation as not every host is comparable to the reference system, but it wouldn't be as extreme as with the current credit system and it would be more future-proof. However, I don't think it's easy to implement - if possible at all.
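As a sketch of what size-based (rather than runtime-based) credit could look like, here is a small Python example. Only the proportionality credit ~ log(b)*N^2 for genefer comes from the post above; the reference workunit, the reference credit value, and the function name are hypothetical.

```python
import math

REF_B, REF_N = 500_000, 2**15   # hypothetical reference genefer WU (b, N)
REF_CREDIT = 1_000.0            # credit arbitrarily assigned to that WU

def genefer_credit(b, n):
    """Credit scaled by log(b) * N^2 relative to the reference WU,
    independent of which host runs it or how that host benchmarks."""
    work = math.log(b) * n ** 2
    ref_work = math.log(REF_B) * REF_N ** 2
    return REF_CREDIT * work / ref_work

print(genefer_credit(500_000, 2**15))  # 1000.0  (the reference itself)
print(genefer_credit(500_000, 2**16))  # ~4000.0 (doubling N -> ~4x the work)
```

An LLR version would need the function(FFT size)*log(number) relationship mentioned above, which is exactly where the hardware-dependent FFT size makes things harder.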
____________
Honza Volunteer moderator Volunteer tester Project scientist
One solution would be to define a reference system and grant credits based on runtime on this reference system. There would still be some credit variation as not every host is comparable to the reference system, but it wouldn't be as extreme as with the current credit system and it would be more future-proof. However, I don't think it's easy to implement - if possible at all.
Back in 2005-06, we were seriously discussing such features on boinc.cz.
There were even BOINC clients with extra features like RRI (return results immediately; it took years until RRI was incorporated by UCB), CPU affinity, etc.
We suggested that the outdated benchmarking be replaced by a small calibration task. It would better tell how fast a host is and whether it returns valid results: both speed and reliability would be measured.
Per-host reliability was implemented in some form using adaptive replication.
I always liked the calibration idea, which could also be used for giving particular tasks to extra-reliable hosts - cleaning up old WUs, fast-checking some results, etc.
I think this could be implemented per project. Imagine a simple GFN task that would be sent upon project attach; the result would be stored with the host information and used for better credit calculation.
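A very rough sketch of how such a calibration result could feed into credit, with every constant and name here being hypothetical: the host's runtime on the small fixed task, relative to a chosen reference system, gives a speed factor that converts its CPU time into "reference time" before credit is granted.

```python
REFERENCE_CALIBRATION_SECONDS = 600.0   # runtime of the calibration task
                                        # on the chosen reference system

def speed_factor(host_calibration_seconds):
    # > 1.0: faster than the reference host, < 1.0: slower.
    return REFERENCE_CALIBRATION_SECONDS / host_calibration_seconds

def credit_for_task(cpu_seconds, host_speed_factor, credit_per_ref_hour=100.0):
    # Convert the host's CPU time into hours on the reference system,
    # so fast and slow hosts earn the same credit for the same work.
    reference_hours = cpu_seconds * host_speed_factor / 3600.0
    return reference_hours * credit_per_ref_hour

# The same WU on two hosts: the fast one finishes in half the time but
# has twice the speed factor, so both are granted identical credit.
print(credit_for_task(40_000, speed_factor(300.0)))  # fast host -> ~2222
print(credit_for_task(80_000, speed_factor(600.0)))  # slow host -> ~2222
```

It still depends on the calibration run being representative, but it removes the dependence on the wingman's benchmark.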
____________
My stats
Badge score: 1*1 + 5*1 + 8*3 + 9*11 + 10*1 + 11*1 + 12*3 = 186