As of this evening I'm out of the race, because it makes no sense to run a challenge against cheater clients.
P4 machines are earning more points than the latest C2D, quad, or Xeon CPUs. *crazy*
No thanks to this!
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
Joined: 5 Feb 08 Posts: 1214 ID: 18646 Credit: 850,948,383 RAC: 113,472
And now we are discussing whether to stop challenging!!
The last challenge, against AF, was played fair.
This one, against SUSA, is not being played fair.
That's my opinion; others will follow ...
____________
Sysadm@Nbg
my current lucky number: 113856050^65536 + 1
PSA-PRPNet-Stats-URL: http://u-g-f.de/PRPNet/
And now we are discussing whether to stop challenging!!
The last challenge, against AF, was played fair.
This one, against SUSA, is not being played fair.
That's my opinion; others will follow ...
What's the problem?! I don't understand what is fair play and what is not. Is somebody cheating here? I don't like cheating, so tell us more or say nothing, please.
And now we are discussing whether to stop challenging!!
The last challenge, against AF, was played fair.
This one, against SUSA, is not being played fair.
That's my opinion; others will follow ...
What's the problem?! I don't understand what is fair play and what is not. Is somebody cheating here? I don't like cheating, so tell us more or say nothing, please.
Just look at the top teams' machines and you will know what is cheating and what is not. Last month we had a nice, clearly fair race against AF, but now a few things have changed and I am out.
P4s on 32-bit Windows with benchmarks inflated 5x are earning more credit than a 45nm Xeon/quad on 64-bit Windows: the P4s are granted 170 points for 18,000 seconds of WU crunch time, while the newest CPUs earn 70-80 points for 12,000 seconds.
It's always the same: whether the races use fixed credits or not, if there is a race where they can use their cheater clients, they show up and destroy our fun.
But we are all aware of how they reached the combined #1.
Bye, I am back at PPS.
Sysadm@Nbg Volunteer moderator Volunteer tester Project scientist
Joined: 5 Feb 08 Posts: 1214 ID: 18646 Credit: 850,948,383 RAC: 113,472
And now we are discussing whether to stop challenging!!
The last challenge, against AF, was played fair.
This one, against SUSA, is not being played fair.
That's my opinion; others will follow ...
What's the problem?! I don't understand what is fair play and what is not. Is somebody cheating here? I don't like cheating, so tell us more or say nothing, please.
More follows, and that's my last post - I'm out of the challenge.
I will state unequivocally that Seti.USA has *NEVER* cheated on any challenge. I will be notifying Team Leadership.
I request that the PrimeGrid admins remove those unsubstantiated accusations from the boards until they can back them up (which they can't).
I will state unequivocally that Seti.USA has *NEVER* cheated on any challenge. I will be notifying Team Leadership.
I request that the PrimeGrid admins remove those unsubstantiated accusations from the boards until they can back them up (which they can't).
Then look at the data below and think about what your team is doing.
The client being used is your known cheater client, 5.10.30.
Two examples out of hundreds:
CPU type GenuineIntel
Intel(R) Pentium(R) D CPU 3.00GHz [EM64T Family 15 Model 6 Stepping 2]
Number of CPUs 2
Operating System Microsoft Windows XP Professional x64 Edition
, Service Pack 2, (05.02.3790.00)
Memory 1013.87 MB
Cache 488.28 KB
Measured floating point speed 1357.12 million ops/sec
Measured integer speed 3945.19 million ops/sec
CPU time (sec) / claimed credit / granted credit:
16,486.23 114.09 170.98
18,615.27 128.82 115.87
CPU type GenuineIntel
Intel(R) Xeon(R) CPU E5345 @ 2.33GHz [EM64T Family 6 Model 15 Stepping 7]
Number of CPUs 8
Operating System Microsoft Windows XP Professional x64 Edition
, Service Pack 2, (05.02.3790.00)
Memory 4094.69 MB
Cache 122.07 KB
Measured floating point speed 5113.75 million ops/sec
Measured integer speed 17922.99 million ops/sec
CPU time (sec) / claimed credit / granted credit:
12,817.59 170.88 170.98
12,832.00 171.07 165.76
Your cheaters are being granted 200% more than the newest 45nm CPUs.
Nothing else to say. Bye, I am crunching for another project.
Vato Volunteer tester
Joined: 2 Feb 08 Posts: 838 ID: 18447 Credit: 613,795,775 RAC: 240,944
Carnal - read my post above.
I also run one of these 'GenuineIntel Intel(R) Pentium(R) D CPU 3.00GHz' that ticks a WU over in around 19000 seconds (whilst having some other activity going on). That's no different than what you call a 'cheater'. Am I a cheater too? I emphatically am not, as will be proved by the validator when the double-check comes in. You're just annoyed at an artifact of the BOINC credit system for LLR, jumping to conclusions, and therefore making baseless assertions.
____________
I also run one of these 'GenuineIntel Intel(R) Pentium(R) D CPU 3.00GHz' that ticks a WU over in around 19000 seconds (whilst having some other activity going on). That's no different than what you call a 'cheater'. Am I a cheater too? I emphatically am not, as will be proved by the validator when the double-check comes in.
I can confirm this, since I'm running a D930 myself!
frank.
And what if the "cheater" computers are all overclocked??? You can't see the real CPU clock in BOINC.
CPU type GenuineIntel
Intel(R) Pentium(R) D CPU 3.00GHz [EM64T Family 15 Model 6 Stepping 2]
Number of CPUs 2
Operating System Microsoft Windows XP Professional x64 Edition
, Service Pack 2, (05.02.3790.00)
Memory 1013.87 MB
Cache 488.28 KB
Measured floating point speed 1357.12 million ops/sec
Measured integer speed 3945.19 million ops/sec
CPU time (sec) / claimed credit / granted credit:
16,486.23 114.09 170.98
18,615.27 128.82 115.87
I see nothing odd about the benchmark speeds on this client... just what are you saying? In fact, the benchmarks seem a bit low for a Pentium D at 3.0 GHz. A floating point benchmark of 1357? Shouldn't it be more like 1500?
And just how is BOINC 5.10.30 a "cheater" client - isn't this a standard BOINC release? A bit outdated, but still a Berkeley-released version.
Measured floating point speed 1357.12 million ops/sec
Measured integer speed 3945.19 million ops/sec
I see nothing odd about the benchmark speeds on this client... just what are you saying? In fact, the benchmarks seem a bit low for a Pentium D at 3.0 GHz. A floating point benchmark of 1357? Shouldn't it be more like 1500?
And just how is BOINC 5.10.30 a "cheater" client - isn't this a standard BOINC release? A bit outdated, but still a Berkeley-released version.
Neither do I - integer speed on x64 is a joke in older versions of that Berkeley benchmark, but we are running 32-bit code right now - so what the heck??
Want to check my D930 running 5.10.45 on XP/32?
http://www.primegrid.com/show_host_detail.php?hostid=74755
We have discussed this enough, and you are aware of what you are doing.
Examples from a non-cheating 5.10.45 client on 64-bit WinXP:
E5420 Xeon, standard @ 2500
CPU time (sec) / claimed credit / granted credit:
12,055.44 68.50 81.49
12,032.78 68.37 82.58
And if you think that a P4 has more power, then dream on.
Nothing else to say, because we all know SUSA is untouchable.
@FrankHagen: 16,036.73 / 36.66 / 47.36 is yours - compare it to the results I posted earlier. ;)
They are claiming 300% more than you with a comparable CPU.
Vato Volunteer tester
Joined: 2 Feb 08 Posts: 838 ID: 18447 Credit: 613,795,775 RAC: 240,944
OK Carnal, you're obviously not reading (or understanding) what we're posting.
That's the end of this 'discussion/whinge' from my point of view.
____________
And from my point of view, I also know what you are doing.
@FrankHagen: 16,036.73 / 36.66 / 47.36 is yours - compare it to the results I posted earlier.
Sorry - YOU are not my favorite waste of time.
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
If this flamewar is going to continue to grow, I'll be putting a stop to it.
____________
I will state unequivocally that Seti.USA has *NEVER* cheated on any challenge. I will be notifying Team Leadership.
I request that the PrimeGrid admins remove those unsubstantiated accusations from the boards until they can back them up (which they can't).
I will confirm this.
We are not "cheating"... And I know that some consider heavily optimized apps/clients a form of cheating (but that is another subject).
In fact, there is not a single user on our team using any form of optimized application... We had asked those of our members with optimization experience to use the source app here to make an optimized app.
The answer we received was "no". So we are on the same playing field as everyone else (with the exception of phase cooling and extreme overclocking, of course. ;) )
____________
The client being used is your known cheater client, 5.10.30.
Your cheaters are being granted 200% more than the newest 45nm CPUs.
Nothing else to say. Bye, I am crunching for another project.
Regarding Rytis' post, this will be my last post on the subject.
My machines are neither overclocked, nor do I have any special app for PrimeGrid. I use version 5.10.41 on the dual core that is doing the challenge.
So kindly base your accusations on TRUE facts, not baseless ones.
Thank you.
So kindly base your accusations on TRUE facts, not baseless ones.
Sorry if I step in again - it's only my five cents:
TGIF - someONE (over here) will have to take his time till Monday and see his shrink then.
frank.
Sorry if I step in again - it's only my five cents:
TGIF - someONE (over here) will have to take his time till Monday and see his shrink then.
frank.
It's all good, Frank... I've had my say :)
It's all good, Frank... I've had my say :)
So let's chill now and enjoy the party to the full!
Happy crunching..
frank.
To stop all these suspicious things, can you please remove all credits (like other projects do) from the owner (cheater) of this host: http://www.primegrid.com/results.php?hostid=71458
It is anonymous, so no team is accused, but this guy is a cheater!!!
To stop all these suspicious things, can you please remove all credits (like other projects do) from the owner (cheater) of this host: http://www.primegrid.com/results.php?hostid=71458
It is anonymous, so no team is accused, but this guy is a cheater!!!
It's not me--I'm not anonymous--but what is your proof??
Just read the integer speed and the claimed credit carefully:
Created 17 Jul 2008 17:44:40 UTC
Total Credit 9,751
Avg. credit 287.35
CPU type GenuineIntel
Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz [x86 Family 6 Model 15 Stepping 13]
Number of CPUs 2
Operating System Microsoft Windows XP
Home Edition, Service Pack 3, (05.01.2600.00)
Memory 2047.11 MB
Cache 488.28 KB
Measured floating point speed 2820.48 million ops/sec
Measured integer speed 149126.92 million ops/sec
Claimed Credit : 1,110.15
Regards
Agreed, that is excessive -
I don't know how the credit system works for this project, but it appears that both WUs in the quorum received the same amount of BOINC credit,
http://www.primegrid.com/workunit.php?wuid=38075824
but this still does not affect the race credits.
It makes no difference if people over-claim. It has nothing to do with what is awarded by the project. It's not cheating if it isn't changing the awarded amount.
____________
Reno, NV
EXTREME CHEATER
http://www.primegrid.com/show_host_detail.php?hostid=71458
http://www.primegrid.com/workunit.php?wuid=38082298
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 677 ID: 845 Credit: 2,859,436,962 RAC: 250,329
It makes no difference if people over-claim. It has nothing to do with what is awarded by the project. It's not cheating if it isn't changing the awarded amount.
That's not correct. There are no fixed credits for the long LLR subprojects. I don't know exactly how credit is calculated, but the two claims definitely play a role and there is a limit. That's the reason why you get more credit if your wingman has an AMD processor (they're much slower at LLR, but their benchmarks are not that much lower).
____________
Just read the integer speed and the claimed credit carefully:
Created 17 Jul 2008 17:44:40 UTC
Total Credit 9,751
Avg. credit 287.35
CPU type GenuineIntel
Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz [x86 Family 6 Model 15 Stepping 13]
Number of CPUs 2
Operating System Microsoft Windows XP
Home Edition, Service Pack 3, (05.01.2600.00)
Memory 2047.11 MB
Cache 488.28 KB
Measured floating point speed 2820.48 million ops/sec
Measured integer speed 149126.92 million ops/sec
Claimed Credit : 1,110.15
Regards
Hi John and Rytis,
I will only post once here...
This is really an unbelievable impertinence!
If there isn't a solution in the near future to stop such braindead dummies with their machines, you will lose a great many crunchers from our side.
Even though it doesn't affect the challenge, it is a slap in the face of every BOINC-oriented cruncher who pays a lot of money to their electricity supplier to help this project.
Please take a closer look at such results and correct them.
I have heard that you have something like an "upper limit" of credits per WU. Is this right? If so, why isn't it working?
A really disappointed and angry
cruncher
____________
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
I have heard that you have something like an "upper limit" of credits per WU. Is this right? If so, why isn't it working?
It actually is working (see my calculation below); the limit was simply too high after we switched from 32-1 to 32+1. I have adjusted it now.
Claimed: 1,211.85 + 81.09 = 1,292.94
Average claim (which would be the granted credit if the limit were not in place): 646.47
Actual grant: 448.65
____________
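[Editor's note] The granting logic Rytis describes above - average the two claims, then cap at a per-WU limit - can be sketched as follows. This is an illustrative reconstruction, not PrimeGrid's actual server code; in particular, treating 448.65 as the limit for this WU is an assumption inferred from his numbers.

```python
def granted_credit(claims, limit):
    """Average the wingmen's claimed credits, then cap at the per-WU limit."""
    average = sum(claims) / len(claims)
    return min(average, limit)

# Figures from the post: claims of 1,211.85 and 81.09 average to 646.47,
# and the (assumed) limit of 448.65 caps the actual grant.
print(granted_credit([1211.85, 81.09], 448.65))
```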
I'm right there with you, [SG]marodeur6, and I'm also very disappointed!
Hi Rytis!
____________
This is really disgusting; the whole gang of 6.1.0 cheaters has gathered again. I would strongly recommend doing more than just reducing future misuse with this limit. They have already profited enough from their cheating.
Please send a signal and block the 6.1.0 app, which most of the cheaters use. The BOINC software is able to exclude specific clients. So show them that you do not want further cheating. No one uses 6.1.0 unaware that it is cheating!
Throw them out! Now!
For those who don't know what 6.1.0 does:
It was developed to compensate the credit loss for crunchers who used crunch3r's SETI application. To do that, it overclaims credits. In the case of SETI this is borderline tolerable (since someone just decides to earn more credits than others, not very fair in itself), but using this overclaiming credit machine at any other project, without using the corresponding application, is only one thing: naked, undisguised cheating.
@blurf, it might be true that no one on your team was cheating here in the challenge, but this is only a question of definition. Obviously the above-mentioned client is used by members of your team, and probably by some others. So for me there are a lot of cheaters within your team. But if you deny that overclaiming is cheating, then you might be right. And since this does not affect the race credits, you can maintain your argument that no one cheated in the race.
But doesn't this misplaced overclaiming spoil the whole credit system, and doesn't it spoil your team's number 1?
____________
Dragons can fly, because they don't fit into pirate ships!
Sid
Joined: 15 Mar 08 Posts: 14 ID: 20216 Credit: 12,332,484 RAC: 0
But doesn't this misplaced overclaiming spoil the whole credit system, and doesn't it spoil your team's number 1?
Kalessin:
Any parity in cross-project credits is a myth.
All BOINCers who have been around more than a month realize that some of the most worthy projects are not very 'easy' with their credits, while others [we all know which ones] are cobblestone Coke machines.
Maybe an honest discussion of the elephant in the living room is in order, and a flame war is not.
____________
Sid
Joined: 15 Mar 08 Posts: 14 ID: 20216 Credit: 12,332,484 RAC: 0
A thread to discuss BOINC crediting and benchmarking.
John:
For the purposes of the PG Challenges, wouldn't just counting cobblestones be more reasonable than the esoteric formula that is currently being used?
____________
pschoefer Volunteer developer Volunteer tester
Joined: 20 Sep 05 Posts: 677 ID: 845 Credit: 2,859,436,962 RAC: 250,329
Any parity in cross-project credits is a myth.
This discussion is not about cross-project parity, but about the way credit is granted in LLR subprojects...
For the purposes of the PG Challenges, wouldn't just counting cobblestones be more reasonable than the esoteric formula that is currently being used?
Counting cobblestones for a challenge with non-fixed credits would be the worst solution. For one WU you get 80 cobblestones; for the next one (with the same runtime) you get 120, because your wingman has a slow AMD processor; for a third one (again with the same, or even a shorter, runtime) you get 400, because your wingman has inflated benchmarks...
____________
Sid
Joined: 15 Mar 08 Posts: 14 ID: 20216 Credit: 12,332,484 RAC: 0
Counting cobblestones for a challenge with non-fixed credits would be the worst solution. For one WU you get 80 cobblestones; for the next one (with the same runtime) you get 120, because your wingman has a slow AMD processor; for a third one (again with the same, or even a shorter, runtime) you get 400, because your wingman has inflated benchmarks...
So let's ignore one problem by substituting another?
____________
Sid
Joined: 15 Mar 08 Posts: 14 ID: 20216 Credit: 12,332,484 RAC: 0
Any parity in cross-project credits is a myth.
This discussion is not about cross-project parity, but about the way credit is granted in LLR subprojects...
. . . and what is the title of this thread?
____________
It makes no difference if people over-claim. It has nothing to do with what is awarded by the project. It's not cheating if it isn't changing the awarded amount.
That's not correct. There are no fixed credits for the long LLR subprojects. I don't know exactly how credit is calculated, but the two claims definitely play a role and there is a limit. That's the reason why you get more credit if your wingman has an AMD processor (they're much slower at LLR, but their benchmarks are not that much lower).
When I posted this, it was still back in the Dog Days of Summer Challenge thread (before it got moved to this thread). And that is what I was talking about. I will say it again: claimed amounts have NOTHING to do with the challenge scores. Let me quote the project again:
There is no correlation between Challenge Points and BOINC credit (cobblestones)! There is ABSOLUTELY NO WAY to cheat Challenge Points.
____________
Reno, NV
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2349 ID: 1178 Credit: 17,534,823,944 RAC: 4,286,933
Counting cobblestones for a challenge with non-fixed credits would be the worst solution. For one WU you get 80 cobblestones; for the next one (with the same runtime) you get 120, because your wingman has a slow AMD processor; for a third one (again with the same, or even a shorter, runtime) you get 400, because your wingman has inflated benchmarks...
So let's ignore one problem by substituting another?
Cobblestones are used for some challenges (e.g., PSP sieve, TPS) since those are fixed-credit projects. In the other LLR projects, credits are determined through the benchmarking system, with its known biases (as noted above), making them a poor choice for scoring. The current challenge scoring is not problematic at all, since its (perhaps complicated) formula is essentially a fixed system as well.
____________
141941*2^4299438-1 is prime!
I will say it again: claimed amounts have NOTHING to do with the challenge scores. Let me quote the project again:
There is no correlation between Challenge Points and BOINC credit (cobblestones)! There is ABSOLUTELY NO WAY to cheat Challenge Points.
Yes, you're right about that, and I am very glad of it.
But it is so absolutely enervating to keep finding the same overclaiming cheaters everywhere, again and again. They do it in every project that has no fixed credits.
So with this frustration the discussion wandered from the "race" to "BOINC". And seeing blurf announce the complete innocence of your team was quite heavy stuff, since your team surely has the highest ratio of 6.1.0 users of all the major teams.
I fully understand that you only meant the race side of the affair.
But what would interest me very much is whether the absolutely fair and honest majority of your team simply doesn't care, is unaware of it, doesn't like it, or just admires the gained credits.
Meanwhile, those cheating folks are also spoiling the majority's credits and standing.
Does anyone ever tell them to stop?
____________
Dragons can fly, because they don't fit into pirate ships!
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
Joined: 17 Oct 05 Posts: 2349 ID: 1178 Credit: 17,534,823,944 RAC: 4,286,933
There is a relatively easy fix for inflated benchmarks that could be implemented by any project (though it would require some extra programming). Benchmarks are naturally inexact because:
1. There is variability between machines of different speeds.
and (very importantly)
2. There is variability within a machine, since benchmarks vary across time due to variability in processor load at the time of the benchmark.
Thus, this is a stochastic process that should be considered with a statistical frame of mind. If we assume that this variability is spread out through a fair (read: random, stochastic) process, then the distribution of the benchmark scores should be roughly normal. This would ideally be applied at the processor (and perhaps OS) level, but more generally could be applied across all types of machines using a single benchmark distribution (or at least one each for the integer and floating point benchmarks).
Thus, any excessive benchmarks (say, outside the 99.9% confidence bound - high or low) could be identified. The LLR scoring algorithm here could therefore be modified as follows:
1. Benchmarks from both machines within the 99.9% bound: use the current scoring process.
2. One machine within the 99.9% bound, one machine outside it: assign the score claimed by the client within the bound.
3. Both machines outside the 99.9% bound: assign to an additional machine until a result with benchmarks within bounds is returned, and assign that score to all machines.
While this does not solve all the issues noted above, it does tend to average over them and does eliminate extreme values.
____________
141941*2^4299438-1 is prime!
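[Editor's note] A rough sketch of the three rules proposed above, assuming a normal approximation for the benchmark distribution (z of about 3.29 gives a two-sided 99.9% bound). `score_fn` and `rebenchmark_fn` are hypothetical stand-ins for the project's existing credit formula and the "assign to an additional machine" step; nothing here reflects actual PrimeGrid server code.

```python
import statistics

def bounds_999(samples, z=3.29):
    """Two-sided 99.9% bound under a normal approximation of the population."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return mu - z * sigma, mu + z * sigma

def llr_score(bench_a, bench_b, population, score_fn, rebenchmark_fn):
    """Scott's rules: both benchmarks in bounds -> current scoring;
    one in bounds -> use only the trusted claim; neither -> fetch a
    benchmark from an additional machine and score everyone with it."""
    lo, hi = bounds_999(population)
    a_ok = lo <= bench_a <= hi
    b_ok = lo <= bench_b <= hi
    if a_ok and b_ok:
        return score_fn(bench_a, bench_b)
    if a_ok or b_ok:
        trusted = bench_a if a_ok else bench_b
        return score_fn(trusted, trusted)
    trusted = rebenchmark_fn()  # re-issue until an in-bounds benchmark returns
    return score_fn(trusted, trusted)
```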
There is a relatively easy fix for inflated benchmarks that could be implemented by any project (though it would require some extra programming). Benchmarks are naturally inexact because:
1. There is variability between machines of different speeds.
and (very importantly)
2. There is variability within a machine, since benchmarks vary across time due to variability in processor load at the time of the benchmark.
Thus, this is a stochastic process that should be considered with a statistical frame of mind. If we assume that this variability is spread out through a fair (read: random, stochastic) process, then the distribution of the benchmark scores should be roughly normal. This would ideally be applied at the processor (and perhaps OS) level, but more generally could be applied across all types of machines using a single benchmark distribution (or at least one each for the integer and floating point benchmarks).
Thus, any excessive benchmarks (say, outside the 99.9% confidence bound - high or low) could be identified. The LLR scoring algorithm here could therefore be modified as follows:
1. Benchmarks from both machines within the 99.9% bound: use the current scoring process.
2. One machine within the 99.9% bound, one machine outside it: assign the score claimed by the client within the bound.
3. Both machines outside the 99.9% bound: assign to an additional machine until a result with benchmarks within bounds is returned, and assign that score to all machines.
While this does not solve all the issues noted above, it does tend to average over them and does eliminate extreme values.
This would work even better than excluding the specific client - which could then be used to temporarily stop the cheating until someone has implemented your suggestion.
____________
Dragons can fly, because they don't fit into pirate ships!
1. Benchmarks from both machines within the 99.9% bound: use the current scoring process.
2. One machine within the 99.9% bound, one machine outside it: assign the score claimed by the client within the bound.
3. Both machines outside the 99.9% bound: assign to an additional machine until a result with benchmarks within bounds is returned, and assign that score to all machines.
While this does not solve all the issues noted above, it does tend to average over them and does eliminate extreme values.
99.9%???
Do you know what you are talking about?
The benchmark in BOINC clients is such a lousy thing, and you expect 99.9% accuracy?
This would end up in endless resends.
If you had come up with 95%, it would have been worth a thought, but this one earns you the "joke of the thread" award - so far.. :(
frank.
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
If you had come up with 95%, it would have been worth a thought, but this one earns you the "joke of the thread" award - so far.. :(
I'd say 95% is still too much. But yes, it would probably be a pretty good system; I'll gather some statistics to see if it is doable.
[edit] I checked to see how many different CPU types are crunching for PrimeGrid. My idea was that if there is more than a single host of a specific CPU type, I could take the median of the benchmark values and use that for credit. Well, I found out there are 6381 (!) different CPU model strings, and over 2500 are listed only once, so there is no way to exclude invalid benchmarks for those... BOINC doesn't really do a good job of identifying CPUs :)
[edit2] I think we might team up with BOINCstats to gather the data.
____________
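[Editor's note] Rytis's median idea could look something like the sketch below. The data layout is an assumption for illustration; as he notes, the 2500+ model strings seen only once simply get no median and would need some fallback rule.

```python
import statistics
from collections import defaultdict

def median_benchmarks(hosts, min_hosts=2):
    """hosts: iterable of (cpu_model, fpops, iops) as reported by clients.
    Returns {cpu_model: (median_fpops, median_iops)} for every model with
    at least min_hosts samples; singleton models are skipped, since their
    median would just be the (possibly inflated) host itself."""
    by_model = defaultdict(list)
    for model, fpops, iops in hosts:
        by_model[model].append((fpops, iops))
    medians = {}
    for model, benches in by_model.items():
        if len(benches) >= min_hosts:
            medians[model] = (statistics.median(f for f, _ in benches),
                              statistics.median(i for _, i in benches))
    return medians
```

A cheater's inflated integer benchmark then only shifts the model's median slightly, instead of setting their own claim.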
I'd say 95% is still too much. But yes, it would probably be a pretty good system; I'll gather some statistics to see if it is doable.
So you would be willing to sacrifice that much performance, eaten up by resends, on this kindergarten thing?
I've got another idea: identify the cheating clients, prove it, and put a list at the top of the project page. Everyone will know, and the project won't lose precious hours of processing time.
frank.
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
So you would be willing to sacrifice that much performance, eaten up by resends, on this kindergarten thing?
I updated my post; my idea is not to resend anything, but to use adjusted benchmark values instead.
____________
Just use the challenge formula to grant LLR credit - that is, multiply it by a constant factor to match the current credits per test.
I updated my post; my idea is not to resend anything, but to use adjusted benchmark values instead.
Let's see if I've got it: you want to use not a single machine's benchmark values, but the median for its type. Nice concept - at least for the majority of hosts.
But then - second thought - if I wanted to cheat, I could simply modify the identifier and everything fouls up again.
Let's get serious - are we talking about 1, 2, 5, or 10% cheaters?
And probably even fewer people who can't live with that..
frank.
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
But then - second thought - if I wanted to cheat, I could simply modify the identifier and everything fouls up again.
Let's get serious - are we talking about 1, 2, 5, or 10% cheaters?
And probably even fewer people who can't live with that..
Well, no open system is 100% safe from cheating, but this would nullify the currently used methods. Yes, you could change what your client reports to the server (the CPU model), but that would mean compiling a custom BOINC client version, which is one step too far for most of the cheaters. This, combined with credit averaging over two results and an upper limit, will probably be enough to silence all complaints about cheating :)
____________
|
You do not need any modified BOINC client to inflate benchmarks. Anyone can do that with any stock client downloaded directly from Berkeley. Trying to ban over-claiming by banning certain versions of clients is a fool's game.
____________
Reno, NV
Rytis Volunteer moderator Project administrator
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
You do not need any modified BOINC client to inflate benchmarks. Anyone can do that with any stock client downloaded directly from Berkeley. Trying to ban over-claiming by banning certain versions of clients is a fool's game.
Did you actually read the proposal? I'm not going to ban anyone; I'm simply not going to look at the benchmark values provided by the client when assigning credit. Instead, I'm going to look at a BOINC-wide median benchmark value for that specific host type.
____________
|
You do not need any modified BOINC client to inflate benchmarks. Anyone can do that with any stock client downloaded directly from Berkeley. Trying to ban over-claiming by banning certain versions of clients is a fool's game.
Did you actually read the proposal? I'm not going to ban anyone; I'm simply not going to look at the benchmark values provided by the client when assigning the credit, I'm going to look at a BOINC-wide median benchmark value for that specific host type.
Sorry, Rytis. My post was not in response to your proposal. I was simply responding to several comments by other posters about banning some versions of BOINC clients. It was a general information post, in case folks were unaware that one can change benchmarks without using a modified BOINC client.
____________
Reno, NV
|
You do not need any modified BOINC client to inflate benchmarks. Anyone can do that with any stock client downloaded directly from Berkeley. Trying to ban over-claiming by banning certain versions of clients is a fool's game.
But this is quite an absurd kind of logic.
Certainly you can't get rid of every cheater by banning cheater versions. Many people (including myself) know how to inflate the benchmark result. It's just a question of intact morals not to do so.
Perhaps some cheat3r would immediately write or rename a new one if those version numbers were blocked.
But the message would be clear: you do not tolerate cheating!
And you won't lose a single fair cruncher by blocking the two major cheating clients. Only cheaters would be affected.
It is very simple to do. I really can't find any reason not to block them.
At least until a better credit system is in place.
And I'm astonished that you are against blocking the cheating clients!
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|
|
I think it's wasted effort banning by version number. It will block some valid clients, and still allow messing with benchmarks. I think effort would be better used coming up with a real solution. Simple as that.
____________
Reno, NV
|
|
|
|
I think it's wasted effort banning by version number. It will block some valid clients, and still allow messing with benchmarks. I think effort would be better used coming up with a real solution. Simple as that.
OK, that might be a reason. I just thought that spending ten minutes to cut the number of cheaters by perhaps 50% (an assumption), while also showing that cheaters are not welcome, would be a good time/result ratio.
But I don't really know whether ten minutes is an accurate estimate; I just guessed that from how the procedure was described.
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|
|
I'm simply not going to look at the benchmark values provided by the client when assigning the credit, I'm going to look at a BOINC-wide median benchmark value for that specific host type.
You do know that this would hit every host that is running overclocked?
And there are lots of hosts out there running slightly ;) above the stock rate reported by their client.
millions of tropical fish could be in danger..
frank. |
|
|
|
Actually, a year or two ago, I received non-personal data for all hosts attached to a project of this size, to analyze whether a "median benchmark" could be established for all known hosts. Establishing a "median benchmark" and a filter to flag hosts that deviated by more than 50% (to allow for serious overclockers) would be effective, as most "cheating" BOINC clients increased benchmarks significantly more than that. The trouble comes in when you realize just how many variations of hosts and OSes are really out there. Some had only ONE listing, meaning the median would be the host itself; if that one cheated, no one would know. Also, there's nothing to stop someone from changing the data that BOINC extracts. Someone could label his Celeron 500 as a much higher-scoring machine. Heck, they could probably get it to report their computer as "the good ship lollypop".
I scrapped the idea when I realized just how much of this would require LOTS of man-hours for EACH project. At that point it seemed less likely to be adopted. If Rytis can do it, I say let him.
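Just to illustrate the filter being described, here's a sketch in Python. The data layout, field names, and the 50% threshold are assumptions for illustration, not the project's real schema:

```python
from statistics import median

def flag_outliers(hosts, threshold=1.5):
    """Flag hosts whose benchmark exceeds their group's median by more
    than 50% (threshold=1.5), per the filter described above. `hosts`
    maps a (model, os) key to (host_id, benchmark) pairs -- an assumed
    layout, not the project's real schema."""
    flagged = []
    for key, entries in hosts.items():
        if len(entries) < 2:
            continue  # a lone host IS its own median; nothing to compare
        med = median(bench for _, bench in entries)
        for host_id, bench in entries:
            if bench > threshold * med:
                flagged.append((key, host_id, bench, med))
    return flagged

# Toy data: one inflated benchmark among three normal P4s gets flagged,
# while the singleton problem above simply skips one-of-a-kind groups.
hosts = {("P4 3.0GHz", "Windows"): [(1, 1500), (2, 1600), (3, 1550), (4, 5200)]}
print(flag_outliers(hosts))  # flags host 4 against the group median
```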
I could/might reboot my machines to windows after the challenge and look it up.
[edit] Thinking back, I did this before BOINC started using MSVC 2005. When BOINC switched from MSVC 2003 to 2005, benchmarks went up markedly for everyone, so my data reflects hosts using MSVC 2003.
[2nd edit] Banning one or two versions of those "cheating" clients would only be effective against those numbered clients. There are now nearly a dozen freely available ones, and nothing stops anyone with a compiler from coming up with his/her own version numbers, or even naming a build the same as an official version and slipping past this kind of snare. |
|
|
|
Banning one or two versions of those "cheating" clients would only be effective against those numbered clients. There are now nearly a dozen freely available ones, and nothing stops anyone with a compiler from coming up with his/her own version numbers, or even naming a build the same as an official version and slipping past this kind of snare.
In my experience, most client cheaters at the moment use the 6.1.0 client (only some still use 5.5.0 or 5.9.0; I don't know about the dozen or so other versions, but they may well be floating around the BOINC world). And yes, I do know that it is easy for those who did it before to compile a new cheater version with a standard version number.
And I know how easy it is to fake the values of the benchmark.
And still:
Staying quiet about the cheater clients might lead the cheaters to think it's OK to do so. Quite a few of them still try to convince others that they are not cheating, but merely raising their credits to their deserved level. Maybe one or two of them would realize that no one else sees it that way!
I think that projects as well as teams should clearly state that they strongly discourage any cheating.
Having the admins ban the most common cheater clients, and posting that cheating clients are not welcome at a project, would perhaps scare off a (small) part of the cheaters. (Wouldn't that be a success?) Or leave one or two cheaters without cheating tools for a while. (Wouldn't that also be a success?)
And as far as I understood the disabling of clients, it's about 20 lines of code, most of which could be copy/pasted.
jordans how to
There are reasons why this might not be the most effective tool against cheaters (yes), but there is no reason not to do it.
It is something that could be done in about ten minutes, while it would take the cheaters far more time to counteract. It would make a statement against cheating, and perhaps it would reduce cheating (a little bit, for a while).
I really would hope that all projects without fixed credits would use the disabling function - besides all the other effective means we will find.
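For what it's worth, the spirit of that ~20-line patch is trivial to sketch. This is a Python illustration only, NOT actual BOINC server code, and the version set is just the ones named in this thread:

```python
# Hypothetical version-ban check, in the spirit of the ~20-line scheduler
# patch mentioned above -- a sketch for discussion, NOT BOINC server code.
BANNED_VERSIONS = {"5.5.0", "5.9.0", "6.1.0"}

def accept_request(client_version):
    """Refuse work requests coming from known cheater client versions."""
    return client_version not in BANNED_VERSIONS

print(accept_request("6.2.14"))  # True
print(accept_request("5.5.0"))   # False
```

As noted above, anyone with a compiler can report a fake version string, so this is a statement of policy more than a technical barrier.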
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|
|
I've looked up the old data. It was extracted on Dec 5th, 2006. It contains 4955 active hosts (having contacted the server in the last 7 days) from that project (that number started out much higher until I pared out as many duplicates as I could fairly judge to be duplicates). It contains 659 distinctly different "model" and "OS" (Win or Linux) combinations. 265 of them are unique entries, meaning they're ONE of a kind and thus a median, mean, or average is futile. An additional 114 have only one other like them, and I suspect some of these are really unique but show up this way due to the detach/reattach process.
The establishment of a "mean" would be effective at finding the high claimers among the many models in frequent use, though. Is that of any use? Wouldn't there still be a few who "modified" something or other to "game" the system? Is it worth the effort of the project admins/scientists/engineers?
Here's a sample from the P4 3.40 GHz using Windows.
The average claimed credit/hour is 7.49, the median 6.83; there were 58 hosts, the high claim/hour was 12.17, and the lowest claim was 2.65.
|
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2349 ID: 1178 Credit: 17,534,823,944 RAC: 4,286,933
                                           
|
99.9%???
Do you know what you are talking about?
I assume that I do, but I suppose that you could ask the dozen or so PhD students from my statistics class last spring to verify that.
That benchmark in BOINC clients is such a lousy thing, and you expect 99.9% accuracy?
This would end up in endless resends.
If you had come up with 95%, it would have been worth a thought, but this one earns you the "joke of the thread" award - until now.. :(
frank.
99.9% corresponds to roughly +/- 3.3 standard deviations of a normal distribution, 99% to about 2.6 s.d., and 95% to about 2 s.d. Since legitimate overclocks might venture 2 or 3 standard deviations above the mean benchmark, I proposed the more conservative (and actually broader) confidence bound, but either 99% or 95% could be justified. However, the 95% bound would result in far more resends than the 99.9% bound (i.e., with 99.9% only 0.1% of hosts would be flagged for resends, rather than 5%). Your logic here is backwards.
So, before you call someone's idea a "joke" or "kindergarten", perhaps you would do us all a favor and pick up an introductory statistics book first.
____________
141941*2^4299438-1 is prime!
|
|
|
Scott Brown Volunteer moderator Project administrator Volunteer tester Project scientist
 Send message
Joined: 17 Oct 05 Posts: 2349 ID: 1178 Credit: 17,534,823,944 RAC: 4,286,933
                                           
|
I'd say 95% is still too much. But yes, it would probably be a pretty good system; I'll gather some statistics to see if it is doable.
[edit] I checked how many different CPU types are crunching for PrimeGrid. My idea was that if there is more than a single host of a specific CPU type, I could take the median of their benchmark values and use it for credit. Well, I found out there are 6381 (!) different CPU model strings, of which over 2500 are listed only once, so there is no way to exclude invalid benchmarks for them... BOINC doesn't really do a good job identifying CPUs :)
[edit2] I think we might team up with BOINCstats to gather the data.
Pulling from the larger BOINCstats database is definitely a good idea, as is using the more stable median value to guard against extreme values. However, you might be able to group similar CPU/OS combinations together and use the broader 99% or 99.9% intervals effectively (i.e., a 2.4GHz P4 should have very similar benchmarks to a 2.53GHz or 2.6GHz one, especially if OS, bus speed, etc. are also similar).
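One way to tame those 6381 model strings would be to normalize them into coarser bins before taking medians. The parsing below is illustrative only, not a tested parser for real BOINC model strings:

```python
import re

def normalize_cpu(model):
    """Collapse a raw CPU model string into a coarse (family, speed) bin,
    rounding the clock speed to the nearest 0.5 GHz so that e.g.
    2.4/2.53/2.6 GHz P4s share one bin. The family list and regex are
    illustrative assumptions, not a tested BOINC model-string parser."""
    model = model.lower().replace("(r)", "").replace("(tm)", "")
    family = "unknown"
    for name in ("xeon", "pentium 4", "athlon", "opteron", "core"):
        if name in model:
            family = name
            break
    m = re.search(r"(\d+\.\d+)\s*ghz", model)
    ghz = round(float(m.group(1)) * 2) / 2 if m else None
    return (family, ghz)

print(normalize_cpu("Intel(R) Pentium(R) 4 CPU 2.53GHz"))  # ('pentium 4', 2.5)
print(normalize_cpu("Intel(R) Pentium(R) 4 CPU 2.60GHz"))  # ('pentium 4', 2.5)
```

With a binning like this, many of the one-of-a-kind entries would fold into groups large enough to yield a usable median.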
____________
141941*2^4299438-1 is prime!
|
|
|
|
Here's the P4 3.0 GHz: there are 356 samples; the average claim/hour is 6.26, the median 6.00, the high claim/hour 10.54, and the lowest 1.58.
|
|
|
|
EEEK, it's been so LONG since I've done this. I forgot that the last two charts had all 5.5.0 hosts removed for some long-forgotten reason. Back to the databases to show them with 5.5.0.
Another eek. To show those charts, I'd have to rework everything from the original data, meaning many hours that I'm not willing to put in. So I just looked up the high/low claims for the 5.5.0 hosts with a P4 3.0 GHz and Windows: the low was 13.42 credits/hour, the high 17.03. If plotted in the chart below, you can see how glaringly high they would be, and that a filter set to 50% above the median would have filtered them out - leaving the overclockers to do their best, undeterred. |
|
|
|
I've just spent all the time from my last post to now getting this chart ready for just the P4 3.0GHz computers using Windows. The total is now 375 hosts.
The X axis shows BOINC versions, but is sorted by claimed credit/hour, ascending (like the earlier charts).
As you can see, there is a distinct difference between the claims of normal hosts, a slight jump for what I assume is overclocking, and then the "third party" BOINC clients.
Also, there were only 19 5.5.0 clients out of 375 hosts, so not a high percentage of users were using it.
Similar results to this probably exist with newer data.
Hope this helps Rytis. |
|
|
|
I've just spent all the time from my last post to now getting this chart ready for just the P4 3.0GHz computers using Windows.
Great work Astro.
And another extra shudder, seeing again how unfair these cheaters are.
This is an increase in the claiming of about 100%;
voracious gang of...
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|
|
I've just spent all the time from my last post to now getting this chart ready for just the P4 3.0GHz computers using Windows. [...]
Of course, that assumes that all those machines reported truthfully what kind of CPU they had. The CPU type can also be faked.
____________
Reno, NV
|
|
|
|
Just use the challenge formula to grant LLR credit, i.e., multiply it by a constant factor to match current credits per test.
Sounds sensible and easy. The formula exists already, so by scaling it down appropriately, the credits would be independent of benchmarks, running time and whatnot, depending only on the size of the task. |
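In other words, something of this shape. The scale and exponent below are made-up placeholders, since the actual challenge formula isn't shown in this thread:

```python
def task_credit(n, scale=1.0e-10, exponent=2.1):
    """Credit as a function of the candidate size n alone -- independent
    of benchmarks and run time. `scale` and `exponent` are hypothetical
    placeholder values that would be tuned so the results match current
    credits per test; the real challenge formula isn't shown here."""
    return scale * n ** exponent

# Larger candidates earn more credit, with no benchmark in sight:
print(task_credit(1_000_000) < task_credit(2_000_000))  # True
```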
|
|
Vato Volunteer tester
 Send message
Joined: 2 Feb 08 Posts: 838 ID: 18447 Credit: 613,795,775 RAC: 240,944
                          
|
Not quite - you need to factor in FFT size, as this has quite an impact when you cross a boundary, and Intel/AMD CPUs don't cross it at the same value of n. If you can include this in the above, I think you'd have a fair system. The current challenge nicely avoided the issue because the range was completed prior to a change in FFT size, shortly before the challenge began.
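Factoring in FFT size could be a simple step lookup on top of a size-based formula. The boundaries and multipliers here are invented for illustration; real values would have to come from timing LLR on both vendors' hardware:

```python
# Hypothetical FFT-size steps, keyed by candidate size n, with relative
# cost multipliers. These numbers are invented; real ones would come
# from timing LLR across the FFT boundaries on Intel and AMD CPUs.
FFT_STEPS = [(0, 1.0), (500_000, 1.35), (1_000_000, 1.8)]

def fft_factor(n):
    """Return the cost multiplier for the FFT length in effect at size n."""
    factor = FFT_STEPS[0][1]
    for boundary, mult in FFT_STEPS:
        if n >= boundary:
            factor = mult
    return factor

def fair_credit(n, base_scale=1.0e-10):
    # Size-based credit, scaled by the FFT-length cost step so that
    # tasks just past a boundary are paid for their extra run time.
    return base_scale * n ** 2 * fft_factor(n)

print(fft_factor(600_000))  # 1.35
```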
____________
|
|
|
|
And how are things going? Have you found a favorite strategy for the future?
By the way, does anyone have statistics showing what percentage of work is delivered by the 6.1.0, 5.5.0, and 5.9.0 clients?
And very off-topic @ Rytis: is chess still alive? Weren't you the one who was sometimes kicking the server?
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|
RytisVolunteer moderator Project administrator
 Send message
Joined: 22 Jun 05 Posts: 2653 ID: 1 Credit: 89,124,285 RAC: 28,857
                     
|
And how are things going? Have you found a favorite strategy for the future?
We are going to try median BOINC-wide benchmark values for each specific CPU type, but it's up to Willy from BOINCstats to make a table (it will probably happen this weekend).
By the way, does anyone have statistics showing what percentage of work is delivered by the 6.1.0, 5.5.0, and 5.9.0 clients?
Gathering those statistics is problematic, because the client version isn't stored as a separate field in the database. So, to answer your question: I, at least, don't have the data.
And very off-topic @ Rytis: is chess still alive? Weren't you the one who was sometimes kicking the server?
I've heard that it's going to be resurrected next month, but don't quote me on this. I was just a technical helper there, it's up to Joerg to push some new games in.
____________
|
|
|
|
We are going to try median BOINC-wide benchmark values for each specific CPU type
Ah, another artificial sticking plaster stuck over an artificial wound.
but it's up to Willy from BOINCstats to make a table (it will probably happen this weekend).
Crikey! Putting PG in the firing line is bad enough, but Willy becoming a Krazy Kredit Kop too? Madness.
Al.
|
|
|
|
And very off-topic @ Rytis: is chess still alive? Weren't you the one who was sometimes kicking the server?
Quick off-topic note: I'm working with Joerg on Chess960. Hoping it'll be back up in September. |
|
|
|
And very off-topic @ Rytis: is chess still alive? Weren't you the one who was sometimes kicking the server?
Quick off-topic note: I'm working with Joerg on Chess960. Hoping it'll be back up in September.
That's great news. Good luck to you and Jörg!
____________
Dragons can fly, because they don't fit into pirate ships! |
|
|