02 May, 2013, Vigud wrote in the 42nd comment:

The Linux distro I use (Fedora) provides a "dieharder" program, which I've been using on files generated with various PRNGs. The problem with that is that it rewinds the file instead of working only on as much data as it gets (in other words, I'm unable to control the sample size). So I thought of using another program, called "ent", and this is what it says (output edited for readability):

**Quote**

$ ./prng-test -g MT19937 -b -n 1000 | ent

Entropy = 7.959602 bits per byte.

Optimum compression would reduce the size of this 4000 byte file by 0 percent.

Chi square distribution for 4000 samples is 225.92, and randomly would exceed this value 90.50 percent of the times.

Arithmetic mean value of data bytes is 127.4590 (127.5 = random).

Monte Carlo value for Pi is 3.141141141 (error 0.01 percent).

Serial correlation coefficient is -0.002616 (totally uncorrelated = 0.0).

$ ./prng-test -g MT19937 -b -n 100 | ent

Entropy = 7.420279 bits per byte.

Optimum compression would reduce the size of this 400 byte file by 7 percent.

Chi square distribution for 400 samples is 283.52, and randomly would exceed this value 10.60 percent of the times.

Arithmetic mean value of data bytes is 128.4550 (127.5 = random).

Monte Carlo value for Pi is 3.151515152 (error 0.32 percent).

Serial correlation coefficient is 0.008418 (totally uncorrelated = 0.0).

$ ./prng-test -g MT19937 -b -n 10 | ent

Entropy = 5.053056 bits per byte.

Optimum compression would reduce the size of this 40 byte file by 36 percent.

Chi square distribution for 40 samples is 292.80, and randomly would exceed this value 5.19 percent of the times.

Arithmetic mean value of data bytes is 132.2500 (127.5 = random).

Monte Carlo value for Pi is 3.333333333 (error 6.10 percent).

Serial correlation coefficient is 0.011162 (totally uncorrelated = 0.0).

$ ./prng-test -g MM -b -n 1000 | ent

Entropy = 7.955437 bits per byte.

Optimum compression would reduce the size of this 4000 byte file by 0 percent.

Chi square distribution for 4000 samples is 243.33, and randomly would exceed this value 68.98 percent of the times.

Arithmetic mean value of data bytes is 128.7322 (127.5 = random).

Monte Carlo value for Pi is 3.039039039 (error 3.26 percent).

Serial correlation coefficient is -0.021284 (totally uncorrelated = 0.0).

$ ./prng-test -g MM -b -n 100 | ent

Entropy = 7.522828 bits per byte.

Optimum compression would reduce the size of this 400 byte file by 5 percent.

Chi square distribution for 400 samples is 231.04, and randomly would exceed this value 85.69 percent of the times.

Arithmetic mean value of data bytes is 126.6250 (127.5 = random).

Monte Carlo value for Pi is 3.212121212 (error 2.24 percent).

Serial correlation coefficient is -0.050636 (totally uncorrelated = 0.0).

$ ./prng-test -g MM -b -n 10 | ent

Entropy = 5.171928 bits per byte.

Optimum compression would reduce the size of this 40 byte file by 35 percent.

Chi square distribution for 40 samples is 254.40, and randomly would exceed this value 49.88 percent of the times.

Arithmetic mean value of data bytes is 128.4250 (127.5 = random).

Monte Carlo value for Pi is 3.333333333 (error 6.10 percent).

Serial correlation coefficient is -0.073353 (totally uncorrelated = 0.0).
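
For anyone wanting to sanity-check these numbers, the chi-square figure ent prints can be recomputed with a few lines of C. This is just a sketch of the statistic (the function name `chi_square_bytes` is mine): for n uniformly random bytes, each of the 256 values is expected to appear n/256 times, and the statistic sums the squared deviations from that.

```c
#include <stdio.h>
#include <stdlib.h>

/* Chi-square statistic over byte frequencies, as ent reports it:
 * for n uniformly distributed bytes, each of the 256 possible
 * values is expected to occur n/256 times. */
double chi_square_bytes(const unsigned char *buf, size_t n)
{
    unsigned long count[256] = {0};
    double expected = (double)n / 256.0;
    double chi2 = 0.0;
    size_t i;
    int v;

    for (i = 0; i < n; i++)
        count[buf[i]]++;
    for (v = 0; v < 256; v++) {
        double d = (double)count[v] - expected;
        chi2 += d * d / expected;
    }
    return chi2;
}
```

A perfectly flat file scores 0; ent then reports how often a truly random file would exceed the observed value.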


02 May, 2013, Vigud wrote in the 43rd comment:

I'd still like to see what happens if you silently switch PRNG algorithms on your MUD. I'd be really curious to see if ANY player notices.

02 May, 2013, plamzi wrote in the 44th comment:

I'd still like to see what happens if you silently switch PRNG algorithms on your MUD. I'd be really curious to see if ANY player notices.

But what if 'The One' is playing your game?

03 May, 2013, Runter wrote in the 45th comment:

I'd still like to see what happens if you silently switch PRNG algorithms on your MUD. I'd be really curious to see if ANY player notices.

I had to change to the MT algorithm years ago when whatever the system calls were generating wasn't equally distributed enough. Why? Because players found out they could make tons of money playing blackjack in game because, for whatever reason, players tended to win more than lose based on how the cards were being shuffled. I made some test scripts with playing rules and sure enough, players were getting a 5-15% return on investment for just about any bet.

Players never noticed it. They just thought they were geniuses at blackjack. If you're okay with carefully designing mechanics only to have your PRNG assign them arbitrary chances, then I guess it won't matter if your players notice or not.

03 May, 2013, Lyanic wrote in the 46th comment:

I had to change to the MT algorithm years ago when whatever the system calls were generating wasn't equally distributed enough. Why? Because players found out they could make tons of money playing blackjack in game because, for whatever reason, players tended to win more than lose based on how the cards were being shuffled. I made some test scripts with playing rules and sure enough, players were getting a 5-15% return on investment for just about any bet.

Players never noticed it. They just thought they were geniuses at blackjack. If you're okay with carefully designing mechanics only to have your PRNG assign them arbitrary chances, then I guess it won't matter if your players notice or not.

I had something similar happen many years ago. I noticed I failed a skill check that was supposed to have a 99% probability of succeeding… 8 times in a row (let's see… 0.01^8 = 10^-16… yeah, that's a tiny number)… two separate times. When I used to play tabletop games, my friends would joke that I had "Aura of Failed Dice Roll", but COME ON! That's when I wrote some test scripts to check the PRNG, and found that its distribution was quite poor. So, I took a scalpel to it.

As for whether players notice? No, not that I recall.

03 May, 2013, quixadhal wrote in the 47th comment:

Players tend to complain about streaks of failures… yet they're strangely silent about streaks of successes. :)

The funny thing is, REAL random events are not distributed along a nice even curve. That's kind of the point of it being random. Each event is totally independent, and thus it's just as likely to get 50 crits in a row as none. See Rosencrantz and Guildenstern for a fun example.
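
For what it's worth, streaks are easy to quantify: in n fair flips the longest run of identical outcomes grows roughly like log2(n), so in 100,000 flips a streak of 16 or so is entirely expected. A quick sketch (function name mine; the high bits of rand() stand in for a fair coin):

```c
#include <stdio.h>
#include <stdlib.h>

/* Longest run of identical outcomes in n simulated fair coin
 * flips; for n = 100000 expect a longest streak of roughly 16-17.
 * The comparison against RAND_MAX/2 deliberately avoids the
 * low-order bits, which are weak in some rand() implementations. */
int longest_streak(unsigned int seed, int n)
{
    int i, bit, prev = -1, run = 0, best = 0;

    srand(seed);
    for (i = 0; i < n; i++) {
        bit = (rand() > RAND_MAX / 2);
        run = (bit == prev) ? run + 1 : 1;
        if (run > best)
            best = run;
        prev = bit;
    }
    return best;
}
```

Long runs are the normal behavior of an unbiased source, not evidence against it.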

03 May, 2013, arholly wrote in the 48th comment:

The code I came up with looks like this:

    #define L 37U
    #define K 100U

    static unsigned long int sequence[K];
    static unsigned int b = L, a = K;

    /* Fill the state array from a linear congruential generator,
     * cycling through it twice to warm it up. */
    void init_mm(unsigned long int seed)
    {
        unsigned int i;

        for (i = 0; i < K * 2; i++)
            sequence[i % K] = seed = (1664525 * seed + 1013904223);
    }

    /* Mitchell-Moore additive generator: advance both lagged
     * indices and return the updated slot. */
    unsigned long int number_mm(void)
    {
        a++;
        b++;
        return sequence[a % K] += sequence[b % K];
    }

Now see, why doesn't someone put this in the repository as a snippet for some coder down the road? Sure, not everyone may agree on its effectiveness, but rather than having code buried in a thread, it would make a good addition to the repository…

03 May, 2013, Rarva.Riendf wrote in the 49th comment:

Now see, why doesn't someone put this in the repository as a snippet for some coder down the road? Sure, not everyone may agree on its effectiveness, but rather than having code buried in a thread, it would make a good addition to the repository…

Because just calling rand() works and no one actually cares anymore.

03 May, 2013, Vigud wrote in the 50th comment:

Now see, why doesn't someone put this in the repository as a snippet for some coder down the road? Sure, not everyone may agree on its effectiveness, but rather than having code buried in a thread, it would make a good addition to the repository…

Because just calling rand() works and no one actually cares anymore.

03 May, 2013, quixadhal wrote in the 51st comment:

If your players won't notice the difference, it only matters to you. You could probably spend your time evolving your combat system to not rely so heavily upon random numbers, and actually do something the players would notice. :)

Also, for the last 20 years, random() has been the normal libc function of choice, over rand().


04 May, 2013, Runter wrote in the 52nd comment:

The funny thing is, REAL random events are not distributed along a nice even curve. That's kind of the point of it being random. Each event is totally independent, and thus it's just as likely to get 50 crits in a row as none. See Rosencrantz and Guildenstern for a fun example.

That's incorrect. Being possible doesn't exclude equal distribution. Flip a coin an infinite number of times and, if the coin is truly random, the proportion of heads converges to exactly 50%. Random enough for our intents and purposes means a margin of error that won't affect the game or allow players to game mechanics to an unintended extent.

random

Statistics. of or characterizing a process of selection in which each item of a set has an **equal probability** of being chosen.

Examples of streaking have nothing to do with it and don't support an argument for non-equal distribution.

Just hand-waving this away with "if people don't notice it" is total nonsense. The true measure is whether it will affect your design goals. If you have no mechanics that rely on the RNG, then obviously it doesn't matter to you. The more heavily you depend on it, and the more carefully it has to be calibrated, the more important it is going to be to you. I design things with slim margins for error, such that it could tilt my economy and game if I don't have a truly random number generator. It's important to me, to many others, and to much software out there that the random number generator be equally distributed.

04 May, 2013, Rarva.Riendf wrote in the 53rd comment:

Just hand waving this away to "if people don't notice it" is total nonsense […]

I design things with slim margins for error such that it could tilt my economy and game

Doesn't that mean that the players will indeed notice?

04 May, 2013, Vigud wrote in the 54th comment:

Also, for the last 20 years, random() has been the normal libc function of choice, over rand().

The random() function shall use a non-linear additive feedback random-number generator employing a default state array size of 31 long integers to return successive pseudo-random numbers in the range from 0 to 2**31-1. The period of this random-number generator is approximately 16 x (2**31-1).
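
A minimal sketch of using that interface with a larger state array: POSIX initstate() lets you hand random() more state than the 31-long default, which extends the period (the function name `seeded_random` here is mine, for illustration).

```c
#include <stdio.h>
#include <stdlib.h>

/* Seed random() with a 256-byte state array instead of the
 * default 31 longs; a larger state array gives a longer period. */
long seeded_random(unsigned int seed)
{
    static char state[256];

    initstate(seed, state, sizeof state);
    return random();   /* in [0, 2**31 - 1] */
}
```

Per the usual documentation, state sizes of 8, 32, 64, 128, or 256 bytes are meaningful; other values are rounded down.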

04 May, 2013, quixadhal wrote in the 55th comment:

The funny thing is, REAL random events are not distributed along a nice even curve. That's kind of the point of it being random. Each event is totally independent, and thus it's just as likely to get 50 crits in a row as none. See Rosencrantz and Guildenstern for a fun example.

That's incorrect. Being possible doesn't exclude equal distribution. Flip a coin an infinite number of times and you'll have a perfect guaranteed 50% distribution if the coin is truly random. Well enough random for our intents and purposes is a margin of error that won't affect the game or allow players to game mechanics to an unintended extent.

random

Statistics. of or characterizing a process of selection in which each item of a set has an **equal probability** of being chosen.

No, actually, it is NOT incorrect. You should quote a statistics textbook, rather than dictionary.com.

Note the part you didn't bold…

The problem is, HUMANS don't perceive a streak of identical results as being random. They keep state information in their heads, and after several identical results they start applying that state information, expecting it to affect the outcome of the next trial. The fact is, humans don't want random numbers; they want result sets that fit a bell curve and have some randomness in the individual results.

05 May, 2013, Runter wrote in the 56th comment:

I don't want to measure statistics books we've read, or classes we've had on the subject. I find these kinds of appeals to authority rather boring. If you don't like the dictionary.com definition, fine. But that's the definition that everyone knows it to mean.

If I have a mechanic that says "dodges 10% of attacks" then I expect **on each independent trial** to have a 10% chance of dodging the attack. So if as a game designer I design it this way, and it ends up that only 3% of attacks are being dodged over the long term (as in, the larger the number of rolls, the closer we approach the limit of the actual chance in practice), then there's a problem with the random generation and it's weighted somehow to not be equally distributed.

You seem to not realize this happens in real life too. If you flip a coin, on each independent trial, you have almost a 50% chance of getting heads or tails. So yes, the heads or tails may streak in the short term. But in the long term, as we come closer to infinite flips of the coins, we'll reach the true limit function of the actual chance we had. For example, if you spend all month flipping a weighted coin you may find that heads came up 75% compared to 25% tails. You may now say to yourself it was still random because heads and tails were both possible, but you'd be incorrect, because to be random it had to have equal (or near equal) distribution.

Many random number generators are like the weighted coin. Usually people use weighted coins to cheat. I'd feel better about it if you were suggesting using it to build mechanics in your favor when players wouldn't notice. It's insidious but at least it's not stupid. You're basically suggesting it would be okay if a casino used a weighted coin, without even knowing if it's in favor of the player or the casino. I mean, that's basically what casinos do (their games aren't really random), but it's never weighted in favor of the player. **Because the casino carefully designed the mechanics with statistics in mind and they can't afford to have things off just because a casual player won't notice.**
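
You can check the limit-function claim empirically: simulate the 10% dodge with any generator and watch whether the long-run rate matches the designed rate. A quick sketch (function name mine; plain rand() stands in for whatever PRNG you use):

```c
#include <stdio.h>
#include <stdlib.h>

/* Fraction of 'attacks' dodged when each independent trial has a
 * designed 10% chance; over many trials this should approach 0.10
 * if the generator is well distributed. */
double dodge_rate(unsigned int seed, long trials)
{
    long i, dodges = 0;

    srand(seed);
    for (i = 0; i < trials; i++)
        if (rand() % 100 < 10)   /* the common roll-under idiom */
            dodges++;
    return (double)dodges / (double)trials;
}
```

With a million trials the sampling error is around 0.0003, so an observed rate of 3% instead of 10% would unambiguously indict the generator rather than bad luck.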


05 May, 2013, Runter wrote in the 57th comment:

Just hand waving this away to "if people don't notice it" is total nonsense […]

I design things with slim margins for error such that it could tilt my economy and game

Doesn't that mean that the players will indeed notice?

Nope, they still may not notice. Especially if it's in their favor. I gave you the blackjack example. Plenty of people think they're just geniuses and can beat the game.

05 May, 2013, quixadhal wrote in the 58th comment:

My point is, normal players don't keep statistics. They log in, they play, they witness the short-term behavior. So, while spending lots of time worrying about how good your PRNG is may be something you enjoy doing, it isn't going to help the players enjoy your game. It's time much better spent elsewhere.

If you feel the POSIX random() system call isn't good enough, and you have a nice drop-in replacement which takes only a moment to use, then by all means… if it makes you sleep easier… use it. But implementing your own PRNG, or spending time testing the THEORETICAL improvements, seems like a way to prove something that really doesn't need proving.

I've actually seen code, in muds, specifically written to break streaks. Was it written because the random number generator was bad, or because the players whined about missing too often? I suspect the latter, and that's a human perception issue. Calling the attack function and rolling the dice 5 million times probably DOES yield the correct percentages of hits and misses, but the players only care about the few hundred they see over the last hour of gameplay… and seeing 4 or 5 swooshes in a row annoys them, so they whine about it. (They don't mention the 4 or 5 critical hits they got)


05 May, 2013, Rarva.Riendf wrote in the 59th comment:

Nope, they still may not notice. Especially if it's in their favor. I gave you the blackjack example. Plenty of people think they're just geniuses and can beat the game.

I say they noticed; otherwise they would not have played blackjack to win money. The fact that they did not understand the reason why does not invalidate the fact that they did indeed notice that your game could be beaten.

I mean, that's basically what casinos do (their games aren't really random)

Well, it depends on the game, but casinos never relied on randomness. They just rely on rules that favor them to begin with. It is still totally random; it just does not matter to them.

06 May, 2013, Tyche wrote in the 60th comment:

A microphone, a Geiger counter and a radium-dial watch, and a program to read them make a nice random number generator.

Or you could hook up to http://www.fourmilab.ch/hotbits/

