In my project I tried to use the Boost RNG in the following manner:

inline int rand(const int m) {
  uniform_smallint<> us(0, m - 1);
  variate_generator<rnggen_t&, uniform_smallint<> > vg(rng, us);
  return vg();
}

where rnggen_t is the type of the random number generator (rng). But using this (with every generator I tried) instead of rand() % m slows my program down by at least 4 times (and the program, of course, does a lot of other things in its loop). I compile with -O3, and I found that keeping a static vector of variate_generators is even slower than the current implementation (I only need random numbers for 2 <= m <= 6). Am I doing it wrong?

Or maybe you know of a good, fast random number generator?

> Am I doing it wrong?
Yes.
Initialize vg once for the m you need. Then, to get a random number, just call vg() instead of rand() % m.
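Something along these lines (a sketch only; I'm assuming mt19937 as the engine behind your rnggen_t and a fixed m = 6):

#include <ctime>
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/uniform_smallint.hpp>
#include <boost/random/variate_generator.hpp>

// Engine and generator are constructed once and reused for every draw,
// instead of being rebuilt inside rand(m) on every call.
boost::mt19937 rng(static_cast<unsigned int>(std::time(0)));
boost::uniform_smallint<> us(0, 5);  // m = 6 -> values in [0, 5]
boost::variate_generator<boost::mt19937&, boost::uniform_smallint<> > vg(rng, us);

int roll() {
  return vg();  // no per-call construction of the distribution or generator
}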

I made a global array of variate generators, which I used like vgg[m](), and it was slower than my approach, so I suspect that the constructors inside this function are not the problem. I also profiled with gprof earlier, without compiler optimisation, and with the mt19937 generator about 25% of the program's time was spent in calls to vg().

Try using the fastest and least resource-hungry random number generator: rand48 (here is the list of all options). The one you are using is very memory-hungry and is not the fastest. It can also be worth exploring quasi-random number generators instead; they can be faster and just as good for many applications, especially numerical integration (quadrature) and optimization methods.
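For example, rand48 can be seeded directly through its constructor (or re-seeded later with seed()); a minimal sketch:

#include <ctime>
#include <iostream>
#include <boost/random/linear_congruential.hpp>

int main() {
  // Seed through the constructor; rng.seed(...) re-seeds later if needed.
  boost::rand48 rng(static_cast<unsigned int>(std::time(0)));

  std::cout << rng() << std::endl;  // one raw draw from the engine
  return 0;
}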

My program version with rand48 is about 3 times slower than with rand(); there is no big difference between the generators. I was thinking about using one of these generators, but I have no idea how to seed them.
PS
I got this algorithm from my lecturer:

FUNCTION VAXRAN(D)

       COMMON /RANF/ A,B,C
       F=D*(A+B+C)
       F=F-AINT(F)
       A=B
       B=C
       C=F
       VAXRAN=F
       RETURN
       END

but he didn't give me the values of the parameters, except that A, B, and C are floating-point numbers (between 0.0 and 1.0), and I don't know which values are reasonable; maybe it's time to read some Knuth.
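In C++ I read that routine as roughly the following; the seeds 0.1, 0.3, 0.7 and the multiplier passed in as D are only placeholders I picked, since no actual values were given:

#include <cmath>

// Placeholder state; the real A, B, C values were not given.
static double A = 0.1, B = 0.3, C = 0.7;

double vaxran(double D) {
  double F = D * (A + B + C);
  F = F - std::floor(F);  // drop the integer part (AINT), keep the fraction
  A = B;                  // shift the lagged values
  B = C;
  C = F;
  return F;
}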

The formula used by rand48, minstd_rand and the built-in rand() function is the linear congruential formula:

x(n+1) = (a * x(n) + c) % m;

Where a, c, and m are fixed parameters and x(0) is the seed value. Any of the random number generators you have proposed are more complex than this and are very unlikely to be any faster. I find it very bizarre that computing the above formula would take up so much of your computing time (unless all your program does is generate random numbers). The reason rand() might be faster is that it is built in, and thus could have been implemented as inlined assembly code (although that is fairly unlikely). This is actually the implementation of rand():

static long holdrand = 1L;   // the generator's state, set by srand()

int __cdecl rand (void) {
  return (((holdrand = holdrand * 214013L + 2531011L) >> 16) & 0x7fff);
}

void __cdecl srand(int seed) {
  holdrand = seed;
}

rand() is not perfect, but simplicity and speed are not among its problems. As for Boost.Random, using the basic linear congruential generators should be just as fast, unless your compiler really sucks. I tested the following program on my computer (Linux, GCC 4.6.0):

#include <cstdlib>
#include <ctime>
#include <boost/random/linear_congruential.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <iostream>

int main() {

  std::srand((unsigned int)time(0));
  boost::rand48 rng((unsigned int)time(0));
  boost::minstd_rand rng2((unsigned int)time(0));

  boost::posix_time::ptime start = boost::posix_time::microsec_clock::local_time();
  for(unsigned int i=0;i<1000000;++i)
    rand();
  boost::posix_time::time_duration dt = boost::posix_time::microsec_clock::local_time() - start;
  std::cout << "rand() generated 1000000 numbers in " << dt.total_microseconds() << std::endl;

  start = boost::posix_time::microsec_clock::local_time();
  for(unsigned int i=0;i<1000000;++i)
    rng();
  dt = boost::posix_time::microsec_clock::local_time() - start;
  std::cout << "boost::rand48 generated 1000000 numbers in " << dt.total_microseconds() << std::endl;

  start = boost::posix_time::microsec_clock::local_time();
  for(unsigned int i=0;i<1000000;++i)
    rng2();
  dt = boost::posix_time::microsec_clock::local_time() - start;
  std::cout << "boost::minstd_rand generated 1000000 numbers in " << dt.total_microseconds() << std::endl;

  return 0;
}

And got the following output:

rand() generated 1000000 numbers in 13052
boost::rand48 generated 1000000 numbers in 14325
boost::minstd_rand generated 1000000 numbers in 43943

This is exactly what you would expect from looking at the table on the Boost.Random website. And I would say that being able to generate one million random numbers in less than 15 milliseconds isn't bad at all.


Thank you very much; with your test I found the flaw in my approach. It was not a problem with the random number generator itself, but probably with uniform_smallint, even with preconstructed variate_generators. I don't know why it should be such a big performance hog; maybe that function is just thinking a little too much. I didn't know that you can use the PRNG engine directly like this (via operator()), thanks for showing that approach. Now I only have to stop relying on a plain modulo after generating the raw numbers so the results stay statistically sound (one bias-free way of doing that reduction is sketched after the timings below).
PS.
This is my output from your test, which convinced me just how bad my approach was (compiled with -O3; I added volatile variables so the loops would not be optimized away):

rand() generated 1000000 numbers in 198270
boost::rand48 generated 1000000 numbers in 44118
boost::minstd_rand generated 1000000 numbers in 104119
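For reference, one common way to reduce raw engine output to [0, m) without modulo bias is to reject the top sliver of the engine's range before reducing. A sketch (the engine choice and seed here are placeholders, not something from this thread):

#include <ctime>
#include <boost/random/linear_congruential.hpp>

boost::rand48 rng(static_cast<unsigned int>(std::time(0)));

inline int rand_small(int m) {
  // Largest multiple of m that fits in the engine's span; raw values above
  // it would make some results slightly more likely, so they are re-drawn.
  const unsigned long span  = static_cast<unsigned long>(rng.max()) - rng.min() + 1;
  const unsigned long limit = span - span % m;
  unsigned long v;
  do {
    v = static_cast<unsigned long>(rng()) - rng.min();
  } while (v >= limit);  // rejections are vanishingly rare for m <= 6
  return static_cast<int>(v % m);
}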
