Hello all,
I need to create a structure with some attributes associated with it:

struct SNode {
    float PrimSet[n];   // n is not a compile-time constant, so this doesn't compile
    int size;
    float value;
};

The only problem is that the size of the array (i.e. n) is variable; it can be any number below 100, and it is unknown before the code runs.
I tried using dynamic arrays, but I'm not sure how to go about it.

I appreciate any help.
Cheers

Please, let's stop the impetuous promotion of the std::vector class as a universal panacea.
std::valarray is the most effective STL class for dynamic-size arrays of numbers.
See, for example: http://stdcxx.apache.org/doc/stdlibref/valarray.html
C++ Standard:

The class template valarray&lt;T&gt; is a one-dimensional smart array, with elements numbered sequentially from zero. It is a representation of the mathematical concept of an ordered set of values.

As a rule, there is no need to dynamically change the size of an array after its creation (which is the main purpose of the std::vector class). So valarray element access is faster than vector element access (by a factor of 3-4!). There are tons of use cases where valarray code is a wonder of clarity, brevity and effectiveness. In simple cases a valarray is an ordinary array with a handy size() member function...
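
For example, a minimal sketch of the OP's struct with a valarray member (just one way to arrange it; the separate 'int size' member becomes redundant because the container knows its own size):

#include <cstddef>
#include <valarray>

// The array size is fixed at construction time, when n is finally known,
// which is exactly the "no need to resize afterwards" case described above.
struct SNode {
    std::valarray<float> PrimSet;
    float value;

    explicit SNode(std::size_t n) : PrimSet(n), value(0.0f) {}
    std::size_t size() const { return PrimSet.size(); }  // replaces the old 'int size'
};

// usage:
// SNode node(42);          // n decided at run time (must be below 100 per the OP)
// node.PrimSet[0] = 3.5f;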

commented: Thanks for the intro to valarray, I didn't know such a class existed :) +36
commented: Okay >.< +3

On the other hand, ArkM, don't advocate valarray without a particular reason. The original post provided no particular reason to prefer valarray over a vector. The only requirement stated is the ability to represent an array of float with a length unknown at compile time. Both valarray and vector meet that requirement, and do it equally well.

You happen to be incorrect about one attribute of valarray: it can be resized. And the impact of resizing versus not resizing is equivalent for valarray and vector.

The advantages of valarray relate to it being specifically optimised for repetitive operations on an array of numeric values. If there is no need to do such operations (and the original post identified no such need) vector is a good general-purpose option.

As a rough rule, it is not a good idea to advocate a specialised solution unless you know that solution is relevant to the problem at hand. valarray is a specialised solution.
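
For completeness, the OP's struct sketched with a vector member (again, just one possible layout, not the only one):

#include <cstddef>
#include <vector>

// Functionally equivalent for the stated requirement: a float array whose
// size is only known at run time.
struct SNode {
    std::vector<float> PrimSet;
    float value;

    explicit SNode(std::size_t n) : PrimSet(n), value(0.0f) {}
    std::size_t size() const { return PrimSet.size(); }
};

// usage:
// SNode node(n);              // n known only at run time
// node.PrimSet.push_back(x);  // and it can still grow later if ever needed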

Coming from C: just make n = 100. Access times are about 8 times as fast and you'll lose little memory in return... unless you're making a gazillion of those structures, of course.
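
In other words, something like this sketch of the fixed-maximum approach (it wastes at most 100 floats per node):

// Reserve the maximum up front and remember how much of it is actually used.
struct SNode {
    float PrimSet[100];  // upper bound taken from the original post
    int size;            // how many entries are currently valid
    float value;
};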

To grumpier:
I never said that valarray can't be resized. I said: as a rule, there is no need - feel the difference. By the way, the effects of valarray and vector resize() are different.
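
To make that last point concrete, a small sketch of the difference: vector::resize keeps the old elements, while valarray::resize discards them and refills the array.

#include <cassert>
#include <valarray>
#include <vector>

int main()
{
    std::vector<int>   vec(3, 7);   // three elements, all 7
    std::valarray<int> va(7, 3);    // note: valarray takes (value, count)

    vec.resize(5);                  // old elements are preserved, new ones are 0
    va.resize(5);                   // ALL elements are reset to int(), i.e. 0

    assert(vec[0] == 7);            // still 7
    assert(va[0] == 0);             // the old 7s are gone
}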

Of course, std::valarray is a specialized solution, and this specialization is exactly what the original poster asked for: how to declare an array with a size defined at run time.

So we have two possible solutions:
1. std::vector: variable declared size - OK; element access - slow.
2. std::valarray: variable declared size - OK; element access - fast.
Of course, it's not a good idea to advocate a simple and effective solution when we have a slow but more usual (or rather, more fashionable) one...

valarray was introduced with the idea that it might encourage developers of C++ compilers for vector processors to implement it essentially as a built-in type (the compiler could emit code that exploits the vector pipeline).
See Bjarne Stroustrup, "The C++ Programming Language: Third Edition",
Chapter 22 Numerics, Section 4 Vector Arithmetic, pages 662-79.

The definition of a vector machine is that it can apply a single operation to a number of elements in parallel. The basic idea on a vector machine is this: apply a single operation to an entire array at a time, then apply the next operation to the entire array, and so on.
The prototypical vector machine is the Cray, which has 3 sets of 64 registers each -- it can be loading one set of 64 registers, applying a single operation to another set of 64, and storing the third set of 64, all at the same time. A 32-CPU system has a STREAM bandwidth of 360 GB per second.
valarray fits this pattern perfectly, and on a Cray T90 you could get excellent performance out of it.

The basic problem with this design is that it requires huge memory bandwidth. A typical modern machine has an extremely fast processor, but main memory is a bottleneck (a cache is used to make up the difference). In this case, optimal usage is almost exactly the opposite: you want to load a single value, apply all your operations to it, then go on to the next value. Most of the operations supported by valarray therefore perform quite poorly, and the only way to get decent performance is to treat a valarray like a std::vector.
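
To illustrate what that means in code (a sketch with made-up array names, not a benchmark):

#include <cstddef>

// "Apply one operation to the whole array, then the next": the data streams
// through memory twice, plus a temporary array.
void array_at_a_time(const double* a, const double* b, const double* c,
                     double* tmp, double* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) tmp[i] = a[i] * b[i];
    for (std::size_t i = 0; i < n; ++i) out[i] = tmp[i] + c[i];
}

// Cache-friendly on an ordinary (non-vector) machine: each value is loaded
// once, everything is applied to it, and the result is stored.
void element_at_a_time(const double* a, const double* b, const double* c,
                       double* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) out[i] = a[i] * b[i] + c[i];
}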

As a result, almost nobody uses them, and it is difficult to convince compiler/library developers that they should invest in implementing high-performance valarray templates. In any case, vector processor technology very quickly became obsolete.

std::valarray became an orphan when the standardization committee discovered that on 'normal' (non-vector) machines, every optimization which can be done with std::valarray can be done better using 'expression templates'. It was too late to remove std::valarray from the standard, but nobody cared enough to fix it.
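
For readers who haven't met them, here is a bare-bones sketch of the expression-template idea (toy types, nothing like a real library such as Blitz++): operator+ builds a lightweight proxy instead of a temporary array, and the work happens element by element when the result is assigned.

#include <cstddef>
#include <vector>

// Proxy representing "l + r"; nothing is computed until an element is asked for.
template <class L, class R>
struct Sum {
    const L& l;
    const R& r;
    Sum(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

struct Vec {
    std::vector<double> data;
    explicit Vec(std::size_t n) : data(n) {}
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }
    std::size_t size() const { return data.size(); }

    // Assigning from any expression walks it exactly once per element,
    // so a = b + c + d makes a single pass and creates no temporary arrays.
    template <class E>
    Vec& operator=(const E& e)
    {
        for (std::size_t i = 0; i < size(); ++i) data[i] = e[i];
        return *this;
    }
};

Sum<Vec, Vec> operator+(const Vec& a, const Vec& b)
{
    return Sum<Vec, Vec>(a, b);
}

template <class L, class R>
Sum<Sum<L, R>, Vec> operator+(const Sum<L, R>& a, const Vec& b)
{
    return Sum<Sum<L, R>, Vec>(a, b);
}

// usage: Vec a(n), b(n), c(n), d(n);  /* fill b, c, d */  a = b + c + d;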

If you are already using std::valarray, or if its interface (slices etc.) is what you need, fine.

If you need high-speed numerics, you are better off using an expression-template library like Blitz++ http://www.oonumerics.org/blitz/

In all other cases, use std::vector.

Here are some measurements:

#include <iostream>
#include <valarray>
#include <vector>
#include <boost/random.hpp>
#include <ctime>
using namespace std ;
using namespace boost ;

int main()
{
  enum { N = 1024*1024*4, REPEAT=32 };
  valarray<double> va1(N), va2(N), va3(N) ;
  vector<double> vec1(N), vec2(N), vec3(N) ;

  mt19937 engine ;
  uniform_real<> distribution( 0.0, 100.0 ) ;
  variate_generator< mt19937&, uniform_real<> > rng( engine, distribution ) ;

  double c_array_time = 0.0 ;
  double vector_time = 0.0 ;
  double valarray_time_a = 0.0 ;
  double valarray_time_b = 0.0 ;

  for( int i=0 ; i<REPEAT ; ++i )
  {
    for( size_t i=0 ; i<N ; ++i )
    {
      vec1[i] = va1[i] = rng() ;
      vec2[i] = va2[i] = rng() ;
    }

    double* ca1 = &vec1[0] ;
    double* ca2 = &vec2[0] ;
    double* ca3 = &vec3[0] ;

    clock_t begin = clock() ;
    for( size_t i=0 ; i<N ; ++i ) ca3[i] = ca1[i] * ca2[i] ;
    clock_t end = clock() ;
    c_array_time += end - begin ;

    begin = clock() ;
    for( size_t i=0 ; i<N ; ++i ) vec3[i] = vec1[i] * vec2[i] ;
    end = clock() ;
    vector_time += end - begin ;

    begin = clock() ;
    va3 = va1 * va2 ;
    end = clock() ;
    valarray_time_a += end - begin ;

    begin = clock() ;
    for( size_t i=0 ; i<N ; ++i ) va3[i] = va1[i] * va2[i] ;
    end = clock() ;
    valarray_time_b += end - begin ;

    cout << '.' << flush ;
  }
  cout << "\nc array: " << c_array_time / CLOCKS_PER_SEC << " seconds\n" ;
  cout << "vector: " << vector_time / CLOCKS_PER_SEC << " seconds\n" ;
  cout << "valarray operator* : " << valarray_time_a / CLOCKS_PER_SEC << " seconds\n" ;
  cout << "valarray[i] operator* : " << valarray_time_b / CLOCKS_PER_SEC << " seconds\n" ;
}
>dmesg | grep CPU:
CPU: Intel(R) Pentium(R) III Mobile CPU      1000MHz (995.97-MHz 686-class CPU)
>alias c++
g++42 -Wall -std=c++98 -pedantic -Werror -I /usr/local/include -L /usr/local/lib
>c++ -O3 valarray.cc && ./a.out
................................
c array: 10.3359 seconds
vector: 10.3984 seconds
valarray operator* : 10.4688 seconds
valarray[i] operator* : 10.5234 seconds

I have only just joined the forum as of today, and already I can see the "I am a better coder than you" mentality is rife.

The funny thing is, this sort of approach to helping someone out does kind of work: as the war rages, the original question gets answered in lots of different ways. My advice to the original poster would be to try as many of the solutions given as possible, do some more research on each one as you go, and use what you find works best for you.

Well, that makes sense, no? "I'm a better programmer" leads to discussion and, hopefully, measurements. It's not so much "I'm a better coder" as "I know the way to do this".

But back to the topic: how come vectors are just as fast as C arrays in this example? That makes no sense at all. I thought ArkM showed they were at least a factor of 5 slower?

But back to the topic: how come vectors are just as fast as C arrays in this example? That makes no sense at all. I thought ArkM showed they were at least a factor of 5 slower?

Unless I missed something, he wasn't commenting on C arrays, but on std::valarray versus std::vector. There is little doubt that C array access is fastest.

Doesn't the program's output show that C-array multiplication took about as long as vector and valarray multiplication? Doesn't that indicate there's some other bottleneck? I could be wrong here; I'm quite often missing information, heh.

To grumpier:
So we have two possible solutions:
1. std::vector: variable declared size - OK; element access - slow.
2. std::valarray: variable declared size - OK; element access - fast.
Of course, it's not a good idea to advocate a simple and effective solution when we have a slow but more usual (or rather, more fashionable) one...

Your assertions of relative speed are incorrect.

vijayan121 gave a good summary of the history of valarray. In theory, there are some highly specialised circumstances in which valarray operations can be quicker than vector operations, but that relies on specific support by the compiler and usage of hardware features (in vector processors) which were (almost) never employed in practice.

Unless I missed something, he wasn't commenting on C arrays, but on std::valarray versus std::vector. There is little doubt that C array access is fastest.

As a matter of fact, there is. Element access of std::vector (and std::valarray) can be optimised by the compiler (inlining of the operator[] functions, etc.) and effectively decays to a C array element access (or, equivalently, a pointer dereference). This relies on the behaviour of the compiler (specifically, how it does optimisation), but good-quality modern compilers do this by default.
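
Roughly speaking (this is a simplification for illustration, not any particular library's source), vector element access boils down to something like this once the inlined layers are peeled away:

#include <cstddef>

// What vec[i] compiles down to after inlining: the operator[] of a typical
// vector implementation is essentially one pointer dereference, the same as
// a C array access.
struct FakeVector {
    double* begin_;  // pointer to the first element of a contiguous block

    double& operator[](std::size_t i) { return begin_[i]; }  // == *(begin_ + i)
};

// With inlining enabled, v[i] and p[i] generate the same code:
//   double* p = &v[0];
//   v[i];   // loads begin_, then *(begin_ + i)
//   p[i];   // *(p + i)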

Well, that didn't happen in my experience with the latest version of GCC and my program. Is it safe to say that this optimization is rarely made?

Of course, it's possible to optimize vector element access in the loop, because the C++ Standard prescribes that "the elements of a vector are stored contiguously". Meanwhile, it's a fact that with VC++ 2008 (Plauger's STL), the run time of vijayan121's vector multiplication is half again as long as that of the valarray or plain-array version (with optimization for speed). Is VC++ an exotic or specialized compiler, or is my AMD CPU a specialized valarray processor? I never claimed that slow vector element access is prescribed by the C++ Standard. Let's talk about facts, not about "assertions".

It may be observed in passing that clock() is not the best measurement tool for such intervals, and there are many other delicate issues. For example, we should declare the valarrays with initialization to compensate for the possible paging overhead when the pages allocated for va3 are first accessed in the loop. Incidentally, a truly optimizing compiler would discard all the loop-body statements in vijayan121's test program because the assignments have no observable effect: va3 and vec3 are not used afterwards. But that's another story.
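
If one wanted to rule that out regardless of what the optimizer decides, a simple guard (just a sketch, with a hypothetical checksum helper, not part of the posted program) is to fold the results into a value that gets printed, so the stores have an observable effect:

#include <cstddef>
#include <iostream>
#include <valarray>
#include <vector>

// Consume the results so the multiplication loops cannot legally be
// discarded as dead code.
double checksum(const std::valarray<double>& va, const std::vector<double>& vec)
{
    double sum = 0.0;
    for (std::size_t i = 0; i < vec.size(); ++i) sum += va[i] + vec[i];
    return sum;
}

// ... at the end of the benchmark:
// std::cout << "checksum (ignore): " << checksum(va3, vec3) << '\n';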

Some prefer std::vector. Who's stopping them? I think in the case of a numeric array the only advantage of std::vector over std::valarray is that strlen("vector") < strlen("valarray").

In a word, I don't understand why valarray is the outcast among the STL classes (see the storm in a teacup - sorry, in this thread). I don't understand why writing a program without std::vector is treated like going out for a walk without trousers...

Measurements for the same snippet (3 runs each) with the Microsoft compilers:

compiler: VC++ 9.0 (Express 2008)
optimizations: /Ox /Ob2 /Oi /Ot /Oy /GT /GL
processor: Intel Pentium M 1.86 GHz

c array: 2.076 seconds
vector: 2.094 seconds
valarray operator* : 6.313 seconds
valarray[i] operator* : 2.048 seconds

c array: 2.076 seconds
vector: 2.095 seconds
valarray operator* : 6.39 seconds
valarray[i] operator* : 2.106 seconds

c array: 2.082 seconds
vector: 2.059 seconds
valarray operator* : 6.314 seconds
valarray[i] operator* : 2.049 seconds

-------------------------------------------

compiler: VC++ 8.0 (Express 2005)
optimizations: /Ox /Ob2 /Oi /Ot /Oy /GT /GL
processor: Intel Pentium M 1.86 GHz

c array: 2.105 seconds
vector: 2.115 seconds
valarray operator* : 4.721 seconds
valarray[i] operator* : 2.026 seconds

c array: 2.166 seconds
vector: 2.116 seconds
valarray operator* : 4.634 seconds
valarray[i] operator* : 2.131 seconds

c array: 2.11 seconds
vector: 2.08 seconds
valarray operator* : 4.874 seconds
valarray[i] operator* : 2.005 seconds

> a truly optimizing compiler would discard all the loop-body statements in vijayan121's test program because the assignments have no observable effect

Only if you force it with a compiler switch, e.g. /Oa (Assume No Aliasing) on Microsoft.
Otherwise, it has to allow for aliasing, as permitted by IS 3.10/15.
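
As an aside, the per-pointer way of making the same no-aliasing promise (instead of the global /Oa switch) is the non-standard restrict qualifier. A sketch, using the GCC spelling __restrict__ (MSVC spells it __restrict); it is not standard C++98:

#include <cstddef>

// With __restrict__ the compiler may assume dst, a and b never overlap,
// which is the same assumption /Oa asserts for the whole translation unit.
void multiply(double* __restrict__ dst,
              const double* __restrict__ a,
              const double* __restrict__ b,
              std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) dst[i] = a[i] * b[i];
}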

In a word, I don't understand why valarray is the outcast among the STL classes (see the storm in a teacup - sorry, in this thread). I don't understand why writing a program without std::vector is treated like going out for a walk without trousers...

It's not an outcast. There are arguments for and against its usage.

You were the one who started with "Please, let's stop an impetuous publicity of std::vector class as universal panacea" and then proceeded to advocate valarray.

std::vector is not perfect, but it is worthwhile for most C++ programmers to be familiar with it. It is generally applicable to a wide range of problems and, in practice, is often a suitable (as opposed to optimal) solution to many problems. When a question is asked about how to implement an array with a variable size, it is more helpful to provide a pointer to a widely applicable and commonly used approach or solution (eg std::vector) rather than a specialised one (eg std::valarray). Why? Because the person learning will be able to use std::vector for more things, and learn for themselves. Whereas that person will probably run into limitations of valarray (eg types it cannot hold) early, and then have to learn a more general solution.

A basic approach to helping people learn is to encourage building knowledge and experience that is widely applicable and, once the person has learnt those things, moving to more specialised approaches. In the case of C++ containers, that means teaching general-purpose things (eg std::vector) first, and more specialised and less widely applicable things (eg std::list, valarray) later. Not because those specialised things are inferior, but because general-purpose things will be used more often. That's why they are described as general-purpose.

Your approach (advocating that someone learn a specialised solution to a general problem) is backward. Most people who ask how to boil an egg would be given a general answer involving a saucepan, water, and a heat source. Occasionally, someone may mention it's possible to use a high-intensity laser beam. Your advocacy of valarray over vector is equivalent to telling people they must boil the egg using the laser and disregard the saucepan.
