Hi all,

I've been searching for hours but I can't find any solutions, so I hope someone can help...

Is there any way of proving that the += operator is more efficient than the + operator...

i.e. A += B instead of A = A + B

I'm hoping to use code to see how much overhead is being saved.


thanks in advance,

John Riley

>Is there any way of proving that the += operator is more efficient than the + operator...
Sure. Look at it and say "Hey, A is being evaluated twice with the + operator but only once with the += operator" and "Hey, A + B is probably making a temporary copy".
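If you want to actually see that temporary copy, one quick way (just a sketch, using a hypothetical copy-counting type rather than anything from your code) is to count copy constructions:

#include <iostream>

// hypothetical type that counts copy constructions, so the temporary made by + shows up
struct Counter
{
  static int copies ;
  int value ;

  Counter ( int v = 0 ) : value ( v ) {}
  Counter ( const Counter& other ) : value ( other.value ) { ++copies ; }

  Counter& operator+= ( const Counter& rhs )
  {
    value += rhs.value ;   // modifies *this in place, no temporary
    return *this ;
  }
};

int Counter::copies = 0 ;

// + written in terms of +=; it returns a brand new object by value
Counter operator+ ( Counter lhs, const Counter& rhs )
{
  return lhs += rhs ;      // lhs is already a copy of the left operand
}

int main()
{
  Counter a ( 1 ), b ( 2 ) ;

  Counter::copies = 0 ;
  a = a + b ;              // copies the left operand and the result
  std::cout<<"a = a + b : "<< Counter::copies <<" copies\n" ;

  Counter::copies = 0 ;
  a += b ;                 // works directly on a
  std::cout<<"a += b    : "<< Counter::copies <<" copies\n" ;
}

The first line should report a couple of copies while the second reports none; the exact count matters less than the fact that + has to build and copy a separate object while += does not.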

It is more efficient, and I know that with the plain + operator the operation is being performed on a temporary value.

My question is: is there a way of proving it with code? Otherwise, how do we know for sure?

>My question is, is there a way of proving it with code?
Well, that's different from your original question, isn't it? Why not perform the operations a few billion times and see which one takes longer?

#include <ctime>
#include <climits>
#include <iostream>

// returns the seconds taken by n iterations of a += b
template <typename T>
double runassignadd ( unsigned long n )
{
  T a = T();
  T b = T();
  std::clock_t start = std::clock();

  for ( unsigned long i = 0; i < n; i++ )
    a += b;
  
  return ( (double)std::clock() - start ) / CLOCKS_PER_SEC;
}

// returns the seconds taken by n iterations of a = a + b
template <typename T>
double runadd ( unsigned long n )
{
  T a = T();
  T b = T();
  std::clock_t start = std::clock();

  for ( unsigned long i = 0; i < n; i++ )
    a = a + b;
  
  return ( (double)std::clock() - start ) / CLOCKS_PER_SEC;
}

int main()
{
  std::cout<<"A += B    -- "<< runassignadd<int> ( UINT_MAX ) <<'\n';
  std::cout<<"A = A + B -- "<< runadd<int> ( UINT_MAX ) <<'\n';
}

>Otherwise how do we know for sure.
Don't put too much weight on empirical tests for making a general statement. There are often unknowns that skew the result and depend on the machine you test with, the code you use, and so on.
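One of those unknowns is the optimizer itself: if the result of the loop is never used, the compiler is free to throw the whole loop away and both timings collapse to nothing. A common guard (just a sketch, not part of any of the code above) is to route the operand and the result through volatile variables so the work can't be discarded:

#include <ctime>
#include <iostream>

int main()
{
  volatile int b = 1 ;       // volatile reads can't be hoisted out of the loop or removed
  int a = 0 ;
  std::clock_t start = std::clock() ;

  for ( unsigned long i = 0; i < 1000000000UL; i++ )
    a += b ;

  volatile int sink = a ;    // forces the accumulated result to be kept
  std::cout<< ( (double)std::clock() - start ) / CLOCKS_PER_SEC <<'\n' ;
}

Even then, the numbers only tell you about that compiler, that machine, and those flags.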

Short of examining the object code created, there's probably no definitive answer. It may be compiler dependent.

In a brute-force approach, run the two operations in a verrrrry big loop and time the difference. I just tried this:

#include <iostream>
#include <cstdlib>
#include <ctime>

using namespace std;

int main() 
{
   int a = 10;
   int b = 6;
   unsigned int i;
   int j;

   clock_t   start_time, end_time;

   start_time = clock( );

   // 10 passes of 4 billion iterations = 40 billion additions
   for( j = 0; j < 10; j++ )
      for( i = 0; i < 4000000000 ; i++)
      {
         a = 10;
         a = a + b;
      }

   end_time = clock( );
   cout << "time: " 
         << (double)(end_time - start_time)/ CLOCKS_PER_SEC << endl << endl;

   cout << start_time << '\t' << end_time << endl;

    return 0;
}// end of main

changing the a = a + b; to the += version in a second test. Running on a 3.2 GHz Pentium D, the timings (in seconds) were:
a = a + b : 109.83
a += b    : 109.143

Hmmm, is there any significant, practical difference? I think not. Remember, that was 40 billion iterations.

YMMV
Val

I do like the idea of timing the different assignments. What about actually changing or overloading the += operator? Any ideas or suggestions?

Does anyone know which header file holds the += operator code?

My apologies to Narue... my original question was different, but I meant no offense.

> Hmmm, is there any significant, practical difference?
> I think not. Remember, that was 40 billion iterations.

For standard types like int or double, it would not make any difference. The compiler knows everything about these types and can see that a = a + b; and a += b; are equivalent, so it generates identical (optimized) code.

The case could be different for user-defined types: for one, the overloaded + or += operators may not be inline, and then the compiler cannot make any assumptions about the equivalence between a = a + b; and a += b;. If the operators are inline, a good compiler can still optimize, but even in that case, avoiding the creation of anonymous temporaries can improve performance.

int func_int_plus( int a, int b )
{
  return a = a + b ;
}

int func_int_plus_assign( int a, int b )
{
  return a += b ;
}

struct A
{ 
  A( int xx, int yy ) : x(xx), y(yy) {}
  // operator+ returns a brand new A by value (an anonymous temporary)
  A operator+ ( const A& that ) const 
  { return A( x+that.x, y+that.y ) ; }
  // operator+= modifies *this in place; no temporary is created
  A& operator+= ( const A& that ) 
  { x += that.x ; y += that.y ; return *this ; }
  int x ;
  int y ;
};

A func_A_plus( A a, A b )
{
  return a = a + b ;
}

A func_A_plus_assign( A a, A b )
{
  return a += b ;
}

A& func_A_plus_assign_byref( A& a, const A& b )
{
  return a += b ;
}

>c++ -O3 -S -fomit-frame-pointer operator.cc
An extract of the relevant parts of the assembly generated (gcc 4.2.3):

.file    "operator.cc"
/////////////////////////////////
_Z13func_int_plusii:
.LFB2:
    movl    8(%esp), %eax
    addl    4(%esp), %eax
    ret
.LFE2:
//////////////////////////////////
_Z20func_int_plus_assignii:
.LFB3:
    movl    8(%esp), %eax
    addl    4(%esp), %eax
    ret
.LFE3:
//////////////////////////////////
_Z11func_A_plus1AS_:
.LFB9:
    pushl    %ebx
.LCFI0:
    movl    8(%esp), %ebx
    movl    12(%esp), %ecx
    addl    16(%esp), %ebx
    addl    20(%esp), %ecx
    movl    %ecx, %edx
    movl    %ebx, %eax
    popl    %ebx
    ret
.LFE9:
//////////////////////////////////
_Z18func_A_plus_assign1AS_:
.LFB10:
    pushl    %ebx
.LCFI1:
    movl    12(%esp), %ecx
    movl    8(%esp), %ebx
    addl    16(%esp), %ebx
    addl    20(%esp), %ecx
    movl    %ecx, %edx
    movl    %ebx, %eax
    popl    %ebx
    ret
.LFE10:
//////////////////////////////////
_Z24func_A_plus_assign_byrefR1ARKS_:
.LFB11:
    movl    4(%esp), %eax
    movl    8(%esp), %ecx
    movl    (%ecx), %edx
    addl    %edx, (%eax)
    movl    4(%ecx), %edx
    addl    %edx, 4(%eax)
    ret
.LFE11:
//////////////////////////////////