How do I "pad" my input to a certain number of bits, say 111 bits more than the original input? Any ideas?

Count how many bits your input has.
Then use a loop to add bits until you get from that count up to however many you need, as in the sketch below.
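A rough sketch of that idea, assuming the bits are kept in a std::string of '0' and '1' characters (the variable names and the 111-bit figure are only illustrative):

#include <iostream>
#include <string>
#include <cstddef>

int main()
{
    std::string input = "10101010101" ;      // however many bits you read in
    const std::size_t extra = 111 ;          // pad by 111 bits, as in the question

    std::string padded = input ;
    for( std::size_t i = 0 ; i < extra ; ++i )
        padded.insert( padded.begin(), '0' ) ;   // prepend one '0' per loop iteration

    std::cout << padded.size() << " bits: " << padded << '\n' ;
}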

If the number of bits and the amount of padding are known at compile time:
(if not, you could use boost::dynamic_bitset http://www.boost.org/libs/dynamic_bitset/dynamic_bitset.html ; there is a sketch of that after the output below)

#include <iostream>
#include <bitset>
#include <string>

template< std::size_t PAD, std::size_t N > inline
std::bitset< N + PAD > pad_msb( const std::bitset<N>& bs )
{ return std::bitset< N + PAD >( bs.to_string() ) ; }

template< std::size_t PAD, std::size_t N > inline
std::bitset< N + PAD > pad_lsb( const std::bitset<N>& bs )
{ return pad_msb<PAD>(bs) << PAD ; }

int main()
{
  std::bitset<41> bits( std::string("10101010101") ) ;
  std::cout << bits << '\n' << pad_msb<5>(bits) << '\n' 
            << pad_lsb<9>(bits) << '\n' ;
}
/** output:  
00000000000000000000000000000010101010101
0000000000000000000000000000000000010101010101
00000000000000000000000000000010101010101000000000
*/
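If the number of bits is only known at run time, something along these lines might work with the boost::dynamic_bitset mentioned above (a minimal sketch, assuming Boost is installed; the pad amounts 5 and 9 just mirror the example above):

#include <iostream>
#include <string>
#include <boost/dynamic_bitset.hpp>

int main()
{
    boost::dynamic_bitset<> bits( std::string("10101010101") ) ;

    // pad on the most significant side: resize() adds zero bits at the top
    boost::dynamic_bitset<> msb_padded = bits ;
    msb_padded.resize( bits.size() + 5 ) ;

    // pad on the least significant side: grow first, then shift left
    boost::dynamic_bitset<> lsb_padded = bits ;
    lsb_padded.resize( bits.size() + 9 ) ;
    lsb_padded <<= 9 ;

    std::cout << msb_padded << '\n' << lsb_padded << '\n' ;
}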

Thanks, Salem, for suggesting such an obvious thing that didn't occur to me :)

Now vijayan121!
Thanks a lot for helping out, but the code gives the error "could not deduce template argument for 'N'" on lines 16 and 17, i.e. on

pad_msb<5>(bits)

and on

pad_lsb<9>(bits)

I am new to C++. I couldn't understand your code at first, but then I studied how templates are defined and now I understand it.

My next questions are:
1. What if we want to process the output of the pad_msb function as bits and not as a string?
2. What if we want to pad with 1 bits instead of 0 bits?

Thanks again for helping.

You could try posting some of YOUR code so that we can adapt answers which would likely fit in with your current knowledge and coding.

#include <iostream>
#include <bitset>
#include <string>
#include <iomanip>
#include <limits>
using namespace std ;


template< size_t PAD, size_t NBITS > inline
bitset<NBITS+PAD> pad_msb0( const bitset<NBITS>& bs )
{ return bitset<NBITS+PAD>( bs.to_string() ) ; }

template< size_t PAD, size_t NBITS > inline
bitset<NBITS+PAD> pad_lsb0( const bitset<NBITS>& bs )
{ return pad_msb0<PAD>(bs) << PAD ; }

// if we want to pad with 1 bits instead of 0 bits
template< size_t PAD, size_t NBITS > inline
bitset<NBITS+PAD> pad_msb1( const bitset<NBITS>& bs )
{ return bitset<NBITS+PAD>( string(PAD,'1') + bs.to_string() ) ; }

// if we want to pad with 1 bits instead of 0 bits
template< size_t PAD, size_t NBITS > inline
bitset<NBITS+PAD> pad_lsb1( const bitset<NBITS>& bs )
{ return bitset<NBITS+PAD>( bs.to_string() + string(PAD,'1') ) ; }

int main()
{
 enum
 {
   NBITS = 21,
   DIGITS = numeric_limits<unsigned long>::digits,
   WIDTH = DIGITS/4 + 2
 };
 bitset<NBITS> bits( string("10101010111") ) ;
 cout << hex << showbase << internal
      << bits << ' '
      << setw(WIDTH) << setfill('0') << bits.to_ulong() << '\n'
      // > ...the code gives error of...
      // you must be using an old compiler; NBITS should be deduced.
      // if that is the only issue with the compiler, specifying the
      // template parameter explicitly would fix it. ie.
      // instead of pad_msb0<5>(bits), use pad_msb0<5,NBITS>(bits), e.g.
      << pad_msb0<5,NBITS>(bits) << ' '
      << setw(WIDTH) << setfill('0')
      // > ...output of pad_msb function as bits and not as string...
      // if the number of bits can be held in an unsigned long value,
      // we can use the bitset<>::to_ulong() method
      << pad_msb0<5,NBITS>(bits).to_ulong() << '\n'
      << pad_msb1<5,NBITS>(bits) << ' '
      << setw(WIDTH) << setfill('0')
      << pad_msb1<5,NBITS>(bits).to_ulong() << '\n'
      << pad_lsb0<9,NBITS>(bits) << ' '
      << setw(WIDTH) << setfill('0')
      << pad_lsb0<9,NBITS>(bits).to_ulong() << '\n'
      << pad_lsb1<9,NBITS>(bits) << ' '
      << setw(WIDTH) << setfill('0')
      << pad_lsb1<9,NBITS>(bits).to_ulong() << '\n' ;

  // > ...output of pad_msb function as bits and not as string...
  // if the number of bits cannot be held in an unsigned long value,
  // we can split it up programmatically into smaller bitsets and
  // then we can use the bitset<>::to_ulong() method on these
  enum { NBITS_BIG = 85 };
  bitset<NBITS_BIG> many_bits( string("1010101011110101010111101010101"
                         "01010001101010110101000110101011110101010111") ) ;
  const size_t NUM_ULONGS = NBITS_BIG/DIGITS + ( NBITS_BIG%DIGITS ? 1 : 0 ) ;
  string s = many_bits.to_string() ;
  s = string( DIGITS, '0' ) + s ;
  const size_t SZ = s.size() ;
  for( size_t i = NUM_ULONGS ; i > 0 ; --i )
  cout << setw(WIDTH) << setfill('0')
    << bitset<DIGITS>( s.substr( SZ-i*DIGITS, DIGITS ) ).to_ulong() << ' ' ;
  cout << '\n' ;
}
/**>g++42 -Wall -std=c++98 pad_bits.cc && ./a.out # gcc 4.2
000000000010101010111 0x00000557
00000000000000010101010111 0x00000557
11111000000000010101010111 0x03e00557
000000000010101010111000000000 0x000aae00
000000000010101010111111111111 0x000aafff
0x00000557 0xaaf5551a 0xb51abd57
*/
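For question 2, if you would rather stay with bitset operations instead of building strings, a variation along these lines should also work (just a sketch; pad_msb1_bits is a made-up name, not anything from the standard library, and with an older compiler you may again need to spell out both template arguments, e.g. pad_msb1_bits<5,11>(bits)):

#include <iostream>
#include <bitset>
#include <string>

// pad the most significant side with 1s using shift and OR instead of strings
template< std::size_t PAD, std::size_t N > inline
std::bitset<N+PAD> pad_msb1_bits( const std::bitset<N>& bs )
{
    std::bitset<N+PAD> ones ;
    ones.set() ;        // all N+PAD bits become 1
    ones <<= N ;        // keep 1s only in the top PAD positions
    return ones | std::bitset<N+PAD>( bs.to_string() ) ; // OR in the original bits
}

int main()
{
    std::bitset<11> bits( std::string("10101010111") ) ;
    std::cout << pad_msb1_bits<5>(bits) << '\n' ;  // prints 1111110101010111
}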

repeat: You could try posting some of YOUR code so that we can adapt answers which would likely fit in with your current knowledge and coding.
