Hi all, hope you are well.

I'm having some trouble trying to convert the literal value of a hex number to an int.
I say "literal" but I'm not sure that is the correct term; my code will explain it better, I hope.

I have simplified the code down to just my conundrum; the array is not populated in the
fashion shown, but the result is the same.

The definition of 'uchar' is 'unsigned char' and cannot be changed.

void testFunc00(){

	char in;

	const uchar i0 = '30';
	const uchar i1 = '31';
	const uchar i2 = '32';
	const uchar i3 = '33';
	const uchar i4 = '34';
	const uchar i5 = '35';
	const uchar i6 = '36';
	const uchar i7 = '37';
	const uchar i8 = '38';
	const uchar i9 = '39';
	
	uchar auChars [10] = {i0, i1, i2, i3, i4, i5, i6, i7, i8, i9};
	uchar* CharBuffer = &auChars[0];

	int n = 0;

	for (int i = 1; i <= 3; i++){

		putchar(CharBuffer[i]); // final output = 123
		n += CharBuffer[i];     // I'm wanting to get this int to the value 123
		
	}

	puts("\n");

	cout << n << endl; // output = 150 which I believe is the sum of the decimals 49 50 51

	cin >> in;

}

I'm trying, without much success, to get the integer 'n' to equal the numeric value
one hundred and twenty-three (123).
I get the feeling it could be done with some sort of bitwise operation, but unfortunately
they confuse the hell out of me.

My hope is that someone might know of a library that exists to deal with this calculation
or offer some suggestions.

In any case, I appreciate all offered replies.

<edit>

I might add that my goal is efficiency over elegance :)

const uchar i0 = '30';

'30' is not a single char, so there's the first problem. The single char:
char x = 30;
is fine. In this case, the char is the RECORD SEPARATOR character, as shown here: http://www.asciitable.com/
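
To make the distinction concrete, here's a minimal sketch (note that the value of a multi-character literal like '30' is implementation-defined):

#include <iostream>

int main()
{
    char a = '0';  // the character '0', which is 48 decimal (0x30 hex) in ASCII
    char b = 0x30; // the same value, written as a hexadecimal integer
    char c = 30;   // decimal 30: the RECORD SEPARATOR control character
    // char d = '30'; // multi-character literal: implementation-defined value, usually not what you want

    std::cout << (int)a << ' ' << (int)b << ' ' << (int)c << std::endl; // prints: 48 48 30
}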

I don't know quite what you expected to add up to 123 (and I don't know where you're getting that 49 50 51 you talk about) but the following shows adding char values and outputting as an integer.

#include <cstdio>
#include <iostream>

typedef unsigned char uchar;

using namespace std;

int main()
{
    char in;

    const uchar i0 = 30;
    const uchar i1 = 31;
    const uchar i2 = 32;
    const uchar i3 = 33;
    const uchar i4 = 34;
    const uchar i5 = 35;
    const uchar i6 = 36;
    const uchar i7 = 37;
    const uchar i8 = 38;
    const uchar i9 = 39;

    uchar auChars[10] = {i0, i1, i2, i3, i4, i5, i6, i7, i8, i9};
    uchar* CharBuffer = &auChars[0];

    int n = 0;

    for (int i = 0; i <= 3; i++){

        putchar(CharBuffer[i]); // raw values 30..33: mostly non-printing control characters
        n += CharBuffer[i];     // accumulates the numeric char values

    }

    puts("\n");

    cout << "n = " << n << endl; // output: n = 126, the sum 30+31+32+33

    cin >> in;
}

It is a start, thank you.

The characters that the hex numbers 31, 32, and 33 represent are "1", "2" and "3". I want a value of 123, that is where it comes from, and the decimal values of those are 49, 50 and 51.

Thanks again.

Still after suggestions if anyone can help, and my needs are not that weird.

<edit>

Do you think I will have to somehow convert the chars to a string and then convert that to an int?

I don't like the sound of that, as it seems like a lengthy process :(

Hex numbers begin with 0x

#include <cstdio>
#include <iostream>

typedef unsigned char uchar;

using namespace std;

int main()
{
    char in;

    const uchar i0 = 0x30;
    const uchar i1 = 0x31;
    const uchar i2 = 0x32;
    const uchar i3 = 0x33;
    const uchar i4 = 0x34;
    const uchar i5 = 0x35;
    const uchar i6 = 0x36;
    const uchar i7 = 0x37;
    const uchar i8 = 0x38;
    const uchar i9 = 0x39;

    uchar auChars[10] = {i0, i1, i2, i3, i4, i5, i6, i7, i8, i9};
    uchar* CharBuffer = &auChars[0];

    int n = 0;

    for (int i = 0; i <= 3; i++){

        putchar(CharBuffer[i]); // prints "0123": 0x30..0x33 are the characters '0'..'3'
        n += CharBuffer[i];     // accumulates the numeric char values

    }

    puts("\n");

    cout << "n = " << n << endl; // output: n = 198, the sum 48+49+50+51

    cin >> in;
}

I know what hex numbers begin with. I did say in my original post that that is not how the array is populated, so changing the rest of my code just to get a desired output does not make any sense.

I used putchar() to illustrate what I need from these values; the uchars will always be uchars, I cannot change that.

My problem is getting my desired int from them.

Thank you once again for your input Moschops, I appreciate it.

Are you asking how to manually take a character string, such as "30" and interpret that as hex?

Reverse the string, and then:

int sum = 0;
for (int i = 0; i < lengthOfString; ++i)
{
  sum += (reversedString[i] - '0') * (int)pow(16, i); // note: +=, and digits '0'-'9' only
}
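
As a self-contained sketch without the reversal or the floating-point pow call (the helper name hexDigitsToInt is just for illustration, and it assumes the string contains only the digits '0'-'9'):

#include <cstdio>

// Interpret a NUL-terminated string of digit characters as a hex number,
// e.g. "30" -> 0x30 = 48. Digits '0'-'9' only; 'A'-'F' would need extra handling.
int hexDigitsToInt(const char* s)
{
    int sum = 0;
    for (int i = 0; s[i] != '\0'; ++i)
        sum = sum * 16 + (s[i] - '0'); // shift the total left one hex digit, add the next
    return sum;
}

int main()
{
    printf("%d\n", hexDigitsToInt("30")); // prints 48
    return 0;
}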

Here's what I ended up using for now.

void testFunc00(){

	const uchar i0 = '30';
	const uchar i1 = '31';
	const uchar i2 = '32';
	const uchar i3 = '33';
	const uchar i4 = '34';
	const uchar i5 = '35';
	const uchar i6 = '36';
	const uchar i7 = '37';
	const uchar i8 = '38';
	const uchar i9 = '39';
	
	uchar auChars [10] = {i0, i1, i2, i3, i4, i5, i6, i7, i8, i9};
	uchar* CharBuffer = &auChars[0];
	stringstream ss (stringstream::in | stringstream::out);
	char in;
	int n = 0;

	for (int i = 1; i <= 3; i++){

		ss << CharBuffer[i];
		
	}

	ss >> n;

	cout << "n = " << n << endl; // output 123 

	cin >> in;

}

No Moschops, the '30' is the hex value of the char '0'.

If anyone has a better or, more importantly, faster method than this,
I'd sure appreciate seeing it, as I know this will be costly in my
program :(
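
For what it's worth, the stringstream round trip can be avoided with plain arithmetic on the character codes. A minimal sketch, assuming the buffer holds ASCII digit characters (the helper name digitsToInt is just for illustration):

#include <iostream>

typedef unsigned char uchar;

// Build an int from consecutive ASCII digit characters, e.g. '1','2','3' -> 123.
int digitsToInt(const uchar* buf, int first, int last) // inclusive index range
{
    int n = 0;
    for (int i = first; i <= last; ++i)
        n = n * 10 + (buf[i] - '0'); // multiply the running total by 10, add the next digit
    return n;
}

int main()
{
    uchar auChars[4] = { 0x30, 0x31, 0x32, 0x33 };
    std::cout << digitsToInt(auChars, 1, 3) << std::endl; // prints 123
}

Called as digitsToInt(CharBuffer, 1, 3), this gives 123 with no string construction at all.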

Hex string to int?

#include <iostream>
#include <string>
#include <cstdlib>


using namespace std;

int main()
{
  string s;
  cout << "Enter a hex number to convert to decimal: ";
  cin >> s; cout << endl;
  
  char * p;
  long n = strtol( s.c_str(), & p, 16 );
  if ( *p != 0 ) {
    cout << "not a number" << endl;
  }
  else {
    cout << n << endl;
  }
}

Thanks again for your offering; thing is, my type is not std::string, it is unsigned char, so I would be making an extra conversion I think, first to string and then to int.

I will have to do some testing to find the speed of any attempts to decide on the quickest.

Thank you so much.

See this bit here?

long n = strtol( s.c_str(), & p, 16 );

That's where the string is turned into a char array, and then into the output int. If you've already got a char array, you don't need the string. That's there because it's easier to input a string for the demo program.

Maybe this makes it clearer:

#include <stdio.h>
#include <stdlib.h>
 
int main ()
{
  const char* szNumbers = "30";
  char* pEnd;

  long li1 = strtol (szNumbers, &pEnd, 16); // pEnd is set to the first unparsed character

  printf ("The decimal equivalent is: %ld\n", li1);
  return 0;
}
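
And applying that directly to the uchar buffer from earlier, a sketch (strtol wants a NUL-terminated string, so the digit characters are copied into a small temporary first; base 10 this time, since the goal was the decimal value 123):

#include <cstdio>
#include <cstdlib>

typedef unsigned char uchar;

int main()
{
    uchar auChars[4] = { 0x30, 0x31, 0x32, 0x33 };

    // Copy the three digit characters into a NUL-terminated buffer: "123".
    char tmp[4] = { (char)auChars[1], (char)auChars[2], (char)auChars[3], '\0' };

    long n = strtol(tmp, NULL, 10); // NULL: the end pointer isn't needed here
    printf("n = %ld\n", n);         // prints: n = 123
    return 0;
}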

How certain are you that an unsigned char is the same as a char, and that it can hold TCP octets where each bit of a byte might need manipulation?

I am not so sure myself.

The C standards indicate that "char" may be either a "signed char" or an "unsigned char". That said, if it's being used to hold a value between 0 and 255 it doesn't matter what it is - all char types can hold at least 256 different values; you just need to know what the chosen encoding mechanism is.

I'll definitely give that a go then because time is precious and I need to squeeze all I can.

I have read conflicting reports on the char/uchar front, though; every article I've read contradicts the last, so I guess there is really not much difference.

Thanks.

I have read conflicting reports on the char/uchar front, though

I'm not surprised. It is implementation dependent and at the whim of the compiler writer, who can happily change it around between versions if they so choose.

The C standards indicate that "char" may be either a "signed char" or an "unsigned char".

Yes.

That said, if it's being used to hold a value between 0 and 255 it doesn't matter what it is - all char types can hold at least 256 different values; you just need to know what the chosen encoding mechanism is.

No. The char type is allowed a minimum size of 8 bits, which is very common. In that case the range of signed char is at least -127 to +127 (on a two's complement system, -128 to +127).

So while you're right that all char types can hold at least 256 different values, which values those are differs between signed and unsigned. If you have a signed char as described above and try to assign anything greater than 127, the result is at best implementation-defined, and signed arithmetic overflow is outright undefined behavior.

I have read conflicting reports on the char/uchar front, though; every article I've read contradicts the last, so I guess there is really not much difference.

There's really no conflict. Vanilla char is either signed or unsigned as decided upon by the compiler. For that particular compiler it won't change, but if you want to write portable code, it's best not to assume that char is signed or unsigned and use signed char or unsigned char explicitly when the difference matters to your program.
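
A minimal sketch of where the difference actually shows up (the output for the signed case is implementation-defined, but on a typical two's complement machine it looks like this):

#include <iostream>

int main()
{
    // The same bit pattern, 0xF0, viewed through the two explicit char types.
    signed char sc = (signed char)0xF0; // implementation-defined conversion; typically -16
    unsigned char uc = 0xF0;            // always 240 with an 8-bit char

    std::cout << (int)sc << ' ' << (int)uc << std::endl; // typically prints: -16 240
}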

Yes....No....So while you're right that all char types can hold at least 256 different values,

That's a very long way round of saying you agree with me that the char type can hold at least 256 different values and that you need to know what the chosen encoding mechanism is.
