So, I am learning C and have this newbie problem. This is my snippet:
#include <stdio.h>
int main() {
    char x;
    unsigned y;

    x = 0xFF;
    y = 0xFFFF;

    printf("Size of char: %d-bits\n", sizeof(char)*4);
    printf("Size of int: %d-bits\n", sizeof(int)*4);
    printf("\n0x%X in decimal: %d\n", x, x);
    printf("\n0x%X in decimal: %d\n", y, y);
    return 0;
}
This prints:
Size of char: 4-bits
Size of int: 16-bits
0xFFFFFFFF in decimal: -1
0xFFFF in decimal: 65535
Questions:
a. Why does the program print an 8-digit hex number (0xFFFFFFFF) when the input is only 0xFF, and when sizeof() reports char as only 4 bits? (See my first test snippet below.)
b. The program prints 0xFFFFFFFF as -1 in decimal. However, when I use a hex-to-decimal converter, I get 0xFF as 255 and 0xFFFFFFFF as 4294967295. Why the difference? (See the second snippet below.)
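For question a, here is a smaller test I put together while poking at this. My guess is that the char gets converted to a plain int before printf ever sees it, and the cast to unsigned char is just me experimenting, so treat this as a sketch of what I think is going on rather than something I'm sure about:

#include <stdio.h>

int main(void) {
    char x = 0xFF;

    /* x seems to get promoted to int when handed to printf; since char
       appears to be signed on my machine, 0xFF stored in x becomes -1,
       so all 32 bits come out set */
    printf("no cast:   0x%X\n", x);               /* prints 0xFFFFFFFF here */

    /* casting to unsigned char first keeps only the low 8 bits */
    printf("with cast: 0x%X\n", (unsigned char)x); /* prints 0xFF here */

    return 0;
}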
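And for question b, this is what I was playing with to compare the two readings of the same bits (the variable name "bits" is just mine, and I assume the -1 result depends on the machine using two's complement):

#include <stdio.h>

int main(void) {
    unsigned int bits = 0xFFFFFFFFu;   /* all 32 bits set */

    /* the same bit pattern, read two different ways */
    printf("with %%u: %u\n", bits);       /* 4294967295, what the converter shows */
    printf("with %%d: %d\n", (int)bits);  /* -1 on my machine (two's complement);
                                             the conversion to int is
                                             implementation-defined, as far as I know */
    return 0;
}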