I found this code in a book, and I'm finding it confusing.

#include <stdio.h>

int main(void)
{
	int arr[3] = {2, 3, 4};
	char *p;

	p = (char *)arr;          // line A
	p = (char *)((int *)p);   // casts p to int* and straight back; p is unchanged
	printf("%d", *p);
	p = (char *)(p + 1);      // line B
	printf("%d\n", *p);
	return 0;
}

The answer is: 2 0
but I can't figure out how/why that '0' gets printed.
At line A, does p point to 2 or to a garbage value? (since p is a character pointer)
And what is line B doing?

At line B, when the pointer is incremented by 1, why does it not print the next element of the array, i.e. 3? :|

p is a pointer to char, not int, so when you increment it by 1, it advances by 1 byte instead of by the number of bytes in an int. I think you should throw that book away if that program is a sample of its quality.
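
To see the scaling concretely, here's a minimal sketch (not from the book; it needs nothing beyond standard C) that prints how many bytes each pointer type advances per increment:

#include <stdio.h>

int main(void)
{
	int arr[3] = {2, 3, 4};
	int  *ip = arr;
	char *cp = (char *)arr;

	/* Pointer arithmetic is scaled by the pointed-to type:
	   ip + 1 advances sizeof(int) bytes, cp + 1 advances 1 byte. */
	printf("int*  step: %d bytes\n", (int)((char *)(ip + 1) - (char *)ip));
	printf("char* step: %d bytes\n", (int)((cp + 1) - cp));
	return 0;
}

On a system with 4-byte ints this prints 4 and 1; on 16-bit Turbo C it would print 2 and 1.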

That's what I thought, but then why does it print 0 rather than a garbage value?

A char is typically 8 bits.
An int on a 32-bit machine is 32 bits.
If an int contains a 2, it actually contains {2,0,0,0} (in a 32-bit system). Strictly speaking, that order holds on a little-endian machine, where the low-order byte comes first; that's why the byte after the 2 is a 0 rather than garbage.
Try this code:

#include <stdio.h>

int main(void)
{
	unsigned int i, j;
	int arr[3] = {2, 3, 4};
	char *p;

	for (i = 0; i < 3; i++) {
		printf("Integer:%d\n", arr[i]);

		for (j = 0; j < sizeof(int); j++) {	// sizeof(int) gives the exact size; I can't assume it's 4
			p = (char *)&arr[i];	// point p at the current element
			p += j;			// move p to the current byte
			printf("Byte[%u]:%d\n", j, *p);	// %u because j is unsigned
		}
		printf("*********\n");
	}
	return 0;
}

Why do I not know the size of int?
An int is usually the native word size of the platform the program is compiled for, which can be 16, 32, or 64 bits.
Note that just because I run a program on a 64-bit system doesn't mean I get 64 bits: if it's a 32-bit program, int is a 32-bit number.
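
If you want to see what your own compiler uses, here's a quick sketch (CHAR_BIT, the number of bits per byte, comes from <limits.h>; the casts keep %u correct since sizeof yields a size_t):

#include <stdio.h>
#include <limits.h>

int main(void)
{
	/* sizeof reports bytes; CHAR_BIT is bits per byte (8 on virtually all systems) */
	printf("int: %u bytes, %u bits\n",
		(unsigned)sizeof(int),
		(unsigned)(sizeof(int) * CHAR_BIT));
	return 0;
}

On 16-bit Turbo C this reports 2 bytes / 16 bits; on a typical modern desktop, 4 bytes / 32 bits.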

Oh, I'm sorry; in my case int is 2 bytes.
I'm running it on the Turbo C compiler, which is 16-bit.
That means it would contain {2,0}, right?

By the way, thanks a lot!! You are the dude! :D
This is what I didn't know:

"If an int contains a 2, it actually contains {2,0,0,0} (in a 32-bit system)"

and it cleared up the concept for me. :)

Fantastic. I'm glad to have helped.
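
And yes: with a 2-byte int on a little-endian machine, a 2 is stored as {2,0}. If you ever want to confirm it without Turbo C, here's a sketch that uses int16_t from <stdint.h> (C99) as a stand-in for a 2-byte int; the byte order it prints assumes little-endian:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int16_t n = 2;	/* 2 bytes wide, like Turbo C's int */
	unsigned char *p = (unsigned char *)&n;
	unsigned int j;

	/* On a little-endian machine this prints Byte[0]:2 then Byte[1]:0 */
	for (j = 0; j < sizeof n; j++)
		printf("Byte[%u]:%d\n", j, p[j]);
	return 0;
}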
