Hi

when I send 1000 bytes with Socket.Send, I always receive 1000 bytes with Socket.Receive.
I know this shouldn't be true in general, but since 1000 bytes is a small amount, can we be sure it will not be fragmented?

I tested it in Visual Studio: up to 8 KB, every amount I sent arrived in a single receive of exactly the same size; beyond that it didn't.

If enough data is provided, is there a minimum amount on Socket.Available, for example because of packet size? I have heard that the minimum Ethernet frame size is 64 bytes, and the default MTU (maximum transmission unit) in XP is 1500 bytes.

I am using TCP over Ethernet, and the answer is very critical to me.

Thanks

I don't understand your question. The MTU is set to 1500 on 99.9% of machines (Ethernet) or 576 for dial-up, but part of that 1500-byte packet is reserved for protocol headers (source IP, destination IP, TTL, TCP ports, sequence numbers, and so on). With a 20-byte IP header and a 20-byte TCP header, that leaves ~1460 bytes of application data per segment over IPv4.

Why does it matter if the packet is fragmented? You run the risk of having your application work on the majority of computers and not work on a handful if you start setting advanced network options like this. Just buffer the receive and read it in 1000-byte increments if that is your desired application behavior (see the sketch below). There is also an IP don't-fragment (DF) flag you can set, but if the path MTU (PMTU) is below the unfragmented packet size, the data cannot be sent and you will experience a connection error.
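A minimal sketch of that buffered-receive approach in C#; the loop is the important part, and the class and method names here are just illustrative:

    using System;
    using System.Net.Sockets;

    static class SocketHelpers
    {
        // Read exactly `count` bytes from a connected TCP socket.
        // TCP is a byte stream, so a single Receive may return fewer
        // bytes than requested; loop until the full message arrives.
        public static void ReceiveExactly(Socket socket, byte[] buffer, int count)
        {
            int received = 0;
            while (received < count)
            {
                int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
                if (n == 0)
                    throw new SocketException((int)SocketError.ConnectionReset); // peer closed
                received += n;
            }
        }
    }

With this, the application can call ReceiveExactly with count = 1000 and get its fixed-size message regardless of how the bytes were split into packets on the wire.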


The problem is that it's prewritten code, and the decision based on the answer is very important.
The client and server are connected directly by an Ethernet cable.

I don't understand what you're asking. Just write the full 1000 bytes to the socket's buffer and flush it, which will force the data to be sent.
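For completeness, the send side can be wrapped the same way. This is only a sketch: the blocking byte[] overload of Socket.Send normally transmits everything, but the offset/size overload is documented as possibly accepting fewer bytes than requested, so the loop guards against partial writes (names other than Socket.Send and SocketFlags are illustrative):

    using System;
    using System.Net.Sockets;

    static class SendHelpers
    {
        // Hand the whole buffer to the TCP stack, looping in case
        // Send accepts only part of it in one call.
        public static void SendAll(Socket socket, byte[] buffer)
        {
            int sent = 0;
            while (sent < buffer.Length)
            {
                sent += socket.Send(buffer, sent, buffer.Length - sent, SocketFlags.None);
            }
        }
    }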
