The socket implementation relies on fixed-size internal send and receive buffers that are allocated as needed from contiguous, non-paged pool memory. The default size of each buffer is 8 KB. Incoming network data is placed into the socket's internal receive buffer.
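The short sketch below, a minimal example assuming an already-created SOCKET s, shows how an application could query the current size of this internal receive buffer with getsockopt(SO_RCVBUF); the function name is illustrative only.

    #include <winsock2.h>

    int QueryReceiveBufferSize(SOCKET s)
    {
        int size = 0;
        int len  = sizeof(size);

        /* Returns the current receive-buffer size in bytes (8 KB by default,
           as noted above). */
        if (getsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&size, &len) == SOCKET_ERROR)
            return -1;

        return size;
    }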
Using one of the peek methods, either recv/WSARecv(..., MSG_PEEK) or ioctlsocket(FIONREAD, ...), to determine the amount of data in the receive buffer is highly inefficient because the system must lock the data and count it. While the system does this, the network is likely still trying to deliver more data into the buffer. Peeking also does not remove any data, so the buffer can fill to its storage limit. When that happens, the network data-flow rate is throttled and the entire data-transmission process becomes inefficient.
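The following sketch illustrates the two peek idioms described above, assuming a connected SOCKET s; the function name and buffer size are hypothetical.

    #include <winsock2.h>

    u_long PeekPendingBytes(SOCKET s)
    {
        char   peekBuf[8192];
        u_long pending = 0;

        /* MSG_PEEK copies pending data but leaves it in the socket's receive buffer. */
        int copied = recv(s, peekBuf, (int)sizeof(peekBuf), MSG_PEEK);
        (void)copied;

        /* FIONREAD reports how many bytes are queued without copying any. */
        if (ioctlsocket(s, FIONREAD, &pending) == SOCKET_ERROR)
            pending = 0;

        /* Neither call drains the buffer, so incoming data keeps accumulating. */
        return pending;
    }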
Polling a stream socket with peek operations until a certain number of bytes or a complete "message" arrives is poor practice. A stream socket, such as TCP, does not preserve message boundaries; it provides only a data stream, so the largest message size an application can ever depend on is one byte. Because of the way the internal buffering is designed, code that peeks and waits until a complete "message" arrives might never succeed on stream-based protocols in Winsock when the data straddles multiple internal buffer boundaries. The peek operation reports only the number of bytes up to the first buffer boundary; the bytes in the remaining buffers might never be reported, so any algorithm that depends on the peeked count being accurate receives an incorrect value. Subsequent peek attempts will not reveal the "hidden" data, although it can still be retrieved with a normal receive call.
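The following sketch, with a hypothetical MESSAGE_SIZE and function name, shows the polling anti-pattern described above; on a stream socket this loop can spin forever because the peeked count may stop at an internal buffer boundary.

    #include <winsock2.h>

    #define MESSAGE_SIZE 512   /* hypothetical application "message" size */

    int WaitForWholeMessage_AntiPattern(SOCKET s, char *msg)
    {
        u_long pending = 0;

        do {
            if (ioctlsocket(s, FIONREAD, &pending) == SOCKET_ERROR)
                return SOCKET_ERROR;
        } while (pending < MESSAGE_SIZE);   /* may never become true */

        /* Only reached if the peeked count ever reports a full message. */
        return recv(s, msg, MESSAGE_SIZE, 0);
    }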
The best approach for a stream-based protocol socket is to drain data into application-allocated buffer space as soon as it arrives. This keeps the socket buffers open to a steady network data flow while the application parses the data, resulting in much better network performance.
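The sketch below, assuming a connected SOCKET s and a hypothetical ParseMessages helper, shows one way to drain arriving data immediately into an application buffer and perform message framing there; the buffer size is an assumption for illustration.

    #include <winsock2.h>

    /* Hypothetical parser: consumes complete messages from appBuf and updates
       *used to reflect any leftover partial message. */
    void ParseMessages(char *appBuf, int *used);

    int ReceiveLoop(SOCKET s)
    {
        char appBuf[65536];
        int  used = 0;

        for (;;) {
            /* recv returns whatever bytes are currently available, up to the
               space offered, emptying the socket's internal buffer promptly. */
            int n = recv(s, appBuf + used, (int)sizeof(appBuf) - used, 0);
            if (n == 0)
                return 0;                    /* connection closed by the peer */
            if (n == SOCKET_ERROR)
                return WSAGetLastError();

            used += n;
            ParseMessages(appBuf, &used);    /* application-level framing */
        }
    }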