The issue occurs because of a difference in the way the Microsoft and MIT Kerberos implementations encode and interpret the KVNO value. In a Microsoft Kerberos environment, when RODCs are in use, the 4 bytes of the KVNO are actually interpreted as two 2-byte values. For example, the decimal KVNO 2870149121 in hex is AB 13 00 01. The first pair of octets represents the RODC's unique ID and the second pair represents the version number of the key. Together these uniquely identify to the KDC which principal and secret were used to create the key. In an MIT-based Kerberos implementation the KVNO is treated as a single 32-bit value; however, this in itself does not cause an issue. Crucially, the encoding of the value differs between the Microsoft and MIT Kerberos implementations: MIT Kerberos encodes and decodes the entire KVNO as a 32-bit value using ASN.1 DER rules, whereas the Windows KDC decodes only the lower 16 bits of the KVNO as an ASN.1 DER integer and accepts the upper 16 bits as the RODC ID without decoding.
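The Windows-side split described above can be illustrated with a short Python sketch (the KVNO value is the example from the text; the helper name is illustrative, not taken from any Kerberos implementation):

```python
def split_kvno(kvno: int) -> tuple[int, int]:
    """Split a 32-bit Windows KVNO into the RODC unique ID (upper
    16 bits) and the key version number (lower 16 bits)."""
    rodc_id = (kvno >> 16) & 0xFFFF   # upper two octets: RODC unique ID
    key_version = kvno & 0xFFFF       # lower two octets: key version
    return rodc_id, key_version

rodc_id, key_version = split_kvno(2870149121)   # hex AB 13 00 01
print(hex(rodc_id), key_version)                # 0xab13 1
```
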
Due to the differences in the encoding/decoding behaviour, an MIT-based Kerberos client will prepend a 0 byte to the KVNO value. This is because it encodes the entire KVNO strictly as an ASN.1 DER integer using two's-complement notation. In this notation the most significant bit of the encoding carries a negative weight, so the leading bit of the encoded value must be 0 if the value is to be decoded as positive. This means that when the most significant bit of an RODC unique ID is 1, i.e. when the RODC ID is 0x8000 (decimal 32768) or greater, an MIT implementation will prepend a 0 byte and thus generate a 5-byte KVNO.
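The effect of DER's minimal-length, two's-complement integer encoding can be sketched in a few lines of Python. This is an illustrative helper for the content octets only (tag and length omitted), not code from MIT Kerberos:

```python
def der_uint_content(n: int) -> bytes:
    """Content octets of an ASN.1 DER INTEGER for a non-negative value.

    DER uses two's complement with the minimal number of octets, so a
    leading 0x00 must be prepended whenever the top bit of the first
    octet would otherwise be 1 (which would decode as a negative value).
    """
    body = n.to_bytes(max(1, (n.bit_length() + 7) // 8), "big")
    if body[0] & 0x80:           # top bit set: would decode as negative
        body = b"\x00" + body    # prepend 0x00 to keep the value positive
    return body

# RODC ID 0x7FFF: top bit clear, the KVNO fits in 4 octets
print(der_uint_content(0x7FFF0001).hex())   # 7fff0001
# RODC ID 0xAB13 (>= 0x8000): top bit set, a fifth octet is prepended
print(der_uint_content(0xAB130001).hex())   # 00ab130001
```

A Windows KDC reading only the lower 16 bits as a DER integer never produces this extra octet, which is where the two implementations diverge.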