By default, Visual Basic for Applications uses the Double data type for each member of an expression contained within an Int function. The Double data type is a floating-point data type that is used to store both integer and decimal values. The problem is that the computer must store the value in binary, rather than in decimal, format. However, not all decimal fractions have exact binary equivalents; therefore, the value interpreted by the computer is going to be slightly less or slightly more than the actual value.
For numbers whose binary equivalents are slightly larger than the actual value, the Int function returns the expected result. For numbers whose binary equivalents are slightly less than the actual value, the Int function returns one number less than the expected result.
In the example shown in the Steps to Reproduce Behavior section later in this article, the value 59.3 causes the problem (the "100" in the expression is a whole number, and therefore is exempt from the problem). The computer does not use the actual value 59.3 in the calculation, but rather its closest binary equivalent. The closest binary equivalent to 59.3 is 59.29999999999999, which is slightly less than the actual value. This means that the expression actually computed inside the Int function is (59.29999999999999 * 100), which results in 5929.999999999999, a number slightly smaller than 5930. Because the Int function truncates the number instead of rounding it, the result of Int(5929.999999999999) is 5929.
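You can observe the truncation behavior on its own in the Immediate window. The following lines are a minimal sketch that uses the numbers from this example; type each line and then press ENTER:
?Int(5929.999999999999)
?Int(5930)
The first expression returns 5929 because Int simply discards the fractional portion; the second returns 5930.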
By wrapping each member of the expression in the CDec function, you are converting it from a Double to a Decimal data type. Decimal data types are stored as scaled integers, and therefore are exempt from the rounding errors associated with floating-point data types.
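As a quick check, the following Immediate window line is a sketch of that workaround applied to the expression above; it returns the expected result of 5930:
?Int(CDec(59.3) * CDec(100))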
Steps to Reproduce Behavior
- Open the sample database Northwind.mdb.
- Press CTRL+G to open the Immediate window in the Visual Basic Editor.
- Type the following expression in the Immediate window, and then press ENTER:
?Int(59.3 * 100)
Note that the result returned is 5929, which is unexpected. Because 59.3 * 100 is 5930, the expected result of the Int function is 5930.
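If you prefer to compare the two approaches from a standard module rather than the Immediate window, the following procedure is a minimal sketch (the procedure name DemoIntRounding is illustrative only and is not part of Northwind.mdb):
Sub DemoIntRounding()
    ' Double arithmetic: the intermediate product is a value
    ' slightly less than 5930, so Int truncates it to 5929.
    Debug.Print Int(59.3 * 100)

    ' Decimal arithmetic: CDec stores each value as a scaled integer,
    ' so the product is exactly 5930 and Int returns 5930.
    Debug.Print Int(CDec(59.3) * CDec(100))
End Sub
Both results are printed to the Immediate window when you run the procedure.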