Why do float and double not keep the 0 in decimal places? [duplicate]

float and double represent only the values of numbers. “1” and “1.0” are both numerals for the same number, so 1 is the correct value of the variable you set to 1.0. float and double do not record the original numeral used to set their value, nor do they record how much accuracy (relative to some ideal mathematical value) is present. The “1” you see as output is a result of default formatting. Other formatting options are available, but you must specify them yourself.
