Adjusting decimal precision, .net

Preserving trailing zeroes like this was introduced in .NET 1.1 for stricter conformance with the ECMA CLI specification. There is some info on this on MSDN. You can adjust the precision as follows: use Math.Round (or Ceiling, Floor, etc.) to decrease precision (b from c); multiply by 1.000… (with the number of decimals … Read more

How do you round a double in Dart to a given degree of precision AFTER the decimal point?

See the docs for num.toStringAsFixed(). String toStringAsFixed(int fractionDigits) returns a decimal-point string representation of this, converting this to a double before computing the string representation. If the absolute value of this is greater than or equal to 10^21, then this method returns an exponential representation computed by this.toStringAsExponential(). Example: 1000000000000000000000.toStringAsExponential(3); // 1.000e+21 Otherwise the result is the … Read more

C++ floating point precision [duplicate]

To get the correct results, don’t set precision greater than available for this numeric type:

#include <iostream>
#include <limits>

int main() {
    double a = 0.3;
    std::cout.precision(std::numeric_limits<double>::digits10);
    std::cout << a << std::endl;
    double b = 0;
    for (char i = 1; i <= 50; i++) {
        b = b + a;
    };
    std::cout.precision(std::numeric_limits<double>::digits10);
    std::cout << … Read more

How to create a high resolution timer in Linux to measure program performance?

Check out clock_gettime, which is a POSIX interface to high-resolution timers. If, having read the manpage, you’re left wondering about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC, see "Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?" See the following page for a complete example: http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/

#include <iostream>
#include <time.h>
using namespace std;

timespec diff(timespec start, timespec end);

int main() … Read more

How does JavaScript determine the number of digits to produce when formatting floating-point values?

The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.) JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + … Read more

Double precision – decimal places

An IEEE double has 53 significant bits (that’s the value of DBL_MANT_DIG in <cfloat>). That’s approximately 15.95 decimal digits (log10(2^53)); the implementation sets DBL_DIG to 15, not 16, because it has to round down. So you have nearly an extra decimal digit of precision (beyond what’s implied by DBL_DIG==15) because of that. The nextafter() function … Read more