### Decimal precision in Java

22.10.2013

It is a well-known source of confusion about decimal precision in Java that 0.1 + 0.1 + 0.1 prints 0.30000000000000004. In general this is not a bug; it is simply how the IEEE 754 double type works: 0.1 has no exact binary representation, so the small rounding errors accumulate.
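A minimal demonstration of this behavior (the class name here is just illustrative):

```java
class PrecisionDemo {
    public static void main(String[] args) {
        // Each 0.1 is the nearest representable double, not exactly 0.1,
        // so the tiny errors add up and show in the printed result.
        System.out.println(0.1 + 0.1 + 0.1); // prints 0.30000000000000004
    }
}
```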

You can control the displayed precision with DecimalFormat. The pattern is a String that can consist of zeros (0), each of which prints a digit, or 0 if no digit is present, and hashes (#), each of which prints a digit, or nothing if no digit is present.

```java
import java.text.DecimalFormat;

class DecimalTest {
    public static void main(String[] args) {
        DecimalFormat df = new DecimalFormat("#.##");
        double x = 0.1 + 0.1 + 0.1;
        System.out.println(df.format(x));
    }
}
```
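To make the difference between 0 and # concrete, here is a small sketch comparing the two pattern characters on the same value (the explicit US locale is my addition, to pin the decimal separator to a dot):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

class PatternDemo {
    public static void main(String[] args) {
        double x = 0.3;
        DecimalFormatSymbols us = DecimalFormatSymbols.getInstance(Locale.US);
        // '0' always prints a digit, padding with zeros if needed
        System.out.println(new DecimalFormat("0.000", us).format(x)); // 0.300
        // '#' drops positions where there is no significant digit
        System.out.println(new DecimalFormat("#.###", us).format(x)); // 0.3
    }
}
```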

Or use BigDecimal, which can round a computed double to a fixed scale:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

class DecimalTest {
    public static void main(String[] args) {
        double d = 0.1 + 0.1 + 0.1;
        // Double.toString gives the shortest decimal representation,
        // avoiding the full binary expansion of the double
        BigDecimal bd = new BigDecimal(Double.toString(d));
        bd = bd.setScale(5, RoundingMode.HALF_UP);
        System.out.println(bd);
    }
}
```
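BigDecimal can also do the arithmetic itself, avoiding the double rounding error entirely. A minimal sketch (class name is illustrative), assuming the operands are constructed from strings so the decimal values stay exact:

```java
import java.math.BigDecimal;

class ExactSum {
    public static void main(String[] args) {
        // String constructor keeps 0.1 as an exact decimal value,
        // so the sum is exactly 0.3 with no correction needed
        BigDecimal sum = new BigDecimal("0.1")
                .add(new BigDecimal("0.1"))
                .add(new BigDecimal("0.1"));
        System.out.println(sum); // prints 0.3
    }
}
```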

You can also use NumberFormat:

```java
import java.text.NumberFormat;

public class DecimalTest {
    public static void main(String[] args) {
        double x = 0.1 + 0.1 + 0.1;
        // getInstance() returns a locale-aware formatter with a default
        // maximum of 3 fraction digits, which already rounds the value
        NumberFormat fmt = NumberFormat.getInstance();
        fmt.setMinimumIntegerDigits(1);
        fmt.setMinimumFractionDigits(2);
        System.out.println(fmt.format(x));
    }
}
```
###### Quote

> If you define the Problem correctly, you almost have the Solution.
>
> Steve Jobs