C – is any use of unsigned int just terrible coding practice? [closed]

The rule of thumb here is very simple: use unsigned types to represent unsigned values, and signed types to represent signed values. So, in reality, it is just the opposite: most of the time, gratuitous use of signed types is a terrible coding practice. I’d even go as far as to say that most integer types in code are supposed to be unsigned. Of course, the actual ratio will depend on the application area, but for combinatorial problems and related domains it is unsigned, unsigned and only unsigned.
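For illustration, here is a minimal sketch (the function and its names are mine, not from the question) of the kind of code where the quantities involved are inherently non-negative, so an unsigned type models them directly:

#include <stddef.h>

/* Counts, sizes and indices can never be negative, so size_t (an unsigned
   type) describes them exactly; a signed int here would only invite
   spurious "what if it is negative?" questions. */
size_t count_nonzero(const int *a, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; ++i)
        if (a[i] != 0)
            ++count;
    return count;
}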

Your example above with wrap-around behavior simply demonstrates a typical newbie coding error. In essence it is no different from the popular

double d = 1/2;

followed by something like “why is my d not 0.5?”.
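For comparison, a minimal sketch of the kind of wrap-around surprise meant above (not the original code from the question): unsigned arithmetic wraps modulo 2^N by definition, so the result below is UINT_MAX rather than -1.

#include <limits.h>
#include <stdio.h>

int main(void)
{
    unsigned a = 2, b = 3;
    unsigned diff = a - b;                         /* wraps: diff == UINT_MAX, not -1 */
    printf("%u %d\n", diff, diff == UINT_MAX);     /* prints UINT_MAX and 1 */
    return 0;
}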

Note also that in the domain of integral calculations unsigned types are typically more efficient than signed ones (the C rounding rules for division differ from the typical machine-supported ones, which has a negative impact on the performance of signed types). In mixed integer/floating-point calculations signed integer types might have an edge (FPU instruction sets typically support signed integers directly, but not unsigned ones).
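As a rough illustration of the division point (a sketch, not a benchmark): a typical compiler turns the unsigned division below into a single right shift, while the signed one needs extra instructions, because C requires division to round toward zero (e.g. -3 / 2 == -1) and a bare arithmetic shift alone would round toward negative infinity.

unsigned udiv2(unsigned x) { return x / 2u; }   /* usually compiles to a single shift */
int      sdiv2(int x)      { return x / 2; }    /* shift plus a correction for negative x */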

great number of functions convert unsigned char to int simply to dodge use of an unsigned data type

Nope. Conversion to int is a rudiment of the bygone era when the C language had no function prototypes. All functions were declared without prototypes (or left undeclared), which triggered the automatic promotion of smaller integer arguments to int. Once prototypes for the standard functions appeared, they were intentionally tailored to remain compatible with that legacy behavior. This is why you will never see a “classic” standard library function that accepts [signed/unsigned] char or [signed/unsigned] short arguments (or float, for that matter). Signedness has nothing to do with it.
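A small example of that legacy-compatible shape, using only standard headers: toupper() and putchar() both take and return int, because in pre-prototype C any char argument was promoted to int at the call site, and the later prototypes were written to match.

#include <ctype.h>
#include <stdio.h>

int main(void)
{
    unsigned char c = 'a';
    putchar(toupper(c));    /* c is promoted to int before each call */
    putchar('\n');
    return 0;
}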
