c++ - Why is unsigned integer overflow defined behavior but signed integer overflow isn't?


Unsigned integer overflow is well defined by both the C and C++ standards. For example, the C99 standard (§6.2.5/9) states:

A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type.
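For illustration, here is a minimal example of that guaranteed modulo-2^N wraparound (the variable name is chosen for this sketch):

    #include <cstdint>
    #include <iostream>

    int main() {
        // Unsigned arithmetic wraps modulo 2^N, where N is the number of
        // value bits in the type. For a 32-bit unsigned type, the maximum
        // value plus one is reduced modulo 2^32, yielding 0.
        std::uint32_t x = 0xFFFFFFFFu; // largest 32-bit unsigned value
        x += 1;                        // well defined: wraps to 0
        std::cout << x << '\n';        // prints 0
    }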

However, both standards state that signed integer overflow is undefined behavior. Again, from the C99 standard (§3.4.3/1):

An example of undefined behavior is the behavior on integer overflow.

Is there a historical or (even better!) technical reason for this discrepancy?

The historical reason is that most C implementations (compilers) just used whatever overflow behaviour was easiest to implement with the integer representation they used. C implementations usually used the same representation used by the CPU, so the overflow behavior followed from the integer representation used by the CPU.

In practice, the representations of signed values may differ according to the implementation: one's complement, two's complement, or sign-magnitude. For an unsigned type there is no reason for the standard to allow variation, because there is only one obvious binary representation (the standard only allows binary representation).

Relevant quotes:

C99 6.2.6.1:3:

Values stored in unsigned bit-fields and objects of type unsigned char shall be represented using a pure binary notation.

C99 6.2.6.2:2:

If the sign bit is one, the value shall be modified in one of the following ways:

— the corresponding value with sign bit 0 is negated (sign and magnitude);

— the sign bit has the value −(2^N) (two's complement);

— the sign bit has the value −(2^N − 1) (ones' complement).
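To make the difference concrete, the following sketch interprets the same 4-bit pattern under each of the three permitted representations (the helper functions are invented for this illustration, not part of any standard API):

    #include <iostream>

    // Interpret a 4-bit pattern (1 sign bit + 3 value bits, so N = 3)
    // under the three representations C99 permits. Helper names are
    // invented for this sketch.
    int sign_magnitude(unsigned bits)  { return (bits & 0x8) ? -int(bits & 0x7) : int(bits & 0x7); }
    int ones_complement(unsigned bits) { return (bits & 0x8) ? int(bits & 0x7) - 7 : int(bits); } // sign bit = -(2^3 - 1)
    int twos_complement(unsigned bits) { return (bits & 0x8) ? int(bits & 0x7) - 8 : int(bits); } // sign bit = -(2^3)

    int main() {
        unsigned bits = 0xB; // binary 1011: sign bit set, value bits 011
        std::cout << "sign-magnitude:   " << sign_magnitude(bits)  << '\n'; // -3
        std::cout << "ones' complement: " << ones_complement(bits) << '\n'; // -4
        std::cout << "two's complement: " << twos_complement(bits) << '\n'; // -5
    }

The same bit pattern thus denotes three different values, so the standard could not pin down an overflow result without also pinning down the representation.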


Nowadays, all processors use two's complement representation, but signed arithmetic overflow remains undefined, and compiler makers want it to remain undefined because they use this undefinedness to help with optimization. See for instance this blog post by Ian Lance Taylor, or this complaint by Agner Fog and the answers to his bug report.
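As a sketch of the kind of optimization this enables (compiler behavior varies; GCC and Clang at -O2 are typical examples, not a guarantee):

    // Because signed overflow is undefined, the compiler may assume it
    // never happens, so this comparison can be folded to a constant.
    bool always_true(int x) {
        return x + 1 > x; // typically compiled to `return true;`
    }

    // With unsigned operands the wraparound is defined, so the comparison
    // must actually be evaluated: if x == UINT_MAX, x + 1 wraps to 0 and
    // the result is false.
    bool sometimes_false(unsigned x) {
        return x + 1 > x;
    }

The same assumption lets compilers prove that loops with signed induction variables terminate, which enables further transformations such as vectorization.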

