A computer program uses 3 bits to represent integers. When the program adds the decimal (base 10) numbers 5
and 3, the result is 0. Which of the following is the best explanation for the result?
We need to represent \(5+3=8\). With only 3 bits, the largest unsigned integer we can represent is:
$$ 2^2+2^1+2^0=4+2+1=7 $$
To represent 8, we would need 4 bits (1000):
$$ 1\cdot2^3+0\cdot2^2+0\cdot2^1+0\cdot2^0=8+0+0+0=8 $$
Since the fourth bit cannot be stored, only the three low-order bits (000) are kept, which is why the program reports 0. This is known as an overflow error.
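A minimal Python sketch, assuming unsigned 3-bit arithmetic in which any bits above the third are simply discarded, makes the wrap-around concrete:

```python
BITS = 3
MASK = (1 << BITS) - 1   # 0b111 = 7, the largest value 3 bits can hold

a, b = 5, 3
true_sum = a + b               # 8, which needs 4 bits (0b1000)
stored_sum = true_sum & MASK   # only the low 3 bits survive: 0b000 = 0

print(f"{a} + {b} = {true_sum} (binary {true_sum:04b})")
print(f"Stored in {BITS} bits: {stored_sum} (binary {stored_sum:03b})")
```

Running this prints `5 + 3 = 8 (binary 1000)` followed by `Stored in 3 bits: 0 (binary 000)`, mirroring the overflow described above.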