There is a possible complication when converting hexadecimal values to
decimal. Let’s start with the simple case, in which the hexadecimal value
represents an unsigned decimal value, and deal with the possibility of a
signed number later.
Hexadecimal numbers work the same way as the familiar decimal numbers, but
there are 16 digits instead of 10, so each column goes up by a power of 16.
For a four-digit number in base 10 we multiply the rightmost digit by 1,
which is 10 raised to the power 0. Working towards the left, the next digit
is multiplied by 10, or 10 raised to the power 1. The third digit is
multiplied by 100, or 10 squared, and the final digit by 1000, or 10 cubed.
Transferring that knowledge to base 16 numbers, the rightmost digit is still
multiplied by 1, although technically it’s 16 raised to the power 0 this
time. The second digit from the right is multiplied by 16, the third from
the right by 256 (16*16), and the leftmost digit by 4096 (16*16*16).
The additional piece of knowledge you need is the values for each of the
hexadecimal digits. Digits 0 through 9 represent the same values as their
decimal counterparts. For the digits 10-15, we use the letters A-F, so A is
10 and F is 15.
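If you want a program to do that digit lookup for you, here’s a tiny Python
sketch (the function name hex_digit_value is just something I made up for this
illustration):

    # Map a single hexadecimal digit character to its numeric value.
    # The string lists the digits in order, so each character's
    # position in the string is its value.
    def hex_digit_value(digit):
        return "0123456789ABCDEF".index(digit.upper())

    print(hex_digit_value("A"))   # 10
    print(hex_digit_value("F"))   # 15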
Incidentally, using letters as the digits beyond 9 is the standard convention
for bases larger than 10. Some languages, such as Ada, let you write numeric
literals in other bases (Ada accepts bases 2 through 16). Carrying the letters
on through Z extends the convention to any base up to base 36; in base 20, for
example, A through J stand for the values 10 through 19.
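Many standard libraries support the convention directly. Python’s built-in
int(), for example, accepts any base from 2 to 36, so you can check a base-20
or base-36 number without writing any conversion code yourself:

    # int() with an explicit base applies the letters-as-digits convention.
    print(int("A432", 16))   # 42034
    print(int("AJ", 20))     # 10 * 20 + 19 = 219
    print(int("ZZ", 36))     # 35 * 36 + 35 = 1295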
Getting back to your specific example, then, A432 is
10 * 4096 + 4 * 256 + 3 * 16 + 2
which works out to 40960 + 1024 + 48 + 2, or 42034 decimal.
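If you’d rather let a program grind through the multiplications, here is a
minimal Python sketch of the same unsigned conversion (hex_to_unsigned is just
a name I picked):

    # Convert an unsigned hexadecimal string to decimal, working left to
    # right: multiply the running total by 16 and add each digit's value.
    # This is equivalent to the power-of-16 sum shown above.
    def hex_to_unsigned(hex_string):
        total = 0
        for digit in hex_string.upper():
            total = total * 16 + "0123456789ABCDEF".index(digit)
        return total

    print(hex_to_unsigned("A432"))   # 42034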
There is one possible gotcha. If the number you are converting is a
hexadecimal representation of a number extracted from the bowels of a
computer’s memory, you have to know if the number is signed or unsigned. For
a four-digit unsigned hexadecimal value, the possible values of 0000 to FFFF
convert to the decimal numbers 0 to 65535. That doesn’t allow for negative
numbers, though. A 16-bit integer (the short integer of many programming
languages) is still represented by four hexadecimal digits, but a special way
of encoding the numbers is used. It’s called two’s complement notation, and for
a four-digit hexadecimal value you get decimal numbers from -32768 to 32767.
Either way, there are 65536 possible values; in one case they are all positive
or zero, and in the other case half of them are negative.
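If you want to convince yourself of those two ranges, a couple of lines of
Python will do it:

    # Both interpretations cover 2**16 = 65536 distinct values.
    print(2 ** 16)                       # 65536
    print(len(range(0, 65536)))          # unsigned: 0 through 65535
    print(len(range(-32768, 32768)))     # signed: -32768 through 32767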
Here’s how two’s complement notation works:
Any hexadecimal number with the most significant bit set is a negative
number. As you probably know, each hexadecimal digit is actually four bits
in the computer. The decimal, hexadecimal, and binary values for the 16
hexadecimal digits are:
Decimal  Hex  Binary
   0      0   0000
   1      1   0001
   2      2   0010
   3      3   0011
   4      4   0100
   5      5   0101
   6      6   0110
   7      7   0111
   8      8   1000
   9      9   1001
  10      A   1010
  11      B   1011
  12      C   1100
  13      D   1101
  14      E   1110
  15      F   1111
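You can reproduce that table with a couple of lines of Python, which is handy
if you ever need it in a hurry:

    # Print decimal, hexadecimal, and four-bit binary for each hex digit.
    for value in range(16):
        print(f"{value:2d}  {value:X}  {value:04b}")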
Any number from 0000 to 7FFF converts to decimal just like I described
earlier. Numbers from 8000 to FFFF have the high bit set (which means the
leftmost bit in the binary representation is one), so they are actually
negative numbers.
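In code, the high-bit test on a four-digit (16-bit) value is a single
comparison or bit mask; here’s a Python sketch (the function name is mine):

    # A 16-bit value is negative in two's complement if its top bit is set,
    # that is, if the value is 0x8000 (32768) or higher.
    def is_negative_16bit(value):
        return (value & 0x8000) != 0

    print(is_negative_16bit(0x7FFF))   # False: 7FFF is the largest positive value
    print(is_negative_16bit(0xA432))   # True: A432 has its high bit set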
There are two ways to convert negative two’s complement hexadecimal numbers
to decimal.
The first is the way the computer does it. To convert negative hexadecimal
numbers to decimal, start by flipping all of the bits--each 0 becomes a 1,
and each 1 becomes a 0. The result is the "complement" of the starting
number. You’ll need a table to figure out what digits to substitute. You can
work this table out for yourself from the one above, but I’ll save you the
trouble.
0 F
1 E
2 D
3 C
4 B
5 A
6 9
7 8
Find the number you are complementing in the table, and replace it by the
other number on the same line. A432 becomes 5BCD.
The second step is to add one. 5BCD becomes 5BCE.
The final step is to convert the resulting hexadecimal number to decimal and
put a minus sign in front. 5BCE is 23502 decimal, so the value A432 is -23502.
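Before moving on to the second method, here’s the flip-add-negate procedure as
a short Python sketch. XORing with FFFF flips all sixteen bits, which is
exactly what the substitution table above does digit by digit (the function
name is mine):

    # Method 1: flip all 16 bits (XOR with 0xFFFF), add one, and negate.
    def twos_complement_flip(value):
        if value & 0x8000:               # high bit set, so negative
            return -((value ^ 0xFFFF) + 1)
        return value                     # 0000 through 7FFF: already positive

    print(hex(0xA432 ^ 0xFFFF))          # 0x5bcd, the complement
    print(hex((0xA432 ^ 0xFFFF) + 1))    # 0x5bce, the complement plus one
    print(twos_complement_flip(0xA432))  # -23502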
The second method is to convert the original number to decimal, then
subtract 65536, which is 2 raised to the power 16. Sixteen is also, you’ll
notice, the number of bits in a four-digit hexadecimal number. A432 converts
to decimal 42034, and 42034 - 65536 is, you guessed it, -23502.
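The second method is even shorter in code; here’s the same kind of Python
sketch:

    # Method 2: treat the value as unsigned, then subtract 2**16 = 65536
    # whenever the high bit is set.
    def twos_complement_subtract(value):
        return value - 0x10000 if value >= 0x8000 else value

    print(twos_complement_subtract(0xA432))   # 42034 - 65536 = -23502
    print(twos_complement_subtract(0x7FFF))   # 32767, positive values pass through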
OK, so which is it? Is A432 really 42034 or -23502? The answer depends on how
you are using the value. The hexadecimal value can represent either number; you
have to know some other way whether you are working with unsigned numbers, in
which case the decimal value is 42034, or two’s complement numbers, in which
case the decimal value is -23502.
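If the value came to you as raw bytes, most languages let you state that
choice when you decode them. In Python, for instance, int.from_bytes takes a
signed flag, and the same two bytes come out either way depending on what you
ask for:

    # The same two bytes decode to 42034 or -23502 depending on whether you
    # ask for an unsigned or a two's complement (signed) interpretation.
    raw = bytes([0xA4, 0x32])
    print(int.from_bytes(raw, byteorder="big", signed=False))   # 42034
    print(int.from_bytes(raw, byteorder="big", signed=True))    # -23502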
As an obscure aside, two’s complement notation is really just a tricky way
to handle negative numbers in a binary world that otherwise deals only with
positive values and zero. It’s a code, if you like. It’s far and away the
most common code used to represent signed integers on modern digital
computers, but it isn’t the only one. It’s been a long time since I used a
computer that used something else, though, so unless you have some reason to
believe your situation is really odd, it’s pretty safe to assume that any
hexadecimal value that represents a signed decimal integer is using two’s
complement notation to do it.