CHAP 2

Cards (36)

  • Number representation
    Numbers can be represented in different bases. Base ten is called decimal.
  • Converting a number from another base to decimal
    1. Multiply each digit by the corresponding power of the base
    2. Add up all the results
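    A minimal C sketch of these two steps (illustrative, not from the deck; the function name to_decimal is my own):

      #include <stdio.h>
      #include <string.h>

      /* Convert a string of digits in the given base (2..10) to its decimal value:
         multiply each digit by the corresponding power of the base and add up. */
      unsigned long to_decimal(const char *digits, unsigned base) {
          unsigned long value = 0;
          for (size_t i = 0; i < strlen(digits); i++)
              value = value * base + (unsigned)(digits[i] - '0');  /* Horner form of the same sum */
          return value;
      }

      int main(void) {
          printf("%lu\n", to_decimal("1011", 2));  /* 1*8 + 0*4 + 1*2 + 1*1 = 11 */
          printf("%lu\n", to_decimal("472", 8));   /* 4*64 + 7*8 + 2*1 = 314 */
          return 0;
      }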
  • Converting an integer from decimal to binary
    1. Divide the integer number by 2, find the remainder
    2. Divide the quotient by 2, find the remainder
    3. Repeat until quotient is 0
    4. Write the remainders in reverse order
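    A minimal C sketch of the repeated-division method (illustrative, not from the deck):

      #include <stdio.h>

      /* Print n in binary: divide by 2 collecting remainders (steps 1-3),
         then print the remainders in reverse order (step 4). */
      void print_binary(unsigned n) {
          char bits[32];
          int count = 0;
          if (n == 0) { printf("0\n"); return; }
          while (n > 0) {
              bits[count++] = (char)('0' + n % 2);  /* remainder is the next bit */
              n /= 2;                               /* continue with the quotient */
          }
          while (count > 0)
              putchar(bits[--count]);               /* write remainders in reverse order */
          putchar('\n');
      }

      int main(void) {
          print_binary(13);   /* prints 1101 */
          return 0;
      }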
  • Binary number
    The largest unsigned value that can be represented in n bits is 2^n - 1 (n bits give 2^n distinct patterns: 0 to 2^n - 1)
  • Most significant bit (MSb) and least significant bit (LSb)
    • The first bit from the left is the MSb, the first bit from the right is the LSb
  • Binary addition
    0 + 0 = 0
    0 + 1 = 1
    1 + 0 = 1
    1 + 1 = 10 (sum 0, carry 1)
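    A small C sketch (my illustration, not part of the card) that adds two unsigned numbers using only these single-bit rules, with XOR as the sum bit and AND as the carry:

      #include <stdio.h>

      unsigned add(unsigned a, unsigned b) {
          while (b != 0) {
              unsigned carry = (a & b) << 1;  /* 1 + 1 = 10: the carry moves to the next column */
              a = a ^ b;                      /* sum bits without the carry */
              b = carry;                      /* add the carry in the next round */
          }
          return a;
      }

      int main(void) {
          printf("%u\n", add(5, 3));  /* 101 + 011 = 1000, i.e. 8 */
          return 0;
      }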
  • Two's complement
    One's complement: invert every bit (1 becomes 0, 0 becomes 1)
    Two's complement = one's complement + 1
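    A small C sketch (illustrative) showing complement-plus-one on an 8-bit value:

      #include <stdio.h>
      #include <stdint.h>

      int main(void) {
          uint8_t x    = 0x05;                 /* 0000 0101 */
          uint8_t ones = (uint8_t)~x;          /* one's complement: 1111 1010 (FAH) */
          uint8_t twos = (uint8_t)(ones + 1);  /* two's complement: 1111 1011 (FBH) */
          printf("~x = %02X, ~x + 1 = %02X\n", ones, twos);  /* FBH is -5 in 8-bit two's complement */
          return 0;
      }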
  • Subtracting unsigned numbers using two's complement
    1. Take the two's complement of the subtrahend
    2. Add it to the minuend
    3. If there is a carry out, discard it; the result is positive
    4. If there is no carry, take the two's complement of the sum; the result is negative
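    A minimal 8-bit C sketch of this procedure (illustrative; the function name subtract is my own):

      #include <stdio.h>
      #include <stdint.h>

      void subtract(uint8_t a, uint8_t b) {
          uint8_t negb = (uint8_t)(~b + 1);   /* two's complement of the subtrahend */
          uint16_t sum = (uint16_t)a + negb;  /* bit 8 of 'sum' is the carry out */
          uint8_t result = (uint8_t)sum;      /* low 8 bits */
          if (sum & 0x100)                    /* carry out: discard it, result is positive */
              printf("%u - %u = +%u\n", a, b, result);
          else                                /* no carry: complement again, result is negative */
              printf("%u - %u = -%u\n", a, b, (uint8_t)(~result + 1));
      }

      int main(void) {
          subtract(9, 5);   /* carry out -> +4 */
          subtract(5, 9);   /* no carry  -> -4 */
          return 0;
      }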
  • Sign-magnitude representation
    • The most significant bit is the sign bit (0 for positive, 1 for negative)
    The remaining bits hold the magnitude
  • Two's complement representation
    • The most significant bit is the sign bit
    Negation is done by taking the bitwise complement and adding 1
  • Range of two's complement integers
    • -2^(n-1) to 2^(n-1) - 1
  • Characteristics of two's complement
    • One representation of zero
    Negation by bitwise complement and add 1
    Expansion by sign extension
    Overflow rule: if two numbers with the same sign are added and the result has the opposite sign, overflow has occurred
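    A small C sketch of the overflow rule (illustrative; the helper add_overflows is my own, and it assumes the usual two's-complement wrap-around when converting to int8_t):

      #include <stdio.h>
      #include <stdint.h>

      /* Adding two 8-bit numbers of the same sign overflows exactly when
         the 8-bit sum has the opposite sign. */
      int add_overflows(int8_t x, int8_t y) {
          int8_t sum = (int8_t)(x + y);                /* wrapped 8-bit two's-complement sum */
          return ((x ^ sum) & (y ^ sum) & 0x80) != 0;  /* sign bit of sum differs from both inputs */
      }

      int main(void) {
          printf("%d\n", add_overflows(100, 50));   /* 1: 150 does not fit in -128..127 */
          printf("%d\n", add_overflows(100, -50));  /* 0: operands of opposite sign never overflow */
          return 0;
      }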
  • Subtracting using two's complement
    Take the two's complement of the subtrahend and add it to the minuend
  • Block diagram of hardware for addition and subtraction
    Two operand registers feed an adder; for subtraction, the subtrahend is routed through a complementer (two's complement) before the addition, under an add/subtract control signal
  • Floating point numbers
    Represent a wide range of numbers using a fraction (F) and an exponent (E) in the form F × r^E
    Decimal uses radix r=10, binary uses r=2
    Floating point representation is not unique, can be normalized
  • Floating point numbers suffer from loss of precision
  • Numbers that need to be represented
    • Numbers with fractions, e.g., 3.1416
    • Large number: 976,000,000,000,000 = 9.76 × 10^14
    • Small number: 0.0000000000000976 = 9.76 × 10^-14
  • Floating-point number
    Typically expressed in scientific notation, with a fraction (F) and an exponent (E) of a certain radix (r), in the form F × r^E
  • Decimal numbers
    Use radix of 10 (F × 10^E)
  • Binary numbers
    Use radix of 2 (F × 2^E)
  • Floating point number representation is not unique
  • Normalized fractional part
    There is only a single non-zero digit before the radix point
  • Floating-point numbers suffer from loss of precision
  • There are infinitely many real numbers (even within a small range). Using a fixed number of bits, we can represent only a finite set of distinct numbers, not all the real numbers. The nearest approximation is used, which results in loss of accuracy.
  • 32-bit single-precision floating-point representation
    Most significant bit is the sign bit (S): 0 for positive numbers, 1 for negative numbers
    Following 8 bits represent the exponent (E)
    Remaining 23 bits represent the fraction/mantissa (M)
  • Since the mantissa is always 1.xxxxxxxxx in the normalized form, there is no need to store the leading 1. So, effectively: single precision mantissa = 1 implicit bit + 23 stored bits; double precision mantissa = 1 implicit bit + 52 stored bits
  • Floating point number formula
    x = (-1)^S × (1 + M) × 2^E, where M is the 23-bit fraction read as a value in [0, 1) and E is the true exponent (the stored 8-bit exponent minus the bias of 127)
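    A C sketch (illustrative, assuming float is IEEE 754 single precision, as on virtually all current machines) that pulls out S, E and M and rebuilds the value with this formula:

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>
      #include <math.h>

      int main(void) {
          float f = -6.25f;
          uint32_t bits;
          memcpy(&bits, &f, sizeof bits);      /* reinterpret the 32 bits of the float */

          unsigned S = bits >> 31;             /* 1 sign bit */
          unsigned E = (bits >> 23) & 0xFF;    /* 8 exponent bits, stored with a bias of 127 */
          unsigned M = bits & 0x7FFFFF;        /* 23 fraction (mantissa) bits */

          /* x = (-1)^S * (1 + M/2^23) * 2^(E-127) for normalized numbers */
          double x = (S ? -1.0 : 1.0) * (1.0 + M / 8388608.0) * pow(2.0, (int)E - 127);
          printf("S=%u E=%u M=%06X  value=%g\n", S, E, M, x);  /* S=1 E=129 M=480000  value=-6.25 */
          return 0;
      }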
  • Decimal to floating point conversion
    1. Convert the integer part to binary (repeated division by 2)
    2. Convert the fractional part to binary (repeated multiplication by 2; see the sketch below)
    3. Combine the integer and fractional parts
    4. Normalize to 1.xxx × 2^E and fill in the sign, exponent, and mantissa fields
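    A C sketch of step 2 for the fraction 0.45 (illustrative): multiply by 2 repeatedly, taking the integer part as the next bit:

      #include <stdio.h>

      int main(void) {
          double frac = 0.45;          /* fractional part of 18.45 */
          printf("0.");
          for (int i = 0; i < 20; i++) {
              frac *= 2.0;
              int bit = (int)frac;     /* integer part is the next binary digit */
              putchar('0' + bit);
              frac -= bit;
          }
          printf("...\n");             /* 0.01110011001100110011... ("0011" repeats forever) */
          return 0;
      }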
  • Decimal 18.45 cannot be represented precisely in binary, because 0.45 is a non-terminating (repeating) binary fraction
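    A two-line C check (illustrative): the nearest single-precision value is stored instead of 18.45 exactly:

      #include <stdio.h>

      int main(void) {
          float x = 18.45f;
          printf("%.8f\n", x);   /* prints approximately 18.45000076 */
          return 0;
      }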
  • ASCII
    American Standard Code for Information Interchange, one of the earlier character coding schemes
  • ASCII is originally a 7-bit code, extended to 8-bit to better utilize the 8-bit computer memory organization
  • ASCII code ranges
    • 20H to 7EH are printable (displayable) characters
    • 00H to 1FH, and 7FH are special control characters (non-printable, non-displayable)
  • Meaningful ASCII control characters
    09H for Tab ('\t')
    0AH for Line-Feed or newline (LF or '\n')
    0DH for Carriage-Return (CR or '\r')
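    A quick C check of these codes (illustrative): character literals are simply their ASCII codes.

      #include <stdio.h>

      int main(void) {
          printf("%02XH %02XH %02XH\n", (unsigned)'\t', (unsigned)'\n', (unsigned)'\r');
          /* prints: 09H 0AH 0DH */
          return 0;
      }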
  • There is no single standard line delimiter: Unix systems and macOS use 0AH (LF or "\n"); Windows uses 0D0AH (CR+LF or "\r\n"). Programming languages such as C/C++/Java use 0AH (LF or "\n").
  • Big Endian vs. Little Endian
    Big Endian: most significant byte is stored first in the lowest memory address
    Little Endian: least significant byte is stored first, in the lowest memory address
  • For example, the 32-bit integer 2A756BED (hex) is stored as 2A 75 6B ED in Big Endian, and as ED 6B 75 2A in Little Endian
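    A C sketch (illustrative) that prints the bytes of this integer from the lowest address upward, so the output reveals the machine's endianness:

      #include <stdio.h>
      #include <stdint.h>
      #include <string.h>

      int main(void) {
          uint32_t value = 0x2A756BED;          /* the 32-bit integer from the example */
          uint8_t bytes[4];
          memcpy(bytes, &value, sizeof bytes);  /* view it byte by byte, lowest address first */
          printf("%02X %02X %02X %02X\n", bytes[0], bytes[1], bytes[2], bytes[3]);
          /* Big Endian machine:               2A 75 6B ED */
          /* Little Endian machine (e.g. x86): ED 6B 75 2A */
          return 0;
      }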