Sonoma State University
 
Algorithm Analysis
Instructor: Henry M. Walker

Lecturer, Sonoma State University
Professor Emeritus of Computer Science and Mathematics, Grinnell College


Binary Representation of Integers


Many (most?) applications of computers involve the storage, retrieval, and processing of data. In principle, the range of that data is vast: numbers, textual characters, colors, musical tones, smells, tastes, etc. As human beings, we have nerve sensors to detect much of this data, our nervous systems transmit that information to our brains, and our brains somehow make sense of the world we perceive. Various experiments suggest something about the complex electro-chemical properties and reactions that may be part of our memory of the past and our understanding of our circumstances.

Computers, however, have not evolved over millennia, and their basic properties depend upon simple electrical circuits. Pragmatically, electrical engineers know how to pack many millions/billions of circuits into small packages (sometimes called integrated circuits or chips), but the basic building blocks remain rather elementary. All of this discussion raises the basic question, "how can we store data effectively within a computer?"

At a fundamental level, the answer comes in three parts:

  1. identify a cheap, reliable, easy-to-use mechanism to store a simple element of data — based on the concept of an on-off switch for electricity,
  2. combine the simple elements into small or large groups, and
  3. use a coding scheme to translate numbers or text or other data to patterns within these groups of electrical elements.

This reading is the first of two installments that explore these three parts in the context of integers (numbers without decimal points), and the reading in the next session examines the storage of real numbers (numbers with decimal points). For many applications, these representations of data will work satisfactorily, and we will not need or want to worry about the underlying details. However, these representations sometimes do not behave as we might wish or expect, so we will need to consider some of these implications as part of our discussion as well.


Bits: Simple Data Elements

In electrical circuitry, perhaps the simplest device is a switch. In a circuit, several equivalent descriptions of a switch's state come to mind: the switch is on or off, current is flowing or not, and an electrical charge is present or absent.

In computing, the record of current flowing or not is called a bit. The digit 1 is used to designate current flowing (or presence of a charge, etc.), and the digit 0 is used to designate no current (or no charge).

Since a bit of information (1 or 0) relates to a simple electrical circuit (e.g., a switch), the storage and processing of a bit can be fast and reliable within a computing environment.

Bytes: Groups of Bits

Although a single bit can store very little information (0 or 1, yes or no), numbers or words or images can be represented by combinations of bits. In many contexts, 8 bits are grouped together into a unit called a byte.

Within a byte, the first bit could be 0 or 1 (2 options), the second bit could be 0 or 1 (2 options), etc. Computing the possible combinations yields

# options = 2 (first bit) × 2 (second bit) × ... × 2 (eighth bit) = 2^8 = 256

That is, one byte (8 bits) can store 256 different combinations of bit values — as long as we can agree what each combination of bit values actually represents.
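
As a quick check of this count, the short Python sketch below (an illustration added to this reading, not part of the original text) enumerates every possible 8-bit pattern and confirms that there are 2^8 = 256 of them:

    # Enumerate every combination of eight 0/1 values and count them.
    from itertools import product

    patterns = list(product([0, 1], repeat=8))
    print(len(patterns))   # prints 256
    print(2 ** 8)          # also 256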

Beyond a byte, groupings of larger numbers of bits also are common. However, although the designation of 8 bits as a byte is common with many types of computers, different machines may utilize different sized groupings. Sometimes the term word is used for a grouping of bits, but the number of bits in a word varies according to a specific machine. Often the number of bits in a grouping (e.g., byte, word) is a power of two (e.g., a word might be 32 = 2^5 bits or 64 = 2^6 bits), but even this generalization does not always apply.

Why Not Use Decimal?

Some early computers used a range of voltages to store decimal numbers. For example, 0 volts in a circuit might represent the number 0, 1 volt might represent 1, 2 volts might represent 2, etc. With this option of decimal-based circuitry used in past computers, one might wonder why such an approach is not still used in most contemporary computers.

The answer draws upon at least two factors.

  1. Circuit complexity, memory size, and processing speed: In practice, detecting the presence or absence of an electrical charge is reasonably simple and easy, while measuring the amount of electricity requires modest effort.

    Although one bit represents little data, a grouping of four bits allows one to keep track of 16 different combinations of on/off — more than enough to encode digits between 0 and 9. With the simplicity of circuitry for a single bit, constructing circuitry for four bits is still easy and quick — much simpler, quicker, and less expensive than handling 10 different voltage levels. Altogether, use of bits rather than storage of decimals allows substantially larger memories and faster processing.

  2. Voltage variability and reliability: Although the idea of using 0 volts for the number 0, 1 volt for the number 1, 2 volts for the number 2, etc. seems conceptually simple, the voltage in actual electrical circuits should be considered somewhat variable. An intended voltage of 3 volts might actually be measured as 2.7 or 3.2 volts, and a voltage of 2.5 volts might be hard to interpret. Taking such variation into account in a voltage-based system can complicate processing and undermine reliability.

    Within a binary-based machine (perhaps based on a 0-5 volt range for circuits), one simple approach is to design circuits to consider 0-2 volts as having no voltage (logically a 0 value for the bit) and to consider 3-5 volts as having full voltage (logically a 1 value for the bit). In this setting, circuit design can largely avoid voltages between 2 and 3 volts — even with natural voltage variations.
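
To make this convention concrete, the small Python sketch below models the 0-5 volt scheme just described; it is only an illustration of the idea (the function name and thresholds follow the assumptions above, not any particular hardware):

    def bit_from_voltage(volts):
        # Treat 0-2 volts as logical 0 and 3-5 volts as logical 1;
        # well-designed circuits largely avoid the 2-3 volt band.
        if volts <= 2.0:
            return 0
        if volts >= 3.0:
            return 1
        raise ValueError("voltage falls in the avoided 2-3 volt band")

    print(bit_from_voltage(3.2))   # a noisy "3 volt" signal still reads as 1
    print(bit_from_voltage(0.4))   # a noisy "0 volt" signal still reads as 0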

In summary, use of bit-based data representation allows larger, faster, and more reliable data storage and retrieval than systems based on decimal.


Non-negative Integers

Although any pattern of bit values could be used, in principle, to represent non-negative integers, a single approach has become standard.

Converting a binary number to a non-negative integer

To illustrate the approach, consider the binary number 00011010. We interpret this binary number as follows:

  1. Place the number in columns, and label the columns from the right with the values 2^0, 2^1, 2^2, 2^3, 2^4, 2^5, 2^6, 2^7.

    binary number    0    0    0    1    1    0    1    0
    powers of 2     2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0

  2. Compute the power of 2 for each column in which the binary digit 1 appears:

    binary number                  0    0    0    1    1    0    1    0
    powers of 2                   2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
    column values with binary 1                  16    8         2

  3. Add the values in the columns with the binary digit 1: 16 + 8 + 2.
    The binary number 00011010 represents the decimal number 16 + 8 + 2 = 26.
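
For readers who would like to experiment, here is a minimal Python sketch of the same column-by-column idea (an illustration added to this reading; the function name binary_to_decimal is chosen only for this example):

    def binary_to_decimal(bits):
        # Interpret a string of 0s and 1s as a non-negative integer,
        # adding the power of 2 for each column that holds a 1.
        total = 0
        for position, bit in enumerate(reversed(bits)):   # position 0 is the right-most column
            if bit == "1":
                total += 2 ** position
        return total

    print(binary_to_decimal("00011010"))   # prints 26, matching the example above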

Practice

Try converting the binary number

to a non-negative decimal integer, using the 3-step algorithm given.

Answer:    


Converting non-negative decimal integers to binary

A Web search will reveal several algorithms to convert non-negative decimal integers to binary integers. Some start at the left of the binary number by examining large powers of two — great if you remember those large powers of two! The approach here constructs the binary integer from right to left — always looking at small powers of two.

This approach is based on two observations:

  1. The right-most digit: Given any non-negative integer, the right-most binary digit will be 0 if the number is even and 1 if the number is odd.

    As an example, consider the decimal integer
    93 = 64 + 16 + 8 + 4 + 1
    or 2^6 + 2^4 + 2^3 + 2^2 + 2^0
    or 0 1 0 1 1 1 0 1 as an 8-bit binary integer.

    In the example, all powers of two in the sum are divisible by 2, except the 1 at the end. As an odd number, 93 must have 1 as its right-most binary bit.

  2. Division by 2: When a binary number is divided by 2, each power in the resulting sum of powers is reduced by 1, and the right-most bit is lost (after truncating to retain an integer).

    Again, an example may clarify this observation. Look at the number 93.

    operation                        number         2^7  2^6  2^5  2^4  2^3  2^2  2^1  2^0
    start                            93         =    0    1    0    1    1    1    0    1
    divide by 2, ignore remainder    93/2 = 46  =    0    0    1    0    1    1    1    0
    divide by 2, ignore remainder    46/2 = 23  =    0    0    0    1    0    1    1    1
    divide by 2, ignore remainder    23/2 = 11  =    0    0    0    0    1    0    1    1
    divide by 2, ignore remainder    11/2 =  5  =    0    0    0    0    0    1    0    1
    divide by 2, ignore remainder     5/2 =  2  =    0    0    0    0    0    0    1    0
    divide by 2, ignore remainder     2/2 =  1  =    0    0    0    0    0    0    0    1
    divide by 2, ignore remainder     1/2 =  0  =    0    0    0    0    0    0    0    0

In reviewing this table, the right-most column contains 1 or 0 according to whether the number in that row is odd or even. Further, since the bits shift one position to the right on each line, the right-most column of the entire table contains all of the bits of the original number — written top to bottom rather than right to left. (Alternatively, reading from the bottom up in the right-most column gives the left-to-right binary representation of the number.)

Putting these observations together yields the following algorithm that generates the n bits of a number, from right to left: record a 1 or 0 according to whether the number is odd or even, divide the number by 2 (ignoring any remainder), and repeat these two steps until n bits have been produced. A short sketch of this algorithm appears below.
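
As one possible rendering of that algorithm, the Python sketch below builds the bits from right to left by repeated division by 2 (an illustration added to this reading; the function name to_binary and the default of 8 bits are choices for this example only):

    def to_binary(value, n_bits=8):
        # Record whether the value is odd or even (the next bit, right to left),
        # then divide by 2 and ignore the remainder; repeat n_bits times.
        bits = ""
        for _ in range(n_bits):
            bits = str(value % 2) + bits
            value = value // 2
        return bits

    print(to_binary(93))   # prints 01011101
    print(to_binary(26))   # prints 00011010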

Practice

Try converting the given non-negative decimal integer

to an 8-bit binary number, using the algorithm given. (To get 8 bits, you may need to add some 0's at the start.)

Answer:    







created 27 March 2016 by Henry M. Walker
expanded and edited 3 April 2016 by Henry M. Walker
minor editing 2 January 2023 by Henry M. Walker
For more information, please contact Henry M. Walker at walker@cs.grinnell.edu.