 # Signal Characteristics

Information can be sent and described in many different ways.  Information can be either digital or analog.  Numbers can be represented in decimal, binary, or any other base.  Once you determine how the information is to be represented, you need to decide how to send it.  Modulation and encoding are two methods of preparing information to be sent from one location to another.

Introductory Exercise:  What does the word "Digital" mean to you?  How do digital systems differ from analog systems?  Can you come up with examples of each?

### Differences between Digital and Analog

You have probably heard the word "digital" used in many different contexts: digital cameras, digital phones, and, of course, digital watches.  While at first glance these three technologies appear quite different, the prefix "digital" means the same thing in each: it refers to the discrete resolution of information.  For example, a digital watch provides the hour, the minute, and usually the second, but that is where the information ends.  It is not possible to determine the time to a hundredth of a second on a digital watch that expresses time only to the second.  With a traditional, or analog, watch, measuring small amounts of time might be difficult, but it is possible in principle to measure the time essentially as accurately as desired.  For cameras, the difference is in the picture.  A high-resolution digital camera can store a photo on thousands of extremely small pixels, but the image is still in discrete pieces.  A traditional camera stores the photo in continually varying intensities on film.  In digital phones, and in digital music recording, the sound is broken into discrete pieces; analog phones and LPs transmit and store continually varying signals.  A digital signal can approach an analog signal if the pieces are made arbitrarily narrow, but it will never be completely as smooth as the analog signal.  The figure below illustrates the differences between analog and digital signals.

Discussion Question: What are some applications for which you would prefer an analog device?  When would a digital device be preferable?

The figure above may give the impression that digital is not as good as analog.  But that is not necessarily the case.  Increasing the number (and decreasing the size) of the time divisions in a digital signal can make the digital signal nearly as smooth as an analog signal.  And digital signals are much easier to store than analog signals and are much less prone to degradation.  By definition, each piece of information in a digital signal is a number, easily distinguished from other numbers.  One analogy for a digital signal could be a table of numbers.  A comparable analogy for an analog signal would be a graph.  It is much easier to accurately copy a table than a graph.  And once they have been copied several times, the table has a good chance of staying unchanged while the graph will probably look quite different.

Discussion Question:  The word "digital" continues to invade more and more of our activities: digital cameras are common now, and digital music recordings seem to have almost completely replaced analog recordings.  Yet analog clocks and watches are still very common.  Do you see everything becoming digital in the future, or are there areas of our lives where analog devices will continue to thrive?

## Binary Mathematics

Early efforts at calculating devices led to cumbersome and unreliable heaps of gears.  One of the primary difficulties was the inventors' dependence on the decimal system.  Each gear had to have 10 distinct settings, and the machining of the day was not up to the task of making such gears.  Babbage developed new methods of machining in order to produce his Difference Engine.  Leibniz developed the simpler binary system of numbers in the late 1600s, but it was over 200 years before his theory would be applied to calculating machines.  Wynn-Williams' particle detectors and Zuse's computers demonstrated the superiority of using the binary system for computing.

Discussion Question: Why do you think computers use the binary system of numbers rather than decimal numbers?  What advantages and disadvantages are there to the binary system?

### The Binary System of Numbers

Before discussing binary numbers, let us look more closely at the decimal system.  Back in elementary school you learned that the number 34618 means you add 8 "ones", 1 "ten", 6 "hundreds", 4 "thousands", and 3 "ten thousands".  Each digit represents a different power of 10.  Using math you learned after elementary school, we could write the number 34618 as

3 x 10^4 + 4 x 10^3 + 6 x 10^2 + 1 x 10^1 + 8 x 10^0

In binary mathematics, each digit represents a power of two rather than a power of 10.  And the numbers each digit can take are then restricted to 0 or 1, rather than 0 to 9 for decimal.  The binary number 101101 could be written as

1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0

Converting to decimal, this becomes

1 x 32 + 0 x 16 + 1 x 8 + 1 x 4 + 0 x 2 + 1 x 1 = 45
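This place-value expansion translates directly into code.  Here is a short Python sketch (the function name is my own) that accumulates the digits of a binary string exactly as the sum above does:

```python
def binary_to_decimal(bits: str) -> int:
    """Expand a binary string digit by digit using place value."""
    total = 0
    for digit in bits:
        total = total * 2 + int(digit)  # shift left one power of two, add the new digit
    return total

print(binary_to_decimal("101101"))  # 45, matching the expansion above
print(int("101101", 2))             # Python's built-in base conversion agrees
```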

The addition of binary numbers follows the same rules as the addition of decimal numbers: you add the two digits representing the same power of two (ten).  If the sum is equal to or greater than the next power of two (ten), you "carry" a digit to the next higher power of two (ten).  Just as in decimal math, 0 + 0 = 0, and 1 + 0 = 0 + 1 = 1.  1 + 1, however, takes you to the next power of two, so in binary 1 + 1 = 10.  Here is an example of adding two binary numbers:

11010110 + 10011011 = 101110001

We can check our result by converting to decimal.

11010110 = 128+64+16+4+2 = 214, and
10011011 = 128+16+8+2+1 = 155.
214+155=369.
And indeed,
101110001 = 256+64+32+16+1 = 369.
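The carry rules described above can be sketched as a short Python function (the names are illustrative):

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings bit by bit, carrying into the next power of two."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)  # pad to equal width with leading zeros
    result, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        result.append(str(s % 2))  # digit for this power of two
        carry = s // 2             # carry into the next power of two
    if carry:
        result.append("1")
    return "".join(reversed(result))

print(add_binary("11010110", "10011011"))  # 101110001, as in the example above
```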

### How Binary Numbers Represent Information

Just adding numbers is not sufficient to make a machine an information system.  What makes information systems as versatile as they are is the ability to represent many diverse types of information digitally.  We have already discussed the conversion of an analog sound wave into a digital series of numbers.  In a similar manner, text, pictures, and video can be represented digitally.  And if they can be expressed digitally, they can be expressed in terms of binary numbers.  The ASCII code is a commonly-used format for expressing text in terms of binary digits, or bits.  The table below contains the ASCII conversions for keyboard signals.  The first three bits of the ASCII representation are given along the top of each column, and the last four bits are taken from the left of each row.  For example, a lower-case e has the ASCII representation 1100101.

| Bits | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
|------|-----|-----|-----|-----|-----|-----|-----|-----|
| 0000 | NUL | DLE | SPACE | 0 | @ | P | \` | p |
| 0001 | SOH | DC1 | ! | 1 | A | Q | a | q |
| 0010 | STX | DC2 | " | 2 | B | R | b | r |
| 0011 | ETX | DC3 | # | 3 | C | S | c | s |
| 0100 | EOT | DC4 | $ | 4 | D | T | d | t |
| 0101 | ENQ | NAK | % | 5 | E | U | e | u |
| 0110 | ACK | SYN | & | 6 | F | V | f | v |
| 0111 | BEL | ETB | ' | 7 | G | W | g | w |
| 1000 | BS | CAN | ( | 8 | H | X | h | x |
| 1001 | HT | EM | ) | 9 | I | Y | i | y |
| 1010 | LF | SUB | * | : | J | Z | j | z |
| 1011 | VT | ESC | + | ; | K | [ | k | { |
| 1100 | FF | FS | , | < | L | \\ | l | \| |
| 1101 | CR | GS | - | = | M | ] | m | } |
| 1110 | SO | RS | . | > | N | ^ | n | ~ |
| 1111 | SI | US | / | ? | O | _ | o | DEL |
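As a quick check of the table, Python's built-in `ord` returns a character's ASCII code, which can be formatted as the 7 bits used above:

```python
def ascii_bits(ch: str) -> str:
    """Return the 7-bit ASCII representation of a single character."""
    return format(ord(ch), "07b")

print(ascii_bits("e"))  # 1100101, as in the example above
print(ascii_bits("A"))  # 1000001
```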

While ASCII is widely used, it is not the only choice.  Consider, for example, trying to save the information about a picture.  You could assign each color variety a number, and then list the colors in the picture pixel by pixel.  ASCII is certainly one option, since every number has a unique ASCII representation, but ASCII might not be the best choice.   ASCII uses 7 bits for each symbol.  If you had 256 colors, each would need to be represented by 3 decimal digits, or 21 ASCII bits.  Merely representing the color number in binary would require only 8 bits.  Thus ASCII is not the best choice to represent colors.
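The bit-count comparison in this paragraph can be sketched as follows; `color` here is just a hypothetical color number between 0 and 255:

```python
color = 200                        # a hypothetical color value in the range 0-255
ascii_cost = 7 * len(str(color))   # 7 ASCII bits per decimal character: 21 bits
binary_cost = 8                    # a plain 8-bit binary number covers 0-255
print(ascii_cost, binary_cost)     # 21 8
```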

Discussion Question: Think about different technologies that store or transfer information:  digital cameras, a computer keyboard, a VCR remote control, a music CD, and a mouse.  Which (if any) of these do you think would use the ASCII code?  What would be the advantages?  What are the disadvantages?

### History Revisited

Here are some highlights from the history table in the previous reading.  Note the prevalence of digital data throughout, and the gradual conversion to binary representation.

| Date | Who | Milestone | Notes | Type | Number system |
|------|-----|-----------|-------|------|---------------|
| c. 3000 B.C. | Babylonia | The abacus, a mechanical calculating aid | | Digital | Decimal |
| 1623 A.D. | Wilhelm Schickard | Built first mechanical adding machine (lost until 1900s) | Schickard built a "Calculating Clock" that would perform addition and subtraction with gears. | Digital | Decimal |
| 1700s | | Special-purpose analog machines | The 1700s saw the rise of many machines to aid navigation, as well as a few more adding machines. | Analog | Decimal |
| 1822-1842 A.D. | Charles Babbage | Developed and partially built mechanical Difference Engine; dreamed up mechanical Analytical Engine | Babbage's creations were hampered by the lack of precision machining available at the time, as well as his reliance on the decimal system. | Digital | Decimal |
| 1890 A.D. | Herman Hollerith | Built electromechanical Tabulating Machine to help with census | | Digital | Decimal? |
| 1930 A.D. | Vannevar Bush | Built analog differential analyzer to solve differential equations | Bush's machine was analog, not digital like the machines of Hollerith and Babbage. | Analog | Decimal? |
| 1932 A.D. | C. E. Wynn-Williams | First large-scale application of digital electronics | Wynn-Williams used digital electronics to build a binary counter for physics experiments in Cambridge. | Digital | Binary |
| 1937 A.D. | Alan Turing | Developed theory of computability; introduced the Turing machine | Turing's machine, a purely theoretical creation, consists of an infinitely long tape with binary information on it and a moving, programmable read/write head which can move along the tape. | Digital | Binary |
| 1938-1941 A.D. | Konrad Zuse | Built electromechanical programmable computer | Zuse was the first to use a binary system in a calculating machine, and he recognized the need for a general-purpose programmable machine.  Zuse also developed his own version of Boolean algebra for the logic portion of the computer. | Digital | Binary |
| 1939-1942 A.D. | John Atanasoff and Clifford Berry | Built electronic digital computer, the first to use vacuum tubes | Atanasoff and Berry's computer (ABC) was a special-purpose machine for solving systems of equations.  In addition to being the first machine to use vacuum tubes to calculate, the ABC incorporated binary arithmetic, regenerative electronic memory, and logic circuits. | Digital | Binary |
| 1940 A.D. | George Stibitz and S. B. Williams | Built first multi-terminal remotely-accessible calculator | This calculating device could perform addition, subtraction, multiplication, and division on complex numbers.  It used relays and binary mathematics, but it was not programmable. | Digital | Binary |
| 1943-1945 A.D. | Ballistics Research Laboratory | Developed the Electronic Numerical Integrator and Computer (ENIAC) | Some of ENIAC's complexity was due to Mauchly's decision to use decimal numbers rather than binary. | Digital | Decimal |
| 1947 A.D. | Bardeen, Brattain, and Shockley | Invented transistor at Bell Telephone Laboratory | Transistors perform all of the electrical functions of vacuum tubes, but use little energy, generate little heat, turn on instantly, are sturdy and stable, and are cheap. | Digital | Binary |
| 1949 A.D. | Maurice Wilkes | First machine capable of performing useful stored programs | | Digital | Binary |
| 1949 A.D. | Presper Eckert and John Mauchly | First commercial computer system | Mauchly and Eckert continued to stick with decimal math, but UNIVAC's capabilities were revolutionary. | Digital | Decimal |
| 1951 A.D. | Jay Forrester and Bob Everett | First real-time computer, the Whirlwind | | Digital | Binary |
| 1974 A.D. | Edward Roberts and MITS | Introduction of the personal computer | | Digital | Binary |
Disclaimer: The information in the above table has been compiled from several of the sources listed in the Bibliography below.  The discussion of some inventions did not specify whether they used decimal or binary representation, so I have made my best guess (followed by a ?).

## Modulation

Discussion Question:  What is the difference between AM and FM?  Why are these forms of modulation needed at all?

As you have learned, the wavelength (and therefore frequency) of a standing wave is related to the length of the medium in which it is created. Electromagnetic waves for radio are created from standing waves of currents and voltages in an antenna. Since the end of the antenna is not fixed, the antenna must be at least 1/4 as long as the wavelength of the radiation it emits. (This relationship is discussed in the assignment on standing waves). For an electromagnetic signal at 500 Hz (typical sound frequency), we find

Lmin = λ/4 = c/(4f) = (3 x 10^8 m/s)/[(4)(500 Hz)] = 1.5 x 10^5 m

That's one big antenna! Clearly, transmitting electromagnetic signals at the same frequency as the sound they represent is not practically feasible.  This is why radio and TV signals are sent using a base frequency different from the frequency of the desired information.  The information, such as sound, is represented by the modulation of the signal.  A signal is modulated when one of its parameters changes with time.  We will discuss both amplitude and frequency modulation below.
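The quarter-wavelength estimate above can be checked with a short script (assuming c = 3 x 10^8 m/s as in the calculation):

```python
C = 3.0e8  # speed of light in m/s

def min_antenna_length(freq_hz: float) -> float:
    """Quarter-wavelength antenna: L = lambda/4 = c/(4f)."""
    return C / (4 * freq_hz)

print(min_antenna_length(500))    # 150000.0 m: the impractical audio-frequency antenna
print(min_antenna_length(600e3))  # 125.0 m for 600 kHz on the AM dial
print(min_antenna_length(100e6))  # 0.75 m for 100 MHz on the FM dial
```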

Another reason to modulate signals is interference. Once I start sending a signal of about 500 Hz, any other 500 Hz signal would interfere with it. So I could only broadcast one signal of audible-range frequencies in any region at a time.

### Amplitude Modulation

In amplitude modulation, the amplitude of a transmission represents the signal.  The figure to the right, taken from *Signals* by John Pierce (an interesting book that is unfortunately out of print), shows how such a signal is created.  The carrier wave has a frequency in the kilohertz range (600 on the AM dial is 600 kHz), so the antenna need only be 125 m long.  The initial signal is raised until it is completely positive, and this positive version of the signal is used as the envelope that determines the amplitude of the transmitted wave.

The transmitted wave can be broken down into its Fourier components, and that information sent along with the wave to help the decoding process.  Since the transmitted wave is non-periodic, a Fourier transform must be used.  The collection of amplitudes for the different component frequencies is called a spectrum.  This spectrum contains two groups of frequencies, called sidebands.  Sometimes one sideband is sent, and sometimes both are sent.  The two sidebands carry redundant information in theory, but they can be compared to check for losses and interference.

The bandwidth of an AM signal depends on the bandwidth of the original signal, but it will be larger than this original bandwidth.  A typical signal has an original bandwidth of 5 kHz and a transmitted bandwidth of 10 kHz.  This increase in bandwidth might seem like a reason not to modulate, but the advantages of modulation far outweigh it.  As already mentioned, antennae can be of a reasonable length, and several signals containing information at similar frequencies can be sent without risk of interference.

### Frequency Modulation

Frequency modulation changes the frequency of the transmitted wave in response to the amplitude of the original signal.  An example of this is shown (again thanks to *Signals* by John Pierce) in the figure below.
The higher the amplitude of the original signal, the more the frequency of the transmitted wave changes, and the higher the bandwidth necessary to transmit.  FM radio frequencies are given in megahertz, and the number on your FM dial represents the frequency (in MHz) of the carrier wave before it is modulated.  Broadcasting at 100 MHz requires an antenna 0.75 m long, and the length of this antenna must be minutely adjustable to produce varying frequencies.  The original signal for an FM transmission will typically have a bandwidth of 15 kHz.  The bandwidth of the transmitted signal will be much larger, typically around 200 kHz, since the frequency changes so much.

### Signal-to-Noise Ratio

Both amplitude and frequency modulation use analog signals to transmit analog data.  The original information is continually varying, as are the amplitude of AM signals and the frequency of FM signals.  Since many different sources emit radiation at all different frequencies, signals risk being drowned out or distorted through the interference of other radiation.  This "noise" is present even in high-quality fiber optics, and it becomes much more significant when sending signals through the atmosphere, as in radio and satellite communication.  We describe the ability of a signal to be recognized through the noise in terms of the signal-to-noise ratio (SNR).  The SNR is the ratio of the signal amplitude to the noise amplitude, and it is usually reported in decibels (dB).  The decibel is a logarithmic scale, so adding 10 to the dB level reflects a multiplication by 10 in the value you are reporting: SNR (in dB) = 10 log10 (As/An), where As is the signal amplitude and An is the noise amplitude.  Negative decibels mean that the denominator (noise) is stronger than the numerator (signal).  An SNR of 0 dB means that the signal is the same strength as the noise.
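Using the definition just given (10 log10 of the amplitude ratio), the decibel conversion can be sketched in a few lines:

```python
import math

def snr_db(signal_amp: float, noise_amp: float) -> float:
    """SNR in decibels, per the definition above: 10 log10(As/An)."""
    return 10 * math.log10(signal_amp / noise_amp)

print(snr_db(100, 1))  # 20.0 dB: signal 100 times stronger than the noise
print(snr_db(1, 10))   # -10.0 dB: noise 10 times stronger than the signal
print(snr_db(5, 5))    # 0.0 dB: signal and noise equally strong
```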
An SNR of 20 dB means the signal is 100 times stronger than the noise; -10 dB means that the noise is 10 times stronger than the signal; and 70 dB means that the signal is 10 million (10^7) times stronger than the noise.

## Digital Encoding

Discussion Question:  If music is analog, why do digital CDs produce a better-quality sound than analog records?  When might analog be better?

The above discussion of modulation applies to analog signals.  Computers, however, work with digital signals.  In signals-field lingo, converting to digital is called pulse code modulation.  In digital encoding, the amplitude is measured at various times, and these amplitudes are then converted to binary numbers.  Binary digits are less susceptible to noise than analog signals, since the exact amplitude of a bit does not matter: it is either on or off, never half-on.  For a highly precise representation of continually varying information, however, you need to express the amplitudes using more bits, and thus more information must be sent.  Consider the cases illustrated below.

In the graphs above, the dark blue solid curve is analog information which we would like to represent with digital encoding.  Sampling the data at integer values on the horizontal axis results in the pink dotted curve.  This is a fairly good approximation, but there is still an obvious difference between the blue data and the pink encoded signal.  We only need to send 7 data points for the pink curve, but we lose accuracy when we do so.  We can produce a much better representation of the information by sampling it at half-integers on the horizontal axis.  The resulting dotted green curve on the right graph is a nearly perfect match to the blue data curve, but we have almost doubled the amount of information needed to represent the curve: the green curve uses 13 data points.

The previous two examples did not need many data points because the blue data curve was very smooth; in fact, it was based on a sine curve.  Real data is less smooth, such as the dark blue line in the graphs below.  Fitting this curve using only the 7 points at integer values on the horizontal axis results in the pink dashed curve, which is not a very good representation of the information: it loses all of the features.  Doubling the sampling rate to half-integers yields the green dashed curve.
This shows the approximate location of the peaks in the data, although the height and exact location are still inaccurate.  Sampling 16 times, at every 0.4 on the horizontal axis, yields the turquoise curve on the right graph, which is only marginally better than the green curve.  To get a curve that matches this data to high precision requires 31 data points.  The closer to the original signal you want to be, the more information you will have to send.  Shannon (of information theory fame), Oliver, and Pierce (the author of *Signals*) recognized in 1948 the advantages of digital encoding.  They showed mathematically that a signal having a Fourier spectrum of bandwidth B can be accurately represented by sampling the amplitude 2B times each second.
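A minimal sketch of the sampling trade-off described above, using a sine-based curve like the smooth example in the text (the exact function and interval are my own assumptions, chosen to reproduce the 7- and 13-point counts):

```python
import math

def sample(f, t_start, t_end, step):
    """Sample a signal function f at regular intervals from t_start to t_end."""
    n = int(round((t_end - t_start) / step)) + 1
    return [f(t_start + i * step) for i in range(n)]

signal = lambda t: math.sin(t)             # a smooth, sine-based curve

coarse = sample(signal, 0.0, 6.0, 1.0)     # integer sampling: 7 points, like the pink curve
fine = sample(signal, 0.0, 6.0, 0.5)       # half-integer sampling: 13 points, like the green curve
print(len(coarse), len(fine))              # 7 13
```

Halving the step nearly doubles the data to send, which is exactly the cost the text describes.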

The above discussion dealt only with the number of points at which you sample the data.  The accuracy of the signal also depends on the accuracy to which you measure at each sampling point.  If we were limited to integer values on the vertical axis, the data would look like the graph below.  The pink line has rounded data sampled at the same rate (every 0.4 on the horizontal scale) as the turquoise line above.  This data would use three bits to represent the amplitude at each point: one for the sign, and two for the integers 0, 1, 2.  If we wanted to measure the amplitudes more accurately to obtain the turquoise graph above, we would need one bit for the sign, six bits for the two-digit amplitudes, and a few more bits to indicate the need to shift the decimal point down by a factor of 10.  Again, the closer to the original information you want your signal to be, the more bits you need to send.

### Dispersion

While pulses of binary information are less prone to misinterpretation than analog data (a 1 is quite distinct from a 0), binary information does suffer the effects of dispersion, or spreading.  As a signal travels, it spreads out.  For light, different wavelengths have different indices of refraction and so travel at different rates through materials.  Thus different components of the Fourier spectrum of a pulse get out of phase and recombine at the destination to give a wider pulse.  In fiber optics, we talk about modes instead of Fourier components, but the idea is the same: different modes travel at different rates, so pulses get distorted and disperse during transit.  This dispersion limits the rate at which you can send information.

Consider the example illustrated below.  A very precise measurement has been made of an analog signal, resulting in a high number of bits needing to be transmitted.  Each bit is represented by a pulse, as in figure (a).  Each of these pulses spreads in transit so that they overlap by the time they reach the destination, as illustrated in figure (b).  The signal output shown in blue in (c) looks like one bit instead of four.  Dispersion forces us to use a lower bit rate (d) so the dispersed bits overlap less (e) and are distinguishable in the output (f).

- (a) The original high-bitrate signal
- (b) The high-bitrate pulses after traveling a long distance
- (c) The blue line shows the output after traveling a long distance; the pulses are indistinguishable.
- (d) The original lower-bitrate signal
- (e) The lower-bitrate pulses after traveling a long distance
- (f) The blue line shows the output after traveling a long distance; the pulses are still distinguishable.

By limiting the bitrate at which binary information can be sent, dispersion limits the number of bits that can be used to represent information when data must be sent in a certain amount of time.  This in turn limits the accuracy of the encoded signal.  In addition, narrower pulses have broader Fourier spectra and so require a larger bandwidth to send them.

Digital encoding does, however, have many advantages over analog modulation.  Provided the bit rate is low enough to avoid overlap of bits, the information is received at the destination without distortion: a 1 remains a 1 and doesn't change to 0.99.  Analog signals, on the other hand, are continually varying, so any loss or dispersion distorts the signal.

## Modes of Encoding

Discussion Question: Think about how you would send a digital signal.  The previous section shows one format where the signal goes to zero between each bit.  One could imagine using instead a scheme in which the bits run with no such gap and a series of 1s would be represented by a long high signal.  What are some advantages of each format?  What are some disadvantages?  Can you think of other ways to send binary data that might work better in certain circumstances?

Digital information can be sent in several different formats, including non-return-to-zero, return-to-zero, Manchester code, and bipolar.  The figure below illustrates the signal 10110 in each of these formats.

One way to represent data, mentioned in the Discussion Question above, is the non-return-to-zero (NRZ) format.  In this standard format, the signal level is always high for 1 and low for 0; it does not change unless one bit differs from the previous bit.  NRZ format can be used to transmit the most information in the least amount of time, since the signal level does not change within one bit.  While a constant time per bit is used to compare the four formats here, the real limitation on signal transfer is pulse width.  Since NRZ has only one pulse height per bit, it can be sent twice as fast as the other formats, which have two or three pulse heights per bit.  If the bit rate is pre-determined and independent of format, NRZ is still economical, since pulses twice as wide can be sent using half the bandwidth.  While NRZ is a good format to use when the bit rate is known, it runs into trouble when signals are sent over long distances or to an unfamiliar system: if you don't know initially how wide one bit is, you have to look at several different signals to determine the minimum pulse width.

In the return-to-zero (RZ) format illustrated to the right, the signal returns to zero at the end of each bit.  Thus you can find the width of a bit by doubling the width of a pulse.  RZ format clearly separates bits representing 1s, but a string of 0s could be miscounted, thereby contaminating your data.  As for NRZ, the clock speed should be known for RZ data.  Because of the relationship between bandwidth and signal change rate, an RZ signal must either be sent half as fast as NRZ or use twice the bandwidth.

Manchester code, illustrated to the right, is the first format we have discussed that does not require a known clock speed.  In the Manchester code, a 1 is indicated by a high-then-low voltage, while a 0 is low-then-high.  Each bit contains a change in signal height, so the bit width is well-defined.  Like RZ, however, the bit rate must be lower or the bandwidth higher than for a comparable NRZ signal.

The final format we will present is bipolar coding.  This format uses a constant null signal of medium height, with 0 represented by a smaller signal and 1 by a larger signal.  An advantage over RZ is that data is distinct from silence.  Bipolar coding, unlike RZ and Manchester, represents 1 and 0 by distinct signal levels.  This makes the data easier to identify, but at the price of constantly sending the null-level signal; transmitters using the other formats can turn off between bursts of information.  While RZ, Manchester, and bipolar each potentially require two pulse-height changes per bit, only bipolar has two pulse-height changes for every bit, so any delays or distortions will affect the bipolar format the most.

The four formats discussed above are commonly used, but they are by no means the only formats in use.  Each piece of each system could use its own unique data format.
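The four formats can be sketched as lists of discrete signal levels.  The particular numeric levels used here, especially the three levels (0, 1, 2) chosen for bipolar, are illustrative assumptions rather than a standard:

```python
def encode(bits: str, scheme: str):
    """Return a list of signal levels (one or two per bit) for several line codes."""
    out = []
    for b in bits:
        if scheme == "NRZ":           # one level per bit: high for 1, low for 0
            out.append(1 if b == "1" else 0)
        elif scheme == "RZ":          # return to zero in the second half of each bit
            out += [1 if b == "1" else 0, 0]
        elif scheme == "Manchester":  # 1 = high-then-low, 0 = low-then-high
            out += [1, 0] if b == "1" else [0, 1]
        elif scheme == "bipolar":     # levels around a constant mid-level null (assumed 0/1/2)
            out += [2 if b == "1" else 0, 1]
    return out

for scheme in ("NRZ", "RZ", "Manchester", "bipolar"):
    print(scheme, encode("10110", scheme))  # the signal 10110 from the figure
```

Note that NRZ produces half as many level entries per bit as the others, reflecting its bandwidth advantage.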

Discussion Question:  What applications might be better able to efficiently use the NRZ format without losing or corrupting data?  When would a format such as the Manchester code be preferable?

## Bit Rate, Rise Time, and Bandwidth

As previously mentioned, pulses spread out and get distorted as they travel.  In addition, there is no such thing as a perfectly square pulse.  System characteristics limit the speed at which optical signals can change.  This delay can be described in terms of a rise time, the time required for a signal to go from 10% of its maximum amplitude to 90%.  Similarly, a fall time is the time required for the signal to fall from 90% to 10% of its maximum amplitude.  Using these boundaries avoids hairsplitting decisions in the gradually changing portions of exponential decays and rises.  Obviously, if the rise time is much greater than the pulse width, the pulse will never really form and could go undetected.  A general rule is that the rise time should be no more than 70% of the pulse period (width).  Since the bit rate, or frequency, is just the inverse of the width, we have

Rise Time < 0.70 T
Rise Time < 0.70/B

For example, consider sending a signal at 12,000 bit/s.  The rise time of each pulse can be no more than (0.70)(1 s/12,000 bit) = 58 µs if the bits are to be readable.  This restriction becomes more significant as the desired bit rate increases.  For a fiber optic link carrying 5 Gbit/s, the maximum rise time would be (0.70)/(5 Gbit/s) = 1.4 x 10^-10 s.
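The rise-time limit can be computed with a one-line helper:

```python
def max_rise_time(bit_rate: float) -> float:
    """Rise-time limit from the rule above: no more than 70% of the pulse period T = 1/B."""
    return 0.70 / bit_rate

print(max_rise_time(12_000))  # about 5.8e-05 s, i.e. 58 microseconds
print(max_rise_time(5e9))     # 1.4e-10 s for a 5 Gbit/s fiber link
```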

We have used the word bandwidth to describe the range of frequencies necessary to accurately represent a signal based on its Fourier spectrum, but bandwidth can also be used to describe the properties of a fiber.  This "optical" bandwidth is the range of frequencies in which no signal attenuation greater than 3 dB will occur.  For the NRZ format, with one pulse height per bit, the necessary optical bandwidth is approximately equal to the bit rate.  For the other formats, this necessary bandwidth doubles.  NRZ, however, is generally implied when the optical bandwidth of a fiber is stated.  If you want to be a bit more technical, the optical bandwidth is closer to 0.9 times the bit rate.  Since the length of a fiber affects the attenuation of a signal, fiber bandwidths are often discussed in Hz-km.  (Here one Hz corresponds to one bit per second.)

## Multiplexing

Our society would never be content with sending only one signal at a time.  While allowing only one person to speak at a time may help keep conversations polite and schoolrooms under some semblance of control, this policy is inadequate in situations with larger numbers of people.  Imagine if only one person could speak at a time in a football stadium, or if only one radio station were allowed to broadcast!  Radio stations are each assigned their own carrier frequency so we can have a choice of stations while avoiding interference between stations.  In a similar manner, many signals can be sent across one wire or optical fiber.  This adjustment of base signal frequency to fit many signals on one carrier is called multiplexing.

Telephone signals have traditionally used the form of multiplexing called frequency-division multiplexing. Cable TV is another application that uses frequency division. In this type of multiplexing, incoming signals are shifted in frequency to one of several "channels", just as different radio stations use different frequency ranges for their broadcasts. For example, a telephone cable might carry 12 conversations at one time, each occupying a bandwidth of 4 kHz above the previous conversation's frequencies. The bandwidth needed to carry 12 conversations is 12 times as large as the bandwidth for one conversation, but you only need one wire rather than 12 to carry the conversations. Groups of channels can be similarly combined into larger and larger groups, until the bandwidth capacity of the wire or fiber is reached. Some groups can carry thousands of channels.
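The channel stacking described above can be sketched as follows.  The 4 kHz channel width comes from the example in the text, while the function and base frequency are illustrative:

```python
CHANNEL_BW = 4_000  # Hz per conversation, from the telephone example above

def channel_band(index: int, base: float = 0.0):
    """Frequency range occupied by conversation `index` on the shared wire."""
    low = base + index * CHANNEL_BW
    return (low, low + CHANNEL_BW)

for i in range(3):
    print(channel_band(i))  # each conversation sits 4 kHz above the previous one
print("total bandwidth for 12 conversations:", 12 * CHANNEL_BW, "Hz")
```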

Digital signals and signals over optical fibers do better with time-division multiplexing.  In this form of multiplexing, several signals occupy one channel, but at different times.  Digital signals can be sent faster than people talk (or type), so electronic encoders look at multiple signals in quick succession.  These electronics are fairly inexpensive and allow many users to access a single channel.  Time-division multiplexing is also well-suited to satellite communication.

While much of the discussion in this reading assignment used radio and telephones as examples (you can blame the book *Signals*, which I used heavily as a resource), the principles are applicable to modems, cable modems, and Ethernet connections as well.  Many signals must be sent in one medium, and these signals have associated bandwidths found from their Fourier transforms.  If the signals are not to interfere, they must either use different channels (occupy different frequency bands) or be sent at different times.