Signal Characteristics
Information can be sent and described in many different ways: it can be either digital or analog, and numbers can be represented in decimal, binary, or any other base. Once you determine how the information is to be represented, you need to decide how to send it. Modulation and encoding are two methods of preparing information to be sent from one location to another.
Before discussing binary numbers, let us look
more closely at the decimal system. Back in elementary school you
learned that the number 34618 means you add 8 "ones", 1 "ten", 6 "hundreds",
4 "thousands", and 3 "ten thousands". Each digit represents a different
power of 10. Using math you learned after elementary school, we could
write the number 34618 as
3 x 10^4 + 4 x 10^3 + 6 x 10^2 + 1 x 10^1 + 8 x 10^0
In binary mathematics, each digit represents a power of two rather than a power of ten, and the values each digit can take are restricted to 0 and 1, rather than 0 through 9 as in decimal. The binary number 101101 could be written as
1 x 2^5 + 0 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0
Converting to decimal, this becomes 32 + 8 + 4 + 1 = 45.
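To make the place-value idea concrete, here is a small Python sketch (illustrative, not part of the original text) that expands a binary string digit by digit:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum digit * 2^position over the digits of a binary string."""
    value = 0
    for position, digit in enumerate(reversed(bits)):
        value += int(digit) * 2 ** position
    return value

print(binary_to_decimal("101101"))  # prints 45
```

Python's built-in int("101101", 2) performs the same conversion.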
The addition of binary numbers follows the same rules as the addition of decimal numbers: you add the two digits representing the same power of two (ten), and if the sum is equal to or greater than the next power of two (ten), you "carry" a digit to the next higher power of two (ten). Just as in decimal math, 0 + 0 = 0, and 1 + 0 = 0 + 1 = 1. 1 + 1, however, takes you to the next power of two, so in binary 1 + 1 = 10. Here is an example of adding two binary numbers:

      1011
    +  110
    ------
     10001
We can check our result by converting to decimal: 1011 is 11, 110 is 6, and their sum 10001 is 16 + 1 = 17. Since 11 + 6 = 17, the addition checks out.
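The carrying rule translates directly into code. Here is a minimal sketch (the function name and layout are my own) of column-by-column binary addition:

```python
def add_binary(a: str, b: str) -> str:
    """Add two binary strings column by column, carrying a 1
    whenever a column sums to two or more."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    digits, carry = [], 0
    for da, db in zip(reversed(a), reversed(b)):
        column = int(da) + int(db) + carry
        digits.append(str(column % 2))  # digit kept in this column
        carry = column // 2             # digit carried to the next column
    if carry:
        digits.append("1")
    return "".join(reversed(digits))

print(add_binary("1011", "110"))  # prints 10001 (11 + 6 = 17)
```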
One common way to encode text and numbers as binary is ASCII, which assigns each character a unique 7-bit code.
While ASCII is widely used, it is not the only choice. Consider, for example, trying to save the information about a picture. You could assign each color variety a number, and then list the colors in the picture pixel by pixel. ASCII is certainly one option, since every number has a unique ASCII representation, but ASCII might not be the best choice. ASCII uses 7 bits for each symbol. If you had 256 colors, each would need to be represented by 3 decimal digits, or 21 ASCII bits. Merely representing the color number in binary would require only 8 bits. Thus ASCII is not the best choice to represent colors.
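The bit-count comparison is easy to verify. A short sketch (the 7-bits-per-character figure comes from the paragraph above):

```python
color = 255                        # largest of 256 color numbers
ascii_bits = len(str(color)) * 7   # "255" is 3 characters at 7 bits each
binary_bits = color.bit_length()   # 11111111 in binary
print(ascii_bits, binary_bits)     # prints 21 8
```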
| Who | Achievement | Digital/Analog | Number Base |
| --- | --- | --- | --- |
| Babylonia | The abacus, a mechanical calculating aid. | Digital | Decimal |
| Wilhelm Schickard | Built the first mechanical adding machine (lost until the 1900s): a "Calculating Clock" that performed addition and subtraction with gears. | Digital | Decimal |
| Special-purpose analog machines | The 1700s saw the rise of many machines to aid navigation, as well as a few more adding machines. | Analog | Decimal |
| Charles Babbage | Developed and partially built the mechanical Difference Engine, and dreamed up the mechanical Analytical Engine. Babbage's creations were hampered by the lack of precision machining available at the time, as well as by his reliance on the decimal system. | Digital | Decimal |
| Herman Hollerith | Built an electromechanical Tabulating Machine to help with the census. | Digital | Decimal? |
| Vannevar Bush | Built an analog differential analyzer to solve differential equations; Bush's machine was analog, not digital like the machines of Hollerith and Babbage. | Analog | Decimal? |
| C. E. Wynn-Williams | First large-scale application of digital electronics: a binary counter for physics experiments in Cambridge. | Digital | Binary |
| Alan Turing | Developed the theory of computability and introduced the Turing machine, a purely theoretical creation consisting of an infinitely long tape holding binary information and a moving, programmable read/write head that travels along the tape. | Digital | Binary |
| Konrad Zuse | Built an electromechanical programmable computer. Zuse was the first to use a binary system in a calculating machine, recognized the need for a general-purpose programmable machine, and developed his own version of Boolean algebra for the logic portion of the computer. | Digital | Binary |
| John Atanasoff and Clifford Berry | Built an electronic digital computer, the first to use vacuum tubes. The Atanasoff-Berry Computer (ABC) was a special-purpose machine for solving systems of equations; it also incorporated binary arithmetic, regenerative electronic memory, and logic circuits. | Digital | Binary |
| George Stibitz and S. B. Williams | Built the first multi-terminal, remotely accessible calculator. It performed addition, subtraction, multiplication, and division on complex numbers using relays and binary mathematics, but it was not programmable. | Digital | Binary |
| Ballistics Research Laboratory | Developed the Electronic Numerical Integrator and Computer (ENIAC). Some of ENIAC's complexity was due to Mauchly's decision to use decimal numbers rather than binary. | Digital | Decimal |
| Bardeen, Brattain, and Shockley | Invented the transistor at Bell Telephone Laboratories. Transistors perform all of the electrical functions of vacuum tubes, but they use little energy, generate little heat, turn on instantly, and are sturdy, stable, and cheap. | Digital | Binary |
| Maurice Wilkes | Built the first machine capable of performing useful stored programs. | Digital | Binary |
| J. Presper Eckert and John Mauchly | Built the first computer system, UNIVAC. Mauchly and Eckert continued to stick with decimal math, but UNIVAC's capabilities were revolutionary. | Digital | Decimal |
| Jay Forrester and Bob Everett | Built the first real-time computer, the Whirlwind. | Digital | Binary |
| Edward Roberts and MITS | Introduced the personal computer. | Digital | Binary |
As you have learned, the wavelength (and therefore frequency) of a standing wave is related to the length of the medium in which it is created. Electromagnetic waves for radio are created from standing waves of currents and voltages in an antenna. Since the end of the antenna is not fixed, the antenna must be at least 1/4 as long as the wavelength of the radiation it emits. (This relationship is discussed in the assignment on standing waves). For an electromagnetic signal at 500 Hz (typical sound frequency), we find
L_min = λ/4 = c/(4f) = (3 x 10^8 m/s)/((4)(500 Hz)) = 1.5 x 10^5 m
That's one big antenna! Clearly, it is not practical to transmit electromagnetic signals at the same frequency as the sound they represent. This is why radio and TV signals are sent using a base frequency different from the frequency of the desired information. The information, such as sound, is represented by the modulation of the signal: a signal is modulated when one of its parameters changes with time. We will discuss both amplitude and frequency modulation below.
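The quarter-wavelength rule makes the problem easy to quantify. A quick sketch (the frequencies are the examples used in this reading):

```python
c = 3e8  # speed of light in m/s

# Minimum antenna length L = c / (4f) for an audio-range signal,
# an AM carrier, and an FM carrier.
for label, f in [("500 Hz audio", 500), ("600 kHz AM", 600e3), ("100 MHz FM", 100e6)]:
    print(f"{label:>12}: {c / (4 * f):,.2f} m")
# 500 Hz audio: 150,000.00 m
#   600 kHz AM: 125.00 m
#   100 MHz FM: 0.75 m
```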
Another reason to modulate signals is interference. Once I start sending
a signal of about 500 Hz, any other 500 Hz signal would interfere with
it. So I could only broadcast one signal of audible-range frequencies in
any region at a time.
Amplitude Modulation
In amplitude modulation, the amplitude of a transmission represents
the signal. The figure to the right, taken from Signals by John
Pierce (an interesting book that is unfortunately out of print), shows
how such a signal is created. The carrier wave has a frequency in
the kilohertz range (600 on the AM dial is 600 kHz), so the antenna need
only be 125 m long. The initial signal is raised until it is completely
positive, then this positive version of the signal is used as the envelope
that determines the amplitude of the transmitted wave.
The transmitted wave can be broken down into its Fourier components, and that information sent along with the wave to help the decoding process. Since the transmitted wave is non-periodic, a Fourier transform must be used. The collection of amplitudes for the different component frequencies is called a spectrum. This spectrum contains two groups of frequencies, called sidebands. Sometimes one sideband is sent, and sometimes both are sent. The information in the two sidebands is in theory redundant, but they can be compared to check for losses and interference. The bandwidth of an AM signal depends on the bandwidth of the original signal, but it is larger than that original bandwidth: a typical signal has an original bandwidth of 5 kHz and a transmitted bandwidth of 10 kHz. This increase in bandwidth might seem like a reason not to modulate, but the advantages of modulation far outweigh it. As already mentioned, antennae can be of a reasonable length, and several signals containing information at similar frequencies can be sent without risk of interference.
[Figure: constructing an amplitude-modulated signal, from Signals by John Pierce]
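As a numerical sketch of that construction (all values illustrative): shift the signal until it is entirely positive, then use it as the envelope of a kilohertz-range carrier.

```python
import numpy as np

t = np.linspace(0, 1e-3, 10000)       # one millisecond of time
signal = np.sin(2 * np.pi * 2e3 * t)  # a 2 kHz audio-range tone

envelope = 1 + signal                 # raised until completely positive
am_wave = envelope * np.sin(2 * np.pi * 600e3 * t)  # 600 kHz carrier
```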
Frequency Modulation
Frequency modulation changes the frequency of the transmitted wave in response to the amplitude of the original signal. An example of this is shown (again thanks to Signals by John Pierce) in the figure below. The higher the amplitude of the original signal, the more the frequency of the transmitted wave changes, and the higher the bandwidth necessary to transmit. FM radio frequencies are given in megahertz: the number on your FM dial represents the frequency (in MHz) of the carrier wave before it is modulated. Broadcasting at 100 MHz requires an antenna 0.75 m long, and the length of this antenna must be minutely adjustable to produce the varying frequencies. The original signal for an FM transmission will typically have a bandwidth of 15 kHz. The bandwidth of the transmitted signal will be much larger, typically around 200 kHz, since the frequency changes so much.
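A rough numerical sketch of frequency modulation (frequencies are scaled far below broadcast values so the arrays stay small, and the variable names are my own):

```python
import numpy as np

t = np.linspace(0, 5e-3, 50000)
dt = t[1] - t[0]
signal = np.sin(2 * np.pi * 1e3 * t)  # the original information

carrier_freq = 20e3  # stand-in for a ~100 MHz broadcast carrier
deviation = 5e3      # frequency swing per unit signal amplitude

# The instantaneous frequency follows the signal amplitude; the phase
# of the transmitted wave is the running integral of that frequency.
inst_freq = carrier_freq + deviation * signal
fm_wave = np.sin(2 * np.pi * np.cumsum(inst_freq) * dt)
```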
Signal-to-Noise Ratio

Both amplitude and frequency modulation use analog signals to transmit analog data. The original information is continually varying, as are the amplitude of AM signals and the frequency of FM signals. Since many different sources emit radiation at all different frequencies, signals risk being drowned out or distorted through the interference of other radiation. This "noise" is present even in high-quality fiber optics, and it becomes much more significant when sending signals through the atmosphere, such as in radio and in satellite communication.

We describe the ability of a signal to be recognized through the noise in terms of the signal-to-noise ratio (SNR). The SNR is the ratio of the signal amplitude to the noise amplitude, and it is usually reported in decibels (dB). The decibel is a logarithmic scale, so adding 10 to the dB level reflects a multiplication by 10 in the value you are reporting:

SNR (in dB) = 10 log10 (As/An),

where As is the signal amplitude and An is the noise amplitude. Negative decibels mean that the denominator (noise) is stronger than the numerator (signal). An SNR of 0 dB means that the signal is the same strength as the noise. An SNR of 20 dB means the signal is 100 times stronger than the noise; -10 dB means that the noise is 10 times stronger than the signal; and 70 dB means that the signal is 10 million (10^7) times stronger than the noise.
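The SNR formula above is one line of code. Using the text's convention of 10 log10 of the amplitude ratio:

```python
import math

def snr_db(signal_amplitude: float, noise_amplitude: float) -> float:
    """Signal-to-noise ratio in decibels: 10 log10(As / An)."""
    return 10 * math.log10(signal_amplitude / noise_amplitude)

print(snr_db(100, 1))  # prints 20.0  (signal 100x the noise)
print(snr_db(1, 10))   # prints -10.0 (noise 10x the signal)
print(snr_db(1e7, 1))  # prints 70.0  (signal 10 million times the noise)
```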
The above discussion of modulation applies to analog signals. Computers, however, work with digital signals. In the lingo of the signals field, converting to digital is called pulse code modulation. In digital encoding, the amplitude is measured at various times, and these amplitudes are then converted to binary numbers. Binary digits are less susceptible to noise than analog signals are, since the exact amplitude of a bit does not matter: it is either on or off, never half-on. For a highly precise representation of continually varying information, however, you need to express the amplitudes using more bits, and thus more information must be sent. Consider the cases illustrated below:
[Graphs: an analog curve sampled at integer points (left) and at half-integer points (right)]
In the graphs above, the dark blue solid curve is analog information
which we would like to represent with digital encoding. Sampling
the data at integer values on the horizontal axis results in the pink dotted
curve. This is a fairly good approximation, but there is still an
obvious difference between the blue data and the pink encoded signal. We
only need to send 7 data points for the pink curve, but we lose accuracy
when we do so. We can produce a much better representation of the
information by sampling it at half-integers on the horizontal axis.
The resulting dotted green curve on the right graph is a nearly perfect
match to the blue data curve, but we have almost doubled the amount of
information needed to represent the curve. The green curve uses 13
data points.
The previous two examples did not use that many data points, because the blue data curve was very smooth. In fact, it was based on a sine curve. Real data is less smooth, such as the dark blue line in the graphs below. Fitting this curve using only the 7 points at integer values on the horizontal axis results in the pink dashed curve, which is not a very good representation of the information. It loses all of the features. Doubling the sampling rate to half-integers yields the green dashed curve. This shows the approximate location of the peaks in the data, although the height and exact location are still inaccurate. Sampling 16 times, at every 0.4 on the horizontal axis, yields the turquoise curve on the right graph, which is only marginally better than the green curve. To get a curve that matches this data to high precision requires 31 data points. The closer to the original signal you want to be, the more information you will have to send.
[Graphs: a less smooth data curve sampled at three different rates]
The above discussion dealt only with the number of points at which you sample the data. The accuracy of the signal also depends on the accuracy to which you measure at each sampling point. If we were limited to integral values on the vertical axis, the data would look like the graph below. The pink line has rounded data sampled at the same rate (every 0.4 on the horizontal scale) as the turquoise line above. This data would use three bits to represent the amplitude at each point: one for the sign, and two for the integers 0, 1, 2. If we wanted to measure the amplitudes more accurately to obtain the turquoise graph above, we would need one bit for the sign, 6 bits for the 2-digit amplitudes, and a few more bits to indicate the need to shift the decimal point down a factor of 10. Again, the closer to the original information you want your signal to be, the more bits you need to send.
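The sampling rates and the rounding step can be mimicked in a few lines. Here is a sketch using a sine-based stand-in for the blue analog curve (the exact curve in the figures is not specified):

```python
import numpy as np

def analog(t):
    """A smooth stand-in for the analog curve in the figures."""
    return 2 * np.sin(2 * np.pi * t / 6)

coarse = analog(np.arange(0, 7))         # 7 samples at integer times
fine = analog(np.arange(0, 6.5, 0.5))    # 13 samples at half-integers
dense = analog(np.arange(0, 6.4, 0.4))   # 16 samples, every 0.4

# Coarse vertical accuracy: round each sample to an integer level,
# as in the final graph above.
rounded = np.round(dense)
```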
Dispersion
While pulses of binary information are less prone to misinterpretation
than analog data (a 1 is quite distinct from a 0), binary information does
suffer the effects of dispersion, or spreading. As a signal travels,
it spreads out. For light, different wavelengths have different indices
of refraction and so travel at different rates through materials.
Thus different components of the Fourier spectrum of a pulse will get out
of phase and recombine at the destination to give you a wider pulse.
In fiber optics, we talk about modes instead of Fourier components, but
the idea is the same. Different modes travel at different rates,
so pulses get distorted and disperse during transit. This dispersion
limits the rate at which you can send information. Consider the example
illustrated below. A very precise measurement has been made of an
analog signal, resulting in a high number of bits needing to be transmitted.
Each bit is represented by a pulse, as in figure (a). Each of these
pulses spreads in transit so that they overlap by the time they reach the
destination, as illustrated in figure (b). The signal output shown
in blue in (c) looks like one bit instead of four. Dispersion forces
us to use a lower bit rate (d) so the dispersed bits overlap less (e) and
are distinguishable in the output (f).
[Figures (a)-(c)] (a) The original high-bitrate signal. (b) The high-bitrate pulses after traveling a long distance. (c) The blue line shows the output after traveling a long distance; the pulses are indistinguishable.
[Figures (d)-(f)] (d) The original lower-bitrate signal. (e) The lower-bitrate pulses after traveling a long distance. (f) The blue line shows the output after traveling a long distance; the pulses are still distinguishable.
By limiting the bitrate at which binary information can be sent, dispersion limits the number of bits that can be used to represent information when data must be sent in a certain amount of time. This in turn limits the accuracy of the encoded signal. In addition, narrower pulses have broader Fourier spectra and so require a larger bandwidth to send them.
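Dispersion's effect on the bit rate can be imitated by spreading each transmitted pulse into a wider one and summing, as in the figures above. A toy sketch (all widths and spacings are arbitrary illustrative units):

```python
import numpy as np

def received(pulse_times, width, t):
    """Sum of Gaussian pulses, one per transmitted 1-bit, after each
    has dispersed to the given width in transit."""
    return sum(np.exp(-((t - t0) / width) ** 2) for t0 in pulse_times)

t = np.linspace(0, 20, 2000)
spread = 1.5  # pulse width after a long journey

fast = received([4, 5, 6, 7], spread, t)    # closely packed bits merge into one blob
slow = received([4, 8, 12, 16], spread, t)  # lower bit rate: pulses stay distinct
```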
Digital encoding does, however, have many advantages over analog modulation. Presuming the bit rate is sufficiently low to avoid overlap of bits, the information is received at the destination without distortion: a 1 remains a 1 and doesn't change to 0.99. Analog signals, on the other hand, are continually varying, so any loss or dispersion distorts the signal.
Digital information can be sent in several different formats, including
non-return-to-zero, return-to-zero, Manchester code, and bipolar.
The figure below illustrates the signal 10110 in each of these formats.
One way to represent data, mentioned in the Discussion Question above, is the non-return-to-zero (NRZ) format. In this standard format, the signal level is always high for 1 and low for 0; it does not change unless one bit differs from the previous bit. NRZ can be used to transmit the most information in the least amount of time, since the signal level does not change within one bit. While a constant time per bit is used to compare the four formats here, the real limitation on signal transfer is pulse width. Since NRZ has only one pulse height per bit, it can be sent twice as fast as the other formats, which have two or three pulse heights per bit. If the bit rate is pre-determined and independent of format, NRZ is still economical, since pulses twice as wide can be sent using half the bandwidth.
[Figure: 10110 in NRZ format]
While NRZ is a good format to use when the bit rate is known, it runs into trouble when signals are sent over long distances or to an unfamiliar system. If you don't know initially how wide one bit is, you would have to look at several different signals to determine the minimum pulse width. In the return-to-zero (RZ) format illustrated to the right, the signal returns to zero at the end of each bit, so you can find the width of a bit by doubling the width of a pulse. RZ format clearly separates bits representing 1s, but a string of 0s could be miscounted, thereby contaminating your data. As for NRZ, the clock speed should be known for RZ data. Because of the relationship between bandwidth and signal change rate, an RZ signal must either be sent half as fast as NRZ or use twice the bandwidth.
[Figure: 10110 in RZ format]
Manchester code, illustrated to the right, is the first format we have discussed that does not require a known clock speed. In Manchester code, a 1 is indicated by a high-then-low voltage, while a 0 is low-then-high. Each bit contains a change in signal height, so the bit width is well-defined. Like RZ, however, the bit rate must be lower or the bandwidth higher than for a comparable NRZ signal.
[Figure: 10110 in Manchester code]
The final format we will present is bipolar coding. This format uses a constant null signal of medium height, with 0 represented by a smaller signal and 1 by a larger signal. One advantage over RZ is that data is distinct from silence. Bipolar coding, unlike RZ and Manchester, represents 1 and 0 by distinct signal levels. This makes the data easier to identify, but at the price of constantly sending the null-level signal; transmitters using the other formats can turn off between bursts of information. While RZ, Manchester, and bipolar each potentially require two pulse-height changes per bit, only bipolar has two pulse-height changes for every bit, so any delays or distortions will affect the bipolar format the most.
[Figure: 10110 in bipolar format]
The four formats discussed above are commonly used, but they are by no means the only formats in use. Each piece of each system could use its own unique data format.
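The four formats can be written out as lists of signal levels, two half-bit intervals per bit. This sketch follows my reading of the descriptions above (for bipolar, 1 is the null level, 2 the larger pulse, and 0 the smaller):

```python
def encode(bits: str, fmt: str) -> list[int]:
    """Signal levels for a bit string, two half-bit slots per bit."""
    out = []
    for b in bits:
        if fmt == "nrz":          # hold the level for the whole bit
            out += [int(b), int(b)]
        elif fmt == "rz":         # return to zero at the end of each bit
            out += [int(b), 0]
        elif fmt == "manchester": # 1 = high then low, 0 = low then high
            out += ([1, 0] if b == "1" else [0, 1])
        elif fmt == "bipolar":    # pulse away from the null level, then back
            out += ([2, 1] if b == "1" else [0, 1])
    return out

for fmt in ("nrz", "rz", "manchester", "bipolar"):
    print(f"{fmt:>10}: {encode('10110', fmt)}")
```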
For the bits to be readable at the destination, the rise time of each pulse must be short compared to the time allotted to one bit:

Rise Time < 0.70 T, or equivalently, Rise Time < 0.70/B,

where T is the time per bit and B is the bit rate. For example, consider sending a signal at 12,000 bit/s. The rise time of each pulse can be no more than (0.70)/(12,000 bit/s) = 58 µs if the bits are to be readable. This restriction becomes more significant as the desired bit rate increases: for fiber optics carrying 5 Gbit/s, the maximum rise time would be (0.70)/(5 Gbit/s) = 1.4 x 10^-10 s.
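The rise-time limit is a one-line calculation:

```python
# Maximum rise time for readable bits: rise time < 0.70 / bit rate.
for bit_rate in (12_000, 5e9):  # the two examples above
    print(f"{bit_rate:>14,.0f} bit/s -> rise time < {0.70 / bit_rate:.1e} s")
#         12,000 bit/s -> rise time < 5.8e-05 s (58 microseconds)
#  5,000,000,000 bit/s -> rise time < 1.4e-10 s
```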
We have used the word bandwidth to discuss the range of frequencies necessary to accurately represent a signal based on its Fourier spectrum, but bandwidth can also be used to describe the properties of a fiber. This "optical" bandwidth is the range of frequencies in which no signal attenuation greater than 3 dB will occur. For the NRZ format, with one pulse height per bit, the necessary optical bandwidth is approximately equal to the bit rate; for the other formats, this necessary bandwidth doubles. NRZ, however, is generally implied when the optical bandwidth of a fiber is stated. If you want to be a bit more technical, the optical bandwidth is closer to 0.9 times the bit rate. Since the length of a fiber affects the attenuation of a signal, fiber bandwidths are often discussed in Hz·km (here one Hz corresponds to one bit per second).
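To see how a Hz·km rating works in practice, here is a hypothetical example (the 500 MHz·km rating is assumed for illustration, along with the NRZ approximation above that bit rate is roughly equal to optical bandwidth):

```python
rating = 500e6  # an assumed fiber rating in Hz*km

for length_km in (1, 2, 10):
    bandwidth = rating / length_km  # usable bandwidth shrinks with length
    print(f"{length_km:>3} km -> {bandwidth / 1e6:>5.0f} MHz -> ~{bandwidth / 1e6:>5.0f} Mbit/s NRZ")
```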
Telephone signals have traditionally used the form of multiplexing called frequency-division multiplexing. Cable TV is another application that uses frequency division. In this type of multiplexing, incoming signals are shifted in frequency to one of several "channels", just as different radio stations use different frequency ranges for their broadcasts. For example, a telephone cable might carry 12 conversations at one time, each occupying a bandwidth of 4 kHz above the previous conversation's frequencies. The bandwidth needed to carry 12 conversations is 12 times as large as the bandwidth for one conversation, but you only need one wire rather than 12 to carry the conversations. Groups of channels can be similarly combined into larger and larger groups, until the bandwidth capacity of the wire or fiber is reached. Some groups can carry thousands of channels.
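A sketch of the channel arithmetic for frequency division (the 4 kHz slot width comes from the example above; the base frequency is arbitrary):

```python
slot_width = 4e3  # 4 kHz per conversation

# Assign each of 12 conversations its own frequency slot.
for channel in range(12):
    low = channel * slot_width
    print(f"conversation {channel + 1:>2}: {low / 1e3:>4.0f}-{(low + slot_width) / 1e3:>4.0f} kHz")
```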
Digital signals and signals over optical fibers do better with time-division multiplexing. In this form of multiplexing, several signals occupy one channel, but at different times. Digital signals can be sent faster than people talk (or type), so electronic encoders look at multiple signals in quick succession. These electronics are fairly inexpensive and allow many users to access a single channel. Time-division multiplexing is also well suited to satellite communication.
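Time-division multiplexing amounts to interleaving samples. A minimal sketch (the sample values are made up):

```python
def tdm_multiplex(signals):
    """Visit each signal in quick succession, one sample per time slot."""
    return [sample for slot in zip(*signals) for sample in slot]

a = [1, 2, 3]
b = [10, 20, 30]
c = [100, 200, 300]
print(tdm_multiplex([a, b, c]))  # [1, 10, 100, 2, 20, 200, 3, 30, 300]
```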
While much of the discussion in this reading assignment used radio and telephones as examples (you can blame the book Signals, which I used heavily as a resource), the ideas are applicable to modems, cable modems, and ethernet connections as well. Many signals must be sent in one medium, and these signals have associated bandwidths found from their Fourier transforms. If the signals are not to interfere, they must either use different channels (occupy different frequency bands) or be sent at different times.
Computing: The Technology of Information, by Tony Dodd (Oxford University Press: New York), 1995.
A History of Computing Technology, 2nd ed., by Michael R. Williams (IEEE Computer Society Press: Los Alamitos, CA), 1997.
Signals, by John Pierce. Much of the information in this reading has been taken from this book, which is unfortunately out of print.
Understanding Lightwave Transmission, by Grant. Much of the information in this reading has also been taken from this book, which is likewise out of print.
Copyright © 2000-2002 Doris Jeanne Wagner. All Rights Reserved.