Theoretically, it would be possible to feed audio signals directly to the human brain. It’s just a matter of coding the audio into about 60,000 pulse sequences with an average rate of 600Hz, then using a bio-electrical interface to connect them directly to the auditory nerves running to the brain stem.
In practice, however, sound comes to us as air pressure waves and we use our two ears to convert them to neural information. In many cases, sound is generated by mechanical means - a stick hitting a drum, a hammer hitting a piano wire, wind moving a reed or vocal cord - generating vibrations, which convert the energy into air pressure waves.
In the late 19th century, inventions started to appear that converted mechanical vibrations to an electrical voltage, which could be processed by electrical circuits. One of the main applications was amplification, allowing sound to be reproduced at a louder volume, meaning a larger audience could be reached. We call these systems ‘public address’ - or PA - systems. Another application was to transport the audio signal over longer distances than was possible with air pressure waves: the telephone.
In these systems, microphones are the components that convert air pressure waves into mechanical vibrations of a membrane, and then convert those membrane vibrations into a voltage - using either a coil attached to the membrane (a dynamic microphone) or a membrane that forms one plate of a capacitor circuit (a condenser microphone). After electrical processing, the voltages can be converted back into air pressure waves by means of loudspeakers. In most cases this is the reverse of the dynamic microphone process, using a coil attached to a membrane. Loudspeaker membranes are much bigger than those of microphones; the bigger the membrane, the more electrical power the loudspeaker can convert back into air pressure waves.
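Both directions of this coil-and-membrane transduction follow the same electromagnetic laws: a moving coil generates a voltage (the microphone), and a current through a coil generates a force (the loudspeaker). The sketch below illustrates this with purely illustrative values - the flux density, wire length, velocity and current are assumptions, not measured figures for any real transducer:

```python
# Dynamic transducers work in both directions via the same motor/generator laws.
# All numbers below are illustrative assumptions, not real device specifications.
B = 1.0   # magnetic flux density in the gap, tesla
l = 5.0   # total length of coil wire in the magnetic gap, metres

# Microphone (generator): membrane velocity induces a voltage, V = B * l * v
v_membrane = 0.002            # membrane velocity in m/s
voltage = B * l * v_membrane  # around 10 mV - a plausible microphone-level signal

# Loudspeaker (motor): coil current produces a force on the membrane, F = B * l * i
i_coil = 2.0                  # drive current in amperes
force = B * l * i_coil        # around 10 N pushing the (much larger) membrane
```

The asymmetry in the numbers hints at why loudspeakers need power amplification: millivolts come out of a microphone, but amperes must go into a loudspeaker coil.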
In between the microphone and the loudspeaker, the ‘transducers’ in an audio system, a universe of electronic components evolved over the years, including pre-amplifiers, compressors, limiters, mixing consoles, graphic and parametric equalisers, crossovers, and - finally - power amplifiers. Around the year 2000, the professional audio market made the big transition to digital signal processing. Ten years later, the transition to networked audio infrastructure followed.
Does that mean nothing is analogue anymore?
No. There is still no way to convert air pressure waves directly to and from the codes used in digital audio systems. An electrical intermediary is always needed, representing the audio signal as a continuous voltage or current - labelled ‘analogue’. The conversion comes in the form of analogue-to-digital convertors and digital-to-analogue convertors, often abbreviated to ADC or A/D and DAC or D/A.
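The essence of that conversion is sampling a continuous voltage at regular intervals and rounding each sample to one of a fixed set of integer codes. A minimal sketch, assuming a 48kHz sample rate and 16-bit resolution (common professional values, chosen here for illustration):

```python
import math

SAMPLE_RATE = 48_000   # samples per second (an assumed, common professional rate)
BIT_DEPTH = 16         # bits per sample (an assumed, CD-style resolution)

def adc(signal, duration_s, sample_rate=SAMPLE_RATE, bits=BIT_DEPTH):
    """Sample a continuous signal (a function of time in seconds) and
    quantize each sample to a signed integer code."""
    max_code = 2 ** (bits - 1) - 1          # 32767 for 16 bits
    n_samples = int(duration_s * sample_rate)
    codes = []
    for n in range(n_samples):
        t = n / sample_rate                 # sampling: discrete time steps
        v = max(-1.0, min(1.0, signal(t)))  # clip to the convertor's input range
        codes.append(round(v * max_code))   # quantization: discrete levels
    return codes

def dac(codes, bits=BIT_DEPTH):
    """Map integer codes back to voltages in the range [-1, 1]."""
    max_code = 2 ** (bits - 1) - 1
    return [c / max_code for c in codes]

# A full-scale 1kHz sine tone, converted to codes and back:
tone = lambda t: math.sin(2 * math.pi * 1000 * t)
codes = adc(tone, duration_s=0.001)   # one millisecond = 48 samples
voltages = dac(codes)
```

Real convertors add anti-aliasing filtering and dithering on top of this, but the sample-and-quantize core is the same.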
In a ‘fully digital’ audio system, three analogue circuits are still needed. First, a pre-amplifier is required to match the incoming analogue (line or microphone) signal’s amplitude to the A/D circuit’s input sensitivity. Second, an output line amplifier is required to match the D/A convertor’s outgoing voltage to the power amplifier’s sensitivity, and to make the system independent of the cabling and the power amplifier’s input impedance. Finally, there is the power amplifier, needed to match the output line amplifier’s output to the loudspeaker’s sensitivity and power range. All the rest - in between the pre-amplifier and output line amplifier, including the A/D and D/A circuits - is ‘digital’, with a noise floor and distortion below the human perception thresholds.
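The ‘matching’ done by those three analogue stages is simply voltage gain, usually expressed in decibels. A sketch of the arithmetic, using hypothetical levels (the microphone, convertor and amplifier figures below are illustrative assumptions, not specifications of any particular product):

```python
import math

def db_gain(v_in, v_out):
    """Voltage gain in decibels needed to go from v_in to v_out."""
    return 20 * math.log10(v_out / v_in)

# Hypothetical levels for each analogue stage:
mic_level_v = 0.005   # ~5 mV from a microphone
adc_full_v  = 2.0     # A/D convertor full-scale input voltage
dac_out_v   = 2.0     # D/A convertor full-scale output voltage
amp_sens_v  = 1.4     # power amplifier input sensitivity for full output

preamp_gain = db_gain(mic_level_v, adc_full_v)   # pre-amplifier: ~52 dB up
line_gain   = db_gain(dac_out_v, amp_sens_v)     # output line stage: ~3 dB down

print(f"pre-amplifier gain: {preamp_gain:.1f} dB")
print(f"output line gain:   {line_gain:.1f} dB")
```

Note the asymmetry: the pre-amplifier must apply tens of decibels of gain to a tiny microphone signal, which is precisely why it dominates a system’s noise behaviour.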
Because virtually all audible noise and distortion in a digital system is caused by the pre-amplifier, output line amplifier and power amplifier circuits, it makes sense to have only one of each in a signal chain. Adding an A/D and D/A circuit, along with their associated pre- and output line amplifiers, adds distortion and noise. This occurs, for example, when connecting a power amplifier with built-in digital processing but with an analogue connection, or by inserting a digital effects processor with analogue inputs and outputs - both adding extra AD/DA circuits, decreasing the system’s overall dynamic range. Fortunately, more and more power amplifiers come with networked inputs connected digitally to the amplifier’s DSP. Effects often now have networked connectivity, or come as plug-in algorithms that run on a system’s already-available DSP hardware… all connected digitally.
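The dynamic range penalty of an extra AD/DA pass can be estimated by power-summing the noise floors of the cascaded stages. A sketch, assuming each conversion stage contributes an uncorrelated -110dBFS noise floor (an illustrative figure, not a measured one):

```python
import math

def combined_noise_db(noise_floors_db):
    """Power-sum uncorrelated noise floors given in dB (e.g. dBFS)."""
    total_power = sum(10 ** (n / 10) for n in noise_floors_db)
    return 10 * math.log10(total_power)

# One conversion stage vs. an analogue insert that adds a second AD/DA pass,
# each stage assumed to contribute a -110 dBFS noise floor:
one_stage  = combined_noise_db([-110.0])
two_stages = combined_noise_db([-110.0, -110.0])

print(f"{one_stage:.1f} dBFS vs {two_stages:.1f} dBFS")
```

Doubling identical uncorrelated noise sources raises the floor by about 3dB, taking roughly 3dB off the system’s overall dynamic range - which is why keeping everything between the transducers digitally connected pays off.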
It’s important to remember that the three analogue parts of a system - the pre-amplifier, output line amplifier and power amplifier - are the Achilles heel of any audio system in terms of dynamic range. At the same time, it’s just as important to know that the two ‘analogue to air pressure wave’ transducers in an audio system (the microphones and loudspeakers) are the Achilles heel concerning distortion. Matching levels and protocols in the signal chain to optimise a system’s dynamic range, and selecting and placing transducers to minimise distortion, are among the key tasks of a system designer.