So Ramesh, I will be quoting from the Siegfried Linkwitz page you linked to. Now we all know what our context is, never mind that I could indeed state that I knew these things already. But it is exactly this fact which formed my own context in the first post, so you can read back into that after reading this post here:

"Other speakers may also be used to hear how the recording translates. This process must be called 'rendering the art', because it is unlikely that a person would have heard the same sound live. The performance, the art, is rendered according to the desires of the recording engineer, conductor and producer. The outcome of this process is misleadingly called 'The Recording'. In addition to the musicians and performers of the art it carries the signature of the people who made the specific 'recording'. When the recording is played back in the home a listener is unlikely to hear a reproduction of what the mixing engineer heard in his studio with his monitors. He hears a rendering of the recording, which is defined largely by the on-axis response of his speakers and their illumination in 3D of his listening room and the room's acoustics. The recording studio acoustics are quite dead so that the direct sound from the speakers dominates what the recording engineer hears and uses for his mixing decisions. A home listening environment is much more live than a recording studio [...]"
(emphasis in the above is mine)
This last sentence is correct in itself, but it takes the occurring reflections for granted in the first place. It is exactly these reflections which I claim (unexpectedly for Linkwitz) do not occur sufficiently. And suddenly the room is as dead as the studio's room.
I also (sort of) claim that the quality of the waves is so good that no warbling and the like occurs among them, which adds to the effect of the too-dead room.
When the engineer mixed in his left/right taste of re-creating the richer sound, as opposed to the dead sound from the studio, he listened to a normal speaker in a normal "room" environment (or maybe just as well a dead room) with normally good playback means, with the emphasis on "good" because that is what it is about. Read: this is not so good at all, but it is expected to be good enough, or otherwise irrelevant. The phenomenon is not recognized.

"Stereo over two loudspeakers works by creating a phantom acoustic scene between the loudspeakers. In its simplest form a single monaural signal is fed to both left and right loudspeakers. If the two loudspeaker levels are identical and there is no phase difference between them, then a listener on the center line between the loudspeakers will hear the monaural sound as coming from the center though there is no sound coming from that direction. This is basically a very unnatural event and so a slight movement to the left or right will shift the phantom source to the nearer loudspeaker."

Courtesy of Linkwitz
Combine the last text with this picture (please notice that in the original text this picture is not related to the above text; this is my own "creation"). Now:
What I referred to in my first post is the, in my view, fact that such a microphone setup is not able to capture any stereo image in the deadened room. Mind you: while such a microphone setup is used at a distance, the more normal in-studio setup will be a single microphone, were it about a solo singer (as part of a band, but with the explicit notice that studio recordings usually, or at least most often, are not made with the whole band playing together). Thus a mono signal, or otherwise a stereo signal from the two-mike setup, but with the mikes too close to each other to form a real stereo image of proper size (and if at all, the size would be that of the distance between the mikes, superimposed by reflections, if there are any to begin with).
Anyway, we are not allowed to think that a studio utilizes such a mike setup and creates the stereo image at the same time. The studio is NOT a concert hall with explicit proper acoustics. A studio usually is as dead as can be. It will create a dead sound, as mono is. Of course, unless explicit mike setups are tweaked in (like a couple more at a few meters distance from e.g. the singer, but I don't think it happens like that in a studio).
In other words, there is no stereo image recorded in the studio, and chances are very fair that such a voice recording ends up on one channel to begin with, because just one microphone was used. Right?
If right, then the stereo is "created" later, by the mixing engineer.

"Today the majority of recordings are produced by down-mixing a multiplicity of monaural tracks of different instrument pick-ups into a 2-channel or n-channel format."
Aha (and "today" is 2012 in there).

"The mix-down often involves equalization, the addition of reverberation and compression in order to fit the taste of the recording engineer and market expectations as seen by the producer. Panning distributes phantom sources along a line between the loudspeakers. The distance of the phantom scene from a listener is essentially given by the distance between listener and loudspeaker."
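That "panning distributes phantom sources along a line" can be put in numbers. Here is my own minimal sketch (not from Linkwitz's text) of a constant-power level-pan law, the kind of thing a mixing desk does to place a mono track between the speakers; the function name and the mapping are illustrative assumptions:

```python
import math

def constant_power_pan(pan):
    """Constant-power pan law sketch.

    pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right.
    Returns (left_gain, right_gain). The gains always satisfy
    gL^2 + gR^2 = 1, so loudness stays roughly constant while the
    phantom source slides along the line between the loudspeakers.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return math.cos(angle), math.sin(angle)

# Center pan: both speakers get the same level, phantom source in the middle.
gL, gR = constant_power_pan(0.0)

# Hard left: only the left speaker plays.
hL, hR = constant_power_pan(-1.0)
```

Nothing more than level differences is involved here, which is exactly why the result is a phantom source and not a recorded acoustic image.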
Read the above carefully so you keep track of what this is all about. Then digest this:

"If the loudspeakers are highly directional and the room is acoustically dead, then the image is sometimes closer than the loudspeakers, approaching headphone listening."
I am not talking about the headphone listening (although I recognize it is true) but about the process of how sound comes to you from the loudspeakers. I am also not talking about the acoustically (too) dead room, but I sure am talking about the "highly directional".

"Depth and height behind the loudspeaker line depends upon cues that the brain receives from reverberation and volume levels of sources in the recording. It is difficult to produce a spatially coherent mix from multiple tracks that is believable. The result is typically a collage of phantom sound clusters next to and on top of each other, or a wash of diffuse sound."
What all of this says is that what we listen to is 100% dependent on what the engineer made of it (plus what the producer demanded of it in the first place), BUT that this again depends 100% on what the engineer listened to. And by that I mean room + quality of system.
Not taking into account anything which was quoted, so possibly redundant: most of us will know that all we can perceive of left/right differences is based upon phase difference.
If a signal (or music) springs from our two speakers, and the signal is exactly the same for left and right, you will receive both signals in the exact same phase, assuming the speakers are at exactly the same distance.
If the signal is completely distortion-free *or* the distortion is 100% equal in both channels, you will probably receive an image at the width of your ear distance. And if not that, it will be infinitely small. Side note: the latter is impossible in our rooms, because reflections occur no matter what, and these reflections can be called distortions and will NOT be equal for left and right. So here our room starts to play its role.
If we turn our head a little, something has to change, because the one signal no longer arrives at both ears at the same time. For how this changes, see Linkwitz' text. But for us it is crucial to understand that the phase of the one signal is no longer equal for both ears, which is why we perceive a change of sound when our head turns.
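To put a number on that head turn, here is a sketch using the classic spherical-head approximation (Woodworth's formula); this is my own illustration, not from Linkwitz's text, and the head radius is a commonly assumed textbook value:

```python
import math

HEAD_RADIUS = 0.0875    # m, typical spherical-head approximation
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def interaural_time_difference(azimuth_rad):
    """Woodworth's spherical-head ITD estimate for a distant source.

    azimuth_rad: source angle away from straight ahead (0 = dead center).
    Returns the arrival-time difference between the two ears in seconds.
    """
    return HEAD_RADIUS * (azimuth_rad + math.sin(azimuth_rad)) / SPEED_OF_SOUND

def phase_difference_deg(azimuth_rad, freq_hz):
    """Interaural phase difference (degrees) this ITD causes at freq_hz."""
    return interaural_time_difference(azimuth_rad) * freq_hz * 360.0

# A mere 10-degree head turn against a centered phantom source:
itd = interaural_time_difference(math.radians(10))          # tens of microseconds
phase_1khz = phase_difference_deg(math.radians(10), 1000.0) # tens of degrees at 1 kHz
```

So even a small rotation turns one and the same signal into two ear signals with a clearly different phase, which is the change we perceive.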
If two mikes were placed one behind the other in front of the singer, something similar happens, and the one single signal (coming from the singer's mouth) receives a small hall (a one-time echo). There are two signals now, and the distance between the mikes determines the time difference, dictated by the speed of sound. In practice two sound sources now exist (mind you, this is mimicked by the two microphones) and time delay is what we would perceive. Meanwhile though, because the sound source really is only one, we receive the same signal at the same time with a phase difference;
The depth implied by the two mikes at, say, 1 meter will be audible in reality when played back through the speakers. This is not so much because of the time delay between the two recorded "channels" but because of the phase difference our brain works with. I'd say that in real practice the sound starts to smear in depth, but all that happens is that reality (two mikes behind each other) is superimposed on us.
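That 1-meter spacing translates into a fixed offset between the two captured signals. A quick back-of-the-envelope check (my own sketch; the CD sample rate is just an assumed illustration):

```python
SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 44100     # Hz, CD-rate playback assumed for illustration

def mic_spacing_delay(distance_m):
    """Time offset (seconds) between two mikes placed distance_m apart,
    one behind the other, for a source on their common axis."""
    return distance_m / SPEED_OF_SOUND

delay_s = mic_spacing_delay(1.0)       # roughly 2.9 ms for 1 m of spacing
delay_samples = delay_s * SAMPLE_RATE  # roughly 129 samples at 44.1 kHz
```

A few milliseconds is tiny as an echo, but as a per-frequency phase shift between the channels it is exactly the kind of cue the brain works with.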
When there is just one microphone and there would be one loudspeaker, and this speaker radiates not only directly towards us but also via a wall which implies a one meter longer travelling distance, we have the same effect as the story with the two microphones behind each other. Mind you, I am talking about mono sound now, from one speaker only.
But things get quite crucial now for proper understanding ...
*Because* we have two ears, we will perceive the reflected sound just the same, and we will be able to discern that the sound comes from the left-hand side, assuming we only hooked up the left speaker, which radiates to the left wall in front of us. So just because the direct sound implies one phase-angle difference between our left and right ears, while the reflected sound coming from the wall implies a larger phase-angle difference between our two ears, the sound is not only discerned as coming from the left-hand side, it is also recognized as a wider beam. It becomes "spacious".
Btw, close one ear and you will be clueless about where the sound comes from *unless* you start turning your head ...
Because the one speaker will radiate to the right-hand wall just the same (and even more so, because with toeing-in assumed as normal, it radiates at a firmer angle to the right-hand wall than to the left), we will surely receive those reflections just the same. One thing though: the distance is larger. The distance to the right wall is longer than the distance to the left wall, and thus reflections from the right wall arrive later again, plus they have died out more, because sound disperses to all angles in mid air (the longer the travel, the more it spreads, the less energy arrives at one point: your ears).
This is all still with one speaker. But sound is quite spacious by now because of the reflections.
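The two effects claimed for the farther wall, later arrival and weaker level, follow from simple geometry. A sketch of my own, assuming point-source 1/r spreading and a perfectly reflecting wall (real walls also absorb, which this ignores; the distances are made-up examples):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s

def reflection_vs_direct(direct_m, reflected_m):
    """Extra delay (ms) and level drop (dB) of a reflected path relative
    to the direct path, assuming 1/r point-source spreading and a
    perfectly reflecting wall (absorption ignored)."""
    extra_delay_ms = (reflected_m - direct_m) / SPEED_OF_SOUND * 1000.0
    level_drop_db = 20.0 * math.log10(direct_m / reflected_m)
    return extra_delay_ms, level_drop_db

# Hypothetical left-wall bounce vs. the farther right-wall bounce,
# both against a 2.5 m direct path:
near_wall = reflection_vs_direct(2.5, 3.5)  # arrives later, somewhat weaker
far_wall = reflection_vs_direct(2.5, 5.5)   # arrives later still, weaker still
```

The far-wall reflection arrives several milliseconds later and a few dB down compared to the near one, which is why it contributes diffusion more than spaciousness.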
Now we make the left-hand speaker very directional. It no longer reflects to the left wall. To the right wall it still does, just because it is beaming at it.
What should happen now is that the direct sound is very, very tight, because hardly any energy was lost in the first place and no spaciousness is received from the left wall. It is one beam only. The reflections from the right wall are now very much underwhelming. They already were, but the direct beam is stronger now because of no losses (all is relative, but you will understand). The reflections off the right wall could be perceived as just a diffusing factor. But spaciousness? I don't think so.
The sound is now firm, but dead. It doesn't matter what the engineer faked in, because all he can have faked in is time delay, and so be it. Remember, we switched off the right-hand speaker, so all other tweaks (like time delay between left and right) cannot be perceived.
If we now switch the right-hand speaker back on, and we assume that nothing has really been done to the singer's voice, just because the engineer perceived a sufficiently wide image for the singer's mouth to begin with, BUT we thus took out the reflections from the side walls (left wall for the left speaker, right wall for the right speaker), then we have the mono sound back, with a maximum width of our ear distance (or less). We have two undistorted sound sources, and because they are undistorted we receive complete mono, with no sizing of the instruments / mouths at all.
Now we are going to add distortion to the sound. Just normal distortion, of the type we actually always like to get rid of. But it is just always there, to a greater or lesser degree.
The characteristic of distortion is that it is quite random. Think noise from various sources.
When it is random, it will also be different for the left and right channels. This means that our virtual mono sound, as put (/left) in there by the engineer, becomes a more spacious stereo sound. We will be creating an "image" as such, thinking that such an image carries a certain width.
The mono sound is not mono any more.
On a side note, envision that noise is generally high-frequency; supposing it operates at 10 kHz, left/right changes of 10,000 times per second would be in order, but fewer, because the phase-angle differences do not max out at this same frequency (because random). The smaller the phase-angle difference, the smaller the image will be. And because it is not always the same, the image width will be a diffusing factor in itself; call this "focus" (or not).
Let's keep in mind that when noise is superimposed on our precious signal, each point of the wave for left vs. right will start to show a different phase angle. This is how we will interpret it as "from left" and "from right". However, because it is random (noise), it will just create spaciousness, hence the image gets wider. It is a trick.
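The trick can be demonstrated in a few lines: take a perfectly dual-mono pair of channels, add independent random noise to each, and the interchannel correlation drops below 1, which is precisely the decorrelation the ear reads as width. A pure-Python sketch of my own (the sine tone and noise level are arbitrary illustrations):

```python
import math
import random

def correlation(x, y):
    """Pearson correlation between two equal-length channels."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
n = 4096
mono = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(n)]

# Perfect dual mono: both channels identical, correlation exactly 1.
left_clean, right_clean = mono[:], mono[:]

# Add independent low-level noise per channel (the "distortion"):
left = [s + random.gauss(0, 0.05) for s in mono]
right = [s + random.gauss(0, 0.05) for s in mono]
```

The clean pair correlates at 1.0 (pure mono); the noisy pair correlates just under 1.0, so the two channels are no longer identical and the phantom image acquires an apparent width it never had in the recording.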
A trick which was applied (to us) as long as we can remember, because we always were subject to distortions.
I thus say that something has changed in this distortion realm. Sound has become way more mono with the same toeing. Suddenly I must toe out, or otherwise the image is too small and not right to begin with.
I must add more side-wall reflections than before, or otherwise the sound beams too much (for me, with my horn speakers).
It should be for the better that more beaming is in order, because that is easily audible in the remainder of the sound (I hear a thousand things more than before). But the fact that the effect of it would be that more reflections are now required tells me all the more that the recordings are not right, because the engineer must have been listening t(rh)o(ugh) distortion when he judged the imaging to be fine and finished it all off.