XXHighEnd - The Ultra HighEnd Audio Player
Author Topic: Ambiophonics and XXHighEnd?  (Read 19222 times)
Matt E
« on: May 03, 2010, 12:28:04 pm »

I found a site (www.ambiophonics.org) that describes ideas about improving the imaging of music, so that it sounds more like it would if heard live, by correcting the localisation problems inherent in stereo sound. I don't really understand the technical side of things, but one of its main focuses appears to be the removal of crosstalk between the left and right channels. There is some software which tries to do this digitally (RACE). I found that although the imaging was good and the sound was 'realistic', it lacked quite a bit of detail relative to XXHighEnd. A more traditional approach appears to be to build a sound-absorbent barrier placed along the midline of the room, so that sound from the left speaker stays in the left half of the room and sound from the right speaker stays in the right. I was just wondering if anyone had tried this with XXHighEnd, or if there was any reason why it might not work with XXHighEnd - i.e. is there any attempt in XXHighEnd to reduce crosstalk digitally? Any information would help me before I take the plunge and build the barrier. Either way, I will post results.
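As a side note on what RACE-style processing does: roughly, each pass feeds an inverted, attenuated, delayed copy of one channel into the other speaker, so the first speaker's crosstalk is cancelled at the far ear; that cancellation signal itself crosses over and needs cancelling in turn, hence the recursion. A minimal sketch in Python (my own simplification with made-up parameter values, not the actual RACE implementation):

```python
import numpy as np

def race(left, right, attenuation=0.85, delay_samples=3, order=12):
    """Naive recursive crosstalk cancellation, RACE-style sketch.

    Each pass adds an inverted, attenuated, delayed copy of the
    previous cancellation term to the opposite channel, so the
    correction terms ping-pong between the speakers, decaying.
    """
    out_l = left.astype(float).copy()
    out_r = right.astype(float).copy()
    cancel_l, cancel_r = left.astype(float), right.astype(float)
    d = delay_samples
    for _ in range(order):
        # inverted, attenuated, delayed copies for the opposite channel
        new_r = -attenuation * np.concatenate(([0.0] * d, cancel_l[:-d]))
        new_l = -attenuation * np.concatenate(([0.0] * d, cancel_r[:-d]))
        out_l += new_l
        out_r += new_r
        cancel_l, cancel_r = new_l, new_r   # swap sides for the next pass
    return out_l, out_r

# Impulse on the left channel only: the right channel receives the
# alternating, decaying cancellation train.
left = np.zeros(16); left[0] = 1.0
right = np.zeros(16)
out_l, out_r = race(left, right, attenuation=0.5, delay_samples=2, order=3)
```

The decaying alternating train that this adds is material subtracted from (and injected into) the signal, which may be one reason the processed sound loses a little detail.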
Cheers,
Matt
easternlethal
« Reply #1 on: May 03, 2010, 02:52:56 pm »

I have tried it and found the same as you, i.e. accuracy is lost. It appears that the processing is subtracting something from the signal. Also, from what I understand, crosstalk is a product of the interaction of the speakers with you and your room, and not something which can be dealt with at the source. The problem with building a barrier (apart from the obvious inconvenience it will cause) is that it will only work if you sit with one ear on one side of the barrier and the other ear on the other side - hence only you will be able to enjoy it, and no one else. I have read that apparently the best way to deal with this is to use a properly configured surround system (see Floyd Toole's book on psychoacoustics). Romy the Cat's forum also contains interesting information on this subject (if you can take his attitude).

[PC: Win7Ultimate 64 bit - SP1 Spinning Disk (no SSD)/Intel dual core/XXver.09Z-4-00 Settings: Adaptive/1024/Q1:1024/Arc Prediction + Peak Extension/Quad/SFS=2/Mixed/Unattended/12ms/Scheme 3] [Audio Chain: Weiss Dac 2 (via firewire) - Holfi 1.5.1 & NB1 - JBL 4428]
PeterSt (Administrator)
« Reply #2 on: May 04, 2010, 03:11:50 pm »

If I am right in the first place, I would say such a barrier destroys what seems to be a virtue: the (proper ??) interaction of the sound waves in mid-air. I really think this phenomenon exists, but it is very hard to prove that it does - hence that it can work out to audible levels.


Maybe a bit off-topic, but intriguing to me:

Our ears and brain work a lot with the "first arrival" of sound. I mean, even in a room with a lot of reflections we are able to perceive where the source of the sound is (its direction). This works by means of SPL: the shorter (more direct) the distance the sound has travelled, the louder it will be, and our brain catches the loudest arrival to determine the direction (and dismisses the reflections).

Now, if I am correct that sound waves meeting each other in mid-air add up like standing waves do, their SPL will be higher at the place they meet, and you would be able to pinpoint it. Notice I am talking about sound waves coming from two speakers, where waves from the left speaker meet waves from the right speaker (somewhere in the middle).
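To put a number on that: treating the two speakers as coherent point sources, the summed pressure amplitude at a listening position depends on the two path lengths; where they agree, the waves add constructively. A small sketch (geometry and test frequency are illustrative values of mine):

```python
import numpy as np

c = 343.0                      # speed of sound in air, m/s
f = 1000.0                     # test tone, Hz
k = 2 * np.pi * f / c          # wavenumber
spk_l = np.array([-1.0, 0.0])  # speakers 2 m apart on the x-axis
spk_r = np.array([1.0, 0.0])

def level(x, y=2.5):
    """Summed steady-state amplitude of the tone at point (x, y)."""
    p = np.array([x, y])
    r_l = np.linalg.norm(p - spk_l)
    r_r = np.linalg.norm(p - spk_r)
    # two spherical waves: phase k*r, amplitude 1/r each
    return abs(np.exp(-1j * k * r_l) / r_l + np.exp(-1j * k * r_r) / r_r)
```

On the midline (x = 0) the two paths are equal and the amplitude is exactly twice that of one source; slightly off the midline the phases start to disagree and the sum drops.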

It is known (or should I say assumed) that when a sound springs earlier from the left speaker than from the right, the sound source (singer etc.) was closer to the left microphone than to the right microphone. This is how the image of the reality at the recording is created (at least in the left/right plane). But:

When, in this situation - the sound springs earlier from the left speaker than it does from the right - both sounds meet and create an audibly higher volume in mid-air ... WHERE will they meet in the left/right plane in front of you?

I think I know the answer, and it exactly fits my experiences from listening.

Anyone ?



For the Stealth III LPS PC :
W10-14393.0 - July 17, 2021 (2.11)
XXHighEnd Mach III Stealth LPS PC -> Xeon Scalable 14/28 core with Hyperthreading On (set to 14/28 cores in BIOS and set to 10/20 cores via Boot Menu) @~660MHz, 48GB, Windows 10 Pro 64 bit build 14393.0 from RAM, music on LAN / Engine#4 Adaptive Mode / Q1/-/3/4/5 = 14/-/0/0/*1*/ Q1Factor = *4* / Dev.Buffer = 4096 / ClockRes = *10ms* / Memory = Straight Contiguous / Include Garbage Collect / SFS = *10.13*  (max 10.13) / not Invert / Phase Alignment Off / Playerprio = Low / ThreadPrio = Realtime / Scheme = Core 3-5 / Not Switch Processors during Playback = Off/ Playback Drive none (see OS from RAM) / UnAttended (Just Start) / Always Copy to XX Drive (see OS from RAM) / Stop Desktop, Remaining, WASAPI and W10 services / Use Remote Desktop / Keep LAN - Not Persist / WallPaper On / OSD Off (!) / Running Time Off / Minimize OS / XTweaks : Balanced Load = *62* / Nervous Rate = *1* / Cool when Idle = n.a / Provide Stable Power = 1 / Utilize Cores always = 1 / Time Performance Index = Optimal / Time Stability = Stable / Custom Filtering *Low* (16x) / Always Clear Proxy before Playback = On -> USB3 from MoBo -> Lush^3
A: W-Y-R-G, B: *W-G* USB 1m00 -> Phisolator 24/768 Phasure NOS1a/G3 75B (BNC Out) async USB DAC, Driver v1.0.4b (16ms) -> B'ASS Current Amplifier -> Blaxius*^2.5* A:B-G, B:B-G Interlink -> Orelo MKII Active Open Baffle Horn Speakers. ET^2 Ethernet from Mach III to Music Server PC (RDC Control).
Removed Switching Supplies from everywhere (also from the PC).

For a general PC :
W10-10586.0 - May 2016 (2.05+)
*XXHighEnd PC -> I7 3930k with Hyperthreading On (12 cores)* @~500MHz, 16GB, Windows 10 Pro 64 bit build 10586.0 from RAM, music on LAN / Engine#4 Adaptive Mode / Q1/-/3/4/5 = 14/-/1/1/1 / Q1Factor = 1 / Dev.Buffer = 4096 / ClockRes = 1ms / Memory = Straight Contiguous / Include Garbage Collect / SFS = 0.10  (max 60) / not Invert / Phase Alignment Off / Playerprio = Low / ThreadPrio = Realtime / Scheme = Core 3-5 / Not Switch Processors during Playback = Off/ Playback Drive none (see OS from RAM) / UnAttended (Just Start) / Always Copy to XX Drive (see OS from RAM) / All Services Off / Keep LAN - Not Persist / WallPaper On / OSD On / Running Time Off / Minimize OS / XTweaks : Balanced Load = *43* / Nervous Rate = 1 / Cool when Idle = 1 / Provide Stable Power = 1 / Utilize Cores always = 1 / Time Performance Index = *Optimal* / Time Stability = *Stable* / Custom Filter *Low* 705600 / -> USB3 *from MoBo* -> Clairixa USB 15cm -> Intona Isolator -> Clairixa USB 1m80 -> 24/768 Phasure NOS1a 75B (BNC Out) async USB DAC, Driver v1.0.4b (4ms) -> Blaxius BNC interlink *-> B'ASS Current Amplifier /w Level4 -> Blaxius Interlink* -> Orelo MKII Active Open Baffle Horn Speakers.
Removed Switching Supplies from everywhere.

easternlethal
« Reply #3 on: May 04, 2010, 03:34:40 pm »

Peter - From the listener's perspective, sound waves are only perceived in one place: at the head. I think it is questionable whether the interaction of sound waves before they reach the listener's ears produces a good result for the listener. If 'interaction' were essential, then headphones would be inferior to speakers - when in fact the opposite is true!

PeterSt (Administrator)
« Reply #4 on: May 05, 2010, 12:22:30 pm »

Nah ... It looks like you put the proof the other way around with that last argument. I mean, how could possible mid-air interaction influence the quality of headphones?

Also, I am not sure why you bring in our brain as interpreter of a physical phenomenon, while this is about the physical phenomenon only?
... or perhaps you didn't understand my post ... or maybe you received some education about how everything is fake to begin with (two loudspeakers can't do the live job) and approach this from the psychological side?

What I tried to say is that this audio thing is more physical than we tend to believe. Sound waves do exist, as do reflections, timing differences and phase shifts. That it is our brain that interprets all this and does something useful with it, is another matter. But if there were no (e.g.) phase difference, there wouldn't be anything to interpret, right?

Another thing (and my opinion): headphones may look like nice devices to listen with, supposedly with the least distortion possible, but they are as fake as can be, because the environmental influences are lacking. This is exactly the subject, whether it is about putting a wall between the loudspeakers (headphone-alike) or the other way around: letting the waves from two speakers interact (sheerly impossible with headphones).
And of course, two completely separated headphone shells *will* give you the perception of being in the middle etc., but this time it is (indeed) your brain doing it. Not so with speakers, because there it is first about the physical waves coming from all over the place (reflections included), plus there is the possible interaction of the two.

What we are talking about (a.o.) is hearing a sound somewhere in mid-air (3D space), such that you can just walk around it while it stays in place. Notice that this could still be about reflections, but the chances of that get smaller when you move your head (with ears) 5 metres and the sound keeps coming from that same position.
Q Sound can (explicitly) do that too, although it is made for headphones. That is about phase manipulation alone, and about exactly how it works (out) in your brain.

I am sure we are talking about the same thing if you read the above, but you forgot to answer my question ...
And the answer to that is key for that other phenomenon: waves adding up in space (and not in your brain!).

Peter


easternlethal
« Reply #5 on: May 05, 2010, 03:12:04 pm »

I understand your point (I think). Sorry I forgot your question.

But before I submit an answer (from my very layman's perspective and understanding), I should clarify my belief (which I hold faintly, not dearly): any interaction of waves in the listening space is undesirable. This is not because I think stereo speakers cannot provide the same level of accuracy and imaging as a live performance (even though that is the case for many types of performances and speakers). It is just that recording engineers already embed spatial cues - perhaps in the form of recorded phase shifts - in the mixes themselves, and that the listening experience is 'purer' (and hence 'better') if one could listen to direct waves only, as in an anechoic chamber or through headphones. I understand that this is not our reality and that reflections are a fact of life. I also understand the other argument, that resonant room modes can be exploited to enhance the listening experience. If you have thoughts in this area I would be grateful if you would share them.

Regarding your question: I would say that if a direct wave comes earlier from the left speaker and interacts with a direct wave from the right speaker, then the 'collision' would occur in the right half of the plane (assuming the listener sits on axis between the speakers). If, as you say, the effect is to increase the sound pressure, then unfortunately I do not know in what form the mixed result will ultimately travel to the listener, nor how it will be perceived. Can you give me another clue?

PeterSt (Administrator)
« Reply #6 on: May 05, 2010, 04:21:10 pm »

Quote
I should clarify my belief (which I hold faintly, not dearly): any interaction of waves in the listening space is undesirable. This is not because I think stereo speakers cannot provide the same level of accuracy and imaging as a live performance (even though that is the case for many types of performances and speakers). It is just that recording engineers already embed spatial cues - perhaps in the form of recorded phase shifts - in the mixes themselves, and that the listening experience is 'purer' (and hence 'better') if one could listen to direct waves only, as in an anechoic chamber or through headphones.

Hi again,

I just quoted all of this because I think it is very true. The latter part though, about the headphones, is unrelated to the subject - that is, *if* the arguments and subjects are not twisted upside down. So:

Regarding the remainder of the quote above: *if* all would be pure enough (which especially includes the staging, or otherwise it can never work), what happens (read: is (re)created) in mid-air would be a virtue. And the point is, I see this happening (here in my room), and the more, say, distortion-free I am able to output, the more it happens. But this is the intriguing part:

Indeed, at first glance it seems that what springs earlier from the left than from the right would meet at the right side. Well, that would be quite an anomaly (left and right would mix up), plus it is contradictory to what I perceive (from sounds dedicated (by me) to mid-air collision).

But it isn't like that ...

Draw a triangle with sides of 7 cm or so. The bottom vertex is where you listen. Now take into account some radiation angle from the speakers, and start drawing a line from the left speaker so that it will virtually cross in front of you (not at you) with a line at the same angle coming from the right speaker. Give the left speaker a 2 cm head start before proceeding. Now draw a new cm on the left line. Next, a first cm on the right line. Next another cm on the left line, and so on.
Where do the two lines meet?

This is quite contrary to what you felt about it, right? (same with me here). But it *is* what I experience from the sounds perceived as mid-air collisions. Those sounds sweep from left to right and the other way around, which will be (I think) because of the undefined precise angle, AND the vagueness of the collisions (read: the not-so-pure sounds as they theoretically could be).

This is (IMHO etc.) not happening in your brain; it is just physical reality. And why not?
What is the difference with room modes etc.?


I have talked about this before, and as I perceive it, it looks like harmonics can spring from it, which are *never* wrong harmonics to my ears. I mean: related to the instrument.

I have another one, which is related, and all about harmonics (in fact, this one is about harmonics ONLY):

The better the sound reproduction, the better flageolets (harmonic tones) come through. But notice, a flageolet is not only an explicit way of striking a string; it also arises from a string half-pressed too quickly at the "wrong" place (meaning: where harmonics would appear when done intentionally). So on this matter it is very clear to me that the better the sound reproduction becomes, the better you hear where musicians go wrong. Yes, you may say this is as logical as anything. But mind you, I am speaking about additional sounds here, from a technical point of view.
If a flageolet were not subject to mid-air happenings (as I think they happen, or are helped), I see no reason why they come forward so much more when reproduction gets better. They would just be in the captured data like any other tone.

Sort of.

Peter

Matt E
« Reply #7 on: May 06, 2010, 01:50:26 pm »

Well guys, I am pleased that my question has caused a bit of a debate - shame I can't understand half the technical stuff. My background is actually in psychology, so I'm pretty surprised that more attention hasn't been paid to the psychological aspects of audio. Unfortunately, at uni I studied visual rather than auditory perception - boo hoo. Despite having no idea how the brain processes sound, I would still think that understanding the perceptual processes behind it would be useful in sound reproduction, in addition to understanding the physics, as hearing is clearly an active rather than a passive process. E.g. you will hear your own name in a noisy room when you would not hear other words (selective filtering); if you think the lyrics of a song are different from what they actually are, you will generally hear the incorrect words until someone points out the error of your ways (top-down processing); you can choose to focus on different aspects of the music (selective attention). I also imagine that we are much more sensitive to variations in tones/timbres in voices than in other instruments/frequencies (adaptive function), in the same way that we are more sensitive to subtle variations in facial appearance in people of the same race as ourselves - but that is pure conjecture.

As stereo is fundamentally an illusion, the brain's processing of auditory information has to be a factor. If the brain were perfect in deciphering localisation cues, stereo reproduction would always sound like it was coming from two sources (60 degrees apart). Thank heavens it doesn't. With ambiophonics the illusion is that the sound comes from outside the speakers instead of between them. Sorry I can't help any more than that, but if someone wants to see some neat visual illusions I will see what I can rustle up ...

easternlethal
« Reply #8 on: May 06, 2010, 04:53:59 pm »

Hi Matt - Thanks for bringing up the subject. I think we would all agree that psychoacoustics plays a big part in audio fidelity. I think, though, that there is an element of 'physicality' in the sound waves which reach our ears as well (which I believe is Peter's point). So the question is how much of the image or localisation is 1) done 'in-head', and how much is not done 'in-head' because 2) it is already there, and all the brain needs to do is perceive it for what it is, without any processing. Ambiophonics is an attempt to help with 1), but the ultimate goal for us all here is to achieve 2).

In Peter's example, flageolets are perceived as a much clearer image than before. The question is: is that because of 1), or 2)? If I understand Peter correctly, it is 2): waves interact with each other in such a way that the aural image of the flageolets is 'improved' before it is perceived by the ear. Unfortunately I personally do not see how this effect can substantially contribute to or alter the 'other' direct waves, which do not interact with each other but travel straight to the ear (and hence are given 'priority' by the brain over the secondary reflected or 'interacted' waves). To support my view, I will say that flageolets are a good example because they consist of high frequencies which necessarily travel faster, in a more directive fashion, and are hence perceived by the ear more quickly and immediately than lower frequencies, and that 'in-head localisation' can therefore occur in normal rooms when listening to speakers with high directivity / narrow dispersion and with high frequencies (such as flageolets sounded by horn speakers!).

I hope that I can be proved wrong.

PeterSt (Administrator)
« Reply #9 on: May 07, 2010, 12:03:21 am »

Guys! Take it easy now!! I seem to be behind already!

No real contributions from my side, but a few remarks on what you both said:

Quote
If the brain were perfect in deciphering localisation cues, stereo reproduction would always sound like it was coming from two sources (60 degrees apart).

This is "simply" about the brain being able to interpret time differences, which are just about phase shift (think about that, and what it causes / does).
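The relation between a time difference and phase shift here is just delta_phi = 2*pi*f*delta_t, so one fixed arrival-time difference means a different phase shift at every frequency. A tiny sketch (the example numbers are mine):

```python
import math

def itd_to_phase(delta_t, freq):
    """Phase shift (radians) that an arrival-time difference delta_t
    (in seconds) produces at a given frequency (in Hz)."""
    return 2 * math.pi * freq * delta_t

# A 0.2 ms interaural lead:
low = itd_to_phase(2e-4, 500)     # a well-defined fraction of a cycle
high = itd_to_phase(2e-4, 2500)   # already half a cycle: ambiguous
```

This is also why phase-based localisation gets ambiguous at higher frequencies: once the shift passes half a cycle, phase alone no longer tells you which ear leads.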

Quote
With ambiophonics the illusion is that the sound comes from outside the speakers instead of between them.

If, without a separation wall (I assume this is the opposite of ambio-etc.), the sound comes only from within the boundaries of the speakers, something is seriously wrong. The main amps are the first suspect.

Quote
2): waves are interacting with each other in such a way that the aural image of the flageolets is 'improved' before it is perceived by the ear. Unfortunately I personally do not see how this effect can substantially contribute to or alter the 'other' direct waves, which do not interact with each other but travel straight to the ear (and hence are given 'priority' by the brain over the secondary reflected or 'interacted' waves).

Not counting pure reflections (walls), this is not necessarily so. It may depend on toe-in or toe-out, though. Generally I use toe-in (meaning I listen from a farther distance in general), which allows the waves to cross in front of you, THUS interacting before the direct waves reach your ears. In my case it doesn't matter much (toe-in/out), because I'm using horns.

Quote
I will say that flageolets are a good example because they consist of high frequencies which necessarily travel faster, in a more directive fashion

More directive, yes (thus more SPL), but not faster; as far as I am concerned, all frequencies travel equally fast.

Ok, one small contribution (far too small to understand):

Positioning systems (which I have created) can localise (think GPS). This is reversible. It means that with two antennas (your stereo speakers), the original position of the sound source can be represented in 3D space, but with non-unique positions (this is about hyperbolas). This is nothing like subjective brain interpretation, but just physics (again, all is about phase shifts). So two microphones (your ears) can localise where you are, but not uniquely. However, uniquely enough to deal with properly. Computer simulations (my own) can show this, but just think of your own two ears (only), which are able to localise a bird in 3D space.
If you saw how such a program works, you would also see that this has nothing to do with brain interpretation. It only needs the ability to perceive the smallest phase differences between the two ears.

To make you dizzy: the above ("almost unique") holds for 2 frequencies at the same time (read: that bird should have at least one overtone to make something of it). With 4 frequencies it is totally unique (and with 3 it depends on the frequencies).
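The "non-unique positions (hyperbolas)" point can be sketched numerically: with two receivers, every source with the same range difference produces the same time difference of arrival (TDOA), so a single TDOA only constrains the source to one branch of a hyperbola. A small check (ear spacing and source positions are made-up numbers):

```python
import numpy as np

c = 343.0                       # speed of sound, m/s
ear_l = np.array([-0.1, 0.0])   # two receivers ("ears") 0.2 m apart
ear_r = np.array([0.1, 0.0])

def tdoa(src):
    """Arrival-time difference (right minus left) for a source at src."""
    src = np.asarray(src, dtype=float)
    return (np.linalg.norm(src - ear_r) - np.linalg.norm(src - ear_l)) / c

src_a = np.array([1.0, 2.0])    # one possible source position
target = tdoa(src_a)

# Scan a vertical line x = 2.0 for another point with the same TDOA:
# it lies on the same hyperbola branch, so the two positions are
# indistinguishable from this single time difference alone.
ys = np.linspace(0.5, 5.0, 20001)
diffs = np.array([tdoa([2.0, y]) for y in ys])
src_b = np.array([2.0, ys[np.argmin(np.abs(diffs - target))]])
```

A second cue (another frequency, or moving your head) collapses the ambiguity, which matches the remark above that a couple of frequencies together make the position (almost) unique.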

Like I said, only a small contribution.
Peter

For the Stealth III LPS PC :
W10-14393.0 - July 17, 2021 (2.11)
XXHighEnd Mach III Stealth LPS PC -> Xeon Scalable 14/28 core with Hyperthreading On (set to 14/28 cores in BIOS and set to 10/20 cores via Boot Menu) @~660MHz, 48GB, Windows 10 Pro 64 bit build 14393.0 from RAM, music on LAN / Engine#4 Adaptive Mode / Q1/-/3/4/5 = 14/-/0/0/*1*/ Q1Factor = *4* / Dev.Buffer = 4096 / ClockRes = *10ms* / Memory = Straight Contiguous / Include Garbage Collect / SFS = *10.13*  (max 10.13) / not Invert / Phase Alignment Off / Playerprio = Low / ThreadPrio = Realtime / Scheme = Core 3-5 / Not Switch Processors during Playback = Off/ Playback Drive none (see OS from RAM) / UnAttended (Just Start) / Always Copy to XX Drive (see OS from RAM) / Stop Desktop, Remaining, WASAPI and W10 services / Use Remote Desktop / Keep LAN - Not Persist / WallPaper On / OSD Off (!) / Running Time Off / Minimize OS / XTweaks : Balanced Load = *62* / Nervous Rate = *1* / Cool when Idle = n.a / Provide Stable Power = 1 / Utilize Cores always = 1 / Time Performance Index = Optimal / Time Stability = Stable / Custom Filtering *Low* (16x) / Always Clear Proxy before Playback = On -> USB3 from MoBo -> Lush^3
A: W-Y-R-G, B: *W-G* USB 1m00 -> Phisolator 24/768 Phasure NOS1a/G3 75B (BNC Out) async USB DAC, Driver v1.0.4b (16ms) -> B'ASS Current Amplifier -> Blaxius*^2.5* A:B-G, B:B-G Interlink -> Orelo MKII Active Open Baffle Horn Speakers. ET^2 Ethernet from Mach III to Music Server PC (RDC Control).
Removed Switching Supplies from everywhere (also from the PC).

For a general PC :
W10-10586.0 - May 2016 (2.05+)
*XXHighEnd PC -> I7 3930k with Hyperthreading On (12 cores)* @~500MHz, 16GB, Windows 10 Pro 64 bit build 10586.0 from RAM, music on LAN / Engine#4 Adaptive Mode / Q1/-/3/4/5 = 14/-/1/1/1 / Q1Factor = 1 / Dev.Buffer = 4096 / ClockRes = 1ms / Memory = Straight Contiguous / Include Garbage Collect / SFS = 0.10  (max 60) / not Invert / Phase Alignment Off / Playerprio = Low / ThreadPrio = Realtime / Scheme = Core 3-5 / Not Switch Processors during Playback = Off/ Playback Drive none (see OS from RAM) / UnAttended (Just Start) / Always Copy to XX Drive (see OS from RAM) / All Services Off / Keep LAN - Not Persist / WallPaper On / OSD On / Running Time Off / Minimize OS / XTweaks : Balanced Load = *43* / Nervous Rate = 1 / Cool when Idle = 1 / Provide Stable Power = 1 / Utilize Cores always = 1 / Time Performance Index = *Optimal* / Time Stability = *Stable* / Custom Filter *Low* 705600 / -> USB3 *from MoBo* -> Clairixa USB 15cm -> Intona Isolator -> Clairixa USB 1m80 -> 24/768 Phasure NOS1a 75B (BNC Out) async USB DAC, Driver v1.0.4b (4ms) -> Blaxius BNC interlink *-> B'ASS Current Amplifier /w Level4 -> Blaxius Interlink* -> Orelo MKII Active Open Baffle Horn Speakers.
Removed Switching Supplies from everywhere.

Matt E
« Reply #10 on: May 07, 2010, 05:20:52 am »

I understand what you are saying about phase shift being the key determinant of positioning sound, hence physics are crucial. I am not arguing that phase shifts are not how we determine where a sound comes from; however, the phase shift from a real instrument (point source) is created differently from a phase shift from stereo speakers (two sources). I think that with an ambiophonics separation panel down the midline you should be able to create phase shifts without interaction between sound waves from the left and right channels.

As always, I think the best solution for me is to try it and see what my ears think. I found that the digital processing to create ambiophonics created a nice 'atmosphere', but at too much expense of detail, so I will try a panel and see (I mean hear).

Out of interest, here's another idea to ponder. I am partially deaf in one ear to a set range of frequencies (approx. 1kHz - 4kHz). My other ear has normal hearing. This prevents me from localising sounds in real life and from localising sounds using the ambiophonics system (most things are shifted towards my good ear side), but not, obviously, in stereo production. I have tested this by comparing my wife's ability to localise stereo images to mine. Also, I would think that images for certain objects would 'break down' if their frequency spanned beyond the auditory range of my bad ear, but fortunately that doesn't happen... I should probably wear a hearing aid one day to see if that changes my experiences.
PeterSt
« Reply #11 on: May 07, 2010, 09:39:23 am »

Quote
however, the phase shift from a real instrument (point source) is created differently from a phase shift from stereo speakers (two sources).

Hi Matt. I am not sure, but maybe I was too confusing earlier. But I don't recognize any parameters to work with in the quote above;

One (real) instrument doesn't have any phase shift. Ok, we could dedicate some to it, by means of e.g. the upper plate and lower plate of a violin, and try to perceive the size of it by perceiving vibrations from both plates which WILL have a phase shift opposed to each other ... but this goes too far (but think about how you perceive the size of instruments and voices !!).
So, no phase shift from one instrument for this story.

A phase shift from two stereo speakers, again may exist within itself (meaning wrong alignment), but does nothing to the story as such.

Instead, make the combination, and then it all makes sense :

You have one instrument and it is closer to the left mike than to the right mike. This will mean that the sound reaches the left mike earlier than the right mike. Now, although we tend to think in terms of time or timing, this is totally unrelated, because we can't use such a thing. But we can use phase differences. Thus, at the time the one (radial) wave from the instrument reaches the right mike, the actual same wave is at another part of the slope at the left mike. Very technically this is about the angle of the slope compared to zero.

At the playback side it happens exactly the same. So, you could see the left and right mike real time connected to the left and right speaker respectively, and as soon as the wave reaches the left mike, it springs from the left speaker. Etc.
So, the same phase difference (which I earlier abbreviated to phase shift) springs from both speakers, as they entered the mikes.
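Peter's picture of the same wave being at different points on its slope at the two mikes can be sketched numerically. This is purely an illustration with made-up values (nothing XXHighEnd itself computes): for a source whose path to the right mike is longer than to the left mike, the extra path gives a time delay, which at a given frequency corresponds to a phase angle.

```python
import math

C = 343.0  # speed of sound in air, m/s (at roughly room temperature)

def mic_delay_and_phase(extra_path_m, freq_hz):
    """Time delay and phase lag at the farther mike for a wave that
    travels extra_path_m further than it does to the nearer mike."""
    delay_s = extra_path_m / C
    phase_deg = (360.0 * freq_hz * delay_s) % 360.0
    return delay_s, phase_deg

# Source 0.5 m closer to the left mike, listening to a 1 kHz tone:
delay, phase = mic_delay_and_phase(0.5, 1000.0)
print(f"delay = {delay * 1000:.3f} ms, phase lag = {phase:.1f} degrees")
# -> delay = 1.458 ms, phase lag = 164.8 degrees
```

The same numbers apply unchanged at playback, which is exactly Peter's point about the speakers re-emitting what entered the mikes.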

If you are in the middle (call that the sweet spot) you won't keep on hearing the instrument more to the left because of some initial timing difference your brain keeps on using all the time (only a computer could do that haha), but because the alignment (of phase !!) of both the left and right waves emerges in the air as well as in your brain when the virtual point source is moved to the left.
Alignment = zero phase difference. It is perfectly all right to call this a standing wave, because it *is* just that, although this (or I) does not prove this happens in mid air as well. Otoh, try to prove it does NOT, but ... don't forget to prove whether it will be audible, which is all about SPL and other sounds being more prominent etc. I don't think this (the proving) can be done, while test tones may show us something, but will be far from real life.

The "stupid" thing is that our brain will do the adding up by means of simple math, but which isn't about adding up waves really. It is about the localization only, and the math does that (luckily all under the hood). The phase shift between the left and right received waves causes a calculated angle opposed to zero (which would be the middle).
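The "calculated angle opposed to zero" can be sketched with the textbook far-field approximation sin(theta) = c * dt / d, where dt is the arrival-time difference between the ears and d the ear spacing. A toy illustration only; the 0.18 m ear spacing and the sample delay are assumed values, not anything measured here:

```python
import math

C = 343.0           # speed of sound, m/s
EAR_SPACING = 0.18  # assumed distance between the ears, m

def azimuth_from_itd(itd_s):
    """Far-field estimate of the source angle, in degrees off straight
    ahead, from an interaural time difference (simple sine model)."""
    s = max(-1.0, min(1.0, C * itd_s / EAR_SPACING))
    return math.degrees(math.asin(s))

print(azimuth_from_itd(0.0))      # zero delay -> dead centre (0.0)
print(azimuth_from_itd(0.00026))  # ~0.26 ms -> roughly 30 degrees off-centre
```

Zero phase difference between the ears computes to the middle, exactly as in the paragraph above; any shift computes to an angle.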

It would be true, I think, that a wall in between the two speakers would create a more pure image of it all, because the waves from both speakers will not confuse each other, and your ears + brain will do the (math) job anyway. But :

While this is actually the example of one instrument, hence one point source only, what about two (let alone the complete orchestra) ?

The two instruments in front of our mikes (make up the position yourself) now undoubtedly will radiate waves which meet in mid air, causing the pureness of the individual waves from each instrument to interact somewhere in the middle. Whether this is really audible at a live audition is another matter, as is whether we can ever "capture" that via the loudspeakers after the recording. But I am 100% confident it will be in the data (because that is as sensitive as it can be), with again the question whether we can hear it. But - and on my part it is all about this - whether it is in the data or not, when played back it will just happen AGAIN. And again we may wonder whether we can capture it with our ears + brain.
There is one anomaly I can foresee : when it is in the data *and* it happens again at playback, it happens double. And it might be the latter which I "see" happening in my room. Thus, it gets over-amplified.

And NOW think about that wall in between ... (which would work out for the better).

Lastly for now, assuming the mikes (both for left and right) are in the middle of the stage, the instruments THUS being outside of them, you just *should* perceive the sound from outside of the speakers. Remember, your speakers represent the mikes (don't forget about the phase math) and the instruments were outside of them. One small (actually huge) problem ... your speakers won't represent the position of the mikes. Certainly not when they were bound together (say, 10 cm apart).
And, although I never tried it, I am fairly confident that if you played back a recording of which you know the mikes were 10cm apart, with your speakers right next to each other, the sound stage would be just as wide. Actually, just think headphones and you don't need to try.

Illusion plays no role here. It is just math and your brain can do it (as can I, which proves the math exists).
Peter
PeterSt
« Reply #12 on: May 07, 2010, 11:01:32 am »

I only now read this : http://www.ambiophonics.org/Tutorials/UnderstandingAmbiophonics_Part1.html ...

I must say ... this looks like a nice piece of cr*p, based upon science which is wrong for the matter of performing xxx listening tests, and from there deriving some things which had to be given a name (like a Pinnae Cue).
I don't say the findings from this science are wrong, but I do say you can't work with them as this website tries to. It's all derivations from observations with no inherent math, while the math is 100% needed. The only "math" I found on that page is the cancellation of sounds by means of a 180 degree phase change (wow).
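That 180-degree cancellation is the core of RACE (Recursive Ambiophonic Crosstalk Elimination), the processing Matt tried: each channel keeps injecting an inverted, delayed, attenuated copy of its own output into the opposite channel. Below is a toy sample-by-sample sketch of that recursion; the delay and attenuation values are illustrative guesses, not RACE's published parameters, and this is in no way part of XXHighEnd:

```python
def race(left, right, delay=4, atten=0.8):
    """Toy RACE-style crosstalk cancellation: each output channel
    subtracts a delayed, attenuated copy of the OTHER channel's
    output (the 180-degree inversion is the minus sign).
    Stable for atten < 1; left/right are equal-length sample lists."""
    n = len(left)
    out_l = [0.0] * n
    out_r = [0.0] * n
    for i in range(n):
        l, r = left[i], right[i]
        if i >= delay:
            l -= atten * out_r[i - delay]  # cancel right-speaker leakage at the left ear
            r -= atten * out_l[i - delay]  # cancel left-speaker leakage at the right ear
        out_l[i] = l
        out_r[i] = r
    return out_l, out_r

# An impulse in the left channel spawns an ever-weaker alternating
# train of cancellation pulses in both channels:
out_l, out_r = race([1.0] + [0.0] * 9, [0.0] * 10, delay=2, atten=0.5)
print(out_l)  # 1.0 at sample 0, 0.25 at sample 4, ...
print(out_r)  # -0.5 at sample 2, -0.125 at sample 6, ...
```

The recursion never removes anything cleanly; it keeps adding new (inverted) signal, which may be one way to understand the loss of detail Matt reported.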

You could say some proof of it all being off is (indeed) the suggestion that we won't (read : can't) hear sounds outside of the speakers. This is just plain wrong, and I would say anyone with a self-respecting system knows that.
Interesting is the "solution" of placing the speakers next to each other (sorry I didn't read about this earlier, while this topic obviously is about that to start with), but with the "conclusion" that the sound stage would be only as wide as they are (angled ?), which -as said- just is not true.

What *is* true (ok, IMO) is the combing, which I expressed as adding up (which just as well is about cancelling out), and what I indicated as an anomaly once already in the data as well.
The given reason though, crosstalk (mind you, in mid air, not electronically), to me is nonsense again, because it is just an elementary part of the phase shift received by one ear (this time the phase shift between one sound source, e.g. the right speaker, received by two ears). This is "elementary" because it is unrelated to two speakers, and the only thing that happens is that you are able to recognize the exact source of the right speaker, which you wouldn't be able to with one ear only (except for SPL differences, blahblah). So, the sounds springing from the right speaker are perfectly organized to perceive the point source of that, while at the same time the same happens with the left speaker.
The remainder of the story is no different from what I told in the previous post (and therefore what I just told is elementary again, the perception of the imaging as a whole being another element).


Maybe not all that much important, but notice how the "solution" from Ambiophonics depends on anticipating the positioning of the sound sources, like "the soloists often will be in the middle". Yea, they all will be exactly in the middle ... now, with Ambiophonics processing (or with whatever other wrong result because of wrong assumptions). It doesn't work like that and it doesn't need to.
They can try to tell you that you will have a huge distortion from centered source points, but I am quite sure nobody ever perceived at least *that*.
Reasoning out that this should give you huge distortion without *any* real (checkable) fundamentals, hence math, is just a commercial. Or it is about someone who fell into a pitfall; that may sound more nice (to a well meaning person).
Matt E
« Reply #13 on: May 12, 2010, 02:38:35 pm »

Thanks Pete. As I said, I am a psychologist and understand very little about physics, so your explanation of the speakers representing the mikes was very helpful. When I was talking about a phase shift from a point source vs two sources I was thinking of the phase shift between the left and right ears, but as you have pointed out it is about the phase shifts between the mikes being replicated by the speakers. Makes sense. Sorry I'm a bit slow.

Now when you say "There is one anomaly I can foresee : when it is in the data *and* it happens again at playback, it happens double. And it might be the latter which I "see" happening in my room. Thus, it gets over-amplified.", do you think that the wall might actually help with this (although you obviously disagree with the 'ambiophonic' explanation)?

Your critique of what the ambiophonics website said was interesting, but I have to be honest that I hardly read it myself as it sounded like a mixture of gobbledegook and c*"p to me too (although my opinion wasn't really based on any knowledge). I just wanted to know if it sounded any good, but I am pleased that there are more intelligent guys like yourself around to figure that out for me before I go and build a massive bloody sonic wall that makes no difference... haha

Matt
PeterSt
« Reply #14 on: May 12, 2010, 06:43:57 pm »

Hahaha, don't blame me for being more intelligent, as I personally didn't measure your capabilities here. And at least *I* don't have the education !

Quote
Now when you say "There is one anomaly I can foresee : when it is in the data *and* it happens again at playback, it happens double. And it might be the latter which I "see" happening in my room. Thus, it gets over-amplified.", do you think that the wall might actually help with this (although you obviously disagree with the 'ambiophonic' explanation)?

Well, first off, with all that I said about this, no, this is not about the wall being involved. However ...

Being able to pinpoint sounds in mid air (which I consider a different subject, because it is nothing about harmonics being created, IM(HH)O etc.) *is* wall related;
When you are in a, say, empty room (see below) it is difficult to determine this, because the walls are where they are, and this is nothing about predetermined angles (think about the position of the microphones again -> you don't know them generally). But :

The house I lived in before the one I'm living in now had two large stone pillars in the middle of the room (one of them was needed to carry the ceiling above it), which btw was used as a nice large (and firm) stand for the stereo equipment;
It was very clear to me that the sounds in mid air (those you can walk around) were part of the reflection off the nearby pillar. So, imagine this pillar at 3 meters distance from the listening position and some 20 cm thick; from the speaker to the pillar (also some 3 meters) there was a 20cm-wide path by which the sound could travel via there to my ears. Imagine the width of the possible angle, so to say. Well, within that perceived angle those sounds always were (and this pillar, being at the right side, had another 4 meters of space between it and the right wall).

Also, and this may be a subject by itself, I never Never NEVER perceive sound from outside my walls. I am glad about that too, because my brain wouldn't be able to cope with it otherwise. This too tells me that reflections are very important.
How about all those people who claim to hear sound 5 meters behind the wall behind the speakers ? I don't know. It looks wrong to me.

Cheers,
Peter


PS: Did I say I don't like headphones ?
haha