XXHighEnd - The Ultra HighEnd Audio Player
 on: March 09, 2019, 11:35:19 am 
Started by PeterSt - Last post by PeterSt

A:B-W&Y-R, B:B-W

I am playing with this for something like 10 days now and I love it. Btw, this was IIRC the 2nd configuration I tried; I judged it as "Super Sound" but went on to the next one (eager as I was last year when the Lush^2 was new). 5 weeks ago or so I announced that I should revisit this one, and so I finally did.

The reason for going there was a different one than "just do it"; I did it because I suffered from a too white sound (cymbals) and found the description of this configuration, which told that the cymbals receive more color from it. Aha ...
And that worked out.

My story is more complicated than being about the Lush^2 alone (I will write a post elsewhere about this) but I am quite confident that everybody should try this config.


 on: March 09, 2019, 11:26:15 am 
Started by numlog - Last post by PeterSt
This is probably obvious to most but RAM latency can be adjusted very easily, just like clock speed.

Laurence, maybe not. Or generally not, I estimate.

e.g CAS went from 14 to 5

I suppose the question is odd, but what would happen if we'd force the automatically applied (right ?) 5 back to 14, or anything else "too high" ? Can that technically be done ?


 on: March 08, 2019, 11:23:49 pm 
Started by numlog - Last post by numlog
This is probably obvious to most but RAM latency can be adjusted very easily, just like clock speed.

When underclocking, if RAM timing is set to auto, latency will be lowered automatically depending on how low the clock speed is set. This could account for some of the sound difference with underclocking, which itself might make things worse.

The latency is adjusted to very safe calculated values that correspond to the decrease in speed. With manual settings I was able to significantly lower the timings at the maximum 2133MHz speed; e.g. CAS went from 14 to 5 and somehow it works, with CPU-Z reporting it. The other timings were also lowered to a similar degree.
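For concreteness, the CAS figures above can be turned into absolute wait times. This is a minimal sketch (the function name is mine; the 2133 MT/s and CAS values are the ones reported in the post): DDR RAM transfers twice per clock, so at R MT/s one cycle lasts 2000/R nanoseconds.

```python
def cas_latency_ns(cas_cycles: int, transfer_rate_mts: int) -> float:
    """Convert a CAS latency in clock cycles to absolute nanoseconds.

    DDR performs two transfers per clock, so the cycle time in ns
    is 2000 divided by the transfer rate in MT/s.
    """
    cycle_time_ns = 2000.0 / transfer_rate_mts
    return cas_cycles * cycle_time_ns

# The auto setting vs. the manual one described above, both at 2133 MT/s:
auto_ns = cas_latency_ns(14, 2133)   # roughly 13.1 ns
manual_ns = cas_latency_ns(5, 2133)  # roughly 4.7 ns
```

So dropping CAS from 14 to 5 at the same clock cuts the first-word wait to a bit over a third of its auto value, which is why the manual setting is such a large change.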

 on: March 07, 2019, 07:40:16 pm 
Started by numlog - Last post by PeterSt

Using a typical (HDD or SSD) dedicated playback drive I'm sure has benefits, but it inevitably complicates the system.

Correct, I'd say. But I don't think that by now the Playback Drive is used any more to imply the best sound. The reason to use it changed as time passed ...

First off, a lot of people indeed use the RAM-OS Disk, or otherwise use a "Mach xx PC" which utilizes that (thus the same story).
Next, playback from a RAM disk indeed emerged more explicitly along the way (while this has been possible from day one), with the parameter (Playback Drive) suddenly playing a large role there. Thus indeed: denote a RAM Disk and you should have the best situation/environment for the better SQ.
Then, what sneaked in along the way (but which was 100% my target for it !) was the elimination of the possible (or probable) difference between WAV and FLAC. Say that the Playback Drive is a specially formatted - and, when written to, most specially treated - device, such that it acts as a buffer which rules out the difference between WAV and FLAC. However, the sheer fact of this special treatment took on a life of its own, and the difference between WAV and FLAC was not talked about any more (which did not really happen in this forum anyway, but elsewhere it did massively).

Lastly, the usage of two PCs, one to hold the music (Music Server PC) and one to play the audio (Audio PC), eliminated the use case of the Playback Drive, because the Always Copy to XX Drive setting should be in order in that situation. This is so the LAN can be shut off during playback.

And so tbh I myself never bothered about the Playback Drive any more, since what ... 6 years at least ?
If you use one PC for both storage and playback, it's a whole different world. But you shouldn't do that ... (everything present in the Audio PC - or connected to it for that matter - deteriorates).


 on: March 07, 2019, 06:24:23 pm 
Started by numlog - Last post by numlog
It's been over a month and I only realised now that "playback drive" is something that can be denoted in the settings.

This was explained in the RAM disk thread; incidentally, a RAM disk sounds like the simplest and most effective use for a playback drive (if you have capacity to spare).

Using a typical (HDD or SSD) dedicated playback drive I'm sure has benefits, but it inevitably complicates the system.

 on: March 07, 2019, 08:58:48 am 
Started by PeterSt - Last post by briefremarks
I found the "consensus" configuration A:BWYR, B:BWR not quite right with large orchestral works.  Have been listening to A:BW&YR, B:BW for a few hours tonight.  This configuration seems better with large orchestral works.  Large string sections sound fuller, not scratchy, and somehow more coherent.  Also imaging seems more precise with placement of orchestra.  Will keep this as default for now, and come back to consensus configuration in a week or so and see how it feels.

 on: March 03, 2019, 07:19:28 pm 
Started by PeterSt - Last post by PeterSt

Ah, thank you Colin. This can be useful to others indeed.
Allow me to rephrase this all somewhat so it can be related to the normal situation (I myself have trouble reading this back, but I know what you have done - I think Happy) :

Tablet (or Laptop) RDC to -> WiFi -> Router -> WiFi -> Music Server PC with WiFi RDC to -> Ethernet cable -> Audio PC
(a bit of a tongue twister this, so I hope I put it right)
This situation does not allow the Internet to see the Audio PC (and the other way around), which is good (but if necessary can be realised by means of making a Bridge between the Network Adaptors for the cable vs the WiFi respectively).

The bold part is kind of unique - at least I never heard of someone doing it like that. It relates to the fact that one does not have a Music Server PC with two LAN ports (a laptop typically has only one) while it does have WiFi (again typical for a laptop). So the whole point : the Music Server PC here is a laptop ...

This works fine and only one ethernet connection to Mach III needed this way.

... which I think should read as : The Music Server PC now needs to have one Ethernet connection only.

The most crucial part reads here :

MachIII will only ethernet connect to 1G/s capable devices and my router was 100K/s max.

So Colin, you already explained this perfectly and I have nothing to add. But what I should sneak in anyway is the sheer "impossibility" of finding this as the reason. I suppose I would have thrown everything out of the window in such a situation. So, very well found/done !

Thank you for sharing this.
Kind regards,

 on: March 03, 2019, 05:28:27 pm 
Started by PeterSt - Last post by coliny
MachIII Ethernet Speed:-
With my old PC I used a router ethernet-connected to the Audio PC to get RDC remote control. The router would not connect via ethernet with the MachIII. On Peter's advice I solved this problem by using the Music PC to make the RDC connection to the MachIII, and another RDC connection via wifi with the router for remote control. This works fine and only one ethernet connection to the Mach III is needed this way.
Subsequently I found out why the router would not connect to the MachIII: the MachIII will only make an ethernet connection with 1G/s capable devices, and my router was 100K/s max. So if you can't make an ethernet connection with your MachIII, this could be the reason.
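Colin's diagnosis - a gigabit-only port facing a slower router - is the kind of thing that can be checked programmatically instead of by trial and error. A hedged sketch: the function and the example readings below are mine, not anything from the Mach III; on a live system the per-adapter link speeds could come from e.g. `psutil.net_if_stats()`, which reports negotiated speed in Mb/s per interface.

```python
def gigabit_capable(link_speeds_mbps: dict, required_mbps: int = 1000) -> dict:
    """Map each adapter name to whether it meets the required link speed.

    A device that only negotiates at 1 Gb/s will refuse any adapter
    whose reading falls below required_mbps.
    """
    return {name: speed >= required_mbps
            for name, speed in link_speeds_mbps.items()}

# Hypothetical readings for the situation described above:
readings = {"Router LAN port": 100, "Music Server PC NIC": 1000}
print(gigabit_capable(readings))
# → {'Router LAN port': False, 'Music Server PC NIC': True}
```

Checking the reported link speeds of everything in the chain first would have surfaced the router as the culprit immediately.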


 on: March 02, 2019, 10:20:36 am 
Started by keithtaruski - Last post by acg
Hi Peter,

Thanks for all this information, I really appreciate it.  It will be processed shortly when I get a bit of time, but it looks like a good time to jump into galleries considering the next XXHE version snippet that you showed.



 on: March 01, 2019, 05:03:04 am 
Started by keithtaruski - Last post by PeterSt
Hey Anthony,

Storage Spaces (Direct) might do something similar for twice the amount of disks/SSDs. At the same time (but in the end that is its objective) it is RAID-1 (mirror) or (IIRC) RAID-4 (RAID-1 with parity, that requiring more disks/SSDs again, but never mind that).

Storage Spaces (Direct - which is from WS2019) is cool in the sense that it really works under the hood for safety (say that it is 100% transparent to you, the user) and is all software based (like normal mirroring via Manage - Disk Manager). Storage Spaces Direct goes way further because of how it exploits ReFS (though it can use NTFS just the same), how fast it is (I don't really notice write latency) and how it can utilize storage media of any type, size or format at the same time, unlike normal mirroring (normal RAID-1).

The only thing Storage Spaces provides for you (apart from safety in general) would be the one logical drive (which you can make afterwards on top of / over your current disks - combine e.g. G: and H: (and more) into one S:), but you will lack all of the other Gallery functions (eh, obviously). Also, working with the meta data (which is what XXHighEnd's Galleries are) allows you to put that on a fast SSD (up to ultra fast, like NVMe), in itself providing super speed for everything, because really everything works on Galleries, except for loading the real tracks after search/selection and copying Original Music Data (because these obviously require the real music data). A derived example: where I have used my "fast SSD" for 10 years or so by now, today I could replace it with 5 times faster NVMe (or 10 or 20 times faster, if I'd arrange that in an array of SSDs with striping etc.) and all it requires is copying the (in my case) 120 GB of meta data and happily continuing one hour later, now at super speed.

The latter does have a counterpart in Storage Spaces, because if I were to add the extra storage required for it (double the original size you have, see the beginning of this post), a. you just add the disk(s) (which Storage Spaces will fill for you from the ones you added as the original half) and b. Storage Spaces will always use the fastest means at retrieval - which will be your newly added modern disks. Additionally (but it would be the same thing), if in 10 years' time you replace your old original slow disks with faster means again (just take out the old and put in the new, with some definition changes), Storage Spaces (or Windows if you want) will start to use them, automatically.

On a side note, I would never use a Spanned or Striped volume (which have existed since ancient history) for critical data, because when you lose one disk, you lose them all right away (with Spanned you may survive on the remaining one(s)). This is asking for trouble. But then I also never used more than RAID-1 in my life, just because anything beyond it is slower - up to very much slower - on write times. This reflects my objection against NASes as well. RAID-1 is faster, especially with the proper (RAID) hardware. Storage Spaces Direct is also faster for the same reasons, especially if you'd apply a Three-Way mirror (now you need 6 disks instead of your original 1).
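The risk asymmetry between striping and mirroring can be put in numbers. A minimal sketch (the 90% per-disk survival probability is an assumed figure for illustration, not a real drive statistic): a striped volume needs every member disk alive, while a mirror needs only one surviving copy.

```python
def stripe_survival(p_disk: float, n_disks: int) -> float:
    # RAID-0 / striping: data survives only if EVERY disk survives,
    # so adding disks multiplies the failure exposure.
    return p_disk ** n_disks

def mirror_survival(p_disk: float, n_copies: int = 2) -> float:
    # RAID-1 / mirroring: data survives if AT LEAST ONE copy survives,
    # so adding copies multiplies the safety.
    return 1.0 - (1.0 - p_disk) ** n_copies

# With an assumed 90% chance each disk survives some period:
# two-disk stripe ≈ 0.81, two-way mirror ≈ 0.99, three-way mirror ≈ 0.999
```

That is the whole argument in two formulas: each extra striped disk lowers the odds of keeping your data, each extra mirror copy raises them - hence the Three-Way mirror as the safe (if disk-hungry) extreme.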

There's real fun in exploring all this in the realm of SSDs instead of HDDs, because no "Elevator Seeking" applies to SSDs (which is why we'd "duplex" HDDs), while SSDs really allow for parallel access if the Controller allows for it OR when the data lines to the CPU are direct (not via e.g. the C600 chipset) and each processor core/thread deals with it in parallel (totally useless with HDDs).

You may recall that this (performance stuff) is my specialty and actually my life. Happy And regarding that I may tell you that XXHighEnd not using a database whatsoever (while serving my 54K albums) has been deliberate right from the start (to see if I could do it).
And of course ... the very first ERP system on networked PCs ... (1987, 12MHz XTs (120 of them for the first implementation) on two 33MHz AT File Servers).
OK, I better stop. Fishy

