Grub4DOS AHCI / NVMe speed patch for memory maps

grub4dos ahci nvme


#26 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 08:30 AM

My dear friend, as usual, you didn't understand it at the outset: the non-combo is when you GImageX a WIM into a standalone VHD. That one takes longer to load than the combo VHD, which is attached to the WIM.

#27 Wonko the Sane

    The Finder

  • Advanced user
  • 16066 posts
  • Location: The Outside of the Asylum (gate is closed)
  • Italy

Posted 10 October 2019 - 08:54 AM

My dear friend, as usual, you didn't understand it at the outset: the non-combo is when you GImageX a WIM into a standalone VHD. That one takes longer to load than the combo VHD, which is attached to the WIM.

Unfortunately, it is not only alacran not understanding your previous post; I also completely fail to understand it :w00t: :ph34r:, and your quoted explanation of it. (BTW, this fact by itself means nothing, as your posts might be crystal clear and both alacran and I may coincidentally be having one of those days where we fail at English, or at reading, or both; still, now you have two data points.)

 

Alacran could IMHO tone down his (gratuitous) remarks a bit, however. :whistling:

 

Maybe - just maybe - you could (only when you have time and if you want) make a post describing in some more detail each specific build and the respective loading times/sizes/performance/etc.

 

The fact that you often use "u" instead of "you" makes me think that you are often replying from a mobile, or in a hurry, or both; if you could take some time to write more complete posts, it would surely help in understanding their contents.

 

Sorry, gotta go, only for the record I have a Windows 7 build, that one, no not that, the other one, that one, that is faster in loading but slower on transfer on one machine but not on another, and another one, no not the other one another one that is faster in transfer but slower in loading on the laptop (the old one).

 

:duff:

Wonko



#28 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 08:59 AM

If you prefer, I'll write to you in Spanish. A few posts back, I don't know if you noticed, I was going to ask whether you were interested in a test of a standalone VHD, that is, one not linked to any WIM. Of course it won't be 1 GB but 5 GB, because the WIM would be applied to the VHD. This one, compared with the earlier VHD+WIM (the combo), preloads more slowly, or less quickly, as you prefer. Preloading only 1 GB is quicker than preloading the whole 5 GB.

#29 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 10:31 AM

My dear friends of good intellect and understanding, the core of the matter here is that it is quicker to preload a VHD (1 GB) with g4d 0.4.5 than it is to preload its lz4 version (187 MB in this case) with g4d 0.4.6. If you do not believe it, test it yourselves: on this machine, on that machine and/or the other machine.
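
For anyone who wants to reproduce this comparison, here is a minimal sketch of the two menu entries involved. The file name win10.vhd is a stand-in; the lz4 image is assumed to have been made beforehand with the stock lz4 command-line tool (e.g. lz4 -9 win10.vhd win10.vhd.lz4), and 0.4.6a is assumed to decompress lz4 images transparently during map --mem, as discussed later in this thread:

title Win10 - uncompressed VHD to RAM (run under the patched g4d 0.4.5)
find --set-root --ignore-floppies /win10.vhd
map --mem /win10.vhd (hd0)
map --hook
root (hd0,0)
chainloader /bootmgr

title Win10 - lz4 VHD to RAM (run under g4d 0.4.6a)
find --set-root --ignore-floppies /win10.vhd.lz4
map --mem /win10.vhd.lz4 (hd0)
map --hook
root (hd0,0)
chainloader /bootmgr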

#30 alacran

    Platinum Member

  • .script developer
  • 2710 posts
  • Mexico

Posted 10 October 2019 - 07:27 PM

These are my tests on an old machine, just to compare the benefits on this old hardware: an i3-3225 at 3300 MHz, 4 GB RAM at 1333 MHz, loading from an internal SATA II HDD, with AHCI active on the motherboard (the board is not SATA III capable, so any internal disk will run at SATA II). grub4dos-0.4.6a-2019-08-01 was used for these tests.

 

These are only the times required to load the VHD to RAM (not the total times to boot to the desktop):

Loading the 1.5 GB uncompressed VHD using the patched 0.4.5c version: 18.03 seconds

Loading the 1.5 GB uncompressed VHD using the 0.4.6a version: 18.11 seconds

Loading the 128 MB lz4-compressed VHD using the 0.4.6a version: 4.35 seconds

Of course, don't put too much trust in the decimals, since my finger is not fast enough on the stopwatch.


This does not show a big improvement for loading an uncompressed 1.5 GB VHD to RAM (the difference is marginal).

The time to load the same VHD lz4-compressed to a size of only 128 MB shows a big improvement, taking no more than 1/4 of the time, so the improvement is clearly noticeable when using an lz4-compressed VHD, even on old systems.

I need to make clear these results are valid only for hardware similar to the one used in the test. Of course, with an internal SSD and faster RAM and CPU it is very possible the result will be different.

But it also demonstrates the high benefit of using an lz4-compressed VHD on hardware similar to the one used in the test, or on newer hardware.

The meaning of the previous sentence is that loading a small file to RAM is faster than loading a much bigger file, even if the smaller file needs to be decompressed by the CPU.

That is the reason I asked the author to modify/apply this patch to 0.4.6a: to get the benefit of the patch on faster systems plus the benefit of lz4-compressed files, because 0.4.5c is not capable of handling lz4-compressed files.

I know there is a limit to the speed gain, depending on the speed of the SSD or HDD, the RAM speed and the CPU decompression speed; the slowest of them will be the theoretical limit. But it is possible we may still get some improvement before reaching that limit, especially on new and very fast systems.
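
To put rough numbers on this: 1.5 GB in about 18 seconds is roughly 85 MB/s, plausibly the sequential read speed of the SATA II HDD, while delivering the same 1.5 GB via the 128 MB lz4 image in 4.35 seconds is an effective rate of roughly 345 MB/s, with the HDD only having to supply about 29 MB/s of compressed data. In other words, the bottleneck moves from the disk to the CPU-side decompression.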


  • antonino61 likes this

#31 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 08:24 PM

Right you are, and what you say is even more valid in the case of NVMe, where the difference between compressed and uncompressed is almost unnoticeable up to 1.5 GB; larger sizes do show a difference as noticeable as the one you have pointed out, on my architecture too. I noticed it by loading a 5 GB standalone VHD (no combo, the one I was talking about this morning). Of course it would be best to engineer the 0.4.6 version of g4d to handle NVMe (and AHCI, for that matter). As for the rest, results on my machine and on your machine are obviously not comparable. I hope the line of reasoning is clear to Master Wonko as well.

BTW, I see about 4 seconds vs 18 seconds between lz4 and uncompressed; fine. Not to discourage you, but... do you not think the gain in preloading might be annulled by a possible lag in the second stage, the loading proper? If I were you I would check the loading times as well in both cases, because, while it is true that there is nothing you can do later (it is beyond our control), who knows if the prior gain is offset by a later delay owing to decompression by a weaker processor than mine? Just a doubt as much as anything else.

nino



#32 alacran

    Platinum Member

  • .script developer
  • 2710 posts
  • Mexico

Posted 10 October 2019 - 08:33 PM

Just to get a better idea of my previous tests, I decided it was necessary to also try an unpatched 0.4.5c, so I downloaded the latest version available, grub4dos-0.4.5c-2016-01-18.

 

The time to load the 1.5 GB uncompressed VHD to RAM was 18.10 seconds; I take it as the same time taken by 0.4.6a (18.11 sec.).

 

This means the patched version has only a very marginal effect on the mentioned hardware, or maybe none, since my finger is not fast enough on the stopwatch to accurately measure decimals of a second.


  • antonino61 likes this

#33 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 08:40 PM

Couldn't agree with you more, but please see whether the obviously faster compressed preloading factors in to our disadvantage in the loading stage and finally amounts to an overall delay, even if there is nothing you can do in this second stage. This is another reason why I usually include the final overall boot time, not only the preloading stage.



#34 alacran

    Platinum Member

  • .script developer
  • 2710 posts
  • Mexico

Posted 10 October 2019 - 08:46 PM

@ antonino61

 

Once the VHD is loaded to RAM it is always uncompressed. The decompression is done by the grub4dos code + CPU before each chunk of the file is loaded to RAM.

 

So the compression only affects the load-to-RAM time, and DOES NOT affect anything else.



#35 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 10 October 2019 - 08:50 PM

Oh, I see. Sorry, I did not know that. Doubt cleared. Thanks.



#36 dickfitzwell

    Member

  • Members
  • 33 posts

Posted 18 October 2019 - 11:48 PM

Motherboard is a Gigabyte H97N-WIFI (rev. 1.1).

I downloaded AHCI_NVME_V1.4_20181027.rar.

There is only one SSD drive in the rig, with 3 primary partitions.

file.iso exists on (hd0,1) and is a Windows install ISO that maps without error when not using AHCI. Attempting to map the file.iso file by entering the following results in a lock-up of my rig, requiring the reset button.

GRUB4DOS 0.4.5c 2018-10-27, Mem: 630K/2711M/12782M. End: 3617D1

[ Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists the possible
completions of a device/filename. ESC at any time exits. ]

grub> ahci --showall
Error: No BIOS drive selected!
grub> ahci --set-drive=0x80 --set-controller=0 --set-port=5 --showselected
0) AHCI Controller VendorID#8086, DeviceID#8C82, Base Address#F7E35000
   Bus#0, Device#2, Function#1F: 1 Ports, 1 Devices
   Port-5: Hard Disk, Patriot Pyro SSD 560ABBF0 PT140520AS137074
grub> map --top --mem (hd0,1)/f[tab]

Edited by dickfitzwell, 18 October 2019 - 11:53 PM.


#37 Wonko the Sane

    The Finder

  • Advanced user
  • 16066 posts
  • Location: The Outside of the Asylum (gate is closed)
  • Italy

Posted 19 October 2019 - 06:51 AM

It could be some issue with the [TAB] expansion/autocomplete.

Try:

root (hd0,1)

ls

 

:duff:

Wonko



#38 dickfitzwell

    Member

  • Members
  • 33 posts

Posted 19 October 2019 - 07:46 AM

It could be some issue with the [TAB] expansion/autocomplete.

Try:

root (hd0,1)

ls

 

:duff:

Wonko

 

Same result after "ls": a system hang, only resolved by the reset button.

 

I downloaded AHCI_V1.3_20140601.rar and got no system hang, but the speed is not perceptibly different from the unpatched version. Wah wah.


Edited by dickfitzwell, 19 October 2019 - 08:04 AM.


#39 Wonko the Sane

    The Finder

  • Advanced user
  • 16066 posts
  • Location: The Outside of the Asylum (gate is closed)
  • Italy

Posted 19 October 2019 - 08:09 AM

So it is - generally - access to the filesystem file listing that is creating the issue.

 

And if you just do that with the "right" filename?

 

You can try another thing.

 

BEFORE using the ahci command, do a blocklist of the .iso.

Example:

blocklist (hd0,1)/mynice.iso

217+23568746

 

then:

ahci ...

map --top --mem (hd0,1)/217+23568746 ...

 

Of course that (if it works) can only work for contiguous files.

 

:duff:

Wonko



#40 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 31 May 2020 - 10:49 PM

Download-Link: sourceforge.net/projects/grub4dosahcipatch

 

Grub4DOS AHCI / NVMe speed patch for memory maps

This package includes the following directories:
- bin -> compiled grldr version 0.4.5c with AHCI / NVMe support
- patch -> ahcinvme.patch for updating stock grub 0.4.5c source code
- src -> all changed and additional source code files used with grub 0.4.5c

The AHCI and NVMe support can be enabled by using the additional grub menu commands "ahci"
and "nvme". We can set the following parameters with the new commands:

ahci --set-drive=DRIVE --set-controller=CONTR --set-port=PORT [--showall] [--showselected]
nvme --set-drive=DRIVE --set-controller=CONTR [--showall] [--showselected]

Using these commands searches the PCI configuration space for all present AHCI / NVMe controllers.
After setting this option, every read to DRIVE is redirected to the selected AHCI / NVMe
controller. The CONTR and PORT arguments start at zero, where 0 corresponds to the first
AHCI / NVMe controller / port found. If the option --showall is used, all AHCI / NVMe
controllers, ports and connected devices are displayed. If the option --showselected is used,
the selected AHCI / NVMe controller, port and connected device are displayed. The argument
--uninit uninitializes all AHCI / NVMe controllers. This command was implemented to speed up
transfer rates when an image file is loaded from an AHCI / NVMe SSD and mapped to RAM.

The AHCI / NVMe support can no longer be disabled at compile time with the configure command.
We removed this feature to make the code simpler.

The standard read block size is now always 32 MB, because this is the maximum data transfer
size we can get from an AHCI hard disk device when issuing one read command. If an image file
is mapped to RAM, we get a speed of 550 MB/s with a Samsung 840 Pro SSD and 3 GB/s with a
Samsung 960 EVO NVMe SSD.

Note that this AHCI / NVMe addition is only useful if you load the images from an SSD to RAM.
Loading a disk image directly, or using a normal platter HDD, does not increase the read speed much.
Use this patch with care, at your own risk!

Because we added some parameters to the AHCI and NVMe command lines, we want to show you two
sample menu entries for both RAM disk mappings:

title Win10 - RAMDISK AHCI
ahci --set-drive=0x80 --set-controller=0 --set-port=0 --showselected
find --set-root --ignore-floppies /win10.vhd
map --mem /win10.vhd (hd0)
map --hook
ahci --uninit
root (hd0,0)
chainloader /bootmgr

title Win10 - RAMDISK NVMe
nvme --set-drive=0x80 --set-controller=0 --showselected
find --set-root --ignore-floppies /win10.vhd
map --mem /win10.vhd (hd0)
map --hook
nvme --uninit
root (hd0,0)
chainloader /bootmgr

You should always include the new "--uninit" parameter directly after the "map --hook" command.
This stops the redirection of drive 0x80 (hd0) to the AHCI / NVMe read procedure. The drive
number should always match the real BIOS drive number for the system, otherwise the system may
hang.

Many thanks to all the developers and forum members of reboot.pro for making it possible to boot
a full-sized Windows from RAM and a hard disk image.

Greets
Kai Schtrom

 

My dear Schtrom, I have both an NVMe M.2 drive and... an Intel Optane memory stick, both used as ordinary storage units (d:\ (Optane) and e:\ (NVMe proper)).

nvme --set-drive=0x80 --set-controller=0 --showall

yields both of them as NVMe, namely e:\ as controller 0 and d:\ as controller 1.

Although I use the Optane stick (d:\) as the boot drive and container of all my VHDs and WIMs (which are duplicated on e:\ (NVMe) as well), I can take full advantage of your patched version of g4d only for what is on e:, as what is on d: does not get loaded and g4d returns an error. Even though it lists both controllers, if I set the controller to 1 instead of 0, it says "can't load LBA in NVMe mode". Now, since I can assure you I have no better use for the Optane stick than to force it to act as a normal drive (it would not speed anything up here as a cache device, owing to my powerful architecture), and since it is the fastest disk in my benchmarks, it would be a pity not to find a way of making your patched g4d see it as NVMe and load what it is asked to (my VHDs on it). Can you think of a workaround in the menu.lst, or anything I could do to make RAM-loading from it possible? Thanks in advance.
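
Perhaps something like the following would be the place to start; it is only a sketch, on the assumption (per the README's warning that the drive number must match the real BIOS drive number) that the BIOS may enumerate the Optane disk under a different number such as 0x81, so both numbers below are hypothetical and would need adjusting to match the actual enumeration:

title Win10 - RAMDISK NVMe (Optane, controller 1, hypothetical)
# assumption: the Optane disk may be BIOS drive 0x81 rather than 0x80
nvme --set-drive=0x81 --set-controller=1 --showselected
find --set-root --ignore-floppies /win10.vhd
map --mem /win10.vhd (hd0)
map --hook
nvme --uninit
root (hd0,0)
chainloader /bootmgr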



#41 DapperDeer

    Newbie

  • Members
  • 28 posts
  • United States

Posted 02 November 2020 - 08:24 PM

Is this patch needed for AHCI support in general, or is it just to increase the speed?



#42 Guest_AnonVendetta_*

  • Guests

Posted 11 January 2021 - 03:07 AM

My laptop has 3 Samsung 970 Evo Plus 2TB SSDs installed. I intend to try loading W10 from a RAM disk in legacy mode (the image will be located on one of these disks).

My BIOS's disk operating mode is set to AHCI, though it clearly shows these 3 SSDs installed and working fine. In W10, I started with Microsoft's default NVMe controller driver (it came with the OS), but then manually installed Samsung's NVMe driver from their website. Device Manager reports that Samsung's driver is loaded in place of Microsoft's driver.

So, do I need this NVMe speed patch version of G4D? Or should I just use the regular/official G4D from Chenall?

#43 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 11 January 2021 - 08:27 AM

If you can get the Microsoft driver to work instead of the Samsung one, you will experience one hell of a speed increase with this patch; otherwise, keep using the old g4d.

#44 Guest_AnonVendetta_*

  • Guests

Posted 11 January 2021 - 08:50 AM

@antonino61: I don't want to use the generic MS NVMe driver, which is why I replaced it with the one from Samsung... because their driver is the optimal driver to use if you have this particular SSD model. I can definitely say there has been a slight performance and stability increase in the 2 weeks I've had it installed. My benchmarks are also higher with Samsung's driver. Whenever there is both a generic driver and a specific driver FROM THE COMPANY THAT MAKES THE DRIVE, I would say it's usually better to use the one from the product vendor.

That's why I asked whether I should use this G4D version or the original. I honestly don't think the loaded driver will make a difference, because G4D and SVBus would have to initialize the disk before the driver can load.

#45 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 11 January 2021 - 10:09 AM

So what is the relevance of your pointing out the loading of the specific driver instead of the Microsoft one, in the context of speeding up the pre-RAM-loading process? My answer rested on the assumption that the patch did not work on account of the specific driver vs. the Microsoft driver, as your initial "So, ..." suggested. I did try the patch with the Microsoft driver, and relative to the ordinary g4d the pre-RAM-loading is a lot faster, not just a little faster.



#46 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 29 November 2022 - 05:07 PM

http://reboot.pro/in...872#entry212891

 

So, the above makes it extremely difficult to make out what my good old friend Alacrán actually advocates, in his acrobatics between speed and logic, coherence and functionality, common sense, uncommon sense, private sense and public sense, and what more and what not; otherwise it makes no sense.

 

The above makes it a little difficult to make out what a timeless newbie like me has to type in order to "concoct" the old NVMe patch with the latest g4d out there, which I hope will get back to Alacrán's advocacy without any prejudice to functionality and speed:

viz., I normally have the controller identifier string first, something like

nvme --set-drive=0x80 --set-controller=0 --showselected

then the usual strings until I get to map --hook,

and then the uninit string, which is fundamental.

All of the above runs on 0.4.5 (the full entry is sketched below).

Now what do I have to do to associate it with the latest 0.4.6 and make it work as spick and span as in the previous scenario? I am willing to make everybody happy, once again, without any prejudice to functionality and speed.
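
For reference, this is the complete 0.4.5 entry I am describing, which is simply the NVMe sample from the patch README with win10.vhd as a stand-in file name:

title Win10 - RAMDISK NVMe
nvme --set-drive=0x80 --set-controller=0 --showselected
find --set-root --ignore-floppies /win10.vhd
map --mem /win10.vhd (hd0)
map --hook
nvme --uninit
root (hd0,0)
chainloader /bootmgr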

 

Still for functionality and speed's sake, I would like to inform you people that I have tried all possible instances of Primo Ramos 4 (which amounts to 2 out of 11, according to the hardware here) and I must tell you that, in retrospect, it was not worth the trouble (no harm in trying, though). It takes almost three times as long as SVBus takes to load the VHD into RAM with the NVMe patch (almost 9 seconds vs 3 seconds). I guess the Primo Ramos advocates calculated the speed gain relative to non-patched g4d. I hope I have not missed something, but SVBus with NVMe-patched g4d seems to be the fastest so far, unless proven otherwise.

The only advantage I have seen in the tool is that you can save your RAM modifications to disk while running from RAM, which I guess is also the reason it is not as fast as its advocates claim; but I am willing to read anybody who begs to differ.



#47 antonino61

    Gold Member

  • Advanced user
  • 1525 posts
  • Italy

Posted 29 November 2022 - 05:08 PM

Unwanted repetition of the above post.





