[Bug report] Redundant container writing and possible kernel-level I/O deadlock with hard-disk hosted, NTFS-formatted containers on Windows 7



#1 pter6464

pter6464
  • Members
  • 5 posts
  • United States

Posted 14 December 2016 - 12:44 AM

The latest versions of ImDisk exhibit some unexpected behavior when used to mount raw image containers hosted on a local hard disk (traditional, not SSD). I've confirmed that this behavior exists in ImDisk v2.0.6 and v2.0.9 and most likely originated from the 2.x rewrite of ImDisk. This behavior does not exist in ImDisk v1.9.4. I have observed this behavior on two different Windows 7 x64 machines. This behavior appears to be some kind of race condition and is only observable with operations on large files (> 10 GB). 
 
To reproduce this problem, create a new file on your local hard disk (let's call it D: from now on, for brevity) to act as the raw image container. You can use these commands to do this:
fsutil file createnew D:\<filename> 64000000000
fsutil file setvaliddata D:\<filename> 64000000000
You may need to write zeroes to the first 512 bytes of the file to make sure ImDisk detects it as blank media. Right-click the created file and choose "Mount as ImDisk Virtual Disk". In the dialog box, the size of the image should be "(existing image file size)" and the image offset should be 0. Leave the other options at their defaults. (Let's call the mounted image partition E: from now on, for brevity.) Quick-format E: with the NTFS filesystem and the default cluster size. Now let's assume we have a large file > 10 GB called D:\largefile. (You can use any large file on your PC to observe the problem behavior; you can also re-use the fsutil commands above to create said large file.)
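For what it's worth, the first-sector zeroing and the large test file can also be done from the command line instead of a hex editor. I haven't verified this on every setup, but something like the following should work (names and sizes are just placeholders; fsutil file setzerodata writes zeroes over the given range of a non-sparse file):
fsutil file setzerodata offset=0 length=512 D:\<filename>
fsutil file createnew D:\largefile 16000000000
fsutil file setvaliddata D:\largefile 16000000000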
 
When I initiate a file copy from D:\largefile to E:\largefile, one of two things happens:
1. The file is copied normally at ~50MB/s. This is the expected behavior.
2. The file is copied normally for 1-2 seconds at ~50MB/s. The process copying the file (explorer or any other file manager) then stops responding, and it becomes impossible to cancel the copy through the process's GUI. (Since explorer does asynchronous file copies, it will keep responding overall, but the thread responsible for the copy will be frozen.) The ImDisk driver then proceeds to write zeroes to the entire length of E:\largefile. (This should not occur, since we want to fill the file with its actual data from D:\largefile, not with zeroes. This is the redundant container writing behavior.) Resmon.exe shows the writing activity coming from the System process (PID 4). On one machine, zeroes are written at ~50MB/s and the hard disk D: is at 100% utilization. On the other machine, zeroes are written at ~5MB/s, with D: utilization at 5%. As you can see, this makes ImDisk essentially unusable when dealing with large files. Once zeroes have been written to the entire length of E:\largefile, the copying process unfreezes, and the copy proceeds normally at ~50MB/s and can be cancelled. If I forcefully dismount E:\ (see below on how to do this) while the zeroes are being written, then re-mount it and look at the contents of E:\largefile, its first ~50MB are filled with actual data from D:\largefile and the rest is zeroes. This is the data that was copied before the strange race condition kicked in and caused the ImDisk driver to start writing zeroes to the entire length of the file.
 
When I try to delete E:\largefile (or any large file on the mounted partition), the following behavior happens consistently:
The deletion appears to happen normally: the file disappears and the file manager GUI stays responsive. 1-2 seconds later, ImDisk starts writing zeroes to the entire length of the deleted E:\largefile, at the same speeds as above (one machine at ~50MB/s with D: at 100% utilization, the other at ~5MB/s with D: at 5% utilization). If I initiate any other I/O request to the mounted image, or try to dismount it, before the zero-writing completes, the file manager becomes unresponsive until all the zeroes have been written.
 
Now for the really bad part: while the ImDisk driver is writing the zeroes, it will sometimes simply stop (another race condition?). Hard disk D: utilization drops to 0% and there is no I/O activity to E: anymore. The file manager remains frozen, and any process that so much as tries to enumerate E: will also freeze. This is the kernel-level I/O deadlock behavior. If I try to shut down Windows at this point, it too freezes at "Shutting Down...".
 
It is usually possible to recover from this deadlock by forcefully dismounting the host hard drive D: with a command like
chkdsk D: /x
Sometimes, though, this too will fail, and now any enumeration of D: will also lead to deadlock.
 
As you can see, this makes ImDisk 2.x essentially useless if you need it for working with large files on local hard-disk hosted containers, forcing me to keep using v1.9.4. I sure hope I'm not the only one who can reproduce this problem.
 
If you're still with me, thank you for reading all this, and, of course, huge thanks to Olof Lagerkvist for creating this tool in the first place. It's been immensely useful to me for many years now and is one of the first things I install on any new machine, together with the drivers :).

Edited by pter6464, 14 December 2016 - 12:50 AM.


#2 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 14 December 2016 - 12:22 PM

Interesting report, thanks!

 

The 2.0.9 version has been around for quite a while now and only a very few problems have been found in it so far, nothing very serious really. But there might be reasonable explanations why this particular bug has not been seen before. It looks like most users only use ImDisk for memory-backed virtual disks; only a very few mount images. Of those who use it for mounting images, most seem to mount ISO images as virtual CD/DVD drives or similar. Those mounting large images on Windows 7 or later these days mostly seem to use alternative solutions, because of several compatibility problems like the lack of Disk Management support etc.

 

I have not looked at any possible causes yet, and unfortunately I do not have much time to do that where I am right now. But my guess is that the problem could have something to do with the rather large changes in the way I/O requests are handled in the 2.x versions. The driver now responds to TRIM requests so that unused areas can be deallocated in the backend storage. This means that when files are deleted from a virtual disk that is backed by a sparse image file, the parts of the image file used by the deleted file will be deallocated. In Windows 8 and later, the best way to do this is to send a file-level trim request to the image file. In Windows 7 and earlier there is no support for this filesystem request, so the only way is to request that the filesystem driver zeroes the ranges. For sparse files this has the same effect, the ranges are deallocated. But the question is what happens for non-sparse image files in this case. I have no idea actually, and I don't think I have a test case for that particular scenario in my usual test runs.

 

What you could try until I have time to fix this is to mark the image file as sparse. Either use fsutil sparse setflag before allocating the full size for the image file, or let the ImDisk command line tool create the image file and add -o sparse to the command line. You can also try with and without the -o par option. It can be a little dangerous sometimes, there have been blue screen crashes in some cases, but in most cases it just boosts I/O performance a lot.
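For example, something like this should let the command line tool create the image file as sparse and mount it in one step (file name, size and drive letter are just placeholders here, I have not double-checked this exact command line at the moment):
imdisk -a -t file -f D:\testimage.img -s 60G -m E: -o sparse
For an image file that already exists, you could instead mark it as sparse before allocating its full size:
fsutil sparse setflag D:\testimage.img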



#3 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 14 December 2016 - 12:27 PM

As a side note, I suspect :unsure: that what you described may actually be the intended behaviour. :dubbio:

 


It seems to me that if you create a sparse file, as soon as you copy it "normally" the sparse file "expands" into a zero-filled "normal" file.
 

Or maybe I am not understanding the issue at all. :w00t: :ph34r:

 

Maybe try using a sparse-file-aware copying tool?

 

A few of them are posted here (just in case):

http://reboot.pro/to...n-many-ramdisk/

http://reboot.pro/to...mdisk/?p=192898

 

:duff:

Wonko



#4 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 14 December 2016 - 12:31 PM

@Wonko

No, at least from how I understood it he was not using sparse files at all. I was just curious to see what would happen if he switched to using sparse image file as storage for the virtual disk itself, not using sparse attributes for the files copied.



#5 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 14 December 2016 - 12:49 PM

@Wonko

No, at least from how I understood it he was not using sparse files at all. I was just curious to see what would happen if he switched to using sparse image file as storage for the virtual disk itself, not using sparse attributes for the files copied.

I see now :), I misread the OP, I was confusing the D:\<filename> with the D:\largefile :blush: .

 

Still, the result of fsutil createnew + fsutil setvaliddata may be a "queer" file (I mean, if it were "ok", why would one need to write the first 512 bytes of it?).

 

Anyway, the OP can experiment with the given tools as an alternative to fsutil sparse.

 

Is the behaviour the same if the file is created through a different method?

Like - say - fsz?

Or dd if=nul ...

I traditionally (but then I am on XP, which may be a whole different story) create files with fsz, and I don't recall :unsure: ever having needed to write 00's to the first sector for IMDISK to mount it as a blank image.

 

Additionally, maybe it is a question related to size; I never even thought of creating a non-sparse file that large.

 

:duff:

Wonko



#6 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 14 December 2016 - 01:03 PM

From what I have understood fsutil setvaliddata is just a quicker way to allocate a large file because it does not zero existing contents in the clusters allocated for the file. There should be no differences to the file in any other respect, it should still be a fully allocated non-sparse file. That's probably why he needed to zero the first few bytes, so that filesystem drivers do not detect something that happened to already be there as a valid VBR or similar.

#7 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 14 December 2016 - 02:58 PM

From what I have understood fsutil setvaliddata is just a quicker way to allocate a large file because it does not zero existing contents in the clusters allocated for the file. There should be no differences to the file in any other respect, it should still be a fully allocated non-sparse file. That's probably why he needed to zero the first few bytes, so that filesystem drivers do not detect something that happened to already be there as a valid VBR or similar.

Interesting. :)

So, in theory, if I have a filesystem filled up to the brim, then delete a few files and create a new file spanning the whole free space with this method, all my deleted data is in the new file until it gets overwritten?

I must make some tests about this, maybe this feature can be useful for *something*. :unsure:

 

:duff:

Wonko



#8 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 14 December 2016 - 03:26 PM

Yes, it is frequently used by database storage engines and similar that need to quickly allocate large files and where the file is used in a way that will not expose unused parts of the file to clients (ideally, at least ;) ).

This feature requires administrative privileges of course. Or more specifically, privileges to directly access the volume raw data bypassing filesystem drivers.

#9 pter6464

pter6464
  • Members
  • 5 posts
  • United States

Posted 15 December 2016 - 11:42 PM

Thanks for the fast replies!
 
Olof, it's exactly as you described - making the container a sparse file makes all the unexpected behavior disappear. The -o par option, though, has no effect on it. I'm guessing the fix for a later version of ImDisk could be as simple as not responding to TRIM requests when the container is a non-sparse, locally hosted file (or having an option for this).
 
I'm not using sparse container files because after prolonged use they tend to become quite a mess fragmentation-wise on the host disk. I once had a 500GB sparse container on a 3TB HDD with almost a million segments fragmented all over the place. The disk thrashing was ridiculous, and other new files on the host disk were getting fragmented as well because of that one fragmented container.
 
What I'll do for now is use Arsenal Image Mounter (which I wasn't aware existed until I came to this forum :) ). It handles non-sparse containers just fine from my tests. The only thing is - I miss the handy context menu from ImDisk when right-clicking a file or mount point that lets you quickly (dis)mount an image. Is there some way to add it in using the registry and AIM's CLI?
 
 

 

From what I have understood fsutil setvaliddata is just a quicker way to allocate a large file because it does not zero existing contents in the clusters allocated for the file. There should be no differences to the file in any other respect, it should still be a fully allocated non-sparse file. That's probably why he needed to zero the first few bytes, so that filesystem drivers do not detect something that happened to already be there as a valid VBR or similar.

This is exactly correct :).



#10 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 16 December 2016 - 12:00 PM

Thanks for testing! I have never thought about the potential problem with fragmentation when using sparse files. I have probably not looked at fragmentation for the last 10 years or so but I understand that it could have some effects on I/O efficiency of course.

The interesting thing here is that it is probably just pure luck that this behaviour does not happen in AIM. It has a similar way of handling TRIM requests as ImDisk, but Windows 7 disk.sys by itself does not propagate TRIM requests sent by filesystem drivers down to SCSI-based storage drivers, only to ATA-based storage drivers. That's why SSD vendors often provided filter drivers for disk.sys in Windows 7 that added this feature. This is not needed on Windows 8 and later, because TRIM requests are automatically sent to SCSI storage there as well. But there the problem will not show with AIM either, because file-level trim is supported on the base image file and AIM will try that first before sending a zero request. So there is no combination of OS-version-dependent behaviour that could cause this problem to happen with AIM, even though it probably would show up with one of those TRIM-propagating filter drivers added.

Unfortunately, Arsenal Image Mounter does not have similar context menus for Explorer and similar.

#11 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 16 December 2016 - 12:10 PM

Unfortunately, Arsenal Image Mounter does not have similar context menus for Explorer and similar.

Maybe erwan.l can add it to Imgmount:

http://reboot.pro/to...19003-imgmount/

http://reboot.pro/fi...e/374-imgmount/

:unsure:

... but I haven't seen him around lately :(.

 

:duff:

Wonko



#12 pter6464

pter6464
  • Members
  • 5 posts
  • United States

Posted 17 December 2016 - 12:43 AM

The interesting thing here is that it is probably just pure luck that this behaviour does not happen in AIM. It has a similar way of handling TRIM requests as ImDisk, but Windows 7 disk.sys by itself does not propagate TRIM requests sent by filesystem drivers down to SCSI-based storage drivers, only to ATA-based storage drivers. That's why SSD vendors often provided filter drivers for disk.sys in Windows 7 that added this feature. This is not needed on Windows 8 and later, because TRIM requests are automatically sent to SCSI storage there as well. But there the problem will not show with AIM either, because file-level trim is supported on the base image file and AIM will try that first before sending a zero request. So there is no combination of OS-version-dependent behaviour that could cause this problem to happen with AIM, even though it probably would show up with one of those TRIM-propagating filter drivers added.

 

Interesting info, thanks!
 
 
I've managed to add context menus for mounting images with AIM, similar to those present in ImDisk. If anyone's interested, here are the registry entries to add (saved as a .reg file they can be imported directly):
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell\AIMMountFile]
@="Mount as &AIM Virtual Disk"
"HasLUAShield"=""

[HKEY_CLASSES_ROOT\*\shell\AIMMountFile\command]
@="C:\\<path-to-AIM>\\aim_ll.exe -a -t file -f \"%L\""

[HKEY_CLASSES_ROOT\*\shell\AIMMountFileRO]
@="Mount as &AIM Virtual Disk (read only)"
"HasLUAShield"=""

[HKEY_CLASSES_ROOT\*\shell\AIMMountFileRO\command]
@="C:\\<path-to-AIM>\\aim_ll.exe -a -t file -o ro,fksig -f \"%L\""

[HKEY_CLASSES_ROOT\Drive\shell\AIMUnmount]
@="Unmount &AIM Virtual Disk"
"HasLUAShield"=""

[HKEY_CLASSES_ROOT\Drive\shell\AIMUnmount\command]
@="C:\\<path-to-AIM>\\aim_ll.exe -d -m %L"

[HKEY_CLASSES_ROOT\Drive\shell\AIMUnmountF]
@="Unmount &AIM Virtual Disk (force)"
"HasLUAShield"=""

[HKEY_CLASSES_ROOT\Drive\shell\AIMUnmountF\command]
@="C:\\<path-to-AIM>\\aim_ll.exe -R -m %L"
Don't forget to replace <path-to-AIM>. Unlike ImDisk's context menus, these require elevation. So if you have UAC enabled and want to use them from Explorer or from a non-elevated file manager, you will need to make a shortcut (.lnk file) to aim_ll.exe in the same directory as aim_ll. Let's call the shortcut aim_lle.lnk. Right-click it, choose Properties, go to Shortcut > Advanced, and turn on "Run as administrator". Next, make a batch file aim_lle.bat in that same directory with the single line
%~dp0aim_lle.lnk %*

Finally, in the registry entries above, replace all occurrences of aim_ll.exe with aim_lle.bat.

 
 
 
One last question - I seem to be having trouble mounting a container from within a mounted container when using AIM. AIM says "No volumes attached. Disk could be offline or not partitioned." and Disk Management doesn't let me assign a drive letter. Is this kind of usage not supported in AIM?

Edited by pter6464, 17 December 2016 - 01:04 AM.


#13 pter6464

pter6464
  • Members
  • 5 posts
  • United States

Posted 17 December 2016 - 10:59 PM

I figured this out as well. The problem was actually a disk signature collision. Both containers I was trying to mount were raw partition images with no MBR, so when mounting them, AIM was giving them both a unique disk ID of 1. The first one mounted fine, but any subsequent ones caused a disk signature collision and, since there was no MBR, confused Windows disk management. The obvious solution is to make sure your images have an MBR in the first place, but if you are comfortable with hex editing, you can zero out offsets 0x1B8 - 0x1FD in your image file and manually write an MBR with a random unique disk ID and a single partition with start 0 and length = (length of your image file / 512). Note that this will probably render your imaged partition unbootable. The hex editing IS required, because diskpart will refuse to write an MBR if it detects that sector 0 of your image contains a valid NTFS boot record.
 
Cheers!

Edited by pter6464, 17 December 2016 - 11:00 PM.


#14 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 18 December 2016 - 10:06 AM

 

I figured this out as well. The problem was actually a disk signature collision. Both containers I was trying to mount were raw partition images with no MBR, so when mounting them, AIM was giving them both a unique disk ID of 1. The first one mounted fine, but any subsequent ones caused a disk signature collision and, since there was no MBR, confused Windows disk management. The obvious solution is to make sure your images have an MBR in the first place, but if you are comfortable with hex editing, you can zero out offsets 0x1B8 - 0x1FD in your image file and manually write an MBR with a random unique disk ID and a single partition with start 0 and length = (length of your image file / 512). Note that this will probably render your imaged partition unbootable. The hex editing IS required, because diskpart will refuse to write an MBR if it detects that sector 0 of your image contains a valid NTFS boot record.
 
Cheers!

 

Another good thing to know :) . I never even thought of using a non-partitioned image (or super-floppy) with AIM.

I wonder :unsure: if it would be possible to *somehow* hack together a dual-use MBR/bootsector like the MakebootFat one but for NTFS. :dubbio:

 

 

:duff:

Wonko



#15 Olof Lagerkvist

Olof Lagerkvist

    Gold Member

  • Developer
  • 1338 posts
  • Location:Borås, Sweden
  • Sweden

Posted 18 December 2016 - 01:16 PM

There seems to be an increasing number of people who try to use Arsenal Image Mounter with non-partitioned images. In general, it works pretty well if you mount it as "removable" because Windows will then ignore the disk signature and accept to mount a filesystem directly on the "partition 0" object.

 

There have also been a few other strange things reported regarding volume ID value collisions for various kinds of AIM virtual disks, though, and we will provide a better solution for disk ID reporting in the next version that will hopefully fix this.



#16 pter6464

pter6464
  • Members
  • 5 posts
  • United States

Posted 19 December 2016 - 12:12 AM

I wonder :unsure: if it would be possible to *somehow* hack together a dual-use MBR/bootsector like the MakebootFat one but for NTFS. :dubbio:

It should be possible :) . The hex edits I do above already make the first sector a combined MBR + NTFS boot sector, and the filesystem drivers are fine with it. All that's left is to fix up the actual boot code; too bad I don't know assembly.

 

In general, it works pretty well if you mount it as "removable" because Windows will then ignore the disk signature and accept to mount a filesystem directly on the "partition 0" object.

Good to know. I'll probably just keep using my hack above anyway, as it's faster for me to hex-edit in an MBR than to configure volume write-caching settings (write caching is important to me, and it's off by default for removable volumes). Thanks for your work on ImDisk and AIM. Looking forward to future updates :)



#17 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 19 December 2016 - 11:25 AM

I did a quick test (with the "NT52" bootsector) in VirtualBox.
Since the "overlapping data" (the disk signature and the partition table) both fall within the "error messages" area of the bootsector, the thing may work overall.
Grub4dos in this case is "more royal than the king": it won't allow chainloading the bootsector (throwing an error because the "sectors before" of a bootsector of a partition on a hard disk should not be 0), and likewise partnew doesn't allow writing a partition table entry with 0 sectors before (but this is not a problem, as one can make a partition entry with 1 sector before and then overwrite the LBA with a 0 and the CHS with 1).
The good news is that on such a volume, if we use grub4dos to chainload the NTLDR, it seemingly boots fine (at least up to the BOOT.INI choices).
The bootability of the volume in itself (without changing the code) is of course not OK (though if deployed to a "real" hard disk image it behaves as before).
There are however some possible issues.
If the volume is created in IMDISK it will have 0 sectors before and - as said - the NTLDR will load, but if the volume is an actual image of a "real" partition, it will obviously have a non-zero "sectors before" and thus it cannot work without changes.
The NTFS formatting of a volume may have different results in different contexts: a hard-disk volume will be 1 sector smaller than the partition because of the $BootMirr placed in the last sector (outside the volume but inside the partition extents), so the value for the partition entry needs to be checked against both the size in the bootsector and the whole image size.
The geometry of the volume as sensed by the BIOS may differ (depending on the BIOS and on the size of the image) from the "expected" 255/63; in my test the 256 MB image was sensed as 16/63 while the bootsector value was 128/63, so I need to test with larger images that will default to 255/63.
The four bytes at 0x1F8, 0x1F9, 0x1FA and 0x1FB are pointers to the four error messages available in the bootsector:
http://thestarman.pc.../mbr/NTFSBR.htm
       0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
7D83           0D 0A 41 20 64 69 73 6B 20 72 65 61 64      ..A disk read
7D90  20 65 72 72 6F 72 20 6F 63 63 75 72 72 65 64 00    error occurred.
7DA0  0D 0A 4E 54 4C 44 52 20 69 73 20 6D 69 73 73 69   ..NTLDR is missi
7DB0  6E 67 00 0D 0A 4E 54 4C 44 52 20 69 73 20 63 6F   ng...NTLDR is co
7DC0  6D 70 72 65 73 73 65 64 00 0D 0A 50 72 65 73 73   mpressed...Press
7DD0  20 43 74 72 6C 2B 41 6C 74 2B 44 65 6C 20 74 6F    Ctrl+Alt+Del to
7DE0  20 72 65 73 74 61 72 74 0D 0A 00 00 00 00 00 00    restart........
7DF0  00 00 00 00 00 00 00 00 83 A0 B3 C9 00 00 55 AA   ..............U.
These pointers actually "overlap" the 4th partition entry (but since the partition ID there remains 00, Windows will not have a problem with it). They will need to be modified, and *somehow* four much shorter error messages could be implemented, *like*:
7D83           0D 0A 44 69 73 6B 21 00 0D 0A 4E 54 4C   .....Disk!...NTL 
7D90  44 52 4D 69 73 73 00 0D 0A 4E 54 4C 44 52 43 6F   DRMiss...NTLDRCo
7DA0  6D 70 00 0D 0A 43 74 72 6C 41 6C 74 44 65 6C 00   mp...CtrlAltDel.
7DB0  0D 0A 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
7DC0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
7DD0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
7DE0  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
7DF0  00 00 00 00 00 00 00 00 83 8B 97 A3 00 00 55 AA   ........ƒ‹—£..Uª
About re-writing the assembly code, judging from the disassembly on the mentioned page it should be quite possible, taking a couple of shortcuts (like dropping the CHS lookup and possibly assuming that Int13h extensions are available in the BIOS), to fit some alternate code in.
Actually I was thinking about using the unused second 8 sectors of the 16-sector $Boot :dubbio: which would allow *anything*.
The way makebootfat works is unfortunately not directly applicable, because $Boot is actually a file within the NTFS filesystem, so we cannot play with its address/offset by manipulating "sectors before" :(.
 
:duff:
Wonko
  • Olof Lagerkvist likes this

#18 Wonko the Sane

Wonko the Sane

    The Finder

  • Advanced user
  • 13480 posts
  • Location:The Outside of the Asylum (gate is closed)
  • Italy

Posted 23 December 2016 - 07:53 PM

Ok, not that I am particularly good with assembly, but standing on the shoulders of giants :) :

  • H. Peter Anvin
  • Andrea Mazzoleni
  • Daniel B. Sedory aka The Starman

 

I managed to put together something that seems to work.

 

The grub4dos issue remains.

 

If anyone is interested in the matter :dubbio: we may later ask Chenall or YaYa (or both) if it would be possible to remove the error.

 

:duff:

Wonko

Attached Files


  • pter6464 likes this



