
Rufus v1.3.0 has been released


208 replies to this topic

#151 Blackcrack


    Frequent Member

  • Advanced user
  • 412 posts
  •  
    Germany

Posted 23 March 2014 - 02:19 PM

hmm, okay.. thanks for the reply :)

 

but... I like Rufus more *g*

 

best regards

Blacky



#152 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 23 March 2014 - 04:04 PM

After finding the behavior described in my previous post, I was also curious about the differences between the 4 directory entry writes for one of the files. They are:





Comparing files 0.bin and 1.BIN
0000065C: 00 41 A
0000065D: 00 66 f
0000065F: 00 69 i
00000661: 00 6C l
00000663: 00 65 e
00000665: 00 30 0
00000667: 00 0F
00000669: 00 C2
0000066A: 00 30 0
0000066C: 00 33 3
0000066E: 00 34 4
00000670: 00 34 4
00000674: 00 FF
00000675: 00 FF
00000678: 00 FF
00000679: 00 FF
0000067A: 00 FF
0000067B: 00 FF
0000067C: 00 46 F
0000067D: 00 49 I
0000067E: 00 41 A
0000067F: 00 41 A
00000680: 00 36 6
00000681: 00 37 7
00000682: 00 7E ~
00000683: 00 31 1
00000684: 00 20
00000685: 00 20
00000686: 00 20
00000687: 00 20
00000689: 00 1C
0000068A: 00 5D ]
0000068B: 00 80
0000068C: 00 77 w
0000068D: 00 44 D
0000068E: 00 77 w
0000068F: 00 44 D
00000692: 00 60 `
00000693: 00 80
00000694: 00 77 w
00000695: 00 44 D




Comparing files 1.bin and 2.BIN
00000696: 00 63 c
00000697: 00 01
00000699: 00 08




Comparing files 2.bin and 3.BIN
00000003: 1A 8A
00000004: 9D 81
00000692: 60 71 ` q
00000693: 80 6E n




Comparing files 3.bin and 4.BIN

The first two writes happen before the file allocation table writes, the second two happen afterwards, and we see that there is no difference between the third and the fourth, but there are differences between the first three. The first write adds both the short and the long variant of the file name, the second changes the starting cluster and the length from 0 to the real values, the third updates the time, and the fourth is identical to the third.
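Incidentally, the byte at offset 0x669 in the first diff (0xC2) sits at offset 13 of the long-name entry, which is where FAT stores the checksum tying each long-name slot to its 8.3 entry. A minimal sketch of the standard FAT checksum, applied to the short name visible in the same dump, reproduces it:

```python
def lfn_checksum(short_name_11: bytes) -> int:
    """Standard FAT checksum linking LFN entries to their 8.3 entry.

    `short_name_11` is the 11-byte name field: 8 name characters plus
    3 extension characters, space-padded, with no dot.
    """
    s = 0
    for b in short_name_11:
        # rotate right by one bit, then add the next byte, modulo 256
        s = (((s & 1) << 7) + (s >> 1) + b) & 0xFF
    return s

# "FIAA67~1" with an empty (space-padded) extension, as in the capture
print(hex(lfn_checksum(b"FIAA67~1   ")))  # → 0xc2, matching byte 0x669
```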

 

Strange, but that's how FAT32 writes work for every file when an xcopy is performed.


Edited by ianst, 23 March 2014 - 04:06 PM.


#153 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 23 March 2014 - 05:29 PM

And finally the capture from Rufus for one of the small files (see my previous posts for how I've done it):

SCSI: Write(10) LUN: 0x00 (LBA: 0x00001876, Len: 1)
SCSI: Write(10) LUN: 0x00 (LBA: 0x00005600, Len: 8)
SCSI: Write(10) LUN: 0x00 (LBA: 0x00005600, Len: 8)
SCSI: Write(10) LUN: 0x00 (LBA: 0x00005658, Len: 4)
SCSI: Write(10) LUN: 0x00 (LBA: 0x000008e1, Len: 1)

Two FAT writes and two directory entry writes for one file data write, for each file of 2048 bytes.

 

The writes for the big file look nice: 64K pushed at once, then the next 64K:







SCSI: Write(10) LUN: 0x00 (LBA: 0x00008538, Len: 128)
SCSI: Write(10) LUN: 0x00 (LBA: 0x000085b8, Len: 128)

I think everything is settled now. Rufus copies better than xcopy :) avoiding two dir entry writes. Still, the overhead per file when writing to FAT32 is 2 x 4KB writes (dir entry) and 2 x 512B writes (FAT entries).
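Taking the stated write sizes at face value, the per-file bus traffic works out as:

```python
data = 2048                    # one small file's payload
overhead = 2 * 4096 + 2 * 512  # 2 dir entry writes + 2 FAT entry writes
total = data + overhead
print(total)                   # 11264 bytes actually sent per 2 KB file
print(round(overhead / total, 2))  # ~0.82: over 80% of the traffic is overhead
```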

 

It still seems a good compromise for the possibility of having a writable bootable stick.


Edited by ianst, 23 March 2014 - 06:26 PM.


#154 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 23 March 2014 - 08:43 PM

Yes, more time usually means more I/O. No surprise there, as neither the computer nor the device is sitting idle.

 

However, as I tried to point out, the only thing I am interested in is whether the flash blocks are being written or not on the flash drive itself, because this equates to wear and tear, and that's the only thing that really matters.

 

You have to remember that most devices will have a cache, and I/O operations will go something like this:

1. The OS tells the device that a set of blocks need to be written

2. When a write I/O operation is received, the device stores the updated data in a cache (i.e. onboard RAM), which is usually large enough to contain data for a few blocks.

3. The device only issues an actual write operation to the flash when one of the following 2 conditions occurs:

  a. some time has passed without a cached block of data being modified further, in which case the data can be considered final and should be stored on a permanent basis.

  b. the cache runs out of space as new data pertaining to a block that is not already in the cache arrives, in which case the oldest cache block gets flushed by being written onto the flash.

 

Now, if the cache is not awfully small and you're writing a bunch of small files, chances are that 3b) is not going to apply that much. Therefore, most of the re-writes for the FAT and whatnot should occur for data that is still sitting in the cache rather than on the flash: it might take longer to write a bunch of small files, because more I/O needs to go on the bus, but it still doesn't induce more wear and tear, as only data in the cache should be modified before the write-to-flash command is issued.
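The two flush conditions above can be modeled as a toy write-back cache; the capacity, block numbers, and write counts below are invented purely for illustration:

```python
from collections import OrderedDict

class ToyDeviceCache:
    """Toy model of an on-device write cache: repeated writes to the
    same block coalesce in RAM; the flash is only touched on eviction
    (condition 3b) or an explicit flush (standing in for the 3a timeout)."""

    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block -> data, oldest first
        self.flash_writes = 0        # actual wear-causing flash writes
        self.bus_writes = 0          # I/O operations seen on the USB bus

    def write(self, block: int, data: bytes):
        self.bus_writes += 1
        if block in self.cache:
            self.cache.move_to_end(block)   # re-modified: stays in cache
        elif len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)  # evict oldest to flash (3b)
            self.flash_writes += 1
        self.cache[block] = data

    def flush(self):
        self.flash_writes += len(self.cache)  # timeout / final flush (3a)
        self.cache.clear()

dev = ToyDeviceCache(capacity_blocks=8)
for _ in range(10_000):          # a FAT block rewritten over and over
    dev.write(0, b"fat")
dev.flush()
print(dev.bus_writes, dev.flash_writes)  # 10000 bus writes, 1 flash write
```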

 

Of course, the above is the best case scenario. Maybe some device manufacturers try to make do with as little cache as possible, or their caching algorithm has a very small flush timeout. Still, it's in the interest of flash drive manufacturers to reduce the number of writes that are actually issued to the flash, since flash isn't everlasting, and doing so also boosts the apparent device speed. And since it reduces wear and tear, proper cache usage will reduce the number of RMAs they might have to handle while a drive is still under warranty.

 

Therefore, unless someone comes along with intimate knowledge of flash drive controllers and caching, especially with regard to how and when actual flash writes are issued on the drive (which, unfortunately, is not something that can be inferred from I/O measurements or USB bus traces), I'm not going to be particularly concerned about small file operations. If you look around, you will see that caching is used all over the place in modern devices and OSes, so I have no reason to believe that more I/O involving small files translates into more wear and tear for USB flash drives, which is the only thing I am really concerned about.



#155 steve6375


    Platinum Member

  • Developer
  • 7107 posts
  • Location:UK
  • Interests:computers, programming (masm,vb6,C,vbs), photography,TV,films,guitars, www.easy2boot.com
  •  
    United Kingdom

Posted 23 March 2014 - 08:55 PM

sorry - didn't read Akeo's reply properly!



#156 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 23 March 2014 - 10:25 PM

Akeo, the captures I've made are the ones that actually are transferred over the USB port, bit by bit(!) directly to the device.

 

The devices have to guarantee the order of writes; effectively, they have to write one "bulk transfer" (the block behind every SCSI write command) before they can report it completed. The exception is

 

http://en.wikipedia....B_Attached_SCSI

 

Which requires:

 

UAS support in the USB driver stack.
Device hardware supporting UAS.
Device firmware supporting UAS.
UAS-compatible system controller.

 

Which is supported only in Windows 8 and with the newest devices. Effectively, if you have a USB 3 hard disk with a modern USB controller, and if the hard disk supports NCQ, you'll have reordering. Every HDD advertises the size of its cache.

 

Microsoft's whitepaper:

 

http://msdn.microsof...e/jj248714.aspx

 

Without such a protocol, only the OS cache can combine writes, and for that, write caching has to be enabled. It is not enabled on Windows for real removable devices like USB sticks, which use simpler microcontrollers, as Steve also points out.

 

Even with reordering possible (UAS), I doubt that the OS and the filesystem would even allow reordering of the updates of the file data and the file allocation table entries when caching is disabled. It's a big topic. See some notes from Linux development (the file allocation data and the directory entries are effectively the "metadata") to get an idea of the issues:

 

http://www.linux-mag...r-Ext3-and-Ext4

 

The good news is that, when UAS is used, it can also be captured. I admit I've only captured the transfers to USB 2 sticks. I welcome anybody who manages to capture queued transfers to any USB stick.


Edited by ianst, 23 March 2014 - 10:46 PM.


#157 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 23 March 2014 - 11:20 PM

Akeo, the captures I've made are the ones that actually are transferred over the USB port, bit by bit(!) directly to the device.


Yes, but I am talking of the cache that exists on the device itself. Writing to the flash is a slow process that you don't want to perform more than you have to, so you may see 10,000 reads and writes for the same block in rapid succession on the USB bus, which will take time, and yet have only a single write for that block issued by the device to the flash itself.

PS: UAS support was added to Rufus in version 1.4.2.



#158 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 24 March 2014 - 06:39 AM

Yes, but I am talking of the cache that exists on the device itself. Writing to the flash is a slow process that you don't want to perform more than you have to, so you may see 10,000 reads and writes for the same block in rapid succession on the USB bus, which will take time, and yet have only a single write for that block issued by the device to the flash itself.

PS: UAS support was added to Rufus in version 1.4.2.

 

I claim that the cache on the device itself can't be used to delay the writes needed for opening, writing and closing multiple files one by one, without both UAS support on all the needed levels and the file system implementation actively using UAS queues for that very scenario. I claim that neither the cache behavior you suggest exists (since that would effectively mean reordering writes and performing them in parallel in the scenarios we observe), nor does the existing FAT32 file system code explicitly cover reordering of metadata writes when the OS cache is turned off due to "fast removal". You'd need some transactional filesystem manipulation API for that ("either all the files are added or none"), which would be extremely hard to implement effectively on many levels.

 

Actually, we can also measure that to reach a conclusion. If the mechanisms you describe existed, the speed of writes alternating between two distant LBA addresses (one, then the other, then the first again, etc.) should be much closer to the speed of the USB transfer. If it is much slower, there's no such caching, and the device waits to actually finish one write before it reports success. We know that the hosts are orders of magnitude faster, so all the delay is certainly due to the device.
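That measurement can be sketched as follows. This is a hypothetical harness, not something from the thread: the path is a placeholder (on Windows you would open the raw device, e.g. `\\.\PhysicalDriveN`, which requires admin rights), and OS-level caching must be out of the way for the numbers to mean anything:

```python
import os
import time

def alternating_write_speed(path: str, far_offset: int = 1 << 30,
                            block: bytes = b"\0" * 4096,
                            rounds: int = 256) -> float:
    """Alternate 4 KB writes between offset 0 and a distant offset.

    If the device coalesces or reorders writes in its cache, throughput
    should approach the sequential figure; if it completes each write
    before acknowledging, it will be far slower."""
    fd = os.open(path, os.O_WRONLY)
    t0 = time.perf_counter()
    for _ in range(rounds):
        for off in (0, far_offset):
            os.lseek(fd, off, os.SEEK_SET)
            os.write(fd, block)
    os.close(fd)
    elapsed = time.perf_counter() - t0
    return rounds * 2 * len(block) / elapsed / 1e6  # MB/s
```

Run it against the raw device rather than a file on it, then compare against a purely sequential run of the same amount of data.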


Edited by ianst, 24 March 2014 - 07:32 AM.


#159 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 24 March 2014 - 08:21 AM

Watching Process Monitor during the ISO copying done by Rufus to get the exact timings: writing 1000 2K files took 18 seconds. We know that the effective amount of data transferred was around 11 MB. In the same ISO copy, the transfer of a single 5 MB file took around 1.3 seconds, so obviously the device can do around 4 MB/s when the writes are sequential. The non-sequential writes (the result of the updates of the FAT, the dir entry and the file data) resulted in 0.6 MB/s (that is the effective speed of writing all 11 MB, including the dir entry and FAT writes; counting only the net 2 MB of file data, the speed would be 0.11 MB/s). It's obvious that the caching you describe (reordering of non-sequential writes in order to combine multiple writes to the same LBA into one) doesn't take place. The speed drop is completely consistent with the observed five non-sequential transfers (2 FAT, 2 dir, one file) per 2K file: around 6 times slower, for the amount of data which reached the USB device, than the sequential write.
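Plugging the reported figures in confirms the arithmetic (all numbers are taken from the measurement described above):

```python
files = 1000
data_per_file = 2048
bus_per_file = 2048 + 2 * 4096 + 2 * 512   # data + dir entry + FAT writes
bus_total = files * bus_per_file           # ~11.26 MB actually on the bus

effective = bus_total / 18 / 1e6           # MB/s for the scattered writes
net = files * data_per_file / 18 / 1e6     # MB/s of useful file data only
sequential = 5 / 1.3                       # MB/s for the single big file

print(round(effective, 2), round(net, 2))        # ~0.63 and ~0.11 MB/s
print(round(sequential / effective, 1))          # ~6x slowdown
```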

 

You can observe all of this with both Process Monitor and UsbPCap; just take this ISO: http://www3.zippysha...63613/file.html  Among other files, it contains one big 5 MB file and exactly 1000 2 KB files.


Edited by ianst, 24 March 2014 - 09:10 AM.


#160 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 24 March 2014 - 06:31 PM

OK. I can't say I'm entirely convinced, when HDDs and optical drives are all about using fairly hefty internal caches, but I don't design USB Flash Drives for a living, and I can definitely see scenarios where not adding much cache RAM can help maximize short term profits. So I'll just take your findings at face value.

 

I guess I'm going to reopen #45 (which was created 2 years ago for this exact issue, and which was closed due to the amount of effort it would require). I don't really mind, as implementing virtual FAT32 and doing our own block processing to make Rufus as fast as possible sounds like a nice challenge. My only issue is that, since Rufus is not something I get paid for, and I can only develop it in my limited spare time, it will probably be months before I look into it.

 

In the meantime, if you're unhappy about Rufus preferring FAT32 over DD when both are available, I'd advise you to pick one of the many DD clones for Windows to write your ISO.



#161 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 24 March 2014 - 09:20 PM

Akeo, I really respected your judgments about Rufus up to now. I believe you have had very good taste in not overdoing it, especially in not adding every feature anybody mentions. I consider Rufus fast and convenient, and am always more inclined to vote for "invisible" features that improve user interaction. I'd rather vote for "automagical" recognition of what's in the file (since both DD and ISO modes "unpack" the file, the user should only have to select the file and the program should recognize whether it's a DD or an ISO format), and for any such behavior that asks fewer decisions of the user while doing "the right thing."

 

I have my doubts if it's worth speeding up such scenarios with fixed costs per file: as we see on old slow sticks, it's something like 15 secs per 1000 files, and I don't expect anybody will observe anything very much slower. I discussed the "inside of USB" with you because it was interesting to me to learn how the stuff works at the low level, and I hope it's interesting to you too.

 

I definitely didn't write all this to make you reopen #45, and I also believe you should keep it closed until somebody comes with some already finished solution, e.g. demonstrates that the imdisk code can mount a file on the hard disk with zero content but the same size as the target volume (interestingly, the OS can create such a file without physically writing anything to it!), a file that can be formatted using plain OS calls, with the files copied there using plain OS calls too (and the writes cached automatically, of course), after which you just unmount it and copy it to the stick. But for something like that, I guess you'd have to embed the whole imdisk.exe in your program, since it has another license. Or maybe Olof would be ready to dual-license it, I don't know.

But that's all only fun to contemplate, and is probably not really practical or of real benefit: every time hardware gets faster, the need for heavy optimizations shrinks. And even if the stick gets "somewhat more" writes, what can be "saved" that way is still much less than what a single big file spends on its own. So it all comes down to a fixed-factor-per-file speedup, and it's questionable whether that is worth doing and worth making your program more complicated for. Long term, every unnecessary feature eats more maintenance time than the time needed to develop it.

 

We should learn from the conversations here, including learning to recognize what doesn't fit a given goal. I discussed and experimented here to learn and improve my own knowledge; for example, I used UsbPCap for the first time now.

 

Once again, thanks sincerely for a really, really nice program and the good work.


Edited by ianst, 24 March 2014 - 09:34 PM.


#162 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 25 March 2014 - 01:40 AM

Akeo, I really respected your judgments about Rufus up to now.


Not sure how I am supposed to interpret that... ;)
 

I have my doubts if it's worth speeding up such scenarios like fixed costs per file


Well, if this adds to wear and tear (i.e. the UFD is dumb enough to rewrite the same blocks over and over on the flash in rapid succession, rather than trying to cache them for a while), then it is worth doing, especially if it can greatly speed things up. If users can also avoid having to wait 20 mins for 4 GB worth of data to be copied over on a relatively fast drive, they probably won't mind either...
 

I definitely didn't write all this to make you reopen #45


Don't worry, I reopened #45 because I actually want to. As I said, it looks like an interesting challenge, and if done well, it might help Rufus leave every other tool with its head spinning, as far as FAT32 content is concerned. NTFS, on the other hand, will likely prove quite a bit harder to speed up as it's more complex and, as far as I know, its internals are kept under wraps by Microsoft...
 

I also believe you should keep it closed until somebody comes with some already finished solution, like demonstrate that imdisk (...)


Yeah, I also thought about using something like ImDisk, as dealing with a virtual drive before copying its content in one go looks like the simplest way to minimize writes to the UFD. As far as I could see, the ImDisk license shouldn't prevent it from being used in GPL projects, and I actually had a quick look at its source very recently to find out if I could add support for it as a target in Rufus (the answer to that is: not without a lot of work). However, the one drawback is that, if you want to do it blindly, then you need a virtual disk that is as large as your target. When dealing with 32GB flash drives or bigger, this becomes a bit problematic...

On the other hand, if you don't do it blindly, then all you really need to cache in RAM is the File Allocation Table, which shouldn't take that much space, as well as a continuous set of blocks, which doesn't need to be that large, where you write the actual data content in a sequential manner before mapping it back to the USB. Since we're only extracting each file once, we can always have a good idea of when we're done with a specific set of files, so we can flush anything that doesn't belong to the FAT at regular intervals, in a DD-like manner. This means that the only thing we need to keep in RAM, and not write until we're all done, is the FAT data (as this is the part that gets rewritten a lot).

 

Thus, at least for a FAT32 filesystem, speeding things up should be relatively straightforward to implement. Of course, that doesn't mean it won't take time, but at the very least, it shouldn't require a lot of brainpower. And the only drawback I see is that, since the FAT would be written at the very end, the target would look empty on cancel, which is very minor. Overall, it does look like something worth looking into if you have the time, and that's why I'm planning to keep #45 open until I can give it a try.
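As a back-of-the-envelope model of that saving (the five-writes-per-file pattern is taken from the earlier captures; the FAT block count is an assumption for illustration):

```python
def writes_naive(n_files: int) -> int:
    """Device write commands per file as observed in the captures:
    2 FAT writes, 2 directory entry writes, 1 data write."""
    return n_files * 5

def writes_deferred(n_files: int, fat_blocks: int = 16) -> int:
    """Keep the FAT (and directory entries) in RAM while streaming the
    file data sequentially; flush the FAT blocks once at the very end."""
    return n_files * 1 + fat_blocks

print(writes_naive(1000), writes_deferred(1000))  # 5000 vs 1016 commands
```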
 

Oh, and regardless of the above, I am still planning to add a way to let users select between DD and file extraction when both are available, though, as previously mentioned, most likely as a cheat mode. Hopefully, this will be in the next version...



#163 ianst


    Newbie

  • Members
  • 20 posts
  •  
    Austria

Posted 25 March 2014 - 08:29 AM

Akeo, on 25 Mar 2014 - 02:40 AM, said:
When dealing with 32GB flash drives or bigger, this becomes a bit problematic...

Creating a 32GB file filled with zeros can be done extremely fast on NTFS; the only problem is whether the user has enough space. The FS allows you to reserve the blocks without physically filling them with zeros, and I even think ImDisk has something like sparse file support. But you're right, an even less resource-hungry solution is doing it yourself, and you are right that it is not "too hard." The trickiest part is filling the directory entries (code page handling, the algorithms for short names, and spreading the long names over multiple fixed fields). See here:

http://blogs.msdn.co...6/10200583.aspx
 

The good thing is you don't have to care about the deletion, only about adding the entries.
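A much-simplified sketch of the numeric-tail form of short-name generation (real implementations also handle code pages, and Windows switches to a hash-based form, like the FIAA67~1 seen in the earlier capture, once there are many collisions):

```python
import re

def short_name(long_name: str, existing: set) -> str:
    """Very simplified 8.3 short-name generation: uppercase, strip
    invalid characters, truncate the base to 6 chars and append ~1,
    ~2, ... on collision. Real FAT drivers handle much more."""
    base, _, ext = long_name.rpartition(".")
    if not base:                      # no dot in the name at all
        base, ext = long_name, ""
    clean = lambda s: re.sub(r"[^A-Z0-9_]", "", s.upper())
    base, ext = clean(base), clean(ext)[:3]
    for i in range(1, 10):
        cand = f"{base[:6]}~{i}" + (f".{ext}" if ext else "")
        if cand not in existing:
            existing.add(cand)
            return cand
    raise NotImplementedError("hash-based names are not sketched here")

names = set()
print(short_name("file00344.txt", names))  # FILE00~1.TXT
print(short_name("file00345.txt", names))  # FILE00~2.TXT
```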

 

The second tricky topic is deciding which variants of FAT to support and which construction parameters. The fewer combinations supported, the easier.

 

Akeo, on 25 Mar 2014 - 02:40 AM, said:
On the other hand, if you don't do it blindly, then all you really need to cache in RAM is the File Allocation Table (...)
Thus, at least for a FAT32 filesystem, speeding things up should be relatively straightforward to implement.

Actually, it's the File Allocation Table and the directory entries, if you do it file by file when extracting. The easiest solution is to get the full list of all the file names, dates, attributes and sizes, construct all the directory files based on it, then the File Allocation Table, which includes all the files and all the directories, then write sequentially the content of all the files and the content of all the leaf directories, and only at the end move to the beginning of the disk to write the content of the FATs and of the root directory. That way the resulting disk is "empty" if the user unplugs it before the last write to the FATs and the root directory, which is good, since the goal is to minimize the chance of producing corrupted disks.

The directories are organized like ordinary files and take space there, and if they aren't constructed in advance, then the same sectors are rewritten whenever a new file is added to the directory -- just like we saw in the captures. Take a look at my post with the example of the updated content (FIAA67~1 as the short name and file00344 as the long one, etc.) and the post I've linked.

 

At least we know what the result would be if you implemented all this: the slow stick can be filled maybe one minute faster if it is to hold 4000 files, and you save around 50 MB of rewrites in the same scenario. The rewrite saving is actually not significant if the wear leveling algorithms of the controllers are good, but can be a significant improvement if they aren't. I don't know how often sticks die from rewrites; I hope it's not common anymore anyway.


Edited by ianst, 25 March 2014 - 09:25 AM.


#164 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 22 April 2014 - 11:51 AM

And Rufus 1.4.7 has now been released.

 

The Changelog for this version is as follows:

  • Add VHD support as a target, courtesy of Scott (NEW)
  • Add ReFS support (only for Windows 8.1 or later and only for fixed drives) (NEW)
  • Add a cheat mode to force the use of DD image writing for dual ISOs (NEW)
  • Add Japanese translation, courtesy of Chantella Jackson
  • Add Slovak translation, courtesy of martinco78
  • Add Swedish translation, courtesy of Sopor
  • Improve the display of filesizes when copying content
  • Fix FAT32 cluster transitions
  • Fix unpartitioned drives not always being listed
  • Fix bad blocks report
  • Other minor fixes and improvements


#165 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 27 May 2014 - 08:36 PM

Just gonna mention that a BETA version of Rufus 1.4.8 is now available.

 

It brings the following:

  • Add KolibriOS ISO support (NEW)
  • Add Arabic translation, courtesy of عمر الصمد
  • Add Croatian translation, courtesy of Dario Komar
  • Add Danish translation, courtesy of Jens Hansen
  • Add Latvian translation, courtesy of Aldis Tutins
  • Allow the use of VHDs as DD image source (fixed disk/uncompressed only)
  • Report the detected USB speed in the log
  • Fix a long standing issue when launching Rufus from Far Manager
  • Fix support for pure UEFI bootable disk images
  • Various other fixes and improvements


#166 TheHive


    Platinum Member

  • .script developer
  • 4171 posts

Posted 29 May 2014 - 07:34 AM

KolibriOS

 

 

Very small ISO.



#167 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 03 June 2014 - 11:10 PM

Rufus 1.4.8 has now been released.

 

Here is the final Changelog:

  • Add KolibriOS ISO support (NEW)
  • Add Arabic translation, courtesy of عمر الصمد
  • Add Croatian translation, courtesy of Dario Komar
  • Add Danish translation, courtesy of Jens Hansen
  • Add Latvian translation, courtesy of Aldis Tutins
  • Add Brazilian Portuguese translation, courtesy of Chateaubriand Vieira Moura
  • Allow the use of VHDs as DD image source (fixed disk/uncompressed only)
  • Report the detected USB speed in the log
  • Fix detection for some Buffalo, Lacie, Samsung, Toshiba and Verbatim drives
  • Fix a long standing issue when launching Rufus using Far Manager
  • Fix support for pure EFI bootable disk images
  • Various other fixes and improvements

Enjoy!



#168 TheHive


    Platinum Member

  • .script developer
  • 4171 posts

Posted 19 July 2014 - 06:52 PM

Someone using Rufus to install android on pc/laptop.



#169 gbrao


    Frequent Member

  • Advanced user
  • 373 posts
  •  
    India

Posted 17 August 2014 - 10:29 AM

Last updated 2014.08.15:

Rufus 1.4.10

http://rufus.akeo.ie/

Version 1.4.10 (2014.08.15)

  • Fix a crash when scanning disk images with no USB drive plugged
  • Fix default detection of some OCZ flash drives
  • Improve Syslinux 6.x support (for Tails 1.x and other ISOs)
  • Improve disk image handling (refresh partitions, remount drive, etc.)
  • Other fixes and improvements

#170 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 30 October 2014 - 01:40 AM

Happy to announce the BETA for Rufus 1.4.11, available at the usual location:

  • Add Ukrainian translation, courtesy of VKS
  • Fix formatting of drives with a large sector size (2K, 4K)
  • Fix UEFI boot for tails and other Syslinux/EFI based ISOs
  • Fix listing of devices when all 26 drive letters are in use
  • Add a minimize button and other minor UI changes

If you find any issue, please let me know.

 

Oh, and in case anyone's interested, I'll just point out that the USB <-> SATA controller board found in the newer Seagate Expansion Drives enclosures, such as the 4TB version (and which also happens to be UASP), forces the sector size of any HDD plugged into it to 4K... hence the large sector size fix above.

However, even if the new version of Rufus now happily formats a 4K-sector drive, you may find that booting from it is another story, even using GPT/UEFI...

 

 



#171 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 04 November 2014 - 08:49 PM

And Rufus 1.4.11 has now been released.

 

The only official difference between the release and the beta is the inclusion of a Czech translation, courtesy of Richard Kahl.

 

Enjoy!



#172 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 08 November 2014 - 01:21 PM

Looks like I'll have to do a bugfix release in the coming days, as some people reported a couple of issues with recent Red Hat, Debian and CentOS images, which I now have a fix for.

 

So here's a link to 1.4.12 BETA, if you feel like testing it.

The current changelog is as follows:

  •     Fix support for Red Hat 7 and CentOS 7
  •     Fix support for Debian 7.x
  •     Fix default listing of Mushkin Ventura Ultra USB 3.0 drives
  •     Fix Czech translation, courtesy of Jakub Moc
  •     Update Windows version listing for Windows 10


#173 TheHive


    Platinum Member

  • .script developer
  • 4171 posts

Posted 09 November 2014 - 08:44 AM

 

Looks like I'll have to do a bugfix release in the coming days, as some people reported a couple issues with recent Red Hat, Debian and CentOS images, which I now have a fix for.

 

So here's a link to 1.4.12 BETA, if you feel like testing it.

The current changelog is as follows:

  •     Fix support for Red Hat 7 and CentOS 7
  •     Fix support for Debian 7.x
  •     Fix default listing of Mushkin Ventura Ultra USB 3.0 drives
  •     Fix Czech translation, courtesy of Jakub Moc
  •     Update Windows version listing for Windows 10

 

Used it to burn an Android-x86-4.4 ISO to a USB stick. It boots up and installed on a laptop, dual boot with Win 8.1.


  • Akeo likes this

#174 Akeo


    Frequent Member

  • Developer
  • 331 posts
  •  
    Ireland

Posted 09 November 2014 - 08:49 PM

Thanks, as always, for your testing, TheHive.

 

Rufus 1.4.12 has now been released.



#175 Nuvo


    Member

  • Members
  • 34 posts
  •  
    Scotland

Posted 26 November 2014 - 01:50 PM

Hi,
Could you please cast your eye over an issue I am currently experiencing with Rufus v1.4.12?

Details can be found here:
http://reboot.pro/to...nting-problems/



