ReFS – the file system of the future?

Storing anything usually implies some kind of order. In human life that order is optional, but in the world of computers storing data without it is practically impossible. This order is embodied in the file system, a concept familiar to most users of electronic devices and operating systems.

A file system can be compared to a kind of markup that determines how, where, and in what form each byte is written to the medium. The first file systems, which appeared at the dawn of the electronic era, were very imperfect: Minix, for example, had many limitations and was used in the operating system of the same name, which later became the prototype for the Linux kernel.

But time passed and new, more advanced and stable file systems appeared. Today the most popular of them, at least among Windows users, is NTFS. It replaced FAT32, which is now used mostly on small flash drives and has many shortcomings, the most significant being the inability to store files larger than 4 GB. NTFS is not free of shortcomings either: according to many experts it lacks efficiency, performance, and stability, so the time has come to think about an even more advanced file system that can meet the growing demands of first server and then client systems.

So in 2012 Microsoft developers introduced the Resilient File System, or ReFS for short, a fault-tolerant file system positioned as an alternative to NTFS and, in the future, possibly its replacement. In essence, ReFS continues the development of NTFS: everything unnecessary that never caught on was removed, and new features were added instead.

New in Resilient File System:

  • An architecture built around the Storage Spaces feature
  • High fault tolerance: file system errors that led to data loss in NTFS are minimized in ReFS
  • Isolation of damaged areas: if parts of the file system are damaged, the rest of the data remains accessible from a running Windows system
  • Proactive error correction: volumes are automatically scanned for damage and preventative data-recovery measures are applied
  • Automatic recovery of subfolders and associated files when metadata is damaged
  • Redundant writes to improve fault tolerance
  • The maximum volume size in ReFS can reach 402 EB versus 18.4 EB in NTFS
  • A file of up to 18.3 EB can be written to a volume formatted in ReFS
  • Up to 18 trillion files per folder versus 4.3 billion in NTFS
  • File name and path length of up to 32,767 characters versus 255 in NTFS

What will be removed:

  • Data compression support
  • Data encryption using EFS technology
  • Extended file attributes
  • Hard links
  • Disk quotas
  • Support for short names and object IDs
  • Possibility of changing the cluster size (remains in question)

What will be inherited from NTFS:

  • Access Control Lists (ACLs)
  • Creating Volume Snapshots
  • Mount points
  • Reparse points
  • BitLocker encryption
  • Creating and using symbolic links
  • Recording of all changes occurring in the file system (the USN journal)

At the moment ReFS is in early testing, but computer enthusiasts can try out its benefits right now on a Windows 8.1 or 10 client system. To do this, you will need to perform the registry tweak shown below:
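
As a minimal sketch only, here is what the tweak amounts to, based on the MiniNT / AllowRefsFormatOverNonmirrorVolume steps spelled out in the registry section near the end of this article. It requires administrator rights, and editing the registry is always at your own risk.

```python
# Sketch only: enable ReFS formatting on a Windows 8.1/10 client by applying the
# registry tweak described step by step near the end of this article.
# Requires administrator rights; edit the registry at your own risk.
import winreg

HKLM = winreg.HKEY_LOCAL_MACHINE
CONTROL = r"SYSTEM\CurrentControlSet\Control"

# 1. RefsDisableLastAccessUpdate = 1 under ...\Control\FileSystem
with winreg.CreateKey(HKLM, CONTROL + r"\FileSystem") as key:
    winreg.SetValueEx(key, "RefsDisableLastAccessUpdate", 0, winreg.REG_DWORD, 1)

# 2. AllowRefsFormatOverNonmirrorVolume = 1 under a new ...\Control\MiniNT key
with winreg.CreateKey(HKLM, CONTROL + r"\MiniNT") as key:
    winreg.SetValueEx(key, "AllowRefsFormatOverNonmirrorVolume", 0, winreg.REG_DWORD, 1)

print("Tweak applied - reboot, and ReFS should be selectable when formatting.")
```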


However, using ReFS on an ongoing basis is not recommended. Firstly, the file system is still unfinished; secondly, there are no built-in or third-party tools for converting NTFS to ReFS and back; thirdly, if you accidentally lose or delete files on a partition formatted in ReFS, there will be nothing to restore them with, since data recovery programs that work with this file system do not exist yet.

Should we expect a wide rollout of ReFS in the near future? Most likely not. If it does find practical use, it will be on server systems first, and that will not happen soon either; users of client Windows will then have to wait at least another five years. It is enough to recall the introduction of NTFS on client systems, which took Microsoft seven years. And the most important thing is that there is simply no pressing need for ReFS yet. When zettabyte-sized disks appear in desktop computers, the finest hour of ReFS may come; for now we just have to be patient and wait.

Have a great day!

Why may a smartphone not launch programs from a memory card? How is ext4 fundamentally different from ext3? Why will a flash drive last longer if you format it in NTFS rather than FAT? What is the main problem with F2FS? The answers lie in the structural features of file systems. We'll talk about them.

Introduction

File systems define how data is stored. They determine what limitations the user will encounter, how fast read and write operations will be, and how long the drive will operate without failures. This is especially true for budget SSDs and their younger brothers - flash drives. Knowing these features, you can get the most out of any system and optimize its use for specific tasks.

You have to choose the type and parameters of a file system every time you need to do something non-trivial. Say you want to speed up the most common file operations. At the file system level this can be achieved in different ways: indexing provides fast search, and pre-reserving free blocks makes it easier to rewrite frequently changing files. Optimizing data in RAM before it is written reduces the number of required I/O operations.

Such properties of modern file systems as lazy writing, deduplication and other advanced algorithms help to increase the period of trouble-free operation. They are especially relevant for cheap SSDs with TLC memory chips, flash drives and memory cards.

There are separate optimizations for different levels of disk arrays: for example, the file system can support simplified volume mirroring, instant snapshotting, or dynamic scaling without taking the volume offline.

Black box

Users generally work with the file system that is offered by default by the operating system. They rarely create new disk partitions and even less often think about their settings - they simply use the recommended parameters or even buy pre-formatted media.

For Windows fans everything is simple: NTFS on all disk partitions and FAT32 (or the same NTFS) on flash drives. If there is a NAS that uses some other file system, for most people it remains a black box: they simply connect to it over the network and download files.

On mobile gadgets with Android, ext4 is most often found in internal memory and FAT32 on microSD cards. Apple users do not care at all what file system their devices use - HFS+, HFSX, APFS, WTFS... - for them there are only beautiful folder and file icons drawn by the best designers. Linux users have the richest choice, but support for non-native file systems can be added to both Windows and macOS - more on that later.

Common roots

Over a hundred different file systems have been created, but little more than a dozen can be considered current. Although each was developed for its own applications, many ended up related at the conceptual level. They are similar because they use the same type of structure for representing (meta)data: B-trees.

Like any hierarchical system, a B-tree begins with a root record and then branches down to leaf elements - individual records of files and their attributes, or “leaves.” The main point of creating such a logical structure was to speed up the search for file system objects on large dynamic arrays - like hard drives with a capacity of several terabytes or even more impressive RAID arrays.

B-trees require far fewer disk accesses than other types of balanced trees to perform the same operations. This is because the leaf objects of a B-tree all sit at the same height, and the cost of every operation is proportional to the height of the tree.

Like other balanced trees, B-trees have equal path lengths from the root to any leaf. Instead of growing taller, they branch more and grow wider: every branch point in a B-tree stores many references to child objects, so they can be found in fewer accesses. A large number of pointers reduces the number of the most time-consuming disk operations - head positioning when reading arbitrary blocks.
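
A rough back-of-the-envelope sketch of why this matters; the fan-out and object-count numbers below are arbitrary assumptions, not figures from any specific file system:

```python
import math

def tree_height(n_objects: int, fanout: int) -> int:
    """Levels that must be visited to reach a leaf in a balanced tree."""
    return max(1, math.ceil(math.log(n_objects, fanout)))

n = 100_000_000                 # hypothetical number of files on a large volume
binary = tree_height(n, 2)      # binary search tree: fan-out of 2
btree = tree_height(n, 512)     # B-tree node holding ~512 child pointers

print(f"binary tree: ~{binary} node reads per lookup")   # ~27
print(f"B-tree:      ~{btree} node reads per lookup")    # ~3
```

With each node read costing a disk access, the wide tree wins by an order of magnitude.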

The concept of B-trees was formulated back in the seventies and has since undergone various improvements. In one form or another it is implemented in NTFS, BFS, XFS, JFS, ReiserFS and many DBMSs. All of them are relatives in terms of the basic principles of data organization. The differences concern details, often quite important. Related file systems also have a common disadvantage: they were all created to work specifically with disks even before the advent of SSDs.

Flash memory as the engine of progress

Solid-state drives are gradually replacing disk drives, but for now they are forced to use file systems inherited from disks and alien to them. SSDs are built on arrays of flash memory, whose operating principles differ from those of disk devices. In particular, flash memory must be erased before it is written, and NAND chips cannot do that at the level of individual cells - only for large blocks as a whole.

This limitation stems from the fact that in NAND memory the cells are combined into blocks, each of which has only one common connection to the control bus. We will not go into the details of page organization or describe the full hierarchy; what matters is the very principle of group operations on cells and the fact that flash memory blocks are usually larger than the blocks addressed by any file system. Therefore all addresses and commands for drives with NAND flash must be translated through an abstraction layer, the FTL (Flash Translation Layer).

Compatibility with the logic of disk devices and support for commands of their native interfaces is provided by flash memory controllers. Typically, FTL is implemented in their firmware, but can (partially) be implemented on the host - for example, Plextor writes drivers for its SSDs that accelerate writing.

It is impossible to do without the FTL: even writing a single bit to a specific cell triggers a whole chain of operations. The controller finds the block containing the target cell; the block is read in full and written to cache or free space, then erased entirely, after which it is written back with the necessary changes.
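
As an illustration only, here is a toy model of that read-modify-write cycle, with made-up page and block sizes, showing how a one-byte change turns into a whole-block rewrite:

```python
# Toy model of a NAND block rewrite (sizes are illustrative, not real chip specs).
PAGE_SIZE = 4096          # bytes per page
PAGES_PER_BLOCK = 64      # pages per erase block

block = [bytearray(PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]

def write_one_byte(block, page_no, offset, value):
    """Change one byte the way a simple FTL would: read, erase, rewrite the block."""
    snapshot = [bytearray(p) for p in block]       # 1. read the whole block
    snapshot[page_no][offset] = value              # 2. modify the copy in cache
    for p in block:                                # 3. erase the block
        p[:] = b"\xff" * PAGE_SIZE
    for i, p in enumerate(snapshot):               # 4. program it back
        block[i][:] = p
    return PAGES_PER_BLOCK * PAGE_SIZE             # bytes physically rewritten

rewritten = write_one_byte(block, page_no=3, offset=10, value=0x42)
print(f"1 logical byte changed -> {rewritten} bytes physically rewritten")
```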

This approach is reminiscent of army life: to give an order to one soldier, the sergeant assembles the whole unit, calls the poor fellow out of formation, and tells the rest to fall out. In the now rare NOR memory the organization was more like special forces: every cell was controlled independently (each transistor had its own contact).

The controllers' job keeps getting harder, because with each generation of flash memory the manufacturing process shrinks to increase density and lower the cost of storage. Along with the process node, the expected service life of the chips also shrinks.

Modules with single-level SLC cells had a declared resource of 100 thousand rewrite cycles and even more. Many of them still work in old flash drives and CF cards. For enterprise-class MLC (eMLC), the resource was declared in the range of 10 to 20 thousand, while for regular consumer-grade MLC it is estimated at 3-5 thousand. Memory of this type is actively being squeezed by even cheaper TLC, whose resource barely reaches a thousand cycles. Keeping the lifespan of flash memory at an acceptable level requires software tricks, and new file systems are becoming one of them.

Initially, manufacturers assumed the file system did not matter. The controller itself is supposed to service a short-lived array of memory cells of any type, distributing the load between them optimally. To the file system driver it simulates an ordinary disk, while performing low-level optimizations on every access. In practice, however, the quality of this optimization varies from device to device, ranging from magical to purely nominal.

In enterprise SSDs, the built-in controller is a small computer. It has a huge memory buffer (half a gigabyte or more) and supports many data efficiency techniques to avoid unnecessary rewrite cycles. The chip organizes all blocks in the cache, performs lazy writes, performs on-the-fly deduplication, reserves some blocks and clears others in the background. All this magic happens completely unnoticed by the OS, programs and the user. With an SSD like this, it really doesn't matter which file system is used. Internal optimizations have a much greater impact on performance and resource than external ones.

Budget SSDs (and even more so flash drives) are equipped with much less smart controllers. The cache in them is limited or absent, and advanced server technologies are not used at all. The controllers in memory cards are so primitive that it is often claimed that they do not exist at all. Therefore, for cheap devices with flash memory, external methods of load balancing remain relevant - primarily using specialized file systems.

From JFFS to F2FS

One of the first attempts to write a file system that takes the organization of flash memory into account was JFFS, the Journaling Flash File System. This development by the Swedish company Axis Communications was originally aimed at improving memory efficiency in the network devices Axis produced in the nineties. The first version of JFFS supported only NOR memory, but the second version added NAND support.

Currently JFFS2 has limited use. Basically it is still used in Linux distributions for embedded systems. It can be found in routers, IP cameras, NAS and other regulars of the Internet of Things. In general, wherever a small amount of reliable memory is required.

A further attempt to develop JFFS2 was LogFS, which stored inodes in a separate file. The idea came from Jorn Engel, an employee of IBM's German division, and Robert Mertens, a lecturer at the University of Osnabrück. The LogFS source code is available on GitHub; judging by the fact that the last change was made four years ago, LogFS never gained popularity.

But these attempts spurred the emergence of another specialized file system - F2FS. It was developed by Samsung Corporation, which accounts for a considerable part of the flash memory produced in the world. Samsung makes NAND Flash chips for its own devices and for other companies, and also develops SSDs with fundamentally new interfaces instead of legacy disk ones. Creating a specialized file system optimized for flash memory was a long overdue necessity from Samsung's point of view.

Four years ago, in 2012, Samsung created F2FS (Flash Friendly File System). The idea was good, but the implementation turned out to be crude. The key task in creating F2FS was simple: reduce the number of cell rewrite operations and spread the load over the cells as evenly as possible. That requires operating on several cells of the same block at once rather than forcing them one at a time - which means not instantly rewriting existing blocks at the OS's first request, but caching commands and data, appending new data to free space, and erasing cells lazily.

Today F2FS support is officially included in Linux (and therefore in Android), but in practice it does not yet show any special advantages. The main feature of this file system (lazy rewriting) led to premature conclusions about its effectiveness. The old caching trick even fooled early versions of benchmarks, where F2FS demonstrated an imaginary advantage not of a few percent (as expected), and not even of several times, but of orders of magnitude: the F2FS driver simply reported the completion of an operation the controller was only planning to perform. Still, even if the real performance gain of F2FS is small, cell wear will definitely be lower than with the same ext4: the optimizations a cheap controller cannot do are performed at the level of the file system itself.
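
A minimal sketch of how write caching fools a naive benchmark (generic Python, not tied to F2FS or any real driver): timing only the write measures when the data reached the cache, not when it reached the medium, so the "fast" figure is imaginary until fsync() forces the flush.

```python
import os, time

def timed_write(path: str, data: bytes, flush: bool) -> float:
    """Return seconds spent 'writing', with or without forcing data to the medium."""
    start = time.perf_counter()
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        os.write(fd, data)
        if flush:
            os.fsync(fd)          # wait until the data actually hits the device
    finally:
        os.close(fd)
    return time.perf_counter() - start

payload = os.urandom(8 * 1024 * 1024)   # 8 MB of test data
print("cache only:", timed_write("bench.tmp", payload, flush=False))
print("with fsync:", timed_write("bench.tmp", payload, flush=True))
os.remove("bench.tmp")
```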

Extents and bitmaps

For now, F2FS is seen as something exotic for geeks; even in Samsung's own smartphones ext4 is still used. Many consider ext4 a further development of ext3, but that is not quite true: it is more of a revolution than just breaking the 2 TB-per-file barrier and raising other quantitative limits.

When computers were large and files were small, addressing was not a problem. Each file was allocated a certain number of blocks, the addresses of which were entered into the correspondence table. This is how the ext3 file system worked, which remains in service to this day. But in ext4 a fundamentally different addressing method appeared - extents.

Extents can be thought of as extensions of inodes: discrete sets of blocks that are addressed as whole contiguous runs. One extent can hold an entire medium-sized file, and even for large files a dozen or two extents are enough. This is far more efficient than addressing hundreds of thousands of small four-kilobyte blocks.
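
A quick back-of-the-envelope comparison; the file and extent sizes here are arbitrary assumptions used only to show the scale of the difference:

```python
FILE_SIZE = 8 * 1024**3        # hypothetical 8 GB file
BLOCK = 4 * 1024               # classic 4 KB block
EXTENT = 128 * 1024**2         # assume the file lands in 128 MB contiguous runs

per_block_entries = FILE_SIZE // BLOCK    # block-map entries, ext3-style
extent_entries = FILE_SIZE // EXTENT      # extent records, ext4-style

print(f"block map : {per_block_entries:,} entries")   # 2,097,152
print(f"extents   : {extent_entries:,} entries")      # 64
```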

The write mechanism itself has also changed in ext4. Blocks are now allocated in a single request, and not in advance but immediately before the data is written to disk. Delayed multi-block allocation removes the unnecessary work ext3 was guilty of: ext3 allocated blocks for a new file right away, even if the file fit entirely in the cache and was about to be deleted as a temporary one.


FAT on a restricted diet

In addition to balanced trees and their modifications, there are other popular logical structures. There are file systems with a fundamentally different type of organization - for example, linear. You probably use at least one of them often.

Mystery

Guess the riddle: at twelve she began to gain weight, by sixteen she was plump and silly, and at thirty-two she had grown fat yet remained a simpleton. Who is she?

That's right, this is a story about the FAT file system. Compatibility requirements gave it bad heredity: on floppy disks it was 12-bit, on hard drives it started out 16-bit, and it has reached the present day as 32-bit. With each version the number of addressable blocks grew, but in essence nothing changed.

The still popular FAT32 file system appeared twenty years ago. Today it is still primitive and does not support access control lists, disk quotas, background compression, or other modern data optimization technologies.

Why is FAT32 still needed today? Solely for compatibility. Manufacturers rightly assume that a FAT32 partition can be read by any OS, so they put it on external hard drives, USB flash drives, and memory cards.

How to free up your smartphone's flash memory

microSD(HC) cards used in smartphones are formatted in FAT32 by default. This is the main obstacle to installing applications on them and transferring data from internal memory. To overcome it, you need to create a partition on the card with ext3 or ext4. All file attributes (including owner and access rights) can be transferred to it, so any application can work as if it were launched from internal memory.

Windows cannot create more than one partition on a flash drive, but you can do it from Linux (even in a virtual machine) or with an advanced partitioning utility - for example, MiniTool Partition Wizard Free. Having found an additional primary partition with ext3/ext4 on the card, the Link2SD application and similar tools will offer many more options than with a single FAT32 partition.


Another argument often cited in favor of FAT32 is its lack of journaling, which supposedly means faster writes and less wear on NAND flash cells. In practice, using FAT32 leads to the opposite result and creates many other problems.

Flash drives and memory cards die quickly because any change in FAT32 overwrites the same sectors, where the two chains of file allocation tables live. Save a whole web page and those sectors get overwritten a hundred times - once for every small GIF added to the flash drive. Launched portable software? It creates temporary files and changes them constantly while it runs. That is why it is much better to use NTFS on flash drives, with its failure-resistant $MFT table: small files can be stored right in the main file table, while its extensions and copies are written to different areas of the flash memory. On top of that, NTFS indexing makes searching faster.

INFO

For FAT32 and NTFS, theoretical restrictions on the level of nesting are not specified, but in practice they are the same: only 7707 subdirectories can be created in a first-level directory. Those who like to play matryoshka dolls will appreciate it.

Another problem that most users face is that it is impossible to write a file larger than 4 GB to a FAT32 partition. The reason is that in FAT32 the file size is described by 32 bits in the file allocation table, and 2^32 (minus one, to be precise) is exactly four gigs. It turns out that neither a movie in normal quality nor a DVD image can be written to a freshly purchased flash drive.
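
The arithmetic behind that limit, in plain Python, assuming nothing beyond the 32-bit size field mentioned above:

```python
max_file = 2**32 - 1                 # largest value a 32-bit size field can hold
print(max_file)                      # 4294967295 bytes
print(round(max_file / 2**30, 2))    # ~4.0 GiB: a single-layer DVD image (~4.4 GiB) won't fit
```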

Copying large files at least makes the problem obvious: when you try, the error shows up immediately. In other situations FAT32 acts like a time bomb. For example, you copy portable software to a flash drive and use it without problems at first. After a while one of the programs (say, an accounting package or an email client) grows a bloated database and... simply stops updating it. The file cannot be overwritten because it has hit the 4 GB limit.

A less obvious problem is that in FAT32 file and directory modification times are accurate only to within two seconds. That is not enough for many cryptographic applications that rely on timestamps, and the low precision of the date attribute is another reason FAT32 is not considered a valid file system from a security standpoint. However, its weaknesses can be turned to your advantage: if you copy files from an NTFS partition to a FAT32 volume, they are stripped of all metadata as well as inherited and explicitly set permissions - FAT simply does not support them.

exFAT

Unlike FAT12/16/32, exFAT was developed specifically for USB Flash and large (≥ 32 GB) memory cards. Extended FAT eliminates the above-mentioned disadvantage of FAT32 - overwriting the same sectors with any change. As a 64-bit system, it has no practically significant limits on the size of a single file. Theoretically, it can be 2^64 bytes (16 EB) in length, and cards of this size will not appear soon.

Another fundamental difference between exFAT is its support for access control lists (ACLs). This is no longer the same simpleton from the nineties, but the closed nature of the format hinders the implementation of exFAT. ExFAT support is fully and legally implemented only in Windows (starting from XP SP2) and OS X (starting from 10.6.5). On Linux and *BSD it is supported either with restrictions or not quite legally. Microsoft requires licensing for the use of exFAT, and there is a lot of legal controversy in this area.

Btrfs

Another prominent representative of file systems based on B-trees is called Btrfs. This FS appeared in 2007 and was initially created in Oracle with an eye to working with SSDs and RAIDs. For example, it can be dynamically scaled: creating new inodes directly on the running system or dividing a volume into subvolumes without allocating free space to them.

The copy-on-write mechanism implemented in Btrfs, together with full integration with the device-mapper kernel module, makes it possible to take almost instantaneous snapshots via virtual block devices. Compression (zlib or lzo) and deduplication speed up basic operations and also extend the life of flash memory. This is especially noticeable when working with databases (compression of 2-4x is achieved) and small files, which are written in orderly large blocks and can be stored directly in the "leaves."

Btrfs also supports full logging mode (data and metadata), volume checking without unmounting, and many other modern features. The Btrfs code is published under the GPL license. This file system has been supported as stable in Linux since kernel version 4.3.1.

Logbooks

Almost all more or less modern file systems (ext3/ext4, NTFS, HFSX, Btrfs and others) belong to the general group of journaling file systems: they record the changes they make in a separate log (journal) and check against it to recover after a failure during disk operations. However, these file systems differ in journaling granularity and fault tolerance.

Ext3 supports three journaling modes: writeback, ordered, and full journaling. In the first mode only metadata changes are logged, asynchronously with respect to the data itself. In the second the same metadata logging is performed, but strictly before any data is changed. The third mode is full journaling of both metadata and file contents.

Only the last option guarantees data integrity. The other two merely speed up error detection during a check and guarantee that the integrity of the file system itself - but not of the file contents - can be restored.

Journaling in NTFS is similar to the second logging mode in ext3. Only changes in metadata are recorded in the log, and the data itself may be lost in the event of a failure. This logging method in NTFS was not intended as a way to achieve maximum reliability, but only as a compromise between performance and fault tolerance. This is why people who are used to working with fully journaled systems consider NTFS pseudo-journaling.

The approach implemented in NTFS is in some ways even better than the ext3 default: NTFS additionally creates periodic checkpoints to make sure all previously deferred disk operations have completed. These checkpoints have nothing to do with the restore points in \System Volume Information\ - they are just service entries in the log.

Practice shows that such partial NTFS journaling is in most cases sufficient for trouble-free operation. After all, even with a sudden power outage, disk devices do not lose power instantly. The power supply and numerous capacitors in the drives themselves provide just the minimum amount of energy that is enough to complete the current write operation. With modern SSDs, with their speed and efficiency, the same amount of energy is usually enough to perform pending operations. An attempt to switch to full logging would reduce the speed of most operations significantly.

Connecting third-party file systems in Windows

The use of file systems is limited by their support at the OS level. For example, Windows does not understand ext2/3/4 and HFS+, but sometimes it is necessary to use them. This can be done by adding the appropriate driver.

WARNING

Most drivers and plugins for supporting third-party file systems have their limitations and do not always work stably. They may conflict with other drivers, antiviruses, and virtualization programs.

An open driver for reading and writing ext2/3 partitions with partial support for ext4. The latest version supports extents and partitions up to 16 TB. LVM, access control lists, and extended attributes are not supported.


There is also a free plugin for Total Commander that supports reading ext2/3/4 partitions.


coLinux is an open and free port of the Linux kernel. Together with a 32-bit driver, it lets you run Linux inside Windows (from 2000 through 7) without virtualization. Only 32-bit versions are supported; development of a 64-bit modification was cancelled. Among other things, coLinux makes it possible to access ext2/3/4 partitions from Windows. Support for the project was suspended in 2014.

Windows 10 may already have built-in support for specific Linux file systems, it’s just hidden. These thoughts are suggested by the kernel-level driver Lxcore.sys and the LxssManager service, which is loaded as a library by the Svchost.exe process. For more information about this, see Alex Ionescu’s report “The Linux Kernel Hidden Inside Windows 10,” which he gave at Black Hat 2016.


ExtFS for Windows is a paid driver produced by Paragon. It runs on Windows 7 to 10 and supports read/write access to ext2/3/4 volumes. Provides almost complete support for ext4 on Windows.

HFS+ for Windows 10 is another proprietary driver produced by Paragon Software. Despite the name, it works in all Windows versions starting with XP. Provides full access to HFS+/HFSX file systems on disks with any layout (MBR/GPT).

WinBtrfs is an early Btrfs driver for Windows. As of version 0.6 it supports both read and write access to Btrfs volumes, handles hard and symbolic links, and supports alternate data streams, ACLs, two types of compression, and asynchronous read/write. So far, however, WinBtrfs does not include mkfs.btrfs, btrfs-balance, and the other utilities needed to maintain this file system.

Capabilities and limitations of file systems: summary table

| File system | Maximum volume size | Maximum file size | File name length | Full path length (from the root) | Limit on number of files and/or directories | Timestamp precision | Access rights | Hard links | Symbolic links | Snapshots | Background data compression | Background data encryption | Data deduplication |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FAT16 | 2 GB with 512-byte sectors or 4 GB with 64 KB clusters | 2 GB | 255 bytes with LFN | — | — | — | — | — | — | — | — | — | — |
| FAT32 | 8 TB with 2 KB sectors | 4 GB (2^32 − 1 bytes) | 255 bytes with LFN | up to 32 subdirectories with CDS | 65460 | 10 ms (create) / 2 s (modify) | No | No | No | No | No | No | No |
| exFAT | ≈ 128 PB (2^32−1 clusters of 2^25−1 bytes) theoretical / 512 TB due to third-party restrictions | 16 EB (2^64 − 1 bytes) | — | — | 2,796,202 per directory | 10 ms | ACL | No | No | No | No | No | No |
| NTFS | 256 TB with 64 KB clusters or 16 TB with 4 KB clusters | 16 TB (Win 7) / 256 TB (Win 8) | 255 Unicode (UTF-16) characters | 32,760 Unicode characters, at most 255 per element | 2^32 − 1 | 100 ns | ACL | Yes | Yes | Yes | Yes | Yes | Yes |
| HFS+ | 8 EB (2^63 bytes) | 8 EB | 255 Unicode (UTF-16) characters | not limited separately | 2^32 − 1 | 1 s | Unix, ACL | Yes | Yes | No | Yes | Yes | No |
| APFS | 8 EB (2^63 bytes) | 8 EB | 255 Unicode (UTF-16) characters | not limited separately | 2^63 | 1 ns | Unix, ACL | Yes | Yes | Yes | Yes | Yes | Yes |
| Ext3 | 32 TB (theoretical) / 16 TB with 4 KB clusters (limits of the e2fs tools) | 2 TB (theoretical) / 16 GB for older programs | 255 Unicode (UTF-16) characters | not limited separately | — | 1 s | Unix, ACL | Yes | Yes | No | No | No | No |
| Ext4 | 1 EB (theoretical) / 16 TB with 4 KB clusters (limits of the e2fs tools) | 16 TB | 255 Unicode (UTF-16) characters | not limited separately | 4 billion | 1 ns | POSIX | Yes | Yes | No | No | Yes | No |
| F2FS | 16 TB | 3.94 TB | 255 bytes | not limited separately | — | 1 ns | POSIX, ACL | Yes | Yes | No | No | Yes | No |
| Btrfs | 16 EB (2^64 − 1 bytes) | 16 EB | 255 ASCII characters | 2^17 bytes | — | 1 ns | POSIX, ACL | Yes | Yes | Yes | Yes | Yes | Yes |

Microsoft's new ReFS file system first appeared on servers running Windows Server 2012. Only later was it included in Windows 10, where it can be used only as part of the Storage Spaces feature (a disk-space virtualization technology) on a disk pool. In Windows Server 2016 Microsoft promises to significantly improve ReFS, and, according to rumors in the press, ReFS may replace the aging NTFS in the new version of Windows 10 that proudly bears the name Windows 10 Pro (for advanced PCs).

But what exactly is ReFS, how does it differ from the NTFS file system used today, and what advantages does it bring?

What is ReFS

In short, it was designed as a fault-tolerant file system. ReFS is built on new code and is essentially a redesign and improvement of NTFS. The improvements include more reliable storage of information, stable operation under stress, and limits on file size, volume size, directory size, and the number of files per volume and per directory that are bounded only by 64-bit numbers. For reference, with such values the maximum file size is 16 exbibytes and the maximum volume size is 1 yobibyte.

At the moment, ReFS is not a replacement for NTFS. It has its advantages and disadvantages, but you cannot, say, format a disk with it and install a fresh copy of Windows on it the way you would with NTFS.

ReFS protects your data

ReFS uses checksums for metadata and can also use checksums for file data. Every time it reads or writes a file, ReFS verifies the checksum, which means the file system itself can detect corrupted data on the fly.
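
A minimal sketch of the idea only - not ReFS's actual on-disk format - showing how a stored checksum lets a read detect silent corruption:

```python
import hashlib

def write_record(storage: dict, key: str, data: bytes) -> None:
    """Store data together with a checksum of its contents."""
    storage[key] = {"data": bytearray(data),
                    "checksum": hashlib.sha256(data).hexdigest()}

def read_record(storage: dict, key: str) -> bytes:
    """Verify the checksum on every read, as a checksumming file system would."""
    rec = storage[key]
    if hashlib.sha256(bytes(rec["data"])).hexdigest() != rec["checksum"]:
        raise IOError(f"{key}: data does not match its checksum (corruption detected)")
    return bytes(rec["data"])

disk = {}
write_record(disk, "report.docx", b"quarterly numbers")
disk["report.docx"]["data"][0] ^= 0xFF        # simulate a flipped bit on the medium
read_record(disk, "report.docx")              # raises IOError: corruption detected
```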

ReFS is integrated with the Storage Spaces feature. If you have set up mirroring with ReFS, Windows can detect file system corruption and repair it automatically by copying the mirrored data onto the damaged disk. This function is available in both Windows 10 and Windows 8.1.


If ReFS detects corrupted data and there is no required copy of the data to restore, the file system is able to immediately remove the corrupted data from the disk. This does not require a system reboot, unlike NTFS.

ReFS not only checks the integrity of files during writing and reading. It automatically scans data integrity by regularly checking all files on the disk, identifying and correcting corrupted data. In this case, there is no need to periodically run the chkdsk command to check the disk.

The new file system also resists data corruption in other ways. Suppose you update a file's metadata - say, its name. NTFS changes the metadata in place, so if the system fails at that moment (a power outage, for example), there is a real chance the file will be damaged. ReFS, when metadata changes, writes an updated copy to a new block instead of overwriting the old one, which removes the possibility of that kind of damage. The strategy is called copy-on-write (also described as allocate-on-write). It is used by other modern file systems such as ZFS and Btrfs on Linux, as well as Apple's new APFS.
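
A toy illustration of copy-on-write versus in-place update (generic Python, not ReFS internals): the old metadata block stays valid until a single pointer switch, so a crash in the middle leaves either the old or the new version, never a half-written one.

```python
# Toy copy-on-write metadata update; nothing here is ReFS-specific.
blocks = {0: {"name": "report_old.docx"}}   # block 0 holds the current metadata
current = 0                                  # "pointer" to the valid metadata block

def rename_cow(new_name: str) -> None:
    global current
    new_block = max(blocks) + 1
    blocks[new_block] = {**blocks[current], "name": new_name}  # 1. write copy to free space
    # -- a crash here is harmless: 'current' still points at the old, intact block --
    current = new_block                                        # 2. switch the pointer atomically

rename_cow("report_new.docx")
print(blocks[current]["name"])   # report_new.docx; the old block was never overwritten
```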

Limitations of the NTFS file system

ReFS is more modern than NTFS and supports much larger data volumes and longer file names. In the long run this is very important.

In NTFS a file path is limited to 255 characters; in ReFS the limit is an impressive 32,768 characters. Windows 10 now has an option to lift this limit for NTFS, but on ReFS volumes long paths are allowed by default.

ReFS does not support DOS 8.3 file names. On NTFS volumes you can still reach folders such as C:\Program Files through short aliases like C:\PROGRA~1, which exist for compatibility with old software. In ReFS these familiar short names are gone - they have been removed.

The theoretical maximum data volume supported by NTFS is 16 exabytes, ReFS supports up to 262,144 exabytes. Now this figure seems simply huge.

ReFS performance

The developers did not set a goal to create a more productive file system. They made a more optimized system.


For example, when used with a storage array, ReFS supports real-time tier optimization. Suppose you have a storage pool of two disks: the first chosen for speed and fast access to data, the second chosen for reliability and long-term storage. In the background ReFS will automatically move large chunks of data to the slower disk, ensuring that the data is stored safely.

In Windows Server 2016 the developers added features that improve performance for certain virtual machine scenarios. For example, ReFS supports block cloning, which speeds up copying virtual machines and merging checkpoints. To create a copy of a virtual machine, ReFS only writes new metadata to disk and points it at data already on disk, so multiple files can reference the same underlying blocks. When you work with the virtual machine and change its data, the changes are written to a different location, while the original data stays on disk. This dramatically speeds up making copies and reduces the load on the disk.

ReFS supports "Sparse VDL" (sparse valid data length), i.e. sparse files. A sparse file is one in which runs of zero bytes are replaced by information about those runs (a list of "holes"). Holes are runs of zero bytes inside a file that are never physically written to disk; information about them is stored in the file system metadata.

Sparse file support makes it possible to "write" zeros into a huge file very quickly. This greatly speeds up the creation of a new, empty, fixed-size virtual hard disk (VHD) file: creating such a file on ReFS takes a few seconds, while on NTFS the same operation can take up to 10 minutes.
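
A small sketch of the general sparse-file idea - standard Python on any file system that supports sparse files, with no ReFS-specific APIs: the file claims a large logical size while occupying almost no space, because the zeros are recorded as a hole rather than written out.

```python
import os

SIZE = 1 * 1024**3                     # logical size: 1 GiB of zeros

with open("sparse.img", "wb") as f:
    f.truncate(SIZE)                   # declare the length; no zero bytes are written

st = os.stat("sparse.img")
print("logical size :", st.st_size)    # 1073741824 bytes
# On POSIX systems st_blocks shows the space actually allocated on disk.
print("allocated    :", getattr(st, "st_blocks", "n/a"), "512-byte blocks")
```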

Still, ReFS is not able to completely replace NTFS

Everything we described above sounds good, but you won't be able to switch to ReFS from NTFS. Windows cannot boot from the ReFS file system, requiring NTFS.


ReFS lacks a number of technologies available in NTFS: file system compression and EFS encryption, hard links, extended attributes, data deduplication, and disk quotas. However, like NTFS, ReFS does support full-volume encryption with BitLocker.

In Windows 10, you will not be able to format a disk partition with ReFS. The new file system is only available for storage systems where its main function is to protect data from corruption. In Windows Server 2016, you will be able to format a disk partition with ReFS. You will be able to use it to run virtual machines. But you won't be able to select it as a boot disk. Windows boots only from the NTFS file system.

It is unclear what future Microsoft has in mind for the new file system. Perhaps one day it will completely replace NTFS in all versions of Windows, but at the moment ReFS can be used only for specific tasks.

Application of ReFS

Much has been said above in support of the new file system; its advantages and disadvantages have been described. Let's stop and sum up: for what purposes can - and perhaps should - ReFS be used?

In Windows 10, ReFS is only applicable in conjunction with the Storage Spaces component. Be sure to format your disk dedicated to data storage in ReFS, not NTFS. In this case, you will be able to fully appreciate the reliability of data storage.

In Windows Server you can format a partition as ReFS using the standard Disk Management console. This is recommended if you run virtual servers. But remember that the boot disk must be formatted as NTFS: booting from ReFS is not supported by Windows.


Meet the new file system ReFS (Resilient File System - fault-tolerant file system).

In fact it is not that new: Microsoft did not develop ReFS from scratch. Previously known under the code name Protogon, it was created for Windows Server 8 and will now also be installed on Windows 8 client machines.

So, to open, close, read and write files, the system uses the same API access interfaces as NTFS.
Many well-known features remain untouched - for example, BitLocker disk encryption and symbolic links for libraries.
Other features, such as data compression, have disappeared.

The previous NTFS (New Technology File System) file system in version 1.2 was introduced back in 1993 as part of Windows NT 3.1, and by the advent of Windows XP in 2001, NTFS had grown to version 3.1, and only then it began to be installed on client machines.
Gradually NTFS has reached the limits of its capabilities: checking high-capacity storage media takes too long.
The journal (log file) slows down access, and the maximum file size has almost been reached.

Most of ReFS's innovations lie in the area of ​​creating and managing file and folder structures.
They are designed for automatic error correction, maximum scaling and operation in Always Online mode.
For these purposes, Microsoft uses the concept of B+ trees, familiar from databases.
This means that folders in the file system are structured as tables with files as entries.

These, in turn, can have certain attributes added as subtables, creating a hierarchical tree structure.
Even free disk space is organized in tables.
The core of the ReFS system is the object table - a central directory that lists all the tables in the system.

ReFS does away with complex journal management and now records new file information in free space, which keeps existing data from being overwritten.
And even if an overwrite does happen, the system re-registers the links to the records in the B+ tree structure.

Like NTFS, ReFS fundamentally distinguishes between file information (metadata) and file content (user data), but generously provides both with the same security features.
Thus, metadata is protected by default using checksums.
The same protection can be provided to user data if desired.
These checksums are located on the disk at a safe distance from each other so that if an error occurs, the data can be recovered.

Transferring data from NTFS to ReFS

Will it be possible to easily and easily convert data from NTFS to ReFS and vice versa in Windows 8?
Microsoft says there won't be any built-in format conversion functionality, but information can still be copied.
The scope of ReFS is obvious: at first it can only be used as a large data manager for the server.
Therefore, it is not yet possible to run Windows 8 from a disk running the new file system.
There will be no external drives with ReFS yet - only internal ones.

Obviously, over time ReFS will acquire a larger set of functions and will be able to replace the aging system.
Perhaps this will happen with the release of the first update package for Windows 8.

Comparing NTFS and ReFS file systems.

Renaming a file


NTFS

1. NTFS writes to the Log that the file name should be changed.
NTFS also records all actions there.
2. Only after this does it change the file name on the spot.
Thus, the old name is overwritten by the new one.
3. Finally, a mark indicating the successful completion of the specified operation appears in the Log (file system registration file).


ReFS

1 - The new name is written to the free space.
It is very important that the previous name is not erased at first.
2 - As soon as the new name is written, ReFS changes the reference to the name field.
Now in the file system it leads not to the old name, but to the new one.

Renaming a file during a power failure


NTFS

1. NTFS, as usual, writes the change request to the Log.
2. After this, due to a power failure, the renaming process is interrupted, and there is no record of either the old or new names.
3. Windows reboots.
4. Following this, the error-correction program, Chkdsk, is launched.
5. Only now, using the Journal, when applying a rollback, the original file name is restored.


ReFS

1. In the first stage, ReFS writes a new name to another location in the file system, but at this moment the power supply is cut off.
2. Failure causes Windows to automatically restart.
3. After the reboot, the Chkdsk program starts, analyzes the file system for errors, and corrects them if necessary.
Meanwhile, the ReFS dataset is in a stable state. The previous file name becomes valid again immediately after a power failure.

Key goals of ReFS:

Maintain maximum compatibility with a set of widely used NTFS features, and at the same time get rid of unnecessary ones that only complicate the system;
. Verification and auto-correction of data;
. Maximum scalability;
. Preventing the entire file system from ever going offline, thanks to isolation of faulty areas;
. Flexible architecture using the Storage Spaces feature, which is designed and implemented specifically for ReFS.

Key ReFS features (some only available with Storage Spaces):

Metadata integrity with checksums;
. Integrity streams: a method of writing data to disk for additional data protection if part of the disk is damaged;
. Transactional model “allocate on write” (copy on write);
. Large limits on the size of partitions, files and directories.
Partition size is limited to 2^78 bytes with a 16 KB cluster size (2^64 × 16 × 2^10); the Windows stack supports 2^64.
Maximum number of files in a directory: 2^64.
Maximum number of directories in a volume: 2^64;
. Pooling and virtualization for easier creation of partitions and management of the file system;
. Data striping for improved performance; redundant writes for fault tolerance;
. Support for background disk cleaning techniques (disk scrubbing) to identify hidden errors;
. Salvaging data around a damaged area on a disk;
. Shared storage pools between machines for additional fault tolerance and load balancing.

The first versions of the ReFS file system appeared in 2012, in Windows Server 2012. The technology is now making its way into Windows 8 and 10 as a replacement for NTFS. It is worth figuring out why ReFS is better than other file systems and whether it can be used on home computers.

Concept of ReFS

ReFS (Resilient File System) is a fault-tolerant technology intended to succeed NTFS. It is designed to eliminate its predecessor's shortcomings, reduce the amount of information that can be lost during various operations, and support very large files.

One of the technology's advantages is strong protection of data against destruction. The medium stores checksums and metadata designed to verify the integrity of the data on its partitions; the check happens during read/write operations and immediately detects damaged files.

Benefits of ReFS

The ReFS file system (FS) has the following features:

  1. High performance;
  2. Improved ability to check media for errors;
  3. A low degree of data loss when file system errors and bad blocks occur;
  4. Implementation of EFS encryption;
  5. Disk quota functionality;
  6. Maximum file size increased to 18.3 EB;
  7. The number of files stored in one folder increased to 18 trillion;
  8. Maximum disk capacity of up to 402 EB;
  9. The number of characters in a file name increased to 32,767.

There are, of course, many opportunities, but that’s not all. However, it is worth considering one point: how useful will all these advantages be to the average user?

For a user working on a home computer, the useful parts are the faster checking of partitions for errors and the reduced loss of files when such errors occur. Of course, this protection operates only at the file system level - it solves only the file system's own problems - so the loss of important files remains a pressing issue; it can happen, for example, because of a hard drive failure. The technology shows its greatest effect in RAID arrays.

The advantages of RAID are high fault tolerance, data safety, and high operating speed; the most commonly used RAID levels are 1 and 2. The disadvantages are the cost of the hardware and the time spent setting it up. For the average user there is little point in it unless they run a home server working 24/7.

Performing tests based on ReFS and NTFS

Using benchmarking software, we found that the ReFS file system does not provide a noticeable performance gain over NTFS. In tests with identical read and write cycles on the same disk and the same file sizes, the CrystalDiskMark utility showed practically identical results; ReFS had a slight advantage when copying small files.

There were also tests with large files, using a slow hard drive partition as the guinea pig. The results were disappointing: ReFS showed lower performance than NTFS.

There is no doubt the technology is still raw - the measurements were taken at the end of 2017 - but in Windows 10 it can already be used fairly widely. The best option would be to use the file system on an SSD: solid-state drives outperform HDDs in almost every way.

Benefits of ReFS for other users

Windows includes the Hyper-V hypervisor, a technology for running virtual machines. When a partition formatted in ReFS is used for them, there is a clear speed advantage: because the file system keeps checksums and metadata, when copying files it only needs to reference them, and if they match it does not have to physically copy the data.

Creating virtual disks on ReFS takes seconds, while on NTFS the process takes minutes. Fixed-size virtual disks on NTFS cause delays and put a heavy load on the hard drive; with SSDs this is an even bigger problem, since large numbers of rewrite cycles are deadly for the medium. Because of this, working with other applications in the background becomes problematic.

A high degree of ReFS compatibility is also planned with virtualization platforms such as VMware.

Disadvantages of the ReFS file system

Above we looked at the advantages of ReFS and briefly touched on the disadvantages; let's discuss the disadvantages in more detail. Keep in mind that until Microsoft fully implements the technology in Windows, development will be limited. For now the situation is as follows:

  1. Existing Windows system partitions cannot be used with ReFS; it can only be used on partitions that do not hold the system, for example those intended for storing files.
  2. External drives are not supported.
  3. It is impossible to convert an NTFS disk to ReFS without data loss - the only route is to back up important files and format the disk.
  4. Not all software is able to recognize this FS.

That's it. Now look at the image below: this is Windows 7, where the file system is not recognized and an error appears when the partition is opened.

In Windows 8, the partition will need to be formatted, since the FS is also not recognized. Before using a new file system on your home PC, it is better to think about the consequences several times. In Windows 8.1, the problem is solved by activating the FS using the registry editor, but this does not always work, especially since using ReFS implies formatting the disk and destroying data.

Some problems also arise in Windows 10: a newly created ReFS partition works stably, but an existing partition that has been reformatted to ReFS may not be recognized by Windows.

How to format a disk or partition in ReFS

Suppose the user does not care about the new system's flaws and limitations. Fine, friends - let's go through the instructions for formatting a partition as ReFS. One thing I will say: if something goes wrong and the partition fails, you can try the R-Studio tool to recover it.

To format, just follow the following procedure:

  1. Open “This PC” and right-click on the desired section;
  2. In the context menu, click the “Format” item;
  3. In the window that opens, in the “File system” field, find ReFS;
  4. Click the “Start” button and wait.

The same can be done using the command line, where you need to enter the following commands one by one:

  1. diskpart – the utility for working with disks;
  2. lis vol – display all volumes on the computer;
  3. sel vol 3 – where 3 is the number of the required volume;
  4. format fs=refs – format the volume with the desired file system.

How to enable ReFS using the registry

If the ReFS option is not offered anywhere, it may need to be enabled first. For this we need the registry editor. The procedure works on Windows 8.1 and 10:

  1. Launch the registry editor (Win+R and enter regedit);
  2. Go to this branch - HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem;
  3. On the right side of the window, create a 32-bit DWORD parameter called RefsDisableLastAccessUpdate;
  4. Enter the number 1 as the value.
  5. Find the branch HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control;
  6. Create a key there named MiniNT; the resulting path should be “...\CurrentControlSet\Control\MiniNT”;
  7. In it we create a 32-bit DWORD parameter and call it AllowRefsFormatOverNonmirrorVolume;
  8. The value must be 1.

As you can see, it is possible to use ReFS, but for now it is not recommended - on a home computer it simply does not make sense. Recovering lost files would be problematic, and not all programs recognize the file system.

Most likely the technology will develop mainly on servers, and that will not happen soon: recall that the full adoption of NTFS took about seven years. More information can be found on the official Microsoft website - https://docs.microsoft.com/ru-ru/windows-server/storage/refs/refs-overview. In the meantime, you can follow new IT technologies on our website - don't forget to subscribe.