RAPID PUBLISHING ARTICLES PROVIDE INFORMATION DIRECTLY FROM WITHIN THE MICROSOFT SUPPORT ORGANIZATION. THE INFORMATION CONTAINED HEREIN IS CREATED IN RESPONSE TO EMERGING OR UNIQUE TOPICS, OR IS INTENDED TO SUPPLEMENT OTHER KNOWLEDGE BASE INFORMATION.
An application attempts to extend a file, such as adding a new record to an existing database, but the operation fails. The error message states "file system limitation," "insufficient resources," or "disk is full," even though the volume has enough free space to hold the extended file.
When a file is very fragmented, NTFS uses more space to describe the allocations associated with the fragments. The allocation information is stored in one or more file records. When multiple file records are needed, NTFS stores the list of those file records in another structure called the ATTRIBUTE_LIST. The number of ATTRIBUTE_LIST entries that a file can have is limited by what can fit within one cache map view.
File fragmentation occurs when a file is extended (by an application or NTFS itself), but the new data cannot be placed in clusters that are adjacent to the other clusters used by the file. One common reason why this occurs is that an application extends a file thousands or millions of times by a small amount each time, and there are other files being written to at the same time. In the most severe cases, the file is extended slowly over a long period of time and the volume has never been defragmented. The result is that the volume still has free sectors and new files can be created, but the file whose fragment list is full cannot be extended.
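The growth pattern described above can be sketched as follows. This is a minimal illustration, not an NTFS reproduction: the file name is hypothetical, and whether fragmentation actually occurs depends on concurrent disk activity and free-space layout.

```python
import os

# Anti-pattern: extend a file thousands of times by a small amount.
# Each small append may force the file system to allocate new, possibly
# non-adjacent clusters, especially while other files are being written
# concurrently, growing the file's fragment list.
with open("log.dat", "wb") as f:
    for _ in range(10_000):
        f.write(b"\x00" * 64)   # a 64-byte record appended each time

# The final size is the same as one 640,000-byte write would produce;
# the difference is in how many separate allocations may be needed.
print(os.path.getsize("log.dat"))
```

A single large write (or pre-sizing the file, as recommended below for developers) gives the file system one allocation request instead of thousands.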
Another reason a file can become severely fragmented is NTFS compression. NTFS compression can cause sections of the file to be frequently reallocated, and in those cases it can be difficult to keep file data contiguous on disk.
It is not possible to give an exact file size at which a compressed or highly fragmented file will hit this limit. The limit depends on the average sizes of the NTFS on-disk structures that describe the allocations, which in turn determine how many of them fit within other structures. The higher the level of fragmentation, the sooner the limit is reached, and vice versa.
If you are using NTFS compression and are encountering this problem, try disabling NTFS compression. This will free up space that NTFS uses to describe the compression for the file.
If disabling compression does not work, or compression was not enabled to begin with, once a file reaches this state of severe fragmentation, the only way to defragment it is to copy or move the file to a new location, delete the original, and then copy the new file back to the original location. It is not possible to defragment the file with a disk defragmentation utility. Copying or moving the file works because new clusters are allocated for the new file and kept as contiguous as possible. Because the new clusters are nearly contiguous, the NTFS fragment list for the new file will be nearly empty.
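The copy-then-replace workaround can be sketched as below. This is a simplified illustration under the assumption that enough free space exists for a second copy; the function name and temporary suffix are hypothetical, and a copy also drops attributes such as alternate data streams that a full backup/restore would preserve.

```python
import os
import shutil

def rewrite_contiguously(path: str) -> None:
    """Copy a badly fragmented file to a new name, then replace the
    original. The copy is written sequentially, so the file system can
    allocate its clusters as contiguously as free space allows."""
    tmp = path + ".defrag_tmp"     # hypothetical temporary name
    shutil.copyfile(path, tmp)     # sequential read/write into a new file
    os.replace(tmp, path)          # swap the new copy into place
```

The key point is that the data lands in a freshly created file, so the new file's allocation metadata starts from scratch instead of inheriting the old fragment list.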
It is best to prevent severe fragmentation scenarios like this from occurring. Files this fragmented require a lot of management and overhead from the file system, reducing overall system performance when they are accessed for any type of file operation. Depending upon the role you have in your environment, here are some things to consider to help maintain good file system health:
Developers: When designing the file creation/access strategy for your application, pre-size the file to a reasonable size, and then extend it in a few large chunks. For example, if you know a downloaded video file is going to eventually be 750 MB in size, it is best to make the file 750 MB from the start so that NTFS can allocate all of the needed clusters at one time, instead of extending the file as the application writes to it and potentially fragmenting it.
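A minimal sketch of the pre-sizing advice, reusing the 750 MB download example; the file name is hypothetical. In Python, `truncate()` extends the file with zeros (on Windows it maps to the Win32 `SetEndOfFile` call), which gives the file system one up-front request instead of millions of small extensions.

```python
import os

# Pre-size the file to its final length so the file system can satisfy
# the allocation in one operation rather than on every small append.
FINAL_SIZE = 750 * 1024 * 1024       # 750 MB, per the example above

with open("video.bin", "wb") as f:   # hypothetical file name
    f.truncate(FINAL_SIZE)           # extend to full size, zero-filled

# Later writes then fill in the already-sized region, e.g.:
#   f.seek(offset); f.write(chunk)
```

Note that some file systems defer the physical cluster allocation for zero-filled regions; on NTFS, explicitly writing into the region (or using the platform's preallocation APIs) ensures the clusters are actually reserved.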
Depending on how they are used, sparse files can run into this limitation easily as well. If you are going to design an application to use sparse files, it is important to note that this limit can be reached very quickly, depending on which ranges of the file are actually allocated. For instance, if the file has many alternating allocated/de-allocated ranges, NTFS cannot fold the "empty" ranges into a smaller number of fragments. This will quickly add entries to the file's ATTRIBUTE_LIST.
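The alternating allocated/de-allocated pattern can be sketched as below. This illustrates only the write pattern: on Windows the file would first be marked sparse (via the `FSCTL_SET_SPARSE` control code), which this cross-platform sketch does not do, and the chunk sizes and file name are arbitrary assumptions. Each separate allocated range needs its own extent entry in the file's allocation metadata.

```python
import os

CHUNK = 64 * 1024      # one 64 KB allocated range...
GAP = 64 * 1024        # ...followed by a 64 KB hole that stays unallocated

with open("sparse.dat", "wb") as f:   # hypothetical file name
    for i in range(1000):             # 1000 disjoint allocated ranges
        f.seek(i * (CHUNK + GAP))     # skip over the hole
        f.write(b"\xff" * CHUNK)      # allocate only this range
```

Even though only half of the file's logical size is backed by data, the file system must track 1000 separate extents, which is exactly the growth in allocation metadata that the ATTRIBUTE_LIST limit caps.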
Once this limitation is reached, file management APIs that attempt to extend the file fail with ERROR_FILE_SYSTEM_LIMITATION on Windows Vista and later versions of Windows, and with ERROR_INSUFFICIENT_RESOURCES on earlier (down-level) versions of Windows.
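An application can recognize this condition from the Win32 error code of the failed extend. A minimal sketch, assuming the numeric values from winerror.h (ERROR_FILE_SYSTEM_LIMITATION is 665, ERROR_INSUFFICIENT_RESOURCES is 1450); on Windows, Python surfaces the Win32 code of a failed file operation as `OSError.winerror`:

```python
# Win32 error codes (winerror.h) surfaced when the fragment list is full.
ERROR_FILE_SYSTEM_LIMITATION = 665    # Windows Vista and later
ERROR_INSUFFICIENT_RESOURCES = 1450   # down-level versions of Windows

def is_fragment_list_full(winerror: int) -> bool:
    """Return True if a failed file extension matches this NTFS limit."""
    return winerror in (ERROR_FILE_SYSTEM_LIMITATION,
                        ERROR_INSUFFICIENT_RESOURCES)

# Hypothetical usage on Windows:
#   try:
#       f.write(record)
#   except OSError as e:
#       if is_fragment_list_full(e.winerror):
#           ...  # stop extending; the file must be copied to defragment it
```

Note that both codes can also be returned for unrelated conditions, so treating them as this specific limit is only a heuristic worth confirming against the file's actual fragmentation.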
System admins/End users: Make sure you have a regular disk maintenance schedule. Pick a time with the least amount of disk usage, preferably once a week or more often, and schedule a defragmentation of your system's volume(s) at that time. You can use Task Scheduler or the Disk Defragmenter utility to schedule regular disk defragmentation. This will help prevent problems like these from occurring in the future.
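The scheduling advice above can also be set up from an elevated command prompt. A sketch of one possible command; the task name, day, and start time are placeholders to adjust for your environment:

```
schtasks /Create /SC WEEKLY /D SUN /ST 02:00 /TN "WeeklyDefrag" /TR "defrag.exe C:"
```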
MICROSOFT AND/OR ITS SUPPLIERS MAKE NO REPRESENTATIONS OR WARRANTIES ABOUT THE SUITABILITY, RELIABILITY OR ACCURACY OF THE INFORMATION CONTAINED IN THE DOCUMENTS AND RELATED GRAPHICS PUBLISHED ON THIS WEBSITE (THE "MATERIALS") FOR ANY PURPOSE. THE MATERIALS MAY INCLUDE TECHNICAL INACCURACIES OR TYPOGRAPHICAL ERRORS AND MAY BE REVISED AT ANY TIME WITHOUT NOTICE.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, MICROSOFT AND/OR ITS SUPPLIERS DISCLAIM AND EXCLUDE ALL REPRESENTATIONS, WARRANTIES, AND CONDITIONS WHETHER EXPRESS, IMPLIED OR STATUTORY, INCLUDING BUT NOT LIMITED TO REPRESENTATIONS, WARRANTIES, OR CONDITIONS OF TITLE, NON INFRINGEMENT, SATISFACTORY CONDITION OR QUALITY, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WITH RESPECT TO THE MATERIALS.