Why do size and used space in df output contradict available space? [duplicate]
This question already has an answer here:
df command not showing correct values (2 answers)
Running the following command:
$ df -h
Gives the following output:
Filesystem Size Used Avail Use% Mounted on
/dev/md2 91G 85G 1.2G 99% /home
This means that out of the 91G total, only 85G is used, which should leave 6G available (91 - 85 = 6). Why does the Avail column show only 1.2G?
This question is explicitly about the contradiction between Size - Used and the Avail column in df output, as opposed to a discrepancy between df and du output such as in this related question. In my case there are no deleted files still in use on the filesystem.
disk-usage
marked as duplicate by don_crissti, Kusalananda, Jeff Schaller, Satō Katsura, Thomas Nyman Mar 17 '17 at 12:15
Baard Kopperud's answer to the linked question does answer your question. It's clearly a dupe - I was just too lazy to search for the exact ones, but since you insist, here they are: Why is df missing 500MB of available space? and What happened to my free space
– don_crissti
Mar 19 '17 at 13:30
2 Answers
By default, ext2, ext3 and ext4 filesystems reserve 5% of their capacity for use by the root user. This reduces fragmentation and makes it less likely that the root user or any root-owned daemons will run out of disk space to perform important operations. df counts these reserved blocks as neither used nor available, which is why Size - Used can exceed Avail: 5% of 91G is roughly 4.5G, and subtracting that from the expected 6G leaves approximately the 1.2G you see. More information on the reasons behind this reservation can be found among the answers to this related question.
You can verify the size of the reservation with the tune2fs command:
tune2fs -l /dev/md2 | grep "Reserved block count:"
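The reserved block count is expressed in filesystem blocks, so multiplying it by the block size reported by the same command gives the size of the reservation in bytes. A minimal sketch, assuming an ext2/3/4 filesystem on /dev/md2 as in the question:
# Print the root reservation in MiB (field names match typical e2fsprogs output)
blocks=$(tune2fs -l /dev/md2 | awk -F: '/Reserved block count/ {gsub(/ /, "", $2); print $2}')
bsize=$(tune2fs -l /dev/md2 | awk -F: '/Block size/ {gsub(/ /, "", $2); print $2}')
echo "$(( blocks * bsize / 1024 / 1024 )) MiB reserved for root"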
The reservation percentage can be changed using the -m option of the tune2fs command:
tune2fs -m 0 /dev/md2
The number of reserved blocks can also be changed directly using the -r option of the tune2fs command:
tune2fs -r 0 /dev/md2
Reserved space is least useful on large filesystems with static content that is not related to the operating system. For such filesystems it is reasonable to reduce the reservation to zero. Filesystems that are better left with the default 5% reservation include those containing the directories /, /root, /var, and /tmp, which are often used by daemons and other operating system services to create temporary files or logs at runtime.
The most common cause of this effect is open files that have been deleted.
The kernel only frees the disk blocks of a deleted file if the file is not in use at the time of its deletion. Otherwise, freeing the blocks is deferred until the last open file descriptor referring to the file is closed, or until the system is rebooted.
A common Unix-world trick to ensure that no temporary files are left around is the following (a minimal shell sketch appears after the list):
- A process creates and opens a temporary file
- While still holding the open file descriptor, the process unlinks (i.e. deletes) the file
- The process reads and writes to the file normally using the file descriptor
- The process closes the file descriptor when it's done, and the kernel frees the space
- If the process (or the system) terminates unexpectedly, the temporary file is already deleted and no clean-up is necessary.
- As a bonus, deleting the file reduces the chance of naming collisions when creating temporary files, and it also provides an additional layer of obscurity around the running process's data - for anyone but the root user, that is.
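To make the pattern concrete, here is a minimal shell sketch of the same trick (illustrative only; a real program would usually do the equivalent with open() and unlink() in its own language):
tmpfile=$(mktemp)        # create a temporary file
exec 3<>"$tmpfile"       # open it for reading and writing on file descriptor 3
rm -- "$tmpfile"         # unlink it; the data stays reachable through fd 3
echo "scratch data" >&3  # keep using the file through the descriptor
exec 3>&-                # closing the descriptor lets the kernel free the blocks
While fd 3 is open, the space used by the unlinked file still counts towards df's Used column, which is exactly the effect described in this answer.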
This behaviour ensures that processes don't have to deal with files that are suddenly pulled out from under them, and that processes don't have to coordinate with each other in order to delete a file. It is unexpected behaviour for people coming from Windows systems, though, since there you are not normally allowed to delete a file that is in use.
The lsof command, when run as root, will show all open files and will specifically mark those that have been deleted:
# lsof 2>/dev/null | grep deleted
bootlogd 2024 root 1w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
bootlogd 2024 root 2w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
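If you only care about a single filesystem, the search can be narrowed. A hedged example, assuming /home is the mount point from the question (+L1 tells lsof to list open files with a link count below 1, i.e. files that have been unlinked):
# As root: show deleted-but-still-open files on the /home filesystem only
lsof +L1 /home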
Stopping and restarting the guilty processes, or just rebooting the server should solve this issue.
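If restarting the process is inconvenient, a commonly used workaround is to truncate the deleted file through the holding process's /proc entry; use this with care, and only if the process can tolerate losing the file's remaining contents. The PID (2024) and file descriptor (1) below are taken from the example lsof output above:
# Truncate the deleted-but-open file held by PID 2024 on fd 1, freeing its blocks
: > /proc/2024/fd/1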
Deleted files can also be held open by the kernel itself, for example if the file is a loop-mounted filesystem image. In that case unmounting the filesystem or rebooting the server should do the trick.
In your case, judging by the size of the "missing" space, I'd look for references to the file that you used to set up the VPS, e.g. the CentOS DVD image that you deleted after installing.
If a deleted file is still held open, it should appear in df's Used column; see unix.stackexchange.com/q/82618/221368
– magicode
Mar 19 '17 at 9:53