Finding Disk Space Hogs

[problem]

A common admin task: finding out what is using all the disk space.

[/problem]

[solution]

The simplest thing to do, in a known directory such as /var/log, is to run du -ks *.

That will show every file and directory in the current directory, along with the disk space each one uses in kilobytes.

Then cd down into the top ones and rerun. You can also pump the output through sort (-n on Linux; old Solaris sort uses the +n field syntax).
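
For example, to see the biggest entries first you can reverse the sort and keep just the top few. A quick sketch, assuming GNU sort and head are available; adjust the sort flags on Solaris:

du -ks * | sort -rn | head -10 # ten largest entries in the current directory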

[/solution]

[example]


cd /var/log
du -ks * | sort -n # Linux
cd httpd
du -ks * | sort -n

With some flavours of UNIX, du shows the physical disk space actually being used, whereas df shows the disk space being reserved. This is only really noticeable when you remove a file that is still being written to: du says the directory is only using x kb, yet df still says you are at 100%. 🙂 You need to find and kill the process.
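
If lsof is installed, it will point you at the process holding the deleted file open. On Linux something like this does the job (just a sketch; the grep path is only an example):

lsof +L1 # open files with a link count below 1, i.e. deleted but still held open
lsof +L1 | grep /var/log # narrow it down to the area you are interested in

Restart the process it names and the space is released, bringing df back in line with du.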

Another useful way of finding the largest files is the find command, something like this:


find / -xdev -type f -a -size +20000 -ls

This says: find files over 10MB (20,000 512-byte blocks) and do not traverse file systems mounted on this one (-xdev). You can also say -mount to stop find crossing into those file systems. That way df and find tally; just re-run it for /var etc. as required.
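
If you are on GNU find (Linux), you can skip the block arithmetic and give the size in megabytes directly; Solaris find does not take the M suffix, so stick to blocks there:

find /var -xdev -type f -size +10M -ls # GNU find only: files over 10MB under /var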

Again this can be pumped through sort:


find / -xdev -type f -a -size +20000 -ls | sort -k7 -n # Linux
find / -xdev -type f -a -size +20000 -ls | sort +6n # Solaris - starts field count from zero
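
And if the listing is still too long, tack tail on the end to keep only the biggest few, for example:

find / -xdev -type f -a -size +20000 -ls | sort -k7 -n | tail -20 # 20 largest files, Linux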

[/example]

[reference]

[tags]UNIX, df, du, Solaris, Unix Coding School[/tags]

[/reference]

