Linux System Administration
There was recently a bit of traffic on the Usenet newsgroups about the need for (or lack of) an undelete command for Linux. If you were to type rm * tmp instead of rm *tmp, such a command would let you quickly recover your files.
The main problem with this idea from a filesystem standpoint involves the differences between the way DOS handles its filesystems and the way Linux handles its filesystems.
Let's look at how DOS handles its filesystems. When DOS writes a file to a hard drive (or a floppy drive) it begins by finding the first block that is marked “free” in the File Allocation Table (FAT). Data is written to that block, the next free block is searched for and written to, and so on until the file has been completely written. The problem with this approach is that the file can be in blocks that are scattered all over the drive. This scattering is known as fragmentation and can seriously degrade your filesystem's performance, because now the hard drive has to look all over the place for file fragments. When files are deleted, the space is marked “free” in the FAT and the blocks can be used by another file.
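To make the allocation policy concrete, here is a toy bash sketch of first-free-block allocation. It is purely an illustration of the idea (a sixteen-element array standing in for the allocation table), not actual DOS code, and the file names and sizes are made up:

disk=(. . . . . . . . . . . . . . . .)    # 16 "blocks"; "." means free

write_file () {   # write_file NAME NBLOCKS: fill the first free blocks found
    local name=$1 need=$2 i
    for i in "${!disk[@]}"; do
        [ "$need" -eq 0 ] && break
        if [ "${disk[$i]}" = "." ]; then
            disk[$i]=$name
            need=$((need - 1))
        fi
    done
}

delete_file () {  # delete_file NAME: mark its blocks free again
    local i
    for i in "${!disk[@]}"; do
        [ "${disk[$i]}" = "$1" ] && disk[$i]=.
    done
}

write_file A 4 ; write_file B 4 ; write_file C 4
delete_file B                  # leaves a four-block hole in the middle
write_file D 6                 # D fills the hole, then spills past C
echo "${disk[@]}"              # A A A A D D D D C C C C D D . .

Deleting B leaves a hole in the middle of the “disk”, and the next file written (D) ends up split between that hole and the free space after C; that split is exactly the fragmentation described above.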
The good thing about this is that, if you delete a file that is out near the end of your drive, the data in those blocks may not be overwritten for months. In this case, it is likely that you will be able to get your data back for a reasonable amount of time afterwards.
Linux (actually, the second extended filesystem that is almost universally used under Linux) is slightly smarter in its approach to fragmentation. It uses several techniques to reduce fragmentation, involving segmenting the filesystem into independently-managed groups, temporarily reserving large chunks of contiguous space for files, and starting the search for new blocks to be added to a file from the current end of the file, rather than from the start of the filesystem. This greatly decreases fragmentation and makes file access much faster. The only case in which significant fragmentation occurs is when large files are written to an almost-full filesystem, because the filesystem is probably left with lots of free spaces too small to tuck files into nicely.
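If you want to see this for yourself on a running system, the dumpe2fs and filefrag utilities from the e2fsprogs package will show you the block groups and how many separate fragments a given file actually occupies. The device and file names below are only examples, and you will need root access to read the device directly:

# Show how the filesystem is split into block groups.
dumpe2fs -h /dev/sda1 | grep -i 'blocks per group'

# Show how many separate extents (fragments) one file occupies.
filefrag -v /var/log/messages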
Because of this policy for finding empty blocks for files, when a file is deleted, the (probably large) contiguous space it occupied becomes a likely place for new files to be written. Also, because Linux is a multi-user, multitasking operating system, there is often more file-creating activity going on than under DOS, which means that those empty spaces where files used to be are more likely to be used for new files. “Undeleteability” has been traded off for a very fast filesystem that normally never needs to be defragmented.
The easiest answer to the problem is to put something in the filesystem that says a file was just deleted, but there are four problems with this approach:
You would need to write a new filesystem or modify a current one (i.e. hack the kernel).
How long should a file be marked “deleted”?
What happens when a hard drive is filled with files that are “deleted”?
What kind of performance loss and fragmentation will occur when files have to be written around “deleted” space?
Each of these questions can be answered and worked around. If you want to do it, go right ahead and try—the ext2 filesystem has space reserved to help you. But I have some solutions that require zero lines of C source code.
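As an aside, one piece of that reserved space is already visible from user level: the chattr and lsattr utilities from e2fsprogs know about a “u” (undeletable) attribute. Setting it is harmless, but as far as I know no current kernel actually acts on it, so for now it is only a placeholder:

chattr +u precious.data    # ask for undelete support on this file
lsattr precious.data       # the 'u' shows up in the attribute list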
I have two similar solutions, and your job as a system administrator is to determine which method is best for you. The first method is a user-by-user no-root-needed approach, and the other is a system-wide approach implemented by root for all (or almost all) users.
The user-by-user approach can be done by anyone with shell access and doesn't require root privileges, only a few changes to your .profile and .login or .bashrc files and a bit of drive space. The idea is to replace the rm command with a small shell function that moves the files to another directory instead of deleting them. Then, when you log in the next time, the files that were moved are purged from the filesystem using the real /bin/rm command. Because the files are not actually deleted by the user, they remain accessible until the next login. If you're using the bash shell, add this to your .bashrc file:
alias waste='/bin/rm'
rm () { mv "$@" ~/.rm/ ; }
and in your .profile:

if [ -d ~/.rm ]; then
    /bin/rm -r ~/.rm
    mkdir ~/.rm
    chmod og-r ~/.rm
else
    mkdir ~/.rm
    chmod og-r ~/.rm
fi
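With both pieces in place, a session might look something like this (the file names are just examples):

$ rm chapter1.bak          # quietly moved into ~/.rm, not deleted
$ ls ~/.rm
chapter1.bak
$ mv ~/.rm/chapter1.bak .  # changed your mind? just move it back
$ waste core               # really delete something right now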
Advantages:
can be done by any user
only takes up user space
/bin/rm is still available as the command waste
automatically gets rid of old files every time you log in.
Disadvantages:
takes up filesystem space (bad if you have a quota)
not easy to implement for many users at once
files get deleted each login (bad if you log in twice at the same time)
The second method is similar to the user-by-user method, but everything is done in /etc/profile and cron entries. The /etc/profile entries do almost the same job as above, and the cron entry removes all the old files every night. The other big change is that deleted files are stored in /tmp before they are removed, so this will not create a problem for users with quotas on their home directories.
The cron daemon (or crond) is a program that executes commands at specified times, usually frequently-repeated tasks such as doing nightly backups or dialing into a SLIP server to get mail every half-hour. Adding an entry takes a bit of work, because each user has a crontab file listing the tasks that the crond program is to perform on that user's behalf. To get a list of what crond already knows about, use the crontab -l command, for “list the current cron tasks”. To set new cron tasks, use the crontab <file command, for “read in cron assignments from this file”. As you can see, the best way to add a new cron task is to take the list from crontab -l, edit it to suit your needs, and use crontab <file to submit the modified list. It will look something like this:
~# crontab -l > cron.fil
~# vi cron.fil
To add the necessary cron entry, just type the commands above as root and go to the end of the cron.fil file. Add the following lines:
# Automatically remove files from the
# /tmp/.rm directory that haven't been
# accessed in the last week.
0 0 * * * find /tmp/.rm -type f -atime +7 -exec /bin/rm {} \;
Then type:
~# crontab cron.fil
Of course, you can change -atime +7 to -atime +1 if you want to delete files every day; it depends on how much space you have and how much room you want to give your users.
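If you want to see exactly which files the entry would remove before letting it loose, run the same find by hand with -print in place of the -exec action:

~# find /tmp/.rm -type f -atime +7 -print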
Now, in your /etc/profile (as root):
if [ -n "$BASH" ] ; then
    # we must be running bash
    alias waste='/bin/rm'
    rm () { mv "$@" /tmp/.rm/"$LOGNAME"/ ; }
    undelete () {
        if [ -e /tmp/.rm/"$LOGNAME"/"$1" ] ; then
            cp /tmp/.rm/"$LOGNAME"/"$1" .
        else
            echo "$1 not available"
        fi
    }
    if [ ! -d /tmp/.rm/"$LOGNAME" ] ; then
        mkdir -p /tmp/.rm/"$LOGNAME"
        chmod og-rwx /tmp/.rm/"$LOGNAME"
    fi
fi
Once the new cron entry is in place and your users log in, your new `undelete' is ready to go for all users running bash. You can construct a similar mechanism for users of csh, tcsh, ksh, zsh, pdksh, or whatever other shells you use. Alternatively, if all your users have /usr/bin in their paths ahead of /bin, you can make a shell script called /usr/bin/rm which does essentially the same thing as the rm function above, and create an undelete shell script as well. The advantage of doing this is that it is easier to do complete error checking, which is not done here.
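Here is a rough sketch of what such a /usr/bin/rm wrapper might look like. It assumes the nightly cron job above is already cleaning out /tmp/.rm, it ignores rm's usual options (-r, -f, and so on), and the error checking is still minimal:

#!/bin/sh
# /usr/bin/rm -- move files into /tmp/.rm/$LOGNAME instead of deleting them.
TRASH=/tmp/.rm/"$LOGNAME"

if [ $# -eq 0 ] ; then
    echo "usage: rm file ..." >&2
    exit 1
fi

if [ ! -d "$TRASH" ] ; then
    mkdir -p "$TRASH" && chmod og-rwx "$TRASH" || exit 1
fi

status=0
for f in "$@" ; do
    if [ -e "$f" ] ; then
        mv "$f" "$TRASH"/ || status=1
    else
        echo "rm: $f: no such file or directory" >&2
        status=1
    fi
done
exit $status

A matching undelete script would simply copy the named file from /tmp/.rm/$LOGNAME back into the current directory, complaining if it is not there.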
Advantages:
one change affects all (or most) users
files stay longer than the first method
does not take up user's file space
Disadvantages:
some users may not want this feature
can take up a lot of space in /tmp, especially if users delete a lot of files
These solutions will work for simple use. More demanding users may want a more complete solution, and there are many ways to implement these. If you implement a very elegant solution, consider packaging it for general use, and send me an e-mail message about it so that I can tell everyone about it here.
And, as a last-minute correction/addition to a previous article (specifically my article on mtools in LJ issue 5), an alert reader asked: mtools can copy Unix files to a DOS diskette, but how do you preserve the 256-character name of the original Unix file when DOS can only handle 11 characters at most, and is not case-sensitive? The case was one in which two Unix machines could use DOS diskettes, but could not communicate directly. However, the same question applies to backups in which you want your files stored on DOS floppies, or to any other case in which you want long file names preserved. There is a way to do it.
The tar command is used to create one big file which can contain a number of little files. Using the tar command, you can create an archive file which contains a bunch of files with 256-character names, while the tar file itself has a legal DOS name. DOS (or the FAT filesystem, anyway) does not care what is in the file, as long as its own name fits in eight characters plus a three-character extension.
Be sure that when you copy the tar file you do not give the -t (text) option to mtools. The tar file has to be copied in binary format, even if it only contains text files.
So, to copy a few long filenames to the first floppy drive (A: or /dev/fd0):
tar -cvf file.tar longfilename \
    reallylongfilename \
    Not.In.Dos.Format.Filename.9999
mcopy file.tar a:
Then at the remote Unix machine (or to restore it):
mcopy a:file.tar file.tar
tar -xvf file.tar
or
mread a:file.tar | tar -xf -
And assuming the remote Unix system has mtools and supports 256 character filenames, a copy of the files will now be on each system.
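If you want to reassure yourself that the long names survived the trip, you can list the archive's contents without extracting anything:

tar -tvf file.tar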
Tune in next time when I find the real relationships between virtual beer, BogoMIPS, and a VIC-20. In the meantime, please send your comments, questions, or suggestions for future articles to: komarimf@craft.camp.clarkson.edu.
Mark Komarinski (komarimf@craft.camp.clarkson.edu) graduated from Clarkson University (in very cold Potsdam, New York) with a degree in computer science and technical communication. He now lives in Troy, New York, and spends much of his free time working for the Department of Veterans Affairs where he is a programmer.