
Re: Secured RM ? (source)



In message <[email protected]>, root writes:
>
>OK ... here we go.  Based on discussion on this list, this is what I 
>hacked out to (hopefully) more securely remove a file under Unix. 
>I'd really appreciate any input, but my main interest is in the security of 
>the protocol in general and not the sloppy and embarrassing C 
>programming.. :) 

Well, it seems to assume that it only needs to write 1024 bytes past the
end of the file.  You either don't need to write any bytes at all past 
the end, or you need to write one "block" past the end (normally 8K,
but it varies); on many systems you can use statfs(2) or statvfs(2) to
find the block size for a filesystem.
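
An untested sketch of that lookup, assuming a POSIX-ish system with
statvfs(2) (older systems spell it statfs(2) with slightly different
field names):

    /* Sketch: report the block sizes for the filesystem holding a path. */
    #include <stdio.h>
    #include <sys/statvfs.h>

    int main(int argc, char **argv)
    {
        struct statvfs vfs;

        if (argc != 2) {
            fprintf(stderr, "usage: %s path\n", argv[0]);
            return 1;
        }
        if (statvfs(argv[1], &vfs) != 0) {
            perror("statvfs");
            return 1;
        }
        /* f_frsize is the fundamental block size; f_bsize is the
         * preferred I/O size.  Overwriting one f_frsize-sized block
         * past EOF covers any slack space in the last block. */
        printf("fundamental block size: %lu\n", (unsigned long)vfs.f_frsize);
        printf("preferred I/O size:     %lu\n", (unsigned long)vfs.f_bsize);
        return 0;
    }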

Secondly, you need to flush the data out to the file (use fflush(3), and
fsync(2) if available; otherwise call sync(2) *twice*).  If you don't, some
small set of Unix systems may notice that you are writing to a file that
no longer exists, and not do the writes at all.
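
Something like this would do - HAVE_FSYNC here is just a hypothetical
stand-in for however your build decides fsync(2) is available:

    /* Sketch: force one pass's data out to the disk. */
    #include <stdio.h>
    #include <unistd.h>

    static int flush_pass(FILE *fp)
    {
        if (fflush(fp) != 0)        /* push stdio buffers to the kernel */
            return -1;
    #ifdef HAVE_FSYNC
        if (fsync(fileno(fp)) != 0) /* push kernel buffers to the disk */
            return -1;
    #else
        sync();                     /* no fsync(): schedule all writes... */
        sync();                     /* ...twice, per the advice above */
    #endif
        return 0;
    }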

Thirdly, you should make several passes over the file, writing different
patterns on each pass (and remember the flush & sync after each pass -
while it is uncommon for Unix systems to suppress writes to unlinked files,
it is extremely common for them to detect multiple writes to the same part
of a file, and only do the last one).
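
A rough sketch of that per-pass loop - the pattern list is only an
illustration, pick your own:

    /* Sketch: overwrite the same region with a different pattern each
     * pass, flushing and syncing in between so the kernel can't
     * collapse the passes into a single write. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static const unsigned char PATTERNS[] = { 0x00, 0xff, 0x55, 0xaa };

    static int wipe_passes(FILE *fp, long length)
    {
        unsigned char buf[8192];
        size_t i;

        for (i = 0; i < sizeof PATTERNS; i++) {
            long left = length;

            memset(buf, PATTERNS[i], sizeof buf);
            rewind(fp);
            while (left > 0) {
                size_t n = left > (long)sizeof buf ? sizeof buf : (size_t)left;
                if (fwrite(buf, 1, n, fp) != n)
                    return -1;
                left -= (long)n;
            }
            /* flush + sync between passes, or the kernel may keep
             * only the final pattern */
            if (fflush(fp) != 0 || fsync(fileno(fp)) != 0)
                return -1;
        }
        return 0;
    }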

Fourth - and this is the kicker - there is no guarantee that the filesystem
itself won't keep a copy of the old data for some reason.  Three examples:

 * a compressed filesystem may write the new data elsewhere because it
 compresses differently from the old data - you should be able to defeat
 this by filling the disk several times with random patterns (see the
 sketch after this list)

 * a log-structured filesystem (LFS) *will* write the new data elsewhere,
 and the space the old data is on will not be overwritten until the
 cleaner comes and examines the part of the disk it is on - again, filling the
 disk several times with random patterns should cause an overwrite

 * NetApp's NFS appliance ("the toaster") can be (and normally is) configured
 to take "snapshots" of the filesystem at various times; this makes the
 blocks the file is currently on read-only, and any overwrites will merely
 allocate new space.  The old copy of the file will be readable for some
 period of time (frequently up to a week) under a different name - here you
 will be unable to fill the disk (unless you are the NetApp admin - then
 you can delete the appropriate snapshots and fill the disk...)
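
Here is a rough, untested sketch of the disk-filling trick.  Note that
rand(3) is a weak generator, which is fine here: the point is only to
defeat compression and the cleaner, not cryptanalysis.  Whether errno
survives a short fwrite(3) is a POSIX-ism, not ISO C:

    /* Sketch: fill the free space on a filesystem with pseudo-random
     * data by growing a scratch file until the disk is full, then
     * removing it.  Run it several times over. */
    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int fill_free_space(const char *scratch)
    {
        FILE *fp = fopen(scratch, "w");
        unsigned char buf[8192];
        size_t i;

        if (fp == NULL)
            return -1;
        for (;;) {
            for (i = 0; i < sizeof buf; i++)
                buf[i] = (unsigned char)(rand() & 0xff);
            if (fwrite(buf, 1, sizeof buf, fp) < sizeof buf) {
                /* ENOSPC means the disk is full, which is the goal;
                 * anything else is a real error */
                int full = (errno == ENOSPC);
                fclose(fp);
                remove(scratch);        /* give the space back */
                return full ? 0 : -1;
            }
        }
    }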


While we are at it, you probably want to use stat(2) to find the length
of the file, and you can get far more I/O per syscall if you allocate a
sizeable chunk of memory (say 1K, or 8K) and use writev(2) to shove
multiple copies of it out at once...    and some indenting on the code
would make it more readable (and simpler for you to write as well).
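
For instance (untested; the choice of 8 iovecs is arbitrary - real
systems cap the count at IOV_MAX):

    /* Sketch: find the file length with fstat(2) and push several
     * copies of one 8K pattern buffer out per writev(2) call. */
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define CHUNK 8192   /* one pattern buffer */
    #define NIOV  8      /* copies pushed out per writev() call */

    static int wipe_fd(int fd, unsigned char pattern)
    {
        static unsigned char buf[CHUNK];
        struct iovec iov[NIOV];
        struct stat st;
        off_t left;
        int i;

        if (fstat(fd, &st) != 0)
            return -1;
        memset(buf, pattern, sizeof buf);
        for (i = 0; i < NIOV; i++) {
            iov[i].iov_base = buf;   /* the same block, NIOV times over */
            iov[i].iov_len = CHUNK;
        }
        /* rounds the wipe up to a multiple of NIOV*CHUNK, which also
         * covers the slack space past the end of the file */
        for (left = st.st_size; left > 0; left -= (off_t)NIOV * CHUNK)
            if (writev(fd, iov, NIOV) < 0)
                return -1;
        return fsync(fd);
    }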

>While we're here..  I haven't been able to find anyone on the planet who's 
>seen or heard of a Linux un-remove, which makes testing my code very 
>tricky.  If anyone can point me at it I'd appreciate it.  Hell, if someone 
>can definitively say they've /seen/ such a thing it'd be nice.  So far 
>I've found one person who insists that his system admin's sister's 
>boyfriend's cousin from Saint Petersburg has been using un-rm for Unix for 
>years. *sigh* 

I don't know of a real un-rm.  In "the old days" there was fsdb (a filesystem
debugger) that could be used to alter the filesystem at a low level.  If
you knew enough, you could "un-rm" a file (it had to be a very valuable
file to be worth it!).  At some sites they alias rm to do something like
"mv $1 .$1.deleted"; writing the matching un-rm is left as an exercise
for the interested reader.