Saturday, January 19, 2008

Status update

Some people have (quite understandably) asked me to post a status update, so here it is:

The good news:

  • The project is not dead.
  • The code is being updated every month with new features and bug fixes from OpenSolaris.
  • Critical bugs, and especially any reports of corruption, will receive my full attention, so please report them if that happens to you. I am 100% committed to fixing any such bugs, but I haven't received any such reports in quite a while. However, I'm not sure whether the underlying corruption issue has truly been fixed, simply because I've been unable to reproduce it.
  • I will make an effort to review and integrate any patches that I receive that improve zfs-fuse.
The bad news:
  • The beta release is quite old already, so I should definitely make a new release.
  • I haven't been able to work on improving zfs-fuse, except for the monthly updates and occasional bug fixes.
  • I have a couple of features that I've started working on but haven't had the time to finish. One of them is async I/O, and the other is automatically disabling the disk write cache with hdparm and sdparm.
That said, if you would like to use zfs-fuse without having to worry about the safety of your data, please do the following:

  • Use the latest development code, from the Mercurial trunk (very important)
  • Disable disk write caching on your disks with hdparm and sdparm (very important)
  • Create and import your pools with devices from /dev/disk/by-id (highly recommended); see the sketch after this list for example commands
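
For reference, a minimal sketch of the last two points, assuming a SATA disk sda, a SCSI/USB disk sdb, and a pool named tank (all of these names are placeholders; adapt them to your system):

    # Disable the on-disk write cache: hdparm for (S)ATA disks,
    # sdparm for SCSI/SAS (and many USB) disks.
    hdparm -W 0 /dev/sda
    sdparm --clear WCE /dev/sdb

    # Create and import pools with stable /dev/disk/by-id names, so that
    # device reordering across reboots cannot confuse the pool.
    zpool create tank /dev/disk/by-id/<disk-id>
    zpool import -d /dev/disk/by-id tank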

I am reluctant to release a new version until the last two points are dealt with in a safe way: either by displaying big flashing red warnings (along with setting the mixer volume to maximum and playing siren sounds), or by making zfs-fuse handle them automatically (probably the better option).

37 comments:

Anonymous said...

Oh go on, sirens and big red boxes (perhaps we could resurrect the old Amiga "Guru Meditation" errors?) are always fun... :-)

As ever, thanks for your hard work on this, Ricardo. I recently changed my home box over to an AMD64-compatible Intel quad-core system and haven't had a chance to migrate my pools from the old one...

Unknown said...

Nice work :) Good to know it's still a live project.

So are you saying that if we follow those 3 directives, our data will be 99.999% safe?

Would you recommend it for an at-home production system with 1 TB of data?

Anonymous said...

I'm really happy using zfs-fuse in a few places (my external hard drive, and the backup disk in one of my servers!). I use the latest versions from the Mercurial repository and have only a few problems: random crashes (probably out of memory; I have ~200 filesystems on my backup disk), after which I get weird mount entries (I need to export, import, umount, and mount again). Also, NFS doesn't work out of the box.

Also, moving data between datasets with mv really copies the data, and so it is slow :/

If I hit any reproducible bug, I will report it!

Generally, stability is very good; more remains to be done in performance tuning.

Thanks for your work!

Unknown said...

Thanks for the update. Like many, I'm glad to see you're still punching away at this. Now here is a left hook: just wanted to share that you need the zlib-devel package to build the trunk, whereas with 0.4x you didn't.

You made this Linux guy a very happy camper.

Anonymous said...

Nice work, it looks like we need more people using it. That means somebody needs to modify the Debian (for me!) and Ubuntu (for everyone else!) installers to do a bootable ZFS root install. That'll nab headlines and get everybody trying it.

Bootable ZFS would be a world first, so it'd be a big boost for people to become generally aware that it can work.

I realize modding an installer is a lot of work, though. So in the meantime, how about an instruction guide on converting your existing Linux system to bootable ZFS? I'll +1 Digg it!

Anonymous said...

Yow. Having been burned by way too much RAID and LDM, I have been dying to drink the ZFS grape Kool-Aid for a while. Can anyone give me a sense of whether it's practical to use Ubuntu 7.10 as a base? I don't need to boot off of ZFS. I'm perfectly content to make the boot and root partitions old-school.

Meet up on #zfs-fuse to talk about strategies?

Anonymous said...

@tgreaser: this build requirement has been in the debian/control file for ages now :-)

I've been following the Mercurial trunk for quite a while now and building my own packages; I'm only lacking the courage (and a spare machine) to actually install it :(

Keep up the good work (maybe Easter is a good time for another beta release?)!

Linuxmanju said...

Hi,

Thanks a lot for the wonderful contribution.

I am facing a problem exporting a ZFS mount through NFS. When I try to mount it from the client, it says permission denied by the server.

Some googling tells me that ZFS has its own setting for sharing over NFS. I tried the command, only to get output saying the feature is not implemented yet.

Is there a chance it will be implemented in the near future? Any workarounds for NFS sharing would be appreciated.

Thanks a lot
Regards,
Manjunath

wizeman said...

@Manjunath:

The ZFS sharenfs property does not work, but NFS should still work like any other FUSE filesystem.

Take a look at the README.NFS file in the zfs-fuse tree for instructions. If you have problems, you can send an email to the zfs-fuse discussion group for help.
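
For example, exporting a FUSE mountpoint through the in-kernel NFS server generally requires an explicit fsid= option in /etc/exports, because FUSE filesystems lack the stable device number NFS normally derives file handles from. A minimal sketch (the path, network, and fsid value are made up for illustration; README.NFS remains the authoritative reference):

    # /etc/exports: fsid= is required when exporting a FUSE filesystem
    /tank/data 192.168.0.0/24(rw,fsid=10,no_subtree_check)

Then run exportfs -ra to reload the export table.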

Anonymous said...

What is that about the write cache being turned off? From the ZFS documentation I get the idea that even with the write cache enabled, everything is fine as long as the hard drive honors flush commands.
I don't know what percentage of hard drives ignore them, but those should be considered broken hardware.
Just my 2 cents

Anonymous said...

Great Scott, great news!

I was sure this project was dead. Hopefully there will come a time when ZFS works as fast and as well as ntfs-3g!

shirishag75 said...

I really hope the suggestion in the 5th comment gets implemented soon. I'm sure it will make headlines.

Anonymous said...

mod +1:ubuntu/fuse-zfs

It would be really fantastic to get this into universe on Ubuntu.

Then you would see some uptake.

Any hope of getting such a beast into Hardy, the long-term release being cooked for April? Probably not, but it might be put in and then "backported" to Hardy.

My understanding is that the licensing for a Fuse implementation is just fine wrt the GPL.

Anonymous said...

Thanks for the update!

Unknown said...

Yes, a bootable ZFS filesystem based Ubuntu Hardy would be fantastic.

Even just instructions on how to do it would be fantastic.

Last year I tried an external 1 TB drive (USB interface) which I use for backup, but it was too slow. A lot slower than the same drive used with ext3.

As soon as I can boot Ubuntu from ZFS I will start converting machines to it.

And yes, thanks for the update, I was afraid it was dead.

Anonymous said...

May I ask what ZFS version one actually gets with zfs-fuse? Have you thought of accepting donations as an incentive to spend more time on this project?

Alex said...

I'm using ZFS on Ubuntu, running a 2-terabyte zpool (5 × 500 GB HDs). It's been running smooth and steady since I set it up back in September. I've run the servers through all sorts of nightmare scenarios, from overwriting random sectors to removing entire drives, and hard-booting the machine repeatedly in the middle of extensive operations, and it has continued to function without difficulty. The only complaints I have are:

1) Performance is a bit slow. I understand that this is being worked on, and it's not a major factor anyway.

2) I get a strange error which seems to have no real impact. My filesystem is reporting 7326 data errors; however, I have examined all of the files on the disk and not a single one shows any sign of corruption. When I try to get a list of the corrupted files using "zpool status -v", the command fails and results in a core dump.

So far the second problem is just annoying, and I'm happy to ignore it. I'm just curious whether anyone else has experienced something similar.

Darth Debian - VelociRaulEitor said...

Hi!! I have been running ZFS on Linux for 3 months without problems, with compression=on and with critical data (docs, music, images), on Arch Linux and Kubuntu Hardy. Some crashes caused by wireless, but the filesystem is consistent every time. This is a great project!! Greetings from Chile.

Anonymous said...

This is a great project. I have been using zfs-fuse for quite a while now on 64- and 32-bit machines with different drive configurations.
Thank you for committing time and effort into this. Keep up the great work!

Gaijin said...

I've got a ZFS setup running now at the company I work for. It's running exclusively on external USB drives (cheap), but I may also resort to using cheap consumer NAS hardware and creating a single large file for zfs-fuse to use as part of a pool. The whole setup is running inside a Xen domU VM with the PCI USB controller exposed. I tried OpenSolaris (Indiana) first and found that throughput was TERRIBLE (1 MB/sec file copy) versus the also terrible, but much better, 3-6 MB/sec I am getting with zfs-fuse on a Linux domU.

So far no big problems, but there is definitely a memory leak in the zfs-fuse daemon. Doing a few rsyncs of all my data to the ZFS array from the existing media tends to have zfs-fuse fall over in a heap, requiring a zfs umount -a; zfs mount -a and a restart of all processes using ZFS-mounted files. It's quite annoying, but that aside it seems very "data-stable". I've also tried to kill it as best I can, with no luck!

Gaijin said...

Regarding my previous "memory leak" comment. I'm a little confused now. I built the latest trunk with debugging enabled and ran it via valgrind with full memory checking turned on. When memory is exhausted, processes just start to die - including zfs-fuse.

After an hour or so of heavy rsync activity, my 640MB of RAM filled and I killed it all to see where the leaks were.

Turns out each thread (16 of them?) had about 3-5 MB of definitely leaked data stemming from zfsfuse_listener_loop (fuse_listener.c:240). That wasn't really a problem, given that only one of those buffers should ever exist per thread, and together they only make up 80 MB of my 500 MB or so of process memory.

So where did all the other memory go? Not sure yet. I guess it's not "leaked" so much as allocated in excess. It might be that running with 2 GB of RAM (which I don't have) would work perfectly... Can anyone else shed some light on this or run any similar tests?
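
For anyone who wants to reproduce this test, a rough sketch of the invocation (the binary path is a guess at the trunk layout; the valgrind flags are standard):

    # Run the freshly built daemon under valgrind with full leak checking.
    # --trace-children=yes keeps valgrind attached if zfs-fuse forks to
    # daemonize itself.
    valgrind --leak-check=full --num-callers=20 --trace-children=yes \
        ./src/zfs-fuse/zfs-fuse

    # Generate load (e.g. a heavy rsync into a pool), then stop the daemon;
    # valgrind prints each leak record with its allocation stack on exit.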

Anonymous said...

Update, update!

Anonymous said...

This project seems kinda... dead :( despite the post stating the opposite (it is from January 2008).

I do hope that I am wrong as this seems to be a cool project - I'd love to have ZFS on my box!

Gaijin said...

I'm now using zfs-fuse on a fileserver for 20 people doing design work. It's handling the load quite well, but it does have some memory issues. For the time being I'm just restarting the zfs-fuse daemon every week, but that's not ideal. If I had more time I'd love to get more involved.

I don't think the project is dead; it's just a one-man, part-time show. ZFS certainly isn't dead, and the design of this project means that upstream changes can, relatively easily, be ported over to zfs-fuse. So when Solaris gets something new, so do we. ;) In theory, at least.

Maybe at some stage in the distant future I'll have more time to work on something like this. My experience so far with ZFS has been nothing but great!

Anonymous said...

The ZFS-on-FUSE project was a good idea... and yet it is definitely dead now.

I would like to sincerely thank Ricardo Correia for his work. At least he tried. Thanks.

Anonymous said...

Sure is wonderful to be able to compile it on 'Lenny' using gcc 4.2.4 without a single warning (or error).


Here is a partition editor that mentions you:

Parted Magic (http://www.partedmagic.com/) is a fork of the GParted project started by the same author. Parted Magic supports ZFS using ZFS_ON_FUSE (http://www.wizy.org/wiki/ZFS_on_FUSE).


Sun has a grub-0.95 (GPL) that can boot ZFS - can you add that so we can boot too?

ZFS all the way!


It certainly would be nice if there were some "kernel patches" - we promise not to compile them in _your_ country.

Anonymous said...

Update, update!

Anonymous said...

Project is not dead. BUT, I have a question (please reply to my email address rudd-o at rudd-o dot com, there's no option to subscribe to replies).

Why "disable disk write cache"? On Linux + SATA, fsyncs() are the barriers that ZFS uses to ensure consistent data on disk (one after the transaction phase 1, and one after the tree top gets rewritten), so the disk shouldn't reorder disk writes between those barriers.

I read that everywhere, and ZFS is supposedly designed to work correctly with write caches (in fact, ZFS turns the cache ON; the only situation where one should turn the write cache off is when the disk is shared with UFS volumes, as those are sensitive to disk write ordering and that filesystem does not issue transaction barriers).

Why then the recommendation to disable write cache if ZFS is designed to use it? Performance drops abysmally when doing that.

Please keep updating the code. I use it all the time and I would like to see you continue -- maybe even help you package it for Hardy.

Anonymous said...

Incompatible ZFS version bug:

Please see
http://drwetter.org/blog/zfs_under_linux.en.html

Search for "update"... (<update>)

dagbrown said...

By way of hopefully encouraging others to help by lowering the bar on participation (but certainly not trying to steal anyone's thunder), I've put zfs-fuse up on github at http://github.com/dagbrown/zfs-fuse/ .

Computer.Pers said...

Hey, is the project alive???

I don't see any new betas....

Anonymous said...

I am, you know, deeply thankful for the efforts poured into porting ZFS to Linux via FUSE. But I really need help, and I haven't been able to work this out by myself (especially the part about negative inode caching by the VFS). Ricardo, is there any progress being made on this? Is the Sun situation affecting this project?

regomodo said...

I really hope this project is not dead, as I'd love to see it reach completion. I'd like to give a hand if I could, but my lack of C/C++ programming skills would be a major hindrance.

Corwin said...

zpool create just hangs for me... :( I'm using the Mercurial repository on Ubuntu 2.6.24-19 amd64. Another interesting thing is that zfs-fuse triggers a bug in htop, where htop shows hundreds of sequential PIDs of zfs-fuse running. ps does not show this.

I've been running ZFS with OpenSolaris for over a year now. One thing I haven't been able to understand is that once a drive is made into a ZFS pool, it can never be used for anything but ZFS again. Using dd, shred, and even a Windows "low-level" formatting utility to overwrite the entire disk does not work. Does ZFS overwrite the disk's UBA area?

Also, if anyone is confused about how to disable write caching, use: hdparm -W 0 /dev/disk/by-id/{drive}
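
On the reuse question: ZFS writes four 256 KB vdev labels per device, two at the start and two at the end, so wiping only the front of a disk leaves the trailing copies intact. A hedged sketch of zeroing both ends (/dev/sdX is a placeholder; double-check it before running dd):

    # Zero the first and last 1 MiB of the disk, where the four ZFS
    # vdev labels live (512 KB at each end).
    dd if=/dev/zero of=/dev/sdX bs=1M count=1
    SIZE=$(blockdev --getsize64 /dev/sdX)
    dd if=/dev/zero of=/dev/sdX bs=1M seek=$(( SIZE / 1048576 - 1 )) count=1

If a fully overwritten disk still shows up as a pool member, the stale entry may be coming from /etc/zfs/zpool.cache on the host rather than from the disk itself.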

Gavin Burris said...

Looks promising. I'll have to give it a try. Any CentOS people out there banging on this? I'd love to have ZFS for my home storage system.

Gaijin said...

Hey, I'm just looking for another update from the man behind the curtain. ;) There were posts about the main ZFS guy from Sun talking with Linus Torvalds a while back, but no news since then. What was that about?

Anonymous said...

The project is not dead. Just check the Mercurial repository. Last update 2 weeks ago. That seems pretty active to me...