Tuesday, December 26, 2006

First alpha of ZFS on FUSE with write support

Ladies (?) and gentlemen, the first preview of ZFS on FUSE/Linux with full write support is finally here!

You can consider it my (late) Christmas gift for the Linux community ;)

Don't forget this is an alpha-quality release. Testing has been very limited.

Performance sucks right now, but should improve before 0.4.0 final, when a multi-threaded event loop and kernel caching support are working (both of these should be easy to implement, FUSE provides the kernel caching).

For more information, see the README and the STATUS file for working/not working features. Download here.

Let me know how it works, and don't forget to report bugs!


Anonymous said...

Works for me. Very slow, and it crashed when I tried to run a bonnie++ benchmark. However, normal operation and mirrored RAID replication work quite well. Great work!

wizeman said...

Regarding the crash...

Since you posted, I have now run about 10 bonnie++ benchmarks in total, on 3 different distros, on 32-bit and 64-bit, SMP and non-SMP, but I can't make it crash.

Can you file a bug report as per the instructions in the BUGS file?

Thanks :)

Anonymous said...

Consider this to be on my personal TODO list: "Test ZFS on Linux using every known method". Thanks again for your ongoing efforts!

Anonymous said...

Wow!! Aside from speed, as far as I could test it works really well.

Big thanks for this port! And a good 2007!

Anonymous said...


One question: can/will you be able to boot from ZFS/FUSE? I would have guessed that there's not much/anything special for booting, but I saw there's a "ZFS boot" project for OpenSolaris.

Keep it up!

wizeman said...

Yes, there's a Knoppix distribution that uses an HTTP FUSE filesystem as root, so FUSE is not an obstacle (nor is ZFS).

So yes, it will be possible, even with full support for RAID-Z, RAID-Z2, striping, etc.

Basically, if I'm not mistaken, there will have to be a small ext3 partition (/boot) of 20 megs or so.

This will hold the kernel and a small initrd (or initramfs, or whatever) containing the modules (including FUSE) and the zfs-fuse binaries, which will mount ZFS as the root filesystem.
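As a rough sketch of that boot flow (everything here is illustrative: the pool name, paths, and initramfs tooling are assumptions, not part of zfs-fuse), the initramfs /init could look something like this:

```shell
#!/bin/sh
# Hypothetical initramfs /init: bring up a ZFS root via zfs-fuse.
# Assumes busybox, fuse.ko and the zfs-fuse binaries are bundled in.
mount -t proc proc /proc
mount -t sysfs sysfs /sys
insmod /lib/modules/fuse.ko       # load the FUSE kernel module
/sbin/zfs-fuse &                  # start the userspace ZFS daemon
sleep 1                           # give the daemon a moment to come up
zpool import -f tank              # import the root pool (name assumed)
mount --bind /tank/root /newroot  # bind the ZFS root into place (layout assumed)
exec switch_root /newroot /sbin/init
```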

Anonymous said...

Awesome! I compared a zpool backed by a single file (rather than a partition) against ext2 on a loopback device backed by a single file. With bonnie++, I was impressed to see that zfs-fuse was only 10-20% slower than ext2.

For fun, check out what happens when you turn compression on and run bonnie++. The bonnie++ test files compress 28x, and the read and write rates quadruple! It's not a realistic scenario, but interesting to see.

I also tried turning off checksums to see if that had any noticeable impact on speed. Much to my surprise, with checksumming off the read rate dropped by 20%! I don't understand how that could be possible, though...
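For anyone who wants to reproduce a file-vdev test like this, the setup is only a few commands (pool name, paths and bonnie++ flags here are arbitrary choices, not from the original test):

```shell
# Create a 1 GB backing file and build a test pool on it (file vdev).
dd if=/dev/zero of=/var/tmp/zpool.img bs=1M count=1024
zpool create testpool /var/tmp/zpool.img
zfs set compression=on testpool   # enable compression for the second run
bonnie++ -d /testpool -u nobody   # run the benchmark against the pool
zpool destroy testpool            # tear down and reclaim the backing file
```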

Anonymous said...

Excellent! I'll be sure to try this out in the coming days. Time to grab the source from Mercurial...

Anonymous said...

I have lots of bugs to report, but I can never seem to be able to access http://developer.berlios.de/projects/zfs-fuse/. All I get is timeouts. I've tried several proxies, but nothing. I've even checked the uptime at http://www.siteuptime.com, but it reports the quick check for developer.berlios.de/projects/zfs-fuse as Failed.

Anonymous said...

I've also had problems accessing developer.berlios.de over the last few days. The site seemed to be down several times (as it is _now_). Anyway - why not send the bug reports directly to Ricardo via email?

Anonymous said...

I've got to say I'm really impressed by this - I've just given it a test out (written up in my blog post ZFS on Linux Works!) and it didn't fall over on me once.

Really looking forward to seeing how it develops!

PS: It looks like the ZFS on FUSE site has moved to http://www.wizy.org/wiki/ZFS_on_FUSE.

wizeman said...


I've also noticed BerliOS has been experiencing some downtime in the last week or two...

Please go ahead and email those bug reports to rcorreia at wizy dot org, I'll file them whenever BerliOS comes up and I'll try to fix them in the meantime.

Thanks :)

Anonymous said...

You can connect to BerliOS using HTTPS:

For me, it works...

Anonymous said...

To follow up: I next tried zfs-fuse and ext2 on an LVM2 logical volume (one layer closer to the metal than my file vdev test). It's clear the loopback in my previous test penalized ext2 more than zfs-fuse. With logical-volume-based vdevs, reads and writes are 40% slower with zfs-fuse than with ext2 (as measured by bonnie++).

For a more apples-to-apples comparison, I should be testing ZFS with physical disk vdevs against a journaling filesystem on LVM2. Right now ext2 is whipping zfs-fuse, but ext2 also provides no filesystem integrity guarantees or disk spanning. For that you need something like ext3 + LVM2, which would be a fairer match.
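In case anyone wants to repeat the logical-volume variant, the setup is along these lines (volume group, volume names and sizes are placeholders, not the ones from my test):

```shell
# Carve out two identical logical volumes, one per filesystem under test.
lvcreate -L 4G -n zfstest vg0
lvcreate -L 4G -n ext2test vg0
zpool create lvpool /dev/vg0/zfstest   # zfs-fuse pool on an LV vdev
mke2fs /dev/vg0/ext2test               # ext2 on the other LV, for comparison
```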

Anonymous said...

Hi Ricardo, great work. ZFS on Kubuntu works like a charm!

For all the German-speaking people: I have compiled a short HOWTO on compiling and installing ZFS on (K)Ubuntu 6.10 »Edgy Eft«. It also covers some basic examples of how to work with ZFS.

You might want to read this tutorial on http://node-0.mneisen.org/2006/12/31/zfs-unter-ubuntu-kubuntu-610-edgy-eft/.

BTW: I have one issue with ZFS. After creating e.g. test/users/mneisen and cd'ing to that directory, I cannot check out a Subversion or git project. The creation of the .git or .svn directory (or some contents thereof) fails miserably, although the current user has all rights in this directory. root does not have this limitation. What am I doing wrong?

P.S.: Please delete my previous comment and this post scriptum.

Anonymous said...

Yesterday I redid my bonnie++ tests with zfs-fuse LD_PRELOAD'ed with Google's tcmalloc library (a high performance malloc implementation) and found it shaved a minute off the compressed test and over 1 minute 30s off the uncompressed tests.

Another possible optimisation?
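For reference, preloading tcmalloc into the daemon is a one-liner (the library and binary paths are assumptions and vary by distro):

```shell
# Start zfs-fuse with Google's tcmalloc substituted for glibc malloc.
LD_PRELOAD=/usr/lib/libtcmalloc.so /usr/local/sbin/zfs-fuse
```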

Happy new year all!

Anonymous said...

Well, I just realised I handicapped my XFS benchmark tests quite severely. I'd forgotten I was running Beagle, and it was helpfully trying to index the 2GB scratch files that bonnie++ was creating while the benchmark was running!

This meant there was a lot of contention for disk I/O and it looks like it penalised XFS by almost 90 seconds over the whole run. I've updated my blog post (again) with the new (better) numbers for XFS.

Anonymous said...

I've just done a different series of tests on an ancient machine with 4 spare SCSI drives and was surprised to see that I don't get any speedup when I add more drives as stripes (or mirrors, which under Linux SW RAID can give improved read performance).

I'm pretty sure it's not a hardware limitation, as XFS is almost twice the speed on a single drive. What puzzles me is that the I/Os do seem nicely balanced over the drives - it's just that when you add more drives, each drive becomes less busy. RAID-Z was slower than a single drive too, though that's probably because of the additional burden of the parity calculation.

It might be that, because it's running in user space, its threading model isn't sufficiently optimised yet to take advantage of the 4 CPUs in the box (i.e. the machine is too old for one of the 200 MHz Pentium Pros to do enough computation for the filesystem).

Anyway, the fact that I can just add drives to the array/mirror while the filesystem is live and have them in use immediately is still pretty cool. :-)
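The live-growth commands are pleasantly short; for example (pool and device names here are placeholders):

```shell
# Stripe across an additional disk: new writes spread over the new vdev.
zpool add tank /dev/sdc
# Or attach a second disk to an existing vdev, turning it into a mirror;
# the new half is resilvered in the background while the pool stays live.
zpool attach tank /dev/sdb /dev/sdd
zpool status tank   # check resilver progress
```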

I've pretty much exhausted the tests I was going to do until the bug that stops me running executables from ZFS is fixed, so you can all get some rest from me for a bit.

Ricardo (and all the contributors, bug reporters, sponsors and supporters), thanks so much for your good work on this.


Unknown said...

Great stuff - I had a few crashes for dumb things (not having fuse-utils installed, forgetting to modprobe fuse, etc.).

It also crashes when you export a pool (I guess this is to do with the remounting item in the STATUS file).


Anonymous said...

Really great work! But is it normal that I cannot run any binary from a ZFS mountpoint?

wizeman said...

To Martin Eisenhardt:
You haven't done anything wrong, that is probably related to a known bug (cannot execute binaries in zfs-fuse). I'm in the process of fixing it :)

To Chris Samuel:
That Google library looks promising!

To david:
Don't be afraid to report bugs, it's better to report too many than too few. All crashes are considered bugs. Proper error handling and useful error messages are a must :)

To the previous anonymous:
Nope, it's a known bug, working on it :)

Finally, thank you all for the suggestions, bug reports, patches and HOWTOs :)

Anonymous said...

Awesome dude, I wanna kiss you muahahaha

Anonymous said...

Thanks for making ZFS available to us passive know-nothings, friend! :)

I've become too comfortable with Linux to switch to Solaris now, but I can't help drooling over ZFS.

Anonymous said...

It's probably worth mentioning which packages need to be installed to compile and run this. I needed something like:
scons, fuse, fuse-utils
And I had to do a sudo mkdir /etc/zfs.
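Something like the following worked for me on a Debian/Ubuntu-style system (the exact package names are a best guess and may differ per distro):

```shell
sudo apt-get install scons fuse-utils libfuse-dev build-essential
sudo modprobe fuse       # make sure the FUSE kernel module is loaded
sudo mkdir -p /etc/zfs   # zfs-fuse expects this directory to exist
```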

Looking good so far, though!

Anonymous said...

I would like to thank you too for all the effort.

I will test it for sure.

Although I hope they can work something out with the licenses, so we can have it in the kernel (yeah, I know, a wet dream).
According to the demos ZFS really is a REVOLUTION.

Anonymous said...

Hmm, ZFS won't compile (due to -Werror) when you build it without debugging, i.e. with "scons debug=0".

It says:

gcc -o lib/libzpool/build-user/dbuf.o -c -pipe -Wall -Werror -std=c99 -Wno-missing-braces -Wno-parentheses -Wno-uninitialized -fno-strict-aliasing -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_REENTRANT -DTEXT_DOMAIN=\"zfs-fuse\" -s -O2 -DNDEBUG -Ilib/libavl/include -Ilib/libnvpair/include -Ilib/libzfscommon/include -Ilib/libumem/include -Ilib/libzpool/include -Ilib/libsolcompat/include lib/libzpool/build-user/dbuf.c
cc1: warnings being treated as errors
lib/libzpool/build-user/dbuf.c: In function 'dbuf_evict':
lib/libzpool/build-user/dbuf.c:227: warning: unused variable 'i'
lib/libzpool/build-user/dbuf.c: In function 'dbuf_add_ref':
lib/libzpool/build-user/dbuf.c:1647: warning: unused variable 'holds'
lib/libzpool/build-user/dbuf.c: In function 'dbuf_write_done':
lib/libzpool/build-user/dbuf.c:2182: warning: unused variable 'epbs'
scons: *** [lib/libzpool/build-user/dbuf.o] Error 1
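Until the warnings are fixed upstream, one possible local workaround (this assumes -Werror is set in the SConstruct/SConscript files, which you should verify first) is to strip the flag before building:

```shell
# Remove -Werror so these harmless unused-variable warnings don't
# abort the non-debug build, then retry.
sed -i 's/-Werror//g' SConstruct
scons debug=0
```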

Anonymous said...

With the executable bug fixed, the trunk version of ZFS can now do a "make bootstrap" of GCC 4.1.1, which involves a three-stage build. Stage 1 is built with the system GCC, stage 2 is built with the stage 1 compiler, and stage 3 is built with the stage 2 compiler. Stages 2 and 3 are then compared with each other to make sure they produced identical code.

Super Carrot said...

I was really impressed by it when I played with it. I had it running on an Ubuntu Edgy box. I'm afraid it was not very stable at all, though. It brought down GNOME when I played with drag-and-drop functionality. I tried copying about 200 megs of songs to the filesystem that I created.

Maybe it was just me, as I don't really know how to use ZFS yet. But I was also not able to delete pools afterwards, which was frustrating. And when it crashed or was restarted, it seemed to lose the filesystem along with any data put on it; however, it would still believe the filesystem was there, even though you couldn't see it.

All in all, I am really impressed, and I am desperately looking forward to it being stable enough for real work.

Unknown said...

Can someone with a Mac have a go at using this under http://code.google.com/p/macfuse/ ?


Miguel Filipe said...


Congrats for the achievement.

what about creating a mailinglist/google group for this project?

A lot of the comments on this entry already belong in some kind of forum/mailing list. It would complement & support the berlios bug forum.

keep up the good work.