Super Fast Local Workloads with LXD, ZFS, and Juju (jorgecastro.org)
96 points by mitechie on Feb 12, 2016 | 31 comments


For those interested, please find here [1] the recordings of the Juju sessions (days 2 & 3 only) at Config Management Camp. Mark Shuttleworth's keynote is also recorded [2]. A recording of James Page's presentation [3], "Building a private cloud with the OpenStack charms" (day 1), might be available on the Ubuntu YouTube channel [4] next month.

[1] https://www.youtube.com/playlist?list=PLzSGDpUWtiotngRgVqpa8...

[2] https://www.youtube.com/watch?v=sp_Re8Mx9xk#t=1h8m15s

[3] http://cfgmgmtcamp.eu/schedule/speakers/JamesPage.html

[4] https://www.youtube.com/user/celebrateubuntu


James Page's presentation is now available at https://www.youtube.com/watch?v=pGeSxIB3KzY, published on the Juju channel: https://www.youtube.com/c/jujucharms


I don't think the author of this post really understands ZFS.

Is that a 4-disk mirror? You'd be much faster using a RAID10 layout there.

ZFS 'cache' devices aren't for write caching either; you'd want a log device if write performance is your constraint... Even then it's unlikely to make a difference on a workstation workload with 16GB of memory.
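
For illustration (not the author's actual commands, and the device names are placeholders), a striped-mirror "RAID10" pool with a dedicated log device would look something like:

  zpool create tank mirror sda sdb mirror sdc sdd log sde

The 'log' vdev absorbs synchronous writes; the 'cache' vdev only helps reads.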


I think he might just be glossing over some details? At any rate, from what I understand ZFS has a 'ZIL' (intent log) for small synchronous writes, and an L2ARC for common reads that don't fit into memory. This is just a dev setup, but for real work I think you'd want your ZIL to also be mirrored so you don't lose writes if an SSD goes bad. For anyone interested, this link has a few more details: http://www.45drives.com/wiki/index.php/FreeNAS_-_What_is_ZIL...
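
To guard against exactly that, a log device can be mirrored when it's added (a sketch; 'tank' and the device names are hypothetical):

  zpool add tank log mirror sde sdf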


Yeah... it is a subtle difference between

  zpool create home mirror a b c d cache e

and

  zpool create home mirror a b mirror c d cache e

The first is a single 4-way mirror vdev; the second stripes across two 2-way mirrors, i.e. RAID10.


Could you elaborate on the semantics of "cache"? I do understand the mirror keyword, but I know next to nothing about ZFS and don't really get what "cache" means, considering both your and the OP's comments.


A cache drive is L2ARC. Brendan Gregg's post [0] explains it well, but the TL;DR is that L2ARC is very much like an L2 cache in a CPU: recent and relevant items are held on a fast drive in case they're needed in the near future. It really is an L2 cache: the L1 is called the ARC and is held in DRAM.

If I recall correctly, the extra indexes and tags mean L2ARC requires more memory to work efficiently.

[0]: https://blogs.oracle.com/brendan/entry/test
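
Concretely, an L2ARC device is attached with the 'cache' vdev type, e.g. (pool and device names are made up):

  zpool add tank cache sdf
  zpool iostat -v tank

The second command lists the cache device separately with its own I/O statistics, which is a quick way to see whether it's actually being used.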


When your "medium-sized test server" has 128 GB RAM and half a TB worth of SSDs... in 2008! Damn.


You can use a fast disk like an SSD as a second-level cache device after RAM. Reads check the main RAM cache first, then the SSD cache disk, before finally hitting the underlying storage.


L2ARC is extremely well documented right in the code:

https://svnweb.freebsd.org/base/head/sys/cddl/contrib/openso...


Correct me if I'm wrong, but he just built a pool with three redundant copies, right? The first drive with three drives mirroring it? Whereas the latter is effectively RAID10, a stripe across two mirrored vdevs.


LXD is great. It significantly lowers the mental burden of running something in containers because they behave essentially like a VM. That being said, LXD and Juju are very Ubuntu things. LXD needs wide adoption for it to be attractive for serious workloads.


There are LXD images for Ubuntu, CentOS, Gentoo, Plamo, and other Linux operating systems, so you can really boot most any Linux as you would a VM. Between that and its use as a hypervisor in OpenStack, it gets flexed pretty heavily. Though I agree, more people should be using it!
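
For example (assuming the stock 'ubuntu:' and 'images:' remotes; exact image aliases can be checked with 'lxc image list images:'):

  lxc launch ubuntu:14.04 trusty-test
  lxc launch images:centos/7/amd64 centos-test

Each container boots its own init and can be treated like a small VM.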


LXC itself is a bit hobbled on distros other than Ubuntu if you want unprivileged containers or containers running systemd-based distros.

https://www.flockport.com/lxc-and-lxd-support-across-distrib...

LXD is more often associated with VPS-style containers, versus Docker-style containers. So, at the moment, it does have a distinctly "only runs right on Ubuntu" feel.
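
For reference, the unprivileged setup hinges on subordinate UID/GID ranges on the host, roughly like this (a sketch of the common Ubuntu defaults; ranges vary per system):

  # /etc/subuid and /etc/subgid
  root:100000:65536
  # container config: map container root to the unprivileged range
  lxc.id_map = u 0 100000 65536
  lxc.id_map = g 0 100000 65536

It's getting those kernel and userspace pieces to line up on other distros that's the painful part.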


That article is almost a year old now. Considering Wily and Xenial are systemd-based, I'm going to assume the LXD/LXC folks have figured this out.

As for your last comment, yeah, these are system containers - lighter-weight virtual machines, not Docker containers. They're meant to replicate an entire machine, with an init, more attuned to a VPS/VM than to an application container.


>>That article is almost a year old now.

Not much has changed, at least as far as running LXC/LXD with something other than Ubuntu as the host OS. Not LXC's fault exactly; it's the underlying set of requirements (kernel/systemd/etc. patches), assuming you want unprivileged containers and systemd-based containers to run.

Technically, you could find and apply all of those to Debian, Red Hat, whatever, and use them as the host OS, but it wouldn't be a trivial effort.


I used LXC containers with Ubuntu 12.04 on a Debian host ;-)


You apparently didn't need unprivileged containers and/or a guest OS that uses systemd. And it sounds like you didn't need LXD.

Edit: Or you did, and you figured out how to replicate all the needed changes back to Debian.


If you like LXD, why can't you just use it and not care whether Redhat is using it?


Many organizations use RHEL, and individual workers cannot just spin up Ubuntu for their bank database servers.


I guess we should all give up and only use what RHEL provides to us, then, since it is the only universally available distribution.


Your sarcasm is missing the point.


In its current state, you can't (in a practical way) run LXD on a host OS other than Ubuntu. So using LXD means migrating away from whatever host OS you are using and switching to Ubuntu. That's not trivial.

Over time, as each distro catches up with the same patch/kernel-version/etc. choices Ubuntu made, this problem will go away. For now, however, LXD means Ubuntu as the host OS.


This is a great development experience for distributed software. I'm really looking forward to Ubuntu 16.10 -- a lot of things seem to be coming together there.


Oh? I'm sure 16.10 will be great but there's a bunch of things coming in 16.04 which have me excited!


LXD looks sweet and seems to have many advantages over Docker. The only issue I have with it is the network settings, where I miss a configuration that lets me isolate the containers from the LAN while still giving them Internet access.


The default setup for LXD is to create a NAT bridge which is what you seem to be asking for.

Maybe something went wrong with your install?
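
If it didn't, the intent of the question - Internet access but no LAN access - can be sketched with two firewall rules on the host (the bridge name, subnets, and interface are assumptions):

  # NAT container traffic out to the Internet
  iptables -t nat -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
  # drop traffic from the container bridge to the local LAN
  iptables -A FORWARD -i lxdbr0 -d 192.168.1.0/24 -j DROP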



Seeing how much it's used there, I wish ZFS would land in linux(-next/-staging) already :-).


Well, it'll be in Ubuntu 16.04 :)
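
On Xenial, if it lands as expected, using it should be as simple as (the package name is my assumption based on the current archive):

  sudo apt install zfsutils-linux
  sudo zpool create tank mirror sda sdb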


This is because of Debian's recent announcement about packaging ZFS, right? Or is more work involved in getting ZFS into Xenial?



