Ultra fast Thunderbolt NAS with Apple M1 and Linux (chrisbergeron.com)
220 points by cyberge99 on Aug 4, 2021 | 99 comments


Just yesterday I was thinking how badly I want an obscenely fast local datastore for Docker. I am managing a handful of clients right now who are all using Docker containers for their microservice environments and my local machine runs out of disk daily.

What I really want is lightning quick network storage, but I don't think it would be feasible to roll out 10g networking in my current rental home.


10GbE can be done over RJ45 with a direct cable connection; it's something I've done before in a pinch between my personal NAS and a client machine. Just be careful about your storage protocol, because NFS has its pros and cons just like iSCSI and FCoE, and with macOS as a client you're likely in for a Bad Time like I was.


Out of curiosity, what issues did you run into with NFS on macOS? I'm looking into building a NAS for a network of macOS, Linux, and BSD machines and figured NFS would be my first choice.


NFSv3 and NFSv4 clients were fine when the machines were local, over TCP, and always-on, like a previous Mac Mini I had, but resuming from a laptop was like pulling teeth and may or may not have been the reason for some of the crashes I got. iSCSI in theory should be better, but I had a lot of problems getting the iSCSI initiator I found to work, despite what I thought was a pretty simple use case. I gave up on the project around then, since macOS seemed to simply require more budget than I had to buy an appropriate iSCSI initiator. This was over 5 years ago and things may have changed between then and now.


I've had weird issues with some applications and NFS on macOS (notably, Final Cut Pro X and Photoshop). And I've also had weird issues with SMB and some apps (notably, Photoshop again!).

But at least SMB has been pretty stable for years. NFS support has had some hiccups. A few 10.14 macOS releases broke NFSv3 and NFSv4 for me, but it was fixed in later releases.


I've had stability problems with NFS on macOS, so I ended up using SMB instead. SMB ended up being CPU bound on my NAS due to encryption I haven't taken the time to figure out how to disable. It's close to line speed for me, though, and it's also reliable.
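
If the NAS is running Samba, that encryption (and signing, which also costs CPU) can be turned off server-side; a rough sketch of the relevant smb.conf lines, assuming the stock /etc/samba/smb.conf location and a trusted LAN where dropping encryption is acceptable:

  # /etc/samba/smb.conf -- relevant [global] lines only (parameter names vary by Samba version)
  [global]
      # Samba 4.11+; older releases use "smb encrypt = off"
      server smb encrypt = off
      # signing also burns CPU; only disable it on a network you trust
      server signing = disabled

On the macOS client side, signing can similarly be relaxed via /etc/nsmb.conf (signing_required=no), but it's worth measuring before and after to confirm encryption is actually the bottleneck.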


I'm not the author, nor am I an Apple user anymore, though I was at one point.

NFS doesn't carry a lot of metadata, so you lose a lot of functionality. iPhoto, for example, could cause major issues if you persisted its library on an NFS share, potentially making the library irreparable.


Just curious what size is your hard drive?

I had this pain with my last macbook. I ALWAYS ran out of disk space. Never again! I went with a 1 TB drive for my M1.


I got the 2TB drive so I can run out of even more disk space! ;-)


A larger common image + saner node_modules should save a lot


I offload Docker to another machine and I'm still using about 400GiB of storage on my relatively new MacBook, and I basically never use this machine outside of work (and even then I didn't use it for about 4 months because I preferred Linux). It has no local media or anything.

I'm shocked by how much disk I've used actually.

https://imgur.com/a/iHV4IKn


Would running something like the docker proxy below on an old laptop help?

  TLDR: A caching proxy for Docker; allows centralized management of (multiple) registries and their authentication; caches images from any registry. Caches the potentially huge blob/layer requests (for bandwidth/time savings), and optionally caches manifest requests ("pulls") to avoid rate-limiting.

https://github.com/rpardini/docker-registry-proxy
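
If the goal is mostly to stop re-pulling the same Docker Hub layers, a lighter-weight variant is Docker's built-in pull-through mirror support; a minimal sketch, assuming the cache runs on a hypothetical box at 192.168.1.50:5000 (note that registry-mirrors only applies to Docker Hub pulls):

  # /etc/docker/daemon.json on each client machine
  {
    "registry-mirrors": ["http://192.168.1.50:5000"]
  }

  # restart the daemon so it picks up the mirror
  sudo systemctl restart docker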


At some point you need to build images locally or store images locally to run a local docker-compose cluster.

I need a fat block store for that.


That is painful for sure.


What about a separate Linux box lying around just to run Docker? An 8GB M1 Mac just doesn't have enough RAM for the Docker VM.

Looks like a local Docker server should work exactly the same way as local Docker while you're at the same location.
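
One way to try that without changing any tooling is to point the local docker CLI at the remote daemon over SSH; a minimal sketch, assuming the box is reachable as user@linuxbox (hypothetical host):

  # create a context that talks to the remote daemon over SSH
  docker context create linuxbox --docker "host=ssh://user@linuxbox"
  docker context use linuxbox

  # builds and runs now execute on the remote box
  docker ps

The catch is bind mounts of local directories, since those resolve to paths on the remote host.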


Then you get into the problem of low latency directory sync between your box and that host. I’m still trying to crack that nut.


Why don't you simply use one or more NVMe Thunderbolt/USB disks and move the caches there?


Author here: I actually used a Thunderbolt NVMe and adapter for a bit. Then, for kicks, I got another one and combined them in a RAID-0 on macOS. Realizing I didn't want to run NVMe's in RAID-0 over two cables in any sort of production (homelab - I use production loosely here) is what sent me down the Thunderbolt rabbit hole that precipitated this blog post.
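
For anyone curious, the macOS side of a striped set like that is a one-liner with diskutil; a rough sketch, assuming the two externals show up as disk2 and disk3 (hypothetical identifiers) and noting that it wipes both disks:

  # stripe (RAID-0) the two externals into one journaled HFS+ volume
  diskutil appleRAID create stripe FastScratch JHFS+ disk2 disk3

  # inspect the resulting set
  diskutil appleRAID list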


In case a single-cable solution would work, I've been using an external TB NVMe RAID 0 in this enclosure for several months and it's worked flawlessly: https://www.amazon.com/gp/product/B08S5JPWR6 (The only drawback is that the power supply is bizarrely-large, and the power connector feels a bit janky. In practice it hasn't been an issue.)


Not the OP, but if the NAS was pre-existing and already serving other network clients, there's no reason to add a second one.


Not OP, but my files are shared with many hosts. With local files it's a PITA to keep state coherent.


I was actually looking for something like this. I have one computer that I want to back up to both local hard drives and to the cloud.

A NAS is great, because you can back up to it and have the backup stored on local hard drives, but it also has a CPU and it can be responsible for uploading the data to the cloud. This is better to do from a dedicated device, especially if you are uploading large amounts of data with limited bandwidth, which I am.

The constraint is that locally backing up to a NAS is slow, even though it is sitting next to my computer. Instead of buying into 10GbE networking, it’d be nice to just use a Thunderbolt cable.


Running FreeNAS on a 1GbE network and I wouldn't call it slow at all. The initial backup is much, much faster than the 1TB personal media library I sometimes back up to OneDrive. I think I have 3TB of video from Usenet that gets downloaded onto a Windows box, uses a 10-year-old Intel as an unpack scratch pad, and is then stored on the NAS box. Sure, the initial library transfer is slow, but if you can use a wire it's not that slow. Certainly not Comcast 25Mb-upload slow.

Agree about 10Gb devices being expensive, especially if you're invested in the UniFi ecosystem. I'm tempted to do a direct 10Gb SFP+ connection using used Intel or Mellanox 10Gb NICs, but I haven't pulled the trigger. 10Gb Thunderbolt NICs are too expensive and give off questionable amounts of heat.

SFP+ is the way to go right now and gives you options. Fiber, DAC (copper), or Ethernet.


I got 2 used SFP+ cards and a DAC cable for $60 total on eBay. 10G between my file server and primary desktop is great. 1G for everything is fine.


What brand and model of cards? Right now I’m limited to hardware that is compatible with FreeNAS.


Mellanox ConnectX-2

I'm running Linux


I have a FreeNAS too. Uploading to it gives me about 100 MB/s write speed, which is definitely enough for most of my needs.


Yeah, I was debating using SAS drives instead of SATA, but the cost was too much. Bought WD Red drives instead. Hopefully I won't have to resilver.


There's something weird here:

  root@nas-04:~ # ifconfig thunderbolt0 10.1.1.2

  root@nas-04:~ # ifconfig thunderbolt0
  thunderbolt0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
          inet 192.168.10.2  netmask 255.0.0.0  broadcast 10.255.255.255
The IP on the interface is different than the one that it was theoretically configured with. My suspicion is that perhaps the IP used was changed at some point and this is just an editing slip.

> Now that the interface has an IP, we have to add a static host route entry for it to communicate:

If both sides are configured with an appropriate netmask on the same subnet, there's no need for a specific host route.
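
Concretely, on a Linux end the whole setup can be a single address with a sensible prefix; a minimal sketch with iproute2 (addresses assumed, mirrored with .1 on the other side):

  # the kernel adds the connected route for 10.1.1.0/30 automatically,
  # so no static host route is needed
  ip addr add 10.1.1.2/30 dev thunderbolt0
  ip link set thunderbolt0 up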


Author here: You’re correct, it’s an editing error. There’s another where I do ifconfig on the mini, but I pasted from the nas. I’ll correct the post.


This is cool, but my main takeaway is that you can get around needing the Thunderbolt enablement header by shorting two pins together, and then use the card on any motherboard?

What the hell Intel...


If memory serves, the main problem is that some Thunderbolt cards don't ship with the necessary firmware on them, so they would need to be used at least once in a supported motherboard before being used in just any system.


That seems even more WTF, and puts me in mind of the old Winmodem setup. Not precisely analogous, but boy it seems weird.


There are still benefits today to using actual modems instead of glorified sound cards or literal sound cards. I have a pactor modem for ham radio use, and it can do several modes other than pactor, like rtty. Compared to using something like fldigi, if I can hear the other side and they can hear me, the data will 100% go through correctly on the physical pactor modem. There's weird jank even when using the basic RF mail software, which can control a variety of modems, or just use your sound card, too. Every session is longer when using software-based modems, for example.

I'm aware that the pactor modem itself is probably a software defined device like an FPGA, but it is much better than the software on the PC - and I can use it with an anemic laptop from 2007 with no issues at all, since it just sends text back and forth over a serial port - via USB.


"What the hell Intel... "

I don't understand any of it ...

If I wanted to add a LPT parallel port to my motherboard, I would find an appropriate add-in card with the physical port and plug it into the PCI bus and locate an appropriate driver.

Thunderbolt is just another kind of port, right? If you buy the add-in card to get the physical port and you have a driver to drive the card ... what in the world is "enablement" and why would it be required?


The enablement header was a pretty what-the-hell thing to begin with, I totally agree. I always saw it as them trying to avoid the development of Thunderbolt add-in cards for some reason.

But to find out it does so little that shorting two pins together gets around it is just too funny.


Not to take away anything from the fun of figuring out Thunderbolt on Linux, and such. But, I wonder if a NAS with a 40 Gbps or even 100Gbps NIC hooked up to a Thunderbolt to Ethernet adapter would achieve a better result without extra effort.


The fastest thunderbolt to ethernet adapter I could find is 25 Gbps and it's $1500. 10 Gbps ones are in the $200+ range as well.


True. I assumed it was a simple adapter from TB3 to 40Gbps Ethernet, or at least QSFP+, since TB also advertises 40 Gbps, but it looks like an expensive niche. At $1500, you might as well spend the time to figure out how to get Linux to work better with a Thunderbolt card.

For other devices in the network that don't require Thunderbolt, though, Mellanox 40G NICs are surprisingly cheap.


How cheap is "surprising" and who sells them? I wouldn't mind upgrading the link between my server barn and my house to an assuredly future-proof 40Gbit...


People talking about cheap 10G/40G are getting it used on eBay. They have a bad habit of comparing used prices against new prices without disclosing it. For example, if you're talking about new equipment, 40G is obsolete and you should buy 25G/50G instead. But used 40G is super-cheap because it's obsolete.


For 10GbE it's slightly cheaper if you go to SFP+, but then you need SFP+ cables. I went this route as they also run cooler. Then I got sick of Thunderbolt's finicky config and went for cheap Mellanox PCI cards, which are truly excellent and basically zero config.

The dongle is QNAP QNA-T310G1S Thunderbolt 3 to 10GbE Adaptor, US$169 on Amazon.


Buy an eGPU enclosure and put a NIC in that?


Probably, but Thunderbolt would cost 5x less for an equivalent 40Gbps NIC.


You can find tons of used 10-40Gbps NICs on ebay for $50 or less. For a simple point-to-point connection with both sides having PCI-E slots that's going to be easier & cheaper to set up than thunderbolt will be. It'll also be more reliable & more practical since getting 40Gbps over Thunderbolt requires extremely short cables, whereas you can get DAC cables for 40Gbps QSFP or Infiniband up to 7M or so. And there's no confusion around what the cable can actually do, unlike the nightmare that is finding the right USB-C cable with the right signal integrity requirements at the right distance for what you actually need/want.


This has not been my experience -- I use both thunderbolt 3 and Mellanox ConnectX-3 40gbps cards, and have been looking for an excuse to add a Mini M1 to the homelab.

The cables and cards for PC Thunderbolt retrofit are far (like 3-5X!) more expensive than the MLNX cards and optical cables.

$0.02 :)


> The cables and cards for PC Thunderbolt retrofit are far (like 3-5X!) more expensive than the MLNX cards and optical cables.

I added new Thunderbolt 3 PCIe cards (Gigabyte Titan Ridge) to two of my 2012 Mac Pros for around $100 each a bit over a year ago, and the prices have dropped since, so I don't get how that is 3-5x more expensive than MLNX cards. Expense was a far smaller factor than the finesse it took to get macOS to support them, but it was the only path that worked for my needs (different need than the OP article) and so I endured the pain of getting it working.


I wonder if there will be "reality" tv shows about data hoarders one day. Probably have to wait for data to be easily visualized with VR or AR.

In fact, I really love the idea that I could have to walk to a particular room in order to view a particular set of old photos. Wearing some AR glasses, of course. And that I could grant view access to visitors. Would suck if the photos accidentally glitched through the floor and I had to dig down through a foot of concrete to get them back though.


> As NVMe disks and interfaces have gotten faster I’ve found myself saturating a 10Gb ethernet network.

I love this race between network speeds, drive speeds, and RAM speeds. The classic "net vs disk vs ram vs cache vs register" interview question is much trickier these days.


I recently moved to somewhere with a real fiber internet connection and was impressed that I could get up to gigabit speeds, but when I tested it I found my wifi network could only handle about 100Mbit, sometimes dropping to 50. First time I have ever had wifi actually be the limiting factor. Thankfully my apartment has wired Ethernet to most rooms, so my Steam downloads are flying along now.


I think most people will find that they'll get <100Mbit over wifi. I had to design, buy, and implement 802.11ac gear to get close to gigabit. And then wifi6/802.11ax came out and nothing wanted to play nicely with Linux and wifi6. So I got a fiber media converter and put a switch on my desk, backhauled over fiber to the "house switch", which is in turn backhauled to the server barn switch via fiber. Doing this prevents lightning from blowing up my wifi access points and NAS boxes, which has happened before and is what created the need for fast WiFi in the first place.


> In this post I discuss how you can upgrade a NAS Server by adding Thunderbolt 3 for lightning fast connectivity at 20 or 40Gbps.

This is an interesting angle of attack, though I'll note that 10Gbps Ethernet was added as an option on the M1 Mac Minis.


10Gbps was already an option on the Intel Mac Minis. My guess is that Apple envisioned a use case for it to be a file server or similar.


Here's an idea: use link aggregation to make it 10G faster, then.

I wonder what the simplest option is. Could it be made to work with a virtual interface, like WireGuard?


I just wish Apple would offer a native iSCSI initiator.


It sure would be convenient if they Sherlocked the developers of commercial solutions. The globalSAN initiator's been working well for me, FWIW.


>globalSAN initiator's been working well for me

It doesn't look like it's going to be supported for M1 Macs or is really all that compatible with Big Sur. From their site, it seems that you have to jump through a few hoops to add a kernel extension to get it to work[1].

Sounds like a good opportunity for Apple to roll something into the OS.

[1] https://support.studionetworksolutions.com/hc/en-us/articles...
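
For comparison, the Linux side of this ships in-box via open-iscsi; a rough sketch of discovering and logging into a target (the portal address and IQN below are hypothetical):

  # discover targets exposed by the NAS
  iscsiadm -m discovery -t sendtargets -p 192.168.1.10

  # log in to one of the discovered targets; it then shows up as a block device
  iscsiadm -m node -T iqn.2021-08.example:nas.target0 -p 192.168.1.10 --login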


The maturity of Thunderbolt hardware and software support should put TB on the short list of options for high-speed local connectivity and high-bandwidth multi-disk access.


Especially now that it's USB 4.


They're very similar, but there are differences. Most notably, Thunderbolt 4 guarantees a minimum speed[0] for compliance, whereas USB4 has a lower minimum.

Also, I believe USB4 was designed for smaller files / blocks and Thunderbolt is designed for more raw output.

[0] https://liliputing.com/2020/11/differences-between-thunderbo...


Thanks for the correction, and for citing such a great reference!


This year I set up 10Gbps networking in my home lab since I am lucky enough to have 10Gbps internet and it didn’t make sense to pay for that but not have it for all my machines.

Or at least I tried to set it up. 10G pcie cards are now cheap enough, but routers are not. Thunderbolt adapters are also expensive, bulky and use a ton of power[1]. I’m keeping my eye open for a surplus router and cheaper adapters but in the meantime I am only using a 2 port card in my main machine to serve at 10Gbps to one other machine. I wonder if I could also serve my m1 MacBook using thunderbolt 3.

[1] In my work life I also tried to use Ethernet in some embedded projects. It turns out Ethernet is just power hungry in general. Even 100Mbps Ethernet uses about 0.5W per port. There are standards for lower power but I’ve yet to see anyone use them.


Have you considered using an x86 machine with dual 10G NICs as your router? Then all you'd need is a 10G switch, which may be cheaper than a router. I've considered the approach but haven't undertaken it yet.
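
If anyone goes that route, the Linux side is mostly just forwarding plus NAT; a minimal sketch with nftables, assuming enp1s0 faces the WAN and enp2s0 the LAN (interface names are hypothetical):

  # enable routing between the two NICs
  sysctl -w net.ipv4.ip_forward=1

  # masquerade LAN traffic going out the WAN-facing NIC
  nft add table ip nat
  nft add chain ip nat postrouting { type nat hook postrouting priority 100 \; }
  nft add rule ip nat postrouting oifname "enp1s0" masquerade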


Funny, I went down this rathole very shortly after discovering the "Thunderbolt networking" change in the Linux kernel. My hope was to use the Linux box (with 10 GbE) as a bridge/switch to give my Mac cheap 10 GbE access. While point-to-point did hit more or less the expected speed, bridging was horribly slow. Ended up just getting a Sonnet Solo10G SFP+ adapter (which is great).


> bridging was horribly slow.

Can you quantify that? (I'm wondering if it might be good-enough for my needs.)


I'm afraid it was 2+ years ago so I don't remember, but it must have been slower than the comparable speed from a gigabit wired connection. FWIW, I was so happy with the Sonnet (not as hot as some alternatives I've used) that I bought two more. However, unless you are using fiber or for other reasons prefer SFP+, going with the grain and getting 10Gbase-T is probably easier.

Getting back to the OP, this could be interesting for me if and only if I can get fiber based Thunderbolt-4, but it seems to not yet be available.


I'm pretty sure Thunderbolt 3 cables will work: https://www.amazon.com/optical-thunderbolt-cables-Corning/s?...


Thanks. $528 for 30 meter TB3 compared to $35 for 30 meter OM4 and a pair of $10 10G SFP+ modules (and the same OM4 will carry 25GbE just fine). Seems like it's not really cost effective.


Does anyone know: if I installed Ubuntu on an M1 Apple Mac, would I get PCI passthrough to the GPU? That would be a no, right, because it isn't possible due to the hardware? I was going to do the Ubuntu conversion myself, but I am running a developer beta and the commands don't work, and if I wait longer it might just be in the next LTS or release of Ubuntu by the time Monterey is stable.


This blog uses WebP as its image format without a fallback option. I had to switch to Firefox because Safari wouldn't display the pictures.


Thanks for pointing this out. I’ll add a fallback using the “picture” element (TIL).

I also use some javascript to delay rendering until the browser window stays on an image. That could very well be introducing some wonkiness.


You should use the built-in lazy loading https://caniuse.com/loading-lazy-attr or an Intersection Observer for this https://developer.mozilla.org/en-US/docs/Web/API/Intersectio...
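
For reference, both the fallback and the lazy loading can be done declaratively; a rough sketch of the markup (filenames and dimensions are hypothetical):

  <picture>
    <source srcset="benchmark.webp" type="image/webp">
    <!-- JPEG fallback for browsers without WebP support, e.g. Safari on Catalina -->
    <img src="benchmark.jpg" alt="Thunderbolt benchmark results"
         loading="lazy" width="1200" height="675">
  </picture>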


Is there really a need for lazy loading a blog page over just providing a height and width attribute and letting the browser work it out?


Not a need, per se, but it's an SEO recommendation according to web.dev measure. Both are actually recommended.


Odd. Displays fine on my Safari 14.1.2. Checked and the images do appear to be WebP for me as well.

Current Big Sur release, not a Beta.


Safari 14.1.2 on latest Catalina doesn't render


Only Safari on Big Sur has WebP support.


That's very strange. Is this a versioning problem (i.e. Catalina not being supported anymore?) or is there some kind of hardware restriction?

From what I can find, Catalina should still be supported for more than a year from now, right?

Edit: well, that's bizarre. Caniuse.com [0] lists the same restriction: WebP is only available from Big Sur onwards. And then people wonder why nobody likes developing for Safari.

[0]: https://www.caniuse.com/?search=webp


Safari uses AVFoundation and ImageIO for handling video and image data respectively. These frameworks are tied to OS releases. This ends up meaning the latest Safari on an older OS might not handle some formats/codecs.

So Safari gets free accelerated handling of new formats and codecs but only when the underlying OS supports them.


For media formats the support is often dependent on the OS. It's been that way forever (ask anyone deploying video in early html5 and getting it to work cross-os and cross-platform). I guess it's more expected with videos, but images are (usually) handled the same.


Maybe with hardware decoding, but software decoding is often done in browsers when hardware support is not available.

I suppose Apple doesn't want to include a software fallback for platforms where hardware isn't available to discourage developers from using the format. It's not like Apple would need to write any complicated decoding algorithms when there's already an open source implementation that's free to use.


H264 didn't work in FF on MacOS and Linux (but it still worked on windows) until v34 (and it only started working because Cisco donated a license), Ogg Vorbis only works on MacOS 11.3 or later in safari (and it also depends on which container you use), HEVC in IE/EdgeHTML depended on hardware (with no fallback to software whatsoever), AV1 in FF65 only worked on Windows, AV1 in FF66 only worked on Windows and MacOS, chrome on android 2.3 required you to specify a m4v without mime-type, but doing it with mime-type worked fine on desktop chrome.

I'm just saying it's not unprecedented to have the same browser version supporting different audio/video/image formats depending on OS or hardware.

For image formats it's not as usual, but if someone is making a site only supporting webp then I'd assume they'd look up some support matrixes beforehand.


That's because of the licensing issues, especially H.265/HEIF. AV1 has barely any hardware support and software decoding is terrible for performance, so while many browsers can easily build in support, it's not really worth it to enable it by default.

VP9 and WebP patents are all granted freely by Google. WebP images aren't as terrible for power consumption even in software because the browser doesn't decode 60 of them every second.


Regardless of that it's not uncommon to have the same browser version support different media codecs/formats.

Regarding the examples, when it comes to Ogg Vorbis there is no issue with software decoding (it's software everywhere), AV1 was granted software decode support on some OS'es but not others in the exact same browser, version and hardware (and IIRC it's the encoding perf that is especially bad for AV1, not decoding) and the chrome on android issue also shows how legal or hardware issues are not the only things limiting what is available (it can also be bugs that just stay around).

All I'm saying is that testing for browser version A is not a surefire test. Either check the support or test on browser A version B on OS C version D on hardware E (for each of the A/B/C/D/E variants) or just stick with what works across the board. Either way you probably won't end up shipping a site with only webp support.


Works fine in the latest Safari.


This is cool, I’ve been thinking about something like it.

However I’m still slightly disappointed because I expected an article describing an M1 Mac Mini as a NAS, running Linux (this dream probably isn’t technically there just yet, as Thunderbolt [and network?] drivers are so far still missing from the Asahi project).


Where by "ultra fast" it is meant that it is much slower than a directly-attached SSD that occupies the same port.


Local storage and networked storage always have their places. Local solid state storage almost always gets you better performance because you negate the overhead of the network. Unless you're talking fancy arrays connected to saturate 100G NICs.

Local storage falls apart the moment you need that storage accessible from another machine…


This doesn't say anything about how such storage would be accessed by the Mac and some other clients at the same time. It literally just runs iperf over TB. As a matter of architecture, it leaves open the question of whether it would be preferable to directly attach the storage to the Mac and export it to the rest of the network from there.


Networked storage also falls apart the moment you need anything better than basic file storage. So many filesystem features do not work networked, most network file mounts do not support any kind of modern security and the ones that do are often super slow.

Everything has its place, but I wasted so much time trying to get a nas to work as storage for my home server.


Not to sound pedantic, but local storage and networked storage are not alternatives. They satisfy different needs.

If you need reasonable access to your storage from more than one computer, you need networked storage(of any kind).

Alternatively, if you are running say a database server, and you know the primary storage could be local, it is ideal to choose better performing local storage and of course plan for backups to networked storage.

As much as I agree with your pain points of networked storage, it’s not like you can “replace” it with local storage everywhere.

I have dealt with a couple PB of networked storage (FreeBSD+ZFS) for a research cluster, served over NFS on a dedicated network to about 100 servers running recent versions of Linux. It worked like a charm, while we admittedly kept things simple in terms of file system features.

All clients mount the appropriate NFS shares they need via autofs(only when they are needed).

ZFS being ZFS, it was just a marvel of software engineering and solid as a rock, while giving us transparent compression, superb read caching in memory, and cheap snapshots.


I have a Synology and use the iSCSI feature for VMs that aren't even in the same building as the NAS; what sort of issues aren't solved this way?

Don't get me wrong, I have nothing but problems with NFS and SMB access on the Linux clients. Windows works perfectly, as in whatever I set on the Synology is what I get on Windows.

There's got to be a better way!


I found basically nothing that would let me mount a filesystem as if it were natively connected, securely, over the local network, without going to real enterprise stuff I either couldn't understand or didn't have the hardware for.

Everything I found was basically FUSE style things where it looks kind of like local storage but any time you want to use advanced features or high performance it would fail.

SSHFS was secure and easy to set up but was maxing out the CPU on my NAS and also did not support file permissions.


Dumb question: Can you use Mac's shared folder for another machine to access files?


Yep, it is possible. Here is the official support page on how to set up a shared folder that would be accessible by any machine using SMB[0], and here is one for access by a macOS machine[1] (that is a bit simpler for setting up certain advanced config options).

0. https://support.apple.com/guide/mac-help/share-mac-files-wit...

1. https://support.apple.com/guide/mac-help/set-up-file-sharing...
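
And from the other machine's side, if it happens to run Linux, the Mac's share mounts like any other SMB share; a minimal sketch (the address, share name, and user are hypothetical):

  # requires cifs-utils; prompts for the Mac account's password
  sudo mount -t cifs //192.168.1.20/Shared /mnt/macshare -o username=me,vers=3.0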


Well, yes, that is implied by the fact that it says “NAS” in the title.

In general, you can usually do faster local than over the network. People use a NAS for other reasons.


That SSD has zero redundancy, so you're trading away the purpose of the NAS by going for speed. If that's what you want, then you're wasting time looking at NAS equipment anyway.

So what's your point?


Most SSDs have a lot of redundancy in the form of multiple flash chips, it's just not exposed to you, the user, by the controller chipset.

There are also PCI-E x16 cards with multiple NVMe slots. There's nothing stopping some company from making something similar for Thunderbolt.

And I guess the cost is too high and the connectors would be too bulky, but it would be nice to have something like a SODIMM for flash that separates the memory from the controller.


Are we really comparing the "redundancy" of an SSD's internal workings to a NAS? You're reaching really far for that. Whatever "redundancy" an SSD has is whatever it needs just to present a single instance of that data to you. Sounds pretty damn inefficient, but if that's what it takes to reliably get 1200MB/s speeds I'm okay with it for situations where I need that.

The redundancy a NAS can provide means losing an entire drive (or more depending on config) in the unit without data loss. Even with that failed drive, the data is still accessible. The failed thing can then even be replaced and the system can be brought back to the normal full redundant state. Your "redundancy" of an SSD doesn't allow for that to happen.

I feel like I'm actually feeding trolls with this



