Hacker News | mauricio's comments

Did you read the article? There are large storage and bandwidth requirements.


Yes, I did, of course.

Storage is MUCH cheaper when you colo, and bandwidth requirements are a large part of why you colocate instead of just running servers out of an office building that has at least two upstream connections.

I'm really curious what you think they're using now. Certainly you read the article... It says they're using bare metal servers. That's basically colo where the provider owns, but doesn't control, the hardware.


> Storage is MUCH cheaper

Probably because it's not redundant or automatically backed up at any interval. The worst days of my life have been during hardware failures at colos.


I'm puzzled about why you'd suggest such an obviously silly thing.

Purchased, fully redundant storage is MUCH cheaper than anything in the cloud when talking about any time frame of a year or more.

Obviously you can "rent" storage for a month for less than the cost of purchasing it, but only idiot startup CTOs try to argue a comparison like that.

Two sets of storage are still cheaper, and we all have rsync.


> fully redundant storage

Does it automatically repair itself when it fails? Are you sure you're making a proper comparison?

> Two sets of storage are still cheaper, and we all have rsync.

Yes, you can in fact solve 90% of the problem with 10% of the effort, what's your plan for the rest of the problem? Just call in sick?


I'm not sure what kind of point you're trying to make here, but you're a bit off on a tangent.


NAS is cheap - TrueNAS will sell you 200 TB systems for under $10k, and you can run your web server in a VM. Put a second one in a different location, set them up with the right backup scheme, and you have most of what you need. You can probably do it much cheaper depending on your needs.
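To make the cost claim concrete, here's a rough break-even sketch. The $10k/200 TB figure is from the comment above; the cloud rate is an assumed S3-style standard storage price, not a quote, and this ignores power, space, and admin time on one side and egress/request fees on the other:

```python
# Rough break-even sketch: owned NAS vs. rented cloud storage.
# All prices are illustrative assumptions, not quotes.

NAS_COST_USD = 10_000              # 200 TB TrueNAS box (figure from the comment)
NAS_CAPACITY_TB = 200
CLOUD_PRICE_PER_GB_MONTH = 0.023   # assumed S3-style standard storage rate

cloud_monthly = NAS_CAPACITY_TB * 1_000 * CLOUD_PRICE_PER_GB_MONTH
breakeven_months = NAS_COST_USD / cloud_monthly

print(f"Cloud cost for 200 TB: ${cloud_monthly:,.0f}/month")
print(f"NAS hardware pays for itself in about {breakeven_months:.1f} months")
```

Even doubling the hardware for a second site, the purchase cost is recovered within the first year under these assumptions.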

What I don't know is how to make your web servers fail over gracefully if one goes down (the really hard problem is a network split, where both servers are active and making changes). I assume other people still know how to do this.

With the likes of AWS, they tell you the above comes free with just a monthly charge. Open source projects generally like having more control over the hardware (which has advantages and disadvantages) and so want to colo. They would probably be happy running in my basement (other than that they don't trust me with admin access, and I don't want to be an admin - but perhaps a couple of admins have the ability to do this).


You are aware that backups and redundant storage existed before the cloud, right?


You are aware that adding backups and redundancy increases the costs in both hardware and management, right? I mean, _of course_ you can do that; it's computing.

Perhaps the comparison should be apples to apples when thinking about price.


You're handwaving. Anything can increase cost and complexity. Using Amazon S3 increases cost and complexity.

Set up two locations. Set up rsync. It's still cheaper than cloud storage by far, and that's even after paying yourself handsomely for the script that runs rsync.
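A minimal sketch of the "script that runs rsync." The hosts and paths are placeholders, not a real deployment; by default this only builds the command so you can see it, and `check=True` makes real failures raise loudly instead of passing silently:

```python
import subprocess

# Mirror the primary data directory to a second site.
# SRC and DEST are made-up placeholders, not a real deployment.
SRC = "/srv/data/"
DEST = "backup-host:/srv/data/"

cmd = [
    "rsync",
    "-a",            # archive mode: preserve permissions, timestamps, symlinks
    "--delete",      # propagate deletions so the mirror stays exact
    "--partial",     # keep partial transfers so interrupted runs resume
    SRC,
    DEST,
]

def run_backup(dry_run: bool = True) -> list[str]:
    """Return the rsync command; execute it only when dry_run is False."""
    if not dry_run:
        subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
    return cmd

print(" ".join(run_backup()))
```

Run it from cron or a systemd timer and alert on a nonzero exit, and you have the bulk of a second-site backup.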


> You're handwaving.

I do not see any concrete data from you either. This is a forum. Typically we'd just call this a "conversation."

> Anything can increase cost and complexity. Using Amazon S3 increases cost and complexity.

Yes, and you get something in return for that cost and complexity, so do you care to map out the differences or are you just going to stick to your simple disagreement?

> Set up two locations. Set up rsync. It's still cheaper than cloud storage by far, and that's even after paying yourself handsomely for the script that runs rsync.

You forgot monitoring. You forgot that when this inevitably breaks I'm going to have to go fix it and that you can't schedule failures to only happen during working hours. You're ignoring more than you're considering.


He's not really forgetting monitoring (etc). You'll still need monitoring in place regardless of whether you're monitoring your own servers (colo, etc) or monitoring Cloud servers.

And "when stuff breaks" happens regardless of whether you've chosen Cloud or to run your own servers. It all breaks on occasion (though hopefully rarely).


Simple disagreements suffice. I don't have to make an argument for something just because you bring it up. I'm just pointing out that you bringing it up (hmmm - without backing it up!) doesn't make it a valid point.

You seem to have a bone to pick. I said owned storage is cheaper, and you made up something about it not being redundant. You're not making a salient point as much as you're trying to handwave and dismiss owned hardware as complex, expensive, not worth the "value" one might get from S3, whatever.

If you REALLY think that storage can't be cheaper than "cloud" unless it's not redundant, then show us numbers. Otherwise, you're just making shit up.

You mention all your other things as if you're saying, "Gotcha!" You still have to monitor Amazon. You still have to manage backups. You still have to monitor resource utilization. You're not being clever by trying to imagine that other admins are as bad as you are because all those things seem hard. Good admins do those things no matter where their stuff is running.


22B params * 2 bytes (FP16) = 44GB just for the weights. Doesn't include KV cache and other things.

When the model gets quantized to, say, 4-bit ints, it'll be 22B params * 0.5 bytes = 11GB, for example.
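The arithmetic generalizes to any parameter count and precision. A tiny helper (the 22B figure comes from the comment above; this counts weights only, excluding KV cache, activations, and framework overhead):

```python
def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory for model weights only, in decimal GB.
    Excludes KV cache, activations, and runtime overhead."""
    # params_billion * 1e9 params * bytes_per_param / 1e9 bytes-per-GB
    return params_billion * bytes_per_param

print(weight_memory_gb(22, 2.0))   # FP16: 2 bytes/param
print(weight_memory_gb(22, 0.5))   # 4-bit ints: 0.5 bytes/param
```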


A popular use case for SaaS companies isn't even in utilizing Zapier themselves, but rather adding their API to the list of integrations. This can really speed up onboarding for customers -- especially if they already use Zapier.


In order to charge 10x faster at the same voltage, you need to deliver 10x the current. That means all the wiring and electronics need to be upgraded to handle the huge increase without melting.

Also, a rest stop won't just install a single fast charger; it would install more than one. So you can't really compare 10 slow chargers against a single fast one. That means the electrical service for the entire rest stop would likely need to be upgraded as well.
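Back-of-the-envelope numbers make the point. The pack voltage and charge rates below are assumptions for illustration (many EVs use 400 V or 800 V packs; real chargers negotiate both voltage and current):

```python
# Power = voltage * current. At a fixed pack voltage, 10x the charge
# rate means 10x the current, which sizes conductors and switchgear.
PACK_VOLTAGE = 400.0  # volts; assumed for illustration

def current_amps(power_kw: float, voltage: float = PACK_VOLTAGE) -> float:
    """Current drawn at a given charging power and pack voltage."""
    return power_kw * 1_000 / voltage

print(current_amps(35))    # a "slow" 35 kW charger
print(current_amps(350))   # 10x faster: 10x the amps through the cabling
```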


The Nintendo Switch.

For all the reasons you mention. I have a career, I have a family, and I have side projects. I just don’t have time to dedicate to games. But sometimes you just need to blow off steam for 15 minutes before bed and the Switch is perfect for this.

Every game can be paused/stopped by hitting one button. The controls are great, the software is great, and I've been using the dock to play games with family, like Mario Kart. I have a bunch of maxed-out characters in Diablo 3 just from playing 15 minutes here and there. :)


> Every game can be paused/stopped by hitting one button

Being able to play anytime/anywhere for any length of time and the feeling of coziness that you get from being in your own gaming pod make handheld gaming particularly appealing.

As you note with Diablo 3, grindy RPGs are particularly great on handhelds.

Advantages of the Switch over smartphones include physical buttons and joysticks (though regular and Switch-style attached controllers are also available for phones), games that are largely untainted by intrusive "free to play" monetization schemes, and Nintendo's outstanding first-party game library.

Disadvantages are carrying and managing an extra (and somewhat bulky) device, more expensive games, and the lack of plausible deniability.


I would disagree with the premise that the two stores are comparable. Opting out of the Mac App Store isn't just a financial decision; it also allows developers to bypass the sandbox requirements.

Whether or not Apple permits alternative payment systems for iOS/iPadOS, all the apps will likely remain sandboxed.



Any OS feature should be in control of the user, not Apple who can sell access to these features to developers in the form of "entitlements".


Impressive. I wonder if Google will sell servers with these cards via Google Cloud. Seems like it could be pretty competitive in the transcoding space and also help them push AV1 adoption.


You can transcode as a service on Google Cloud: https://cloud.google.com/transcoder/docs


False dichotomy.


No.


Yes. The Apache Software Foundation, relevantly, doesn't seem to be having any trouble whatsoever maintaining Lucene and Solr.


Hardware: Starlink, ARM/M1, NVMe/SSD, active noise cancellation, AR, VR, OLED, MicroLED, Wifi, GPUs, portable cameras, quantum computing, multi-touch, self-landing rockets, drones, battery tech, robotics

Software: GPT-3, voice assistants, blockchain, compression, H.264/H.265/AV1, image processing, search, databases, BitTorrent/DHT, open source, MMORPG, autonomous driving, Wikipedia, Linux, map/reduce, deep learning, deep fakes, NEAT


I think the business model and the software license often get conflated like this. Once you release software with an open source license, users are free to do what they wish as long as it's within the bounds of the license.

Separately comes the business model. If a business intends on selling a hosted version of their product, as most open source database companies seem to want to do, that product needs to compete on its own merits. Can the business truly offer a better hosting experience on someone else's cloud than those cloud providers themselves?

I think maybe we conflate the two because we want everything. We want to create open source products that a community contributes to and supports but that only the originating company can monetize. It just doesn't work that way.

