Hacker News | nullwasamistake's comments

Solar roads and sidewalks are a PR stunt. Abrasion is no joke; nothing optically transparent survives on the ground. You can see this easily in the cellar "pavement lights" common in NYC and other old cities. Light still passes through after a century, but maybe 20% of it, and very diffuse. Bad for solar panels.

Roads don't absorb enough energy to generate power either; they're not flexible enough. They're designed not to absorb energy, since absorption hastens breakdown. Potholes are a good example of a road-surface energy absorber :) .

Solar roofs are a far better bet. Elon is onto something there, but time will tell if costs can be brought down enough. Besides the good PR, solar roofs substantially reduce heat absorption, important in the sunny climates where solar works well. And we have a ton of wasted roof space. Many companies would willingly allow roof panels to be put up for free if the economics for power generation were good enough.


I actually love seeing the "pavement lights" in NYC when I walk past them, or recognizing that I'm walking under them. At this point, I know they're purely decorative, but they were a nice touch, something idealistic that left behind something cool.

As for what I was thinking on sidewalks, the other response got it right: I was thinking of putting archways and solar roofing over sidewalks. It doesn't need to be high-yield, but the benefits really sound like they outweigh the risks (as a pedestrian): it keeps the sidewalk in shade, keeps the rain and snow off, and generally would make walking a nicer experience. I've been on covered sidewalks before (including the ubiquitous scaffolds in Manhattan, Hoboken, and Jersey City; I used to aim to walk by the buildings I knew had scaffolds up when the weather was bad), and it's a better experience than just being outside.

Plus, if it's covered well, I think it'd reduce the opportunity for pedestrian-vehicle accidents, just from there being less possibility of people and cars interacting.

EDIT: Also, thank you for answering about the road idea, it was a thought. Too bad it doesn't work. :-P


I think you misunderstood the parent; they said "solar-covered sidewalks" that shade pedestrians, i.e. roofs.

Solar roofs on car parks seem good to me; better than pasture land converted to solar farms, which I'm seeing more and more of in the UK.


Somewhat. OP also mentions generating energy from the pavement, which I don't think will ever be reasonable.

Solar car park covers are a great idea. Easier access than roofs, and they don't need to be waterproof. Good cooling airflow underneath. They tend to be close to cities, where power is easier to transport.

Solar parking lot covers are the best ROI solar installations I can imagine.

Surprised Tesla isn't doing this with their Supercharger stations.


Yeah, solar roofs on car parks are pretty good; the shade aspect is just a bonus.


This was a PR stunt from the beginning.

We can't even make roads last 20 years with the most durable materials we can find. We make them out of rock and they still fall apart.

Car windshields are scratched to hell after a decade. Grocery checkout scanner windows are made of sapphire, nearly as hard as diamond, and they still need to be replaced.

Solar roads will never be a reality. An optically clear material hard and malleable enough seems a physical impossibility. Metals are the only suitable materials, and they cannot be made transparent due to hard physical constraints.


JSON sucks. Maybe half our REST bugs are directly related to JSON parsing.

Is that a long or an int? Boolean or the string "true"? Does my library include undefined properties in the JSON? How should I encode and decode this binary blob?
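These ambiguities are easy to reproduce. A sketch using Python's stdlib `json` (other languages' parsers make different choices here, which is exactly the problem):

```python
import base64
import json

# Numbers: JSON has one number type; the parser picks a native type for you.
assert type(json.loads("1")) is int
assert type(json.loads("1.0")) is float

# Boolean vs the string "true": both are valid JSON, with different meanings.
assert json.loads("true") is True
assert json.loads('"true"') == "true"

# Undefined properties simply vanish, so clients must decide whether a
# missing field means null, default, or error.
doc = json.loads('{"name": "a"}')
assert "age" not in doc

# Binary blobs: JSON has no bytes type; one common convention is base64.
blob = b"\x00\xffhello"
encoded = base64.b64encode(blob).decode("ascii")
roundtrip = base64.b64decode(json.loads(json.dumps(encoded)))
assert roundtrip == blob
```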

We tried using OpenAPI specs on the server and generators to build the clients. In general, the generators are buggy as hell. We eventually gave up after about 1/4 of the endpoints generated directly from our server code didn't work. One look at a spec doc will tell you the complexity is just too high.

We are moving to gRPC. It just works, and takes all the fiddling out of HTTP. It saves us from dev slap fights over stupid cruft like whether an endpoint should be PUT or POST. And saves us a massive amount of time making all those decisions.


Off-topic, but I'd want to work at a place where half the REST bugs are from JSON parsing.


Yeah, I don't believe I've ever seen a JSON parsing problem in 11 years of software development.


Just get a boring webapp job in CRUD world :)


I have had the absolute joy of working with gRPC services recently. Static schemas and built in streaming mechanics are fantastic. It definitely removes a lot of my gripes with REST endpoints by design.


Threads are bad for high concurrency, specifically when you need to call out to another service that has some latency.

Say you have 1000 threads. To handle a request, each one needs to make 50ms of external or DB calls. In one second, each thread can handle 20 requests. So you can handle 20k requests/second with 1000 threads. But Rust is so fast it can serve 500k requests a second. To reach that with regular threads, you need ~25,000 threads. The OS isn't going to like that.
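The arithmetic above, spelled out (a Python sketch; the numbers are the hypothetical ones from this comment):

```python
# Hypothetical numbers from the comment above.
threads = 1000
blocking_ms = 50  # external/DB latency per request

requests_per_thread_per_sec = 1000 / blocking_ms        # 20 req/s per thread
throughput = threads * requests_per_thread_per_sec      # 20,000 req/s total
assert throughput == 20_000

target = 500_000  # what the server itself could push
threads_needed = target / requests_per_thread_per_sec
assert threads_needed == 25_000
```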

With async you can run a single thread per core, with no concurrency limits, so you get your 500k requests without overhead. With fibers you just run 25k fibers, which is a little bit of overhead but easy to do.
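The overlap is easy to demonstrate with an event loop. A Python `asyncio` sketch (the 50ms sleep stands in for the external call):

```python
import asyncio
import time

async def handle_request() -> None:
    # Stand-in for a 50ms external/DB call; awaiting yields the thread.
    await asyncio.sleep(0.05)

async def main(n: int) -> float:
    start = time.monotonic()
    await asyncio.gather(*(handle_request() for _ in range(n)))
    return time.monotonic() - start

elapsed = asyncio.run(main(1000))
# 1000 in-flight "requests" overlap on a single thread: wall time stays
# close to 50ms, not 1000 * 50ms = 50 s.
assert elapsed < 1.0
```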

This is the core reason everyone is pushing async and fibers in fast languages. When you can push a ton of requests/second but each one has latency you can't control, regular threads will kneecap performance.

In "slow" languages like Python, Ruby, etc, async/fibers don't really matter because you can't handle enough requests to saturate a huge thread pool anyways.


Java services have managed to do just fine for a long time. Usually you just have dedicated thread pools for those DB/whatever calls.

But yes, eventually, for very heavy cases (more than what I would call "high") you will want async/await.


Which can still be done via java.util.concurrent (Callable, Futures, Promises, Flow) until Project Loom arrives.


25,000 threads are perfectly fine on Linux.


It is the memory associated with a POSIX thread that becomes the limit.


Why only 1000 threads? Why not 10k or 100k?

With an 8k stack for each, you can easily have 10k-100k threads on a low-end system.


Let's be real here: it's not just the memory requirements, because context switching and the associated nuking of CPU caches are not free. You can go very far with it nowadays, but you can go much farther with async code, if you really need to.


Nobody is denying that async code is faster. But it's not as dramatic as presented in the grandparent post.

And IMHO the added code complexity is not worth the trouble.


We have some pretty vanilla file upload code that needs async. S3 latency is fairly high. If you're uploading a few tiny files per user per second, thread usage gets out of hand real fast.

With a simulated load of ~20 users we were running over 1000 threads.

Several posts in the chain say that 20k+ threads is "fine". Not unless you have a ton of cores. The memory and context-switching overhead is gigantic. Eventually your server is doing little besides switching between threads.

We had to rewrite our S3 code to use async; now we can do many thousands of concurrent uploads, no problem.

Other places we've had to use async are a proxy that intercepts certain HTTP calls and a user-stats uploader that calls a third-party analytics service.

Just saying it's not that unusual to need async code because threading overhead is too high.
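The shape of that rewrite, sketched in Python's `asyncio` for illustration (the commenter's stack is Java; `upload` and its 100ms latency are made-up stand-ins for the real S3 call):

```python
import asyncio

async def upload(name: str) -> str:
    # Hypothetical stand-in for an S3 PUT with ~100ms of network latency.
    await asyncio.sleep(0.1)
    return name

async def upload_all(names, limit=100):
    # A semaphore caps in-flight uploads instead of capping threads.
    sem = asyncio.Semaphore(limit)

    async def bounded(name):
        async with sem:
            return await upload(name)

    return await asyncio.gather(*(bounded(n) for n in names))

# 1000 concurrent uploads on a single thread, no thread pool in sight.
done = asyncio.run(upload_all([f"file-{i}" for i in range(1000)]))
assert len(done) == 1000
```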


In what language?


Java


> And IMHO the added code complexity is not worth the trouble.

The thing is, this is just that: your opinion, generalized as The Truth. But engineering is about making the right trade-offs. Often threading will be fine, you'll win simplicity, and all is good. But sometimes you really need the performance, or your field is crowded and it's a competitive advantage. Think large-scale infrastructure at AWS, central load balancers, or high-frequency trading.


It goes deeper than that. There is plenty of research showing that shared memory multithreading is not even a viable concurrency model. The premise that threads are fine and simple is just false.


I'm not sure what you mean. One of Rust's major research contributions is to show that shared memory multithreading is a perfectly viable concurrency model, as long as you enforce ownership discipline to statically eliminate data races.


> The thing is, this is just that - your opinion, generalized as The Truth.

Heh? Where?


In Python I use async instead of threads for reasons unrelated to performance. https://glyph.twistedmatrix.com/2014/02/unyielding.html


Yeah, for sure. In Java/C# I see people do this all the damn time: use async methods for REST endpoints, then make a blocking DB call. Or even worse, make a non-async REST call to another service from inside an async handler.

As soon as you do that, your code isn't async anymore. And if you're using a framework like Vert.x or Node that only runs one thread per core, you're in big trouble.

The most reasonable answer I've seen to all this is Java's Project Loom: an attempt to make fibers transparently act like threads, so you can use regular threaded libraries as async code.

Rust is going to have the same problem Java does with async. A lot of code was written way before async was available, and it's not always obvious whether something blocks.
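The anti-pattern is easy to demonstrate in Python's `asyncio` (a sketch; the same failure mode exists in Java and C#): one blocking call inside an async handler serializes the whole event loop.

```python
import asyncio
import time

async def blocking_handler():
    time.sleep(0.05)  # BAD: blocks the event-loop thread; nothing else runs

async def proper_handler():
    await asyncio.sleep(0.05)  # good: yields so other tasks can proceed

async def timed(handler):
    start = time.monotonic()
    await asyncio.gather(*(handler() for _ in range(10)))
    return time.monotonic() - start

serial = asyncio.run(timed(blocking_handler))    # ~10 x 50ms, fully serialized
overlapped = asyncio.run(timed(proper_handler))  # ~50ms, all overlapped
assert serial > overlapped * 2
```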


It's possible to write crappy code, async or otherwise.

In my C# world, I use async methods for REST endpoints, which in turn use async calls for anything IO-bound (database, message bus, distributed key store, file system, etc.). I think more often than not, it's done correctly.


A message broker works here when you want async behaviour but you are integrating with sync code. To use your REST example, you receive the call, send a message to DoSomething, and then immediately return HTTP 202, perhaps with some id the UI can poll on (if required). Meanwhile, the DoSomething message queue is serviced by a few threads.
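A minimal sketch of that pattern with Python's stdlib (the handler, worker, and status map are hypothetical stand-ins for a real broker and datastore):

```python
import queue
import threading
import uuid

jobs = queue.Queue()
status = {}  # job id -> state; stands in for a real datastore

def worker():
    # A few of these threads service the DoSomething queue.
    while True:
        job_id, payload = jobs.get()
        status[job_id] = "done"  # stand-in for the actual work
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

def handle_post(payload):
    # The sync endpoint: enqueue, then immediately return HTTP 202
    # plus an id the UI can poll on.
    job_id = str(uuid.uuid4())
    status[job_id] = "pending"
    jobs.put((job_id, payload))
    return 202, job_id

code, job_id = handle_post("do-something")
jobs.join()  # in real life the client polls instead of joining
assert code == 202 and status[job_id] == "done"
```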


That works, but it's an uncommon pattern. Most people prefer to wait, in my opinion. A single DB worker doing batch updates would probably be enough.


Eh, no. The whole MVC pattern is obsolete, and every web project I've worked on that's less than 5 years old is a SPA. It's gone the way of PHP. It will always be around in legacy stuff, but hardly anyone is building new projects with it.


What are you using for the backend? A SPA has no value without one. Unless you're using something like Firebase, there's a pretty good chance your backend is an MVC app. (I think you're confusing server-rendered apps with the MVC pattern.)


If you're using graphql for the backend, is it still MVC?

Even if you can stretch the definition to fit, is it still useful to organize your code along those terms instead of what's natural in the new ecosystem? (data source resolvers, type schemas, queries/mutations)


Depends on what you're building the API in. GraphQL is a specification, not a language. For example, you can build a GraphQL API using Rails.


Eh, I guess I meant "traditional" web MVC, where the HTML template for the whole page is pre-rendered on the server.

We use gRPC-Web, so the backend is a bunch of RPC endpoints. The front end is a regular SPA.

This has worked great for us because it reduces the backend to "shipping objects" instead of all the BS you normally have to deal with in HTTP.

With gRPC there's no use for Rails, or any HTTP server framework. As far as I can tell, this is the future of web endpoints, so Rails will die out.


> rails will die out

That'll suck. Hope your company doesn't use GitHub.


Amazon still runs a bunch of Perl in the back end. Just because you're stuck with some design choice doesn't mean that framework has a solid future. People still write plenty of COBOL, but you don't see any new applications using it.


Plenty of new mainframe code is written in corporate settings. Legacy modernization is as strong as ever, and for some companies that doesn't mean moving off the mainframe; it means upgrading, cleaning up, and cloning legacy COBOL code bases, and sometimes writing new ones.


Pretty sure Rails is just adapting so that the V in the MVC is the SPA.


I'm convinced RPC frameworks are the future of REST. It's too crufty without them. Most of the Rails stack just isn't involved in those calls, so there's not much purpose in using it.


MVC is way bigger than "server-rendered websites". It applies to anything with an interface - native apps, SPAs. Arguably even services that don't have GUIs. It's a method of separating concerns that has remained remarkably popular through many different technology cycles. You're saying a lot of words but I don't think you know what they mean.


Right on. My concept of MVC just helps guide me on how I'm going to go about building some particular web functionality, irregardless of whether or not the end result is more or less SPA in nature. Some parts of my app are nearly completely Ajax-driven and server-side rendered; other parts are just static web pages.

The model means: what classes or methods do I need to build to store the data/state. The view means: how am I going to present the data to the client side, with a mix of HTML/template engine working with a backend view class/methods. The controller is: how am I going to connect the model with the view, and that is generally with forms and other supporting logic.

The level of people on this thread talking very confidently about things with which they are clearly novices is pretty shocking. I haven't seen such a thread on HN in a long time. I'm a Python guy using Flask, but congrats to the Rails users, and thanks to the developers for continuing to drive it! You've made the web a better place.


Irregardless isn’t a word. :)


Assuming your API follows REST, I find the MVC pattern to be immensely helpful. Even if the "V" part is just an object serializer, it's still a great pattern.


And this is why every website now has a loading screen in front of it.


That's a stupid implementation. You can _easily_ build static websites that load near-instantly and are tiny. Also, if you choose to shut off JavaScript, that's your choice, but don't complain that the web doesn't work.


If nearly every implementation of a technology is broken, I wouldn't blame the implementors. I would blame the technology.

Hell, Facebook basically invented reactive JavaScript, and they still have loading bars everywhere, broken content sections, and require a full reload every now and again to get content to show. Reddit is even worse. Those are two major implementations of SPAs, and they're both horribly broken.

It has nothing to do with whether JavaScript is enabled in your browser. If Facebook and Reddit, two of the most popular sites on the Internet, can't get SPAs right, I'm blaming the technology.


MVC has been around since the 70s and is one of the most formative design patterns. Hardly obsolete.


You can very easily eschew the V and run your SPA against a Rails API.


Strictly speaking, the V is still there. The View doesn't have to render HTML; it's still there, rendering JSON, in a Rails API.

You can even build an SPA using the MVC pattern. I think nullwasamistake's comment was misusing buzzwords, and they were rather comparing server-rendered apps vs client-side SPAs.


>A similarly impairing dose of cannabis results in 0.00001 ppm

Yeah this isn't going to work. The detection threshold is so low it's gonna go off if someone is smoking weed within 500 feet. There will be so many false positives it will practically be a divining rod.

Police already abuse sniffer dogs frequently, we shouldn't allow them to use another unreliable tool to arrest people.


Also

>We haven't had enough resources to run any formal trials yet to publish data, but that is changing this year.

So, ok. See you after the trials?


This... is a prison recipe :). Fish, chip coating, cooked in the chip bag in the microwave.

Not saying it's bad, I've done it. Chips are a great substitute for bread crumbs.


It's not the same as copying the "feature"; in many cases Google straight up takes your content and puts it on their homepage. The difference between a product and a feature is nothing: every app ever made is just a bunch of features put together. AWS S3 is a great example; it's just Dropbox for nerds.

A lyrics website recently put fake lyrics on their site to prove Google was copying them, and sure enough the fake lyrics showed up a couple weeks later.

Google isn't just taking content from "evil" companies like Yelp; they're doing it to everybody.

Job search, shopping, song lyrics, news, and who knows what else are all being somewhat blatantly lifted. And nobody can stop it, because blocking Google is a death knell for any site.


> The difference between a product and a feature is nothing.

Granted, but the comment you replied to was about the difference between a feature and a company.


Fibers are way better than async. Go has them, and Java is working on them with Project Loom. There's a slight performance disadvantage (maybe 2%), but they're light-years easier to work with.

Languages with fibers figure out execution suspend points and physical core assignment for you and abstract it all away. So physically they're doing the same thing as async, just without all the cruft.

Rust decided not to go with fibers to avoid having a runtime. I still disagree with this because they already do reference counting, not every worthwhile abstraction is zero cost.


> Rust decided not to go with fibers to avoid having a runtime. I still disagree with this because they already do reference counting, not every worthwhile abstraction is zero cost.

Refcounting is a library feature; it does not require a runtime and has no impact on code not using it.

Fibers are a language feature; they do require a runtime, and they impact everything.

Rust actually stripped out its support for fibers as its community moved its usage down the stack, in much the same way it stripped out "core" support for GC/RC (the @ sigil) and internal (Smalltalk/Ruby-style) iteration.


The difference with reference counting is that you only get it when you actually use it, but a runtime included by default does not have this property. And having an "opt-out runtime" doesn't fit the bill either; we tried that too.

(I'd also be very skeptical of the 2% figure; where did you get that from?)

Remember, "zero cost" means "zero additional cost", everything has a cost.


2% is from Quasar, a Java fiber library. Some simple benchmark compared it to async code.

Now that Rust has async, couldn't they make fibers an opt-in replacement for threads? Your libraries use async, but you can handle those calls with fibers. Fiber support is compiled into your code but not the libraries.


Interesting, I wonder what the details look like. We saw fairly big differences in Rust.

> Your libraries use async, but you can handle those calls with fibers. Fiber support is compiled into your code but not the libraries

We tried this! Making the attempt is actually one of the biggest reasons that we decided to remove green threads from Rust; it brought in tons of disadvantages and no real advantages.


I understand why you didn't, but async brings a whole new opportunity to try! At least in library form.

Fibers are great because you can pretend they're threads, and everyone knows threads. Async is a legit PITA even if you're familiar with it. And the local state stored for a fake thread is often useful: when doing async I find myself frequently building hacky hashmaps to hold local variable values. Websocket code is a good example of the worst case: 20k ongoing connections held by async code (1 thread per core). It's a nightmare in everything except Vert.x Sync (fibers in Java), or maybe Golang and Erlang.

With fibers you get to keep all your local variables and "pretend" threads actually exist. It's a huge boon for productivity in the few languages where it's possible
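A toy contrast in Python (the handlers are hypothetical): callback-style async forces per-connection state into an external map, while a thread or fiber keeps it in plain locals on its own stack.

```python
# Callback style: no stack frame survives between events, so per-connection
# state has to live in an external map (the "hacky hashmap").
conn_state = {}

def on_open(conn_id):
    conn_state[conn_id] = {"messages": 0}

def on_message(conn_id):
    conn_state[conn_id]["messages"] += 1

# Thread/fiber style: state is just local variables; the stack survives
# across blocking calls, so no bookkeeping map is needed.
def connection_loop(events):
    messages = 0  # a plain local, kept alive by the (green) thread
    for ev in events:
        if ev == "message":
            messages += 1
    return messages

on_open(1); on_message(1); on_message(1)
assert conn_state[1]["messages"] == 2
assert connection_loop(["message", "message"]) == 2
```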

