fphhotchips's comments | Hacker News

This is the first time I've ever heard of unofficial screen upgrades even being _possible_, and I'm at least two standard deviations from the mean on the "likes to tinker" scale.

I can't even begin to think about how a laptop screen upgrade would go. Who's manufacturing them? How do I get just one? How do I make sure I don't spend a month waiting for shipping only to get a fake? How do I make sure the housing is going to fit right? How do I make sure the pinouts match?

...and so on. An official upgrade pathway eliminates all of that. Sure, it's not bringing you back to "average person", but Framework have been super clear that's not who they're after. They want people in my bracket. To be honest, as a cohort, we've proven we're willing to (over)pay for this kind of thing, too. It's why the PC market still exists despite graphics cards being overpriced by about double.


I don't think it's particularly common for techies to upgrade the screen, just that they're the only ones who would, because... well, upgrading a screen is rarely ever needed. The only reason I did is that work was offloading some good laptops at low cost, but they had sub-1080p screens. I took one with a broken screen for free, and when I did the replacement I used a higher-end model's replacement part.

I.e. the only extra step is finding a laptop of the same screen size and eDP (Embedded DisplayPort) generation to select a panel from. The rest is the same as any screen replacement. If you've already got a good screen, an upgrade usually isn't possible, though, as you're limited by the eDP generation's speed rather than by the panel.


One thing I'm confused about in all this is what makes Rebble think they have a right to the data in the first place. They scraped it! "We don't like you scraping the data we scraped" doesn't hold water for me, whether Eric retained it or not.


Yeah, they definitely started by scraping. Apparently 500 of the 13,500 apps were submitted post-Pebble, and Rebble also did a bunch of other upgrades over time.

But you're right that there's some hypocrisy here, given their roots, and they don't really acknowledge that.


I think the whole conversation shows how ridiculous it is to be so worried about who's "scraping" what. The open web is designed to be public and permissive. If you don't want someone accessing "your" content, then don't serve it to the public. And if you do decide to serve it to the public, don't complain when someone accesses that data in a way you don't like. The Internet would be so much better without all these people obsessed with how their bits are being accessed and whether X or Y counts as "scraping." Good grief, people! Find something else to worry about.


Perhaps in general, but in this case it seems like they did have an agreement not to scrape, which overrides the general scrape-at-will ethos that you're describing.


Pebble threw it away, Rebble did not, and Core is a newcomer with no right to anything.


Core is making new, compatible hardware, at scale, not as a hobby.

We can buy that hardware from Core, today.

That gives them quite a few rights.


What? Absurd. It gives them the right to nothing at all. They can make an app store themselves any time they want.


Well... I have a watchface on the old store. It's non-functional because external APIs changed. I just recently decided to update it, and there is now a much improved version in my GitHub account.

I asked Rebble weeks (!!) ago to give me back access to my own account and binaries on their store, and to this day I've heard nothing from them. Nada.

If Core starts a new store, I will immediately put the new, much improved version of my old watchface on it. Rebble can keep the old, non-functional one in their archive if they want.


Do you mean that you uploaded it to Pebble back in the day before Rebble? Have you gone through these steps? https://help.rebble.io/recover-developer-account/?viewall=tr...


Yes. I did all that. Sent the email (many times). Got no reply. Ever.


I understand there are two sets of requirements for the Nvidia 50 series: the higher-end 5070 Ti and up, and the lower-end 5070 and down. What's the chance of releasing a 5070 Ti/5080 version?


It can be a bit difficult, particularly now that some phones are getting more demanding about re-authorising before a payment will go through. Tap, try to get the fingerprint scanner working, tap again is a much less fluid procedure than tap-and-go.

The position thing is just something you get used to. There aren't that many reader models in active use, and most of them are pretty good about marking where the NFC reader is these days.


> There's still plenty of hardware-specific APIs, you still debug assembly when something crashes, you still optimize databases for specific storage technologies and multimedia transcoders for specific CPU architectures...

You might, maybe, but an increasing proportion of developers:

- Don't have access to the assembly to debug it

- Don't even know what storage tech their database is sitting on

- Don't know or even control what CPU architecture their code is running on.

My job is debugging and performance profiling other people's code, but the vast majority of that is looking at query plans. If I'm really stumped, I'll look at the C++, but I've never once looked at the assembly for it.


This makes sense to me. When I optimize, the most significant gains I find are algorithmic: an unnecessary extra call, a data structure that needs to be tweaked, or just switching to a library that operates closer to the silicon. I rarely need to go to assembly or even a lower-level language to get acceptable performance. The only exception is occasionally getting into the architecture specifics of a GPU. At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know. Thank you, compiler programmers!


> At this point, optimizing compilers are excellent

the only people that say this are people who don't work on compilers. ask anyone that actually does and they'll tell you most compilers are pretty mediocre (tend to miss a lot of optimization opportunities), some compilers are horrendous, and a few are good in a small domain (matmul).


It's more that the God of Moore's Law has given us so many transistors that we are essentially always I/O blocked, so it effectively doesn't matter how good our assembly is for all but the most specialized of applications. Good assembly, bad assembly, whatever; the point is that your thread is almost always going to be blocked waiting for I/O (disk, network, human input) rather than on something that a fancy loop optimization enabling better branch prediction can fix.


> It's more that the God of Moore's Law has given us so many transistors that we are essentially always I/O blocked

this is again just more brash confidence without experience. you're wrong. this is a post about GPUs and so i'll tell you that as a GPU compiler engineer i spend my entire day (work day) staring/thinking about asm in order to affect register pressure and ilp and load/store efficiency etc.

> rather than something that a fancy optimization of the loop

a fancy loop optimization (pipelining) can fix some problems (load/store efficiency) but create other problems (register pressure). the fundamental fact is the NFL theorem applies here fully: you cannot optimize for all programs uniformly.

https://en.wikipedia.org/wiki/No_free_lunch_theorem


I just want to second this. Some of my close friends are PL people working on compilers. I was in HPC before coming to ML, having written a fair number of CUDA kernels and a lot of parallel code, and having dealt with plenty of I/O.

While yes, I/O is often the bound on computation, I'd be shy to really say that in the consumer space, where we aren't installing flash buffers, performing in situ processing, or even pre-fetching. Hell, in many programs I barely even see any caching! TBH, most stuff can greatly benefit from asynchronous and/or parallel operations. Yeah, I/O is an issue, but I really would not call anything I/O bound until you've actually gotten into parallelism and optimized the code, and not even then until you've applied that to your I/O operations! There is just so much optimization that a compiler can never do, and so much optimization that a compiler won't do unless you give it tons of hints (all that "inline" and "const" stuff you see in C, not to mention the hell that is template metaprogramming). Those are things you could never get out of an untyped language like Python, no matter how much of the backend is written in C.
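
As a rough sketch of the kind of overlap I mean (plain C++ with hypothetical file names, not anything from a real codebase): keep one read in flight while the previous chunk is being processed, and the program stops being "I/O bound" in the naive sense.

    #include <fstream>
    #include <future>
    #include <iostream>
    #include <iterator>
    #include <string>
    #include <vector>

    // Hypothetical blocking read of one whole file into memory.
    static std::vector<char> read_chunk(const std::string& path) {
        std::ifstream in(path, std::ios::binary);
        return std::vector<char>(std::istreambuf_iterator<char>(in),
                                 std::istreambuf_iterator<char>());
    }

    // Stand-in for CPU-bound work on a chunk.
    static long process(const std::vector<char>& data) {
        long sum = 0;
        for (unsigned char c : data) sum += c;
        return sum;
    }

    int main() {
        // Hypothetical inputs; the point is the overlap, not the workload.
        std::vector<std::string> paths = {"a.bin", "b.bin", "c.bin"};
        long total = 0;

        // Kick off the first read, then keep one read in flight
        // while the previous chunk is being processed.
        auto pending = std::async(std::launch::async, read_chunk, paths[0]);
        for (std::size_t i = 0; i < paths.size(); ++i) {
            std::vector<char> chunk = pending.get();
            if (i + 1 < paths.size())
                pending = std::async(std::launch::async, read_chunk, paths[i + 1]);
            total += process(chunk);  // compute overlaps the next read
        }
        std::cout << total << "\n";
    }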

That said, GPU programming is fucking hard. Godspeed you madman, and thank you for your service.


> At this point, optimizing compilers are excellent and probably have more architecture details baked into them than I will ever know.

While modern compilers are great, you'd be surprised by the seemingly obvious optimizations compilers can't do, either because of language semantics or because the code transformations would be infeasible to detect.

I type versions of functions into Godbolt all the time, and it's very interesting to see what code is and isn't equivalent after O3 passes.
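
For instance (a minimal sketch, not taken from anyone's real code): IEEE float addition isn't associative, so without -ffast-math a compiler has to keep this reduction as a serial chain of adds rather than a wide vector sum, even though most humans would happily accept the reassociated result.

    // Each iteration depends on the previous one; strict FP semantics
    // forbid the compiler from reassociating the adds, so this loop is
    // not auto-vectorized at -O3 unless you opt in to -ffast-math.
    float sum(const float* a, int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i)
            s += a[i];
        return s;
    }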


The need to expose SSE instructions to systems languages suggests that compilers are not good at translating straightforward code into optimal machine code. And using SSE properly can often speed up code by several times.
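
A minimal sketch of what that looks like in practice (assuming SSE is available and, to keep it short, that n is a multiple of 4): a hand-written reduction that processes four floats per iteration, accepting a changed summation order the compiler is not allowed to introduce on its own.

    #include <immintrin.h>

    // Four additions per iteration using 128-bit SSE registers.
    float sum_sse(const float* a, int n) {
        __m128 acc = _mm_setzero_ps();
        for (int i = 0; i < n; i += 4)
            acc = _mm_add_ps(acc, _mm_loadu_ps(a + i));  // four lanes per step
        float lanes[4];
        _mm_storeu_ps(lanes, acc);
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];
    }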


The full-featured version doesn't even run on macOS. Excel, PowerPoint, and Word for Mac are all pale imitations of their Windows counterparts.


The irony is that Excel started out as a Mac app.


Certainly "there should be one, and preferably only one, obvious way to solve a problem" hasn't been the case for a while, or maybe ever. See: tfa.

Perhaps it's just because I'm not Dutch.


Melbourne is easily the worst city in the country for this. Most of the tech sector is in the very large enterprise space led by the banks, and as a result it's who you know and whether you went to Melbourne Grammar or Geelong Grammar that determines which company you work for once you reach a certain level. Sydney is better just because there's more small-scale stuff going on, and because CBA is better at tech than NAB and ANZ combined. (I hate Sydney otherwise and am based out of Melbourne.)

Some places in Melbourne get real work done, even in the data sector. They're hard to find, but they exist.


People look at me funny when I say this, but it's true.

I work in performance - a space where we're thinking about threading, parallelism and the like a lot - and I often say "I want to hire people who play with trains". What I mean is "I want people who play Factorio", because the concepts and problems are very, very similar. But fewer people know Factorio, so I say trains instead.

I think I know why it's enjoyable even though it's so close to work, too. It's the _feedback_. Factorio shows you visually where you screwed up and what's moving slowly. In actual work, the time and frustration are usually in finding that.


I feel like it's not the RAT you'll notice from inside the plane; it will be the silence from the engines. Combine that with at least a momentary flicker of the lighting (I'm not sure whether a RAT on a 787 will run cabin lighting, but I doubt it), and you'll know.

