
To quote a recent post: quadratic time is good enough to land in production, and bad enough to fail in production.

Author here. I think the Rust vs. Go question is interesting. I actually originally wrote esbuild in Rust and Go, and Go was the clear winner.

The parser written in Go was both faster to compile and faster to execute than the parser in Rust. The Go version compiled something like 100x faster than Rust and ran something like 10% faster (I forget the exact numbers, sorry). Based on a profile, it looked like the Go version was faster because GC happened on another thread while Rust had to run destructors on the same thread.

The Rust version also had other problems. Many places in my code had switch statements that branched over all AST nodes and in Rust that compiles to code which uses stack space proportional to the total stack space used by all branches instead of just the maximum stack space used by any one branch: https://github.com/rust-lang/rust/issues/34283. I believe the issue still isn't fixed. That meant that the Rust version quickly overflowed the stack if you had many nested JavaScript syntax constructs, which was easy to hit in large JavaScript files. There were also random other issues such as Rust's floating-point number parser not actually working in all cases: https://github.com/rust-lang/rust/issues/31407. I also had to spend a lot of time getting multi-threading to work in Rust with all of the lifetime stuff. Go had none of these issues.
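To give a feel for the kind of code involved, here's a rough sketch in Go (the node types are hypothetical, not esbuild's actual AST): every pass recursively switches over all node kinds, so per-frame stack usage determines how deeply nested the syntax can get before overflow.

  package main

  import "fmt"

  type Node interface{}

  type BinaryExpr struct{ Left, Right Node }
  type UnaryExpr struct{ Operand Node }
  type NumberLit struct{ Value float64 }

  // In Go each frame of this recursion stays small. Per the issue linked
  // above, the equivalent Rust match reserved stack for all arms combined,
  // so deeply nested input overflowed much sooner.
  func eval(n Node) float64 {
    switch n := n.(type) {
    case *BinaryExpr:
      return eval(n.Left) + eval(n.Right)
    case *UnaryExpr:
      return -eval(n.Operand)
    case *NumberLit:
      return n.Value
    }
    return 0
  }

  func main() {
    n := Node(&NumberLit{1})
    for i := 0; i < 100000; i++ { // deep nesting, like a big minified file
      n = &UnaryExpr{n}
    }
    fmt.Println(eval(n)) // Go's growable goroutine stacks handle this fine
  }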

The Rust version probably could be made to work at an equivalent speed with enough effort. But at a high-level, Go was much more enjoyable to work with. This is a side project and it has to be fun for me to work on it. The Rust version was actively un-fun for me, both because of all of the workarounds that got in the way and because of the extremely slow compile times. Obviously you can tell from the nature of this project that I value fast build times :)


Your viewpoint is predicated on an assumption: that equality is the equilibrium state of human society. In such a world where this equilibrium exists, there is no need for "ideology and affirmative action" because any affirmative actions to create inequality will be erased through the passage of time as the world returns to the equilibrium of equality.

The problem with this attitude is that it is utterly unwarranted, unsubstantiated, and totally Panglossian. It is irrefutable that for generations American society took "affirmative action" to suppress women, to pigeonhole them into an impoverished gender role concerned only with housekeeping and child rearing. You don't even have to go back that far to see this "affirmative action" (http://www.boredpanda.com/vintage-ads). Even if you believe that there is no continuing discrimination,[1] what on earth makes you believe that past discrimination will simply be erased through the passage of time?

The solution to gender inequality issues is to simply hire women. Hire women and promote women. Once your organization and industry isn't perceived as male-dominated, once qualified and ambitious women don't turn away from the field to pursue others where being a woman is less likely to be a career liability,[2] the qualified applications will materialize.

One of the greatest success stories of gender equality, in my opinion, is professional services firms, law in particular but also accounting and consulting. The legal industry went from 95%+ male in the 1950s and 1960s to almost even today, even at large corporate law firms. While tech companies are scratching their heads trying to figure out how to get any women in the door, law firms are under fire because "only" 1/3 of new partners each year are women. "Only" 15% of Big 4 accounting firm partners are women, and it's a source of constant consternation for women.[3] While any discussion of trying to get women into tech is clouded by the specter of "affirmative action," law firms, at least at the lower levels, no longer even need to take explicit steps to recruit equal numbers of women. Professional services firms are proof that when you hire women and promote women, equalized gender ratios become self-perpetuating. There are still major challenges faced by women today in the professional services industry, but these firms are operating in a whole different century than the tech sector.

[1] Which is itself a ridiculous belief in the face of studies proving that older men are, say, less likely to mentor younger women than younger men, and that employers tend to treat similar resumes with male versus female names differently.

[2] Who wants to, as a woman, invest themselves in a career in tech when there is a decent chance your boss will be this guy: https://news.ycombinator.com/item?id=6875311 ("there are differences in the way men and women think, with men more naturally drawn to STEM fields...")

[3] At what tech company are the most senior engineering roles even 15% women? Marissa Mayer estimated about 15-17% for women engineers in Silicon Valley across the board. For comparison, Big 4 accounting firms are 45-50% women across the board, with 15-20% at the partner level.


Literally the point of a job at a for-profit company is to make money, yes, primarily for yourself and secondarily for the business. This is by no means moral or virtuous. But it is true, and because it is true, the people who admit the truth of it will be more successful in the sense that they will be working with the system, and the people who act as if it is somehow untrue and there is some greater virtue behind it will be frustrated in the sense that they will never be able to work against the system enough to unseat it.

Why did the startup get acquired by the large company in the first place? To make money for the shareholders (mostly the founders), partly through an exit event, partly through the ongoing value of their shares. Sure, they wrote an internal email and a blog post about the "next stage of their journey" and how the big company "shares their vision" or whatever, but that's not the real reason - the reason is money. Why do employees get equity? To incentivize them to care about the money the company makes, because otherwise they wouldn't care sufficiently about the secondary goal of making money for the company (i.e., making more money for management, who sets the equity policies). Why do employers compete on salary in the first place? Because that's what being employed is about - making money.

If you want to do good work, to change the world for the better, to have fun, or whatever, you have two options. One is to secure your place within the system of being successful by making money - i.e., to show management that you will make them money if they keep you around and keep you happy - and then find some room to maneuver within it. There are a lot of people who are happy and fulfilled with their jobs because they can do this. The other is to leave the system (retire, work part-time, join a convent, etc.).

But refusing to admit the rules of the game will work about as well as trying to play a game of chess by bowling a ball at the pieces and knocking them down. You might say you don't care to win, which is fine, but that's hardly the problem - everyone involved, including you, will end up upset.


As someone who has reverse engineered hundreds of random file formats of all kinds over the years, the comment that suggests understanding the code is generally spot on.

You can basically divide the world into read/write/write-only formats and read-only formats.

For read/write/write-only formats, usually the in-memory data structures were written first, and then the serialization/deserialization code. So it is almost always more useful to see how the code works than to try to just figure out what random bytes in the file mean. A not insignificant percent of the time, the serialization/deserialization code is fairly straightforward - read some bytes, maybe decompress them, maybe checksum them and compare to a checksum field, shove them in the right place in the in-memory structure/create a class using them, move on.
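To make that concrete, here's a minimal sketch (in Go, with a completely made-up format: 4-byte magic, a length, a CRC32, then a zlib-compressed payload) of the kind of straightforward read/write path I mean:

  package main

  import (
    "bytes"
    "compress/zlib"
    "encoding/binary"
    "fmt"
    "hash/crc32"
    "io"
  )

  // Hypothetical on-disk layout: magic, compressed size, checksum, payload.
  type header struct {
    Magic [4]byte
    Size  uint32
    CRC   uint32
  }

  // writeRecord mirrors how such code is usually written: the in-memory
  // data came first, and serialization is the afterthought.
  func writeRecord(w io.Writer, payload []byte) error {
    var buf bytes.Buffer
    zw := zlib.NewWriter(&buf)
    zw.Write(payload)
    zw.Close()
    comp := buf.Bytes()
    hdr := header{Magic: [4]byte{'B', 'L', 'O', 'B'},
      Size: uint32(len(comp)), CRC: crc32.ChecksumIEEE(comp)}
    if err := binary.Write(w, binary.LittleEndian, &hdr); err != nil {
      return err
    }
    _, err := w.Write(comp)
    return err
  }

  func readRecord(r io.Reader) ([]byte, error) {
    var hdr header // read some bytes
    if err := binary.Read(r, binary.LittleEndian, &hdr); err != nil {
      return nil, err
    }
    if string(hdr.Magic[:]) != "BLOB" {
      return nil, fmt.Errorf("bad magic %q", hdr.Magic[:])
    }
    comp := make([]byte, hdr.Size)
    if _, err := io.ReadFull(r, comp); err != nil {
      return nil, err
    }
    if crc32.ChecksumIEEE(comp) != hdr.CRC { // compare to a checksum field
      return nil, fmt.Errorf("checksum mismatch")
    }
    zr, err := zlib.NewReader(bytes.NewReader(comp)) // maybe decompress
    if err != nil {
      return nil, err
    }
    defer zr.Close()
    return io.ReadAll(zr) // shove into the in-memory structure, move on
  }

  func main() {
    var f bytes.Buffer
    writeRecord(&f, []byte("hello"))
    payload, err := readRecord(&f)
    fmt.Println(string(payload), err)
  }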

Different parts of the program may read different parts of a file, but again, usually a given part of the deserialization/serialization code is fairly understandable.

Read-only formats are scattershot. Lots of reasons. I'll just cover a few. First, because the code doesn't usually have both the writing and reading, you have less of a point of reference for what reading code is doing. Second, they are not uncommonly memory mapped serializations of in-memory structures. But not necessarily even for the current platform. So it may even make perfect sense and be easy to understand on some platform, but on your platform, the code is doing weird conversions and such. This is essentially a variant of "the format was designed before the code". Lots and lots more issues.

I still would start by trying to understand the deserialization code rather than the format directly in this case, but it often is significantly harder to get a handle on.

There are commonalities in some types of programs (i.e., you will find commonalities between games using the same engine, etc.), but if you are talking "in general", the above is the best "in general" I can give you.

One other tip - it is common to expect things to be logical and make sense - you can even see an example in this very article. Don't expect this.

For example: data fields that don't make sense or are broken, but the program doesn't use them so it doesn't matter; checksums that don't actually check anything; signed/verified files where the signing key is easily changeable; encryption where the key is hardcoded or stored in the file; you name it.

Most folks verify that their program works; they don't usually go look and verify that everything written/read makes any sense.


>> How hard is it for them to say "In the past, we have seen crowd funding sites/services that never delivered anything. That resulted in a lot of unhappy end users going to their credit card companies and initiating a chargeback which we're on the hook to pay. As you can see, if you took the money from your account and disappeared we'd be on the hook for that entire $X. That's why it's important we understand more about what you're up to"

The reason they don't is probably experience. If you're delivering bad news, don't mess around and don't over-explain. Every extra sentence in your communication is just a potential complication. Tell the target the problem simply, including details that directly relate to their conduct, and tell them how to fix it to your satisfaction. Don't tell them the history of this type of problem.

You do this because the target is going to be upset about it and they're going to want to argue. The more you communicate, the more points they have to argue. In your description, I see at least four more points to argue ("we're not like those guys that didn't deliver...our users are different...we're trustworthy people...you're only a percentage of our total funding so your risk really isn't that great..."). None of those points matter and none of them are going to change anything.

Paypal has told them what's wrong and given them concrete steps to fix it. Anything more is just going to drag out the resolution of the problem from Paypal's perspective.


All of this is specifically about the "Go's Apparent Philosophy" paragraph.

Instead of pulling the same quote from Rob Pike over and over and over again, people should watch/read his recent talk about Go titled "What We Got Right, What We Got Wrong": https://commandcenter.blogspot.com/2024/01/what-we-got-right.... A few excerpts:

> Given the title of this talk, many people might expect I'm going to be analyzing good and bad things in the language. Of course I'll do some of that, but much more besides, for several reasons.

> But the real reason I'm going to talk about more than the language is that that's not what the whole project was about. Our original goal was not to create a new programming language, it was to create a better way to write software. We had issues with the languages we were using—everyone does, whatever the language—but the fundamental problems we had were not central to the features of those languages, but rather to the process that had been created for using them to build software at Google.

> The creation of a new language provided a new path to explore other ideas, but it was only an enabler, not the real point. If it didn't take 45 minutes to build the binary I was working on at the time, Go would not have happened, but those 45 minutes were not because the compiler was slow, because it wasn't, or because the language it was written in was bad, because it wasn't. The slowness arose from other factors.

> And those factors were what we wanted to address: The complexities of building modern server software: controlling dependencies, programming with large teams with changing personnel, ease of maintainability, efficient testing, effective use of multicore CPUs and networking, and so on.

> In short, Go is not just a programming language. Of course it is a programming language, that's its definition, but its purpose was to help provide a better way to develop high-quality software, at least compared to our environment 14 plus years ago.

> And that's still what it's about today. Go is a project to make building production software easier and more productive.

I will repeat it because I think it is very important:

> And that's still what it's about today. Go is a project to make building production software easier and more productive.

Another quote about Go being more than the language:

> A few weeks back, when starting to prepare this talk, I had a title but little else. To get me going, I asked people on Mastodon for input. A fair few responded, and I noticed a trend in the replies: people thought the things we got wrong were all in the language, but those we got right were in the larger story, the stuff around the language like gofmt and deployment and testing. I find that encouraging, actually. What we were trying to do seems to have had an effect.

And finally:

> Perhaps the most interesting consequence of these matters is that Go code looks and works the same regardless of who's writing it, is largely free of factions using different subsets of the language, and is guaranteed to continue to compile and run as time goes on. That may be a first for a major programming language.

> We definitely got that right.

So now that we know better what Go is about, we can try to see why iterators were added. From https://github.com/golang/go/issues/61405#issuecomment-16388.... I'll quote some parts:

> Can you provide more motivation for range over functions?

> The most recent motivation is the addition of generics, which we expect will lead to custom containers such as ordered maps, and it would be good for those custom containers to work well with range loops.

> Another equally good motivation is to provide a better answer for the many functions in the standard library that collect a sequence of results and return the whole thing as a slice. If the results can be generated one at a time, then a representation that allows iterating over them scales better than returning an entire slice. We do not have a standard signature for functions that represent this iteration. Adding support for functions in range would both define a standard signature and provide a real benefit that would encourage its use.

> There are also functions we were reluctant to provide in slices form that probably deserve to be added in iterator form. For example, there should be a strings.Lines(text) that iterates over the lines in a text.
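For the curious, this is roughly what such an iterator looks like with Go 1.23's range-over-func (a sketch; strings.Lines is only quoted above as a possibility, so its final shape may differ):

  package main

  import (
    "fmt"
    "strings"
  )

  // lines yields one line at a time instead of allocating a []string.
  // The signature func(yield func(T) bool) is the standard one that
  // range understands; yield returning false means the loop body broke.
  func lines(text string) func(yield func(string) bool) {
    return func(yield func(string) bool) {
      for len(text) > 0 {
        line := text
        if i := strings.IndexByte(text, '\n'); i >= 0 {
          line, text = text[:i+1], text[i+1:]
        } else {
          text = ""
        }
        if !yield(line) { // stop early on break
          return
        }
      }
    }
  }

  func main() {
    for line := range lines("alpha\nbeta\ngamma\n") {
      fmt.Print(line)
    }
  }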

To me this provides enough justification for adding the iterators. But that's not the important part. The important part is listening to what people say about Go, not just "Go the language", but Go as a whole. And listening to what people said recently, not what they said a decade ago. This is how you genuinely engage with other people, I think.


> How can I help people out when they tell me that io_uring is insecure?

Maybe those people are right, though? I think the discussion starts from a place that assumes other people are wrong. If you start there, you will fail to convince people of anything, because you automatically dismiss their claim, without thinking about what they might have seen and what they might think.

A better starting point would be wanting to get to the bottom of it, and assess the security of io_uring. If you start from that point and you give it an honest, thorough assessment, and it turns out it "looks secure", you'll have an easier time convincing people.

You might still be wrong (assessing io_uring's security is not trivial), but at least you tried to understand why people think that.

And reminder: it's ok to "agree to disagree".


"Running a successful open source project is just Good Will Hunting in reverse, where you start out as a respected genius and end up being a janitor who gets into fights."

Credit: https://web.archive.org/web/20200909035546/https://diff.subs...


There is no such thing that allows you to avoid the race condition. There are a few issues:

- You can't control every call to open, and thus enforce O_CLOEXEC on all files (by default)

- Because of this, most files will be leaked into the child, this is likely not desirable, especially since some distros have low open file limits for processes

- posix_spawn (the replacement for fork/exec) allows you to specify a list of file actions to perform, including opening and closing; this seems like a solution at first glance

- However, there is a TOCTOU race here, as you need to first make a list of file actions with posix_spawn_file_actions and then call posix_spawn. Note that every file you want to close needs to have its own file action; this means you need to determine all the files that are open and manually add each one. This alone introduces the problem of determining all open files in your process.

- In a multi-threaded program it is possible for another thread to open a file between the calls to posix_spawn_file_actions and before posix_spawn, thus creating the potential for files to leak into the child.

- Even in a single threaded program, it is possible for posix_spawn to invoke functions established with pthread_atfork, and atfork handlers are allowed to call signal-safe functions, including but not limited to open. Implementations aren't required to call atfork handlers, and modern glibc doesn't, but that is by no means a guarantee.

- Therefore, my argument is that posix_spawn cannot be used to create a process with a guaranteed minimal and clean state, and so you are back to square one with fork/exec.

The defaults for working with these APIs are just completely wrong, and very hard to get correct. The issues with fork/exec are numerous and nuanced, and most people simply aren't aware of the issues or don't care. There is a specific song and dance that needs to be performed when using fork/exec, and usually you want to hide all of that behind a library function... which will end up looking something like CreateProcess. Sure, you might use the builder pattern to make it look nicer, but you really don't want the fork/exec split.

Here are few other issues with fork/exec, non-exhaustive:

- Only signal-safe functions can be invoked between fork and exec. This means you need to be super careful with any stdlib code you invoke between these two (or better yet, just don't).

- Multithreaded programs cannot call fork without exec. period. The state of objects such as mutexes and condition variables will be inconsistent. This is implied by the above, but I wanted to specifically call this out.

- Detecting whether exec failed (as opposed to the program itself failing) requires using an extra pipe marked with CLOEXEC; I have seen too much code using a magic exit code (which is wrong).

- Cleaning up the state of the child process and not accidentally creating a zombie is a bit tricky and there are some race conditions to be aware of. pidfd is not a solution if you need to support older kernels, although it helps tremendously.

- Interaction with signals is a bit messy.

- When fork is called, all pages will be marked as copy-on-write, this can be slow for processes with lots of memory allocated, and is completely redundant if your goal is to call exec. If other threads exist and are writing to memory, the pages they touch will be copied unnecessarily.

- Like I harped on earlier, files are inherited by default, not the other way around. You should be required to manually list the fds that you want the child to inherit (likely stdin, stderr and stdout only for 99% of cases).

- If exec fails, _exit must be called! You cannot terminate the child in any way that might run destructors or invoke callbacks/handlers, as these can perform I/O and would thus be observable.

CreateProcess is just much better, and the whole "it takes 12 parameters how awful" argument against it is 100% a non-issue. It isn't 1960 anymore, it's okay to have a function with a name longer than 6 letters and more than 3 parameters.
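For contrast, Go's os/exec is an example of exactly that kind of library function: the child inherits only the fds you list, and a failed exec comes back as an ordinary error (internally it uses the CLOEXEC-pipe trick described above). A small sketch, Linux-specific because of /dev/fd:

  package main

  import (
    "log"
    "os"
    "os/exec"
  )

  func main() {
    f, err := os.Open("/etc/hostname") // an fd we deliberately pass down
    if err != nil {
      log.Fatal(err)
    }
    defer f.Close()

    cmd := exec.Command("cat", "/dev/fd/3")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    // Only stdin/stdout/stderr plus ExtraFiles reach the child;
    // ExtraFiles[i] becomes fd 3+i in the child. Nothing else leaks.
    cmd.ExtraFiles = []*os.File{f}
    if err := cmd.Run(); err != nil {
      log.Fatal(err) // exec failure surfaces here, not as a magic exit code
    }
  }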


Crudely speaking, industrial R&D has three phases: proof of concept, de-risking, and manufacturing. 7LP might (or might not) have passed the proof of concept stage. De-risking is usually the hardest stage (such that many people don't even consider it a stage, preferring to break it down into smaller stages). It is highly likely that this is where they decided to cut bait. (Incidentally, de-risking is usually heavy opex in the R&D department whereas HVM/NPI is mostly heavy capex; while "just" an accounting trick, this can be significant in many companies, and create a real barrier if the necessary opex spend is not palatable.)

The reason one would expect 7LP to be cheaper to launch now is that their competitors have got equivalent processes into production that can be learned from, or even "learned from" (ripped off). Equipment suppliers have debugged their offerings and pruned their lines to what's useful. In short, someone else has de-risked it and found what works. That is a major advantage. In other industries, one company doing the de-risking can launch an entire industry (see Apple and the iPhone) if moats are low. Moats are very, very, very high in the foundry space, so there are not many companies that could copy TSMC 7FF or Intel 7 even if they wanted to. GlobalFoundries could do it. But they choose not to. If they were on the cusp of a node introduction, they'd love to see their competition swoop in and solve the last problems for them. Sure, it makes them late to market, but at a vastly lower total spend to enter the market (one with tremendous moats and limited competition!). They could probably profit off that.

But they don't want to. So either leading-edge process nodes are uneconomical (in which case, good riddance, leave the market), or they don't actually have significant R&D effort completed and are still billions of dollars in R&D opex away from having anything viable. In which case... nothing of real value was lost.

So, yeah, it sucks that we lost a competitor. But I don't think we lost GF on the leading edge because they didn't like the color of paint on the new ion-implanter's frame. I think we lost them because they didn't have a product and they knew it. In which case there is nothing to mourn.


Posting anon.

In 2009, the startup where I was working was hitting the skids, and our investors (correctly) were not willing to back us. We all kept grinding for a month or two in honorable futility, but after a while, my bank account depleted and I had to go.

To make various ends meet and to keep my mental health during the wind down, however, I took up some contract work that I found through various friends in the SF startup scene. One company that I really liked and did some small stuff for was Burbn, a mobile-only location check-in app hinged around taking photos of your location.

Missing my friends in NYC (I made a lot of friends in SF, but my inner circle were my college buddies from CMU; I went to tech and they went to finance, sigh), I decided to leave SF to head to NYC and get a fresh start.

As I was leaving, I wanted to tie up a few loose ends, so I emailed my contact at Burbn and said I was likely to be unavailable for any more work, but that I liked the project and hoped for the best for him. He responded and said that he was near funding on a small pivot, and that if I was interested, there might be a full-time role available. I declined - I was mentally done with SF and the startup scene (Larry Chiang, 111 Minna, the rise of FB spam-crap like RockYou, etc.) as it was then.

That person was Kevin Systrom; that pivot was Instagram.


It's one of the most popular services in the world, but has one of the worst user experiences of all the apps I use.

This is really common. It's a sign that the value isn't derived from the software itself, but what the software enables you to do. It doesn't need to be good. People pay to access Spotify's library of music and podcasts, despite the UI.

When you run a startup, having people hungry to use your MVP despite its flaws is a classic signal that you're on to something valuable. I could list hundreds of shockingly bad apps with awful user experiences that I've happily used over the last 40 years, because they all did something I really wanted or needed to do. Almost every 'enterprise' app is a total mess from a UI perspective - but they make a fortune because the value that users get from them makes it worth putting up with.

People think a beautiful UI is something that every app needs, but really every app just needs to do something useful. None of them need a good UI until there's a competitor with an equivalent service that has a better UI. Only then does the UI actually matter, because it becomes something users will use to choose which service they buy.


Ironically, by flippantly dismissing the concern about the issue, you’re also dismissing the motivations of the people who championed the term and encouraged its adoption. They certainly think it’s important. Labels are very important! The term “Hispanic” was created in an effort to politically organize disparate Hispanic populations, who identified with myriad different nationalities rather than a common race (“La Raza”).

Capitalizing “black” is a political statement, not a linguistic description. In English, races aren’t capitalized, while terms referring to distinct ethnic groups are (so you capitalize “German” but not “white”). As John McWhorter persuasively explains, the rationale for capitalizing “Black” is that American descendants of slaves are a distinct group bound by shared history and culture. That’s fine insofar as “Black” is used to refer to what we might call “ADOS.” But in practice “Black” is used as a racial designator. At Harvard, for example, 40% of the “Black” students are African immigrants. Obama is “Black”—that describes his race, not his ethno-cultural group.

Capitalizing “black” is a political effort to center the experience of black people in American life. Consider the term “BIPOC,” which breaks out “black” and “indigenous” as first among equals even though they’re included in the “POC.” What is the intention of that?

These labels and classifications, in turn, have real world ramifications. My daughter’s school has a segregated “black girl magic” lunch every week. There’s no “half white half bangladeshi girl magic,” and few non-black, non-white students, so she was invited to attend the weekly lunch once a month. Even at age 12, my daughter is able to perceive there is a racial hierarchy designed to invert the historical one.


I’ve found my biggest differentiator over the last decade was soft skills - writing, dealing with stakeholders, knowing how to talk to normies, being comfortable in the room with decision makers, being able to do effective presentations and project management skills.

And knowing how to “deal with ambiguity” and focus on how to add business value. If you look at the leveling guidelines of any tech company, anything above mid level is focused on “scope”, “impact” and “dealing with ambiguity”.

Knowing AWS really well is just a tool, and it doesn't hurt that I have a stint at AWS ProServe on my resume.

Notice “codez real gud” is not a differentiator.

There is no hard skill you can learn that thousands of others don't know that will set you apart.

Well except for some vertical market stuff that will leave you pigeonholed.

Sources:

https://www.levels.fyi/blog/swe-level-framework.html

https://dropbox.tech/culture/sharing-our-engineering-career-...


Anyone who sees Trump as either an aberration or a savior is deeply deluded on the state of America.

In my opinion, the US world order’s decay was unmasked in 2008, and it has been accelerating since. The gap between the two economic realities of poor rural America and the rich coastal cities (and even within them there is so much clear wealth disparity) has only gotten worse, and the political and bureaucratic system isn’t really capable of skillfully dealing with it.

Trump actually speaks to the realities that few politicians will (Bernie Sanders did too in 2016, hence his appeal), though his prescribed solutions are likely just accelerating the country’s demise.


> In 2023, Godwin published an opinion in The Washington Post stating "Yes, it's okay to compare Trump to Hitler. Don't let me stop you." In the article, Godwin says "But when people draw parallels between Donald Trump’s 2024 candidacy and Hitler’s progression from fringe figure to Great Dictator, we aren’t joking. Those of us who hope to preserve our democratic institutions need to underscore the resemblance before we enter the twilight of American democracy."

Past experience with this kind of thing by you-know-who does not lend itself to the idea of a substantive discussion.

HN isn't "about the tech industry" per se - its mandate is to discuss topics of intellectual curiosity. See https://news.ycombinator.com/newsguidelines.html. Celebrity troll moves (or whatever this is) don't fit that bill, so in this case I'm inclined to agree with the users who flagged the story.


They're not selecting to maximize performance, they're selecting to maximize their own control. Pete Hegseth isn't SecDef because he's good at it. He leaks war plans and can't get through a press conference without being seen with a drink in his hand. He's SecDef because he'll do what Trump tells him to do regardless of whether it's legal or a good idea. The tariffs aren't meant to bring manufacturing back. They'd be gradual and consistent, and the money raised would be earmarked for developing that industry at home, if they were. They're arbitrary because they're the way the people in charge punish countries and companies that don't bend the knee. Everything they're doing is about removing the institution of government with its pesky rules and procedures and bringing everything under the control of one guy who can reward and punish arbitrarily as he sees fit. Overall economic performance simply isn't a factor.

It's changed my outlook a lot to make an arbitrary decision to stop assuming people are stupid when their stated goals don't line up with their actions, and to start assuming the easily predictable results of their actions are their actual goals regardless of what their stated goals are. Once I did that, I started being able to understand and even predict what these previously inscrutable people would do next.


James Carville in the 90s: “I used to think that if there was reincarnation, I wanted to come back as the president or the pope or as a .400 baseball hitter. But now I would like to come back as the bond market. You can intimidate everybody.”

I think this is a bit unfair.

Who among us can really say they haven't gone off the deep end, burned every drop of good will that ever existed towards them and their projects, sued a competitor and got hilariously burned by the judge, all while burning hundreds of millions of their company's value (BlackRock marked them down 10%, which is $750 million)?

This is just a normal thing that happens. It could have happened to anyone.


I'm from a similar bicultural household as rayiner, though from comment history I'm guessing I come down more on the American side. I've got enough of a background in both cultures to parse out and explain the differences though.

It's not perceived as "wasting a life" or "not enjoying it" by the parent, and oftentimes not by the child either. Rather, it's different values, different time preferences, and different conceptions of self. Western cultures have a conception of self that is very rigid and individualistic. There's a hard boundary between your wants and everyone else's wants, and you're responsible only for your own desire. This is encoded in our structures of law, in contemporary business culture, in the concept of individual rights, in the goals of Western psychotherapy, and in the relationships between family members that we view as normal.

In most traditional Asian cultures, there is much more of a soft boundary between members of the same family. You are expected to consider the welfare of everyone in the family. And that leads to a sense of obligation between parent and child, and then between child and parent as they get older, and between sibling to sibling when it comes to dealing with the outside world. There is a comparatively stronger boundary between the family and the state, eg. many Asian cultures feel like it's okay to snub the rules of the wider society for the benefit of the family, while in American society this is considered grift, nepotism, and corruption.

Likewise, there is a difference in time perception. Americans have a hard boundary between the present and the future or past. This shows up in popular culture through lines like in Rent ("No day but today", "How do you feel today? Then why choose fear?", "Forget regret, your life is yours to live") or through popular aphorisms like "Let go of the past", "Live for the present", "The future is yours to write", etc. Asian cultures often consider the past, present, and future as one: the past informs the present, which becomes the future, and the "you" of today will soon become the you of tomorrow. As a result, it is perfectly natural to prefer "future you" over "present you". And that shows up through things like savings rates (where Asians are consistently higher than Americans), long-term investments, business continuity, and willingness to invest in family and raise the next generation. Denying present pleasures for future gains is not a lifestyle that they don't enjoy; it's simply being smart, and the enjoyment comes from the anticipation of the future payoff.

There's a good illustration of the difference in the two cultures from two movies that both came out in 2018/2019, Crazy Rich Asians vs. The Farewell. Crazy Rich Asians is foremost a Chinese-American film. When the grandmother (who is considered the villain in the film) smugly says "We know how to build things that last", she's exemplifying the values and time preferences of Old China. And the film's climax and resolution is all about choosing present happiness over an indeterminate future, basically a victory of American values over traditional Chinese ones. The Farewell, however, more closely depicts the web of obligations in a traditional Chinese family, and is comedic to American audiences simply because the farces that the family goes through to preserve the feelings of the matriarch make no sense to Americans. Sure enough, Crazy Rich Asians was a smash hit in the U.S. but an utter flop in China, while The Farewell was a sleeper hit in America but did very well with Chinese audiences.


My favorite quote:

Music lovers buy hifi systems to listen to their music.

Audiophiles buy music to listen to their hifi systems.


Speaking of high quality "Monstrous Cables" and draconian legal remedies: there's K. W. Jeter's Noir (1998), a Cyberpunk novel whose detective protagonist's main job is killing copyright violators so that their still-living spinal cords may be incorporated into hi-fi system cables:

https://news.ycombinator.com/item?id=15668069

DonHopkins on Nov 10, 2017 | parent | context | favorite | on: Electric Sheep on Ubuntu Linux 17.10

I deserve to be downvoted by the literature snobs, but if you liked Blade Runner the movie (and who in their right mind doesn't?), then you may very well enjoy K. W. Jeter's three written sequels to the MOVIE Blade Runner (not the BOOK DADOES), "Blade Runner 2: The Edge of Human", "Blade Runner 3: Replicant Night", and "Blade Runner 4: Eye and Talon". There is no book "Blade Runner 1" -- that's the movie.

The irony is that Philip K Dick was offered a whole lot of money to write another book entitled "Blade Runner" based on the screenplay of the movie, but he insisted on maintaining the integrity and title of his original book DADOES by re-issuing it with a reference to the (quite different) movie on the cover, instead of rewriting another book called "Blade Runner" based on the movie based on his own book. (Harrumph!) He would have made a lot more money by selling out that way, but he steadfastly refused to do it.

However, fortunately for us, after his death, his friend and fellow SF writer K. W. Jeter (who also wrote an excellent cyberpunk novel Dr. Adder which Dick loved) sold out on his behalf and wrote those three books based on the movie (which referenced famous lines like "Wake up. Time to die!").

They explore the question of what the fuck happened after they went flying off into the wilderness (that unused footage from The Shining), and whether Deckard was a replicant. (Who would have guessed??!)

So even though they're not written by PKD, or directly based on his original all time great book, and not as authentic and mentally twisted as a real PKD book, they are still pretty excellent and twisted in their own right, and well worth reading. They're based on an excellent movie based on an epic book, and written by a friend and author PKD respected, who's written some other excellent books.

And while you're at it, check out Dr. Adder and K. W. Jeter's other books too! Especially Noir, for its hi-fi cables made out of the still-living spinal columns of copyright violators. (I suggest you buy a copy and don't pirate it!)

https://en.wikipedia.org/wiki/Blade_Runner_2:_The_Edge_of_Hu...

https://en.wikipedia.org/wiki/Blade_Runner_3:_Replicant_Nigh...

https://en.wikipedia.org/wiki/Blade_Runner_4:_Eye_and_Talon

https://en.wikipedia.org/wiki/K._W._Jeter

https://en.wikipedia.org/wiki/Dr._Adder

https://en.wikipedia.org/wiki/Noir_(novel)

http://www.indiewire.com/2015/12/watch-u-s-theatrical-ending...

http://www.sf-encyclopedia.com/entry/jeter_k_w

Jeter's most significant sf may lie in the thematic trilogy comprising Dr Adder (1984) – his first novel (written 1972), long left unpublished because of its sometimes turgid violence – The Glass Hammer (1985) and Death Arms (1987); Alligator Alley (1989) as by Dr Adder with Mink Mole (see Ferret) is a distant outrider to the sequence. Philip K Dick had read Dr Adder in manuscript and for years advocated it; and it is clear why. Though the novel clearly prefigures the under-soil airlessness of the best urban Cyberpunk, it even more clearly serves as a bridge between the defiant reality-testing Paranoia of Dick's characters and the doomed realpolitiking of the surrendered souls who dwell in post-1984 urban sprawls (see Cities). In each of these convoluted tales, set in a devastated Somme-like Near-Future America, Jeter's characters seem to vacillate between the sf traditions of resistance and cyberpunk quietism. In worlds like these, the intermittent flashes of sf imagery or content are unlasting consolations.

[...]

Much of his later work has consisted of Sharecrop contributions to various proprietorial worlds, including Alien Nation, Star Trek, Star Wars [for titles see Checklist]; of some interest in this output are his Ties – they are also in a sense Sequels by Another Hand – to the film Blade Runner (1982), comprising Blade Runner 2: The Edge of Human (1995), Blade Runner 3: Replicant Night (1996) and Blade Runner 4: Eye & Talon (2000), and making use of some original Philip K Dick material. The sense of ebbing enthusiasm generated by these various Ties is not markedly altered by Jeter's most recent singleton, Noir (1998), a Cyberpunk novel whose detective protagonist's main job is killing copyright violators so that their still-living spinal cords may be incorporated into hi-fi system cables; the irreality of this concept, and the bad-joke names that proliferate throughout, are somewhat stiffened up by the constant interactive presence of the already dead, a Philip K Dick effect, as filtered through Jeter's own intensely florid sensibility. [JC]


The "PhD level" remark is by comparison with the remarks from Go's creators about all programming languages that go beyond its lame type system, including Java's.

"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt."

"It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."

We are all well aware of the blue collar goals of Java 1.0.

Now we are at Java 23 EA, where the distance to Go's type system is even greater.

We have Go's failure to learn from the history of programming languages, ironically following Java's missteps on generics (reaching out to the same Haskell folks that helped with the Pizza compiler), with warts of its own: magic string formats for timestamps, the const iota dance for enumerations, magic types, and tagged structs.
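For anyone who hasn't run into them, a quick illustration of the first two warts (an informal sketch):

  package main

  import (
    "fmt"
    "time"
  )

  // The "magic string format" for timestamps: layouts are written as a
  // rendering of the reference time Mon Jan 2 15:04:05 MST 2006,
  // rather than as format directives like %Y-%m-%d.
  const layout = "2006-01-02 15:04"

  // The "const iota dance" for enumerations: just integer constants,
  // with no exhaustiveness checking and no names without extra codegen.
  type Color int

  const (
    Red Color = iota
    Green
    Blue
  )

  func main() {
    fmt.Println(time.Now().Format(layout))
    fmt.Println(Red, Green, Blue) // prints "0 1 2"
  }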

Had Rust become mature one or two years earlier, Docker and Kubernetes would most likely have pivoted from Python and Java respectively into Rust instead, and Go's fate would have been the same as Limbo's.


The most significant distinction is that dummy arguments in Fortran can generally be assumed by an optimizer to be free of aliasing, when it matters. Modifications to one dummy argument can't change values read from another, or from global data. So a loop like

  subroutine foo(a, b, n)
    integer n
    real a(n), b(n)
    do j = 1, n
      a(j) = 2 * b(j)
    end do
  end
can be vectorized with no concern about what might happen if the `b` array shares any memory with the `a` array. The burden is on the programmer to not associate these dummy arguments on a call with data that violate this requirement.

(This freedom from aliasing doesn't extend to Fortran's POINTER feature, nor does it apply to the ASSOCIATE construct, some compilers notwithstanding.)
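To see why that guarantee matters, here's a contrasting sketch in Go, where slices may alias and the compiler cannot make the same assumption:

  package main

  import "fmt"

  // Because a and b may overlap, each store into a can feed a later load
  // from b, so the result depends on aliasing. Fortran's rule outlaws this
  // situation for dummy arguments, which is what frees the optimizer to
  // vectorize the loop.
  func foo(a, b []float32) {
    for j := range a {
      a[j] = 2 * b[j]
    }
  }

  func main() {
    x := []float32{1, 2, 3, 4}
    foo(x[1:], x[:3]) // a and b overlap within x
    fmt.Println(x)    // [1 2 4 8], not the [1 2 4 6] you'd get without aliasing
  }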


When half of a team are lone wolves because they see no other way to get things done, and the other half are rules lawyers who don't care whether things get done, I think that's a clear sign of management failure. Both of these personality types show up when team members have no faith that they can efficiently work together with their teammates: "drones" reject the idea of working efficiently, and "cowboys" reject the idea of working together.

There are lots of ways for management to go wrong, but if you feel like this article describes your small business, here are some low-hanging fruit:

- Have you sought out management coaching, or are you trying to become a good manager by trial and error?

- Do you treat your employees as trusted, respected professionals (meaning: people who might know better than you)? Do your employees treat one another that way? Do they give you that same respect?

- When you've made a mistake, big or small, how do you discover it? Do your employees and co-founders feel safe and secure enough to give you frequent negative feedback? Do they trust that you'll act on it?

- Do you have the time and resources to provide good management for all of the employees that you're directly responsible for? Have you been properly hiring, promoting and delegating to spread out that workload as the team grows?

- Leaders are just team members whose job is to produce decisions, in the same way that a software engineer's job is to produce code. Are you actually doing your job, by consistently producing high-quality decisions? Who's keeping track of that?

- Your most impactful responsibilities are hiring, firing, promotions, setting salaries, and choosing how to balance quality against speed. Are you giving all of those decisions the care and effort which they deserve?


@dang is a no-op. For reliable message delivery the only way is hn@ycombinator.com.

That statement makes no sense. Flagging is cancelling. You are what you can't stand.

He is a white guy raised under apartheid who's throwing nazi salutes, overtly referencing nazi ideology and symbology, hiring white supremacist staff to work for him, promoting white supremacist content on a platform that he bought for that explicit purpose, and attending rallies in foreign countries to promote far-right ideology.

There's no room for reasonable discussion with anyone who can look at that and come to the conclusion that they're being "framed" as nazis. To be clear, I don't believe all of their voters are nazis, just those who decide to stick with the party in light of the overwhelming evidence.

