
The open secret is that top-quartile R1 CS faculty positions aren't coveted anymore and don't attract the best like they used to.

The choice is now between increasingly tenuous/meaningless tenure after 5-10 years and a $500K/year lower bound for 10-12 years. That choice is... not a hard choice for anyone who values intellectual freedom. And the right answer sure as shit isn't the faculty position.

A good 50% of the faculty chasing NeurIPS papers are doing so because, at least once before going up for tenure, they will apply for positions at big tech. They come in not just non-executive, but often outside of management entirely, and at the bottom of the (Top IC)-[1-2] total comp band. If they net an offer they'll usually leave. The major barrier to an offer is usually ego and "is this person actually humble enough to be useful to other people?"


Much of this could be true, but I’ve seen R1-tenure-caliber PhDs come on staff at top industry labs for a lot more than the pay you’re suggesting.


Hence "lower bound".


> a $500K/year lower bound for 10-12 years

I don't care if you are talking about top talent here; that is an insane thing to say. As a lower bound? What percentage of software engineers / AI practitioners / data scientists are making $500k/year? 0.1%?


The obvious: the manager is being sloppy. Whether they are correct or not is irrelevant; thinking and writing clearly about these issues is literally this person's entire day job. A sloppy report is the tip of an iceberg of sloppier thinking.

The conspiracy: the manager has an axe to grind that aligns with incumbent auto-makers' business strengths, suggesting that the analysis is debased in one way or another.

The "mind explodes": the manager is talking their book; behind the manager is a team using mountains of data to design messages that optimally push the market in the direction they want. The purpose of the message is not to be logically coherent to nerds on the internet. The point is to convince a few portfolio managers to behave in one way or another, and perhaps also to signal sentiment analysis algos.


I think you entirely missed the point. GP put it well:

>> They are conceptually/abstractly rigorous, but in "implementation" are incredibly sloppy.

Maturity in concept-space and the ability to reason abstractly can be achieved without the sort of formal rigor demanded by programming, which is far less abstract and much more conceptually simple.

I have seen this first hand TAing and tutoring CS1. I regularly had students who put off their required programming course until senior year. As a result, some were well into graduate-level mathematics and at the top of their class but struggled deeply with the rigor required in implementation. Think about, e.g., missing semi-colons at the end of lines, understanding where a variable is defined, understanding how nested loops work, simple recursion, and so on. Consider something as simple as writing a C/Java program that reads lines from a file, parses them according to a simple format, prints out some accumulated value from the process, and handles common errors appropriately. Programming requires a lot more formal rigor than mathematical proof writing.
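For concreteness, here is roughly the kind of exercise I mean -- a minimal Java sketch (the two-column file format and all names here are invented for illustration):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Reads lines like "widgets 42" and prints the sum of the second column.
    public class SumColumn {
        public static void main(String[] args) {
            if (args.length != 1) {
                System.err.println("usage: java SumColumn <file>");
                System.exit(1);
            }
            long total = 0;
            try (BufferedReader in = new BufferedReader(new FileReader(args[0]))) {
                String line;
                while ((line = in.readLine()) != null) {
                    String[] parts = line.trim().split("\\s+");
                    if (parts.length != 2) {
                        System.err.println("skipping malformed line: " + line);
                        continue;  // the error case students most often forget
                    }
                    total += Long.parseLong(parts[1]);
                }
            } catch (IOException | NumberFormatException e) {
                System.err.println("error: " + e.getMessage());  // bad file or bad number
                System.exit(1);
            }
            System.out.println(total);
        }
    }

Nothing here is conceptually hard, but nearly every line is a place where the implementation rigor I'm describing can fail.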


I didn’t miss his point. He’s plain wrong.

> Programming requires a lot more formal rigor than mathematical proof writing.

This is just wrong?

Syntax rigour has almost nothing to do with correctness.


Just take the downvotes with pride.

You have a valid point, which is that we are not even being rigorous enough about the meaning of the word “rigor” in this context.

- One poster praises how programming needs to be boiled down into executable instructions as "rigor," presumably in contrast to an imaginary math prof saying "eh, that sort of problem can probably be solved with a Cholesky decomposition" without telling you how to do that, or what it is, or why it is even germane to the problem. This poster has not seen the sheer number of Java API devs who use the Spring framework every day and have no idea how it does what it does, the number of Git developers who do not understand what Git is or how it uses the filesystem as a simple NoSQL database, or the number of people running on Kubernetes who do not know what the control plane is, what etcd is, or what a custom resource definition is and when it would be useful... If we are comparing apples to apples, and "rigor" means "rather than abstractly indicating that a technique can fix a problem without the exact details of how, this person knows the technique inside and out and will patiently sit down with you until you understand it too," well, I think the point more often goes to the mathematician.

- Meanwhile, you invoke correctness, and I think you mean not just ontic correctness ("this passed the test cases and happens to be correct on all the actual inputs it will be run on") but epistemic correctness ("this argument gives us confidence that the code has a definite contract which it will correctly deliver on"), which you do see in programming and computer science, often in terms of "loop invariants" or "amortized big-O analysis" or the like (sketch below)... But yeah, most programmers only interact with this kind of correctness by partially specifying a contract in terms of some test cases which they validate.
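To make the loop-invariant flavor concrete, a minimal hypothetical Java sketch (not anyone's production code):

    // Epistemic correctness via a loop invariant: at the top of every
    // iteration, if target is in a[] at all, its index lies in [lo, hi).
    static int binarySearch(int[] a, int target) {
        int lo = 0, hi = a.length;         // invariant holds trivially at entry
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;  // avoids overflow of (lo + hi) / 2
            if (a[mid] < target) {
                lo = mid + 1;              // target, if present, is right of mid
            } else if (a[mid] > target) {
                hi = mid;                  // target, if present, is left of mid
            } else {
                return mid;
            }
        }                                  // lo == hi: the window is empty,
        return -1;                         // so the invariant says target is absent
    }

Test cases check the ontic claim; the invariant argument is what carries the epistemic one.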

That, however, would require a much longer and more nuanced discussion, more appropriate for a blog article than for an HN comment thread. Even this comment, pointing out that there are at least three meanings of rigor hiding in plain sight, is too long.


>> Programming requires a lot more formal rigor than mathematical proof writing.

> This is just wrong? Syntax rigour has almost nothing to do with correctness.

1. It's all well and good to wave your hand at "Syntax rigour", but if your code doesn't even parse then you won't get far toward "correctness". The frustration with having to write code that parses was extremely common among the students I am referring to in my original post -- to them it seemed incidental and unnecessary. It might be incidental, but at least for now it's definitely not unnecessary.

2. It's not just syntactic rigor. I gave two other examples which are not primarily syntactic trip-ups: understanding nested loops and simple recursion. (This actually makes sense -- how often in undergraduate math do you write a proof that involves multiple interacting inductions? It happens, but it isn't a particularly common item in the arsenal. And even when you do, the precise way in which the two inductions proceed is almost always irrelevant to the argument, because you don't care about the "runtime" of a proof. So the fact that students toward the end of their undergraduate studies struggle with this isn't particularly surprising.)

Even elementary programming ability demands a type of rigor we'll call "implementation rigor". Understanding how nested loops actually work and why switching the order of two nested loops might result in wildly different runtimes. Understanding that two variables that happen to have the same name at two different points in the program might not be referring to the same piece of memory. Etc.
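A minimal Java illustration of the loop-order point (assuming a large rectangular matrix; the names are mine):

    // Both methods compute the same sum, but Java stores an int[][] as an
    // array of row arrays, so the first walks each row sequentially while
    // the second hops to a different row object on every single access --
    // very different cache behavior, hence very different runtimes.
    static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];   // cache-friendly: sequential within a row
        return s;
    }

    static long sumColMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];   // cache-hostile: new row array every step
        return s;
    }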

Mathematical maturity doesn't traditionally emphasize this type of "implementation rigor" -- even a mathematician at the end of their undergraduate studies often won't have a novice programmer's level of "implementation rigor".

I am not quite sure why you are being so defensive on this point. To anyone who has educated both mathematicians and computer scientists, it's a fairly obvious point and plainly observable out in the real world. Going on about Curry-Howard and other abstract nonsense seems to wildly miss this point.


Having taught both I get what you are saying, but the rigor required in programming is quite trivial compared to that in mathematics. Writing a well structured program is much more comparable to what is involved in careful mathematical writing. It's precisely the internal semantic coherence and consistency, rather than syntactic correctness, that is hardest.


> Syntax rigour has almost nothing to do with correctness.

Yes, that's the point. You did miss it.


And yet I didn’t miss it. I insist.

You need more rigour to prove, let's say, the Beppo Levi theorem than to write a moderately complex piece of software.

Yet you can write the proof in crappy English; the medium is not the goal. The ideation process, even when poorly transcribed into English, needs to be perfectly rigorous. Otherwise, you have proved nothing.


I'm pretty sure the two of you are working from different definitions of the word "rigorous" here.

(Which, given the topic of conversation here, is somewhat ironic.)


I take it broadly as nothing was clearly defined. :)


> Syntax rigour has almost nothing to do with correctness.

I see your point: has almost nothing correctness with rigour do to Syntax.

Syntax rigor has to do with correctness to the extent that "correctness" exists outside the mind of the creator. Einstein notation is a decent example: the rigor is inherent in the definition of the syntax, but to a novice it is entirely under-specified and can't be said to be "correct" without its definition being incorporated already... which is the ultimate parent post's point, and I think the context in which the post you're replying to needs to be taken.
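For anyone who hasn't run into it, the convention in a nutshell (my transcription):

    % Einstein summation convention: a repeated index implies a sum.
    % The compact left-hand form is shorthand for the explicit sum.
    y_i = A_{ij} x_j \quad\Longleftrightarrow\quad y_i = \sum_{j=1}^{n} A_{ij} x_j

To a reader who already holds the convention, the left side is perfectly rigorous; to one who doesn't, it is just under-specified squiggles.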

And if you're going to argue "This is just wrong?" (I love the passive-aggressive '?'?) while ignoring the context of the discussion...QED.


Well, I think we should stop the discussion at the passive-aggressive thing.


My driving commute is 20 minutes. My cycling commute is 35 minutes (with significant effort). The average drive-thru time at Starbucks is 5 minutes. Add in a 5 minute detour off the optimal driving path, and another few minutes since commuting hours are peak demand and will therefore be at the tail of the distribution. Same job, same commute, but all told, relative to a car and a PSL, cycling is net neutral on time, with a difference of 600-800 calories.
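Spelling out the arithmetic (treating the peak-hours penalty as roughly 5 minutes, which is my guess, not a measurement):

    car:  20 min drive + 5 min drive-thru + 5 min detour + ~5 min peak queueing ≈ 35 min
    bike: 35 min, with the 600-800 calories burned as the only real difference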

Maybe that's not true for all (although a 35 minute bike ride with existing fitness gets you a long way surprisingly fast). But one can lift while watching TV. One can do squats while doing laundry. Etc.

Of course, in each case, one of those two options is far less comfortable. For commuting that's particularly true in the winter. But WRT physical fitness, the barrier is almost always "discomfort" and almost never "time".


Universities do not have a monopoly on education or on learning. You can go teach someone today, no questions asked. You can even charge them for the favor, if you want.

Universities DO have a monopoly on credentialing. Credentialing, contrary to popular opinion in these parts, is actually a useful function.

Providing education and credentialing is not typically free. You need to acquire physical space, hire teachers, and so on. Education could be at least one order of magnitude less expensive than it is now, and in some cases probably even two orders. Education could also be subsidized 100% for the first N most qualified candidates, at point of use, using tax dollars. But there is no way to make it free.


Most comments here are completely missing the point: remote learning during the pandemic era was worse than useless. As a result, the pandemic era created a HUGE bifurcation in the student population between autodidacts and non-autodidacts.

You can see this in high school test scores, in placement exams, in Freshman college performance, and even in new grad hire cohorts.

The autodidact set has realized that they can teach themselves a lot of what they would've learned in coursework at colleges. There are still some elite career pathways where formal education is necessary, but an autodidact who doesn't want to follow one of those career pathways now knows that they can go without.

American colleges provide a useful-if-overpriced service to non-autodidacts, but their product is not ready to deal with students who are YEARS behind in their formal education.


> if overpriced

"overpriced" is a vast understatement. overpriced would be if it costs $8k but is sold at $12k.

Instead the pricing is crippling, and instead of 8k it's 48k, and the average student will roll out with between 100k-200k in debt. debt that is not dischargable except in very rare, specific cases.


> Overpriced would be if it cost $8k but was sold at $12k. Instead the pricing is crippling: instead of $8k it's $48k

The average college tuition and fees at four-year schools in 2020-2021 was $19,020.

> and the average student will roll out with between $100k and $200k in debt

The average federal student loan debt is $37,338 per borrower.

Like I said, everyone keeps missing the point.


Laws are certainly up for philosophical debate, but at least in the USA, that debate typically has to happen in the legislature rather than the judiciary.

More importantly, though, most judges are not philosophers.


> including mechanical duplication with automatic alterations to evade detection while continuing to reproduce protected elements of the original.

That's super interesting and is news to me. Thanks for sharing. Would you mind linking to relevant statutes or court decisions?

(This isn't a "citation needed" post -- I believe you, and I'm genuinely curious to read more, but can't find anything!)


> Humans have done so for millennia before

This is categorically false for both software and movies.

For other media, this ignores the effect of zero-effort copying.


Saying that an entire team is unfamiliar with an IC's tech stack is indeed weird in most cases. But it can make sense, and largely depends on the relationship between the team and the person involved.

A good example from a recent gig: one of our scientists had a HUGE pile of very complicated MATLAB, based on several dissertations' worth of novel work on both numerical analysis and highly domain-specific mathematical modeling.

Our software engineers needed to either call into or directly use the MATLAB code. Sometimes changes were necessary. This caused a ton of friction for two reasons. First, our SWEs didn't know much MATLAB. Second, you'd have to read two or three papers of complicated mathematics to even understand what the code was doing prior to changing it, and most of our software engineers topped out with Calculus or maybe a Linear Algebra course. So our engineers were unfamiliar with both the tech stack and also the "knowledge" stack.

In that case, I think it's more accurate to say that the software engineers were unfamiliar with the scientist's tech stack than the other way around. There's no way in hell they or anyone else were going to come anywhere close to correctly rewriting the MATLAB code in any reasonable amount of time. And even if you could, the knowledge stack problem still exists.

You can think of hiring those types of scientists as acquiring one-man startups that bring in their own tech stack and tech debt that your existing org has no idea how to integrate. You need to plan that reality into both the compensation amount and the vesting/earning schedule.

For compensation, err on the side of "way over asking", since this is going to suck harder than the scientist thinks. They are probably going to want to leave, and you need to be able to get them to stick around. (The dynamics here are similar to a startup founder or exec after a merger, but with a slight difference: the scientist's "FU money" is a pretty much guaranteed cushy professor-of-practice gig.)

For vesting, err on the side of paying out bonuses or RSUs early but with a big vesting cliff 2-4 years out, so they get the cash before it vests. Get 'em hooked and don't let 'em leave until the work is caught up.
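Concretely, one hypothetical shape for this (the numbers are illustrative, not from any real offer):

    year 0:   sign-on cash + big RSU/bonus grant, with the cash value paid out annually
    year 3:   first cliff -- the grant actually vests; leaving earlier forfeits the payouts
    year 4+:  assume they walk; the org's knowledge transfer needs to be done by then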

And definitely bank on them leaving once the first cliff is hit.

