_hark's comments | Hacker News

I don't recall Andrej making "next year!" claims; it was always Elon. I found Andrej's talks from that time to be circumspect and precise in describing their ideas and approach, and not engaging in timeline speculation.


They really should have just marketed the software "as-is" to whatever extent that is allowed by law. I guess they didn't because deployed automobile software is probably not allowed to be considered experimental.

Still, comms that framed it like: "This software purchase upgrades your car with state-of-the-art autonomy capabilities from our AI team, as we approach full self-driving" would have been more honest, still exciting to consumers, and avoided over-promising.


> and avoided over-promising

Stonk is the product and is literally built on over-promising.



There aren't merit-based scholarships to any Ivy League schools; they all offer need-based financial aid packages.


Do they have enough money available to fund everyone who can't afford to come, or do they have to decide who to fund from a wider pool of otherwise good applicants?


MIT, Stanford, Harvard, Princeton, and I believe most or all of the other Ivies, all fund 100% of the demonstrated financial need of every student, and they do not consider the financial needs of applicants when making admission decisions.


No, not for international students. Stanford (I haven't checked others) is very explicit about having a limited number of scholarships for international students: https://financialaid.stanford.edu/undergrad/how/internationa.... Admissions for US applicants are indeed need-blind.


> demonstrated financial need

Higher education is a strange purchase that is engineered to extract the maximum amount of money (up to full-cost tuition, fees, etc.), based on financial records which you are forced to provide.

Any asset except for a residence is typically considered something that could be tendered to the university, and is accordingly deducted from financial need.

This means that external scholarships are limited as to how much they can reduce the expected parental or student contribution. Anything beyond this limit is deducted from need and pocketed by the university.


I'm a researcher at Oxford, and I've both taught and studied here and in the US.

The undergraduate teaching here is phenomenal. It's incredibly labor intensive for the staff, but the depth and breadth students are exposed to in their subject is astonishing. It's difficult to imagine how it can be improved.

My favorite study of university rankings comes from faculty hiring markets, which compute implicit rankings by measuring which institutions tend to hire (PhD->faculty) from others. [1] It's not perfect, but at the very least it's a parameter-free way to get a sense of how different universities view each other (a toy sketch of the idea is at the end of this comment). The parameters in most university rankings are rather arbitrary and game-able.

Some have blamed declining standards on things like contextual admissions [2] and, more broadly, on identity-politics capture of the administration. While this might be true, in my view Oxford is still far more meritocratic than US institutions on the whole. There are no legacy admissions, and many subjects have difficult tests which better distinguish between applicants who have all done extremely well on national standardised tests (British A Levels are far more difficult than the SAT/ACT/AP exams.)

Lastly, admissions at Oxford are devolved to the individual colleges, of which there are ~40. The faculty at each college directly interview and select the applicants whom they will take as students. This devolved system and the friction it creates is surprisingly robust and makes complete ideological capture more difficult.

The most pressing issue for Oxford's long-term viability as a leading institution, in my view, is the funding situation. For one, the British economy is in a long, slow decline. Secondly, even though Oxford has money, there is a lot of regulation and soft-power pressure from the British government to standardise pay across the country, which makes top institutions like Oxford less competitive on the international market for PhD students, postdocs, and faculty in terms of pay.

[1]: https://www.science.org/doi/10.1126/sciadv.1400005

[2]: https://www.ox.ac.uk/admissions/undergraduate/applying-to-ox...
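
For the curious, here's a toy, brute-force version of the minimum-violation ranking idea behind [1] (made-up hire counts; the paper's actual method and data are more involved):

    # Toy minimum-violation ranking over a made-up faculty-hiring network.
    # hires[(i, j)] = people with a PhD from school i hired as faculty at school j.
    # The ordering that minimises "upward" hires (a low-ranked school placing
    # someone at a higher-ranked one) serves as the implicit prestige ranking.
    from itertools import permutations

    schools = ["A", "B", "C", "D"]
    hires = {
        ("A", "B"): 9, ("A", "C"): 7, ("A", "D"): 5,
        ("B", "A"): 1, ("B", "C"): 6, ("B", "D"): 4,
        ("C", "A"): 0, ("C", "B"): 1, ("C", "D"): 3,
        ("D", "A"): 0, ("D", "B"): 0, ("D", "C"): 1,
    }

    def violations(order):
        rank = {s: i for i, s in enumerate(order)}  # 0 = most prestigious
        return sum(n for (src, dst), n in hires.items() if rank[src] > rank[dst])

    best = min(permutations(schools), key=violations)
    print(best)  # ('A', 'B', 'C', 'D') for these made-up numbers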


> British A Levels are far more difficult than the SAT/ACT/AP exams.

I think we just teach people to pass exams, really. Not to say it's necessarily wrong - you do need a grasp of the subject matter still - just that 'how the exam works' is an additional thing you learn.

I'm British, always lived here, took A levels naturally, and took SATs & ACTs too because I applied to MIT - I think I did extremely poorly (vs. decently on A levels, meeting both my Cambridge & Imperial offers). I just had no familiarity with the test, the sort of questions, etc., and maybe some of the subject matter is different too - from memory I think there was no calculus and an extraordinary emphasis on trigonometry? But I can readily understand then that, vice versa, you'd look at an A level exam and think "Oh, that's hard", because it's just not what you've been taught towards in the US.


Depends on the subject, right? A Level Further Maths modules were (at least when I did it) typically at a very high level, and you couldn't wing them; you really needed to understand the material.


Like I said, you do still need a grasp of the subject matter, but I think if the exam changed a bit, to something you should technically be able to work out with the same knowledge but which isn't what's expected, a lot of people would struggle. It was always a common complaint anyway, right: 'that was nothing like the past papers!' etc.


Implicit in the funding issue is the inability to attract and retain top researchers in resource-heavy fields (AI, experimental fields).

The starting package for new professors at Oxbridge is several orders of magnitude less than at top institutions outside the UK.


From [1], these are the rankings for CS:

1. Stanford
2. UC Berkeley
3. MIT
4. Caltech
5. Harvard

I'm a little surprised MIT and Caltech are lower than Stanford and UC Berkeley. I know that MIT has a culture of sending its undergraduates to different graduate schools (so, if the top CS students go to MIT for their undergrad and professorships, they often would not have a PhD from MIT, lowering their prestige rating), but that does not explain why Caltech would be lower than Stanford/Berkeley. I know Stanford has a decent CS program, but I'm wondering if there's some network-effects gaming going on, since both Stanford and Berkeley attract more hustlers.


MIT has its own peculiar pedagogy that doesn't work for everyone. Stanford is a little more mainstream (in terms of pedagogy), but I'm surprised UIUC and CMU don't appear on this list.

Also... do you mean Computer Science or Software Engineering or "Computer Engineering" (a term that makes me shudder.)


I interpret "Computer Engineering" as electrical engineering for computers (e.g. the people making the new iPhone). I don't know if I mean "Computer Science" or "Software Engineering", because I was just using the article's terminology which may differ between all three of us (they call it "Computer Science").


We should be able to fit 50 universities in the top 5.


I studied at Berkeley and Princeton (big classes vs small classes), and I find your view to be fundamentally flawed. You presuppose that meritocracy is an inherently fair value system, while many critics and philosophers reject this assumption; in the next breath you delegitimize social justice issues by subtly framing them as "identity politics", a talking point that, needless to say, many other critics and philosophers do not share either.

Essentially, Oxford researchers—institutionalists—are on the worst perch to evaluate institutions, because they don't have a deep understanding of cross-societal differences and inevitably end up using their position to make ad hominem attacks and rationalize their own insider-ish biases. That's a tough ideological shell to crack through if the goal is to maintain an objective discussion.

As to the matter, the real issue is that Oxford/Cambridge is a different system than the big US universities. The people who apply to Oxbridge are from UK-related nations where they can study for the IB or A levels. So, for example, the claim that "A levels are harder than the SAT/AP" is a miscomparison, because it fundamentally misunderstands the historical aims of American educational philosophy and the very different social formations of the 20th century. That is a better way to explain why UK/European universities are the way they are versus the (previously) leading ring of STEM universities in the US.

Take as another example the PhD system. The American system is different; it prefers non-Masters students direct from undergrad. The European PhD is only 3 years! By one metric that sounds insanely short, not enough time to develop a PhD-level mind. By another, it's yet another systemic difference, with differing rationales and intentions.

More deeply, if we really are to reject identity politics, then a class-based critique would demolish the notion of university education as a filtration system for all societies. Second if Oxbridge are so good then why is all the world's research still essentially American with some satellite results coming out of Europe and perhaps (very cautiously) China? A response that decouples education from research is itself an assumption, one that American academic philosophy in practice does not share: American academia prioritizes research, then teaching, then community service. And the American/British/European academic systems all still carry various problematizable, elitist mindsets about that decoupling.

So there's a much broader social, political, and historical/class analysis to be made, rather than this kind of wonkism of foolish comparisons, and I'm rather miffed that supposedly world-class researchers are still not cognizant of this. Sometimes we are too close to think critically about our own habitus fairly.

Or, before making graphs and charts, read some Paulo Freire.


There is no such thing as "the European PhD". A PhD in the UK nominally takes 3 or 4 years, depending on the program. In Finland, it's nominally 4 years (but typically longer), and that assumes that you already have a Master's. It used to be longer, but Finnish universities moved to shorter "American-style" PhDs, because politicians wanted people to graduate faster.


There is such a thing conceptually as distinct from how American PhDs are selected and developed. I've alluded to this already without elaborating in full on it.


My point was that there are several major university traditions in Europe. The differences between them are almost as significant as the difference between any particular European tradition and the American tradition.

The UK is a particularly poor example of how things are done in Europe. In many aspects (such as whether the primary university degree is Bachelor's or Master's) it's closer to the US than the average continental European country.

You edited your comment after I started writing mine. Your idea that the US is still responsible for an exceptionally large fraction of academic research sounds like a leftover from the 20th century. European universities needed a couple of generations to recover from WW2, but for the last ~20 years there have not been any significant qualitative or quantitative differences between the research output of the US and Europe. (China may also have crossed the threshold recently, but it's too early to say.)

At least not in the fields I'm qualified to judge (computer science, bioinformatics, genomics). There are obviously major differences in both directions in individual topics, but that's because both blocs are pretty small. Neither has enough researchers to cover every subfield and every topic.

American universities fill most of the top positions in university rankings, but that's mostly because the concept of "top institutions" is more relevant in American culture. (That's another aspect where the British tradition is closer to the US than continental Europe.) In many European countries, all proper universities are seen as more or less equivalent as far as education is concerned. Some universities employ more top researchers than others, but that doesn't impact their reputation as educational institutions as much as in the US or the UK.


The problem with this view is that it actively obscures the central role that neoliberalization of academic institutions plays in these formations of quality. So I'll give a logical argument: the US remains the most powerful nation on Earth, and it "in-sources" the world's talent to maintain and reproduce its scientific and technological leadership. Inasmuch as political conditions are changing, European neoliberalized academia shall change and develop as well.

Pointedly, I don't define results or leadership as "research output". I mean: who was responsible for CRISPR? For LLMs? All roads lead to Rome; but today, empires also change shape and form.


The US has had to share its scientific leadership for some time already, and China is now seriously challenging its technological leadership. It continues to attract foreign academic talent, mostly because its academic salaries are less competitive against industry salaries than in other developed countries. Because Americans are less likely to pursue academic careers, it's often easier for foreign academics to find opportunities in the US than in other countries.

Who was responsible for CRISPR is a good question (I'm less familiar with the advances leading to LLMs). There was a series of incremental advances building on each other, from at least Japan, the Netherlands, Spain, Germany, the US, France, Sweden, and Lithuania. And the Nobel prize was shared between an American researcher working in the US and a French researcher working in Sweden (and later in Germany).


You're either blithely (in fact, stupidly given electoral results this past decade) assuming everyone shares your normative goals and values, or you just asked ChatGPT to write you a "kritik" like some kid in a school debate league.


Edited: I think what's really going on is that you've internalized oppression so as to be so cynical and toxically jealous that someone else online can actually blithely/stupidly say what they think on a Sunday afternoon. Because you're a professional working at a university, and you can't just do that and speak out. Noam Chomsky famously described this behavior amongst his peers.

I'm Asian American and LGBT+, and I was privileged by an advanced formal education. So, yes, I literally have a different set of values and goals than you. So you should just try to read it in good faith; I have made no such assumption, rather my comment laid out those issues for you to think about. Unless you are doing the old "rules of rational discussion are for me, not for thee"? Surely you're not that sort of anti-intellectual reactionary.

And to the other possibility, you're just writing an insult, so the problem there is you and your emotional regulation, and you are responsible for that.

Going back, it's quite the opposite: when the other commenter framed "identity politics" and "meritocracy", they were committing the very error you have ignorantly accused me of. Thus you are just engaging in projection. Not to mention the "kritik" (a conservative's dogwhistle).

Thus, the fact that you are lacking in critical thinking skills today does not excuse you from such intellectually prejudiced remarks.

And finally, your reference to "electoral results" tells me you didn't read through my comment, and are pigeonholing me as one type of left-American Democrat or another, of which I have provided enough commentary in the original comment that I could not be one.

So as much as you were trying to suggest the problem is on my end, the problem is with you and your narrowminded (and frankly, one with a racist tenor because surely you would not have said that comment to my face) reply, eli_gottlieb. It's too bad you're actually a postdoc, if I were an evil SJW or Democrat (or whatever politics it was you were insinuating) I'd be cancelling you through your own institution or something.

If you are a conservative, further discussion is going to be pointless. If you are Bernie/AOC/other leftist then I'll chalk it up to you basically misreading what I wrote.


Wow, you think Bernie and AOC are the limits of the spectrum at that end?

> So as much as you were trying to suggest the problem is on my end, the problem is with you and your narrowminded (and frankly, one with a racist tenor because surely you would not have said that comment to my face) reply, eli_gottlieb. It's too bad you're actually a postdoc, if I were an evil SJW or Democrat (or whatever politics it was you were insinuating) I'd be cancelling you through your own institution or something.

Oh yeah, you're asking ChatGPT for kritiks. Fuck off.


> Second if Oxbridge are so good then why is all the world's research still essentially American with some satellite results coming out of Europe and perhaps (very cautiously) China?

Do you have a source for this?


I sat the All Souls exam, taking the philosophy specialist papers, though I'm a math/physics/ML guy. It was a lot of fun; I really appreciate that there's somewhere in the world where these kinds of questions are asked in a formal setting. My questions/answers are written up in brief here [1].

[1] https://www.reddit.com/r/oxforduni/comments/q0giir/my_all_so...

* Oops, they link to my post at the bottom. Sorry for the redundancy.


Very cool! I've done research on reinforcement/imitation learning in world models. A great intro to these ideas is here: https://worldmodels.github.io/

I'm most excited for when these methods will make a meaningful difference in robotics. RL is still not quite there for long-horizon, sparse-reward tasks in non-zero-sum environments, even with a perfect simulator; e.g. an assistant which books travel for you. Pay attention to when virtual agents start to really work well; that's a leading signal for this. Virtual agents are strictly easier than physical ones.

Compounding on that, mismatches between the simulated dynamics and real dynamics make the problem harder (sim2real problem). Although with domain randomization and online corrections (control loop, search) this is less of an issue these days.

Multi-scale effects are also tricky: the characteristic temporal length scale for many actions in robotics can be quite different from the temporal scale of the task (e.g. manipulating ingredients to cook a meal). Locomotion was solved first because it's periodic imo.

Check out PufferAI if you're scale-pilled for RL: just do RL bigger, better, get the basics right. Check out Physical Intelligence for the same in robotics, with a more imitation/offline RL feel.


If FPGAs are competitive on perf/watt, why aren't they more widespread (other than crap software tooling)?

Honestly I've asked different hardware researchers this question and they all seem to give different answers.


They're competitive on perf/watt because they're configured to do one thing. But they're much more expensive than an ASIC, which, if also designed to do that one thing, would be better than the FPGA.


Can anyone comment on where efficiency gains come from these days at the arch level? I.e. not process-node improvements.

Are there a few big things, many small things...? I'm curious what low-hanging fruit is left for fast SIMD matrix multiplication.


One big area over the last two years has been algorithmic improvements feeding hardware improvements. Supercomputer folks use f64 for everything, or did. Most training was done at f32 four years ago. As algo teams have shown that fp8 can be used for training and inference, hardware has updated to accommodate, yielding big gains.

NB: Hobbyist, take all with a grain of salt


Unlike a lot of supercomputer algorithms, where floating-point error accumulates as you go, gradient-descent-based algorithms can make do with much lower precision: any fp error still shows up at the next loss calculation and gets corrected there.
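
A toy illustration of that point (plain rounding rather than real fp8, and made-up numbers): even with heavily quantized gradients, the iteration keeps correcting itself.

    # Toy sketch: gradient descent on a linear least-squares problem still
    # converges when each gradient is crudely quantized, because the next
    # loss/gradient evaluation sees and corrects the accumulated error.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 4))
    w_true = np.array([1.5, -2.0, 0.3, 0.7])
    y = X @ w_true

    def quantize(g, step=0.05):
        return np.round(g / step) * step  # keep only a handful of distinct levels

    w = np.zeros(4)
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(X)
        w -= 0.1 * quantize(grad)

    print(np.abs(w - w_true).max())  # small residual error despite coarse gradients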


Much lower indeed. Even Boolean functions (e.g. AND) are differentiable (though not exactly in the Newton/Leibniz sense), which can be used for backpropagation. They allow for an optimizer similar to stochastic gradient descent. There is a paper on it: https://arxiv.org/abs/2405.16339

It seems to me that floating-point math (matrix multiplication) will over time mostly disappear from ML chips, as Boolean operations are much faster in both training and inference. But currently they are still optimized for FP rather than Boolean operations.
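
One common way to make Boolean logic trainable (just an illustration of the general idea; not necessarily the construction in the linked paper) is to replace gates with real-valued surrogates on [0, 1] and then learn which gate to use by gradient descent:

    # Relaxed gates agree with the Boolean ones on {0, 1} but are smooth in
    # between, so gradients flow. Here we learn, via a softmax over candidate
    # gates, which gate reproduces a target truth table (XOR).
    import numpy as np

    gates = {
        "AND": lambda a, b: a * b,
        "OR":  lambda a, b: a + b - a * b,
        "XOR": lambda a, b: a + b - 2 * a * b,
    }
    names = list(gates)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    target = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table

    def loss(logits):
        p = np.exp(logits - logits.max()); p /= p.sum()  # softmax over the gates
        out = sum(p[i] * gates[n](X[:, 0], X[:, 1]) for i, n in enumerate(names))
        return np.mean((out - target) ** 2)

    logits, eps, lr = np.zeros(3), 1e-5, 1.0
    for _ in range(300):
        grad = np.array([(loss(logits + eps * np.eye(3)[i]) - loss(logits)) / eps
                         for i in range(3)])  # finite-difference gradient
        logits -= lr * grad

    print(names[int(np.argmax(logits))])  # -> XOR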


In-memory computing (analog or digital). Still doing SIMD matrix multiplication but using more efficient hardware: https://arxiv.org/html/2401.14428v1 https://www.nature.com/articles/s41565-020-0655-z


This is very interesting, but not what the Ironside TPU is doing. The blog post says that the TPU uses conventional HBM RAM.


There's been some talk/rumour of next-gen HBMs having some compute capability on the base die. But again, that's not what they're doing here; this is regular HBM3/HBM3e.

https://semiengineering.com/speeding-down-memory-lane-with-c...


Specialization, i.e. specialized for inference.


Entropy is not absolute!

The entropy of some data is well-defined with respect to a model, but the model choice is free. I.e. different models will assign different entropy to the same data.

And how do we choose a model? Formally, by minimizing the information needed to describe both the model and the data, i.e. the sum of model complexity and data entropy under the model [1] (a toy sketch is below).

You might argue that's all too information-theoretic and in physics there simply is an objective count of the state-space, a maximum entropy, and so on. Alas, there is not even general consensus on whether there is a locally finite number of degrees of freedom.

[1]: https://en.wikipedia.org/wiki/Minimum_description_length
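
As a toy sketch of that MDL idea (crude model-cost accounting, made-up data): the same data gets a different code length, i.e. a different "entropy", under each candidate model, and we pick the model that minimises the total.

    # Toy MDL comparison: total description length =
    #   bits to describe the model + bits to describe the data under the model.
    import numpy as np

    data = np.array([1] * 45 + [0] * 5)  # 50 coin flips, heavily biased
    n, k = len(data), int(data.sum())

    # Model A: fair coin. Nothing to transmit about the model; each flip costs 1 bit.
    cost_A = 0 + n * 1.0

    # Model B: Bernoulli(p) with p fit to the data. Charge ~0.5*log2(n) bits for
    # transmitting the fitted parameter (a standard MDL-style parameter cost).
    p = k / n
    data_bits = -(k * np.log2(p) + (n - k) * np.log2(1 - p))
    cost_B = 0.5 * np.log2(n) + data_bits

    print(cost_A, cost_B)  # ~50 vs ~26 bits: the biased-coin model wins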


But it is closer to absolute than you make it sound here. There are information-theoretic models which are "universal" with respect to a class; that is, they are essentially as good as any model in that class, for every individual case you apply them to - even if different cases are best described by distinct models from that class.

E.g. the KT estimator is, for each individual Bernoulli sequence, within about (1/2)*log2(n) + 1 bits of the best Bernoulli model for that sequence.

And there is a "universally universal" model - Kolmogorov complexity - though it is undecidable/uncomputable and only well defined up to an additive constant. In that sense, entropy IS an absolute.
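
A minimal sketch of the KT estimator's guarantee (toy code, binary sequences only): code the sequence sequentially with KT and compare against the best Bernoulli model chosen in hindsight for that same sequence.

    # KT (Krichevsky-Trofimov) sequential estimator vs. the hindsight-best
    # Bernoulli model for one particular binary sequence.
    import numpy as np

    def kt_code_length(bits):
        ones = zeros = 0
        total = 0.0
        for b in bits:
            p_one = (ones + 0.5) / (ones + zeros + 1.0)  # KT predictive probability
            total += -np.log2(p_one if b == 1 else 1.0 - p_one)
            ones += b
            zeros += 1 - b
        return total

    def best_bernoulli_code_length(bits):
        n, k = len(bits), int(np.sum(bits))
        if k in (0, n):
            return 0.0
        p = k / n
        return -(k * np.log2(p) + (n - k) * np.log2(1 - p))

    rng = np.random.default_rng(0)
    seq = (rng.random(1000) < 0.8).astype(int)
    overhead = kt_code_length(seq) - best_bernoulli_code_length(seq)
    print(overhead)  # roughly 0.5*log2(n) + 1 bits of overhead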

