He is not exactly an insider, but he seems broadly aligned/sympathetic/well-connected with the Ilya/researchers faction, so his tweet/perspective was a useful proxy for what that split may have felt like internally.
Yeah - I think this is the schism. Sam is clearly a product person, these are AI people. Dev day didn’t meaningfully move the needle on AI, but for people building products it sure did.
The fact that this is a schism is already weird. Why do they care how the company transforms the technology coming from the lab into products? It's what pays their salaries at the end of the day and, as long as they can keep doing their research work, it doesn't affect them. Being resentful about something like this to the point of calling it an "absolute embarrassment" when it clearly wasn't is childish, to say the least.
This is sort of why Henry Ford left the company he founded before the Ford we know, I think around 1902. His investors saw that they had a highly profitable luxury product on their hands and wanted to milk it for all it was worth, much like Haynes: perhaps scaling up to build dozens of custom cars per year, like the Pullman company but without needing railroads, and eventually moving downmarket from selling to racecar drivers and owners of large companies to also selling to senior executives and rich car hobbyists, while everyday people continued to use horse-drawn buggies. Ford, by contrast, had in mind a radically egalitarian future that would reshape the entire industrial system, labor-capital relations, and ultimately every moment of day-to-day life.
For better or worse, Ford got his wish, and drove Haynes out of the automobile business about 20 years later. If he'd agreed to spend day and night agonizing over how to get the custom paint job perfect on the car they were delivering to Mr. Rockefeller next month, that wouldn't have happened, and if Fordism had happened at all, he wouldn't have been part of it. Maybe France or Japan would be the sole superpower today.
> as long as they can keep doing their research work, it doesn't affect them
That’s a big question. Once stuff starts going “commercial” incentives can change fairly quickly.
If you want to do interesting research, but the money wants you to figure out how AI can help sell shoes, well, guess which is going to win in the end - the one signing your paycheck.
> Once stuff starts going “commercial” incentives can change fairly quickly.
Not in this field. In AI, whoever has the most intelligent model is the one that is going to dominate the market. No company can afford not to invest heavily in research.
Thinking you can develop AGI - if such a thing can actually exist - in an academic vacuum, rather than by having your AI rubber meet the road through a plethora of real-world business use cases, strikes me as extreme hubris.
Or the obvious point: if you're not interested in business use cases, then where are you going to get the money for the increasingly exorbitant training costs?
Cheaper, faster and longer context window would be enough of an advancement for me. But then we also had the Assistant API that makes our lives as AI devs much easier.
Seriously, the longer context window is absolutely amazing for opening up new use-cases. If anything, this shows how disconnected the board is from its user base.
I think you are missing the point, this is offered for perspective, not as a “take”.
I find this tweet insightful because it offered a perspective that I (and it seems like you also) don’t have which is helpful in comprehending the situation.
As a developer, I am not particularly invested nor excited by the announcements but I thought they were fine. I think things may be a bit overhyped but I also enjoyed their products for what they are as a consumer and subscriber.
With that said, to me, from the outside, things seemed to be going fine, maybe even great, over there. So while I understand the words in the reporting (“it’s a disagreement in direction”), I think I lack the perspective to actually understand what that entails, and I thought this was an insightful viewpoint to fill in the perspectives that I didn’t have.
The way this was handled still felt iffy to me, but with that perspective I can at least imagine what may have driven people to want to take such drastic actions in the first place.
I was underwhelmed, but I got -20 upvotes on Reddit for pointing it out. Yes, products are cool, but I'm not following OpenAI for another App Store, I'm following it for AGI. They should be directing all resources to that. As Sam said himself: once it is there, it will pay for itself. Settling for products around GPT-4 just sends the message that the curve has stagnated and we aren't getting more impressive capabilities. Which is saddening.
For those of you who aren't particularly interested in Ember or frontend development in general, here is a tidbit that may be of interest to you.
While revamping the tutorial[1] to showcase the new features and idioms, I worked on this tool[2] to automate the generation of the tutorial.
Every page of the tutorial you see is the output of markdown source files (like this one[3]) which contain executable instructions for the steps. The build[4] system runs these commands and edits the same way an end-user would, captures the output of the commands, and puts them into the tutorial. It also takes the screenshots by opening a real browser (via puppeteer) and navigating to the actual app that we are building as part of the tutorial.
All of this ensures that the tutorial content and screenshots are up-to-date with the latest blueprints (the files used by the generators), config files, etc., and that everything really works as expected. It makes it much easier to maintain and QC the teaching materials, but it also serves as a very useful end-to-end smoke test to ensure all the moving pieces in the ecosystem (including external components like npm, node, etc.) are working together.
Right now the tool and the tutorial content are in the same GitHub repo, but the code is actually written to be completely agnostic to Ember.js (or even JS). It just runs shell commands, edits files, etc. My intention is to separate the tool out into its own standalone thing, so more communities can benefit from this approach.
tl;dr Metal and rust do not give off any smell on their own; even if they did shed anything, it would be odorless. The smell you associate with metal (coins, nails, etc.) is actually created when you touch the metal, at which point the metal acts as a catalyst that speeds up oxidation of your skin oils, forming odorous molecules. So you are really just smelling yourself. The predominant compound attributed to the smell is 1-octen-3-one. The rest of the video is him trying to create and isolate this compound.
Have you read "Learning Rust With Entirely Too Many Linked Lists"[1]? I think it will be quite helpful for these kinds of situations. It walks you through all the tools the language makes available to you, and at the end, if you just want to write it how you would in C, you can always do it "unsafely" with raw pointers (which is no worse than C).
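To give a flavor, the raw-pointer version ends up looking something like this - a rough sketch in the spirit of the book's final chapter, not its exact code:

    // A singly linked stack managed the way you would in C, with ownership
    // handed back and forth via Box::into_raw / Box::from_raw.
    use std::ptr;

    struct Node<T> {
        elem: T,
        next: *mut Node<T>,
    }

    pub struct Stack<T> {
        head: *mut Node<T>,
    }

    impl<T> Stack<T> {
        pub fn new() -> Self {
            Stack { head: ptr::null_mut() }
        }

        pub fn push(&mut self, elem: T) {
            // Box::into_raw gives us an owning raw pointer, much like malloc in C.
            self.head = Box::into_raw(Box::new(Node { elem, next: self.head }));
        }

        pub fn pop(&mut self) -> Option<T> {
            if self.head.is_null() {
                return None;
            }
            // SAFETY: head is non-null and was created by Box::into_raw in push,
            // and nothing else has freed it.
            let node = unsafe { Box::from_raw(self.head) };
            let Node { elem, next } = *node;
            self.head = next;
            Some(elem)
        }
    }

    impl<T> Drop for Stack<T> {
        fn drop(&mut self) {
            // Free every remaining node; pop already does the right thing.
            while self.pop().is_some() {}
        }
    }

    fn main() {
        let mut s = Stack::new();
        s.push(1);
        s.push(2);
        assert_eq!(s.pop(), Some(2));
    }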
It's also really no better than C. Or at least no better than C++.
This is my main issue with Rust. It doesn't seem to really solve the right problem. I feel like it solves the easy problems that I already know how to solve, but as soon as I get to something where I really, truly feel like I want its help, the best solution is unsafe.
A complicated circular linked data structure is exactly where I want the language to be screaming at me if I make a silly error. But Rust doesn't even consider memory leaks to be errors...
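For instance, this compiles without complaint in entirely safe Rust and leaks both nodes, because leaking is explicitly outside the safety guarantees (a minimal illustration, not anyone's real code):

    // Entirely safe Rust: an Rc cycle that is never freed. The compiler has no
    // complaints, because leaking memory is outside Rust's safety guarantees.
    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });

        // Close the cycle: a -> b -> a. When main's handles are dropped, each
        // node's strong count only falls to 1, so neither allocation is freed.
        *a.next.borrow_mut() = Some(Rc::clone(&b));

        assert_eq!(Rc::strong_count(&a), 2);
        assert_eq!(Rc::strong_count(&b), 2);
    }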
I don't know, ensuring memory safety at compile time and safe concurrency is a pretty big win for me over C. I know many people who claim that they can write/debug C programs to be memory safe, but the real world would respectfully disagree.
Rust doesn't ensure memory safety or safe concurrency. It ensures memory safety - including data race safety, just one part of safe concurrency - assuming you never use any unsafe code and that the standard library is free of bugs. I'm happy to assume the standard library is free of memory safety bugs, because you have to trust something.
But I'm not happy trusting that dependencies aren't using unsafe code, and I'm not happy claiming that Rust ensures safety, when it ensures safety only if you assume that unsafe blocks aren't unsafe.
The problem is that you can't check unsafe blocks locally. Checking that each individual unsafe block doesn't have undefined behaviour requires checking the entire programme.
It's better than nothing, without a doubt, but it isn't safe.
Unsafe blocks are infectious, that's true, but it's possible to write safe APIs that limit that infectiousness to a single module. For example, even though Vec's implementation is crazy unsafe, you don't have to audit all the uses of Vec in safe programs -- a local audit of the Vec code can prove what we need to prove. This is the biggest benefit of the lifetime system and the borrow checker, that when we write piles of unsafe code, we can force safe callers to maintain our invariants.
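As a toy version of the pattern (a rough sketch, nothing like Vec's real code): the unsafety is confined to one module, and the public API makes it impossible for safe callers to break the invariant the unsafe code relies on.

    mod lowest {
        pub struct NonEmpty(Vec<i32>);

        impl NonEmpty {
            /// Only constructor: rejects empty input, establishing the invariant
            /// "self.0 always has at least one element".
            pub fn new(v: Vec<i32>) -> Option<Self> {
                if v.is_empty() { None } else { Some(NonEmpty(v)) }
            }

            pub fn first(&self) -> &i32 {
                // SAFETY: index 0 is always in bounds, because new() refused empty
                // vectors and no public method can remove elements. Auditing this
                // one module is enough; callers can't violate the invariant.
                unsafe { self.0.get_unchecked(0) }
            }
        }
    }

    fn main() {
        let v = lowest::NonEmpty::new(vec![3, 1, 2]).unwrap();
        assert_eq!(*v.first(), 3);
    }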
You can prove it, but you can also prove that a C++ programme has no memory safety bugs. And there are a lot of languages where you don't have to, where it's simply impossible to get memory safety bugs (assuming the runtime is safe).
For nontrivial libraries that use a lot of unsafe, it really is very difficult to know that all the uses of unsafe don't interact in some way to create unsafety. The scoped lock that had a problem in Rust 1.0 (or just before it?) is an example.
You can force callers to maintain your invariants in C++ too, simply by following some basic safety practices. Yes, people can still do things that are obviously unsafe and undefined in the code, but that's not a serious issue.
I still think Rust is better here. Don't get me wrong. But it's very hyped as 'safe and fast' when it just isn't safe.
If freedom from data races is one of the easily solved problems, I'll take it.
The problem with engineering solutions that approach the perfect end of the spectrum is that they get undue criticism for not being perfect enough. Rust is hopefully a stepping stone along the path towards more correct, less error-prone computation. Let's not throw the baby out with the bathwater.
I don't really have a rebuttal, more of a dismissal.
Almost all of the code I write just uses prebuilt data structures (other than structs to group things), and when writing this code I find the safety measures that Rust provides very convenient because I don't have to worry about things such as lifetimes. It is nice knowing that the compiler will let me know if I make an error.
However, yes, it doesn't solve the hard problem of complex circular structures. I don't see this as a major issue because when I am writing these I am carefully thinking about the structure anyway. So yes, while it would be nice to have these verified as well, I wouldn't want to take the tradeoff if it made the language much more complex.
Most of us write new, complex data structures that aren't part of the stdlib or a crate like once a year, at most. Those are hard in Rust if they involve circular pointers. They're hard in C/C++ too, but in a different way (easier to write the code, harder to be sure it's correct).
The idea that Rust would be no better than C/C++ because of those hard cases doesn't make much sense. This kind of work is unusual for most programming. And the claim that the rest of programming work is easy does not seem to bear out in practice.
And as has been becoming clear in this thread, if you're inventing new data structures, the odds are you overlooked an already existing better alternative.
It doesn't matter that it's 'kind of unusual', even though I contend that it isn't. Even if, for the sake of argument, we assume that it is, that doesn't change my point.
My point is that the whole point of Rust is supposedly that it
>is a systems programming language that runs blazingly fast, prevents segfaults, and guarantees thread safety.
except that when you look at any of the examples of code that really would benefit from the compiler's help, the compiler just throws its hands in the air and goes 'it's all up to you now'.
The problem is that Rust doesn't let you make a single assumption and let the compiler prove the safety of the code using that assumption. It just has a valve that you can hit that removes all guarantees.
If you could say 'this code is safe assuming that this FFI function doesn't exhibit undefined behaviour, please check that for me', or write a proof that says 'this actually is safe, because this pointer can only ever point into this valid memory or that valid memory, and this is why', then the compiler would still be useful.
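The closest thing today is convention rather than verification: you write the assumption down next to the unsafe block, but nothing checks it. Something like this (a sketch with a hypothetical FFI function, not a real library):

    extern "C" {
        // Hypothetical C function. Assumed (not checked): writes exactly `len`
        // bytes into `buf`, never reads it, and doesn't keep the pointer
        // after returning.
        fn ffi_fill(buf: *mut u8, len: usize) -> i32;
    }

    pub fn fill(buf: &mut [u8]) -> Result<(), i32> {
        // SAFETY: `buf` is valid for writes of `buf.len()` bytes because it comes
        // from a live &mut [u8]. Everything else rests on the unchecked assumption
        // documented above; the compiler simply takes our word for it.
        let rc = unsafe { ffi_fill(buf.as_mut_ptr(), buf.len()) };
        if rc == 0 { Ok(()) } else { Err(rc) }
    }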
Whether 'this work' (which is not just creating data structures, but anything the compiler doesn't understand - a much broader category) is unusual or not, IMO the whole appeal of Rust is that it makes doing that work easy. But it doesn't.
Rust just doesn't seem worth it, doesn't seem worth rewriting whole ecosystems of code. It doesn't give any actual safety.
> examples of code that really would benefit from the compiler's help
This seems to be the point of disagreement here, and I think evidence clearly shows that you are wrong. Sure, Rust doesn't help you when writing the implementation of e.g. circular data structures. But what it does do is provide, far beyond C or C++, the tools for the author of that data structure to enforce that it's used correctly.
And as mentioned upthread, most memory/concurrency (especially concurrency) bugs are not in the implementations of these structures, but in their use. So Rust is a fantastic win here, empirically speaking. Look at the rate of memory safety bugs in Rust programs vs C++ programs- Ripgrep vs grep, Servo/Quantum vs Firefox, etc.
* Most developers are not writing data structures, so optimizing for that seems unnecessary.
* There is work and research going into verifying unsafe code.
* I think historically we can see that most memory safety vulnerabilities are not going to be in some lower level data structure, which is well encapsulated and likely already built by someone else, but in the use of that data structure. In particular - sharing references and also invalidating data safely without leaving references to that data. Rust helps you here, and this seems like the far better target.
* Even if your Rust code uses unsafe, you still have benefits: you know where to audit for unsafety, you know where to pay extra close attention, and you can still write a large portion of your code in safe Rust.
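On that last point, the boundary can even be made mechanical: the unsafe_code lint guarantees at compile time that a whole crate or module contains no unsafe blocks at all (a minimal sketch):

    // Any unsafety has to live in the small dependencies you've chosen to audit;
    // this crate is guaranteed, at compile time, to contain no unsafe blocks.
    #![forbid(unsafe_code)]

    fn main() {
        // Uncommenting the next line would be a hard compile error in this crate:
        // let zero: i32 = unsafe { std::mem::zeroed() };
        println!("no unsafe blocks can sneak into this crate");
    }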
Thanks for the feedback! I fixed it in part 2, let me know if it's still an issue for you. Also, I did not know that's how it's implemented in real life (learned it from this thread). Perhaps I should try building that in Web Audio in another series!
I did some of that in part 2! For the most part, it is smart enough to do the right thing for simple operations like cutting and inserting rows/columns; however, there are indeed some edge cases that it doesn't handle well.