One aspect is that Tesla is all cameras, whereas Rivian sees it as important to have a multi-sensor suite (cameras, ultrasonic, radar, and in Gen 3, lidar). TBH, as a customer I prefer knowing that the latter is protecting me rather than cameras alone.
Thanks for the link; at first glance it seems like a fascinatingly rich font.
(by the way, to overcome the char/font limit they could publish JuliaMono2, 3, 4, and so on, and then set those as fallback fonts to reach full coverage...)
Thanks for pointing it out. I mostly program in the ASCII range. Myna covers a reasonable subset of Unicode, but one can indeed use JuliaMono as a fallback for Myna to cover the rest of Unicode if one wishes.
Anyone have experience with AVP+ALVR vs. the Valve Index? I have only used the latter, but I'm interested in whether ALVR works well enough to replace the Index.
If anybody else is wondering what the parent means by ALVR: "Air Light VR", software for streaming games from your PC to a VR headset: https://github.com/alvr-org/ALVR
Especially because Julia has pretty user-friendly and robust GPU capabilities such as JuliaGPU and Reactant[2], among other options for compiling generic Julia code to the GPU.
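For example, ordinary broadcast syntax already runs as a fused GPU kernel through CUDA.jl (one of the JuliaGPU packages). A minimal sketch, assuming an NVIDIA GPU with CUDA.jl installed:

    using CUDA

    # Plain Julia broadcast: CUDA.jl compiles this into a single fused
    # GPU kernel; no hand-written C++ involved.
    x = CUDA.rand(Float32, 1024)
    y = 2f0 .* x .+ 1f0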
I get the impression that most of the comments in this thread don't understand what a GPU kernel is. High-level languages like Python and Julia are not themselves running as the kernel; they are calling into kernels usually written in C++. The goal is different with Mojo, as it says at the top of the article:
Julia is high level, yes, but Julia's semantics allow it to be compiled down to machine code without a "runtime interpreter". This is a core differentiating feature from Python. Julia can be used to write GPU kernels.
It doesn’t make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python if Numba were built in: your code gets JIT-compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.
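A toy sketch of that (plain Base Julia, no packages assumed): the loop below is JIT-compiled to native machine code on first call, so iterations carry no interpreter overhead.

    # Summing an array with an explicit loop: compiled to native code,
    # so it runs at C-like speed instead of going through an interpreter.
    function mysum(xs::Vector{Float64})
        s = 0.0
        for x in xs
            s += x
        end
        return s
    end

    mysum(rand(10^6))              # first call compiles, then runs natively
    # @code_native mysum(rand(3))  # inspect the generated machine code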
People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels.
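For instance, here is a sketch of a hand-written kernel using CUDA.jl's @cuda macro (assuming an NVIDIA GPU; the restrictions mean no allocation or dynamic dispatch inside the kernel):

    using CUDA

    # y .+= a .* x, written as an explicit GPU kernel in (restricted) Julia.
    function axpy!(y, a, x)
        i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
        if i <= length(y)
            @inbounds y[i] += a * x[i]
        end
        return nothing
    end

    x = CUDA.rand(Float32, 4096)
    y = CUDA.rand(Float32, 4096)
    @cuda threads=256 blocks=cld(length(y), 256) axpy!(y, 2f0, x)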
The author posits that people don't like using LLMs with Rust because LLMs aren't good with Rust, and that people would therefore migrate towards languages that do well with LLMs. However, if that were true, then Julia would be more popular, since LLMs do very well with it: https://www.stochasticlifestyle.com/chatgpt-performs-better-...
Does the linked study actually check that the LLM solves the task correctly, or just that the code runs and terminates without errors? I'm bad at reading, but the paper feels like it's saying the latter, which doesn't seem that useful.
I mean, just to steelman the argument, the "market" hasn't had time to react to what LLMs are good at, so your rebuttal falls flat. I think the original statement is more a prediction than a statement of current affairs.
Also, the author didn't say that "ease of use with LLMs" is the _only_ factor that matters. Julia could have other things wrong with it that prevent it from being adopted.
> A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.
Isn’t this a very good analogy? What’s so crude about it?
Another tool in this regard is https://github.com/JuliaLang/AllocCheck.jl, "a Julia package that statically checks if a function call may allocate by analyzing the generated LLVM IR of it and its callees using LLVM.jl and GPUCompiler.jl"
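From the package README, usage looks roughly like this: a function wrapped in @check_allocs throws an error if the compiler cannot prove the call allocation-free.

    using AllocCheck

    # Errors at call time if the compiled method may allocate.
    @check_allocs multiply(x, y) = x * y

    multiply(1.5, 2.5)               # fine: Float64 multiply doesn't allocate
    multiply(rand(3, 3), rand(3, 3)) # throws: matmul allocates its result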
I'm currently working on a rewrite of a small model from JAX to Julia, and I'm finding the Julia code much easier to write, more concise, and the debugging tools easier to work with.