
One aspect is that Tesla is all cameras, whereas Rivian sees it as important to have multi-sensor suites (cameras, ultrasonic, radar, and in Gen 3: lidar). TBH, as a customer I prefer knowing that the latter is protecting me rather than cameras alone.


It also suggests to me a more-professionalized R&D culture.

Tesla claiming it planned to implement self-driving with just cameras has always meant I don't trust anything they touch.


Also see the JuliaMono typeface: https://juliamono.netlify.app

It was designed as a comprehensive monospaced typeface to cover Julia's extensive Unicode support.


Thanks for the link; at first glance it seems like a fascinatingly rich font (by the way, to overcome the per-font character limit they could publish JuliaMono2, 3, 4, and so on, and set those as fallback fonts to reach full coverage...)


Thanks for pointing it out. I mostly program in the ASCII range. Myna covers a reasonable subset of Unicode, but one can indeed use JuliaMono as a fallback for Myna to cover the rest if one wishes.


Anyone have experience with AVP+ALVR vs the Valve Index? I have only used the latter, but I'm interested in whether ALVR works well enough to replace the Index.


If anybody else is wondering what the parent means by ALVR: "Air Light VR", software to stream games from your PC to a VR headset: https://github.com/alvr-org/ALVR


Especially because Julia has pretty user-friendly and robust GPU capabilities such as JuliaGPU[1] and Reactant[2], among other generic-Julia-code-to-GPU options.

1: https://juliagpu.org 2: https://enzymead.github.io/Reactant.jl/dev/
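
As a rough sketch of the array-level, "generic Julia code" path (assuming CUDA.jl and an NVIDIA GPU here; Metal.jl, AMDGPU.jl, and oneAPI.jl expose the same array interface, and Reactant's tracing workflow looks different):

    using CUDA

    # Nothing GPU-specific in this function; it is ordinary Julia code.
    f(x) = 3f0 * x^2 + 2f0 * x + 1f0

    x = CuArray(rand(Float32, 10_000))   # move data to the GPU
    y = f.(x)                            # broadcasting compiles f into a fused GPU kernel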


I get the impression that most of the comments in this thread don't understand what a GPU kernel is. High-level languages like Python and Julia are not what runs as the kernel; they call into kernels that are usually written in C++. The goal with Mojo is different; as it says at the top of the article:

> write state of the art kernels

You don't write kernels in Julia.


>You don't write kernels in Julia.

The package https://github.com/JuliaGPU/KernelAbstractions.jl was specifically designed so that Julia code can be compiled down to GPU kernels.

Julia is high level, yes, but its semantics allow it to be compiled down to machine code without a runtime interpreter. This is a core differentiating feature from Python. Julia can be used to write GPU kernels.
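
For example, a minimal KernelAbstractions.jl sketch (the function name and sizes are just illustrative; swap CPU() for CUDABackend(), ROCBackend(), etc. to target a GPU):

    using KernelAbstractions

    # One kernel definition in plain Julia; the backend decides what it compiles to.
    @kernel function vadd!(c, @Const(a), @Const(b))
        i = @index(Global)
        @inbounds c[i] = a[i] + b[i]
    end

    a = rand(Float32, 1024); b = rand(Float32, 1024); c = similar(a)

    backend = CPU()                               # or CUDABackend() from CUDA.jl, etc.
    vadd!(backend)(c, a, b; ndrange = length(c))
    KernelAbstractions.synchronize(backend)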


It doesn’t make sense to lump Python and Julia together in this high-level/low-level split. Julia is like Python if Numba were built in: your code gets JIT-compiled to native code, so you can (for example) write for loops to process an array without the interpreter overhead you get with Python.

People have used the same infrastructure to allow you to compile Julia code (with restrictions) into GPU kernels.
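
A tiny illustration of the first point (the function name is just for illustration): the loop below is JIT-compiled to native code for Vector{Float32} on the first call, so there is no per-iteration interpreter overhead.

    function sumsq(xs)
        s = zero(eltype(xs))
        for x in xs
            s += x * x
        end
        return s
    end

    sumsq(rand(Float32, 1_000_000))   # compiles once, then runs as native machine code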


I'm pretty sure Julia does JIT compilation of pure Julia to the GPU: https://github.com/JuliaGPU/GPUCompiler.jl


> you should use one of the packages that builds on GPUCompiler.jl, such as CUDA.jl, AMDGPU.jl, Metal.jl, oneAPI.jl, or OpenCL.jl

Not sure how that organization compares to Mojo.
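
For what it's worth, a hand-written kernel with CUDA.jl looks roughly like this (a sketch assuming an NVIDIA GPU; AMDGPU.jl, Metal.jl, and oneAPI.jl follow the same pattern with their own index intrinsics):

    using CUDA

    # Each GPU thread handles one element.
    function cuda_vadd!(c, a, b)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(c)
            @inbounds c[i] = a[i] + b[i]
        end
        return nothing
    end

    a = CUDA.rand(Float32, 1_000_000)
    b = CUDA.rand(Float32, 1_000_000)
    c = similar(a)

    threads = 256
    blocks  = cld(length(c), threads)
    @cuda threads=threads blocks=blocks cuda_vadd!(c, a, b)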


Julia's GPU stack doesn't compile to C++. It compiles Julia straight to GPU assembly.
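
You can see that directly with CUDA.jl's reflection macros (a small sketch assuming an NVIDIA GPU; the kernel is just illustrative):

    using CUDA

    function double!(c, x)
        i = threadIdx().x
        @inbounds c[i] = 2f0 * x[i]
        return nothing
    end

    c = CUDA.zeros(Float32, 32)
    x = CUDA.ones(Float32, 32)

    # Prints the PTX (GPU assembly) generated for this Julia function;
    # the pipeline is Julia IR -> LLVM IR -> PTX, with no C++ in between.
    @device_code_ptx @cuda threads=32 double!(c, x)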


See the new cuTile architecture in CUDA, designed from the ground up with Python in mind.


The author posits that people don't like using LLMs with Rust because LLMs aren't good with Rust, and that people would then migrate toward languages that do well with LLMs. However, if that were true, Julia would be more popular, since LLMs do very well with it: https://www.stochasticlifestyle.com/chatgpt-performs-better-...


Does the linked study actually check that the LLM solves the task correctly, or just that the code runs and terminates without errors? I'm bad at reading, but the paper feels like it's saying the latter, which doesn't seem that useful.


I mean, just to steelman the argument, the "market" hasn't had time to react to what LLMs are good at, so your rebuttal falls flat. I think the original statement is more a prediction than a statement of current affairs.

Also, the author didn't say that "ease of use with LLMs" is the _only_ factor that matters. Julia could have other things wrong with it that prevent it from being adopted.


> A crude analogy is the travel of sound waves in air: if you yell at someone, they will hear you long before any single air molecule makes it from here to there.

Isn’t this a very good analogy? What’s so crude about it?


Another tool in this regard is https://github.com/JuliaLang/AllocCheck.jl, "a Julia package that statically checks if a function call may allocate by analyzing the generated LLVM IR of it and its callees using LLVM.jl and GPUCompiler.jl"
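
Usage is roughly as in the package's README (the function here is just an example):

    using AllocCheck

    # The call errors with AllocCheckFailure if the compiled method may allocate.
    @check_allocs multiply(x, y) = x * y

    multiply(1.5, 2.5)                 # fine: Float64 multiplication does not allocate
    multiply(rand(3, 3), rand(3, 3))   # throws: the matrix product allocates its result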


I'm currently working on a rewrite of a small model from JAX to Julia, and I'm finding the Julia code much easier to write and more concise, and the debugging tools easier to work with.


I’ve had a good time dabbling with Metal.jl: https://github.com/JuliaGPU/Metal.jl
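
For anyone curious what that looks like, a minimal element-wise kernel is along these lines (a sketch assuming an Apple Silicon Mac; names are illustrative):

    using Metal

    function vadd!(c, a, b)
        i = thread_position_in_grid_1d()
        if i <= length(c)
            @inbounds c[i] = a[i] + b[i]
        end
        return nothing
    end

    a = MtlArray(rand(Float32, 1024))
    b = MtlArray(rand(Float32, 1024))
    c = similar(a)

    threads = 256
    groups  = cld(length(c), threads)
    @metal threads=threads groups=groups vadd!(c, a, b)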


Same. It can even run realtime workloads (audio).


