As someone who has been playing around with (and enjoying) Mojo, I have my doubts about how useful Mojo will end up being for your average scientist. You can't get performant code out of Mojo if you're not willing to learn some deeper programming concepts like SIMD or tiling.
I don't have the exact quote on hand, but in the Mojo Discord, Chris Lattner explicitly said he wants no "compiler magic" in Mojo. With that in mind, Mojo makes it a lot easier to do optimizations like SIMD vectorization by hand, but you will still have to do them manually. My guess is that many scientists who don't like programming would find it annoying to hand-write those kinds of optimizations. If you want a language that gives you nice, performant code on your first attempt, Julia is always a decent option.
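To make "vectorization by hand" concrete: it means restructuring a scalar loop to process fixed-width chunks yourself. This Python sketch (not Mojo, just an illustration of the shape of the work; `WIDTH` is an arbitrary stand-in for a SIMD register width) shows the kind of restructuring and tail-handling a scientist would have to write:

```python
# Sketch of manual "vectorization": process fixed-width chunks with
# per-lane accumulators, then handle the leftover tail separately.
WIDTH = 4  # stand-in for a SIMD register width

def scalar_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

def chunked_sum(xs):
    # Accumulate WIDTH partial sums "in parallel", then reduce them.
    acc = [0] * WIDTH
    n = len(xs) - len(xs) % WIDTH
    for i in range(0, n, WIDTH):
        for lane in range(WIDTH):
            acc[lane] += xs[i + lane]
    return sum(acc) + sum(xs[n:])  # remainder ("tail") handled separately

data = list(range(10))
assert scalar_sum(data) == chunked_sum(data) == 45
```

In real SIMD code the per-lane loop disappears into a hardware instruction, but the bookkeeping (chunking, lane accumulators, tail handling) is exactly what has to be written by hand.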
Julia has to bootstrap an ecosystem. If Mojo can borrow all of the successful Python libraries, that is worth a lot.
Still an enormous uphill battle, but slightly more tractable. Regardless, it is a rough place to be: for a staggering number of uses, Python is fast enough. The organizations that absolutely require top-tier performance already have the ability to use FFI. Instagram runs on Django, and I believe Python is still used for YouTube.
OTOH, Rust has shown that there is a reasonably sized market for performant, safe systems programming languages. I suspect it is being held back from wider adoption by its complexity. I see an opportunity for Mojo there, with a simpler model and the ability to gradually opt in to the more complex but higher-performance parts of the language.
Mojo has a good chance to target Python programmers who would have gone for Go or Java for better performance, and C++ programmers who need something like Rust, but simpler.
That wouldn't be my guess at what "no compiler magic" implies.
Compiler magic means the implementation doing things that the application can't. It reflects limitations or constraints on the target language. If the language is expressive enough you can do everything through library code.
This would mean you're able to do SIMD vectorisation by hand, but you're also able to run a compile-time transform that vectorises your code without needing to bind that transform into the implementation of the compiler.
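Mojo's actual mechanism here is compile-time metaprogramming, but the division of labor can be sketched even in Python with a decorator: the transform lives in ordinary library code that anyone can write, rather than inside the compiler. (`vectorized` is a hypothetical name, and decorators run at definition time rather than compile time, so this is only an analogy for the idea.)

```python
# A library-level "transform": lift a scalar function to operate
# element-wise over sequences, with no compiler involvement at all.
def vectorized(fn):
    def wrapper(xs):
        return [fn(x) for x in xs]
    return wrapper

@vectorized
def square(x):
    return x * x

assert square([1, 2, 3]) == [1, 4, 9]
```

The point is who gets to write the transform: in a "no compiler magic" design, it's library authors, not compiler developers.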
Thus the non-programmer scientist can use libraries, written by someone closer to that boundary, that do autovectorisation etc., without needing to wait for the core Mojo implementation to do it.
Won't the right abstractions like NumPy make it possible for researchers to obtain generally performant code, and lower the bar to write specialized, optimized code without having to drop all the way down to CUDA?
I think I would agree with you. In my opinion, that already exists and is decently mature. CuPy [0] for Python and CUDA.jl [1] for Julia are both excellent ways to interface with GPUs that don't require you to get into the nitty-gritty of CUDA. Both do their best to keep you at the array-level abstraction until you actually need to start writing kernels yourself, and even then, it's pretty simple. They took a complete GPU novice like me and let me write pretty performant kernels without ever having to touch raw CUDA.
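The two abstraction levels being contrasted can be sketched in pure Python (no GPU libraries here; SAXPY is just a stock example). The "kernel" view works one element per thread index; the "array" view is a whole-array expression, which is what CuPy and CUDA.jl let you stay at:

```python
def saxpy_kernel(a, x, y, out, i):
    # "Kernel" view: compute one element per (virtual) thread index i,
    # with explicit index bookkeeping.
    out[i] = a * x[i] + y[i]

def saxpy_array(a, x, y):
    # Array-level view: one whole-array expression, no indices.
    return [a * xi + yi for xi, yi in zip(x, y)]

x, y = [1.0, 2.0], [10.0, 20.0]
out = [0.0, 0.0]
for i in range(len(x)):           # the GPU runtime would launch these in parallel
    saxpy_kernel(2.0, x, y, out, i)
assert out == saxpy_array(2.0, x, y) == [12.0, 24.0]
```

Array libraries make the second form fast, so most users never need to drop to the first.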
One thing that concerns me so far is that nothing published by Modular or said by any of its leaders indicates that the language itself will ever be open-sourced. The Mojo FAQ says:
"Over time we expect to open-source core parts of Mojo, such as the standard library."
In the most recent keynote, Chris Lattner said the standard library will be open-sourced starting next year. I have never seen anything about the actual rest of the language. I worry that they don't actually plan to open-source all the core parts of Mojo and are just letting others put words in their mouth to hype up the language.
> "Drought and heat have already reduced global cereal production by as much as 10 percent in recent years, according to Steffen."
I'm not sure how they define this, because I find conflicting info elsewhere. Our World in Data has global cereal production only going up [0]. That data only goes until 2021, so there could have been a decrease in 2022, but then it would probably be wrong to say "recent years".
Paywalled, but what I should have clarified is income-generating assets. If you live on an income, inflation is a negative; individuals who mostly gain from assets aren't affected by it and thus grow their real wealth.
No one is forcing anyone to become owners of assets, but in practice anyone (even the poor) can become asset owners, as assets can be more than just a home. Putting $30/mo into financial assets is attainable by anyone.
Poor financial decision making will always be present, and the idea is to encourage (but not force) people to make rational economic decisions.
25% of Americans have 0 savings, and 8% do not have health insurance. There is a nontrivial number of people who struggle to put 30 USD/month into financial assets.
There's not a strong reason to use Mojo over Cython right now, but if Mojo can deliver on its claims, I think there will be. A borrow checker, better IDE support, function overloading, and better SIMD support are the things that stick out to me in Mojo's favor.
"As we continue to take a responsible approach to generative AI, we’re adding new Content Credentials which uses cryptographic methods to add an invisible digital watermark to all AI-generated images in Bing – including time and date it was originally created. We will also bring support for Content Credentials to Paint and Microsoft Designer."
This is really interesting to me. I've heard discussions about doing this, but is this the first time we're actually seeing it implemented in image software?
How robust is this watermark? I assume resizing and cropping won't remove it, but perhaps getting an AI to redraw it or shrinking and AI upscaling would.
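Part of the answer depends on which mechanism is in play. Content Credentials are primarily signed metadata bound to the file, which is cryptographically strong but brittle; the invisible watermark is a separate, more robust layer. This sketch shows why the naive binding breaks under any edit (the HMAC key is a hypothetical stand-in; real C2PA manifests use asymmetric signatures, not a shared key):

```python
import hashlib
import hmac

KEY = b"issuer-signing-key"  # hypothetical; real systems sign with a private key

def credential_for(image_bytes):
    # Bind a credential to the exact bytes of the image.
    return hmac.new(KEY, image_bytes, hashlib.sha256).hexdigest()

original = b"...image bytes..."
cred = credential_for(original)
assert hmac.compare_digest(cred, credential_for(original))             # verifies
assert not hmac.compare_digest(cred, credential_for(original + b"x"))  # any edit breaks it
```

So a crop or resize defeats the signed-metadata layer by design; surviving AI redraws or shrink-and-upscale is the job of the watermark, and how robust that layer is remains the open question.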
Content credentials are also included in images touched by Adobe Firefly. Hopefully they are interoperable implementations and can read each others’ watermarks.
I'm really interested in Mojo not for its AI applications, but as an alternative to Julia for high performance computing. Like Julia, Mojo is also attempting to solve the two-language problem, but I like that Mojo is coming at it from a Python perspective rather than trying to create new syntax. For better or for worse, Python is absolutely dominating in the field of scientific computing, and I don't see that changing anytime soon. Being able to write optimizations at a lower level in a Python-like syntax is really appealing to me.
Furthermore, while I love Julia the language, I'm disappointed in how it really hasn't taken off in adoption by either academia or industry. The community is small, and that becomes a real pain point when it comes to tooling. Using the debugger is an awful experience, and the VSCode extension that is the recommended way to write Julia is very hit-or-miss. I think it would really benefit from a lot more funding, which doesn't actually seem to be coming. It's not a one-to-one comparison, but Modular has received three times the funding of JuliaHub despite being much younger.
I was responsible for the S4TF effort at Google. In my opinion, it validated that some of the ideas are good (e.g. Graph Program Extraction is the algorithm that torch dynamo uses internally), that an efficient compiled language has benefits etc. However, I also learned that it should not be based on Swift and should not be based on TensorFlow. Other than those two things, everything is great ;-)
I’m a huge Julia fan, you can take a look at my posting history. I love Julia’s syntax, and some of its language ideas.
…BUT…
For my personal tastes, Mojo’s lack of garbage collection, Rust-like memory safety, and attention to ahead-of-time compilation put it way ahead. The vast pool of Python developers who can easily pick it up if interested is a big plus.
Julia is aimed at a somewhat different space, but there’s also a huge overlap.
Let’s hope for good interoperability between the two, it seems fairly straightforward…
Let's see how it plays out, given that they are focused only on AI workloads, and at some point those VCs will want their money back, which doesn't appeal to everyone.
I acknowledge that there is finally pressure in the Python community to tackle performance, but I don't see Mojo being the solution unless there is something that makes it take off.
Right now, I see that more likely coming from the Facebook, NVIDIA, Intel, and Microsoft efforts.
If you can use LLMs to translate your code from one language to another, why would you translate Python to Julia? Even the most ardent Julia supporters will admit that Julia code will be slower than optimized C and C++ code. Why not just use an LLM to rewrite your Python into those languages?
> Julia code will be slower than optimized C and C++ code
This isn't true. Optimized Julia will pretty much always tie optimized C/C++. This shouldn't be surprising. They use the same compiler and run on the same hardware and both let you use inline assembly where needed. Octavian often beats MKL at matmul, and the Julia math library is written in Julia and doesn't lose performance from doing so.
Rewriting to Julia instead of C/C++ has the benefit that the code is still readable and improvable by scientists who wrote the code in the first place.
Yeah, that's what I'm basically thinking: for scientists who already like Python because it's readable and mostly matches the mental model of what they care about, being able to catalyze the switch to a different, faster language that still reads nicely could really drive adoption. But it's a stretch for sure!
These are the docs for some of Mojo's higher-order functions that implement vectorization, parallelization, tiling, loop unswitching, etc.: https://docs.modular.com/mojo/stdlib/algorithm/functional
I do think they are a good idea and relatively easy to use; I'm just not convinced that the non-programmer scientist will like them.
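For readers who haven't opened those docs, the pattern is that you pass a worker function to a higher-order helper that handles the loop structure. This is a Python sketch of a `tile`-style helper in that spirit (a hypothetical stand-in, not Mojo's actual `algorithm.functional` API):

```python
def tile(work, tile_size, total):
    # Call work(start, size) over consecutive fixed-size blocks,
    # then once more for the leftover "tail" block.
    i = 0
    while i + tile_size <= total:
        work(i, tile_size)
        i += tile_size
    if i < total:
        work(i, total - i)  # tail tile

calls = []
tile(lambda start, size: calls.append((start, size)), 4, 10)
assert calls == [(0, 4), (4, 4), (8, 2)]
```

The appeal is that the user writes only the per-tile body; the blocking logic is reusable library code. Whether a non-programmer scientist finds passing functions around natural is exactly the open question above.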