This is exciting news! What's also exciting is that it's not just C++ that can run on this supercomputer; there is also good (currently unofficial) support for programming those GPUs from Julia, via the AMDGPU.jl library (note: I am the author/maintainer of this library). Some of our users have been able to run AMDGPU.jl's test suite on the Crusher test system (an attached testbed with the same hardware configuration as Frontier), as well as their own domain-specific programs that use AMDGPU.jl.
What's nice about programming GPUs in Julia is that you can write code once and execute it on multiple kinds of GPUs, with excellent performance. The KernelAbstractions.jl library makes this possible for compute kernels by acting as a frontend to AMDGPU.jl, CUDA.jl, and soon Metal.jl and oneAPI.jl, allowing a single piece of code to be portable to AMD, NVIDIA, Intel, and Apple GPUs, as well as CPUs. Similarly, the GPUArrays.jl library provides the same portability for idiomatic array operations, automatically dispatching to vendor-provided BLAS, FFT, RNG, linear solver, and DNN libraries when appropriate.
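To give a rough flavor of the array-level portability, here's a minimal sketch (not from any particular project; the GPU lines assume a working CUDA.jl or AMDGPU.jl installation and supported hardware):

    using LinearAlgebra

    # The same generic function works on CPU Arrays, CUDA.jl's CuArray,
    # AMDGPU.jl's ROCArray, etc.; for GPU arrays the multiply dispatches to
    # the vendor BLAS (cuBLAS/rocBLAS) behind the scenes.
    gram(A) = A * A'

    gram(rand(Float32, 256, 256))   # CPU

    # GPU usage (assumes the relevant package and a supported GPU):
    # using CUDA;   gram(CuArray(rand(Float32, 256, 256)))
    # using AMDGPU; gram(ROCArray(rand(Float32, 256, 256)))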
I'm personally looking forward to helping researchers get their Julia code up and running on Frontier so that we can push scientific computing to the max!
Sxmo uses ModemManager[0] for calls and texts, and mmsd-tng[1] for MMS support. It generally works quite well (in my limited experience), modulo some dropped messages (which might already be fixed?) and the modem occasionally filling up with messages that take a while to clear, causing newer messages to not be delivered.
As of 1.7, mmsd-tng is a lot more stable in dealing with transient network issues, which were the cause of the dropped messages (dogfooding your own work helps!).
I think almost all other distros have moved to the ModemManager stack (to my knowledge, UBports is the only one that still uses oFono). Phosh, Plasma Mobile, and Sxmo, the major players in phone DEs, all use MM.
I think the OP's point is that running these optimizations in production as-is is dangerous because future code changes in the various places could accidentally impede the optimizer's ability to apply all the transformations that users of the codebase expect.
The obvious solution is to query the optimizer for the final transformation as actual Julia code, replace the pre-transform code with the post-transform optimized code, and disable any further optimization (aside from very trivial transforms that aren't worth directly including). This ensures that one doesn't accidentally lose the amazing benefits of this symbolic optimization approach on a given piece of code, and that production code always keeps its performance and correctness.
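As a rough sketch of that workflow (using Symbolics.jl purely as an illustration of an optimizer that can hand back its result as plain Julia code; the optimizer under discussion may expose a different interface):

    using Symbolics

    @variables x y
    # Let the "optimizer" do its work symbolically; this reduces to 4x*y.
    optimized = simplify((x + y)^2 - (x - y)^2; expand=true)

    # Emit the optimized result as a plain Julia expression that can be pasted
    # into the codebase and committed, so later refactors can't silently defeat
    # the optimization.
    f_expr = build_function(optimized, x, y)
    println(f_expr)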
It is certainly a shame, but I'm confident that Dagger and its new DTable should be able to cover all of the ground that JuliaDB covers, while being far easier to maintain. I think JuliaDB had some great ideas, but it didn't go far enough with composability, instead opting for a limited set of table types (no internal DataFrames.jl support), focusing entirely on loading from CSV (a horrible data format, albeit a very common one), and supporting only one CSV reader/writer (CSVFiles.jl). Of course, all of this could be fixed; but with Julia Computing no longer funding its direct development, and no one dedicating the large amount of time necessary to fix all the outstanding issues and begin developing and merging features, JuliaDB isn't moving anywhere fast.
Thankfully, Dagger is under active maintenance, and has financial support through the Julia Lab (by employing me). Krystian Guliński, the DTable's author and maintainer, is also interested in developing and maintaining the DTable further (having created it as part of his studies), and will hopefully stay on the Dagger team for the foreseeable future.
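For anyone curious, here's roughly what using the DTable looks like (a minimal sketch; the API is still young and may shift as development continues):

    using Dagger

    # Build a DTable from any Tables.jl-compatible source, partitioned into
    # chunks that Dagger can process in parallel.
    dt = Dagger.DTable((a = rand(100), b = rand(100)), 25)

    # map and filter operate row-wise and return new DTables;
    # fetch materializes the result.
    squares = map(row -> (c = row.a^2 + row.b,), dt)
    big     = filter(row -> row.a > 0.5, dt)
    fetch(squares)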
With that PR in place, it should be possible to define a "storage device" which is backed by a database. I haven't had a chance to actually try this, since the PR still needs quite some work and testing, but it's definitely something on my radar!
Definitely not dead; Vega is well supported, and with some tweaks, Polaris probably works too (although it definitely was broken in HIP around ROCm 4.0.0 or so).
I think AMD has some work to do on non-C++/Python ecosystem engagement for sure, but they've built a foundation that's quite easy to build upon to get excellent performance and functionality; AMDGPU.jl is a testament to that.
For kernel programming, https://github.com/JuliaGPU/KernelAbstractions.jl (shortened to KA) is what the JuliaGPU team has been developing as a unified programming interface for GPUs of any flavor. It's not significantly different from the (basically identical) interfaces exposed by CUDA.jl and AMDGPU.jl, so it's easy to transition to. I think the event system in KA is also far superior to CUDA's native synchronization system, since it allows one to easily express graphs of dependencies between kernels and data transfers.
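For a flavor of what that looks like, here's a minimal sketch against KA's event-based launch API (the CUDADevice()/ROCDevice() backends live in the CUDAKernels.jl/ROCKernels.jl packages):

    using KernelAbstractions

    # A simple SAXPY kernel; @Const marks x as read-only.
    @kernel function saxpy!(y, a, @Const(x))
        i = @index(Global)
        @inbounds y[i] = a * x[i] + y[i]
    end

    x = rand(Float32, 1024); y = zeros(Float32, 1024)

    # Instantiate for a backend (CPU here; CUDADevice()/ROCDevice() work the
    # same way) with a workgroup size of 64.
    kernel = saxpy!(CPU(), 64)

    # Launches return events; passing them as dependencies lets KA build a
    # graph of kernels and copies instead of blocking after every launch.
    ev1 = kernel(y, 2f0, x; ndrange=length(y))
    ev2 = kernel(y, 3f0, x; ndrange=length(y), dependencies=(ev1,))
    wait(ev2)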
AMD has done great work in a very short amount of time, but let's not forget that they're still very new to the GPU compute game. The ROCm stack is overall still pretty buggy, and definitely hard to build in ways other than what AMD deems officially supported.
As AMDGPU.jl's maintainer, I certainly appreciate more users adopting AMDGPU.jl if they have the ability to, but I don't want people to think that it's anywhere close to CUDA.jl in terms of maturity, overall performance, and feature-richness. If you already have access to an NVIDIA GPU, CUDA.jl is painless to set up and should work really well for basically anything you want to do with it. I can't say the same about AMDGPU.jl right now (although we are definitely getting there).
This is an issue with AMD not wanting to provide long-term support for the code paths in ROCm components necessary to enable ROCm on these devices. My hope is that Polaris GPU owners will step up to the plate and contribute patches to ROCm components to ensure that their cards keep working, since AMD is unwilling to do the legwork themselves (which is fair; they aren't nearly as big or rich as NVIDIA).
Library link: <https://github.com/JuliaGPU/AMDGPU.jl>