I think the question is whether sort should return a new, sorted array or whether it should sort the array in place. In functional languages it is typically the former; in imperative languages, the latter.
It can be quite useful to have non-destructive sorting in imperative languages as well. Hence Python introduced 'sorted' even though '.sort()' preceded it.
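For anyone unfamiliar, the difference between the two is easy to demonstrate in Python itself:

```python
data = [3, 1, 2]

# sorted() is non-destructive: it builds and returns a NEW sorted list,
# leaving the original untouched.
new_list = sorted(data)

# list.sort() is destructive: it sorts in place and returns None
# (a deliberate signal that the list itself was mutated).
same_list = list(data)          # copy first, so both behaviours are visible
result = same_list.sort()
```

After this runs, `data` is still `[3, 1, 2]`, `new_list` is `[1, 2, 3]`, `same_list` has been reordered in place, and `result` is `None`.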
It is rewritten in a different language, and many people find Rust easier to read; it also has better type-hint support for IDEs, etc. And you do not lose all the safety: many rules are still enforced, such as safe linking and no undefined functions.
Unsafe Rust means that all parts of code which do illegal pointer magic are explicitly marked with an "unsafe" keyword. You can now go one by one and fix them.
If the async methods return quickly, then yes, there is some overhead. But the point of asynchronous methods is that they can perform slow operations while yielding the calling thread back to the thread pool. For example, Alpha might be reading a gigabyte file, Beta is sending it over the network, and Gamma is waiting on a timer that fires tomorrow. Async/await is syntactic sugar for running this in a pleasant way.
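A toy Python sketch of that Alpha/Beta/Gamma scenario (with `asyncio.sleep` standing in for the real I/O, and all the names being placeholders):

```python
import asyncio
import time

# Stand-ins for the slow operations described above. asyncio.sleep
# simulates I/O that yields control instead of blocking a thread.
async def alpha_read_big_file():
    await asyncio.sleep(0.2)        # pretend: reading a gigabyte file
    return "file contents"

async def beta_send_over_network():
    await asyncio.sleep(0.2)        # pretend: a slow upload
    return "sent"

async def gamma_wait_for_timer():
    await asyncio.sleep(0.2)        # pretend: waiting on a timer
    return "timer fired"

async def main():
    start = time.monotonic()
    # The three awaits overlap, so total wall time is roughly the
    # longest single operation, not the sum of all three.
    results = await asyncio.gather(
        alpha_read_big_file(),
        beta_send_over_network(),
        gamma_wait_for_timer(),
    )
    elapsed = time.monotonic() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
```

All three operations run concurrently on a single thread, which is exactly the "free up the thread while waiting" property being discussed.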
Here’s the rub: that time will have to be spent no matter what. Async just “frees up the thread”, but the system as a whole will still have to do the same processing. Async doesn’t make a gigabyte sized I/O take less time or somehow “go away”.
In fairly extensive tests I found that it is actually pretty rare for threads to be the limiting resource, so async provided no benefit at all.
You pretty much need this specific scenario:
- A large auto-scale pool
- Tuned well to keep load at 80-90%.
- High concurrent connections per instance.
- Slow dependencies that return small volumes of data.
- A much larger back end than the front end. Think 100 VMs at the front and 10K at the back.
Violate any of the above and the benefits seem to evaporate. I’m sure that this scenario is common at FAANG sized orgs, but is extremely rare elsewhere.
JetBrains' IntelliJ and friends do that with language plugins. They detect the language from keywords and patterns and can do highlighting and intellisense for multiple languages in a single file. I have not tried this with LSP, though, because LSP support only landed in the latest version.
Yeah, (neo)vim can do that for non-LSP setups with a filetype like html.css or similar, which loads both sets of syntax rules, possibly with customizations to determine when it switches between the two modes. But again, I don’t know how that works with LSP.
I think the article misses the point. Many people are using ChatGPT to create relatively small but high-quality datasets, because it is very easy. Stanford created an amazing dataset for their Alpaca model for just $500.
If you are building a competitive model (such as Meta's Llama), then of course you don't use ChatGPT-generated data, because you have the money to download the whole internet.
Yeah, just to be clear, I think using ChatGPT for creating small datasets for niche models makes total sense. I'm talking about creating foundation models which is a different thing.
I'm a daily user of Papertrail. It's starting to grow too expensive for our team ($100/16 GB). Still, I haven't found an alternative with instant text search and infinite scroll. The ability to quickly scroll through the logs has proven crucial for debugging critical problems. I have a hard time filtering through JSON logs in Grafana, Elasticsearch, and cloud-native log engines.
NFCtron is a well-established fintech startup. We develop an NFC cashless payment system, an analytics platform, and a ticket portal for festivals and long-term events. We currently serve 300+ events in Czechia and are slowly expanding abroad.
Experience is welcome, but we are looking for young people seeking an opportunity to grow. We built our technology stack from the ground up and are now looking for a helping hand on our journey to the stars!
https://www.nfctron.com/cs/kariera