It's not open source if you forbid certain people and companies from using it. One big difference between open source and public domain is that public-domain code doesn't force anyone to redistribute their changes.
I've had several projects that I didn't want forked, especially by a company with a marketing budget, so I chose not to distribute them under an open source license. There's nothing wrong with that. Companies have sold copies of source to paying customers, so that's an option too, but I don't know of any license like that written for the public to use (copying a company's license text is itself a copyright violation).
Every time I read this article I regret it. I literally mean every time, 100% of the time. Judging by the title, I didn't expect his reason to be "engineers working outside their area of expertise". I've seen good engineers figure out problems outside their expertise plenty of times, so that's not a convincing reason either.
I feel like this article is the equivalent of 16 paragraphs stating that you're likely to be correct only 10% of the time when you guess a random number from 1 to 10.
In just about every language I've seen people use .clone rather than deal with ownership problems, so I suspect that in a lot of cases a GC can be just as fine, or faster. That said, I'm comfortable with memory management and would rather use C or C++ if I'm writing fast code.
Like in cases where you can't use Rust (i.e., an existing codebase)? Sure, that's what Fil-C is good for. My point is that Fil-C doesn't solve the problem Rust does; it's more of a band-aid. (Maybe my comment was misunderstood because of the sell/well typo.)
Also, I think there's a huge difference between a GC and the fact that some people use .clone() somewhere.
? Putting a program into a safe container or isolation boundary (which is roughly what a GC is in this context) makes it memory safe. That's not an interesting observation. It also makes it significantly slower, to the point of not being competitive anymore.
I believe the claim is that there's nothing in the C standard that requires implementations to be unsafe. If they wanted to, they could bounds-check pointers, check that allocations are still alive when pointers are dereferenced, etc., and still conform to the standard.
Nothing in the C standard requires bytes to have 8 bits either.
There's a massive gap between what C allows, and what real C codebases can tolerate.
In practice, you don't have room to store lengths alongside pointers without disturbing sizeof and pointer<->integer casts. Fil-C and ASAN need to smuggle that information out of band.
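To make that concrete, here's a sketch of the kind of fat pointer a checked implementation might carry around (the layout and names are hypothetical, not Fil-C's actual representation), along with the two assumptions it breaks:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical fat pointer a bounds-checking C implementation
       might use. Field names are illustrative only. */
    typedef struct {
        char  *addr;  /* the address the program manipulates */
        char  *base;  /* start of the underlying allocation */
        size_t size;  /* allocation length, for bounds checks */
    } fat_ptr;

    int main(void) {
        /* Real codebases bake in the assumption that a pointer is one word: */
        printf("%zu vs %zu\n", sizeof(char *), sizeof(fat_ptr));

        /* ...and they round-trip pointers through integers, which a
           multi-word pointer can't survive without losing its metadata: */
        int x = 0;
        uintptr_t bits = (uintptr_t)&x;  /* must capture the whole pointer */
        int *back = (int *)bits;         /* and must come back intact */
        return *back;
    }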
In October alone I've seen 5+ articles and comments about multi-threading, and I don't know why.
I've always said that if your code uses locks or atomics, it's wrong. Everyone says I'm wrong, but then you get things like what's described in the article. I'd like to recommend a solution, but there's pretty much no reasonable way to implement multi-threading when you're not an expert. I've heard Erlang and Elixir are good, but I haven't tried them so I can't really comment.
> I've always said that if your code uses locks or atomics, it's wrong. Everyone says I'm wrong, but then you get things like what's described in the article.
Ok, so say you're simulating high-energy photons (x-rays) flowing through a 3D patient volume. You need to simulate 2 billion particles propagating through the patient in order to get an accurate estimate of how the radiation is distributed. How do you accomplish this without locks or atomics, without the simulation taking 100 hours to run? Obviously it would take forever to simulate one particle at a time, but without locks or atomics the particles will step on each other's toes when updating the radiation distribution in the patient. I suppose you could have 2 billion copies of the patient's volume in memory, so each particle gets its own private copy, and then merge them all at the end...
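(For what it's worth, the merge idea does scale if it's one private copy per thread rather than per particle. A rough OpenMP sketch; deposit() and the grid size are made up, and each thread pays one grid's worth of memory:)

    #include <stddef.h>

    #define NVOX (256 * 256 * 128)  /* voxels in the patient volume (made up) */

    extern double deposit(long particle, size_t voxel);  /* hypothetical physics */

    void simulate(double *dose, long nparticles) {
        /* OpenMP gives each thread a private copy of dose[] and sums
           the copies back together at the end: no locks, no atomics. */
        #pragma omp parallel for reduction(+ : dose[0:NVOX])
        for (long p = 0; p < nparticles; p++) {
            size_t v = (size_t)p % NVOX;  /* stand-in for particle transport */
            dose[v] += deposit(p, v);
        }
    }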
I'm saying that if you're not writing multi-threaded code every day, use a library. The library can use atomics/locks internally, but you shouldn't use them directly. If the library is designed well, it'd be impossible to deadlock.
With a library that encapsulates a small number of patterns (like message passing) you'll be very limited.
If you never start learning about lower-level multi-threading issues, you'll never learn them. And it's not _that_ hard.
I'm not writing multi-threaded code every day (far from it), but often enough that I can write useful things (using shared memory, atomics, mutexes, condition variables, etc.). And I'm looking forward to learning more, understanding the various issues better, and learning new patterns.
I do write code that uses multi-threading every day, and I usually write a few lockless functions every month for the in-house library I use. I've been considering writing an article on atomics after all the mistakes and bad code I've seen.
A problem with writing such an article is that if I don't show real code, people might think I'm exaggerating; and if I do show real code, it'd look like I'm calling someone a bad programmer.
I've certainly seen my share of buggy multi-threaded programming. On the other hand, that's nothing compared to all the buggy and bad code I've seen overall. And I don't think it's going to get better by telling people not to even try.
I'm very doubtful that multi-threading can be fully abstracted behind a library. Simple message passing covers a lot of use cases, but not everything by far. I've also seen the JavaScript model work fine, but it's not real multi-threading (no parallelism). As for async frameworks, they're restrictive too, and they come with a lot of complexity.
To clarify, by "your code" I mean your code excluding the library. A good library would make it impossible to deadlock. When I wrote mine, I never called outside code while holding a lock, so it was impossible for it to deadlock. My atomic code had auditing and tests. I don't recommend people write their own threading library unless they want to put a lot of work into it.
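In miniature, the rule looks like this: a one-slot queue where the mutex is always released before any caller code runs, so user callbacks can never execute while the lock is held (a pthreads sketch, not the actual library):

    #include <pthread.h>
    #include <stdbool.h>

    /* One-slot queue. Items are copied out under the lock; user code
       only ever runs after the mutex has been released. */
    typedef struct {
        pthread_mutex_t mu;
        pthread_cond_t  not_full, not_empty;
        void *item;
        bool  full;
    } slot_t;

    void slot_push(slot_t *s, void *item) {
        pthread_mutex_lock(&s->mu);
        while (s->full)
            pthread_cond_wait(&s->not_full, &s->mu);
        s->item = item;
        s->full = true;
        pthread_cond_signal(&s->not_empty);
        pthread_mutex_unlock(&s->mu);  /* released before the caller continues */
    }

    void *slot_pop(slot_t *s) {
        pthread_mutex_lock(&s->mu);
        while (!s->full)
            pthread_cond_wait(&s->not_empty, &s->mu);
        void *item = s->item;          /* copy out while locked... */
        s->full = false;
        pthread_cond_signal(&s->not_full);
        pthread_mutex_unlock(&s->mu);
        return item;                   /* ...process with no lock held */
    }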
People mess up the ordering all the time. When you mess up locks, you get a deadlock; when you mess up an atomic, you get items in the queue dropped or processed twice, or some other weird behavior you didn't expect (like waking up the wrong thread). You just get hard-to-understand race conditions, which are always a pain to debug.
Just say no to atomics (unless they're hidden in a well-written library).
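The classic version of the mistake, in C11 atomics (a deliberately broken sketch; the names are made up):

    #include <stdatomic.h>

    int data;                 /* plain, non-atomic payload */
    atomic_int ready;

    void producer(void) {
        data = 42;
        /* BUG: relaxed ordering lets the flag become visible before
           data does; this store needs memory_order_release. */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
    }

    int consumer(void) {
        /* BUG: should pair with memory_order_acquire. */
        while (!atomic_load_explicit(&ready, memory_order_relaxed))
            ;
        return data;          /* may read a stale value */
    }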
People mess up any number of things all the time. The more important question is: do you need to take that risk in a particular situation? I.e., do you need multithreading? In many cases, for example HPC or GUI programming, the answer is yes: you need multithreading to avoid blocking and to get much higher performance.
With a little experience and a bit of care, multithreading isn't _that_ hard. You just need to design for it, and you can reduce the number of critical pieces.
Are you saying GUIs in general don't need multithreading, or just that you think you haven't needed it so far? Or do you use some high-level async framework that hides the synchronisation bits (at the cost of async-type complexity)?
I use a high-level async framework. It has worked extremely well so far. I do need to add code occasionally, but that's because it's an in-house lib that isn't even a year old yet.
I've been meaning to ask you: what's the motivation for your project? Why would you want a safe C? When I saw the headline I was worried that all my runtime code would break (or get slower), because I do some very unsafe things (I have a runtime that I compile with -nostdlib).
I'm also tempted to write a commercial C++ compiler, but it feels like a big ask; paying for a compiler sounds ridiculous even if it reduces your compile time by 50x.
That's the best answer. Just wondering because you have a lot of experience and many people disagree with me.
How realistic is a C++ compiler that's 50x faster than clang for debug builds? Assuming (1) there's enough work for each core and (2) there are so many files/headers that a unity build takes a quarter of the time of a rebuild using individual files (this theoretical compiler would be multi-threaded).
I'd call it not terribly slow. IIRC it didn't hit 100k lines per second on an M2, while I saw over two million with my own compiler (I used multiple instances of tcc, since I didn't want to deal with implementing debug info on Mac).
I don't remember; I grabbed a project whose name I can't recall, compiled something else first to warm up the compiler, then compiled the project. I tried several times and it seemed to be consistently < 100K.
Yes. At one point I was thinking I should write an x86-64 codegen for my own compiler, but using tcc I was able to reach 3 million lines per second (using multi-threading and multiple instances of tcc), so I called it a day. I never did check whether the linking happens in parallel.
I'm writing an editor. Could you explain the use case to me? I looked it up, and I don't exactly understand the reason for it, besides that it might be fun to look at.
I really didn't like that other editors would lose text if you pressed undo and then typed. The way my undo works is: if you type "a b c" and hit undo twice (so it's just "a"), then type "d", then undo twice, it'll restore "a b c".
Many times, I write a bunch of code before realising that a lot of it (but not all of it) is incorrect. At this point, being able to switch between two different points of history and selectively copy-paste the stuff that was correct is a godsend.
This is especially useful if you undid some operations, typed a little and then realised that you forgot to copy some important stuff that was strewn around in the old version.
That's one reason why I implemented it so it wouldn't lose information.
Hmm. I plan to implement a diff. The way I have it, you could copy the current source, hit undo back to the version you want to compare against, then use the clipboard as the version to diff with (vscode calls this "Compare Active File with Clipboard"). If you mess up, you can still hit undo to step through all the previous edits you've made, since it's not lossy. Would this be as good for your use case?
A tree is essentially a nice GUI on top of what you already have. It allows me to visually click (or navigate) to a specific point in history, instead of remembering to hit Undo 15 times.
AFAIU, it also allows you to redo, even if you accidentally added a letter after undoing. That accidental edit then adds a branch to the tree instead of replacing all the redo entries.
Just a thought: do you want it as a tree? It seems like it'd make more sense if there were a bar, or a key you could press, to go back and forward to the changes you want. I don't really understand why having it as a tree is helpful.
I'm not going to say I won't implement it; I'm just saying I don't understand why you want it that specific way. Are the changes you want always many minutes apart, and is that why you want it as a tree? To find the one that happened many minutes before another? I know some people like how vscode has a timeline that says how many minutes/hours ago a change was made, so they can pick it out.
I type "A B C D".
Then I undo thrice to get "A".
Then I type to get "A E".
And now I realise that what I actually want is "A E B D".
My edits have created a branch in history. And I need to switch to the old branch and copy stuff twice (first "B", then "D"). That is why a tree is important.
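A minimal sketch of the data structure (the names are made up): each edit points at its parent, and typing after an undo appends a sibling instead of clobbering the old redo path, so "A B C D" stays reachable next to "A E":

    #include <stdlib.h>

    /* Undo walks up to the parent; redo walks down to a child.
       More than one child means the history has branched. */
    typedef struct edit {
        const char   *text;      /* the inserted text, e.g. "E" */
        struct edit  *parent;    /* undo target */
        struct edit **children;  /* redo candidates */
        size_t        nchildren;
    } edit_t;

    edit_t *do_edit(edit_t *cur, const char *text) {
        edit_t *e = calloc(1, sizeof *e);
        e->text = text;
        e->parent = cur;
        cur->children = realloc(cur->children,
                                (cur->nchildren + 1) * sizeof *cur->children);
        cur->children[cur->nchildren++] = e;  /* old redo path survives */
        return e;
    }

    edit_t *undo(edit_t *cur) { return cur->parent ? cur->parent : cur; }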
I haven't used undo trees heavily, so I don't remember how neovim/emacs displays them. I think I get it well enough that if I try it out I'll be able to understand what you mean.