Author here; I can answer any questions you might have. It's still a really early project, but it's growing quickly! Anyone who's interested should join the discussion in our chat room here: https://gitter.im/rasa-editor/Lobby
This looks really neat. I really like the idea, and I think it has lots of potential.
That said, I'm going to try to moderate my enthusiasm until more stuff gets implemented. Thinking back to Yi [0], there is a lot of really cool stuff in that too (notably precise syntax highlighting, which relied heavily on lazy parsing). There are even a couple of papers written about it [1]! IIRC, the main problem (I think it has since been at least partially dealt with) was performance - people expect their editors to be blazingly fast. It will be interesting to see whether Rasa encounters similar problems.
Love it, especially the "What people are saying" part :D
* Excessively Modular! - some bald guy
* I'm glad I'm unemployed so I have time to configure it! - my mate Steve
* You should go outside one of these days. - Mother
> I'm glad I'm unemployed so I have time to configure it!
That raises the question of whether the author is also unemployed, and that's how he had the time to write it ;-) It would be interesting to know how much time he actually needed.
I think I remember the author posting on /r/haskell a while back about this project. He mentioned that in his editor even the core editor functionality is implemented as extensions (contrary to Yi), so the extension API is quite robust.
> Rasa is putting extensions first from the very beginning, I've read that Yi has plans to extract their renderers into extensions but hasn't been able to easily extract them at this point. In rasa, EVERYTHING is an extension (for better or worse I suppose we'll see).
I'm excited to see another option appear in text editing, and excited that it's written in Haskell, one of my favorite languages. But, why would I choose it over another text editor? The ability to customize it is neat, but editors like emacs can be customized to one's heart's content, and indeed can suffer from this (why did my editor suddenly become slow? why is my syntax highlighting or indentation not working correctly? who knows, it's the interaction of one of the 50 packages I have installed...).
Of course, you as the author are under no obligation to do anything save write whatever you want, but speaking personally, I would love to see some sort of demonstration of how the editor works, and/or a case made selling me on why I should choose it over other options.
I will say this for Emacs, though: while bad interactions between packages do happen, they're surprisingly rare given the number and variety of packages that are available.
This confirms my own experience. I discovered that programs written in (or using) dynamic languages like Lisp are surprisingly reliable (Emacs in particular).
Type safety won't protect us from broken software. Quote: "The most common bugs caught by static typing are also the least critical sort of bug."
Static typing helps a lot to catch basic type errors, but it is surely not the "Messiah" of code safety. In big systems, good old-fashioned testing still seems to be the most practical approach.
Haskell's claim "If it runs then it's likely correct" is a deception because no compiler can catch logical errors. This was also Ada's problem in the Ariane disaster, although Ada probably has the strongest type system besides Haskell's. You need special verification tools like Frama-C, and even those tools don't catch all errors. Haskell's safety is even more questionable in the face of the underlying libraries, which are written in C. Finally, the still unresolved Cabal hell speaks for itself. Stack works only because it is an isolated repository where the maintainers have to pay close attention to make sure that new code doesn't break other code.
> Haskell's claim "If it runs then it's likely correct" is a deception because no compiler can catch logical errors.
Yes it can, see dllthomas' updated examples.
> Finally, the still unresolved Cabal hell speaks for itself.
The major reason for cabal hell was that you couldn't install multiple versions of the same package. This has been resolved with cabal new-build and should become the default after a few releases of testing.
For the curious: cabal new-build accomplishes this by copying what the Nix package manager does. See:
One optics issue is that dependencies occasionally cause pain in every environment. When that happens in Haskell, it gets labeled "cabal hell" even though the worst issues have long since been fixed.
> Haskell's claim "If it runs then it's likely correct" is a deception
This is missing some context. If we generated random functions until we found one that compiles, then of course the result wouldn't be "likely correct". But that's not what people are doing - they are setting out to write correct code. If the types they use exclude programs that are almost, but not quite, correct, then when they actually hit something that compiles, it's likely to be correct as well. All of that said, it's true that the claim is sometimes made more strongly than it deserves to be.
> because no compiler can catch logical errors.
Not without my help. But I can certainly write my code such that the compiler will catch certain logical errors I'm likely to make. I have a pile of examples, but not the time to elaborate - I'll add them later.
Examples. The first few are in C, the last couple in Haskell because they need it.
At the most simple, consider a function that I'm calling like `compute_thing(bid, ask)`. Maybe that's in error, and it actually expects `(ask, bid)`. If it was defined to take primitives (`int` or whatnot) then that's a logical error the types don't catch. But if I wrap those primitives in a struct to get nominal typing (`typedef struct ask { int v; } ask_t`), it's caught.
This is only a small improvement. In code that's well exercised, this will probably be found by tests, and certainly you should make sure you have coverage. But an obscure corner case may only be run in tests where bid == ask, and then the values won't tell you anything's wrong. A bad value also doesn't tell you where the error is, whereas this use of nominal typing points you at the exact source location. (Named parameters would also catch this particular issue, where supported and used).
Slightly more complicated: give the same treatment to indexes - `foos[idx]` versus `lookup_foo(idx)` with a nominally typed argument. In the former case, there is room for me to have confused which index was talking about which array. For tests, this is possibly worse than the case above, because it's quite common for small tests to involve few elements, which means indexes are more likely to spuriously agree.
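For concreteness, here's a sketch of both ideas. The prose above describes the C version with structs; this is the same pattern rendered in Haskell with newtypes, and all the names (`Ask`, `Bid`, `FooIx`, etc.) are hypothetical:

    newtype Ask = Ask Int
    newtype Bid = Bid Int

    -- Passing arguments in the wrong order is now a compile error,
    -- caught at the exact call site.
    computeThing :: Bid -> Ask -> Int
    computeThing (Bid bid) (Ask ask) = ask - bid

    -- The same treatment for indexes: a FooIx can't be confused
    -- with an index into some other array.
    newtype FooIx = FooIx Int
    data Foo = Foo String

    lookupFoo :: [Foo] -> FooIx -> Foo
    lookupFoo foos (FooIx i) = foos !! i

Like the C struct wrapper, the newtypes compile away entirely, so there's no runtime cost.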
Now let's get tricky. I had a few threads (statically assigned to various roles, not a pool) managing resources they owned. Latency on the order of microseconds was vitally important, so I couldn't have threads waiting around on locks. I had to be careful that a function called on one thread didn't touch a resource owned by another thread.
If my access functions demanded a nominally typed token I passed between functions, I could detect exactly where I called something from an unintended thread. If this token took the form of an empty struct, all this checking happened at compile time with no runtime overhead at all (the exact same object code was emitted if I manually stripped out the tokens), and it completely eliminated one kind of concurrency bug - a kind that is often very difficult to detect and localize with tests. This made it very easy to move functionality between threads: I could make a small change and my compiler would point out the specific locations incompatible with that change.
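A Haskell analogue of that empty-struct token might look like this (the prose describes the C version; all names here are my own hypothetical ones):

    -- One empty type per thread role; a Token t is a zero-size value
    -- meaning "this code is running on thread t".
    data NetworkThread
    data RenderThread

    data Token t = Token

    -- Access functions demand the owning thread's token, so calling
    -- this from code that only holds a Token RenderThread won't compile.
    readNetworkQueue :: Token NetworkThread -> IO String
    readNetworkQueue Token = return "packet"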
Moving to Haskell, I had a Markov text generator, where the models were represented as a map from lists of strings to maps from strings to counts (`Map [String] (Map String Int)`). Merging two models was easy (`M.unionWith (M.unionWith (+))`) but if the two models looked back a different number of tokens then the result would be broken. I could hang that value on them as data, and check it manually... or I could carry it in the type and let the compiler check it for me.
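A minimal sketch of hanging that value on the type (the `Model` wrapper and the DataKinds machinery are my assumptions, not the original code):

    {-# LANGUAGE DataKinds, KindSignatures #-}

    import qualified Data.Map as M
    import GHC.TypeLits (Nat)

    -- The lookback length n is carried as a phantom type-level number.
    newtype Model (n :: Nat) =
      Model (M.Map [String] (M.Map String Int))

    -- Both arguments must now have the same lookback: merging a
    -- Model 2 with a Model 3 no longer typechecks.
    merge :: Model n -> Model n -> Model n
    merge (Model a) (Model b) = Model (M.unionWith (M.unionWith (+)) a b)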
In a more serious context, I was evaluating SQL I had parsed. I would produce "I have evaluated this bit of it" results that I could thread together. Adding a phantom parameter to their type, I could distinguish between "this is something that operates at the table level" and "this is something that operates at the row level" and I could not merge them without them being of the same type.
I then wrote functions to convert between the two. The one to take a "table" value to a "row" value (as one might when evaluating a sub-select in an expression) was a no-op. The one to go the other way (evaluating an expression in a selection list or WHERE or ORDER BY or ...) I wrote to demand that I introduce the values present in the current row.
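Sketched out (the names are hypothetical and the payload types are stand-ins, not the original code):

    data TableLevel
    data RowLevel

    -- The phantom tag records which level a partial result operates at;
    -- the payload here is just a stand-in for the real thing.
    newtype Evaluated level = Evaluated String
    newtype Row = Row [(String, String)]

    -- Table -> row is a no-op (e.g. a sub-select used in an expression)...
    atRow :: Evaluated TableLevel -> Evaluated RowLevel
    atRow (Evaluated e) = Evaluated e

    -- ...but row -> table demands the current row's values be introduced.
    atTable :: Row -> Evaluated RowLevel -> Evaluated TableLevel
    atTable (Row binds) (Evaluated e) = Evaluated (show binds ++ e)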
So consider: I have lists and sometimes I forget to add them to my dictionary in the right places - a logical error if ever there was one. Now my type checker will catch it.
> Static typing helps a lot to catch basic type errors, but it is surely not the "Messiah" of code safety.
I'd argue that type safety is only one benefit of strong types - the others being better documentation[1] and a design which is clearer and easier to reason about.[2] You can get both in a dynamically typed language, but they're much less common since the compiler doesn't require them.
[1] I've lost count of how many times I've looked at some function in Python or Javascript and had no idea what kind of values I was supposed to provide.
[2] My go-to example of this is how in Persistent (a Haskell ORM), values which have been inserted into the database (entities) and which have not have different types, since the non-inserted values don't know what their primary key is yet. This makes it much easier to reason about where the values are coming from and what needs to be done with them.
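A simplified sketch of that distinction (these are not Persistent's exact definitions, just the same shape):

    newtype Key record = Key Int

    -- What comes back from the database: the value paired with its key.
    data Entity record = Entity
      { entityKey :: Key record   -- known only after insertion
      , entityVal :: record
      }

    data User = User { userName :: String }

    -- Before insertion you only have a User; afterwards, an Entity User.
    insertUser :: User -> IO (Entity User)
    insertUser u = do
      let k = Key 1  -- stand-in for the actual database round-trip
      pure (Entity k u)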
> Stack works only because it is an isolated repository where the maintainers have to pay close attention to make sure that new code doesn't break other code.
In practice, the only breakage checked for is compilation errors. The onus is still on library authors to declare compatibility bounds in their cabal files, which is why [PVP](http://pvp.haskell.org/) is still a thing.
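For reference, PVP-style bounds in a .cabal file look something like this (the packages and version ranges here are illustrative, not recommendations):

    build-depends: base       >= 4.9 && < 4.10
                 , containers >= 0.5 && < 0.6
                 , text       >= 1.2 && < 1.3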
I'm also not sure how accurate it is to call it isolated - it's pretty much at the centre of the ecosystem at this point.
> Static typing helps a lot to catch basic type errors, but it is surely not the "Messiah" of code safety.
I've been wondering why that is, and one reason I can think of is that testing verifies values. This also implicitly verifies the types of those values.
Of course in production software type errors are rare. They're generally the first errors that pop up during debugging or testing. Which means the primary benefit of a basic static type system is speeding up the feedback loop of finding all your type errors, and reducing the need for testing.
The funny thing about that blog post is that the top 25 bugs "don't look like" type errors to the author, but almost all of them are type errors given some type system and type definition...and not just theoretical, but by existing languages and usages. Buffer overflow, for example, isn't a "type error" according to C, but it is according to Rust. As type systems become stronger and more advanced, more errors become type errors, which means you'll only recognize them as type errors if you use languages and constructs that make them type errors.
> Of course in production software type errors are rare.
Not according to static-typing fundamentalists.
> They're generally the first errors that pop up during debugging or testing.
Exactly, and dynamic languages tend to be highly interactive, whereas most languages with highly evolved static type systems tend to have very slow compilers. So the idea that the compiler for that language gives you useful feedback before your dynamic system gives you feedback from actually running the code is at best dubious.
> ...but almost all of them are type errors
I think you are confusing type errors with modeling errors, but that's very common.
Not in production software, no. Because even the shittiest dynamic language software still gets run once or twice before it gets put into production.
Type errors are extremely common during development, and that is where static types shine.
> Exactly, and dynamic languages tend to be highly interactive, whereas most languages with highly evolved static type systems tend to have very slow compilers. So the idea that the compiler for that language gives you useful feedback before your dynamic system gives you feedback from actually running the code is at best dubious.
My extremely slow Scala compiler can compile 50k lines from scratch faster than a boilerplate Rails project can even start up. And I never compile from scratch during development, because I have incremental compilation. Even as slow as the compiler is, it is never as slow as compiling and running code of similar complexity in a REPL, and it's not even close.
> I think you are confusing type errors with modeling errors, but that's very common.
Nope. Type errors. Just because they aren't called type errors in one language doesn't mean they are not type errors in another language with a different type system.
I see your "nope" and raise you a "nope, not at all". A lot of these errors have relevant information modeled as strings. Your static type checker is only going to tell you that these are two strings, and yes, they happen to be compatible. Congratulations!
Oh, you meant that these things (like SQL statements) should not be represented as strings? And that that modeling should include semantically distinct entities for user strings (which need quoting) and SQL syntax (which doesn't)? That's modeling, whether you do static type checking or not.
You mean it's impossible to have compile-time type safety of SQL queries, and that the compiler can't tell the difference between an SQL string and any other string? You're wrong.
No, I most emphatically did not say (or mean) that it is impossible to check this statically. I am saying that this is first a modeling problem: you need to have a model for your SQL. You know, things like
After you have that model for your SQL, you can decide whether you want to check it statically or not, but first you need the model.
And yes, some languages allow you to decompose compile-time strings into model objects. That still requires you to have a model first, and I have to admit I prefer not having magic strings, but rather polymorphic identifiers: http://objective.st/URIs
> So the idea that the compiler for that language gives you useful feedback before your dynamic system gives you feedback from actually running the code is at best dubious.
Working in Haskell, I get useful feedback about code before I write it! I find this very conspicuously missing when I work in Python. I can't ask "what's the shape of what I need to put here?"
I write quite a lot in both Haskell and Python. In Python, I pretty much never find myself wondering what shape of object I need to put somewhere, because the type system is so simple and the language is imperative, so there is rarely a need for any of the kind of elaborate, highly intricate types that one sees in Haskell. That plus having a simple debugger like pdb and easy access to print statements etc tends to make it a breeze. On the other hand, writing Haskell without a type system (if it were even possible) would be a complete nightmare, because of the very complex types (monad transformers, lenses, higher-order functions, etc.) required to express the same kinds of computations that are easy to express in an imperative language.
I definitely think Haskell's type system is a huge asset to writing correct programs, but I'm not sure that this particular strength is one that I would echo.
> In Python, I pretty much never find myself wondering what shape of object I need to put somewhere, because the type system is so simple and the language is imperative, so there is rarely a need for any of the kind of elaborate, highly intricate types that one sees in Haskell.
It doesn't take "elaborate types" - just the question of "what interfaces does this need to support?" is a hard one to answer if you don't already have correct (and sufficiently detailed) docs. "This needs a duck... does it need to walk? quack? both? fly?"
It's possible that the Python code bases I've worked in have been so poor that I need to ask these questions, but I don't know how I would have improved them.
> That plus having a simple debugger like pdb and easy access to print statements etc tends to make it a breeze.
But in order for pdb to work, I have to have already written the code. It's inherently "push based", whereas in GHCi (or with typed-holes) I can "pull". (Which isn't to say that sometimes "push based" questions aren't the ones I want to ask, or that it wouldn't sometimes be nice to have a better debugger in Haskell.)
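As a concrete example of "pulling": leave a typed hole where the code should go, and GHC describes the shape of what fits. This snippet intentionally doesn't compile; the error message is the feedback:

    import Data.Char (toUpper)

    shout :: String -> String
    shout = map _f
    -- GHC reports, roughly:
    --   Found hole: _f :: Char -> Char
    -- which answers "what's the shape of what I need to put here?"
    -- (toUpper fits; replacing _f with toUpper makes it compile).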
> It doesn't take "elaborate types" - just the question of "what interfaces does this need to support?"
I suppose what counts as "elaborate" is a matter of opinion, but Haskell types in everyday code often become very involved and intricate. Here's an example from a piece of code I wrote a while back:
    bindingsToSet :: Monad ctx =>
                     LEnvironment ctx ->
                     [Binding NExpr] ->
                     LazyValue ctx
    bindingsToSet env bindings = do
      let start = InProgress mempty
      finish <- flip execStateT start $ forM_ bindings $ \case
        NamedVar keys expr -> do
          -- Convert the key expressions to a list of text.
          keyPath <- lift $ mapM (evalKeyName env) keys
          -- Insert the key into the in-progress set.
          let lval = evalNExpr env expr
          get >>= lift . insertKeyPath keyPath lval >>= put
        Inherit maybeExpr keyNames -> forM_ keyNames $ \keyName -> do
          -- Evaluate the keyName to a string.
          varName <- lift $ evalKeyName env keyName
          -- Create the lazy value.
          let lval = evalNExpr env $ case maybeExpr of
                Nothing -> mkSym varName
                Just expr -> mkDot expr varName
          -- Insert the keyname into the state.
          get >>= lift . insertKeyPath [varName] lval >>= put
      -- Convert the finished in-progress object into a LazyValue.
      convertIP finish
Inside of this, we have a rather complex interplay between monad transformers, monadic bind, state, etc. Of course, to some Haskell devs this might be no big deal to write, but for me, there's no way I could get all of that correct without a type checker. And to that end, the Haskell type system and GHCi are an invaluable resource. However, I could very easily write equivalent code in Python without a great deal of effort, and all of that crazy juggling of monad transformers would turn into very straightforward for loops and mutable assignment. Which isn't to argue that Python is a superior solution to the problem, but...
> But in order for pdb to work, I have to have already written the code. It's inherently "push based", whereas in GHCi (or with typed-holes) I can "pull".
I disagree. In Python you can start with a pdb session that only gets so far, and figure out what you need to write based on inspecting the objects you have available in scope, testing out some functions on them, etc. In GHCi you can get help on what object is required to satisfy a type equation, but only after you first provide it with enough information that it has a meaningfully small set of free variables to solve. This of course doesn't mean that typed holes are not super useful, but I would argue that their usefulness is in part due to how complicated Haskell's type system is. Of course, by the same token, features like typed holes turn that complexity into a strength. The functionality is incredibly useful in Haskell, no argument there -- but I don't find myself needing or missing it in Python.
> I suppose what counts as "elaborate" is a matter of opinion
I think you got the wrong bit of that. I wasn't saying that I don't sometimes wind up with elaborate types in Haskell, for which checking and inference is even more helpful. I was saying that even in the incredibly simple case where I just want to know an interface I still find it quite useful, and conspicuously absent in Python when I reach for it.
> I disagree. In Python you can start with a pdb session that only gets so far, and figure out what you need to write based on inspecting the objects you have available in scope, testing out some functions on them, etc.
What you describe here is push based. I need to come up with data, pass it into bits of the system, go through it forward until I get to the bit in question, and hope I've considered all the important bits. Again, a push based approach to answering these questions is often a good one, and Python probably has an edge there. I find that pull based is also often useful, and Python lacks it entirely.
> In GHCi you can get help on what object is required to satisfy a type equation, but only after you first provide it with enough information that it has a meaningfully small set of free variables to solve.
I ask these questions regularly, and don't recall it ever breaking down in this way... perhaps I'm just not understanding what you're getting at?
> I don't find myself needing or missing it in Python.
Okay. I do. Repeatedly, every time I work in Python. Again, maybe I'm just faced with bad Python code? Maybe I've just done more Haskell, so built more of these habits? Maybe other pieces of our context are just different. Is your Python work usually more like greenfield development or are you needing to make changes to systems primarily built by other people? Or maybe it's just that we're different people :-P
> I think you are confusing type errors with modeling errors, but that's very common.
How would you distinguish between them? Strong/expressive type systems encode the modelling in a way which is mechanically verifiable, e.g. by not permitting negative or fractional numbers to be submitted to a function which indexes an array.
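For instance (with hypothetical names), a smart constructor plus a hidden data constructor makes the invalid indexes unrepresentable:

    -- In a real module you would export Index abstractly (mkIndex but
    -- not the Index constructor), so this is the only way to build one.
    newtype Index = Index Int

    mkIndex :: Int -> Maybe Index
    mkIndex n
      | n >= 0    = Just (Index n)
      | otherwise = Nothing

    -- `at` can assume a non-negative whole number: fractions are ruled
    -- out by Int, negatives by mkIndex.
    at :: [a] -> Index -> Maybe a
    at xs (Index n) = case drop n xs of
      (x:_) -> Just x
      []    -> Nothing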
> So the idea that the compiler for that language gives you useful feedback before your dynamic system gives you feedback from actually running the code is at best dubious.
It depends highly on how long it takes to run the code. I work on large pieces of software, both static and dynamic, which can take a long time to restart and test after changing the code (far longer than a static-language change-and-recompile cycle). In the dynamic systems it frequently takes all day to make what should be a simple change, because the turnaround time to test for simple errors is so lengthy and it requires multiple tries just to get the code to "compile" (I use this word loosely since there's no compiler; I mean that the types are correct, I'm calling functions that actually exist, accessing data members that actually exist, not using undefined variables, etc.).

Not only does it take far longer to purge all of the errors from newly written code in dynamic languages, but I also find that when working with dynamic code I didn't write myself, the lack of explorability (either manual or IDE-driven) hinders understanding of the code, so writing new code involves much more guesswork, which means I write far more incorrect code in the dynamic language.
Once the code gets to the point where it is "production ready", it's usually free of this sort of error, whether it is written in a dynamic or a static language. Otherwise it would fail and by definition would not merit the label "production ready". So I would tend to agree with the grandparent's statement that "in production software type errors are rare". The type of software I work on, though, tends to be in-house software that's a bit more bleeding edge: it's constantly being adapted to new situations, always needing to change, and never quite meeting the label "production ready" in the sense that it's not something QA would approve and we would ship to other users. For this kind of software, I would say that the parts written in dynamic languages have far more latent type errors than the static code. These are little landmines that don't go off in everyday use of the software, but users step on them all the time when they hit edge cases that no coder had enough imagination to test for.
In my own experience, writing code in a dynamic language can be faster than in a static language, but this usually falls apart around the point where the code grows to more than will fit on one monitor screen, or more than will fit in one person's head at any time. That's the point where in the dynamic language you start having to guess things, and in the static language the computer helps you avoid the guesswork. I would disagree with your point about slow compilers, because for smallish codebases static compilers are practically instantaneous anyhow, and for large ones the cost of compilation is still much smaller than the cost of one testing iteration of the application.
> long time to restart and test after changing the code.
So one of the reasons for having dynamically typed languages is that they allow you to modify the code without restarting the "program". In some non-trivial sense, there are Smalltalk images that have been running non-stop since ~1976 and have been transformed from running on 16-bit Altos to 64-bit x86s and various other machines.
The reason being dynamic is important for that sort of capability is that allowing temporary inconsistencies lets you incrementally get from point A to point B; it is also important for exploratory programming. While I don't know your circumstances, it sounds like you are paying the cost of a dynamic approach without reaping the benefits. That probably sucks.
I guess the important point here is that dynamic approaches enable certain capabilities. That's why they're useful. A lot of the criticism I see seems to boil down to "take a static language and the development practices associated with that and just strip away the types". That's not useful.
That said, there are obviously areas where one approach works better and other areas where another approach is more appropriate. For example, it could be that the structure of your programs/data is such that you couldn't do an incremental adjustment without restarting even if you had the technical capability (it sounds like you don't currently have that capability; correct me if I am wrong).
Your reasoning actually sounds very non-fundamentalist.
> [faster dynamic] usually falls apart around the point where the code grows to more than will fit on one monitor screen,
Static types have a now well-documented positive effect on the comprehensibility of foreign codebases, and that typically includes the code I wrote yesterday ;-). It's not a safety effect, though, and apparently it works even if the type system is not sound and therefore not safe.
> I would disagree with your point about slow compilers, because for smallish codebases static compilers are practically instantaneous anyhow, and for large ones the cost of compilation is still much smaller than the cost of one testing iteration of the application.
Have to disagree with you there. In my experience with two such languages, Swift and Scala, compile times can be (and typically are) painful even for fairly small codebases. Heck, even C++ is pretty awful in that regard. And as to restarting, see my previous comments.
I haven't tried Swift or Scala, but my experience is that I mostly work in C++ (in console video games). Even though there are thousands of files and millions of lines of code, the time it takes to recompile after a small change is typically 5-10 seconds, which is nothing compared to the time it takes to fix the mistake if I type the wrong thing, so the static compilation step is very welcome in this case. Compiling can take minutes with certain kinds of changes (e.g. changing the signature of a widely used function that is included in many other files), but that's also generally a case where the longer compilation time is welcome, because it tells me all of the myriad call sites where the function is now being used incorrectly and needs to be fixed. I have a lot of C# experience and generally find compilation times to be a non-issue there, even on large projects.
It was interesting what you said about the advantage of dynamic languages being the ability to make changes while the program is still running. How does this work in the face of code that crashes? Do you just build some kind of well-armoured inner loop with lots of exception handling that you never have to change, so that all of your changes are in modules which can fail without taking the system down? We have a Python-based "pipeline" which manages the building of all of our game assets and takes a long time to restart, since it has a database of gigabytes of information that it needs to reload on startup. And changing the code involves a restart, so the cost of getting the code wrong is a long wait. I know that Python does allow for reloading of modules, but the architect of this system has told me that it would be a bad idea due to possible data layout inconsistencies between the new code and the resident data, whereas reloading and deserializing the information is safer.
As you note, Elisp is very hard to write correctly when everything has to work together, but even just the dynamic typing and dynamic scope often get in the way of correctness. It's only through Herculean effort on the part of the maintainers that these things work at all -- it's pretty instructive to look at the issue trackers for some of the larger Elisp projects, e.g. magit.
It remains to be seen whether this editor can change that, but if the API/Plugin boundaries are strongly typed, I'd be reasonably optimistic that it's at least plausible that it could :).
Dynamic scoping and typing are what make Emacs easier to extend, rather than harder. It's not unusual for users to fix bugs and performance problems in Emacs packages with some choice redefinitions, global or scoped.
A typed API, on the other hand, will prescribe interactions between extensions. It's still possible with the right design to make things flexible - e.g. getting extensions to describe their logic in the form of data that can be modified by other extensions - but I think the axis will be between Emacs-style flexibility and rigidity of extension.
> Dynamic scoping and typing are what make Emacs easier to extend, rather than harder.
I think that's what this new editor is trying to test. I'm certainly not convinced that that's true. :)
> It's not unusual for users to fix bugs and performance problems in Emacs packages with some choice redefinitions, global or scoped.
Maybe those bugs shouldn't have existed in the first place? I don't want to be fixing bugs in my editor -- I just want it to work properly.
While outright crashes (as in SEGV) have been rare, there have been loads of times when Emacs has just gone into an infinite loop, or times where functions get the wrong type (or number) of arguments and you just get some weird "argp" (or whatever) error message in the status bar.
I don't think I'm an anomaly -- especially since I use very few extensions (at least AFAICT compared to what some real power-users do).
> A typed API, on the other hand, will prescribe interactions between extensions. It's still possible with the right design to make things flexible - e.g. getting extensions to describe their logic in the form of data that can be modified by other extensions - but I think the axis will be between Emacs-style flexibility and rigidity of extension.
The thing is: the API has some restrictions whether one formalizes them or not -- the difference is that in one case you'll just get weird/buggy behavior, whereas in the other you'll get a compilation error. Whether it's possible to formalize something "generic enough" is an interesting question.
While this looks appealing and I will probably try it, I'm not sure if I have cognitive room for another core tool configured in Haskell. Running xmonad is already like a commitment to periodic skirmishes with an angry badger.
Haha, I understand that feeling! It's mostly scripted using combinators made available by other extensions; you can just chain them together in `do` notation to do the things you want.
As a kind of generalized comment to the authors of tools written in smart-people languages: Y'all are great, but I sometimes suspect you'd get broader adoption of your stuff if you kept in mind that a bunch of us are clever enough to benefit from your work, but not always clever enough for the tools you're building it in...
For those people who might be scared off because it is Haskell, I would encourage you to look at the default config. [0]
Apart from import statements and two lines of boilerplate, it is just a newline-delimited list of modules. The only line this does not apply to is the last one, which sets the initial state of the buffer.
Writing your own extensions will probably always involve being comfortable with Haskell, but (assuming the editor gets a decent community) that won't be necessary for the majority of users.
This is amazing; I will give it a try over the weekend for sure. Is modularity the selling point here? The last thing I seek in an editor is modularity. This is like saying my browser is written in some pluggable framework... who cares, as long as it works?
Personally I wouldn't consider it without a GUI supporting variable-width text, and that seems far away at this point. But if someone writes an OpenGL renderer it might be worth a second look.