opticfluorine's comments | Hacker News

I wonder if you could generate it via a Roslyn incremental source generator instead of as a file to bypass this limit. I'm guessing not, but it does sound like fun.

You can totally use source generators for that.

You're only allowed up to 65535 locals, but this includes hidden locals, which the compiler adds if you're compiling in debug mode.

So you have to make sure to compile in release mode just to get the full 16-bit range of locals.


Minor nit, Joule-Thomson is not just the ideal gas law - it is a separate thermodynamic effect entirely. Case in point, for certain gases the change in temperature due to Joule-Thomson has the opposite sign that you would predict from the ideal gas law alone.

This has interesting applications. For example, you can exploit this with dilute metal vapor in an expanding helium gas to cool the metal vapor to very low temperature - the Joule-Thomson expansion of helium increases the helium's temperature by converting the energy of the intermolecular forces into heat. This draws out energy from the metal vapor. If done in a vacuum chamber, then in the region before the shockwave formed by the helium, the supercooled metal atoms will form small van der Waals clusters that can be spectroscopically probed in the jet. This was an interesting area of study back in the 80s that advanced our understanding of van der Waals forces.
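
For reference (standard thermodynamics, not something from the comment above), the effect is usually quantified by the Joule-Thomson coefficient:

  \mu_{JT} = \left( \frac{\partial T}{\partial P} \right)_H = \frac{V}{C_p} \left( \alpha T - 1 \right)

where \alpha is the isobaric thermal expansion coefficient. For an ideal gas \alpha T = 1, so \mu_{JT} = 0 and throttling produces no temperature change at all; real gases can have \mu_{JT} of either sign, which is why helium and hydrogen warm on expansion near room temperature while most other gases cool.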


I have occasionally, just for fun, written benchmarks for some algorithm in C++ and an equivalent C# implementation, then tried to bring the managed performance in line with native using the methods you mention and others. I'm always surprised by how often I can match the performance of the unmanaged code (even when I'm trying to optimize my C++ to the limit) while still ending up with readable and maintainable C#.


JIT compilers can outperform statically compiled code by analysing at run time exactly what branches are taken and then optimising based on that.


Does this include the GC at the end of it all? Because if that happens after the end timestamp it's not an exact comparison. I read something once about speeding up a C/C++ compiler by simply turning free into a no-op. Such a compiler basically allocates more and more data and only frees it all at the end of execution, so then doing all the free calls is just wasted CPU cycles.


Could you please share some benchmark code? It would be incredibly useful as a learning aid!


I came across this a few months ago when I was evaluating open source installer options for my own open source project. I have no issue with charging for binaries while the source is available under an OSI license, but this from the README rubbed me the wrong way:

"To ensure the long-term sustainability of this project, use of the WiX Toolset requires an Open Source Maintenance Fee. While the source code is freely available under the terms of the LICENSE, all other aspects of the project--including opening or commenting on issues, participating in discussions and downloading releases--require adherence to the Maintenance Fee.

In short, if you use this project to generate revenue, the Open Source Maintenance Fee is required."

I'll give the benefit of the doubt and assume this is just a difficult concept to succinctly explain in a short paragraph. But that summary - that revenue-generating use requires payment - feels misleading to me. Under their license, nothing stops me from creating my own build from source and using it per the terms of the MS-RL license, including for commercial purposes. So to me it feels like a scare tactic to coerce commercial users into becoming sponsors for the project.

I certainly understand the challenges faced by open source maintainers today, but the specific approach taken here just doesn't feel ethical to me. I ended up passing on WiX for that reason even though I'm not a commercial user.


Isn't it just a clear statement that they aren't going to give commercial users support for free?

I know you are saying it isn't clear, but your quote literally includes the statement "While the source code is freely available under the terms of the LICENSE".


I personally think this last sentence from my quote makes it unclear:

"In short, if you use this project to generate revenue, the Open Source Maintenance Fee is required."

Perhaps I'm being too semantic, but I don't feel that is an accurate representation of the license terms involved here.


It could add 'and expect active support FROM US' and be more accurate.

I guess it's treating 'if you are generating revenue and need support you're gonna be demanding as hell' as implicit?


Start-ups and smaller companies that are extremely cash strapped are willing to take an opensource project, compile it themselves, turn it into deployment artifacts and manage that whole lifecycle. There is a threshold where paying someone to manage and certify the lifecycle of tools is more valuable than keeping it in house.

This is pushing those enterprise customers - the ones that just use and update binary releases because they don't want to take on the compliance risks of first-party support - to pay for official versions.


I agree with your point. In the name of promoting basic numeracy:

"""

Sign up for GitHub Sponsorship and create the tiers: Small organization (< 20 people): $10/mo Medium organization (20-100 people): $40/mo Large organization (> 100 people): $60/mo

"""

You are beyond 'cash strapped' if $10/month for something as fundamental as this breaks the bank. The fully loaded cost of a single US software developer is already above $100/hour.


It’s $10/mo, and then something like 15 min (call it $50) per month of everyone’s time in admin: chasing down and filing receipts, reconciling to bank statements, etc.

If you’re a founder doing your own finances, well, every additional little monthly charge, even if it’s just $1, is quite annoying:

Filing and reconciling 12 receipts takes say 1 hour per year, what if you’re using 20 dependencies? That’s an extra 3-5 days per annum of admin.


One nice thing about GitHub sponsorship is that there is only one bill for the sponsor, and one can support NN projects/creators there. I think it is even bundled with the regular Github invoice?


Sure, but that also doesn't scale reasonably and is entirely a facile argument. My original comment supports organizations paying this price instead of dealing with internal compliance burdens. Looking at one of the package lock files for a previous company I still occasionally contract for, there are 9400 dependencies referenced.

So, in the name of promoting basic numeracy and taking into account the realities of scale: matching that cost for those dependencies (this is a >100-person company) would be $560k per month. That gets you minimal support, just a guarantee that you can submit issues. No guaranteed security maintenance, compliance, or governance of the project.

You can spin up a very strong developer team for forking and maintaining an internal copy of opensource projects at that cost and a lot of large companies do just that. Should they contribute those changes back? Sure if that made sense.

A lot of the time, in my experience, that internal copy is stripped to the bones of functionality to remove the surface area of vulnerabilities, if the useful piece isn't extracted into the larger body of code directly. It's less functional, with major changes specific to that environment. Would the upstream accept that massive gutting? Probably not. Could the company publish their minimal version? Sure, but there are costs there as well and you DO have to justify that time and cost.

Would a company in-house the support and development of a tool over $40/month? Absolutely not; for a one-off case that's probably fine. If you want to meaningfully address the compensation issue from enterprises, open source single-project subscriptions aren't going to be the answer.

I would LOVE to see more developer incentive programs, but one-by-one options aren't scalable, and most projects don't want to provide the table-stakes level of support required of any vendor they work with. It's not optional for those organizations, it's law and private contracts.


Note that the package.lock file is not the place to look for your OSMF dependencies. That file will list your project's dependencies and all of their dependencies and so on and so on. You want to look at the list of packages in your package.json file. That will almost certainly be an order of magnitude (or two) smaller.

For example, IIRC, GitHub (all of GitHub) calculated they had 660 direct dependencies. That's still a lot but it's not 9400. :)


OSMF could have gotten a name like "WiX Toolset Maintenance Fee", similar to how the Apache License got its name.


Why? It's not specific to the WiX Toolset at all. Other projects can adopt the Open Source Maintenance Fee with no changes (and some have), or they can change it if they want.

WiX is just the first project to use the OSMF because I needed a project to "debug" any issues in the OSMF system. As we get all the issues resolved, we may see the OSMF be adopted widely... or not.


> The fully loaded cost of a single US software developer is already above $100/hour.

To be pedantic, it can be $0 if the developer is you yourself, or your friends, wives, husbands and other relatives.


The only objection is that monthly fees are super annoying. I'd much prefer an annual one. :shrug:


You can pay annually. GitHub Sponsors allows that.


Yes, just a couple of minutes setting up a Github action on a fork, and you're good to go.


Yep, and now you have about half a million lines of code* to maintain as well. Have fun with it!**

* At last count the WiX Toolset had 589,719 loc, or 444,936 if you skip comments and whitespace.

** This is the point: maintaining successful (and often non-trivial) projects requires a good bit of work.


You can always just merge in the latest changes from the upstream project with a click or two. No need to maintain it on your own.


They actually provide the github action they use to build the releases in their repo already, so you could likely get this done in under 5 minutes.


And that's what a number of organizations have set up since March.


No. Not really. There are 406 forks, and ~10 were created in the last 5 months. The other people on this thread are more correct.


The number of publicly visible forks does not represent the number of organizations compiling their own binaries and internally mandating the use of such binaries.

Mind you, I never implied that there are thousands or hundreds of such cases. But there are some.


Oh, sure there are some. There were some before. It's Open Source after all. That's kinda' the point. :)

If more consumers choose to take on the work of maintaining their own fork because of the OSMF, that's okay too. I believe we are more likely to get contributions if more developers are in the code instead of just consuming binary builds. That's another small reason why I believe the OSMF can work.


If you read the comments on the GitHub issue, the guy seems more than reasonable. My understanding is that they want you to pay if you are making money. My guess is that if you are just a one-person show with a just-started product, they probably won't care much.

Here is their sponsorship page: https://github.com/sponsors/wixtoolset


Yeah, that's basically it.


> I'll give the benefit of the doubt and assume this is just a difficult concept to succinctly explain in a short paragraph.

It is challenging to describe the concept succinctly, especially as there are lots of varied expectations people have about how Open Source projects work. I'm definitely open to suggestions on how to improve the text.


I think they're trying to say that if you are talking to them on behalf of a revenue-generating entity, then you better pay them to talk to them about the project.

Feels like pay-to-interact iff one of the parties interacting is a profit-making entity.


I have an RTX 4060 with latest drivers, KDE, and Arch. Wayland works perfectly for me. Maybe Debian has some outdated packages that haven't caught up yet?


"Works on my machine, just use my OS" isn't a solution to my problem, whereas X11 is a solution to my problem.


Wasn't suggesting at all that you use my distro or that you can't use X11 as your solution. Debian is great and I use it for all of my servers. I'm just responding to the assertion that Wayland doesn't work with NVIDIA today, which is really only true if you are using older packages for a more stable distro. Nothing wrong with that, but it's not accurate to represent the current state of Wayland based on a distro known for using older packages.


You're literally running Debian. It's Debian. It's old, it's outdated, yes that's your problem!

Listen, I run Debian too. But I'm not going to get online and complain about X Y Z not working when I'm running a package from 3 years ago. Please, be for real.


>It's old, it's outdated, yes that's your problem!

No, it's stable, it's reliable, it's the solution to all of the problems I had on Arch, Fedora, and other rolling releases.

And again, Nvidia drivers work perfectly right out of the box on X11.

Wayland? That's a new problem.


You don't have to preach Debian to the choir man - I run Debian.

We're talking about very new developments here. You're running years old packages. Okay? That's not going to work.

When you're running Debian, it's expected you're going to be 3-5 years behind the Linux userspace status quo. So it's absolutely fine you're on X11. I have a desktop on Bookworm running X11 on Nvidia - works great, I love it. I also have a very, very new laptop running Tumbleweed on Wayland and kernel 6.15. X really struggles with new hardware in a way Wayland does not. For me on that computer, Wayland is better in a plethora of ways. I am a bit forced to run a very new kernel and Mesa and all that due to running bleeding edge hardware.


This definitely resonates with me. One of my favorite pastimes is to code away on a 2D multiplayer RPG engine, and I've ended up doing at least a few of the projects on this list as part of that (ECS framework, voxel renderer, physics engine). Integration with a scripting engine like Lua is another fun one for a list like this, especially if you try to do it in an idiomatic way for your main programming language and/or leverage code generation and metaprogramming.

One thing I've found I enjoy is working on a larger-scope toy project composed of many loosely coupled systems. It works well for me because no matter what type of project I'm interested in working on, I can usually find something applicable within the larger project. Currently on my to-do list are behavior trees and procedural terrain generation, and honestly I don't know how I'm going to decide which to do first.


I wonder, how close does Godot get with its web export support? The "authoring tool" seems pretty good, and it exports to WASM.


Godot is too complicated compared to Flash. Have you ever used Flash? In Flash, you could literally take the brush from the toolbox, draw a circle, and that was your player character. That was it. No importing assets, no creating sprites, no nothing. You literally draw the circle right there with vector art tools, and it can be whatever you want.

And those were THE best vector art tools, because when you drew a shape and the shape overlapped with another, it automatically erased the other shape like you would expect it to in a raster graphics editor. In pretty much every other software I tried, e.g. Inkscape, Affinity, Corel Draw, Illustrator, you just get two separate shape objects one on top of the other. They seem to be designed for drawing the outlines, not to actually paint with brushes. Flash understood what was intuitive for artists.

Honestly, the older I get the stranger I feel about the fact that there was a brilliance in creating interfaces for people to be productive with back then that seems to be completely gone. I think this may be in part because desktop applications are unusual nowadays, but it's just really strange that the things I remember seeing have been done exactly once and never copied by anybody despite how well they worked.


It doesn't. In high school, Visual Basic (the closest thing to Godot that I actually saw) seemed to have about a 50% pickup rate based on final project quality, whereas with Flash nearly every person or group had a viable thing to show off for the final project.


Having just gone through the exercise of integrating Lua with a custom game engine over the past few weeks, I have to echo how clean the integration with other languages is.

It's also worth noting that the interface is clean in such a way that it is straightforward to automatically generate bindings. In my case, I used a handful of Roslyn Incremental Source Generators to automatically generate bindings between C# and Lua that matched my overall architecture. It was not at all difficult because of the way the interface is designed. The Lua stack together with its dynamic typing and "tables" made it very easy to generate marshallers for arbitrary data classes between C# and Lua.

That said, there are plenty of valid criticisms of the language itself (sorry, not to nitpick, but I am really not a fan of one-based indexing). I'm thinking about designing an embedded scripting language that addresses some of these issues while having a similar interface for integration... Would make for a fun side project one of these days.


> sorry, not to nitpick, but I am really not a fan of one-based indexing

It is very funny how this is the one criticism that always gets brought up. Not that other problems don't exist, but they're not talked about much.

Lua's strength as a language is that it does a lot quite well in ways that aren't noticeable. But when you compare things to the competition then they're quite obvious.

E.g. Type Coercion. Complete shitshow in lots of languages. Nightmare in Javascript. In Lua? It's quite elegant, but most interestingly, effortlessly elegant. Very little of the rest of the language had to change to accommodate the improvement. (Exercise for the reader: Spot the one change that completely fixes string<->number conversion)

Makes Brendan Eich look like a total clown in comparison.
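
A minimal illustration (my own sketch; my guess at the exercise's answer is the dedicated '..' concatenation operator, so arithmetic and concatenation never have to guess each other's intent):

  -- arithmetic operators coerce strings to numbers:
  print("10" + 1)   --> 11
  -- concatenation coerces numbers to strings:
  print(10 .. 1)    --> 101   (the spaces matter; "10..1" would not even lex)
  -- unlike an overloaded "+", neither operator is ambiguous about intent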


To be fair, having to work with 1 based indexing when you're used to 0 would be a frustrating source of off-by-one errors and extra cognitive load


As someone who's used Lua a lot as an embedded language in the VFX industry (The Games industry wasn't the only one that used it for that!), and had to deal with wrapping C++ and Python APIs with Lua (and vice-versa at times!), this is indeed very annoying, especially when tracing through callstacks to work out what's going on.

Eventually you end up in a place where it's beneficial to have converter functions that show up in the call stack frames so that you can keep track of whether the index is in the right "coordinate index system" (for lack of a better term) for the right language.


Oh that’s super interesting, where in the VFX industry is Lua common? I typically deal with Python and maybe Tcl (I do mostly Nuke and pipeline integrations), and I can’t think of a tool that is scripted in Lua. But I’ve never worked with Vega or Shake or what this is/was called


Katana uses LuaJIT quite extensively for user-side OpScripts, and previously both DNeg and MPC (they've largely moved on to newer tech now) had quite a lot of Lua code...


Oh, I had no idea! I don’t have much to do with Katana, I just assumed it was also all python


It used to in older (pre-2.0) versions, but due to Python's GIL (and the fact that Python's quite a bit slower than Lua anyway), it was pretty slow and inefficient using Python with AttributeScripts, so 2.0 moved to Lua with OpScripts...


It absolutely is. For a language whose biggest selling point is embeddability with C/C++, that decision (and I'm being polite) is a headscratcher (along with the other similar source of errors: 0 evaluating to true).


It's the perfect distraction: once you start accepting one-based, everything else that might be not quite to your liking isn't worth talking about. I could easily imagine an alternative timeline where lua was zero-based, but never reached critical mass.


Absolutely so. It’s just one obvious thing from a large set of issues with it.

One can read through the "mystdlib" part of any meaningful Lua-based project to see it. Things you'll likely find there: NIL, poor man's classes, a __newindex proxying wrapper, strict(), an empty dict literal for JSON, "fixed" iteration protocols.

You don’t even have to walk too far, it’s all right there in neovim: https://neovim.io/doc/user/lua.html


Perhaps that's the secret to lua's success: they got the basics so wrong that they can't fall into the trap of chasing the dream of becoming a better language, focusing on becoming a better implementation instead. Or perhaps even more important: not changing at all when it's obviously good enough for the niche it dominates.


This is very valuable! Thank you!


That's an interesting notion but I think that Lua had no competition - it was almost unique in how easy it was to integrate vs the power it gives. Its popularity was inevitable that way.


The language actually started as a data entry notation for scientists.


I don't really blame Lua for that, though. 1-based indexing comes naturally to humans. It's the other languages which are backwards. I get why other languages are backwards from human intuition, I do. But if something has to be declared the bad guy, imo it's the paradigm which is at odds with human intuition which deserves that label, not the paradigm which fits us well.


1-based indexing is not any more natural than 0-based, it’s just that humans started indexing before the number 0 was conceptualized.


https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...

Why numbering should start at zero. -- Dijkstra


Argument by authority.

To me 1-based indexing is natural if you stop pretending that arrays are pointers + index arithmetic. Especially with slicing syntax.

It's one of the things that irked me when switching to Julia from Python but which became just obviously better after I made the switch.

E.g. in Julia `1:3` represents the numbers 1 to 3. `A[1]` is the first element of the array, `A[1:3]` is a slice containing the first to third element. `A[1:3]` and `A[4:end]` partitions the array. (As an aside: `For i in 1:3` gives the number 1, 2, 3.)

The same sentence in python:

`1:3` doesn't make sense on its own. `A[0]` is the first element of the array. `A[0:3]` gives the elements `A[0], A[1]` and `A[2]`. `A[0:3]` and `A[3:]` partition the array.

For Python, which follows Dijkstra for its Slice delimiters, I need to draw a picture for beginners (I feel like the numpy documentation used to have this picture, but not anymore). The Julia option requires you to sometimes type an extra +1 but it's a meaningful +1 ("start at the next element") and even beginners never get this wrong.

That said, it seems to me that for Lua, with its focus on embedding in the C world, 0-based indexing makes more sense.


I admire Dijkstra for many things, but this has always been a weak argument to me. To quote:

"when starting with subscript 1, the subscript range 1 ≤ i < N+1; starting with 0, however, gives the nicer range 0 ≤ i < N"

So it's "nicer", ok! Lua has a numeric for..loop, which doesn't require this kind of range syntax. Looping is x,y,step where x and y are inclusive in the range, i.e. Dijkstra's option (b). Dijkstra doesn't like this because iterating the empty set is awkward. But it's far more natural (if you aren't already used to languages from the 0-indexed lineage) to simply specify the lower and upper bounds of your search.

I actually work a lot with Lua, all the time, alongside other 0-indexed languages such as C and JS. I believe 0 makes sense in C, where arrays are pointers and the subscript is actually an offset. That still doesn't make the 1st item the 0th item.

Between this, and the fact that, regardless of language, I find myself having to add or subtract 1 frequently in different scenarios, I think it's less of a deal than people make it out to be.


In any language, arrays are inherently regions of memory and indexes are -- whether they start at 0 or 1 -- offsets into that region. When you implement more complicated algorithms in any language, whether or not it has pointers or how arrays are syntactically manipulated, you start having to do mathematical operations on both indexes and on ranges of index, and it feels really important to make these situations easier.

If you then even consider the simple case of nested arrays, I think it becomes really difficult to defend 1-based indexing as being cognitively easier to manipulate, as the unit of "index" doesn't naturally map to a counting number like that... if you use 0-based indexes, all of the math is simple, whereas with 1-based you have to rebalance your 1s depending on "how many" indexes your compound unit now represents.


And the reason to dismiss c) and d) is so that the difference between the delimiters is the length. That's not exactly profound either.

If the word for word same argument was made by an anonymous blogger no one would even consider citing this as a definitive argument that ends the discussion.


Associative arrays are also "nicer" with 1-based indexing:

  t = {}
  t[#t+1] = x            --> t[1] DONE!
  -- vs. the 0-based workaround:
  t[#t == 0 and 0 or #t+1] = x
Now, in order to fix this (or achieve the same behavior as #t+1) in a 0-based scheme, the length operator would have to return -1 for an empty table, which would be ridiculous. It's empty, and we have a perfect representation of emptiness in math: 0.

This is true in awk as well; nobody ever whines about awk "arrays" not being 0-based.


Especially when one of the language's main purposes is to be embedded in applications written in other languages (which are predominantly zero based) - and so you tend to have a lot of back-and-forth integration between these two styles that can get confusing. Even from the C API side, for example, the Lua stack is one-based but addressed exclusively from the host language which is likely zero-based.


Don't forget that not equals is ~=, the horror.

The real gripes should be globals by default and ... nothing. Lua is wonderful.


"Don't forget that not equals is ~=, the horror."

I get you are taking the piss, but ~= is just as logical a symbol for "not equals" as != is, if you've been exposed to some math(s) as well. And ... well, ! means factorial, doesn't it?

Syntax is syntax and so is vocabulary! In the end you copy and paste off of Stack Exchange and all is golden 8)


! is commonly used as the unary not operator, so "a != b" makes sense as a shortcut for "!(a == b)". a not equals b.


But in Lua, the unary not is written as “not”.
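
Spelled out (a trivial sketch, just to show both forms are equivalent):

  local a, b = 1, 2
  print(a ~= b)           --> true  ("not equals")
  print(not (a == b))     --> true  (the same test written with the "not" keyword)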


For a language apparently inspired by Wirth, one would have expected <> (greater-or-lesser-than). But the real horror, to me, is Windows not letting one disable the so~called "dead keys" easily.


I'm more familiar with CSS than I am with Lua. The syntax for the former has a very different meaning[1].

  [attr~=value]
  Represents elements with an attribute name of attr whose value is a whitespace-separated list of words, one of which is exactly value.

[1] https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_s...


It's _ENV by default, it just defaults to _G.


Yeah, 36 years of Unicode and it's still not ‘≠’.


Being in Unicode != being on a keyboard.


It is funny, isn't it? I always wonder how the language would be perceived had they gone with zero based indexing from the start.

I'm a big fan of Lua, including for the reasons you mention. I suspect the reason this one thing is always brought up is twofold: it's easy to notice, and it's very rare these days outside of Lua (if you consider VB.NET to be a legacy language, anyway). Other criticisms take more effort to communicate, and you can throw a rock and hit ten other languages with the same or similar issues.


>Makes Brendan Eich look like a total clown in comparison.

To be fair, Brendan Eich was making a scripting language for the 90's web. It isn't his fault Silicon Valley decided that language needed to become the Ur-language to replace all application development in the future.


Most of the blame should go to Netscape management. They didn't give Eich much time, then burst in before he was done and made him copy a bunch of things from Java. (The new language, codenamed "Mocha" internally, was first publicly announced as "LiveScript", and then Sun threw a bunch of money at Netscape.)

IIRC, Eich was quite influenced by Python's design. I wish he'd just used Lua - would likely have saved a lot of pain. (Although, all that said, I have no idea what Lua looked like in 1994, and how much of its design has changed since then.)


https://news.ycombinator.com/item?id=1905155

If you don't know what Lua was like then, don't wish that I'd "just used Lua".

Other issues include Netscape target system support, "make it look like Java" orders from above, without which it wouldn't have happened, and more.


Oh hi Yoz! LTNS! Hi Brendan!

It sounds like you're saying Yoz got the sequence of events wrong, and that MILLJ was a necessary part of getting scripting in the browser? I sort of had the impression that the reason they hired you in the first place was that they wanted scripting in the browser, but I wasn't there.

I don't think Lua was designed to enforce a security boundary between the user and the programmer, which was a pretty unusual requirement, and very tricky to retrofit. However, contrary to what you say in that comment, I don't think Lua's target system support or evolutionary path would have been a problem. The Lua runtime wasn't (and isn't) OS-dependent, and it didn't evolve rapidly.

But finding that out would have taken time, and time was extremely precious right then. Also, Lua wasn't open-source yet. (See https://compilers.iecc.com/comparch/article/94-07-051.) And it didn't look like Java. So Lua had two fatal flaws, even apart from the opportunity cost of digging into it to see if it was suitable. Three if you count the security role thing.


Hi Kragen, hope you are well.

Yes, the Sun/Netscape Java deal included MILLJ orders from on high, and thereby wrecked any Scheme, HyperTalk, Logo, or Self syntax for what became JS.

Lua differed a lot (so did Python) back in 1995. Any existing language, ignoring the security probs, would be flash-frozen and (at best) slowly and spasmodically updated by something like the ECMA TC39 TG1 group, a perma-fork from 1995 on.


Well, I'm not dead yet! Looking for work, which is harder since I'm in Argentina.

Flash-freezing Lua might not have been so bad; that's basically what every project using Lua does anyway. And by 01995 it was open source.

In case anyone is interested, here's a test function from the Lua 2.1 release (February 01995):

    function savevar (n,v)
     if v == nil then return end;
     if type(v) == "number" then print(n.."="..v) return end
     if type(v) == "string" then print(n.."='"..v.."'") return end
     if type(v) == "table" then
       if v.__visited__ ~= nil then
         print(n .. "=" .. v.__visited__);
       else
        print(n.."=@()")
        v.__visited__ = n;
        local r,f;
        r,f = next(v,nil);
        while r ~= nil do
          if r ~= "__visited__" then
            if type(r) == 'string' then
              savevar(n.."['"..r.."']",f)
            else
              savevar(n.."["..r.."]",f)
            end
          end
          r,f = next(v,r)
        end
       end
     end
    end
It wouldn't have been suitable in some other ways. For example, in versions of Lua since 4.0 (released in 02000), most Lua API functions take a lua_State* as their first argument, so that you can have multiple Lua interpreters active in the same process. All earlier versions of Lua stored the interpreter state in static variables, so you could only have one Lua interpreter per process, clearly a nonstarter for the JavaScript use case.

The Lua version history https://www.lua.org/versions.html gives some indication of what a hypothetical Sketnape Inc. would have been missing out on by embedding Lua instead of JavaScript. Did JavaScript have lexical scoping with full (implicit) closures from the beginning? Because I remember being very pleasantly surprised to discover that it did when I tried it in 02000, and Lua didn't get that feature until Lua 5.0 in 02003.


> It isn't his fault Silicon Valley decided that language needed to become the Ur-language to replace all application development in the future.

Which remains one of the most baffling decisions of all time, even to this day. Javascript is unpleasant to work with in the browser, the place it was designed for. It is utterly beyond me why anyone would go out of their way to use it in contexts where there are countless better languages available for the job. At least in the browser you pretty much have to use JS, so there's a good reason to tolerate it. Not so outside of the browser.


Wasn't the language he was making for the web Scheme?


No, Scheme was defined in 1975.


So? That doesn't stop Brendan Eich from putting it in a web browser 20 years later.


No, it does not. I see I misunderstood your q.


> To be fair, Brendan Eich was making a scripting language for the 90's web.

He was, and he doesn't deserve the full blame for being bad at designing a language when that wasn't his prior job or field of specialization.

But Lua is older so there's this element of "it didn't need to be this bad, he just fucked up" (And Eich being a jerk makes it amusing to pour some salt on that wound. Everyone understands it's not entirely serious.)


"Silicon Valley" is not an actor (human or organization of humans) that decided any such thing. This is like saying a virus decides to infect a host. JS got on first, and that meant it stuck. After getting on first, switching costs and sunk costs (fallacy or not) kept it going.

The pressure to evolve JS in a more fair-play standards setting rose and fell as browser competition rose and fell, because browser vendors compete for developers as lead users and promoters. Before competition came back, a leading or upstart browser could and did innovate ahead of the last JS standard. IE did this with DHTML mostly outside the core language, which MS helped standardize at the same time. I did it in Mozilla's engine in the late '90s, implementing things that made it into ES3, ES5, and ES6 (Array extras, getters and setters, more).

But the evolutionary regime everyone operated in didn't "decide" anything. There was and is no "Silicon Valley" entity calling such shots.


> "Silicon Valley" is not an actor (human or organization of humans) that decided any such thing.

Oh come on, you understand full well that they're referring to the wider SV business/software development "ecosystem".

Which is absolutely to blame for javascript becoming the default language for full-stack development, and the resulting JS-ecosystem being a dysfunctional shitshow.

Most of this new JS-ecosystem was built by venture capital startups & tech giants obsessed with deploying quickly, with near-total disregard for actually building something robustly functional and sustainable.

e.g. React as a framework does not make sense in the real world. It is simply too slow on the median device.

It does, however, make sense in the world of the Venture Capital startup. Where you don't need users to be able to actually use your app/website well. You only need that app/website to exist ASAP so you can collect the next round of investment.


Oh come on yourself.

Companies including Bloomberg and Microsoft (neither in nor a part of Silicon Valley), also big to small companies all over the world, built on JS once Moore’s Law and browser tech combined to make Oddpost, and then gmail, feasible.

While the Web 2.0 foundations were being laid by indie devs, Yahoo!, Google, others in and out of the valley, most valley bigcos were building “RIAs” on Java, then Flash. JS did not get some valley-wide endorsement early or all at once.

While there was no command economy leader or bureaucracy to dictate “JS got on first but it is better to replace it with [VBScript, likeliest candidate]”, Microsoft did try a two-step approach after reacting to and reverse-engineering JS as “JScript”.

They also created VBS alongside JS, worked to promote it too (its most used sites were MS sites), but JS got on first, so MS was too late even by IE3 era, and IE3 was not competitive vs. Netscape or tied to Windows. IE4 was better than Netscape 3 or tardy, buggy 4 on Windows; and most important, it was tied. For this tying, MS was convicted in _U.S. v. Microsoft_ of abusing its OS monopoly.

Think of JS as evolution in action. A 2024-era fable about the Silly Valley cartel picking JS early or coherently may make you feel good, but it’s false.


I don't think JavaScript will replace all application development in the future. WebAssembly will displace JavaScript. With WebAssembly you can use whatever language you like and achieve higher performance than JavaScript.


Zero-based indexing is also the one thing every beginner course has to hammer into people, as it's so non-intuitive until you know about pointers.


Honestly, it might almost act as a "honeypot" to give people a convenient target to complain about, which makes it easier for the rest of the language to get viewed as a whole rather than nitpicked. Sometimes I think people like to have at least one negative thing to say about something new they learn, whether it's to show that they understand enough to be able to find something or it's to be "cool" enough not to just like everything.


that would be a galaxy brain move on the part of lua.


To be clear, I'm not claiming that I think this was the original intent for implementing array indexes like this; I'm just theorizing that this might explain why this ends up being such a fixation when it comes to discussing downsides of Lua.


if you fix the one based indexing you should call yours KT@ or Kt`


> I'm thinking about designing an embedded scripting language that addresses some of these issues...

https://xkcd.com/927/

;)


To add to this, there are also official Microsoft extensions to VSCode which add absurdly useful capabilities behind subtle paywalls. For example, the C# extension is actually governed by the Visual Studio license terms and requires a paid VS subscription if your organization does not qualify for Visual Studio Community Edition.

I'm not totally sold on embrace-extend-extinguish here, but learning about this case was eyebrow-raising for me.


The C# extension is MIT, even though the vsdbg debugger it ships with is closed-source. There's a fork that replaces it with netcoredbg, which is open.

C# DevKit is, however, based on the VS license. It builds on top of the base C# extension; the core features, like the debugger, language server, auto-complete and auto-fixer integration, etc., are in the base extension.


When I used to work in defense contracting, this is precisely what we (the contractor) did. We would buy up all available stock of any difficult-to-replace parts (often specific SBCs) when the manufacturer announced end of life.


There's a market in hardware that conforms to the interface and ABI specs to emulate the boot and upgrade devices for German tanks, or some other hardware. SD card or USB behind, giant milspec plug to the fore.


Doesn’t it rot?


Given that it's definitionally military-grade hardware, and very high value-density and therefore easy to store very securely, I doubt it.

