Hacker News | ddragon's comments

That would be simplifying too much. There are a lot of external factors behind something becoming popular: timing, luck, support from big enterprises and leading colleges, inertia, and sunk costs. You could argue those factors do make Python better regardless of the language itself, but that poster was talking about a hypothetical scenario in which those factors were won by a language better designed for these tasks. Would you still use Python, purely for its language design as it is today, if most of the libraries, docs, and support were elsewhere?


In a video aiming to explain the popularity of OOP, Richard Feldman states that Python had a small community for decades, and that its rise in popularity was slow and steady, which is not true of many other languages like Ruby. That's corroborated by the graph in this article. [0]

Given Python's slow and steady increase, timing and luck don't seem like good explanations for its popularity. The others are debatable, though.

[0] https://flatironschool.com/blog/python-popularity-the-rise-o...


I fail to see how timing and luck aren't factors. It's about more than how popular the language was when it launched: many of the languages that are allegedly better had a strong timing disadvantage, either not existing or not being mature when the data science boom occurred (including equivalents to libraries like numpy, scipy, matplotlib and theano), which left Python as the right option at the time. Any language that missed that window must now play catch-up with a fraction of the resources, completely unproven in the market.

Luck is harder to quantify, but at the very least competitors like Common Lisp didn't have much of it.


Also, Python is where the "Rust Evangelism Strike Force" type stuff really started being a massive phenomenon.

Language wars have been around forever, of course. But for a few years around 2010-ish, practically every thread would have someone bringing up Python. If the post was about a tool, the thing should really have been written in Python. If it was a how-to tutorial for a feature in another language, there would be a subthread about how Python undeniably does it better. Not occasionally, in a thread here and there: it was to the extent that you couldn't miss it even if you wanted to. That kind of marketing has proven effective in a forum like this, which is why it's being replicated by other languages now.


I'm quite interested in the interactive thread pool (although I assume it works based on the convention of everyone playing nice). Julia has a powerful parallelism model, but it couldn't be applied to responsive GUI and web frameworks that require low latency, so it would be nice if, for example, the tasks handling HTTP requests could focus on serving them as fast as possible while the background worker threads dealing with larger computations use the full speed of the Julia language without being constantly interrupted.


I didn't go deep into Julia's multithreading, but what he is saying is that Julia uses M:N threading (I think nowadays, if you don't specify otherwise at startup, it will just use one kernel thread per CPU thread), which is the same model as the language I did most of my distributed programming in (Elixir/Erlang), and as far as I know the same as Go.

Having one kernel thread per CPU thread means your program can use all available CPU threads at the same time (so you get all the parallelism the machine offers), and having a language-level scheduler on each thread means you can create a new concurrent execution with minimal overhead (no system call needed). That gives you lightweight/green threading similar to what Python allows, except automatically distributed by the language across all kernel/CPU threads. In Elixir this means you can create millions of processes even though the OS only sees one thread per logical CPU thread, and I never felt limited by this abstraction compared to multiprocessing. Of course, Julia is nowhere near as mature, and maybe never will be, since features like preemptive scheduling and parallel garbage collection are easier to implement in a language with only immutable types. Still, it seems to be moving along: in Julia 1.7, tasks can migrate between kernel threads, solving the issue mentioned in the discussion you linked.
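The lightweight-task half of this can be sketched even in Python, with one big caveat: asyncio multiplexes tasks onto a single thread, not M:N across kernel threads the way Julia, Go and the BEAM do. The only point it illustrates is that language-level tasks are cheap enough to spawn by the tens of thousands (all names here are my own):

```python
import asyncio

async def worker(i: int) -> int:
    # A trivially small unit of concurrent work.
    await asyncio.sleep(0)
    return i

async def main() -> int:
    # Spawning 50,000 tasks is cheap: no OS thread and no system call per
    # task, the language runtime just multiplexes them on its scheduler.
    tasks = [asyncio.create_task(worker(i)) for i in range(50_000)]
    results = await asyncio.gather(*tasks)
    return sum(results)

total = asyncio.run(main())
print(total)
```

Trying the same with 50,000 OS threads would exhaust memory on most machines, which is the whole argument for green threading.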


But that scheme is the same as green threading and has the same faults. Start Julia with one system thread. Run one infinite loop in one green thread and another in a second green thread. One of the loops will run while the other waits until the first completes (which it never does). This is inferior to Java and C#, which both use system threads by default, allowing both infinite loops to run. Erlang/Elixir has the same problem: you can run thousands of green threads, but if one of them is stupid enough to call a blocking C function, then all the others have to wait.
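The starvation being described is easy to reproduce in any cooperative scheduler. Here's a toy sketch using Python's asyncio (my own example, with the "infinite" loop capped so it terminates): a coroutine that never awaits monopolizes the single-threaded event loop, and the other task cannot run until it finishes.

```python
import asyncio
import time

order = []

async def busy():
    # Stands in for the infinite loop: it never awaits, so the
    # cooperative scheduler can never switch away from it.
    t0 = time.monotonic()
    while time.monotonic() - t0 < 0.2:
        pass
    order.append("busy done")

async def polite():
    await asyncio.sleep(0)  # yields back to the scheduler immediately
    order.append("polite done")

async def main():
    await asyncio.gather(busy(), polite())

asyncio.run(main())
print(order)  # busy() runs to completion before polite() ever gets a turn
```

With a real infinite loop, `polite()` would simply never run, which is the fault being pointed out.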


> Start Julia with one system thread. Run one infinite loop in one green thread and another in another green thread. One of the loops will run while the other will wait until the other completes (which it never does). This is inferior

If you want Julia to use multiple system threads, why are you suggesting one not use system threads for this test? All you have to do is start Julia with multiple threads and it'll use those threads for your infinite loops.


That issue actually doesn't happen with regular Elixir code. Since it's immutable and stateless, the scheduler doesn't wait for a process to voluntarily yield (it's always safe to switch, so it just gives each process a fixed slice of time, which makes it very low latency and very reliable, though not very efficient at any individual task due to all the switching). Calling other languages from Erlang/Elixir creates dirty processes that can't be scheduled this way and may cause those issues.

Since I didn't use dirty processes in Elixir, I forgot about this obvious issue you pointed out, which in a mutable language like Julia can happen in every thread. But that's not something that limits the expressiveness of the model; it's something that requires consideration to avoid while programming, plus language-level mechanisms to protect the thread (at the very least the ability to define timeouts that can throw an exception in any spawned task), or maybe a future framework on top of it that handles this in a safer way (something like Akka). I can only hope Julia achieves the full potential of its multithreading model.


In my experience with a Rift S, even though the Oculus Touch controllers also have gyroscopes and accelerometers, those only help for a few seconds at most when the controllers leave the camera's view. The sensors are just not accurate enough (I know little about the details, but accelerometers track the second derivative of position, so any small error accumulates fast once you integrate back to position), and you don't want your hand wandering all over the place when you're trying to interact with things in VR. That's why, at least for now, you need to measure position directly for it to work, such as with the camera/LED setups that are most popular with VR headsets and controllers (and even things like the PS Move controller).
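To see why the drift builds up so quickly, here's a toy simulation (the bias and sample rate are made-up illustrative numbers, not real IMU specs): a tiny constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time.

```python
dt = 0.001   # 1 kHz sample rate (illustrative)
bias = 0.01  # constant accelerometer bias in m/s^2 (illustrative)

velocity = 0.0
position = 0.0
drift = {}
for step in range(1, 10_001):   # simulate 10 seconds of dead reckoning
    velocity += bias * dt       # first integration: acceleration -> velocity
    position += velocity * dt   # second integration: velocity -> position
    if step in (1_000, 10_000):
        drift[step] = position

# ~5 mm of error after 1 s grows to ~0.5 m after 10 s: quadratic growth.
print(drift[1_000], drift[10_000])
```

Real tracking fuses the IMU with a direct position measurement (cameras, lighthouses) precisely to cancel this accumulating error.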


Play Splatoon 2, they nailed motion controls. Every time I read people saying they’re not accurate enough I get confused.


I mean, I had a Vita, and the gyroscope control was more accurate than the stick for shooters, but that's because I'd naturally adjust if it overshot (if I aim too high, I immediately push slightly down in a feedback loop; what really matters here is precision, not accuracy, and I can even adjust the sensitivity to my preference). That feedback loop with the user doesn't work well in VR. If my hand overshoots, I have no means of resetting the position; I can only compensate, and it's extremely uncomfortable when you feel your hand at position x, look at it, and it's at position y, with that x-to-y mapping changing over time (and of course it's even worse when your point of view doesn't match your head movements). There are other issues as well: how do you get the correct initial position? Gyroscopes/accelerometers only measure movement; they can't know where things start (just as for jogging you need a GPS to get an actual position, you need a camera/laser sensor for current VR). For gyro aiming in traditional gaming you usually use the stick to maintain a solid reference position, which is not possible in VR either, unless you force the user to hold a perfect pose at the start of every level after inputting arm length and height, which would get annoying quickly if you need to reset frequently.

And finally, your example (Splatoon 2) only needs to compute 2 degrees of freedom of movement (left-right rotation, or yaw, and down-up rotation, or pitch, since roll isn't relevant with a dot target), while VR systems depend on 6 degrees of freedom (yaw, pitch, roll, elevation, strafing and surging), and all of these for at least 3 devices at the same time: your head, left hand and right hand. Unfortunately, controls in VR are quite complicated, and accelerometers, gyroscopes (and the magnetometers VR systems also use to keep a reference to the floor) are simply insufficient (though still necessary, since the positional sensors can't track at all times due to occasional occlusion, such as one hand passing over the other or leaving the tracking area). That's why the same sensors found in the Switch are used in every VR headset and controller, in addition to even more sensors and algorithms.

EDIT: the camera system also helps a lot with defining play boundaries in the room and letting you quickly see if you accidentally leave them. I already punched my monitor once, and that's with a barrier that always becomes visible when I approach something in my room.


Way too long to respond to all of it so I’ll just do some highlights. I covered resetting center again. This is a problem for all gyro controllers, not just VR. Splatoon 2 does this great.

Adding 3 additional axes changes nothing. Nintendo didn't do it because requiring that is very niche. It costs pennies more to get a 6DOF gyro vs a 3DOF. The question is the need. Do you need to rotate the yaw of your hand? Nope.

So my statements stand. The VR folks seem to be on a “we’re more superior than thou” kick with gyro controls.


>6DOF gyro

A gyroscope detects orientation/angular velocity (spinning); the sensor that adds the other degrees of freedom, the accelerometer, is already there in most modern controllers and smartphones. The issue is still accuracy, I'm afraid.

>Do you need to rotate the yaw of your hand? Nope.

I'd certainly enjoy opening doors and making a simple goodbye gesture in VR.


You are confusing precision with accuracy


No I’m not. You do not need ”pixel” perfect accuracy, or precision. Play the game and find out. This is why I’m confused as to why people think even in an FPS the gyro controls need to be accurate enough to perform surgery.

They also complained about discomfort when resetting center on the gyro control. Something else Splatoon 2 nailed gracefully.


I don't think anyone is developing packages in Pluto/Jupyter, so I wouldn't worry about that. The most common method should be using an editor like VSCode (which has some linting capabilities) with an open REPL and Revise [1]. Every time you save any of your files (with some known restrictions), Revise automatically and incrementally updates the state of your application in the REPL, letting you probe your code whenever you want (with tons of introspection methods, up to interactively inspecting the native code being generated), and since you never leave the session you only face the compilation latency once. I end up preferring this workflow for experimenting and data science, since it retains the structure and tooling of an editor along with the ability to interact with my application (and I really miss it when I'm writing Python applications), but of course everyone has a preferred workflow, and it'd be nice if Julia supported more of them as well.
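For readers coming from Python, the closest (much cruder) analog to this workflow is reloading an edited module into a live session. This is just a sketch with made-up file and function names; Revise does the equivalent automatically and incrementally on every save, with no manual reload call:

```python
import importlib
import os
import pathlib
import sys
import tempfile

# Create a throwaway module on disk, standing in for your package source.
moddir = tempfile.mkdtemp()
sys.path.insert(0, moddir)
src = pathlib.Path(moddir) / "scratch.py"

src.write_text("def answer():\n    return 1\n")
import scratch
first = scratch.answer()

# "Edit and save the file" in your editor, then refresh the live session:
src.write_text("def answer():\n    return 2\n")
os.utime(src, (src.stat().st_atime, src.stat().st_mtime + 10))  # bump mtime
importlib.invalidate_caches()
importlib.reload(scratch)
second = scratch.answer()

print(first, second)  # the session picked up the new definition: 1 2
```

The difference in practice is that `importlib.reload` rebuilds the whole module and can leave stale references behind, while Revise patches individual methods in place.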

[1] https://github.com/timholy/Revise.jl


While that might happen (and would probably cause a method redefinition), there is an important convention that helps prevent it: your package must own either the function or at least one of the types used in the arguments, otherwise you're practicing type piracy [1]. I've seen automated scripts that can detect type piracy, so hopefully that could become part of a linting toolset eventually, since not everyone may be aware of the rule. At the very least, popular packages shouldn't commit it, or at least not in a way that can cause bugs (and if any package does so unintentionally, it's probably worth opening an issue).

[1] https://docs.julialang.org/en/v1/manual/style-guide/#Avoid-t...


This video does a good job of explaining the idea behind multiple dispatch in Julia, if you have time:

https://www.youtube.com/watch?v=kc9HwsxE1OY


Languages with multiple dispatch aren't rare. But having it as the core language paradigm, combined with a compiler capable of completely resolving method calls at compile time (and therefore able to remove all runtime costs of the dispatch) and a community that fully embraces the idea of composable ecosystems, is something unique to Julia. I don't think anyone has scaled multiple dispatch to the level of Julia's ecosystem before.
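As a rough illustration of what multiple dispatch means mechanically, here is a hand-rolled sketch in Python (the names `register` and `add` are my own; Julia does this natively and resolves the lookup at compile time, whereas this is a runtime dictionary lookup):

```python
# Method table: maps a tuple of argument types to an implementation.
registry = {}

def register(*types):
    def deco(fn):
        registry[types] = fn
        return fn
    return deco

def add(a, b):
    # Dispatch on the concrete types of *all* arguments, not just the first
    # (which is what single dispatch, i.e. ordinary OOP methods, would do).
    fn = registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

@register(int, int)
def _(a, b):
    return a + b

@register(list, list)
def _(a, b):
    # An elementwise sum: a separate method for a separate pair of types.
    return [x + y for x, y in zip(a, b)]

print(add(1, 2))            # 3
print(add([1, 2], [3, 4]))  # [4, 6]
```

Any package can add a new `(types) -> implementation` entry without touching the others, which is the composability the comment describes; Julia additionally removes the lookup cost by specializing callers.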


Depends, most use SBCL instead of paying for Allegro or LispWorks, so the perception is skewed.


Returning a Union of Int or Float isn't that useful, but the point is that Julia is a dynamic language, and if there were no implicit union type it would have to box the return value into an "Any" container, which actually slows the program down (the union here causes callers to be compiled with two optimized paths, one for Int and one for Float, instead of a generic dynamic one for Any). If it raised a compile error for every function that can return more than one type, the language would be even more restrictive than some static languages.

Though for intentional uses of unions, most of the time I use a union of a success type and an error type, a union of a type and null, or a union of a type and missing. They could all be special cases, but I don't see the point of it not being a single mechanism.
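A Python sketch of the success-or-error pattern described above (the function name and behavior are my own invention): the return type is a union of the success value and an error value, and the caller decides how to handle each case.

```python
from typing import Union

def parse_ratio(text: str) -> Union[float, ValueError]:
    # Returns either the parsed value (success) or an error object,
    # a union of a success type and an error type.
    try:
        num, den = text.split("/")
        return int(num) / int(den)
    except (ValueError, ZeroDivisionError):
        return ValueError(f"bad ratio: {text!r}")

ok = parse_ratio("3/4")
bad = parse_ratio("3/0")
print(ok, type(bad).__name__)  # 0.75 ValueError
```

In Julia the analogous `Union{Float64, ErrorType}` return would be split into specialized code paths by the compiler rather than boxed, which is the performance point made above.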


Objects aren't bags of functions, though (they have state, inheritance, initializers/destructors, interfaces/abstract classes, the class-vs-object distinction, and tons of other concepts and patterns), and any complex program can become a large hierarchical tree of classes and graph of objects that goes way beyond a simple bag of functions. Even modules that are almost literally bags of functions quickly scale into something more complex.

The point is that simple concepts are nice to explain to a beginner, but what actually built your intuition for using objects was the years and years of learning and experiencing their benefits and pitfalls. With multiple dispatch it's the same, but since few languages use it (and even fewer, if any, push it everywhere like Julia does), most people haven't gone through this process.

For me, when I'm using a function, I just consider it a self-contained abstraction over the arguments. For example, there are hundreds of implementations of sum (+), which in practice I ignore; I only think about the concept of addition no matter what arguments I give, and I trust the compiler/library to find the optimal implementation of the concept or fail (meaning I have to write one myself). If I'm writing a method (or function), I consider the arguments to be whatever acts the way I need so that I can implement the concept on them. For example, if I'm writing a tensor sum, I just consider the arguments to be n-dimensional iterable arrays, implement under that assumption, and declare to the compiler when my method is applicable, without having to care about all the other implementations of sum; if anyone needs a scalar sum, that person can implement it, and through collaboration we all expand the concept of sum.

And the fact that whoever uses a function can abstract away the implementation, and whoever writes a function can abstract away the whole extension of the arguments (through both duck typing and the compiler choosing the correct implementation of the concept), means everything plays along fine without either side having to deal with the details of the other.

