One of the things I like about Mojolicious (a pretty full-featured async web framework in Perl in an incredibly small amount of code) is that it has a Lite variant, which allows building your whole app (routes, app functions, helper functions, models, data, etc.) in a single file. Once you've gotten it off the ground and the one small file starts to become one big file and it starts feeling hairy, you switch to the full version, and break out all the pieces into the usual directory layout (which can be done mostly automatically).
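For anyone who hasn't seen it, a single-file Mojolicious::Lite app looks roughly like this (a minimal sketch; the route and helper names are mine, not from any real app):

```perl
#!/usr/bin/env perl
use Mojolicious::Lite;   # enables strict and warnings implicitly

# A helper and a route, side by side in one file; names here are
# purely illustrative.
helper greeting => sub {
    my ($c, $name) = @_;
    return "Hello, $name!";
};

get '/hello/:name' => sub {
    my $c = shift;
    $c->render( text => $c->greeting( $c->stash('name') ) );
};

app->start;
```

Run it with `morbo app.pl` during development; when the file gets hairy, the `mojo generate app` command scaffolds the full directory layout to grow into.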
With regard to Wapp and some of the comments here disparaging Tcl...I've often said Tcl gets much more hate than it deserves. For a lot of tasks, it's fine. I wouldn't pick it for anything, but I understand why some folks still do. And, I did build my current company's first website with Tcl (OpenACS+AOLServer) more than a decade ago. It was more enjoyable to work with than all of the PHP CMS-based sites I've built for the company since then. I wouldn't rule it out, if there were some project I wanted to use that happened to be written in Tcl.
I used to love Perl until the world told me to stop because the syntax was too noisy. Now I'm embedding JavaScript expressions in CSS in JSX in ES6 and being told this is an evolved state.
Agreed. Of all the things one could complain about in Perl, "noisy" syntax is among the dumbest and most superficial (but, among the most common).
An informed rant about Perl might include two function calls that behave differently, one of them almost certainly containing a bug, because function arguments are always flattened into a single list.
Assume some_function expects @new_array and then $new_scalar, and vice versa for some_function2. This trips up literally every new Perl developer (our UI guy, who is mostly a PHP and JS dev, ran into it just a few weeks ago).
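A sketch of the pitfall (the variable and function names are my own illustration):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

my @new_array  = (1, 2, 3);
my $new_scalar = 'x';

# Perl flattens all arguments into one list (@_), so the callee
# cannot tell where the array ended and the scalar began.
sub some_function {
    my (@args) = @_;
    return scalar @args;          # 4 elements: 1, 2, 3, 'x'
}

# Unpacking "an array, then a scalar" does not do what it looks like:
# @got slurps *everything*, leaving $got_scalar forever undef.
sub some_function2 {
    my (@got, $got_scalar) = @_;
    return defined $got_scalar;   # always false -- the likely bug
}

print some_function(@new_array, $new_scalar), "\n";                       # prints 4
print some_function2(@new_array, $new_scalar) ? "defined\n" : "undef\n";  # prints undef
```

The usual fixes are passing a reference (`some_function(\@new_array, $new_scalar)`) or putting the scalar parameters before the slurpy array.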
Lack of function signatures is another good complaint about Perl. It's still not realistic to use them in code that ships for old Perl versions (like for deployment on leading server distributions).
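For reference, this is what the core feature looks like on Perl 5.20+ (it stayed marked experimental until 5.36), which is exactly why it's off the table for old targets:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature 'signatures';
no warnings 'experimental::signatures';   # needed before 5.36

# Arity is checked at call time: add(1) or add(1, 2, 3) dies,
# instead of silently shuffling @_ like the classic idiom does.
sub add ($x, $y) {
    return $x + $y;
}

print add(2, 3), "\n";   # prints 5
```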
There are others, but, yeah. I also hate how simplistic criticism of Perl is these days. It's knee-jerk and poorly informed in most cases. That doesn't mean I'm recommending Perl for everyone, or that I'd pick it for every new project, just that I wish the dialog around it weren't so stupid so much of the time.
I was excited about Perl6 until I ran a prime number crunching benchmark which ran slower than everything else. I'll stick with Perl5 or move to Ruby. We too use Mojolicious and are very happy with it.
Recently, using Rakudo Star 2017.10. The algorithm adds numbers of the form i+j+2ij to a hash of non-primes, then takes each k not in that hash and pushes 2k+1 onto an array of primes. These are basic operations, but Perl 6 simply takes minutes to find the primes up to 10k. Even Tcl, which lags behind everything except Perl 6, does it an order of magnitude faster. If you increase the limit to 100k, you could probably go have a pizza in town and it would still be crunching when you got back. So it's definitely not the JIT compilation stage that makes it so slow.
As the author of this recent post https://perl6advent.wordpress.com/2017/12/16/ I'd be really interested in seeing your code for primes. You might enjoy the output of `perl6 --profile` to see if there is any glaringly obvious place where it's being slow. You get a nice interactive HTML report.
I find it kind of funny that primes are constantly used as a first test of Perl 6, despite it being one of the only languages with an efficient built-in .is-prime method on integer types >;P I have a more contemporary version of Rakudo built locally too, so I can check whether this is something that's already gone away, if you don't mind throwing your code in a gist/pastebin somewhere?
This is Rakudo version 2017.10 built on MoarVM version 2017.10 implementing Perl 6.c. It's built with rakudobrew. Results are consistent across several platforms; I've also built on an iMac with OS X 10.1. This one is built on Ubuntu 16.04 x86-64.
I'm not exactly sure I understand the algorithm, but if it is only supposed to generate prime numbers up to 1000, then it appears to erroneously include `999` as a prime number.
FWIW, if I were just interested in the prime numbers up to 1000, I would write that like this:
(1..1000).grep( *.is-prime )
Which for me executes within noise of your Perl 5 algorithm. For larger values on multi-CPU machines, I would write this as:
(1..2500).hyper.grep( *.is-prime )
Around 2500 it becomes faster to `hyper` it, so the work gets automatically distributed over multiple CPUs.
I'm currently researching why your Perl 6 algorithm is so slow.
The algorithm is an implementation of the Sieve of Sundaram. 999 might have slipped in through an off-by-one error. Thanks for the suggestion of using is-prime, but in order to benchmark multiple languages, I need to run the same thing everywhere.
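For readers following along, a plain Perl 5 Sieve of Sundaram looks roughly like this (my reconstruction for illustration, not the poster's benchmark code); note the `<= $m` bound, which is the off-by-one that would otherwise let 999 through:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Sieve of Sundaram: mark every i + j + 2ij (with j >= i) up to $m;
# each unmarked k then yields the odd prime 2k + 1.
sub sieve_sundaram {
    my ($n) = @_;
    my %composite;
    my $m = int($n / 2) - 1;
    for my $i (1 .. $m) {
        for my $j ($i .. $m) {
            my $p = $i + $j + 2 * $i * $j;
            last if $p > $m;          # p only grows as $j grows
            $composite{$p} = 1;
        }
    }
    my @primes = (2);
    for my $k (1 .. $m) {
        push @primes, 2 * $k + 1 unless $composite{$k};
    }
    return @primes;
}

my @p = sieve_sundaram(1000);
print scalar(@p), " primes up to 1000\n";   # prints "168 primes up to 1000"
```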
Turns out that even though primes are integers, in your Perl 6 version every calculation was done in floating point. And this was caused by the argument in the subroutine call. If you do:
sieve_sundaram(1000)
instead of:
sieve_sundaram(1e3)
then it all of a sudden becomes 4x as fast. In Perl 5 you never know what kind of value you're dealing with. In Perl 6, if you tell it to use floating point, that will infect all subsequent calculations: they will be done in floating point as well. `1e3` is a floating point value; `1000` is an integer in Perl 6.
Also, you seem to have a sub-optimal algorithm: the second `foreach` doesn't need to go from `1..$n`, but can go from `$i..$n` instead. This brings down the runtime of the Perl 5 version of the code to 89 msecs for me.
Since your program is not using BigInt in the Perl 5 version, it is basically using native integers. In Perl 6, all integer calculations are always BigInts, unless you mark them as native. If I adjust your Perl 6 version for this, the runtime goes down from 4671 msecs to 414 msecs for this version:
sub sieve_sundaram(int $n) {
    my %a;
    my int @s = 2;
    my int $m = $n div 2 - 1;
    for 1..$n -> int $i {
        for $i..$n -> int $j {
            my int $p = $i + $j + 2 * $i * $j;
            if $p <= $m {   # <= rather than < avoids the off-by-one that let 999 through
                %a{$p} = True;
            }
        }
    }
    for 1..$m -> int $k {
        if ! %a{$k} {
            my int $q = 2 * $k + 1;
            @s.push($q);
        }
    }
    return @s;
}
sieve_sundaram(1000);
So, about 11x faster than before, and just under 5x slower than the Perl 5 version.
I could further make this idiomatic Perl 6, but the most idiomatic version I've already mentioned: `(1..1000).grep( *.is-prime)`
Yeah, but as I mentioned, it is not realistic to use them for installable software, yet. At least not for my needs; we're building software that needs to be easily installable on the leading server distros; we can't ask a million or two people to upgrade their Perl before installing our software.
So, I'm excited about it, too, but I can't use them until CentOS/RHEL 7 reaches end of life (and that's assuming CentOS/RHEL 8 gets a 5.20+ version of Perl, which isn't an entirely safe assumption). It's easy to blame CentOS/RHEL for this, because they ship an old-as-heck Perl version, but it's also reasonable to question why it took 20+ years for Perl to get function signatures in the core language.
JavaScript was written in 10 days in 1995 by Brendan Eich, with the purpose of being a "glue language that was easy to use by Web designers and part-time programmers to assemble components such as images and plugins, where the code could be written directly in the Web page markup." His first choice would have been a language inspired by Scheme, but unfortunately Netscape forced him into a Java-like syntax for marketing purposes.
https://en.wikipedia.org/wiki/JavaScript
The fact that it has managed to evolve into a decent, if poorly typed language, with some outstanding implementations, while maintaining 99% compatibility with "onmouseover" scripts from the 90s is a testament to its sound design.
CentOS 6 has Perl 5.10.1. CentOS 7 has Perl 5.16. I work on installable software that needs to Just Work on the majority of servers without hassle. So, I don't have sub signatures, yet.
The world is what it is. I just live in it. We, realistically, cannot insist on a new Perl on every system we support (we're talking about a million or two installations). We build tools that are meant to be very easy to install, require minimal CPAN dependencies, etc.
We target 5.10.1. When CentOS 6 is EOL, we will target 5.16, and so on. I don't like it, but I can't make Red Hat ship newer Perl versions. Software Collections has Perl 5.20, which is great for some folks, but it's not a good option for our projects, either.
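In practice that kind of targeting means feature-gating on `$]`, the running interpreter's version number; a sketch (the version floors below match the CentOS targets mentioned, but are otherwise arbitrary):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# $] is the running perl's version as a plain number,
# e.g. 5.010001 for the 5.10.1 that ships with CentOS 6.
if ($] >= 5.020) {
    print "could enable 'signatures' here\n";
}
else {
    print "stick to classic \@_ unpacking\n";
}

# Or refuse to run at all below the floor you actually test against:
die "Perl 5.10.1 or newer required (this is $])\n" if $] < 5.010001;
```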
It's worth noting that Python has the same problem on RHEL/CentOS. Maybe even more pronounced, because some system tools rely on Python, and it can even break them if you change your personal Python to something else (I use pyenv, and I have to remember to change my Python back to the system one when running Gnome Tweak Tool, and the like).
Mojolicious is beyond fantastic! I have a hard time touching anything else, as every step of the way I think "damn this could have been so much easier with mojo..."