That's essentially what axioms are. Whether it's reasonable or not depends on your intuition about both the axiom and its implications.
This is where the axiom of choice is interesting; many people find it intuitively reasonable, but some of its implications are counterintuitive (e.g. Banach-Tarski).
Where a (naive) frequentist might assume, for instance, that after a 90% accurate test comes back positive the hypothesis is likely to be true, a Bayesian would ask how likely it was to be true in the first place; all the test did was make it roughly ten times more likely, which may or may not make it probable.
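To put rough numbers on that (a minimal Python sketch, assuming "90% accurate" means 90% sensitivity and 90% specificity; the priors are made-up example values):

    # Bayes' rule for P(hypothesis | positive test); the sensitivity,
    # specificity and priors below are illustrative assumptions.
    def posterior(prior, sensitivity=0.9, specificity=0.9):
        p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
        return sensitivity * prior / p_positive

    print(posterior(0.01))  # 1% prior  -> ~8.3% after a positive result
    print(posterior(0.50))  # 50% prior -> 90% after the same result

The test multiplies the odds by about nine either way; whether the hypothesis ends up "probable" depends entirely on the prior.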
I would argue that the risk of mistakes should not limit the expressivity of a language, or land a feature on the pile of bad ideas. It is better for users of the language to be aware of the potential pitfalls and use the language appropriately.
That introduces problems too. If you try to use sugar like '+' with an implementation that doesn't support it, you don't get any sort of error. Instead you get a different expression.
Unfortunately, there's an inherent tradeoff between encoding efficiency and error detection. Notice that with the VerbalExpressions it would be trivial to return a useful error message if the 'at_least_one' pattern did not exist.
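To make that concrete, here's a hypothetical fluent builder in that spirit (Python; this is not the real VerbalExpressions API, just a sketch): a misspelled pattern name fails loudly, whereas the equivalent typo inside a terse regex just silently means something else.

    import re

    class PatternBuilder:
        """Toy VerbalExpressions-style builder; the method names are made up."""
        def __init__(self):
            self._parts = []

        def literal(self, text):
            self._parts.append(re.escape(text))
            return self

        def at_least_one(self, text):
            self._parts.append('(?:' + re.escape(text) + ')+')
            return self

        def compile(self):
            return re.compile(''.join(self._parts))

    pattern = PatternBuilder().literal('ab').at_least_one('c').compile()
    print(bool(pattern.match('abccc')))   # True

    # A typo like .at_least_1('c') raises AttributeError immediately;
    # the corresponding typo inside /abc+/ would just quietly match differently.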
Perl 6 regexes attempt to improve upon this situation by making regexes more like a regular programming language. That is, they err on the side of error detection rather than encoding efficiency.
(It also adds features that would be difficult to add to the Perl 5/PCRE regex design.)
For a start, if it didn't support `+`, any attempt to use it would generate a compile-time error, because `+` is not alphanumeric.
(regex is code in Perl 6)
All non-alphanumeric characters are presumed to be metasyntactic, and so must be escaped in some way to match literally.
Arguably the best way is to quote it like a string literal.
(Uses the same domain specific sub-language that the main language uses for string literals)
/ "+" + / # at least one + character
It really is a significant redesign.
/A{2,4}/ # Perl 5/PCRE
/A ** 2..4/ # Perl 6
/A (?:BA){1,3}/x # Perl 5/PCRE
/A [BA] ** 1..3/ # Perl 6: direct translation
/A ** 2..4 % B/ # Perl 6: 2 to 4 A's separated by B
/A (?:BA){1,3} B?/x # Perl 5/PCRE
/A ** 2..4 %% B/ # Perl 6: %% allows trailing separator
/\" [^"]* \"/x # Perl 5/PCRE
/\" <-["]>* \"/ # Perl 6: direct translation
/「"」 ~ 「"」 <-["]>*/ # Perl 6: between two ", match anything else
# (can be used to generate better error messages)
---
# Perl 5
my $foo = qr/foo/;
'abfoo' =~ /ab $foo/x;
# Perl 6
my $foo = /foo/;
'abfoo' ~~ /ab <$foo>/;
# or
my token foo {foo} # treat it as a lexical subroutine
'abfoo' ~~ /ab <&foo>/;
---
# Perl 5
my $foo = 'foo';
'abfoo' =~ /ab \Q$foo\E/x; # treat as string not regex
# Perl 6
my $foo = 'foo';
'abfoo' ~~ /ab $foo/; # that is the default in Perl 6
If you weren't already referencing it, you might enjoy the essay "The Tyranny of Structurelessness", which articulates this exact point: https://news.ycombinator.com/item?id=7409611
Good question! I didn't include too much context in the top-level question to keep the discussion as broad as possible, and hopefully have advice applicable to others as well, but am more than happy to expand a bit here.
It concerns a large multinational in the transport sector. While we have built up a strong digital department, there's a lot of catching up to do, so the more 'batteries included' any given solution is, the better.
On the other hand, it's crucial that we can extend any given tool, as there will undoubtedly be unforeseen or non-default scenarios. For example: we'd love to perform analytics on not just which customers are calling the APIs, but run more detailed queries based on e.g. request parameters, geoIP, or perhaps even User-Agent headers. It is absolutely no problem to have to do this ourselves by running raw queries on the database, and perhaps build our own dashboard around it, but again: if there's something that already covers a lot of these cases, that'd be ideal.
The minimum required functionality is that the tool can operate as an authenticating proxy, only passing on requests once e.g. OAuth2 credentials have been verified. Other security aspects, such as throttling and rate limiting, are also a requirement, as we're dealing with systems that must be protected from unforeseen load.
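For what it's worth, here's a minimal sketch (Python) of the behaviour I mean by "authenticating proxy"; the token check, limits and status codes are placeholder assumptions rather than any particular product's API:

    import time
    from collections import defaultdict, deque

    RATE_LIMIT = 100   # requests per minute per client (example value)
    WINDOW = 60.0      # sliding window, in seconds
    _history = defaultdict(deque)

    def token_is_valid(token):
        # In reality: validate the OAuth2 access token against the issuer.
        return token == "valid-example-token"

    def under_rate_limit(client_id):
        now = time.time()
        seen = _history[client_id]
        while seen and now - seen[0] > WINDOW:
            seen.popleft()
        if len(seen) >= RATE_LIMIT:
            return False
        seen.append(now)
        return True

    def handle(headers, client_id):
        token = headers.get("Authorization", "").removeprefix("Bearer ")
        if not token_is_valid(token):
            return 401   # reject: not authenticated
        if not under_rate_limit(client_id):
            return 429   # reject: over the rate limit
        return 200       # otherwise forward the request to the upstream API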
Nice-to-haves are features such as autogenerated documentation pages, where clients can test the APIs from within their browser. On the other hand: rolling this ourselves using Swagger wouldn't be a problem either.
Research so far has included looking at some open source tools, e.g. Kong[1] from Mashape and apigee[2], reading up on Gartner's magic quadrant on API management, and demos from IBM and CA. Cost of these vendor tools isn't a major concern; lack of modifiability absolutely is. I'm currently leaning towards Kong, but am wondering if others have interesting experiences to share.
https://www.3scale.net/ comes with a lot of batteries included: user login, user dashboard, email handling, even payment if you wish. It's ideal if on the engineering side you just want a simple API call (to 3scale) that returns yes/no for a given API key, and everything else can be configured and designed by a non-engineer. We got something running in two days. It's easy to outgrow 3scale, though. We're moving away from them because we handle millions of requests/day (to save money).
A 24-bit color encoded in base64 takes 4 characters. Your default 20x30 map, multiplied by 4 bytes per cell, is still just 2400 bytes. That fits in a URL quite comfortably, at least as far as browsers are concerned.
If you want to pack tighter, a palettized serialization would work fine here. Since you don't have a gradient tool or anything else that can span colors, you have a bound on the amount of information being put into the image by the fact the human is only clicking so many times and can only put so much effort into it.
Basically, store each 24-bit color at the beginning with a tight mapping (4 base64 characters each), put an end-marker on the palette set, store the map size, then store the palette entry of each of the width x height triangles. If the user uses fewer than 256 colors, the default 20x30 size you bring up will end up as 4 bytes per palette color + 600 bytes, and that's before you do things like bitpacking ("I see the palette only has 8 colors in it, so I only need three bits per cell") or any simple RLE you may be inclined to add.
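Something along these lines (Python sketch; the exact layout is an assumption, with a palette-count byte rather than an end-marker, and no bitpacking or RLE):

    import base64

    def encode_map(palette, width, height, cells):
        """palette: list of (r, g, b); cells: width*height palette indices."""
        out = bytearray()
        out.append(len(palette))        # palette size instead of an end-marker
        for r, g, b in palette:         # each 24-bit colour as 3 raw bytes
            out += bytes((r, g, b))
        out += bytes((width, height))   # map dimensions
        out += bytes(cells)             # one byte per cell (fine for <= 256 colours)
        return base64.urlsafe_b64encode(bytes(out)).decode()

    # A 20x30 map with a 2-colour palette comes out to 812 URL-safe characters.
    print(len(encode_map([(255, 0, 0), (0, 128, 0)], 20, 30, [0, 1] * 300)))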
(I'd paste in the example of a Christmas tree a friend drew, except it's, uh... a lot. But the source code might help as a starting point. The OP's URLs wouldn't be as huge, since their tool has a much coarser grid.)