dashwav's comments

I'm not even old (mid-20s), but I scale the site to 150%. HN is honestly one of the worst sites I have used with regard to default text scaling across resolutions.

OTOH the simplicity of the site allows for very easy browser-based scaling, which means I only really think about how bad the text size is once per machine.


This seems to only really work in languages that allow null variables/timestamps. I wouldn't want to have to compare against the default value of a timestamp.


The author is talking about databases, not programming languages.

I do see a different issue, though: the article indeed seems to make no distinction between an absent value and a default timestamp of 0. That limits your database to more or less "now"; you cannot really store things about the past. Someone might take such a pattern and enshrine it in some kind of library. If someone else then tries to store data from 1970, things can get ... interesting.


That does assume your database only allows unix-style timestamps. MariaDB, for example, has "datetime", supporting dates between the year 1000 and 9999, distinct from null/zero.

Unfortunately, datetime takes 8 bytes vs the 4 for a timestamp.


That is a very neat and smart improvement!


> ...default value of a timestamp.

Assuming the timestamp represents a change of state in a contemporary application, I would expect 1970-01-01 00:00:00Z (UNIX epoch +0 seconds) to be unambiguous enough. (But that's definitely an engineering constraint to maintain awareness of.)


Languages with Option or Maybe types instead of null will also work fine. So it works in every language except Go?


Yep. And I assume you could use Time.IsZero() for Go https://stackoverflow.com/a/36234533
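
Something like this minimal sketch (the Record type and field names are made up for illustration):

  package main

  import (
    "fmt"
    "time"
  )

  type Record struct {
    CreatedAt time.Time
    DeletedAt time.Time // zero value means "never deleted"
  }

  func main() {
    r := Record{CreatedAt: time.Now()}

    // IsZero reports whether the time is the zero instant
    // (January 1, year 1, 00:00:00 UTC), so no nullable pointer
    // is needed to represent "absent".
    if r.DeletedAt.IsZero() {
      fmt.Println("record has not been deleted")
    }
  }
One caveat: Go's zero time is year 1, not the Unix epoch, so it won't collide with legitimate 1970 data.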


PostgreSQL has computed columns. Creating one for every field that returns its `IS NOT NULL`-ness would accommodate such programming languages.
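
For example, a sketch assuming Postgres 12+ stored generated columns and the pgx driver (the events table and completed_at column here are made up):

  package main

  import (
    "database/sql"
    "log"

    _ "github.com/jackc/pgx/v5/stdlib" // registers the "pgx" driver
  )

  func main() {
    // Connection string is made up; adjust for your setup.
    db, err := sql.Open("pgx", "postgres://localhost/app")
    if err != nil {
      log.Fatal(err)
    }
    // A stored generated column exposing the NULL-ness of
    // completed_at as a plain boolean (requires Postgres 12+).
    _, err = db.Exec(`ALTER TABLE events
      ADD COLUMN completed_at_set boolean
      GENERATED ALWAYS AS (completed_at IS NOT NULL) STORED`)
    if err != nil {
      log.Fatal(err)
    }
  }
The generated boolean can then be read as a plain non-null field by languages without an Option type.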


I think that the biggest differentiator between Discord and the others (Mumble, TS, Vent, etc.) was that Discord had very mature text chatting as a first-class citizen, rather than an afterthought. Anecdotally, this is the primary thing I have seen get even people who only casually voice chat to join public Discord servers. The model is much more friendly to casual users than either TS or Mumble is.


When I was using Vent and TeamSpeak you had to actually host the server, as I recall... someone had to set it up and manage it, or you could pay a third party to do it, but that was always too expensive for my friends and me. Discord makes it a lot easier to set up your own server and to find and discover other servers.


Yep. This is the major differentiator to me and it's obvious why it caught on after that. When I heard about Discord I was like, "Weird, so many people are hosting their own servers or paying to host a server now? Why did it catch on now? More affordable?" Once I learned it was all free... kinda obvious.


"free"

Sure there is Nitro, but I believe data mining is part of the devil's bargain.


I agree - text chatting is the first foot in the door. It's non-committal, but gets you the ability to voice chat when you feel up to it.

Things like TS and Vent are voice-chat-first, and thus you will only join _after_ you've made friends to voice chat with on some other platform (like IRC or a game). There's no TS community, since it's purely utilitarian, unlike Discord.


I have been a big fan of the Mozilla Public License 2.0 [1]. I find it is the best combination of "if you use this and improve or modify, those changes need to go to the original code" while not restricting overall usage.

IMO there really isn't anything you can do to prevent people from making a product out of your work if it is open source, but what you can do is make sure that if someone makes improvements to your work, those improvements are made publicly available under the MPL 2.0 license as well.

This has the effect that if someone wants to make a product by just 'adding one line', that line needs to be published, and you could add it upstream, making it publicly available again (thus making it harder to make a product solely from your code).

[1] https://choosealicense.com/licenses/mpl-2.0/


Isn't that the same as the LGPL?


There was a really interesting paper [1] being circulated a bit last week in the circles I frequent on a few sites that dug a bit deeper into this. The villains are often very superficial, and the consequences of the ensuing fight are very rarely shown in the movie itself; if they are, it's only in passing. There is a 'cleanliness' to the fight scenes that gives you all of the enjoyment while removing any of the dirty human tragedy from the context.

Really interesting read, and something that I have thought about quite a few times while seeing how popular these movies are nowadays

[1] https://academicworks.cuny.edu/cgi/viewcontent.cgi?article=4...


To be fair, from a security standpoint, if you want *the highest security*, third-party installers are one of the first things I would disable as well.


I have a very hard time getting behind these complex configuration languages. To me what makes a configuration format good is the simplicity of reading the configuration of a program, and almost all of these languages are optimizing for feature complexity over readability. I think that all of the popular config formats (yaml, json, toml, etc) have issues, but none of the major issues with them have to do with being unable to represent a fibonacci sequence in their language.

To draw a direct comparison, when I look at the examples in the github repository, all I can think is "I would never want to have this be a source of truth in my codebase". While I get frustrated w/ whitespace in yaml and the difficulty of reading complex json configuration, if I need a way to programmatically load complex data I would almost always rather use those two as a base and write a 'config-loader' in the language that I am already using for my project (instead of introducing another syntax into the mix)


Conversely, I've been using quite a bit of Jsonnet in different projects for a few years now, and it's a life changer.

Here's a public example - using Jsonnet to parametrize all core resources of a bare metal Kubernetes cluster: [1]. This in turn uses cluster.libsonnet [2], which sets up Calico, Metallb, Rook, Nginx-Ingress-Controller, Cert-Manager, CoreDNS, ...

Note that this top-level file aims to be the _entire_ source of truth for that particular cluster. I know of people who are reusing some of the dependent lib/*libsonnet code in their own deployments, which shows that this is not just abstraction for the sake of abstraction.

Jsonnet isn't perfect, but it allows for actual building of abstraction layers in configuration, guaranteed pure evaluation, and not a single line of text templated or repeated YAML.

[1] - https://cs.hackerspace.pl/hscloud/-/blob/cluster/kube/k0.lib...

[2] - https://cs.hackerspace.pl/hscloud/-/blob/cluster/kube/cluste...


Does a "configuration language" specifically incorporate features for "overlaid" or "unified from parts" configuration?

Much like layered dockerfiles, mature configuration often comes from several places: env vars, configuration appropriate for checkin to git (no secrets), secrets configuration, and of course the old environment-specific configuration.

All of that merges to "The Configuration".

Also, these seem close to templating languages.

I've done this several times with a "stacked map" implementation (much like the JSP key lookups that went through page / session / application scopes, or something even more convoluted for Spring Webflow).
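
A minimal sketch of that stacked-map idea in Go (the layer contents and keys are invented):

  package main

  import "fmt"

  // StackedMap resolves keys through an ordered list of layers,
  // e.g. env vars > environment-specific config > checked-in defaults,
  // much like page/session/application scope lookups.
  type StackedMap struct {
    layers []map[string]string // highest priority first
  }

  func (s *StackedMap) Get(key string) (string, bool) {
    for _, layer := range s.layers {
      if v, ok := layer[key]; ok {
        return v, true
      }
    }
    return "", false
  }

  func main() {
    cfg := &StackedMap{layers: []map[string]string{
      {"db.host": "prod-db.internal"},             // environment-specific
      {"db.host": "localhost", "db.port": "5432"}, // defaults
    }}
    host, _ := cfg.Get("db.host")
    port, _ := cfg.Get("db.port")
    fmt.Println(host, port) // prod-db.internal 5432
  }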


Answer for Jsonnet: layering/overrides from multiple sources, yes, as you define it (it's a programming language; the logic is yours to define, based on your particular use case). But no access to environment variables, as that's impure and not really in scope for them.


I also have a hard time for the same reasons. I'm torn though; I want a config format that's super easy to read, and that I can easily change anywhere with nothing more than sed/vim/nano/notepad - but I also want to avoid typos and formatting problems.

I'm not sure which is the lesser evil.

Actually, I think (as always), that it depends. For something simple like a config file for an app, JSON/YAML is usually fine.

But for something more complex, like IaC (Infrastructure as Code) definitions, I think perhaps "proper" programming languages might be more beneficial. I had a look at Pulumi just yesterday, and I very much like the idea of writing a simple C#/TypeScript app to deploy my infrastructure, compared to something like HCL (HashiCorp Configuration Language) or bash scripts that wrap the Azure/AWS CLI tooling.


Starlark is OK (it is very similar to Python, just with the non-deterministic parts removed). But the lack of type hints really kicks in when you try to read the actual underlying macros / functions that others provide. It is much harder to do without types (which Nickel, it seems, would like to address).

Honestly, I would prefer anything that mimics popular languages, to lower the bar for reading.


Have you tried Dhall? Static types and enough power to provide the tools you need, but deliberately not enough to allow full on arbitrary computation.

I've played with it briefly, along with the Kubernetes plugin, and it was a nice experience.


Configuration isn't code isn't data. Data belongs in a database. Code belongs in a codebase. Configuration doesn't. Ideally config should be reduced down to keys and values and stored anywhere where it's easy to push to the environment where the code runs. I don't understand the immense expansion and proliferation of the config layer. I never touch code anymore. All I handle is config and tooling around it. YAML engineering.

Better configuration doesn't mean more ways to treat config like code, or data like config, or god forbid, code. It means treating config like config and code like code. Gitops just makes me sad. Truth should only flow in one direction. The first time I had to write a script to utilize the GitHub API to auto-update a code repo I died a little inside.


So we try to keep these three areas separate, and config generally ends up in our deployment pipeline. The problem is what do you do when code changes necessitate config changes? Adding/removing config properties, etc. We don't want 50 developers messing around with production deployment pipelines.

Doesn't something like this go a little way toward solving that problem?


Manage the complexity, yeah. Solve it? Well, if you’re managing complexity, you’re not really solving it, are you?

Heavy lifting needs to be done with code. If your config layer is growing, I would look for why that is and how you could push the complexity to the code or data and “boil down” the config until it can be represented with just keys and values.

Growing config means there are areas of your application that aren't being properly encapsulated. But once something is enshrined as config, it usually never gets treated as a legitimate application concern worthy of a data model and a UI for changing it. Devs just keep adding onto it, and before you know it you need a whole team just to deal with it.


> I have a very hard time getting behind these complex configuration languages. To me what makes a configuration format good is the simplicity of reading the configuration of a program, and almost all of these languages are optimizing for feature complexity over readability. I think that all of the popular config formats (yaml, json, toml, etc) have issues, but none of the major issues with them have to do with being unable to represent a fibonacci sequence in their language.

Static languages like JSON and YAML are fine for toy configurations, but they don't scale to the most basic real-world configuration tasks. Consider any reasonably sized Kubernetes project that someone wants to make available for others to install in their clusters. The project probably has thousands of lines of complex configuration, but much of it will change subtly from one installation to another. Rather than distributing a copy of the configs and detailed instructions on how to manually adapt the configuration for each use case, it becomes very naturally expedient to parameterize the configuration.

The most flat-footed solution involves text-based templates (a la Jinja, Mustache, etc.), which is pretty much what Helm has done for a long time. But text-based templates are tremendously cumbersome: you have to make sure your templates always render syntactically correct and ideally also human-readable output, which is difficult because YAML is whitespace-sensitive and text templates aren't designed to make it easy to control whitespace.
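
A tiny illustration, sketched with Go's text/template (the engine Helm builds on; the template and field names here are invented). The engine splices strings blindly, so nothing guarantees the output is valid YAML:

  package main

  import (
    "os"
    "text/template"
  )

  // A toy version of the text-templating approach: splice values
  // into YAML as strings. The engine knows nothing about YAML
  // structure, so correct indentation is the author's problem.
  const deployment = `apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: {{ .Name }}
  spec:
    replicas: {{ .Replicas }}
  `

  func main() {
    tmpl := template.Must(template.New("deploy").Parse(deployment))
    tmpl.Execute(os.Stdout, map[string]any{"Name": "web", "Replicas": 3})
  }
Any mis-indentation or multi-line value flows straight through to the output; YAML problems only surface later, when the result is parsed.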

A similarly naive solution is to simply encode a programming language into the YAML. Certain YAML forms encode references (e.g., `{"Ref": "<identifier>"}` is equivalent to dereferencing a variable in source code), and another program evaluates this implicit language at runtime. This is the CloudFormation approach, and it also gives you some crude reuse while leaving much to be desired.
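
To make that concrete, a toy evaluator for that pattern might look like this (illustrative only; CloudFormation's real Ref/Fn semantics are much richer):

  package main

  import (
    "encoding/json"
    "fmt"
  )

  // eval walks decoded JSON and treats any {"Ref": "<name>"} object
  // as a variable dereference against env, leaving everything else alone.
  func eval(node any, env map[string]any) any {
    switch n := node.(type) {
    case map[string]any:
      if ref, ok := n["Ref"].(string); ok && len(n) == 1 {
        return env[ref]
      }
      out := map[string]any{}
      for k, v := range n {
        out[k] = eval(v, env)
      }
      return out
    case []any:
      for i, v := range n {
        n[i] = eval(v, env)
      }
      return n
    default:
      return node
    }
  }

  func main() {
    var doc any
    json.Unmarshal([]byte(`{"BucketName": {"Ref": "MyBucket"}}`), &doc)
    out, _ := json.Marshal(eval(doc, map[string]any{"MyBucket": "prod-assets"}))
    fmt.Println(string(out)) // {"BucketName":"prod-assets"}
  }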

After stumbling through a few of these silly permutations, it becomes evident that this reuse problem isn't different from the reuse problems that standard programming languages solve; however, what is different is that we don't want our configuration to have access to system APIs, including I/O, and we may also want to protect against non-halting programs (which is to say that we may not want our language to be Turing complete). An expression-based configuration language becomes a natural fit.

After using an expression-based configuration language, you realize that it's pretty difficult to make sure that your JSON/YAML output has the right "shape" such that it will be accepted by Kubernetes or CloudFormation or whatever your target is, so you realize the need for static type annotations and a type checker.

Note that at no point are we trying to implement the Fibonacci sequence, and in fact we prefer not to be able to implement it at all, because we expressly prefer a language that is guaranteed to halt (though this isn't a requirement for all use cases, I believe it does satisfy the range of use cases that we're discussing, and the principle of least power suggests that we should prefer it to Turing-complete solutions).


The use case for those executable configuration languages is that you often need to set the same setting on different programs, maybe even on different machines, and they must all reflect the same decision in different ways (your service can set a port to listen on, so your firewall must open that port for internal traffic, and your applications must set that port as their data source).

That said, this one language does not look powerful enough for that. So I'm not sure where it can be used.


> That said, this one language does not look powerful enough for that. So I'm not sure where it can be used.

I mean, it's used to configure all of NixOS, so I'm not sure if that's true.


Oh, so I'm wrong. Makes more sense that way :)

Let me add another post-it of "try NixOS in an environment" to my TODO list...


Yeah... I honestly don't see the appeal of nickel.

It is sold as "You use this to generate configuration in other formats like JSON"... but why? Why would I want to use some language other than the target format to configure things? Why am I making my configuration a 2 step process? And even if I bought all of those reasons, why wouldn't I just use a general purpose language instead? Why have some esoteric language dialect whose only purpose is... making configuration files?

I'd much rather use Bash, python, perl, javascript, typescript, groovy, Java, kotlin, C++, C, Rust, erlang, php, awk, pascal, go, Nim, Nix, VB, Haxe, coffeescript, etc. Really, take your pick. Any well established language seems like a much better approach than something like this.


Here's a snippet for configuring a systemd timer on NixOS. Note that if I were to use the systemd configuration language, it would be spread across two files (the timer and the service itself) [1]. If I don't have "startAt" in the definition, it won't generate the timer file. If I spell it "statrAt" it will give me an error when I generate it (or in my editor, if I have that configured). Note it's possible to fall back on using the JSON-like syntax to generate the ini-like systemd configuration files manually, which is useful to have when needed, but mostly it's about writing fairly simple functions that increase the signal-to-noise ratio of the configuration file by removing boilerplate while at the same time detecting mistakes earlier.

  systemd.services.tarsnapback = {
    # startAt makes NixOS generate a matching .timer unit
    # with OnCalendar set to this value.
    startAt = "*-*-* 05:20:00";
    path = [ pkgs.coreutils ];
    environment = {
      HOME = "/home/XXXX";
    };
    script = ''${pkgs.tarsnap}/bin/tarsnap -c -f "$(uname -n)-$(date +%Y-%m-%d_%H-%M-%S)" "$HOME/ts" '';
    serviceConfig.User = "XXXX";
  };
[1]: Quick reference if you aren't familiar with systemd timers: https://wiki.archlinux.org/index.php/Systemd/Timers


I would 100% choose to write a systemd foo.timer file, and the foo.service file, and reference those.

You're throwing away all the organizational learning and preexisting systemd documentation, and forcing something different on the world. `man systemd.timer` contains no mention of `startAt`; what you have there is something inherently different from systemd.

And what if I want more complex rules, like a combination of intervals and time from boot?


> I would 100% choose to write a systemd foo.timer file, and the foo.service file, and reference those.

NixOS gives you this option, and I choose not to. Fortunately nobody is forcing you to use this (or forcing me to not use it).

> You're throwing away all the organizational learning and preexisting systemd documentation, and forcing something different on the world. `man systemd.timer` contains no mention of `startAt`

Not quite throwing it all away, because you can easily observe the output of this before making it live. Yes, systemd.timer contains no mention of startAt because, as you correctly observed, this is something inherently different from systemd. startAt is used by other configuration options to specify items running at specific calendar times, so it's reasonably consistent within NixOS itself.

Reading the Nix documentation is quite simple (and it shows you the currently configured value):

  % nixos-option systemd.services.tarsnapback.startAt
    Value:
    [ "*-*-* 05:20:00" ]

    Default:
    [ ]

    Type:
    "string or list of strings"

    Example:
    "Sun 14:00:00"

    Description:
    ''
      Automatically start this unit at the given date/time, which
      must be in the format described in
      <citerefentry><refentrytitle>systemd.time</refentrytitle>
      <manvolnum>7</manvolnum></citerefentry>.  This is equivalent
      to adding a corresponding timer unit with
      <option>OnCalendar</option> set to the value given here.
    ''
> what you have there is something inherently different from systemd.

That's kind of the point. If it were inherently the same as systemd there would be no point to it. Systemd timers are quite boilerplate-heavy (compare to e.g. a crontab entry), so when I'm not using NixOS, I often end up copying an existing timer and modifying it.

> And what if I want more complex rules, like a combination of intervals and time from boot?

Add a time from boot of 120 seconds with this:

  systemd.timers.tarsnapback.timerConfig = { OnBootSec = "120"; };
For things that actually use all the bells and whistles of systemd, you'll need to specify all the various details.

[edit]

For a nice hyperlinked searching of options see also:

https://search.nixos.org/options?query=startAt&from=0&size=3...


Three things to address:

1) This doesn't have to be a two-step process. Specialized tools like kubecfg for Jsonnet will directly take a Jsonnet top-level config and instantiate it, traverse the tree, and apply the configuration intelligently to your Kubernetes Cluster.

2) General purpose languages are at a disadvantage, because most of them are impure. Languages that limit all filesystem imports to be local to a repository and disallow any I/O ensure that you can safely instantiate configuration on CI hosts, in production programs, etc. The fact that languages like Jsonnet also ship as a single binary (or simple library) that requires no environment setup, etc. also make them super easy to integrate to any stack.

3) Configuration languages tend to be functional, lazily evaluated and declarative, vastly simplifying building abstractions that feel more in-line with your data. This allows for progressive building of abstraction, from just a raw data representation, through removal of repeated fields, to anything you could imagine makes sense for your application.

Related reading: https://landing.google.com/sre/workbook/chapters/configurati...


I don’t think they tend to be lazily evaluated (unless you mean “lazy” in some other way than I’m familiar with), but in general I agree.


Jsonnet, Nix and CUE are lazily evaluated. Starlark is not IIRC. Dhall I don't know, but I would presume it is?

Nix as an example:

  nix-repl> { foo = 5 / 0; bar = 5; }    
  error: division by zero, at (string):1:9

  nix-repl> { foo = 5 / 0; bar = 5; }.bar 
  5
vs. Python as an obvious example of a language with eager evaluation:

  >>> { "foo": 5 / 0, "bar": 5 }.bar
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  ZeroDivisionError: division by zero
This lazy evaluation allows for a very nice construct in Jsonnet:

  local widget = {
    id:: error "id must be set",
    name: "widget-%d" % [self.id],
    url: "https://factory.com/widget/%d" % [self.id],
  };
  {
    widgetStandard: widget { id: 42 },
    widgetSpecial: widget { name: "foo", url: "https://foo.com" },
  }
When the resulting code only expects a widget to have a 'name' and 'url' field, you can either have both automatically defined based on a single top-level ID, or override them, even fully skipping the ID if not needed. (A :: in Jsonnet is a hidden field, i.e. one that will not be included in the output when generating YAML/JSON/..., but can be evaluated by other means.)


JSON and YAML don't offer any abstraction. If you want to describe Kubernetes resources in such a way that you can deploy the same resources to many environments with subtle differences between them (e.g., namespace names, DNS names, etc.), you want something that lets you abstract, so you aren't manually trying to keep disparate copies of thousands of lines of frequently-changing config in sync.

The reason you don't use regular languages for this task is that you want to enforce termination (so a program can't run forever and DoS your system) or reproducibility (the config program doesn't evaluate to different JSON depending on some outside state because the program did I/O). If your use case involves users who can be trusted not to violate these principles, then a standard programming language can work fine, but this frequently isn't the case.


> you want to enforce termination

Nickel is Turing complete. See the fib example.

> or reproducibility

Nickel doesn't force reproducibility

So again, why Nickel and not a GP programming language?


Sorry, my reply, like throwaway's, missed the main point of your original comment of "why not a GP programming language".

A configuration file is uniquely suited to a pure and lazy language.

Pure, because all the advantages of a pure language remain, while none of the downsides do; the result of evaluating the function is your configuration data. You don't need to do arbitrary I/O and ordering to generate configuration files.

Lazy because configuration files are naturally declarative, but you don't want to evaluate tons of things you have declared but then never used.


> Nickel is Turing complete. See the fib example.

I should have been more clear, I was listing potential reasons why you might not use a standard programming language. "Not wanting turing completeness" is a reason to use a non-turing-complete DSL. I wasn't suggesting that Nickel was appropriate for this particular use case, but many of the other languages in this category are (e.g., Starlark, Dhall).

> Nickel doesn't force reproducibility

Scanning the docs, I don't see anything about Nickel allowing I/O, so I believe you're mistaken.


https://github.com/tweag/nickel/blob/master/RATIONALE.md

> However, sometimes the situation does not fit in a rigid framework: as for Turing-completeness, there may be cases which mandates side-effects. An example is when writing Terraform configurations, some external values (an IP) used somewhere in the configuration may only be known once another part of the configuration has been evaluated and executed (deploying machines, in this context). Reading this IP is a side-effect, even if not called so in Terraform's terminology.


This is the relevant passage:

> Nickel permits side-effects, but they are heavily constrained: they must be commutative, a property which makes them not hurting parallelizability. They are extensible, meaning that third-party may define new effects and implement externally the associated effect handlers in order to customize Nickel for specific use-cases.

This answers your question about why Nickel is preferable to general purpose programming languages--the side-effects are more limited. Further, it reads to me like the "side-effects" are something that the owner of the runtime opts into by extending the sandbox with callables that can do side-effects as opposed to untrusted code being able to perform side-effects in any Nickel sandbox.


Hi, blog post author here. The idea behind effects in Nickel is to have very limited, use-case-specific effects that can extend the standard interpreter. The goal is, as the example suggests, to make it able to interoperate with an external tool when absolutely necessary, such as Terraform or Nix. The idea is really not to have general effectful functions such as readFile or launchMissiles.


It's not clear from the docs whether any Nickel program can perform side-effects or if the Nickel interpreter must be extended to allow programs to perform side-effects (a la Starlark). Can you clarify this point?


The Nickel interpreter is intended to offer a mechanism to make it possible to extend it to add "effects", which is really just a pompous name for an external call. The idea is that, if you want to integrate it with Terraform for example, you would want to have a "getIp" effect to retrieve the IP of a machine once it has been evaluated. So you implement your external handler (say in Rust or whatever), and then you can call "getIp" from a Nickel program. Currently, we see no reason Nickel would ship with any effect by default. These are really just for extension purpose. Such additional effects would be required to be commutative in order not to hurt parallelizability, but you can't enforce that mechanically, so you'll have to trust the implementer.


This makes sense--allowing or preventing a particular instance of the interpreter to make side effects is perfectly reasonable. It would be concerning if all instances of the interpreter allowed client programs to make side effects.


Yabai (https://github.com/koekeishiya/yabai) and skhd (https://github.com/koekeishiya/skhd) together make a very powerful combination that works extremely well. It's as close to i3 as you can get on macOS, and outside of a few odd things with 3 monitors I haven't run into any issues.

Yabai is actually the second iteration of tiling window manager that koekeishiya has made, and it's super well developed.


Seconding this; the only thing I miss from i3 is the fact that yabai almost by definition can't be better integrated with the OS, so there's a slight increase in latency that's barely perceptible but enough to make me notice how much zippier my Linux boxes + i3 are. Small price to pay for how much more comfortable they make me on macOS, though.


In my experience, I have seen much less of the OOP-only group, especially in recent years. The general feeling I have gotten when talking with other developers about this is that OOP is a tool, and probably not the best one, but they are comfortable with it and its problems.

In general it feels like the overall attitude toward FP is either

"I don't have time to learn a new paradigm when my current one is working well enough to make the company money"

or

"I love using FP ideas in my code when it makes sense"

I really feel like this is the ideal state for Software Development, as either side "winning" will only hurt the robustness of the environment.


Oh, sure! I didn't mean to suggest that there were as many as the only-pure-FP crowd, though I can see how my phrasing may have suggested as much. I just meant to bring up that there are people like that, who refuse to (consciously) adopt any amount of FP in their development. I have interacted with some myself. Most of them make arguments about, like, "People think in terms of state so FP is inherently a bad user experience" or something to that effect.


I have been looking for a solution to this problem for a very long time, and my reasoning is that on a trackpad, "natural" makes a lot of sense intuitively, since you are "pushing" the document (page/app/whatever) in the direction you want it to move.

Whereas when I am using a mouse, I feel like the document is more or less below the mouse, and when I move the scroll wheel it physically moves the document as if it were tied to the scroll wheel.

Not sure if that makes any sense, as this is just an internal feeling, but I have been manually toggling natural scrolling whenever I plug and unplug my mouse, ever since I started using macOS.


Yeah, I think the exact same way! The top of the physical wheel spins the opposite direction from the bottom of the wheel, which would be "touching" the page/content. Originally I was going to just make a background script to automatically toggle the option when it detects certain USB devices (like my mouse), but I couldn't find a way to apply settings changed via the "defaults write" command without logging out and then back in. In my research I came across discrete-scroll and Scroll Reverser on GitHub. Discrete-scroll worked in Catalina but had no GUI, and Scroll Reverser didn't work reliably on Catalina. So I combined the ideas from both in as little Swift code as possible, so that anyone using my app wouldn't need to worry about allowing the app to "control your computer".


I have to say I really appreciate this! I was going down very similar lines just last week (I had installed Hammerspoon and was experimenting with some AppleScript hacks, but to no avail).

Just intercepting the actual scroll and inverting it is a really elegant solution (that doesn't require a relog) which is great.

This solves one of the two biggest gripes I had about macOS, the other being my inability to "pin" my dock to one of my monitors, overriding the swap functionality. Thanks!


I've always pictured scrolling as moving a camera or visor over a static sheet.


I'm the same as you, and it also took me a while to figure out why that is. The best explanation I could come up with is that it felt like I was moving the scroll bar with the scroll wheel, while that's clearly not the case when I'm using a trackpad or touch.

