
`man perl` and `man perlintro` are the easiest ways to get started. Not sure about Raku.


The book by Moritz Lenz is quite good. https://link.springer.com/book/10.1007/978-1-4842-6109-5

There's also this polished three-hour introductory lecture: https://www.youtube.com/watch?v=eb-j1rxs7sc

Combine that with reading up on details in the reference and you're in for a decent start. https://docs.raku.org/reference


If you already know Perl, Raku is easy to pick up, especially for basic text-munging tasks. The regex stuff has changed, though; it takes some getting used to.

Some of the warts are gone (like a list element needing scalar context, the kind of thing that scares beginners away).

It is a _large_ language, with paradigms and constructs from everywhere (ML, Haskell, Lisp, C, Perl, you name it).

Powerful operators. Actually too powerful. Easy to write elegant line-noise kind of code.

Easy-to-use, built-in concurrency. (There isn't much that is not built in :-) )

Nice language for Sys/Ops scripting if you find Bash too dangerous and Python too tedious.


Reminds me of [PuzzleScript][1].

[1]: https://www.puzzlescript.net/editor.html


PuzzleScript is super cool! I also really like crisp-game-lib, in the same family of tiny engines.


Good question. Technically, ToOwned counts:

  pub trait ToOwned {
    type Owned: Borrow<Self>;
    fn clone_into(&self, target: &mut Self::Owned) { ... }
  }
Here the `Owned` is technically an input to `clone_into`, but of course semantically it's still an output.
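
For a concrete feel, here is a tiny sketch using std's `str`/`String` impl of ToOwned (plain standard-library calls only; `clone_into` is a fairly recent addition to stable Rust):

  fn main() {
    // For str, the ToOwned "cell" is Owned = String.
    let mut buf: String = "hello".to_owned();

    // clone_into takes Owned by &mut, so syntactically it sits in input
    // position, but semantically we are still producing a String (it just
    // reuses buf's allocation when it can).
    "world".clone_into(&mut buf);
    assert_eq!(buf, "world");
  }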

A more subtle one is `StreamExt`:

  pub trait StreamExt: Stream {
    // (`Item` is the associated type inherited from the `Stream` supertrait:
    //  `type Item;`)
    fn map<T, F>(self, f: F) -> Map<Self, F>
       where F: FnMut(Self::Item) -> T,
             Self: Sized { ... }
  }
Here, the associated type `Item` is the input to the mapping function, and since the mapping function is an input to `map`, it is an input-to-an-input, which basically makes it an output: the stream "outputs" its items into the mapping function (two contravariants make a covariant).
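
The same shape is in std's `Iterator::map`, if you want something runnable without pulling in the futures crate (just an illustrative sketch):

  fn main() {
    // Iterator's associated type Item plays the identical role: it is the
    // input to the mapping closure, so the iterator "outputs" its items
    // into the closure even though Item never appears directly in map's
    // return type.
    let lengths: Vec<usize> = ["a", "bb", "ccc"].into_iter().map(|s| s.len()).collect();
    assert_eq!(lengths, vec![1, 2, 3]);
  }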

I couldn't find any more direct examples.


To clarify things a bit further, I find it helpful to think of traits as open compile-time functions that input types (including Self) and output both types and functions.

  pub trait Mul<Rhs> {
      type Output;
  
      fn mul(self, rhs: Rhs) -> Self::Output;
  }
This begins the open declaration of the compile-time `Mul` function. It has two inputs: `Rhs` and `Self`. It has two outputs: `Output` and `mul` (the top-level runtime function).

Note that we haven't defined the compile-time `Mul` function yet. We've only opened its definition. It's sort of like writing down the type of a function before we write down its implementation.

The implementation is never written down, though, because it is always the same: a lookup in a table that is constructed at compile-time. Every impl fills in one cell of the compile-time table.

  impl Mul<f32> for i32 {
    type Output = f32;
  
    fn mul(self, rhs: f32) -> Self::Output {
      self as f32 * rhs
    }
  }
This adds a single cell to the `Mul` table. In pseudo-code, it's like we are saying:

  Mul[(i32,f32)] = (Output=f32, mul={self as f32 * rhs})
The cell is a pair of a type and a function. For traits with lots of functions, the cell is going to be mostly functions.
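
To make the lookup concrete, here is a self-contained sketch (the trait and impl are repeated from above so it compiles on its own; `scale` is just a made-up helper):

  pub trait Mul<Rhs> {
    type Output;
    fn mul(self, rhs: Rhs) -> Self::Output;
  }

  impl Mul<f32> for i32 {
    type Output = f32;
    fn mul(self, rhs: f32) -> f32 { self as f32 * rhs }
  }

  // A generic function "invokes" the compile-time Mul function: at each call
  // site the compiler looks up the cell for (A, B) and splices in its Output
  // type and its mul implementation.
  fn scale<A: Mul<B>, B>(a: A, b: B) -> A::Output {
    a.mul(b)
  }

  fn main() {
    let x = scale(3i32, 2.0f32); // resolves to the Mul[(i32, f32)] cell, so x: f32
    println!("{x}");
  }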

The main thing I'm pointing out (that the author didn't already say) is that `mul={self as f32 * rhs}` is also part of the compile-time table, not just `Output=f32`. The author says that associated types are no more than the return of a type-level function, and I want to clarify that this isn't a metaphor or mental shorthand. Traits ALWAYS HAVE BEEN type-level functions. They input types and output mostly functions. Associated types just allow them to output types in addition to outputting functions. Notice how associated types are defined inside the curly braces of an `impl`, just like the functions are.

Once you realize this, it's all very simple. I think there are a few things that obscure this simplicity from beginners:

1. `Self` is an implicit input to the compile-time function, with its own special syntax, and for many traits it is the ONLY input. When reading a book on Rust, the first examples you encounter won't have (other) type parameters, and so it's easy to overlook the fact that traits are compile-time functions.

2. Rust traits are syntactically similar to object-oriented polymorphism but semantically dual to it, so experienced OO programmers can jump to wrong conclusions about Rust traits. Rust traits are compile-time and universally quantified; object-oriented polymorphism is run-time and existentially quantified.

3. Because the trait-as-compile-time-function's implementation is so highly structured (it's just a table), it can actually be run backwards as well as forwards. Like a Prolog predicate, there are 2^(#inputs+#outputs) ways to "invoke" it, and the type-inference engine behaves more like a logic language than a functional language, so from a certain perspective associated types can sometimes look like inputs and type parameters can sometimes look like outputs (see the sketch below). The reason we call them functions and not merely relations is that they conform to the rule "unique inputs determine unique outputs".
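
For point 3, here is a small sketch of what "running the table with the output pinned" looks like in practice (`times_as_f32` is a made-up helper; the toy `Mul` is repeated so the snippet stands alone):

  pub trait Mul<Rhs> { type Output; fn mul(self, rhs: Rhs) -> Self::Output; }

  impl Mul<f32> for i32 {
    type Output = f32;
    fn mul(self, rhs: f32) -> f32 { self as f32 * rhs }
  }

  // Constraining the associated type runs the table with the "output" known:
  // the solver now has to find which (Self, Rhs) cells can produce an f32.
  fn times_as_f32<A, B>(a: A, b: B) -> f32
  where
    A: Mul<B, Output = f32>,
  {
    a.mul(b)
  }

  fn main() {
    println!("{}", times_as_f32(3i32, 0.5f32)); // only the (i32, f32) cell fits here
  }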


Do you decelerate in both axes?


Why do you need runtime codegen? What exactly needs to be "instantiated"? Ultimately the runtime representation of a type parameter comes down to sizes and offsets. Why not have the caller pass those values into the generic method?



Ugh yeah. The ability to check at runtime if a value satisfies an interface, combined with structural typing...

So now the static expressivity of the type system is compromised in the name of runtime introspection. Reminiscent of how when Java added generics, runtime introspection ended up totally blind to them due to erasure.

This seems like the result of not taking care to account for the possible future addition of generics when originally designing the language, even though it was always well understood that they’d be a likely later addition. The ability to check if a value satisfies an interface at runtime doesn’t seem all that critical, although I could be missing something; I’m not a regular user of the language.


It's not as common as unwrapping to concrete types, but still frequently used.

One place it's absolutely critical, and certainly the most common by number-of-calls even if people don't think about it, is `fmt` functions that check to see if the type implements `String() string`.

An idiom found in many data-shuffling libraries is to accept a small interface like `io.Writer`, but have fast-paths if the type also implements other methods. E.g. https://cs.opensource.google/go/go/+/refs/tags/go1.18:src/io...
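
Here is a hedged sketch of both probes mentioned above (`copyAll` and `describe` are made-up names; `io.ReaderFrom` and `fmt.Stringer` are the real optional interfaces involved, and `io.Copy` already performs the same ReaderFrom probe internally):

    package p

    import (
        "fmt"
        "io"
    )

    // copyAll accepts the small io.Writer interface, but probes at runtime for
    // the richer io.ReaderFrom and takes a fast path if the value provides it.
    func copyAll(dst io.Writer, src io.Reader) (int64, error) {
        if rf, ok := dst.(io.ReaderFrom); ok {
            return rf.ReadFrom(src) // fast path: dst can ingest src directly
        }
        return io.Copy(dst, src) // generic fallback
    }

    // describe is the fmt-style probe: use String() string if it's there,
    // otherwise fall back to reflection-based formatting.
    func describe(v any) string {
        if s, ok := v.(fmt.Stringer); ok {
            return s.String()
        }
        return fmt.Sprintf("%v", v)
    }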

It's a little annoying, but I don't think it's as bad as Java's erasure. We're still very early in idiomatic Go-with-generics, so maybe we'll see a huge impact from this limitation later, but most of the time when I see people wanting generic method parameters, they've got a design in mind that would be horribly inefficient even if it were valid. If you are going to box everything and generate lots of garbage, you might as well write Java to begin with.


For the case of coercing specifically a function argument to a different interface, there is a solution that isn't stupidly inefficient, but it's impossible to support it for the general case of coercing any interface value to a different interface. I expect that 90% of the time a function coerces an interface value to a different interface, that value is one of its arguments, but supporting that case for generic interfaces and not the other 10% would be a really ugly design. I think you'd also have issues with treating functions that coerce their arguments to generic interfaces as first-class functions.

    package p1
    type S struct{}
    func (S) Identity[T any](v T) T { return v }

    package p2
    type HasIdentity[T any] interface {
        Identity(T) T
    }

    package p3
    import "p2"
    // Because parameter v might be (and in this case is) coerced into
    // p2.HasIdentity[int], this function gets a type annotation in the compiler
    // output indicating that callers should pass a p2.HasIdentity[int] interface
    // reference for v if it exists, or nil if it doesn't, as an additional
    // argument.
    func CheckIdentity(v interface{}) {
        if vi, ok := v.(p2.HasIdentity[int]); ok {
            if got := vi.Identity(0); got != 0 {
                panic(got)
            }
        }
    }

    package p4
    import (
        "p1"
        "p3"
    )
    func CheckSIdentity() {
        p3.CheckIdentity(p1.S{})
    }


I'm not sure this approach would correctly handle the case where the function is oblivious to generics but receives an object where a generic method must be used to satisfy the interface. E.g. a function taking an `any` parameter, checking if it's an `io.Writer`, passed something with a `Write[T any]([]T) (int, error)` method. Intuitively you would expect that to "materialize" with `byte` and match. And as the FAQ says, if it doesn't, then what's the point of generic method arguments since methods exist primarily to implement interfaces?


It would be up to the caller, which knows the concrete type of the value getting passed as `any`, to provide an interface object with a function pointer to the concrete instantiation of `Write[T]` for T=byte. If the immediate caller doesn't know the concrete type either, it would in turn get that from its caller. It's ultimately very fragile, since there are definitely cases where at no point in the caller chain does anyone statically know the concrete type (like if the value is pulled from a global variable with an interface type).

I think it would be terrible to include in the language because of the inconsistencies, but it is possible to make the two examples you listed as typical cases of interface->interface coercion work with generic methods, ugly as it may be.


Nice find.


> Large proportions of the supposedly human-produced content on the internet are actually generated by artificial intelligence networks in conjunction with paid secret media influencers in order to manufacture consumers for an increasing range of newly-normalised cultural products.

> This isn’t true (yet)

It's at least partially true:

https://www.jasper.ai


Depends on what you mean by "decline". But if you're talking about stock price, then the current Shiller PE Ratios are:

  Amazon   = 238.08
  Netflix  = 219.32
  Google   =  73.95
  Facebook =  68.36
  Apple    =  63.17

The Shiller PE Ratio of the S&P 500 is currently ~30. All of these companies are overpriced, and Amazon and Netflix significantly so.


PE ratios are only meaningful for comparing companies with no revenue growth, which is well-understood by investors. For companies with insanely high revenue growth, like Amazon, a PE ratio is essentially meaningless because that growth is financed with earnings. The fact that Amazon is only trading at 3.7x revenue is a strong argument that it is underpriced given its revenue growth, not overpriced.


Answer to #1:

In the Google search results, click the three vertical dots above the link and to the right of the domain. If you're on mobile, you'll need to switch to desktop mode to see the three dots. After clicking, an "About this result" pane will pop up to the right, probably[1]. In that pane you'll see the true link, and you can Right Click > Copy Link.

[1]: On my computer, the "About this result" pane says "BETA", so not sure if everyone can use it. It works for me in a private window, though.


Yudkowsky's explanation[1] is the first one that worked for me. I later found "Quantum Mysteries for Anyone"[2] helpful. The latter has less soapboxing.

1: https://www.lesswrong.com/posts/AnHJX42C6r6deohTG/bell-s-the...

2: https://kantin.sabanciuniv.edu/sites/kantin.sabanciuniv.edu/...


I trust Yudkowsky on many things, but not on that explanation. It's still quite complicated, and a couple of times I miserably failed to reconstruct it over a beer or two. A red flag.

Plus, I'd expect at least one professional (QED) physicist to exist who is able to explain it, and he isn't one. Mermin is, but his explanation is decidedly less clear.

BTW I came here to say Bell's inequality as well. For me it's as baffling as science could ever be.


I started to read the first one, but his insistence that Many Worlds is true was too frustrating. The Many Worlds interpretation seems specifically useful for saying "the variables aren't hidden, because rather than the wavefunction collapsing, everything actually plays out in different worlds."

But we specifically have no way of proving that theory. So now we're back to the essence of the original question: if these things seem random, how do we know that they're in fact deterministic without any hidden variables?


Well, I'd recommend reading the whole series. It's not as bad as it sounds. There are many steps from where you are to appreciating the utter weirdness of Bell's experimental result. Not the weirdness of any theory (or interpretation, which is what Many Worlds actually is), but of the basic experimental result.

If you are properly amazed by it, rejecting MWI or any crazy-ish, borderline-conspiracy theory suddenly seems a lot harder.

I feel Yudkowsky's whole QM series in fact served to deliver that one post.


Why isn't the MWI another form of hidden variables (a supremely non-parsimonious one at that), where the hidden variable is which of the many worlds you happen to inhabit?


I think you can make an argument for viewing it that way, depending on exactly what you mean by "you".

But IIUC, one of the remarkable things about MWI is that it would be a local hidden variable theory!

This is a very important property to have because the principle of locality is deeply ingrained in the way the Universe behaves. Note that (almost?) no other quantum interpretation is both realist and local at the same time.

Maybe you wonder: how can MWI be considered a local hidden variable theory if Bell's theorem shows precisely that local hidden variable theories are not possible?

I think that it was Bell himself who said that the theorem is only valid if you assume that there is only one outcome every time you run the experiment, which is not the case in MWI.

This means that MWI is one of the few (the only?) interpretation we have that can explain how we observe Bell's theorem while still being a local, deterministic, realist, hidden variable theory.


For it to be local (causality does not propagate faster than light), it must be superdeterministic (all the many worlds that ever will be already are). For it not to be superdeterministic (the many worlds decohere at the moment of experimentation), it must also be non-local (the decoherence happens faster than the speed of light, across the universe).


I'm sorry but I don't follow.

If you take the Bell test experiment where Alice and Bob perform their measurements at approximately the same time but very far apart, I think you and I both agree that when Alice does a measurement and observes an outcome, she will have locally decohered from the world where she observes the other outcome.

But I don't see why the decoherence necessarily has to happen faster than the speed of light.

It makes sense that even if Alice decoheres from the world where she observes the other outcome, the outcomes of Bob's measurement are still in a superposition with respect to each Alice (and vice-versa).

And that only when Alices' and Bobs' light cones intersect each other will the Alices decohere from the Bobs in such a way that the resulting worlds will observe the expected correlations (due to how they were entangled or maybe even due to the worlds interfering with each other when their light cones intersect, like what happens in general with the wave function).

I admit I'm not an expert in this area, but is this not possible?


An awesome question. That is exactly what I have been wondering without being able to put it into words, and this is the core of why MWI seems completely useless to me as a scientific theory. (As a philosophical one, maybe? But as science?)


To be clear, I don't reject Many Worlds at all, and in fact consider it a promising candidate because it sort of "falls out" of the Schrödinger equation taken literally, unless you add complexity.

But the fact remains that it is impossible to prove and it is conveniently well equipped to handle this situation. I'd prefer an argument that presupposes the Copenhagen interpretation as that is when my intuition fails.


>But the fact remains that it is impossible to prove and it is conveniently well equipped to handle this situation. I'd prefer an argument that presupposes the Copenhagen interpretation as that is when my intuition fails.

Is that not like trying to get a better intuition for planetary movement by using an epicycle-based model? The fact that the interpretation is conveniently shaped so that the paradox isn't an issue is not a coincidence that should be overlooked in the spirit of fairness to alternative interpretations. Regardless, I think my post below is useful for answering what you're after.

>So now we're back to the essence of the original question: if these things seem random, how do we know that they're in fact deterministic without any hidden variables?

The world is only deterministic under Many-Worlds, and it's deterministic in the sense of "each outcome happens (mostly) separately". It doesn't make any sense to try to make sense of the "deterministic" part separately from MWI. MWI is the only deterministic QM theory (unless you're going to consider "superdeterminism", but there's nothing concrete to that interpretation besides "what if there existed a way that we had QM+determinism but not MWI". There's no basis to it, besides a yearning from people that like the abstract idea of determinism and don't like the abstract idea of MWI).

EPR doesn't tell us that the world is deterministic. It tells us that local hidden variable interpretations of QM (where experiments have a single outcome) can't work, because it shows that a measurement on a particle can appear to you to affect the measurement made by someone else on a distant particle. The Copenhagen interpretation's response to this is that the wave function collapse must be faster than light. Therefore, the Copenhagen interpretation is not a "local" theory. (The Copenhagen interpretation doesn't give us any answer as to whose measurement we should expect to trigger this wave function collapse first when two measurements are taken simultaneously at a distance, though.)


If experimenters disprove Many Worlds, they've also disproved Copenhagen. These are exactly the same equations after all.

Theoreticians choose very different mindsets about the same equations, which (they say) somehow gives them grounds to form various new hypotheses. As far as I know, neither approach has been very fruitful so far in terms of new science, so people try a multitude of others.

What I meant to say above is that I have a lot of trouble using Copenhagen to understand Bell's experiment. MWI fits the bill here for me.

