Hacker News | kummappp's comments

Cryptocurrencies offer non-repudiation, and that is it. https://en.wikipedia.org/wiki/Non-repudiation


That is a great YouTube channel. The fall of the city of Ur was also a good one.


Also, one thing that works is mapping the bits to a -1/+1 signal and then taking its autocorrelation.
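A rough sketch of what I mean, using NumPy (the normalization choice is mine):

```python
import numpy as np

def bit_autocorrelation(bits):
    """Map 0/1 bits to -1/+1 and compute the autocorrelation,
    normalized so that lag 0 equals 1."""
    s = 2 * np.asarray(bits, dtype=float) - 1  # 0 -> -1, 1 -> +1
    n = len(s)
    # np.correlate in "full" mode gives all lags; keep the non-negative ones.
    ac = np.correlate(s, s, mode="full")[n - 1:]
    return ac / ac[0]

# A strictly alternating pattern: the correlation flips sign at every lag.
r = bit_autocorrelation([0, 1] * 8)
```

On structured data (headers, padding, text) the lag spikes become visible immediately, which is the point.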


What a nice way to turn 1D into 2D. I did my MSc thesis on data visualization, and the measure I found useful was excess entropy: how much better you can predict the next bit if you add one more bit to the sliding window you use to predict the next bit/byte. That measure usually depends strongly on the sliding-window size. Imagine what happens with text written in 8-bit characters. With that trick one could make a 3D visualization.
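A minimal sketch of the window-size experiment I mean (empirical block entropies only, so this is a crude estimator, not the full excess-entropy machinery):

```python
from collections import Counter
from math import log2

def block_entropy(bits, n):
    """Empirical Shannon entropy (in bits) of length-n blocks."""
    blocks = [tuple(bits[i:i + n]) for i in range(len(bits) - n + 1)]
    counts = Counter(blocks)
    total = len(blocks)
    return -sum(c / total * log2(c / total) for c in counts.values())

def entropy_gain(bits, n):
    """H(n+1) - H(n): the conditional entropy of the next bit given an
    n-bit window. Watching this drop as n grows is the effect I describe."""
    return block_entropy(bits, n + 1) - block_entropy(bits, n)

# A period-3 pattern: once the window covers the period, the gain collapses.
periodic = [0, 1, 1] * 100
gains = [entropy_gain(periodic, n) for n in (1, 2, 3)]
```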


'A strongly typed language increases productivity.' How was this tested? Was there a control group? I would argue that currently JavaScript and Python have been greatly improving productivity. The type system works well for maintenance, API definitions, and 'no unit tests' scenarios, but when you have to do something you don't yet have any clue about, for example at the start of a years-long app development project, the type system is just in the way.


I think that JavaScript and Python, from my experience, certainly increase speed... but that definitely != productivity.

If you want to get something up and running fast, then JS/Python should absolutely be your go-to. We have an "innovation sprint" every quarter where everyone gets to try out changes and new features and anything else they wish to hack with our system, and I would say 99% of people choose to do this work in JS/Python.

However, my personal opinion is that productivity's first and most important pillar should be maintainability, followed closely by readability, with speed relatively far behind.

Again, this is just my two cents, but I equate Python or JavaScript to sending a message without things like capitalization and punctuation. Works perfectly fine for Slack, but not as well for writing a novel.


There were (arguably unsuccessful) attempts at testing such things in labs: https://danluu.com/empirical-pl/

My interpretation is that the conclusion, at the moment, is that we can't know for sure, scientifically, which one is "better".

That being said, I still have to find the randomized, double-blind test that proves a hammer is the right way to hit a nail.


I don't see how you could test a statement like that scientifically. Maybe you could argue it holds for most people? For certain people? People conceive of problems and solutions differently, and I doubt it is particularly standardizable in terms of type systems.


> How was this tested? Was there a control group?

I agree this deserves some study. But...

> when you have to do something you don't have yet any clue about the type system is just in the way

Even if you don't have any clue, you usually know the types of your functions and data structures.

I mostly write Python and OCaml code. Each language has its use cases, but I rarely feel the OCaml type system is in my way. When it is, it's because my code is incorrect in an obvious way.


Yeah, this pretty much echoes my experience working with type-safe languages.

In the case of my anecdote here, I'm mostly talking about typescript, but I've worked with a bunch of other ones in a non-web context.

Every single time, without fail, when I get a "snag" from the type-system, it's complaining about something real. Sometimes it's a trifling bug, like a typo, but ... even there it's usually pretty nice to have the type system immediately jump on it and report it, without me having to deploy the thing and run it and only then find out that something is wrong.

But the other class of bugs - that's where it's solid gold. It'll often catch really sneaky bugs, bugs related to "nullability", where some object I'm blindly using isn't guaranteed to stay allocated during the use case I expect it to be useable in, and holy smokes are those a lifesaver. Having had to deal with those bugs from the opposite direction, they're an unleaded nightmare to try to fix without the type system pinning down exactly the culprit that would be causing it. Every time I see one, I immediately think "wow, this would have been a 5-10 hour nightmare if I had to fix this because of some production bug". I've been in the office till 10pm, and ... I never want to do that again if I can avoid it.


Yeah, the type system helps with that, but for a unit-testable product you need factory methods that produce some valid and invalid cases. If you write simple tests for those, that is a type checker on its own, and the real type checker is just overhead. If I could choose my next project from two similar implementations, one type-checked and the other unit-tested with basic datatype factories (at least), I would choose the unit-tested one.

I would use typed APIs though, because it saves documentation-reading time and makes editor autocomplete work like magic.
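The factory idea above could be sketched like this (all names are made up for illustration):

```python
def make_user(valid=True):
    """Hypothetical test factory: produces a valid payload, or a
    deliberately broken one, so plain unit tests double as a type check."""
    user = {"name": "Alice", "age": 30}
    if not valid:
        user["age"] = "thirty"  # wrong type on purpose
    return user

def validate_user(user):
    """The kind of check a simple test would run against the factory output."""
    return isinstance(user.get("name"), str) and isinstance(user.get("age"), int)
```

Whether this covers as many cases as a real type checker is exactly what's being debated in this thread.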


> If you write simple tests for those it is a type checker on it's own and the type checker is just overhead.

This is not true. Unit tests cannot replace a type system, just like a type system cannot replace unit tests.

You need unit tests because a type system cannot check for all possible kinds of correctness.

However, unit tests can only check for the presence of bugs; they cannot prove their absence. On the other hand, a type system can prove that certain classes of errors cannot exist in the program.


> the type system is just in the way

That's the usual symptom of trying to write untyped code in a typed language.

There are a few cases where types do get in the way, but in the huge majority of cases the types are there for you to explore your ideas on first, and you only mess with the code once they make sense.


> for example at the start of a years long app development project, the type system is just in the way.

Subjectively, my experience is that writing out the type definitions and signatures of main functions is a great way to start exploring an unknown problem space.


I assume we agree that "productivity" doesn't just cover writing code, but also avoiding/finding bugs and maintaining the code over a long period by different people. TypeScript is a big improvement over JavaScript in this respect.


I'm not convinced of TS as of yet for my needs. The vast majority of bugs and complexity in UI code seem to come from state management, asynchronicity, complex validation and the like. Static typing doesn't help with any of these or at least not to a degree that is sufficient. TS also doesn't seem to help at all with performance, which I find to be the most important trade-off for introducing types and the implied complexity.

I'm heavily biased towards small teams and small to medium programs though. I can at least imagine how TS improves ad-hoc documentation in some cases, which can definitely help in the "maintaining the code over a long period by different people"-scenarios.


That's not my experience, sadly. We switched from Javascript to Typescript over a year ago, on a fairly new project, but it mostly results in more errors that need to be fixed, that wouldn't have been errors in Javascript.

Being able to specify interfaces is absolutely nice, but overall I'm not convinced it's worth the trouble.


> sadly. We switched from Javascript to Typescript (...) I'm not convinced it's worth the trouble.

Judging by the seismic shift in the industry away from vanilla JS towards TS I'd say that qualifies as an extraordinary claim.

It would be interesting to hear some of the details behind your experience.


It's mostly a lot of extra boilerplate that's suddenly required. We're using Vue, and every time we write a method or computed property for a component that uses the `this` pointer, we need to pass `(this: any)` as a parameter. `any`, because every component is different, has different properties, and is constantly changing, so writing interfaces for those isn't worth the effort, since they only call their own methods anyway. Forget it, and it might still work fine locally, but the build server complains, so we have to fix it.

Most of the functional errors will be caught either by unit tests or by functionality noticeably not working. These are not things that would be caught by Typescript anyway.

The irony is that we're using typescript in the front-end, where it mostly gets in the way. I think Typescript would have been more useful in the backend, but we're not using it there, because originally our backend was trivially simple. Now that the backend is becoming bigger, I can imagine Typescript would be more useful there.

It could be that Typescript doesn't work well with our version of Vue. (I think the latest version is designed around Typescript which will hopefully make the process a lot easier.)


> These are not things that would be caught by Typescript anyway.

In my experience working with React, which is pretty heavily invested in typescript these days, if you go reasonably deep on doing typescript interfaces, it's like a switch gets flipped.

A light dusting of typescript really does barely anything; it's just boilerplate. But once you get up to about 80-90% coverage, it's like a switch gets flipped. All of a sudden it's really, really good at detecting discrepancies - I had a thing I was working on today, where I had a cute little svg icon component in the giant SPA program we're writing - I was just reusing the thing, and attaching a click handler to it, and all of a sudden - this component we hadn't touched in months, typescript starts griping about it. And I'm like "oh come on, this is so basic - what the hell could be wrong about passing in a simple onclick handler?" Well - turns out nobody had ever needed to use the "event" param on that function, so it didn't even use one internally - what I was passing in, in plain JS, would have just been thrown away, because the internal 'passthrough' version of the function had no parameter at all. And I didn't notice it in light testing (we have TS set to emit our program even if it's failing tests). I tested the component, and because the behavior's invisible/internal, it seemed like it was probably fine. Maybe I would have caught it with really earnest, aggressive testing later, but I didn't even need to - typescript just nailed it instantly.

I've had the privilege of working on some game development stuff outside of a web stack, and holy smokes does working in a complete, algebraically typed language change everything. When you go from 80-90% type coverage, to "hard 100%", it's just a complete 180°. It's just _freaky_ how good it is at catching errors. I'll change one little thing, and it can tell me "oh yeah - you know that cutscene an hour into the game? Yeah, you broke that." It's uncanny. It just absolutely changes everything about how I work.


Yeah, I guess at least part of the problem is that we're using a version of Vue that's not natively designed around Typescript. There is a patch for it, but it's not that thorough. Half-hearted typescript doesn't work. A version of Vue that assumes you're using typescript and has interfaces defined for everything, would probably make a massive difference, and I think that's what Vue 3 does, but we're not in a position to migrate at this moment.

Basically, any use of `any` should be avoided. Once you tolerate one `any`, you're on the way down.

One thing that I really, really do like about typescript is that you need to be explicit about whether a value can be null. Java lacks that, but the difference between `foo: string` and `foo: string|null` is stark.
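The same distinction exists in Python's optional type hints, for what it's worth; `Optional[str]` plays the role of `string|null` and forces the None case to be handled:

```python
from typing import Optional

def greeting(name: Optional[str]) -> str:
    # The Optional makes the None case explicit; a plain `name: str`
    # would let a checker reject passing None at all.
    if name is None:
        return "Hello, stranger"
    return f"Hello, {name}"
```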


The TypeScript compiler can be configured to silence many kinds of errors; these options are especially useful while migrating a large JavaScript codebase. Also, look at TypeScript not just from an error-catching perspective but also from a tooling and documentation point of view. With types at hand, modern IDEs work much better, and reading types often helps in understanding the code. All that said, if the project is small and more of a throwaway, with only a few members contributing code, TypeScript may not add benefits.


What sort of errors wouldn’t have been errors in JS?


The ones that are hellishly-difficult-to-diagnose bugs waiting to be discovered, I'll bet.


No, just the trivial boilerplate stuff. Forgetting to declare `this` as a parameter, for example.


There are some "cute" uses of truthiness checks that work in JS (and, in some cases, weren't flagged as errors by earlier versions of TS) that are probably a bad idea and are trivial to render more-explicit, so better, but do technically run OK. Example: "if(obj.some_method) {obj.some_method();}". Not an uncommon form in JS in the wild, but TS (correctly) flags it as a problem.

Otherwise I dunno what this could be. Especially what TS could be disallowing that'd be all of: valid in JS, a good idea in JS, and especially time-consuming to fix.
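The Python analog of that "check truthiness, then call" idiom is an optional callable, where a strict checker similarly nudges you toward an explicit None check (names invented):

```python
from typing import Callable, Optional

class Plugin:
    # A hook that may or may not be set -- the analog of the JS
    # `if (obj.some_method) obj.some_method()` pattern above.
    on_close: Optional[Callable[[], None]] = None

def shutdown(plugin: Plugin) -> None:
    # The explicit None check, rather than bare truthiness, is what a
    # strict checker pushes you toward.
    if plugin.on_close is not None:
        plugin.on_close()
```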


Honestly it sounds like it saved you some trouble.


It didn't. It caused it.

I honestly wanted to believe in the value of Typescript and was enthusiastic about the switch, but it really hasn't proven itself over the past year.


Implementing Oberon the language went hand in hand with implementing Oberon the operating system. So when we're talking about productivity, he's mostly talking about operating systems and interfacing with hardware, where you usually have clearer concepts. The direct comparison would be assembly or C here.

For web gadgetry or exploratory ad-hoc statistics, you might end up with a different set of productivity-enhancing features.


Adding types at the start of a years-long project is going to help massively later on, at the cost of some slight friction early on. Static typing makes explicit the connections between pieces of code that are usually implicit in dynamic languages; these connections, if left untyped, will break sooner or later once more programmers start contributing code without the implicit knowledge of how the code is connected.


I think in software engineering, the answer to almost everything is "it depends".

Strongly typed languages increase productivity if the cost of a type mismatch bug exceeds the cost of defining and specifying the types. That's true for some programs, and it's not true for others.


Strong typing over the long term will always produce a more maintainable software product when used correctly. For many applications, you might not encounter enough complexity for it to matter strongly one way or another. Once you do finally encounter that application with 40 different entity types, 12 different business contexts that each of those can uniquely interact within, and hundreds of properties for each, you will be scrambling to find some way to bring order to your chaos.

Dynamic typing is useful if you are forced to develop software before you understand the business model, or if you need to expose some DSL to your users. I normally view it as a short-term option that is typically reached for when not enough actual engineering has occurred (in more complex systems). It's also mandatory for working within certain domains (i.e. the web). I think this last point is why so many seem to think it's a perfectly acceptable way to carry on in any sense.


Python is strongly typed. Maybe you are confusing strong vs static typing.
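To illustrate the distinction: strong typing means Python refuses to silently coerce across types at runtime, unlike, say, JavaScript's `"1" + 1 === "11"`, even though the checking happens dynamically rather than statically:

```python
# Python raises at runtime instead of coercing the int to a string.
try:
    result = "1" + 1
except TypeError:
    result = "refused"
```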


But I bet Python has loopholes in the type system as well. I mean, you can take the stack and modify the running code in every way possible at runtime, but it is a slightly mad thing to do.


Yeah I did mix them. Thanks.


> when you have to do something you don’t have yet any clue about

This is an experience issue, not a type-system issue. Rapid refactoring is also possible during initial passes while design is settled.


Thanks. 'Non-dominated sorting' was the magic phrase for Google I was missing.


I did not find a Pareto front sorting library, so I made one. It took a weekend. Save your weekend and use my module instead.


What about pymoo's [1] Non-dominated Sorting? [2]

I used it recently since I had the same problem.

[1]: https://pymoo.org/

[2]: https://github.com/msu-coinlab/pymoo/blob/master/pymoo/util/...


In the first example, why does (None, 0, 1) dominate (0, 0, 0)? The latter has a higher value in the first position.


Oh, it is a bug. I'll fix it. How embarrassing, and thanks. I have mainly used the 'None means the whole row is inferior' convention.
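For reference, the standard definition being discussed, here written for maximization and leaving the library's None convention aside:

```python
def dominates(a, b):
    """Pareto dominance for maximization: a dominates b if a is at least
    as good in every position and strictly better in at least one."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))
```

Under this definition (0, 0, 0) cannot be dominated by anything with a smaller first value, which is why the example above looked wrong.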


This seems pretty cool, and something I might use.

Echoing another commenter: can you add a brief intro to what's meant by a Pareto front? A picture would do wonders.


Please test your library with the c10-archive test suite https://www.ee.oulu.fi/research/ouspg/PROTOS_Test-Suite_c10-...


How would one test a library for generating compressed archives with a test designed for testing decompressors?


Sorry. My bad.


What is c10-archive test?


Follow the link.


If you add multiplication functionality, identity and termination symbols to it, you get something that is closer to a proper category, as done in this: https://github.com/kummahiih/python-category-equations

f1(?) |> ( f2(?), I ) |> f3(?) == f1(?) |> f2(?) |> f3(?) , f1(?) |> f3(?)

it just feels natural that way
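A toy sketch of just the identity part of the law above (this is not the linked library's API, only an illustration that composing with I changes nothing):

```python
# Identity element: composing with it is a no-op.
I = lambda x: x

def pipe(*fs):
    """Left-to-right function composition, a stand-in for |> above."""
    def composed(x):
        for f in fs:
            x = f(x)
        return x
    return composed

double = lambda x: x * 2
inc = lambda x: x + 1
```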


And if you think about how wood grows twisted when it grows faster on one side and slower on the other, it is not that big a leap to imagine how a wave function would turn if it were evaluated faster on one side. I am proposing here that a wave-function evaluation-speed differential causes gravity.

