ad8e's comments | Hacker News

Your suspicion is right; it is 100% AI-generated. https://www.pangram.com/history/8b593dcc-6a7d-496f-8c80-a588...


Oh wow, the guy used the word "versatility"... he even dared "narrative" and "just" - the latter twice! Astonishing, does he have no shame copy-pasting this obvious AI slop? It is obvious that no person in their right mind would utter such things!


>Pangram can detect AI-generated content in both short-form and long-form written content, with up to 99% accuracy. [0]

Yeah, 0% accuracy is still "up to 99%"!

[0] https://www.pangram.com/


Yes. At Caltech, you'd be socially ostracized by your friends if people knew you were cheating, or helping another person cheat.


That wasn't really my question. I was asking if the person helping other people in another class cheat would be violating the honor code. Not whether your friends would ostracize you if they find out you're cheating.


If you can't see that "helping other people in another class cheat" is violating the honour code then you might benefit from taking the time to do so.

> Failure to realize the consequences of a course of action does not justify it.

https://deans.caltech.edu/documents/24878/Honor_Code_Handboo...


> If you can't see that "helping other people in another class cheat" is violating the honour code then you might benefit from taking the time to do so.

I did, and I didn't see how. If Alice is helping Bob cheat, Bob is the one taking unfair advantage of others, not Alice. For all you know, Alice could be offering to help everyone cheat.

It's not like I'm saying something outlandish here. There's a reason most places have specific rules prohibiting assisting others with their violation. Even MIT goes out of its way to specifically call out "Facilitating Academic Dishonesty". [1]

[1] https://integrity.mit.edu/handbook/academic-integrity-mit/wh...


Alice is taking a small unfair advantage of every member of the community at once. A community and its practices are a form of commons shared by the members, so they're vulnerable to the tragedy of the commons. If one member acts in a way that deliberately goes against the trust, that's inherently unfair to the rest of the members because it tends to push everything toward a breaking point. If her goal depends on the community's existence or function (and if it doesn't, why is she even around?), then whatever her goal, Alice has gone after it in a way that takes unfair advantage of everyone else's commitment to the system. Even if Alice's action doesn't cause a final breakdown, she's moved things in the wrong direction for her own purposes.


> Alice is taking a small unfair advantage of every member of the community at once.

Interesting take, I like it; thanks! Let me mull it over a bit... it's making me wonder whether the existence of any personal benefit (to Alice) is a necessary component of a violation.


Agree, but I've seen other open-source ML people confirm it. For example:

> Teknium: Okay and Ive had 4 high-ish level sources tell me they dont plan to release it ever though


A quote from discord: "apparently alpha-zero has been replicated in open source as leela-zero, and then leela-zero got a bunch of improvements so it's far ahead of alpha-zero. but leela-zero was barely mentioned at all in the paper; it was only dismissed in the introduction and not compared in the benchmarks. in the stockfish discord they are saying that leela zero can already do everything in this paper including using the transformer architecture."


Leela Zero was an amazing project improving on AlphaZero, showing the feasibility of large-scale training with contributed cycles, and snatching the TCEC crown in Season 16.

It forced Stockfish to up its game, essentially by adopting neural techniques itself (though of a different type; Stockfish uses NNUE).


Yeah, they need to compare against the latest BT2 policy head. It's probably about the same performance.


BT2 is old news; we have BT4 now.


There are a few misunderstandings here. Rational Bézier curves are incidental in this article, and the conclusion is not that they are correct. They were just a convenient curve I used to express the main message of local control, and the article endorses looking for other curves that fit the properties.

The parameter count has not gone up; it's equal to or less than usual Bézier control handles, because I forced the parameters to take specific values.

The part that is "new" about the Béziers is the new formula that selects one specific Bézier out of all the possible choices. I think this linguistic accusation is silly; for example, if someone discovers a new rock, it would be fair to call it a "new rock" even though people have been discovering rocks for thousands of years.

These curves are also unrelated to the file format; the curves are designed to fit how the user thinks about curves rather than be computer-efficient. For the tooling point, I think you are agreeing with me while arguing; I agree that there's no reason the curve should match what is used in the final product, and some of the other curve choices I considered require a complex conversion step.


Maybe I'm misreading what you wrote, but what I'm getting from the writeup is that you're describing a new set of curve constraints, not a new Bezier curve (even if we just go by the list of constraints you're trying to satisfy), so that doesn't feel like a linguistic accusation so much as a pretty important thing to call out?

In the analogy of rocks: we've certainly been discovering rocks for thousands of years, but if you pick up the same rock that others have already picked up, from the same patch that we've been picking up rocks from for a long time, but using a different set of rules to decide how to pick the rock up, that's not a new rock. That's a new method.

So describing it as a new methodology for finding appropriate Bezier curves, rather than finding new Bezier curves, would make it much clearer what you're doing. And new methodologies are always exciting.


I have used this in some commercial software before (maybe Illustrator?); my experience was not positive. When you move a node, some not-so-close curves start wiggling, and you think, "I already set that part correctly, stop moving please". It behaves very poorly around rounded corners. Adding points causes the curves to shift, usually not how you want, and then you try to add more points, which causes more shifts, etc. Arc length-based interpolation might do better in this respect, as opposed to the (# of points)-based interpolation which I expect it used.
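
As a quick sketch of the distinction I mean (not any tool's actual code; the points, the plain cubic spline, and the use of scipy are all just made up for illustration):

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Made-up points: two close together, then a big jump.
    pts = np.array([(0, 0), (1, 0), (1.1, 0.1), (5, 3)], dtype=float)

    # Point-count-based: every point is one "tick" apart, regardless of spacing.
    t_index = np.arange(len(pts), dtype=float)

    # Arc-length-based (chord-length approximation): ticks reflect actual distance.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    t_arc = np.concatenate([[0.0], np.cumsum(seg)])

    for name, t in [("index", t_index), ("arc length", t_arc)]:
        spline = CubicSpline(t, pts, axis=0)
        u = np.linspace(t[0], t[-1], 9)
        print(name, np.round(spline(u), 2))
    # The index-parameterized spline swings around near the two close points
    # (its x coordinate even doubles back); the chord-length one stays calmer.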

The alternative, which obeys similar principles, is the Pencil tool. This simply spams out a ton of points to match what you draw. https://news.ycombinator.com/item?id=37460009 mentions that these points can be capably reduced, which could serve your purpose.
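
"Capably reduced" usually means something in the family of Ramer-Douglas-Peucker simplification. A bare-bones sketch of that idea (not whatever the linked comment actually uses):

    import math

    def perpendicular_distance(p, a, b):
        """Distance from point p to the infinite line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        length = math.hypot(dx, dy)
        if length == 0.0:
            return math.hypot(px - ax, py - ay)
        return abs(dx * (ay - py) - dy * (ax - px)) / length

    def rdp(points, epsilon):
        """Keep the endpoints; recurse on the farthest point if it's > epsilon away."""
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = perpendicular_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax <= epsilon:
            return [points[0], points[-1]]
        left = rdp(points[:index + 1], epsilon)
        right = rdp(points[index:], epsilon)
        return left[:-1] + right  # drop the duplicated split point

    # Fake pencil stroke: a finely sampled sine wiggle.
    stroke = [(x / 10.0, math.sin(x / 10.0)) for x in range(63)]
    print(len(stroke), "points reduced to", len(rdp(stroke, 0.01)))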


The ideal dimensionality of the parameter space depends on the jaggedness of the curve. Derivatives are useful when they stay constant over a stretch of time; when derivatives vary too much, it becomes more efficient to use more points of lower degree. This is the same calculation in other domains of approximation: numerical integration, DiffEq, and Taylor series.

For example, to draw rough surfaces, point-to-point lines are the most efficient way, with a ton of points. For industrial geometric shapes, lines and ellipses become efficient. As the smoothness of the curve goes up, more derivatives become valid. But sometimes those higher derivatives are not useful.
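
A crude way to put numbers on this tradeoff, with made-up targets rather than anything from a real drawing:

    import numpy as np

    x = np.linspace(0.0, 1.0, 1000)
    smooth = np.sin(2 * np.pi * x)            # one gentle arch
    jagged = np.sign(np.sin(20 * np.pi * x))  # rapidly flipping square wave

    def linear_error(y, n_points):
        """Mean error of a piecewise-linear fit through n_points samples."""
        xs = np.linspace(0.0, 1.0, n_points)
        return np.mean(np.abs(np.interp(x, xs, np.interp(xs, x, y)) - y))

    def cubic_error(y):
        """Mean error of a single least-squares cubic (4 parameters)."""
        return np.mean(np.abs(np.polyval(np.polyfit(x, y, 3), x) - y))

    for name, y in [("smooth", smooth), ("jagged", jagged)]:
        print(name,
              "| 4-point linear:", round(linear_error(y, 4), 3),
              "| one cubic:", round(cubic_error(y), 3),
              "| 200-point linear:", round(linear_error(y, 200), 3))
    # On the smooth target the 4-parameter cubic beats the 4-point polyline;
    # on the jagged one neither helps, and only piling on points does.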

This is why the parent comment likes the Pencil tool; it's optimal for high variation, because it is a local (smoothed) control of position, the zeroth derivative. The cost is some loss of smoothness, which likely doesn't matter for her domain, since her hands move smoothly enough.

I think the Pen tool would be a more competitive alternative with a better control scheme. For example, a Pen tool with local control would unify the Pencil and Pen: letting you draw jagged curves by holding your mouse down, and letting you draw straight lines and smooth curves by clicking. There would be no interruption of flow, as you can use either method freely to continue a curve. They should inherently be the same tool anyway, just with different numbers of derivatives.


> For example, to draw rough surfaces, point-to-point lines are the most efficient way, with a ton of points.

I usually just draw a quick simple line and tell Illustrator to rough it up for me. There's a lot of ways to do this; "use a custom brush" is a common one, so's "apply the Roughen effect". These will dynamically generate a buttload of virtual points in the vicinity of the basic path when it gets rendered; the editing view is still just a handful of points. Much easier to edit, much faster to draw. If I need precise, human-defined jagginess to a path, then I whip it out with the Pencil, and push it around with stuff like the Puppet Warp tool, which lets me drop a few pins into a complex set of paths, and distort it based on how I move those around.

Illustrator's Pencil tool actually performs a certain amount of smoothing on the raw positional input from the drawing tablet. There's a slider for how much it does this. I have it right in the middle of the range and it makes for a pretty nice compromise between catching every deliberate motion of my fingers/hand/arm across the tablet, and throwing away the little irregularities that I would be working hard to eliminate if I was working in pen and ink.
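
I have no idea what Illustrator actually does internally, but the simplest version of an adjustable smoothing slider is just an exponential moving average over the incoming tablet samples, something like:

    def smooth_stroke(points, smoothing=0.5):
        """Exponential moving average over a raw stroke.
        smoothing = 0 keeps every wobble; near 1 irons everything flat."""
        if not points:
            return []
        out = [points[0]]
        for x, y in points[1:]:
            px, py = out[-1]
            out.append((px + (1 - smoothing) * (x - px),
                        py + (1 - smoothing) * (y - py)))
        return out

    raw = [(0, 0), (1, 0.3), (2, -0.2), (3, 0.4), (4, 0.0)]
    print(smooth_stroke(raw, smoothing=0.5))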


It's a UI problem, not a mathematical problem. No set of curves can represent everything exactly, but they all can get close. The goal is just to make it easy to get close.

The computer can produce a near-perfect circle with Béziers for each 1/4 arc, and near-perfect circles are usually good enough. It's just inconvenient. It's also hard to manipulate the generated circle; the user might have a clear image in his head of the changed circle he wants, but pushing the control points to get there is not easy.
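
For reference, the standard construction puts one cubic on each quarter arc with handle length kappa = 4/3 * (sqrt(2) - 1) ≈ 0.5523 times the radius; a tiny sketch of how close that gets:

    import math

    kappa = 4.0 / 3.0 * (math.sqrt(2.0) - 1.0)  # ~0.5523

    def quarter_circle_bezier(r=1.0):
        """Control points of a cubic approximating the arc from (r,0) to (0,r)."""
        return [(r, 0.0), (r, r * kappa), (r * kappa, r), (0.0, r)]

    def bezier_point(cp, t):
        p0, p1, p2, p3 = cp
        u = 1.0 - t
        x = u**3*p0[0] + 3*u**2*t*p1[0] + 3*u*t**2*p2[0] + t**3*p3[0]
        y = u**3*p0[1] + 3*u**2*t*p1[1] + 3*u*t**2*p2[1] + t**3*p3[1]
        return x, y

    cp = quarter_circle_bezier()
    worst = max(abs(math.hypot(*bezier_point(cp, i / 100.0)) - 1.0)
                for i in range(101))
    print("max radius deviation:", worst)  # around 2.7e-4 of the radius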

The mathematical "axioms" in the article are only attempting to provide a clean interface, so that the user can easily translate what is in his head to what the program creates. The circle axiom says, "The user often wants a circle, and also expects a circle, so let's produce the expected circle." There's no other mathematical purity involved.


The source code is visible with Ctrl+U; I didn't minify anything. The g9 use is pretty neat. It definitely deserves the shout-out I gave it.

I just now uploaded a worse C++ desktop version with saving and loading: https://github.com/ad8e/local-curves This desktop version is 5 years old. I think the web version is better. This github repo is only interesting if you want to copy code from it; it's not practical as a drawing tool.


Ok thanks. I will possibly try to port to SkiaSharp in the future to learn how your code works.


It's not as easy to visualize the tangent magnitude as you are suggesting, because the time parametrization makes the calculation fail - the curve doesn't travel forward at a constant rate. This is best seen with the second example: If the left handle is fully extended and the right handle is 0, the Bézier curve looks almost exactly the same as when the handles are reversed. Here's a picture: https://i.imgur.com/WkanN1G.png

The handle varies from 0 to full-strength, but the magnitude of the tangent vector stays constant. This means the handle doesn't decide the magnitude. Tracing the path in your head to visualize the changing tangent vectors would mean visualizing a competition between t^2, (1-t)^2t, and (1-t)^3, which I find difficult, even with some calculus knowledge.
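
To put numbers on "doesn't travel forward at a constant rate", here is the speed |B'(t)| along a cubic whose right handle has length zero. The control points are made up for illustration, not the ones from the screenshot:

    import math

    # Made-up control points; the right handle is collapsed (P2 == P3).
    P0, P1, P2, P3 = (0.0, 0.0), (2.0, 2.0), (3.0, 0.0), (3.0, 0.0)

    def speed(t):
        """|B'(t)| = |3(1-t)^2 (P1-P0) + 6(1-t)t (P2-P1) + 3t^2 (P3-P2)|"""
        u = 1.0 - t
        dx = 3*u*u*(P1[0]-P0[0]) + 6*u*t*(P2[0]-P1[0]) + 3*t*t*(P3[0]-P2[0])
        dy = 3*u*u*(P1[1]-P0[1]) + 6*u*t*(P2[1]-P1[1]) + 3*t*t*(P3[1]-P2[1])
        return math.hypot(dx, dy)

    for t in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(t, round(speed(t), 3))
    # ~8.49 at t=0 and exactly 0 at t=1: the drawn shape is one smooth arc,
    # but the traversal rate swings wildly, so the handle magnitude can't be
    # read off the shape.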

