wheelie_boy's comments | Hacker News

It's like that AI-generated Doom, where when you look at the floor and back up you're in a totally different room.


I don't think the final point about programming languages makes much sense.

In the overall software development process, lots of people contribute different things to create the product.

The job of the software developer is to bring the amount of ambiguity in the specification to zero, because computers can only run a program with zero ambiguity.

There have been lots of high-level programming languages that abstract certain things away or give the programmer control over them. What you really want is to pick a programming language that gives you control over the things you care about. Do you care about when memory is allocated and deallocated? Do you care about how the hardware is used (especially GPUs and ML accelerators), or do you want the hardware completely abstracted away? Do you care more about runtime performance or dev iteration time? Does your program need to exist in a certain tech context?

There's no single programming language that lets everyone who cares about different things deal with them, or ignore them, as they choose.


I can see three ways that you could guarantee that the output of a model never violates copyright:

1. Models are trained with 100% uncopyrighted or properly licensed input data

2. Every output of the ML model is evaluated to make sure it's not too close to training data

3. Copyright law is changed to have a specific carve-out for AI

#1 is the approach taken by Adobe, although it's generally harder or more expensive to do.

#2 destroys most AI business models (a rough sketch of what such an output check might look like is at the end of this comment)

#3 has been done in some countries, but it seems likely that even if the US did it, there would still be some limits.

For example, I could train a model on a single image, song, or piece of written text/code. Then I run inference, and get out an exact copy of that image, song, or text. If there are no limits around AI and copyright, then we've got a loophole around all of copyright law. I don't think that the US would be up for devaluing intellectual property like that.
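
To make #2 concrete, here's roughly what I have in mind - a minimal output-side check for a text model. Everything here is a made-up illustration (the shingle size, the threshold, the in-memory index), and a real system would need fuzzy/semantic matching on top of exact matching:

    # Hypothetical sketch of approach #2: flag model outputs that
    # overlap too heavily with the training corpus. The shingle size
    # and threshold are illustrative values, not tuned ones.

    def shingles(text: str, n: int = 8) -> set:
        """Break text into overlapping n-word shingles."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_index(training_docs: list) -> set:
        """Index every shingle seen anywhere in the training data."""
        index = set()
        for doc in training_docs:
            index |= shingles(doc)
        return index

    def looks_infringing(output: str, index: set, threshold: float = 0.5) -> bool:
        """Flag an output if too many of its shingles appear verbatim in
        the training data. Exact matching only - a real system would
        also need fuzzy and semantic matching, and would still miss."""
        out = shingles(output)
        if not out:
            return False
        return len(out & index) / len(out) >= threshold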


The much more likely outcome:

4. A ruling comes down that enshrines what all the big companies have been doing (with the blessings of their armies of expensive, talented, and conservative legal teams) as legitimate fair use.


The much more likely scenario is that there is a precedent-setting court case. This is how it happened with practically every other instance of copyright bumping into technology.


I think the big difference is that it's not a direct replacement - it feeds off the work of the existing people while making it much harder for them to make a living.

It would be as if instead of cars running on gasoline, they ran on chopped up horseflesh. Not good for the horses, and not sustainable in the long term.


It seems very difficult to ensure that a model will never output any of the copyrighted content that it was trained on. I can only think of three ways, but perhaps there are others:

1. Evaluate every output from the model to ensure that none of the outputs are copyrighted

2. Evaluate every input to a model to ensure that the inputs are either not copyrighted or properly licensed

3. Change the definition of copyright so that ML models can do whatever they want

Nobody is doing #1, because that makes the business models not work. Established brands (like Adobe) are doing #2. I get the feeling that a lot of ML startups are hoping that #3 will happen, but it seems unlikely.
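
For what it's worth, #2 is conceptually simple but operationally brutal, since anything without affirmative provenance has to be thrown away. A minimal sketch, with invented field names and an invented allowlist:

    # Hypothetical sketch of the input-side approach (#2): only admit
    # training items with affirmatively known provenance. The field
    # names and the allowlist are invented for illustration.

    PERMITTED = {"cc0", "public-domain", "licensed-by-contract"}

    def admissible(item: dict) -> bool:
        """Unknown provenance is rejected outright - which is exactly
        the expensive part, since it shrinks the dataset drastically."""
        return item.get("license", "").lower() in PERMITTED

    def filter_corpus(corpus: list) -> list:
        return [item for item in corpus if admissible(item)]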


Ensuring a model never outputs copyrighted content is tangential, and arguably irrelevant. You don't look for a way to make humans output no copyrighted content; you address each time they do, case by case.

A model's training being ruled fair use wouldn't mean that all of its output could be used for anything, regardless of copyright.


> you address each time they do, case by case

That's what I listed as #1 - evaluate each individual output of the model to see if it violates copyright.


I think when GP says "address each time case by case", they mean "you sue them when they infringe", instead of "this human has an illegal brain because it remembers Taylor Swift's songs".

PS: your "#1" is really hard to do, and I'd guess it is infeasible. Even Google (especially YouTube), with its vast data capabilities, often gets it wrong.


I would have thought that PurpleAir was the best-known consumer brand. It looks like they introduced API pricing this May? But honestly, it still seems very reasonable:

https://community.purpleair.com/t/api-pricing/
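
For reference, a read against their API looks something like this - assuming the documented v1 REST endpoint and X-API-Key header; the sensor index and field list here are made-up examples to verify against the current docs:

    # Minimal sketch of a PurpleAir v1 API read. Check the current
    # docs before relying on this, since pricing/quotas changed.
    import requests

    READ_KEY = "YOUR-READ-API-KEY"   # issued via the PurpleAir dashboard
    SENSOR_INDEX = 12345             # made-up example sensor index

    resp = requests.get(
        f"https://api.purpleair.com/v1/sensors/{SENSOR_INDEX}",
        headers={"X-API-Key": READ_KEY},
        params={"fields": "pm2.5_atm,humidity,temperature"},
        timeout=10,
    )
    resp.raise_for_status()
    print(resp.json()["sensor"])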


> "create a new currency"

Ah yes, Flooz and Beenz. So silly, but at least they didn't use a small country's worth of electricity.


They also had an internal source-control system that they never released: a fork of Perforce that they had purchased the rights to use and modify. Not sure if that's still in use on any projects.


The two best modelers I've seen for that kind of casual solid modeling are Plasticity https://www.plasticity.xyz and Moment of Inspiration http://moi3d.com

Both are really focused on UX, and both license a commercial kernel.


> Plasticity [is] really focused on UX, and license[s] a kernel

It's good that the author of Plasticity (@nkalen on HN) finally switched[0] to the Parasolid kernel (abandoning earlier plans to use the Russian C3D kernel) after Russia started its full-scale invasion of Ukraine in 2022[1]... eight years after Russia first invaded Ukraine in 2014.[2]

N.B. https://en.wikipedia.org/wiki/Russo-Ukrainian_War

[0] https://news.ycombinator.com/item?id=30696305

[1] https://twitter.com/nk/status/1507672831559151620

[2] https://twitter.com/nk/status/441096177933905920


I have seen MoI before, but it wasn't quite my thing, though maybe I'll revisit it.

Plasticity, though, is gorgeous, especially coming from a solo developer. Wow.


Previous Plasticity discussion: https://news.ycombinator.com/item?id=30695360


Yeah, on the big kernels, the time spent just getting filleting working robustly can be measured in developer-careers.

The core problem isn't memory safety; it's the difficulty of the mathematical foundations of NURBS surfaces. For example, the intersection of two NURBS surfaces can't, in general, be exactly represented by a NURBS curve.


In SolveSpace, such curves are represented by handles to the two intersecting surfaces plus a series of points along the curve. This is an exact representation (you can compute as many additional points on the curve as needed), but it's not analytic by any means. Exact NURBS curves are used where possible too.
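
To sketch that representation (all names are mine, not SolveSpace's actual code): the curve holds handles to both surfaces plus sampled points, and new points get pulled onto the true intersection, here by naive alternating projection:

    # Rough sketch of a procedural surface-surface intersection curve,
    # in the spirit described above. Purely illustrative.
    import math
    from dataclasses import dataclass, field

    @dataclass
    class Sphere:
        """Stand-in for a kernel surface; all we need is closest-point."""
        cx: float
        cy: float
        cz: float
        r: float

        def closest_point(self, p):
            vx, vy, vz = p[0] - self.cx, p[1] - self.cy, p[2] - self.cz
            d = math.sqrt(vx * vx + vy * vy + vz * vz) or 1.0
            s = self.r / d
            return (self.cx + vx * s, self.cy + vy * s, self.cz + vz * s)

    @dataclass
    class IntersectionCurve:
        surf_a: Sphere                 # handle to the first surface
        surf_b: Sphere                 # handle to the second surface
        samples: list = field(default_factory=list)  # points on both surfaces

        def refine(self, p, iters: int = 100, tol: float = 1e-12):
            """Pull an approximate point onto the true intersection by
            alternately projecting onto each surface. Real kernels use
            Newton iteration on the combined system instead."""
            for _ in range(iters):
                q = self.surf_b.closest_point(self.surf_a.closest_point(p))
                if math.dist(p, q) < tol:
                    break
                p = q
            return p

    # Two unit spheres whose intersection is the circle at x = 0.5:
    curve = IntersectionCurve(Sphere(0, 0, 0, 1), Sphere(1, 0, 0, 1))
    print(curve.refine((0.5, 0.9, 0.1)))  # converges onto that circle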


Exactly - I'm sure you're familiar with the kinds of difficulties and complications that representation creates. Rather than representing the curve analytically, you have to keep track of the procedural tree that generated it. Then things like offsetting that curve, checking it for intersections, etc. become more complicated.

And then you export. Rather than exporting the entire procedural tree (which would require shipping the entire kernel), you approximate the curves with NURBS trimming curves. When you import from another CAD tool you need to deal with edges that only intersect within a given tolerance, and sometimes that tolerance is awful (Catia, I'm looking at you). Operations on toleranced edges are a huge source of complications.

And so on and so on. It would all be so much easier if the NURBS math could be exact.
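
To make the toleranced-edge problem concrete, here's a toy version of just the first step of import healing - deciding which endpoints are "the same" vertex when they only agree to within the exporter's tolerance. Purely illustrative; real healing is far messier:

    # Toy illustration of the import-side tolerance problem: merging
    # edge endpoints that agree only to within the exporter's tolerance.
    import math

    def merge_vertices(points, tol):
        """Greedily snap together points closer than tol, keeping the
        first point seen as each cluster's representative. Note the
        classic trap: with a sloppy tolerance, chains of nearby points
        can merge into things that were never meant to touch."""
        reps = []
        merged = []
        for p in points:
            for r in reps:
                if math.dist(p, r) <= tol:
                    merged.append(r)
                    break
            else:
                reps.append(p)
                merged.append(p)
        return merged

    # Endpoints that "should" be one vertex but differ by 1e-4:
    pts = [(0.0, 0.0, 0.0), (1e-4, 0.0, 0.0), (2.0, 0.0, 0.0)]
    print(merge_vertices(pts, tol=1e-3))  # first two collapse together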


What are the requirements of a CAD kernel?


See, for example, Ian Stroud's "Boundary Representation Modelling Techniques" for a light introduction. The OpenCascade project is a fairly complete example of one. But there is no "by the book" way to do it - it requires both design and engineering savvy.

The simple explanation: the capability to model 3D shapes in a way that lets you output the data to the rest of the manufacturing process. Hence, which manufacturing process you target at least partially dictates your constraints.

The output could be engineering drawings, CNC machines, or 3D printers - or another 3D modeling application down the pipeline.

In a way, "CAD kernel" alone is worthless as a spec, since it's too general - just like "a vehicle" is a useless spec for engineering.

A CAD kernel - for modeling what, by whom, and where?

Naturally, questions of numerical robustness and exactness of representation soon enter the picture: how large or how precise are the features you need to model, for example.
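
To make "B-rep" slightly less abstract, here's a deliberately naive sketch of the entity model at the heart of such a kernel. All names are mine, and real kernels (OpenCascade, Parasolid) are vastly richer - exact curve/surface geometry, topology links, tolerances, history:

    # Deliberately naive sketch of a B-rep entity model, just to make
    # the vocabulary concrete. Not any real kernel's data structures.
    from dataclasses import dataclass, field

    @dataclass
    class Vertex:
        x: float
        y: float
        z: float

    @dataclass
    class Edge:
        start: Vertex
        end: Vertex
        # curve geometry elided; could be a line, an arc, or a NURBS curve

    @dataclass
    class Face:
        boundary: list  # loop of Edges; surface geometry elided

    @dataclass
    class Solid:
        faces: list = field(default_factory=list)

    def tessellate(solid: Solid, tolerance: float) -> list:
        """The output stage: approximate the exact geometry with
        triangles within a tolerance, for a printer, a CAM step,
        or a viewer downstream."""
        raise NotImplementedError  # the hard part, per the thread above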


Better get cozy, because the explanation is probably a few weeks' worth of discussion.


I remember hearing an anecdote about SolidWorks development where someone was brought back to meet the Fillet Team - a team of like 10 people who had worked on nothing but fillets for 15 years. That being said, the filleting in SolidWorks is killer.


And even then, it got very rough the moment I tried to use higher orders of continuity in Fusion 360.

Maybe SolidWorks handles it better.

