rustyhancock's comments | Hacker News

Stunning work! Astounding progress, given it's under 3 months from PCB to this result.

Funnily enough, I've been musing this past month whether I'd separate work better if I used a limited Amiga A1200 as my PC for anything other than work! This would fit nicely.

Please do submit this to Hackaday; I'm sure they'd salivate over it, and it's amazing when you have the creator in the comments. Even if just to explain that no, a 555 wouldn't quite achieve the same result. No, not even a 556...


It's not that MRIs suck at cancer. They provide fantastic structural and functional data.

The problem is the specificity of the results and the prior.

A full body MRI by definition will provide detailed views of areas where the pretest probability for cancer is negligible. That means even a specific test would result in a high risk of false positives.
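
To put illustrative, made-up numbers on that (not from the parent, just Bayes):

  Screen 10,000 people for a cancer with 0.1% prevalence in the scanned tissue.
  True cancers (assume every one is caught): 10,000 x 0.001 = 10
  False positives at 95% specificity:        9,990 x 0.05  ~ 500
  P(cancer | suspicious finding)             ~ 10 / (10 + 500) ~ 2%

So even a test that's right 95% of the time mostly flags healthy people when the prior is that low.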

As a counterpoint, MRS (magnetic resonance spectroscopy) means you can now MRI someone's prostate and do NMR on lesions you find.

Let's say someone has lower urinary tract symptoms and is 60 years old. An MRI could visualize, as well as do an analysis that would otherwise require a biopsy. With the raised prior you can be quite sure suspicious lesions are cancerous.

Similarly for CNS tumours, where fine detail and subtle diffusion defects can mark cancers you couldn't even see if you cut the person open.

No sensible doctor would give you a whole body CT unless there was a very good reason. That very good reason is probably "we already think you have disseminated cancer". That pushes the prior up.

And even less so for a PET/CT: let's flood you with X-rays and add some beta radiation and gamma to boot!

The danger of an unnecessary CT/PET is causing cancer; the danger of an unnecessary MRI is chasing non-existent cancer.


> Let's say someone has lower urinary tract symptoms and is 60 years old. An MRI could visualize, as well as ...

Not a doctor - but maybe start with some quick & cheap tests of their blood & urine, polite questions about their sexual partners, and possibly an ultrasound peek at things?

At least in America, high-tech scans are treated as a cash cow. And cheap & reasonable tests, if done, are merely an afterthought - after the patient has been milked for all the scan-bucks that their insurance will pay out.

Source: Bitter personal experience.


> At least in America, high-tech scans are treated as a cash cow. And cheap & reasonable tests, if done, are merely an afterthought - after the patient has been milked for all the scan-bucks that their insurance will pay out.

Maybe it's a regional thing, but that hasn't been my experience. I've had one MRI and one CT scan in the 25+ years that I've been a full-time employed adult with insurance.

I'd have been happy to sign up for more so I could have proactive health information and the raw data to use for hobby projects.


> The danger of an unnecessary CT/PET is causing cancer

You'd have to be massively overexposed to CT or PET scanning to cause cancer, like in the region of spending months being scanned continuously with it at full beam current.


Even if you don't agree with linear no-threshold models for radiation-induced cancers (I don't think LNT is accurate), it comes down to the scan and the age.

3 scans for a 1 year old? Strongly associated with cancers later in life. 5 scans of a 50 year old? Less so.

The 1-year-old has an 80-year runway to develop cancer, along with cells already in a state of rapid division and a less developed immune system.

But the association is quite strong.

https://www.sciencedirect.com/science/article/pii/S0720048X2...


> I don't think LNT is accurate

There's excellent reason to think LNT is accurate: at low doses, almost every cell is exposed to at most one radiation event. The dose affects how many cells experience a (single) event, but does not affect the level of damage to those exposed cells. Linearity naturally falls out of this.
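
A back-of-envelope way to see the linearity (my sketch, assuming hits land on cells independently):

  P(a given cell is hit at least once) = 1 - e^(-kD) ~ kD   for small kD
  expected number of hit cells         ~ N * kD

So at low dose D the number of damaged cells, and hence the risk, scales roughly linearly with dose.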

To abandon linearity you have to imagine some sort of signalling system (not observed) that kicks in at just the dose we're talking about (not lower, not higher) to allow exposure to one cell to affect other cells.

There's also no good evidence that LNT is wrong. The typical things pointed to by anti-LNT cranks are cherry-picked, often interim results from studies whose full results do support LNT, which is evidence the interim signal was statistical noise.


I think the bigger point you are making is that the 50 year old is also more likely to have developed cancer.

Maybe a full body MRI once a decade is fine until your 30s, then once every 5 years until 50, then once every 2 years beyond 50.

The test should scale with the probability of cancer.


> 3 scans for a 1 year old? Strongly associated with cancers later in life. 5 scans of a 50 year old? Less so.

Someone being born with no legs is strongly associated with them using a wheelchair in later life.

Why are you giving a one-year-old three CT scans? For shits and giggles? Or because you think they might have cancer?


> You'd have to be massively overexposed to CT or PET scanning to cause cancer

The mean effective dose for all patients from a single PET/CT scan was 20.6 mSv. For males aged 40 y, a single PET/CT scan is associated with a LAR of cancer incidence of 0.169%. This risk increased to 0.85% if an annual surveillance protocol for 5 y was performed. For female patients aged 40 y, the LAR of cancer mortality increased from 0.126 to 0.63% if an annual surveillance protocol for 5 y was performed.
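
LAR here is lifetime attributable risk. Note that the 5-year figures quoted are essentially the single-scan figures times five, i.e. the risk in that model accrues roughly linearly per scan:

  males, 40 y (incidence):   0.169% x 5 ~ 0.85%
  females, 40 y (mortality): 0.126% x 5 ~ 0.63%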

https://pubmed.ncbi.nlm.nih.gov/36856709/


> 0.126 to 0.63%

So, a just-about-measurable increase, if you pick and choose your values carefully?

You are not going to die from cancer caused by getting a PET scan. This will not happen.

You're going to die of heart disease or as a not-too-distant second in a car accident.


That data is for one scan, ever.

Continuous scanning for months would give a dose many orders of magnitude higher.


Approx. 5% of all cancers in US are caused by CTs

[citation needed]


I think they're reading too much into it.

How are they determining "this cancer was caused by the CT scan" versus "this cancer was caused by the cancer we were originally looking for that was there all along"?


Radiation doesn’t label the cancers it causes.

Other than comparing population groups, what method do we have?


Well, you could work backwards and look at your assumptions.

Why is "We think this person has cancer so we gave them a CT scan and look! Now they've got cancer! It must be because of the CT scan!" the conclusion to jump to?


Please just read this article - https://jamanetwork.com/journals/jamainternalmedicine/fullar... It's funny that you instantly assumed that the authors are stupid and did not think about this obvious pitfall. It's extra funny that you also accuse them of jumping to conclusions without actually reading the article.

Competition between models is so intense right now that they are definitely benchmaxxing pelican-on-a-bike SVGs and Will Smith spaghetti dinner videos.

Parallel hypothesis: competition between models is so intense that any high-engagement, high-relevance web discussion about any LLM/AI generation is gonna hit the self-guided, self-reinforced model training and result in de facto benchmaxxing.

Which is only to say: if we HN-front-page it, they will come (generate).


There was Lenna for digital image compression (https://en.wikipedia.org/wiki/Lenna).

A pelican on a bike is SFW, inclusive, yet cool.

It is not a full benchmark - rather a litmus test.


I never realized Lenna was a Playboy centerfold until years after I first encountered it, which was part of an MP (machine problem) in the data structures class all CS undergrads take at UIUC.

There's also the Foreman test sequence for video: https://youtube.com/watch?v=0cdM-7_xUXM


You can just try other SVGs; I got some pretty good ones.

(*Disclaimer: I work for Google, but also I have zero idea about what they trained deepthink on)


So, again, when the indicator becomes a target, it stops being a good indicator.

> when the indicator becomes a target, it stops being a good indicator

But it's still a fair target. Unless it's hard coded into Gemini 3 DT, for which we have no evidence and decent evidence against, I'd say it's still informative.


That's how you know you've made it: when your pet benchmark becomes a target.

Goodhart's law in action.

Note that, this benchmark aside, they've gotten really good at SVGs. I used to rely on the nounproject for icons, and sometimes various libraries, but now coding agents just synthesize an SVG tag in the code and draw all the icons.

Utterly bewildering that this can happen.

I'd never heard of it.

It does seem that it's in a sense pre-cancerous, although the article seems not to say so outright.

An acquired genetic change, following errors in replication and mistakes in cell division, that leads to cells having an "advantage". Associated with aging, smoking and increased mortality...

If you didn't know it was about this Y loss, it would seem to be directly referencing a pre-cancerous condition.


I'm not the target for your question (I distribute 0 plugins).

But Lua support in Neovim is the primary reason I moved over from Emacs. Elisp and Vimscript are both such heart-sink languages for me.

That said I'd have preferred something other than Lua if I had the choice.


> That said I'd have preferred something other than Lua if I had the choice.

Same. I know we as a community would never agree on what that language should be, but in my dreams it would have been ruby. Even javascript would have been better for me than Lua.


Lua, especially with LuaJIT, is nearly as fast as C. I certainly don't want to have to run a slow language like Ruby, or especially a full blown JS runtime like V8, just to run Vim; the entire point is speed and keyboard ergonomics. Otherwise, just use VSCode.

You don't need V8 to run JS for scripting; you have quickjs[1] or mquickjs[2], for example. You might have problems importing npm packages, but as we can see from Lua plugins, you don't even need package manager support. Performance is not as good as LuaJIT, but it is good enough.

[1]: https://bellard.org/quickjs/

[2]: https://github.com/bellard/mquickjs


I don’t want npm anywhere near my tooling thanks.

Quite a fair point! For intensive plugins and such, this would matter quite a bit.

V8 is faster than LuaJIT. But sure, it has a large binary size.

Isn't LuaJIT kind of a dead end?

Also Ruby has been getting quite fast since YJIT (and now ZJIT):

https://railsatscale.com/2023-08-29-ruby-outperforms-c/


  >  a full blown JS runtime
I absolutely hate all the random things that install npm on my machines

Babashka! Super fast clojure/lisp.

There's always Fennel for a lispy layer over Lua.

> Even javascript would have been better for me than Lua.

Why?


Because I know javascript a lot more than I know Lua (and I suspect, given js's popularity, a lot of people are in the same boat). Yes, Lua is easy to learn, but it's still different enough that there is friction. The differences also aren't just syntactic; it's also libraries/APIs, and more. I also don't have any need/use for Lua beyond neovim, so it's basically having to learn a language specifically for one tool. It's not ideal for me.

But the people who did the work wanted Lua, and I have no problem with that. That's their privilege as the people doing the work. I'm still free to fork it and make ruby or js or whatever (Elixir would be awesome!) first-class.


I was in the same boat, but you'd be surprised by the number of projects that have embedded Lua: ZFS, nginx, Redis, HAProxy.

I agree but also wonder if editor plugins fall squarely in the range of things an LLM could vibe-code for me?

There is a large class of problems now for which I consider the chosen programming language to be irrelevant. I don't vibe code my driver code/systems programming stuff, but my helper scripts, gdb extensions, etc are mostly written or maintained by an LLM now.


I'm right there with you, and to be honest Lua just works. I helped with Neovim when it started ~10 years ago, and didn't understand the big deal about implementing Lua. But now that it's here, I can't believe it wasn't forked and implemented sooner.
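
For anyone who hasn't looked, a minimal sketch of what "just works" means in practice (the module and option names are made up, but vim.api.nvim_create_user_command and vim.notify are real Neovim APIs):

  -- e.g. ~/.config/nvim/lua/hello/init.lua  (hypothetical plugin module)
  local M = {}

  function M.setup(opts)
    opts = opts or {}
    local greeting = opts.greeting or "Hello from Lua"

    -- register a :Hello command that echoes the configured greeting
    vim.api.nvim_create_user_command("Hello", function()
      vim.notify(greeting)
    end, { desc = "Print a greeting" })
  end

  return M

Then in init.lua: require("hello").setup({ greeting = "hi" }). No build step, no manifest, just a Lua module on the runtime path.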

IME, Claude is quite good at generating Lua code for neovim. It takes some back and forth because there's no easy way for it to directly test what it's writing, but it works.

I vibe-coded a simple neovim lua plugin very recently. It worked well!

https://joeblu.com/blog/2026_01_introducing-nvim-beads-manag...


i’ve written probably north of a million lines of production js, maybe around 100,000 lines of production ruby, and about 300 lines of production lua. lua is a fun language and i think a much better fit than JS for technical reasons (who has a js engine that is both fast and embeds well? nobody), but i am certainly more productive in those other languages where i have more experience.

lua array index starting at 1 gets me at least once whenever i sit down to write a library for my nvim or wezterm.
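
for anyone who hasn't hit it, the trip-up looks roughly like this (trivial sketch):

  local t = { "a", "b", "c" }
  print(t[0])         -- nil: index 0 is just another unset table key
  print(t[1])         -- "a": arrays start at 1
  for i = 1, #t do    -- the idiomatic loop is 1..#t, not 0..#t-1
    print(t[i])
  end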


> who has a js engine that is both fast and embeds well? nobody

Fabrice Bellard! https://github.com/bellard/mquickjs

(I agree with you, just wanted to note this super neat project)


quickjs/mquickjs are good at embedding but nowhere close to luajit in terms of speed. (i have some experience with quickjs https://github.com/justjake/quickjs-emscripten)

as an aside i’m curious how quickjs/mquickjs compares to mruby in speed and size. something to ponder


Doesn't Vim support extensions written in several languages? Or was that removed in Vim 9?

It still does, but those only work with a Vim build that has those interfaces compiled in.

I wish they supported Janet

Can you run Fennel in Neovim? It's a Lisp running on Lua. https://fennel-lang.org/


Yes.

> That said I'd have preferred something other than Lua if I had the choice.

Denops is super easy to use, works great. Connects over RPC. https://github.com/vim-denops/denops.vim

Nvim-oxi is wild. Uses neovim's FFI to let you write Rust that talks directly to neovim. https://github.com/noib3/nvim-oxi

Denops has always been a niche but it was a really popular niche for a couple years. Activity is fading somewhat. I'm still doing my plugin dev in lua, and it's... survivable. But I do think of switching more into one of these options.


Orange? It's a blue warning, isn't it? Is this how one of us finds out he's colour blind?

The UAC dialog for unsigned software has an orange or yellow accent. You could be talking about the SmartScreen dialog. There's yet another dialog for executable files downloaded from the internet, which I think has a red shield for unsigned software.

Blue when it has a valid signature.

Orange when it's missing or invalid.


I'm sure there is truth in the original author citing tax code complexity as the core challenge. But that's not what makes this hard. That's domain complexity, which we all come up against; it's accidental complexity that killed the ports.

The real problem is idiosyncratic and esoteric coding practices from a single self-taught accountant working in a language that didn't encourage good structure.

I can translate well-written code without understanding what it does functionally, so long as I understand what it's doing mechanically.

The original author seems to build in the assumption: you're not going to translate my code, you'll need to rewrite it from the tax code!


I continue to advocate for the fact that Michael's code is not bad at all. There are some anti-patterns in it, for sure; what engineer hasn't fallen into those traps? The fact is Michael is an infinitely better programmer than many of the senior developers I've worked with in my career. I truly sing high praise to his software development capabilities, not just the coding itself but building the product, delivering results, and getting it out the door, especially a simulator like this with no reference points, no formal training, no help. Sure, it took him 40 years, and it's in BASIC and uses gosub everywhere. But the damn thing works, and anyone who takes the time to learn the language and structure, as I did, will see that it is actually a very enjoyable codebase to work with.

The difference between gosub and if blocks calling a function is more academic than practical; you still have a main event loop sending your path of execution someplace based on something that happens.

I might not be a BASIC practitioner, but as someone who has written serious things in bash and powershell, I can see the allure.


The code is bad by virtue of putting his wonderful game at risk, because no one can port it.

This story is its own litmus test. Your story is only as notable as Michael's code is bad!

Don't get me wrong. It seems a fantastic game, and like others I'm most interested in playing the original DOS version.

But good programs, written by good programmers, are not necessarily made of good code!


I think the complexity of audio has outstripped the quality of speaker systems.

My guest room has a cheap 40-inch TV whose audio is terrible compared to the visual output. I can play what feels like cinema-quality 7.1 audio and 4K video over it, but the result is audio that's tinny, distorted, muddy, and hard to understand if it's anything other than a voice-over.

In 2005 the quality of whatever I was watching was crap but it was mixed knowing that it was likely going to be viewed that way!

That's been my conclusion admittedly based on not much.


There's two paths here.

Bottom up and top down.

Bottom up would roughly be:

1. Pick a simple introduction-to-programming textbook, ideally Python.

2. Work through building a transformer LLM in Python.

3. Move on to training it on a corpus.

You're not mastering each step. Reading the python book and doing some exercises is fine.

The top down: This 3Blue1Brown playlist will have you covered https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_...

Either way you want to meet in the middle. There is still a lot in the middle that isn't clear so don't try and work from the middle out!


Yeah it's quite absurd.

The only reason any business is going along with it is that they want access to people's IDs.

I wonder why the UK hasn't just completed the trick and implemented a firewall like China, Iran or North Korea?

