Stunning work! Astounding progress given it's been under 3 months from PCB to this result.
Funnily enough, I've been musing this past month whether I'd separate work better if I had a limited Amiga A1200 as my PC for anything other than work! This would fit nicely.
Please do submit this to Hackaday; I'm sure they'd salivate over it, and it's amazing when you have the creator in the comments. Even if just to explain that no, a 555 wouldn't quite achieve the same result. No, not even a 556...
It's not that MRIs suck at cancer. They provide fantastic structural and functional data.
The problem is the specificity of the results and the prior.
A full-body MRI by definition provides detailed views of areas where the pretest probability of cancer is negligible. That means even a highly specific test would result in a high risk of false positives.
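To make that concrete, here's a quick Bayes sketch in Python; all of the numbers (prior, sensitivity, specificity) are invented for illustration, not real performance figures for any scan:

    # Invented numbers: even a 95%-sensitive, 95%-specific test applied to a
    # population where only 0.5% actually have the cancer mostly flags
    # healthy people; raise the prior and the same test becomes informative.
    def positive_predictive_value(prior, sensitivity, specificity):
        true_pos = prior * sensitivity
        false_pos = (1 - prior) * (1 - specificity)
        return true_pos / (true_pos + false_pos)

    print(positive_predictive_value(prior=0.005, sensitivity=0.95, specificity=0.95))
    # ~0.09: roughly 9 in 10 "findings" would be false positives
    print(positive_predictive_value(prior=0.30, sensitivity=0.95, specificity=0.95))
    # ~0.89: with a raised pretest probability, a positive result means far more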
As a counterpoint, MRS (magnetic resonance spectroscopy) means that you can now MRI someone's prostate and do NMR on lesions you find.
Let's say someone has lower urinary tract symptoms and is 60 years old. An MRI could visualize as well as do an analysis that would otherwise require a biopsy. With the raised prior, you can be quite sure suspicious lesions are cancerous.
Similarly for CNS tumours, where fine detail and subtle diffusion defects can mark cancers you couldn't even see if you cut the person open.
No sensible doctor would give you a whole body CT unless there was a very good reason. That very good reason is probably "we already think you have disseminated cancer". That pushes the prior up.
And even less so for a PET/CT. Let's flood you with X-rays and add some beta radiation and gamma to boot!
The danger of an unnecessary CT/PET is causing cancer; the danger of an unnecessary MRI is chasing non-existent cancer.
> Let's say someone has lower urinary tract symptoms and is 60 years old. An MRI could visualize as well as ...
Not a doctor - but maybe start with some quick & cheap tests of their blood & urine, polite questions about their sexual partners, and possibly an ultrasound peek at things?
At least in America, high-tech scans are treated as a cash cow. And cheap & reasonable tests, if done, are merely an afterthought - after the patient has been milked for all the scan-bucks that their insurance will pay out.
> At least in America, high-tech scans are treated as a cash cow. And cheap & reasonable tests, if done, are merely an afterthought - after the patient has been milked for all the scan-bucks that their insurance will pay out.
Maybe it's a regional thing, but that hasn't been my experience. I've had one MRI and one CT scan in the 25+ years that I've been a full-time employed adult with insurance.
I'd have been happy to sign up for more so I could have proactive health information and the raw data to use for hobby projects.
> The danger of an unnecessary CT/PET is causing cancer
You'd have to be massively overexposed to CT or PET scanning to cause cancer, like in the region of spending months being scanned continuously with it at full beam current.
Even if you don't agree with linear no-threshold (LNT) models for radiation-induced cancers (I don't think LNT is accurate), it comes down to the scan and the age.
3 scans for a 1-year-old? Strongly associated with cancers later in life. 5 scans of a 50-year-old? Less so.
The 1-year-old has an 80-year runway to develop cancer, along with cells already in a state of rapid division and a less developed immune system.
There's excellent reason to think LNT is accurate: at low doses, almost every cell is exposed to at most one radiation event. The dose affects how many cells experience a (single) event, but does not affect the level of damage to those exposed cells. Linearity naturally falls out of this.
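A toy way to see that linearity (a back-of-envelope single-hit model in Python, not real dosimetry; the cell count and dose values are arbitrary):

    # Toy single-hit model: each of N cells receives Poisson(lam) radiation
    # events, with lam proportional to dose. At low dose, the expected number
    # of cells hit at least once, N*(1 - exp(-lam)), is ~N*lam, i.e. linear
    # in dose, and essentially no cell is hit more than once.
    import math

    N = 10**12                            # arbitrary number of cells at risk
    for lam in (1e-6, 2e-6, 4e-6):        # arbitrary low "doses"
        hit = N * (1 - math.exp(-lam))    # cells with at least one event
        multi = N * (1 - math.exp(-lam) - lam * math.exp(-lam))  # cells with two or more
        print(f"lam={lam:.0e}: hit ~ {hit:.3e} (linear approx {N*lam:.3e}), "
              f"multiply-hit ~ {multi:.3e}")

Doubling the dose doubles the number of cells hit, while multiply-hit cells stay negligible.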
To abandon linearity you have to imagine some sort of signalling system (not observed) that kicks in at just the dose we're talking about (not lower, not higher) to allow exposure to one cell to affect other cells.
There's also no good evidence that LNT is wrong. The typical things pointed to by anti-LNT cranks are cherry-picked, often involving interim results from studies whose full results do support LNT, which is evidence the interim signal was statistical noise.
> You'd have to be massively overexposed to CT or PET scanning to cause cancer
The mean effective dose for all patients from a single PET/CT scan was 20.6 mSv. For males aged 40 y, a single PET/CT scan is associated with a LAR of cancer incidence of 0.169%. This risk increased to 0.85% if an annual surveillance protocol for 5 y was performed. For female patients aged 40 y, the LAR of cancer mortality increased from 0.126 to 0.63% if an annual surveillance protocol for 5 y was performed.
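LAR here is lifetime attributable risk. For scale, the quoted figures grow almost exactly linearly with the number of scans, which is the LNT-style assumption such estimates are built on; a trivial check (numbers copied from the quote above, nothing computed from first principles):

    # Quick check that the quoted LARs scale ~linearly with scan count.
    per_scan_incidence_male_40 = 0.169 / 100     # LAR per single PET/CT scan
    per_scan_mortality_female_40 = 0.126 / 100

    print(5 * per_scan_incidence_male_40)        # 0.00845, vs. the quoted 0.85%
    print(5 * per_scan_mortality_female_40)      # 0.0063, vs. the quoted 0.63%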
How are they determining "this cancer was caused by the CT scan" versus "this cancer was caused by the cancer we were originally looking for that was there all along"?
Well, you could work backwards and look at your assumptions.
Why is "We think this person has cancer so we gave them a CT scan and look! Now they've got cancer! It must be because of the CT scan!" the conclusion to jump to?
Please just read this article - https://jamanetwork.com/journals/jamainternalmedicine/fullar...
It's funny that you instantly assumed the authors are stupid and did not think about this obvious pitfall. It's extra funny that you also accuse them of jumping to conclusions without actually reading the article.
The intensity of competition between models is so intense right now they are definitely benchmaxxing pelican on bike SVGs and Will Smith spaghetti dinner videos.
Parallel hypothesis: the intensity of competition between models is so intense that any high-engagement high-relevance web discussion about any LLM/AI generation is gonna hit the self-guided self-reinforced model training and result in de facto benchmaxxing.
Which is only to say: if we HN-front-page it, they will come (generate).
I never realized Lenna was a Playboy centerfold until years after I first encountered it, which was part of an MP in the data structures class all CS undergrads take at UIUC.
> when the indicator becomes a target, it stops being a good indicator
But it's still a fair target. Unless it's hard coded into Gemini 3 DT, for which we have no evidence and decent evidence against, I'd say it's still informative.
Note that, this benchmark aside, they've gotten really good at SVGs. I used to rely on the Noun Project for icons, and sometimes various libraries, but now coding agents just synthesize an SVG tag in the code and draw all the icons.
It does seem that it's in a sense precancerous, although the article seems not to say so outright.
An acquired genetic change, following errors in replication and mistakes in cell division, that leads to cells having an "advantage". Associated with aging, smoking and increased mortality...
If you didn't know it was about this Y loss, it would seem to be directly referencing a precancerous condition.
> That said I'd have preferred something other than Lua if I had the choice.
Same. I know we as a community would never agree on what that language should be, but in my dreams it would have been ruby. Even javascript would have been better for me than Lua.
Lua, especially with LuaJIT, is nearly as fast as C. I certainly don't want to have to run a slow language like Ruby, or especially a full-blown JS runtime like V8, just to run Vim; the entire point is speed and keyboard ergonomics, otherwise just use VSCode.
You don't need V8 to run JS for scripting; you have quickjs[1] or mquickjs[2], for example. You might have problems importing npm packages, but as we can see from Lua plugins, you don't even need package manager support. Performance is not as good as LuaJIT, but it is good enough.
Because I know JavaScript a lot better than I know Lua (and I suspect, given JS's popularity, a lot of people are in the same boat). Yes, Lua is easy to learn, but it's still different enough that there is friction. The differences also aren't just syntactic; it's also libraries/APIs, and more. I also don't have any need for Lua beyond Neovim, so it's basically learning a language specifically for one tool. It's not ideal for me.
But the people who did the work wanted Lua, and I have no problem with that. That's their privilege as the people doing the work. I'm still free to fork it and make ruby or js or whatever (Elixir would be awesome!) first-class.
I agree but also wonder if editor plugins fall squarely in the range of things an LLM could vibe-code for me?
There is a large class of problems now for which I consider the chosen programming language to be irrelevant. I don't vibe code my driver code/systems programming stuff, but my helper scripts, gdb extensions, etc are mostly written or maintained by an LLM now.
I'm right there with you, and to be honest Lua just works. I helped with Neovim when it started ~10 years ago and didn't understand the big deal about implementing Lua. But now that it's here, I can't believe it wasn't forked and implemented sooner.
IME, Claude is quite good at generating Lua code for neovim. It takes some back and forth because there's no easy way for it to directly test what it's writing, but it works.
i’ve written probably north of a million lines of production js, maybe around 100,000 lines of production ruby, and about 300 lines of production lua. lua is a fun language and i think a much better fit than JS for technical reasons (who has a js engine that is both fast and embeds well? nobody), but i am certainly more productive in those other languages where i have more experience.
lua array index starting at 1 gets me at least once whenever i sit down to write a library for my nvim or wezterm.
Denops has always been a niche but it was a really popular niche for a couple years. Activity is fading somewhat. I'm still doing my plugin dev in lua, and it's... survivable. But I do think of switching more into one of these options.
The UAC dialog for unsigned software has an orange or yellow accent. You could be talking about the SmartScreen dialog. There's yet another dialog for executable files downloaded from the internet, which I think has a red shield for unsigned software.
I'm sure there is truth in the original author citing tax code complexity as the core challenge. But that's not what makes this hard. That's the domain complexity we all come up against; it's accidental complexity that killed the ports.
The real problem is idiosyncratic and esoteric coding practices from a single self-taught accountant working in a language that didn't encourage good structure.
I can translate well-written code without understanding what it does functionally, so long as I understand what it's doing mechanically.
The original author seems to build in the assumption: "you're not going to translate my code, you'll need to rewrite it from the tax code!"
I continue to maintain that Michael's code is not bad at all. There are some anti-patterns in it, for sure; what engineer hasn't fallen into those traps? The fact is, Michael is an infinitely better programmer than many of the senior developers I've worked with in my career. I truly sing high praise for his software development capabilities: not just the coding itself, but building the product, delivering results, and getting it out the door, especially a simulator like this with no reference points, no formal training, no help. Sure, it took him 40 years, and it's in BASIC and uses GOSUB everywhere. But the damn thing works, and for anyone who takes the time to learn the language and structure as I did, you will see that it is actually a very enjoyable codebase to work with.
The difference between GOSUB and if blocks calling a function is more academic than practical; you still have a main event loop sending your path of execution someplace based on something that happens.
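A hypothetical sketch (in Python rather than BASIC, and nothing to do with the real tax program; the handler names and events are made up) of what that equivalence looks like:

    # Made-up example: a main loop dispatching on "something that happens".
    # Structurally it's the same shape whether the targets are GOSUB line
    # numbers in BASIC or named functions in a structured language.
    def handle_income(form):
        print("reading income from", form)

    def handle_deductions(form):
        print("applying deductions from", form)

    HANDLERS = {
        "income": handle_income,          # the moral equivalent of GOSUB 1000
        "deductions": handle_deductions,  # ...of GOSUB 2000
    }

    def main_loop(events):
        for kind, form in events:         # the main event loop
            HANDLERS[kind](form)          # send execution someplace based on the event

    main_loop([("income", "W-2"), ("deductions", "Schedule A")])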
I might not be a BASIC practitioner, but as someone who has written serious things in bash and PowerShell, I can see the allure.
I think the complexity of audio has outstripped the quality of speaker systems.
My guest room has a cheap 40-inch TV whose audio is terrible compared to the visual output, and I can play what feels like cinema-quality 7.1 audio and 4K video over it. The result is that the audio is terrible: tinny, distorted, muddy, hard to understand if it's anything other than a voiceover.
In 2005 the quality of whatever I was watching was crap but it was mixed knowing that it was likely going to be viewed that way!
That's been my conclusion, admittedly based on not much.
Bottom-up would roughly be:
1. Pick a simple introduction-to-programming textbook, ideally Python
2. Work through building a transformer LLM in Python (a rough sketch of the core attention step is below)
3. Move on to training it on a corpus
You're not mastering each step; reading the Python book and doing some exercises is fine.
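For step 2, here's a minimal sketch (plain numpy, single attention head, no training loop, random weights) of the core computation you'd be working toward; it's only meant to show the shape of the thing, not a usable model:

    # Scaled dot-product self-attention, the core block of a transformer.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)   # for numerical stability
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(x, Wq, Wk, Wv):
        """x: (seq_len, d_model); Wq/Wk/Wv: (d_model, d_head)."""
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        scores = q @ k.T / np.sqrt(k.shape[-1])   # (seq_len, seq_len)
        weights = softmax(scores, axis=-1)        # each row sums to 1
        return weights @ v                        # (seq_len, d_head)

    # Toy usage: 4 tokens, 8-dim embeddings, 8-dim head, random weights.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
    print(self_attention(x, Wq, Wk, Wv).shape)    # (4, 8)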