
This is a weird application for such sensors. The train may be used as a test platform. DARPA recently launched a program to develop quantum sensors that are reliable outside the lab [1].

[1] https://www.nextbigfuture.com/2025/10/darpa-developing-quant...


How does it differ from the cameras used in capsule endoscopy [1]? The technology has been available for 25 years and is widely used (4 million units sold in 2024).

[1] https://en.wikipedia.org/wiki/Capsule_endoscopy


It doesn't differ at all, because it IS the camera module in those capsules.


I would say that you are persistent if you keep getting rejected but keep improving by using the feedback, and stubborn if you don't change a thing while continuing to get rejected.


Most likely a glorified black monolith brick, the kind Ive loves, with no display or only a really small OLED. Or a stick, like a pen you can clip to your pocket, but the battery may be an issue.


Yeah sounds about right. Maybe a detachable module with camera/mic that communicates back to the compute brick (which also serves as a charger).

At the end of the day, essentially a phone with some AI-specific form factor. There's just not much else it can be.


Indeed, the 90-day delay leaked in the morning, was then denied, and at the end of the day came the official announcement. So some people knew before the official announcement and shared the information.


I don't think Android does that. It's only Google Photos, and only if you upload them to the cloud; if you don't sync/upload them, you can't search them with specific terms.


For everyone interested in the technical details of the TSMC EUV process, I would highly recommend this CCC talk [1] (From Silicon to Sovereignty: How Advanced Chips are Redefining Global Dominance).

[1] https://news.ycombinator.com/item?id=42546231


I knew the process was complex, especially the light source, but I didn't realize that diffraction was something they also use, which is absolutely insane.


Cool! I uploaded the video to YouTube here: https://www.youtube.com/watch?v=sB-y-tDlOSA

(It's licensed CC-BY so this should be allowed, and I like having videos like this on YouTube where I can easily watch them from anywhere and add them to my playlists.)


The "transistors shipped" in the history of computing was an interesting number. In 2024 it is now over 10^24. That's a massive number, more than estimate number of stars in the universe. But, in another sense, still quite small. It finally surpassed Avogadro's number, or 6*10^23 particles. This is the equivalent of a small shot glass filled with water (molecules).


Bipedalism is great for evolving on uneven terrain. Here it seems to just slow the process down. Also, it uses its second hand for balance instead of doing useful work; a counterweight would be just as effective. In a factory, where the floor is flat, a human-sized self-moving robot on 2 or 3 wheels would be far more effective, and longer arms with more joints that don't mimic human ones could also be better. There is already a lot of automation/robots in industry and it never looks like a human. Even in our houses the best approach to automation never looks like a human (e.g. the vacuum cleaner). I think the only part of our body worth copying is the hand.


Humans are precariously tall. Good for spotting threats but not for navigating terrain. Four legs are better, with a lower center of gravity. Look at mountain goats for example.


Can LLM/AI have the opposite effect?

People would start writing more and getting better at it with the help of LLMs. This would create a positive feedback loop that would encourage them to write more and better. LLMs should be used as a tool to improve productivity and quality of output. Just as we use a computer interface to write faster and to move and edit text instead of using a pencil and an eraser, today you can use an LLM to improve your writing. This would help people get better at organizing their thoughts and think more clearly, instead of replacing the thinking.


How does CLIP compare to YOLO [1]? I haven't looked into image classification/object recognition for a while, but I remember that YOLO was quite good and worked on realtime video too.

[1]: https://pjreddie.com/darknet/yolo/


CLIP and YOLO work completely differently and have different purposes. CLIP uses transformers and embeddings and can compare text with images for classification. YOLO uses a CNN, is trained with bounding boxes on images, and is used for object detection.

Give an image to CLIP and you can compare the similarity between the image and a sentence like 'a vase with roses in it'. Whereas with YOLO, you give it an image and get the coordinates of bounding boxes around the vase and around the roses.
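
A minimal sketch of the difference, assuming the Hugging Face transformers CLIP model and the ultralytics YOLO package (those package and model names are my assumptions, not something mentioned above):

    # CLIP: score an image against free-form text prompts
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    image = Image.open("photo.jpg")
    inputs = processor(text=["a vase with roses in it", "a dog on a couch"],
                       images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=1)  # similarity per prompt

    # YOLO: get bounding boxes for a fixed set of trained classes
    from ultralytics import YOLO
    detector = YOLO("yolov8n.pt")
    results = detector("photo.jpg")
    print(results[0].boxes.xyxy)  # box coordinates, plus class ids and confidences

The key difference: CLIP scores arbitrary text against the whole image, while YOLO only localizes the classes it was trained on.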

