Did you fix the OP5 Plus HDMI-in driver?

It was unstable as of 8-12 months ago


So far I haven't had any issues, but I haven't done long tests yet. I'm using BredOS rather than their official Orange Pi images, though


I used this recently to download websites; I stuffed them into a SQLite db, processed them with Mozilla's Readability library, and then used the result with an LLM to ask questions of the webpage itself.

It was helpful to take each step in chunks, as I didn't have a complete processing pipeline when I started.

I had wondered if there was an easier or better way to do this, as I probably would have liked to get the sitemap, pass it to an LLM, and then only download selected HTML pages instead of the entire website.


But the sitemap could be incomplete, couldn't it?


True, I guess that's the advantage of HTTrack.

I guess for my use case, it would be better to get the parsing that HTTrack does, get all the URLs, and pass those into an LLM to selectively grab files.
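
For the sitemap route, something like this might work as a starting point (a rough sketch, assuming a standard sitemap.xml and GNU grep; the domain is a placeholder):

    # pull the URL list out of a sitemap, then hand it to an LLM for selection
    curl -s https://example.com/sitemap.xml | grep -oP '(?<=<loc>)[^<]+' > urls.txt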


RK3588s have an HDMI input (I think with HDCP decoding) that you can drive through the V4L2 API. The multiplane API can be a bit tricky, but it's otherwise fairly trivial
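
Before writing any code, v4l2-ctl is handy for poking at the node; a sketch, assuming the HDMI-in enumerates as /dev/video0:

    # list the formats the HDMI-in node advertises
    v4l2-ctl -d /dev/video0 --list-formats-ext
    # query the timings detected on the HDMI cable
    v4l2-ctl -d /dev/video0 --query-dv-timings
    # capture 60 raw frames as a smoke test
    v4l2-ctl -d /dev/video0 --stream-mmap --stream-count=60 --stream-to=frames.raw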


Communication books can be useful. I've heard good things about Nonviolent Communication, and, while I haven't finished it, Crucial Conversations has been useful


Along those lines: especially when coming from a technical background and dealing with non-technical stakeholders, wording like "hmm, this would likely be a pretty intensive multi-week project" might have been intended as carrying the benign context "...and the team would be excited if that's what leadership wants to prioritize" but can often be interpreted as "...and I'm going to fight you tooth and nail on this."

Pausing and engaging on the benefits of a proposal can be incredibly valuable, even if your mind has already raced to the considerations about implementation and opportunity cost. Many engineers understand that there's no higher praise than a leader diving into the weeds on something, but many other stakeholders don't have the same context!


It's more complex than that. A lot of the material properties depend on both the cooling and the tempering in aluminum alloys.

The phase diagrams for these types of alloys look wild (you often want to achieve a certain material phase during cooling to "lock in" certain characteristics), and it can be difficult to ensure that the minor alloying elements participate during cooling. It is also difficult to diffuse these slightly during tempering, which is typically done to increase ductility.

This is probably why 3D printing hasn't been done in earnest: you can't design something to tight tolerances with unknown material properties.


3D printing of metals is being done in earnest, although the industry prefers the term Additive Manufacturing. Metal powder bed fusion is a stable, reliable process that is being successfully used commercially. It's generally confined to high-value applications that require extreme geometric complexity, but it can be invaluable in industries like aerospace, motorsport and medical. The range of viable materials is still somewhat limited, but covers a good range from titanium and aluminium alloys through to tool steels and heat-resistant super-alloys.

https://www.renishaw.com/en/metal-3d-printing--32084

https://www.trumpf.com/en_US/products/machines-systems/addit...

https://www.carpenteradditive.com/powderrange-metal-powders


So you need to control the solidification process to plot a course through the phase diagram, spending the right amount of time in each region, and ending up in a good place. And this alloy has a phase diagram that is compatible with a method of 3d printing.


I think this is also why welding aluminum alloys was such a pain before the introduction of friction stir welding, which doesn't melt the metal. FSW was invented surprisingly late, in the 1990s.


I've been looking at the Radxa zero3w/zero3e.

Looks like this guy got Chromium to work? https://www.youtube.com/watch?v=XAnN1A_sye0


I've been considering that board, but a read of the forums suggests it has lots of issues.

I might try it anyway, but I'm not sure what their long-term support would be like compared to the Pi if I do get it to work.


I've been trying to set up an HDMI pass-through pipeline from an HDMI input to an HDMI output with an OrangePi 5 Plus. I could talk for a long time (now) about the issues with vendor-supplied kernels and unsupported hardware. I was completely naive until I had the hardware in hand, having not done any embedded work.

Right now, the best approach I have is to run Weston, have a full-screen Qt application, and use DMA buffers so I can do some zero-copy processing. Rockchip has its own MPP and RGA libraries that are tied into the Mali GPU, and I'm not smart enough to tell whether the current level of driver/userspace support would let me avoid those libraries.

Rockchip and the ARM ecosystem are such a mess.

If anyone has any pointers, experience, approaches, code, etc, I would love to see it.


Not sure what kind of processing you need to do on the video stream, but have you considered giving `ffmpeg` a try if you just need plain pass-through from video input to output? `ffmpeg` might be built with support for the Mali libraries you mention on the OS you are using. If you are able to run `weston`, `ffmpeg` should be able to output directly to the DRM card through the use of SDL2 (assuming it was built with it).

If the HDMI-USB capture card that outputs `mjpeg` exposes a `/dev/video` node, then it might be as simple as running:

`SDL_VIDEODRIVER=kmsdrm ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -f opengl "hdmi output"`

An alternative: if you can get a Raspberry Pi 3 (or even a 2) and a distro where `omxplayer` can still be installed, you can use `omxplayer` to display your MJPEG stream on the output of your choice. Just make sure the `kms/fkms` dtoverlay is not loaded, because `omxplayer` works directly with DispmanX/the GPU (BTW, it is not compatible with the Pi 4 and above, much less the 5), which contains a hardware MJPEG decoder, so for the most part bytes are sent directly to the GPU.

Hope some of this info can be of help.


Looks helpful! I assume ffmpeg needs to be built with SDL for this to work? I couldn't get it to work with my current minimal compile, and I don't think the board I'm working on has SDL, so I might need to install that and recompile


That's correct, `ffmpeg` needs to be built with SDL (SDL2, really, on all recent versions). When `ffmpeg` is built and the dev files for SDL2 are present, ffmpeg's build configuration picks it up automatically and links against the library unless instructed otherwise by a configuration flag. When you run `ffmpeg`, the first lines usually show the configuration it was built with, so that might hint at what it includes, but if you want to confirm what it links against you can do a quick:

$ ldd `which ffmpeg`

And you should get the list of dynamic libraries your build is linked against. If SDL2 is indeed included, you should see a line starting with "libSDL2-2...".

If I remember correctly, you should be able to output to the framebuffer even if there is no support for SDL2; you just have to change the previous command from `-f opengl "hdmi output"` to `-pix_fmt bgra -f fbdev /dev/fb0`.

You can also use any other framebuffer device if present and preferred (e.g. /dev/fb1, /dev/fb2). Also, you might need something other than `bgra` on your board, but usually `ffmpeg` will drop a hint as to what.
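
Putting those pieces together, the fbdev fallback would look roughly like this (device node and pixel format are board-dependent):

    ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -pix_fmt bgra -f fbdev /dev/fb0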


In general, DRM/KMS can be quite confusing, as there seems to be little userland documentation available. I assume you get the DMA buffers from the HDMI input somehow? If so, you should be able to use drmModeAddFB2WithModifiers to create a DRM framebuffer from them, attach that to a DRM plane, place the plane on a CRTC, and then schedule a page flip after modesetting a video mode.

The advantage would be that you can run directly without starting any kind of visual environment first. But it's a huge mess to get going: I wrote quite a bit of Pi4/5 code recently to get a zero-copy HEVC/H264 decoder working, and it was quite a challenge. Maybe code like https://github.com/dvdhrm/docs/tree/master/drm-howto can help?
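
As a gentler on-ramp, libdrm's modetest utility can enumerate connectors, CRTCs and planes and even set a mode, before you commit to writing the drmModeAddFB2WithModifiers path yourself. A sketch (the "rockchip" driver name is my assumption for an RK3588 board):

    # list the connectors/encoders/CRTCs/planes the KMS driver exposes
    modetest -M rockchip
    # set a mode on a connector and fill it with a test pattern
    # (replace 123 with a connector id from the listing above)
    modetest -M rockchip -s 123:1920x1080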


The HDMI receive device on the OrangePi 5 Plus is in a semi-functional state. Collabora is in the process of upstreaming code so the RK3588 will work with the mainline Linux kernel.

Until that happens, working driver code is in a very transient state.

To get going and sidestep that problem, I've purchased HDMI-to-USB capture cards that use MacroSilicon chips. I have some thought of using a cheaper CPU in the future with a daughterboard based on this project, which uses MacroSilicon chips: https://github.com/YuzukiHD/YuzukiLOHCC-PRO. That made it potentially not a waste of time to dig into.

The MacroSilicon HDMI-to-USB capture cards output MJPEG, which Rockchip's MPP library has a decoder for.

So the thought is: (1) allocate a DMA buffer, (2) set that DMA buffer as the MJPEG decoder target, (3) get the decoded data to display (sounds like I may need to encode again?), plus a parallel processing pipeline.
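
One way to sanity-check the input side before wiring up MPP (assuming the capture card enumerates as /dev/video0):

    # confirm the card really outputs MJPEG
    v4l2-ctl -d /dev/video0 --list-formats        # expect 'MJPG' in the list
    # software-decode a single frame as a baseline before the MPP path
    ffmpeg -f video4linux2 -input_format mjpeg -i /dev/video0 -frames:v 1 test.png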

I'll dig into the stuff you've sent over, very helpful thanks for the pointers!

I've thought about switching to Pi4/5 for this. Based on your experience, would you recommend that platform?


> I've thought about switching to Pi4/5 for this. Based on your experience, would you recommend that platform?

Their kernel fork is well maintained, and if there is a reproducible problem it usually gets fixed quite quickly. Overall I'm pretty happy. KMS/DRM was a bit wonky, as there was a transition phase where they used a hacky mix between KMS and the old proprietary Broadcom APIs (FakeKMS). But those days are over, and so far KMS/DRM works pretty well for what I'm using it for.


Not the same thing, but there is this project that does digital RGB to HDMI using a Pi: https://github.com/hoglet67/RGBtoHDMI. I believe they use custom firmware on the Pi and a CPLD, but you could probably eliminate that going HDMI to HDMI.


Fascinating, thanks for pointing this project out!


I know there is at least one ffmpeg fork with Rockchip MPP and RGA support, although I haven't tested it myself yet: https://github.com/nyanmisaka/ffmpeg-rockchip

I have tested the mpp SDK a bit and the code is easy to work with, with examples for encode and decode, both sync and async.
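
For anyone curious, usage of the fork looks something like this, if I'm reading its README right (the `h264_rkmpp` decoder name comes from the fork, not mainline ffmpeg):

    # hardware H.264 decode through MPP, discarding output as a throughput test
    ./ffmpeg -c:v h264_rkmpp -i input.mp4 -f null -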


They don't have an MJPEG decoder yet, which is a blocker for hardware acceleration, but I'm going to try to patch the library with the author and get it added. Thanks for pointing it out!


You can also run Qt directly on the console framebuffer, without Wayland/X.
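
For reference, that's just a matter of picking a QPA platform plugin at launch; a sketch, assuming your Qt build includes eglfs and/or linuxfb (`./myapp` is a placeholder):

    # GPU-accelerated KMS/DRM path, no compositor
    QT_QPA_PLATFORM=eglfs ./myapp
    # plain framebuffer fallback
    QT_QPA_PLATFORM=linuxfb ./myapp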


I might end up doing that. When I was first digging into it, the Qt documentation seemed confusing. But after sinking 10-20 hours into everything, it's starting to click a lot more.

Thanks for the pointer!


Surprised they didn't mention Portainer, a Docker frontend. There are a lot of templates, so you can easily self-host without understanding some of the ins and outs.

I also like using Proxmox with TurnKey Linux's images. It helped me self-host Invoice Ninja faster, or at least try it out.

I feel like between Portainer and Proxmox you can cut some of the pain of getting something off the ground.


If you were looking to create tests for networking, i.e., simulating network dropouts for client-server connections, is this something you can use namespaces for, or would virtual machines be more fit for purpose?


It may be both.

You could possibly just use network namespaces.

While I'm not familiar enough with it to estimate the effort required, looking at the OpenStack project and how they provision tenant networks may help.

It is just Python, with a three-tier model and a message bus. But how they interface with libvirt may help if the namespace abstraction in the 'ip' command is too clunky for you.

It doesn't give you independent stacks, and Open vSwitch will let you build almost anything you would need.
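
A minimal sketch of the plain-`ip` route, in case it's clunky but sufficient (all names are made up):

    # two namespaces joined by a veth pair
    ip netns add client
    ip netns add server
    ip link add veth-c type veth peer name veth-s
    ip link set veth-c netns client
    ip link set veth-s netns server
    ip netns exec client ip addr add 10.0.0.1/24 dev veth-c
    ip netns exec server ip addr add 10.0.0.2/24 dev veth-s
    ip netns exec client ip link set veth-c up
    ip netns exec server ip link set veth-s up
    ip netns exec client ping -c 1 10.0.0.2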


OK, interesting, I'll take a look. I was looking at libvirt, specifically at how Red Hat does testing for its container ecosystem, to see if there wasn't some juice to squeeze out of how they manage it.


Look up tc, the traffic-control utility that configures the kernel's packet scheduler (qdiscs such as netem).
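
For example, to simulate loss and latency on an interface (the netem parameters here are arbitrary):

    # add 200ms +/- 50ms of delay plus 20% packet loss on eth0
    tc qdisc add dev eth0 root netem delay 200ms 50ms loss 20%
    # inspect and remove it again
    tc qdisc show dev eth0
    tc qdisc del dev eth0 root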


This is a bit off-topic, but I'm running Home Assistant on a system that doesn't have any ECC.

I'll reboot it occasionally. Every once in a while it will exhibit odd behavior that seems to resolve on reboot.

Is it worth getting something that has ECC? I think I'm running a Sandy Bridge i7, for reference.


It's not really possible to say whether ECC would prevent the issue you're describing, but that's why, IMO, it is worth having ECC: so you don't have to worry as much about whether the weird behavior you see is due to random memory errors.

On the more immediately practical side: if this is happening frequently enough, run memtest86 or GSAT (the Google Stressful Application Test) and see if either can pick up any errors.

You also might be able to improve things with a BIOS upgrade, or by lowering the clock speed of the RAM.
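
For what it's worth, GSAT is packaged as `stressapptest` on most distros; a sketch (size and duration are arbitrary):

    # hammer 8 GB of RAM for an hour
    stressapptest -M 8192 -s 3600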

