jamesg's comments | Hacker News

> If you try to print big (let's say A3+) you'll rapidly find the limits of the m43 system

I’ve printed a good number of images shot on m43 at about that size, and that’s not been my experience. The specifics definitely matter, but I’ve found Olympus’s M.Zuiko 75/1.8 to be competitive with a 70-200/2.8 in terms of sharpness, distortion, etc., for instance (DXOMark seems to broadly agree: https://www.dxomark.com/Lenses/Olympus/Olympus-MZUIKO-DIGITA... vs https://www.dxomark.com/Lenses/Nikon/AF-S-VR-Zoom-Nikkor-70-...). You do have a larger minimum DoF, but 150/3.6 equivalent is not that far off.
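
For anyone unfamiliar with the equivalence arithmetic: the "150/3.6 equivalent" figure just multiplies both focal length and f-number by the m43 2x crop factor. A quick sketch (the function name is mine, not a standard one):

```python
# Full-frame equivalence sketch: multiply focal length and f-number by the
# crop factor (2.0 for Micro 4/3) to compare field of view and DoF.
M43_CROP = 2.0

def ff_equivalent(focal_mm: float, f_number: float, crop: float = M43_CROP):
    return focal_mm * crop, f_number * crop

# The Olympus 75/1.8 frames and renders DoF like a 150/3.6 on full frame.
assert ff_equivalent(75, 1.8) == (150.0, 3.6)
```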

I would like a bit more resolution, but that’s also true of my Nikon D4S (also 16MP).

Clearly FF and m43 have different strengths, and you’ll get the best results by playing to the strengths of each. I concede that all else being equal, larger sensors do afford you more flexibility (lower light performance, for instance). On the other hand, traveling with my Nikon FF kit is a huge PITA (mostly due to the lenses; the body is a constant cost I can mostly deal with).

I’d love to hear more about the limitations you’ve hit with larger prints on m43.


I don't actually use m43, but 16 or even 24 MP would be rather inadequate for my needs, especially from a Bayer sensor.

I do my printing at 720 ppi on Pictorico OHP transparency stock, which I then use to contact print cyanotypes or pt/pd (I'm really dedicated to useless stuff). You can see how the resolution requirements for anything bigger than a postcard are stringent.
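
To put numbers on that (a sketch, using the 720 ppi figure above):

```python
# Megapixels required for a contact-print negative at a given ppi.
def mp_needed(width_in: float, height_in: float, ppi: int = 720) -> float:
    return width_in * ppi * height_in * ppi / 1e6

# A 4x6" postcard already wants ~12.4 MP at 720 ppi; an 8x10" wants
# ~41.5 MP, which is past a 36 MP sensor before demosaicing losses.
assert round(mp_needed(4, 6), 1) == 12.4
assert round(mp_needed(8, 10), 1) == 41.5
```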

In practice I use either a Sony A7R (36 MP, with very sharp primes), a Sigma with a Foveon sensor (which holds up very well against the Sony at ISO 100, and it's not like I use the Sony above ISO 100), or high-resolution scans of Adox CMS 20 II microfilm stock (ISO 20; there's a special low-contrast developer for pictorial contrast) shot on a Leica or Contax with their respective very highly resolving primes.


Gotcha. Cool setup!


Since you mentioned image processing in particular, I’d recommend looking into Halide instead of (or as well as) CUDA. Few reasons:

1. It allows for easy experimentation with the order in which work is done (which turns out to be a major factor in performance) — IMO, this is one of the trickier parts of programming (GPU or not), so tools that accelerate experimentation accelerate learning too.

2. It allows you to write your algorithm once and emit code to run on OpenGL, OpenCL, CUDA, Metal, various SIMD flavors, and a bunch more exotic targets. CUDA effectively limits you to desktop/laptop computers, and at this point I’d rather bet on needing a mobile version at some point than not.

3. It eliminates a ton of boilerplate code, so you can get started quickly.

4. It’s what the pros use. Much of Adobe’s image processing code is in Halide now, for instance (source: pretty much any presentation extolling the virtues of Halide). The Halide authors cite a particular algorithm — the local Laplacian filter — where an intern’s one-afternoon Halide implementation beat a hand-optimized C++ implementation that had taken months to develop. I don’t know if the specifics of that have been exaggerated, but directionally I believe it: it was pretty transformational in the codepath I used it for.

I feel like developing an intuition for the “shape” of algorithms that will perform well before diving into the specifics of low-level tools like CUDA will serve you well.
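
To illustrate point 1 with a toy: this is plain Python, not Halide, just a sketch of the algorithm/schedule split Halide gives you (the function names are made up for illustration).

```python
# The *algorithm* (what each output value is) stays fixed; the *schedule*
# (when and where intermediates get computed) changes. Halide lets you swap
# schedules without rewriting the algorithm; here we do it by hand.

def blur_breadth_first(row):
    # Schedule A: materialize the clamped 3-tap blur for the whole row.
    n = len(row)
    return [(row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def blur_fused(row):
    # Schedule B: compute each value on demand, no intermediate buffer.
    n = len(row)
    def at(i):
        return (row[max(i - 1, 0)] + row[i] + row[min(i + 1, n - 1)]) / 3
    return [at(i) for i in range(n)]

data = [1.0, 4.0, 7.0, 10.0]
# Same algorithm, different schedules: identical results, different
# memory-access patterns (and thus different performance at scale).
assert blur_breadth_first(data) == blur_fused(data)
```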

http://halide-lang.org/


Would halide be a good option for writing a pathtracing engine?


You may want to try using JetBrains' Rider as your IDE. I've hit similar issues with XF in VS, but Rider has been a much better experience for me. For instance: it won't lose its shit if you double-click a XAML file :)

Also has the benefit of being substantially the same on Windows and Mac, so you can use a Mac directly for Mac / iOS development and everything works pretty much the way it did on Windows. VS on Mac is unfortunately pretty dissimilar to VS on Windows.


This. I don't have experience with Forms, but Rider is a massive improvement over VS on both platforms for "native" Xamarin development. I actually quite liked VS for Windows when I started working with it a couple of years ago, but lately it seems to have gotten only slower and buggier.

Since we only do Android / iOS at my job, I don't even bother with Windows anymore. I still "need" VS for Mac to (a) build iOS layouts in the interface designer (which is hell; I regularly consider going code-only for those) and (b) deploy Android apps when debugging. Rider will do that, but it doesn't do the fast assembly deployment that VS does, so it's very slow even if there are no changes. And thus I write code in Rider, and run through VS.

VS for Mac doesn't include as much bloat as VS for Windows, which makes it "better" because it's relatively light and doesn't lock up too long when switching to it. As an editor it's just annoying though. Debugging in it is a necessary pain right now.

If this sounds cumbersome... it is. I find it workable right now but I'm regularly annoyed. Build + deploys fail when I fire them off too soon after changing an .axml file. Builds fail as a rule, not an exception, when there are Xamarin or Android SDK updates. iOS is actually mostly fine, except for the hell that is the designer. All in all none of these things are show stoppers, but I don't think I could wholeheartedly recommend Xamarin for new apps.


That's fair, but I think that XF has a good story around integrations with other frameworks to fill its gaps: SkiaSharp has the SkiaSharp.Views.Forms namespace, for instance, and there's solid documentation on integrating the two, right there on Microsoft's Xamarin.Forms docs. Actually, speaking of docs, Charles Petzold's book on XF is an awesome resource -- I've not seen commensurate investments in documentation made by XF's competitors (admittedly it's not something I track that closely). Similarly XF's ability to add a native control directly to a StackLayout via extension methods makes it fairly straightforward to just drop in a native component if that's what you want to do.

I think it's a reasonable strategy to say "hey, we're not going to be able to solve all the problems, and it's probably not the right way to spend our time even if we could. What we will do is make integration with other solutions straightforward so we're not the bottleneck". Making a strategic choice not to be the bottleneck seems like a good call regardless. That said, the things I've found to be a hassle are things like getting OpenGL working on UWP from C# (tractable, but that's really not how I want to be spending my time).

But yeah, wow what a difference it's made having Microsoft throw their resources at it. My current codebase lets me build for iOS, Mac & UWP at present (I'll get to Android at some point), with really not very much effort. Being able to debug on the Mac version rather than waiting for iOS deploys is just about enough to repay the time investment on its own. I'm anxiously awaiting them getting their web platform support up to snuff -- I'd dearly love to never have to JavaScript again. :)


If you're considering learning Dvorak, I'd strongly recommend considering Colemak instead. I tried Dvorak with the hope that it would mitigate RSI, but it spreads the work among fingers pretty unevenly -- I really ended up just moving the problem rather than fixing it (mostly to my right pinky).

Colemak is (relatively) easy to learn if you know QWERTY, and it's been life-changing for me: I can work for more hours of the day, and I suspect more years of my life with Colemak.

Interestingly, I tried configuring my phone for Colemak a while ago and had to switch it back. The relatively small movements you make with Colemak meant that the swipe typing thing was just about useless -- it just couldn't discriminate between words.


I second this - as someone who learned Qwerty as most Americans do, then learned about alternative keyboard layouts and picked up Dvorak, and finally settled on Colemak. In each of them I was able to type 120+ wpm (140+ wpm in Qwerty and Colemak), so I consider myself a proficient typist.

Dvorak offered no benefit to me. I firmly believe that any gains people attribute to Dvorak come from finally learning how to type properly. Many people make the switch from the misguided idea that Dvorak will help them type faster (it will - but only because you relearn how to type) or that it is more ergonomic (I firmly believe it isn't - but you're likely using proper finger movements now, so it will feel more comfortable).

Colemak is noticeably more ergonomic and doesn't mess up as many default keybinds. I much prefer typing in Colemak to Qwerty, although I type in both to retain my ability to use Qwerty lest I find myself having to use someone else's computer.

It takes a few minutes for the wires to switch over - but it's totally possible to retain one's ability to type in either layout. For those wanting to try a new layout but are scared of forgetting the original - just make sure to practice in both!

Edit: Since another user below also mentioned the context switching being difficult for a year: my context switch takes about 5-8 minutes, and I've been using Colemak for about 2-3 years. I use Colemak at home and Qwerty at the office, so I spend roughly equal amounts of time in both. It doesn't seem to matter how much time I've spent "not typing" between switches. Colemak at 8pm at night, Qwerty at 8am the next day? Still a few minutes before the brain swaps over. It's a little weird, but I've grown accustomed to the "warmup period" I guess; I know a few people who don't seem to have this problem at all.


Really interesting. It's a much smaller difference, but I use QWERTZ at home and QWERTY on my laptops, and I don't have a context-switching period; it's tied to the keyboard. I don't even have to think about it: when I use my Logitech keyboard I write with the German layout, and on Lenovo keyboards I write with us-intl. It's not a problem, unless someone gives me a Lenovo with QWERTZ; then I mistype everything. I can only imagine how much worse it would be with Dvorak or Colemak...


I use the same keyboard at work and at home - a DAS Model S with unmarked keycaps. Maybe that contributes to the brain confusion - but I like the keyboard too much to try and type using two different keyboards.


Same here. Colemak allows me to type at higher speeds for longer without pain.

I carry a mechanical keyboard with DIP switches that let me use Colemak on other people's machines without much trouble (as a bonus, a mechanical keyboard is generally nicer than their keyboard too).


Which keyboard out of curiosity?


Vortex Pok3r. It's a 60% keyboard, which is a nice compromise for travel IMO. You still have programmable layers, but don't have to reach for them when doing basic programming tasks. Four DIP switches on the bottom let you switch to Colemak or Dvorak in hardware, so no messing up the host system settings.


Cool, I almost got the same one but went with a Leopold 660M, also 60%. I just have caps lock mapped to backspace and use vim for everything I can (check out cVim for Chrome). I tried Colemak out today but will stick with Qwerty since I'm already touch typing at 100 wpm.


I just looked at cVim and it’s definitely interesting. I’m currently a user of Vimium. Have you tried both, by chance? Curious about a comparison


Vimium didn't work consistently when I tried it, where cVim did. I used to use VimFx on Firefox but it was abandoned. I like the way cVim does its follow tooltips.


Thanks! On FF, Tridactyl is maintained. The Vimium search bar dies in my FF because of the number of tabs, so I have to use the native one. I went to Tridactyl recently and it seems to be working better.

Edit: autocorrect fix


Actually, I just installed cVim to test and went through the docs. Very nice work; the config setup alone... Anyway, thank you! Appreciate the recommendation, gp!


Some off-the-shelf 60% keyboards have this; the ones I remember are the Poker 2 / Pok3r keyboards.


I'd say that you should use whatever floats your boat. For me Colemak never stopped feeling weird, despite having spent a lot of time playing around with keyboard layouts.

Another thing people should try, at least if they're at a workstation, is pedals. Back when I was still using evil (an Emacs mode for emulating vim), I used a pedal that switched to normal mode from any other mode, or to insert mode from normal mode.

Together with a properly set up abbrev-mode, that has saved me more typing than anything else combined.


If you worry about RSI, use speech recognition whenever you write emails or comments. There is even a trick to use it on Linux if you're interested (not the crappy sphinx/kaldi).


> there is even a trick to use it on Linux (not the crappy sphinx/kaldi)

What do you mean? What are you using?


The Android speech-to-text to Linux trick.


What is your experience when you have to use the QWERTY layout? Is it a frustrating experience or can you use it just fine given that Colemak is more similar to it than Dvorak?


At this point I'm pretty terrible at QWERTY on a physical keyboard. I'm fine with it on phones and tablets for some reason, but my brain is pretty hard-wired for Colemak on physical keyboards at this point. I also use a Kinesis Advantage keyboard which adds to the context switch when I have to use a different computer (though this is less significant for sure).

If it's a Mac, Colemak is one of the pre-installed layouts, so if I need to work on someone else's computer I'll just enable it while I work on it and remove it afterwards. Otherwise I can manage, but it does slow me down quite a bit, and I'll have to look down at my hands pretty frequently. You immediately notice the extra workload though: Colemak is pretty low effort, but QWERTY just feels like finger moshing to me now.

I'm pretty sure it's possible to remain proficient in both, but QWERTY was sufficiently destructive to my hands that I use it as little as possible.


I use Colemak as well. Having to type in QWERTY comes up much more rarely than you might think; for me it's probably a couple of times a year, and I can always get away with some quick hunt-and-peck in those rare cases.


I've used Colemak for 5 years now. I've always just hobbled along with Qwerty, not pecking, but not really touch typing.

However, I rarely need to use Qwerty. It's less than once per week. The only times are when I'm in the UEFI settings on my laptop, or fixing my wife's computer. Any other time I have my own user, and can switch the layout to what I like.


I think it probably depends on if you have a separate keyboard or not. I have an ergodox on my home pc and work laptop, but my laptop is qwerty. I don't even notice moving between keyboards, I guess because they are so different.

Having an external keyboard also means that other people can still use my work laptop when needed (pairing etc)


Additionally, most cameras these days will under-expose by a pretty substantial amount. Digital darkroom software maintains a database of cameras with an entry for how much each camera under- or over-exposes (mostly under), which is applied before any of your adjustments are layered on top. Adobe's DNG spec calls this "baseline exposure". I used to always under-expose by about a third of a stop, because I reasoned that whilst I could probably recover shadow detail (even if it were noisy), once the sensor has clipped there's nothing I can do to recover lost highlights. With modern cameras this doesn't really make sense any more: the camera will just meter that way to begin with.
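
In linear terms, applying a baseline-exposure compensation is just a power-of-two scale (a sketch; the function name is mine, but the EV-to-linear relationship is standard):

```python
# BaselineExposure is expressed in EV stops; each stop is a factor of 2
# in linear light. The correction is applied to linear raw values before
# the user's own exposure adjustments are layered on top.
def apply_baseline_exposure(linear: float, ev: float) -> float:
    return linear * 2.0 ** ev

assert apply_baseline_exposure(0.5, 1.0) == 1.0             # +1 EV doubles
assert round(apply_baseline_exposure(1.0, 1 / 3), 2) == 1.26  # ~1/3 stop
```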

It's a double-edged sword though: under-exposing will add more shadow noise.

Iliah Borg (one of LibRaw's authors) has a good write-up on it: https://www.rawdigger.com/howtouse/deriving-hidden-ble-compe...

DXOMark also maintains a database of their own measurements of each camera, including actual ISO sensitivity for each nominal ISO sensitivity, eg: https://www.dxomark.com/Cameras/Nikon/D850---Measurements


I’ve recently been experimenting with this board, which looks to be quite similar (Allwinner H5): https://www.friendlyarm.com/index.php?route=product/product&...

Overall it’s an impressive little package, however I’ve been finding that you need to underclock the CPU to make it stable. Or possibly use a giant heat sink, but that would somewhat counteract the benefits of such a tiny board. Do you know if Neutis have found a good solution for keeping the H5 stable?

In broader strokes, though, I'm pretty excited that we're starting to see these boards with open hardware designs (the VoCore and the Beagle, for instance). Being able to use one to bootstrap more complex board designs feels a bit like how web apps became a lot easier once the so-called LAMP stack was robust enough to build on top of. There are additional hurdles with hardware, for sure, but each barrier removed is meaningful progress.


Whilst it can be somewhat subjective, and different styles of photography benefit to different degrees from exotic gear, it's surprisingly complex even in fairly narrow domains.

You're probably right that the first image wouldn't be significantly different on any other lens: it looks to be shot with a fairly narrow aperture in fairly unchallenging lighting conditions, so you're not hitting any of the aspects of photography where lens design has really advanced. A really cheap lens might show distortion or chromatic aberration (admittedly less important in B&W) at the edges, but beyond that, you'd be fine.

But I saw this article recently, and it's a sample size of 1, but nonetheless I found it somewhat surprising: https://petapixel.com/2018/08/15/is-the-sensor-or-the-lens-t... -- unfortunately it doesn't specify how these images were processed so it's hard to draw conclusions (eg: if it was all SOOC JPEGs, I'd be willing to believe that the older image processor in the D610 has worse highlight reconstruction, for instance. Or possibly worse demosaicing). However, prior to reading that article I'd have unconditionally recommended investing in glass before getting a new body; evidently my mental model for this was at least slightly deficient.

FWIW, my 2c: take photos that leverage the gear you've got, and conversely choose gear to get the kinds of images you want. Eg: I love my Nikkor 85/1.4G lens, despite its shortcomings, and you'll never get the same results on a smartphone (I mean, 85/1.4 is a pretty razor thin depth of field). If what you want is a super creamy background and tons of detail on the in focus regions, and you like the framing at 85mm, then that's pretty much the only way to get it (that I know of anyway). However, if you're only shooting wide angle, with everything in focus, then the differences will definitely be more subtle. They'll be there (eg: a Zeiss 21/2.8 on a D850 will pull out detail that the smartphone just won't know is there, the D850 sensor will have a lot more dynamic range, etc), but those differences will be less readily apparent, especially if you're viewing these images on a smartphone screen.
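
To put a number on "razor thin": a back-of-envelope sketch using the standard thin-lens approximation DoF ≈ 2·N·c·d²/f², with the usual 0.03 mm full-frame circle of confusion as an assumption:

```python
def total_dof_mm(focal_mm: float, f_number: float, subject_mm: float,
                 coc_mm: float = 0.03) -> float:
    # Thin-lens approximation; valid when the subject distance is much
    # smaller than the hyperfocal distance (true here: ~172m for 85/1.4).
    return 2 * f_number * coc_mm * subject_mm ** 2 / focal_mm ** 2

# 85mm f/1.4, subject at 2m: under 5cm of total depth of field.
assert 40 < total_dof_mm(85, 1.4, 2000) < 50
```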

One last point: you're no doubt aware, but there are more dimensions to a lens' quality than its resistance to chromatic aberration (re: fancy APO optics), speed, or resolving power. I have a couple of the Voigtlander lenses for Micro 4/3, and while they have a lot of shortcomings (so much coma aberration!), the way they render the background has a particular quality to it that's hard to replicate; not always desirable either, but sometimes fun. I also enjoy playing with some of the shortcomings: they render with a kind of halo-like glow around close objects when shot wide open. Likewise, the bokeh on my Zeiss 100/2 MP is pretty special (I believe this is a consequence of it not using any aspherical elements), despite the fact that it suffers from terrible chromatic aberration (possibly also related to the lack of aspherical elements). There's also the color rendition of different lenses, but much of that can be corrected / simulated in post if you're patient enough. And every now and then I shoot with an old Zeiss Jena 200/2.8 lens with an adapter precisely because its shortcomings yield a particular look. Colors are a bit more muted, it's a bit less contrasty, and chromatic aberration is extremely pronounced. It looks like a photo that would have been taken in the 60s, which is kind of neat.

Apologies for the long post! I love this stuff. :)

Edit: one more link. Ming Thein's review of the Zeiss 100/2 MP gives some more details on that lens, and I think some of the shots he's included are a great example of that lens's specific background rendering. Despite occasional annoying ellipses, it has a "swirly" property to it that I really like and I just don't get on any of my other lenses. https://blog.mingthein.com/2012/07/27/revisited-and-reviewed...


"Bad" equipment can produce wonderful images. There's a unique charm to long-expired Kodachrome or T-MAX pushed to 3200 that can't be reproduced digitally. In the right hands, a DIY pinhole camera or a Kodak Brownie can be a fabulous photographic tool.


As for camera makers releasing SDKs, there are a few, though I've found them to be unimpressive (binary-only blobs for Windows, etc). More interesting to me are cameras that speak standard(ish) protocols. I've been looking into this recently; here's what I found:

Panasonic cameras (the GH4 is the one I'm looking at) expose an HTTP API over WiFi, and can upload images to an SMB share while you shoot. The API is fairly extensive: it lets you control focus, etc. Nikon cameras have a pretty bad story on WiFi (at least up to the D810, which is the newest one I own), so I haven't got far with them. I hear, though haven't verified, that Canon cameras speak PTP/IP, which is pretty neat. Olympus also has a WiFi control mode, but annoyingly it seems to disable the on-body controls when you use it (tested on an OM-D E-M1 Mk 1). I'm yet to get my hands on a Sony camera (I'll probably buy an A7 later this year), but I've seen videos suggesting it should be fairly straightforward to control remotely.
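
For the curious, community reverse-engineering suggests the Panasonic WiFi API is a plain CGI-over-HTTP affair. A sketch; treat the host, endpoint path, and parameter names here as assumptions to verify against your own camera:

```python
# Hypothetical sketch of the Panasonic camera CGI interface as reported by
# community reverse-engineering; the endpoint and parameters are
# assumptions, not documented by Panasonic.
def panasonic_cmd_url(host: str, value: str) -> str:
    return f"http://{host}/cam.cgi?mode=camcmd&value={value}"

# e.g. urllib.request.urlopen(panasonic_cmd_url("192.168.54.1", "capture"))
assert (panasonic_cmd_url("192.168.54.1", "capture")
        == "http://192.168.54.1/cam.cgi?mode=camcmd&value=capture")
```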


Sony has an Android app you can use to get live previews off the A7, tweak exposure parameters, and trigger the shutter. Everything can go over either Bluetooth or WiFi. I imagine that a little time with Wireshark would open the API up real quick :)


Sony already has an API and there are a few third party Android apps using it: https://developer.sony.com/develop/cameras/


Very interesting and informative. It sounds like it'd be relatively easy to reverse engineer the protocol even if they don't document it. Thank you.


This is very well put. Much more insightful than my glib comments below. Thanks dude!

