I really miss Objective-C, and in the world of Swift craziness [1] I'm reminded often of this blog post [2] wondering what would have happened if Apple hadn't encountered Second System Syndrome for its recommended language.
(There's a decent argument it encountered it in iOS and macOS too.)
[1] https://github.com/swiftlang/swift-evolution/blob/main/propo... -- apologies to the authors, but even as a former C++ guy, my brain twisted at that. Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective-C.
> Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective-C.
Rust? Rust is basically a simpler Swift. The Objective-C bindings are really nice too, and when you're working with Obj-C you don't have to worry about lifetimes too much, because you can lean on the Objective-C runtime's reference counting.
I think the way to think about it is that with Rust, it's as if all the goodness in Swift was implemented at the "C" level, and the Objective-C part is still just a library-level runtime layer on top. Whereas Swift brings its own runtime, which greatly complicates things.
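As a rough illustration of the "runtime as a library layer" point (this is an analogy, not the actual Obj-C bindings): Rust ships reference counting as an ordinary library type, `std::sync::Arc`, rather than as a language-wide runtime, which is the same shape as leaning on retain/release in the Objective-C runtime.

```rust
use std::sync::Arc;

// Reference counting as a library type, not a language runtime:
// Arc::clone bumps the count (like retain), dropping an Arc
// decrements it (like release). The compiler itself knows nothing
// special about Arc.
fn strong_count_demo() -> (usize, usize) {
    let obj = Arc::new(String::from("shared"));
    let before = Arc::strong_count(&obj); // one owner
    let alias = Arc::clone(&obj);         // "retain": count becomes 2
    let during = Arc::strong_count(&alias);
    drop(alias);                          // "release": count back to 1
    (before, during)
}
```

Nothing here requires a runtime shipped with the language; it's all resolved at the library level, which is part of why the Obj-C interop story stays simple.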
One of my recurring language design hot takes is that it's easier to design for speed and then make it easy to use than it is to make it easy to use and then try to speed it up.
>[1] https://github.com/swiftlang/swift-evolution/blob/main/propo... -- apologies to the authors, but even as a former C++ guy, my brain twisted at that. Inside Swift is a slim language waiting to get out... and that slim language is just a safer Objective-C.
These kinds of features are not intended for use in daily application development. They're systems-language features designed for building high-performance, safe, very low-level code. It will be entirely optional for the average Swift developer to learn how to use these features, in just the same way that it's optional for someone to learn Rust.
Obj-C’s simplicity can be nice, but on the other hand I don’t miss having to bring in a laundry list of CocoaPods to have features that are standard in Swift. I don’t miss maintaining header files or having to operate in old codebases that badly manage Obj-C’s looseness either.
I go back and forth. I do miss the simplicity of objc at times though. I think in a short amount of time someone can become close to an expert in objc. Swift is already incredibly complicated and there's no end in sight.
My current iPad is the iPad Air 3 (the one with the backlight issue that's never been acknowledged, to my understanding.)
Can someone explain to me why an iPad at all, let alone an iPad Air, needs as powerful a processor as an M4? That's stronger than my laptop (an M2), where I run multiple VMs and more.
The newer CPUs are more efficient and faster. In a mobile format you want the CPU to process everything as fast as possible and then return to a low power mode for battery life.
Apple re-uses the same core across their lineup because it’s cheaper to build 100 million of the same core than to design and maintain two separate CPUs that go into 50 million devices each.
Do they really do it just because it's cheaper? I thought they did it to offer the best of each generation: it makes sense for more powerful chips to have more cores and higher capacity, but it doesn't make sense for each core to be arbitrarily less efficient or less performant just because you didn't buy more of them. Especially because this approach makes the base models extraordinarily good value compared to competitors' base models.
I have an iPad for the purpose of 3D modeling in Nomad Sculpt and Shapr3D. It’s an M2 Air, it’s still way overkill, and I’m regularly frustrated at how limited every piece of iPadOS software is compared to the hardware. The dichotomy of prioritizing iPad hardware but iPadOS being arguably their worst actively developed software is baffling.
Maybe there are people out there doing 8k video editing on their Pros, but I’ve yet to meet them.
In theory it improves battery life by doing more for less power. It also future-proofs the device for upcoming workloads, giving it an extended lifespan. Also note that thermals will limit what it is capable of compared to your laptop.
Can you explain why not? If it's easier for Apple to maintain fewer chip series going forward, why not keep them up to date?
If your question is what people use it for, well, that's different. iPads have a range of users, from people who just browse the internet and will never stress this chip, to people who do concept art and CAD and will appreciate the power.
But again, why do people always complain that a device got a spec bump?
There are some decently powerful apps available, like Final Cut Pro, and there is multi-window support including external displays.
I think the percentage of iPad users actually using this level of processing power is small, but there are some ways to do it.
I do really wish they would just allow running a VM on an iPad at this point, though. Running a Linux or even macOS VM would be a nice escape valve for a lot of things that can't be done natively.
In theory an iPad is a computer and then you could run whatever you want on it. So maybe the better question is, why can't you run whatever you want on it?
It doesn’t necessarily need it other than for niche use cases, but they can’t well have the SoCs stagnate for many years, because SoC updates drive upgrades, whether the buyers really need it or not.
It's not like Apple is putting any thought into either the UX or the engineering side of utilising the compute properly (except calculating those glass effects extra inefficiently).
Minimise SKUs and get some use out of the binned chips that have a few failed cores.
Those devices are too young to start lagging. Eventually websites will bloat to the point that you will definitely notice. My estimate is that it will be at the 7 year point.
The ownership of "I made a mistake" (you noted responsibility) is important. I believe strongly in the value of accountability, not in a blame or even necessarily consequences sense, but in an integrity and character sense. The way you phrased it there is important. You noted this and other aspects, but that part struck me.
Also: a forgiveness button. I sometimes feel like society has forgotten forgiveness: we seek revenge and punishment more than redemption and growth. So your Forgive button: I love it.
Yes, having a US "front" is how North Koreans pass the identity verifications at US companies looking for remote workers. I have personally spoken with numerous such individuals. Think about it: if you were a legitimate organization attempting to gain US presence, would your first action be to spam individuals on GitHub, or to register a business and submit a job post on LinkedIn?
Hah, the idea is to have an example on the site that is not offensive -- we're not going to write something offensive down -- but where you can understand what it would be or could be. It lets you infer / understand the point without us actually writing something awful. (Maybe we can do it better, though.)
Bears seemed a pretty inoffensive target, plus our backend uses Python with beartype and that library is all about bear jokes.
I empathise. We definitely mature over time :) And we all have bad days.
If we can bring the HN kind of interaction approach to more sites, we'll call ourselves successful. Dan and co provide an inspiration through this site. Someone commented below something about automating dan, and... dan, if you read this, I laughed, but kinda ;) At the least, we hope to bring what we can of the general HN approach to other spaces.
Re specific users or karma, everyone is equal. Comments are judged on their own merits, within the context of the topic they are about (the API allows adding more context, such as other comments in the thread, which isn't shown in the demo.)
We've played around with the idea of building a kind of reputation over time, ie allowing people to build a score. If so, it's important to note that's not based on the content of what was written (eg specific political views) but based on how well, how healthily, someone expresses it. That line does blur because some opinions are inherently unhealthy, and cannot be expressed in a way that respects others, demonstrates decency / humanity, etc, but within the spectrum of 'being a decent person and just trying to interact well including while disagreeing' we specifically do not want to police topics. We want instead to encourage, and in future maybe try to build a rep, for how well someone engages with others.
And returning to your question, if we did that, every comment would still be assessed standalone. We want people to grow, kinda like you were talking about. If someone expressed themselves poorly a year ago and behaves more healthily now, now is what should be reflected.
Folks, Dave here -- it's half past two in the morning over here, things have slowed down a little, and so we need to pause and get some sleep.
Thank you to everyone who tested it out. We modified it live a lot during the discussion, so much of it is already outdated / changed -- it was fantastic feedback. As of now it is a lot more direct, accepts things we never thought of, has much more accurate dogwhistle handling, and far more. I hope the intent, to teach people how to interact better, carries through. We have a bunch of signups, and if you run a blog or site with comments, I hope we can help you build a healthy community. Thank you again from both of us!
That is an excellent idea. It is 2AM my time, but I may set Claude going and check in the morning. (I tend to keep a closer eye on AI coding than that usually!)
The preset articles and trying out comments were intended to be something similar: see a topic, see how it works. But running it on each and every comment on an existing thread is really powerful.
There may be privacy concerns? General respect? I don't want to tie assessments to specific commenters, who published in good faith not expecting some kind of automated review, nor thereby imply, for example, that they commented poorly. But I'll code it up on my end and see what we can do with it. It's truly a very nice idea.
If you look through the history of the Show HN category, you will see a near-endless stream of "analyzing HN discussions" prior art that may offer some assurance. I'm not suggesting removing usernames because anonymity is involved — it is not! — but, instead, to specifically focus viewers on the substance of the comments rather than the who of them. That's also why I chose something from two years ago, so that there's no appropriate reason to witch-hunt over the past. (Some may still, but nothing short of evaluating AI-generated data will stop them, and you can't make a reasonable case using AI to evaluate AI.)