I think kernel modules will go away at some point. Having no third-party Kexts would increase the security of the OS for in-use systems. That's a nice way of saying not all third-party Kexts are created equal.
I could see an argument where moving existing hardware Kexts to user space is easier because IOKit uses the libkern C++ runtime. The OO design of IOKit may lend itself very nicely to the driver approach Barrelfish takes (http://www.barrelfish.org). The really hard one to move to user space would be third-party filesystems, mainly because of the dated VFS architecture used in *NIX systems. I could see Apple completely moving away from that at a future point too.
It's not sneaky just because you're not looking. I have found the CoreOS group to be guarded in their responses about future directions but never sneaky. There have been multiple times where an API/design/etc change in a WWDC session didn't make sense. I would follow up in the labs and be told the reason is because X is happening in the future. Granted, I'm not always told. In that case a diff of kernel sources between releases, along with some years working in the XNU kernel, is enough to figure it out. Sometimes you're just lucky and catch a hint of it in the actual session. For example the 2013 WWDC session titled "What's New in Kext Development" (https://asciiwwdc.com/2013/sessions/707) leaked SIP well before it was officially announced. The key line was:
So in the future, we are going to tighten down access to the system hierarchy, the whole hierarchy down from /System and everything in there.
Another example of them sharing future plans was user space networking. I forget what year it was, but in the session they noted something about network kernel extensions (NKEs) going away and to use Network Extensions instead. NKEs weren't the best, but for Apple to spend all that effort recreating the 'same' thing in a new framework was odd. A visit to the labs and you were instantly told of the move to user space networking.
One last example. Apple ships in the default OS a number of third-party mass storage kernel drivers. Take a look at /Library/Extensions on a new install. This ensures that when you try to install or boot that new OS, you can see your drives. Apple likely needs to work with those third parties to make that happen.
I understand why it might appear sneaky but I don't think that's the case.
Yep, there are a couple of interviews where Chris Lattner states that Objective-C 2.0 and later improvements were already part of a long-term roadmap toward what would eventually become Swift.
It depends. Sophisticated malware, say a rootkit, can hide its calls and even its presence on the system. It can do things like modify the syscall table, register a MAC policy to alter what's returned to the rest of the system, and use Mach ports to do things without tripping security systems. I say hide because you can still find the malware, it just takes a lot more work. Also, the malware had to do something to get to that point in the first place, and that is easier to detect. A lot of products just deal with that.
At what point, though, do we shift from trying to detect and block/remove malware to trying to prevent it from exploiting its way onto machines in the first place?
I'm sure the security industry has its reasons. It just seems like a great deal more ingenuity goes into the antivirus arms race than into hardening attack surfaces.
Those are different jobs, and both jobs are being done.
The sheer complexity of hardening makes it naive to think it will ever be bulletproof, as I'm sure you'll agree, so there will always be a call for another layer behind it.
A world-facing firewall defends from the outside, strict routing and an internal firewall defend the network from itself, firewalls on each server/computer keep exploits/worms from spreading like wildfire once they manage to find a crack, and detection software does its damnedest to discover when something unwanted is happening.
Remove any of these, and the whole chain is less secure.
To make a computer completely secure, of course, you need a trash compactor and a boat to take it out to the Mariana Trench, so it'll always be about balancing risks against accessibility and usability.
Frankly, I think the detection software part of the security stack just has better PR.
I am obviously not a security expert. But my understanding is that most breaches happen because of vulnerabilities that we've known about (and known how to defend against) for a long time, like
- Not deploying email encryption or even SPF, such that an attacker can convincingly impersonate others in the company by email (spear-phishing).
- Not updating software (which is necessarily exposed to a large audience by the firewall, because a large audience consumes it) when it has known vulnerabilities in it.
- Writing and running code in memory-unsafe languages without even mitigating that risk through static analysis or Valgrind.
- SQL injection and other failures to sanitize user input.
- Poorly thought out authentication/authorization schemes and bypass bugs, like URL enumeration.
- Services that make no attempt or an inadequate attempt to authenticate their consumers (i.e. a firewall can't protect the MongoDB server from the web server; the whole point of the MongoDB server is to be accessed by the application tier).
- Not using TLS where appropriate.
- Not using 2FA for privileged insiders.
- Weak password reset schemes, and password expiry schemes that result in users writing them down on post-its at their workstations.
- Shared accounts.
It just seems odd to me that the security community will basically skin you alive for gross negligence if you don't have a firewall or antivirus, but this kind of stuff is more or less accepted as a fact of life.
And a firewall or antivirus is not necessarily going to do anything about it (if the attacker goes through routes that have to be open for the system to function, and writes their own exploits for which virus definition signatures don't exist).
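The SQL injection item above is the easiest one to make concrete. A minimal sketch using Python's built-in sqlite3 as a stand-in for any database driver (table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

attacker_input = "' OR '1'='1"

# Vulnerable: user input spliced directly into the SQL string.
# The injected OR clause makes the WHERE condition always true.
unsafe = f"SELECT role FROM users WHERE name = '{attacker_input}'"
print(conn.execute(unsafe).fetchall())   # → [('admin',)]

# Safe: the driver binds the value as data, never as SQL.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # → []
```

The point is that no firewall sits between the web tier and the database here; the only defence is not building queries by string concatenation.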
Once a rootkit is installed, it can completely bypass system call monitors in all sorts of ways – communicating with a kernel component via a shared user/kernel memory page, or adding a new device and communicating using custom ioctls, or "backdooring" an existing system call when some userland parameter is set to a magic value, or ...
I am not at all confident that one could find such malware without human intervention.
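As a toy illustration of the magic-value trick described above, here is a pure user-space Python simulation (not real kernel code; the function names, the magic constant, and the audit log are all invented):

```python
# A "monitored" syscall stand-in: the backdoor checks for a magic flag
# value before the monitor's logging runs, so the covert call never
# appears in the audit trail at all.
MAGIC = 0xDEADBEEF
audit_log = []

def monitored_open(path, flags):
    if flags == MAGIC:
        return "hidden-handle"          # covert path: no audit entry
    audit_log.append(("open", path))    # normal path: audited
    return "handle"

monitored_open("/etc/passwd", 0)        # recorded by the monitor
monitored_open("/etc/passwd", MAGIC)    # invisible to the monitor
print(audit_log)                        # → [('open', '/etc/passwd')]
```

A pattern recognizer watching the log sees only the benign call; the backdoored one simply never reaches the instrumentation.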
Not if it's hypervisor-based monitoring with IO mediation. This is still a weak defence. Stronger model is kernel integrity + syscall restriction + MAC or capability protection for usage details.
I understand that once a rootkit is installed, all bets are off. I was wondering if the syscalls by which the rootkit gets installed will be obfuscated to make them look more like a benign/normal process and evade detection by a malware-syscall-pattern recognizer. Or are some malware syscall patterns essentially "unhideable"?
Just a few other notes to help make kernel development with VMware Fusion easier:
1. Add -zc and -zp to your boot args. It's not documented, but it greatly helps catch zone allocation issues (OSMalloc/OSFree misuse, buffer overruns).
2. Use snapshots instead of rebooting. It's much faster to revert back to a snapshot than to reboot your virtual machine instance after a crash.
3. If you use a shared directory between your instance and host machine for moving your KEXTs/other code, make sure to MD5 the files before loading. It's very common for the instance to be serving stale cached blocks.
4. While not necessary, I create a separate host-only network interface to debug on. I give it a static IP and add an entry to my host's hosts file. It makes debugging instances easier since I can connect by name, i.e. kdp-remote vm0.
Thanks for the tips! I had never thought of using a snapshot as a quick way to return to the state before a panic instead of rebooting. This is genius, I'll definitely use it a lot!
I am aware they shut down (someone was lamenting how VC firms in the Seattle area are shutting down, the other being Frazier Technologies) and was curious why, so I went to their site. If you miss deals like those, it is inevitable that you have to shut down :)
Why is Seattle so weak for VC? It has a lot of rich people, mostly who got rich from tech. Ignition is nice (but they do most of their investments in the Bay Area now, wtf), and I've never talked to Madrona but they seem to be the other one. If I were going to do a 30-50mm global fund, I'd probably base it in Seattle. Great for security, cloud, etc., with a great school (UW), successful local companies, etc.
Sure, but based on what little I know about the Seattle tech scene, it seems like a common goal is to make a lot of money and then get out of the game entirely.
Contrast with Bay Area, where many people seem to love the game and aspire to be angel investors themselves one day.
This is just conjecture, there's likely some other reason.
Yeah, I get that from some Seattle people -- which seems really weird to me. If I didn't care about tech, I'd just do something like banking where returns are more predictable and to some extent easier. If I did care about tech (and do), I'd not want to leave just because I made a lot of money.
I'm skeptical. There are too many unsupported claims in this article. Off the top of my head:
- Assumes the Chinese put the backdoor in. There are plenty of others interested in backdoors.
- Assumes the designing company doesn't do any detailed checks of production parts. Not likely, since this is a many-billion-dollar business.
- Claims a systemic problem but only notes one chip. That one FPGA could just have a design flaw. Need more details on the others.
- At the end it claims an investigation spanning ten years, but the fab world has changed greatly over ten years. Many microcontroller companies actually own their Chinese fabs now.
As a side note, if you discover something like this, don't assume you found something you weren't meant to find. Your discovery may just have made you found.
Many (maybe most? I don't specialize here) backdoors are deniably accidental, a term I'm coining here to mean "could be sabotage, could be a development artifact".
Whether any of those backdoors are deliberate is much less relevant than whether they're known to your adversaries. In the case of Chinese electronics engineering, your adversaries have the blueprints.
Do you really think it's likely that designers of bespoke silicon reliably decap, image, and analyze the finished products? I think you're attributing Intel/AMD-level wherewithal when, just like in software, a huge chunk of the market has nothing resembling the resources of the leading vendors.
I would add that Bluetooth Low Energy isn't totally free. Somewhere in your cost you have FCC/IC/CE testing, Bluetooth licensing/testing/certification, possible software stack cost, and more expensive components (radio, antenna, etc). You also have a larger power requirement, as low energy isn't as low as UART. I won't give specific numbers here as I can't legally, but the cost difference is probably a wash. That's assuming you're doing the radio layout and not using a module. Apple's move to wireless is definitely for their sake. I could also see an argument that, because of the possible extra costs of doing Bluetooth Low Energy (chips, testing, power, RF design, etc), you may see fewer accessories for a while.
In any case I think you're right that the telltale signs are there.
The move to wireless is one part Apple's sake, and one part for users (no parts left over for developers, unfortunately). People tend to favor wireless accessories and associate a higher value with them. In some cases it just makes more sense: people want to use an accessory without handing over their iPhone (I'm looking at you, speaker docks).
The cost may be a wash, but it really does level the playing field.