Would love to hear from the "I don't think we need it on Linux" folks. What is it about antivirus that we don't need? Is it the wrong solution to securing a workstation?
AV as a concept needs reconsidering: the problem is that attackers can test to see if their malware is blocked and tweak it until it isn’t, so there’s always a lengthy period where they can launch attacks which aren’t detected. AV also doesn’t help with the common case where something runs entirely in memory in an exploited process - the vendors will blather about behavioral checks but that doesn’t seem to do more than keep marketers employed.
Where I’d prefer to see time going is basically two areas: rather than trying to catch every possible bad thing, only allow known-good binaries to run (the hard part being supporting software developers), and extensive sandboxing to catch up with Apple. It’s hard to block every possible bit of bad code, but we could minimize a lot of the damage if, say, opening a malicious PDF didn’t mean the attacker could just read your AWS credentials or SSH keys.
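That PDF scenario is already achievable today with bubblewrap, for the motivated. A rough sketch, assuming `bwrap` is installed and using `evince` purely as an example viewer (the bind paths assume a merged-/usr distro):

```shell
# Hypothetical sketch: open an untrusted PDF with the viewer's view of
# $HOME replaced by an empty tmpfs, so an exploited viewer process
# cannot reach ~/.aws, ~/.ssh, browser profiles, etc.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --symlink usr/bin /bin \
  --proc /proc \
  --dev /dev \
  --tmpfs "$HOME" \
  --ro-bind untrusted.pdf "$HOME/untrusted.pdf" \
  --unshare-all \
  evince "$HOME/untrusted.pdf"
```

The point of the sketch is the `--tmpfs "$HOME"` line: the real home directory simply isn't mounted inside the sandbox, so there's nothing to exfiltrate even if the viewer is fully compromised. The problem, as others note below, is that nobody sets this up by default.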
The other benefit is avoiding AV software’s long history of security problems. Most of that comes from vendors still writing C like it’s the 90s, and putting complex binary-parsing logic into a privileged context is a recipe for bugs.
Most software comes from repositories, and most of the time the user is not logged in as root. Any virus would have to exploit a system vulnerability, and if a vulnerability is common and serious enough to cause harm, anti-virus software will take longer to catch it than the distro will take to fix it through automatic updates.
> Any virus would have to exploit a system vulnerability,
I'm not sure if by virus you mean some specific definition, but malware can still result in a very long and painful day/week/whatever with just access to your home directory and nothing else.
What would happen if your ~/.aws folder was piped to pastebin? Even if you're using short-lived STS sessions with ephemeral keys, I imagine most people would still find themselves in a world of hurt.
How about sending interesting files from your browser's userdata directory? All your cookies, your browser's password manager, possibly copies of your third-party password manager's cache (even if it's all encrypted), copies of cached files, your Downloads directory.
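To make it concrete how little privilege that kind of exfiltration needs, here's a sketch that runs as the ordinary logged-in user; the paths are just common examples, and a real attacker would pipe the results to pastebin or `curl` rather than echo them:

```shell
# Hypothetical sketch: enumerate high-value files a user-level process
# can read with no root, no exploit, and no system vulnerability at all.
for f in "$HOME/.aws/credentials" \
         "$HOME/.ssh/id_rsa" \
         "$HOME/.ssh/id_ed25519" \
         "$HOME/.config/google-chrome/Default/Cookies" \
         "$HOME/.mozilla/firefox"/*/cookies.sqlite; do
  [ -r "$f" ] && echo "readable: $f"
done
```

Everything that prints is fair game for any code you run, including a malicious npm postinstall script or a compromised desktop app.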
calling home or exfiltration is indeed a serious threat. otoh, it's fairly straightforward to partition / reduce / sandbox environments in Linux. do you need to touch AWS infrastructure from the same account, host, vm as you read email or surf the web? do these environments need full, direct internet access?
What percentage of desktop Linux users do that? Most distros don't do any sandboxing and those that do typically have easy ways to run binaries outside of a sandbox.
>Most of the time the user is not logged in as root
Why does this matter? Most malicious things someone would want to do don't require root, e.g. VNC, DDoS, mic/webcam capture, token stealing, keylogging, ransomware, stealing SSH/PGP keys, adware, backdoored web browsers. And for the small percentage that do, you can just backdoor sudo, or pop a fake system-update dialog that captures the user's password so you have root whenever you want.
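The "backdoor sudo" part deserves emphasis because it needs nothing but user-level write access to your own dotfiles. A sketch (the wrapper here just announces itself; a real one would capture the password and then delegate to the genuine `/usr/bin/sudo` so the user notices nothing):

```shell
# Hypothetical sketch: shadow sudo with a user-writable wrapper
# earlier in PATH. No root required at any step.
mkdir -p "$HOME/.local/bin"
cat > "$HOME/.local/bin/sudo" <<'EOF'
#!/bin/sh
# A real backdoor would prompt for and record the password here,
# then exec /usr/bin/sudo "$@" so behavior looks normal.
echo "shadowed sudo ran instead of /usr/bin/sudo"
EOF
chmod +x "$HOME/.local/bin/sudo"

# Persisting this line in ~/.bashrc makes the wrapper win every time:
PATH="$HOME/.local/bin:$PATH"
sudo whoami
```

Many distros already put `~/.local/bin` at the front of PATH by default, so in those cases the attacker doesn't even need to touch `.bashrc`.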
Two of the most ubiquitous categories of malware today are ransomware and agents used to steal secrets such as web browser sessions. Because both of these categories interact with files the user has access to anyway, privilege separation (especially only the basic form of privilege separation traditional on Linux) is of little help. The attack surface is all owned by the user anyway. Both sandboxing (such as kernel capabilities) and mandatory access control (SELinux) are helpful in reducing this possibility, but both of these are relatively difficult to use and so not common on workstations.
It's also reasonably common for an exploit to become known by AV vendors and have signatures released before it's been widely patched. Turnaround time from a major exploit becoming known to the industry to a signature release by AV vendors can be as short as a day, especially with the significant intelligence sharing that now happens in the AV industry. AV vendors sometimes release signatures before the exploit is publicly known as a result of information-sharing agreements, although this is a touchy issue because the signatures themselves become a form of public release. While keeping software up to date tremendously reduces risk, there is still a window of opportunity.
> Any virus would have to exploit a system vulnerability
XKCD 1200[0] disagrees:
> If someone steals my laptop while I'm logged in, they can read my email, take my money, and impersonate me to my friends, but at least they can't install drivers without my permission.
That’s one of my favorite xkcd comics because it describes the (very dire) situation so well. Unfortunately, Linux userspace really doesn’t seem to care about security even a tiny bit, as if we were still in the early days of computing when you could naively trust everything. Fortunately, open-source software is indeed well-mannered most of the time, but that’s no reason to be delusional.
Mobile OSs are way ahead in terms of security, and the other two major desktop OSs also do at least some mitigation against potential attacks. Yet our .ssh folder, web cache, and backups can all be read and written from the same user account we use to npm-install any random package, which has the potential to just encrypt your whole home directory.
I'm hopeful about efforts like bubblewrap, but widespread adoption is very tough. As long as policies are delegated (like AppArmor), I don't see that improving.
TPMs and passkeys are also a good refuge: just keep private key material off the device entirely.
What I'd like to see is a boundary between system installed packages (which I implicitly trust, but worried about malicious commits upstream, as others have noted) and other code, such as installed via pip, npm, cargo etc.
While it's feasible for me to audit a single shell script, or a PKGBUILD from the AUR, it's pretty much impossible for modern language package managers.
Well, there are a couple of problems: A) antivirus companies feed the FUD to make the product seem really important and worthwhile; B) virus writers have figured out how to generate an arbitrary number of binaries that the checkers won't catch for days or weeks; C) if attackers are already running binaries on your system, you have problems that antivirus is not going to catch or fix.
Antivirus companies seem to focus much more on marketing than technical excellence. Their products typically run with full privileges, regularly download code/rules from a central server, and are often written poorly. The industry seems awash in security issues: buffer overflows, false positives, false negatives, not checking signatures on downloaded rules/code, and breaking various APIs and network protocols by playing man-in-the-middle. Some even proxy SSL to scan encrypted downloads... but fail to check the certificate.
So I see little value in running a closed-source daemon from an anti-virus company to catch binaries that no serious attacker would use anyway. I trust the binaries from the OS's repos MUCH more than I trust antivirus programs. Similarly, I don't trust things like IBM's BigFix, or the adware "dock" Gateway used to preinstall on its Windows systems to profit from tracking users and showing ads. They of course made it very hard to uninstall, since that maximized their profits.
Generally it seems like the wrong approach. If you want to do it right, have an allowlist of approved binaries, ideally hooked up to your local mirror/repo so you have approved signatures for all binaries BEFORE they land on your Linux boxes. Spend whatever resources you would have spent on antivirus on patching, reporting, monitoring, firewalls, training, etc.
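A crude sketch of the allowlist idea using nothing but coreutils and bash; a real deployment would use something like fapolicyd or IMA appraisal, and `allowed.sha256` here is a made-up file name for a hash list generated from your trusted repo:

```shell
#!/bin/bash
# Hypothetical sketch: refuse to run any binary whose SHA-256 isn't
# in a pre-approved list. The list would be generated ahead of time
# from your mirror, e.g.:  sha256sum /usr/bin/* > allowed.sha256
run_if_allowed() {
  local bin="$1"; shift
  # sha256sum -c verifies the binary's current hash against the
  # allowlist entry for that path; --status suppresses output.
  if sha256sum -c --status <(grep " $bin\$" allowed.sha256); then
    "$bin" "$@"
  else
    echo "blocked: $bin is not on the allowlist" >&2
    return 1
  fi
}
```

A tampered or unlisted binary fails the hash check and never executes, which is exactly the inversion of the AV model: deny by default instead of trying to enumerate badness.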
Most of it comes from a philosophical perspective rather than a technical one. Viruses usually find their way in through software vulnerabilities, leaving deliberate (albeit unknowing) installation in the minority. With the open-source/free-software development cycle, it should be possible to eliminate most avenues of vulnerability.
So in short, the questions we should be asking are:
1. How do viruses find their ways in?
2. And, what can be done (as a user or developer) to prevent that?
These are obvious, I know, and the software devs for Windows aren't deliberately writing insecure software, but these questions are better approached from a behavioural point of view.