Theo de Raadt announced the following in May 2019:
Recently I considered the potential case of code-upload into the JIT
W|X code arenas which might contain native system call instructions. I wish to block direct system calls from those areas, thereby forcing the attacker to deal with the (hopefully) more complex effort of using JIT support environment code, or probably even harder discovering the syscall stubs directly inside the randomly-relinked libc. Even if the JIT support access method is not more complicated, I want to remove the direct syscall exploitation avenue.
This diff refactors the MAP_STACK pseudo-permission bit code, adding additional code which checks if a syscall is being performed from a writeable page. If the page is writeable, the process is killed.
This one is more a “good practices enforcement” measure than a mitigation: an attacker able to modify the JIT’ed code would just have to wait for it to be remapped read/executable to have their payload executed.
Moreover, ROP or even ret2libc are still valid options, if only to mprotect the write bit away. And RETGUARD won’t help in this case, since an arbitrary read/write primitive is usually involved when it comes to JIT-related exploits.
On the 27th of November 2019, Theo de Raadt added a mitigation for what he previously referred to as the “direct syscall exploitation avenue”:
The following change only permits system calls from address-ranges in the process which system calls are expected from.
If you manage to upload exploit code containing a raw system call sequence and instruction, and mprotect -w+x that block, such a system call will not succeed but the process is killed. This obliges the attacker to use the libc system call stubs, which in some circumstances are difficult to find due to libc random-relinking at boot.
This is done by adding 1 extra condition to the fast-path of the “syscall not on a writeable page” check.
For static binaries, the valid regions are the base program’s text segment and the signal trampoline page.
For dynamic binaries, valid regions are ld.so’s text segment, the signal trampoline, and libc.so’s text segment… AND the main program’s text.
So, if an attacker manages to get arbitrary code execution, injects shellcode into the current process, and marks it as both non-writeable and executable, likely via ROP, they won’t be able to use syscalls directly inside the shellcode. But an attacker with this amount of control can likely gain an arbitrary read primitive trivially, and ROP their way around the restriction.
Amusingly, since some programs issue syscalls directly (like everything written in Go), the main program’s .text segment is whitelisted as well, so this mitigation is unlikely to break any existing exploit.
This mitigation is a strict subset of PAX_MPROTECT/Windows’ ACG: it only prevents the introduction of new syscall call sites, but not of arbitrary code.
This approach looks a bit like a subset of Protecting Against Unexpected System Calls, by C. M. Linn, M. Rajagopalan, S. Baker, C. Collberg, S. K. Debray, and J. H. Hartman, published in 2005 at the 14th USENIX Security Symposium. The paper assumes that the attacker has no arbitrary read of any kind before running arbitrary code, and adds a bunch of more-or-less crazy counter-measures to thwart runtime arbitrary reads: immediate bindings, adding fake entries to the GOT/PLT, static linking, inserting dead code, binary obfuscation, code layout randomisation at the block level, splitting the binary into several disjoint memory maps down to the basic-block level, pointer encryption, local variable order randomisation, … and even with all this, the paper says: “However, increasing the attack code’s time and space requirements make intrusion detection easier; ideally, the bar is raised high enough that attacks have obvious symptoms.”
I haven’t used OpenBSD in years so perhaps I am assuming too much. But an exploit mitigation that requires syscall call site verification seems like a minimal security gain in exchange for breaking the ABI for many language runtimes. The irony here is that this mitigation is intended to make exploitation of memory safety vulns harder but it breaks many memory safe languages in the process. If you have a system that absolutely must be secure as its first priority then by all means disable all the JITs, strip all the features, and enable stuff like this. But in order to get mass adoption a mitigation must work seamlessly for most general purpose use cases.
As well as:
I have a hard time believing only Go is affected but I’ll take the data at face value. Either way this mitigation is still entirely too myopic for me. It provides so little value it’s not worth it.
pledge is strong. Much prefer that approach to weak mitigations that effectively boil down to weakly held secrets.
Windows Defender’s malware emulator did a thing like this to mitigate abuse of the custom “apicall” syscall instruction after @taviso discovered any malware binary could use it. It could be easily bypassed by just jumping to an apicall instruction in memory with controlled args.
This is a useless mitigation that doesn’t mitigate anything.