Kernel Defenses, Detection, and Exploitation

Written by TurboBorland

Intro: Nothing Good Lasts Forever

As the mainstream shifts its attention towards the kernel landscape, its sponsored defenses have made their way into our everyday hardware and officially supported kernel images. With any new defense, implementation and adoption are slow, but utilization on target systems will only increase in the years to come. Some exploit developers have had plenty of time to play with similar defenses (KERNEXEC/UDEREF), but the newer generation has not yet had the need to take aim at these hardened kernels. While the obvious(ly hilarious) KASLR (aka noob-killer) is gaining wide support in public circles, there are two other cpu-supported defenses to be aware of: SMAP (Supervisor Mode Access Prevention) and SMEP (Supervisor Mode Execution Protection).

What is SM[AE]P?

Supervisor Mode Execution Protection (SMEP) is easy to understand if you're familiar with control flow and/or userland memory protection techniques. A generic kernel exploit back in the days of old allowed you to move execution control (IP/PC/etc.) to a userland-controlled page with your buffer in it. This allowed kernel context to execute code from a userland mmap(..., PROT_EXEC) controlled buffer. SMEP is here to ruin that party and deny kernel context the ability to execute code from userland address space. It's basically NX across the userland/kernel page boundary. If anyone has ever played with KERNEXEC, here we have it supported in hardware.

Supervisor Mode Access Prevention (SMAP) is more of an oddity, one that has led to an interesting implementation. This technique denies READ/WRITE access from kernel context to userland data. Combined, the two defenses filter privilege control across the RWX spectrum. The problem is that telling the kernel what it can or cannot access is difficult. Sometimes data structures in userland need to be read or modified by kernel context and, obviously, userland data needs to be pulled through communication channels such as get_user() and put_user(). Out of this need we get a hilariously easy-to-bypass defense that is documented, not talked about in the loudest whitehat circles, and so seemingly odd that kernel developers have accidentally marked huge code regions to not use the protection.

What is CPUID?

CPUID is an instruction implemented on modern cpu architectures. A large assortment of interesting information can be retrieved with CPUID by setting two arguments/registers and ANDing the returned result with the proper mask. These arguments are known as the leaf and the sub-leaf. The leaf is the main category of information you're requesting from cpuid and is passed via the EAX register. The sub-leaf is rarely used, but selects a subsection within the leaf via the ECX register. It's highly suggested that the interested reader take a look at the references in cpuid(4), at their own cpuid output, and at a table of leafs/sub-leafs and their associated returns.

Of interest among those references is the Extended Features section (eax/leaf = 7 and ecx/subleaf = 0). This allows the caller to execute an unprivileged instruction that reveals which security features are enabled on the victim system.

Detecting SM[AE]P via CPUID Instruction Code

        /*
         * unprivileged cr4 feature detection with cpuid
         * cpus that support the cpuid instruction will work (if not, the cpu
         * likely doesn't support SM[AE]P anyway)
         * problem is, although unlikely, bits can move and a mask may be
         * incorrect for specific cpus
         * you can double-validate with: grep -oE "sm[ae]p" /proc/cpuinfo
         */
        #include <stdint.h>
        #include <stdio.h>

        int main(void)
        {
                uint32_t op = 0x07;                /* leaf: extended features */
                uint32_t smep_mask = 0x00000080;   /* CPUID.07H:EBX bit 7  */
                uint32_t smap_mask = 0x00100000;   /* CPUID.07H:EBX bit 20 */
                uint32_t regs[4] = { 0 };

                /* no error detection: this will fault you if the cpu doesn't
                 * support this instruction */
                asm volatile("cpuid"
                        : "=a" (regs[0]), "=b" (regs[1]),
                          "=c" (regs[2]), "=d" (regs[3])
                        : "0" (op), "2" (0));

                /* SMEP */
                if ((smep_mask & regs[1]) != 0)
                        printf("SMEP supported\n");

                /* SMAP */
                if ((smap_mask & regs[1]) != 0)
                        printf("SMAP supported\n");

                return 0;
        }

Implementation of and Bypassing SM[AE]P

With all the prerequisite information out of the way, let's get to the reason you all came here today: how to get around these new kernel defenses. Say, again, we have a vulnerability that gives a WRITE-WHAT-WHERE primitive and we know the KASLR sliding window via one of the million and one ways possible to do so.

SMEP does not have a straightforward method to disable itself. Without being able to move execution to your own userland buffer, how do we start control flow? The SMEP bit is the 20th bit of the CR4 register, so it is possible to manipulate CR4 with logical bit-wise instructions (AND, OR, XOR), shifts, or even bitmask writes. However, the most likely path here is to do as you would in userland, which is to utilize borrowed code gadgets. Thankfully the kernel has a large surface to generate gadgets from. Grab the target's vmlinuz, search for gadgets, then add KASLR_SLIDE+offset. If you are having a hard time getting the KASLR slide, chances are you'll end up OOPSing the system in the first place.

SMAP, on the other hand, is much easier to work with. SMAP lives in the 21st bit of the CR4 register, but we'll just ignore the shit out of this protection mechanism. Remember that the kernel must be able to access userland data for a large assortment of tasks, such as taking arguments and data from userland. So Intel decided to provide a documented "fuck you" bypass flag called Alignment Check (EFLAGS.AC). Regardless of whether that fancy new kernel protection mechanism is enabled and being the bitch bit on your userland page, we can simply execute STAC to set the AC flag and completely bypass it. Not only that, but this instruction is all of 3 bytes (0F 01 CB). That's right: if you can manage to find this sequence of 3 bytes inside the kernel .text range, you win, SMAP dies a shameful death, and your function pointer overwrite, IDT entry overwrite, or whatever fanciness you want can continue.