Age | Commit message | Author |
|
This fixes a bug that can lead to the calculation of a wrong bin `idx`,
which in turn leads to too small a chunk of memory being chosen for the
number of bytes (`nb`) to be allocated. This leads to a fault (or
possibly to memory being written to the wrong location) when using the
offset `nb` on top of the chunk for a write operation.
malloc() takes the number of bytes to allocate as a size_t, but the
global bin index idx is calculated via malloc_largebin_index(), which
takes its parameter as unsigned int and calculates internally with
unsigned int. For large allocations this truncates the value, and
consequently idx is calculated wrongly.
idx is an index into the bins, which are sorted in ascending order of
the size range of the chunks they contain (e.g. 8-16, 16-32, 32-64, ...).
The malloc() algorithm depends on idx being calculated such that
the search begins only at a bin whose chunks are always large enough
to hold the memory size to be allocated (nb).
If idx is too small (as can happen due to the described integer
truncation), this code will lead to a write to a wrong
address (remainder->bk and remainder->fd) (lines from malloc.c):
1086 size = chunksize(victim);
1087
1088 /* We know the first chunk in this bin is big enough to use. */
1089 assert((unsigned long)(size) >= (unsigned long)(nb));
1108 remainder = chunk_at_offset(victim, nb);
1111 remainder->bk = remainder->fd = unsorted_chunks(av);
The chunk victim should normally come from a bin whose chunks are all
at least of size nb. Since it does not, its size may be smaller than
nb. With assertions enabled, the assertion in line 1089 would fail.
Without assertions, nb is added as an offset to the chunk, but since
the chunk is much smaller than nb, this points to an address somewhere
else.
Signed-off-by: Marcus Haehnel <marcus.haehnel@kernkonzept.com>
|
|
|
|
Safe-Linking is a security mechanism that protects singly-linked
lists (such as the fastbins) from being tampered with by attackers. The
mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the
pointers from being hijacked by an attacker.
While Safe-Unlinking protects doubly-linked lists (such as the small
bins), there wasn't any similar protection for attacks against
singly-linked lists. This solution protects against 3 common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
This way, the random bits from the address L (which start at the bits
in the PAGE_SHIFT position), will be merged with the LSB of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32 bit Linux machines, an attacker will directly fail 7
out of 8 times, and on 64 bit machines it will fail 15 out of 16
times.
The proposed solution adds 3-4 asm instructions per malloc()/free()
and therefore has only minor performance implications, if any. A
similar protection was added to Chromium's version of TCMalloc
in 2013, and according to their documentation the performance overhead
was less than 2%.
Signed-off-by: Eyal Itkin <eyalit@checkpoint.com>
|
|
|
|
This option has been enabled for a long time, and I see no
useful case where we should be incompatible with glibc here.
|
|
In some rare cases (almost always application bugs), malloc is called
with a very large size; the checked_request2size check then fails,
ENOMEM is set, and 0 is returned to the caller.
But this leaves the __malloc_lock futex locked and owned by the
caller. In a multithreaded program, other threads calling
malloc/calloc will then never succeed and stay blocked on the lock.
Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
|
|
|
|
things, and avoid potential deadlocks caused when a thread holding a uClibc
internal lock gets canceled and terminates without releasing the lock. This
change also provides a single place, bits/uClibc_mutex.h, for thread libraries
to modify to change all instances of internal locking.
|
|
|
|
most of global data relocations are back
|
|
|
|
is a useless attempt
|
|
test won't fail
|
|
Only use MAP_SHARED when mmu-less.
|
|
Lea. It is about 2x faster than the old malloc-930716, and behaves itself much
better -- it will properly release memory back to the system, and it uses a
combination of brk() for small allocations and mmap() for larger allocations.
-Erik
|