Age | Commit message | Author |
|
This fixes a bug that can lead to the calculation of a wrong bin `idx`,
which in turn leads to a chunk being chosen that is too small for the
number of bytes (`nb`) to be allocated. This causes a fault (or
possibly memory being written to the wrong location) when the offset
`nb` is used on top of the chunk for a write operation.
malloc() takes the number of bytes to allocate as size_t, but the
global bin index idx is calculated via malloc_largebin_index(), which
takes its parameter as unsigned int and calculates with that width
internally. For large allocations this truncates the value, and
consequently idx is calculated wrongly.
idx is an index into the bins, which are sorted in ascending order of
the size range of the chunks they contain (e.g. 8-16, 16-32, 32-64, ...).
The malloc() algorithm depends on idx being calculated such that we
begin searching only at a bin whose chunks are always large enough to
include the memory size to be allocated (nb).
If the idx is too small (as can happen due to the described integer
truncation), this code will lead to a write to a wrong address
(remainder->bk and remainder->fd; lines from malloc.c):
1086 size = chunksize(victim);
1087
1088 /* We know the first chunk in this bin is big enough to use. */
1089 assert((unsigned long)(size) >= (unsigned long)(nb));
1108 remainder = chunk_at_offset(victim, nb);
1111 remainder->bk = remainder->fd = unsorted_chunks(av);
The chunk victim should normally come from a bin whose chunks are all
at least of size nb. Since it does not, its size may be smaller than
nb. With assertions enabled, the assertion in line 1089 would fail.
Without assertions, we add nb as an offset to the chunk, but since the
chunk's size is much smaller than nb, this points to an address
somewhere else entirely.
Signed-off-by: Marcus Haehnel <marcus.haehnel@kernkonzept.com>
|
|
Safe-Linking alignment checks should be done on the user's buffer and
not on the mchunkptr. The new check adds support for cases in which
MALLOC_ALIGNMENT != 2*sizeof(size_t).
The default case for both 32-bit and 64-bit was already supported, and
this patch adds support for the described irregular case.
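A minimal sketch of such a check, assuming a glibc-style chunk2mem()
layout (the macro values here are illustrative, not the uClibc-ng
source):

  #include <stddef.h>
  #include <stdlib.h>

  #define MALLOC_ALIGNMENT (2 * sizeof(size_t))  /* default case */
  #define SIZE_SZ          (sizeof(size_t))
  #define chunk2mem(p)     ((void *)((char *)(p) + 2 * SIZE_SZ))

  static void check_alignment(void *mchunk)
  {
      /* test the user buffer, not the raw chunk pointer */
      if (((size_t)chunk2mem(mchunk)) & (MALLOC_ALIGNMENT - 1))
          abort();  /* stand-in for the allocator's error path */
  }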
|
|
Safe-Linking is a security mechanism that protects singly-linked
lists (such as the fastbins) from being tampered with by attackers. The
mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the
pointers from being hijacked by an attacker.
While Safe-Unlinking protects doubly-linked lists (such as the small
bins), there was no similar protection for attacks against
singly-linked lists. This solution protects against 3 common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
This way, the random bits from the address L (which start at the bits
in the PAGE_SHIFT position), will be merged with the LSB of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
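A minimal sketch of this masking (the macro names are illustrative,
and PAGE_SHIFT is assumed to be 12 for 4 KiB pages):

  #include <stddef.h>

  #define PAGE_SHIFT 12
  /* L = address where the pointer is stored, P = the pointer value */
  #define PROTECT_PTR(L, P) \
      ((void *)((((size_t)(L)) >> PAGE_SHIFT) ^ ((size_t)(P))))
  #define REVEAL_PTR(L, P) PROTECT_PTR(L, P)  /* XOR is its own inverse */

  /* store: slot->fd = PROTECT_PTR(&slot->fd, next);
     load:  next = REVEAL_PTR(&slot->fd, slot->fd); */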
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32-bit Linux machines, an attacker will immediately fail 7
out of 8 times, and on 64-bit machines 15 out of 16 times.
The proposed solution adds 3-4 asm instructions per malloc()/free()
and therefore has only minor performance implications, if any.
A similar protection was added to Chromium's version of TCMalloc
in 2013, and according to their documentation the performance overhead
was less than 2%.
Signed-off-by: Eyal Itkin <eyalit@checkpoint.com>
|
|
sysconf creates a lot of code dependencies;
getpagesize doesn't.
Statically linked code that calls malloc is now much smaller.
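An illustrative before/after of the page-size query (both calls are
real libc APIs; the surrounding function is hypothetical):

  #include <unistd.h>

  static long page_size(void)
  {
      /* return sysconf(_SC_PAGESIZE); -- drags in the sysconf machinery */
      return getpagesize();            /* tiny, no extra dependencies */
  }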
Signed-off-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
|
|
This option has been enabled for a long time, and I see no
useful case where we should be incompatible with glibc here.
|
|
Linuxthreads.new isn't really useful given the existence
of NPTL/TLS for well-supported architectures. There is no
reason to use LT.new for ARM/MIPS or other architectures
supporting NPTL/TLS, and it is not available for noMMU architectures
like Blackfin or FR-V. To simplify the life of the few uClibc-ng
developers, LT.new is removed and LT.old is renamed to LT.
LINUXTHREADS_OLD -> UCLIBC_HAS_LINUXTHREADS
|
|
This fixes commit 76dfc7ce8c "Some requested additional malloc entry points"
from 2004.
Signed-off-by: Leonid Lisovskiy <lly.dev@gmail.com>
Signed-off-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
|
|
Closes bugzilla #4586
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
In some rare cases (almost always application bugs), malloc is called
with a very large size; the checked_request2size check then fails, sets
ENOMEM, and returns 0 to the caller.
But this leaves the __malloc_lock futex locked and owned by the
caller. In multithreaded circumstances, other threads calling
malloc/calloc will never succeed and stay blocked.
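A self-contained sketch of the fixed control flow (a pthread mutex
stands in for the __malloc_lock futex; the size check is illustrative):

  #include <errno.h>
  #include <pthread.h>
  #include <stddef.h>
  #include <stdint.h>

  static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;

  static void *sketch_malloc(size_t bytes)
  {
      pthread_mutex_lock(&malloc_lock);
      if (bytes > SIZE_MAX / 2) {              /* request out of range */
          pthread_mutex_unlock(&malloc_lock);  /* the fix: unlock before bailing */
          errno = ENOMEM;
          return NULL;
      }
      /* ... normal allocation path elided in this sketch ... */
      pthread_mutex_unlock(&malloc_lock);
      return NULL;
  }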
Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
valloc uses memalign
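A minimal sketch of that relationship (the wrapper name is
illustrative, to avoid clashing with the libc symbol):

  #include <malloc.h>   /* memalign */
  #include <unistd.h>   /* getpagesize */

  static void *valloc_sketch(size_t size)
  {
      /* page-aligned allocation == memalign with page-size alignment */
      return memalign(getpagesize(), size);
  }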
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
The name was changed to include a trailing 'D' when it went into the
kernel.
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Peter S. Mazinger <ps.m@gmx.net>
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Now that the kernel supports MAP_UNINITIALIZE, have the malloc
implementations use it to get truly uninitialized memory on no-MMU
systems. This avoids a lot
of normally useless overhead involved in zeroing out all of the memory
(sometimes multiple times).
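An illustrative use of the flag (spelled MAP_UNINITIALIZED once it was
merged; the kernel honors it only with CONFIG_MMAP_ALLOW_UNINITIALIZED):

  #include <sys/mman.h>

  #ifndef MAP_UNINITIALIZED
  #define MAP_UNINITIALIZED 0x4000000   /* kernel ABI value */
  #endif

  static void *get_raw_pages(size_t len)
  {
      return mmap(NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALIZED, -1, 0);
  }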
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
sed -i -e '/Experimentally off - /d' $(grep -rl "Experimentally off - " *)
sed -i -e '/^\/\*[[:space:]]*libc_hidden_proto(/d' $(grep -rl "libc_hidden_proto" *)
should be a nop
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Handle O=
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Appears to build fine (several .configs tried)
|
|
in string.h and strings.h. This caught unguarded string ops in
libc/inet/ethers.c __ether_line_w() function.
I will wait for fallout reports for a week or so,
then continue converting more libc_hidden_proto's.
|
|
things, and avoid potential deadlocks caused when a thread holding a uClibc
internal lock gets canceled and terminates without releasing the lock. This
change also provides a single place, bits/uClibc_mutex.h, for thread libraries
to modify in order to change all instances of internal locking.
|
|
I had clearly run search/replace on that were cluttering things up.
|
|
regardless of the setting of MALLOC_GLIBC_COMPAT.
|
|
most of the global data relocations are back
|
|
libc.a/libc.so, the diffs go into libc-static-y/libc-shared-y exclusively; add IMA to libc; don't use any MSRC anymore
|
|
is a useless attempt
|
|
gone from libc. The remaining are left as exercise for others ;-)
|
|
missing headers, other jump relocs removed
|