Age | Commit message | Author
|
This should have been in r23660. Untested.
|
Thank you Chase Douglas for reporting it and for the patch.
|
However, retesting on m68k showed up a problem that had appeared in
uClibc since the last time I tried. Specifically, revision 15785 did:
-#define HEAP_GRANULARITY (sizeof (HEAP_GRANULARITY_TYPE))
+#define HEAP_GRANULARITY (__alignof__ (HEAP_GRANULARITY_TYPE))
-#define MALLOC_ALIGNMENT (sizeof (double))
+#define MALLOC_ALIGNMENT (__alignof__ (double))
The problem is that
(a) MALLOC_HEADER_SIZE == MALLOC_ALIGNMENT
(b) the header contains a size value of type size_t
(c) sizeof (size_t) is 4 on m68k, but...
(d) __alignof__ (double) is only 2 (the largest alignment used on m68k)
So we only allocate 2 bytes for the 4-byte header, and the least
significant 2 bytes of the size are in the user's area rather than
the header. The patch below fixes that problem by redefining
MALLOC_HEADER_SIZE to:
MAX (MALLOC_ALIGNMENT, sizeof (size_t))
(but without the help of the MAX macro ;)). However, we really would
like to have word alignment on Coldfire. It makes a big performance
difference, and because we have to allocate a 4-byte header anyway,
what wastage there is will be confined to the end of the allocated block.
Any wastage will also be limited to 2 bytes per allocation compared to
the current alignment.
I've therefore used the __aligned__ type attribute to create a double
type that has at least sizeof (size_t) bytes of alignment. I've
introduced a new __attribute_aligned__ macro for this. It might seem
silly protecting against old or non-GNU compilers here, but the extra
alignment is only an optimisation, and having the macro is more in the
spirit of the other attribute code.
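
A minimal sketch of the fix as described, assuming GCC; the type name
__malloc_aligned_double is made up for illustration, and the real uClibc
definitions may differ:

#include <stddef.h>

/* Sketch only: fall back to nothing on old or non-GNU compilers,
   since the extra alignment is just an optimisation. */
#if defined(__GNUC__)
# define __attribute_aligned__(x) __attribute__ ((__aligned__ (x)))
#else
# define __attribute_aligned__(x)
#endif

/* A double with at least sizeof (size_t) bytes of alignment, so that
   MALLOC_ALIGNMENT is word-sized even on m68k/Coldfire. */
typedef double __malloc_aligned_double
    __attribute_aligned__ (sizeof (size_t));

#define MALLOC_ALIGNMENT (__alignof__ (__malloc_aligned_double))

/* The header stores a size_t, so it must be at least that big even when
   the platform alignment (2 on plain m68k) is smaller:
   MAX (MALLOC_ALIGNMENT, sizeof (size_t)), spelled out by hand. */
#define MALLOC_HEADER_SIZE \
  (MALLOC_ALIGNMENT > sizeof (size_t) ? MALLOC_ALIGNMENT : sizeof (size_t))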
|
does not easily lend itself to becoming completely pthread-cancelation
safe without first investing some deep and serious thought...
The other malloc implementations are pthread-cancelation safe, and
this one is mostly used for uClinux, where the lack of cancelation
safety is at least less likely to be a common problem.
|
things, and avoid potential deadlocks caused when a thread holding a uClibc
internal lock gets canceled and terminates without releasing the lock. This
change also provides a single place, bits/uClibc_mutex.h, that thread libraries
can modify to change all instances of internal locking.
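
As an illustration of the pattern (not the actual bits/uClibc_mutex.h
macros), a cleanup handler makes sure a canceled thread still releases
the lock:

#include <pthread.h>

static pthread_mutex_t internal_lock = PTHREAD_MUTEX_INITIALIZER;

static void unlock_cleanup (void *arg)
{
    pthread_mutex_unlock ((pthread_mutex_t *) arg);
}

void locked_operation (void)
{
    /* pthread_mutex_lock is not a cancellation point, so locking
       before pushing the handler is safe. */
    pthread_mutex_lock (&internal_lock);
    pthread_cleanup_push (unlock_cleanup, &internal_lock);

    /* ... work that may hit a cancellation point ... */

    pthread_cleanup_pop (1);    /* always runs unlock_cleanup */
}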
|
bernds writes: Use __alignof__ instead of sizeof to get alignments. Eliminates some warnings about misalignments when malloc debugging is enabled.
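
A tiny probe program (hypothetical, not part of the commit) shows why the
distinction matters; per the m68k discussion above, sizeof (double) is 8
there but __alignof__ (double) is only 2:

#include <stdio.h>

int main (void)
{
    /* sizeof gives the object size; __alignof__ (a GCC extension)
       gives the required alignment, and the two can differ. */
    printf ("sizeof=%zu alignof=%zu\n",
            sizeof (double), (size_t) __alignof__ (double));
    return 0;
}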
|
most of the global data relocations are back
|
missing headers, other jump relocs removed
|
it back
|
_dl_pagesize variable in ldso, so avoid aliasing.
-Erik
|
-Erik
|
were including libc-lock.h, which had a bunch of weak pragmas. Also,
uClibc supplied a number of no-op weak thread functions even though
many weren't needed. The combined result was that sometimes the
functional versions of thread functions in pthread would not override
the weak ones in libc.
While fixing this, I also prepended a double underscore to all necessary
weak thread funcs in uClibc, and removed all unused weaks.
I did a test build, but haven't tested this since these changes are
a backport from my working tree. I did test the changes there and
no longer need to explicitly add -lpthread in the perl build for
perl to pass its thread self tests.
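
A sketch of the override mechanism in question, with made-up names (the
real functions are the double-underscored thread funcs mentioned above);
the two libraries are shown as one listing:

/* --- in libc: a no-op weak stub for single-threaded programs --- */
__attribute__ ((weak)) void __uclibc_do_lock (void)
{
    /* single-threaded default: nothing to lock */
}

/* --- elsewhere in libc: call through the stub --- */
void some_libc_function (void)
{
    __uclibc_do_lock ();  /* resolves to libpthread's version if linked */
}

/* --- in libpthread: the strong definition that must win at link time ---
void __uclibc_do_lock (void)
{
    ... take the real mutex ...
}
*/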
|
__UCLIBC_UCLINUX_BROKEN_MUNMAP__ (which is currently not defined anywhere).
This makes other cases a tiny bit less efficient too.
* Move the malloc lock into the heap structure (locking is still done
at the malloc level though, not by the heap functions).
* Initialize the malloc heap to contain a tiny initial static free-area so
that programs that do only a little allocation won't ever call mmap.
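
Roughly, and with assumed structure layout and names rather than the
actual uClibc code, those last two points look like:

#include <stddef.h>
#include <pthread.h>

struct heap_free_area;             /* opaque here; tracks one free block */

struct heap {
    struct heap_free_area *free_areas;
    pthread_mutex_t lock;          /* lives in the heap itself, but is
                                      taken by malloc/free, not the heap
                                      functions */
};

/* Seed the heap with a small static block so programs that allocate
   only a little never touch mmap at all. */
static char initial_free_area[1024]
    __attribute__ ((aligned (2 * sizeof (size_t))));

static struct heap __malloc_heap = {
    (struct heap_free_area *) initial_free_area,
    PTHREAD_MUTEX_INITIALIZER,
};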
|
(__malloc_likely, __malloc_unlikely): Macros removed.
|
(MALLOC_SET_SIZE): Take the base-address of the block, not the user-address.
(MALLOC_ADDR): Macro removed.
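
Illustratively, with simplified macros (not the literal uClibc
definitions), the size header now lives at the block base,
MALLOC_HEADER_SIZE bytes before the user-visible address:

#include <stddef.h>

#define MALLOC_HEADER_SIZE  sizeof (size_t)   /* simplified for the sketch */

/* Store the block size at the block's base address. */
#define MALLOC_SET_SIZE(base, size)  (*(size_t *) (base) = (size))

/* Convert between the base address and the user-visible address. */
#define MALLOC_MEM(base)   ((void *) ((char *) (base) + MALLOC_HEADER_SIZE))
#define MALLOC_BASE(mem)   ((void *) ((char *) (mem) - MALLOC_HEADER_SIZE))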
|
Enable debugging if MALLOC_DEBUGGING is defined.
|
(MALLOC_BASE, MALLOC_ADDR): Use it.
|
the malloc/free level, not within the heap abstraction, and there's a
separate lock to control sbrk access.
Also, get rid of the separate `unmap_free_area' function in free.c, and
just put the code in the `free' function directly, which saves a bunch
of space (even compared to using an inline function) for some reason.
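
Schematically, with assumed names and a stub standing in for the real
heap routine, the locking split looks like:

#include <stddef.h>
#include <pthread.h>

static pthread_mutex_t malloc_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sbrk_lock   = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the real heap routine, which does no locking itself. */
static void *__heap_alloc (size_t *size)
{
    (void) size;
    return 0;   /* sketch: real code carves from the heap's free areas */
}

void *malloc (size_t size)
{
    void *mem;

    /* The lock is taken here, at the malloc/free level, not inside
       the heap abstraction; sbrk growth is serialized separately by
       sbrk_lock. */
    pthread_mutex_lock (&malloc_lock);
    mem = __heap_alloc (&size);
    pthread_mutex_unlock (&malloc_lock);
    return mem;
}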
|
* Instead of using mmap/munmap directly for large allocations, just use
the heap for everything (this is reasonable now that heap memory can
be unmapped).
* Use sbrk instead of mmap/munmap on systems with an MMU.
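
A sketch of that growth-path split; __ARCH_USE_MMU__ is used here as a
plausible guard and may not match the actual condition:

#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

/* On MMU systems, grow the heap contiguously with sbrk; on no-MMU
   systems, fall back to mmap of anonymous memory. */
static void *grow_heap (size_t size)
{
#ifdef __ARCH_USE_MMU__
    return sbrk ((intptr_t) size);
#else
    return mmap (0, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
#endif
}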
|
smarter than the old "malloc-simple", and actually works, unlike
the old "malloc". So kill the old "malloc-simple" and the old
"malloc" and replace them with Miles' new malloc implementation.
Update Config files to match. Thanks Miles!
|